developer_uid: Trace2333
submission_id: trace2333-mistral-trial4_v2
model_name: trace2333-mistral-trial4_v1
model_group: Trace2333/mistral_trial4
status: torndown
timestamp: 2024-09-05T16:41:28+00:00
num_battles: 12588
num_wins: 6592
celo_rating: 1258.09
family_friendly_score: 0.0
submission_type: basic
model_repo: Trace2333/mistral_trial4
model_architecture: MistralForCausalLM
model_num_parameters: 12772070400.0
best_of: 8
max_input_tokens: 512
max_output_tokens: 64
latencies:
  {'batch_size': 1, 'throughput': 0.6895319345857461, 'latency_mean': 1.4501631462574005, 'latency_p50': 1.451418161392212, 'latency_p90': 1.6278084754943847}
  {'batch_size': 3, 'throughput': 1.314462035167879, 'latency_mean': 2.26788019657135, 'latency_p50': 2.285667657852173, 'latency_p90': 2.5072937488555906}
  {'batch_size': 5, 'throughput': 1.5626472152226634, 'latency_mean': 3.185376397371292, 'latency_p50': 3.1835036277770996, 'latency_p90': 3.5393301963806154}
  {'batch_size': 6, 'throughput': 1.5957457230560923, 'latency_mean': 3.743380060195923, 'latency_p50': 3.7930917739868164, 'latency_p90': 4.2027220010757445}
  {'batch_size': 8, 'throughput': 1.597750950264357, 'latency_mean': 4.981525360345841, 'latency_p50': 5.048504590988159, 'latency_p90': 5.643458223342895}
  {'batch_size': 10, 'throughput': 1.5499386501217547, 'latency_mean': 6.400956081151962, 'latency_p50': 6.42552387714386, 'latency_p90': 7.304919171333313}
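A quick sketch (plain Python, with the batch-size/throughput/mean-latency triples copied from the latencies entry above) of reading off where throughput saturates:

```python
# (batch_size, throughput, latency_mean) triples from the latency sweep above.
records = [
    (1, 0.6895319345857461, 1.4501631462574005),
    (3, 1.314462035167879, 2.26788019657135),
    (5, 1.5626472152226634, 3.185376397371292),
    (6, 1.5957457230560923, 3.743380060195923),
    (8, 1.597750950264357, 4.981525360345841),
    (10, 1.5499386501217547, 6.400956081151962),
]

# Throughput plateaus around batch size 6-8 while mean latency keeps climbing,
# so larger batches past that point only add latency.
best_batch, best_tp, _ = max(records, key=lambda r: r[1])
```

The peak throughput rounds to 1.6 at batch size 8, which lines up with the reported `throughput_3p7s: 1.6`.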
gpu_counts: {'NVIDIA RTX A5000': 1}
display_name: trace2333-mistral-trial4_v1
is_internal_developer: False
language_model: Trace2333/mistral_trial4
model_size: 13B
ranking_group: single
throughput_3p7s: 1.6
us_pacific_date: 2024-09-05
win_ratio: 0.5236733396885923
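The win_ratio field is just num_wins divided by num_battles from the fields above; a quick arithmetic check:

```python
num_battles = 12588
num_wins = 6592

# Plain ratio of wins to battles; matches the logged win_ratio to full precision.
win_ratio = num_wins / num_battles
```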
generation_params: {'temperature': 0.9, 'top_p': 1.0, 'min_p': 0.06, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '</s>', '###'], 'max_input_tokens': 512, 'best_of': 8, 'max_output_tokens': 64}
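A minimal sketch of how the sampling filters in generation_params could interact (top_k, then min_p relative to the highest-probability token, then top_p nucleus truncation). The filter order and renormalization here are assumptions for illustration, not the serving stack's actual implementation:

```python
import math

def filter_candidates(logits, top_k=80, top_p=1.0, min_p=0.06):
    """Return the candidate tokens that survive top_k, min_p, and top_p filtering."""
    # Softmax over the raw logits.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    z = sum(exps.values())
    probs = {tok: e / z for tok, e in exps.items()}

    # top_k: keep only the k highest-probability tokens.
    kept = sorted(probs, key=probs.get, reverse=True)[:top_k]

    # min_p: drop tokens below min_p * (probability of the best token).
    p_max = probs[kept[0]]
    kept = [tok for tok in kept if probs[tok] >= min_p * p_max]

    # top_p: keep the smallest prefix whose cumulative probability reaches top_p
    # (a no-op at top_p=1.0, as configured above).
    out, cum = [], 0.0
    for tok in kept:
        out.append(tok)
        cum += probs[tok]
        if cum >= top_p:
            break
    return out
```

With the configured `top_p=1.0`, pruning is effectively driven by `top_k=80` and `min_p=0.06`.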
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
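A sketch of how the formatter templates above could be assembled into a single prompt string. The persona text, names, messages, and the exact assembly order are illustrative assumptions:

```python
# Templates copied from the formatter entry above.
formatter = {
    'memory_template': "{bot_name}'s Persona: {memory}\n####\n",
    'prompt_template': '{prompt}\n<START>\n',
    'bot_template': '{bot_name}: {message}\n',
    'user_template': '{user_name}: {message}\n',
    'response_template': '{bot_name}:',
}

def build_prompt(memory, prompt, turns, bot_name, user_name):
    """Render persona memory, scenario prompt, chat turns, then the response stub."""
    parts = [
        formatter['memory_template'].format(bot_name=bot_name, memory=memory),
        formatter['prompt_template'].format(prompt=prompt),
    ]
    for role, message in turns:
        tmpl = formatter['bot_template'] if role == 'bot' else formatter['user_template']
        name = bot_name if role == 'bot' else user_name
        parts.append(tmpl.format(bot_name=name, user_name=name, message=message))
    # Trailing "{bot_name}:" cues the model to continue as the bot.
    parts.append(formatter['response_template'].format(bot_name=bot_name))
    return ''.join(parts)
```

The '\n' entry in stopping_words above is consistent with this one-line-per-message layout: generation stops at the end of the bot's line.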
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name trace2333-mistral-trial4-v2-mkmlizer
Waiting for job on trace2333-mistral-trial4-v2-mkmlizer to finish
trace2333-mistral-trial4-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
trace2333-mistral-trial4-v2-mkmlizer: ║ _____ __ __ ║
trace2333-mistral-trial4-v2-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
trace2333-mistral-trial4-v2-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
trace2333-mistral-trial4-v2-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
trace2333-mistral-trial4-v2-mkmlizer: ║ /___/ ║
trace2333-mistral-trial4-v2-mkmlizer: ║ ║
trace2333-mistral-trial4-v2-mkmlizer: ║ Version: 0.10.1 ║
trace2333-mistral-trial4-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
trace2333-mistral-trial4-v2-mkmlizer: ║ https://mk1.ai ║
trace2333-mistral-trial4-v2-mkmlizer: ║ ║
trace2333-mistral-trial4-v2-mkmlizer: ║ The license key for the current software has been verified as ║
trace2333-mistral-trial4-v2-mkmlizer: ║ belonging to: ║
trace2333-mistral-trial4-v2-mkmlizer: ║ ║
trace2333-mistral-trial4-v2-mkmlizer: ║ Chai Research Corp. ║
trace2333-mistral-trial4-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
trace2333-mistral-trial4-v2-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
trace2333-mistral-trial4-v2-mkmlizer: ║ ║
trace2333-mistral-trial4-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
Connection pool is full, discarding connection: %s. Connection pool size: %s
Failed to get response for submission blend_katim_2024-08-22: ('http://zonemercy-lexical-nemo-1518-v18-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:36090->127.0.0.1:8080: read: connection reset by peer\n')
trace2333-mistral-trial4-v2-mkmlizer: quantized model in 36.274s
trace2333-mistral-trial4-v2-mkmlizer: Processed model Trace2333/mistral_trial4 in 63.051s
trace2333-mistral-trial4-v2-mkmlizer: creating bucket guanaco-mkml-models
trace2333-mistral-trial4-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
trace2333-mistral-trial4-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/trace2333-mistral-trial4-v2
trace2333-mistral-trial4-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/trace2333-mistral-trial4-v2/config.json
trace2333-mistral-trial4-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/trace2333-mistral-trial4-v2/special_tokens_map.json
trace2333-mistral-trial4-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/trace2333-mistral-trial4-v2/tokenizer_config.json
trace2333-mistral-trial4-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/trace2333-mistral-trial4-v2/tokenizer.json
trace2333-mistral-trial4-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/trace2333-mistral-trial4-v2/flywheel_model.0.safetensors
trace2333-mistral-trial4-v2-mkmlizer: Loading 0: 0%| | 0/363 [00:00<?, ?it/s] ... Loading 0: 99%|█████████▊| 358/363 [00:08<00:00, 59.22it/s]
Job trace2333-mistral-trial4-v2-mkmlizer completed after 85.82s with status: succeeded
Stopping job with name trace2333-mistral-trial4-v2-mkmlizer
Pipeline stage MKMLizer completed in 86.79s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.19s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service trace2333-mistral-trial4-v2
Waiting for inference service trace2333-mistral-trial4-v2 to be ready
Failed to get response for submission blend_susol_2024-08-22: ('http://zonemercy-virgo-edit-v1-1e5-v3-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:60634->127.0.0.1:8080: read: connection reset by peer\n')
Inference service trace2333-mistral-trial4-v2 ready after 181.10698294639587s
Pipeline stage MKMLDeployer completed in 181.78s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.3693742752075195s
Received healthy response to inference request in 1.630089282989502s
Received healthy response to inference request in 2.072727918624878s
Received healthy response to inference request in 2.8804397583007812s
Received healthy response to inference request in 2.5399980545043945s
5 requests
0 failed requests
5th percentile: 1.7186170101165772
10th percentile: 1.8071447372436524
20th percentile: 1.9842001914978027
30th percentile: 2.1320571899414062
40th percentile: 2.250715732574463
50th percentile: 2.3693742752075195
60th percentile: 2.4376237869262694
70th percentile: 2.5058732986450196
80th percentile: 2.608086395263672
90th percentile: 2.7442630767822265
95th percentile: 2.8123514175415036
99th percentile: 2.8668220901489256
mean time: 2.298525857925415
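The StressChecker percentiles above are consistent with linear interpolation over the five sorted response times (numpy's default "linear" percentile method); a sketch that reproduces them:

```python
# The five healthy response times logged above, in arrival order.
times = [
    2.3693742752075195,
    1.630089282989502,
    2.072727918624878,
    2.8804397583007812,
    2.5399980545043945,
]

def percentile(xs, q):
    """Linear-interpolation percentile (numpy's 'linear' method)."""
    xs = sorted(xs)
    pos = q / 100 * (len(xs) - 1)   # fractional index into the sorted sample
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

mean_time = sum(times) / len(times)   # ~2.2985
p50 = percentile(times, 50)           # the median, ~2.3694
```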
Pipeline stage StressChecker completed in 12.97s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 6.09s
Shutdown handler de-registered
trace2333-mistral-trial4_v2 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.16s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.12s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service trace2333-mistral-trial4-v2-profiler
Waiting for inference service trace2333-mistral-trial4-v2-profiler to be ready
Inference service trace2333-mistral-trial4-v2-profiler ready after 150.34495902061462s
Pipeline stage MKMLProfilerDeployer completed in 150.71s
run pipeline stage %s
Running pipeline stage MKMLProfilerRunner
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/trace2333-mistral-trial4-v2-profiler-predictor-00001-deploxkpqt:/code/chaiverse_profiler_1725554972 --namespace tenant-chaiml-guanaco
kubectl exec -it trace2333-mistral-trial4-v2-profiler-predictor-00001-deploxkpqt --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1725554972 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 512 --output_tokens 64 --summary /code/chaiverse_profiler_1725554972/summary.json'
kubectl exec -it trace2333-mistral-trial4-v2-profiler-predictor-00001-deploxkpqt --namespace tenant-chaiml-guanaco -- bash -c 'cat /code/chaiverse_profiler_1725554972/summary.json'
Pipeline stage MKMLProfilerRunner completed in 955.59s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service trace2333-mistral-trial4-v2-profiler is running
Tearing down inference service trace2333-mistral-trial4-v2-profiler
Service trace2333-mistral-trial4-v2-profiler has been torndown
Pipeline stage MKMLProfilerDeleter completed in 1.60s
Shutdown handler de-registered
trace2333-mistral-trial4_v2 status is now inactive due to auto deactivation of underperforming models
trace2333-mistral-trial4_v2 status is now torndown due to DeploymentManager action