developer_uid: Trace2333
submission_id: trace2333-mistral-trial2_v3
model_name: trace2333-mistral-trial2_v1
model_group: Trace2333/mistral_trial2
status: torndown
timestamp: 2024-09-05T08:14:35+00:00
num_battles: 12170
num_wins: 6178
celo_rating: 1247.02
family_friendly_score: 0.0
submission_type: basic
model_repo: Trace2333/mistral_trial2
model_architecture: MistralForCausalLM
model_num_parameters: 12772070400.0
best_of: 8
max_input_tokens: 512
max_output_tokens: 64
latencies: [{'batch_size': 1, 'throughput': 0.6893786893992181, 'latency_mean': 1.4505229604244232, 'latency_p50': 1.4450286626815796, 'latency_p90': 1.6263309240341186}, {'batch_size': 3, 'throughput': 1.313684775116294, 'latency_mean': 2.2734776175022127, 'latency_p50': 2.2863636016845703, 'latency_p90': 2.5240588903427126}, {'batch_size': 5, 'throughput': 1.5551656367268276, 'latency_mean': 3.1947600662708284, 'latency_p50': 3.1884845495224, 'latency_p90': 3.588741683959961}, {'batch_size': 6, 'throughput': 1.5973586918687148, 'latency_mean': 3.732247235774994, 'latency_p50': 3.752244710922241, 'latency_p90': 4.240312242507935}, {'batch_size': 8, 'throughput': 1.5833352909675271, 'latency_mean': 5.0173499870300295, 'latency_p50': 5.041218638420105, 'latency_p90': 5.745888328552246}, {'batch_size': 10, 'throughput': 1.537048910170404, 'latency_mean': 6.459498023986816, 'latency_p50': 6.463552355766296, 'latency_p90': 7.423413252830505}]
gpu_counts: {'NVIDIA RTX A5000': 1}
display_name: trace2333-mistral-trial2_v1
is_internal_developer: False
language_model: Trace2333/mistral_trial2
model_size: 13B
ranking_group: single
throughput_3p7s: 1.61
us_pacific_date: 2024-09-05
win_ratio: 0.5076417419884963
generation_params: {'temperature': 0.9, 'top_p': 1.0, 'min_p': 0.06, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '</s>', '###'], 'max_input_tokens': 512, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
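The formatter and generation_params fields above fully determine how a conversation turn is assembled and sampled. The sketch below is an illustrative assumption of how those templates could be rendered into a single prompt string; the render_prompt helper, the persona, and the messages are made-up example data, not values taken from this submission.

```python
# Hypothetical illustration of rendering this submission's formatter templates.
# The persona, user name, and chat turns are invented example data.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def render_prompt(bot_name, user_name, memory, prompt, turns):
    """Assemble persona memory, scenario prompt, and chat turns into one prompt string."""
    parts = [
        formatter["memory_template"].format(bot_name=bot_name, memory=memory),
        formatter["prompt_template"].format(prompt=prompt),
    ]
    for speaker, message in turns:
        if speaker == "bot":
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

example = render_prompt(
    bot_name="Assistant",
    user_name="User",
    memory="A helpful assistant.",
    prompt="Chat casually.",
    turns=[("user", "Hi there!"), ("bot", "Hello!"), ("user", "How are you?")],
)
print(example)
```

Sampling then uses the generation_params shown above: up to 8 candidate completions (best_of: 8) are drawn with temperature 0.9, top_k 80, min_p 0.06, capped at 64 output tokens and stopped on '\n', '</s>', or '###'; how the serving stack selects among the 8 candidates is not shown in this log.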
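Two of the summary fields can likewise be re-derived from the other metadata. win_ratio is simply num_wins / num_battles, and throughput_3p7s appears to be the throughput at a mean latency of about 3.7 s; the linear interpolation over the coarse latencies list below is an assumption, so it lands near, but not exactly on, the published 1.61 (which presumably comes from the finer-grained profiling sweep later in this log).

```python
# Sketch reproducing two derived fields from the metadata above.
# The interpolation used for throughput_3p7s is an assumption, not a documented definition.
num_battles, num_wins = 12170, 6178
print(num_wins / num_battles)  # ≈ 0.5076, matching the win_ratio field

latencies = [
    {"batch_size": 1,  "throughput": 0.6893786893992181, "latency_mean": 1.4505229604244232},
    {"batch_size": 3,  "throughput": 1.313684775116294,  "latency_mean": 2.2734776175022127},
    {"batch_size": 5,  "throughput": 1.5551656367268276, "latency_mean": 3.1947600662708284},
    {"batch_size": 6,  "throughput": 1.5973586918687148, "latency_mean": 3.732247235774994},
    {"batch_size": 8,  "throughput": 1.5833352909675271, "latency_mean": 5.0173499870300295},
    {"batch_size": 10, "throughput": 1.537048910170404,  "latency_mean": 6.459498023986816},
]

def throughput_at(target_latency, points):
    """Linearly interpolate throughput at a target mean latency (assumed metric)."""
    pts = sorted(points, key=lambda p: p["latency_mean"])
    for lo, hi in zip(pts, pts[1:]):
        if lo["latency_mean"] <= target_latency <= hi["latency_mean"]:
            t = (target_latency - lo["latency_mean"]) / (hi["latency_mean"] - lo["latency_mean"])
            return lo["throughput"] + t * (hi["throughput"] - lo["throughput"])
    return pts[-1]["throughput"]

print(round(throughput_at(3.7, latencies), 2))  # ~1.59, close to the reported throughput_3p7s of 1.61
```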
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name trace2333-mistral-trial2-v3-mkmlizer
Waiting for job on trace2333-mistral-trial2-v3-mkmlizer to finish
trace2333-mistral-trial2-v3-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
trace2333-mistral-trial2-v3-mkmlizer: ║ _____ __ __ ║
trace2333-mistral-trial2-v3-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
trace2333-mistral-trial2-v3-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
trace2333-mistral-trial2-v3-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
trace2333-mistral-trial2-v3-mkmlizer: ║ /___/ ║
trace2333-mistral-trial2-v3-mkmlizer: ║ ║
trace2333-mistral-trial2-v3-mkmlizer: ║ Version: 0.10.1 ║
trace2333-mistral-trial2-v3-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
trace2333-mistral-trial2-v3-mkmlizer: ║ https://mk1.ai ║
trace2333-mistral-trial2-v3-mkmlizer: ║ ║
trace2333-mistral-trial2-v3-mkmlizer: ║ The license key for the current software has been verified as ║
trace2333-mistral-trial2-v3-mkmlizer: ║ belonging to: ║
trace2333-mistral-trial2-v3-mkmlizer: ║ ║
trace2333-mistral-trial2-v3-mkmlizer: ║ Chai Research Corp. ║
trace2333-mistral-trial2-v3-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
trace2333-mistral-trial2-v3-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
trace2333-mistral-trial2-v3-mkmlizer: ║ ║
trace2333-mistral-trial2-v3-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
trace2333-mistral-trial2-v3-mkmlizer: Downloaded to shared memory in 29.757s
trace2333-mistral-trial2-v3-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpgf5p5qf_, device:0
trace2333-mistral-trial2-v3-mkmlizer: Saving flywheel model at /dev/shm/model_cache
Retrying (%r) after connection broken by '%r': %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
trace2333-mistral-trial2-v3-mkmlizer: quantized model in 35.648s
trace2333-mistral-trial2-v3-mkmlizer: Processed model Trace2333/mistral_trial2 in 65.406s
trace2333-mistral-trial2-v3-mkmlizer: creating bucket guanaco-mkml-models
trace2333-mistral-trial2-v3-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
trace2333-mistral-trial2-v3-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/trace2333-mistral-trial2-v3
trace2333-mistral-trial2-v3-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/trace2333-mistral-trial2-v3/tokenizer.json
trace2333-mistral-trial2-v3-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/trace2333-mistral-trial2-v3/flywheel_model.0.safetensors
trace2333-mistral-trial2-v3-mkmlizer: Loading 0: 98%|█████████▊| 355/363 [00:08<00:00, 57.58it/s]
Job trace2333-mistral-trial2-v3-mkmlizer completed after 94.84s with status: succeeded
Stopping job with name trace2333-mistral-trial2-v3-mkmlizer
Pipeline stage MKMLizer completed in 95.77s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.13s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service trace2333-mistral-trial2-v3
Waiting for inference service trace2333-mistral-trial2-v3 to be ready
Inference service trace2333-mistral-trial2-v3 ready after 140.82430768013s
Pipeline stage MKMLDeployer completed in 141.25s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 3.286086320877075s
Received healthy response to inference request in 1.899984359741211s
Received healthy response to inference request in 2.3080012798309326s
Received healthy response to inference request in 1.934964656829834s
Received healthy response to inference request in 1.9470906257629395s
5 requests
0 failed requests
5th percentile: 1.9069804191589355
10th percentile: 1.9139764785766602
20th percentile: 1.9279685974121095
30th percentile: 1.937389850616455
40th percentile: 1.9422402381896973
50th percentile: 1.9470906257629395
60th percentile: 2.0914548873901366
70th percentile: 2.235819149017334
80th percentile: 2.5036182880401614
90th percentile: 2.894852304458618
95th percentile: 3.0904693126678464
99th percentile: 3.2469629192352296
mean time: 2.2752254486083983
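For reference, the StressChecker summary above follows directly from the five healthy response times; a minimal sketch, assuming numpy's default linear-interpolation percentiles, reproduces the reported figures.

```python
# Recompute the StressChecker statistics from the five response times above.
import numpy as np

times = [3.286086320877075, 1.899984359741211, 2.3080012798309326,
         1.934964656829834, 1.9470906257629395]

print("mean time:", np.mean(times))  # ≈ 2.2752254486, the reported mean
for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    # e.g. p=5 -> 1.9069804191589355, p=90 -> 2.894852304458618
    print(f"{p}th percentile:", np.percentile(times, p))
```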
Pipeline stage StressChecker completed in 12.52s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 4.85s
Shutdown handler de-registered
trace2333-mistral-trial2_v3 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.12s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.11s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service trace2333-mistral-trial2-v3-profiler
Waiting for inference service trace2333-mistral-trial2-v3-profiler to be ready
Inference service trace2333-mistral-trial2-v3-profiler ready after 150.34693789482117s
Pipeline stage MKMLProfilerDeployer completed in 150.69s
run pipeline stage %s
Running pipeline stage MKMLProfilerRunner
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/trace2333-mistral-trial2-v3-profiler-predictor-00001-deplos9gjh:/code/chaiverse_profiler_1725524520 --namespace tenant-chaiml-guanaco
kubectl exec -it trace2333-mistral-trial2-v3-profiler-predictor-00001-deplos9gjh --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1725524520 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 512 --output_tokens 64 --summary /code/chaiverse_profiler_1725524520/summary.json'
kubectl exec -it trace2333-mistral-trial2-v3-profiler-predictor-00001-deplos9gjh --namespace tenant-chaiml-guanaco -- bash -c 'cat /code/chaiverse_profiler_1725524520/summary.json'
Pipeline stage MKMLProfilerRunner completed in 957.21s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service trace2333-mistral-trial2-v3-profiler is running
Tearing down inference service trace2333-mistral-trial2-v3-profiler
Service trace2333-mistral-trial2-v3-profiler has been torn down
Pipeline stage MKMLProfilerDeleter completed in 1.87s
Shutdown handler de-registered
trace2333-mistral-trial2_v3 status is now inactive due to auto deactivation (removal of underperforming models)
trace2333-mistral-trial2_v3 status is now torndown due to DeploymentManager action