submission_id: meta-llama-llama-3-1-8b-_7331_v5
developer_uid: chai_backend_admin
best_of: 8
celo_rating: 1219.01
display_name: meta-llama-llama-3-1-8b-_7331_v5
family_friendly_score: 0.5917068039639041
family_friendly_standard_error: 0.006318081866340825
formatter: {'memory_template': '### Instruction:\n{memory}\n', 'prompt_template': '### Input:\n{prompt}\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '### Response:\n{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 100, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['</s>', 'You:', '\n', '<|eot_id|>'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
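The formatter fields above can be read as a simple template assembly. A minimal sketch in Python, using the templates exactly as listed; the sample conversation and the assembly order (memory block, prompt block, chat turns, then the response header) are assumptions for illustration, not taken from the log:

```python
# Templates copied from the formatter above; the sample conversation is hypothetical.
memory_template = "### Instruction:\n{memory}\n"
prompt_template = "### Input:\n{prompt}\n"
bot_template = "{bot_name}: {message}\n"
user_template = "{user_name}: {message}\n"
response_template = "### Response:\n{bot_name}:"

def build_prompt(memory, prompt, turns, bot_name):
    """Assemble the model input: memory block, prompt block, chat turns,
    then the response header the model completes after."""
    parts = [memory_template.format(memory=memory),
             prompt_template.format(prompt=prompt)]
    for speaker, message in turns:
        if speaker == bot_name:
            parts.append(bot_template.format(bot_name=speaker, message=message))
        else:
            parts.append(user_template.format(user_name=speaker, message=message))
    parts.append(response_template.format(bot_name=bot_name))
    return "".join(parts)

text = build_prompt("Bot is friendly.", "A chat.",
                    [("You", "hi"), ("Bot", "hello")], "Bot")
```

Note how the response_template ends at `{bot_name}:` with no trailing newline, so generation begins immediately after the colon; the stopping words in generation_params ('\n', 'You:', etc.) then cut the reply off at the end of the bot's turn.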
is_internal_developer: True
language_model: meta-llama/Llama-3.1-8B-Instruct
max_input_tokens: 1024
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: meta-llama/Llama-3.1-8B-
model_name: meta-llama-llama-3-1-8b-_7331_v5
model_num_parameters: 8030261248.0
model_repo: meta-llama/Llama-3.1-8B-Instruct
model_size: 8B
num_battles: 6257
num_wins: 2884
ranking_group: single
status: torndown
submission_type: basic
timestamp: 2024-10-11T16:08:18+00:00
us_pacific_date: 2024-10-11
win_ratio: 0.4609237653827713
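The win_ratio field is simply num_wins divided by num_battles; a quick sanity check against the values reported above:

```python
# Values from the submission metadata above.
num_battles = 6257
num_wins = 2884

win_ratio = num_wins / num_battles
# Agrees with the reported win_ratio of 0.4609237653827713.
```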
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name meta-llama-llama-3-1-8b-7331-v5-mkmlizer
Waiting for job on meta-llama-llama-3-1-8b-7331-v5-mkmlizer to finish
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ _____ __ __ ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ /___/ ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ Version: 0.11.12 ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ https://mk1.ai ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ The license key for the current software has been verified as ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ belonging to: ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ Chai Research Corp. ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ║ ║
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: Downloaded to shared memory in 40.362s
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmplicewbgq, device:0
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: Saving flywheel model at /dev/shm/model_cache
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: quantized model in 26.422s
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: Processed model meta-llama/Llama-3.1-8B-Instruct in 66.784s
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: creating bucket guanaco-mkml-models
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/meta-llama-llama-3-1-8b-7331-v5
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/meta-llama-llama-3-1-8b-7331-v5/config.json
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/meta-llama-llama-3-1-8b-7331-v5/special_tokens_map.json
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/meta-llama-llama-3-1-8b-7331-v5/tokenizer_config.json
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/meta-llama-llama-3-1-8b-7331-v5/tokenizer.json
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/meta-llama-llama-3-1-8b-7331-v5/flywheel_model.0.safetensors
meta-llama-llama-3-1-8b-7331-v5-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... Loading 0: 100%|██████████| 291/291 [00:11<00:00, 3.08it/s]
Job meta-llama-llama-3-1-8b-7331-v5-mkmlizer completed after 83.84s with status: succeeded
Stopping job with name meta-llama-llama-3-1-8b-7331-v5-mkmlizer
Pipeline stage MKMLizer completed in 84.34s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.15s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service meta-llama-llama-3-1-8b-7331-v5
Waiting for inference service meta-llama-llama-3-1-8b-7331-v5 to be ready
Inference service meta-llama-llama-3-1-8b-7331-v5 ready after 140.9641034603119s
Pipeline stage MKMLDeployer completed in 141.43s
run pipeline stage %s
Running pipeline stage StressChecker
HTTPConnectionPool(host='guanaco-submitter.guanaco-backend.k2.chaiverse.com', port=80): Read timed out. (read timeout=20)
Received unhealthy response to inference request!
Received healthy response to inference request in 2.02235746383667s
Received healthy response to inference request in 1.468430757522583s
Received healthy response to inference request in 1.6490440368652344s
Received healthy response to inference request in 1.336503505706787s
5 requests
1 failed requests
5th percentile: 1.3628889560699462
10th percentile: 1.3892744064331055
20th percentile: 1.442045307159424
30th percentile: 1.5045534133911134
40th percentile: 1.5767987251281739
50th percentile: 1.6490440368652344
60th percentile: 1.7983694076538086
70th percentile: 1.9476947784423828
80th percentile: 5.646898460388186
90th percentile: 12.895980453491212
95th percentile: 16.52052145004272
99th percentile: 19.420154247283936
mean time: 5.324279642105102
%s, retrying in %s seconds...
Received healthy response to inference request in 1.2381455898284912s
Received healthy response to inference request in 1.4435064792633057s
Received healthy response to inference request in 1.5539700984954834s
Received healthy response to inference request in 1.565622091293335s
Received healthy response to inference request in 1.4841499328613281s
5 requests
0 failed requests
5th percentile: 1.2792177677154541
10th percentile: 1.320289945602417
20th percentile: 1.4024343013763427
30th percentile: 1.4516351699829102
40th percentile: 1.467892551422119
50th percentile: 1.4841499328613281
60th percentile: 1.5120779991149902
70th percentile: 1.5400060653686523
80th percentile: 1.5563004970550538
90th percentile: 1.5609612941741944
95th percentile: 1.5632916927337646
99th percentile: 1.565156011581421
mean time: 1.4570788383483886
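The percentile figures above are consistent with linear interpolation over the sorted response times (the default method of numpy.percentile). A pure-Python sketch reproducing the second, fully healthy batch; the helper function is hypothetical, written only to match the reported numbers:

```python
def percentile(sorted_xs, p):
    """Linear-interpolated percentile over already-sorted data
    (equivalent to numpy.percentile's default 'linear' method)."""
    idx = p / 100 * (len(sorted_xs) - 1)
    lo = int(idx)
    frac = idx - lo
    if lo + 1 >= len(sorted_xs):
        return sorted_xs[lo]
    return sorted_xs[lo] + frac * (sorted_xs[lo + 1] - sorted_xs[lo])

# Response times (seconds) from the second StressChecker batch above.
times = sorted([1.2381455898284912, 1.4435064792633057, 1.5539700984954834,
                1.565622091293335, 1.4841499328613281])

p5 = percentile(times, 5)    # matches the reported 5th percentile
p50 = percentile(times, 50)  # the median, matching the 50th percentile line
mean = sum(times) / len(times)
```

With only five samples, the 50th percentile is just the middle value, and each reported percentile falls between two adjacent samples by linear interpolation; the first batch's large upper percentiles arise the same way once the ~20s timed-out request is included.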
Pipeline stage StressChecker completed in 37.43s
Shutdown handler de-registered
meta-llama-llama-3-1-8b-_7331_v5 status is now deployed due to DeploymentManager action
meta-llama-llama-3-1-8b-_7331_v5 status is now inactive due to auto deactivation (removal of underperforming models)
meta-llama-llama-3-1-8b-_7331_v5 status is now torndown due to DeploymentManager action