submission_id: trace2333-fd-llama3-v1-f_4147_v1
developer_uid: Trace2333
alignment_samples: 11055
alignment_score: 1.3668372174314571
best_of: 16
celo_rating: 1191.24
display_name: trace2333-fd-llama3-v1-f_4147_v1
formatter: {'memory_template': "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\nYou: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
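For reference, a minimal sketch of how these templates could be assembled into a single Llama 3 prompt. The `render` helper and the sample conversation are hypothetical; the actual serialization is performed by the Chaiverse pipeline.

```python
# Sketch of how the formatter templates above might be applied.
# The helper and sample data are hypothetical, not pipeline code.
memory_template = "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n"
prompt_template = "{prompt}<|eot_id|>"
bot_template = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>"
user_template = "<|start_header_id|>user<|end_header_id|>\n\nYou: {message}<|eot_id|>"
response_template = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:"

def render(bot_name, memory, prompt, turns):
    """Assemble one generation prompt from the templates above."""
    out = memory_template.format(bot_name=bot_name, memory=memory)
    out += prompt_template.format(prompt=prompt)
    for role, message in turns:
        tpl = user_template if role == "user" else bot_template
        out += tpl.format(bot_name=bot_name, message=message)
    # The response template leaves the assistant turn open for the model.
    return out + response_template.format(bot_name=bot_name)

print(render("Ava", "A friendly guide.", "Greet the user.",
             [("user", "Hi!"), ("bot", "Hello there!")]))
```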
generation_params: {'temperature': 1.15, 'top_p': 1.0, 'min_p': 0.06, 'top_k': 250, 'presence_penalty': 0.0, 'frequency_penalty': 0.1, 'stopping_words': ['\n', '<|eot_id|>'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 256}
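A hedged sketch of how these values could be passed to a sampler, using vLLM's `SamplingParams` as a stand-in; the serving stack is not identified in this log, so the library choice is an assumption:

```python
# Sketch only: expresses the generation_params above as vLLM SamplingParams.
# Whether the deployment actually uses vLLM is an assumption.
from vllm import LLM, SamplingParams

params = SamplingParams(
    temperature=1.15,
    top_p=1.0,
    min_p=0.06,
    top_k=250,
    presence_penalty=0.0,
    frequency_penalty=0.1,
    stop=["\n", "<|eot_id|>"],  # stopping_words
    max_tokens=64,              # max_output_tokens
    n=1,
    best_of=16,                 # 16 candidates; on Chaiverse the winner is
                                # picked by the reward model, not log-prob
)
llm = LLM(model="Trace2333/fd_llama3_v1_fdall_r32a16_5w")
outputs = llm.generate(["<prompt rendered with the formatter above>"], params)
print(outputs[0].outputs[0].text)
```

Note that `max_input_tokens: 512` is a prompt-truncation limit applied before sampling, not a sampling parameter.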
is_internal_developer: False
language_model: Trace2333/fd_llama3_v1_fdall_r32a16_5w
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: Trace2333/fd_llama3_v1_f
model_name: trace2333-fd-llama3-v1-f_4147_v1
model_num_parameters: 8030261248.0
model_repo: Trace2333/fd_llama3_v1_fdall_r32a16_5w
model_size: 8B
num_battles: 11055
num_wins: 5124
propriety_score: 0.7220376522702104
propriety_total_count: 903.0
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: Jellywibble/gpt2_xl_pairwise_89m_step_347634
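Combined with `best_of: 16`, the reward model plausibly acts as a reranker: each sampled candidate is serialized with the reward_formatter templates above, scored, and the top-scoring completion is returned. A minimal sketch, assuming a standard transformers classification head on the GPT-2 reward model; the production scoring code is not part of this log:

```python
# Minimal best-of-n reranking sketch. Assumes the reward model exposes a
# single scalar score per sequence; not the actual Chaiverse scoring code.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_repo = "Jellywibble/gpt2_xl_pairwise_89m_step_347634"
tok = AutoTokenizer.from_pretrained(reward_repo)
rm = AutoModelForSequenceClassification.from_pretrained(reward_repo)

def best_of_n(context: str, candidates: list[str]) -> str:
    """Score each candidate completion and return the highest-ranked one."""
    scores = []
    for cand in candidates:
        # reward_max_token_input: 256 tokens of context + completion
        inputs = tok(context + cand, truncation=True, max_length=256,
                     return_tensors="pt")
        with torch.no_grad():
            scores.append(rm(**inputs).logits[0, 0].item())
    return candidates[scores.index(max(scores))]
```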
status: torndown
submission_type: basic
timestamp: 2024-08-13T01:45:27+00:00
us_pacific_date: 2024-08-12
win_ratio: 0.4635006784260516
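The derived fields are internally consistent; a quick arithmetic check against the values listed above:

```python
# Sanity-check derived metrics from the record above.
num_wins, num_battles = 5124, 11055
assert abs(num_wins / num_battles - 0.4635006784260516) < 1e-12  # win_ratio
print(f"{8030261248 / 1e9:.2f}B parameters")  # model_num_parameters -> "8B"
```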
Running pipeline stage MKMLizer
Starting job with name trace2333-fd-llama3-v1-f-4147-v1-mkmlizer
Waiting for job on trace2333-fd-llama3-v1-f-4147-v1-mkmlizer to finish
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ _____ __ __ ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ /___/ ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ Version: 0.9.9 ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ https://mk1.ai ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ The license key for the current software has been verified as ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ belonging to: ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ Chai Research Corp. ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ║ ║
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: Downloaded to shared memory in 59.483s
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpazrkkwlh, device:0
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: quantized model in 28.850s
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: Processed model Trace2333/fd_llama3_v1_fdall_r32a16_5w in 88.333s
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: creating bucket guanaco-mkml-models
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-4147-v1
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-4147-v1/config.json
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-4147-v1/special_tokens_map.json
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-4147-v1/tokenizer_config.json
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-4147-v1/tokenizer.json
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-4147-v1/flywheel_model.0.safetensors
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: loading reward model from Jellywibble/gpt2_xl_pairwise_89m_step_347634
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: Loading 0: 99%|█████████▉| 289/291 [00:14<00:00, 3.21it/s]
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: warnings.warn(
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: warnings.warn(
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: warnings.warn(
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: Saving duration: 1.454s
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: Processed model Jellywibble/gpt2_xl_pairwise_89m_step_347634 in 12.062s
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: creating bucket guanaco-reward-models
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-4147-v1_reward
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-4147-v1_reward/config.json
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-4147-v1_reward/special_tokens_map.json
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-4147-v1_reward/tokenizer_config.json
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-4147-v1_reward/merges.txt
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-4147-v1_reward/vocab.json
trace2333-fd-llama3-v1-f-4147-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-4147-v1_reward/tokenizer.json
Job trace2333-fd-llama3-v1-f-4147-v1-mkmlizer completed after 136.16s with status: succeeded
Stopping job with name trace2333-fd-llama3-v1-f-4147-v1-mkmlizer
Pipeline stage MKMLizer completed in 137.06s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service trace2333-fd-llama3-v1-f-4147-v1
Waiting for inference service trace2333-fd-llama3-v1-f-4147-v1 to be ready
Inference service trace2333-fd-llama3-v1-f-4147-v1 ready after 211.62s
Pipeline stage ISVCDeployer completed in 213.29s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.2978060245513916s
Received healthy response to inference request in 1.5755627155303955s
Received healthy response to inference request in 1.5474250316619873s
Received healthy response to inference request in 1.4223263263702393s
Received healthy response to inference request in 1.4211807250976562s
5 requests
0 failed requests
5th percentile: 1.4214098453521729
10th percentile: 1.4216389656066895
20th percentile: 1.4220972061157227
30th percentile: 1.447346067428589
40th percentile: 1.4973855495452881
50th percentile: 1.5474250316619873
60th percentile: 1.5586801052093506
70th percentile: 1.569935178756714
80th percentile: 1.7200113773345949
90th percentile: 2.0089087009429933
95th percentile: 2.153357362747192
99th percentile: 2.2689162921905517
mean time: 1.652860164642334
Pipeline stage StressChecker completed in 9.04s
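The StressChecker percentiles above are reproducible from the five raw response times with linear interpolation; a sketch using NumPy, whose default percentile method matches the reported figures:

```python
# Reproduce the StressChecker statistics from the five raw latencies above.
import numpy as np

latencies = [2.2978060245513916, 1.5755627155303955, 1.5474250316619873,
             1.4223263263702393, 1.4211807250976562]
for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print("mean time:", np.mean(latencies))
```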
trace2333-fd-llama3-v1-f_4147_v1 status is now deployed due to DeploymentManager action
trace2333-fd-llama3-v1-f_4147_v1 status is now inactive due to auto-deactivation of underperforming models
trace2333-fd-llama3-v1-f_4147_v1 status is now torndown due to DeploymentManager action

Usage Metrics

Latency Metrics