submission_id: trace2333-fd-llama3-v1-f_1618_v1
developer_uid: Trace2333
alignment_samples: 12099
alignment_score: 0.8359959124288546
best_of: 16
celo_rating: 1193.23
display_name: trace2333-fd-llama3-v1-f_1618_v1
formatter: {'memory_template': "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\nYou: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.15, 'top_p': 1.0, 'min_p': 0.06, 'top_k': 250, 'presence_penalty': 0.0, 'frequency_penalty': 0.1, 'stopping_words': ['\n', '<|eot_id|>'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 256}
is_internal_developer: False
language_model: Trace2333/fd_llama3_v1_fdall_r32a16_bs32
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: Trace2333/fd_llama3_v1_f
model_name: trace2333-fd-llama3-v1-f_1618_v1
model_num_parameters: 8030261248.0
model_repo: Trace2333/fd_llama3_v1_fdall_r32a16_bs32
model_size: 8B
num_battles: 12099
num_wins: 5523
propriety_score: 0.7414589104339797
propriety_total_count: 1083.0
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: Jellywibble/gpt2_xl_pairwise_89m_step_347634
status: torndown
submission_type: basic
timestamp: 2024-08-12T09:57:02+00:00
us_pacific_date: 2024-08-12
win_ratio: 0.45648400694272256
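The win statistics above are internally consistent: win_ratio is simply num_wins divided by num_battles. A quick check in plain Python, using only the fields reported above:

```python
num_battles, num_wins = 12099, 5523
print(num_wins / num_battles)  # 0.45648400694272256, the reported win_ratio
```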
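The formatter and reward_formatter entries are plain Python template strings. As a rough illustration (the build_prompt helper and the sample conversation below are hypothetical, not the production serving code), the generation prompt would be built by rendering the persona into a Llama-3 system header, appending the scenario, alternating user/assistant turns, and ending with the open response header:

```python
formatter = {
    "memory_template": "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n",
    "prompt_template": "{prompt}<|eot_id|>",
    "bot_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>",
    "user_template": "<|start_header_id|>user<|end_header_id|>\n\nYou: {message}<|eot_id|>",
    "response_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:",
}

def build_prompt(memory: str, prompt: str, turns: list[tuple[str, str]], bot_name: str) -> str:
    """Render persona and scenario, then each turn, ending with the open assistant header."""
    parts = [
        formatter["memory_template"].format(bot_name=bot_name, memory=memory),
        formatter["prompt_template"].format(prompt=prompt),
    ]
    for role, message in turns:
        template = formatter["user_template"] if role == "user" else formatter["bot_template"]
        parts.append(template.format(bot_name=bot_name, message=message))
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

print(build_prompt(
    memory="A cheerful tour guide.",
    prompt="You are exploring an old city together.",
    turns=[("user", "Where should we go first?")],
    bot_name="Mira",
))
```

The reward_formatter follows the same substitution scheme but in a flat `Name: message` layout with `####` and `<START>` separators, presumably matching the GPT-2 reward model's expected input rather than Llama-3 headers. Under generation_params, each request would sample best_of=16 candidates (temperature 1.15, min_p 0.06, top_k 250, frequency_penalty 0.1), cap each at 64 output tokens, stop on a newline or `<|eot_id|>`, and give the reward model up to 256 input tokens to rank them; the exact selection logic is internal to the platform.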
Running pipeline stage MKMLizer
Starting job with name trace2333-fd-llama3-v1-f-1618-v1-mkmlizer
Waiting for job on trace2333-fd-llama3-v1-f-1618-v1-mkmlizer to finish
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║              [ "flywheel" ASCII-art wordmark ]                      ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║ ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║ Version: 0.9.9 ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║ https://mk1.ai ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║ ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║ The license key for the current software has been verified as ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║ belonging to: ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║ ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║ Chai Research Corp. ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ║ ║
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: Downloaded to shared memory in 51.855s
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpnbllkg8l, device:0
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: quantized model in 28.079s
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: Processed model Trace2333/fd_llama3_v1_fdall_r32a16_bs32 in 79.934s
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: creating bucket guanaco-mkml-models
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-1618-v1
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-1618-v1/config.json
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-1618-v1/special_tokens_map.json
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-1618-v1/tokenizer_config.json
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-1618-v1/tokenizer.json
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-1618-v1/flywheel_model.0.safetensors
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: loading reward model from Jellywibble/gpt2_xl_pairwise_89m_step_347634
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... Loading 0: 98%|█████████▊| 286/291 [00:13<00:01, 2.54it/s] Loading 0: 99%|█████████▉| 289/291 [00:13<00:00, 3.19it/s]
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: warnings.warn(
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: warnings.warn(
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: warnings.warn(
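The three FutureWarning lines come from transformers deprecating the `use_auth_token` argument in favor of `token`. A minimal sketch of the migration (the repo name is taken from this submission; reading the token from HF_TOKEN is an assumption, and mkmlizer's actual call sites are not shown in the log):

```python
import os
from transformers import AutoConfig, AutoTokenizer

repo = "Trace2333/fd_llama3_v1_fdall_r32a16_bs32"
hf_token = os.environ.get("HF_TOKEN")  # Hugging Face access token, needed only for gated repos

# Deprecated spelling (what triggers the FutureWarning; removed in transformers v5):
#   AutoConfig.from_pretrained(repo, use_auth_token=hf_token)

# Current spelling: the same credential is passed as `token`.
config = AutoConfig.from_pretrained(repo, token=hf_token)
tokenizer = AutoTokenizer.from_pretrained(repo, token=hf_token)
```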
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: Saving duration: 1.355s
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: Processed model Jellywibble/gpt2_xl_pairwise_89m_step_347634 in 14.145s
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: creating bucket guanaco-reward-models
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-1618-v1_reward/special_tokens_map.json
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-1618-v1_reward/config.json
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-1618-v1_reward/tokenizer_config.json
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-1618-v1_reward/merges.txt
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-1618-v1_reward/vocab.json
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-1618-v1_reward/tokenizer.json
trace2333-fd-llama3-v1-f-1618-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-1618-v1_reward/reward.tensors
Job trace2333-fd-llama3-v1-f-1618-v1-mkmlizer completed after 124.97s with status: succeeded
Stopping job with name trace2333-fd-llama3-v1-f-1618-v1-mkmlizer
Pipeline stage MKMLizer completed in 125.95s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.11s
Running pipeline stage ISVCDeployer
Creating inference service trace2333-fd-llama3-v1-f-1618-v1
Waiting for inference service trace2333-fd-llama3-v1-f-1618-v1 to be ready
Failed to get response for submission blend_mofos_2024-07-31: ('http://neversleep-noromaid-v0-8068-v142-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:49592->127.0.0.1:8080: read: connection reset by peer\n')
Inference service trace2333-fd-llama3-v1-f-1618-v1 ready after 201.20715022087097s
Pipeline stage ISVCDeployer completed in 203.17s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.2653160095214844s
Received healthy response to inference request in 1.4654769897460938s
Received healthy response to inference request in 1.4594693183898926s
Received healthy response to inference request in 1.3478922843933105s
Received healthy response to inference request in 1.3661293983459473s
5 requests
0 failed requests
5th percentile: 1.351539707183838
10th percentile: 1.3551871299743652
20th percentile: 1.3624819755554198
30th percentile: 1.3847973823547364
40th percentile: 1.4221333503723144
50th percentile: 1.4594693183898926
60th percentile: 1.461872386932373
70th percentile: 1.4642754554748536
80th percentile: 1.625444793701172
90th percentile: 1.9453804016113283
95th percentile: 2.105348205566406
99th percentile: 2.2333224487304686
mean time: 1.5808568000793457
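The percentile figures above are consistent with linear interpolation over the five sorted response times, i.e. numpy's default percentile method. Whether the stress checker actually uses numpy is an assumption, but the sketch below reproduces every logged value exactly:

```python
import numpy as np

# The five healthy response times reported above, in seconds.
times = [2.2653160095214844, 1.4654769897460938, 1.4594693183898926,
         1.3478922843933105, 1.3661293983459473]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(times, p)}")  # matches the log lines above
print(f"mean time: {np.mean(times)}")  # 1.5808568000793457
```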
Pipeline stage StressChecker completed in 8.67s
trace2333-fd-llama3-v1-f_1618_v1 status is now deployed due to DeploymentManager action
trace2333-fd-llama3-v1-f_1618_v1 status is now inactive due to auto deactivation of underperforming models
trace2333-fd-llama3-v1-f_1618_v1 status is now torndown due to DeploymentManager action
