submission_id: trace2333-fd-llama3-v2-f_5485_v2
developer_uid: Trace2333
alignment_samples: 0
best_of: 16
celo_rating: 1200.24
display_name: trace2333-fd-llama3-v2-f_5485_v2
formatter: {'memory_template': "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\nYou: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.05, 'top_p': 1.0, 'min_p': 0.06, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.1, 'stopping_words': ['\n', '<|eot_id|>'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 256}
is_internal_developer: False
language_model: Trace2333/fd_llama3_v2_fdall_r64a128_bs32
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: Trace2333/fd_llama3_v2_f
model_name: trace2333-fd-llama3-v2-f_5485_v2
model_num_parameters: 8030261248.0
model_repo: Trace2333/fd_llama3_v2_fdall_r64a128_bs32
model_size: 8B
num_battles: 9246
num_wins: 4522
propriety_score: 0.6968641114982579
propriety_total_count: 861.0
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: Jellywibble/gpt2_xl_pairwise_89m_step_347634
status: torndown
submission_type: basic
timestamp: 2024-08-06T02:14:03+00:00
us_pacific_date: 2024-08-05
win_ratio: 0.4890763573437162
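The derived fields above are consistent with the raw counts: win_ratio = num_wins / num_battles = 4522 / 9246 ≈ 0.4891, and model_num_parameters ≈ 8.03e9 matches the "8B" model_size. A two-line check:

num_wins, num_battles = 4522, 9246
print(num_wins / num_battles)   # 0.4890763573437162, the win_ratio above
print(8030261248 / 1e9)         # 8.030261248, i.e. the 8B in model_size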
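The formatter is a Llama 3 chat template: each field is a Python format string that the serving stack fills in per turn (and then truncates to max_input_tokens). A minimal sketch of how the pieces compose into one prompt; the bot name, memory, and messages here are placeholders, not taken from the submission:

# The submission's formatter templates, verbatim from the metadata above.
memory_template   = "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n"
prompt_template   = "{prompt}<|eot_id|>"
user_template     = "<|start_header_id|>user<|end_header_id|>\n\nYou: {message}<|eot_id|>"
bot_template      = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>"
response_template = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:"

bot_name, memory = "Aria", "A friendly station guide."   # placeholders
parts = [
    memory_template.format(bot_name=bot_name, memory=memory),
    prompt_template.format(prompt="You meet Aria at the station."),   # scenario, placeholder
    bot_template.format(bot_name=bot_name, message="Welcome aboard!"),
    user_template.format(message="Hi!"),
    response_template.format(bot_name=bot_name),   # the model continues from here
]
print("".join(parts))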
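The generation_params apply temperature 1.05, then restrict sampling to the 80 most likely tokens (top_k) and to tokens whose probability is at least 6% of the top token's (min_p 0.06); top_p 1.0 is a no-op, and best_of 16 means sixteen completions are drawn and the reward model picks one. A minimal NumPy sketch of the filtering step only; the deployed sampler also applies the frequency penalty and stopping words, which are omitted here:

import numpy as np

def filter_probs(logits, temperature=1.05, top_k=80, min_p=0.06):
    # Temperature-scaled softmax.
    z = np.asarray(logits, dtype=np.float64) / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    # min_p: drop tokens below min_p times the top token's probability.
    keep = p >= min_p * p.max()
    # top_k: additionally keep only the k most likely tokens.
    if keep.sum() > top_k:
        keep &= p >= np.sort(p)[-top_k]
    p = np.where(keep, p, 0.0)
    return p / p.sum()

rng = np.random.default_rng(0)
p = filter_probs(rng.normal(size=128))   # toy 128-token vocabulary
token_id = rng.choice(len(p), p=p)       # sample the next token id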
Running pipeline stage MKMLizer
Starting job with name trace2333-fd-llama3-v2-f-5485-v2-mkmlizer
Waiting for job on trace2333-fd-llama3-v2-f-5485-v2-mkmlizer to finish
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ║ [flywheel ASCII-art logo] ║
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ║ Version: 0.9.9 ║
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ║ https://mk1.ai ║
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ║ ║
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ║ The license key for the current software has been verified as ║
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ║ belonging to: ║
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ║ ║
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ║ Chai Research Corp. ║
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ║ ║
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: Downloaded to shared memory in 41.567s
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpkub2msvq, device:0
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: quantized model in 29.178s
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: Processed model Trace2333/fd_llama3_v2_fdall_r64a128_bs32 in 70.746s
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: creating bucket guanaco-mkml-models
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/trace2333-fd-llama3-v2-f-5485-v2
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/trace2333-fd-llama3-v2-f-5485-v2/config.json
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/trace2333-fd-llama3-v2-f-5485-v2/special_tokens_map.json
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/trace2333-fd-llama3-v2-f-5485-v2/tokenizer_config.json
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/trace2333-fd-llama3-v2-f-5485-v2/tokenizer.json
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/trace2333-fd-llama3-v2-f-5485-v2/flywheel_model.0.safetensors
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: loading reward model from Jellywibble/gpt2_xl_pairwise_89m_step_347634
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... 99%|█████████▉| 289/291 [00:14<00:00, 3.21it/s]
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: warnings.warn(
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: warnings.warn(
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: warnings.warn(
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: Saving duration: 1.390s
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: Processed model Jellywibble/gpt2_xl_pairwise_89m_step_347634 in 11.106s
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: creating bucket guanaco-reward-models
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: Bucket 's3://guanaco-reward-models/' created
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/trace2333-fd-llama3-v2-f-5485-v2_reward
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/trace2333-fd-llama3-v2-f-5485-v2_reward/config.json
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/trace2333-fd-llama3-v2-f-5485-v2_reward/tokenizer_config.json
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/trace2333-fd-llama3-v2-f-5485-v2_reward/special_tokens_map.json
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/trace2333-fd-llama3-v2-f-5485-v2_reward/merges.txt
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/trace2333-fd-llama3-v2-f-5485-v2_reward/vocab.json
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/trace2333-fd-llama3-v2-f-5485-v2_reward/tokenizer.json
trace2333-fd-llama3-v2-f-5485-v2-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/trace2333-fd-llama3-v2-f-5485-v2_reward/reward.tensors
Job trace2333-fd-llama3-v2-f-5485-v2-mkmlizer completed after 115.14s with status: succeeded
Stopping job with name trace2333-fd-llama3-v2-f-5485-v2-mkmlizer
Pipeline stage MKMLizer completed in 116.55s
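The cp lines above copy the quantized artifacts into the guanaco-mkml-models bucket one file at a time (the reward cache upload later in the log follows the same pattern). A minimal equivalent with boto3; the bucket, prefix, and file names come from the log, while the client setup is illustrative:

import os
import boto3

s3 = boto3.client("s3")   # credentials taken from the environment; illustrative
bucket = "guanaco-mkml-models"
prefix = "trace2333-fd-llama3-v2-f-5485-v2"
cache = "/dev/shm/model_cache"

for name in ["config.json", "special_tokens_map.json", "tokenizer_config.json",
             "tokenizer.json", "flywheel_model.0.safetensors"]:
    s3.upload_file(os.path.join(cache, name), bucket, f"{prefix}/{name}")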
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.09s
Running pipeline stage ISVCDeployer
Creating inference service trace2333-fd-llama3-v2-f-5485-v2
Waiting for inference service trace2333-fd-llama3-v2-f-5485-v2 to be ready
Inference service trace2333-fd-llama3-v2-f-5485-v2 ready after 171.04992961883545s
Pipeline stage ISVCDeployer completed in 173.13s
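The ISVCDeployer stage blocks for about 171 s until the inference service reports ready. The log does not show how the deployer implements that wait; a sketch of one plausible version using the Kubernetes Python client, assuming a KServe InferenceService in the default namespace (both assumptions):

import time
from kubernetes import client, config

def wait_ready(name, namespace="default", timeout=600):
    # Poll the InferenceService custom resource until its Ready condition is True.
    config.load_kube_config()   # or load_incluster_config() inside the cluster
    api = client.CustomObjectsApi()
    start = time.time()
    while time.time() - start < timeout:
        isvc = api.get_namespaced_custom_object(
            "serving.kserve.io", "v1beta1", namespace, "inferenceservices", name)
        for cond in isvc.get("status", {}).get("conditions", []):
            if cond["type"] == "Ready" and cond["status"] == "True":
                return time.time() - start
        time.sleep(5)
    raise TimeoutError(f"{name} not ready after {timeout}s")

wait_ready("trace2333-fd-llama3-v2-f-5485-v2")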
Running pipeline stage StressChecker
Received healthy response to inference request in 2.209019422531128s
Received healthy response to inference request in 1.4395205974578857s
Received healthy response to inference request in 1.4156427383422852s
Received healthy response to inference request in 1.2547283172607422s
Received healthy response to inference request in 1.333683967590332s
5 requests
0 failed requests
5th percentile: 1.2705194473266601
10th percentile: 1.286310577392578
20th percentile: 1.317892837524414
30th percentile: 1.3500757217407227
40th percentile: 1.382859230041504
50th percentile: 1.4156427383422852
60th percentile: 1.4251938819885255
70th percentile: 1.4347450256347656
80th percentile: 1.5934203624725343
90th percentile: 1.901219892501831
95th percentile: 2.0551196575164794
99th percentile: 2.178239469528198
mean time: 1.5305190086364746
Pipeline stage StressChecker completed in 8.42s
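The percentile figures above are consistent with linear interpolation over the five sorted latencies (NumPy's default). A quick reproduction from the five response times in the log:

import numpy as np

latencies = [2.209019422531128, 1.4395205974578857, 1.4156427383422852,
             1.2547283172607422, 1.333683967590332]

for q in [5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99]:
    print(f"{q}th percentile: {np.percentile(latencies, q)}")
print("mean time:", np.mean(latencies))   # 1.5305190086364746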
trace2333-fd-llama3-v2-f_5485_v2 status is now deployed due to DeploymentManager action
trace2333-fd-llama3-v2-f_5485_v2 status is now inactive due to auto deactivation of underperforming models
trace2333-fd-llama3-v2-f_5485_v2 status is now torndown due to DeploymentManager action
