submission_id: trace2333-fd-llama3-v2-n_2212_v3
developer_uid: Trace2333
alignment_samples: 0
best_of: 16
celo_rating: 1188.84
display_name: trace2333-fd-llama3-v2-n_2212_v3
formatter: {'memory_template': "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\nYou: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\nBot:', 'truncate_by_message': False}
generation_params: {'temperature': 1.15, 'top_p': 1.0, 'min_p': 0.06, 'top_k': 100, 'presence_penalty': 0.0, 'frequency_penalty': 0.1, 'stopping_words': ['\n', '<|eot_id|>'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 256}
is_internal_developer: False
language_model: Trace2333/fd_llama3_v2_Nall_bot
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: Trace2333/fd_llama3_v2_N
model_name: trace2333-fd-llama3-v2-n_2212_v3
model_num_parameters: 8030261248.0
model_repo: Trace2333/fd_llama3_v2_Nall_bot
model_size: 8B
num_battles: 16290
num_wins: 7517
propriety_score: 0.7181113460183227
propriety_total_count: 1419.0
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: Jellywibble/gpt2_xl_pairwise_89m_step_347634
status: torndown
submission_type: basic
timestamp: 2024-08-04T04:01:17+00:00
us_pacific_date: 2024-08-03
win_ratio: 0.4614487415592388
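The win_ratio above is simply num_wins / num_battles (7517 / 16290 ≈ 0.4614), and the generation_params line up with a standard OpenAI-/vLLM-style sampling interface (temperature, top_p, min_p, top_k, penalties, stop strings, token limits), with best_of: 16 meaning sixteen candidates are sampled per turn and, presumably, reranked by the reward model. As a minimal sketch of how the formatter templates might compose into a single Llama 3 prompt string (the assembly order and the render_prompt helper are assumptions, not the platform's actual code):

```python
# Hypothetical illustration of how this submission's formatter templates
# could be composed into one Llama 3 prompt. Field names mirror the
# formatter dict above; the assembly order is an assumption.
FORMATTER = {
    "memory_template": "<|start_header_id|>system<|end_header_id|>\n\n"
                       "{bot_name}'s Persona: {memory}\n\n",
    "prompt_template": "{prompt}<|eot_id|>",
    "bot_template": "<|start_header_id|>assistant<|end_header_id|>\n\n"
                    "{bot_name}: {message}<|eot_id|>",
    "user_template": "<|start_header_id|>user<|end_header_id|>\n\n"
                     "You: {message}<|eot_id|>",
    "response_template": "<|start_header_id|>assistant<|end_header_id|>\n\nBot:",
}

def render_prompt(memory: str, prompt: str,
                  turns: list[tuple[str, str]], bot_name: str = "Bot") -> str:
    """Render persona, scenario prompt, and chat turns into one prompt string."""
    parts = [FORMATTER["memory_template"].format(bot_name=bot_name, memory=memory),
             FORMATTER["prompt_template"].format(prompt=prompt)]
    for role, message in turns:
        template = FORMATTER["user_template"] if role == "user" else FORMATTER["bot_template"]
        parts.append(template.format(bot_name=bot_name, message=message))
    parts.append(FORMATTER["response_template"])  # the model generates after "Bot:"
    return "".join(parts)

print(render_prompt("A grizzled sea captain.", "You meet on deck.", [("user", "Ahoy!")]))
```

Note that the stopping_words entry '\n' fits the single-line replies this prompt shape elicits: generation halts at the first newline or <|eot_id|>.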
Running pipeline stage MKMLizer
Starting job with name trace2333-fd-llama3-v2-n-2212-v3-mkmlizer
Waiting for job on trace2333-fd-llama3-v2-n-2212-v3-mkmlizer to finish
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ _____ __ __ ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ /___/ ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ Version: 0.9.9 ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ https://mk1.ai ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ The license key for the current software has been verified as ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ belonging to: ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ Chai Research Corp. ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ║ ║
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: Downloaded to shared memory in 39.402s
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmp9amic8l6, device:0
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: Saving flywheel model at /dev/shm/model_cache
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: quantized model in 28.936s
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: Processed model Trace2333/fd_llama3_v2_Nall_bot in 68.338s
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: creating bucket guanaco-mkml-models
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/trace2333-fd-llama3-v2-n-2212-v3
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/trace2333-fd-llama3-v2-n-2212-v3/config.json
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/trace2333-fd-llama3-v2-n-2212-v3/special_tokens_map.json
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/trace2333-fd-llama3-v2-n-2212-v3/tokenizer_config.json
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/trace2333-fd-llama3-v2-n-2212-v3/tokenizer.json
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: loading reward model from Jellywibble/gpt2_xl_pairwise_89m_step_347634
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... 98%|█████████▊| 286/291 [00:14<00:01, 2.60it/s] 99%|█████████▉| 289/291 [00:14<00:00, 3.24it/s]
/opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: warnings.warn(
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: warnings.warn(
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: warnings.warn(
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: Saving duration: 1.432s
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: Processed model Jellywibble/gpt2_xl_pairwise_89m_step_347634 in 10.906s
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: creating bucket guanaco-reward-models
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: Bucket 's3://guanaco-reward-models/' created
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/trace2333-fd-llama3-v2-n-2212-v3_reward
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/trace2333-fd-llama3-v2-n-2212-v3_reward/special_tokens_map.json
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/trace2333-fd-llama3-v2-n-2212-v3_reward/config.json
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/trace2333-fd-llama3-v2-n-2212-v3_reward/tokenizer_config.json
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/trace2333-fd-llama3-v2-n-2212-v3_reward/merges.txt
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/trace2333-fd-llama3-v2-n-2212-v3_reward/vocab.json
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/trace2333-fd-llama3-v2-n-2212-v3_reward/tokenizer.json
trace2333-fd-llama3-v2-n-2212-v3-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/trace2333-fd-llama3-v2-n-2212-v3_reward/reward.tensors
Job trace2333-fd-llama3-v2-n-2212-v3-mkmlizer completed after 115.04s with status: succeeded
Stopping job with name trace2333-fd-llama3-v2-n-2212-v3-mkmlizer
Pipeline stage MKMLizer completed in 116.21s
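The reward half of the stage above packages Jellywibble/gpt2_xl_pairwise_89m_step_347634, which, given best_of: 16 in the metadata, is presumably used to rerank the sixteen sampled candidates per turn. A minimal best-of-N reranking sketch, assuming the reward model loads as an ordinary Hugging Face sequence-classification head producing a single scalar logit (the best_of_n helper is hypothetical; the actual MKML serving stack is not public):

```python
# Illustrative best-of-N reranking with the pairwise reward model.
# Assumes a sequence-classification head with num_labels == 1; the real
# serving path (reward_formatter rendering, truncation rules) may differ.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

REWARD_REPO = "Jellywibble/gpt2_xl_pairwise_89m_step_347634"
tokenizer = AutoTokenizer.from_pretrained(REWARD_REPO)
if tokenizer.pad_token is None:  # GPT-2 tokenizers have no pad token by default
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForSequenceClassification.from_pretrained(REWARD_REPO)
model.config.pad_token_id = tokenizer.pad_token_id  # needed for batched GPT-2 scoring
model.eval()

def best_of_n(context: str, candidates: list[str], max_tokens: int = 256) -> str:
    """Score each candidate completion and return the highest-scoring one.

    max_tokens mirrors the reward_max_token_input field in the metadata.
    """
    batch = tokenizer([context + c for c in candidates], return_tensors="pt",
                      padding=True, truncation=True, max_length=max_tokens)
    with torch.no_grad():
        scores = model(**batch).logits.squeeze(-1)  # one scalar per candidate
    return candidates[int(scores.argmax())]
```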
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.09s
Running pipeline stage ISVCDeployer
Creating inference service trace2333-fd-llama3-v2-n-2212-v3
Waiting for inference service trace2333-fd-llama3-v2-n-2212-v3 to be ready
Connection pool is full, discarding connection: %s. Connection pool size: %s
Inference service trace2333-fd-llama3-v2-n-2212-v3 ready after 161.21263813972473s
Pipeline stage ISVCDeployer completed in 163.07s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.1918485164642334s
Received healthy response to inference request in 1.3774299621582031s
Received healthy response to inference request in 1.406257152557373s
Received healthy response to inference request in 1.3256750106811523s
Received healthy response to inference request in 1.409278154373169s
5 requests
0 failed requests
5th percentile: 1.3360260009765625
10th percentile: 1.3463769912719727
20th percentile: 1.367078971862793
30th percentile: 1.383195400238037
40th percentile: 1.394726276397705
50th percentile: 1.406257152557373
60th percentile: 1.4074655532836915
70th percentile: 1.4086739540100097
80th percentile: 1.565792226791382
90th percentile: 1.8788203716278078
95th percentile: 2.03533444404602
99th percentile: 2.160545701980591
mean time: 1.5420977592468261
Pipeline stage StressChecker completed in 8.41s
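The StressChecker summary above can be reproduced from the five raw latencies using linear interpolation between order statistics, e.g. numpy.percentile's default behaviour (a verification sketch, not the checker's own code):

```python
# Recompute the StressChecker statistics from the five latencies logged
# above. numpy's default linear interpolation reproduces the reported
# percentiles and the mean.
import numpy as np

latencies = np.array([
    2.1918485164642334,
    1.3774299621582031,
    1.406257152557373,
    1.3256750106811523,
    1.409278154373169,
])

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print(f"mean time: {latencies.mean()}")
```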
trace2333-fd-llama3-v2-n_2212_v3 status is now deployed due to DeploymentManager action
trace2333-fd-llama3-v2-n_2212_v3 status is now inactive due to auto deactivation of underperforming models
trace2333-fd-llama3-v2-n_2212_v3 status is now torndown due to DeploymentManager action

Usage Metrics

Latency Metrics