submission_id: trace2333-fd-llama3-v1-f_7971_v1
developer_uid: Trace2333
alignment_samples: 0
best_of: 16
celo_rating: 1197.69
display_name: trace2333-fd-llama3-v1-f_7971_v1
formatter: {'memory_template': "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\nYou: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.05, 'top_p': 1.0, 'min_p': 0.06, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.1, 'stopping_words': ['\n', '<|eot_id|>'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 256}
is_internal_developer: False
language_model: Trace2333/fd_llama3_v1_fdall_r32a64_bs24
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: Trace2333/fd_llama3_v1_f
model_name: trace2333-fd-llama3-v1-f_7971_v1
model_num_parameters: 8030261248.0
model_repo: Trace2333/fd_llama3_v1_fdall_r32a64_bs24
model_size: 8B
num_battles: 10267
num_wins: 4963
propriety_score: 0.7207207207207207
propriety_total_count: 888.0
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: Jellywibble/gpt2_xl_pairwise_89m_step_347634
status: torndown
submission_type: basic
timestamp: 2024-08-06T01:57:01+00:00
us_pacific_date: 2024-08-05
win_ratio: 0.48339339631830136
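The formatter field above defines how conversations are serialized into Llama-3 chat markup before generation. Below is a minimal sketch of how those templates could be assembled into a single prompt string; the build_prompt helper, bot name, and messages are invented for illustration, and the production serving code (including truncation to max_input_tokens=512) may differ.

# Illustrative only: assembles the submission's formatter templates into a
# Llama-3 style prompt. Names and conversation content are made up.
formatter = {
    "memory_template": "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n",
    "prompt_template": "{prompt}<|eot_id|>",
    "bot_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>",
    "user_template": "<|start_header_id|>user<|end_header_id|>\n\nYou: {message}<|eot_id|>",
    "response_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:",
}

def build_prompt(bot_name, memory, prompt, turns):
    """Render persona, scenario prompt, alternating turns, then the open response header."""
    parts = [formatter["memory_template"].format(bot_name=bot_name, memory=memory)]
    parts.append(formatter["prompt_template"].format(prompt=prompt))
    for role, message in turns:  # role is "user" or "bot"
        template = formatter["user_template"] if role == "user" else formatter["bot_template"]
        parts.append(template.format(bot_name=bot_name, message=message))
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

print(build_prompt("Ava", "A cheerful pilot.", "You meet Ava at the hangar.",
                   [("user", "Hi!"), ("bot", "Welcome aboard!")]))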
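The generation_params map onto standard sampling controls (temperature, nucleus/min-p/top-k filtering, repetition penalties, stop strings). The log does not name the inference engine; purely as an illustration, the same settings can be expressed with vLLM's SamplingParams on releases that still support best_of. Note that in this pipeline the 16 candidates are reranked by the separate reward model, whereas vLLM's native best_of ranks by cumulative log-probability.

# Illustrative only: the serving stack is not named in the log.
from vllm import SamplingParams

sampling = SamplingParams(
    temperature=1.05,
    top_p=1.0,
    min_p=0.06,                 # drop tokens below 6% of the top token's probability
    top_k=80,
    presence_penalty=0.0,
    frequency_penalty=0.1,
    stop=["\n", "<|eot_id|>"],  # stopping_words
    max_tokens=64,              # max_output_tokens
    best_of=16,                 # 16 candidates per request (reranked externally here)
    n=1,
)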
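The derived metrics are self-consistent: win_ratio is num_wins / num_battles, and propriety_score is consistent with 640 of the 888 scored samples being judged proper (the 640 count is inferred from the score, not logged).

# Quick self-consistency check of the fields above.
num_wins, num_battles = 4963, 10267
assert abs(num_wins / num_battles - 0.48339339631830136) < 1e-12

propriety_total = 888
# 640/888 is inferred from the reported score, not present in the log.
assert abs(640 / propriety_total - 0.7207207207207207) < 1e-12
print(f"win_ratio = {num_wins / num_battles:.6f}")  # 0.483393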
Running pipeline stage MKMLizer
Starting job with name trace2333-fd-llama3-v1-f-7971-v1-mkmlizer
Waiting for job on trace2333-fd-llama3-v1-f-7971-v1-mkmlizer to finish
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ [flywheel ASCII-art logo] ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ Version: 0.9.9 ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ https://mk1.ai ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ The license key for the current software has been verified as ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ belonging to: ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ Chai Research Corp. ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ║ ║
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: Downloaded to shared memory in 59.020s
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpui49cqty, device:0
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: quantized model in 29.767s
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: Processed model Trace2333/fd_llama3_v1_fdall_r32a64_bs24 in 88.788s
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: creating bucket guanaco-mkml-models
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-7971-v1
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-7971-v1/config.json
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-7971-v1/special_tokens_map.json
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-7971-v1/tokenizer_config.json
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-7971-v1/tokenizer.json
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/trace2333-fd-llama3-v1-f-7971-v1/flywheel_model.0.safetensors
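The cp lines above stage the quantized artifacts in the guanaco-mkml-models bucket. A minimal boto3 equivalent is sketched below, with bucket and key names taken from the log and AWS credentials assumed to be configured in the environment; the actual tool behind the cp lines is not identified in the log.

# Illustrative only: uploads the cached model files to S3 via boto3.
import boto3

s3 = boto3.client("s3")
bucket = "guanaco-mkml-models"
prefix = "trace2333-fd-llama3-v1-f-7971-v1"
for name in ("config.json", "special_tokens_map.json", "tokenizer_config.json",
             "tokenizer.json", "flywheel_model.0.safetensors"):
    s3.upload_file(f"/dev/shm/model_cache/{name}", bucket, f"{prefix}/{name}")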
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: loading reward model from Jellywibble/gpt2_xl_pairwise_89m_step_347634
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... Loading 0: 99%|█████████▉| 289/291 [00:14<00:00, 3.16it/s]
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: warnings.warn(
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: warnings.warn(
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: warnings.warn(
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: Saving duration: 1.385s
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: Processed model Jellywibble/gpt2_xl_pairwise_89m_step_347634 in 10.939s
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: creating bucket guanaco-reward-models
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-7971-v1_reward
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-7971-v1_reward/config.json
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-7971-v1_reward/special_tokens_map.json
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-7971-v1_reward/tokenizer_config.json
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-7971-v1_reward/merges.txt
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-7971-v1_reward/vocab.json
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-7971-v1_reward/tokenizer.json
trace2333-fd-llama3-v1-f-7971-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/trace2333-fd-llama3-v1-f-7971-v1_reward/reward.tensors
Job trace2333-fd-llama3-v1-f-7971-v1-mkmlizer completed after 136.57s with status: succeeded
Stopping job with name trace2333-fd-llama3-v1-f-7971-v1-mkmlizer
Pipeline stage MKMLizer completed in 137.69s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.09s
Running pipeline stage ISVCDeployer
Creating inference service trace2333-fd-llama3-v1-f-7971-v1
Waiting for inference service trace2333-fd-llama3-v1-f-7971-v1 to be ready
Connection pool is full, discarding connection: %s. Connection pool size: %s
Failed to get response for submission mistralai-mixtral-8x7b_3473_v117: ('http://mistralai-mixtral-8x7b-3473-v117-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:43568->127.0.0.1:8080: read: connection reset by peer\n')
Inference service trace2333-fd-llama3-v1-f-7971-v1 ready after 171.15475153923035s
Pipeline stage ISVCDeployer completed in 173.02s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.240992784500122s
Received healthy response to inference request in 1.4797887802124023s
Received healthy response to inference request in 1.4550962448120117s
Received healthy response to inference request in 1.7748970985412598s
Received healthy response to inference request in 1.3826539516448975s
5 requests
0 failed requests
5th percentile: 1.3971424102783203
10th percentile: 1.4116308689117432
20th percentile: 1.4406077861785889
30th percentile: 1.4600347518920898
40th percentile: 1.469911766052246
50th percentile: 1.4797887802124023
60th percentile: 1.5978321075439452
70th percentile: 1.7158754348754883
80th percentile: 1.8681162357330323
90th percentile: 2.054554510116577
95th percentile: 2.1477736473083495
99th percentile: 2.2223489570617674
mean time: 1.6666857719421386
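The stress-check summary above can be reproduced from the five logged request latencies; NumPy's default linear percentile interpolation yields these figures exactly.

# Reproduces the StressChecker percentile and mean figures.
import numpy as np

latencies = [2.240992784500122, 1.4797887802124023, 1.4550962448120117,
             1.7748970985412598, 1.3826539516448975]
for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{q}th percentile: {np.percentile(latencies, q)}")
print(f"mean time: {np.mean(latencies)}")  # 1.6666857719421386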
Pipeline stage StressChecker completed in 9.03s
trace2333-fd-llama3-v1-f_7971_v1 status is now deployed due to DeploymentManager action
trace2333-fd-llama3-v1-f_7971_v1 status is now inactive due to auto-deactivation of underperforming models
trace2333-fd-llama3-v1-f_7971_v1 status is now torndown due to DeploymentManager action

Usage Metrics

[usage metrics chart not captured in this text export]

Latency Metrics

[latency metrics chart not captured in this text export]