submission_id: bbchicago-brt-v1-12-narr_4789_v2
developer_uid: Bbbrun0
alignment_samples: 10606
alignment_score: 1.9221535984216322
best_of: 16
celo_rating: 1235.24
display_name: auto
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
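The formatter above wraps the persona, scenario prompt and each chat turn in Llama 3 instruct headers. A minimal sketch of how these templates could be assembled into one prompt string (the persona, scenario and turns below are hypothetical, and this is not the production serving code):

```python
# Sketch only: build a Llama-3-style prompt from the submission's formatter templates.
formatter = {
    "memory_template": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n",
    "prompt_template": "{prompt}<|eot_id|>",
    "bot_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>",
    "user_template": "<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>",
    "response_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, turns):
    """Persona + scenario prompt + alternating turns, ending on an open assistant header."""
    text = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
    text += formatter["prompt_template"].format(prompt=prompt)
    for role, message in turns:
        template = formatter["user_template"] if role == "user" else formatter["bot_template"]
        text += template.format(user_name=user_name, bot_name=bot_name, message=message)
    return text + formatter["response_template"].format(bot_name=bot_name)

example = build_prompt(
    bot_name="Brt", user_name="Anon",
    memory="A third-person narrator for slow-burn roleplay.",  # hypothetical persona
    prompt="A rainy evening in Chicago.",                      # hypothetical scenario
    turns=[("user", "Hello?"), ("bot", "*The door creaks open.*"), ("user", "Who's there?")],
)
print(example)
```

Because the prompt ends on the response_template, generation continues as the bot's next message.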
generation_params: {'temperature': 1.0, 'top_p': 0.9, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|end_header_id|>', '<|eot_id|>', '\n\n{user_name}'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 256}
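These generation_params govern per-step sampling: logits are tempered, then restricted by top_k, min_p and top_p before a token is drawn, with decoding capped at 64 output tokens or the listed stopping words. A rough, assumption-level illustration of how those filters interact (the actual inference stack is not shown in this log, and real samplers may apply the filters in a different order):

```python
import numpy as np

def filter_next_token_probs(logits, temperature=1.0, top_k=80, top_p=0.9, min_p=0.05):
    """Apply temperature, top-k, min-p and top-p (nucleus) filtering to one step's logits."""
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # top_k: zero out everything outside the k most probable tokens
    if 0 < top_k < len(probs):
        kth_largest = np.sort(probs)[-top_k]
        probs = np.where(probs >= kth_largest, probs, 0.0)

    # min_p: drop tokens whose probability is below min_p * the top probability
    probs = np.where(probs >= min_p * probs.max(), probs, 0.0)

    # top_p: keep the smallest prefix of tokens whose cumulative mass reaches top_p
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    keep = np.searchsorted(cumulative, top_p * cumulative[-1]) + 1
    probs[order[keep:]] = 0.0

    return probs / probs.sum()

# Toy 6-token vocabulary: the two least likely tokens fall outside the 0.9 nucleus.
print(filter_next_token_probs(np.log([0.40, 0.25, 0.15, 0.10, 0.06, 0.04])))
```

With best_of set to 16, sixteen such completions are sampled per request and the reward model configured below re-ranks them (see the sketch after the reward_repo line).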
is_internal_developer: False
language_model: BBChicago/Brt_v1.12_narration_alignment_s4000
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: BBChicago/Brt_v1.12_narr
model_name: auto
model_num_parameters: 8030261248.0
model_repo: BBChicago/Brt_v1.12_narration_alignment_s4000
model_size: 8B
num_battles: 10606
num_wins: 5550
propriety_score: 0.7388414055080722
propriety_total_count: 1053.0
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: Jellywibble/gpt2_xl_pairwise_89m_step_347634
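The reward_formatter flattens the conversation into the plain-text layout expected by the GPT-2 based reward model, which then scores each of the best_of candidates. A hedged sketch of that selection step; score_with_reward_model is a hypothetical stand-in, since the actual scoring code for Jellywibble/gpt2_xl_pairwise_89m_step_347634 is not part of this log:

```python
# Sketch of best-of-N re-ranking with the submission's reward_formatter templates.
reward_formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "user_template": "{user_name}: {message}\n",
    "bot_template": "{bot_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def reward_input(bot_name, user_name, memory, prompt, turns, candidate):
    """Build the plain-text context the reward model sees for one candidate reply."""
    text = reward_formatter["memory_template"].format(bot_name=bot_name, memory=memory)
    text += reward_formatter["prompt_template"].format(prompt=prompt)
    for role, message in turns:
        template = reward_formatter["user_template"] if role == "user" else reward_formatter["bot_template"]
        text += template.format(user_name=user_name, bot_name=bot_name, message=message)
    return text + reward_formatter["response_template"].format(bot_name=bot_name) + " " + candidate

def pick_best(candidates, score_with_reward_model, **ctx):
    """best_of selection: score every sampled reply and return the highest-scoring one."""
    return max(candidates, key=lambda c: score_with_reward_model(reward_input(candidate=c, **ctx)))

# Usage with a dummy scorer (len) standing in for the real reward model:
best = pick_best(
    ["Hi.", "*He steps into the rain, scanning the empty street.*"],
    score_with_reward_model=len,
    bot_name="Brt", user_name="Anon",
    memory="A third-person narrator.", prompt="A rainy evening.",
    turns=[("user", "Who's there?")],
)
print(best)
```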
status: torndown
submission_type: basic
timestamp: 2024-08-15T10:33:24+00:00
us_pacific_date: 2024-08-15
win_ratio: 0.5232887045068829
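win_ratio is simply num_wins divided by num_battles, which the figures above reproduce exactly:

```python
num_wins, num_battles = 5550, 10606
print(num_wins / num_battles)  # 0.5232887045068829, the win_ratio reported above
```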
Running pipeline stage MKMLizer
Starting job with name bbchicago-brt-v1-12-narr-4789-v2-mkmlizer
Waiting for job on bbchicago-brt-v1-12-narr-4789-v2-mkmlizer to finish
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ _____ __ __ ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ /___/ ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ Version: 0.9.9 ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ https://mk1.ai ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ The license key for the current software has been verified as ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ belonging to: ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ Chai Research Corp. ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ║ ║
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: Downloaded to shared memory in 58.712s
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpadhxemp0, device:0
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: quantized model in 29.834s
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: Processed model BBChicago/Brt_v1.12_narration_alignment_s4000 in 88.546s
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: creating bucket guanaco-mkml-models
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/bbchicago-brt-v1-12-narr-4789-v2
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/bbchicago-brt-v1-12-narr-4789-v2/config.json
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/bbchicago-brt-v1-12-narr-4789-v2/special_tokens_map.json
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/bbchicago-brt-v1-12-narr-4789-v2/tokenizer_config.json
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/bbchicago-brt-v1-12-narr-4789-v2/tokenizer.json
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/bbchicago-brt-v1-12-narr-4789-v2/flywheel_model.0.safetensors
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: loading reward model from Jellywibble/gpt2_xl_pairwise_89m_step_347634
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: Loading 0: 99%|█████████▉| 289/291 [00:15<00:00, 3.16it/s]
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: warnings.warn(
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: warnings.warn(
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: creating bucket guanaco-reward-models
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: Bucket 's3://guanaco-reward-models/' created
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-4789-v2_reward
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-4789-v2_reward/config.json
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-4789-v2_reward/special_tokens_map.json
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-4789-v2_reward/tokenizer_config.json
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-4789-v2_reward/merges.txt
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-4789-v2_reward/vocab.json
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-4789-v2_reward/tokenizer.json
bbchicago-brt-v1-12-narr-4789-v2-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-4789-v2_reward/reward.tensors
Job bbchicago-brt-v1-12-narr-4789-v2-mkmlizer completed after 125.32s with status: succeeded
Stopping job with name bbchicago-brt-v1-12-narr-4789-v2-mkmlizer
Pipeline stage MKMLizer completed in 125.79s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service bbchicago-brt-v1-12-narr-4789-v2
Waiting for inference service bbchicago-brt-v1-12-narr-4789-v2 to be ready
Inference service bbchicago-brt-v1-12-narr-4789-v2 ready after 221.5061411857605s
Pipeline stage ISVCDeployer completed in 222.37s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.3433384895324707s
Received healthy response to inference request in 1.4587938785552979s
Received healthy response to inference request in 1.4603970050811768s
Received healthy response to inference request in 1.4271879196166992s
Received healthy response to inference request in 1.4958505630493164s
5 requests
0 failed requests
5th percentile: 1.433509111404419
10th percentile: 1.4398303031921387
20th percentile: 1.4524726867675781
30th percentile: 1.4591145038604736
40th percentile: 1.4597557544708253
50th percentile: 1.4603970050811768
60th percentile: 1.4745784282684327
70th percentile: 1.4887598514556886
80th percentile: 1.6653481483459474
90th percentile: 2.0043433189392093
95th percentile: 2.1738409042358398
99th percentile: 2.3094389724731443
mean time: 1.6371135711669922
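For reference, the percentile summary above is consistent with linear-interpolation percentiles over the five measured latencies; a short check (the interpolation method is an assumption, but the reported numbers match it to floating-point precision):

```python
import numpy as np

# The five healthy-response latencies reported by the StressChecker, in seconds.
latencies = [2.3433384895324707, 1.4587938785552979, 1.4603970050811768,
             1.4271879196166992, 1.4958505630493164]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print(f"mean time: {np.mean(latencies)}")
```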
Pipeline stage StressChecker completed in 8.93s
bbchicago-brt-v1-12-narr_4789_v2 status is now deployed due to DeploymentManager action
bbchicago-brt-v1-12-narr_4789_v2 status is now inactive due to auto deactivation (removal of underperforming models)
bbchicago-brt-v1-12-narr_4789_v2 status is now torndown due to DeploymentManager action
