submission_id: bbchicago-brt-v1-12-narr_8638_v2
developer_uid: Bbbrun0
alignment_samples: 10705
alignment_score: 1.761749144629258
best_of: 16
celo_rating: 1229.06
display_name: auto
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
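The formatter above is a set of Python-style template strings that assemble a Llama-3-format prompt (system header carrying the persona, alternating user/assistant turns, and a trailing response header the model completes). A minimal sketch of how such templates might be stitched together; the bot name, persona, and messages below are hypothetical:

```python
# Templates copied from the formatter above.
memory_template = ("<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
                   "{bot_name}'s Persona: {memory}\n\n")
prompt_template = "{prompt}<|eot_id|>"
user_template = ("<|start_header_id|>user<|end_header_id|>\n\n"
                 "{user_name}: {message}<|eot_id|>")
bot_template = ("<|start_header_id|>assistant<|end_header_id|>\n\n"
                "{bot_name}: {message}<|eot_id|>")
response_template = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:"

def build_prompt(bot_name, memory, scenario, turns, user_name):
    """Assemble a full prompt: persona block, scenario, turns, then the
    open assistant header that the model is asked to complete."""
    parts = [memory_template.format(bot_name=bot_name, memory=memory),
             prompt_template.format(prompt=scenario)]
    for role, message in turns:
        if role == "user":
            parts.append(user_template.format(user_name=user_name, message=message))
        else:
            parts.append(bot_template.format(bot_name=bot_name, message=message))
    parts.append(response_template.format(bot_name=bot_name))
    return "".join(parts)

example = build_prompt("Brt", "A narrator bot.", "A quiet tavern.",
                       [("user", "Hello")], "Alice")
```

The exact call signature (`build_prompt`) and the example conversation are illustrative, not part of the platform's API; only the template strings come from the log.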
generation_params: {'temperature': 1.0, 'top_p': 0.9, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|end_header_id|>', '<|eot_id|>', '\n\n{user_name}'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 256}
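The generation parameters combine three truncation filters: `top_k` (keep the 80 most likely tokens), `min_p` (drop tokens whose probability is below 5% of the most likely token's), and `top_p` (keep the smallest nucleus whose cumulative probability reaches 0.9). A pure-Python sketch of these filters applied to a toy logit vector; the order in which the filters compose is an assumption, as the log does not specify it:

```python
import math

def filter_logits(logits, top_k=80, top_p=0.9, min_p=0.05, temperature=1.0):
    """Return the set of token indices that survive top_k, min_p, and
    top_p filtering of a softmax distribution over raw logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]       # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    keep = set(order[:top_k])                      # top_k: most likely tokens
    pmax = probs[order[0]]
    keep = {i for i in keep if probs[i] >= min_p * pmax}  # min_p threshold
    cum, nucleus = 0.0, set()
    for i in order:                                # top_p: smallest nucleus
        if i in keep:
            nucleus.add(i)
            cum += probs[i]
            if cum >= top_p:
                break
    return nucleus
```

With temperature 1.0 and presence/frequency penalties at 0.0, these three filters are the only modifications to the model's raw distribution before sampling.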
is_internal_developer: False
language_model: BBChicago/Brt_v1.12_narration_alignment_s3000
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: BBChicago/Brt_v1.12_narr
model_name: auto
model_num_parameters: 8030261248
model_repo: BBChicago/Brt_v1.12_narration_alignment_s3000
model_size: 8B
num_battles: 10705
num_wins: 5503
propriety_score: 0.7491007194244604
propriety_total_count: 1112
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: Jellywibble/gpt2_xl_pairwise_89m_step_347634
status: torndown
submission_type: basic
timestamp: 2024-08-15T10:32:16+00:00
us_pacific_date: 2024-08-15
win_ratio: 0.5140588510042037
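The win ratio above is simply the number of won battles divided by the total number of battles recorded for this submission:

```python
num_battles = 10705
num_wins = 5503
win_ratio = num_wins / num_battles  # ≈ 0.514, matching the logged value
```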
Running pipeline stage MKMLizer
Starting job with name bbchicago-brt-v1-12-narr-8638-v2-mkmlizer
Waiting for job on bbchicago-brt-v1-12-narr-8638-v2-mkmlizer to finish
Failed to get response for submission chaiml-llama-8b-pairwise_8189_v2:
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ _____ __ __ ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ /___/ ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ Version: 0.9.9 ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ https://mk1.ai ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ The license key for the current software has been verified as ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ belonging to: ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ Chai Research Corp. ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ║ ║
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: Downloaded to shared memory in 42.850s
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpcd2imox4, device:0
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: quantized model in 29.646s
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: Processed model BBChicago/Brt_v1.12_narration_alignment_s3000 in 72.497s
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/bbchicago-brt-v1-12-narr-8638-v2
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/bbchicago-brt-v1-12-narr-8638-v2/config.json
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/bbchicago-brt-v1-12-narr-8638-v2/tokenizer_config.json
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/bbchicago-brt-v1-12-narr-8638-v2/special_tokens_map.json
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/bbchicago-brt-v1-12-narr-8638-v2/tokenizer.json
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/bbchicago-brt-v1-12-narr-8638-v2/flywheel_model.0.safetensors
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: loading reward model from Jellywibble/gpt2_xl_pairwise_89m_step_347634
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: Loading 0: 99%|█████████▉| 289/291 [00:14<00:00, 3.13it/s]
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: warnings.warn(
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: warnings.warn(
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: warnings.warn(
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: Saving duration: 1.434s
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: Processed model Jellywibble/gpt2_xl_pairwise_89m_step_347634 in 10.987s
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: Bucket 's3://guanaco-reward-models/' created
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-8638-v2_reward
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-8638-v2_reward/special_tokens_map.json
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-8638-v2_reward/config.json
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-8638-v2_reward/tokenizer_config.json
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-8638-v2_reward/merges.txt
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-8638-v2_reward/vocab.json
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-8638-v2_reward/tokenizer.json
bbchicago-brt-v1-12-narr-8638-v2-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/bbchicago-brt-v1-12-narr-8638-v2_reward/reward.tensors
Job bbchicago-brt-v1-12-narr-8638-v2-mkmlizer completed after 145.92s with status: succeeded
Stopping job with name bbchicago-brt-v1-12-narr-8638-v2-mkmlizer
Pipeline stage MKMLizer completed in 146.38s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.09s
Running pipeline stage ISVCDeployer
Creating inference service bbchicago-brt-v1-12-narr-8638-v2
Waiting for inference service bbchicago-brt-v1-12-narr-8638-v2 to be ready
Inference service bbchicago-brt-v1-12-narr-8638-v2 ready after 221.43628191947937s
Pipeline stage ISVCDeployer completed in 222.12s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.2617485523223877s
Received healthy response to inference request in 1.4977593421936035s
Received healthy response to inference request in 1.460798978805542s
Received healthy response to inference request in 1.4430041313171387s
Received healthy response to inference request in 1.4917829036712646s
5 requests
0 failed requests
5th percentile: 1.4465631008148194
10th percentile: 1.4501220703125
20th percentile: 1.4572400093078612
30th percentile: 1.4669957637786866
40th percentile: 1.4793893337249755
50th percentile: 1.4917829036712646
60th percentile: 1.4941734790802002
70th percentile: 1.4965640544891357
80th percentile: 1.6505571842193605
90th percentile: 1.9561528682708742
95th percentile: 2.1089507102966305
99th percentile: 2.231188983917236
mean time: 1.6310187816619872
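The percentile figures above are consistent with linear interpolation over the five sorted latencies (numpy's default percentile method). A minimal pure-Python sketch that reproduces them:

```python
import math

def percentile(data, q):
    """Percentile with linear interpolation between closest ranks."""
    s = sorted(data)
    pos = q / 100 * (len(s) - 1)
    lo, hi = math.floor(pos), math.ceil(pos)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

# The five healthy-response latencies logged above, in seconds.
times = [2.2617485523223877, 1.4977593421936035, 1.460798978805542,
         1.4430041313171387, 1.4917829036712646]

median = percentile(times, 50)       # 50th percentile
mean_time = sum(times) / len(times)  # mean time
```

With only five samples the high percentiles are dominated by the single 2.26 s outlier, which is why the 99th percentile sits so far above the median.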
Pipeline stage StressChecker completed in 8.92s
bbchicago-brt-v1-12-narr_8638_v2 status is now deployed due to DeploymentManager action
bbchicago-brt-v1-12-narr_8638_v2 status is now inactive due to auto deactivation (removal of underperforming models)
bbchicago-brt-v1-12-narr_8638_v2 status is now torndown due to DeploymentManager action
