submission_id: bbchicago-brt-8b-l3-v1-_4895_v27
developer_uid: Bbbrun0
alignment_samples: 0
best_of: 16
celo_rating: 1237.97
display_name: brt-v1_7-dpo-s2500-cf
formatter: {'memory_template': "<|start_header_id|>system<|end_header_id|>\n### Instruction:\n{bot_name}'s Persona: {memory}\n", 'prompt_template': '### Input:\n{prompt}<START>\n<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n### Response:\n{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 0.95, 'top_p': 0.95, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|end_header_id|>', '<|eot_id|>', '\n\n{user_name}'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 256}
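For readers unfamiliar with the template fields above, here is a minimal sketch of how they could be assembled into a single prompt string. It assumes plain Python `str.format` substitution; `render_prompt` and the sample conversation are illustrative, not part of the actual serving code.

```python
# Hypothetical assembly of the submission's formatter templates.
# Template strings are copied from the `formatter` field above;
# render_prompt and the example data are illustrative assumptions.
memory_template = ("<|start_header_id|>system<|end_header_id|>\n"
                   "### Instruction:\n{bot_name}'s Persona: {memory}\n")
user_template = ("<|start_header_id|>user<|end_header_id|>\n"
                 "{user_name}: {message}<|eot_id|>")
bot_template = ("<|start_header_id|>assistant<|end_header_id|>\n"
                "{bot_name}: {message}<|eot_id|>")
response_template = ("<|start_header_id|>assistant<|end_header_id|>\n"
                     "### Response:\n{bot_name}:")

def render_prompt(bot_name, memory, turns, user_name):
    # Persona block first, then alternating turns, then the open-ended
    # response header the model is asked to complete.
    parts = [memory_template.format(bot_name=bot_name, memory=memory)]
    for role, message in turns:
        if role == "user":
            parts.append(user_template.format(user_name=user_name, message=message))
        else:
            parts.append(bot_template.format(bot_name=bot_name, message=message))
    parts.append(response_template.format(bot_name=bot_name))
    return "".join(parts)

prompt = render_prompt("Brt", "A friendly assistant.",
                       [("user", "Hi!"), ("bot", "Hello!")], "Alice")
```

The generation then runs with the `generation_params` above (temperature 0.95, `best_of` 16, 64 output tokens), stopping on any of the listed stop sequences.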
is_internal_developer: False
language_model: BBChicago/Brt-8B-L3_v1.7_with_dpo_lora_s2500
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: BBChicago/Brt-8B-L3_v1.7
model_name: brt-v1_7-dpo-s2500-cf
model_num_parameters: 8030261248.0
model_repo: BBChicago/Brt-8B-L3_v1.7_with_dpo_lora_s2500
model_size: 8B
num_battles: 14290
num_wins: 7987
propriety_score: 0.7183673469387755
propriety_total_count: 1225.0
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: Jellywibble/gpt2_xl_pairwise_89m_step_347634
status: torndown
submission_type: basic
timestamp: 2024-07-25T08:12:47+00:00
us_pacific_date: 2024-07-25
win_ratio: 0.5589223233030091
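The aggregate statistics above are internally consistent and can be checked directly; the propriety numerator of 880 is inferred from the reported ratio and total, not logged explicitly.

```python
# Sanity-check the reported aggregates against their raw counts.
num_battles = 14290
num_wins = 7987
win_ratio = num_wins / num_battles        # reported as 0.5589223233030091

propriety_total = 1225
inferred_proper = 880                      # inferred: 880/1225 reproduces the score
propriety_score = inferred_proper / propriety_total  # reported as 0.7183673469387755
```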
Running pipeline stage MKMLizer
Starting job with name bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer
Waiting for job on bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer to finish
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ _____ __ __ ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ /___/ ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ Version: 0.9.7 ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ https://mk1.ai ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ The license key for the current software has been verified as ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ belonging to: ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ Chai Research Corp. ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ║ ║
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: Downloaded to shared memory in 36.660s
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmprnl1_giu, device:0
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: Saving flywheel model at /dev/shm/model_cache
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: quantized model in 28.655s
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: Processed model BBChicago/Brt-8B-L3_v1.7_with_dpo_lora_s2500 in 65.315s
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: creating bucket guanaco-mkml-models
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/bbchicago-brt-8b-l3-v1-4895-v27
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/bbchicago-brt-8b-l3-v1-4895-v27/config.json
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/bbchicago-brt-8b-l3-v1-4895-v27/tokenizer.json
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/bbchicago-brt-8b-l3-v1-4895-v27/flywheel_model.0.safetensors
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: loading reward model from Jellywibble/gpt2_xl_pairwise_89m_step_347634
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: Loading 0:   0%|          | 0/291 [00:00<?, ?it/s]
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: Loading 0:  99%|█████████▉| 289/291 [00:14<00:00,  3.19it/s]
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: warnings.warn(
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: warnings.warn(
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: warnings.warn(
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: Saving duration: 1.433s
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: Processed model Jellywibble/gpt2_xl_pairwise_89m_step_347634 in 11.180s
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: creating bucket guanaco-reward-models
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: Bucket 's3://guanaco-reward-models/' created
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-4895-v27_reward
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-4895-v27_reward/special_tokens_map.json
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-4895-v27_reward/config.json
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-4895-v27_reward/tokenizer_config.json
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-4895-v27_reward/merges.txt
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-4895-v27_reward/vocab.json
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-4895-v27_reward/tokenizer.json
bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-4895-v27_reward/reward.tensors
Job bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer completed after 157.77s with status: succeeded
Stopping job with name bbchicago-brt-8b-l3-v1-4895-v27-mkmlizer
Pipeline stage MKMLizer completed in 159.07s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service bbchicago-brt-8b-l3-v1-4895-v27
Waiting for inference service bbchicago-brt-8b-l3-v1-4895-v27 to be ready
Connection pool is full, discarding connection: %s. Connection pool size: %s
Inference service bbchicago-brt-8b-l3-v1-4895-v27 ready after 80.5516722202301s
Pipeline stage ISVCDeployer completed in 82.17s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.245549440383911s
Received healthy response to inference request in 1.5027048587799072s
Received healthy response to inference request in 1.4543192386627197s
Received healthy response to inference request in 1.5932929515838623s
Received healthy response to inference request in 1.4388821125030518s
5 requests
0 failed requests
5th percentile: 1.4419695377349853
10th percentile: 1.445056962966919
20th percentile: 1.4512318134307862
30th percentile: 1.4639963626861572
40th percentile: 1.4833506107330323
50th percentile: 1.5027048587799072
60th percentile: 1.5389400959014892
70th percentile: 1.5751753330230713
80th percentile: 1.7237442493438722
90th percentile: 1.9846468448638916
95th percentile: 2.1150981426239013
99th percentile: 2.219459180831909
mean time: 1.6469497203826904
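The percentile figures above are consistent with linear interpolation over the five sorted response times (the same convention as `numpy.percentile`'s default); the sketch below reproduces them in plain Python. The exact StressChecker implementation is an assumption.

```python
import math

# The five healthy response times logged above, in seconds.
times = [2.245549440383911, 1.5027048587799072, 1.4543192386627197,
         1.5932929515838623, 1.4388821125030518]

def percentile(samples, p):
    """Linearly interpolated percentile over sorted samples."""
    xs = sorted(samples)
    k = (len(xs) - 1) * p / 100.0
    lo, hi = math.floor(k), math.ceil(k)
    if lo == hi:
        return xs[lo]
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

p5 = percentile(times, 5)        # ~1.44197s, matching the logged 5th percentile
p90 = percentile(times, 90)      # ~1.98465s, matching the logged 90th percentile
mean = sum(times) / len(times)   # ~1.64695s, matching the logged mean time
```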
Pipeline stage StressChecker completed in 8.93s
bbchicago-brt-8b-l3-v1-_4895_v27 status is now deployed due to DeploymentManager action
bbchicago-brt-8b-l3-v1-_4895_v27 status is now inactive due to auto-deactivation of underperforming models
admin requested tearing down of bbchicago-brt-8b-l3-v1-_4895_v27
Running pipeline stage ISVCDeleter
Checking if service bbchicago-brt-8b-l3-v1-4895-v27 is running
Tearing down inference service bbchicago-brt-8b-l3-v1-4895-v27
Service bbchicago-brt-8b-l3-v1-4895-v27 has been torndown
Pipeline stage ISVCDeleter completed in 4.97s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key bbchicago-brt-8b-l3-v1-4895-v27/config.json from bucket guanaco-mkml-models
Deleting key bbchicago-brt-8b-l3-v1-4895-v27/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key bbchicago-brt-8b-l3-v1-4895-v27/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key bbchicago-brt-8b-l3-v1-4895-v27/tokenizer.json from bucket guanaco-mkml-models
Deleting key bbchicago-brt-8b-l3-v1-4895-v27/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key bbchicago-brt-8b-l3-v1-4895-v27_reward/config.json from bucket guanaco-reward-models
Deleting key bbchicago-brt-8b-l3-v1-4895-v27_reward/merges.txt from bucket guanaco-reward-models
Deleting key bbchicago-brt-8b-l3-v1-4895-v27_reward/reward.tensors from bucket guanaco-reward-models
Deleting key bbchicago-brt-8b-l3-v1-4895-v27_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key bbchicago-brt-8b-l3-v1-4895-v27_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key bbchicago-brt-8b-l3-v1-4895-v27_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key bbchicago-brt-8b-l3-v1-4895-v27_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 5.62s
bbchicago-brt-8b-l3-v1-_4895_v27 status is now torndown due to DeploymentManager action