submission_id: bbchicago-brt-8b-l3-v1-6_5783_v1
developer_uid: Bbbrun0
best_of: 16
celo_rating: 1222.21
display_name: bbchicago-brt-v1_6-dpo-s5532
family_friendly_score: 0.0
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 0.95, 'top_p': 0.95, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|end_header_id|>', '<|eot_id|>', '\n\n{user_name}'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
is_internal_developer: False
language_model: BBChicago/Brt-8B-L3_v1.6_DPO_steps_5532_ep3
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: BBChicago/Brt-8B-L3_v1.6
model_name: bbchicago-brt-v1_6-dpo-s5532
model_num_parameters: 8030261248.0
model_repo: BBChicago/Brt-8B-L3_v1.6_DPO_steps_5532_ep3
model_size: 8B
num_battles: 14682
num_wins: 7997
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: Jellywibble/gpt2_xl_pairwise_89m_step_347634
status: torndown
submission_type: basic
timestamp: 2024-07-18T04:59:15+00:00
us_pacific_date: 2024-07-17
win_ratio: 0.5446805612314398
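The `formatter` templates in the metadata above assemble the persona memory, scenario prompt, chat turns, and response cue into a single prompt string. A minimal sketch of how such templates could be applied (the `build_prompt` helper and the sample names are illustrative assumptions, not the pipeline's actual code):

```python
# Hypothetical sketch of applying the formatter templates from the metadata.
FORMATTER = {
    'memory_template': "{bot_name}'s Persona: {memory}\n####\n",
    'prompt_template': '{prompt}\n<START>\n',
    'bot_template': '{bot_name}: {message}\n',
    'user_template': '{user_name}: {message}\n',
    'response_template': '{bot_name}:',
}

def build_prompt(bot_name, user_name, memory, prompt, turns):
    """Assemble memory, scenario prompt, chat turns, and the response cue."""
    parts = [
        FORMATTER['memory_template'].format(bot_name=bot_name, memory=memory),
        FORMATTER['prompt_template'].format(prompt=prompt),
    ]
    for speaker, message in turns:  # speaker is 'bot' or 'user'
        if speaker == 'bot':
            parts.append(FORMATTER['bot_template'].format(bot_name=bot_name, message=message))
        else:
            parts.append(FORMATTER['user_template'].format(user_name=user_name, message=message))
    # The final response cue leaves the model to continue as the bot.
    parts.append(FORMATTER['response_template'].format(bot_name=bot_name))
    return ''.join(parts)
```

The `stopping_words` and `max_input_tokens: 512` in `generation_params` would then truncate and terminate generation around this assembled string.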
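The `win_ratio` field is simply `num_wins / num_battles`; a quick arithmetic check against the values reported above:

```python
# Values from the submission metadata above.
num_battles = 14682
num_wins = 7997

win_ratio = num_wins / num_battles
# Matches the reported win_ratio of 0.5446805612314398.
assert abs(win_ratio - 0.5446805612314398) < 1e-12
```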
Resubmit model
Running pipeline stage MKMLizer
Starting job with name bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer
Waiting for job on bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer to finish
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ _____ __ __ ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ /___/ ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ Version: 0.9.5.post3 ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ https://mk1.ai ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ The license key for the current software has been verified as ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ belonging to: ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ Chai Research Corp. ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ║ ║
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: Downloaded to shared memory in 41.175s
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpiolpx_0q, device:0
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: Loading 0: 99%|█████████▉| 288/291 [00:10<00:00, 5.65it/s]
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: quantized model in 29.964s
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: Processed model BBChicago/Brt-8B-L3_v1.6_DPO_steps_5532_ep3 in 71.139s
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: creating bucket guanaco-mkml-models
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/bbchicago-brt-8b-l3-v1-6-5783-v1
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/bbchicago-brt-8b-l3-v1-6-5783-v1/config.json
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/bbchicago-brt-8b-l3-v1-6-5783-v1/special_tokens_map.json
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/bbchicago-brt-8b-l3-v1-6-5783-v1/tokenizer_config.json
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/bbchicago-brt-8b-l3-v1-6-5783-v1/tokenizer.json
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/bbchicago-brt-8b-l3-v1-6-5783-v1/flywheel_model.0.safetensors
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: loading reward model from Jellywibble/gpt2_xl_pairwise_89m_step_347634
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:950: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: warnings.warn(
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:778: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: warnings.warn(
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: warnings.warn(
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
Connection pool is full, discarding connection: %s. Connection pool size: %s
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: Saving duration: 2.379s
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: Processed model Jellywibble/gpt2_xl_pairwise_89m_step_347634 in 14.300s
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: creating bucket guanaco-reward-models
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-6-5783-v1_reward
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-6-5783-v1_reward/config.json
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-6-5783-v1_reward/special_tokens_map.json
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-6-5783-v1_reward/tokenizer_config.json
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-6-5783-v1_reward/merges.txt
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-6-5783-v1_reward/vocab.json
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-6-5783-v1_reward/tokenizer.json
bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/bbchicago-brt-8b-l3-v1-6-5783-v1_reward/reward.tensors
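The upload stages above copy each file in a local cache to a matching key under the submission's S3 prefix. A hedged sketch of that key mapping (the `s3_key_for` helper is illustrative, not the pipeline's actual code):

```python
def s3_key_for(bucket, prefix, local_path):
    """Map a local cache file to its destination S3 URI, mirroring the
    `cp <cache>/<file> s3://<bucket>/<prefix>/<file>` lines in the log."""
    filename = local_path.rsplit('/', 1)[-1]
    return f"s3://{bucket}/{prefix}/{filename}"

# Reproduces the reward-model upload destination shown in the log.
uri = s3_key_for(
    'guanaco-reward-models',
    'bbchicago-brt-8b-l3-v1-6-5783-v1_reward',
    '/tmp/reward_cache/reward.tensors',
)
```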
Job bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer completed after 120.88s with status: succeeded
Stopping job with name bbchicago-brt-8b-l3-v1-6-5783-v1-mkmlizer
Pipeline stage MKMLizer completed in 122.28s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.12s
Running pipeline stage ISVCDeployer
Creating inference service bbchicago-brt-8b-l3-v1-6-5783-v1
Waiting for inference service bbchicago-brt-8b-l3-v1-6-5783-v1 to be ready
Connection pool is full, discarding connection: %s. Connection pool size: %s
Inference service bbchicago-brt-8b-l3-v1-6-5783-v1 ready after 92.13800573348999s
Pipeline stage ISVCDeployer completed in 93.87s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.3079004287719727s
Received healthy response to inference request in 1.4200432300567627s
Received healthy response to inference request in 1.4200794696807861s
Received healthy response to inference request in 1.4223835468292236s
Received healthy response to inference request in 1.4072589874267578s
5 requests
0 failed requests
5th percentile: 1.4098158359527588
10th percentile: 1.4123726844787599
20th percentile: 1.4174863815307617
30th percentile: 1.4200504779815675
40th percentile: 1.4200649738311768
50th percentile: 1.4200794696807861
60th percentile: 1.421001100540161
70th percentile: 1.4219227313995362
80th percentile: 1.5994869232177735
90th percentile: 1.953693675994873
95th percentile: 2.1307970523834228
99th percentile: 2.2724797534942627
mean time: 1.5955331325531006
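The StressChecker percentiles above are consistent with linear interpolation over the five sorted response times. A sketch that reproduces the reported figures (the `percentile` helper assumes NumPy-style linear interpolation, which matches the logged values; it is not the checker's actual code):

```python
# The five healthy response times logged above, in seconds.
times = sorted([
    2.3079004287719727,
    1.4200432300567627,
    1.4200794696807861,
    1.4223835468292236,
    1.4072589874267578,
])

def percentile(sorted_vals, p):
    """Percentile via linear interpolation over already-sorted values."""
    k = (len(sorted_vals) - 1) * p / 100
    lo = int(k)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (k - lo)

mean_time = sum(times) / len(times)  # 1.5955331325531006, as logged
```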
Pipeline stage StressChecker completed in 8.75s
bbchicago-brt-8b-l3-v1-6_5783_v1 status is now deployed due to DeploymentManager action
bbchicago-brt-8b-l3-v1-6_5783_v1 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of bbchicago-brt-8b-l3-v1-6_5783_v1
Running pipeline stage ISVCDeleter
Checking if service bbchicago-brt-8b-l3-v1-6-5783-v1 is running
Tearing down inference service bbchicago-brt-8b-l3-v1-6-5783-v1
Service bbchicago-brt-8b-l3-v1-6-5783-v1 has been torn down
Pipeline stage ISVCDeleter completed in 5.91s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key bbchicago-brt-8b-l3-v1-6-5783-v1/config.json from bucket guanaco-mkml-models
Deleting key bbchicago-brt-8b-l3-v1-6-5783-v1/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key bbchicago-brt-8b-l3-v1-6-5783-v1/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key bbchicago-brt-8b-l3-v1-6-5783-v1/tokenizer.json from bucket guanaco-mkml-models
Deleting key bbchicago-brt-8b-l3-v1-6-5783-v1/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key bbchicago-brt-8b-l3-v1-6-5783-v1_reward/config.json from bucket guanaco-reward-models
Deleting key bbchicago-brt-8b-l3-v1-6-5783-v1_reward/merges.txt from bucket guanaco-reward-models
Deleting key bbchicago-brt-8b-l3-v1-6-5783-v1_reward/reward.tensors from bucket guanaco-reward-models
Deleting key bbchicago-brt-8b-l3-v1-6-5783-v1_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key bbchicago-brt-8b-l3-v1-6-5783-v1_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key bbchicago-brt-8b-l3-v1-6-5783-v1_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key bbchicago-brt-8b-l3-v1-6-5783-v1_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 5.68s
bbchicago-brt-8b-l3-v1-6_5783_v1 status is now torndown due to DeploymentManager action