submission_id: jic062-instruct-dp7-lr1-g32_v1
developer_uid: chace9580
best_of: 16
celo_rating: 1230.28
display_name: jic062-instruct-dp7-lr1-g32_v1
family_friendly_score: 0.0
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|end_of_text|>', '|eot_id|'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 256}
is_internal_developer: False
language_model: jic062/instruct_dp7_lr1_g32
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: jic062/instruct_dp7_lr1_
model_name: jic062-instruct-dp7-lr1-g32_v1
model_num_parameters: 8030261248.0
model_repo: jic062/instruct_dp7_lr1_g32
model_size: 8B
num_battles: 10914
num_wins: 5543
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: ChaiML/gpt2_xl_pairwise_89m_step_347634
status: torndown
submission_type: basic
timestamp: 2024-08-12T18:40:10+00:00
us_pacific_date: 2024-08-12
win_ratio: 0.5078797874289903
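The win_ratio field is simply num_wins / num_battles; a one-line check in Python:

    num_battles = 10914
    num_wins = 5543
    print(num_wins / num_battles)  # 0.5078797874289903, matching win_ratio above

The formatter (and the identical reward_formatter) describes how a conversation is assembled into model input before truncation to max_input_tokens=512. Below is a minimal sketch of that assembly, using the template strings verbatim from the record; the bot/user names, persona, and messages are hypothetical:

    formatter = {
        "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
        "prompt_template": "{prompt}\n<START>\n",
        "bot_template": "{bot_name}: {message}\n",
        "user_template": "{user_name}: {message}\n",
        "response_template": "{bot_name}:",
    }

    def build_input(bot_name, user_name, memory, prompt, turns):
        # Persona header, then scenario prompt, then the chat turns in order.
        text = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
        text += formatter["prompt_template"].format(prompt=prompt)
        for speaker, message in turns:
            if speaker == "user":
                text += formatter["user_template"].format(user_name=user_name, message=message)
            else:
                text += formatter["bot_template"].format(bot_name=bot_name, message=message)
        # Ends with "BotName:" so generation continues as the bot's next reply,
        # halting at "\n" per the stopping_words in generation_params.
        return text + formatter["response_template"].format(bot_name=bot_name)

    print(build_input("Ava", "User", "a friendly barista", "A quiet cafe at noon.",
                      [("user", "Hi!"), ("bot", "Welcome in!"), ("user", "One coffee, please.")]))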
Resubmit model
Running pipeline stage MKMLizer
Starting job with name jic062-instruct-dp7-lr1-g32-v1-mkmlizer
Waiting for job on jic062-instruct-dp7-lr1-g32-v1-mkmlizer to finish
Stopping job with name jic062-instruct-dp7-lr1-g32-v1-mkmlizer
%s, retrying in %s seconds...
Starting job with name jic062-instruct-dp7-lr1-g32-v1-mkmlizer
Waiting for job on jic062-instruct-dp7-lr1-g32-v1-mkmlizer to finish
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║                              flywheel                               ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║ ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║ Version: 0.9.9 ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║ https://mk1.ai ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║ ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║ The license key for the current software has been verified as ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║ belonging to: ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║ ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║ Chai Research Corp. ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ║ ║
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: Downloaded to shared memory in 30.426s
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmp9cxcwzo4, device:0
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: quantized model in 26.627s
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: Processed model jic062/instruct_dp7_lr1_g32 in 57.054s
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: creating bucket guanaco-mkml-models
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/jic062-instruct-dp7-lr1-g32-v1
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/jic062-instruct-dp7-lr1-g32-v1/config.json
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/jic062-instruct-dp7-lr1-g32-v1/special_tokens_map.json
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/jic062-instruct-dp7-lr1-g32-v1/tokenizer_config.json
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/jic062-instruct-dp7-lr1-g32-v1/tokenizer.json
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/jic062-instruct-dp7-lr1-g32-v1/flywheel_model.0.safetensors
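For illustration, a boto3 equivalent of one of the cp lines above (bucket and key are taken from the log; the mkmlizer's actual upload mechanism is not shown, so treat this as a sketch):

    import boto3

    s3 = boto3.client("s3")
    # Mirrors the logged copy of config.json into the model bucket.
    s3.upload_file(
        Filename="/dev/shm/model_cache/config.json",
        Bucket="guanaco-mkml-models",
        Key="jic062-instruct-dp7-lr1-g32-v1/config.json",
    )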
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: loading reward model from ChaiML/gpt2_xl_pairwise_89m_step_347634
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: Loading 0: 97%|█████████▋| 283/291 [00:05<00:00, 77.00it/s]
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: warnings.warn(
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: warnings.warn(
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: warnings.warn(
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: Downloading shards: 100%|██████████| 2/2 [00:12<00:00, 6.33s/it]
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 3.08it/s]
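The three FutureWarnings above come from passing the deprecated use_auth_token argument while loading the reward model; the warning text itself gives the fix. A minimal sketch of the non-deprecated call (assuming the reward model loads via the transformers Auto classes, which the log does not confirm; token=True reuses a locally stored Hugging Face token):

    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    repo = "ChaiML/gpt2_xl_pairwise_89m_step_347634"
    # Pass token= instead of the deprecated use_auth_token=,
    # which is slated for removal in Transformers v5.
    tokenizer = AutoTokenizer.from_pretrained(repo, token=True)
    reward_model = AutoModelForSequenceClassification.from_pretrained(repo, token=True)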
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: creating bucket guanaco-reward-models
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v1_reward
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v1_reward/config.json
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v1_reward/special_tokens_map.json
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v1_reward/tokenizer_config.json
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v1_reward/merges.txt
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v1_reward/vocab.json
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v1_reward/tokenizer.json
jic062-instruct-dp7-lr1-g32-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v1_reward/reward.tensors
Job jic062-instruct-dp7-lr1-g32-v1-mkmlizer completed after 116.3s with status: succeeded
Stopping job with name jic062-instruct-dp7-lr1-g32-v1-mkmlizer
Pipeline stage MKMLizer completed in 117.91s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service jic062-instruct-dp7-lr1-g32-v1
Waiting for inference service jic062-instruct-dp7-lr1-g32-v1 to be ready
Inference service jic062-instruct-dp7-lr1-g32-v1 ready after 211.24852871894836s
Pipeline stage ISVCDeployer completed in 213.19s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.31097412109375s
Received healthy response to inference request in 1.4579310417175293s
Received healthy response to inference request in 1.417069673538208s
Received healthy response to inference request in 1.4657814502716064s
Received healthy response to inference request in 1.4135377407073975s
5 requests
0 failed requests
5th percentile: 1.4142441272735595
10th percentile: 1.4149505138397216
20th percentile: 1.416363286972046
30th percentile: 1.4252419471740723
40th percentile: 1.4415864944458008
50th percentile: 1.4579310417175293
60th percentile: 1.4610712051391601
70th percentile: 1.464211368560791
80th percentile: 1.6348199844360354
90th percentile: 1.9728970527648926
95th percentile: 2.1419355869293213
99th percentile: 2.277166414260864
mean time: 1.6130588054656982
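The percentile and mean figures above are consistent with linear interpolation over the five response times (numpy's default percentile method); a sketch that reproduces them:

    import numpy as np

    # The five healthy response times (seconds) logged above.
    times = [2.31097412109375, 1.4579310417175293, 1.417069673538208,
             1.4657814502716064, 1.4135377407073975]

    for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{p}th percentile: {np.percentile(times, p)}")
    print("mean time:", np.mean(times))
    # e.g. the 5th percentile prints 1.4142441272735595, matching the log.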
Pipeline stage StressChecker completed in 8.90s
jic062-instruct-dp7-lr1-g32_v1 status is now deployed due to DeploymentManager action
jic062-instruct-dp7-lr1-g32_v1 status is now inactive due to auto-deactivation of underperforming models
jic062-instruct-dp7-lr1-g32_v1 status is now torndown due to DeploymentManager action