developer_uid: chace9580
submission_id: jic062-instruct-dp7-lr1-g32_v2
model_name: jic062-instruct-dp7-lr1-g32_v2
model_group: jic062/instruct_dp7_lr1_
status: torndown
timestamp: 2024-08-12T18:40:56+00:00
num_battles: 10534
num_wins: 5453
celo_rating: 1236.28
family_friendly_score: 0.0
submission_type: basic
model_repo: jic062/instruct_dp7_lr1_g32
model_architecture: LlamaForCausalLM
reward_repo: ChaiML/gpt2_xl_pairwise_89m_step_347634
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: jic062-instruct-dp7-lr1-g32_v2
is_internal_developer: False
language_model: jic062/instruct_dp7_lr1_g32
model_size: 8B
ranking_group: single
us_pacific_date: 2024-08-12
win_ratio: 0.517657110309474
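The reported win_ratio is just num_wins / num_battles from the fields above; a minimal sanity check (values copied from this submission's metadata):

```python
# Sanity check: win_ratio should equal num_wins / num_battles
# (values copied from the submission metadata above).
num_battles = 10534
num_wins = 5453

win_ratio = num_wins / num_battles
print(win_ratio)  # ≈ 0.517657110309474, matching the listed value
```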
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|end_of_text|>', '|eot_id|'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 256}
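The combination of `best_of: 16` and a separate reward_repo suggests best-of-N reranking: sample several candidate completions, score each with the reward model, and return the highest-scoring one. A schematic sketch, where `generate_candidate` and `reward_score` are hypothetical stand-ins for the language model and reward model (not the actual Chai serving code):

```python
# Schematic best-of-N reranking. `generate_candidate` and `reward_score`
# are hypothetical placeholders for the LLM and the reward model.
import random

def generate_candidate(prompt: str) -> str:
    # Placeholder: a real system would sample from the LLM using the
    # generation_params above (temperature=1.0, top_k=40, ...).
    return f"{prompt} -> candidate #{random.randint(0, 9999)}"

def reward_score(prompt: str, completion: str) -> float:
    # Placeholder: a real system would run the pairwise reward model.
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward_score(prompt, c))

print(best_of_n("Hello", n=16))
```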
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
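The formatter templates above are plain Python format strings; a toy rendering of a one-turn conversation in the Llama-3 chat layout (the persona, names, and messages here are made up for illustration):

```python
# Render a one-turn conversation with the formatter templates above.
memory_template = ("<|begin_of_text|><|start_header_id|>system<|end_header_id|>"
                   "\n\n{bot_name}'s Persona: {memory}\n\n")
prompt_template = "{prompt}<|eot_id|>"
user_template = ("<|start_header_id|>user<|end_header_id|>"
                 "\n\n{user_name}: {message}<|eot_id|>")
response_template = ("<|start_header_id|>assistant<|end_header_id|>"
                     "\n\n{bot_name}:")

text = (
    memory_template.format(bot_name="Bot", memory="A helpful assistant.")
    + prompt_template.format(prompt="Stay in character.")
    + user_template.format(user_name="User", message="Hi there!")
    + response_template.format(bot_name="Bot")
)
print(text)
```

The model then continues generation from the trailing `"Bot:"`, stopping at one of the stopping_words listed in generation_params.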
Running pipeline stage MKMLizer
Starting job with name jic062-instruct-dp7-lr1-g32-v2-mkmlizer
Waiting for job on jic062-instruct-dp7-lr1-g32-v2-mkmlizer to finish
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ _____ __ __ ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ /___/ ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ Version: 0.9.9 ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ https://mk1.ai ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ The license key for the current software has been verified as ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ belonging to: ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ Chai Research Corp. ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ║ ║
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: Downloaded to shared memory in 21.318s
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpcwl2p5rw, device:0
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: quantized model in 25.717s
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: Processed model jic062/instruct_dp7_lr1_g32 in 47.036s
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: creating bucket guanaco-mkml-models
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/jic062-instruct-dp7-lr1-g32-v2
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/jic062-instruct-dp7-lr1-g32-v2/special_tokens_map.json
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/jic062-instruct-dp7-lr1-g32-v2/config.json
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/jic062-instruct-dp7-lr1-g32-v2/tokenizer_config.json
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/jic062-instruct-dp7-lr1-g32-v2/tokenizer.json
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: loading reward model from ChaiML/gpt2_xl_pairwise_89m_step_347634
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: Loading 0: 99%|█████████▊| 287/291 [00:05<00:00, 43.09it/s]
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: warnings.warn(
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: warnings.warn(
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: warnings.warn(
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: Downloading shards: 100%|██████████| 2/2 [00:08<00:00, 4.33s/it]
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 3.59it/s]
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: Saving duration: 1.324s
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: Processed model ChaiML/gpt2_xl_pairwise_89m_step_347634 in 13.659s
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: creating bucket guanaco-reward-models
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: Bucket 's3://guanaco-reward-models/' created
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v2_reward
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v2_reward/config.json
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v2_reward/tokenizer_config.json
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v2_reward/special_tokens_map.json
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v2_reward/merges.txt
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v2_reward/vocab.json
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v2_reward/tokenizer.json
jic062-instruct-dp7-lr1-g32-v2-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/jic062-instruct-dp7-lr1-g32-v2_reward/reward.tensors
Job jic062-instruct-dp7-lr1-g32-v2-mkmlizer completed after 95.43s with status: succeeded
Stopping job with name jic062-instruct-dp7-lr1-g32-v2-mkmlizer
Pipeline stage MKMLizer completed in 96.36s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service jic062-instruct-dp7-lr1-g32-v2
Waiting for inference service jic062-instruct-dp7-lr1-g32-v2 to be ready
Failed to get response for submission blend_mofos_2024-07-31: ('http://mistralai-mixtral-8x7b-3473-v102-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:33478->127.0.0.1:8080: read: connection reset by peer\n')
Inference service jic062-instruct-dp7-lr1-g32-v2 ready after 201.25631260871887s
Pipeline stage ISVCDeployer completed in 202.75s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.2463912963867188s
Received healthy response to inference request in 1.570066213607788s
Received healthy response to inference request in 1.5021286010742188s
Received healthy response to inference request in 1.5063226222991943s
Received healthy response to inference request in 1.9095711708068848s
5 requests
0 failed requests
5th percentile: 1.502967405319214
10th percentile: 1.503806209564209
20th percentile: 1.5054838180541992
30th percentile: 1.519071340560913
40th percentile: 1.5445687770843506
50th percentile: 1.570066213607788
60th percentile: 1.7058681964874267
70th percentile: 1.8416701793670653
80th percentile: 1.9769351959228516
90th percentile: 2.111663246154785
95th percentile: 2.1790272712707517
99th percentile: 2.232918491363525
mean time: 1.746895980834961
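The percentile figures above are consistent with linear interpolation over the five sorted latencies (the same method as NumPy's default); a stdlib-only recomputation:

```python
# Recompute the StressChecker percentiles from the five latencies above
# using linear interpolation over the sorted samples.
times = sorted([
    2.2463912963867188,
    1.570066213607788,
    1.5021286010742188,
    1.5063226222991943,
    1.9095711708068848,
])

def percentile(xs, p):
    rank = (p / 100) * (len(xs) - 1)
    lo, frac = int(rank), rank - int(rank)
    if lo + 1 < len(xs):
        return xs[lo] + frac * (xs[lo + 1] - xs[lo])
    return xs[lo]

print(percentile(times, 50))    # 1.570066213607788 (matches the log)
print(sum(times) / len(times))  # 1.746895980834961 (mean time)
```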
Pipeline stage StressChecker completed in 9.55s
jic062-instruct-dp7-lr1-g32_v2 status is now deployed due to DeploymentManager action
jic062-instruct-dp7-lr1-g32_v2 status is now inactive due to auto deactivation of underperforming models
jic062-instruct-dp7-lr1-g32_v2 status is now torndown due to DeploymentManager action