developer_uid: Trace2333
submission_id: trace2333-fd-llama3-v4-n16_v4
model_name: trace2333-fd-llama3-v4-n16_v4
model_group: Trace2333/fd_llama3_v4_N
status: torndown
timestamp: 2024-08-07T06:21:33+00:00
num_battles: 13348
num_wins: 6762
celo_rating: 1220.92
family_friendly_score: 0.0
submission_type: basic
model_repo: Trace2333/fd_llama3_v4_N16
model_architecture: LlamaForCausalLM
reward_repo: Jellywibble/gpt2_xl_pairwise_89m_step_347634
model_num_parameters: 8030261248
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: trace2333-fd-llama3-v4-n16_v4
is_internal_developer: False
language_model: Trace2333/fd_llama3_v4_N16
model_size: 8B
ranking_group: single
us_pacific_date: 2024-08-06
win_ratio: 0.5065927479772251
generation_params: {'temperature': 1.05, 'top_p': 0.95, 'min_p': 0.06, 'top_k': 200, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|eot_id|>'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 256}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
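Note: win_ratio above is simply num_wins / num_battles = 6762 / 13348 ≈ 0.5066. The formatter and generation_params together determine how each request is rendered and sampled. Below is a minimal sketch of that path, assuming plain str.format template application and a vLLM-style sampler; build_prompt and the choice of vLLM are illustrative assumptions, not the actual serving stack shown in this log.

# Hypothetical sketch: render a conversation with the formatter templates
# above, then sample with the submission's generation_params.
from vllm import LLM, SamplingParams  # assumes a vLLM-like engine

FORMATTER = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, memory, prompt, turns):
    """Assemble the input exactly as the templates describe.

    turns is a list of (speaker_name, message) pairs; hypothetical helper."""
    text = FORMATTER["memory_template"].format(bot_name=bot_name, memory=memory)
    text += FORMATTER["prompt_template"].format(prompt=prompt)
    for speaker, message in turns:
        if speaker == bot_name:
            text += FORMATTER["bot_template"].format(bot_name=speaker, message=message)
        else:
            text += FORMATTER["user_template"].format(user_name=speaker, message=message)
    # Leave the bot's next turn open for the model to complete.
    return text + FORMATTER["response_template"].format(bot_name=bot_name)

# generation_params from the submission, mapped field-for-field.
sampling = SamplingParams(
    temperature=1.05,
    top_p=0.95,
    min_p=0.06,
    top_k=200,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["\n", "<|eot_id|>"],   # stopping_words
    best_of=16,                  # 16 candidates per request; reward model picks one
    max_tokens=64,               # max_output_tokens
)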
Resubmit model
Running pipeline stage MKMLizer
Starting job with name trace2333-fd-llama3-v4-n16-v4-mkmlizer
Waiting for job on trace2333-fd-llama3-v4-n16-v4-mkmlizer to finish
Failed to get response for submission trace2333-fd-llama3-v4_v4: ('http://trace2333-fd-llama3-v4-v4-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', '{"error":"ValueError : [TypeError(\\"\'numpy.int64\' object is not iterable\\"), TypeError(\'vars() argument must have __dict__ attribute\')]"}')
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║                        _____ __ __                                    ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║                       / _/ /_  ___ __/ /  ___ ___ / /                 ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║                      / _/ / // / |/|/ / _ \/ -_) -_) /                ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║                     /_//_/\_, /|__,__/_//_/\__/\__/_/                 ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║                          /___/                                        ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║                                                                       ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║  Version: 0.9.9                                                       ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║  Copyright 2023 MK ONE TECHNOLOGIES Inc.                              ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║  https://mk1.ai                                                       ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║                                                                       ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║  The license key for the current software has been verified as        ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║  belonging to:                                                        ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║                                                                       ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║  Chai Research Corp.                                                  ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║  Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f                     ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║  Expiration: 2024-10-15 23:59:59                                      ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ║                                                                       ║
trace2333-fd-llama3-v4-n16-v4-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
Failed to get response for submission trace2333-fd-llama3-v4_v4: ('http://trace2333-fd-llama3-v4-v4-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'request timeout')
trace2333-fd-llama3-v4-n16-v4-mkmlizer: Downloaded to shared memory in 60.057s
trace2333-fd-llama3-v4-n16-v4-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmp80q72j_h, device:0
trace2333-fd-llama3-v4-n16-v4-mkmlizer: Saving flywheel model at /dev/shm/model_cache
trace2333-fd-llama3-v4-n16-v4-mkmlizer: quantized model in 29.282s
trace2333-fd-llama3-v4-n16-v4-mkmlizer: Processed model Trace2333/fd_llama3_v4_N16 in 89.340s
trace2333-fd-llama3-v4-n16-v4-mkmlizer: creating bucket guanaco-mkml-models
trace2333-fd-llama3-v4-n16-v4-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
trace2333-fd-llama3-v4-n16-v4-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/trace2333-fd-llama3-v4-n16-v4
trace2333-fd-llama3-v4-n16-v4-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/trace2333-fd-llama3-v4-n16-v4/config.json
trace2333-fd-llama3-v4-n16-v4-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/trace2333-fd-llama3-v4-n16-v4/special_tokens_map.json
trace2333-fd-llama3-v4-n16-v4-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/trace2333-fd-llama3-v4-n16-v4/tokenizer_config.json
trace2333-fd-llama3-v4-n16-v4-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/trace2333-fd-llama3-v4-n16-v4/tokenizer.json
trace2333-fd-llama3-v4-n16-v4-mkmlizer: loading reward model from Jellywibble/gpt2_xl_pairwise_89m_step_347634
trace2333-fd-llama3-v4-n16-v4-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... Loading 0: 99%|█████████▉| 289/291 [00:14<00:00, 3.22it/s]
trace2333-fd-llama3-v4-n16-v4-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v4-n16-v4-mkmlizer: warnings.warn(
trace2333-fd-llama3-v4-n16-v4-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v4-n16-v4-mkmlizer: warnings.warn(
trace2333-fd-llama3-v4-n16-v4-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
trace2333-fd-llama3-v4-n16-v4-mkmlizer: warnings.warn(
trace2333-fd-llama3-v4-n16-v4-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
trace2333-fd-llama3-v4-n16-v4-mkmlizer: Saving duration: 1.351s
trace2333-fd-llama3-v4-n16-v4-mkmlizer: Processed model Jellywibble/gpt2_xl_pairwise_89m_step_347634 in 11.539s
trace2333-fd-llama3-v4-n16-v4-mkmlizer: creating bucket guanaco-reward-models
trace2333-fd-llama3-v4-n16-v4-mkmlizer: Bucket 's3://guanaco-reward-models/' created
trace2333-fd-llama3-v4-n16-v4-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/trace2333-fd-llama3-v4-n16-v4_reward
trace2333-fd-llama3-v4-n16-v4-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/trace2333-fd-llama3-v4-n16-v4_reward/config.json
trace2333-fd-llama3-v4-n16-v4-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/trace2333-fd-llama3-v4-n16-v4_reward/special_tokens_map.json
trace2333-fd-llama3-v4-n16-v4-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/trace2333-fd-llama3-v4-n16-v4_reward/tokenizer_config.json
trace2333-fd-llama3-v4-n16-v4-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/trace2333-fd-llama3-v4-n16-v4_reward/merges.txt
trace2333-fd-llama3-v4-n16-v4-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/trace2333-fd-llama3-v4-n16-v4_reward/vocab.json
trace2333-fd-llama3-v4-n16-v4-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/trace2333-fd-llama3-v4-n16-v4_reward/tokenizer.json
trace2333-fd-llama3-v4-n16-v4-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/trace2333-fd-llama3-v4-n16-v4_reward/reward.tensors
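With best_of=16, each request samples 16 candidate completions and the reward model uploaded above reranks them. A minimal sketch of that reranking, assuming the pairwise reward model loads as a sequence classifier emitting one scalar logit per sequence (the actual scoring interface is not shown in this log):

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: the reward checkpoint exposes a standard sequence-classification head.
reward_name = "Jellywibble/gpt2_xl_pairwise_89m_step_347634"
tokenizer = AutoTokenizer.from_pretrained(reward_name)
reward_model = AutoModelForSequenceClassification.from_pretrained(reward_name)
reward_model.eval()

def pick_best(conversation: str, candidates: list[str]) -> str:
    """Score each of the best_of=16 candidates; return the highest-scoring one."""
    scores = []
    for cand in candidates:
        inputs = tokenizer(
            conversation + cand,
            truncation=True,
            max_length=256,  # reward_max_token_input from generation_params
            return_tensors="pt",
        )
        with torch.no_grad():
            score = reward_model(**inputs).logits[0, 0].item()
        scores.append(score)
    return candidates[scores.index(max(scores))]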
Job trace2333-fd-llama3-v4-n16-v4-mkmlizer completed after 136.52s with status: succeeded
Stopping job with name trace2333-fd-llama3-v4-n16-v4-mkmlizer
Pipeline stage MKMLizer completed in 137.48s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service trace2333-fd-llama3-v4-n16-v4
Waiting for inference service trace2333-fd-llama3-v4-n16-v4 to be ready
Inference service trace2333-fd-llama3-v4-n16-v4 ready after 191.27577352523804s
Pipeline stage ISVCDeployer completed in 192.93s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.252211809158325s
Received healthy response to inference request in 1.3857009410858154s
Received healthy response to inference request in 1.3784127235412598s
Received healthy response to inference request in 1.4238622188568115s
Received healthy response to inference request in 1.3871421813964844s
5 requests
0 failed requests
5th percentile: 1.3798703670501709
10th percentile: 1.381328010559082
20th percentile: 1.3842432975769043
30th percentile: 1.3859891891479492
40th percentile: 1.3865656852722168
50th percentile: 1.3871421813964844
60th percentile: 1.4018301963806152
70th percentile: 1.416518211364746
80th percentile: 1.5895321369171145
90th percentile: 1.9208719730377197
95th percentile: 2.0865418910980225
99th percentile: 2.2190778255462646
mean time: 1.5654659748077393
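The summary statistics above are consistent with linear-interpolation percentiles over the five observed latencies; the snippet below reproduces every reported figure with numpy's default interpolation (e.g. the 90th percentile comes out to 1.9208719730377197).

import numpy as np

# The five request latencies reported by StressChecker, in seconds.
latencies = [
    2.252211809158325,
    1.3857009410858154,
    1.3784127235412598,
    1.4238622188568115,
    1.3871421813964844,
]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print(f"mean time: {np.mean(latencies)}")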
Pipeline stage StressChecker completed in 8.54s
trace2333-fd-llama3-v4-n16_v4 status is now deployed due to DeploymentManager action
trace2333-fd-llama3-v4-n16_v4 status is now inactive due to auto deactivation of underperforming models
trace2333-fd-llama3-v4-n16_v4 status is now torndown due to DeploymentManager action