developer_uid: rinen0721
submission_id: rinen0721-llama8bv1_v1
model_name: rinen0721-llama8bv1_v1
model_group: rinen0721/llama8bv1
status: torndown
timestamp: 2024-08-13T13:32:31+00:00
num_battles: 8336
num_wins: 3653
celo_rating: 1166.43
family_friendly_score: 0.0
submission_type: basic
model_repo: rinen0721/llama8bv1
model_architecture: LlamaForCausalLM
reward_repo: ChaiML/gpt2_xl_pairwise_89m_step_347634
model_num_parameters: 8030261248.0
best_of: 4
max_input_tokens: 512
max_output_tokens: 64
display_name: rinen0721-llama8bv1_v1
is_internal_developer: False
language_model: rinen0721/llama8bv1
model_size: 8B
ranking_group: single
us_pacific_date: 2024-08-13
win_ratio: 0.4382197696737044
generation_params: {'temperature': 0.9, 'top_p': 1.0, 'min_p': 0.1, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 4, 'max_output_tokens': 64, 'reward_max_token_input': 512}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
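The generation_params and formatter entries above fully determine how a conversation is assembled into a prompt and sampled. The sketch below illustrates that composition; the bot/user names, messages, and the final generate() call are hypothetical placeholders, since the actual MKML serving stack is not shown in this log. For reference, the reported win_ratio is simply num_wins / num_battles = 3653 / 8336 ≈ 0.4382.

# Illustrative sketch of how the formatter templates and generation_params above
# fit together. Names, messages, and the generate() call are hypothetical; the
# production serving stack (MKML) is not shown in this log.
memory_template = "{bot_name}'s Persona: {memory}\n####\n"
prompt_template = "{prompt}\n<START>\n"
bot_template = "{bot_name}: {message}\n"
user_template = "{user_name}: {message}\n"
response_template = "{bot_name}:"

bot_name, user_name = "Aria", "Traveler"          # hypothetical example values
full_prompt = "".join([
    memory_template.format(bot_name=bot_name, memory="A cheerful mountain guide."),
    prompt_template.format(prompt="Aria chats with a passing traveler."),
    bot_template.format(bot_name=bot_name, message="Welcome! Where are you headed?"),
    user_template.format(user_name=user_name, message="Somewhere warm, I hope."),
    response_template.format(bot_name=bot_name),  # the model continues from "Aria:"
])

generation_params = {
    "temperature": 0.9, "top_p": 1.0, "min_p": 0.1, "top_k": 40,
    "presence_penalty": 0.0, "frequency_penalty": 0.0,
    "stopping_words": ["\n"],          # generation stops at the end of one chat line
    "max_input_tokens": 512, "max_output_tokens": 64,
    "best_of": 4,                      # four candidates sampled per turn
}
# reply = generate(full_prompt, **generation_params)   # placeholder, not a real API

The single-newline stop word and the 64-token output cap keep each reply to one chat turn; best_of: 4 produces four candidate replies that the reward model listed in reward_repo can score.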

Running pipeline stage MKMLizer
Starting job with name rinen0721-llama8bv1-v1-mkmlizer
Waiting for job on rinen0721-llama8bv1-v1-mkmlizer to finish
Stopping job with name rinen0721-llama8bv1-v1-mkmlizer
%s, retrying in %s seconds...
Starting job with name rinen0721-llama8bv1-v1-mkmlizer
Waiting for job on rinen0721-llama8bv1-v1-mkmlizer to finish
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:42810->127.0.0.1:8080: read: connection reset by peer\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:41466->127.0.0.1:8080: read: connection reset by peer\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'EOF\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:42872->127.0.0.1:8080: read: connection reset by peer\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'dial tcp 127.0.0.1:8080: connect: connection refused\n')
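The interleaved failures above (and similar lines later in this log) come from health-check calls against another submission's KServe-style :predict endpoint, not from this pipeline. A minimal sketch of such a call follows; only the URL and the transport-level errors appear in the log, so the request payload shown is an assumption.

# Minimal sketch of the ":predict" call whose failures are interleaved in this log.
# Only the URL and the transport errors come from the log; the payload shape is an
# assumption.
import requests

URL = ("http://mistralai-mistral-nemo-9330-v42-predictor"
       ".tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict")

try:
    resp = requests.post(URL, json={"text": "..."}, timeout=10)
    resp.raise_for_status()
except requests.exceptions.RequestException as err:
    # "connection reset by peer", "connection refused", and EOF all surface here
    print(f"Failed to get response for submission mistralai-mistral-nemo-_9330_v42: {err}")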
Stopping job with name rinen0721-llama8bv1-v1-mkmlizer
%s, retrying in %s seconds...
Starting job with name rinen0721-llama8bv1-v1-mkmlizer
Waiting for job on rinen0721-llama8bv1-v1-mkmlizer to finish
rinen0721-llama8bv1-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
rinen0721-llama8bv1-v1-mkmlizer: ║ _____ __ __ ║
rinen0721-llama8bv1-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
rinen0721-llama8bv1-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
rinen0721-llama8bv1-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
rinen0721-llama8bv1-v1-mkmlizer: ║ /___/ ║
rinen0721-llama8bv1-v1-mkmlizer: ║ ║
rinen0721-llama8bv1-v1-mkmlizer: ║ Version: 0.9.9 ║
rinen0721-llama8bv1-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
rinen0721-llama8bv1-v1-mkmlizer: ║ https://mk1.ai ║
rinen0721-llama8bv1-v1-mkmlizer: ║ ║
rinen0721-llama8bv1-v1-mkmlizer: ║ The license key for the current software has been verified as ║
rinen0721-llama8bv1-v1-mkmlizer: ║ belonging to: ║
rinen0721-llama8bv1-v1-mkmlizer: ║ ║
rinen0721-llama8bv1-v1-mkmlizer: ║ Chai Research Corp. ║
rinen0721-llama8bv1-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
rinen0721-llama8bv1-v1-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
rinen0721-llama8bv1-v1-mkmlizer: ║ ║
rinen0721-llama8bv1-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
rinen0721-llama8bv1-v1-mkmlizer: Downloaded to shared memory in 32.700s
rinen0721-llama8bv1-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmp_pr2ixsm, device:0
rinen0721-llama8bv1-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'EOF\n')
rinen0721-llama8bv1-v1-mkmlizer: quantized model in 25.961s
rinen0721-llama8bv1-v1-mkmlizer: Processed model rinen0721/llama8bv1 in 58.661s
rinen0721-llama8bv1-v1-mkmlizer: creating bucket guanaco-mkml-models
rinen0721-llama8bv1-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
rinen0721-llama8bv1-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/rinen0721-llama8bv1-v1
rinen0721-llama8bv1-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/rinen0721-llama8bv1-v1/special_tokens_map.json
rinen0721-llama8bv1-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/rinen0721-llama8bv1-v1/config.json
rinen0721-llama8bv1-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/rinen0721-llama8bv1-v1/tokenizer_config.json
rinen0721-llama8bv1-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/rinen0721-llama8bv1-v1/tokenizer.json
rinen0721-llama8bv1-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/rinen0721-llama8bv1-v1/flywheel_model.0.safetensors
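After the cp commands above finish, the converted model artifacts live under s3://guanaco-mkml-models/rinen0721-llama8bv1-v1/. The snippet below is an illustrative boto3 listing of that prefix; it assumes credentials with read access to the bucket, which the log does not state is public.

# Illustrative: list the uploaded model artifacts. Assumes AWS credentials with read
# access to the guanaco-mkml-models bucket; nothing in the log says it is public.
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="guanaco-mkml-models", Prefix="rinen0721-llama8bv1-v1/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])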
rinen0721-llama8bv1-v1-mkmlizer: loading reward model from ChaiML/gpt2_xl_pairwise_89m_step_347634
rinen0721-llama8bv1-v1-mkmlizer: Loading 0:  99%|█████████▊| 287/291 [00:05<00:00, 42.84it/s]
rinen0721-llama8bv1-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
rinen0721-llama8bv1-v1-mkmlizer: warnings.warn(
rinen0721-llama8bv1-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
rinen0721-llama8bv1-v1-mkmlizer: warnings.warn(
rinen0721-llama8bv1-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
rinen0721-llama8bv1-v1-mkmlizer: warnings.warn(
rinen0721-llama8bv1-v1-mkmlizer: Downloading shards: 100%|██████████| 2/2 [00:08<00:00,  4.40s/it]
rinen0721-llama8bv1-v1-mkmlizer: Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00,  3.59it/s]
rinen0721-llama8bv1-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
rinen0721-llama8bv1-v1-mkmlizer: Saving duration: 1.329s
rinen0721-llama8bv1-v1-mkmlizer: Processed model ChaiML/gpt2_xl_pairwise_89m_step_347634 in 14.125s
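The reward model prepared above (ChaiML/gpt2_xl_pairwise_89m_step_347634) is what the best_of: 4 setting feeds: candidate replies are scored and the highest-scoring one is returned. The sketch below is a rough approximation using a transformers sequence-classification head and the 512-token reward_max_token_input from generation_params; the actual model head and scoring code are not shown in this log, so treat it as illustrative only.

# Rough sketch of best-of-N reranking with a pairwise reward model. The exact head and
# scoring logic of ChaiML/gpt2_xl_pairwise_89m_step_347634 are not shown in this log;
# AutoModelForSequenceClassification is an assumption.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

REWARD_REPO = "ChaiML/gpt2_xl_pairwise_89m_step_347634"
tokenizer = AutoTokenizer.from_pretrained(REWARD_REPO)
reward_model = AutoModelForSequenceClassification.from_pretrained(REWARD_REPO)

def reward_score(conversation: str) -> float:
    # reward_max_token_input is 512 in generation_params above
    inputs = tokenizer(conversation, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        return reward_model(**inputs).logits[0, 0].item()

# Hypothetical best-of-4 candidates for the same prompt:
candidates = ["Aria: Of course!", "Aria: Hmm...", "Aria: Let's go!", "Aria: Why not?"]
best_reply = max(candidates, key=reward_score)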
rinen0721-llama8bv1-v1-mkmlizer: creating bucket guanaco-reward-models
rinen0721-llama8bv1-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
rinen0721-llama8bv1-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/rinen0721-llama8bv1-v1_reward
rinen0721-llama8bv1-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/rinen0721-llama8bv1-v1_reward/config.json
rinen0721-llama8bv1-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/rinen0721-llama8bv1-v1_reward/special_tokens_map.json
rinen0721-llama8bv1-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/rinen0721-llama8bv1-v1_reward/tokenizer_config.json
rinen0721-llama8bv1-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/rinen0721-llama8bv1-v1_reward/merges.txt
rinen0721-llama8bv1-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/rinen0721-llama8bv1-v1_reward/vocab.json
rinen0721-llama8bv1-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/rinen0721-llama8bv1-v1_reward/tokenizer.json
rinen0721-llama8bv1-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/rinen0721-llama8bv1-v1_reward/reward.tensors
Job rinen0721-llama8bv1-v1-mkmlizer completed after 107.65s with status: succeeded
Stopping job with name rinen0721-llama8bv1-v1-mkmlizer
Pipeline stage MKMLizer completed in 120.45s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.09s
Running pipeline stage ISVCDeployer
Creating inference service rinen0721-llama8bv1-v1
Waiting for inference service rinen0721-llama8bv1-v1 to be ready
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'activator request timeout')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:52042->127.0.0.1:8080: read: connection reset by peer\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'EOF\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:52930->127.0.0.1:8080: read: connection reset by peer\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'EOF\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:44084->127.0.0.1:8080: read: connection reset by peer\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'EOF\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:45958->127.0.0.1:8080: read: connection reset by peer\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'EOF\n')
Inference service rinen0721-llama8bv1-v1 ready after 201.2950189113617s
Pipeline stage ISVCDeployer completed in 204.10s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.1788318157196045s
Received healthy response to inference request in 1.248661756515503s
Received healthy response to inference request in 1.1961722373962402s
Received healthy response to inference request in 1.221491813659668s
Received healthy response to inference request in 1.2256717681884766s
5 requests
0 failed requests
5th percentile: 1.2012361526489257
10th percentile: 1.2063000679016114
20th percentile: 1.2164278984069825
30th percentile: 1.2223278045654298
40th percentile: 1.2239997863769532
50th percentile: 1.2256717681884766
60th percentile: 1.234867763519287
70th percentile: 1.2440637588500976
80th percentile: 1.4346957683563235
90th percentile: 1.806763792037964
95th percentile: 1.992797803878784
99th percentile: 2.1416250133514403
mean time: 1.4141658782958983
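The percentiles and mean above are consistent with simple linear-interpolation percentiles over the five request latencies; the snippet below reproduces them with numpy (the stress checker's actual implementation is not shown in the log, so this is a sanity check only).

# Sanity check: numpy's default linear-interpolation percentiles and the mean of the
# five latencies reproduce the StressChecker summary above.
import numpy as np

latencies = [2.1788318157196045, 1.248661756515503, 1.1961722373962402,
             1.221491813659668, 1.2256717681884766]

for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{q}th percentile: {np.percentile(latencies, q)}")
print("mean time:", np.mean(latencies))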
Pipeline stage StressChecker completed in 7.99s
rinen0721-llama8bv1_v1 status is now deployed due to DeploymentManager action
rinen0721-llama8bv1_v1 status is now inactive due to auto deactivation of underperforming models
rinen0721-llama8bv1_v1 status is now torndown due to DeploymentManager action