developer_uid: rinen0721
submission_id: rinen0721-llama8bbase_v1
model_name: rinen0721-llama8bbase_v1
model_group: rinen0721/llama8bbase
status: torndown
timestamp: 2024-08-13T14:29:33+00:00
num_battles: 7487
num_wins: 2889
celo_rating: 1131.16
family_friendly_score: 0.0
submission_type: basic
model_repo: rinen0721/llama8bbase
model_architecture: LlamaForCausalLM
reward_repo: ChaiML/gpt2_xl_pairwise_89m_step_347634
model_num_parameters: 8030261248.0
best_of: 4
max_input_tokens: 512
max_output_tokens: 64
display_name: rinen0721-llama8bbase_v1
is_internal_developer: False
language_model: rinen0721/llama8bbase
model_size: 8B
ranking_group: single
us_pacific_date: 2024-08-13
win_ratio: 0.3858688393214906
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 4, 'max_output_tokens': 64, 'reward_max_token_input': 512}
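For illustration, these generation_params map naturally onto a best-of-N sampling configuration. The sketch below expresses them with vLLM's SamplingParams purely as a stand-in, since the log only identifies the serving stack as MKML/Flywheel; the engine actually used here may accept a different interface.

from vllm import SamplingParams

# Hypothetical translation of the generation_params above into vLLM's
# SamplingParams. vLLM is only a stand-in for the real MKML/Flywheel engine.
sampling_params = SamplingParams(
    n=1,
    best_of=4,               # 4 candidates; the reward model presumably picks one
    temperature=1.0,
    top_p=1.0,
    min_p=0.0,
    top_k=40,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["\n"],             # stopping_words
    max_tokens=64,           # max_output_tokens
)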
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
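The formatter and reward_formatter above share the same templates. Below is a minimal sketch of how they might be filled in to produce a prompt string; the assembly order and the build_prompt helper are assumptions, only the template strings come from the config.

# Sketch of assembling a prompt from the formatter templates above.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, turns):
    """turns: list of (speaker, message), speaker is 'bot' or 'user'. Hypothetical helper."""
    parts = [
        formatter["memory_template"].format(bot_name=bot_name, memory=memory),
        formatter["prompt_template"].format(prompt=prompt),
    ]
    for speaker, message in turns:
        template = formatter["bot_template"] if speaker == "bot" else formatter["user_template"]
        parts.append(template.format(bot_name=bot_name, user_name=user_name, message=message))
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

print(build_prompt("Bot", "User", "A friendly assistant.", "An example chat.",
                   [("user", "Hi!"), ("bot", "Hello!"), ("user", "How are you?")]))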
Resubmit model
Running pipeline stage MKMLizer
Starting job with name rinen0721-llama8bbase-v1-mkmlizer
Waiting for job on rinen0721-llama8bbase-v1-mkmlizer to finish
rinen0721-llama8bbase-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
rinen0721-llama8bbase-v1-mkmlizer: ║ _____ __ __ ║
rinen0721-llama8bbase-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
rinen0721-llama8bbase-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
rinen0721-llama8bbase-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
rinen0721-llama8bbase-v1-mkmlizer: ║ /___/ ║
rinen0721-llama8bbase-v1-mkmlizer: ║ ║
rinen0721-llama8bbase-v1-mkmlizer: ║ Version: 0.9.9 ║
rinen0721-llama8bbase-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
rinen0721-llama8bbase-v1-mkmlizer: ║ https://mk1.ai ║
rinen0721-llama8bbase-v1-mkmlizer: ║ ║
rinen0721-llama8bbase-v1-mkmlizer: ║ The license key for the current software has been verified as ║
rinen0721-llama8bbase-v1-mkmlizer: ║ belonging to: ║
rinen0721-llama8bbase-v1-mkmlizer: ║ ║
rinen0721-llama8bbase-v1-mkmlizer: ║ Chai Research Corp. ║
rinen0721-llama8bbase-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
rinen0721-llama8bbase-v1-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
rinen0721-llama8bbase-v1-mkmlizer: ║ ║
rinen0721-llama8bbase-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'EOF\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'dial tcp 127.0.0.1:8080: connect: connection refused\n')
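The failures above come from polling another submission's KServe-style predict endpoint while this job runs. A hedged sketch of such a request follows; the JSON body is an assumed KServe v1 payload shape, only the URL appears in the log.

import requests

# Endpoint taken from the failure lines above; the request body is an assumed
# KServe v1-style payload, not the exact schema used by the Chaiverse predictor.
url = ("http://mistralai-mistral-nemo-9330-v42-predictor"
       ".tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict")

try:
    resp = requests.post(url, json={"instances": [{"text": "Hello"}]}, timeout=10)
    resp.raise_for_status()
    print(resp.json())
except requests.RequestException as exc:
    # Mirrors the 'EOF' / 'connection refused' failures recorded in the log.
    print(f"Failed to get response: {exc}")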
rinen0721-llama8bbase-v1-mkmlizer: quantized model in 26.063s
rinen0721-llama8bbase-v1-mkmlizer: Processed model rinen0721/llama8bbase in 58.588s
rinen0721-llama8bbase-v1-mkmlizer: creating bucket guanaco-mkml-models
rinen0721-llama8bbase-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
rinen0721-llama8bbase-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/rinen0721-llama8bbase-v1
rinen0721-llama8bbase-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/rinen0721-llama8bbase-v1/config.json
rinen0721-llama8bbase-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/rinen0721-llama8bbase-v1/special_tokens_map.json
rinen0721-llama8bbase-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/rinen0721-llama8bbase-v1/tokenizer_config.json
rinen0721-llama8bbase-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/rinen0721-llama8bbase-v1/tokenizer.json
Connection pool is full, discarding connection: %s. Connection pool size: %s
rinen0721-llama8bbase-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/rinen0721-llama8bbase-v1/flywheel_model.0.safetensors
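For reference, the cp lines above amount to mirroring the model cache into the bucket. A rough boto3 equivalent is sketched below; boto3 is an assumption, since the mkmlizer's actual upload tool is not shown in the log.

import boto3

# Sketch of mirroring the cached model files to S3, as a stand-in for whatever
# CLI the mkmlizer shells out to for the `cp` lines above.
s3 = boto3.client("s3")
bucket, prefix = "guanaco-mkml-models", "rinen0721-llama8bbase-v1"
for name in ("config.json", "special_tokens_map.json", "tokenizer_config.json",
             "tokenizer.json", "flywheel_model.0.safetensors"):
    s3.upload_file(f"/dev/shm/model_cache/{name}", bucket, f"{prefix}/{name}")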
rinen0721-llama8bbase-v1-mkmlizer: loading reward model from ChaiML/gpt2_xl_pairwise_89m_step_347634
rinen0721-llama8bbase-v1-mkmlizer: Loading 0: 99%|█████████▉| 288/291 [00:11<00:00, 3.06it/s]
rinen0721-llama8bbase-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
rinen0721-llama8bbase-v1-mkmlizer: warnings.warn(
rinen0721-llama8bbase-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
rinen0721-llama8bbase-v1-mkmlizer: warnings.warn(
rinen0721-llama8bbase-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
rinen0721-llama8bbase-v1-mkmlizer: warnings.warn(
rinen0721-llama8bbase-v1-mkmlizer: Downloading shards: 100%|██████████| 2/2 [00:08<00:00, 4.41s/it]
rinen0721-llama8bbase-v1-mkmlizer: Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 3.62it/s]
rinen0721-llama8bbase-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
rinen0721-llama8bbase-v1-mkmlizer: Saving duration: 1.378s
rinen0721-llama8bbase-v1-mkmlizer: Processed model ChaiML/gpt2_xl_pairwise_89m_step_347634 in 13.873s
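The FutureWarnings above are triggered by the deprecated use_auth_token argument. Below is a hypothetical re-load of the reward model using the recommended token argument instead; treating the checkpoint as a sequence-classification head is an assumption, the pipeline may load it through a custom class.

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical re-load of the reward model processed above, using `token`
# rather than the deprecated `use_auth_token`. token=None assumes a public repo.
repo = "ChaiML/gpt2_xl_pairwise_89m_step_347634"
tokenizer = AutoTokenizer.from_pretrained(repo, token=None)
reward_model = AutoModelForSequenceClassification.from_pretrained(repo, token=None)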
rinen0721-llama8bbase-v1-mkmlizer: creating bucket guanaco-reward-models
rinen0721-llama8bbase-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
rinen0721-llama8bbase-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/rinen0721-llama8bbase-v1_reward
rinen0721-llama8bbase-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/rinen0721-llama8bbase-v1_reward/config.json
rinen0721-llama8bbase-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/rinen0721-llama8bbase-v1_reward/special_tokens_map.json
rinen0721-llama8bbase-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/rinen0721-llama8bbase-v1_reward/tokenizer_config.json
rinen0721-llama8bbase-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/rinen0721-llama8bbase-v1_reward/reward.tensors
Job rinen0721-llama8bbase-v1-mkmlizer completed after 105.73s with status: succeeded
Stopping job with name rinen0721-llama8bbase-v1-mkmlizer
Pipeline stage MKMLizer completed in 107.18s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.11s
Running pipeline stage ISVCDeployer
Creating inference service rinen0721-llama8bbase-v1
Waiting for inference service rinen0721-llama8bbase-v1 to be ready
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'EOF\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'dial tcp 127.0.0.1:8080: connect: connection refused\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:52618->127.0.0.1:8080: read: connection reset by peer\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:53154->127.0.0.1:8080: read: connection reset by peer\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:50594->127.0.0.1:8080: read: connection reset by peer\n')
Failed to get response for submission mistralai-mistral-nemo-_9330_v42: ('http://mistralai-mistral-nemo-9330-v42-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'EOF\n')
Inference service rinen0721-llama8bbase-v1 ready after 211.58337306976318s
Pipeline stage ISVCDeployer completed in 214.71s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.0839099884033203s
Received healthy response to inference request in 1.2450077533721924s
Received healthy response to inference request in 0.8984591960906982s
Received healthy response to inference request in 0.8961811065673828s
Received healthy response to inference request in 0.5915007591247559s
5 requests
0 failed requests
5th percentile: 0.6524368286132812
10th percentile: 0.7133728981018066
20th percentile: 0.8352450370788574
30th percentile: 0.8966367244720459
40th percentile: 0.8975479602813721
50th percentile: 0.8984591960906982
60th percentile: 1.037078619003296
70th percentile: 1.1756980419158936
80th percentile: 1.4127882003784182
90th percentile: 1.7483490943908693
95th percentile: 1.9161295413970945
99th percentile: 2.0503538990020753
mean time: 1.1430117607116699
Pipeline stage StressChecker completed in 6.72s
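The StressChecker figures above can be reproduced from the five logged response times with NumPy's default linear-interpolation percentile; this is a sketch for verification, not the checker's own code.

import numpy as np

# The five healthy response times logged above, in order of appearance.
latencies = [2.0839099884033203, 1.2450077533721924, 0.8984591960906982,
             0.8961811065673828, 0.5915007591247559]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print("mean time:", np.mean(latencies))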
rinen0721-llama8bbase_v1 status is now deployed due to DeploymentManager action
rinen0721-llama8bbase_v1 status is now inactive due to auto deactivation of underperforming models
rinen0721-llama8bbase_v1 status is now torndown due to DeploymentManager action