submission_id: nousresearch-meta-llama_4941_v85
developer_uid: zonemercy
best_of: 1
celo_rating: 1096.79
display_name: nousresearch-meta-llama_4941_v85
family_friendly_score: 0.0
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 0.95, 'top_p': 1.0, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|eot_id|>'], 'max_input_tokens': 512, 'best_of': 1, 'max_output_tokens': 64, 'reward_max_token_input': 256}
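The formatter templates above are plain `str.format` templates that assemble a Llama-3-style chat prompt. A minimal sketch of how they compose (the `build_prompt` helper and the sample names `Aria`/`Alice` are illustrative, not part of the submission):

```python
# Templates copied from the formatter metadata above; placeholders
# (bot_name, memory, prompt, user_name, message) are filled per turn.
memory_template = ("<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
                   "{bot_name}'s Persona: {memory}\n\n")
prompt_template = "{prompt}<|eot_id|>"
user_template = ("<|start_header_id|>user<|end_header_id|>\n\n"
                 "{user_name}: {message}<|eot_id|>")
bot_template = ("<|start_header_id|>assistant<|end_header_id|>\n\n"
                "{bot_name}: {message}<|eot_id|>")
response_template = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:"

def build_prompt(bot_name, memory, prompt, turns):
    """Concatenate persona memory, scenario prompt, chat turns, and the
    trailing response header that cues the model to reply as the bot."""
    parts = [memory_template.format(bot_name=bot_name, memory=memory),
             prompt_template.format(prompt=prompt)]
    for speaker, name, message in turns:
        if speaker == "user":
            parts.append(user_template.format(user_name=name, message=message))
        else:
            parts.append(bot_template.format(bot_name=name, message=message))
    parts.append(response_template.format(bot_name=bot_name))
    return "".join(parts)

example = build_prompt("Aria", "A friendly guide.", "You meet Aria.",
                       [("user", "Alice", "Hello!")])
```

Generation then stops on `\n` or `<|eot_id|>` per the `stopping_words` in the generation params.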
is_internal_developer: True
language_model: NousResearch/Meta-Llama-3-8B-Instruct
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: NousResearch/Meta-Llama-
model_name: nousresearch-meta-llama_4941_v85
model_num_parameters: 8030261248.0
model_repo: NousResearch/Meta-Llama-3-8B-Instruct
model_size: 8B
num_battles: 14217
num_wins: 5367
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: ChaiML/gpt2_xl_pairwise_89m_step_347634
status: torndown
submission_type: basic
timestamp: 2024-07-25T00:03:56+00:00
us_pacific_date: 2024-07-24
win_ratio: 0.37750580291200675
Running pipeline stage MKMLizer
Starting job with name nousresearch-meta-llama-4941-v85-mkmlizer
Waiting for job on nousresearch-meta-llama-4941-v85-mkmlizer to finish
nousresearch-meta-llama-4941-v85-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
nousresearch-meta-llama-4941-v85-mkmlizer: ║ _____ __ __ ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ /___/ ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ Version: 0.9.6 ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ https://mk1.ai ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ The license key for the current software has been verified as ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ belonging to: ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ Chai Research Corp. ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
nousresearch-meta-llama-4941-v85-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v85-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
nousresearch-meta-llama-4941-v85-mkmlizer: Downloaded to shared memory in 23.842s
nousresearch-meta-llama-4941-v85-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpi7ewiftp, device:0
nousresearch-meta-llama-4941-v85-mkmlizer: Saving flywheel model at /dev/shm/model_cache
nousresearch-meta-llama-4941-v85-mkmlizer: Loading 0: 99%|█████████▊| 287/291 [00:11<00:01, 3.27it/s]
nousresearch-meta-llama-4941-v85-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
nousresearch-meta-llama-4941-v85-mkmlizer: quantized model in 25.695s
nousresearch-meta-llama-4941-v85-mkmlizer: Processed model NousResearch/Meta-Llama-3-8B-Instruct in 49.538s
nousresearch-meta-llama-4941-v85-mkmlizer: creating bucket guanaco-mkml-models
nousresearch-meta-llama-4941-v85-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
nousresearch-meta-llama-4941-v85-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v85
nousresearch-meta-llama-4941-v85-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v85/config.json
nousresearch-meta-llama-4941-v85-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v85/special_tokens_map.json
nousresearch-meta-llama-4941-v85-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v85/tokenizer_config.json
nousresearch-meta-llama-4941-v85-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v85/tokenizer.json
nousresearch-meta-llama-4941-v85-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v85/flywheel_model.0.safetensors
nousresearch-meta-llama-4941-v85-mkmlizer: loading reward model from ChaiML/gpt2_xl_pairwise_89m_step_347634
nousresearch-meta-llama-4941-v85-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:950: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
nousresearch-meta-llama-4941-v85-mkmlizer: warnings.warn(
nousresearch-meta-llama-4941-v85-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:778: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
nousresearch-meta-llama-4941-v85-mkmlizer: warnings.warn(
nousresearch-meta-llama-4941-v85-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
nousresearch-meta-llama-4941-v85-mkmlizer: warnings.warn(
nousresearch-meta-llama-4941-v85-mkmlizer: Downloading shards: 100%|██████████| 2/2 [00:08<00:00, 4.28s/it]
nousresearch-meta-llama-4941-v85-mkmlizer: Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 3.62it/s]
nousresearch-meta-llama-4941-v85-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
nousresearch-meta-llama-4941-v85-mkmlizer: Saving duration: 1.354s
nousresearch-meta-llama-4941-v85-mkmlizer: Processed model ChaiML/gpt2_xl_pairwise_89m_step_347634 in 13.399s
nousresearch-meta-llama-4941-v85-mkmlizer: creating bucket guanaco-reward-models
nousresearch-meta-llama-4941-v85-mkmlizer: Bucket 's3://guanaco-reward-models/' created
nousresearch-meta-llama-4941-v85-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/nousresearch-meta-llama-4941-v85_reward
nousresearch-meta-llama-4941-v85-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v85_reward/config.json
nousresearch-meta-llama-4941-v85-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v85_reward/special_tokens_map.json
nousresearch-meta-llama-4941-v85-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v85_reward/tokenizer_config.json
nousresearch-meta-llama-4941-v85-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/nousresearch-meta-llama-4941-v85_reward/merges.txt
nousresearch-meta-llama-4941-v85-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v85_reward/vocab.json
nousresearch-meta-llama-4941-v85-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v85_reward/tokenizer.json
nousresearch-meta-llama-4941-v85-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/nousresearch-meta-llama-4941-v85_reward/reward.tensors
Job nousresearch-meta-llama-4941-v85-mkmlizer completed after 94.37s with status: succeeded
Stopping job with name nousresearch-meta-llama-4941-v85-mkmlizer
Pipeline stage MKMLizer completed in 95.36s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.09s
Running pipeline stage ISVCDeployer
Creating inference service nousresearch-meta-llama-4941-v85
Waiting for inference service nousresearch-meta-llama-4941-v85 to be ready
Inference service nousresearch-meta-llama-4941-v85 ready after 60.47726392745972s
Pipeline stage ISVCDeployer completed in 62.39s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.9501371383666992s
Received healthy response to inference request in 0.8962717056274414s
Received healthy response to inference request in 1.0685832500457764s
Received healthy response to inference request in 0.8950216770172119s
Received healthy response to inference request in 0.854541540145874s
5 requests
0 failed requests
5th percentile: 0.8626375675201416
10th percentile: 0.8707335948944092
20th percentile: 0.8869256496429443
30th percentile: 0.8952716827392578
40th percentile: 0.8957716941833496
50th percentile: 0.8962717056274414
60th percentile: 0.9651963233947753
70th percentile: 1.0341209411621093
80th percentile: 1.2448940277099612
90th percentile: 1.59751558303833
95th percentile: 1.7738263607025144
99th percentile: 1.9148749828338623
mean time: 1.1329110622406007
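The StressChecker percentiles above follow the standard linear-interpolation rule (the NumPy default). A pure-Python sketch that reproduces them from the five response times (the `percentile` helper is illustrative):

```python
def percentile(data, q):
    """Linear-interpolated percentile, matching numpy.percentile's default method."""
    s = sorted(data)
    pos = (q / 100) * (len(s) - 1)     # fractional rank into the sorted sample
    lo = int(pos)
    frac = pos - lo
    if lo + 1 < len(s):
        return s[lo] + frac * (s[lo + 1] - s[lo])
    return s[lo]

# The five healthy response times logged above, in seconds.
times = [1.9501371383666992, 0.8962717056274414, 1.0685832500457764,
         0.8950216770172119, 0.854541540145874]
mean_time = sum(times) / len(times)
```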
Pipeline stage StressChecker completed in 6.39s
nousresearch-meta-llama_4941_v85 status is now deployed due to DeploymentManager action
nousresearch-meta-llama_4941_v85 status is now inactive due to auto deactivation (removal of underperforming models)
admin requested tearing down of nousresearch-meta-llama_4941_v85
Running pipeline stage ISVCDeleter
Checking if service nousresearch-meta-llama-4941-v85 is running
Tearing down inference service nousresearch-meta-llama-4941-v85
Service nousresearch-meta-llama-4941-v85 has been torndown
Pipeline stage ISVCDeleter completed in 4.06s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key nousresearch-meta-llama-4941-v85/config.json from bucket guanaco-mkml-models
Deleting key nousresearch-meta-llama-4941-v85/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key nousresearch-meta-llama-4941-v85/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key nousresearch-meta-llama-4941-v85/tokenizer.json from bucket guanaco-mkml-models
Deleting key nousresearch-meta-llama-4941-v85/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key nousresearch-meta-llama-4941-v85_reward/config.json from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v85_reward/merges.txt from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v85_reward/reward.tensors from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v85_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v85_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v85_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v85_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 5.57s
nousresearch-meta-llama_4941_v85 status is now torndown due to DeploymentManager action