submission_id: meta-llama-meta-llama-3-8b_v9
developer_uid: Meliodia
alignment_samples: 0
best_of: 16
celo_rating: 1178.32
display_name: meta-base-model
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 256}
is_internal_developer: True
language_model: meta-llama/Meta-Llama-3-8B
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: meta-llama/Meta-Llama-3-
model_name: meta-base-model
model_num_parameters: 8030261248.0
model_repo: meta-llama/Meta-Llama-3-8B
model_size: 8B
num_battles: 12270
num_wins: 5914
propriety_score: 0.7125581395348837
propriety_total_count: 1075.0
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: ChaiML/gpt2_medium_pairwise_60m_step_937500
status: torndown
submission_type: basic
timestamp: 2024-07-26T15:50:08+00:00
us_pacific_date: 2024-07-26
win_ratio: 0.4819885900570497
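
The formatter and generation_params fields above determine how each conversation is serialized into a prompt and how completions are sampled: up to 512 input tokens, best_of 16 candidates of at most 64 output tokens each, stopping at the first newline, with candidates scored by the reward model listed under reward_repo (win_ratio is simply num_wins / num_battles = 5914 / 12270 ≈ 0.482). Below is a minimal sketch of the prompt assembly implied by those templates; the build_prompt helper and the example persona, scenario and messages are illustrative, not part of the submission.

    # Illustrative reconstruction of the prompt assembly implied by the
    # formatter config above; helper name and example data are hypothetical.
    formatter = {
        "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
        "prompt_template": "{prompt}\n<START>\n",
        "bot_template": "{bot_name}: {message}\n",
        "user_template": "{user_name}: {message}\n",
        "response_template": "{bot_name}:",
    }

    def build_prompt(bot_name, user_name, memory, prompt, turns):
        """Serialize persona, scenario and chat history into a single prompt string."""
        text = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
        text += formatter["prompt_template"].format(prompt=prompt)
        for speaker, message in turns:
            if speaker == "bot":
                text += formatter["bot_template"].format(bot_name=bot_name, message=message)
            else:
                text += formatter["user_template"].format(user_name=user_name, message=message)
        # The model continues from "{bot_name}:" and generation stops at "\n".
        text += formatter["response_template"].format(bot_name=bot_name)
        return text

    print(build_prompt("Ava", "User", "A friendly assistant.", "Ava greets the user.",
                       [("user", "Hi!"), ("bot", "Hello there."), ("user", "How are you?")]))

Generation then proceeds with temperature 1.0, top_p 1.0 and top_k 40, i.e. close to unmodified sampling from the base model.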
Running pipeline stage MKMLizer
Starting job with name meta-llama-meta-llama-3-8b-v9-mkmlizer
Waiting for job on meta-llama-meta-llama-3-8b-v9-mkmlizer to finish
meta-llama-meta-llama-3-8b-v9-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ _____ __ __ ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ /___/ ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ Version: 0.9.7 ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ https://mk1.ai ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ The license key for the current software has been verified as ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ belonging to: ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ Chai Research Corp. ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ║ ║
meta-llama-meta-llama-3-8b-v9-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
meta-llama-meta-llama-3-8b-v6-mkmlizer: quantized model in 26.410s
meta-llama-meta-llama-3-8b-v6-mkmlizer: Processed model meta-llama/Meta-Llama-3-8B in 62.937s
meta-llama-meta-llama-3-8b-v6-mkmlizer: creating bucket guanaco-mkml-models
meta-llama-meta-llama-3-8b-v6-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
meta-llama-meta-llama-3-8b-v6-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v6
meta-llama-meta-llama-3-8b-v6-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v6/config.json
meta-llama-meta-llama-3-8b-v6-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v6/special_tokens_map.json
meta-llama-meta-llama-3-8b-v6-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v6/tokenizer_config.json
meta-llama-meta-llama-3-8b-v6-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v6/tokenizer.json
meta-llama-meta-llama-3-8b-v9-mkmlizer: Downloaded to shared memory in 33.480s
meta-llama-meta-llama-3-8b-v9-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpk98dp2zc, device:0
meta-llama-meta-llama-3-8b-v9-mkmlizer: Saving flywheel model at /dev/shm/model_cache
Failed to get response for submission cycy233-l3-ba-e-v3-c6_v1: ('http://cycy233-l3-ba-e-v3-c6-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:43642->127.0.0.1:8080: read: connection reset by peer\n')
meta-llama-meta-llama-3-8b-v6-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v6/flywheel_model.0.safetensors
meta-llama-meta-llama-3-8b-v6-mkmlizer: loading reward model from ChaiML/gpt2_medium_pairwise_60m_step_937500
meta-llama-meta-llama-3-8b-v6-mkmlizer: Loading 0: 100%|█████████▉| 290/291 [00:11<00:00, 3.08it/s]
meta-llama-meta-llama-3-8b-v6-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-3-8b-v6-mkmlizer: warnings.warn(
meta-llama-meta-llama-3-8b-v6-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-3-8b-v6-mkmlizer: warnings.warn(
meta-llama-meta-llama-3-8b-v6-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-3-8b-v6-mkmlizer: warnings.warn(
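
The FutureWarning above is emitted because the loader still passes the deprecated `use_auth_token` argument; in recent transformers releases the same calls take `token` instead. A short illustration, where the model id comes from this submission and "hf_..." stands in for a real Hugging Face access token:

    from transformers import AutoTokenizer, AutoModelForCausalLM

    # Deprecated form that triggers the warning seen in the log:
    #   AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", use_auth_token="hf_...")
    # Current form:
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B", token="hf_...")
    model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", token="hf_...")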
meta-llama-meta-llama-3-8b-v6-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
meta-llama-meta-llama-3-8b-v6-mkmlizer: Bucket 's3://guanaco-reward-models/' created
meta-llama-meta-llama-3-8b-v6-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v6_reward
meta-llama-meta-llama-3-8b-v6-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v6_reward/config.json
meta-llama-meta-llama-3-8b-v6-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v6_reward/tokenizer_config.json
meta-llama-meta-llama-3-8b-v6-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v6_reward/special_tokens_map.json
meta-llama-meta-llama-3-8b-v6-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v6_reward/merges.txt
meta-llama-meta-llama-3-8b-v6-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v6_reward/vocab.json
meta-llama-meta-llama-3-8b-v6-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v6_reward/tokenizer.json
meta-llama-meta-llama-3-8b-v6-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v6_reward/reward.tensors
Job meta-llama-meta-llama-3-8b-v6-mkmlizer completed after 96.63s with status: succeeded
Stopping job with name meta-llama-meta-llama-3-8b-v6-mkmlizer
Pipeline stage MKMLizer completed in 98.04s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.25s
Running pipeline stage ISVCDeployer
Creating inference service meta-llama-meta-llama-3-8b-v6
Waiting for inference service meta-llama-meta-llama-3-8b-v6 to be ready
meta-llama-meta-llama-3-8b-v9-mkmlizer: quantized model in 25.409s
meta-llama-meta-llama-3-8b-v9-mkmlizer: Processed model meta-llama/Meta-Llama-3-8B in 58.890s
meta-llama-meta-llama-3-8b-v9-mkmlizer: creating bucket guanaco-mkml-models
meta-llama-meta-llama-3-8b-v9-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
meta-llama-meta-llama-3-8b-v9-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v9
meta-llama-meta-llama-3-8b-v9-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v9/config.json
meta-llama-meta-llama-3-8b-v9-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v9/special_tokens_map.json
meta-llama-meta-llama-3-8b-v9-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v9/tokenizer_config.json
meta-llama-meta-llama-3-8b-v9-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v9/tokenizer.json
meta-llama-meta-llama-3-8b-v9-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v9/flywheel_model.0.safetensors
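
The cp lines above copy the quantized model artifacts from shared memory into the guanaco-mkml-models bucket. The log does not show which S3 client performs the copy; a minimal boto3 sketch of equivalent uploads, with the bucket and key prefix taken from the log and credentials assumed to be available in the environment:

    import boto3

    s3 = boto3.client("s3")  # assumes AWS credentials are configured in the environment
    bucket = "guanaco-mkml-models"
    prefix = "meta-llama-meta-llama-3-8b-v9"

    for name in ["config.json", "special_tokens_map.json", "tokenizer_config.json",
                 "tokenizer.json", "flywheel_model.0.safetensors"]:
        # Mirrors the cp lines: /dev/shm/model_cache/<name> -> s3://<bucket>/<prefix>/<name>
        s3.upload_file(f"/dev/shm/model_cache/{name}", bucket, f"{prefix}/{name}")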
meta-llama-meta-llama-3-8b-v9-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-3-8b-v9-mkmlizer: warnings.warn(
meta-llama-meta-llama-3-8b-v9-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
meta-llama-meta-llama-3-8b-v9-mkmlizer: Saving duration: 0.333s
meta-llama-meta-llama-3-8b-v9-mkmlizer: Processed model ChaiML/gpt2_medium_pairwise_60m_step_937500 in 5.721s
meta-llama-meta-llama-3-8b-v9-mkmlizer: creating bucket guanaco-reward-models
meta-llama-meta-llama-3-8b-v9-mkmlizer: Bucket 's3://guanaco-reward-models/' created
meta-llama-meta-llama-3-8b-v9-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v9_reward
meta-llama-meta-llama-3-8b-v9-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v9_reward/config.json
meta-llama-meta-llama-3-8b-v9-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v9_reward/special_tokens_map.json
meta-llama-meta-llama-3-8b-v9-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v9_reward/tokenizer_config.json
meta-llama-meta-llama-3-8b-v9-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v9_reward/merges.txt
meta-llama-meta-llama-3-8b-v9-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v9_reward/vocab.json
meta-llama-meta-llama-3-8b-v9-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v9_reward/tokenizer.json
meta-llama-meta-llama-3-8b-v9-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v9_reward/reward.tensors
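
The reward model named in reward_repo, ChaiML/gpt2_medium_pairwise_60m_step_937500, is converted and uploaded alongside the language model; at serving time it scores the best_of candidates, with inputs truncated to reward_max_token_input = 256 tokens. A rough sketch of scoring one formatted conversation with it, assuming the checkpoint loads as a standard single-logit sequence classifier; the exact head and scoring convention are not shown in this log:

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Assumption: the reward checkpoint exposes a single-logit sequence-classification head.
    reward_name = "ChaiML/gpt2_medium_pairwise_60m_step_937500"
    tok = AutoTokenizer.from_pretrained(reward_name)
    reward = AutoModelForSequenceClassification.from_pretrained(reward_name)

    def score(conversation_text):
        """Return a scalar preference score for a formatted conversation."""
        inputs = tok(conversation_text, return_tensors="pt", truncation=True, max_length=256)
        with torch.no_grad():
            return reward(**inputs).logits.squeeze().item()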
Job meta-llama-meta-llama-3-8b-v9-mkmlizer completed after 96.96s with status: succeeded
Stopping job with name meta-llama-meta-llama-3-8b-v9-mkmlizer
Pipeline stage MKMLizer completed in 98.03s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.13s
Running pipeline stage ISVCDeployer
Creating inference service meta-llama-meta-llama-3-8b-v9
Waiting for inference service meta-llama-meta-llama-3-8b-v9 to be ready
Inference service meta-llama-meta-llama-3-8b-v9 ready after 91.09577822685242s
Pipeline stage ISVCDeployer completed in 91.68s
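
With the inference service ready, the stress checker below fires a handful of requests at the service's predictor endpoint. A hedged sketch of such a request follows: the host is extrapolated from the predictor URL pattern visible earlier in this log for another submission, and the request body is an assumption based on the KServe v1 ":predict" convention; neither is documented on this page.

    import requests

    # Hypothetical request; host and path extrapolated from the predictor URL pattern
    # in the log above, payload shape assumed from the KServe v1 predict protocol.
    url = ("http://meta-llama-meta-llama-3-8b-v9-predictor.tenant-chaiml-guanaco"
           ".k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict")
    payload = {"instances": [{"text": "Ava's Persona: A friendly assistant.\n####\n<START>\nUser: Hi!\nAva:"}]}
    resp = requests.post(url, json=payload, timeout=30)
    print(resp.status_code, resp.elapsed.total_seconds(), resp.json())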
Running pipeline stage StressChecker
Received healthy response to inference request in 2.0244367122650146s
Received healthy response to inference request in 1.176342248916626s
Received healthy response to inference request in 1.1784987449645996s
Received healthy response to inference request in 1.1654036045074463s
Received healthy response to inference request in 1.1928737163543701s
5 requests
0 failed requests
5th percentile: 1.1675913333892822
10th percentile: 1.1697790622711182
20th percentile: 1.17415452003479
30th percentile: 1.1767735481262207
40th percentile: 1.1776361465454102
50th percentile: 1.1784987449645996
60th percentile: 1.1842487335205079
70th percentile: 1.189998722076416
80th percentile: 1.3591863155364992
90th percentile: 1.691811513900757
95th percentile: 1.8581241130828856
99th percentile: 1.9911741924285888
mean time: 1.3475110054016113
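
The percentile and mean figures above follow directly from the five healthy-response latencies using linear interpolation, and can be reproduced with numpy:

    import numpy as np

    # The five healthy-response latencies reported above, in seconds.
    latencies = [2.0244367122650146, 1.176342248916626, 1.1784987449645996,
                 1.1654036045074463, 1.1928737163543701]

    for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{p}th percentile: {np.percentile(latencies, p)}")
    print("mean time:", np.mean(latencies))  # reported above as 1.3475110054016113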
Pipeline stage StressChecker completed in 8.47s
meta-llama-meta-llama-3-8b_v9 status is now deployed due to DeploymentManager action
meta-llama-meta-llama-3-8b_v9 status is now inactive due to auto deactivation (removal of underperforming models)
admin requested tearing down of meta-llama-meta-llama-3-8b_v9
Running pipeline stage ISVCDeleter
Checking if service meta-llama-meta-llama-3-8b-v9 is running
Tearing down inference service meta-llama-meta-llama-3-8b-v9
Service meta-llama-meta-llama-3-8b-v9 has been torndown
Pipeline stage ISVCDeleter completed in 4.21s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key meta-llama-meta-llama-3-8b-v9/config.json from bucket guanaco-mkml-models
Deleting key meta-llama-meta-llama-3-8b-v9/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key meta-llama-meta-llama-3-8b-v9/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key meta-llama-meta-llama-3-8b-v9/tokenizer.json from bucket guanaco-mkml-models
Deleting key meta-llama-meta-llama-3-8b-v9/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key meta-llama-meta-llama-3-8b-v9_reward/config.json from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-3-8b-v9_reward/merges.txt from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-3-8b-v9_reward/reward.tensors from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-3-8b-v9_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-3-8b-v9_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-3-8b-v9_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-3-8b-v9_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 5.48s
meta-llama-meta-llama-3-8b_v9 status is now torndown due to DeploymentManager action
