submission_id: meta-llama-meta-llama-gu_1295_v1
developer_uid: chai_backend_admin
alignment_samples: 0
best_of: 1
celo_rating: 1043.29
display_name: meta-llama-meta-llama-gu_1295_v1
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 1, 'max_output_tokens': 64}
is_internal_developer: True
language_model: meta-llama/Meta-Llama-Guard-2-8B
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: meta-llama/Meta-Llama-Gu
model_name: meta-llama-meta-llama-gu_1295_v1
model_num_parameters: 8030261248.0
model_repo: meta-llama/Meta-Llama-Guard-2-8B
model_size: 8B
num_battles: 5321
num_wins: 1613
propriety_score: 0.7146507666098807
propriety_total_count: 1174.0
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
status: torndown
submission_type: basic
timestamp: 2024-07-18T23:26:40+00:00
us_pacific_date: 2024-07-18
win_ratio: 0.30313850779928586
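A quick consistency check on the battle statistics above: win_ratio is simply num_wins divided by num_battles.

    # win_ratio sanity check (values from the fields above)
    print(1613 / 5321)  # 0.30313850779928586, matching win_ratio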
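The formatter and reward_formatter values above look like ordinary Python format strings. A minimal sketch of how a conversation could be assembled from them, assuming they are applied with str.format; the bot name, persona, and messages are hypothetical examples, and the real pipeline may assemble and truncate context differently (e.g. to max_input_tokens=512):

    # Minimal sketch, assuming the templates are applied with str.format.
    # Bot name, persona, and messages below are hypothetical examples.
    formatter = {
        "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
        "prompt_template": "{prompt}\n<START>\n",
        "bot_template": "{bot_name}: {message}\n",
        "user_template": "{user_name}: {message}\n",
        "response_template": "{bot_name}:",
    }

    def build_prompt(bot_name, memory, prompt, turns):
        parts = [formatter["memory_template"].format(bot_name=bot_name, memory=memory),
                 formatter["prompt_template"].format(prompt=prompt)]
        for speaker, message in turns:
            template = "bot_template" if speaker == bot_name else "user_template"
            parts.append(formatter[template].format(bot_name=bot_name,
                                                    user_name=speaker, message=message))
        parts.append(formatter["response_template"].format(bot_name=bot_name))
        return "".join(parts)

    print(build_prompt("Bot", "A friendly assistant.", "A casual chat.",
                       [("User", "Hi there!"), ("Bot", "Hello!")]))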
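The generation_params likewise map onto a standard sampling setup. A hedged sketch using Hugging Face transformers follows; the production inference engine is not shown in this log and may differ, and min_p plus the presence/frequency penalties are all zero here, so they are omitted:

    # Hedged sketch: mapping generation_params onto transformers sampling.
    # The production engine is unknown; the commented names on the right are
    # the corresponding fields from the generation_params block above.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "meta-llama/Meta-Llama-Guard-2-8B"
    tok = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo)

    prompt = "Bot's Persona: A friendly assistant.\n####\nUser: Hi there!\nBot:"
    inputs = tok(prompt, return_tensors="pt", truncation=True, max_length=512)  # max_input_tokens
    out = model.generate(
        **inputs,
        do_sample=True,
        temperature=1.0,    # temperature
        top_p=1.0,          # top_p
        top_k=40,           # top_k
        max_new_tokens=64,  # max_output_tokens
    )
    reply = tok.decode(out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    reply = reply.split("\n")[0]  # emulate stopping_words=["\n"] post hoc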
Resubmit model
Running pipeline stage MKMLizer
Starting job with name meta-llama-meta-llama-gu-1295-v1-mkmlizer
Waiting for job on meta-llama-meta-llama-gu-1295-v1-mkmlizer to finish
meta-llama-meta-llama-gu-1295-v1-mkmlizer: Downloaded to shared memory in 51.382s
meta-llama-meta-llama-gu-1295-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpxmxg0zkk, device:0
meta-llama-meta-llama-gu-1295-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
meta-llama-meta-llama-gu-1295-v1-mkmlizer: Loading 0: 99%|█████████▉| 288/291 [00:10<00:00, 5.73it/s] (intermediate progress-bar ticks omitted)
meta-llama-meta-llama-gu-1295-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
meta-llama-meta-llama-gu-1295-v1-mkmlizer: quantized model in 32.101s
meta-llama-meta-llama-gu-1295-v1-mkmlizer: Processed model meta-llama/Meta-Llama-Guard-2-8B in 83.483s
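The three timings above are internally consistent: the 83.483s total is the 51.382s shared-memory download plus the 32.101s quantization.

    # download + quantize == total processing time (values from the log)
    print(round(51.382 + 32.101, 3))  # 83.483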
meta-llama-meta-llama-gu-1295-v1-mkmlizer: creating bucket guanaco-mkml-models
meta-llama-meta-llama-gu-1295-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
meta-llama-meta-llama-gu-1295-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/meta-llama-meta-llama-gu-1295-v1
meta-llama-meta-llama-gu-1295-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/meta-llama-meta-llama-gu-1295-v1/config.json
meta-llama-meta-llama-gu-1295-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/meta-llama-meta-llama-gu-1295-v1/special_tokens_map.json
meta-llama-meta-llama-gu-1295-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/meta-llama-meta-llama-gu-1295-v1/tokenizer_config.json
meta-llama-meta-llama-gu-1295-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/meta-llama-meta-llama-gu-1295-v1/tokenizer.json
meta-llama-meta-llama-gu-1295-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/meta-llama-meta-llama-gu-1295-v1/flywheel_model.0.safetensors
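The cp lines above are plain S3 object copies. A hypothetical boto3 equivalent is sketched below; the actual S3 client used by the mkmlizer is not shown in the log, and credentials are assumed to come from the environment:

    import os
    import boto3

    s3 = boto3.client("s3")
    bucket = "guanaco-mkml-models"
    prefix = "meta-llama-meta-llama-gu-1295-v1"
    src = "/dev/shm/model_cache"
    for name in os.listdir(src):  # config.json, tokenizer files, flywheel_model.0.safetensors
        s3.upload_file(os.path.join(src, name), bucket, f"{prefix}/{name}")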
meta-llama-meta-llama-gu-1295-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
meta-llama-meta-llama-gu-1295-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:950: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-gu-1295-v1-mkmlizer: warnings.warn(
meta-llama-meta-llama-gu-1295-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:778: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-gu-1295-v1-mkmlizer: warnings.warn(
meta-llama-meta-llama-gu-1295-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:469: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-gu-1295-v1-mkmlizer: warnings.warn(
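The three FutureWarnings above are transformers' deprecation notice for the use_auth_token keyword, which the same calls now accept as token. A minimal sketch of the migration, using the reward repo named in the log (whether a token is actually passed here is not visible):

    # Deprecated form that triggers the FutureWarning above:
    #   AutoTokenizer.from_pretrained(repo, use_auth_token="hf_...")
    # Replacement form per the warning text:
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(
        "ChaiML/reward_gpt2_medium_preference_24m_e2",
        token=None,  # or an HF access token string for gated/private repos
    )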
meta-llama-meta-llama-gu-1295-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
meta-llama-meta-llama-gu-1295-v1-mkmlizer: creating bucket guanaco-reward-models
meta-llama-meta-llama-gu-1295-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
meta-llama-meta-llama-gu-1295-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/meta-llama-meta-llama-gu-1295-v1_reward
meta-llama-meta-llama-gu-1295-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/meta-llama-meta-llama-gu-1295-v1_reward/config.json
meta-llama-meta-llama-gu-1295-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/meta-llama-meta-llama-gu-1295-v1_reward/special_tokens_map.json
meta-llama-meta-llama-gu-1295-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/meta-llama-meta-llama-gu-1295-v1_reward/tokenizer_config.json
meta-llama-meta-llama-gu-1295-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/meta-llama-meta-llama-gu-1295-v1_reward/merges.txt
meta-llama-meta-llama-gu-1295-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/meta-llama-meta-llama-gu-1295-v1_reward/vocab.json
meta-llama-meta-llama-gu-1295-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/meta-llama-meta-llama-gu-1295-v1_reward/tokenizer.json
meta-llama-meta-llama-gu-1295-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/meta-llama-meta-llama-gu-1295-v1_reward/reward.tensors
Job meta-llama-meta-llama-gu-1295-v1-mkmlizer completed after 120.31s with status: succeeded
Stopping job with name meta-llama-meta-llama-gu-1295-v1-mkmlizer
Pipeline stage MKMLizer completed in 122.26s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.11s
Running pipeline stage ISVCDeployer
Creating inference service meta-llama-meta-llama-gu-1295-v1
Waiting for inference service meta-llama-meta-llama-gu-1295-v1 to be ready
Failed to get response for submission hastagaras-dirtybu8bl3-r_5954_v3: ('http://hastagaras-dirtybu8bl3-r-5954-v3-predictor-default.tenant-chaiml-guanaco.knative.ord1.coreweave.cloud/v1/models/GPT-J-6B-lit-v2:predict', 'request timeout')
Inference service meta-llama-meta-llama-gu-1295-v1 ready after 102.10623598098755s
Pipeline stage ISVCDeployer completed in 104.18s
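Once ready, the service is reachable over the KServe v1 REST protocol; the hostname pattern is visible in the timeout line earlier in this log. A hedged probe sketch follows; the model path segment and the payload fields are hypothetical:

    import requests

    base = ("http://meta-llama-meta-llama-gu-1295-v1-predictor-default"
            ".tenant-chaiml-guanaco.knative.ord1.coreweave.cloud")
    resp = requests.post(f"{base}/v1/models/model:predict",  # model name segment assumed
                         json={"instances": [{"text": "User: Hi\nBot:"}]},
                         timeout=10)
    print(resp.status_code)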
Running pipeline stage StressChecker
Received healthy response to inference request in 1.2270853519439697s
Received healthy response to inference request in 0.37272167205810547s
Received healthy response to inference request in 1.027482032775879s
Received healthy response to inference request in 0.6405124664306641s
Received healthy response to inference request in 0.6995422840118408s
5 requests
0 failed requests
5th percentile: 0.4262798309326172
10th percentile: 0.4798379898071289
20th percentile: 0.5869543075561523
30th percentile: 0.6523184299468994
40th percentile: 0.6759303569793701
50th percentile: 0.6995422840118408
60th percentile: 0.830718183517456
70th percentile: 0.9618940830230712
80th percentile: 1.0674026966094972
90th percentile: 1.1472440242767334
95th percentile: 1.1871646881103515
99th percentile: 1.2191012191772461
mean time: 0.7934687614440918
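The summary statistics above can be reproduced from the five response times reported by the StressChecker; they match numpy's default linear-interpolation percentiles to full precision (that StressChecker itself uses numpy is an assumption):

    # Reproducing the StressChecker summary from the five latencies above.
    import numpy as np

    latencies = [1.2270853519439697, 0.37272167205810547, 1.027482032775879,
                 0.6405124664306641, 0.6995422840118408]
    for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{p}th percentile: {np.percentile(latencies, p)}")
    print("mean time:", np.mean(latencies))  # 0.7934687614440918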
Pipeline stage StressChecker completed in 4.85s
meta-llama-meta-llama-gu_1295_v1 status is now deployed due to DeploymentManager action
meta-llama-meta-llama-gu_1295_v1 status is now inactive due to auto deactivation (removal of underperforming models)
admin requested tearing down of meta-llama-meta-llama-gu_1295_v1
Running pipeline stage ISVCDeleter
Checking if service meta-llama-meta-llama-gu-1295-v1 is running
Tearing down inference service meta-llama-meta-llama-gu-1295-v1
Service meta-llama-meta-llama-gu-1295-v1 has been torndown
Pipeline stage ISVCDeleter completed in 4.41s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Deleting key meta-llama-meta-llama-gu-1295-v1/config.json from bucket guanaco-mkml-models
Deleting key meta-llama-meta-llama-gu-1295-v1/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key meta-llama-meta-llama-gu-1295-v1/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key meta-llama-meta-llama-gu-1295-v1/tokenizer.json from bucket guanaco-mkml-models
Deleting key meta-llama-meta-llama-gu-1295-v1/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key meta-llama-meta-llama-gu-1295-v1_reward/config.json from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-gu-1295-v1_reward/merges.txt from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-gu-1295-v1_reward/reward.tensors from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-gu-1295-v1_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-gu-1295-v1_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-gu-1295-v1_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-gu-1295-v1_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 5.38s
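For reference, a hypothetical boto3 equivalent of the key-by-key deletions above; the actual MKMLModelDeleter implementation is not shown in the log:

    import boto3

    s3 = boto3.client("s3")
    targets = [("guanaco-mkml-models", "meta-llama-meta-llama-gu-1295-v1/"),
               ("guanaco-reward-models", "meta-llama-meta-llama-gu-1295-v1_reward/")]
    for bucket, prefix in targets:
        listing = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
        for obj in listing.get("Contents", []):
            s3.delete_object(Bucket=bucket, Key=obj["Key"])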
meta-llama-meta-llama-gu_1295_v1 status is now torndown due to DeploymentManager action

Usage Metrics: (chart omitted)

Latency Metrics: (chart omitted)