developer_uid: Azazelle
submission_id: magpie-align-llama-3-8b-_7655_v1
model_name: Llama-3-8B-Magpie-Align-v0_1
model_group: Magpie-Align/Llama-3-8B-
status: torndown
timestamp: 2024-07-06T20:00:19+00:00
num_battles: 30486
num_wins: 14447
celo_rating: 1172.95
family_friendly_score: 0.0
submission_type: basic
model_repo: Magpie-Align/Llama-3-8B-Magpie-Align-v0.1
model_architecture: LlamaForCausalLM
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: Llama-3-8B-Magpie-Align-v0_1
is_internal_developer: False
language_model: Magpie-Align/Llama-3-8B-Magpie-Align-v0.1
model_size: 8B
ranking_group: single
us_pacific_date: 2024-07-06
win_ratio: 0.47388965426753266
generation_params: {'temperature': 0.95, 'top_p': 1.0, 'min_p': 0.08, 'top_k': 50, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|eot_id|>'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
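The formatter templates above fully determine the string sent to the language model. As a rough illustration, here is a minimal sketch of how a conversation might be rendered with these Llama-3 templates; the assembly order (memory, then prompt, then alternating messages, then the response header) is an assumption, since the actual serving code is not part of this log.

```python
# Hypothetical sketch: render a conversation using the submission's
# formatter templates. The concatenation order is an assumption.
formatter = {
    "memory_template": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n",
    "prompt_template": "{prompt}<|eot_id|>",
    "bot_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>",
    "user_template": "<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>",
    "response_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:",
}

def render(formatter, bot_name, user_name, memory, prompt, turns):
    """turns: list of (role, message) pairs, role in {'user', 'bot'}."""
    out = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
    out += formatter["prompt_template"].format(prompt=prompt)
    for role, message in turns:
        template = formatter["user_template"] if role == "user" else formatter["bot_template"]
        out += template.format(bot_name=bot_name, user_name=user_name, message=message)
    # The trailing response header cues the model to continue as the bot.
    out += formatter["response_template"].format(bot_name=bot_name)
    return out

rendered = render(formatter, "Aria", "Sam", "a friendly pirate", "Ahoy!",
                  [("user", "Where's the treasure?")])
print(rendered)
```

Note how the `stopping_words` in generation_params (`'\n'` and `'<|eot_id|>'`) pair with this layout: generation stops at the end of a single bot message.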
Running pipeline stage MKMLizer
Starting job with name magpie-align-llama-3-8b-7655-v1-mkmlizer
Waiting for job on magpie-align-llama-3-8b-7655-v1-mkmlizer to finish
magpie-align-llama-3-8b-7655-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ _____ __ __ ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ /___/ ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ Version: 0.8.14 ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ https://mk1.ai ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ The license key for the current software has been verified as ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ belonging to: ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ Chai Research Corp. ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ║ ║
magpie-align-llama-3-8b-7655-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
magpie-align-llama-3-8b-7655-v1-mkmlizer: Downloaded to shared memory in 47.641s
magpie-align-llama-3-8b-7655-v1-mkmlizer: quantizing model to /dev/shm/model_cache
magpie-align-llama-3-8b-7655-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
magpie-align-llama-3-8b-7655-v1-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s]
magpie-align-llama-3-8b-7655-v1-mkmlizer: Loading 0: 97%|█████████▋| 282/291 [00:37<00:00, 116.01it/s]
magpie-align-llama-3-8b-7655-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
magpie-align-llama-3-8b-7655-v1-mkmlizer: quantized model in 58.322s
magpie-align-llama-3-8b-7655-v1-mkmlizer: Processed model Magpie-Align/Llama-3-8B-Magpie-Align-v0.1 in 105.963s
magpie-align-llama-3-8b-7655-v1-mkmlizer: creating bucket guanaco-mkml-models
magpie-align-llama-3-8b-7655-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
magpie-align-llama-3-8b-7655-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/magpie-align-llama-3-8b-7655-v1
magpie-align-llama-3-8b-7655-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/magpie-align-llama-3-8b-7655-v1/config.json
magpie-align-llama-3-8b-7655-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/magpie-align-llama-3-8b-7655-v1/special_tokens_map.json
magpie-align-llama-3-8b-7655-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/magpie-align-llama-3-8b-7655-v1/tokenizer_config.json
magpie-align-llama-3-8b-7655-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/magpie-align-llama-3-8b-7655-v1/tokenizer.json
magpie-align-llama-3-8b-7655-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/magpie-align-llama-3-8b-7655-v1/flywheel_model.0.safetensors
magpie-align-llama-3-8b-7655-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
magpie-align-llama-3-8b-7655-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:919: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
magpie-align-llama-3-8b-7655-v1-mkmlizer: warnings.warn(
magpie-align-llama-3-8b-7655-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
magpie-align-llama-3-8b-7655-v1-mkmlizer: warnings.warn(
magpie-align-llama-3-8b-7655-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:769: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
magpie-align-llama-3-8b-7655-v1-mkmlizer: warnings.warn(
magpie-align-llama-3-8b-7655-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
magpie-align-llama-3-8b-7655-v1-mkmlizer: warnings.warn(
magpie-align-llama-3-8b-7655-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
magpie-align-llama-3-8b-7655-v1-mkmlizer: return self.fget.__get__(instance, owner)()
magpie-align-llama-3-8b-7655-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
magpie-align-llama-3-8b-7655-v1-mkmlizer: Saving duration: 0.434s
magpie-align-llama-3-8b-7655-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 12.908s
magpie-align-llama-3-8b-7655-v1-mkmlizer: creating bucket guanaco-reward-models
magpie-align-llama-3-8b-7655-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
magpie-align-llama-3-8b-7655-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/magpie-align-llama-3-8b-7655-v1_reward
magpie-align-llama-3-8b-7655-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/magpie-align-llama-3-8b-7655-v1_reward/special_tokens_map.json
magpie-align-llama-3-8b-7655-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/magpie-align-llama-3-8b-7655-v1_reward/config.json
magpie-align-llama-3-8b-7655-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/magpie-align-llama-3-8b-7655-v1_reward/tokenizer_config.json
magpie-align-llama-3-8b-7655-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/magpie-align-llama-3-8b-7655-v1_reward/merges.txt
magpie-align-llama-3-8b-7655-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/magpie-align-llama-3-8b-7655-v1_reward/vocab.json
magpie-align-llama-3-8b-7655-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/magpie-align-llama-3-8b-7655-v1_reward/tokenizer.json
magpie-align-llama-3-8b-7655-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/magpie-align-llama-3-8b-7655-v1_reward/reward.tensors
Job magpie-align-llama-3-8b-7655-v1-mkmlizer completed after 289.14s with status: succeeded
Stopping job with name magpie-align-llama-3-8b-7655-v1-mkmlizer
Pipeline stage MKMLizer completed in 290.06s
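The stage above packages both the language model and the reward model (ChaiML/reward_gpt2_medium_preference_24m_e2), which scores the `best_of: 16` candidate completions at serving time. A hedged sketch of best-of-N reranking follows; the generator and scorer here are stand-ins, not Chai's actual serving code.

```python
# Hypothetical sketch of best-of-N sampling with reward reranking, as
# implied by best_of: 16 plus the reward model in the metadata above.
import random

def generate_candidates(prompt, n=16):
    # Stand-in for sampling n completions from the language model.
    return [f"{prompt} candidate-{i}" for i in range(n)]

def reward_score(prompt, completion):
    # Stand-in for the GPT-2-medium preference reward model; returns a
    # pseudo-random score that is stable within one process.
    rng = random.Random(hash((prompt, completion)) & 0xFFFFFFFF)
    return rng.random()

def best_of_n(prompt, n=16):
    # Generate n candidates and return the one the reward model prefers.
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=lambda c: reward_score(prompt, c))
```

The reward_formatter in the metadata exists because the reward model sees a plainer rendering of the chat (no Llama-3 header tokens) than the language model does.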
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.11s
Running pipeline stage ISVCDeployer
Creating inference service magpie-align-llama-3-8b-7655-v1
Waiting for inference service magpie-align-llama-3-8b-7655-v1 to be ready
Inference service magpie-align-llama-3-8b-7655-v1 ready after 50.333632469177246s
Pipeline stage ISVCDeployer completed in 57.41s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.140141725540161s
Received healthy response to inference request in 1.1308422088623047s
Received healthy response to inference request in 1.2713522911071777s
Received healthy response to inference request in 1.242903470993042s
Received healthy response to inference request in 1.308330774307251s
5 requests
0 failed requests
5th percentile: 1.153254461288452
10th percentile: 1.1756667137145995
20th percentile: 1.2204912185668946
30th percentile: 1.248593235015869
40th percentile: 1.2599727630615234
50th percentile: 1.2713522911071777
60th percentile: 1.286143684387207
Connection pool is full, discarding connection: %s
70th percentile: 1.3009350776672364
Connection pool is full, discarding connection: %s
80th percentile: 1.4746929645538331
90th percentile: 1.8074173450469972
95th percentile: 1.973779535293579
99th percentile: 2.1068692874908446
mean time: 1.4187140941619873
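The StressChecker summary above can be reproduced from the five raw latencies, assuming percentiles use linear interpolation between sorted samples (the same convention as numpy's default); this is a sketch of the arithmetic, not the checker's actual implementation.

```python
# Recompute the StressChecker percentiles and mean from the five raw
# request latencies logged above. Assumes linear interpolation between
# order statistics (numpy's default 'linear' method).
latencies = [
    2.140141725540161,
    1.1308422088623047,
    1.2713522911071777,
    1.242903470993042,
    1.308330774307251,
]

def percentile(values, p):
    """p-th percentile (0 <= p <= 100) with linear interpolation."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

for p in (5, 50, 95, 99):
    print(f"{p}th percentile: {percentile(latencies, p)}")
print(f"mean time: {sum(latencies) / len(latencies)}")
```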
Pipeline stage StressChecker completed in 7.92s
magpie-align-llama-3-8b-_7655_v1 status is now deployed due to DeploymentManager action
magpie-align-llama-3-8b-_7655_v1 status is now inactive due to auto-deactivation of underperforming models
admin requested tearing down of magpie-align-llama-3-8b-_7655_v1
Running pipeline stage ISVCDeleter
Checking if service magpie-align-llama-3-8b-7655-v1 is running
Skipping teardown as no inference service was found
Pipeline stage ISVCDeleter completed in 4.12s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key magpie-align-llama-3-8b-7655-v1/config.json from bucket guanaco-mkml-models
Deleting key magpie-align-llama-3-8b-7655-v1/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key magpie-align-llama-3-8b-7655-v1/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key magpie-align-llama-3-8b-7655-v1/tokenizer.json from bucket guanaco-mkml-models
Deleting key magpie-align-llama-3-8b-7655-v1/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key magpie-align-llama-3-8b-7655-v1_reward/config.json from bucket guanaco-reward-models
Deleting key magpie-align-llama-3-8b-7655-v1_reward/merges.txt from bucket guanaco-reward-models
Deleting key magpie-align-llama-3-8b-7655-v1_reward/reward.tensors from bucket guanaco-reward-models
Deleting key magpie-align-llama-3-8b-7655-v1_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key magpie-align-llama-3-8b-7655-v1_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key magpie-align-llama-3-8b-7655-v1_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key magpie-align-llama-3-8b-7655-v1_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 5.95s
magpie-align-llama-3-8b-_7655_v1 status is now torndown due to DeploymentManager action