developer_uid: frank2030
submission_id: frank2030-llama3-chat-tu_7635_v1
model_name: frank2030-llama3-chat-tu_7635_v1
model_group: frank2030/llama3_chat_tu
status: torndown
timestamp: 2024-07-02T21:20:42+00:00
num_battles: 13927
num_wins: 5988
celo_rating: 1128.88
family_friendly_score: 0.0
submission_type: basic
model_repo: frank2030/llama3_chat_tune_lora_merged
model_architecture: LlamaForCausalLM
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
model_num_parameters: 8030261248
best_of: 4
max_input_tokens: 512
max_output_tokens: 64
display_name: frank2030-llama3-chat-tu_7635_v1
is_internal_developer: False
language_model: frank2030/llama3_chat_tune_lora_merged
model_size: 8B
ranking_group: single
us_pacific_date: 2024-07-02
win_ratio: 0.42995620018668773
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 4, 'max_output_tokens': 64}
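The serving stack behind these parameters is not shown in the log, but vLLM's SamplingParams happens to accept fields with exactly these names, so a hypothetical mapping might look like the sketch below (max_input_tokens has no SamplingParams counterpart; it governs prompt truncation upstream):

```python
# Hypothetical mapping of generation_params onto vLLM; the actual
# inference backend is not identified in this log.
from vllm import LLM, SamplingParams

params = SamplingParams(
    temperature=1.0,
    top_p=1.0,
    min_p=0.0,
    top_k=40,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["\n"],      # stopping_words: cut the reply at the first newline
    max_tokens=64,    # max_output_tokens
    best_of=4,        # keep 4 candidates; here the reward model presumably
    n=1,              # reranks them (newer vLLM releases dropped best_of)
)

llm = LLM(model="frank2030/llama3_chat_tune_lora_merged")
outputs = llm.generate(["Aria: Hello!\nYou: Hi there.\nAria:"], params)
```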
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
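The formatter and reward_formatter share the same templates. A minimal sketch of how they could be applied to build the final prompt string, with made-up names and messages (the exact assembly order is an assumption):

```python
# Templates copied verbatim from the formatter config above.
memory_template = "{bot_name}'s Persona: {memory}\n####\n"
prompt_template = "{prompt}\n<START>\n"
bot_template = "{bot_name}: {message}\n"
user_template = "{user_name}: {message}\n"
response_template = "{bot_name}:"

def render(bot_name, user_name, memory, prompt, turns):
    """Concatenate persona, scenario prompt, chat history, and the
    response stub that the model is asked to continue."""
    text = memory_template.format(bot_name=bot_name, memory=memory)
    text += prompt_template.format(prompt=prompt)
    for speaker, message in turns:
        if speaker == "bot":
            text += bot_template.format(bot_name=bot_name, message=message)
        else:
            text += user_template.format(user_name=user_name, message=message)
    return text + response_template.format(bot_name=bot_name)

print(render("Aria", "You", "a cheerful guide", "A chat with Aria.",
             [("bot", "Hello!"), ("user", "Hi!")]))
```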
Running pipeline stage MKMLizer
Starting job with name frank2030-llama3-chat-tu-7635-v1-mkmlizer
Waiting for job on frank2030-llama3-chat-tu-7635-v1-mkmlizer to finish
frank2030-llama3-chat-tu-7635-v1-mkmlizer: Downloaded to shared memory in 22.960s
frank2030-llama3-chat-tu-7635-v1-mkmlizer: quantizing model to /dev/shm/model_cache
frank2030-llama3-chat-tu-7635-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
frank2030-llama3-chat-tu-7635-v1-mkmlizer: Loading 0: 100%|█████████▉| 290/291 [00:21<00:00, 6.27it/s]
frank2030-llama3-chat-tu-7635-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
frank2030-llama3-chat-tu-7635-v1-mkmlizer: quantized model in 40.875s
frank2030-llama3-chat-tu-7635-v1-mkmlizer: Processed model frank2030/llama3_chat_tune_lora_merged in 63.836s
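The 0/291 progress bar above is the tensor-by-tensor read of the checkpoint. The mkmlizer internals are not public, but the generic safetensors-plus-tqdm pattern that produces such a bar looks like this (the path is illustrative):

```python
# Generic sketch only; not the actual mkmlizer code.
from safetensors import safe_open
from tqdm import tqdm

tensors = {}
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    # One iteration per named tensor drives the "0/291" counter.
    for name in tqdm(f.keys(), desc="Loading 0"):
        tensors[name] = f.get_tensor(name)
```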
frank2030-llama3-chat-tu-7635-v1-mkmlizer: creating bucket guanaco-mkml-models
frank2030-llama3-chat-tu-7635-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
frank2030-llama3-chat-tu-7635-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/frank2030-llama3-chat-tu-7635-v1
frank2030-llama3-chat-tu-7635-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/frank2030-llama3-chat-tu-7635-v1/config.json
frank2030-llama3-chat-tu-7635-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/frank2030-llama3-chat-tu-7635-v1/special_tokens_map.json
frank2030-llama3-chat-tu-7635-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/frank2030-llama3-chat-tu-7635-v1/tokenizer_config.json
frank2030-llama3-chat-tu-7635-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/frank2030-llama3-chat-tu-7635-v1/tokenizer.json
frank2030-llama3-chat-tu-7635-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/frank2030-llama3-chat-tu-7635-v1/flywheel_model.0.safetensors
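The cp lines above map one-to-one onto S3 uploads; a boto3 sketch with the bucket and keys taken from the log (credentials assumed configured in the environment):

```python
import os
import boto3

s3 = boto3.client("s3")
bucket = "guanaco-mkml-models"
prefix = "frank2030-llama3-chat-tu-7635-v1"
src = "/dev/shm/model_cache"

# Upload each cached artifact under the submission's S3 prefix.
for fname in ["config.json", "special_tokens_map.json",
              "tokenizer_config.json", "tokenizer.json",
              "flywheel_model.0.safetensors"]:
    s3.upload_file(os.path.join(src, fname), bucket, f"{prefix}/{fname}")
```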
frank2030-llama3-chat-tu-7635-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
frank2030-llama3-chat-tu-7635-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:919: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
frank2030-llama3-chat-tu-7635-v1-mkmlizer: warnings.warn(
frank2030-llama3-chat-tu-7635-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
frank2030-llama3-chat-tu-7635-v1-mkmlizer: warnings.warn(
frank2030-llama3-chat-tu-7635-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:769: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
frank2030-llama3-chat-tu-7635-v1-mkmlizer: warnings.warn(
frank2030-llama3-chat-tu-7635-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
frank2030-llama3-chat-tu-7635-v1-mkmlizer: return self.fget.__get__(instance, owner)()
frank2030-llama3-chat-tu-7635-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
frank2030-llama3-chat-tu-7635-v1-mkmlizer: Saving duration: 0.504s
frank2030-llama3-chat-tu-7635-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 7.227s
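A hedged sketch of the reward-model step: load the GPT-2-medium preference model and dump its weights to a single tensors file. Wrapping it as a sequence-classification head and serializing with plain safetensors are both assumptions; the sketch uses `token=` rather than the deprecated `use_auth_token=` that triggers the FutureWarnings above.

```python
import os
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from safetensors.torch import save_file

repo = "ChaiML/reward_gpt2_medium_preference_24m_e2"
tokenizer = AutoTokenizer.from_pretrained(repo)  # pass token=... if gated
model = AutoModelForSequenceClassification.from_pretrained(repo)

# The log stores the weights at /tmp/reward_cache/reward.tensors;
# whether the real format is plain safetensors is an assumption.
os.makedirs("/tmp/reward_cache", exist_ok=True)
save_file(model.state_dict(), "/tmp/reward_cache/reward.tensors")
```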
frank2030-llama3-chat-tu-7635-v1-mkmlizer: creating bucket guanaco-reward-models
frank2030-llama3-chat-tu-7635-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
frank2030-llama3-chat-tu-7635-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/frank2030-llama3-chat-tu-7635-v1_reward
frank2030-llama3-chat-tu-7635-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/frank2030-llama3-chat-tu-7635-v1_reward/special_tokens_map.json
frank2030-llama3-chat-tu-7635-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/frank2030-llama3-chat-tu-7635-v1_reward/tokenizer_config.json
frank2030-llama3-chat-tu-7635-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/frank2030-llama3-chat-tu-7635-v1_reward/config.json
frank2030-llama3-chat-tu-7635-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/frank2030-llama3-chat-tu-7635-v1_reward/merges.txt
frank2030-llama3-chat-tu-7635-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/frank2030-llama3-chat-tu-7635-v1_reward/vocab.json
frank2030-llama3-chat-tu-7635-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/frank2030-llama3-chat-tu-7635-v1_reward/tokenizer.json
frank2030-llama3-chat-tu-7635-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/frank2030-llama3-chat-tu-7635-v1_reward/reward.tensors
Job frank2030-llama3-chat-tu-7635-v1-mkmlizer completed after 124.91s with status: succeeded
Stopping job with name frank2030-llama3-chat-tu-7635-v1-mkmlizer
Pipeline stage MKMLizer completed in 126.13s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.13s
Running pipeline stage ISVCDeployer
Creating inference service frank2030-llama3-chat-tu-7635-v1
Waiting for inference service frank2030-llama3-chat-tu-7635-v1 to be ready
Connection pool is full, discarding connection: %s
Inference service frank2030-llama3-chat-tu-7635-v1 ready after 40.26s
Pipeline stage ISVCDeployer completed in 47.17s
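The ~40 s wait above is presumably a readiness poll against the freshly created inference service. A generic sketch of such a poll; the health URL, timeout, and interval are hypothetical:

```python
import time
import requests

def wait_until_ready(url, timeout=300.0, interval=2.0):
    """Poll a health endpoint until it returns 200 or the timeout expires."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        try:
            if requests.get(url, timeout=5).status_code == 200:
                return time.monotonic() - start
        except requests.RequestException:
            pass  # service not up yet; keep polling
        time.sleep(interval)
    raise TimeoutError(f"service at {url} not ready after {timeout}s")

elapsed = wait_until_ready("http://frank2030-llama3-chat-tu-7635-v1/healthz")
print(f"Inference service ready after {elapsed:.2f}s")
```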
Running pipeline stage StressChecker
Received healthy response to inference request in 1.918s
Received healthy response to inference request in 1.125s
Received healthy response to inference request in 1.137s
Received healthy response to inference request in 1.136s
Received healthy response to inference request in 1.144s
5 requests
0 failed requests
5th percentile: 1.127s
10th percentile: 1.129s
20th percentile: 1.134s
30th percentile: 1.136s
40th percentile: 1.136s
50th percentile: 1.137s
60th percentile: 1.140s
70th percentile: 1.142s
80th percentile: 1.299s
90th percentile: 1.608s
95th percentile: 1.763s
99th percentile: 1.887s
mean time: 1.292s
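The percentile figures above are consistent with linear interpolation over the five response times. A quick reproduction, assuming numpy's default method (whether the StressChecker actually uses numpy is an assumption):

```python
import numpy as np

# The five StressChecker response times, in seconds; extra precision is
# kept so the interpolation reproduces the reported percentiles.
latencies = [1.9179926, 1.1253071, 1.1365151, 1.1357653, 1.1439924]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p):.3f}s")
print(f"mean time: {np.mean(latencies):.3f}s")
```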
Pipeline stage StressChecker completed in 7.21s
frank2030-llama3-chat-tu_7635_v1 status is now deployed due to DeploymentManager action
frank2030-llama3-chat-tu_7635_v1 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of frank2030-llama3-chat-tu_7635_v1
Running pipeline stage ISVCDeleter
Checking if service frank2030-llama3-chat-tu-7635-v1 is running
Skipping teardown as no inference service was found
Pipeline stage ISVCDeleter completed in 5.78s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key frank2030-llama3-chat-tu-7635-v1/config.json from bucket guanaco-mkml-models
Deleting key frank2030-llama3-chat-tu-7635-v1/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key frank2030-llama3-chat-tu-7635-v1/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key frank2030-llama3-chat-tu-7635-v1/tokenizer.json from bucket guanaco-mkml-models
Deleting key frank2030-llama3-chat-tu-7635-v1/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key frank2030-llama3-chat-tu-7635-v1_reward/config.json from bucket guanaco-reward-models
Deleting key frank2030-llama3-chat-tu-7635-v1_reward/merges.txt from bucket guanaco-reward-models
Deleting key frank2030-llama3-chat-tu-7635-v1_reward/reward.tensors from bucket guanaco-reward-models
Deleting key frank2030-llama3-chat-tu-7635-v1_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key frank2030-llama3-chat-tu-7635-v1_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key frank2030-llama3-chat-tu-7635-v1_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key frank2030-llama3-chat-tu-7635-v1_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 5.20s
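The delete-key lines above are the counterpart of the earlier uploads; a boto3 sketch with the bucket names and keys taken from the log:

```python
import boto3

s3 = boto3.client("s3")

# Remove every artifact for both the language model and the reward model.
for bucket, prefix, keys in [
    ("guanaco-mkml-models", "frank2030-llama3-chat-tu-7635-v1",
     ["config.json", "flywheel_model.0.safetensors",
      "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json"]),
    ("guanaco-reward-models", "frank2030-llama3-chat-tu-7635-v1_reward",
     ["config.json", "merges.txt", "reward.tensors",
      "special_tokens_map.json", "tokenizer.json",
      "tokenizer_config.json", "vocab.json"]),
]:
    for key in keys:
        s3.delete_object(Bucket=bucket, Key=f"{prefix}/{key}")
```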
frank2030-llama3-chat-tu_7635_v1 status is now torndown due to DeploymentManager action