developer_uid: Hastagaras
submission_id: hastagaras-llama-3-8b-o64_v2
model_name: pleasebesafe
model_group: Hastagaras/llama-3-8b-o6
status: torndown
timestamp: 2024-05-14T22:25:23+00:00
num_battles: 12044
num_wins: 6376
celo_rating: 1199.5
family_friendly_score: 0.0
submission_type: basic
model_repo: Hastagaras/llama-3-8b-o64
model_architecture: LlamaForCausalLM
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: pleasebesafe
is_internal_developer: False
language_model: Hastagaras/llama-3-8b-o64
model_size: 8B
ranking_group: single
us_pacific_date: 2024-05-14
win_ratio: 0.5293922284955165
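The win_ratio above is simply num_wins / num_battles from the fields earlier in this record; a quick sanity check in Python:

```python
# Sanity check: win_ratio should equal num_wins / num_battles
num_battles = 12044
num_wins = 6376

win_ratio = num_wins / num_battles
print(win_ratio)  # ≈ 0.5294, matching the logged value
```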
generation_params: {'temperature': 0.9, 'top_p': 0.9, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 1.1, 'stopping_words': ['\n', '<|eot_id|>'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
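The `best_of: 16` setting means each request samples 16 candidate completions and keeps the one the reward model scores highest. A toy sketch of that selection loop, where `generate_candidate` and `reward_score` are made-up stand-ins for the real language and reward models:

```python
import random

def generate_candidate(prompt: str, rng: random.Random) -> str:
    # Placeholder: a real system would sample from the LLM with the
    # generation_params above (temperature=0.9, top_p=0.9, top_k=40, ...).
    return f"{prompt} [candidate #{rng.randint(0, 10**6)}]"

def reward_score(text: str) -> float:
    # Placeholder for the reward model's scalar preference score.
    return float(len(text) % 7)

def best_of_n(prompt: str, n: int = 16, seed: int = 0) -> str:
    # Sample n candidates, score each, return the highest-scoring one.
    rng = random.Random(seed)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    return max(candidates, key=reward_score)

print(best_of_n("Hello", n=16))
```

This is a sketch of the technique only; the actual scoring path uses the GPT-2 reward model listed under `reward_repo`.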
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou're {bot_name} in this roleplay chat between {user_name} and {bot_name}. Always write your response as {bot_name} based on the following description.\n\nDescription: {memory}\n", 'prompt_template': 'Previously: {prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'bot_template': 'Bot: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n", 'prompt_template': '{prompt}\n', 'response_template': 'Bot:', 'truncate_by_message': False, 'user_template': 'User: {message}\n'}
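The formatter and reward_formatter entries above are ordinary Python format strings with `{bot_name}`, `{user_name}`, `{memory}`, and `{message}` placeholders. Rendering the memory and response templates with illustrative values (the names and persona text here are invented):

```python
# Templates copied from the formatter config above.
memory_template = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You're {bot_name} in this roleplay chat between {user_name} and "
    "{bot_name}. Always write your response as {bot_name} based on the "
    "following description.\n\nDescription: {memory}\n"
)
response_template = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:"

# Example values, made up for illustration.
rendered = memory_template.format(
    bot_name="Aria", user_name="Sam", memory="A friendly tour guide."
)
print(rendered)
print(response_template.format(bot_name="Aria"))
```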
model_eval_status: success
Resubmit model
Running pipeline stage MKMLizer
Starting job with name hastagaras-llama-3-8b-o64-v2-mkmlizer
Waiting for job on hastagaras-llama-3-8b-o64-v2-mkmlizer to finish
hastagaras-llama-3-8b-o64-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ _____ __ __ ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ /___/ ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ Version: 0.8.14 ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ https://mk1.ai ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ The license key for the current software has been verified as ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ belonging to: ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ Chai Research Corp. ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ║ ║
hastagaras-llama-3-8b-o64-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
hastagaras-llama-3-8b-o64-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
hastagaras-llama-3-8b-o64-v2-mkmlizer: warnings.warn(warning_message, FutureWarning)
hastagaras-llama-3-8b-o64-v2-mkmlizer: Downloaded to shared memory in 13.148s
hastagaras-llama-3-8b-o64-v2-mkmlizer: quantizing model to /dev/shm/model_cache
hastagaras-llama-3-8b-o64-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
hastagaras-llama-3-8b-o64-v2-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... Loading 0: 99%|█████████▊| 287/291 [00:06<00:00, 13.98it/s]
hastagaras-llama-3-8b-o64-v2-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
hastagaras-llama-3-8b-o64-v2-mkmlizer: quantized model in 16.897s
hastagaras-llama-3-8b-o64-v2-mkmlizer: Processed model Hastagaras/llama-3-8b-o64 in 31.013s
hastagaras-llama-3-8b-o64-v2-mkmlizer: creating bucket guanaco-mkml-models
hastagaras-llama-3-8b-o64-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
hastagaras-llama-3-8b-o64-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/hastagaras-llama-3-8b-o64-v2
hastagaras-llama-3-8b-o64-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/hastagaras-llama-3-8b-o64-v2/config.json
hastagaras-llama-3-8b-o64-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/hastagaras-llama-3-8b-o64-v2/tokenizer_config.json
hastagaras-llama-3-8b-o64-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/hastagaras-llama-3-8b-o64-v2/special_tokens_map.json
hastagaras-llama-3-8b-o64-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/hastagaras-llama-3-8b-o64-v2/tokenizer.json
hastagaras-llama-3-8b-o64-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/hastagaras-llama-3-8b-o64-v2/flywheel_model.0.safetensors
hastagaras-llama-3-8b-o64-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
hastagaras-llama-3-8b-o64-v2-mkmlizer: warnings.warn(
hastagaras-llama-3-8b-o64-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
hastagaras-llama-3-8b-o64-v2-mkmlizer: return self.fget.__get__(instance, owner)()
hastagaras-llama-3-8b-o64-v2-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
hastagaras-llama-3-8b-o64-v2-mkmlizer: Saving duration: 0.209s
hastagaras-llama-3-8b-o64-v2-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 6.709s
hastagaras-llama-3-8b-o64-v2-mkmlizer: creating bucket guanaco-reward-models
hastagaras-llama-3-8b-o64-v2-mkmlizer: Bucket 's3://guanaco-reward-models/' created
hastagaras-llama-3-8b-o64-v2-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/hastagaras-llama-3-8b-o64-v2_reward
hastagaras-llama-3-8b-o64-v2-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/hastagaras-llama-3-8b-o64-v2_reward/config.json
hastagaras-llama-3-8b-o64-v2-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/hastagaras-llama-3-8b-o64-v2_reward/tokenizer_config.json
hastagaras-llama-3-8b-o64-v2-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/hastagaras-llama-3-8b-o64-v2_reward/special_tokens_map.json
hastagaras-llama-3-8b-o64-v2-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/hastagaras-llama-3-8b-o64-v2_reward/vocab.json
hastagaras-llama-3-8b-o64-v2-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/hastagaras-llama-3-8b-o64-v2_reward/merges.txt
hastagaras-llama-3-8b-o64-v2-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/hastagaras-llama-3-8b-o64-v2_reward/tokenizer.json
hastagaras-llama-3-8b-o64-v2-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/hastagaras-llama-3-8b-o64-v2_reward/reward.tensors
Job hastagaras-llama-3-8b-o64-v2-mkmlizer completed after 52.44s with status: succeeded
Stopping job with name hastagaras-llama-3-8b-o64-v2-mkmlizer
Pipeline stage MKMLizer completed in 57.24s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.09s
Running pipeline stage ISVCDeployer
Creating inference service hastagaras-llama-3-8b-o64-v2
Waiting for inference service hastagaras-llama-3-8b-o64-v2 to be ready
Inference service hastagaras-llama-3-8b-o64-v2 ready after 40.236597299575806s
Pipeline stage ISVCDeployer completed in 48.01s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.27065372467041s
Received healthy response to inference request in 1.3000991344451904s
Received healthy response to inference request in 1.367532730102539s
Received healthy response to inference request in 1.3393146991729736s
Received healthy response to inference request in 1.3120629787445068s
5 requests
0 failed requests
5th percentile: 1.3024919033050537
10th percentile: 1.304884672164917
20th percentile: 1.3096702098846436
30th percentile: 1.3175133228302003
40th percentile: 1.328414011001587
50th percentile: 1.3393146991729736
60th percentile: 1.3506019115447998
70th percentile: 1.361889123916626
80th percentile: 1.5481569290161135
90th percentile: 1.9094053268432618
95th percentile: 2.0900295257568358
99th percentile: 2.234528884887695
mean time: 1.517932653427124
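The stress-check percentiles above are consistent with linear interpolation between closest ranks over the five response times (the default method of numpy's `percentile`). A stdlib-only sketch that reproduces them:

```python
import math

# The five healthy response times logged above, sorted ascending.
latencies = sorted([
    2.27065372467041,
    1.3000991344451904,
    1.367532730102539,
    1.3393146991729736,
    1.3120629787445068,
])

def percentile(xs, p):
    # Linear interpolation between closest ranks (numpy's 'linear' method):
    # fractional rank k over the sorted data, interpolated between neighbors.
    k = (len(xs) - 1) * p / 100
    lo, hi = math.floor(k), math.ceil(k)
    return xs[lo] + (k - lo) * (xs[hi] - xs[lo])

print(percentile(latencies, 5))        # ≈ 1.3024919 (5th percentile)
print(percentile(latencies, 50))       # ≈ 1.3393147 (median)
print(sum(latencies) / len(latencies))  # ≈ 1.5179327 (mean time)
```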
Pipeline stage StressChecker completed in 8.26s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.03s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.04s
M-Eval Dataset for topic stay_in_character is loaded
hastagaras-llama-3-8b-o64_v2 status is now deployed due to DeploymentManager action
hastagaras-llama-3-8b-o64_v2 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of hastagaras-llama-3-8b-o64_v2
Running pipeline stage ISVCDeleter
Checking if service hastagaras-llama-3-8b-o64-v2 is running
Tearing down inference service hastagaras-llama-3-8b-o64-v1
Tearing down inference service hastagaras-llama-3-8b-o64-v2
Tore down service hastagaras-llama-3-8b-o64-v1
Pipeline stage ISVCDeleter completed in 5.56s
Tore down service hastagaras-llama-3-8b-o64-v2
Running pipeline stage MKMLModelDeleter
Pipeline stage ISVCDeleter completed in 5.23s
Cleaning model data from S3
Running pipeline stage MKMLModelDeleter
Cleaning model data from model cache
Connection pool is full, discarding connection: %s
Cleaning model data from S3
Deleting key hastagaras-llama-3-8b-o64-v1/config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key hastagaras-llama-3-8b-o64-v1/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key hastagaras-llama-3-8b-o64-v2/config.json from bucket guanaco-mkml-models
Deleting key hastagaras-llama-3-8b-o64-v2/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key hastagaras-llama-3-8b-o64-v1/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key hastagaras-llama-3-8b-o64-v1/tokenizer.json from bucket guanaco-mkml-models
Deleting key hastagaras-llama-3-8b-o64-v1/tokenizer_config.json from bucket guanaco-mkml-models
Deleting key hastagaras-llama-3-8b-o64-v2/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key hastagaras-llama-3-8b-o64-v2/tokenizer.json from bucket guanaco-mkml-models
Deleting key hastagaras-llama-3-8b-o64-v2/tokenizer_config.json from bucket guanaco-mkml-models
Deleting key hastagaras-llama-3-8b-o64-v1_reward/config.json from bucket guanaco-reward-models
Deleting key hastagaras-llama-3-8b-o64-v1_reward/merges.txt from bucket guanaco-reward-models
Deleting key hastagaras-llama-3-8b-o64-v1_reward/reward.tensors from bucket guanaco-reward-models
Deleting key hastagaras-llama-3-8b-o64-v2_reward/config.json from bucket guanaco-reward-models
Deleting key hastagaras-llama-3-8b-o64-v1_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key hastagaras-llama-3-8b-o64-v2_reward/merges.txt from bucket guanaco-reward-models
Deleting key hastagaras-llama-3-8b-o64-v1_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key hastagaras-llama-3-8b-o64-v2_reward/reward.tensors from bucket guanaco-reward-models
Deleting key hastagaras-llama-3-8b-o64-v1_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key hastagaras-llama-3-8b-o64-v2_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key hastagaras-llama-3-8b-o64-v1_reward/vocab.json from bucket guanaco-reward-models
Deleting key hastagaras-llama-3-8b-o64-v2_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key hastagaras-llama-3-8b-o64-v2_reward/tokenizer_config.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 2.97s
Deleting key hastagaras-llama-3-8b-o64-v2_reward/vocab.json from bucket guanaco-reward-models
hastagaras-llama-3-8b-o64_v1 status is now torndown due to DeploymentManager action
Pipeline stage MKMLModelDeleter completed in 3.25s
hastagaras-llama-3-8b-o64_v2 status is now torndown due to DeploymentManager action