submission_id: hastagaras-waduh-m-llama-3-8b_v2
developer_uid: Hastagaras
status: torndown
model_repo: Hastagaras/Waduh-M-llama-3-8b
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.0, 'top_p': 0.95, 'min_p': 0.04, 'top_k': 200, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
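These decoding settings map one-to-one onto common sampling APIs. Below is a minimal sketch, assuming vLLM's SamplingParams (the pipeline actually serves through MKML, so this is purely illustrative; the prompt string is hypothetical):

    # Sketch only: the generation_params above expressed as vLLM SamplingParams.
    # Assumes vLLM; the real submission is served by the MKML stack instead.
    from vllm import LLM, SamplingParams

    params = SamplingParams(
        temperature=1.0,
        top_p=0.95,
        min_p=0.04,
        top_k=200,
        presence_penalty=0.0,
        frequency_penalty=0.0,
        stop=["\n"],        # stopping_words: cut generation at the first newline
        max_tokens=64,      # max_output_tokens
        best_of=16,         # sample 16 candidate completions per request
        n=1,
    )

    llm = LLM(model="Hastagaras/Waduh-M-llama-3-8b")
    out = llm.generate(["<hypothetical truncated prompt>"], params)

The best_of: 16 setting is why a separate reward model is attached to the submission: multiple candidate completions are drawn per turn so the reward model can select among them, while max_input_tokens: 512 caps the prompt fed to the model.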
formatter: {'memory_template': "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': True}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': True}
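Concretely, the formatter renders the conversation into Llama-3 chat markup for the base model, while the reward_formatter renders the same history as plain text for the GPT-2 reward model. A minimal sketch of the template mechanics (persona, names, and messages are invented):

    # Sketch only: rendering one hypothetical exchange through both template sets.
    formatter = {
        "memory_template": "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n####\n",
        "prompt_template": "{prompt}<|eot_id|>",
        "user_template": "<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>",
        "bot_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>",
        "response_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:",
    }

    chat_prompt = (
        formatter["memory_template"].format(bot_name="Aldric", memory="A stoic knight.")
        + formatter["prompt_template"].format(prompt="Night falls over the gate.")
        + formatter["user_template"].format(user_name="Traveler", message="Who goes there?")
        + formatter["response_template"].format(bot_name="Aldric")  # model completes from here
    )

    reward_input = (
        "{bot_name}'s Persona: {memory}\n####\n".format(bot_name="Aldric", memory="A stoic knight.")
        + "{prompt}\n<START>\n".format(prompt="Night falls over the gate.")
        + "{user_name}: {message}\n".format(user_name="Traveler", message="Who goes there?")
        + "{bot_name}:".format(bot_name="Aldric")
    )

truncate_by_message: True suggests that when the rendered history exceeds max_input_tokens, whole messages are dropped from the front rather than cut mid-message (a reading of the flag name; the truncation code itself is not shown in this dump).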
timestamp: 2024-05-19T03:09:50+00:00
model_name: mwrge-test
model_eval_status: success
model_group: Hastagaras/Waduh-M-llama
num_battles: 34257
num_wins: 18835
celo_rating: 1204.29
propriety_score: 0.0
propriety_total_count: 0.0
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: mwrge-test
ineligible_reason: propriety_total_count < 800
language_model: Hastagaras/Waduh-M-llama-3-8b
model_size: 8B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-05-18
win_ratio: 0.5498146364246723
preference_data_url: None
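The derived fields above are internally consistent: win_ratio is num_wins / num_battles, and, if celo_rating behaves like a standard Elo rating (an assumption; the leaderboard's exact formula is not given here), a ~0.55 win rate corresponds to roughly a 35-point edge over the average opponent. A quick check:

    import math

    num_wins, num_battles = 18835, 34257
    win_ratio = num_wins / num_battles
    print(win_ratio)        # 0.5498146364246723, matching the field above

    # Standard Elo expected-score identity, solved for the rating gap.
    # Assumption: celo_rating uses the usual 400-point logistic scale.
    gap = 400 * math.log10(win_ratio / (1 - win_ratio))
    print(round(gap, 1))    # ~34.7 rating points above the average opponent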
Running pipeline stage MKMLizer
Starting job with name hastagaras-waduh-m-llama-3-8b-v2-mkmlizer
Waiting for job on hastagaras-waduh-m-llama-3-8b-v2-mkmlizer to finish
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ╔══════════════════════════════════════════════════════════════════╗
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║                   [ flywheel ASCII-art logo ]                      ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║                                                                    ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║  Version: 0.8.14                                                   ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║  Copyright 2023 MK ONE TECHNOLOGIES Inc.                           ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║  https://mk1.ai                                                    ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║                                                                    ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║  The license key for the current software has been verified as    ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║  belonging to:                                                     ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║                                                                    ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║  Chai Research Corp.                                               ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║  Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f                  ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║  Expiration: 2024-07-15 23:59:59                                   ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ║                                                                    ║
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: ╚══════════════════════════════════════════════════════════════════╝
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: warnings.warn(warning_message, FutureWarning)
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: Downloaded to shared memory in 14.109s
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: quantizing model to /dev/shm/model_cache
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: Loading 0: 95%|█████████▍| 275/291 [00:06<00:00, 83.32it/s]
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: quantized model in 18.097s
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: Processed model Hastagaras/Waduh-M-llama-3-8b in 33.363s
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: creating bucket guanaco-mkml-models
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/hastagaras-waduh-m-llama-3-8b-v2
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/hastagaras-waduh-m-llama-3-8b-v2/config.json
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/hastagaras-waduh-m-llama-3-8b-v2/special_tokens_map.json
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/hastagaras-waduh-m-llama-3-8b-v2/tokenizer_config.json
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/hastagaras-waduh-m-llama-3-8b-v2/tokenizer.json
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/hastagaras-waduh-m-llama-3-8b-v2/flywheel_model.0.safetensors
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: warnings.warn(
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: warnings.warn(
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: warnings.warn(
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: return self.fget.__get__(instance, owner)()
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: creating bucket guanaco-reward-models
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: Bucket 's3://guanaco-reward-models/' created
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/hastagaras-waduh-m-llama-3-8b-v2_reward
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/hastagaras-waduh-m-llama-3-8b-v2_reward/tokenizer_config.json
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/hastagaras-waduh-m-llama-3-8b-v2_reward/merges.txt
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/hastagaras-waduh-m-llama-3-8b-v2_reward/config.json
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/hastagaras-waduh-m-llama-3-8b-v2_reward/special_tokens_map.json
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/hastagaras-waduh-m-llama-3-8b-v2_reward/vocab.json
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/hastagaras-waduh-m-llama-3-8b-v2_reward/tokenizer.json
hastagaras-waduh-m-llama-3-8b-v2-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/hastagaras-waduh-m-llama-3-8b-v2_reward/reward.tensors
Job hastagaras-waduh-m-llama-3-8b-v2-mkmlizer completed after 62.83s with status: succeeded
Stopping job with name hastagaras-waduh-m-llama-3-8b-v2-mkmlizer
Pipeline stage MKMLizer completed in 66.57s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service hastagaras-waduh-m-llama-3-8b-v2
Waiting for inference service hastagaras-waduh-m-llama-3-8b-v2 to be ready
Inference service hastagaras-waduh-m-llama-3-8b-v2 ready after 30.421388387680054s
Pipeline stage ISVCDeployer completed in 37.51s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.2510900497436523s
Received healthy response to inference request in 1.3005516529083252s
Received healthy response to inference request in 1.2557432651519775s
Received healthy response to inference request in 1.2241430282592773s
Received healthy response to inference request in 1.271615743637085s
5 requests
0 failed requests
5th percentile: 1.2304630756378174
10th percentile: 1.2367831230163575
20th percentile: 1.2494232177734375
30th percentile: 1.258917760848999
40th percentile: 1.265266752243042
50th percentile: 1.271615743637085
60th percentile: 1.2831901073455811
70th percentile: 1.294764471054077
80th percentile: 1.4906593322753907
90th percentile: 1.8708746910095215
95th percentile: 2.060982370376587
99th percentile: 2.213068513870239
mean time: 1.4606287479400635
Pipeline stage StressChecker completed in 7.94s
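For reference, the StressChecker statistics above are exactly what linear-interpolation percentiles (numpy's default method) give over the five logged request latencies; a sketch that reproduces them:

    import numpy as np

    # The five healthy response times logged above, in seconds.
    latencies = [2.2510900497436523, 1.3005516529083252, 1.2557432651519775,
                 1.2241430282592773, 1.271615743637085]

    for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{p}th percentile: {np.percentile(latencies, p)}")
    print("mean time:", np.mean(latencies))   # 1.4606287479400635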
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.03s
Running pipeline stage DaemonicSafetyScorer
Pipeline stage DaemonicSafetyScorer completed in 0.03s
Running M-Eval for topic stay_in_character
hastagaras-waduh-m-llama-3-8b_v2 status is now deployed due to DeploymentManager action
M-Eval Dataset for topic stay_in_character is loaded
hastagaras-waduh-m-llama-3-8b_v2 status is now inactive due to auto-deactivation of underperforming models
admin requested tearing down of hastagaras-waduh-m-llama-3-8b_v2
Running pipeline stage ISVCDeleter
Checking if service hastagaras-waduh-m-llama-3-8b-v2 is running
Tearing down inference service hastagaras-waduh-m-llama-3-8b-v2
Tore down service hastagaras-waduh-m-llama-3-8b-v2
Pipeline stage ISVCDeleter completed in 4.22s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key hastagaras-waduh-m-llama-3-8b-v2/config.json from bucket guanaco-mkml-models
Deleting key hastagaras-waduh-m-llama-3-8b-v2/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key hastagaras-waduh-m-llama-3-8b-v2/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key hastagaras-waduh-m-llama-3-8b-v2/tokenizer.json from bucket guanaco-mkml-models
Deleting key hastagaras-waduh-m-llama-3-8b-v2/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key hastagaras-waduh-m-llama-3-8b-v2_reward/config.json from bucket guanaco-reward-models
Deleting key hastagaras-waduh-m-llama-3-8b-v2_reward/merges.txt from bucket guanaco-reward-models
Deleting key hastagaras-waduh-m-llama-3-8b-v2_reward/reward.tensors from bucket guanaco-reward-models
Deleting key hastagaras-waduh-m-llama-3-8b-v2_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key hastagaras-waduh-m-llama-3-8b-v2_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key hastagaras-waduh-m-llama-3-8b-v2_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key hastagaras-waduh-m-llama-3-8b-v2_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 2.01s
hastagaras-waduh-m-llama-3-8b_v2 status is now torndown due to DeploymentManager action

Usage Metrics / Latency Metrics: [chart data not captured in this dump]