submission_id: huggyllama-llama-7b_v171
developer_uid: robert_irvine
status: torndown
model_repo: huggyllama/llama-7b
reward_repo: rirv938/reward_gpt2_preference_24m_e2
generation_params: {'temperature': 0.72, 'top_p': 0.73, 'min_p': 0.0, 'top_k': 1000, 'presence_penalty': 0.7, 'frequency_penalty': 0.3, 'stopping_words': ['</s>', '<|user|>', '###', '\n'], 'max_input_tokens': 512, 'best_of': 8, 'max_output_tokens': 80}
formatter: {'memory_template': "### Instruction:\n\n{bot_name}'s Persona: {memory}.\n\nPlay the role of {bot_name}. Engage in a chat with {user_name} while staying in character. Do not write dialogues and narration for {user_name}. {bot_name} should respond with engaging messages of medium length that encourage responses.", 'prompt_template': '{prompt}\n\n', 'bot_template': '### Response:\n\n{bot_name}: {message}\n\n', 'user_template': '### Input:\n\n{user_name}: {message}\n\n', 'response_template': '### Response:\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': 'Memory: {memory}\n', 'prompt_template': '{prompt}\n', 'bot_template': 'Bot: {message}\n', 'user_template': 'User: {message}\n', 'response_template': 'Bot:', 'truncate_by_message': False}
timestamp: 2024-03-29T16:15:40+00:00
model_name: huggyllama-llama-7b_v171
model_eval_status: pending
model_group: huggyllama/llama-7b
num_battles: 24874
num_wins: 10618
celo_rating: 1102.35
propriety_score: 0.0
propriety_total_count: 0.0
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 6738415616.0
best_of: 8
max_input_tokens: 512
max_output_tokens: 80
display_name: huggyllama-llama-7b_v171
ineligible_reason: max_output_tokens!=64
language_model: huggyllama/llama-7b
model_size: 7B
reward_model: rirv938/reward_gpt2_preference_24m_e2
us_pacific_date: 2024-03-29
win_ratio: 0.42687143201736755
preference_data_url: None
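
Note: the formatter above renders each conversation into an Alpaca-style prompt before generation. A minimal sketch of that string assembly, using the templates verbatim (the scenario prompt_template is omitted; the character name, persona, and messages are made-up placeholders):

# Illustration only: fills the formatter templates from the metadata above.
# "Ava", "Sam", the persona, and the message are hypothetical placeholders.
memory_template = (
    "### Instruction:\n\n{bot_name}'s Persona: {memory}.\n\n"
    "Play the role of {bot_name}. Engage in a chat with {user_name} while "
    "staying in character. Do not write dialogues and narration for "
    "{user_name}. {bot_name} should respond with engaging messages of "
    "medium length that encourage responses."
)
user_template = "### Input:\n\n{user_name}: {message}\n\n"
response_template = "### Response:\n\n{bot_name}:"

prompt = (
    memory_template.format(bot_name="Ava", user_name="Sam",
                           memory="a cheerful barista") + "\n\n"
    + user_template.format(user_name="Sam", message="Hi! What do you recommend?")
    + response_template.format(bot_name="Ava")
)
print(prompt)  # ends with "### Response:\n\nAva:", ready for completion

Generation then halts at the first of the stopping_words ('</s>', '<|user|>', '###', '\n'), so each reply is a single line of at most max_output_tokens (80) tokens.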
Running pipeline stage MKMLizer
Starting job with name huggyllama-llama-7b-v171-mkmlizer
Waiting for job on huggyllama-llama-7b-v171-mkmlizer to finish
huggyllama-llama-7b-v171-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
huggyllama-llama-7b-v171-mkmlizer: ║ [ASCII-art "flywheel" banner]                                         ║
huggyllama-llama-7b-v171-mkmlizer: ║ ║
huggyllama-llama-7b-v171-mkmlizer: ║ Version: 0.6.11 ║
huggyllama-llama-7b-v171-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
huggyllama-llama-7b-v171-mkmlizer: ║ ║
huggyllama-llama-7b-v171-mkmlizer: ║ The license key for the current software has been verified as ║
huggyllama-llama-7b-v171-mkmlizer: ║ belonging to: ║
huggyllama-llama-7b-v171-mkmlizer: ║ ║
huggyllama-llama-7b-v171-mkmlizer: ║ Chai Research Corp. ║
huggyllama-llama-7b-v171-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
huggyllama-llama-7b-v171-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
huggyllama-llama-7b-v171-mkmlizer: ║ ║
huggyllama-llama-7b-v171-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
huggyllama-llama-7b-v171-mkmlizer: .gitattributes: 100%|██████████| 1.48k/1.48k [00:00<00:00, 18.2MB/s]
huggyllama-llama-7b-v171-mkmlizer: LICENSE: 100%|██████████| 10.6k/10.6k [00:00<00:00, 74.8MB/s]
huggyllama-llama-7b-v171-mkmlizer: README.md: 100%|██████████| 472/472 [00:00<00:00, 4.31MB/s]
huggyllama-llama-7b-v171-mkmlizer: config.json: 100%|██████████| 594/594 [00:00<00:00, 7.10MB/s]
huggyllama-llama-7b-v171-mkmlizer: generation_config.json: 100%|██████████| 137/137 [00:00<00:00, 2.14MB/s]
huggyllama-llama-7b-v171-mkmlizer: cp /dev/shm/model_cache/mkml_model.tensors s3://guanaco-mkml-models/huggyllama-llama-7b-v171/mkml_model.tensors
huggyllama-llama-7b-v171-mkmlizer: loading reward model from rirv938/reward_gpt2_preference_24m_e2
huggyllama-llama-7b-v171-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1067: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
huggyllama-llama-7b-v171-mkmlizer: warnings.warn(
huggyllama-llama-7b-v171-mkmlizer: config.json: 100%|██████████| 995/995 [00:00<00:00, 10.3MB/s]
huggyllama-llama-7b-v171-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:690: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
huggyllama-llama-7b-v171-mkmlizer: warnings.warn(
huggyllama-llama-7b-v171-mkmlizer: tokenizer_config.json: 100%|██████████| 234/234 [00:00<00:00, 2.90MB/s]
huggyllama-llama-7b-v171-mkmlizer: vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 7.31MB/s]
huggyllama-llama-7b-v171-mkmlizer: tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 18.0MB/s]
huggyllama-llama-7b-v171-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
huggyllama-llama-7b-v171-mkmlizer: warnings.warn(
huggyllama-llama-7b-v171-mkmlizer: pytorch_model.bin: 100%|█████████▉| 510M/510M [00:01<00:00, 304MB/s]
huggyllama-llama-7b-v171-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
huggyllama-llama-7b-v171-mkmlizer: Saving duration: 0.096s
huggyllama-llama-7b-v171-mkmlizer: Processed model rirv938/reward_gpt2_preference_24m_e2 in 4.661s
huggyllama-llama-7b-v171-mkmlizer: creating bucket guanaco-reward-models
huggyllama-llama-7b-v171-mkmlizer: Bucket 's3://guanaco-reward-models/' created
huggyllama-llama-7b-v171-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/huggyllama-llama-7b-v171_reward
huggyllama-llama-7b-v171-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/huggyllama-llama-7b-v171_reward/config.json
huggyllama-llama-7b-v171-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/huggyllama-llama-7b-v171_reward/special_tokens_map.json
huggyllama-llama-7b-v171-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/huggyllama-llama-7b-v171_reward/merges.txt
huggyllama-llama-7b-v171-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/huggyllama-llama-7b-v171_reward/vocab.json
huggyllama-llama-7b-v171-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/huggyllama-llama-7b-v171_reward/tokenizer_config.json
huggyllama-llama-7b-v171-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/huggyllama-llama-7b-v171_reward/tokenizer.json
huggyllama-llama-7b-v171-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/huggyllama-llama-7b-v171_reward/reward.tensors
Job huggyllama-llama-7b-v171-mkmlizer completed after 103.82s with status: succeeded
Stopping job with name huggyllama-llama-7b-v171-mkmlizer
Pipeline stage MKMLizer completed in 111.83s
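
Note: the reward model converted above is what best_of: 8 in the header refers to: the serving layer draws eight candidate replies and keeps the one the reward model scores highest, with candidates rendered through the reward_formatter ("Bot: {message}"). A minimal sketch, assuming the checkpoint loads as a standard sequence-classification head with a single preference logit (the pipeline's actual scoring interface is not shown in this log, and the function names here are ours):

# Hedged best-of-N re-ranking sketch; names and interface are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "rirv938/reward_gpt2_preference_24m_e2"
tokenizer = AutoTokenizer.from_pretrained(name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
reward_model = AutoModelForSequenceClassification.from_pretrained(name)
reward_model.config.pad_token_id = tokenizer.pad_token_id

def best_of_n(context: str, candidates: list[str]) -> str:
    """Score 'Bot: <candidate>' continuations and keep the top-scored one."""
    batch = tokenizer([context + "Bot: " + c for c in candidates],
                      return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        scores = reward_model(**batch).logits[:, 0]  # assumed single logit
    return candidates[int(scores.argmax())]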
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.29s
Running pipeline stage ISVCDeployer
Creating inference service huggyllama-llama-7b-v171
Waiting for inference service huggyllama-llama-7b-v171 to be ready
Inference service huggyllama-llama-7b-v171 ready after 40.72526144981384s
Pipeline stage ISVCDeployer completed in 57.16s
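
Note: the deployed service samples with the generation_params listed in the header. Those keys (min_p, best_of, presence/frequency penalties, stop strings, max_tokens) line up with vLLM's SamplingParams; whether this stack actually runs vLLM is an assumption, but the mapping would look like:

# Assumed mapping of generation_params onto vLLM; the serving engine itself
# is not identified anywhere in this log.
from vllm import SamplingParams

sampling = SamplingParams(
    n=1,
    best_of=8,                  # 8 candidates per request (see reward re-ranking above)
    temperature=0.72,
    top_p=0.73,
    top_k=1000,
    min_p=0.0,
    presence_penalty=0.7,
    frequency_penalty=0.3,
    stop=["</s>", "<|user|>", "###", "\n"],
    max_tokens=80,              # matches max_output_tokens
)
# max_input_tokens (512) governs prompt truncation and sits outside SamplingParams.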
Running pipeline stage StressChecker
Received healthy response to inference request in 1.985002040863037s
Received healthy response to inference request in 1.3808538913726807s
Received healthy response to inference request in 0.795325517654419s
Received healthy response to inference request in 0.9144916534423828s
Received healthy response to inference request in 1.2792081832885742s
5 requests
0 failed requests
5th percentile: 0.8191587448120117
10th percentile: 0.8429919719696045
20th percentile: 0.8906584262847901
30th percentile: 0.9874349594116211
40th percentile: 1.1333215713500977
50th percentile: 1.2792081832885742
60th percentile: 1.3198664665222168
70th percentile: 1.3605247497558595
80th percentile: 1.501683521270752
90th percentile: 1.7433427810668947
95th percentile: 1.8641724109649658
99th percentile: 1.9608361148834228
mean time: 1.2709762573242187
Pipeline stage StressChecker completed in 17.79s
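
Note: the StressChecker summary is just the mean and linearly interpolated percentiles of the five recorded response times; the figures above can be reproduced exactly:

# Recomputes the StressChecker statistics from the five latencies logged above.
import numpy as np

latencies = [1.985002040863037, 1.3808538913726807, 0.795325517654419,
             0.9144916534423828, 1.2792081832885742]

print("mean time:", np.mean(latencies))  # 1.2709762573242188
for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    # NumPy's default linear interpolation matches the log,
    # e.g. 5th percentile -> 0.8191587448120117
    print(f"{p}th percentile:", np.percentile(latencies, p))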
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.11s
Running M-Eval for topic stay_in_character
Running pipeline stage DaemonicSafetyScorer
M-Eval Dataset for topic stay_in_character is loaded
Pipeline stage DaemonicSafetyScorer completed in 0.24s
huggyllama-llama-7b_v171 status is now deployed due to DeploymentManager action
huggyllama-llama-7b_v171 status is now inactive due to admin request
huggyllama-llama-7b_v171 status is now deployed due to admin request
huggyllama-llama-7b_v171 status is now inactive due to auto-deactivation of underperforming models
admin requested teardown of huggyllama-llama-7b_v171
Running pipeline stage ISVCDeleter
Checking if service huggyllama-llama-7b-v171 is running
Tearing down inference service huggyllama-llama-7b-v171
Tore down service huggyllama-llama-7b-v171
Pipeline stage ISVCDeleter completed in 3.84s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key huggyllama-llama-7b-v171/config.json from bucket guanaco-mkml-models
Deleting key huggyllama-llama-7b-v171/mkml_model.tensors from bucket guanaco-mkml-models
Deleting key huggyllama-llama-7b-v171/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key huggyllama-llama-7b-v171/tokenizer.json from bucket guanaco-mkml-models
Deleting key huggyllama-llama-7b-v171/tokenizer.model from bucket guanaco-mkml-models
Deleting key huggyllama-llama-7b-v171/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key huggyllama-llama-7b-v171_reward/config.json from bucket guanaco-reward-models
Deleting key huggyllama-llama-7b-v171_reward/merges.txt from bucket guanaco-reward-models
Deleting key huggyllama-llama-7b-v171_reward/reward.tensors from bucket guanaco-reward-models
Deleting key huggyllama-llama-7b-v171_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key huggyllama-llama-7b-v171_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key huggyllama-llama-7b-v171_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key huggyllama-llama-7b-v171_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 1.99s
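
Note: MKMLModelDeleter removes the keys one by one from both buckets. A hedged boto3 equivalent of the deletions logged above (the pipeline's actual S3 client is not shown in the log):

# Sketch of the key deletions above, using boto3 as an assumed client.
import boto3

s3 = boto3.client("s3")
targets = {
    ("guanaco-mkml-models", "huggyllama-llama-7b-v171"): [
        "config.json", "mkml_model.tensors", "special_tokens_map.json",
        "tokenizer.json", "tokenizer.model", "tokenizer_config.json",
    ],
    ("guanaco-reward-models", "huggyllama-llama-7b-v171_reward"): [
        "config.json", "merges.txt", "reward.tensors",
        "special_tokens_map.json", "tokenizer.json",
        "tokenizer_config.json", "vocab.json",
    ],
}
for (bucket, prefix), keys in targets.items():
    for key in keys:
        s3.delete_object(Bucket=bucket, Key=f"{prefix}/{key}")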
huggyllama-llama-7b_v171 status is now torndown due to DeploymentManager action
