developer_uid: chaiverse_console_tests
submission_id: huggyllama-llama-7b_v178
model_name: huggyllama-llama-7b_v178
model_group: huggyllama/llama-7b
status: torndown
timestamp: 2024-04-04T16:01:59+00:00
num_battles: 5308
num_wins: 1953
celo_rating: 1065.08
family_friendly_score: 0.0
submission_type: basic
model_repo: huggyllama/llama-7b
model_architecture: LlamaForCausalLM
model_num_parameters: 6738415616.0
best_of: 4
max_input_tokens: 512
max_output_tokens: 64
reward_model: default
display_name: huggyllama-llama-7b_v178
is_internal_developer: True
language_model: huggyllama/llama-7b
model_size: 7B
ranking_group: single
us_pacific_date: 2024-04-04
win_ratio: 0.36793519216277315
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 4, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
model_eval_status: success
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
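The formatter and reward_formatter entries above are plain Python str.format templates; together with generation_params they determine exactly what text reaches the model. A minimal sketch of applying them — the persona, scenario, and messages below are hypothetical, only the templates come from this record:

```python
# Minimal sketch of applying the formatter templates above.
# Persona, scenario, and messages are hypothetical examples;
# only the templates themselves come from this submission record.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, memory, scenario, turns):
    """Assemble the context string the model is asked to continue."""
    text = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
    text += formatter["prompt_template"].format(prompt=scenario)
    for speaker, message in turns:
        if speaker == bot_name:
            text += formatter["bot_template"].format(bot_name=speaker, message=message)
        else:
            text += formatter["user_template"].format(user_name=speaker, message=message)
    # The model generates the bot's next utterance after this suffix;
    # decoding stops at '\n' (stopping_words) or 64 tokens (max_output_tokens).
    return text + formatter["response_template"].format(bot_name=bot_name)

print(build_prompt(
    bot_name="Luna",
    memory="a cheerful stargazer",
    scenario="Luna and Alex watch a meteor shower.",
    turns=[("Alex", "Did you see that one?"),
           ("Luna", "I did! Make a wish."),
           ("Alex", "Already done.")],
))
```

With max_input_tokens: 512 the assembled context is additionally truncated to fit; the precise truncation policy is not shown in this log.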
Resubmit model
Running pipeline stage MKMLizer
Starting job with name huggyllama-llama-7b-v178-mkmlizer
Waiting for job on huggyllama-llama-7b-v178-mkmlizer to finish
huggyllama-llama-7b-v178-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
huggyllama-llama-7b-v178-mkmlizer: ║ [flywheel ASCII-art wordmark] ║
huggyllama-llama-7b-v178-mkmlizer: ║ ║
huggyllama-llama-7b-v178-mkmlizer: ║ Version: 0.6.11 ║
huggyllama-llama-7b-v178-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
huggyllama-llama-7b-v178-mkmlizer: ║ ║
huggyllama-llama-7b-v178-mkmlizer: ║ The license key for the current software has been verified as ║
huggyllama-llama-7b-v178-mkmlizer: ║ belonging to: ║
huggyllama-llama-7b-v178-mkmlizer: ║ ║
huggyllama-llama-7b-v178-mkmlizer: ║ Chai Research Corp. ║
huggyllama-llama-7b-v178-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
huggyllama-llama-7b-v178-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
huggyllama-llama-7b-v178-mkmlizer: ║ ║
huggyllama-llama-7b-v178-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
huggyllama-llama-7b-v178-mkmlizer: .gitattributes: 100%|██████████| 1.48k/1.48k [00:00<00:00, 18.2MB/s]
huggyllama-llama-7b-v178-mkmlizer: LICENSE: 100%|██████████| 10.6k/10.6k [00:00<00:00, 109MB/s]
huggyllama-llama-7b-v178-mkmlizer: README.md: 100%|██████████| 472/472 [00:00<00:00, 3.94MB/s]
huggyllama-llama-7b-v178-mkmlizer: config.json: 100%|██████████| 594/594 [00:00<00:00, 4.83MB/s]
huggyllama-llama-7b-v178-mkmlizer: generation_config.json: 100%|██████████| 137/137 [00:00<00:00, 1.23MB/s]
huggyllama-llama-7b-v178-mkmlizer: model-00002-of-00002.safetensors: 100%|█████████▉| 3.50G/3.50G [00:05<00:00, 675MB/s]
huggyllama-llama-7b-v178-mkmlizer: model.safetensors.index.json: 100%|██████████| 26.8k/26.8k [00:00<00:00, 169MB/s]
huggyllama-llama-7b-v178-mkmlizer: pytorch_model.bin.index.json: 100%|██████████| 26.8k/26.8k [00:00<00:00, 161MB/s]
huggyllama-llama-7b-v178-mkmlizer: special_tokens_map.json: 100%|██████████| 411/411 [00:00<00:00, 4.72MB/s]
huggyllama-llama-7b-v178-mkmlizer: tokenizer.json: 100%|██████████| 1.84M/1.84M [00:00<00:00, 15.2MB/s]
huggyllama-llama-7b-v178-mkmlizer: tokenizer.model: 100%|██████████| 500k/500k [00:00<00:00, 48.3MB/s]
huggyllama-llama-7b-v178-mkmlizer: tokenizer_config.json: 100%|██████████| 700/700 [00:00<00:00, 6.18MB/s]
huggyllama-llama-7b-v178-mkmlizer: Downloaded to shared memory in 12.769s
huggyllama-llama-7b-v178-mkmlizer: quantizing model to /dev/shm/model_cache
huggyllama-llama-7b-v178-mkmlizer: Saving mkml model at /dev/shm/model_cache
huggyllama-llama-7b-v178-mkmlizer: Reading /tmp/tmpq0m36jmz/model.safetensors.index.json
huggyllama-llama-7b-v178-mkmlizer: quantized model in 13.420s
huggyllama-llama-7b-v178-mkmlizer: Processed model huggyllama/llama-7b in 26.983s
huggyllama-llama-7b-v178-mkmlizer: cp /dev/shm/model_cache/mkml_model.tensors s3://guanaco-mkml-models/huggyllama-llama-7b-v178/mkml_model.tensors
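The index file read during quantization (model.safetensors.index.json) is the standard Hugging Face sharded-checkpoint manifest; MKML's own quantization step is proprietary and not reproduced here. A minimal sketch of walking that manifest:

```python
# Minimal sketch: walking a standard Hugging Face sharded-checkpoint index,
# the file the log step above reads. "weight_map" maps every tensor name
# to the shard file that stores it.
import json
from collections import defaultdict

with open("model.safetensors.index.json") as f:
    index = json.load(f)

shards = defaultdict(list)
for tensor_name, shard_file in index["weight_map"].items():
    shards[shard_file].append(tensor_name)

for shard_file, names in sorted(shards.items()):
    print(f"{shard_file}: {len(names)} tensors")
```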
huggyllama-llama-7b-v178-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
huggyllama-llama-7b-v178-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1067: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
huggyllama-llama-7b-v178-mkmlizer: warnings.warn(
huggyllama-llama-7b-v178-mkmlizer: config.json: 100%|██████████| 1.05k/1.05k [00:00<00:00, 13.1MB/s]
huggyllama-llama-7b-v178-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:690: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
huggyllama-llama-7b-v178-mkmlizer: warnings.warn(
huggyllama-llama-7b-v178-mkmlizer: tokenizer_config.json: 100%|██████████| 234/234 [00:00<00:00, 2.35MB/s]
huggyllama-llama-7b-v178-mkmlizer: vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 39.5MB/s]
huggyllama-llama-7b-v178-mkmlizer: tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 50.9MB/s]
huggyllama-llama-7b-v178-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
huggyllama-llama-7b-v178-mkmlizer: warnings.warn(
huggyllama-llama-7b-v178-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
huggyllama-llama-7b-v178-mkmlizer: Saving duration: 0.218s
huggyllama-llama-7b-v178-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 5.850s
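Loading this reward model alongside best_of: 4 points at a best-of-N scheme: draw four candidate replies from the language model and keep the one the reward model prefers. The actual serving code is not part of this log; the sketch below is an illustration built on Hugging Face transformers, and it assumes — unverified here — that ChaiML/reward_gpt2_medium_preference_24m_e2 loads as a single-logit sequence-classification model.

```python
# Hedged sketch of best-of-4 reranking with a reward model. Assumptions not
# confirmed by this log: the serving stack resembles Hugging Face transformers,
# and the reward checkpoint exposes a single-logit classification head.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

lm_name = "huggyllama/llama-7b"
rm_name = "ChaiML/reward_gpt2_medium_preference_24m_e2"

lm_tok = AutoTokenizer.from_pretrained(lm_name)
lm = AutoModelForCausalLM.from_pretrained(lm_name, torch_dtype=torch.float16, device_map="auto")
rm_tok = AutoTokenizer.from_pretrained(rm_name)
rm = AutoModelForSequenceClassification.from_pretrained(rm_name)

def best_of_n(prompt: str, n: int = 4) -> str:
    inputs = lm_tok(prompt, return_tensors="pt").to(lm.device)
    # Sampling settings taken from generation_params; presence/frequency
    # penalties are 0.0 in this submission, so they are simply omitted.
    out = lm.generate(
        **inputs,
        do_sample=True,
        temperature=1.0,
        top_p=1.0,
        top_k=40,
        max_new_tokens=64,          # max_output_tokens
        num_return_sequences=n,     # best_of
        pad_token_id=lm_tok.eos_token_id,
    )
    prompt_len = inputs["input_ids"].shape[1]
    completions = [
        lm_tok.decode(seq[prompt_len:], skip_special_tokens=True).split("\n")[0]
        for seq in out                              # stopping_words: ['\n']
    ]
    # Score each candidate in context; keep the one the reward model prefers.
    scores = []
    for c in completions:
        enc = rm_tok(prompt + c, return_tensors="pt", truncation=True)
        with torch.no_grad():
            scores.append(rm(**enc).logits.squeeze().item())
    return completions[max(range(n), key=scores.__getitem__)]
```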
huggyllama-llama-7b-v178-mkmlizer: creating bucket guanaco-reward-models
huggyllama-llama-7b-v178-mkmlizer: Bucket 's3://guanaco-reward-models/' created
huggyllama-llama-7b-v178-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/huggyllama-llama-7b-v178_reward
huggyllama-llama-7b-v178-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/huggyllama-llama-7b-v178_reward/config.json
huggyllama-llama-7b-v178-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/huggyllama-llama-7b-v178_reward/special_tokens_map.json
huggyllama-llama-7b-v178-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/huggyllama-llama-7b-v178_reward/tokenizer_config.json
huggyllama-llama-7b-v178-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/huggyllama-llama-7b-v178_reward/merges.txt
huggyllama-llama-7b-v178-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/huggyllama-llama-7b-v178_reward/vocab.json
huggyllama-llama-7b-v178-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/huggyllama-llama-7b-v178_reward/tokenizer.json
huggyllama-llama-7b-v178-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/huggyllama-llama-7b-v178_reward/reward.tensors
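The cp lines above suggest a CLI-driven upload; for illustration only, the same directory mirror could be done with boto3 (bucket and paths taken from the log):

```python
# Illustrative only: mirror the reward-cache upload shown above using boto3.
# The log itself appears to shell out to a cp-style CLI tool instead.
import os
import boto3

s3 = boto3.client("s3")
local_dir = "/tmp/reward_cache"
bucket = "guanaco-reward-models"
prefix = "huggyllama-llama-7b-v178_reward"

for fname in sorted(os.listdir(local_dir)):
    s3.upload_file(os.path.join(local_dir, fname), bucket, f"{prefix}/{fname}")
    print(f"cp {local_dir}/{fname} s3://{bucket}/{prefix}/{fname}")
```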
Job huggyllama-llama-7b-v178-mkmlizer completed after 53.79s with status: succeeded
Stopping job with name huggyllama-llama-7b-v178-mkmlizer
Pipeline stage MKMLizer completed in 60.54s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.12s
Running pipeline stage ISVCDeployer
Creating inference service huggyllama-llama-7b-v178
Waiting for inference service huggyllama-llama-7b-v178 to be ready
Inference service huggyllama-llama-7b-v178 ready after 50.505688190460205s
Pipeline stage ISVCDeployer completed in 58.65s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.621150255203247s
Received healthy response to inference request in 0.6683237552642822s
Received healthy response to inference request in 0.9632167816162109s
Received healthy response to inference request in 1.8486804962158203s
Received healthy response to inference request in 0.6772429943084717s
5 requests
0 failed requests
5th percentile: 0.6701076030731201
10th percentile: 0.671891450881958
20th percentile: 0.6754591464996338
30th percentile: 0.7344377517700196
40th percentile: 0.8488272666931153
50th percentile: 0.9632167816162109
60th percentile: 1.2263901710510252
70th percentile: 1.4895635604858397
80th percentile: 1.6666563034057618
90th percentile: 1.757668399810791
95th percentile: 1.8031744480133056
99th percentile: 1.8395792865753173
mean time: 1.1557228565216064
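The StressChecker statistics above can be reproduced from the five logged latencies with numpy's default linearly-interpolated percentiles:

```python
# Reproduce the StressChecker percentiles and mean from the five latencies.
import numpy as np

latencies = np.array([
    1.621150255203247,
    0.6683237552642822,
    0.9632167816162109,
    1.8486804962158203,
    0.6772429943084717,
])

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")  # linear interpolation
print(f"mean time: {latencies.mean()}")
```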
Pipeline stage StressChecker completed in 6.61s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.03s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.06s
M-Eval Dataset for topic stay_in_character is loaded
huggyllama-llama-7b_v178 status is now deployed due to DeploymentManager action
huggyllama-llama-7b_v178 status is now inactive due to auto-deactivation of underperforming models
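The auto-deactivation is consistent with the header fields: the model won well under half of its 5308 battles.

```python
# win_ratio from the header: num_wins / num_battles
num_wins, num_battles = 1953, 5308
print(num_wins / num_battles)  # 0.36793519216277315, matching win_ratio above
```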
admin requested tearing down of huggyllama-llama-7b_v178
Running pipeline stage ISVCDeleter
Checking if service huggyllama-llama-7b-v178 is running
Tearing down inference service huggyllama-llama-7b-v178
Tore down service huggyllama-llama-7b-v178
Pipeline stage ISVCDeleter completed in 3.81s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key huggyllama-llama-7b-v178/config.json from bucket guanaco-mkml-models
Deleting key huggyllama-llama-7b-v178/mkml_model.tensors from bucket guanaco-mkml-models
Deleting key huggyllama-llama-7b-v178/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key huggyllama-llama-7b-v178/tokenizer.json from bucket guanaco-mkml-models
Deleting key huggyllama-llama-7b-v178/tokenizer.model from bucket guanaco-mkml-models
Deleting key huggyllama-llama-7b-v178/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key huggyllama-llama-7b-v178_reward/config.json from bucket guanaco-reward-models
Deleting key huggyllama-llama-7b-v178_reward/merges.txt from bucket guanaco-reward-models
Deleting key huggyllama-llama-7b-v178_reward/reward.tensors from bucket guanaco-reward-models
Deleting key huggyllama-llama-7b-v178_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key huggyllama-llama-7b-v178_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key huggyllama-llama-7b-v178_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key huggyllama-llama-7b-v178_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 2.43s
huggyllama-llama-7b_v178 status is now torndown due to DeploymentManager action