developer_uid: clover0103
submission_id: deverdever-heavenly-goat-v7_v1
model_name: deverdever-heavenly-goat-v7_v1
model_group: DeverDever/heavenly-goat
status: rejected
timestamp: 2024-04-08T09:43:45+00:00
num_battles: 103
num_wins: 59
family_friendly_score: 0.0
submission_type: basic
model_repo: DeverDever/heavenly-goat-v7
model_architecture: MistralForCausalLM
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
model_num_parameters: 7241732096.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: deverdever-heavenly-goat-v7_v1
ineligible_reason: model is not deployable
is_internal_developer: False
language_model: DeverDever/heavenly-goat-v7
model_size: 7B
ranking_group: single
us_pacific_date: 2024-04-08
win_ratio: 0.5728155339805825
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': '### System:\n- You are {bot_name}.\n- Your personal: {memory}\n- You give replies to {user_name}. Aim to captivate and inspire a continued dialogue.', 'prompt_template': '- Your example dialogue:\n{prompt}\n</s>', 'bot_template': '### {bot_name}: {message}\n', 'user_template': '### {user_name}: {message}\n', 'response_template': '### {bot_name}:', 'truncate_by_message': False}
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'user_template': '{user_name}: {message}\n'}
model_eval_status: error
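The `formatter` templates above drive prompt construction at inference time. As a minimal sketch of how such `{placeholder}` templates might be rendered with Python's `str.format`, using hypothetical bot/user names and messages (not values from this submission):

```python
# Templates copied verbatim from this submission's formatter config.
memory_template = (
    "### System:\n- You are {bot_name}.\n- Your personal: {memory}\n"
    "- You give replies to {user_name}. Aim to captivate and inspire a continued dialogue."
)
user_template = "### {user_name}: {message}\n"
response_template = "### {bot_name}:"

# Hypothetical conversation state, for illustration only.
bot_name, user_name = "Goat", "Alice"
memory = "a friendly mountain goat"

# Assemble a single-turn prompt: system block, one user message,
# then the response template that cues the model to speak as the bot.
prompt = memory_template.format(bot_name=bot_name, user_name=user_name, memory=memory)
prompt += "\n" + user_template.format(user_name=user_name, message="Hi!")
prompt += response_template.format(bot_name=bot_name)
print(prompt)
```

The `stopping_words: ['\n']` generation parameter pairs with this layout: each `### name:` turn ends at a newline, so generation halts at the end of the bot's single reply.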
Resubmit model
Running pipeline stage MKMLizer
Starting job with name deverdever-heavenly-goat-v7-v1-mkmlizer
Waiting for job on deverdever-heavenly-goat-v7-v1-mkmlizer to finish
deverdever-heavenly-goat-v7-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ _____ __ __ ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ /___/ ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ Version: 0.6.11 ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ The license key for the current software has been verified as ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ belonging to: ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ Chai Research Corp. ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ║ ║
deverdever-heavenly-goat-v7-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
deverdever-heavenly-goat-v7-v1-mkmlizer: .gitattributes: 100%|██████████| 1.52k/1.52k [00:00<00:00, 18.6MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: README.md: 100%|██████████| 21.0/21.0 [00:00<00:00, 206kB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: added_tokens.json: 100%|██████████| 42.0/42.0 [00:00<00:00, 347kB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: config.json: 100%|██████████| 617/617 [00:00<00:00, 10.0MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: generation_config.json: 100%|██████████| 111/111 [00:00<00:00, 1.80MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: pytorch_model-00001-of-00002.bin: 100%|█████████▉| 9.94G/9.94G [00:16<00:00, 591MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: pytorch_model.bin.index.json: 100%|██████████| 23.9k/23.9k [00:00<00:00, 92.2MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: special_tokens_map.json: 100%|██████████| 145/145 [00:00<00:00, 1.20MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: tokenizer.json: 100%|██████████| 1.80M/1.80M [00:00<00:00, 34.3MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 61.8MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: tokenizer_config.json: 100%|██████████| 953/953 [00:00<00:00, 11.4MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: Downloaded to shared memory in 25.204s
deverdever-heavenly-goat-v7-v1-mkmlizer: quantizing model to /dev/shm/model_cache
deverdever-heavenly-goat-v7-v1-mkmlizer: Saving mkml model at /dev/shm/model_cache
deverdever-heavenly-goat-v7-v1-mkmlizer: Reading /tmp/tmplcfqtiit/pytorch_model.bin.index.json
deverdever-heavenly-goat-v7-v1-mkmlizer: Profiling: 100%|██████████| 291/291 [00:04<00:00, 63.23it/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: quantized model in 14.469s
deverdever-heavenly-goat-v7-v1-mkmlizer: Processed model DeverDever/heavenly-goat-v7 in 40.473s
deverdever-heavenly-goat-v7-v1-mkmlizer: creating bucket guanaco-mkml-models
deverdever-heavenly-goat-v7-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
deverdever-heavenly-goat-v7-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/deverdever-heavenly-goat-v7-v1
deverdever-heavenly-goat-v7-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v7-v1/config.json
deverdever-heavenly-goat-v7-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v7-v1/special_tokens_map.json
deverdever-heavenly-goat-v7-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/deverdever-heavenly-goat-v7-v1/tokenizer.model
deverdever-heavenly-goat-v7-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v7-v1/tokenizer_config.json
deverdever-heavenly-goat-v7-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v7-v1/tokenizer.json
deverdever-heavenly-goat-v7-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
deverdever-heavenly-goat-v7-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1067: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
deverdever-heavenly-goat-v7-v1-mkmlizer: warnings.warn(
deverdever-heavenly-goat-v7-v1-mkmlizer: config.json: 100%|██████████| 1.05k/1.05k [00:00<00:00, 12.8MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:690: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
deverdever-heavenly-goat-v7-v1-mkmlizer: warnings.warn(
deverdever-heavenly-goat-v7-v1-mkmlizer: tokenizer_config.json: 100%|██████████| 234/234 [00:00<00:00, 1.70MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 43.0MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 40.7MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
deverdever-heavenly-goat-v7-v1-mkmlizer: warnings.warn(
deverdever-heavenly-goat-v7-v1-mkmlizer: pytorch_model.bin: 100%|█████████▉| 1.44G/1.44G [00:02<00:00, 545MB/s]
deverdever-heavenly-goat-v7-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
deverdever-heavenly-goat-v7-v1-mkmlizer: Saving duration: 0.219s
deverdever-heavenly-goat-v7-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 5.763s
deverdever-heavenly-goat-v7-v1-mkmlizer: creating bucket guanaco-reward-models
deverdever-heavenly-goat-v7-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
deverdever-heavenly-goat-v7-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/deverdever-heavenly-goat-v7-v1_reward
deverdever-heavenly-goat-v7-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/deverdever-heavenly-goat-v7-v1_reward/config.json
deverdever-heavenly-goat-v7-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/deverdever-heavenly-goat-v7-v1_reward/tokenizer_config.json
deverdever-heavenly-goat-v7-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/deverdever-heavenly-goat-v7-v1_reward/special_tokens_map.json
deverdever-heavenly-goat-v7-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/deverdever-heavenly-goat-v7-v1_reward/merges.txt
deverdever-heavenly-goat-v7-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/deverdever-heavenly-goat-v7-v1_reward/vocab.json
deverdever-heavenly-goat-v7-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/deverdever-heavenly-goat-v7-v1_reward/tokenizer.json
deverdever-heavenly-goat-v7-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/deverdever-heavenly-goat-v7-v1_reward/reward.tensors
Job deverdever-heavenly-goat-v7-v1-mkmlizer completed after 74.15s with status: succeeded
Stopping job with name deverdever-heavenly-goat-v7-v1-mkmlizer
Pipeline stage MKMLizer completed in 79.76s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.09s
Running pipeline stage ISVCDeployer
Creating inference service deverdever-heavenly-goat-v7-v1
Waiting for inference service deverdever-heavenly-goat-v7-v1 to be ready
Inference service deverdever-heavenly-goat-v7-v1 ready after 40.22731161117554s
Pipeline stage ISVCDeployer completed in 48.29s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.7332615852355957s
Received healthy response to inference request in 1.065021276473999s
Received healthy response to inference request in 1.171849012374878s
Received healthy response to inference request in 0.9980776309967041s
Received healthy response to inference request in 1.1618645191192627s
5 requests
0 failed requests
5th percentile: 1.0114663600921632
10th percentile: 1.024855089187622
20th percentile: 1.05163254737854
30th percentile: 1.0843899250030518
40th percentile: 1.1231272220611572
50th percentile: 1.1618645191192627
60th percentile: 1.1658583164215088
70th percentile: 1.169852113723755
80th percentile: 1.2841315269470215
90th percentile: 1.5086965560913086
95th percentile: 1.6209790706634521
99th percentile: 1.710805082321167
mean time: 1.2260148048400878
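The StressChecker statistics above can be reproduced from the five per-request latencies. The sketch below assumes numpy-style linear-interpolation percentiles (the log does not show which method the pipeline actually uses, but the reported values are consistent with it):

```python
# The five healthy response times reported by the StressChecker, in seconds.
times = sorted([
    1.7332615852355957,
    1.065021276473999,
    1.171849012374878,
    0.9980776309967041,
    1.1618645191192627,
])

def percentile(data, p):
    """Percentile of sorted data via linear interpolation (numpy's default)."""
    pos = p / 100 * (len(data) - 1)   # fractional rank into the sorted sample
    lo = int(pos)
    hi = min(lo + 1, len(data) - 1)
    return data[lo] + (pos - lo) * (data[hi] - data[lo])

for p in (5, 50, 90, 95, 99):
    print(f"{p}th percentile: {percentile(times, p)}")
print("mean time:", sum(times) / len(times))
```

With these inputs the function reproduces the logged values, e.g. a 50th percentile of 1.1618645191192627 (the sample median) and a mean of about 1.226 s.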
Pipeline stage StressChecker completed in 6.94s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.04s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.05s
M-Eval Dataset for topic stay_in_character is loaded
deverdever-heavenly-goat-v7_v1 status is now deployed due to DeploymentManager action
deverdever-heavenly-goat-v7_v1 status is now rejected due to Failing to get Model Eval score