developer_uid: clover0103
submission_id: deverdever-heavenly-goat-v9_v1
model_name: deverdever-heavenly-goat-v9_v1
model_group: DeverDever/heavenly-goat
status: torndown
timestamp: 2024-04-12T10:24:13+00:00
num_battles: 5606
num_wins: 2478
celo_rating: 1126.76
family_friendly_score: 0.0
submission_type: basic
model_repo: DeverDever/heavenly-goat-v9
model_architecture: MistralForCausalLM
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
model_num_parameters: 7241732096.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: deverdever-heavenly-goat-v9_v1
is_internal_developer: False
language_model: DeverDever/heavenly-goat-v9
model_size: 7B
ranking_group: single
us_pacific_date: 2024-04-12
win_ratio: 0.4420264002854085
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
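The `best_of: 16` parameter, combined with the reward model listed under `reward_repo`, implies best-of-N sampling: draw N candidate completions and keep the one the reward model scores highest. A minimal sketch of that selection loop — `generate` and `score` below are hypothetical stand-ins, not the actual pipeline code:

```python
# Sketch of best-of-N sampling as implied by best_of=16 plus a reward model:
# draw N candidates, score each, and return the highest-scoring one.
# generate() and score() are hypothetical stand-ins for illustration only.
def best_of_n(generate, score, n=16):
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)

# Toy usage: pick the "best" (here: longest) of three canned completions.
completions = iter(["hi", "hello there", "hey"])
best = best_of_n(lambda: next(completions), score=len, n=3)
print(best)  # "hello there"
```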
formatter: {'memory_template': "### Instruction:\nYou are a creative agent roleplaying as a character called {bot_name}. Stay true to the persona given, reply with short and descriptive sentences. Do not be repetitive.\n{bot_name}'s Persona: {memory}\n", 'prompt_template': '### Input:\n# Example conversation:\n{prompt}\n# Actual conversation:\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': 'User: {message}\n', 'response_template': '### Response:\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
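The formatter templates above use Python-style `{placeholder}` fields, so a prompt can be assembled with plain `str.format`. A sketch using the record's `memory_template` and `response_template` — the character name and persona text are hypothetical examples, not from this submission:

```python
# Sketch: filling this submission's formatter templates with str.format.
# Template strings are copied from the formatter record above; the bot name
# and persona are hypothetical placeholders.
memory_template = (
    "### Instruction:\n"
    "You are a creative agent roleplaying as a character called {bot_name}. "
    "Stay true to the persona given, reply with short and descriptive "
    "sentences. Do not be repetitive.\n"
    "{bot_name}'s Persona: {memory}\n"
)
response_template = "### Response:\n{bot_name}:"

memory = memory_template.format(bot_name="Goat", memory="A heavenly goat.")
response_head = response_template.format(bot_name="Goat")
print(memory + response_head)
```

The `response_template` ends with `{bot_name}:` so that the language model's generation begins immediately after the character's name.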
model_eval_status: success
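The `win_ratio` field follows directly from `num_wins` and `num_battles` in the record above; a one-line consistency check:

```python
# win_ratio is num_wins / num_battles for this submission.
num_battles = 5606
num_wins = 2478
win_ratio = num_wins / num_battles
print(win_ratio)  # ≈ 0.4420264002854085
```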
Resubmit model
Running pipeline stage MKMLizer
Starting job with name deverdever-heavenly-goat-v9-v1-mkmlizer
Waiting for job on deverdever-heavenly-goat-v9-v1-mkmlizer to finish
deverdever-heavenly-goat-v9-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ _____ __ __ ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ /___/ ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ Version: 0.6.11 ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ The license key for the current software has been verified as ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ belonging to: ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ Chai Research Corp. ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ║ ║
deverdever-heavenly-goat-v9-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
deverdever-heavenly-goat-v9-v1-mkmlizer: .gitattributes: 100%|██████████| 1.52k/1.52k [00:00<00:00, 18.7MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: README.md: 100%|██████████| 21.0/21.0 [00:00<00:00, 291kB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: added_tokens.json: 100%|██████████| 42.0/42.0 [00:00<00:00, 702kB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: config.json: 100%|██████████| 609/609 [00:00<00:00, 4.99MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: generation_config.json: 100%|██████████| 116/116 [00:00<00:00, 962kB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: pytorch_model-00001-of-00002.bin: 100%|█████████▉| 9.94G/9.94G [00:08<00:00, 1.13GB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: pytorch_model-00002-of-00002.bin: 100%|█████████▉| 4.54G/4.54G [00:05<00:00, 761MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: pytorch_model.bin.index.json: 100%|██████████| 23.9k/23.9k [00:00<00:00, 18.2MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: special_tokens_map.json: 100%|██████████| 95.0/95.0 [00:00<00:00, 1.57MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: tokenizer.json: 100%|██████████| 1.80M/1.80M [00:00<00:00, 2.64MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 5.89MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: tokenizer_config.json: 100%|██████████| 915/915 [00:00<00:00, 7.39MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: Downloaded to shared memory in 20.253s
deverdever-heavenly-goat-v9-v1-mkmlizer: quantizing model to /dev/shm/model_cache
deverdever-heavenly-goat-v9-v1-mkmlizer: Saving mkml model at /dev/shm/model_cache
deverdever-heavenly-goat-v9-v1-mkmlizer: Reading /tmp/tmp_2txi1ab/pytorch_model.bin.index.json
deverdever-heavenly-goat-v9-v1-mkmlizer: Profiling: 100%|██████████| 291/291 [00:04<00:00, 61.61it/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: quantized model in 14.591s
deverdever-heavenly-goat-v9-v1-mkmlizer: Processed model DeverDever/heavenly-goat-v9 in 35.769s
deverdever-heavenly-goat-v9-v1-mkmlizer: creating bucket guanaco-mkml-models
deverdever-heavenly-goat-v9-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
deverdever-heavenly-goat-v9-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/deverdever-heavenly-goat-v9-v1
deverdever-heavenly-goat-v9-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v9-v1/special_tokens_map.json
deverdever-heavenly-goat-v9-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v9-v1/tokenizer_config.json
deverdever-heavenly-goat-v9-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v9-v1/config.json
deverdever-heavenly-goat-v9-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/deverdever-heavenly-goat-v9-v1/tokenizer.model
deverdever-heavenly-goat-v9-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v9-v1/tokenizer.json
deverdever-heavenly-goat-v9-v1-mkmlizer: cp /dev/shm/model_cache/mkml_model.tensors s3://guanaco-mkml-models/deverdever-heavenly-goat-v9-v1/mkml_model.tensors
deverdever-heavenly-goat-v9-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
deverdever-heavenly-goat-v9-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1067: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
deverdever-heavenly-goat-v9-v1-mkmlizer: warnings.warn(
deverdever-heavenly-goat-v9-v1-mkmlizer: config.json: 100%|██████████| 1.05k/1.05k [00:00<00:00, 13.3MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:690: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
deverdever-heavenly-goat-v9-v1-mkmlizer: warnings.warn(
deverdever-heavenly-goat-v9-v1-mkmlizer: tokenizer_config.json: 100%|██████████| 234/234 [00:00<00:00, 2.83MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 9.14MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 44.8MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
deverdever-heavenly-goat-v9-v1-mkmlizer: warnings.warn(
deverdever-heavenly-goat-v9-v1-mkmlizer: pytorch_model.bin: 100%|█████████▉| 1.44G/1.44G [00:01<00:00, 827MB/s]
deverdever-heavenly-goat-v9-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
deverdever-heavenly-goat-v9-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/deverdever-heavenly-goat-v9-v1_reward/reward.tensors
Job deverdever-heavenly-goat-v9-v1-mkmlizer completed after 63.98s with status: succeeded
Stopping job with name deverdever-heavenly-goat-v9-v1-mkmlizer
Pipeline stage MKMLizer completed in 69.27s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.09s
Running pipeline stage ISVCDeployer
Creating inference service deverdever-heavenly-goat-v9-v1
Waiting for inference service deverdever-heavenly-goat-v9-v1 to be ready
Inference service deverdever-heavenly-goat-v9-v1 ready after 40.25569772720337s
Pipeline stage ISVCDeployer completed in 48.50s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.718055248260498s
Received healthy response to inference request in 1.1686618328094482s
Received healthy response to inference request in 1.1159441471099854s
Received healthy response to inference request in 1.1525158882141113s
Received healthy response to inference request in 0.8753068447113037s
5 requests
0 failed requests
5th percentile: 0.9234343051910401
10th percentile: 0.9715617656707763
20th percentile: 1.067816686630249
30th percentile: 1.1232584953308105
40th percentile: 1.137887191772461
50th percentile: 1.1525158882141113
60th percentile: 1.1589742660522462
70th percentile: 1.1654326438903808
80th percentile: 1.2785405158996583
90th percentile: 1.498297882080078
95th percentile: 1.608176565170288
99th percentile: 1.696079511642456
mean time: 1.2060967922210692
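The percentile figures above are consistent with linear interpolation between the order statistics of the five response times (the default method of `numpy.percentile`). A pure-Python sketch that reproduces them — the `percentile` helper is an illustration, not part of the pipeline:

```python
# Reproduce the StressChecker percentiles from the five response times above,
# assuming linear interpolation between sorted values (numpy's default method).
latencies = [
    1.718055248260498,
    1.1686618328094482,
    1.1159441471099854,
    1.1525158882141113,
    0.8753068447113037,
]

def percentile(data, p):
    """Linearly interpolated p-th percentile of data."""
    xs = sorted(data)
    k = (len(xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (k - lo) * (xs[hi] - xs[lo])

print(percentile(latencies, 5))          # ≈ 0.9234 (5th percentile)
print(percentile(latencies, 50))         # ≈ 1.1525 (50th percentile, the median)
print(percentile(latencies, 99))         # ≈ 1.6961 (99th percentile)
print(sum(latencies) / len(latencies))   # ≈ 1.2061 (mean time)
```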
Pipeline stage StressChecker completed in 6.82s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.04s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.04s
M-Eval Dataset for topic stay_in_character is loaded
deverdever-heavenly-goat-v9_v1 status is now deployed due to DeploymentManager action
deverdever-heavenly-goat-v9_v1 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of deverdever-heavenly-goat-v9_v1
Running pipeline stage ISVCDeleter
Checking if service deverdever-heavenly-goat-v9-v1 is running
Tearing down inference service deverdever-heavenly-goat-v9-v1
Tore down service deverdever-heavenly-goat-v9-v1
Pipeline stage ISVCDeleter completed in 4.74s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key deverdever-heavenly-goat-v9-v1/config.json from bucket guanaco-mkml-models
Deleting key deverdever-heavenly-goat-v9-v1/mkml_model.tensors from bucket guanaco-mkml-models
Deleting key deverdever-heavenly-goat-v9-v1/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key deverdever-heavenly-goat-v9-v1/tokenizer.json from bucket guanaco-mkml-models
Deleting key deverdever-heavenly-goat-v9-v1/tokenizer.model from bucket guanaco-mkml-models
Deleting key deverdever-heavenly-goat-v9-v1/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key deverdever-heavenly-goat-v9-v1_reward/config.json from bucket guanaco-reward-models
Deleting key deverdever-heavenly-goat-v9-v1_reward/merges.txt from bucket guanaco-reward-models
Deleting key deverdever-heavenly-goat-v9-v1_reward/reward.tensors from bucket guanaco-reward-models
Deleting key deverdever-heavenly-goat-v9-v1_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key deverdever-heavenly-goat-v9-v1_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key deverdever-heavenly-goat-v9-v1_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key deverdever-heavenly-goat-v9-v1_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 2.63s
deverdever-heavenly-goat-v9_v1 status is now torndown due to DeploymentManager action