submission_id: megumi21-megumi-chat-7b-v0-7_v2
developer_uid: megumi_10073
status: torndown
model_repo: megumi21/Megumi-Chat-7B-v0.7
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 4, 'max_output_tokens': 64}
formatter: {'memory_template': "### Instruction:\nYou are a creative agent roleplaying as a character called {bot_name}. Stay true to the persona given, reply with short and descriptive sentences. Do not be repetitive.\n{bot_name}'s Persona: {memory}\n", 'prompt_template': '### Input:\n# Example conversation:\n{prompt}\n# Actual conversation:\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '### Response: {bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
timestamp: 2024-04-03T08:42:09+00:00
model_name: megumi-chat-7b-v7-inst
model_eval_status: success
model_group: megumi21/Megumi-Chat-7B-
num_battles: 15961
num_wins: 7507
celo_rating: 1132.65
propriety_score: 0.0
propriety_total_count: 0.0
submission_type: basic
model_architecture: MistralForCausalLM
model_num_parameters: 7241732096.0
best_of: 4
max_input_tokens: 512
max_output_tokens: 64
display_name: megumi-chat-7b-v7-inst
ineligible_reason: propriety_total_count < 800
language_model: megumi21/Megumi-Chat-7B-v0.7
model_size: 7B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-04-03
win_ratio: 0.47033393897625464
preference_data_url: None
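The generation_params field above specifies the decoding setup: temperature 1.0 with top_p 1.0 and top_k 40, no presence or frequency penalty, a newline stop sequence, prompts limited to 512 input tokens, and best_of 4, i.e. four candidate replies of up to 64 tokens are sampled and the reward model picks the one that is served. A minimal sketch of equivalent sampling settings, assuming a vLLM-style backend (the log does not name the actual inference engine, so the import and class here are an assumption):

from vllm import SamplingParams

# Hypothetical mapping of this submission's generation_params onto vLLM-style
# sampling settings; the real serving stack is not shown in this log.
sampling = SamplingParams(
    n=4,                      # best_of=4: four candidates are generated, the reward model keeps one
    temperature=1.0,
    top_p=1.0,
    min_p=0.0,
    top_k=40,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["\n"],              # stopping_words: cut each reply at the first newline
    max_tokens=64,            # max_output_tokens
)
# The prompt itself would be truncated to max_input_tokens=512 before generation.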
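The formatter and reward_formatter fields define how a conversation is rendered into text: the chat model sees an instruction block with the persona, an example conversation, the actual conversation, and a "### Response: {bot_name}:" cue, while the reward model sees a more compact persona/<START>/history layout. A small sketch of how such templates compose, with made-up persona and messages (the render helper and the sample values are illustrative, not part of the platform code):

# Hypothetical illustration of filling in the formatter templates shown above.
formatter = {
    "memory_template": ("### Instruction:\nYou are a creative agent roleplaying as a character "
                        "called {bot_name}. Stay true to the persona given, reply with short and "
                        "descriptive sentences. Do not be repetitive.\n{bot_name}'s Persona: {memory}\n"),
    "prompt_template": "### Input:\n# Example conversation:\n{prompt}\n# Actual conversation:\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "### Response: {bot_name}:",
}

def render(formatter, bot_name, memory, example_prompt, turns):
    """Assemble the prompt: persona block, example conversation, chat history, response cue."""
    parts = [formatter["memory_template"].format(bot_name=bot_name, memory=memory),
             formatter["prompt_template"].format(prompt=example_prompt)]
    for speaker, message in turns:
        template = formatter["bot_template"] if speaker == bot_name else formatter["user_template"]
        parts.append(template.format(bot_name=bot_name, user_name=speaker, message=message))
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

prompt_text = render(
    formatter,
    bot_name="Megumi",
    memory="A cheerful student who loves astronomy.",
    example_prompt="Megumi: Hi! Want to stargaze tonight?",
    turns=[("User", "What's your favourite constellation?")],
)
print(prompt_text)  # ends with "### Response: Megumi:", after which the model's reply is sampled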
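The win_ratio field is simply num_wins divided by num_battles, as a quick check confirms:

# win_ratio = num_wins / num_battles
num_wins, num_battles = 7507, 15961
print(num_wins / num_battles)  # 0.47033393897625464, matching the win_ratio field above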
Running pipeline stage MKMLizer
Starting job with name megumi21-megumi-chat-7b-v0-7-v2-mkmlizer
Waiting for job on megumi21-megumi-chat-7b-v0-7-v2-mkmlizer to finish
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ _____ __ __ ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ /___/ ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ Version: 0.6.11 ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ The license key for the current software has been verified as ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ belonging to: ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ Chai Research Corp. ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ║ ║
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: .gitattributes: 100%|██████████| 1.52k/1.52k [00:00<00:00, 15.1MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: config.json: 100%|██████████| 654/654 [00:00<00:00, 6.66MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: generation_config.json: 100%|██████████| 132/132 [00:00<00:00, 2.05MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: pytorch_model-00001-of-00003.bin: 100%|█████████▉| 4.94G/4.94G [00:06<00:00, 780MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: pytorch_model-00002-of-00003.bin: 100%|█████████▉| 5.00G/5.00G [00:06<00:00, 752MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: pytorch_model-00003-of-00003.bin: 100%|█████████▉| 4.54G/4.54G [00:06<00:00, 670MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: pytorch_model.bin.index.json: 100%|██████████| 23.9k/23.9k [00:00<00:00, 147MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: special_tokens_map.json: 100%|██████████| 437/437 [00:00<00:00, 5.66MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 45.0MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: tokenizer_config.json: 100%|██████████| 1.51k/1.51k [00:00<00:00, 18.2MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: Downloaded to shared memory in 43.477s
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: quantizing model to /dev/shm/model_cache
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: Saving mkml model at /dev/shm/model_cache
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: Reading /tmp/tmpdubtcf08/pytorch_model.bin.index.json
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: Profiling: 100%|██████████| 291/291 [00:05<00:00, 54.38it/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: quantized model in 18.102s
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: Processed model megumi21/Megumi-Chat-7B-v0.7 in 62.544s
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: creating bucket guanaco-mkml-models
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-7-v2
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-7-v2/special_tokens_map.json
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-7-v2/config.json
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-7-v2/tokenizer.model
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-7-v2/tokenizer_config.json
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-7-v2/tokenizer.json
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /dev/shm/model_cache/mkml_model.tensors s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-7-v2/mkml_model.tensors
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:690: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: warnings.warn(
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: tokenizer_config.json: 100%|██████████| 234/234 [00:00<00:00, 2.27MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 56.3MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 24.5MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: warnings.warn(
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: pytorch_model.bin: 100%|█████████▉| 1.44G/1.44G [00:01<00:00, 774MB/s]
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: Saving duration: 0.291s
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 5.370s
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: creating bucket guanaco-reward-models
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: Bucket 's3://guanaco-reward-models/' created
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-7-v2_reward
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-7-v2_reward/special_tokens_map.json
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-7-v2_reward/config.json
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-7-v2_reward/tokenizer_config.json
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-7-v2_reward/merges.txt
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-7-v2_reward/vocab.json
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-7-v2_reward/tokenizer.json
megumi21-megumi-chat-7b-v0-7-v2-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-7-v2_reward/reward.tensors
Job megumi21-megumi-chat-7b-v0-7-v2-mkmlizer completed after 84.76s with status: succeeded
Stopping job with name megumi21-megumi-chat-7b-v0-7-v2-mkmlizer
Pipeline stage MKMLizer completed in 89.10s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service megumi21-megumi-chat-7b-v0-7-v2
Waiting for inference service megumi21-megumi-chat-7b-v0-7-v2 to be ready
Inference service megumi21-megumi-chat-7b-v0-7-v2 ready after 40.35738682746887s
Pipeline stage ISVCDeployer completed in 47.34s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.5475497245788574s
Received healthy response to inference request in 1.012707233428955s
Received healthy response to inference request in 0.6837847232818604s
Received healthy response to inference request in 0.9877159595489502s
Received healthy response to inference request in 1.0808095932006836s
5 requests
0 failed requests
5th percentile: 0.7445709705352783
10th percentile: 0.8053572177886963
20th percentile: 0.9269297122955322
30th percentile: 0.9927142143249512
40th percentile: 1.002710723876953
50th percentile: 1.012707233428955
60th percentile: 1.0399481773376464
70th percentile: 1.0671891212463378
80th percentile: 1.1741576194763184
90th percentile: 1.360853672027588
95th percentile: 1.4542016983032227
99th percentile: 1.5288801193237305
mean time: 1.0625134468078614
Pipeline stage StressChecker completed in 6.21s
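The StressChecker percentiles above match linearly interpolated percentiles over the five reported response times (numpy's default method), and the mean is their arithmetic average. A short sketch that reproduces the figures in this log:

import numpy as np

# The five healthy response times reported above, in seconds.
latencies = [1.5475497245788574, 1.012707233428955, 0.6837847232818604,
             0.9877159595489502, 1.0808095932006836]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    # np.percentile's default linear interpolation reproduces the log values,
    # e.g. 0.7445... at the 5th percentile and 1.0127... at the 50th.
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print("mean time:", np.mean(latencies))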
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.04s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.05s
M-Eval Dataset for topic stay_in_character is loaded
megumi21-megumi-chat-7b-v0-7_v2 status is now deployed due to DeploymentManager action
megumi21-megumi-chat-7b-v0-7_v2 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of megumi21-megumi-chat-7b-v0-7_v2
Running pipeline stage ISVCDeleter
Checking if service megumi21-megumi-chat-7b-v0-7-v2 is running
Tearing down inference service megumi21-megumi-chat-7b-v0-7-v2
Tore down service megumi21-megumi-chat-7b-v0-7-v2
Pipeline stage ISVCDeleter completed in 3.75s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key megumi21-megumi-chat-7b-v0-7-v2/config.json from bucket guanaco-mkml-models
Deleting key megumi21-megumi-chat-7b-v0-7-v2/mkml_model.tensors from bucket guanaco-mkml-models
Deleting key megumi21-megumi-chat-7b-v0-7-v2/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key megumi21-megumi-chat-7b-v0-7-v2/tokenizer.json from bucket guanaco-mkml-models
Deleting key megumi21-megumi-chat-7b-v0-7-v2/tokenizer.model from bucket guanaco-mkml-models
Deleting key megumi21-megumi-chat-7b-v0-7-v2/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key megumi21-megumi-chat-7b-v0-7-v2_reward/config.json from bucket guanaco-reward-models
Deleting key megumi21-megumi-chat-7b-v0-7-v2_reward/merges.txt from bucket guanaco-reward-models
Deleting key megumi21-megumi-chat-7b-v0-7-v2_reward/reward.tensors from bucket guanaco-reward-models
Deleting key megumi21-megumi-chat-7b-v0-7-v2_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key megumi21-megumi-chat-7b-v0-7-v2_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key megumi21-megumi-chat-7b-v0-7-v2_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key megumi21-megumi-chat-7b-v0-7-v2_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 2.20s
megumi21-megumi-chat-7b-v0-7_v2 status is now torndown due to DeploymentManager action

Usage Metrics

Latency Metrics