submission_id: megumi21-megumi-chat-7b-v0-5_v1
developer_uid: megumi_10073
status: rejected
model_repo: megumi21/Megumi-Chat-7B-v0.5
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "### Instruction:\nYou are a creative agent roleplaying as a character called {bot_name}. Stay true to the persona given, reply with short and descriptive sentences. Do not be repetitive.\n{bot_name}'s Persona: {memory}\n", 'prompt_template': '### Input:\n# Example conversation:\n{prompt}\n# Actual conversation:\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '### Response: {bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
timestamp: 2024-03-28T06:12:10+00:00
model_name: megumi-chat-7b-v6
model_eval_status: success
model_group: megumi21/Megumi-Chat-7B-
num_battles: 42
num_wins: 17
celo_rating: None
propriety_score: 0.0
propriety_total_count: 0.0
submission_type: basic
model_architecture: MistralForCausalLM
model_num_parameters: 7241732096.0
best_of: 8
max_input_tokens: 512
max_output_tokens: 64
display_name: megumi-chat-7b-v6
ineligible_reason: model is not deployable
language_model: megumi21/Megumi-Chat-7B-v0.5
model_size: 7B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-03-27
win_ratio: 0.40476190476190477
preference_data_url: None
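Note: win_ratio above is simply num_wins / num_battles, i.e. 17 / 42 ≈ 0.4048.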
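For readers unfamiliar with these fields, here is a minimal sketch of how the generation_params above could be applied, assuming a stock Hugging Face transformers stack rather than the platform's actual serving code. The zero-valued min_p, presence_penalty and frequency_penalty are no-ops and are omitted; best_of: 8 means eight candidates are sampled and the reward model then picks one.

    # Sketch only: stock transformers stand-in for the serving stack.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("megumi21/Megumi-Chat-7B-v0.5")
    model = AutoModelForCausalLM.from_pretrained("megumi21/Megumi-Chat-7B-v0.5")

    prompt = "..."  # assembled from the formatter templates (see next sketch)
    inputs = tokenizer(prompt, return_tensors="pt",
                       truncation=True, max_length=512)  # max_input_tokens: 512
    with torch.no_grad():
        out = model.generate(
            **inputs,
            do_sample=True,
            temperature=1.0,
            top_p=1.0,
            top_k=40,
            max_new_tokens=64,           # max_output_tokens: 64
            num_return_sequences=8,      # best_of: 8
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens and cut at the stopping word '\n'.
    prompt_len = inputs["input_ids"].shape[1]
    candidates = [
        tokenizer.decode(seq[prompt_len:], skip_special_tokens=True).split("\n")[0]
        for seq in out
    ]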
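Likewise, the formatter and reward_formatter dicts are plain Python format strings. A hypothetical helper (build_prompt and its turns argument are illustrative, not part of the platform's API) shows how either template set could be expanded into a final prompt:

    # Hypothetical helper: expand a template dict into one prompt string.
    def build_prompt(tmpl, bot_name, user_name, memory, example_convo, turns):
        parts = [tmpl["memory_template"].format(bot_name=bot_name, memory=memory)]
        parts.append(tmpl["prompt_template"].format(prompt=example_convo))
        for speaker, message in turns:  # turns: list of (speaker, message) pairs
            if speaker == bot_name:
                parts.append(tmpl["bot_template"].format(bot_name=bot_name, message=message))
            else:
                parts.append(tmpl["user_template"].format(user_name=user_name, message=message))
        parts.append(tmpl["response_template"].format(bot_name=bot_name))
        return "".join(parts)

Calling build_prompt with the formatter dict would yield the text sent to the language model, while the reward_formatter dict would yield the text scored by the reward model; the latter ends at the bare "{bot_name}:" so that a candidate completion can be appended after it for scoring.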
Running pipeline stage MKMLizer
Starting job with name megumi21-megumi-chat-7b-v0-5-v1-mkmlizer
Waiting for job on megumi21-megumi-chat-7b-v0-5-v1-mkmlizer to finish
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: [flywheel ASCII-art banner]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Version: 0.6.11
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Copyright 2023 MK ONE TECHNOLOGIES Inc.
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: The license key for the current software has been verified as belonging to:
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Chai Research Corp.
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Expiration: 2024-07-15 23:59:59
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: .gitattributes: 100%|██████████| 1.52k/1.52k [00:00<00:00, 17.9MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: config.json: 100%|██████████| 654/654 [00:00<00:00, 5.50MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: generation_config.json: 100%|██████████| 132/132 [00:00<00:00, 1.83MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: pytorch_model-00001-of-00003.bin: 100%|█████████▉| 4.94G/4.94G [00:08<00:00, 609MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: pytorch_model-00002-of-00003.bin: 100%|█████████▉| 5.00G/5.00G [00:07<00:00, 649MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: pytorch_model-00003-of-00003.bin: 100%|█████████▉| 4.54G/4.54G [00:06<00:00, 671MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: pytorch_model.bin.index.json: 100%|██████████| 23.9k/23.9k [00:00<00:00, 126MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: special_tokens_map.json: 100%|██████████| 437/437 [00:00<00:00, 4.53MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 61.0MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: tokenizer_config.json: 100%|██████████| 1.51k/1.51k [00:00<00:00, 24.1MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Downloaded to shared memory in 24.699s
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: quantizing model to /dev/shm/model_cache
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Saving mkml model at /dev/shm/model_cache
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Reading /tmp/tmppuas8z44/pytorch_model.bin.index.json
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Profiling: 100%|██████████| 291/291 [00:04<00:00, 60.25it/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: quantized model in 15.101s
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Processed model megumi21/Megumi-Chat-7B-v0.5 in 40.640s
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: creating bucket guanaco-mkml-models
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v1
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v1/tokenizer_config.json
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v1/special_tokens_map.json
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v1/config.json
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v1/tokenizer.json
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v1/tokenizer.model
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /dev/shm/model_cache/mkml_model.tensors s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v1/mkml_model.tensors
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1067: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: warnings.warn(
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: config.json: 100%|██████████| 1.05k/1.05k [00:00<00:00, 11.6MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:690: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: warnings.warn(
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: tokenizer_config.json: 100%|██████████| 234/234 [00:00<00:00, 2.66MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 49.0MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 37.3MB/s]
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: warnings.warn(
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Saving duration: 0.238s
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 4.514s
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: creating bucket guanaco-reward-models
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-5-v1_reward
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-5-v1_reward/special_tokens_map.json
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-5-v1_reward/config.json
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-5-v1_reward/tokenizer_config.json
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-5-v1_reward/merges.txt
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-5-v1_reward/vocab.json
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-5-v1_reward/tokenizer.json
megumi21-megumi-chat-7b-v0-5-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/megumi21-megumi-chat-7b-v0-5-v1_reward/reward.tensors
Job megumi21-megumi-chat-7b-v0-5-v1-mkmlizer completed after 64.07s with status: succeeded
Stopping job with name megumi21-megumi-chat-7b-v0-5-v1-mkmlizer
Pipeline stage MKMLizer completed in 68.54s
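The reward model staged above is what adjudicates the best_of: 8 candidates. A heavily hedged sketch, assuming the repo loads as a GPT-2 sequence classifier (an assumption; the platform's actual scoring code is not shown here), with reward_prompt and candidates as hypothetical placeholders:

    # Sketch only: rank sampled candidates with the reward model.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    rm_tok = AutoTokenizer.from_pretrained("ChaiML/reward_gpt2_medium_preference_24m_e2")
    rm = AutoModelForSequenceClassification.from_pretrained(
        "ChaiML/reward_gpt2_medium_preference_24m_e2")

    def score(text):
        # Higher logit = preferred response, under the stated assumption.
        enc = rm_tok(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            return rm(**enc).logits[0, 0].item()

    reward_prompt = "..."            # built with reward_formatter
    candidates = ["...", "..."]      # the eight sampled completions
    best = max(candidates, key=lambda c: score(reward_prompt + c))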
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.12s
Running pipeline stage ISVCDeployer
Creating inference service megumi21-megumi-chat-7b-v0-5-v1
Waiting for inference service megumi21-megumi-chat-7b-v0-5-v1 to be ready
Inference service megumi21-megumi-chat-7b-v0-5-v1 ready after 40.27812695503235s
Pipeline stage ISVCDeployer completed in 47.20s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.6586806774139404s
Received healthy response to inference request in 1.1148056983947754s
Received healthy response to inference request in 1.103130578994751s
Received healthy response to inference request in 0.7331113815307617s
Received healthy response to inference request in 1.1119091510772705s
5 requests
0 failed requests
5th percentile: 0.8071152210235596
10th percentile: 0.8811190605163575
20th percentile: 1.0291267395019532
30th percentile: 1.1048862934112549
40th percentile: 1.1083977222442627
50th percentile: 1.1119091510772705
60th percentile: 1.1130677700042724
70th percentile: 1.1142263889312745
80th percentile: 1.2235806941986085
90th percentile: 1.4411306858062745
95th percentile: 1.5499056816101073
99th percentile: 1.6369256782531738
mean time: 1.1443274974823
Pipeline stage StressChecker completed in 6.64s
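The percentile and mean figures above are consistent with numpy's default linear-interpolation percentile over the five response times, so they can be reproduced directly:

    import numpy as np

    # The five healthy response times reported by the StressChecker, in seconds.
    latencies = [1.6586806774139404, 1.1148056983947754, 1.103130578994751,
                 0.7331113815307617, 1.1119091510772705]
    for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{p}th percentile: {np.percentile(latencies, p)}")
    print("mean time:", np.mean(latencies))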
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.04s
Running M-Eval for topic stay_in_character
Running pipeline stage DaemonicSafetyScorer
M-Eval Dataset for topic stay_in_character is loaded
Pipeline stage DaemonicSafetyScorer completed in 0.11s
megumi21-megumi-chat-7b-v0-5_v1 status is now deployed due to DeploymentManager action
megumi21-megumi-chat-7b-v0-5_v1 status is now rejected because its ELO rating is less than the acceptable minimum of 6.5
