submission_id: megumi21-megumi-chat-7b-v1-0_v1
developer_uid: megumi_10073
status: torndown
model_repo: megumi21/Megumi-Chat-7B-v1.0
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
timestamp: 2024-04-15T03:28:30+00:00
model_name: megumi-chat-7b-v10
model_eval_status: success
model_group: megumi21/Megumi-Chat-7B-
num_battles: 25840
num_wins: 13677
celo_rating: 1166.67
propriety_score: 0.0
propriety_total_count: 0.0
submission_type: basic
model_architecture: MistralForCausalLM
model_num_parameters: 7241732096.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: megumi-chat-7b-v10
ineligible_reason: propriety_total_count < 800
language_model: megumi21/Megumi-Chat-7B-v1.0
model_size: 7B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-04-14
win_ratio: 0.5292956656346749
preference_data_url: None
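The formatter and generation_params fields above describe, respectively, how a conversation is assembled into a prompt and how completions are sampled from it. Below is a minimal, illustrative sketch of that flow using Hugging Face transformers; the helper function and the example persona/messages are hypothetical, and the production serving path (MKML inference service, left-side input truncation, best-of-16 reranking with the reward model) is only noted in comments, not reproduced.

```python
# Hedged sketch: assemble a prompt from the formatter templates above and
# sample with the listed generation_params. Helper names and example data
# are illustrative; the production MKML serving path is not shown.
from transformers import AutoModelForCausalLM, AutoTokenizer

FORMATTER = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, turns):
    """turns: list of (speaker, message) pairs, speaker in {"bot", "user"}."""
    text = FORMATTER["memory_template"].format(bot_name=bot_name, memory=memory)
    text += FORMATTER["prompt_template"].format(prompt=prompt)
    for speaker, message in turns:
        template = FORMATTER["bot_template"] if speaker == "bot" else FORMATTER["user_template"]
        text += template.format(bot_name=bot_name, user_name=user_name, message=message)
    return text + FORMATTER["response_template"].format(bot_name=bot_name)

tokenizer = AutoTokenizer.from_pretrained("megumi21/Megumi-Chat-7B-v1.0")
model = AutoModelForCausalLM.from_pretrained("megumi21/Megumi-Chat-7B-v1.0")

prompt = build_prompt("Megumi", "User", "A cheerful companion.", "Casual chat.",
                      [("user", "Hi, how are you?")])
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)  # max_input_tokens
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,          # generation_params above
    top_p=1.0,
    top_k=40,
    max_new_tokens=64,        # max_output_tokens
    num_return_sequences=16,  # stands in for best_of=16
    pad_token_id=tokenizer.eos_token_id,
)
candidates = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[1]:],
                                    skip_special_tokens=True)
# Each candidate would then be cut at the first "\n" (stopping_words) and the
# best one selected by the reward model; that reranking step is omitted here.
```

The win_ratio field is likewise just num_wins divided by num_battles:

```python
num_battles, num_wins = 25840, 13677
print(num_wins / num_battles)  # 0.5292956656346749, matching win_ratio
```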
Resubmit model
Running pipeline stage MKMLizer
Starting job with name megumi21-megumi-chat-7b-v1-0-v1-mkmlizer
Waiting for job on megumi21-megumi-chat-7b-v1-0-v1-mkmlizer to finish
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ _____ __ __ ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ /___/ ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ Version: 0.6.11 ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ The license key for the current software has been verified as ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ belonging to: ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ Chai Research Corp. ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ║ ║
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: .gitattributes: 100%|██████████| 1.52k/1.52k [00:00<00:00, 15.4MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: config.json: 100%|██████████| 654/654 [00:00<00:00, 5.22MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: generation_config.json: 100%|██████████| 132/132 [00:00<00:00, 1.08MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: pytorch_model-00001-of-00003.bin: 100%|█████████▉| 4.94G/4.94G [00:10<00:00, 465MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: pytorch_model-00002-of-00003.bin: 100%|█████████▉| 5.00G/5.00G [00:07<00:00, 646MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: special_tokens_map.json: 100%|██████████| 437/437 [00:00<00:00, 3.77MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 5.36MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: tokenizer_config.json: 100%|██████████| 1.51k/1.51k [00:00<00:00, 24.6MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: Downloaded to shared memory in 28.391s
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: quantizing model to /dev/shm/model_cache
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: Saving mkml model at /dev/shm/model_cache
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: Reading /tmp/tmp650vb0g0/pytorch_model.bin.index.json
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: Profiling: 100%|██████████| 291/291 [00:06<00:00, 47.37it/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: quantized model in 16.918s
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: Processed model megumi21/Megumi-Chat-7B-v1.0 in 46.511s
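The stage timings above are consistent: the 28.391 s download and 16.918 s quantization account for nearly all of the 46.511 s total, leaving roughly a second for serializing the tensors and other overhead (quick check):

```python
download_s, quantize_s, total_s = 28.391, 16.918, 46.511
print(total_s - (download_s + quantize_s))  # ~1.2 s of saving/overhead
```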
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: creating bucket guanaco-mkml-models
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v1-0-v1
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v1-0-v1/special_tokens_map.json
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v1-0-v1/tokenizer_config.json
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v1-0-v1/tokenizer.model
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v1-0-v1/config.json
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v1-0-v1/tokenizer.json
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: cp /dev/shm/model_cache/mkml_model.tensors s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v1-0-v1/mkml_model.tensors
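The cp lines above copy the quantized artifacts from the shared-memory cache into the guanaco-mkml-models bucket under the submission's prefix. A minimal, hypothetical boto3 equivalent (bucket, prefix and cache path are taken from the log; the tool actually used there is not shown):

```python
# Hedged sketch: upload everything in the model cache to the submission prefix.
import os
import boto3

BUCKET = "guanaco-mkml-models"
PREFIX = "megumi21-megumi-chat-7b-v1-0-v1"
CACHE_DIR = "/dev/shm/model_cache"

s3 = boto3.client("s3")
for name in sorted(os.listdir(CACHE_DIR)):
    path = os.path.join(CACHE_DIR, name)
    if os.path.isfile(path):
        s3.upload_file(path, BUCKET, f"{PREFIX}/{name}")  # e.g. .../config.json
```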
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1067: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: warnings.warn(
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: config.json: 100%|██████████| 1.05k/1.05k [00:00<00:00, 10.6MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:690: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: warnings.warn(
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: tokenizer_config.json: 100%|██████████| 234/234 [00:00<00:00, 2.80MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 10.7MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 49.5MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: warnings.warn(
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: pytorch_model.bin: 100%|█████████▉| 1.44G/1.44G [00:02<00:00, 506MB/s]
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: Saving duration: 0.296s
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 7.183s
megumi21-megumi-chat-7b-v1-0-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/megumi21-megumi-chat-7b-v1-0-v1_reward/reward.tensors
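The FutureWarning lines earlier in this stage come from passing use_auth_token to the transformers Auto* loaders; that argument is deprecated in favor of token. A minimal sketch of the non-deprecated call (the repository name is from the log; the classification head and the HF_TOKEN variable are assumptions):

```python
import os
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "ChaiML/reward_gpt2_medium_preference_24m_e2"
hf_token = os.environ.get("HF_TOKEN")  # hypothetical source of the access token

# `token=` replaces the deprecated `use_auth_token=` argument.
tokenizer = AutoTokenizer.from_pretrained(repo, token=hf_token)
# Assumption: the reward model loads with a sequence-classification head.
reward_model = AutoModelForSequenceClassification.from_pretrained(repo, token=hf_token)
```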
Job megumi21-megumi-chat-7b-v1-0-v1-mkmlizer completed after 73.79s with status: succeeded
Stopping job with name megumi21-megumi-chat-7b-v1-0-v1-mkmlizer
Pipeline stage MKMLizer completed in 77.74s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service megumi21-megumi-chat-7b-v1-0-v1
Waiting for inference service megumi21-megumi-chat-7b-v1-0-v1 to be ready
Inference service megumi21-megumi-chat-7b-v1-0-v1 ready after 40.22964119911194s
Pipeline stage ISVCDeployer completed in 47.54s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.6851778030395508s
Received healthy response to inference request in 1.1831848621368408s
Received healthy response to inference request in 1.200014591217041s
Received healthy response to inference request in 1.1908462047576904s
Received healthy response to inference request in 1.0323002338409424s
5 requests
0 failed requests
5th percentile: 1.0624771595001221
10th percentile: 1.0926540851593018
20th percentile: 1.153007936477661
30th percentile: 1.1847171306610107
40th percentile: 1.1877816677093507
50th percentile: 1.1908462047576904
60th percentile: 1.1945135593414307
70th percentile: 1.198180913925171
80th percentile: 1.297047233581543
90th percentile: 1.491112518310547
95th percentile: 1.5881451606750487
99th percentile: 1.6657712745666504
mean time: 1.258304738998413
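The percentile and mean figures above match linearly interpolated percentiles over the five response times; a quick reproduction with numpy:

```python
import numpy as np

# The five healthy response times (seconds) reported by the StressChecker stage.
latencies = [1.6851778030395508, 1.1831848621368408, 1.200014591217041,
             1.1908462047576904, 1.0323002338409424]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")  # matches the log
print("mean time:", np.mean(latencies))  # 1.258304738998413
```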
Pipeline stage StressChecker completed in 7.09s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.04s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.03s
M-Eval Dataset for topic stay_in_character is loaded
megumi21-megumi-chat-7b-v1-0_v1 status is now deployed due to DeploymentManager action
megumi21-megumi-chat-7b-v1-0_v1 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of megumi21-megumi-chat-7b-v1-0_v1
Running pipeline stage ISVCDeleter
Checking if service megumi21-megumi-chat-7b-v1-0-v1 is running
Tearing down inference service megumi21-megumi-chat-7b-v1-0-v1
Toredown service megumi21-megumi-chat-7b-v1-0-v1
Pipeline stage ISVCDeleter completed in 4.29s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key megumi21-megumi-chat-7b-v1-0-v1/config.json from bucket guanaco-mkml-models
Deleting key megumi21-megumi-chat-7b-v1-0-v1/mkml_model.tensors from bucket guanaco-mkml-models
Deleting key megumi21-megumi-chat-7b-v1-0-v1/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key megumi21-megumi-chat-7b-v1-0-v1/tokenizer.json from bucket guanaco-mkml-models
Deleting key megumi21-megumi-chat-7b-v1-0-v1/tokenizer.model from bucket guanaco-mkml-models
Deleting key megumi21-megumi-chat-7b-v1-0-v1/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key megumi21-megumi-chat-7b-v1-0-v1_reward/config.json from bucket guanaco-reward-models
Deleting key megumi21-megumi-chat-7b-v1-0-v1_reward/merges.txt from bucket guanaco-reward-models
Deleting key megumi21-megumi-chat-7b-v1-0-v1_reward/reward.tensors from bucket guanaco-reward-models
Deleting key megumi21-megumi-chat-7b-v1-0-v1_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key megumi21-megumi-chat-7b-v1-0-v1_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key megumi21-megumi-chat-7b-v1-0-v1_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key megumi21-megumi-chat-7b-v1-0-v1_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 2.10s
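The deletion lines above remove the per-submission keys that were uploaded earlier, for both the main-model and reward-model buckets. A short, hypothetical boto3 equivalent (bucket names and prefixes are from the log):

```python
import boto3

s3 = boto3.client("s3")
for bucket, prefix in [
    ("guanaco-mkml-models", "megumi21-megumi-chat-7b-v1-0-v1"),
    ("guanaco-reward-models", "megumi21-megumi-chat-7b-v1-0-v1_reward"),
]:
    listing = s3.list_objects_v2(Bucket=bucket, Prefix=prefix + "/")
    for obj in listing.get("Contents", []):
        s3.delete_object(Bucket=bucket, Key=obj["Key"])  # e.g. .../config.json
```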
megumi21-megumi-chat-7b-v1-0_v1 status is now torndown due to DeploymentManager action

Usage Metrics

Latency Metrics