submission_id: deverdever-heavenly-goat-v4_v8
developer_uid: clover0103
status: inactive
model_repo: DeverDever/heavenly-goat-v4
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 0.8, 'top_p': 0.9, 'top_k': 30, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': "### Instruction:\nYou are a creative agent roleplaying as a character called {bot_name}. Stay true to the persona given, reply with short and descriptive sentences. Do not be repetitive.\n{bot_name}'s Persona: {memory}\n", 'prompt_template': '### Input:\n# Example conversation:\n{prompt}\n# Actual conversation:\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': 'User: {message}\n', 'response_template': '### Response:\n{bot_name}:'}
timestamp: 2024-03-18T02:24:06+00:00
model_name: heavenly-goat-v5
model_eval_status: success
safety_score: 0.72
entertaining: 6.46
stay_in_character: 8.19
user_preference: 6.88
double_thumbs_up: 856
thumbs_up: 1332
thumbs_down: 720
num_battles: 67935
num_wins: 34365
win_ratio: 0.5058511812762199
celo_rating: 1161.48
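The formatter templates in the record above compose the final prompt sent to the model. A minimal sketch of that assembly, assuming a simple concatenation scheme: the template strings are taken verbatim from the `formatter` field, while `render_prompt` and the sample persona/messages are hypothetical illustrations.

```python
# Template strings copied from the submission's formatter field.
# The render_prompt helper and the sample inputs below are hypothetical;
# the real serving code may differ.

MEMORY_TEMPLATE = (
    "### Instruction:\nYou are a creative agent roleplaying as a character "
    "called {bot_name}. Stay true to the persona given, reply with short and "
    "descriptive sentences. Do not be repetitive.\n{bot_name}'s Persona: {memory}\n"
)
PROMPT_TEMPLATE = "### Input:\n# Example conversation:\n{prompt}\n# Actual conversation:\n"
BOT_TEMPLATE = "{bot_name}: {message}\n"
USER_TEMPLATE = "User: {message}\n"
RESPONSE_TEMPLATE = "### Response:\n{bot_name}:"

def render_prompt(bot_name, memory, example, history):
    """history is a list of (role, message) pairs; role is 'user' or 'bot'."""
    parts = [MEMORY_TEMPLATE.format(bot_name=bot_name, memory=memory)]
    parts.append(PROMPT_TEMPLATE.format(prompt=example))
    for role, message in history:
        template = USER_TEMPLATE if role == "user" else BOT_TEMPLATE
        parts.append(template.format(bot_name=bot_name, message=message))
    parts.append(RESPONSE_TEMPLATE.format(bot_name=bot_name))
    return "".join(parts)

prompt = render_prompt(
    bot_name="Goat",
    memory="A cheerful mountain goat.",
    example="User: hi\nGoat: *bleats happily*",
    history=[("user", "What are you doing?")],
)
print(prompt.endswith("### Response:\nGoat:"))  # prints True
```

Note the generated text is cut at the first `'\n'` stopping word, which is why `bot_template` and `user_template` each end every message with a newline.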
Running pipeline stage MKMLizer
Starting job with name deverdever-heavenly-goat-v4-v8-mkmlizer
Waiting for job on deverdever-heavenly-goat-v4-v8-mkmlizer to finish
deverdever-heavenly-goat-v4-v8-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ _____ __ __ ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ /___/ ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ Version: 0.6.11 ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ The license key for the current software has been verified as ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ belonging to: ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ Chai Research Corp. ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ Expiration: 2024-04-15 23:59:59 ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ║ ║
deverdever-heavenly-goat-v4-v8-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
deverdever-heavenly-goat-v4-v8-mkmlizer: .gitattributes: 100%|██████████| 1.52k/1.52k [00:00<00:00, 15.6MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: README.md: 100%|██████████| 21.0/21.0 [00:00<00:00, 201kB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: added_tokens.json: 100%|██████████| 22.0/22.0 [00:00<00:00, 165kB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: config.json: 100%|██████████| 698/698 [00:00<00:00, 9.41MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: generation_config.json: 100%|██████████| 137/137 [00:00<00:00, 1.57MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: pytorch_model-00001-of-00003.bin: 100%|█████████▉| 4.94G/4.94G [00:08<00:00, 590MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: pytorch_model-00002-of-00003.bin: 100%|█████████▉| 5.00G/5.00G [00:07<00:00, 670MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: pytorch_model-00003-of-00003.bin: 100%|█████████▉| 4.54G/4.54G [00:07<00:00, 622MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: pytorch_model.bin.index.json: 100%|██████████| 23.9k/23.9k [00:00<00:00, 122MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: special_tokens_map.json: 100%|██████████| 625/625 [00:00<00:00, 5.00MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 62.3MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: tokenizer_config.json: 100%|██████████| 1.23k/1.23k [00:00<00:00, 9.36MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: Profiling: 100%|██████████| 291/291 [00:05<00:00, 57.34it/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: quantized model in 15.587s
deverdever-heavenly-goat-v4-v8-mkmlizer: Processed model DeverDever/heavenly-goat-v4 in 42.629s
deverdever-heavenly-goat-v4-v8-mkmlizer: creating bucket guanaco-mkml-models
deverdever-heavenly-goat-v4-v8-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
deverdever-heavenly-goat-v4-v8-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/deverdever-heavenly-goat-v4-v8
deverdever-heavenly-goat-v4-v8-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v4-v8/config.json
deverdever-heavenly-goat-v4-v8-mkmlizer: cp /dev/shm/model_cache/added_tokens.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v4-v8/added_tokens.json
deverdever-heavenly-goat-v4-v8-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v4-v8/special_tokens_map.json
deverdever-heavenly-goat-v4-v8-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v4-v8/tokenizer.json
deverdever-heavenly-goat-v4-v8-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/deverdever-heavenly-goat-v4-v8/tokenizer.model
deverdever-heavenly-goat-v4-v8-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/deverdever-heavenly-goat-v4-v8/tokenizer_config.json
deverdever-heavenly-goat-v4-v8-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1067: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
deverdever-heavenly-goat-v4-v8-mkmlizer:   warnings.warn(
deverdever-heavenly-goat-v4-v8-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
deverdever-heavenly-goat-v4-v8-mkmlizer: config.json: 100%|██████████| 1.05k/1.05k [00:00<00:00, 8.76MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:690: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
deverdever-heavenly-goat-v4-v8-mkmlizer: warnings.warn(
deverdever-heavenly-goat-v4-v8-mkmlizer: tokenizer_config.json: 100%|██████████| 234/234 [00:00<00:00, 2.19MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 7.53MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 14.9MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
deverdever-heavenly-goat-v4-v8-mkmlizer: warnings.warn(
deverdever-heavenly-goat-v4-v8-mkmlizer: pytorch_model.bin: 100%|█████████▉| 1.44G/1.44G [00:03<00:00, 393MB/s]
deverdever-heavenly-goat-v4-v8-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
deverdever-heavenly-goat-v4-v8-mkmlizer: Saving duration: 0.249s
deverdever-heavenly-goat-v4-v8-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 7.778s
deverdever-heavenly-goat-v4-v8-mkmlizer: creating bucket guanaco-reward-models
deverdever-heavenly-goat-v4-v8-mkmlizer: Bucket 's3://guanaco-reward-models/' created
deverdever-heavenly-goat-v4-v8-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/deverdever-heavenly-goat-v4-v8_reward
deverdever-heavenly-goat-v4-v8-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/deverdever-heavenly-goat-v4-v8_reward/vocab.json
deverdever-heavenly-goat-v4-v8-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/deverdever-heavenly-goat-v4-v8_reward/config.json
deverdever-heavenly-goat-v4-v8-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/deverdever-heavenly-goat-v4-v8_reward/tokenizer_config.json
deverdever-heavenly-goat-v4-v8-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/deverdever-heavenly-goat-v4-v8_reward/special_tokens_map.json
deverdever-heavenly-goat-v4-v8-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/deverdever-heavenly-goat-v4-v8_reward/merges.txt
deverdever-heavenly-goat-v4-v8-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/deverdever-heavenly-goat-v4-v8_reward/tokenizer.json
Job deverdever-heavenly-goat-v4-v8-mkmlizer completed after 74.51s with status: succeeded
Stopping job with name deverdever-heavenly-goat-v4-v8-mkmlizer
Pipeline stage MKMLizer completed in 80.25s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.14s
Running pipeline stage ISVCDeployer
Creating inference service deverdever-heavenly-goat-v4-v8
Waiting for inference service deverdever-heavenly-goat-v4-v8 to be ready
Inference service deverdever-heavenly-goat-v4-v8 ready after 40.25553059577942s
Pipeline stage ISVCDeployer completed in 48.13s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.3622653484344482s
Received healthy response to inference request in 0.9824159145355225s
Received healthy response to inference request in 0.9488756656646729s
Received healthy response to inference request in 0.9519078731536865s
Received healthy response to inference request in 1.0074996948242188s
5 requests
0 failed requests
5th percentile: 0.9494821071624756
10th percentile: 0.9500885486602784
20th percentile: 0.9513014316558838
30th percentile: 0.9580094814300537
40th percentile: 0.9702126979827881
50th percentile: 0.9824159145355225
60th percentile: 0.992449426651001
70th percentile: 1.0024829387664795
80th percentile: 1.0784528255462646
90th percentile: 1.2203590869903564
95th percentile: 1.2913122177124023
99th percentile: 1.348074722290039
mean time: 1.0505928993225098
Pipeline stage StressChecker completed in 6.23s
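The StressChecker summary can be reproduced from the five response times. A small sketch, assuming linearly interpolated percentiles (the closest-ranks method numpy uses by default); the `percentile` helper is our own reconstruction, and only the latency values come from the log.

```python
# Response times from the five healthy inference requests above.
latencies = [
    1.3622653484344482,
    0.9824159145355225,
    0.9488756656646729,
    0.9519078731536865,
    1.0074996948242188,
]

def percentile(data, p):
    """Linearly interpolated percentile between closest ranks
    (same as numpy.percentile's default method)."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100.0
    f = int(k)                      # lower rank
    c = min(f + 1, len(s) - 1)      # upper rank
    return s[f] + (s[c] - s[f]) * (k - f)

for p in (5, 50, 95, 99):
    print(f"{p}th percentile: {percentile(latencies, p)}")
print(f"mean time: {sum(latencies) / len(latencies)}")
```

With only five samples, every percentile is an interpolation between two observed latencies, so the tails (95th/99th) are dominated by the single 1.36 s outlier.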
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.08s
Running pipeline stage DaemonicSafetyScorer
Pipeline stage DaemonicSafetyScorer completed in 0.06s
Running M-Eval for topic stay_in_character
M-Eval Dataset for topic stay_in_character is loaded
deverdever-heavenly-goat-v4_v8 status is now inactive due to auto-deactivation of underperforming models

Usage Metrics (chart not included in this log export)

Latency Metrics (chart not included in this log export)