submission_id: aetherresearch-cerebrum-_6186_v2
developer_uid: chai_backend_admin
status: inactive
model_repo: AetherResearch/Cerebrum-1.0-7b
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 0.9, 'top_p': 1.0, 'top_k': 50, 'presence_penalty': 0.5, 'frequency_penalty': 0.5, 'stopping_words': ['\n', '</s>', '<|user|>', '###'], 'max_input_tokens': 512, 'best_of': 1, 'max_output_tokens': 64}
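For readability, the generation_params above decoded into Python, plus a small illustrative helper showing how the stopping_words would cut off a raw completion. This is a sketch, not the serving code; the helper name `truncate_completion` is hypothetical.

```python
# generation_params from the submission, as a Python dict.
GENERATION_PARAMS = {
    "temperature": 0.9,
    "top_p": 1.0,
    "top_k": 50,
    "presence_penalty": 0.5,
    "frequency_penalty": 0.5,
    "stopping_words": ["\n", "</s>", "<|user|>", "###"],
    "max_input_tokens": 512,
    "best_of": 1,
    "max_output_tokens": 64,
}

def truncate_completion(text, stopping_words):
    """Cut the completion at the earliest occurrence of any stopping word.

    Hypothetical post-processing sketch; not the platform's actual code.
    """
    cut = len(text)
    for word in stopping_words:
        idx = text.find(word)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

With these stopping words, a newline ends the turn, so a completion like `"Hello there\nUser: hi"` would be truncated to `"Hello there"`.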
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:'}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:'}
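The formatter (and identical reward_formatter) templates above are what assemble the final prompt string. A minimal sketch of that assembly, assuming a chat history given as (speaker, message) pairs; `build_prompt` is an illustrative name, not the platform's actual formatter function.

```python
# Templates copied verbatim from the submission's formatter config.
FORMATTER = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, chat):
    """Assemble a prompt: persona, scenario, chat turns, then the cue
    "{bot_name}:" that the model is expected to continue. Sketch only."""
    parts = [
        FORMATTER["memory_template"].format(bot_name=bot_name, memory=memory),
        FORMATTER["prompt_template"].format(prompt=prompt),
    ]
    for speaker, message in chat:
        if speaker == "bot":
            parts.append(FORMATTER["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(FORMATTER["user_template"].format(user_name=user_name, message=message))
    parts.append(FORMATTER["response_template"].format(bot_name=bot_name))
    return "".join(parts)
```

Note the prompt deliberately ends with the bare `"{bot_name}:"` cue, which pairs with `"\n"` as a stopping word to keep generations to a single in-character turn.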
timestamp: 2024-04-02T16:37:22+00:00
model_name: auto_submit_ricum_pacaborodu
model_eval_status: success
safety_score: 0.8
entertaining: 6.6
stay_in_character: 8.12
user_preference: 6.58
double_thumbs_up: 233
thumbs_up: 441
thumbs_down: 284
num_battles: 24229
num_wins: 9780
win_ratio: 0.40364852036815385
celo_rating: 1085.67
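The battle statistics above are internally consistent: win_ratio is simply num_wins divided by num_battles. A quick sanity check, with the values copied from the listing:

```python
# Cross-check of the reported battle statistics (values from the listing above).
num_battles = 24229
num_wins = 9780
win_ratio = num_wins / num_battles  # reported as 0.40364852036815385
```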
Running pipeline stage MKMLizer
Starting job with name aetherresearch-cerebrum-6186-v2-mkmlizer
Waiting for job on aetherresearch-cerebrum-6186-v2-mkmlizer to finish
aetherresearch-cerebrum-6186-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ _____ __ __ ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ /___/ ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ Version: 0.6.11 ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ The license key for the current software has been verified as ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ belonging to: ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ Chai Research Corp. ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ║ ║
aetherresearch-cerebrum-6186-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
aetherresearch-cerebrum-6186-v2-mkmlizer: .gitattributes: 100%|██████████| 1.52k/1.52k [00:00<00:00, 18.8MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: README.md: 100%|██████████| 3.63k/3.63k [00:00<00:00, 33.1MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: benchmarking.png: 100%|██████████| 14.5k/14.5k [00:00<00:00, 179MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: benchmarking_table.png: 100%|██████████| 25.8k/25.8k [00:00<00:00, 213MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: config.json: 100%|██████████| 651/651 [00:00<00:00, 8.75MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: generation_config.json: 100%|██████████| 116/116 [00:00<00:00, 1.20MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: model-00001-of-00003.safetensors: 100%|█████████▉| 4.94G/4.94G [00:07<00:00, 702MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: model-00002-of-00003.safetensors: 100%|█████████▉| 5.00G/5.00G [00:07<00:00, 631MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: special_tokens_map.json: 100%|██████████| 414/414 [00:00<00:00, 6.16MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: tokenizer.json: 100%|██████████| 1.80M/1.80M [00:00<00:00, 66.8MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 59.7MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: Downloaded to shared memory in 53.826s
aetherresearch-cerebrum-6186-v2-mkmlizer: quantizing model to /dev/shm/model_cache
aetherresearch-cerebrum-6186-v2-mkmlizer: Saving mkml model at /dev/shm/model_cache
aetherresearch-cerebrum-6186-v2-mkmlizer: tokenizer_config.json: 100%|██████████| 2.10k/2.10k [00:00<00:00, 28.6MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: Reading /tmp/tmpes6x9a15/model.safetensors.index.json
aetherresearch-cerebrum-6186-v2-mkmlizer: Profiling: 100%|██████████| 291/291 [00:05<00:00, 54.76it/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: quantized model in 15.964s
aetherresearch-cerebrum-6186-v2-mkmlizer: Processed model AetherResearch/Cerebrum-1.0-7b in 70.723s
aetherresearch-cerebrum-6186-v2-mkmlizer: creating bucket guanaco-mkml-models
aetherresearch-cerebrum-6186-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
aetherresearch-cerebrum-6186-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/aetherresearch-cerebrum-6186-v2
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/aetherresearch-cerebrum-6186-v2/config.json
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/aetherresearch-cerebrum-6186-v2/special_tokens_map.json
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/aetherresearch-cerebrum-6186-v2/tokenizer.model
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/aetherresearch-cerebrum-6186-v2/tokenizer_config.json
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/aetherresearch-cerebrum-6186-v2/tokenizer.json
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /dev/shm/model_cache/mkml_model.tensors s3://guanaco-mkml-models/aetherresearch-cerebrum-6186-v2/mkml_model.tensors
aetherresearch-cerebrum-6186-v2-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
aetherresearch-cerebrum-6186-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1067: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
aetherresearch-cerebrum-6186-v2-mkmlizer: warnings.warn(
aetherresearch-cerebrum-6186-v2-mkmlizer: config.json: 100%|██████████| 1.05k/1.05k [00:00<00:00, 12.4MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:690: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
aetherresearch-cerebrum-6186-v2-mkmlizer: warnings.warn(
aetherresearch-cerebrum-6186-v2-mkmlizer: tokenizer_config.json: 100%|██████████| 234/234 [00:00<00:00, 2.72MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 49.3MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 44.2MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
aetherresearch-cerebrum-6186-v2-mkmlizer: warnings.warn(
aetherresearch-cerebrum-6186-v2-mkmlizer: pytorch_model.bin: 100%|█████████▉| 1.44G/1.44G [00:05<00:00, 284MB/s]
aetherresearch-cerebrum-6186-v2-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
aetherresearch-cerebrum-6186-v2-mkmlizer: Saving duration: 0.246s
aetherresearch-cerebrum-6186-v2-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 8.384s
aetherresearch-cerebrum-6186-v2-mkmlizer: creating bucket guanaco-reward-models
aetherresearch-cerebrum-6186-v2-mkmlizer: Bucket 's3://guanaco-reward-models/' created
aetherresearch-cerebrum-6186-v2-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/aetherresearch-cerebrum-6186-v2_reward
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/aetherresearch-cerebrum-6186-v2_reward/special_tokens_map.json
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/aetherresearch-cerebrum-6186-v2_reward/config.json
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/aetherresearch-cerebrum-6186-v2_reward/tokenizer_config.json
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/aetherresearch-cerebrum-6186-v2_reward/tokenizer.json
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/aetherresearch-cerebrum-6186-v2_reward/merges.txt
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/aetherresearch-cerebrum-6186-v2_reward/vocab.json
aetherresearch-cerebrum-6186-v2-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/aetherresearch-cerebrum-6186-v2_reward/reward.tensors
Job aetherresearch-cerebrum-6186-v2-mkmlizer completed after 105.92s with status: succeeded
Stopping job with name aetherresearch-cerebrum-6186-v2-mkmlizer
Pipeline stage MKMLizer completed in 109.73s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.13s
Running pipeline stage ISVCDeployer
Creating inference service aetherresearch-cerebrum-6186-v2
Waiting for inference service aetherresearch-cerebrum-6186-v2 to be ready
Inference service aetherresearch-cerebrum-6186-v2 ready after 40.46550941467285s
Pipeline stage ISVCDeployer completed in 47.79s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.0911061763763428s
Received healthy response to inference request in 0.2917656898498535s
Received healthy response to inference request in 0.3582756519317627s
Received healthy response to inference request in 0.4323699474334717s
Received healthy response to inference request in 0.6262683868408203s
5 requests
0 failed requests
5th percentile: 0.3050676822662354
10th percentile: 0.3183696746826172
20th percentile: 0.34497365951538084
30th percentile: 0.3730945110321045
40th percentile: 0.40273222923278806
50th percentile: 0.4323699474334717
60th percentile: 0.5099293231964112
70th percentile: 0.5874886989593505
80th percentile: 0.7192359447479248
90th percentile: 0.9051710605621338
95th percentile: 0.9981386184692382
99th percentile: 1.072512664794922
mean time: 0.5599571704864502
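The percentile figures above are consistent with linear interpolation over the five sorted response times, the same method numpy's `percentile` uses by default. A pure-Python sketch of that computation (the helper name `linear_percentile` is ours, not from the pipeline):

```python
# Reproduce the StressChecker summary statistics from the five raw latencies
# reported above (seconds, in arrival order).
latencies = [
    1.0911061763763428,
    0.2917656898498535,
    0.3582756519317627,
    0.4323699474334717,
    0.6262683868408203,
]

def linear_percentile(data, q):
    """Percentile of `data` at `q` (0-100) with linear interpolation
    between the two nearest order statistics."""
    s = sorted(data)
    pos = (len(s) - 1) * q / 100.0
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

mean_time = sum(latencies) / len(latencies)  # 0.5599571704864502
p5 = linear_percentile(latencies, 5)         # ~0.30506768
p50 = linear_percentile(latencies, 50)       # the median, 0.4323699474334717
```

With only 5 samples, each reported percentile is an interpolation between at most two observations, so these figures are a smoke test rather than a latency distribution.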
Pipeline stage StressChecker completed in 3.80s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.05s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.07s
M-Eval Dataset for topic stay_in_character is loaded
aetherresearch-cerebrum-_6186_v2 status is now deployed due to DeploymentManager action
aetherresearch-cerebrum-_6186_v2 status is now inactive due to auto-deactivation of underperforming models

Usage Metrics

Latency Metrics