submission_id: grimjim-kukulemon-7b_v5
developer_uid: chai_backend_admin
status: torndown
model_repo: grimjim/kukulemon-7B
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 0.9, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 50, 'presence_penalty': 0.5, 'frequency_penalty': 0.5, 'stopping_words': ['\n', '</s>', '<|user|>', '###'], 'max_input_tokens': 512, 'best_of': 1, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
timestamp: 2024-04-01T21:28:18+00:00
model_name: auto_submit_wehay_tawigite
model_eval_status: success
model_group: grimjim/kukulemon-7B
num_battles: 5509
num_wins: 2506
celo_rating: 1130.26
propriety_score: 0.0
propriety_total_count: 0.0
submission_type: basic
model_architecture: MistralForCausalLM
model_num_parameters: 7241732096.0
best_of: 1
max_input_tokens: 512
max_output_tokens: 64
display_name: auto_submit_wehay_tawigite
ineligible_reason: propriety_total_count < 800
language_model: grimjim/kukulemon-7B
model_size: 7B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-04-01
win_ratio: 0.4548919949174079
preference_data_url: None
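The generation_params above configure the sampler. A hedged sketch of how temperature, top_k, top_p, and min_p typically filter a token distribution, applied to a toy logit vector; this mirrors common sampler implementations, not the production decoder, which is not shown in the log:

```python
import math

# Illustrative sampler filtering, assuming the usual semantics of the
# generation_params fields: temperature scaling, then top_k / top_p
# (nucleus) / min_p cutoffs, then renormalization.
def filtered_probs(logits, temperature=0.9, top_k=50, top_p=1.0, min_p=0.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order[:top_k])                 # top_k: best k tokens only
    # top_p: smallest prefix of the sorted tokens with mass >= top_p
    cum, nucleus = 0.0, set()
    for i in order:
        nucleus.add(i)
        cum += probs[i]
        if cum >= top_p:
            break
    keep &= nucleus
    # min_p: drop tokens below min_p * (probability of the best token)
    threshold = min_p * probs[order[0]]
    keep = {i for i in keep if probs[i] >= threshold}
    mass = sum(probs[i] for i in keep)
    return {i: probs[i] / mass for i in keep}  # renormalized distribution

dist = filtered_probs([2.0, 1.0, 0.1, -1.0], top_k=2)
```

With top_p=1.0 and min_p=0.0, as in this submission, only the top_k cutoff actually prunes tokens.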
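The formatter templates above can be sketched as a single prompt-assembly step. The concatenation order (memory, prompt, message history, response prefix) and the example names are assumptions; the actual serving code is not shown in the log:

```python
# Hedged sketch of assembling the submission's formatter templates into one
# model input. Order of parts is an assumption based on the template names.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_input(formatter, bot_name, memory, prompt, history, user_name):
    parts = [
        formatter["memory_template"].format(bot_name=bot_name, memory=memory),
        formatter["prompt_template"].format(prompt=prompt),
    ]
    for speaker, message in history:  # history: list of (speaker, text)
        if speaker == "bot":
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

text = build_input(formatter, "Luna", "a curious android", "Greeting scene",
                   [("user", "Hi!"), ("bot", "Hello there.")], "User")
print(text)
```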
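The win_ratio field is derivable from the battle counts above:

```python
# Consistency check: win_ratio in the metadata is num_wins / num_battles.
num_battles = 5509
num_wins = 2506
win_ratio = num_wins / num_battles  # ≈ 0.4549, matching the logged value
```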
Running pipeline stage MKMLizer
Starting job with name grimjim-kukulemon-7b-v5-mkmlizer
Waiting for job on grimjim-kukulemon-7b-v5-mkmlizer to finish
grimjim-kukulemon-7b-v5-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
grimjim-kukulemon-7b-v5-mkmlizer: ║ _____ __ __ ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ /___/ ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ Version: 0.6.11 ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ The license key for the current software has been verified as ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ belonging to: ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ Chai Research Corp. ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
grimjim-kukulemon-7b-v5-mkmlizer: ║ ║
grimjim-kukulemon-7b-v5-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
grimjim-kukulemon-7b-v5-mkmlizer: .gitattributes: 100%|██████████| 1.52k/1.52k [00:00<00:00, 18.8MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: README.md: 100%|██████████| 1.93k/1.93k [00:00<00:00, 15.9MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: config.json: 100%|██████████| 645/645 [00:00<00:00, 6.31MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: mergekit_config.yml: 100%|██████████| 481/481 [00:00<00:00, 5.83MB/s]
Exception raised while processing tagging_function
Traceback (most recent call last):
  File "/code/guanaco/guanaco_services/src/guanaco_model_service/chat_api.py", line 274, in resolve_chat_api
    conversation_tag = self.tagging_function(conversation_state)
  File "/home/zongyi/gitlab/zztools/zztools/llm/guanaco/submit_routing_model.py", line 176, in last_user_message_length
TypeError: 'ConversationState' object is not subscriptable
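The traceback indicates the tagging function tried to index a `ConversationState` object with `[...]` instead of reading an attribute. A minimal stand-in reproducing the error (the real class's fields are not shown in the log; `messages` here is an assumed attribute for illustration):

```python
# Minimal reproduction of the logged TypeError. The real ConversationState
# is not shown in the log; this stand-in only illustrates attribute access
# vs. subscripting.
class ConversationState:
    def __init__(self, messages):
        self.messages = messages

state = ConversationState(messages=["Hi!"])

try:
    state["messages"]  # what the tagging function apparently attempted
except TypeError as e:
    print(e)  # 'ConversationState' object is not subscriptable

last = state.messages[-1]  # attribute access is the working alternative
```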
grimjim-kukulemon-7b-v5-mkmlizer: model-00001-of-00002.safetensors: 100%|█████████▉| 9.86G/9.86G [00:06<00:00, 1.47GB/s]
grimjim-kukulemon-7b-v5-mkmlizer: model-00002-of-00002.safetensors: 100%|█████████▉| 4.62G/4.62G [00:03<00:00, 1.32GB/s]
grimjim-kukulemon-7b-v5-mkmlizer: model.safetensors.index.json: 100%|██████████| 22.8k/22.8k [00:00<00:00, 45.3MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: special_tokens_map.json: 100%|██████████| 414/414 [00:00<00:00, 4.35MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: tokenizer.json: 100%|██████████| 1.80M/1.80M [00:00<00:00, 14.9MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 62.0MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: tokenizer_config.json: 100%|██████████| 967/967 [00:00<00:00, 13.8MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: Downloaded to shared memory in 11.921s
grimjim-kukulemon-7b-v5-mkmlizer: quantizing model to /dev/shm/model_cache
grimjim-kukulemon-7b-v5-mkmlizer: Saving mkml model at /dev/shm/model_cache
grimjim-kukulemon-7b-v5-mkmlizer: Reading /tmp/tmpn14jro_2/model.safetensors.index.json
grimjim-kukulemon-7b-v5-mkmlizer: Profiling: 100%|██████████| 291/291 [00:05<00:00, 49.34it/s]
grimjim-kukulemon-7b-v5-mkmlizer: quantized model in 16.082s
grimjim-kukulemon-7b-v5-mkmlizer: Processed model grimjim/kukulemon-7B in 28.867s
grimjim-kukulemon-7b-v5-mkmlizer: creating bucket guanaco-mkml-models
grimjim-kukulemon-7b-v5-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
grimjim-kukulemon-7b-v5-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/grimjim-kukulemon-7b-v5
grimjim-kukulemon-7b-v5-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/grimjim-kukulemon-7b-v5/special_tokens_map.json
grimjim-kukulemon-7b-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/grimjim-kukulemon-7b-v5/tokenizer_config.json
grimjim-kukulemon-7b-v5-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/grimjim-kukulemon-7b-v5/config.json
grimjim-kukulemon-7b-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/grimjim-kukulemon-7b-v5/tokenizer.model
grimjim-kukulemon-7b-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/grimjim-kukulemon-7b-v5/tokenizer.json
grimjim-kukulemon-7b-v5-mkmlizer: cp /dev/shm/model_cache/mkml_model.tensors s3://guanaco-mkml-models/grimjim-kukulemon-7b-v5/mkml_model.tensors
grimjim-kukulemon-7b-v5-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
grimjim-kukulemon-7b-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1067: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
grimjim-kukulemon-7b-v5-mkmlizer: warnings.warn(
grimjim-kukulemon-7b-v5-mkmlizer: config.json: 100%|██████████| 1.05k/1.05k [00:00<00:00, 12.0MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:690: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
grimjim-kukulemon-7b-v5-mkmlizer: warnings.warn(
grimjim-kukulemon-7b-v5-mkmlizer: tokenizer_config.json: 100%|██████████| 234/234 [00:00<00:00, 1.96MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 45.7MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 4.23MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
grimjim-kukulemon-7b-v5-mkmlizer: warnings.warn(
grimjim-kukulemon-7b-v5-mkmlizer: pytorch_model.bin: 100%|█████████▉| 1.44G/1.44G [00:05<00:00, 260MB/s]
grimjim-kukulemon-7b-v5-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
grimjim-kukulemon-7b-v5-mkmlizer: Saving duration: 0.248s
grimjim-kukulemon-7b-v5-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 10.103s
grimjim-kukulemon-7b-v5-mkmlizer: creating bucket guanaco-reward-models
grimjim-kukulemon-7b-v5-mkmlizer: Bucket 's3://guanaco-reward-models/' created
grimjim-kukulemon-7b-v5-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/grimjim-kukulemon-7b-v5_reward
grimjim-kukulemon-7b-v5-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/grimjim-kukulemon-7b-v5_reward/config.json
grimjim-kukulemon-7b-v5-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/grimjim-kukulemon-7b-v5_reward/special_tokens_map.json
grimjim-kukulemon-7b-v5-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/grimjim-kukulemon-7b-v5_reward/tokenizer_config.json
grimjim-kukulemon-7b-v5-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/grimjim-kukulemon-7b-v5_reward/merges.txt
grimjim-kukulemon-7b-v5-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/grimjim-kukulemon-7b-v5_reward/vocab.json
grimjim-kukulemon-7b-v5-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/grimjim-kukulemon-7b-v5_reward/tokenizer.json
grimjim-kukulemon-7b-v5-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/grimjim-kukulemon-7b-v5_reward/reward.tensors
Job grimjim-kukulemon-7b-v5-mkmlizer completed after 64.87s with status: succeeded
Stopping job with name grimjim-kukulemon-7b-v5-mkmlizer
Pipeline stage MKMLizer completed in 70.07s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.12s
Running pipeline stage ISVCDeployer
Creating inference service grimjim-kukulemon-7b-v5
Waiting for inference service grimjim-kukulemon-7b-v5 to be ready
Inference service grimjim-kukulemon-7b-v5 ready after 40.29822540283203s
Pipeline stage ISVCDeployer completed in 48.41s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.2703795433044434s
Received healthy response to inference request in 0.4594709873199463s
Received healthy response to inference request in 0.6623668670654297s
Received healthy response to inference request in 0.4071235656738281s
Received healthy response to inference request in 1.035851240158081s
5 requests
0 failed requests
5th percentile: 0.41759305000305175
10th percentile: 0.42806253433227537
20th percentile: 0.44900150299072267
30th percentile: 0.500050163269043
40th percentile: 0.5812085151672364
50th percentile: 0.6623668670654297
60th percentile: 0.8117606163024902
70th percentile: 0.9611543655395507
80th percentile: 1.0827569007873536
90th percentile: 1.1765682220458984
95th percentile: 1.2234738826751708
99th percentile: 1.2609984111785888
mean time: 0.7670384407043457
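The StressChecker summary above can be recomputed from the five recorded response times using linearly interpolated percentiles (the convention the logged values are consistent with):

```python
# Recompute the StressChecker latency summary from the five healthy
# responses logged above.
times = [
    1.2703795433044434,
    0.4594709873199463,
    0.6623668670654297,
    0.4071235656738281,
    1.035851240158081,
]

def percentile(data, p):
    """Linearly interpolated p-th percentile of a list of floats."""
    xs = sorted(data)
    pos = (p / 100) * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

print(percentile(times, 50))       # the logged 50th percentile (median)
print(sum(times) / len(times))     # ≈ 0.7670, the logged mean time
```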
Pipeline stage StressChecker completed in 4.75s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.04s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.05s
M-Eval Dataset for topic stay_in_character is loaded
grimjim-kukulemon-7b_v5 status is now deployed due to DeploymentManager action
grimjim-kukulemon-7b_v5 status is now inactive due to auto-deactivation of underperforming models
admin requested tearing down of grimjim-kukulemon-7b_v5
Running pipeline stage ISVCDeleter
Checking if service grimjim-kukulemon-7b-v5 is running
Tearing down inference service grimjim-kukulemon-7b-v5
Torn down service grimjim-kukulemon-7b-v5
Pipeline stage ISVCDeleter completed in 4.09s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key grimjim-kukulemon-7b-v5/config.json from bucket guanaco-mkml-models
Deleting key grimjim-kukulemon-7b-v5/mkml_model.tensors from bucket guanaco-mkml-models
Deleting key grimjim-kukulemon-7b-v5/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key grimjim-kukulemon-7b-v5/tokenizer.json from bucket guanaco-mkml-models
Deleting key grimjim-kukulemon-7b-v5/tokenizer.model from bucket guanaco-mkml-models
Deleting key grimjim-kukulemon-7b-v5/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key grimjim-kukulemon-7b-v5_reward/config.json from bucket guanaco-reward-models
Deleting key grimjim-kukulemon-7b-v5_reward/merges.txt from bucket guanaco-reward-models
Deleting key grimjim-kukulemon-7b-v5_reward/reward.tensors from bucket guanaco-reward-models
Deleting key grimjim-kukulemon-7b-v5_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key grimjim-kukulemon-7b-v5_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key grimjim-kukulemon-7b-v5_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key grimjim-kukulemon-7b-v5_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 2.22s
grimjim-kukulemon-7b_v5 status is now torndown due to DeploymentManager action

Usage Metrics

Latency Metrics