developer_uid: bogoconic1
submission_id: chaiml-gy-exp188-sft-gy_24525_v1
model_name: chaiml-gy-exp188-sft-gy_24525_v1
model_group: ChaiML/gy-exp188-sft-gy-
status: torndown
timestamp: 2025-07-18T15:58:32+00:00
num_battles: 3810
num_wins: 1871
celo_rating: 1286.08
family_friendly_score: 0.55
family_friendly_standard_error: 0.007035623639735145
submission_type: basic
model_repo: ChaiML/gy-exp188-sft-gy-datamix-v1-Grok3-AR-lex-gt0.1-rm-gt0.6-ep2
model_architecture: MistralForCausalLM
model_num_parameters: 24096691200.0
best_of: 8
max_input_tokens: 1024
max_output_tokens: 64
reward_model: default
display_name: chaiml-gy-exp188-sft-gy_24525_v1
ineligible_reason: num_battles<10000
is_internal_developer: False
language_model: ChaiML/gy-exp188-sft-gy-datamix-v1-Grok3-AR-lex-gt0.1-rm-gt0.6-ep2
model_size: 24B
ranking_group: single
us_pacific_date: 2025-07-18
win_ratio: 0.4910761154855643
generation_params: {'temperature': 0.6, 'top_p': 0.95, 'min_p': 0.025, 'top_k': 60, 'presence_penalty': 0.4, 'frequency_penalty': 0.4, 'stopping_words': ['<|im_start|>', '<|im_end|>', '\n'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': '<|system|>Family Friendly{memory}\n', 'prompt_template': '', 'bot_template': '<|im_start|>assistant\n{message}<|im_end|>\n', 'user_template': '<|im_start|>user\nYou:{message}<|im_end|>\n', 'response_template': '<|im_start|>assistant\n', 'truncate_by_message': True}
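The derived fields above are internally consistent; a minimal sketch verifying them in Python (the implied family-friendly sample size is an assumption, not something reported in this log):

```python
import math

# win_ratio is num_wins / num_battles.
num_battles, num_wins = 3810, 1871
assert abs(num_wins / num_battles - 0.4910761154855643) < 1e-12

# model_size (24B) is model_num_parameters in billions.
print(24096691200 / 1e9)  # 24.0966912

# family_friendly_standard_error matches a binomial standard error
# sqrt(p * (1 - p) / n); inverting it implies roughly n = 5000 scored
# samples (an assumption -- the true sample size is not logged).
p, se = 0.55, 0.007035623639735145
print(round(p * (1 - p) / se ** 2))  # 5000
assert math.isclose(math.sqrt(p * (1 - p) / 5000), se, rel_tol=1e-9)
```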
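A minimal sketch of how the formatter templates above might be applied to assemble a prompt (the conversation content is invented for illustration, and the truncate_by_message logic is omitted):

```python
formatter = {
    "memory_template": "<|system|>Family Friendly{memory}\n",
    "user_template": "<|im_start|>user\nYou:{message}<|im_end|>\n",
    "bot_template": "<|im_start|>assistant\n{message}<|im_end|>\n",
    "response_template": "<|im_start|>assistant\n",
}

def build_prompt(memory, turns):
    # Render memory first, then the user/bot turns, and close with the
    # response template so the model continues as the assistant.
    prompt = formatter["memory_template"].format(memory=memory)
    for role, message in turns:
        key = "user_template" if role == "user" else "bot_template"
        prompt += formatter[key].format(message=message)
    return prompt + formatter["response_template"]

print(build_prompt(" Keep it wholesome.", [("user", "Hi!"), ("bot", "Hello!")]))
```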
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
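The bare "run pipeline %s" lines are most likely raw format strings logged without arguments; Python's logging module only interpolates %s when arguments are supplied, as this minimal sketch shows:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline")

log.info("run pipeline stage %s")                  # no args: the literal template is emitted
log.info("Running pipeline stage %s", "MKMLizer")  # with args: lazily interpolated
```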
Starting job with name chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer
Waiting for job on chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer to finish
Failed to get response for submission chaiml-isaac-brown-sylu_53639_v2: HTTPConnectionPool(host='chaiml-isaac-brown-sylu-53639-v2-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
Failed to get response for submission chaiml-simon-ghost-rile_65921_v1: HTTPConnectionPool(host='chaiml-simon-ghost-rile-65921-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
Failed to get response for submission chaiml-isaac-brown-sylu_53639_v2: HTTPConnectionPool(host='chaiml-isaac-brown-sylu-53639-v2-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
Failed to get response for submission chaiml-daren-god-of-deat_6797_v1: HTTPConnectionPool(host='chaiml-daren-god-of-deat-6797-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ ██████ ██████ █████ ████ ████ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ ░░██████ ██████ ░░███ ███░ ░░███ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ ░███░█████░███ ░███ ███ ░███ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ ░███░░███ ░███ ░███████ ░███ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ ░███ ░░░ ░███ ░███░░███ ░███ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ ░███ ░███ ░███ ░░███ ░███ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ █████ █████ █████ ░░████ █████ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ ░░░░░ ░░░░░ ░░░░░ ░░░░ ░░░░░ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ Version: 0.29.15 ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ Features: FLYWHEEL, CUDA ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ Copyright 2023-2025 MK ONE TECHNOLOGIES Inc. ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ https://mk1.ai ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ The license key for the current software has been verified as ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ belonging to: ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ Chai Research Corp. ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ Expiration: 2028-03-31 23:59:59 ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ║ ║
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
Failed to get response for submission chaiml-isaac-brown-sylu_53639_v2: HTTPConnectionPool(host='chaiml-isaac-brown-sylu-53639-v2-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: Downloaded to shared memory in 84.639s
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: Checking if ChaiML/gy-exp188-sft-gy-datamix-v1-Grok3-AR-lex-gt0.1-rm-gt0.6-ep2 already exists in ChaiML
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmp0d9i9pyr, device:0
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: quantized model in 49.094s
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: Processed model ChaiML/gy-exp188-sft-gy-datamix-v1-Grok3-AR-lex-gt0.1-rm-gt0.6-ep2 in 133.733s
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/chaiml-gy-exp188-sft-gy-24525-v1/nvidia
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/chaiml-gy-exp188-sft-gy-24525-v1/nvidia/config.json
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/chaiml-gy-exp188-sft-gy-24525-v1/nvidia/special_tokens_map.json
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/chaiml-gy-exp188-sft-gy-24525-v1/nvidia/tokenizer_config.json
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/chaiml-gy-exp188-sft-gy-24525-v1/nvidia/tokenizer.json
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.1.safetensors s3://guanaco-mkml-models/chaiml-gy-exp188-sft-gy-24525-v1/nvidia/flywheel_model.1.safetensors
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/chaiml-gy-exp188-sft-gy-24525-v1/nvidia/flywheel_model.0.safetensors
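A minimal sketch of mirroring the model cache to S3 with boto3, approximating the cp commands above (the mkmlizer's actual upload tooling is not shown in this log):

```python
import os
import boto3

s3 = boto3.client("s3")
src = "/dev/shm/model_cache"
bucket = "guanaco-mkml-models"
prefix = "chaiml-gy-exp188-sft-gy-24525-v1/nvidia"

for name in os.listdir(src):
    path = os.path.join(src, name)
    if os.path.isfile(path):  # config.json, tokenizer files, safetensors shards
        s3.upload_file(path, bucket, f"{prefix}/{name}")
```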
chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer: Loading 0: 100%|█████████▉| 362/363 [00:29<00:00, 24.75it/s]
Job chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer completed after 1266.1s with status: succeeded
Stopping job with name chaiml-gy-exp188-sft-gy-24525-v1-mkmlizer
Pipeline stage MKMLizer completed in 1266.69s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.15s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service chaiml-gy-exp188-sft-gy-24525-v1
Waiting for inference service chaiml-gy-exp188-sft-gy-24525-v1 to be ready
Failed to get response for submission chaiml-simon-ghost-rile_65921_v1: HTTPConnectionPool(host='chaiml-simon-ghost-rile-65921-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
Inference service chaiml-gy-exp188-sft-gy-24525-v1 ready after 331.30339002609253s
Pipeline stage MKMLDeployer completed in 332.31s
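A minimal sketch of the readiness wait against KServe's InferenceService custom resource using the official kubernetes client (the deployer's real implementation is not shown here):

```python
import time
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

def wait_for_inference_service(name, namespace, timeout_s=600):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        isvc = api.get_namespaced_custom_object(
            "serving.kserve.io", "v1beta1", namespace, "inferenceservices", name)
        conditions = isvc.get("status", {}).get("conditions", [])
        if any(c.get("type") == "Ready" and c.get("status") == "True"
               for c in conditions):
            return
        time.sleep(5)
    raise TimeoutError(f"InferenceService {name} not ready within {timeout_s}s")
```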
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.8882696628570557s
Received healthy response to inference request in 2.3115861415863037s
Received healthy response to inference request in 1.8385035991668701s
Received healthy response to inference request in 1.9810686111450195s
Received healthy response to inference request in 2.0116279125213623s
5 requests
0 failed requests
5th percentile: 1.8670166015625
10th percentile: 1.89552960395813
20th percentile: 1.9525556087493896
30th percentile: 1.987180471420288
40th percentile: 1.9994041919708252
50th percentile: 2.0116279125213623
60th percentile: 2.131611204147339
70th percentile: 2.251594495773315
80th percentile: 2.4269228458404544
90th percentile: 2.657596254348755
95th percentile: 2.772932958602905
99th percentile: 2.8652023220062257
mean time: 2.2062111854553224
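The percentile figures above are consistent with numpy's default linear interpolation over the five response times; a minimal sketch:

```python
import numpy as np

times = [2.8882696628570557, 2.3115861415863037, 1.8385035991668701,
         1.9810686111450195, 2.0116279125213623]

for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{q}th percentile: {np.percentile(times, q)}")
print(f"mean time: {np.mean(times)}")
# e.g. np.percentile(times, 5) == 1.8670166015625, matching the report.
```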
Pipeline stage StressChecker completed in 12.40s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.80s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.87s
Shutdown handler de-registered
chaiml-gy-exp188-sft-gy_24525_v1 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.15s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.12s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service chaiml-gy-exp188-sft-gy-24525-v1-profiler
Waiting for inference service chaiml-gy-exp188-sft-gy-24525-v1-profiler to be ready
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
%s, retrying in %s seconds...
%s, retrying in %s seconds...
%s, retrying in %s seconds...
%s, retrying in %s seconds...
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.14s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.12s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service chaiml-gy-exp188-sft-gy-24525-v1-profiler
Waiting for inference service chaiml-gy-exp188-sft-gy-24525-v1-profiler to be ready
Tearing down inference service chaiml-gy-exp188-sft-gy-24525-v1-profiler
%s, retrying in %s seconds...
Creating inference service chaiml-gy-exp188-sft-gy-24525-v1-profiler
Waiting for inference service chaiml-gy-exp188-sft-gy-24525-v1-profiler to be ready
Tearing down inference service chaiml-gy-exp188-sft-gy-24525-v1-profiler
%s, retrying in %s seconds...
Creating inference service chaiml-gy-exp188-sft-gy-24525-v1-profiler
Waiting for inference service chaiml-gy-exp188-sft-gy-24525-v1-profiler to be ready
Tearing down inference service chaiml-gy-exp188-sft-gy-24525-v1-profiler
clean up pipeline due to error=DeploymentError('Timeout to start the InferenceService chaiml-gy-exp188-sft-gy-24525-v1-profiler. The InferenceService is as following: {\'apiVersion\': \'serving.kserve.io/v1beta1\', \'kind\': \'InferenceService\', \'metadata\': {\'annotations\': {\'autoscaling.knative.dev/class\': \'hpa.autoscaling.knative.dev\', \'autoscaling.knative.dev/container-concurrency-target-percentage\': \'70\', \'autoscaling.knative.dev/initial-scale\': \'1\', \'autoscaling.knative.dev/max-scale-down-rate\': \'1.1\', \'autoscaling.knative.dev/max-scale-up-rate\': \'2\', \'autoscaling.knative.dev/metric\': \'mean_pod_latency_ms_v2\', \'autoscaling.knative.dev/panic-threshold-percentage\': \'650\', \'autoscaling.knative.dev/panic-window-percentage\': \'35\', \'autoscaling.knative.dev/scale-down-delay\': \'30s\', \'autoscaling.knative.dev/scale-to-zero-grace-period\': \'10m\', \'autoscaling.knative.dev/stable-window\': \'180s\', \'autoscaling.knative.dev/target\': \'4000\', \'autoscaling.knative.dev/target-burst-capacity\': \'-1\', \'autoscaling.knative.dev/tick-interval\': \'15s\', \'features.knative.dev/http-full-duplex\': \'Enabled\', \'networking.knative.dev/ingress-class\': \'istio.ingress.networking.knative.dev\'}, \'creationTimestamp\': \'2025-07-18T17:43:06Z\', \'finalizers\': [\'inferenceservice.finalizers\'], \'generation\': 1, \'labels\': {\'knative.coreweave.cloud/ingress\': \'istio.ingress.networking.knative.dev\', \'prometheus.k.chaiverse.com\': \'true\', \'qos.coreweave.cloud/latency\': \'low\'}, \'managedFields\': [{\'apiVersion\': \'serving.kserve.io/v1beta1\', \'fieldsType\': \'FieldsV1\', \'fieldsV1\': {\'f:metadata\': {\'f:annotations\': {\'.\': {}, \'f:autoscaling.knative.dev/class\': {}, \'f:autoscaling.knative.dev/container-concurrency-target-percentage\': {}, \'f:autoscaling.knative.dev/initial-scale\': {}, \'f:autoscaling.knative.dev/max-scale-down-rate\': {}, \'f:autoscaling.knative.dev/max-scale-up-rate\': {}, \'f:autoscaling.knative.dev/metric\': {}, \'f:autoscaling.knative.dev/panic-threshold-percentage\': {}, \'f:autoscaling.knative.dev/panic-window-percentage\': {}, \'f:autoscaling.knative.dev/scale-down-delay\': {}, \'f:autoscaling.knative.dev/scale-to-zero-grace-period\': {}, \'f:autoscaling.knative.dev/stable-window\': {}, \'f:autoscaling.knative.dev/target\': {}, \'f:autoscaling.knative.dev/target-burst-capacity\': {}, \'f:autoscaling.knative.dev/tick-interval\': {}, \'f:features.knative.dev/http-full-duplex\': {}, \'f:networking.knative.dev/ingress-class\': {}}, \'f:labels\': {\'.\': {}, \'f:knative.coreweave.cloud/ingress\': {}, \'f:prometheus.k.chaiverse.com\': {}, \'f:qos.coreweave.cloud/latency\': {}}}, \'f:spec\': {\'.\': {}, \'f:predictor\': {\'.\': {}, \'f:affinity\': {\'.\': {}, \'f:nodeAffinity\': {\'.\': {}, \'f:tion\': {}, \'f:requiredDuringSchedulingIgnoredDuringExecution\': {}}}, \'f:containerConcurrency\': {}, \'f:containers\': {}, \'f:imagePullSecrets\': {}, \'f:maxReplicas\': {}, \'f:minReplicas\': {}, \'f:timeout\': {}, \'f:volumes\': {}}}}, \'manager\': \'OpenAPI-Generator\', \'operation\': \'Update\', \'time\': \'2025-07-18T17:43:06Z\'}, {\'apiVersion\': \'serving.kserve.io/v1beta1\', \'fieldsType\': \'FieldsV1\', \'fieldsV1\': {\'f:metadata\': {\'f:finalizers\': {\'.\': {}, \'v:"inferenceservice.finalizers"\': {}}}}, \'manager\': \'manager\', \'operation\': \'Update\', \'time\': \'2025-07-18T17:43:06Z\'}, {\'apiVersion\': \'serving.kserve.io/v1beta1\', \'fieldsType\': \'FieldsV1\', \'fieldsV1\': {\'f:status\': {\'.\': {}, 
\'f:components\': {\'.\': {}, \'f:predictor\': {\'.\': {}, \'f:latestCreatedRevision\': {}}}, \'f:conditions\': {}, \'f:modelStatus\': {\'.\': {}, \'f:lastFailureInfo\': {\'.\': {}, \'f:exitCode\': {}, \'f:message\': {}, \'f:reason\': {}}, \'f:states\': {\'.\': {}, \'f:activeModelState\': {}, \'f:targetModelState\': {}}, \'f:transitionStatus\': {}}, \'f:observedGeneration\': {}}}, \'manager\': \'manager\', \'operation\': \'Update\', \'subresource\': \'status\', \'time\': \'2025-07-18T17:47:38Z\'}], \'name\': \'chaiml-gy-exp188-sft-gy-24525-v1-profiler\', \'namespace\': \'tenant-chaiml-guanaco\', \'resourceVersion\': \'466329068\', \'uid\': \'331fb3f0-7c4a-4dc2-8a31-f0ef9627c948\'}, \'spec\': {\'predictor\': {\'affinity\': {\'nodeAffinity\': {\'tion\': [{\'preference\': {\'matchExpressions\': [{\'key\': \'gpu.nvidia.com/class\', \'operator\': \'In\', \'values\': [\'A100_NVLINK_80GB\']}]}, \'weight\': 5}], \'requiredDuringSchedulingIgnoredDuringExecution\': {\'nodeSelectorTerms\': [{\'matchExpressions\': [{\'key\': \'gpu.nvidia.com/class\', \'operator\': \'In\', \'values\': [\'A100_NVLINK_80GB\', \'RTX_A6000\']}]}]}}}, \'containerConcurrency\': 0, \'containers\': [{\'env\': [{\'name\': \'MAX_TOKEN_INPUT\', \'value\': \'1024\'}, {\'name\': \'BEST_OF\', \'value\': \'8\'}, {\'name\': \'TEMPERATURE\', \'value\': \'0.6\'}, {\'name\': \'PRESENCE_PENALTY\', \'value\': \'0.4\'}, {\'name\': \'FREQUENCY_PENALTY\', \'value\': \'0.4\'}, {\'name\': \'TOP_P\', \'value\': \'0.95\'}, {\'name\': \'MIN_P\', \'value\': \'0.025\'}, {\'name\': \'TOP_K\', \'value\': \'60\'}, {\'name\': \'STOPPING_WORDS\', \'value\': \'["\\\\\\\\n", "<|im_end|>", "<|im_start|>"]\'}, {\'name\': \'MAX_TOKENS\', \'value\': \'64\'}, {\'name\': \'MAX_BATCH_SIZE\', \'value\': \'128\'}, {\'name\': \'MAX_CACHED_RESPONSES\', \'value\': \'-1\'}, {\'name\': \'URL_ROUTE\', \'value\': \'GPT-J-6B-lit-v2\'}, {\'name\': \'OBJ_ACCESS_KEY_ID\', \'value\': \'LETMTTRMLFFAMTBK\'}, {\'name\': \'OBJ_SECRET_ACCESS_KEY\', \'value\': \'VwwZaqefOOoaouNxUk03oUmK9pVEfruJhjBHPGdgycK\'}, {\'name\': \'OBJ_ENDPOINT\', \'value\': \'https://accel-object.ord1.coreweave.com\'}, {\'name\': \'TENSORIZER_URI\', \'value\': \'s3://guanaco-mkml-models/chaiml-gy-exp188-sft-gy-24525-v1/nvidia\'}, {\'name\': \'RESERVE_MEMORY\', \'value\': \'2048\'}, {\'name\': \'DOWNLOAD_TO_LOCAL\', \'value\': \'/dev/shm/model_cache\'}, {\'name\': \'NUM_GPUS\', \'value\': \'1\'}, {\'name\': \'MK1_QUANTIZATION_PROFILE\', \'value\': \'s0\'}, {\'name\': \'MK1_MKML_LICENSE_KEY\', \'valueFrom\': {\'secretKeyRef\': {\'key\': \'key\', \'name\': \'mkml-license-key\'}}}], \'image\': \'gcr.io/chai-959f8/chai-guanaco/mkml:mkml_v0.29.15\', \'imagePullPolicy\': \'IfNotPresent\', \'name\': \'kserve-container\', \'readinessProbe\': {\'exec\': {\'command\': [\'cat\', \'/tmp/ready\']}, \'failureThreshold\': 1, \'initialDelaySeconds\': 10, \'periodSeconds\': 10, \'successThreshold\': 1, \'timeoutSeconds\': 5}, \'resources\': {\'limits\': {\'cpu\': \'2\', \'memory\': \'26Gi\', \'nvidia.com/gpu\': \'1\'}, \'requests\': {\'cpu\': \'2\', \'memory\': \'26Gi\', \'nvidia.com/gpu\': \'1\'}}, \'volumeMounts\': [{\'mountPath\': \'/dev/shm\', \'name\': \'shared-memory-cache\'}]}], \'imagePullSecrets\': [{\'name\': \'docker-creds\'}], \'maxReplicas\': 1, \'minReplicas\': 1, \'timeout\': 60, \'volumes\': [{\'emptyDir\': {\'medium\': \'Memory\'}, \'name\': \'shared-memory-cache\'}]}}, \'status\': {\'components\': {\'predictor\': {\'latestCreatedRevision\': \'chaiml-gy-exp188-sft-gy-24525-v1-profiler-predictor-00001\'}}, 
\'conditions\': [{\'lastTransitionTime\': \'2025-07-18T17:47:38Z\', \'reason\': \'PredictorConfigurationReady not ready\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'LatestDeploymentReady\'}, {\'lastTransitionTime\': \'2025-07-18T17:47:38Z\', \'message\': \'Revision "chaiml-gy-exp188-sft-gy-24525-v1-profiler-predictor-00001" failed with message: Container failed with: ║\\n║ Chai Research Corp. ║\\n║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║\\n║ Expiration: 2028-03-31 23:59:59 ║\\n║ ║\\n╚═════════════════════════════════════════════════════════════════════╝\\n\\nINFO:datasets:PyTorch version 2.3.0 available.\\nInference config: InferenceConfig(server_num_workers=1, server_port=8080, max_batch_size=128, log_level=0, reserve_memory=2048, num_gpus=1, quantization_profile=s0, all_reduce_profile=None, kv_cache_profile=None, calibration_samples=-1, max_cached_responses=-1, sampling=SamplingParameters(temperature=0.6, top_p=0.95, min_p=0.025, top_k=60, max_input_tokens=1024, max_tokens=64, stop=[\\\'\\\\n\\\', \\\'<|im_end|>\\\', \\\'<|im_start|>\\\'], eos_token_ids=[], frequency_penalty=0.4, presence_penalty=0.4, reward_enabled=True, num_samples=8, reward_max_token_input=256, profile=False), url_route=GPT-J-6B-lit-v2, tensorizer_uri=s3://guanaco-mkml-models/chaiml-gy-exp188-sft-gy-24525-v1/nvidia, s3_creds=S3Credentials(s3_access_key_id=\\\'LETMTTRMLFFAMTBK\\\', s3_secret_access_key=\\\'VwwZaqefOOoaouNxUk03oUmK9pVEfruJhjBHPGdgycK\\\', s3_endpoint=\\\'https://accel-object.ord1.coreweave.com\\\', s3_uncached_endpoint=\\\'https://object.ord1.coreweave.com\\\'), local_folder=/dev/shm/model_cache)\\nTraceback (most recent call last):\\n File "/code/mkml_inference_service/main.py", line 95, in <module>\\n model.load()\\n File "/code/mkml_inference_service/main.py", line 31, in load\\n self.engine = mkml_backend.AsyncInferenceService.from_folder(settings, settings.local_folder)\\n File "/code/mkml_inference_service/mkml_backend.py", line 45, in from_folder\\n with open(model_config) as f:\\nFileNotFoundError: [Errno 2] No such file or directory: \\\'/dev/shm/model_cache/config.json\\\'\\n.\', \'reason\': \'RevisionFailed\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'PredictorConfigurationReady\'}, {\'lastTransitionTime\': \'2025-07-18T17:47:38Z\', \'message\': \'Configuration "chaiml-gy-exp188-sft-gy-24525-v1-profiler-predictor" does not have any ready Revision.\', \'reason\': \'RevisionMissing\', \'status\': \'False\', \'type\': \'PredictorReady\'}, {\'lastTransitionTime\': \'2025-07-18T17:47:38Z\', \'message\': \'Configuration "chaiml-gy-exp188-sft-gy-24525-v1-profiler-predictor" does not have any ready Revision.\', \'reason\': \'RevisionMissing\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'PredictorRouteReady\'}, {\'lastTransitionTime\': \'2025-07-18T17:47:38Z\', \'message\': \'Configuration "chaiml-gy-exp188-sft-gy-24525-v1-profiler-predictor" does not have any ready Revision.\', \'reason\': \'RevisionMissing\', \'status\': \'False\', \'type\': \'Ready\'}, {\'lastTransitionTime\': \'2025-07-18T17:47:38Z\', \'reason\': \'PredictorRouteReady not ready\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'RoutesReady\'}], \'modelStatus\': {\'lastFailureInfo\': {\'exitCode\': 1, \'message\': \' ║\\n║ Chai Research Corp. 
║\\n║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║\\n║ Expiration: 2028-03-31 23:59:59 ║\\n║ ║\\n╚═════════════════════════════════════════════════════════════════════╝\\n\\nINFO:datasets:PyTorch version 2.3.0 available.\\nInference config: InferenceConfig(server_num_workers=1, server_port=8080, max_batch_size=128, log_level=0, reserve_memory=2048, num_gpus=1, quantization_profile=s0, all_reduce_profile=None, kv_cache_profile=None, calibration_samples=-1, max_cached_responses=-1, sampling=SamplingParameters(temperature=0.6, top_p=0.95, min_p=0.025, top_k=60, max_input_tokens=1024, max_tokens=64, stop=[\\\'\\\\n\\\', \\\'<|im_end|>\\\', \\\'<|im_start|>\\\'], eos_token_ids=[], frequency_penalty=0.4, presence_penalty=0.4, reward_enabled=True, num_samples=8, reward_max_token_input=256, profile=False), url_route=GPT-J-6B-lit-v2, tensorizer_uri=s3://guanaco-mkml-models/chaiml-gy-exp188-sft-gy-24525-v1/nvidia, s3_creds=S3Credentials(s3_access_key_id=\\\'LETMTTRMLFFAMTBK\\\', s3_secret_access_key=\\\'VwwZaqefOOoaouNxUk03oUmK9pVEfruJhjBHPGdgycK\\\', s3_endpoint=\\\'https://accel-object.ord1.coreweave.com\\\', s3_uncached_endpoint=\\\'https://object.ord1.coreweave.com\\\'), local_folder=/dev/shm/model_cache)\\nTraceback (most recent call last):\\n File "/code/mkml_inference_service/main.py", line 95, in <module>\\n model.load()\\n File "/code/mkml_inference_service/main.py", line 31, in load\\n self.engine = mkml_backend.AsyncInferenceService.from_folder(settings, settings.local_folder)\\n File "/code/mkml_inference_service/mkml_backend.py", line 45, in from_folder\\n with open(model_config) as f:\\nFileNotFoundError: [Errno 2] No such file or directory: \\\'/dev/shm/model_cache/config.json\\\'\\n\', \'reason\': \'ModelLoadFailed\'}, \'states\': {\'activeModelState\': \'\', \'targetModelState\': \'FailedToLoad\'}, \'transitionStatus\': \'BlockedByFailedLoad\'}, \'observedGeneration\': 1}}')
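The root cause buried in the status dump above is a FileNotFoundError for /dev/shm/model_cache/config.json raised inside mkml_backend.AsyncInferenceService.from_folder: the engine could not find config.json in the profiler pod's shared-memory cache. A hypothetical pre-flight guard that would surface this earlier (a sketch, not the platform's actual code):

```python
import json
import os

def load_model_config(local_folder="/dev/shm/model_cache"):
    # Mirrors the open() that fails in the traceback above, but raises
    # an actionable error before the inference engine is constructed.
    model_config = os.path.join(local_folder, "config.json")
    if not os.path.isfile(model_config):
        raise FileNotFoundError(
            f"{model_config} is missing: the model cache was not "
            "downloaded into this pod's /dev/shm")
    with open(model_config) as f:
        return json.load(f)
```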
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.15s
Shutdown handler de-registered
chaiml-gy-exp188-sft-gy_24525_v1 status is now inactive due to system request
chaiml-gy-exp188-sft-gy_24525_v1 status is now torndown due to DeploymentManager action