developer_uid: richhx
submission_id: chaiml-0sw-96p-4ff-chai_45017_v2
model_name: chaiml-0sw-96p-4ff-chai_45017_v2
model_group: ChaiML/0sw_96p_4ff_chaim
status: torndown
timestamp: 2025-06-25T00:06:04+00:00
num_battles: 8763
num_wins: 4319
celo_rating: 1281.59
family_friendly_score: 0.5088
family_friendly_standard_error: 0.007069972560059904
submission_type: basic
model_repo: ChaiML/0sw_96p_4ff_chaiml_snugstable1_5_1_59274_v21_merge_s0_to_w0
model_architecture: MistralForCausalLM
model_num_parameters: 12772070400
best_of: 8
max_input_tokens: 1024
max_output_tokens: 64
reward_model: default
display_name: chaiml-0sw-96p-4ff-chai_45017_v2
ineligible_reason: num_battles<10000
is_internal_developer: True
language_model: ChaiML/0sw_96p_4ff_chaiml_snugstable1_5_1_59274_v21_merge_s0_to_w0
model_size: 13B
ranking_group: single
us_pacific_date: 2025-06-24
win_ratio: 0.4928677393586671
generation_params: {'temperature': 0.9, 'top_p': 1.0, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['</s>', '####', 'You:', '<|eot_id|>', '\n', 'Bot:', 'User:', '<|im_end|>'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': True}
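Note: the win_ratio above is simply num_wins / num_battles = 4319 / 8763 ≈ 0.4929. As a minimal sketch (not the platform's actual serving code; the conversation, names, and `build_prompt` helper are illustrative, and the empty `prompt_template` and `truncate_by_message` option are not modeled), here is how the `formatter` templates and `generation_params` above could combine into a single prompt string plus sampling settings:

```python
# Illustrative sketch of assembling a prompt from the formatter templates above.
generation_params = {
    "temperature": 0.9, "top_p": 1.0, "min_p": 0.05, "top_k": 80,
    "presence_penalty": 0.0, "frequency_penalty": 0.0,
    "max_input_tokens": 1024, "best_of": 8, "max_output_tokens": 64,
    "stopping_words": ["</s>", "####", "You:", "<|eot_id|>", "\n",
                       "Bot:", "User:", "<|im_end|>"],
}

formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, turns):
    """Concatenate persona memory, chat turns, and the response prefix."""
    prompt = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
    for speaker, message in turns:
        template = formatter["bot_template"] if speaker == "bot" else formatter["user_template"]
        prompt += template.format(bot_name=bot_name, user_name=user_name, message=message)
    return prompt + formatter["response_template"].format(bot_name=bot_name)

print(build_prompt("Bot", "User", "a friendly assistant",
                   [("user", "Hi!"), ("bot", "Hello!"), ("user", "How are you?")]))
```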
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer
Waiting for job on chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer to finish
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ ██████ ██████ █████ ████ ████ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ ░░██████ ██████ ░░███ ███░ ░░███ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ ░███░█████░███ ░███ ███ ░███ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ ░███░░███ ░███ ░███████ ░███ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ ░███ ░░░ ░███ ░███░░███ ░███ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ ░███ ░███ ░███ ░░███ ░███ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ █████ █████ █████ ░░████ █████ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ ░░░░░ ░░░░░ ░░░░░ ░░░░ ░░░░░ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ Version: 0.29.3 ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ Features: FLYWHEEL, CUDA ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ Copyright 2023-2025 MK ONE TECHNOLOGIES Inc. ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ https://mk1.ai ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ The license key for the current software has been verified as ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ belonging to: ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ Chai Research Corp. ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ Expiration: 2028-03-31 23:59:59 ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ║ ║
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: Downloaded to shared memory in 27.318s
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: Checking if ChaiML/0sw_96p_4ff_chaiml_snugstable1_5_1_59274_v21_merge_s0_to_w0 already exists in ChaiML
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpbc5yp5au, device:0
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: quantized model in 40.506s
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: Processed model ChaiML/0sw_96p_4ff_chaiml_snugstable1_5_1_59274_v21_merge_s0_to_w0 in 67.824s
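(For reference, the 67.824 s processing time above is the 27.318 s shared-memory download plus the 40.506 s quantization: 27.318 s + 40.506 s = 67.824 s.)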
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: creating bucket guanaco-mkml-models
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/config.json
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/special_tokens_map.json
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/tokenizer_config.json
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/tokenizer.json
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.2.safetensors s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/flywheel_model.2.safetensors
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/flywheel_model.0.safetensors
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.1.safetensors s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/flywheel_model.1.safetensors
chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer: Loading 0: progressed from 0/243 to 240/243 tensors in roughly 35 s, with two multi-second stalls (after 88/243 and after 205/243) before throughput recovered.
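A minimal sketch (assuming torch, safetensors, and tqdm are installed; this is not the MKML loader itself) of the kind of per-tensor loading loop over the flywheel_model.*.safetensors shards that a progress bar like the one above reflects:

```python
from pathlib import Path
from safetensors import safe_open  # requires torch for framework="pt"
from tqdm import tqdm

def load_shards(model_dir="/dev/shm/model_cache"):
    """Load every tensor from the flywheel_model.*.safetensors shards with a progress bar."""
    tensors = {}
    for shard in sorted(Path(model_dir).glob("flywheel_model.*.safetensors")):
        with safe_open(str(shard), framework="pt", device="cpu") as f:
            for name in tqdm(f.keys(), desc=f"Loading {shard.name}"):
                tensors[name] = f.get_tensor(name)
    return tensors
```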
Job chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer completed after 95.64s with status: succeeded
Stopping job with name chaiml-0sw-96p-4ff-chai-45017-v2-mkmlizer
Pipeline stage MKMLizer completed in 96.16s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.17s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service chaiml-0sw-96p-4ff-chai-45017-v2
Waiting for inference service chaiml-0sw-96p-4ff-chai-45017-v2 to be ready
Failed to get response for submission chaiml-mistral-24b-dpo-_59605_v2: HTTPConnectionPool(host='chaiml-mistral-24b-dpo-59605-v2-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
Tearing down inference service chaiml-0sw-96p-4ff-chai-45017-v2
%s, retrying in %s seconds...
Creating inference service chaiml-0sw-96p-4ff-chai-45017-v2
Waiting for inference service chaiml-0sw-96p-4ff-chai-45017-v2 to be ready
Unable to record family friendly update due to error: Invalid JSON input: JSON must contain 'User Safety' and 'Response Safety' fields
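A minimal sketch (the `parse_safety_update` helper name is hypothetical; the platform's actual validator is not shown in this log) of the check implied by the error above, which rejects payloads missing either required field:

```python
import json

REQUIRED_FIELDS = ("User Safety", "Response Safety")

def parse_safety_update(raw: str) -> dict:
    """Parse a family-friendly update and enforce the required safety fields."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Invalid JSON input: {exc}") from exc
    if any(field not in payload for field in REQUIRED_FIELDS):
        raise ValueError(
            "Invalid JSON input: JSON must contain 'User Safety' and 'Response Safety' fields"
        )
    return payload

# Example: parse_safety_update('{"User Safety": 1}') would raise the error logged above.
```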
Inference service chaiml-0sw-96p-4ff-chai-45017-v2 ready after 522.222496509552s
Pipeline stage MKMLDeployer completed in 1125.87s
run pipeline stage %s
Running pipeline stage StressChecker
HTTPConnectionPool(host='guanaco-submitter.guanaco-backend.k2.chaiverse.com', port=80): Read timed out. (read timeout=20)
Received unhealthy response to inference request!
Received healthy response to inference request in 2.392009973526001s
Received healthy response to inference request in 1.4320642948150635s
Received healthy response to inference request in 1.7401080131530762s
Received healthy response to inference request in 1.3435492515563965s
5 requests
1 failed request
5th percentile: 1.3612522602081298
10th percentile: 1.3789552688598632
20th percentile: 1.4143612861633301
30th percentile: 1.493673038482666
40th percentile: 1.6168905258178712
50th percentile: 1.7401080131530762
60th percentile: 2.000868797302246
70th percentile: 2.261629581451416
80th percentile: 5.943738842010501
90th percentile: 13.047196578979495
95th percentile: 16.598925447463987
99th percentile: 19.440308542251586
mean time: 5.4116771697998045
%s, retrying in %s seconds...
Received healthy response to inference request in 1.408067226409912s
Received healthy response to inference request in 1.4174644947052002s
Received healthy response to inference request in 1.7129442691802979s
Received healthy response to inference request in 1.28080153465271s
Received healthy response to inference request in 1.4342036247253418s
5 requests
0 failed requests
5th percentile: 1.3062546730041504
10th percentile: 1.331707811355591
20th percentile: 1.3826140880584716
30th percentile: 1.4099466800689697
40th percentile: 1.413705587387085
50th percentile: 1.4174644947052002
60th percentile: 1.4241601467132567
70th percentile: 1.4308557987213135
80th percentile: 1.4899517536163331
90th percentile: 1.6014480113983154
95th percentile: 1.6571961402893065
99th percentile: 1.7017946434020996
mean time: 1.4506962299346924
Pipeline stage StressChecker completed in 37.13s
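As a worked example of how the percentile summaries above can be derived from the five per-request latencies, the sketch below uses numpy's default linear interpolation and reproduces the figures reported for the second (fully successful) run; the stress checker's own aggregation code is not shown in this log, so this is an assumption about the method, verified only against the printed values:

```python
import numpy as np

# Latencies (seconds) from the second stress-check run above.
latencies = [1.408067226409912, 1.4174644947052002, 1.7129442691802979,
             1.28080153465271, 1.4342036247253418]

for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{q}th percentile: {np.percentile(latencies, q)}")
print("mean time:", np.mean(latencies))  # 1.4506962299346924
```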
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.73s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.72s
Shutdown handler de-registered
chaiml-0sw-96p-4ff-chai_45017_v2 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.15s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.13s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
Waiting for inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler to be ready
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
%s, retrying in %s seconds...
%s, retrying in %s seconds...
%s, retrying in %s seconds...
%s, retrying in %s seconds...
%s, retrying in %s seconds...
%s, retrying in %s seconds...
%s, retrying in %s seconds...
%s, retrying in %s seconds...
%s, retrying in %s seconds...
%s, retrying in %s seconds...
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.21s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.21s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
Waiting for inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler to be ready
Tearing down inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
%s, retrying in %s seconds...
Creating inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
Waiting for inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler to be ready
Tearing down inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
%s, retrying in %s seconds...
Creating inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
Waiting for inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler to be ready
Tearing down inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
clean up pipeline due to error=DeploymentError('Timeout to start the InferenceService chaiml-0sw-96p-4ff-chai-45017-v2-profiler. The InferenceService is as following: {\'apiVersion\': \'serving.kserve.io/v1beta1\', \'kind\': \'InferenceService\', \'metadata\': {\'annotations\': {\'autoscaling.knative.dev/class\': \'hpa.autoscaling.knative.dev\', \'autoscaling.knative.dev/container-concurrency-target-percentage\': \'70\', \'autoscaling.knative.dev/initial-scale\': \'1\', \'autoscaling.knative.dev/max-scale-down-rate\': \'1.1\', \'autoscaling.knative.dev/max-scale-up-rate\': \'2\', \'autoscaling.knative.dev/metric\': \'mean_pod_latency_ms_v2\', \'autoscaling.knative.dev/panic-threshold-percentage\': \'650\', \'autoscaling.knative.dev/panic-window-percentage\': \'35\', \'autoscaling.knative.dev/scale-down-delay\': \'30s\', \'autoscaling.knative.dev/scale-to-zero-grace-period\': \'10m\', \'autoscaling.knative.dev/stable-window\': \'180s\', \'autoscaling.knative.dev/target\': \'4000\', \'autoscaling.knative.dev/target-burst-capacity\': \'-1\', \'autoscaling.knative.dev/tick-interval\': \'15s\', \'features.knative.dev/http-full-duplex\': \'Enabled\', \'networking.knative.dev/ingress-class\': \'istio.ingress.networking.knative.dev\'}, \'creationTimestamp\': \'2025-06-25T01:19:50Z\', \'finalizers\': [\'inferenceservice.finalizers\'], \'generation\': 1, \'labels\': {\'knative.coreweave.cloud/ingress\': \'istio.ingress.networking.knative.dev\', \'prometheus.k.chaiverse.com\': \'true\', \'qos.coreweave.cloud/latency\': \'low\'}, \'managedFields\': [{\'apiVersion\': \'serving.kserve.io/v1beta1\', \'fieldsType\': \'FieldsV1\', \'fieldsV1\': {\'f:metadata\': {\'f:annotations\': {\'.\': {}, \'f:autoscaling.knative.dev/class\': {}, \'f:autoscaling.knative.dev/container-concurrency-target-percentage\': {}, \'f:autoscaling.knative.dev/initial-scale\': {}, \'f:autoscaling.knative.dev/max-scale-down-rate\': {}, \'f:autoscaling.knative.dev/max-scale-up-rate\': {}, \'f:autoscaling.knative.dev/metric\': {}, \'f:autoscaling.knative.dev/panic-threshold-percentage\': {}, \'f:autoscaling.knative.dev/panic-window-percentage\': {}, \'f:autoscaling.knative.dev/scale-down-delay\': {}, \'f:autoscaling.knative.dev/scale-to-zero-grace-period\': {}, \'f:autoscaling.knative.dev/stable-window\': {}, \'f:autoscaling.knative.dev/target\': {}, \'f:autoscaling.knative.dev/target-burst-capacity\': {}, \'f:autoscaling.knative.dev/tick-interval\': {}, \'f:features.knative.dev/http-full-duplex\': {}, \'f:networking.knative.dev/ingress-class\': {}}, \'f:labels\': {\'.\': {}, \'f:knative.coreweave.cloud/ingress\': {}, \'f:prometheus.k.chaiverse.com\': {}, \'f:qos.coreweave.cloud/latency\': {}}}, \'f:spec\': {\'.\': {}, \'f:predictor\': {\'.\': {}, \'f:affinity\': {\'.\': {}, \'f:nodeAffinity\': {\'.\': {}, \'f:tion\': {}, \'f:requiredDuringSchedulingIgnoredDuringExecution\': {}}}, \'f:containerConcurrency\': {}, \'f:containers\': {}, \'f:imagePullSecrets\': {}, \'f:maxReplicas\': {}, \'f:minReplicas\': {}, \'f:timeout\': {}, \'f:volumes\': {}}}}, \'manager\': \'OpenAPI-Generator\', \'operation\': \'Update\', \'time\': \'2025-06-25T01:19:50Z\'}, {\'apiVersion\': \'serving.kserve.io/v1beta1\', \'fieldsType\': \'FieldsV1\', \'fieldsV1\': {\'f:metadata\': {\'f:finalizers\': {\'.\': {}, \'v:"inferenceservice.finalizers"\': {}}}}, \'manager\': \'manager\', \'operation\': \'Update\', \'time\': \'2025-06-25T01:19:50Z\'}, {\'apiVersion\': \'serving.kserve.io/v1beta1\', \'fieldsType\': \'FieldsV1\', \'fieldsV1\': {\'f:status\': {\'.\': {}, 
\'f:components\': {\'.\': {}, \'f:predictor\': {\'.\': {}, \'f:latestCreatedRevision\': {}}}, \'f:conditions\': {}, \'f:modelStatus\': {\'.\': {}, \'f:lastFailureInfo\': {\'.\': {}, \'f:exitCode\': {}, \'f:message\': {}, \'f:reason\': {}}, \'f:states\': {\'.\': {}, \'f:activeModelState\': {}, \'f:targetModelState\': {}}, \'f:transitionStatus\': {}}, \'f:observedGeneration\': {}}}, \'manager\': \'manager\', \'operation\': \'Update\', \'subresource\': \'status\', \'time\': \'2025-06-25T01:29:52Z\'}], \'name\': \'chaiml-0sw-96p-4ff-chai-45017-v2-profiler\', \'namespace\': \'tenant-chaiml-guanaco\', \'resourceVersion\': \'435492873\', \'uid\': \'1494c4ad-2c38-4596-bb13-846bb19cec80\'}, \'spec\': {\'predictor\': {\'affinity\': {\'nodeAffinity\': {\'tion\': [{\'preference\': {\'matchExpressions\': [{\'key\': \'gpu.nvidia.com/class\', \'operator\': \'In\', \'values\': [\'RTX_A5000\']}]}, \'weight\': 5}], \'requiredDuringSchedulingIgnoredDuringExecution\': {\'nodeSelectorTerms\': [{\'matchExpressions\': [{\'key\': \'gpu.nvidia.com/class\', \'operator\': \'In\', \'values\': [\'RTX_A5000\']}]}]}}}, \'containerConcurrency\': 0, \'containers\': [{\'env\': [{\'name\': \'MAX_TOKEN_INPUT\', \'value\': \'1024\'}, {\'name\': \'BEST_OF\', \'value\': \'8\'}, {\'name\': \'TEMPERATURE\', \'value\': \'0.9\'}, {\'name\': \'PRESENCE_PENALTY\', \'value\': \'0.0\'}, {\'name\': \'FREQUENCY_PENALTY\', \'value\': \'0.0\'}, {\'name\': \'TOP_P\', \'value\': \'1.0\'}, {\'name\': \'MIN_P\', \'value\': \'0.05\'}, {\'name\': \'TOP_K\', \'value\': \'80\'}, {\'name\': \'STOPPING_WORDS\', \'value\': \'["####", "You:", "<|eot_id|>", "</s>", "Bot:", "<|im_end|>", "User:", "\\\\\\\\n"]\'}, {\'name\': \'MAX_TOKENS\', \'value\': \'64\'}, {\'name\': \'MAX_BATCH_SIZE\', \'value\': \'128\'}, {\'name\': \'MAX_CACHED_RESPONSES\', \'value\': \'-1\'}, {\'name\': \'URL_ROUTE\', \'value\': \'GPT-J-6B-lit-v2\'}, {\'name\': \'OBJ_ACCESS_KEY_ID\', \'value\': \'LETMTTRMLFFAMTBK\'}, {\'name\': \'OBJ_SECRET_ACCESS_KEY\', \'value\': \'VwwZaqefOOoaouNxUk03oUmK9pVEfruJhjBHPGdgycK\'}, {\'name\': \'OBJ_ENDPOINT\', \'value\': \'https://accel-object.ord1.coreweave.com\'}, {\'name\': \'TENSORIZER_URI\', \'value\': \'s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2\'}, {\'name\': \'RESERVE_MEMORY\', \'value\': \'2048\'}, {\'name\': \'DOWNLOAD_TO_LOCAL\', \'value\': \'/dev/shm/model_cache\'}, {\'name\': \'NUM_GPUS\', \'value\': \'1\'}, {\'name\': \'MK1_QUANTIZATION_PROFILE\', \'value\': \'s0\'}, {\'name\': \'MK1_MKML_LICENSE_KEY\', \'valueFrom\': {\'secretKeyRef\': {\'key\': \'key\', \'name\': \'mkml-license-key\'}}}], \'image\': \'gcr.io/chai-959f8/chai-guanaco/mkml:q4_flywheel_0293\', \'imagePullPolicy\': \'IfNotPresent\', \'name\': \'kserve-container\', \'readinessProbe\': {\'exec\': {\'command\': [\'cat\', \'/tmp/ready\']}, \'failureThreshold\': 1, \'initialDelaySeconds\': 10, \'periodSeconds\': 10, \'successThreshold\': 1, \'timeoutSeconds\': 5}, \'resources\': {\'limits\': {\'cpu\': \'2\', \'memory\': \'14Gi\', \'nvidia.com/gpu\': \'1\'}, \'requests\': {\'cpu\': \'2\', \'memory\': \'14Gi\', \'nvidia.com/gpu\': \'1\'}}, \'volumeMounts\': [{\'mountPath\': \'/dev/shm\', \'name\': \'shared-memory-cache\'}]}], \'imagePullSecrets\': [{\'name\': \'docker-creds\'}], \'maxReplicas\': 1, \'minReplicas\': 1, \'timeout\': 60, \'volumes\': [{\'emptyDir\': {\'medium\': \'Memory\'}, \'name\': \'shared-memory-cache\'}]}}, \'status\': {\'components\': {\'predictor\': {\'latestCreatedRevision\': \'chaiml-0sw-96p-4ff-chai-45017-v2-profiler-predictor-00001\'}}, 
\'conditions\': [{\'lastTransitionTime\': \'2025-06-25T01:21:05Z\', \'reason\': \'PredictorConfigurationReady not ready\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'LatestDeploymentReady\'}, {\'lastTransitionTime\': \'2025-06-25T01:29:52Z\', \'message\': \'Revision "chaiml-0sw-96p-4ff-chai-45017-v2-profiler-predictor-00001" failed with message: Container failed with: tances)\\nNUMA node(s): 1\\nNUMA node0 CPU(s): 0-47\\nVulnerability Gather data sampling: Not affected\\nVulnerability Itlb multihit: Not affected\\nVulnerability L1tf: Not affected\\nVulnerability Mds: Not affected\\nVulnerability Meltdown: Not affected\\nVulnerability Mmio stale data: Not affected\\nVulnerability Retbleed: Not affected\\nVulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode\\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\\nVulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\\nVulnerability Srbds: Not affected\\nVulnerability Tsx async abort: Not affected\\nGPU: 24564 MiB, NVIDIA RTX A5000\\nIP: 216.153.53.111\\ndownloading s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2 to /dev/shm/model_cache\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/config.json /dev/shm/model_cache/config.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/special_tokens_map.json /dev/shm/model_cache/special_tokens_map.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/tokenizer_config.json /dev/shm/model_cache/tokenizer_config.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/tokenizer.json /dev/shm/model_cache/tokenizer.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/flywheel_model.2.safetensors /dev/shm/model_cache/flywheel_model.2.safetensors\\nstart_server.sh: line 20: 45 Killed s5cmd --log debug --credentials-file uploading/s5cfg --endpoint-url https://object.ord1.coreweave.com cp --concurrency 4 "$TENSORIZER_URI/*" $DOWNLOAD_TO_LOCAL\\n\\nreal\\t0m15.229s\\nuser\\t0m5.883s\\nsys\\t0m24.468s\\nstart_server.sh: line 23: 87 Killed python3 main.py\\n.\', \'reason\': \'RevisionFailed\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'PredictorConfigurationReady\'}, {\'lastTransitionTime\': \'2025-06-25T01:21:05Z\', \'message\': \'Configuration "chaiml-0sw-96p-4ff-chai-45017-v2-profiler-predictor" does not have any ready Revision.\', \'reason\': \'RevisionMissing\', \'status\': \'False\', \'type\': \'PredictorReady\'}, {\'lastTransitionTime\': \'2025-06-25T01:21:05Z\', \'message\': \'Configuration "chaiml-0sw-96p-4ff-chai-45017-v2-profiler-predictor" does not have any ready Revision.\', \'reason\': \'RevisionMissing\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'PredictorRouteReady\'}, {\'lastTransitionTime\': \'2025-06-25T01:21:05Z\', \'message\': \'Configuration "chaiml-0sw-96p-4ff-chai-45017-v2-profiler-predictor" does not have any ready Revision.\', \'reason\': \'RevisionMissing\', \'status\': \'False\', \'type\': \'Ready\'}, {\'lastTransitionTime\': \'2025-06-25T01:21:05Z\', \'reason\': \'PredictorRouteReady not ready\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'RoutesReady\'}], \'modelStatus\': {\'lastFailureInfo\': {\'exitCode\': 137, \'message\': \'tances)\\nNUMA node(s): 1\\nNUMA node0 CPU(s): 0-47\\nVulnerability Gather 
data sampling: Not affected\\nVulnerability Itlb multihit: Not affected\\nVulnerability L1tf: Not affected\\nVulnerability Mds: Not affected\\nVulnerability Meltdown: Not affected\\nVulnerability Mmio stale data: Not affected\\nVulnerability Retbleed: Not affected\\nVulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode\\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\\nVulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\\nVulnerability Srbds: Not affected\\nVulnerability Tsx async abort: Not affected\\nGPU: 24564 MiB, NVIDIA RTX A5000\\nIP: 216.153.53.111\\ndownloading s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2 to /dev/shm/model_cache\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/config.json /dev/shm/model_cache/config.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/special_tokens_map.json /dev/shm/model_cache/special_tokens_map.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/tokenizer_config.json /dev/shm/model_cache/tokenizer_config.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/tokenizer.json /dev/shm/model_cache/tokenizer.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/flywheel_model.2.safetensors /dev/shm/model_cache/flywheel_model.2.safetensors\\nstart_server.sh: line 20: 45 Killed s5cmd --log debug --credentials-file uploading/s5cfg --endpoint-url https://object.ord1.coreweave.com cp --concurrency 4 "$TENSORIZER_URI/*" $DOWNLOAD_TO_LOCAL\\n\\nreal\\t0m15.229s\\nuser\\t0m5.883s\\nsys\\t0m24.468s\\nstart_server.sh: line 23: 87 Killed python3 main.py\\n\', \'reason\': \'ModelLoadFailed\'}, \'states\': {\'activeModelState\': \'\', \'targetModelState\': \'FailedToLoad\'}, \'transitionStatus\': \'BlockedByFailedLoad\'}, \'observedGeneration\': 1}}')
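For context, the profiler failure above reports exit code 137 (128 + SIGKILL): the s5cmd copy of the quantized model into /dev/shm/model_cache was killed mid-download. A plausible reading, not confirmed by the log, is that the copy exceeded the container's 14Gi memory limit, since the /dev/shm emptyDir is memory-backed and counts against that limit. A minimal sketch (hypothetical helper; the pipeline's own error handling is not shown) of pulling the failure reason out of such an InferenceService status dict:

```python
import ast

def extract_failure(inference_service: dict):
    """Return (exitCode, reason) from a kserve InferenceService status dict."""
    info = (inference_service.get("status", {})
                             .get("modelStatus", {})
                             .get("lastFailureInfo", {}))
    return info.get("exitCode"), info.get("reason")

# The DeploymentError message above embeds the InferenceService as a Python dict literal,
# so ast.literal_eval() on that dict text would recover it before calling extract_failure().
```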
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.18s
Shutdown handler de-registered
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.19s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.19s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
Waiting for inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler to be ready
Tearing down inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
%s, retrying in %s seconds...
Creating inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
Waiting for inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler to be ready
Tearing down inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
%s, retrying in %s seconds...
Creating inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
Waiting for inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler to be ready
Tearing down inference service chaiml-0sw-96p-4ff-chai-45017-v2-profiler
clean up pipeline due to error=DeploymentError('Timeout to start the InferenceService chaiml-0sw-96p-4ff-chai-45017-v2-profiler. The InferenceService is as following: {\'apiVersion\': \'serving.kserve.io/v1beta1\', \'kind\': \'InferenceService\', \'metadata\': {\'annotations\': {\'autoscaling.knative.dev/class\': \'hpa.autoscaling.knative.dev\', \'autoscaling.knative.dev/container-concurrency-target-percentage\': \'70\', \'autoscaling.knative.dev/initial-scale\': \'1\', \'autoscaling.knative.dev/max-scale-down-rate\': \'1.1\', \'autoscaling.knative.dev/max-scale-up-rate\': \'2\', \'autoscaling.knative.dev/metric\': \'mean_pod_latency_ms_v2\', \'autoscaling.knative.dev/panic-threshold-percentage\': \'650\', \'autoscaling.knative.dev/panic-window-percentage\': \'35\', \'autoscaling.knative.dev/scale-down-delay\': \'30s\', \'autoscaling.knative.dev/scale-to-zero-grace-period\': \'10m\', \'autoscaling.knative.dev/stable-window\': \'180s\', \'autoscaling.knative.dev/target\': \'4000\', \'autoscaling.knative.dev/target-burst-capacity\': \'-1\', \'autoscaling.knative.dev/tick-interval\': \'15s\', \'features.knative.dev/http-full-duplex\': \'Enabled\', \'networking.knative.dev/ingress-class\': \'istio.ingress.networking.knative.dev\'}, \'creationTimestamp\': \'2025-06-25T01:51:16Z\', \'finalizers\': [\'inferenceservice.finalizers\'], \'generation\': 1, \'labels\': {\'knative.coreweave.cloud/ingress\': \'istio.ingress.networking.knative.dev\', \'prometheus.k.chaiverse.com\': \'true\', \'qos.coreweave.cloud/latency\': \'low\'}, \'managedFields\': [{\'apiVersion\': \'serving.kserve.io/v1beta1\', \'fieldsType\': \'FieldsV1\', \'fieldsV1\': {\'f:metadata\': {\'f:annotations\': {\'.\': {}, \'f:autoscaling.knative.dev/class\': {}, \'f:autoscaling.knative.dev/container-concurrency-target-percentage\': {}, \'f:autoscaling.knative.dev/initial-scale\': {}, \'f:autoscaling.knative.dev/max-scale-down-rate\': {}, \'f:autoscaling.knative.dev/max-scale-up-rate\': {}, \'f:autoscaling.knative.dev/metric\': {}, \'f:autoscaling.knative.dev/panic-threshold-percentage\': {}, \'f:autoscaling.knative.dev/panic-window-percentage\': {}, \'f:autoscaling.knative.dev/scale-down-delay\': {}, \'f:autoscaling.knative.dev/scale-to-zero-grace-period\': {}, \'f:autoscaling.knative.dev/stable-window\': {}, \'f:autoscaling.knative.dev/target\': {}, \'f:autoscaling.knative.dev/target-burst-capacity\': {}, \'f:autoscaling.knative.dev/tick-interval\': {}, \'f:features.knative.dev/http-full-duplex\': {}, \'f:networking.knative.dev/ingress-class\': {}}, \'f:labels\': {\'.\': {}, \'f:knative.coreweave.cloud/ingress\': {}, \'f:prometheus.k.chaiverse.com\': {}, \'f:qos.coreweave.cloud/latency\': {}}}, \'f:spec\': {\'.\': {}, \'f:predictor\': {\'.\': {}, \'f:affinity\': {\'.\': {}, \'f:nodeAffinity\': {\'.\': {}, \'f:tion\': {}, \'f:requiredDuringSchedulingIgnoredDuringExecution\': {}}}, \'f:containerConcurrency\': {}, \'f:containers\': {}, \'f:imagePullSecrets\': {}, \'f:maxReplicas\': {}, \'f:minReplicas\': {}, \'f:timeout\': {}, \'f:volumes\': {}}}}, \'manager\': \'OpenAPI-Generator\', \'operation\': \'Update\', \'time\': \'2025-06-25T01:51:16Z\'}, {\'apiVersion\': \'serving.kserve.io/v1beta1\', \'fieldsType\': \'FieldsV1\', \'fieldsV1\': {\'f:metadata\': {\'f:finalizers\': {\'.\': {}, \'v:"inferenceservice.finalizers"\': {}}}}, \'manager\': \'manager\', \'operation\': \'Update\', \'time\': \'2025-06-25T01:51:16Z\'}, {\'apiVersion\': \'serving.kserve.io/v1beta1\', \'fieldsType\': \'FieldsV1\', \'fieldsV1\': {\'f:status\': {\'.\': {}, 
\'f:components\': {\'.\': {}, \'f:predictor\': {\'.\': {}, \'f:latestCreatedRevision\': {}}}, \'f:conditions\': {}, \'f:modelStatus\': {\'.\': {}, \'f:lastFailureInfo\': {\'.\': {}, \'f:exitCode\': {}, \'f:message\': {}, \'f:reason\': {}}, \'f:states\': {\'.\': {}, \'f:activeModelState\': {}, \'f:targetModelState\': {}}, \'f:transitionStatus\': {}}, \'f:observedGeneration\': {}}}, \'manager\': \'manager\', \'operation\': \'Update\', \'subresource\': \'status\', \'time\': \'2025-06-25T02:01:18Z\'}], \'name\': \'chaiml-0sw-96p-4ff-chai-45017-v2-profiler\', \'namespace\': \'tenant-chaiml-guanaco\', \'resourceVersion\': \'435521941\', \'uid\': \'3fc4dab7-d995-42c6-9cde-c674f0d4cfbd\'}, \'spec\': {\'predictor\': {\'affinity\': {\'nodeAffinity\': {\'tion\': [{\'preference\': {\'matchExpressions\': [{\'key\': \'gpu.nvidia.com/class\', \'operator\': \'In\', \'values\': [\'RTX_A5000\']}]}, \'weight\': 5}], \'requiredDuringSchedulingIgnoredDuringExecution\': {\'nodeSelectorTerms\': [{\'matchExpressions\': [{\'key\': \'gpu.nvidia.com/class\', \'operator\': \'In\', \'values\': [\'RTX_A5000\']}]}]}}}, \'containerConcurrency\': 0, \'containers\': [{\'env\': [{\'name\': \'MAX_TOKEN_INPUT\', \'value\': \'1024\'}, {\'name\': \'BEST_OF\', \'value\': \'8\'}, {\'name\': \'TEMPERATURE\', \'value\': \'0.9\'}, {\'name\': \'PRESENCE_PENALTY\', \'value\': \'0.0\'}, {\'name\': \'FREQUENCY_PENALTY\', \'value\': \'0.0\'}, {\'name\': \'TOP_P\', \'value\': \'1.0\'}, {\'name\': \'MIN_P\', \'value\': \'0.05\'}, {\'name\': \'TOP_K\', \'value\': \'80\'}, {\'name\': \'STOPPING_WORDS\', \'value\': \'["####", "You:", "<|eot_id|>", "</s>", "Bot:", "<|im_end|>", "User:", "\\\\\\\\n"]\'}, {\'name\': \'MAX_TOKENS\', \'value\': \'64\'}, {\'name\': \'MAX_BATCH_SIZE\', \'value\': \'128\'}, {\'name\': \'MAX_CACHED_RESPONSES\', \'value\': \'-1\'}, {\'name\': \'URL_ROUTE\', \'value\': \'GPT-J-6B-lit-v2\'}, {\'name\': \'OBJ_ACCESS_KEY_ID\', \'value\': \'LETMTTRMLFFAMTBK\'}, {\'name\': \'OBJ_SECRET_ACCESS_KEY\', \'value\': \'VwwZaqefOOoaouNxUk03oUmK9pVEfruJhjBHPGdgycK\'}, {\'name\': \'OBJ_ENDPOINT\', \'value\': \'https://accel-object.ord1.coreweave.com\'}, {\'name\': \'TENSORIZER_URI\', \'value\': \'s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2\'}, {\'name\': \'RESERVE_MEMORY\', \'value\': \'2048\'}, {\'name\': \'DOWNLOAD_TO_LOCAL\', \'value\': \'/dev/shm/model_cache\'}, {\'name\': \'NUM_GPUS\', \'value\': \'1\'}, {\'name\': \'MK1_QUANTIZATION_PROFILE\', \'value\': \'s0\'}, {\'name\': \'MK1_MKML_LICENSE_KEY\', \'valueFrom\': {\'secretKeyRef\': {\'key\': \'key\', \'name\': \'mkml-license-key\'}}}], \'image\': \'gcr.io/chai-959f8/chai-guanaco/mkml:q4_flywheel_0293\', \'imagePullPolicy\': \'IfNotPresent\', \'name\': \'kserve-container\', \'readinessProbe\': {\'exec\': {\'command\': [\'cat\', \'/tmp/ready\']}, \'failureThreshold\': 1, \'initialDelaySeconds\': 10, \'periodSeconds\': 10, \'successThreshold\': 1, \'timeoutSeconds\': 5}, \'resources\': {\'limits\': {\'cpu\': \'2\', \'memory\': \'14Gi\', \'nvidia.com/gpu\': \'1\'}, \'requests\': {\'cpu\': \'2\', \'memory\': \'14Gi\', \'nvidia.com/gpu\': \'1\'}}, \'volumeMounts\': [{\'mountPath\': \'/dev/shm\', \'name\': \'shared-memory-cache\'}]}], \'imagePullSecrets\': [{\'name\': \'docker-creds\'}], \'maxReplicas\': 1, \'minReplicas\': 1, \'timeout\': 60, \'volumes\': [{\'emptyDir\': {\'medium\': \'Memory\'}, \'name\': \'shared-memory-cache\'}]}}, \'status\': {\'components\': {\'predictor\': {\'latestCreatedRevision\': \'chaiml-0sw-96p-4ff-chai-45017-v2-profiler-predictor-00001\'}}, 
\'conditions\': [{\'lastTransitionTime\': \'2025-06-25T01:52:47Z\', \'reason\': \'PredictorConfigurationReady not ready\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'LatestDeploymentReady\'}, {\'lastTransitionTime\': \'2025-06-25T02:01:18Z\', \'message\': \'Revision "chaiml-0sw-96p-4ff-chai-45017-v2-profiler-predictor-00001" failed with message: Container failed with: e0 CPU(s): 0-31,64-95\\nNUMA node1 CPU(s): 32-63,96-127\\nVulnerability Gather data sampling: Not affected\\nVulnerability Itlb multihit: Not affected\\nVulnerability L1tf: Not affected\\nVulnerability Mds: Not affected\\nVulnerability Meltdown: Not affected\\nVulnerability Mmio stale data: Not affected\\nVulnerability Retbleed: Not affected\\nVulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode\\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\\nVulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\\nVulnerability Srbds: Not affected\\nVulnerability Tsx async abort: Not affected\\nGPU: 24564 MiB, NVIDIA RTX A5000\\nIP: 216.153.55.80\\ndownloading s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2 to /dev/shm/model_cache\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/config.json /dev/shm/model_cache/config.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/special_tokens_map.json /dev/shm/model_cache/special_tokens_map.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/tokenizer_config.json /dev/shm/model_cache/tokenizer_config.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/tokenizer.json /dev/shm/model_cache/tokenizer.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/flywheel_model.2.safetensors /dev/shm/model_cache/flywheel_model.2.safetensors\\nstart_server.sh: line 20: 44 Killed s5cmd --log debug --credentials-file uploading/s5cfg --endpoint-url https://object.ord1.coreweave.com cp --concurrency 4 "$TENSORIZER_URI/*" $DOWNLOAD_TO_LOCAL\\n\\nreal\\t0m21.046s\\nuser\\t0m6.174s\\nsys\\t0m34.715s\\nstart_server.sh: line 23: 93 Killed python3 main.py\\n.\', \'reason\': \'RevisionFailed\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'PredictorConfigurationReady\'}, {\'lastTransitionTime\': \'2025-06-25T01:52:47Z\', \'message\': \'Configuration "chaiml-0sw-96p-4ff-chai-45017-v2-profiler-predictor" does not have any ready Revision.\', \'reason\': \'RevisionMissing\', \'status\': \'False\', \'type\': \'PredictorReady\'}, {\'lastTransitionTime\': \'2025-06-25T01:52:47Z\', \'message\': \'Configuration "chaiml-0sw-96p-4ff-chai-45017-v2-profiler-predictor" does not have any ready Revision.\', \'reason\': \'RevisionMissing\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'PredictorRouteReady\'}, {\'lastTransitionTime\': \'2025-06-25T01:52:47Z\', \'message\': \'Configuration "chaiml-0sw-96p-4ff-chai-45017-v2-profiler-predictor" does not have any ready Revision.\', \'reason\': \'RevisionMissing\', \'status\': \'False\', \'type\': \'Ready\'}, {\'lastTransitionTime\': \'2025-06-25T01:52:47Z\', \'reason\': \'PredictorRouteReady not ready\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'RoutesReady\'}], \'modelStatus\': {\'lastFailureInfo\': {\'exitCode\': 137, \'message\': \'e0 CPU(s): 0-31,64-95\\nNUMA node1 CPU(s): 32-63,96-127\\nVulnerability 
Gather data sampling: Not affected\\nVulnerability Itlb multihit: Not affected\\nVulnerability L1tf: Not affected\\nVulnerability Mds: Not affected\\nVulnerability Meltdown: Not affected\\nVulnerability Mmio stale data: Not affected\\nVulnerability Retbleed: Not affected\\nVulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode\\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\\nVulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\\nVulnerability Srbds: Not affected\\nVulnerability Tsx async abort: Not affected\\nGPU: 24564 MiB, NVIDIA RTX A5000\\nIP: 216.153.55.80\\ndownloading s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2 to /dev/shm/model_cache\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/config.json /dev/shm/model_cache/config.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/special_tokens_map.json /dev/shm/model_cache/special_tokens_map.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/tokenizer_config.json /dev/shm/model_cache/tokenizer_config.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/tokenizer.json /dev/shm/model_cache/tokenizer.json\\ncp s3://guanaco-mkml-models/chaiml-0sw-96p-4ff-chai-45017-v2/flywheel_model.2.safetensors /dev/shm/model_cache/flywheel_model.2.safetensors\\nstart_server.sh: line 20: 44 Killed s5cmd --log debug --credentials-file uploading/s5cfg --endpoint-url https://object.ord1.coreweave.com cp --concurrency 4 "$TENSORIZER_URI/*" $DOWNLOAD_TO_LOCAL\\n\\nreal\\t0m21.046s\\nuser\\t0m6.174s\\nsys\\t0m34.715s\\nstart_server.sh: line 23: 93 Killed python3 main.py\\n\', \'reason\': \'ModelLoadFailed\'}, \'states\': {\'activeModelState\': \'\', \'targetModelState\': \'FailedToLoad\'}, \'transitionStatus\': \'BlockedByFailedLoad\'}, \'observedGeneration\': 1}}')
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.21s
Shutdown handler de-registered
chaiml-0sw-96p-4ff-chai_45017_v2 status is now inactive due to auto deactivation removed underperforming models
chaiml-0sw-96p-4ff-chai_45017_v2 status is now protected due to ABTestQueueItem
chaiml-0sw-96p-4ff-chai_45017_v2 status is now inactive
chaiml-0sw-96p-4ff-chai_45017_v2 status is now torndown due to DeploymentManager action
admin requested tearing down of junhua024-chai-06-full-_30622_v5