Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
run pipeline stage %s
Running pipeline stage VLLMUploader
Starting job with name chaiml-baelor-targaryen-43031-v1-uploader
Waiting for job on chaiml-baelor-targaryen-43031-v1-uploader to finish
chaiml-baelor-targaryen-43031-v1-uploader: Using quantization_mode: fp8
chaiml-baelor-targaryen-43031-v1-uploader: Checking if ChaiML/Baelor-Targaryen260223205536_sft-FP8 already exists in ChaiML
chaiml-baelor-targaryen-43031-v1-uploader: Downloading snapshot of ChaiML/Baelor-Targaryen260223205536_sft...
chaiml-baelor-targaryen-43031-v1-uploader: Downloaded in 167.724s
chaiml-baelor-targaryen-43031-v1-uploader: Loading /tmp/model_input...
chaiml-baelor-targaryen-43031-v1-uploader: The tokenizer you are loading from '/tmp/model_input' has an incorrect regex pattern: https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503/discussions/84#69121093e8b480e709447d5e. This will lead to incorrect tokenization. You should set the `fix_mistral_regex=True` flag when loading this tokenizer to fix this issue.
chaiml-baelor-targaryen-43031-v1-uploader: `torch_dtype` is deprecated! Use `dtype` instead!
chaiml-baelor-targaryen-43031-v1-uploader: Some parameters are on the meta device because they were offloaded to the cpu.
chaiml-baelor-targaryen-43031-v1-uploader: Applying quantization...
HTTP Request: %s %s "%s %d %s"
chaiml-baelor-targaryen-43031-v1-uploader: Some parameters are on the meta device because they were offloaded to the cpu.
chaiml-baelor-targaryen-43031-v1-uploader: 2026-02-23T13:07:40.723799-0800 | finalize | INFO - Compression lifecycle finalized for 1 modifiers
chaiml-baelor-targaryen-43031-v1-uploader: 2026-02-23T13:07:42.874061-0800 | post_process | WARNING - Optimized model is not saved. To save, please provide `output_dir` as an input arg. Ex. `oneshot(..., output_dir=...)`
chaiml-baelor-targaryen-43031-v1-uploader: Saving to /dev/shm/model_output...
chaiml-baelor-targaryen-43031-v1-uploader: 2026-02-23T13:07:42.900847-0800 | get_model_compressor | INFO - skip_sparsity_compression_stats set to True. Skipping sparsity compression statistic calculations. No sparsity compressor will be applied.
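The lines above only show the lifecycle of the FP8 pass, not the arithmetic. As a rough illustration (not the uploader's actual code; the helper names below are hypothetical), per-tensor FP8 quantization picks a scale that maps the largest absolute weight onto the E4M3 representable range (roughly ±448):

```python
FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in FP8 E4M3

def fp8_scale(values):
    """Per-tensor scale mapping max |value| onto the FP8 E4M3 range (1.0 for all-zero input)."""
    amax = max(abs(v) for v in values)
    return amax / FP8_E4M3_MAX if amax else 1.0

def rescale(values):
    """Divide by the scale so every entry fits within +/-448 (real FP8 also rounds mantissas)."""
    s = fp8_scale(values)
    return [v / s for v in values]
```

In practice the compression library does this per tensor (or per channel) and stores the scales alongside the FP8 weights in the checkpoint.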
chaiml-baelor-targaryen-43031-v1-uploader: Cleaning quantization config in /dev/shm/model_output
chaiml-baelor-targaryen-43031-v1-uploader: Pushing to ChaiML/Baelor-Targaryen260223205536_sft-FP8
chaiml-baelor-targaryen-43031-v1-uploader: Checking if ChaiML/Baelor-Targaryen260223205536_sft-FP8 already exists in ChaiML
chaiml-baelor-targaryen-43031-v1-uploader: Creating repo ChaiML/Baelor-Targaryen260223205536_sft-FP8 and uploading /dev/shm/model_output to it
chaiml-baelor-targaryen-43031-v1-uploader: ---------- 2026-02-23 13:08:31 (0:00:00) ----------
chaiml-baelor-targaryen-43031-v1-uploader: Files: hashed 4/13 (274.2K/24.9G) | pre-uploaded: 0/0 (0.0/24.9G) (+13 unsure) | committed: 0/13 (0.0/24.9G) | ignored: 0
chaiml-baelor-targaryen-43031-v1-uploader: Workers: hashing: 12 | get upload mode: 1 | pre-uploading: 0 | committing: 0 | waiting: 113
chaiml-baelor-targaryen-43031-v1-uploader: ---------------------------------------------------
chaiml-baelor-targaryen-43031-v1-uploader: ---------- 2026-02-23 13:09:31 (0:01:00) ----------
chaiml-baelor-targaryen-43031-v1-uploader: Files: hashed 13/13 (24.9G/24.9G) | pre-uploaded: 7/7 (24.9G/24.9G) | committed: 0/13 (0.0/24.9G) | ignored: 0
chaiml-baelor-targaryen-43031-v1-uploader: Workers: hashing: 0 | get upload mode: 0 | pre-uploading: 0 | committing: 1 | waiting: 125
chaiml-baelor-targaryen-43031-v1-uploader: ---------------------------------------------------
chaiml-baelor-targaryen-43031-v1-uploader: Processed model ChaiML/Baelor-Targaryen260223205536_sft in 330.690s
chaiml-baelor-targaryen-43031-v1-uploader: creating bucket guanaco-vllm-models
chaiml-baelor-targaryen-43031-v1-uploader: cp /dev/shm/model_output/special_tokens_map.json s3://guanaco-vllm-models/chaiml-baelor-targaryen-43031-v1/default/special_tokens_map.json
chaiml-baelor-targaryen-43031-v1-uploader: cp /dev/shm/model_output/model.safetensors.index.json s3://guanaco-vllm-models/chaiml-baelor-targaryen-43031-v1/default/model.safetensors.index.json
chaiml-baelor-targaryen-43031-v1-uploader: cp /dev/shm/model_output/tokenizer_config.json s3://guanaco-vllm-models/chaiml-baelor-targaryen-43031-v1/default/tokenizer_config.json
chaiml-baelor-targaryen-43031-v1-uploader: cp /dev/shm/model_output/config.json s3://guanaco-vllm-models/chaiml-baelor-targaryen-43031-v1/default/config.json
chaiml-baelor-targaryen-43031-v1-uploader: cp /dev/shm/model_output/tokenizer.json s3://guanaco-vllm-models/chaiml-baelor-targaryen-43031-v1/default/tokenizer.json
chaiml-baelor-targaryen-43031-v1-uploader: cp /dev/shm/model_output/model-00006-of-00006.safetensors s3://guanaco-vllm-models/chaiml-baelor-targaryen-43031-v1/default/model-00006-of-00006.safetensors
chaiml-baelor-targaryen-43031-v1-uploader: cp /dev/shm/model_output/model-00005-of-00006.safetensors s3://guanaco-vllm-models/chaiml-baelor-targaryen-43031-v1/default/model-00005-of-00006.safetensors
chaiml-baelor-targaryen-43031-v1-uploader: cp /dev/shm/model_output/model-00001-of-00006.safetensors s3://guanaco-vllm-models/chaiml-baelor-targaryen-43031-v1/default/model-00001-of-00006.safetensors
chaiml-baelor-targaryen-43031-v1-uploader: cp /dev/shm/model_output/model-00002-of-00006.safetensors s3://guanaco-vllm-models/chaiml-baelor-targaryen-43031-v1/default/model-00002-of-00006.safetensors
chaiml-baelor-targaryen-43031-v1-uploader: cp /dev/shm/model_output/model-00003-of-00006.safetensors s3://guanaco-vllm-models/chaiml-baelor-targaryen-43031-v1/default/model-00003-of-00006.safetensors
chaiml-baelor-targaryen-43031-v1-uploader: cp /dev/shm/model_output/model-00004-of-00006.safetensors s3://guanaco-vllm-models/chaiml-baelor-targaryen-43031-v1/default/model-00004-of-00006.safetensors
Job chaiml-baelor-targaryen-43031-v1-uploader completed after 402.5s with status: succeeded
Stopping job with name chaiml-baelor-targaryen-43031-v1-uploader
Pipeline stage VLLMUploader completed in 404.11s
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 0.24s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service chaiml-baelor-targaryen-43031-v1
Waiting for inference service chaiml-baelor-targaryen-43031-v1 to be ready
HTTP Request: %s %s "%s %d %s"
HTTP Request: %s %s "%s %d %s"
Inference service chaiml-baelor-targaryen-43031-v1 ready after 151.4397897720337s
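The deployer's wait step above amounts to a poll-until-ready loop with a timeout. A minimal sketch, assuming a `probe` callback for the readiness check (the function and parameter names are hypothetical, not the pipeline's actual API):

```python
import time

def wait_until_ready(probe, timeout_s=600.0, interval_s=5.0,
                     clock=time.monotonic, sleep=time.sleep):
    """Call `probe()` until it returns True; return elapsed seconds, or raise on timeout."""
    start = clock()
    while not probe():
        if clock() - start > timeout_s:
            raise TimeoutError("inference service not ready within %.0fs" % timeout_s)
        sleep(interval_s)
    return clock() - start
```

A real deployment would pass something like `lambda: service_status() == "Ready"` as the probe, with `service_status` backed by whatever the serving platform exposes.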
Pipeline stage VLLMDeployer completed in 152.06s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 0.9439375400543213s
Received healthy response to inference request in 1.0813632011413574s
Received healthy response to inference request in 0.5642797946929932s
Received healthy response to inference request in 1.0026867389678955s
Received healthy response to inference request in 1.291663408279419s
Received healthy response to inference request in 0.9795217514038086s
Received healthy response to inference request in 0.5235674381256104s
Received healthy response to inference request in 1.0948565006256104s
Received healthy response to inference request in 1.0609099864959717s
Received healthy response to inference request in 1.0468571186065674s
Received healthy response to inference request in 0.6344106197357178s
Received healthy response to inference request in 0.6679048538208008s
Received healthy response to inference request in 0.8766500949859619s
Received healthy response to inference request in 1.4266042709350586s
Received healthy response to inference request in 1.3096320629119873s
Received healthy response to inference request in 0.5928618907928467s
Received healthy response to inference request in 1.283099889755249s
Received healthy response to inference request in 1.5252118110656738s
Received healthy response to inference request in 1.2964658737182617s
Received healthy response to inference request in 1.5042688846588135s
Received healthy response to inference request in 1.2127509117126465s
Received healthy response to inference request in 1.089972734451294s
Received healthy response to inference request in 0.8677191734313965s
Received healthy response to inference request in 0.5631380081176758s
Received healthy response to inference request in 1.6954376697540283s
Received healthy response to inference request in 0.9431681632995605s
Received healthy response to inference request in 1.0463778972625732s
Received healthy response to inference request in 0.579599142074585s
Received healthy response to inference request in 1.518587589263916s
Received healthy response to inference request in 0.8408496379852295s
30 requests
0 failed requests
5th percentile: 0.5636518120765686
10th percentile: 0.5780672073364258
20th percentile: 0.6612060070037842
30th percentile: 0.8739708185195922
40th percentile: 0.9652880668640137
50th percentile: 1.0466175079345703
60th percentile: 1.0848070144653321
70th percentile: 1.233855605125427
80th percentile: 1.299099111557007
90th percentile: 1.5057007551193238
95th percentile: 1.5222309112548829
99th percentile: 1.6460721707344057
mean time: 1.0354784886042276
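For reference, a percentile summary like the one above can be reproduced from the raw latencies with linear interpolation between closest ranks (numpy's default method). A self-contained sketch, not the StressChecker's actual code:

```python
import statistics

def percentile(samples, p):
    """p-th percentile with linear interpolation between closest ranks."""
    xs = sorted(samples)
    k = (len(xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

# First few request latencies from the run above, truncated to 3 decimals
latencies = [0.944, 1.081, 0.564, 1.003, 1.292]
summary = {p: percentile(latencies, p) for p in (5, 50, 95, 99)}
mean_latency = statistics.mean(latencies)
```

With all 30 samples this reproduces the percentile table in the log above.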
Pipeline stage StressChecker completed in 36.07s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.63s
Shutdown handler de-registered
chaiml-baelor-targaryen_43031_v1 status is now deployed due to DeploymentManager action