developer_uid: richhx
submission_id: qwen-qwen3-235b-a22b-i_47730_v24
model_name: qwen-qwen3-235b-a22b-i_47730_v24
model_group: Qwen/Qwen3-235B-A22B-Ins
status: torndown
timestamp: 2026-04-16T22:52:27+00:00
num_battles: 11593
num_wins: 5763
celo_rating: 0.0
family_friendly_score: 0.0
family_friendly_standard_error: 0.0
submission_type: basic
model_repo: Qwen/Qwen3-235B-A22B-Instruct-2507
model_architecture: Qwen3MoeForCausalLM
model_num_parameters: 1821417132032.0
best_of: 8
max_input_tokens: 2048
max_output_tokens: 80
reward_model: default
display_name: qwen-qwen3-235b-a22b-i_47730_v24
ineligible_reason: max_output_tokens!=64
is_internal_developer: True
language_model: Qwen/Qwen3-235B-A22B-Instruct-2507
model_size: 1821B
ranking_group: single
us_pacific_date: 2026-04-13
win_ratio: 0.4971103251962391
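The win_ratio above is simply num_wins / num_battles from the two fields earlier in the header; a one-line sanity check:

```python
num_battles = 11593   # from the metadata above
num_wins = 5763

win_ratio = num_wins / num_battles
print(win_ratio)      # matches the recorded 0.4971... to float precision
```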
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['<|user|>', '</think>', '</s>', '<|im_end|>', '####', '<|assistant|>'], 'max_input_tokens': 2048, 'best_of': 8, 'max_output_tokens': 80}
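The stopping_words list in generation_params implies the completion is cut at the earliest stop sequence. A minimal sketch of that truncation, assuming simple substring matching (`truncate_at_stop` is an illustrative helper, not the pipeline's actual code):

```python
# Stop sequences copied from the generation_params above.
STOP_WORDS = ['<|user|>', '</think>', '</s>', '<|im_end|>', '####', '<|assistant|>']

def truncate_at_stop(text, stops=STOP_WORDS):
    """Cut the completion at the earliest occurring stop sequence, if any."""
    cut = min((text.find(s) for s in stops if s in text), default=len(text))
    return text[:cut]
```

For example, `truncate_at_stop("Sure!<|im_end|>leftover")` returns `"Sure!"`.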
formatter: {'memory_template': "<|im_start|>system\n{bot_name}'s persona: {memory}<|im_end|>\n", 'prompt_template': '', 'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n', 'user_template': '<|im_start|>user\n{message}<|im_end|>\n', 'response_template': '<|im_start|>assistant\n{bot_name}:', 'truncate_by_message': True}
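The formatter fields read like Python str.format templates for a ChatML-style prompt. A sketch of how they would compose into a single prompt string (`build_prompt` is a hypothetical helper for illustration, not part of the pipeline):

```python
# Templates copied from the formatter config above.
MEMORY_TEMPLATE = "<|im_start|>system\n{bot_name}'s persona: {memory}<|im_end|>\n"
USER_TEMPLATE = "<|im_start|>user\n{message}<|im_end|>\n"
BOT_TEMPLATE = "<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n"
RESPONSE_TEMPLATE = "<|im_start|>assistant\n{bot_name}:"

def build_prompt(bot_name, memory, turns):
    """Render persona + conversation turns, ending with the open response tag."""
    parts = [MEMORY_TEMPLATE.format(bot_name=bot_name, memory=memory)]
    for role, message in turns:
        template = USER_TEMPLATE if role == "user" else BOT_TEMPLATE
        parts.append(template.format(bot_name=bot_name, message=message))
    parts.append(RESPONSE_TEMPLATE.format(bot_name=bot_name))
    return "".join(parts)
```

The trailing `response_template` leaves the assistant turn open so the model continues from `{bot_name}:`.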
Resubmit model
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
Shutdown handler not registered because Python interpreter is not running in the main thread
2026-04-13T21:08:04.082755+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Post "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00065-of-00072.safetensors?uploads=": EOF
run pipeline %s
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
run pipeline stage %s
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00030-of-00072.safetensors?partNumber=3&uploadId=f1726196-0b01-41e2-ac43-1f38d46dc40f": write tcp 10.0.23.239:45542->166.19.18.1:443: use of closed network connection
Running pipeline stage VLLMUploader
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00044-of-00072.safetensors?partNumber=1&uploadId=ce5a8987-e366-486f-9024-2e3fa156c825": write tcp 10.0.23.239:45510->166.19.18.1:443: use of closed network connection
Starting job with name qwen-qwen3-235b-a22b-i-47730-v24-uploader
chaiml-pony-v2-g46-lr1-80834-v33-uploader: cp /tmp/model_output/model-00044-of-00072.safetensors s3://guanaco-vllm-models/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00044-of-00072.safetensors
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
Waiting for job on qwen-qwen3-235b-a22b-i-47730-v24-uploader to finish
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00038-of-00072.safetensors?partNumber=1&uploadId=6143aebd-b541-4109-a96a-1c718cfdfeda": write tcp 10.0.23.239:45390->166.19.18.1:443: write: connection timed out
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00019-of-00072.safetensors?partNumber=2&uploadId=0bd4ff93-a00a-4690-9d8e-c7ed145288f3": write tcp 10.0.23.239:45522->166.19.18.1:443: write: connection timed out
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00030-of-00072.safetensors?partNumber=4&uploadId=f1726196-0b01-41e2-ac43-1f38d46dc40f": write tcp 10.0.23.239:45456->166.19.18.1:443: write: connection timed out
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00040-of-00072.safetensors?partNumber=3&uploadId=b4b3a24b-f477-4a78-9bcb-44d15eed99a6": write tcp 10.0.23.239:45370->166.19.18.1:443: write: broken pipe
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00023-of-00072.safetensors?partNumber=4&uploadId=0ffd0760-2fe8-49e4-897d-8926ef46bf6d": write tcp 10.0.23.239:45448->166.19.18.1:443: write: broken pipe
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00043-of-00072.safetensors?partNumber=2&uploadId=f7ad4c1e-caac-4d34-bda6-e298241c91b6": write tcp 10.0.23.239:45468->166.19.18.1:443: write: broken pipe
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/tokenizer.json": write tcp 10.0.23.239:45398->166.19.18.1:443: write: broken pipe
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00042-of-00072.safetensors?partNumber=3&uploadId=9791b8e9-2fc1-440c-96a3-ad9e819482ad": write tcp 10.0.23.239:45342->166.19.18.1:443: write: connection timed out
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00024-of-00072.safetensors?partNumber=2&uploadId=1fb6b34f-fb9a-400b-affb-dd078847b145": write tcp 10.0.23.239:45374->166.19.18.1:443: write: broken pipe
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00041-of-00072.safetensors?partNumber=2&uploadId=60a04136-258e-4be0-adc3-2553a8f86e5a": write tcp 10.0.23.239:45558->166.19.18.1:443: write: connection timed out
chaiml-pony-v2-g46-lr1-80834-v33-uploader: cp /tmp/model_output/tokenizer.json s3://guanaco-vllm-models/chaiml-pony-v2-g46-lr1-80834-v33/default/tokenizer.json
chaiml-pony-v2-g46-lr1-80834-v33-uploader: cp /tmp/model_output/model-00042-of-00072.safetensors s3://guanaco-vllm-models/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00042-of-00072.safetensors
chaiml-pony-v2-g46-lr1-80834-v33-uploader: cp /tmp/model_output/model-00030-of-00072.safetensors s3://guanaco-vllm-models/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00030-of-00072.safetensors
chaiml-pony-v2-g46-lr1-80834-v33-uploader: cp /tmp/model_output/model-00043-of-00072.safetensors s3://guanaco-vllm-models/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00043-of-00072.safetensors
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00020-of-00072.safetensors?partNumber=1&uploadId=8010bb20-eaeb-40f2-bad4-8956551aca55": write tcp 10.0.23.239:45410->166.19.18.1:443: write: connection timed out
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00023-of-00072.safetensors?partNumber=1&uploadId=0ffd0760-2fe8-49e4-897d-8926ef46bf6d": write tcp 10.0.23.239:45440->166.19.18.1:443: write: broken pipe
chaiml-pony-v2-g46-lr1-80834-v33-uploader: DEBUG retryable error: RequestError: send request failed
chaiml-pony-v2-g46-lr1-80834-v33-uploader: caused by: Put "https://guanaco-vllm-models.cwobject.com/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00034-of-00072.safetensors?partNumber=3&uploadId=e4dfa805-b833-44e7-9e7b-d4ddfd7e5649": write tcp 10.0.23.239:45520->166.19.18.1:443: write: connection timed out
chaiml-pony-v2-g46-lr1-80834-v33-uploader: cp /tmp/model_output/model-00034-of-00072.safetensors s3://guanaco-vllm-models/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00034-of-00072.safetensors
chaiml-pony-v2-g46-lr1-80834-v33-uploader: cp /tmp/model_output/model-00020-of-00072.safetensors s3://guanaco-vllm-models/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00020-of-00072.safetensors
chaiml-pony-v2-g46-lr1-80834-v33-uploader: cp /tmp/model_output/model-00023-of-00072.safetensors s3://guanaco-vllm-models/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00023-of-00072.safetensors
chaiml-pony-v2-g46-lr1-80834-v33-uploader: cp /tmp/model_output/model-00065-of-00072.safetensors s3://guanaco-vllm-models/chaiml-pony-v2-g46-lr1-80834-v33/default/model-00065-of-00072.safetensors
Job chaiml-pony-v2-g46-lr1-80834-v33-uploader completed after 395.69s with status: succeeded
Stopping job with name chaiml-pony-v2-g46-lr1-80834-v33-uploader
Pipeline stage VLLMUploader completed in 397.51s
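The repeated "DEBUG retryable error: RequestError: send request failed" lines above show the uploader absorbing transient timeouts, broken pipes, and closed connections until each multipart Put eventually succeeds. The generic pattern is exponential backoff around the upload call; a minimal sketch under that assumption (names are illustrative, not the uploader's actual code):

```python
import time

def with_retries(fn, attempts=5, base_delay=0.5):
    """Call fn(), retrying transient network errors with exponential backoff."""
    for i in range(attempts):
        try:
            return fn()
        except (ConnectionError, TimeoutError, BrokenPipeError) as exc:
            if i == attempts - 1:
                raise  # retries exhausted; surface the error
            print(f"DEBUG retryable error: {exc}")
            time.sleep(base_delay * 2 ** i)  # 0.5s, 1s, 2s, 4s, ...
```

Each part (the partNumber/uploadId pairs in the URLs above) can be retried independently, since re-uploading a part of a multipart upload is idempotent.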
run pipeline stage %s
Running pipeline stage VLLMUploaderAMD
Pipeline stage vllm_upload_amd skipped, reason=not amd cluster
Pipeline stage VLLMUploaderAMD completed in 0.82s
run pipeline stage %s
Running pipeline stage VLLMTemplater
2026-04-13T21:08:44.100199+00:00 monitor updated for chaiml-kaniwara-japan-fu_433_v10
Pipeline stage VLLMTemplater completed in 1.81s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service chaiml-pony-v2-g46-lr1-80834-v33
Waiting for inference service chaiml-pony-v2-g46-lr1-80834-v33 to be ready
qwen-qwen3-235b-a22b-i-47730-v24-uploader: Using quantization_mode: w4a16
qwen-qwen3-235b-a22b-i-47730-v24-uploader: Checking if ChaiML/Qwen3-235B-A22B-Instruct-2507-W4A16 already exists in ChaiML
qwen-qwen3-235b-a22b-i-47730-v24-uploader: Downloading snapshot of Qwen/Qwen3-235B-A22B-Instruct-2507...
2026-04-13T21:09:03.825873+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:09:04.772575+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
2026-04-13T21:09:44.561079+00:00 monitor updated for chaiml-kaniwara-japan-fu_433_v10
2026-04-13T21:10:04.277326+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:10:05.216846+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
Inference service chaiml-kaniwara-japan-fu-433-v10 ready after 161.98267555236816s
Pipeline stage VLLMDeployer completed in 164.03s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 4.661473274230957s
Received healthy response to inference request in 4.862468242645264s
2026-04-13T21:10:45.034270+00:00 monitor updated for chaiml-kaniwara-japan-fu_433_v10
Received healthy response to inference request in 2.8098244667053223s
Received healthy response to inference request in 2.8670241832733154s
Received healthy response to inference request in 2.960700035095215s
Received healthy response to inference request in 2.769190788269043s
Received healthy response to inference request in 3.2361013889312744s
Received healthy response to inference request in 3.0505425930023193s
2026-04-13T21:11:04.986946+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:11:05.696612+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
Received healthy response to inference request in 3.415175676345825s
Received healthy response to inference request in 2.916670083999634s
Received healthy response to inference request in 2.792447328567505s
Received healthy response to inference request in 2.8960628509521484s
Received healthy response to inference request in 2.8704681396484375s
Received healthy response to inference request in 2.974917411804199s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: Downloaded in 153.489s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: Applying quantization...
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:11:23 INFO base.py L473: `enable_opt_rtn` is turned on, set `--disable_opt_rtn` for higher speed at the cost of accuracy.
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:11:23 INFO base.py L517: using torch.bfloat16 for quantization tuning
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:11:23 WARNING formats.py L166: some layers are skipped quantization (shape not divisible by 32): 
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:11:23 INFO base.py L1660: Using predefined ignore_layers: model.layers.[0-93].mlp.gate
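The predefined ignore_layers entry uses a range spec, `model.layers.[0-93].mlp.gate`, covering the MoE router gate in every layer (router gates are typically left unquantized because expert routing is precision-sensitive). A hypothetical helper showing how such a spec expands into concrete module names (the quantizer presumably does something similar internally):

```python
import re

def expand_layer_spec(spec):
    """Expand 'prefix.[lo-hi].suffix' range specs into concrete module names."""
    m = re.fullmatch(r"(.*)\[(\d+)-(\d+)\](.*)", spec)
    if not m:
        return [spec]  # no range: already a concrete name
    prefix, lo, hi, suffix = m.group(1), int(m.group(2)), int(m.group(3)), m.group(4)
    return [f"{prefix}{i}{suffix}" for i in range(lo, hi + 1)]

names = expand_layer_spec("model.layers.[0-93].mlp.gate")  # 94 gate modules
```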
Received healthy response to inference request in 2.927828073501587s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:11:25 INFO base.py L1150: start to compute imatrix
Received healthy response to inference request in 2.7502920627593994s
Received healthy response to inference request in 2.9118502140045166s
Received healthy response to inference request in 2.7684085369110107s
Received healthy response to inference request in 4.7864813804626465s
2026-04-13T21:11:45.747304+00:00 monitor updated for chaiml-kaniwara-japan-fu_433_v10
Received healthy response to inference request in 2.7500486373901367s
Received healthy response to inference request in 4.618350505828857s
Received healthy response to inference request in 2.7546370029449463s
Received healthy response to inference request in 2.876657724380493s
Received healthy response to inference request in 2.745826482772827s
Received healthy response to inference request in 2.7761542797088623s
2026-04-13T21:12:05.633589+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:12:06.195270+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
Received healthy response to inference request in 2.9927659034729004s
Received healthy response to inference request in 2.846034288406372s
Received healthy response to inference request in 3.017359972000122s
Received healthy response to inference request in 2.9151771068573s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:12:16 WARNING base.py L1270: MoE layer detected: optimized RTN is disabled for efficiency. Use `--enable_opt_rtn` to force-enable it for MoE layers.
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:12:18 INFO device.py L1692: 'peak_ram': 19.11GB, 'peak_vram': 11.38GB
Received healthy response to inference request in 4.663482904434204s
30 requests
0 failed requests
5th percentile: 2.7501581788063048
10th percentile: 2.7542025089263915
20th percentile: 2.7747615814208983
30th percentile: 2.835171341896057
40th percentile: 2.874181890487671
50th percentile: 2.913513660430908
60th percentile: 2.940976858139038
70th percentile: 3.000144124031067
80th percentile: 3.271916246414185
90th percentile: 4.661674237251281
95th percentile: 4.731132066249847
99th percentile: 4.840432052612305
mean time: 3.206147384643555
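The StressChecker summary above reduces 30 latency samples to percentiles and a mean. Assuming linear interpolation between sorted samples (numpy's default percentile convention, which plausibly produced the figures above), the computation looks like:

```python
def percentile(samples, p):
    """p-th percentile with linear interpolation between sorted samples."""
    xs = sorted(samples)
    k = (len(xs) - 1) * p / 100.0   # fractional rank into the sorted list
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

# Illustrative subset of the latencies above, not the full 30-sample set.
latencies = [2.75, 2.81, 2.87, 2.92, 3.05, 3.24, 4.66, 4.86]
p50 = percentile(latencies, 50)
mean = sum(latencies) / len(latencies)
```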
Pipeline stage StressChecker completed in 113.70s
Shutdown handler de-registered
chaiml-kaniwara-japan-fu_433_v10 status is now deployed due to DeploymentManager action
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:12:24 INFO device.py L1692: 'peak_ram': 21.59GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:12:31 INFO device.py L1692: 'peak_ram': 21.59GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:12:42 INFO device.py L1692: 'peak_ram': 26.52GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:12:48 INFO device.py L1692: 'peak_ram': 27.41GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:12:54 INFO device.py L1692: 'peak_ram': 28.33GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:13:00 INFO device.py L1692: 'peak_ram': 28.33GB, 'peak_vram': 11.38GB
2026-04-13T21:13:06.107853+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:13:06.659989+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:13:11 INFO device.py L1692: 'peak_ram': 28.33GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:13:17 INFO device.py L1692: 'peak_ram': 28.33GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:13:23 INFO device.py L1692: 'peak_ram': 28.33GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:13:30 INFO device.py L1692: 'peak_ram': 28.33GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:13:39 INFO device.py L1692: 'peak_ram': 28.33GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:13:45 INFO device.py L1692: 'peak_ram': 28.38GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:13:51 INFO device.py L1692: 'peak_ram': 29.49GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:13:57 INFO device.py L1692: 'peak_ram': 29.49GB, 'peak_vram': 11.38GB
2026-04-13T21:14:06.462901+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:14:07.044288+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:14:06 INFO device.py L1692: 'peak_ram': 29.84GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:14:13 INFO device.py L1692: 'peak_ram': 30.87GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:14:19 INFO device.py L1692: 'peak_ram': 30.87GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:14:25 INFO device.py L1692: 'peak_ram': 30.87GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:14:35 INFO device.py L1692: 'peak_ram': 30.87GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:14:42 INFO device.py L1692: 'peak_ram': 30.87GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:14:48 INFO device.py L1692: 'peak_ram': 30.87GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:14:54 INFO device.py L1692: 'peak_ram': 30.91GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:15:03 INFO device.py L1692: 'peak_ram': 30.91GB, 'peak_vram': 11.38GB
2026-04-13T21:15:06.952514+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:15:07.424779+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:15:09 INFO device.py L1692: 'peak_ram': 30.91GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:15:15 INFO device.py L1692: 'peak_ram': 32.12GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:15:21 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:15:32 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:15:38 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:15:44 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:15:50 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
2026-04-13T21:16:07.290626+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:16:07.773791+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:15:59 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:16:05 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:16:12 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:16:18 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:16:27 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:16:34 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:16:40 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:16:47 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:16:58 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
2026-04-13T21:17:07.642690+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:17:08.127862+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:17:10 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:17:19 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:17:31 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:17:38 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:17:47 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:17:54 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
2026-04-13T21:18:07.982967+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:18:00 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:18:06 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
2026-04-13T21:18:08.484848+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:18:17 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:18:23 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:18:29 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:18:35 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:18:44 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:18:50 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:18:57 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
2026-04-13T21:19:08.351021+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:19:09.238360+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:19:04 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:19:14 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:19:21 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:19:27 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:19:33 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:19:42 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:19:48 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:19:54 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:20:00 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
2026-04-13T21:20:08.736408+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:20:09.611712+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage VLLMUploader
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:20:10 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
Starting job with name cyankiwi-gemma-4-31b-it-43377-v3-uploader
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:20:16 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
Waiting for job on cyankiwi-gemma-4-31b-it-43377-v3-uploader to finish
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:20:28 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
cyankiwi-gemma-4-31b-it-43377-v3-uploader: cyankiwi/gemma-4-31B-it-AWQ-4bit is already quantized
cyankiwi-gemma-4-31b-it-43377-v3-uploader: Using quantization_mode: none
cyankiwi-gemma-4-31b-it-43377-v3-uploader: Downloading snapshot of cyankiwi/gemma-4-31B-it-AWQ-4bit...
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:20:38 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:20:44 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
cyankiwi-gemma-4-31b-it-43377-v3-uploader: Downloaded in 13.623s
cyankiwi-gemma-4-31b-it-43377-v3-uploader: Processed model cyankiwi/gemma-4-31B-it-AWQ-4bit in 13.877s
cyankiwi-gemma-4-31b-it-43377-v3-uploader: creating bucket guanaco-vllm-models
cyankiwi-gemma-4-31b-it-43377-v3-uploader: /usr/lib/python3/dist-packages/S3/BaseUtils.py:56: SyntaxWarning: invalid escape sequence '\.'
cyankiwi-gemma-4-31b-it-43377-v3-uploader: RE_S3_DATESTRING = re.compile('\.[0-9]*(?:[Z\\-\\+]*?)')
cyankiwi-gemma-4-31b-it-43377-v3-uploader: /usr/lib/python3/dist-packages/S3/BaseUtils.py:57: SyntaxWarning: invalid escape sequence '\s'
cyankiwi-gemma-4-31b-it-43377-v3-uploader: RE_XML_NAMESPACE = re.compile(b'^(<?[^>]+?>\s*|\s*)(<\w+) xmlns=[\'"](https?://[^\'"]+)[\'"]', re.MULTILINE)
cyankiwi-gemma-4-31b-it-43377-v3-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:240: SyntaxWarning: invalid escape sequence '\.'
cyankiwi-gemma-4-31b-it-43377-v3-uploader: invalid = re.search("([^a-z0-9\.-])", bucket, re.UNICODE)
cyankiwi-gemma-4-31b-it-43377-v3-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:244: SyntaxWarning: invalid escape sequence '\.'
cyankiwi-gemma-4-31b-it-43377-v3-uploader: invalid = re.search("([^A-Za-z0-9\._-])", bucket, re.UNICODE)
cyankiwi-gemma-4-31b-it-43377-v3-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:255: SyntaxWarning: invalid escape sequence '\.'
cyankiwi-gemma-4-31b-it-43377-v3-uploader: if re.search("-\.", bucket, re.UNICODE):
cyankiwi-gemma-4-31b-it-43377-v3-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:257: SyntaxWarning: invalid escape sequence '\.'
cyankiwi-gemma-4-31b-it-43377-v3-uploader: if re.search("\.\.", bucket, re.UNICODE):
cyankiwi-gemma-4-31b-it-43377-v3-uploader: /usr/lib/python3/dist-packages/S3/S3Uri.py:155: SyntaxWarning: invalid escape sequence '\w'
cyankiwi-gemma-4-31b-it-43377-v3-uploader: _re = re.compile("^(\w+://)?(.*)", re.UNICODE)
cyankiwi-gemma-4-31b-it-43377-v3-uploader: /usr/lib/python3/dist-packages/S3/FileLists.py:480: SyntaxWarning: invalid escape sequence '\*'
cyankiwi-gemma-4-31b-it-43377-v3-uploader: wildcard_split_result = re.split("\*|\?", uri_str, maxsplit=1)
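The SyntaxWarnings above come from the s3cmd sources writing regex patterns as plain strings: since Python 3.12, unrecognized escapes such as `'\.'` are flagged as invalid escape sequences. The fix is to use raw strings so the backslash reaches the regex engine verbatim; a sketch using two of the flagged patterns:

```python
import re

# Raw-string versions of the patterns flagged above: no SyntaxWarning,
# identical matching behavior.
RE_S3_DATESTRING = re.compile(r'\.[0-9]*(?:[Z\-\+]*?)')
BUCKET_INVALID = re.compile(r"([^a-z0-9\.-])", re.UNICODE)
```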
cyankiwi-gemma-4-31b-it-43377-v3-uploader: Bucket 's3://guanaco-vllm-models/' created
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:20:50 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
cyankiwi-gemma-4-31b-it-43377-v3-uploader: uploading /tmp/model_output to s3://guanaco-vllm-models/cyankiwi-gemma-4-31b-it-43377-v3/default
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:20:57 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
cyankiwi-gemma-4-31b-it-43377-v3-uploader: cp /tmp/model_output/.gitattributes s3://guanaco-vllm-models/cyankiwi-gemma-4-31b-it-43377-v3/default/.gitattributes
cyankiwi-gemma-4-31b-it-43377-v3-uploader: cp /tmp/model_output/tokenizer_config.json s3://guanaco-vllm-models/cyankiwi-gemma-4-31b-it-43377-v3/default/tokenizer_config.json
cyankiwi-gemma-4-31b-it-43377-v3-uploader: cp /tmp/model_output/README.md s3://guanaco-vllm-models/cyankiwi-gemma-4-31b-it-43377-v3/default/README.md
cyankiwi-gemma-4-31b-it-43377-v3-uploader: cp /tmp/model_output/generation_config.json s3://guanaco-vllm-models/cyankiwi-gemma-4-31b-it-43377-v3/default/generation_config.json
cyankiwi-gemma-4-31b-it-43377-v3-uploader: cp /tmp/model_output/model.safetensors.index.json s3://guanaco-vllm-models/cyankiwi-gemma-4-31b-it-43377-v3/default/model.safetensors.index.json
cyankiwi-gemma-4-31b-it-43377-v3-uploader: cp /tmp/model_output/processor_config.json s3://guanaco-vllm-models/cyankiwi-gemma-4-31b-it-43377-v3/default/processor_config.json
cyankiwi-gemma-4-31b-it-43377-v3-uploader: cp /tmp/model_output/config.json s3://guanaco-vllm-models/cyankiwi-gemma-4-31b-it-43377-v3/default/config.json
cyankiwi-gemma-4-31b-it-43377-v3-uploader: cp /tmp/model_output/tokenizer.json s3://guanaco-vllm-models/cyankiwi-gemma-4-31b-it-43377-v3/default/tokenizer.json
cyankiwi-gemma-4-31b-it-43377-v3-uploader: cp /tmp/model_output/chat_template.jinja s3://guanaco-vllm-models/cyankiwi-gemma-4-31b-it-43377-v3/default/chat_template.jinja
2026-04-13T21:21:09.325320+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:21:09.994145+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:21:05 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
2026-04-13T21:21:14.798625+00:00 monitor updated for cyankiwi-gemma-4-31b-it_43377_v3
Job cyankiwi-gemma-4-31b-it-43377-v3-uploader completed after 47.28s with status: succeeded
Stopping job with name cyankiwi-gemma-4-31b-it-43377-v3-uploader
Pipeline stage VLLMUploader completed in 60.26s
run pipeline stage %s
Running pipeline stage VLLMUploaderAMD
Pipeline stage vllm_upload_amd skipped, reason=not amd cluster
Pipeline stage VLLMUploaderAMD completed in 0.91s
run pipeline stage %s
Running pipeline stage VLLMTemplater
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:21:18 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
Pipeline stage VLLMTemplater completed in 3.20s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service cyankiwi-gemma-4-31b-it-43377-v3
Waiting for inference service cyankiwi-gemma-4-31b-it-43377-v3 to be ready
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:21:24 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:21:35 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:21:41 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:21:47 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:21:55 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:22:01 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
2026-04-13T21:22:09.782596+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:22:10.474180+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
2026-04-13T21:22:15.302613+00:00 monitor updated for cyankiwi-gemma-4-31b-it_43377_v3
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:22:07 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:22:13 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:22:22 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:22:29 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:22:35 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:22:41 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:22:52 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:22:58 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
2026-04-13T21:23:10.305776+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:23:10.959958+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:23:03 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:23:09 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
2026-04-13T21:23:15.795264+00:00 monitor updated for cyankiwi-gemma-4-31b-it_43377_v3
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:23:18 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:23:22 INFO shard_writer.py L250: model has been saved to /tmp/model_output/
qwen-qwen3-235b-a22b-i-47730-v24-uploader: 2026-04-13 21:23:23 INFO device.py L1692: 'peak_ram': 34.42GB, 'peak_vram': 11.38GB
qwen-qwen3-235b-a22b-i-47730-v24-uploader: Checking if ChaiML/Qwen3-235B-A22B-Instruct-2507-W4A16 already exists in ChaiML
2026-04-13T21:24:10.875108+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:24:11.457124+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
2026-04-13T21:24:16.312893+00:00 monitor updated for cyankiwi-gemma-4-31b-it_43377_v3
Inference service cyankiwi-gemma-4-31b-it-43377-v3 ready after 172.2768907546997s
Pipeline stage VLLMDeployer completed in 175.20s
run pipeline stage %s
Running pipeline stage StressChecker
qwen-qwen3-235b-a22b-i-47730-v24-uploader:       
qwen-qwen3-235b-a22b-i-47730-v24-uploader: ---------- 2026-04-13 21:24:24 (0:01:00) ----------
qwen-qwen3-235b-a22b-i-47730-v24-uploader: Files: hashed 36/36 (131.9G/131.9G) | pre-uploaded: 26/26 (131.9G/131.9G) | committed: 0/36 (0.0/131.9G) | ignored: 0
qwen-qwen3-235b-a22b-i-47730-v24-uploader: Workers: hashing: 0 | get upload mode: 0 | pre-uploading: 0 | committing: 1 | waiting: 125
Received healthy response to inference request in 9.005492210388184s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: ---------------------------------------------------
Received healthy response to inference request in 9.342642307281494s
Received healthy response to inference request in 9.192247152328491s
Received healthy response to inference request in 2.8636105060577393s
Received healthy response to inference request in 8.202306985855103s
Received healthy response to inference request in 3.235755443572998s
2026-04-13T21:25:11.382427+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:25:11.936544+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
Received healthy response to inference request in 9.07116174697876s
2026-04-13T21:25:16.807946+00:00 monitor updated for cyankiwi-gemma-4-31b-it_43377_v3
Received healthy response to inference request in 2.8545331954956055s
Received healthy response to inference request in 2.9104204177856445s
Received healthy response to inference request in 3.2239205837249756s
Received healthy response to inference request in 3.761776924133301s
Received healthy response to inference request in 2.779815435409546s
qwen-qwen3-235b-a22b-i-47730-v24-uploader:       
qwen-qwen3-235b-a22b-i-47730-v24-uploader: ---------- 2026-04-13 21:25:24 (0:02:00) ----------
qwen-qwen3-235b-a22b-i-47730-v24-uploader: Files: hashed 36/36 (131.9G/131.9G) | pre-uploaded: 26/26 (131.9G/131.9G) | committed: 0/36 (0.0/131.9G) | ignored: 0
qwen-qwen3-235b-a22b-i-47730-v24-uploader: Workers: hashing: 0 | get upload mode: 0 | pre-uploading: 0 | committing: 1 | waiting: 125
qwen-qwen3-235b-a22b-i-47730-v24-uploader: ---------------------------------------------------
Received healthy response to inference request in 2.8235597610473633s
Received healthy response to inference request in 2.845670461654663s
Received healthy response to inference request in 2.8032097816467285s
Received healthy response to inference request in 2.782843589782715s
Received healthy response to inference request in 2.8434243202209473s
Received healthy response to inference request in 2.832632064819336s
Received healthy response to inference request in 2.905041456222534s
Received healthy response to inference request in 2.7912135124206543s
Received healthy response to inference request in 2.809849977493286s
Received healthy response to inference request in 2.8274989128112793s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: Processed model Qwen/Qwen3-235B-A22B-Instruct-2507 in 1035.319s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: creating bucket guanaco-vllm-models
qwen-qwen3-235b-a22b-i-47730-v24-uploader: /usr/lib/python3/dist-packages/S3/BaseUtils.py:56: SyntaxWarning: invalid escape sequence '\.'
qwen-qwen3-235b-a22b-i-47730-v24-uploader: RE_S3_DATESTRING = re.compile('\.[0-9]*(?:[Z\\-\\+]*?)')
qwen-qwen3-235b-a22b-i-47730-v24-uploader: /usr/lib/python3/dist-packages/S3/BaseUtils.py:57: SyntaxWarning: invalid escape sequence '\s'
Received healthy response to inference request in 2.869008779525757s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: RE_XML_NAMESPACE = re.compile(b'^(<?[^>]+?>\s*|\s*)(<\w+) xmlns=[\'"](https?://[^\'"]+)[\'"]', re.MULTILINE)
qwen-qwen3-235b-a22b-i-47730-v24-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:240: SyntaxWarning: invalid escape sequence '\.'
qwen-qwen3-235b-a22b-i-47730-v24-uploader: invalid = re.search("([^a-z0-9\.-])", bucket, re.UNICODE)
qwen-qwen3-235b-a22b-i-47730-v24-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:244: SyntaxWarning: invalid escape sequence '\.'
qwen-qwen3-235b-a22b-i-47730-v24-uploader: invalid = re.search("([^A-Za-z0-9\._-])", bucket, re.UNICODE)
qwen-qwen3-235b-a22b-i-47730-v24-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:255: SyntaxWarning: invalid escape sequence '\.'
qwen-qwen3-235b-a22b-i-47730-v24-uploader: if re.search("-\.", bucket, re.UNICODE):
qwen-qwen3-235b-a22b-i-47730-v24-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:257: SyntaxWarning: invalid escape sequence '\.'
Received healthy response to inference request in 2.7947070598602295s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: if re.search("\.\.", bucket, re.UNICODE):
qwen-qwen3-235b-a22b-i-47730-v24-uploader: /usr/lib/python3/dist-packages/S3/S3Uri.py:155: SyntaxWarning: invalid escape sequence '\w'
2026-04-13T21:26:11.960087+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
qwen-qwen3-235b-a22b-i-47730-v24-uploader: _re = re.compile("^(\w+://)?(.*)", re.UNICODE)
2026-04-13T21:26:12.476085+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
qwen-qwen3-235b-a22b-i-47730-v24-uploader: /usr/lib/python3/dist-packages/S3/FileLists.py:480: SyntaxWarning: invalid escape sequence '\*'
qwen-qwen3-235b-a22b-i-47730-v24-uploader: wildcard_split_result = re.split("\*|\?", uri_str, maxsplit=1)
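The SyntaxWarning lines above come from s3cmd compiling regexes written as ordinary string literals, where escapes like '\.' are invalid escape sequences that Python 3.12+ warns about at compile time. A minimal illustration of the warning-free raw-string form (a generic sketch, not s3cmd's actual code):

```python
import re

# Written as a raw string, '\.' is a literal regex escape and produces no
# SyntaxWarning. The non-raw form re.compile('\.[0-9]*...') is what s3cmd
# uses and what triggers the warnings in the log above.
RE_S3_DATESTRING = re.compile(r'\.[0-9]*(?:[Z\-\+]*?)')

# The pattern matches the fractional-second suffix of an S3-style timestamp.
match = RE_S3_DATESTRING.search("2026-04-13T21:26:11.960087Z")
```

The raw-string prefix `r` is the usual fix: it keeps the regex text identical while telling Python not to interpret backslash sequences in the string literal itself.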
qwen-qwen3-235b-a22b-i-47730-v24-uploader: Bucket 's3://guanaco-vllm-models/' created
qwen-qwen3-235b-a22b-i-47730-v24-uploader: uploading /tmp/model_output to s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default
Received healthy response to inference request in 3.0371906757354736s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/merges.txt s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/merges.txt
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/quantization_config.json s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/quantization_config.json
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/added_tokens.json s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/added_tokens.json
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/tokenizer_config.json s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/tokenizer_config.json
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/generation_config.json s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/generation_config.json
2026-04-13T21:26:17.331699+00:00 monitor updated for cyankiwi-gemma-4-31b-it_43377_v3
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/config.json s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/config.json
Received healthy response to inference request in 2.8595998287200928s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/vocab.json s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/vocab.json
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/special_tokens_map.json s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/special_tokens_map.json
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/chat_template.jinja s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/chat_template.jinja
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model.safetensors.index.json s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model.safetensors.index.json
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/tokenizer.json s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/tokenizer.json
Received healthy response to inference request in 3.0183205604553223s
Received healthy response to inference request in 2.9402248859405518s
Received healthy response to inference request in 2.838000774383545s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00002-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00002-of-00025.safetensors
Received healthy response to inference request in 2.9090070724487305s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00023-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00023-of-00025.safetensors
30 requests
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00018-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00018-of-00025.safetensors
0 failed requests
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00024-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00024-of-00025.safetensors
5th percentile: 2.7866100549697874
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00001-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00001-of-00025.safetensors
10th percentile: 2.794357705116272
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00021-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00021-of-00025.safetensors
20th percentile: 2.820817804336548
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00013-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00013-of-00025.safetensors
30th percentile: 2.836390161514282
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00009-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00009-of-00025.safetensors
40th percentile: 2.8509881019592287
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00015-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00015-of-00025.safetensors
50th percentile: 2.866309642791748
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00012-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00012-of-00025.safetensors
60th percentile: 2.909572410583496
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00008-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00008-of-00025.safetensors
70th percentile: 3.0239815950393676
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00014-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00014-of-00025.safetensors
80th percentile: 3.34095973968506
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00019-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00019-of-00025.safetensors
90th percentile: 9.01205916404724
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00016-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00016-of-00025.safetensors
95th percentile: 9.137758719921111
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00007-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00007-of-00025.safetensors
99th percentile: 9.299027712345124
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00006-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00006-of-00025.safetensors
mean time: 3.932489546140035
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00020-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00020-of-00025.safetensors
Pipeline stage StressChecker completed in 139.42s
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00017-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00017-of-00025.safetensors
Shutdown handler de-registered
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00004-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00004-of-00025.safetensors
cyankiwi-gemma-4-31b-it_43377_v3 status is now deployed due to DeploymentManager action
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00003-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00003-of-00025.safetensors
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00011-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00011-of-00025.safetensors
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00022-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00022-of-00025.safetensors
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00010-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00010-of-00025.safetensors
qwen-qwen3-235b-a22b-i-47730-v24-uploader: cp /tmp/model_output/model-00005-of-00025.safetensors s3://guanaco-vllm-models/qwen-qwen3-235b-a22b-i-47730-v24/default/model-00005-of-00025.safetensors
Job qwen-qwen3-235b-a22b-i-47730-v24-uploader completed after 1125.33s with status: succeeded
Stopping job with name qwen-qwen3-235b-a22b-i-47730-v24-uploader
Pipeline stage VLLMUploader completed in 1127.83s
run pipeline stage %s
Running pipeline stage VLLMUploaderAMD
Pipeline stage vllm_upload_amd skipped, reason=not amd cluster
Pipeline stage VLLMUploaderAMD completed in 0.67s
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 1.36s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service qwen-qwen3-235b-a22b-i-47730-v24
Waiting for inference service qwen-qwen3-235b-a22b-i-47730-v24 to be ready
2026-04-13T21:27:12.798298+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:27:13.324496+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
2026-04-13T21:28:13.466679+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:28:13.818024+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
2026-04-13T21:29:13.879684+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:29:14.272958+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
2026-04-13T21:30:14.302412+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:30:14.674812+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
2026-04-13T21:31:14.728624+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:31:15.137199+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
2026-04-13T21:32:15.186907+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:32:15.552669+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
Inference service qwen-qwen3-235b-a22b-i-47730-v24 ready after 334.00974798202515s
Pipeline stage VLLMDeployer completed in 336.11s
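The "Waiting for inference service ... to be ready" step above is a poll-until-ready loop whose elapsed time the deployer then reports ("ready after 334.0s"). A hypothetical sketch of that pattern; the names `is_ready` and `wait_for_ready` are illustrative, not the pipeline's actual API:

```python
import time

def wait_for_ready(is_ready, timeout_s=600.0, poll_interval_s=5.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll `is_ready()` until it returns True or `timeout_s` elapses.

    Returns the elapsed wait in seconds; raises TimeoutError on timeout.
    `clock` and `sleep` are injectable so the loop can be tested without
    real waiting.
    """
    start = clock()
    while True:
        if is_ready():
            return clock() - start
        if clock() - start >= timeout_s:
            raise TimeoutError("inference service not ready in time")
        sleep(poll_interval_s)
```

Injecting the clock and sleep functions is a common design choice for loops like this: production code passes nothing and gets real time, while tests substitute a fake clock.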
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 4.396897315979004s
Received healthy response to inference request in 4.28605055809021s
Received healthy response to inference request in 4.683693885803223s
Received healthy response to inference request in 2.0493767261505127s
Received healthy response to inference request in 2.315415382385254s
Received healthy response to inference request in 1.9890460968017578s
Received healthy response to inference request in 1.9222524166107178s
Received healthy response to inference request in 2.2911996841430664s
Received healthy response to inference request in 2.129863739013672s
Received healthy response to inference request in 2.1197829246520996s
Received healthy response to inference request in 4.669481515884399s
Received healthy response to inference request in 2.1055026054382324s
Received healthy response to inference request in 1.9669063091278076s
2026-04-13T21:33:15.611844+00:00 monitor updated for qwen-qwen3-235b-a22b-i_47730_v24
2026-04-13T21:33:15.980956+00:00 monitor updated for chaiml-pony-v2-g46-lr1_80834_v33
Received healthy response to inference request in 2.0467474460601807s
Received healthy response to inference request in 2.0099050998687744s
Received healthy response to inference request in 2.040977716445923s
Received healthy response to inference request in 2.030802011489868s
Received healthy response to inference request in 1.991945743560791s
Received healthy response to inference request in 2.2285683155059814s
Received healthy response to inference request in 2.0126612186431885s
Received healthy response to inference request in 2.0304534435272217s
Received healthy response to inference request in 2.050466775894165s
Received healthy response to inference request in 2.3940975666046143s
Received healthy response to inference request in 2.068387269973755s
Received healthy response to inference request in 2.3264620304107666s
Received healthy response to inference request in 2.1543772220611572s
Received healthy response to inference request in 2.057744026184082s
Received healthy response to inference request in 2.4786016941070557s
Received healthy response to inference request in 2.2149946689605713s
Received healthy response to inference request in 2.0633180141448975s
30 requests
0 failed requests
5th percentile: 1.9768692135810852
10th percentile: 1.9916557788848877
20th percentile: 2.026894998550415
30th percentile: 2.0450165271759033
40th percentile: 2.0548331260681154
50th percentile: 2.0869449377059937
60th percentile: 2.139669132232666
70th percentile: 2.247357726097107
80th percentile: 2.3399891376495363
90th percentile: 4.297135233879089
95th percentile: 4.546818625926971
99th percentile: 4.679572298526764
mean time: 2.437532647450765
Pipeline stage StressChecker completed in 88.52s
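The StressChecker percentiles above are consistent with linear interpolation between closest ranks over the sorted latencies (the same convention as NumPy's default percentile method). A pure-Python sketch that reproduces the reported 5th percentile and mean from the 30 latencies of this second run; the method is assumed, since the checker's implementation is not shown in the log:

```python
def percentile_linear(values, p):
    """Percentile with linear interpolation between closest ranks
    (the convention used by numpy.percentile's default method)."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo, frac = int(k), k - int(k)
    if lo + 1 < len(xs):
        return xs[lo] + frac * (xs[lo + 1] - xs[lo])
    return xs[lo]

# The 30 per-request latencies (seconds) reported by the second stress check.
latencies = [
    4.396897315979004, 4.28605055809021, 4.683693885803223,
    2.0493767261505127, 2.315415382385254, 1.9890460968017578,
    1.9222524166107178, 2.2911996841430664, 2.129863739013672,
    2.1197829246520996, 4.669481515884399, 2.1055026054382324,
    1.9669063091278076, 2.0467474460601807, 2.0099050998687744,
    2.040977716445923, 2.030802011489868, 1.991945743560791,
    2.2285683155059814, 2.0126612186431885, 2.0304534435272217,
    2.050466775894165, 2.3940975666046143, 2.068387269973755,
    2.3264620304107666, 2.1543772220611572, 2.057744026184082,
    2.4786016941070557, 2.2149946689605713, 2.0633180141448975,
]

p5 = percentile_linear(latencies, 5)      # ~1.97687, matching the log
mean = sum(latencies) / len(latencies)    # ~2.43753, matching the log
```

With 30 samples the interpolation index for the 5th percentile is (30-1) * 0.05 = 1.45, i.e. 45% of the way between the 2nd and 3rd smallest latencies, which is exactly the 1.9768692... value logged above.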
Shutdown handler de-registered
qwen-qwen3-235b-a22b-i_47730_v24 status is now deployed due to DeploymentManager action
qwen-qwen3-235b-a22b-i_47730_v24 status is now inactive due to auto deactivation removed underperforming models
qwen-qwen3-235b-a22b-i_47730_v24 status is now torndown due to DeploymentManager action
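The final three status lines trace the model's lifecycle: deployed, then auto-deactivated as underperforming, then torn down. A hypothetical sketch of enforcing that ordering; the `Status` names mirror the log, but the transition table is illustrative only, not DeploymentManager's actual code:

```python
from enum import Enum

class Status(Enum):
    DEPLOYED = "deployed"
    INACTIVE = "inactive"
    TORNDOWN = "torndown"

# Assumed transition table, reflecting the order observed in this log:
# deployed -> inactive (auto deactivation) -> torndown.
ALLOWED = {
    Status.DEPLOYED: {Status.INACTIVE, Status.TORNDOWN},
    Status.INACTIVE: {Status.TORNDOWN},
    Status.TORNDOWN: set(),  # terminal state
}

def transition(current, target):
    """Return the new status, or raise ValueError on an illegal move."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.value} -> {target.value}")
    return target
```

Modelling torndown as terminal matches the log: no further status changes appear for this submission after the DeploymentManager tears it down.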