developer_uid: chai_evaluation_service
submission_id: qwen-qwen3-4b_v2
model_name: qwen-qwen3-4b_v2
model_group: Qwen/Qwen3-4B
status: inactive
timestamp: 2026-02-08T01:57:27+00:00
num_battles: 10734
num_wins: 4037
celo_rating: 1243.51
family_friendly_score: 0.0
family_friendly_standard_error: 0.0
submission_type: basic
model_repo: Qwen/Qwen3-4B
model_architecture: Qwen3ForCausalLM
model_num_parameters: 4057520640.0
best_of: 8
max_input_tokens: 2048
max_output_tokens: 64
reward_model: default
display_name: qwen-qwen3-4b_v2
is_internal_developer: True
language_model: Qwen/Qwen3-4B
model_size: 4B
ranking_group: single
us_pacific_date: 2026-02-07
win_ratio: 0.3760946525060555
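As a sanity check on the record above, `win_ratio` is simply `num_wins / num_battles`; a hypothetical one-off verification (not part of the pipeline):

```python
# Sanity check: the logged win_ratio equals num_wins / num_battles.
num_battles = 10734
num_wins = 4037
win_ratio = num_wins / num_battles
print(win_ratio)  # ≈ 0.3760946525060555, matching the logged value
```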
generation_params: {'temperature': 0.85, 'top_p': 0.9, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.2, 'frequency_penalty': 0.3, 'stopping_words': ['\n'], 'max_input_tokens': 2048, 'best_of': 8, 'max_output_tokens': 64}
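The `generation_params` above control token sampling. As a rough, self-contained illustration on toy logits (plain Python; not the vLLM serving code), `temperature` rescales the distribution, `top_k` keeps only the k most probable tokens, and `top_p` then keeps the smallest high-probability prefix whose cumulative mass reaches p:

```python
import math

# Toy sketch of the logged sampling knobs (temperature, top_k, top_p)
# applied to fake logits; illustrative only, not the serving code.
def filter_logits(logits, temperature, top_k, top_p):
    scaled = [x / temperature for x in logits]    # temperature rescaling
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]      # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # top_k: keep only the k most probable token ids
    order = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)[:top_k]
    # top_p: keep the smallest prefix whose probability mass reaches p
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    norm = sum(probs[i] for i in kept)
    return {i: probs[i] / norm for i in kept}     # renormalized distribution

dist = filter_logits([2.0, 1.0, 0.5, -1.0], temperature=0.85, top_k=3, top_p=0.9)
print(sorted(dist))  # [0, 1, 2] -- the weakest token is filtered out
```

The logged `presence_penalty`, `frequency_penalty`, and `best_of` act on top of this per-step filtering (repetition penalties reshape the logits; `best_of: 8` samples eight candidates and keeps the one scored best).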
formatter: {'memory_template': '<|im_start|>system\n{memory}<|im_end|>\n', 'prompt_template': '<|im_start|>user\n{prompt}<|im_end|>\n', 'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n', 'user_template': '<|im_start|>user\n{user_name}: {message}<|im_end|>\n', 'response_template': '<|im_start|>assistant\n{bot_name}:', 'truncate_by_message': True}
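The `formatter` above is a ChatML-style template set. A minimal sketch of how such templates could be rendered into a single prompt with `str.format()` (the template strings are copied from the log; `build_prompt`, the sample conversation, and the omission of `prompt_template` are all illustrative choices, not the pipeline's actual code):

```python
# Hypothetical rendering of the logged formatter templates into one prompt.
formatter = {
    'memory_template': '<|im_start|>system\n{memory}<|im_end|>\n',
    'user_template': '<|im_start|>user\n{user_name}: {message}<|im_end|>\n',
    'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n',
    'response_template': '<|im_start|>assistant\n{bot_name}:',
}

def build_prompt(memory, turns, bot_name):
    parts = [formatter['memory_template'].format(memory=memory)]
    for speaker, name, message in turns:
        if speaker == 'user':
            parts.append(formatter['user_template'].format(user_name=name, message=message))
        else:
            parts.append(formatter['bot_template'].format(bot_name=name, message=message))
    # leave the assistant turn open so the model completes it
    parts.append(formatter['response_template'].format(bot_name=bot_name))
    return ''.join(parts)

prompt = build_prompt('You are Ayla.', [('user', 'Sam', 'Hi!')], 'Ayla')
print(prompt.endswith('Ayla:'))  # True
```

Per the log, `truncate_by_message: True` presumably means chat history is trimmed a whole message at a time until the rendered prompt fits within `max_input_tokens`.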
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
Running pipeline stage VLLMUploader
Starting job with name qwen-qwen3-4b-v2-uploader
Waiting for job on qwen-qwen3-4b-v2-uploader to finish
qwen-qwen3-4b-v2-uploader: Using quantization_mode: none
qwen-qwen3-4b-v2-uploader: Downloading snapshot of Qwen/Qwen3-4B...
qwen-qwen3-4b-v2-uploader: Fetching 13 files: 100%|██████████| 13/13 [00:03<00:00, 3.50it/s]
qwen-qwen3-4b-v2-uploader: Downloaded in 3.871s
qwen-qwen3-4b-v2-uploader: Processed model Qwen/Qwen3-4B in 6.955s
qwen-qwen3-4b-v2-uploader: creating bucket guanaco-vllm-models
qwen-qwen3-4b-v2-uploader: /usr/lib/python3/dist-packages/S3/BaseUtils.py:56: SyntaxWarning: invalid escape sequence '\.'
qwen-qwen3-4b-v2-uploader: RE_S3_DATESTRING = re.compile('\.[0-9]*(?:[Z\\-\\+]*?)')
qwen-qwen3-4b-v2-uploader: /usr/lib/python3/dist-packages/S3/BaseUtils.py:57: SyntaxWarning: invalid escape sequence '\s'
qwen-qwen3-4b-v2-uploader: RE_XML_NAMESPACE = re.compile(b'^(<?[^>]+?>\s*|\s*)(<\w+) xmlns=[\'"](https?://[^\'"]+)[\'"]', re.MULTILINE)
qwen-qwen3-4b-v2-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:240: SyntaxWarning: invalid escape sequence '\.'
qwen-qwen3-4b-v2-uploader: invalid = re.search("([^a-z0-9\.-])", bucket, re.UNICODE)
qwen-qwen3-4b-v2-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:244: SyntaxWarning: invalid escape sequence '\.'
qwen-qwen3-4b-v2-uploader: invalid = re.search("([^A-Za-z0-9\._-])", bucket, re.UNICODE)
qwen-qwen3-4b-v2-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:255: SyntaxWarning: invalid escape sequence '\.'
qwen-qwen3-4b-v2-uploader: if re.search("-\.", bucket, re.UNICODE):
qwen-qwen3-4b-v2-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:257: SyntaxWarning: invalid escape sequence '\.'
qwen-qwen3-4b-v2-uploader: if re.search("\.\.", bucket, re.UNICODE):
qwen-qwen3-4b-v2-uploader: /usr/lib/python3/dist-packages/S3/S3Uri.py:155: SyntaxWarning: invalid escape sequence '\w'
qwen-qwen3-4b-v2-uploader: _re = re.compile("^(\w+://)?(.*)", re.UNICODE)
qwen-qwen3-4b-v2-uploader: /usr/lib/python3/dist-packages/S3/FileLists.py:480: SyntaxWarning: invalid escape sequence '\*'
qwen-qwen3-4b-v2-uploader: wildcard_split_result = re.split("\*|\?", uri_str, maxsplit=1)
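The SyntaxWarnings above are harmless at runtime but come from regex patterns written as plain string literals, where sequences such as `'\.'` are not valid Python string escapes (Python 3.12+ warns; a future version will make this an error). The conventional fix is a raw-string literal, shown here on one of the warned patterns as a general illustration, not as a patch to the installed s3cmd files:

```python
import re

# Raw string: the backslash reaches the regex engine untouched,
# so Python emits no SyntaxWarning (cf. the S3/Utils.py:240 pattern above).
invalid_bucket_char = re.compile(r"([^a-z0-9\.-])")

print(invalid_bucket_char.search("guanaco-vllm-models"))  # None: valid name
print(invalid_bucket_char.search("Bad_Bucket").group(1))  # 'B'
```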
qwen-qwen3-4b-v2-uploader: Bucket 's3://guanaco-vllm-models/' created
qwen-qwen3-4b-v2-uploader: uploading /dev/shm/model_output to s3://guanaco-vllm-models/qwen-qwen3-4b-v2
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/config.json s3://guanaco-vllm-models/qwen-qwen3-4b-v2/config.json
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/.gitattributes s3://guanaco-vllm-models/qwen-qwen3-4b-v2/.gitattributes
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/generation_config.json s3://guanaco-vllm-models/qwen-qwen3-4b-v2/generation_config.json
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/tokenizer_config.json s3://guanaco-vllm-models/qwen-qwen3-4b-v2/tokenizer_config.json
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/LICENSE s3://guanaco-vllm-models/qwen-qwen3-4b-v2/LICENSE
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/README.md s3://guanaco-vllm-models/qwen-qwen3-4b-v2/README.md
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/model.safetensors.index.json s3://guanaco-vllm-models/qwen-qwen3-4b-v2/model.safetensors.index.json
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/merges.txt s3://guanaco-vllm-models/qwen-qwen3-4b-v2/merges.txt
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/vocab.json s3://guanaco-vllm-models/qwen-qwen3-4b-v2/vocab.json
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/tokenizer.json s3://guanaco-vllm-models/qwen-qwen3-4b-v2/tokenizer.json
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/model-00003-of-00003.safetensors s3://guanaco-vllm-models/qwen-qwen3-4b-v2/model-00003-of-00003.safetensors
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/model-00002-of-00003.safetensors s3://guanaco-vllm-models/qwen-qwen3-4b-v2/model-00002-of-00003.safetensors
qwen-qwen3-4b-v2-uploader: cp /dev/shm/model_output/model-00001-of-00003.safetensors s3://guanaco-vllm-models/qwen-qwen3-4b-v2/model-00001-of-00003.safetensors
Job qwen-qwen3-4b-v2-uploader completed after 83.48s with status: succeeded
Stopping job with name qwen-qwen3-4b-v2-uploader
Pipeline stage VLLMUploader completed in 83.92s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 0.27s
Running pipeline stage VLLMDeployer
Creating inference service qwen-qwen3-4b-v2
Waiting for inference service qwen-qwen3-4b-v2 to be ready
Inference service qwen-qwen3-4b-v2 ready after 170.87987399101257s
Pipeline stage VLLMDeployer completed in 171.30s
Running pipeline stage StressChecker
HTTPConnectionPool(host='guanaco-submitter.guanaco-backend.k2.chaiverse.com', port=80): Read timed out. (read timeout=20)
Received unhealthy response to inference request!
Received healthy response to inference request in 0.8198096752166748s
Received healthy response to inference request in 0.47973012924194336s
Received healthy response to inference request in 0.8476238250732422s
Received healthy response to inference request in 0.7603259086608887s
Received healthy response to inference request in 0.785207986831665s
Received healthy response to inference request in 0.8288686275482178s
Received healthy response to inference request in 0.898796558380127s
Received healthy response to inference request in 0.7948617935180664s
Received healthy response to inference request in 1.063819408416748s
Received healthy response to inference request in 0.7230985164642334s
Received healthy response to inference request in 0.9289250373840332s
Received healthy response to inference request in 0.6665668487548828s
Received healthy response to inference request in 1.0021562576293945s
Received healthy response to inference request in 0.5506536960601807s
Received healthy response to inference request in 0.8202567100524902s
Received healthy response to inference request in 0.7724337577819824s
Received healthy response to inference request in 0.7599008083343506s
Received healthy response to inference request in 0.7593848705291748s
Received healthy response to inference request in 0.8583850860595703s
Received healthy response to inference request in 0.7838523387908936s
Received healthy response to inference request in 0.8609459400177002s
Received healthy response to inference request in 0.5875000953674316s
Received healthy response to inference request in 0.8696630001068115s
Received healthy response to inference request in 0.7195754051208496s
Received healthy response to inference request in 0.6997978687286377s
Received healthy response to inference request in 0.8537969589233398s
Received healthy response to inference request in 0.5205481052398682s
Received healthy response to inference request in 0.5832765102386475s
Received healthy response to inference request in 0.4638831615447998s
30 requests
1 failed request
5th percentile: 0.49809821844100954
10th percentile: 0.5476431369781494
20th percentile: 0.6507534980773926
30th percentile: 0.7220415830612182
40th percentile: 0.7601558685302734
50th percentile: 0.7845301628112793
60th percentile: 0.819988489151001
70th percentile: 0.8494757652282715
80th percentile: 0.8626893520355224
90th percentile: 0.9362481594085694
95th percentile: 1.0360709905624388
99th percentile: 14.589246585369127
mean time: 1.4059120575586954
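The jump from a ~1.04 s 95th percentile to a ~14.6 s 99th percentile is consistent with linear interpolation toward the single failed request, which would have been recorded at roughly the 20 s read timeout. A small sketch of that interpolation scheme (numpy.percentile's default "linear" method, reimplemented here in pure Python on illustrative data):

```python
def percentile(samples, p):
    """Linear-interpolation percentile over sorted samples
    (matches numpy.percentile's default 'linear' method)."""
    s = sorted(samples)
    k = (len(s) - 1) * p / 100.0   # fractional rank
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# 29 fast responses plus one ~20 s timeout skews p99 and the mean upward.
latencies = [0.8] * 29 + [20.0]
print(round(percentile(latencies, 50), 2))  # 0.8
print(round(percentile(latencies, 99), 2))  # ≈ 14.43
```

With these toy numbers p99 ≈ 14.43 and the mean is 1.44, close to the logged 14.59 and 1.41, which supports reading the outlier percentile as the timed-out request rather than a generally slow service.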
%s, retrying in %s seconds...
Received healthy response to inference request in 0.7734875679016113s
Received healthy response to inference request in 0.920419454574585s
Received healthy response to inference request in 0.9366886615753174s
Received healthy response to inference request in 0.9468154907226562s
Received healthy response to inference request in 1.2405893802642822s
Received healthy response to inference request in 1.0946862697601318s
Received healthy response to inference request in 0.8574404716491699s
Received healthy response to inference request in 0.7060911655426025s
Received healthy response to inference request in 0.7303495407104492s
Received healthy response to inference request in 0.8784441947937012s
Received healthy response to inference request in 0.7433955669403076s
Received healthy response to inference request in 0.7755522727966309s
Received healthy response to inference request in 0.7473475933074951s
Received healthy response to inference request in 0.9640974998474121s
Received healthy response to inference request in 0.8430271148681641s
Received healthy response to inference request in 0.8317747116088867s
Received healthy response to inference request in 0.8092844486236572s
Received healthy response to inference request in 0.883342981338501s
Received healthy response to inference request in 1.0981454849243164s
Received healthy response to inference request in 0.9830935001373291s
Received healthy response to inference request in 1.02691650390625s
Received healthy response to inference request in 0.5066101551055908s
Received healthy response to inference request in 0.6468586921691895s
Received healthy response to inference request in 0.8059117794036865s
Received healthy response to inference request in 0.9691247940063477s
Received healthy response to inference request in 0.7978544235229492s
Received healthy response to inference request in 0.7313141822814941s
Received healthy response to inference request in 0.5693702697753906s
Received healthy response to inference request in 0.9120402336120605s
Received healthy response to inference request in 0.53287672996521s
30 requests
0 failed requests
5th percentile: 0.5492988228797913
10th percentile: 0.6391098499298096
20th percentile: 0.7311212539672851
30th percentile: 0.7656455755233764
40th percentile: 0.8026888370513916
50th percentile: 0.8374009132385254
60th percentile: 0.880403709411621
70th percentile: 0.9253002166748047
80th percentile: 0.9651029586791993
90th percentile: 1.0336934804916382
95th percentile: 1.0965888381004334
99th percentile: 1.1992806506156923
mean time: 0.8420983711878459
Pipeline stage StressChecker completed in 73.91s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.59s
Shutdown handler de-registered
qwen-qwen3-4b_v2 status is now deployed due to DeploymentManager action
qwen-qwen3-4b_v2 status is now inactive due to auto-deactivation of underperforming models