developer_uid: chai_evaluation_service
submission_id: qwen-qwen3-8b_v1
model_name: qwen-qwen3-8b_v1
model_group: Qwen/Qwen3-8B
status: inactive
timestamp: 2026-02-08T01:57:27+00:00
num_battles: 11158
num_wins: 4454
celo_rating: 1260.41
family_friendly_score: 0.0
family_friendly_standard_error: 0.0
submission_type: basic
model_repo: Qwen/Qwen3-8B
model_architecture: Qwen3ForCausalLM
model_num_parameters: 8190726144.0
best_of: 8
max_input_tokens: 2048
max_output_tokens: 64
reward_model: default
display_name: qwen-qwen3-8b_v1
is_internal_developer: True
language_model: Qwen/Qwen3-8B
model_size: 8B
ranking_group: single
us_pacific_date: 2026-02-07
win_ratio: 0.3991754794766087
generation_params: {'temperature': 0.85, 'top_p': 0.9, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.2, 'frequency_penalty': 0.3, 'stopping_words': ['\n'], 'max_input_tokens': 2048, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': '<|im_start|>system\n{memory}<|im_end|>\n', 'prompt_template': '<|im_start|>user\n{prompt}<|im_end|>\n', 'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n', 'user_template': '<|im_start|>user\n{user_name}: {message}<|im_end|>\n', 'response_template': '<|im_start|>assistant\n{bot_name}:', 'truncate_by_message': True}
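The win_ratio field above is simply num_wins divided by num_battles; a quick check in Python using the values from this record:

    num_battles = 11158
    num_wins = 4454

    # Reproduces the recorded win_ratio of 0.3991754794766087.
    win_ratio = num_wins / num_battles
    print(win_ratio)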
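For context on how generation_params and formatter are consumed, here is a minimal sketch assuming an offline vLLM-style setup (SamplingParams/LLM); the live deployment actually goes through the inference service created below, and the conversation content is purely illustrative. Note that max_input_tokens and truncate_by_message govern prompt truncation rather than sampling, and best_of support depends on the vLLM version.

    from vllm import LLM, SamplingParams

    # Sampling settings copied from generation_params above.
    sampling_params = SamplingParams(
        n=1,
        best_of=8,                 # keep the best of 8 candidate completions
        temperature=0.85,
        top_p=0.9,
        top_k=40,
        min_p=0.0,
        presence_penalty=0.2,
        frequency_penalty=0.3,
        stop=["\n"],               # stopping_words
        max_tokens=64,             # max_output_tokens
    )

    # ChatML-style templates copied from the formatter above.
    formatter = {
        "memory_template": "<|im_start|>system\n{memory}<|im_end|>\n",
        "prompt_template": "<|im_start|>user\n{prompt}<|im_end|>\n",
        "bot_template": "<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n",
        "user_template": "<|im_start|>user\n{user_name}: {message}<|im_end|>\n",
        "response_template": "<|im_start|>assistant\n{bot_name}:",
    }

    # Hypothetical conversation, used only to show how the templates compose.
    prompt = (
        formatter["memory_template"].format(memory="Bot persona and memory go here.")
        + formatter["prompt_template"].format(prompt="Scenario description goes here.")
        + formatter["user_template"].format(user_name="User", message="Hi there!")
        + formatter["bot_template"].format(bot_name="Bot", message="Hello, nice to meet you.")
        + formatter["user_template"].format(user_name="User", message="How are you today?")
        + formatter["response_template"].format(bot_name="Bot")
    )

    llm = LLM(model="Qwen/Qwen3-8B")
    print(llm.generate([prompt], sampling_params)[0].outputs[0].text)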
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage VLLMUploader
Starting job with name qwen-qwen3-8b-v1-uploader
Waiting for job on qwen-qwen3-8b-v1-uploader to finish
HTTP Request: %s %s "%s %d %s"
qwen-qwen3-8b-v1-uploader: Using quantization_mode: none
qwen-qwen3-8b-v1-uploader: Downloading snapshot of Qwen/Qwen3-8B...
qwen-qwen3-8b-v1-uploader: Fetching 15 files: 100%|██████████| 15/15 [00:05<00:00, 2.55it/s]
qwen-qwen3-8b-v1-uploader: Downloaded in 5.991s
qwen-qwen3-8b-v1-uploader: Processed model Qwen/Qwen3-8B in 12.276s
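The download step above matches the standard Hugging Face Hub snapshot workflow; a minimal sketch, assuming huggingface_hub's snapshot_download is what fetches the 15 repo files (local_dir is illustrative, the log only shows that processed files end up under /dev/shm/model_output):

    from huggingface_hub import snapshot_download

    # Fetch all files of the Qwen/Qwen3-8B repo into a local directory.
    path = snapshot_download(repo_id="Qwen/Qwen3-8B", local_dir="/dev/shm/model_output")
    print(path)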
qwen-qwen3-8b-v1-uploader: creating bucket guanaco-vllm-models
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/.gitattributes s3://guanaco-vllm-models/qwen-qwen3-8b-v1/.gitattributes
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/config.json s3://guanaco-vllm-models/qwen-qwen3-8b-v1/config.json
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/tokenizer_config.json s3://guanaco-vllm-models/qwen-qwen3-8b-v1/tokenizer_config.json
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/generation_config.json s3://guanaco-vllm-models/qwen-qwen3-8b-v1/generation_config.json
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/LICENSE s3://guanaco-vllm-models/qwen-qwen3-8b-v1/LICENSE
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/README.md s3://guanaco-vllm-models/qwen-qwen3-8b-v1/README.md
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/model.safetensors.index.json s3://guanaco-vllm-models/qwen-qwen3-8b-v1/model.safetensors.index.json
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/merges.txt s3://guanaco-vllm-models/qwen-qwen3-8b-v1/merges.txt
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/vocab.json s3://guanaco-vllm-models/qwen-qwen3-8b-v1/vocab.json
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/tokenizer.json s3://guanaco-vllm-models/qwen-qwen3-8b-v1/tokenizer.json
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/model-00005-of-00005.safetensors s3://guanaco-vllm-models/qwen-qwen3-8b-v1/model-00005-of-00005.safetensors
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/model-00004-of-00005.safetensors s3://guanaco-vllm-models/qwen-qwen3-8b-v1/model-00004-of-00005.safetensors
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/model-00002-of-00005.safetensors s3://guanaco-vllm-models/qwen-qwen3-8b-v1/model-00002-of-00005.safetensors
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/model-00003-of-00005.safetensors s3://guanaco-vllm-models/qwen-qwen3-8b-v1/model-00003-of-00005.safetensors
qwen-qwen3-8b-v1-uploader: cp /dev/shm/model_output/model-00001-of-00005.safetensors s3://guanaco-vllm-models/qwen-qwen3-8b-v1/model-00001-of-00005.safetensors
Job qwen-qwen3-8b-v1-uploader completed after 92.63s with status: succeeded
Stopping job with name qwen-qwen3-8b-v1-uploader
Pipeline stage VLLMUploader completed in 93.08s
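The bucket creation and cp lines above amount to mirroring /dev/shm/model_output into s3://guanaco-vllm-models/qwen-qwen3-8b-v1/; a rough boto3 equivalent is sketched below (the actual uploader may shell out to an S3 CLI instead, so treat this as an assumption about the mechanism):

    import os
    import boto3

    s3 = boto3.client("s3")
    bucket = "guanaco-vllm-models"
    prefix = "qwen-qwen3-8b-v1"
    local_dir = "/dev/shm/model_output"

    s3.create_bucket(Bucket=bucket)  # "creating bucket guanaco-vllm-models"
    for name in sorted(os.listdir(local_dir)):
        path = os.path.join(local_dir, name)
        if os.path.isfile(path):
            s3.upload_file(path, bucket, f"{prefix}/{name}")  # cp <file> s3://bucket/prefix/<file>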
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 0.15s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service qwen-qwen3-8b-v1
Waiting for inference service qwen-qwen3-8b-v1 to be ready
HTTP Request: %s %s "%s %d %s"
HTTP Request: %s %s "%s %d %s"
Inference service qwen-qwen3-8b-v1 ready after 170.8550989627838s
Pipeline stage VLLMDeployer completed in 171.36s
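Waiting for the inference service amounts to a readiness poll; a purely hypothetical sketch follows (the health URL, timeout, and poll interval are assumptions, only the ~171 s ready time comes from the log):

    import time
    import requests

    def wait_until_ready(url: str, timeout_s: float = 600.0, interval_s: float = 5.0) -> float:
        """Poll a health endpoint until it returns HTTP 200; return elapsed seconds."""
        start = time.time()
        while time.time() - start < timeout_s:
            try:
                if requests.get(url, timeout=5).status_code == 200:
                    return time.time() - start
            except requests.RequestException:
                pass  # service not up yet, keep polling
            time.sleep(interval_s)
        raise TimeoutError(f"service at {url} not ready after {timeout_s}s")

    # Hypothetical endpoint; the log only identifies the service as qwen-qwen3-8b-v1.
    print(f"ready after {wait_until_ready('http://qwen-qwen3-8b-v1/health'):.1f}s")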
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 0.8266651630401611s
Received healthy response to inference request in 1.0349583625793457s
Received healthy response to inference request in 1.0771667957305908s
Received healthy response to inference request in 0.994516134262085s
Received healthy response to inference request in 1.024268627166748s
Received healthy response to inference request in 0.8056640625s
Received healthy response to inference request in 1.1416888236999512s
Received healthy response to inference request in 1.0337727069854736s
Received healthy response to inference request in 1.0625078678131104s
Received healthy response to inference request in 1.4545361995697021s
Received healthy response to inference request in 1.144639492034912s
Received healthy response to inference request in 1.0543344020843506s
Received healthy response to inference request in 0.9734811782836914s
Received healthy response to inference request in 1.1428945064544678s
Received healthy response to inference request in 1.163874864578247s
Received healthy response to inference request in 1.340505838394165s
Received healthy response to inference request in 1.1037912368774414s
Received healthy response to inference request in 1.4023334980010986s
Received healthy response to inference request in 1.0667786598205566s
Received healthy response to inference request in 0.9899845123291016s
Received healthy response to inference request in 1.1837618350982666s
Received healthy response to inference request in 1.3895354270935059s
Received healthy response to inference request in 1.0976614952087402s
Received healthy response to inference request in 1.3245868682861328s
Received healthy response to inference request in 1.0768239498138428s
Received healthy response to inference request in 1.1663978099822998s
Received healthy response to inference request in 1.2756590843200684s
Received healthy response to inference request in 1.051426887512207s
Received healthy response to inference request in 1.3077468872070312s
Received healthy response to inference request in 0.964418888092041s
30 requests
0 failed requests
5th percentile: 0.8886543393135071
10th percentile: 0.9725749492645264
20th percentile: 1.0183181285858154
30th percentile: 1.0464863300323486
40th percentile: 1.065070343017578
50th percentile: 1.0874141454696655
60th percentile: 1.1421710968017578
70th percentile: 1.1646317481994628
80th percentile: 1.282076644897461
90th percentile: 1.3454087972640991
95th percentile: 1.396574366092682
99th percentile: 1.439397416114807
mean time: 1.1225460688273112
Pipeline stage StressChecker completed in 36.62s
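The StressChecker summary can be reproduced directly from the 30 per-request latencies logged above; numpy's default linear-interpolation percentiles match the reported values, e.g. 0.8886543393135071 for the 5th percentile:

    import numpy as np

    # Per-request latencies (seconds) copied from the StressChecker log above.
    latencies = [
        0.8266651630401611, 1.0349583625793457, 1.0771667957305908, 0.994516134262085,
        1.024268627166748, 0.8056640625, 1.1416888236999512, 1.0337727069854736,
        1.0625078678131104, 1.4545361995697021, 1.144639492034912, 1.0543344020843506,
        0.9734811782836914, 1.1428945064544678, 1.163874864578247, 1.340505838394165,
        1.1037912368774414, 1.4023334980010986, 1.0667786598205566, 0.9899845123291016,
        1.1837618350982666, 1.3895354270935059, 1.0976614952087402, 1.3245868682861328,
        1.0768239498138428, 1.1663978099822998, 1.2756590843200684, 1.051426887512207,
        1.3077468872070312, 0.964418888092041,
    ]

    print(len(latencies), "requests")
    for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{p}th percentile: {np.percentile(latencies, p)}")
    print("mean time:", np.mean(latencies))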
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.62s
Shutdown handler de-registered
qwen-qwen3-8b_v1 status is now deployed due to DeploymentManager action
qwen-qwen3-8b_v1 status is now inactive due to auto deactivation of underperforming models