developer_uid: chai_backend_admin
submission_id: qwen-qwen3-235b-a22b-i_47730_v14
model_name: qwen-qwen3-235b-a22b-i_47730_v14
model_group: Qwen/Qwen3-235B-A22B-Ins
status: inactive
timestamp: 2025-12-09T17:43:45+00:00
num_battles: 7018
num_wins: 3538
celo_rating: 1295.55
family_friendly_score: 0.5944
family_friendly_standard_error: 0.006943898616771417
submission_type: basic
model_repo: Qwen/Qwen3-235B-A22B-Instruct-2507
model_architecture: Qwen3MoeForCausalLM
model_num_parameters: 18790207488.0
best_of: 8
max_input_tokens: 1978
max_output_tokens: 70
reward_model: default
display_name: qwen-qwen3-235b-a22b-i_47730_v14
ineligible_reason: max_output_tokens!=64
is_internal_developer: True
language_model: Qwen/Qwen3-235B-A22B-Instruct-2507
model_size: 19B
ranking_group: single
us_pacific_date: 2025-12-09
win_ratio: 0.5041322314049587
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['</think>', '<|user|>', '####', '<|assistant|>', '</s>', '<|im_end|>'], 'max_input_tokens': 1978, 'best_of': 8, 'max_output_tokens': 70}
formatter: {'memory_template': "<|im_start|>system\nYou are {bot_name}. Engaged in a highly engaging roleplay with user. NSFW allowed. Be descriptive and creative.\n{bot_name}'s persona: {memory}<|im_end|>\n", 'prompt_template': '', 'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n', 'user_template': '<|im_start|>user\n{message}<|im_end|>\n', 'response_template': '<|im_start|>assistant\n{bot_name}:', 'truncate_by_message': True}
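The formatter above composes a ChatML-style prompt from four templates. Below is a minimal sketch of how those templates could be stitched together; the template strings are copied verbatim from the formatter entry, but the assembly function and its field names are assumptions, not the platform's actual formatter code.

```python
# Hedged sketch: combining the formatter templates above into one ChatML prompt.
# Template strings are taken from the log; build_prompt itself is hypothetical.
MEMORY_TEMPLATE = (
    "<|im_start|>system\n"
    "You are {bot_name}. Engaged in a highly engaging roleplay with user. "
    "NSFW allowed. Be descriptive and creative.\n"
    "{bot_name}'s persona: {memory}<|im_end|>\n"
)
BOT_TEMPLATE = "<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n"
USER_TEMPLATE = "<|im_start|>user\n{message}<|im_end|>\n"
RESPONSE_TEMPLATE = "<|im_start|>assistant\n{bot_name}:"


def build_prompt(bot_name, memory, turns):
    """turns: list of (role, message) pairs, role in {'user', 'bot'}."""
    prompt = MEMORY_TEMPLATE.format(bot_name=bot_name, memory=memory)
    for role, message in turns:
        if role == "user":
            prompt += USER_TEMPLATE.format(message=message)
        else:
            prompt += BOT_TEMPLATE.format(bot_name=bot_name, message=message)
    # The model continues generation from the open assistant header.
    return prompt + RESPONSE_TEMPLATE.format(bot_name=bot_name)
```

The empty prompt_template and truncate_by_message: True suggest that, when the assembled prompt exceeds max_input_tokens (1978), whole messages are dropped from the oldest end rather than cut mid-message; that reading of the flag is an assumption.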
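The generation_params map naturally onto vLLM's SamplingParams, which is consistent with the VLLMTemplater/VLLMDeployer stages in the log below. A minimal sketch of one plausible mapping follows; the parameter values are copied from the log, while the mapping itself, and in particular whether best_of is handled inside vLLM or by sampling 8 candidates and reranking them with the reward model downstream, is an assumption.

```python
# Hedged sketch: one plausible mapping of generation_params onto vLLM SamplingParams.
# Values come from the log; the best_of handling is assumed, not confirmed.
from vllm import SamplingParams

sampling_params = SamplingParams(
    n=8,                      # best_of: 8 candidates per request, reranked downstream
    temperature=1.0,
    top_p=1.0,
    top_k=40,
    min_p=0.0,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["</think>", "<|user|>", "####", "<|assistant|>", "</s>", "<|im_end|>"],
    max_tokens=70,            # max_output_tokens
)
```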
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 0.24s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service qwen-qwen3-235b-a22b-i-47730-v14
Waiting for inference service qwen-qwen3-235b-a22b-i-47730-v14 to be ready
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 0.38s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service chaiml-qwen3-235b-a22b-13233-v1
Waiting for inference service chaiml-qwen3-235b-a22b-13233-v1 to be ready
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 0.46s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service chaiml-wb-cai-hq-ep2-rh-79474-v1
Waiting for inference service chaiml-wb-cai-hq-ep2-rh-79474-v1 to be ready
Inference service qwen-qwen3-235b-a22b-i-47730-v14 ready after 506.42819356918335s
Pipeline stage VLLMDeployer completed in 507.74s
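The VLLMDeployer stage creates an inference service and blocks until it reports ready (about 506 s for qwen-qwen3-235b-a22b-i-47730-v14 above). A minimal sketch of such a readiness wait, assuming a KServe-style InferenceService on Kubernetes; the service name is taken from the log, while the namespace, polling interval, and use of the kubernetes Python client are assumptions and not the platform's actual deployer code.

```python
# Hedged sketch: poll a KServe InferenceService until its Ready condition is True.
import time
from kubernetes import client, config


def wait_for_inference_service(name, namespace="default", timeout_s=1800, poll_s=10):
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    api = client.CustomObjectsApi()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        isvc = api.get_namespaced_custom_object(
            group="serving.kserve.io",
            version="v1beta1",
            namespace=namespace,
            plural="inferenceservices",
            name=name,
        )
        conditions = isvc.get("status", {}).get("conditions", [])
        if any(c.get("type") == "Ready" and c.get("status") == "True" for c in conditions):
            return isvc
        time.sleep(poll_s)
    raise TimeoutError(f"Inference service {name} not ready after {timeout_s}s")


# e.g. wait_for_inference_service("qwen-qwen3-235b-a22b-i-47730-v14")
```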
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.601287603378296s
Received healthy response to inference request in 1.7351646423339844s
Received healthy response to inference request in 2.345020055770874s
Received healthy response to inference request in 2.5507397651672363s
Received healthy response to inference request in 2.561331033706665s
Received healthy response to inference request in 2.038681745529175s
Shutdown handler not registered because Python interpreter is not running in the main thread
Received healthy response to inference request in 1.6344273090362549s
run pipeline %s
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 0.68s
run pipeline stage %s
Received healthy response to inference request in 1.7584846019744873s
Running pipeline stage VLLMDeployer
Creating inference service chaiml-kimid-v4a-q235-n-98827-v1
Waiting for inference service chaiml-kimid-v4a-q235-n-98827-v1 to be ready
Received healthy response to inference request in 2.4452264308929443s
Received healthy response to inference request in 1.64005708694458s
Received healthy response to inference request in 2.3121936321258545s
Received healthy response to inference request in 2.3297946453094482s
Received healthy response to inference request in 2.465017080307007s
Received healthy response to inference request in 2.4643428325653076s
Received healthy response to inference request in 1.7290902137756348s
Received healthy response to inference request in 2.265662908554077s
Received healthy response to inference request in 1.9082050323486328s
Received healthy response to inference request in 1.743034839630127s
Received healthy response to inference request in 1.7323687076568604s
Received healthy response to inference request in 2.4501678943634033s
Received healthy response to inference request in 1.7877094745635986s
Received healthy response to inference request in 1.8205325603485107s
Received healthy response to inference request in 2.109382390975952s
Received healthy response to inference request in 1.82071852684021s
Received healthy response to inference request in 2.5014824867248535s
Received healthy response to inference request in 1.9738624095916748s
Received healthy response to inference request in 1.9534573554992676s
Received healthy response to inference request in 1.8873779773712158s
Received healthy response to inference request in 1.782801866531372s
Received healthy response to inference request in 1.7835376262664795s
30 requests
0 failed requests
5th percentile: 1.6801219940185548
10th percentile: 1.7320408582687379
20th percentile: 1.7553946495056152
30th percentile: 1.7864579200744628
40th percentile: 1.8607141971588135
50th percentile: 1.9636598825454712
60th percentile: 2.171894598007202
70th percentile: 2.334362268447876
80th percentile: 2.4530028820037844
90th percentile: 2.506408214569092
95th percentile: 2.556564962863922
99th percentile: 2.589700198173523
mean time: 2.0710386912027996
Pipeline stage StressChecker completed in 82.89s
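The StressChecker fires 30 inference requests, times each one, and summarises the latencies as percentiles and a mean, as in the block above. A short sketch of that summary step, assuming plain numpy percentiles over the per-request latencies; the endpoint URL and payload are placeholders, not the platform's real API.

```python
# Hedged sketch: time a batch of inference requests and summarise latencies
# the way the StressChecker output above does. URL and payload are hypothetical.
import time
import numpy as np
import requests


def stress_check(url, payload, n_requests=30):
    latencies, failures = [], 0
    for _ in range(n_requests):
        start = time.perf_counter()
        try:
            resp = requests.post(url, json=payload, timeout=30)
            resp.raise_for_status()
            elapsed = time.perf_counter() - start
            latencies.append(elapsed)
            print(f"Received healthy response to inference request in {elapsed}s")
        except requests.RequestException:
            failures += 1
    print(f"{n_requests} requests")
    print(f"{failures} failed requests")
    for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{p}th percentile: {np.percentile(latencies, p)}")
    print(f"mean time: {np.mean(latencies)}")
    return latencies
```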
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 2.48s
Shutdown handler de-registered
qwen-qwen3-235b-a22b-i_47730_v14 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Generating Leaderboard row for %s
Generated Leaderboard row for %s
Pipeline stage OfflineFamilyFriendlyScorer completed in 2291.16s
Shutdown handler de-registered
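The leaderboard fields at the top of this entry are mutually consistent: win_ratio is num_wins / num_battles, and family_friendly_standard_error is the size one would expect from a binomial proportion estimate of the family-friendly score. A small sanity-check sketch; only num_wins, num_battles, the score, and the standard error come from the log, while the binomial-SE model and the implied evaluation sample size are assumptions.

```python
# Hedged sanity check of the leaderboard metrics above.
num_battles, num_wins = 7018, 3538
win_ratio = num_wins / num_battles
print(win_ratio)                   # 0.5041322314049587, matching the log

p, se = 0.5944, 0.006943898616771417
implied_n = p * (1 - p) / se ** 2  # sample size implied if SE = sqrt(p * (1 - p) / n)
print(round(implied_n))            # roughly 5000, under the binomial assumption
```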