developer_uid: chai_backend_admin
submission_id: chaiml-kimid-v4a-q235-2k_v2
model_name: chaiml-kimid-v4a-q235-2k_v2
model_group: ChaiML/kimid-v4a-q235-2k
status: torndown
timestamp: 2025-12-09T21:24:36+00:00
num_battles: 7451
num_wins: 3839
celo_rating: 1305.8
family_friendly_score: 0.5162
family_friendly_standard_error: 0.0070673553752446895
submission_type: basic
model_repo: ChaiML/kimid-v4a-q235-2k
model_architecture: Qwen3MoeForCausalLM
model_num_parameters: 18790207488.0
best_of: 8
max_input_tokens: 2048
max_output_tokens: 72
reward_model: default
display_name: chaiml-kimid-v4a-q235-2k_v2
ineligible_reason: max_output_tokens!=64
is_internal_developer: True
language_model: ChaiML/kimid-v4a-q235-2k
model_size: 19B
ranking_group: single
us_pacific_date: 2025-12-09
win_ratio: 0.5152328546503825
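The win_ratio above is simply num_wins / num_battles from the counters higher up. A throwaway sanity-check sketch (not part of the pipeline):

```python
# Sanity check: the logged win_ratio should equal num_wins / num_battles
# using the counters from this submission record.
num_battles = 7451
num_wins = 3839
win_ratio = num_wins / num_battles  # matches the logged 0.5152328546503825
```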
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['####', '<|im_end|>', '<|assistant|>', '</think>', '<|user|>', '</s>'], 'max_input_tokens': 2048, 'best_of': 8, 'max_output_tokens': 72}
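At these settings, top_p=1.0, min_p=0.0, and the zero penalties are effectively no-ops, so top_k=40 is the only active filter; best_of=8 presumably draws eight candidates for the reward model to rank. The sketch below (function and variable names are my own, not the pipeline's) illustrates how such parameters gate token sampling:

```python
import math

# Hypothetical illustration of the logged generation_params: temperature
# scales logits, top_k keeps the 40 most likely tokens, min_p drops tokens
# below min_p * max_prob, and top_p keeps the smallest nucleus whose
# cumulative probability reaches top_p. With top_p=1.0 and min_p=0.0,
# only top_k actually filters anything.
GEN_PARAMS = {"temperature": 1.0, "top_p": 1.0, "min_p": 0.0, "top_k": 40}

def filter_logits(logits, params):
    """Return the (token_id, prob) pairs that survive the sampling filters."""
    t = params["temperature"]
    scaled = [l / t for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]          # stable softmax
    z = sum(exps)
    probs = [(i, e / z) for i, e in enumerate(exps)]
    probs.sort(key=lambda p: p[1], reverse=True)
    probs = probs[: params["top_k"]]                  # top_k cutoff
    probs = [p for p in probs                          # min_p cutoff
             if p[1] >= params["min_p"] * probs[0][1]]
    kept, cum = [], 0.0
    for p in probs:                                   # top_p nucleus cutoff
        kept.append(p)
        cum += p[1]
        if cum >= params["top_p"]:
            break
    return kept
```

With the logged defaults, a three-token toy vocabulary passes through unfiltered; lowering top_k to 2 trims it to the two most likely tokens.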
formatter: {'memory_template': "<|im_start|>system\n{bot_name}'s persona: {memory}<|im_end|>\n", 'prompt_template': '', 'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n', 'user_template': '<|im_start|>user\n{message}<|im_end|>\n', 'response_template': '<|im_start|>assistant\n{bot_name}:', 'truncate_by_message': True}
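The formatter fields above describe a ChatML-style prompt layout: a system block carrying the persona, alternating user/assistant turns, and an open assistant header the model completes. A minimal sketch of how these templates could be assembled (build_prompt and its signature are assumptions, not the pipeline's actual code):

```python
# Templates copied verbatim from the logged formatter config.
FORMATTER = {
    "memory_template": "<|im_start|>system\n{bot_name}'s persona: {memory}<|im_end|>\n",
    "bot_template": "<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n",
    "user_template": "<|im_start|>user\n{message}<|im_end|>\n",
    "response_template": "<|im_start|>assistant\n{bot_name}:",
}

def build_prompt(bot_name, memory, turns):
    """Assemble one prompt string. turns: list of (role, message),
    where role is 'user' or 'bot'."""
    prompt = FORMATTER["memory_template"].format(bot_name=bot_name, memory=memory)
    for role, message in turns:
        if role == "user":
            prompt += FORMATTER["user_template"].format(message=message)
        else:
            prompt += FORMATTER["bot_template"].format(bot_name=bot_name, message=message)
    # Leave the assistant header open so the model writes the next reply.
    return prompt + FORMATTER["response_template"].format(bot_name=bot_name)
```

The response_template deliberately omits the closing <|im_end|>, which is why that token appears in the stopping_words list above.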

Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 0.24s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service chaiml-kimid-v4a-q235-2k-v2
Waiting for inference service chaiml-kimid-v4a-q235-2k-v2 to be ready
Inference service chaiml-kimid-v4a-q235-2k-v2 ready after 457.61161279678345s
Pipeline stage VLLMDeployer completed in 458.96s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 3.560936212539673s
Received healthy response to inference request in 2.3098623752593994s
Received healthy response to inference request in 2.6024906635284424s
Received healthy response to inference request in 2.5104987621307373s
Received healthy response to inference request in 1.8129618167877197s
Received healthy response to inference request in 1.8340377807617188s
Received healthy response to inference request in 2.7016210556030273s
Received healthy response to inference request in 2.362015724182129s
Received healthy response to inference request in 2.825737237930298s
Received healthy response to inference request in 2.2216293811798096s
Received healthy response to inference request in 1.6074109077453613s
Received healthy response to inference request in 1.6730446815490723s
Received healthy response to inference request in 2.07511305809021s
Received healthy response to inference request in 2.952683687210083s
Received healthy response to inference request in 1.7891099452972412s
Received healthy response to inference request in 1.9117696285247803s
Received healthy response to inference request in 2.192553758621216s
Received healthy response to inference request in 2.051267385482788s
Received healthy response to inference request in 2.0288379192352295s
Received healthy response to inference request in 2.4092886447906494s
Received healthy response to inference request in 2.0476465225219727s
Received healthy response to inference request in 1.911010980606079s
Received healthy response to inference request in 1.8952922821044922s
Received healthy response to inference request in 2.00712513923645s
Received healthy response to inference request in 2.1176795959472656s
Received healthy response to inference request in 2.4766714572906494s
Received healthy response to inference request in 2.0248446464538574s
Received healthy response to inference request in 2.841155767440796s
Received healthy response to inference request in 1.9234938621520996s
Received healthy response to inference request in 2.7671782970428467s
30 requests
0 failed requests
5th percentile: 1.7252740502357482
10th percentile: 1.8105766296386718
20th percentile: 1.9078672409057618
30th percentile: 1.982035756111145
40th percentile: 2.0401230812072755
50th percentile: 2.096396327018738
60th percentile: 2.2569225788116456
70th percentile: 2.4295034885406492
80th percentile: 2.6223167419433597
90th percentile: 2.8272790908813477
95th percentile: 2.9024961233139033
99th percentile: 3.3845429801940923
mean time: 2.2481656392415363
Pipeline stage StressChecker completed in 72.17s
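The percentile figures above are consistent with linear interpolation between order statistics. The block below recomputes the summary stats from the 30 logged latencies as a throwaway verification sketch (the percentile helper is my own, not the StressChecker's code):

```python
# The 30 per-request latencies (seconds) logged by the StressChecker stage.
LATENCIES = [
    3.560936212539673, 2.3098623752593994, 2.6024906635284424,
    2.5104987621307373, 1.8129618167877197, 1.8340377807617188,
    2.7016210556030273, 2.362015724182129, 2.825737237930298,
    2.2216293811798096, 1.6074109077453613, 1.6730446815490723,
    2.07511305809021, 2.952683687210083, 1.7891099452972412,
    1.9117696285247803, 2.192553758621216, 2.051267385482788,
    2.0288379192352295, 2.4092886447906494, 2.0476465225219727,
    1.911010980606079, 1.8952922821044922, 2.00712513923645,
    2.1176795959472656, 2.4766714572906494, 2.0248446464538574,
    2.841155767440796, 1.9234938621520996, 2.7671782970428467,
]

def percentile(samples, q):
    """q-th percentile with linear interpolation between order statistics."""
    s = sorted(samples)
    pos = (q / 100) * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

mean = sum(LATENCIES) / len(LATENCIES)
```

Recomputed values agree with the logged 5th/50th percentiles and mean to floating-point precision.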
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 1.33s
Shutdown handler de-registered
chaiml-kimid-v4a-q235-2k_v2 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Generating Leaderboard row for %s
Generated Leaderboard row for %s
Pipeline stage OfflineFamilyFriendlyScorer completed in 2191.98s
Shutdown handler de-registered
chaiml-kimid-v4a-q235-2k_v2 status is now inactive due to auto-deactivation of underperforming models
chaiml-kimid-v4a-q235-2k_v2 status is now torndown due to DeploymentManager action