developer_uid: richhx
submission_id: chaiml-kimid-v8b-kimid_63800_v11
model_name: chaiml-kimid-v8b-kimid_63800_v11
model_group: ChaiML/kimid-v8b-kimidv5
status: torndown
timestamp: 2026-01-17T00:30:35+00:00
num_battles: 14614
num_wins: 8235
celo_rating: 1341.08
family_friendly_score: 0.511
family_friendly_standard_error: 0.007069356406349874
submission_type: basic
model_repo: ChaiML/kimid-v8b-kimidv5a-lr5e6ep2r64g4b01-int4-mixed
model_architecture: Qwen3MoeForCausalLM
model_num_parameters: 18790207488.0
best_of: 4
max_input_tokens: 2048
max_output_tokens: 80
reward_model: default
display_name: chaiml-kimid-v8b-kimid_63800_v11
ineligible_reason: max_output_tokens!=64
is_internal_developer: True
language_model: ChaiML/kimid-v8b-kimidv5a-lr5e6ep2r64g4b01-int4-mixed
model_size: 19B
ranking_group: single
us_pacific_date: 2026-01-13
win_ratio: 0.5635007527028877
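As a sanity check on the battle statistics, win_ratio is simply num_wins / num_battles; a minimal sketch using the values reported above:

```python
# Values copied from the submission metadata above.
num_battles = 14614
num_wins = 8235

win_ratio = num_wins / num_battles
print(win_ratio)  # ≈ 0.5635, matching the reported win_ratio field
```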
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40,
                    'presence_penalty': 0.0, 'frequency_penalty': 0.0,
                    'stopping_words': ['<|im_end|>', '</think>', '<|user|>', '<|assistant|>', '####', '</s>'],
                    'max_input_tokens': 2048, 'best_of': 4, 'max_output_tokens': 80}
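The generation_params entry implies best-of-4 sampling: several candidate completions are drawn, each is cut at the first stopping word, and a reward model (the "default" one named above) selects one. A minimal illustrative sketch; the candidate texts and the length-based reward function are hypothetical stand-ins, not the real reward model:

```python
gen_params = {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40,
              'best_of': 4, 'max_output_tokens': 80,
              'stopping_words': ['<|im_end|>', '</think>', '<|user|>',
                                 '<|assistant|>', '####', '</s>']}

def truncate_at_stop(text, stopping_words):
    """Cut a completion at the earliest occurrence of any stopping word."""
    cut = len(text)
    for stop in stopping_words:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

def best_of_n(candidates, reward_fn):
    """best_of sampling: return the candidate the reward model ranks highest."""
    return max(candidates, key=reward_fn)

# Hypothetical raw samples (best_of=4); stopping words end each completion.
raw = ["Hi.<|im_end|>ignored", "Hello there!<|user|>ignored",
       "Hey####ignored", "Yo</s>"]
cleaned = [truncate_at_stop(c, gen_params['stopping_words']) for c in raw]
best = best_of_n(cleaned, reward_fn=len)  # toy reward: prefer longer replies
print(best)  # "Hello there!"
```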
formatter: {'memory_template': "<|im_start|>system\n{bot_name}'s persona: {memory}<|im_end|>\n",
            'prompt_template': '',
            'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n',
            'user_template': '<|im_start|>user\n{message}<|im_end|>\n',
            'response_template': '<|im_start|>assistant\n{bot_name}:',
            'truncate_by_message': True}
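The formatter entry describes ChatML-style templates for assembling the prompt. A minimal sketch of how they might be rendered; the bot name, persona, and conversation turns are hypothetical samples, and the pipeline's truncate_by_message logic is not shown:

```python
formatter = {
    'memory_template': "<|im_start|>system\n{bot_name}'s persona: {memory}<|im_end|>\n",
    'prompt_template': '',
    'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n',
    'user_template': '<|im_start|>user\n{message}<|im_end|>\n',
    'response_template': '<|im_start|>assistant\n{bot_name}:',
}

# Hypothetical sample conversation, not taken from the log.
bot_name = "Ava"
memory = "a friendly assistant"
turns = [("user", "Hi there!"), ("bot", "Hello!"), ("user", "How are you?")]

# System block first, then alternating user/assistant turns,
# then the open-ended assistant prefix the model completes.
prompt = formatter['memory_template'].format(bot_name=bot_name, memory=memory)
for role, message in turns:
    if role == "user":
        prompt += formatter['user_template'].format(message=message)
    else:
        prompt += formatter['bot_template'].format(bot_name=bot_name, message=message)
prompt += formatter['response_template'].format(bot_name=bot_name)
print(prompt)
```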
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 0.14s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service chaiml-kimid-v8b-kimid-63800-v11
Waiting for inference service chaiml-kimid-v8b-kimid-63800-v11 to be ready
HTTP Request: %s %s "%s %d %s"
Inference service chaiml-kimid-v8b-kimid-63800-v11 ready after 230.1269130706787s
Pipeline stage VLLMDeployer completed in 230.69s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.1115710735321045s
Received healthy response to inference request in 1.693288803100586s
Received healthy response to inference request in 2.3619158267974854s
Received healthy response to inference request in 2.008640766143799s
Received healthy response to inference request in 2.200371026992798s
Received healthy response to inference request in 2.210113763809204s
Received healthy response to inference request in 1.79118013381958s
Received healthy response to inference request in 1.7840609550476074s
Received healthy response to inference request in 2.05680775642395s
Received healthy response to inference request in 1.9275736808776855s
Received healthy response to inference request in 2.0709102153778076s
Received healthy response to inference request in 1.8769505023956299s
Received healthy response to inference request in 1.6694738864898682s
Received healthy response to inference request in 1.8119533061981201s
Received healthy response to inference request in 2.0134899616241455s
Received healthy response to inference request in 1.8852274417877197s
Received healthy response to inference request in 1.8015899658203125s
Received healthy response to inference request in 1.8409438133239746s
Received healthy response to inference request in 1.7985811233520508s
HTTP Request: %s %s "%s %d %s"
Received healthy response to inference request in 1.810145378112793s
Received healthy response to inference request in 1.9399657249450684s
Received healthy response to inference request in 1.857621192932129s
Received healthy response to inference request in 1.8704934120178223s
Received healthy response to inference request in 1.8597676753997803s
Received healthy response to inference request in 1.6791269779205322s
HTTP Request: %s %s "%s %d %s"
Received healthy response to inference request in 1.7124757766723633s
Received healthy response to inference request in 1.7727997303009033s
Received healthy response to inference request in 1.7378439903259277s
Received healthy response to inference request in 1.7679171562194824s
Received healthy response to inference request in 1.7247583866119385s
30 requests
0 failed requests
5th percentile: 1.6854997992515564
10th percentile: 1.7105570793151856
20th percentile: 1.7619025230407714
30th percentile: 1.7890443801879883
40th percentile: 1.8067232131958009
50th percentile: 1.8492825031280518
60th percentile: 1.8730762481689454
70th percentile: 1.9312912940979003
80th percentile: 2.0221535205841064
90th percentile: 2.120451068878174
95th percentile: 2.205729532241821
99th percentile: 2.3178932285308838
mean time: 1.8882519801457722
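The StressChecker summary can be reproduced from the 30 logged latencies. A sketch assuming the percentiles use linear interpolation between sorted order statistics (numpy's default method, which reproduces the reported 5th, 50th, and 95th percentiles):

```python
# Latencies (seconds) copied from the StressChecker log lines above, in order.
latencies = [
    2.1115710735321045, 1.693288803100586, 2.3619158267974854,
    2.008640766143799, 2.200371026992798, 2.210113763809204,
    1.79118013381958, 1.7840609550476074, 2.05680775642395,
    1.9275736808776855, 2.0709102153778076, 1.8769505023956299,
    1.6694738864898682, 1.8119533061981201, 2.0134899616241455,
    1.8852274417877197, 1.8015899658203125, 1.8409438133239746,
    1.7985811233520508, 1.810145378112793, 1.9399657249450684,
    1.857621192932129, 1.8704934120178223, 1.8597676753997803,
    1.6791269779205322, 1.7124757766723633, 1.7727997303009033,
    1.7378439903259277, 1.7679171562194824, 1.7247583866119385,
]

def percentile(data, p):
    """Percentile with linear interpolation (numpy's default method)."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

mean_time = sum(latencies) / len(latencies)
print(len(latencies), "requests")                     # 30 requests
print("mean time:", mean_time)                        # ≈ 1.8883
print("5th percentile:", percentile(latencies, 5))    # ≈ 1.6855
print("50th percentile:", percentile(latencies, 50))  # ≈ 1.8493
```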
Pipeline stage StressChecker completed in 60.01s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.66s
Shutdown handler de-registered
chaiml-kimid-v8b-kimid_63800_v11 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
%s, retrying in %s seconds...
Evaluating %s Family Friendly Score with %s threads
Generating Leaderboard row for %s
Generated Leaderboard row for %s
Pipeline stage OfflineFamilyFriendlyScorer completed in 5367.28s
Shutdown handler de-registered
chaiml-kimid-v8b-kimid_63800_v11 status is now torndown due to DeploymentManager action
Falling back to EndpointApi.from_submission implementation