developer_uid: chai_backend_admin
submission_id: chaiml-2fe5-v1-dpo-lr15-b025_v1
model_name: training123
model_group: ChaiML/2fe5-v1-dpo_lr15-
status: torndown
timestamp: 2025-11-21T09:21:08+00:00
num_battles: 6126
num_wins: 3274
celo_rating: 1304.39
family_friendly_score: 0.5324
family_friendly_standard_error: 0.007056206346189147
submission_type: basic
model_repo: ChaiML/2fe5-v1-dpo_lr15-b025
model_architecture: MistralForCausalLM
model_num_parameters: 12772070400
best_of: 8
max_input_tokens: 2048
max_output_tokens: 64
reward_model: default
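The `best_of: 8` and `reward_model` fields suggest best-of-n reranking: sample several candidate completions and keep the one the reward model scores highest. A minimal sketch, where `generate` and `reward` are hypothetical stand-ins rather than the platform's actual API:

```python
# Hedged sketch of best-of-n reranking: draw n candidate completions and
# return the one with the highest reward-model score. `generate` and
# `reward` are placeholder callables, not the real serving API.
def best_of_n(generate, reward, n=8):
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=reward)
```

With `n=8` this matches the `best_of: 8` setting above; the reward function itself is opaque here (`reward_model: default`).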
latencies (seconds; throughput in requests/s, values rounded to 3 decimals):
  batch_size | throughput | latency_mean | latency_p50 | latency_p90
  1          | 0.687      | 1.455        | 1.457       | 1.604
  3          | 1.297      | 2.309        | 2.304       | 2.589
  5          | 1.585      | 3.142        | 3.149       | 3.528
  6          | 1.675      | 3.561        | 3.560       | 4.054
  8          | 1.814      | 4.392        | 4.404       | 4.955
  10         | 1.880      | 5.271        | 5.289       | 6.023
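One plausible reading of the `throughput_3p7s` field below is the throughput linearly interpolated at a 3.7 s mean latency from this batch-size sweep. That is an assumption, not a documented definition, and it yields roughly 1.70 rather than the reported 1.71, so the platform's exact formula likely differs slightly:

```python
# Hedged sketch: linear interpolation of throughput at a target mean
# latency, using (latency_mean, throughput) pairs from the sweep above.
points = [
    (1.455, 0.687), (2.309, 1.297), (3.142, 1.585),
    (3.561, 1.675), (4.392, 1.814), (5.271, 1.880),
]

def throughput_at(latency, pts=points):
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= latency <= x1:
            # Standard linear interpolation between the two bracketing points.
            return y0 + (latency - x0) * (y1 - y0) / (x1 - x0)
    raise ValueError("latency outside measured range")
```

`throughput_at(3.7)` falls between the batch-size-6 and batch-size-8 measurements, as expected.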
gpu_counts: {'NVIDIA L40S': 1}
display_name: training123
is_internal_developer: True
language_model: ChaiML/2fe5-v1-dpo_lr15-b025
model_size: 13B
ranking_group: single
throughput_3p7s: 1.71
us_pacific_date: 2025-11-18
win_ratio: 0.534443356186745
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', 'User:', '</s>', 'You:'], 'max_input_tokens': 2048, 'best_of': 8, 'max_output_tokens': 64}
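The `stopping_words` list in `generation_params` implies completions are cut at the earliest occurrence of any stop sequence. A minimal sketch of that truncation step (the example completion text is invented):

```python
# Hedged sketch: truncate a raw completion at the earliest stop sequence,
# using the stopping_words list from generation_params above.
STOPPING_WORDS = ["\n", "User:", "</s>", "You:"]

def truncate_at_stop(text, stops=STOPPING_WORDS):
    cut = len(text)
    for s in stops:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)  # keep only text before the earliest stop
    return text[:cut]
```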
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': True}
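The formatter templates above can be composed into a full prompt: persona block, scenario, alternating turns, then the response template as the generation prefix. A minimal sketch, with invented conversation content:

```python
# Sketch of applying the formatter templates from the record above.
# The persona, prompt, and messages here are invented examples.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def render(memory, prompt, turns, bot_name="Bot", user_name="User"):
    """turns: list of (speaker, message); speaker is 'user' or 'bot'."""
    out = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
    out += formatter["prompt_template"].format(prompt=prompt)
    for speaker, message in turns:
        tpl = formatter["user_template"] if speaker == "user" else formatter["bot_template"]
        out += tpl.format(user_name=user_name, bot_name=bot_name, message=message)
    # Response template primes the model to continue as the bot.
    return out + formatter["response_template"].format(bot_name=bot_name)
```

`truncate_by_message: True` suggests over-length context is dropped whole messages at a time rather than mid-message, though that pruning step is not shown here.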
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer
Waiting for job on chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer to finish
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ ██████ ██████ █████ ████ ████ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ ░░██████ ██████ ░░███ ███░ ░░███ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ ░███░█████░███ ░███ ███ ░███ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ ░███░░███ ░███ ░███████ ░███ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ ░███ ░░░ ░███ ░███░░███ ░███ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ ░███ ░███ ░███ ░░███ ░███ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ █████ █████ █████ ░░████ █████ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ ░░░░░ ░░░░░ ░░░░░ ░░░░ ░░░░░ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ Version: 0.30.2 ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ Features: FLYWHEEL, CUDA ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ Copyright 2023-2025 MK ONE TECHNOLOGIES Inc. ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ https://mk1.ai ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ The license key for the current software has been verified as ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ belonging to: ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ Chai Research Corp. ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ Expiration: 2028-03-31 23:59:59 ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ║ ║
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: Downloaded to shared memory in 49.648s
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: Checking if ChaiML/2fe5-v1-dpo_lr15-b025 already exists in ChaiML
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmp0a35nk_v, device:0
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: quantized model in 27.070s
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: Processed model ChaiML/2fe5-v1-dpo_lr15-b025 in 76.719s
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: creating bucket guanaco-mkml-models
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/chaiml-2fe5-v1-dpo-lr15-b025-v1/nvidia
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/chaiml-2fe5-v1-dpo-lr15-b025-v1/nvidia/config.json
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/chaiml-2fe5-v1-dpo-lr15-b025-v1/nvidia/special_tokens_map.json
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/chaiml-2fe5-v1-dpo-lr15-b025-v1/nvidia/tokenizer_config.json
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/chaiml-2fe5-v1-dpo-lr15-b025-v1/nvidia/tokenizer.json
chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/chaiml-2fe5-v1-dpo-lr15-b025-v1/nvidia/flywheel_model.0.safetensors
Job chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer completed after 128.38s with status: succeeded
Stopping job with name chaiml-2fe5-v1-dpo-lr15-b025-v1-mkmlizer
Pipeline stage MKMLizer completed in 129.25s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.28s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service chaiml-2fe5-v1-dpo-lr15-b025-v1
Waiting for inference service chaiml-2fe5-v1-dpo-lr15-b025-v1 to be ready
Failed to get response for submission mistralai-mistral-nem_93303_v569: ('http://mistralai-mistral-nem-93303-v569-predictor.tenant-chaiml-guanaco.k2.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', '')
Inference service chaiml-2fe5-v1-dpo-lr15-b025-v1 ready after 151.0109577178955s
Pipeline stage MKMLDeployer completed in 151.46s
run pipeline stage %s
Running pipeline stage StressChecker
HTTPConnectionPool(host='guanaco-submitter.guanaco-backend.k2.chaiverse.com', port=80): Read timed out. (read timeout=20)
Received unhealthy response to inference request!
Received healthy response to inference request in 1.685718059539795s
Received healthy response to inference request in 1.3738336563110352s
Received healthy response to inference request in 1.375685691833496s
Received healthy response to inference request in 1.2477443218231201s
5 requests
1 failed requests
5th percentile: 1.2729621887207032
10th percentile: 1.298180055618286
20th percentile: 1.348615789413452
30th percentile: 1.3742040634155273
40th percentile: 1.3749448776245117
50th percentile: 1.375685691833496
60th percentile: 1.4996986389160156
70th percentile: 1.6237115859985352
80th percentile: 5.377420473098758
90th percentile: 12.760825300216677
95th percentile: 16.45252771377563
99th percentile: 19.405889644622803
mean time: 5.165442371368409
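The percentile figures above are consistent with linearly interpolated percentiles over all five requests, with the failed request included at its recorded wall time. That wall time is not logged directly, but under this assumption it can be recovered from the reported mean:

```python
import numpy as np

# Healthy latencies from the first StressChecker batch (seconds).
healthy = [1.2477443218231201, 1.3738336563110352,
           1.375685691833496, 1.685718059539795]

# Assumption: the failed request is counted in the stats at its recorded
# wall time, recoverable from the reported mean of 5.165442... s over 5.
failed = 5 * 5.165442371368409 - sum(healthy)   # about 20.14 s

samples = sorted(healthy + [failed])

# numpy's default linear-interpolation percentiles reproduce the log.
for q in (5, 50, 90, 99):
    print(f"{q}th percentile: {np.percentile(samples, q)}")
```

The ~20 s value for the failed request matches the 20 s read timeout reported at the start of this stage, which explains the inflated 80th-99th percentiles and mean.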
%s, retrying in %s seconds...
Received healthy response to inference request in 1.2894649505615234s
Received healthy response to inference request in 1.5345399379730225s
Received healthy response to inference request in 1.2962274551391602s
Received healthy response to inference request in 1.3595833778381348s
Received healthy response to inference request in 1.8381929397583008s
5 requests
0 failed requests
5th percentile: 1.2908174514770507
10th percentile: 1.292169952392578
20th percentile: 1.2948749542236329
30th percentile: 1.308898639678955
40th percentile: 1.334241008758545
50th percentile: 1.3595833778381348
60th percentile: 1.42956600189209
70th percentile: 1.4995486259460449
80th percentile: 1.5952705383300783
90th percentile: 1.7167317390441894
95th percentile: 1.777462339401245
99th percentile: 1.8260468196868895
mean time: 1.4636017322540282
Pipeline stage StressChecker completed in 35.82s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.95s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.68s
Shutdown handler de-registered
chaiml-2fe5-v1-dpo-lr15-b025_v1 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 2469.15s
Shutdown handler de-registered
chaiml-2fe5-v1-dpo-lr15-b025_v1 status is now inactive due to auto-deactivation (removal of underperforming models)
chaiml-2fe5-v1-dpo-lr15-b025_v1 status is now torndown due to DeploymentManager action