developer_uid: azuruce
submission_id: chaiml-dpo-training-alb_15532_v1
model_name: chaiml-dpo-training-alb_15532_v1
model_group: ChaiML/dpo-training-albe
status: torndown
timestamp: 2025-01-07T23:08:38+00:00
num_battles: 10594
num_wins: 4901
celo_rating: 1241.57
family_friendly_score: 0.5908
family_friendly_standard_error: 0.006953493510459329
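The reported standard error is consistent with a plain binomial standard error sqrt(p·(1−p)/n). The sample size n is not stated anywhere in this log, so back-solving for it is an assumption, but the numbers land almost exactly on n = 5000:

```python
import math

# Values reported in the log above.
p = 0.5908                          # family_friendly_score
reported_se = 0.006953493510459329  # family_friendly_standard_error

# Assumption: se = sqrt(p * (1 - p) / n), the binomial standard error.
# n is not in the log; we back-solve for it from the reported values.
n = p * (1 - p) / reported_se ** 2
se = math.sqrt(p * (1 - p) / round(n))

print(round(n))  # 5000
print(abs(se - reported_se) < 1e-9)  # True
```

This suggests the family-friendly score was estimated over roughly 5,000 scored samples, though the log itself does not confirm the sample size.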
submission_type: basic
model_repo: ChaiML/dpo-training-albert-2n2g
model_architecture: MistralForCausalLM
model_num_parameters: 12772070400
best_of: 4
max_input_tokens: 1024
max_output_tokens: 64
reward_model: default
latencies:
  batch_size  throughput  latency_mean  latency_p50  latency_p90
  1           0.6379      1.5676        1.5661       1.7150
  3           1.2648      2.3695        2.3807       2.5709
  5           1.5988      3.1106        3.1149       3.4776
  6           1.7138      3.4789        3.4811       3.9076
  8           1.8557      4.2836        4.2884       4.7626
  10          1.9519      5.0838        5.0670       5.7516
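The derived throughput_3p7s figure below (1.77) plausibly reads as throughput at a mean latency of 3.7 s. A straight linear interpolation over the measured points (an assumption — the production formula is not shown in this log) lands in the same neighborhood, though not exactly on the reported value, so the real metric may use a slightly different method:

```python
# Measured (latency_mean, throughput) pairs from the table above.
points = [
    (1.5676, 0.6379), (2.3695, 1.2648), (3.1106, 1.5988),
    (3.4789, 1.7138), (4.2836, 1.8557), (5.0838, 1.9519),
]

def throughput_at(target_latency):
    # Linear interpolation between the two bracketing measurements.
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= target_latency <= x1:
            t = (target_latency - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("latency outside measured range")

print(round(throughput_at(3.7), 2))  # ~1.75, near the reported 1.77
```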
gpu_counts: {'NVIDIA RTX A5000': 1}
display_name: chaiml-dpo-training-alb_15532_v1
is_internal_developer: False
language_model: ChaiML/dpo-training-albert-2n2g
model_size: 13B
ranking_group: single
throughput_3p7s: 1.77
us_pacific_date: 2025-01-07
win_ratio: 0.46262035114215594
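The win_ratio follows directly from the battle counts above (num_wins / num_battles); a one-line check:

```python
num_battles = 10594  # from the log above
num_wins = 4901

win_ratio = num_wins / num_battles
print(win_ratio)  # 0.46262035114215594, matching the log
```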
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 100, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['Bot:', '<|im_end|>', 'User:', 'You:', '\n', 'Me', '</s>', '####', '<|eot_id|>'], 'max_input_tokens': 1024, 'best_of': 4, 'max_output_tokens': 64}
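best_of: 4 together with a reward_model implies best-of-n sampling: each request draws several candidate completions and returns the one the reward model scores highest. A minimal sketch of that selection loop, with a stub generator and scorer standing in for the real model and reward model (both hypothetical — the serving code is not part of this log):

```python
from typing import Callable, List

def best_of_n(generate: Callable[[], str],
              score: Callable[[str], float],
              n: int = 4) -> str:
    # Sample n candidate completions and keep the one the reward
    # model scores highest -- the selection implied by best_of: 4.
    candidates: List[str] = [generate() for _ in range(n)]
    return max(candidates, key=score)

# Stub generator and scorer, for illustration only.
canned = iter(["ok", "great answer", "meh", "fine"])
result = best_of_n(lambda: next(canned), score=len, n=4)
print(result)  # "great answer" -- longest, i.e. highest stub score
```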
formatter: {'memory_template': '', 'prompt_template': '', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
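The formatter above renders a plain speaker-prefixed transcript and appends the response template as the prefix the model completes. A sketch of how these templates likely combine (an assumption — the exact concatenation logic lives in the serving code, not this log):

```python
# Formatter templates copied from the log above.
formatter = {
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def render(turns, bot_name="Bot", user_name="User"):
    # Assumption: turns are concatenated in order and the response
    # template is appended as the generation prefix.
    parts = []
    for speaker, message in turns:
        template = (formatter["bot_template"] if speaker == "bot"
                    else formatter["user_template"])
        parts.append(template.format(bot_name=bot_name,
                                     user_name=user_name,
                                     message=message))
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

prompt = render([("user", "hi"), ("bot", "hello!"), ("user", "how are you?")])
print(prompt)
```

Note that the stopping_words in generation_params ('User:', 'Bot:', '\n', ...) mirror these turn prefixes, so sampling halts at the next turn boundary.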
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name chaiml-dpo-training-alb-15532-v1-mkmlizer
Waiting for job on chaiml-dpo-training-alb-15532-v1-mkmlizer to finish
chaiml-dpo-training-alb-15532-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ _____ __ __ ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ /___/ ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ Version: 0.11.12 ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ https://mk1.ai ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ The license key for the current software has been verified as ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ belonging to: ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ Chai Research Corp. ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ Expiration: 2025-04-15 23:59:59 ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ║ ║
chaiml-dpo-training-alb-15532-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
chaiml-dpo-training-alb-15532-v1-mkmlizer: Downloaded to shared memory in 47.643s
chaiml-dpo-training-alb-15532-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpyqiln_hi, device:0
chaiml-dpo-training-alb-15532-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
chaiml-dpo-training-alb-15532-v1-mkmlizer: quantized model in 36.459s
chaiml-dpo-training-alb-15532-v1-mkmlizer: Processed model ChaiML/dpo-training-albert-2n2g in 84.103s
chaiml-dpo-training-alb-15532-v1-mkmlizer: creating bucket guanaco-mkml-models
chaiml-dpo-training-alb-15532-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
chaiml-dpo-training-alb-15532-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/chaiml-dpo-training-alb-15532-v1
chaiml-dpo-training-alb-15532-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/chaiml-dpo-training-alb-15532-v1/config.json
chaiml-dpo-training-alb-15532-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/chaiml-dpo-training-alb-15532-v1/special_tokens_map.json
chaiml-dpo-training-alb-15532-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/chaiml-dpo-training-alb-15532-v1/tokenizer_config.json
chaiml-dpo-training-alb-15532-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/chaiml-dpo-training-alb-15532-v1/tokenizer.json
chaiml-dpo-training-alb-15532-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/chaiml-dpo-training-alb-15532-v1/flywheel_model.0.safetensors
chaiml-dpo-training-alb-15532-v1-mkmlizer: Loading 0: 0%| | 0/363 [00:00<?, ?it/s] ... Loading 0: 99%|█████████▊| 358/363 [00:15<00:00, 26.82it/s]
Job chaiml-dpo-training-alb-15532-v1-mkmlizer completed after 114.43s with status: succeeded
Stopping job with name chaiml-dpo-training-alb-15532-v1-mkmlizer
Pipeline stage MKMLizer completed in 115.01s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.17s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service chaiml-dpo-training-alb-15532-v1
Waiting for inference service chaiml-dpo-training-alb-15532-v1 to be ready
Inference service chaiml-dpo-training-alb-15532-v1 ready after 341.30884623527527s
Pipeline stage MKMLDeployer completed in 341.87s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.5643959045410156s
Received healthy response to inference request in 0.9480502605438232s
Received healthy response to inference request in 0.7433555126190186s
Received healthy response to inference request in 1.3934588432312012s
Received healthy response to inference request in 1.0967864990234375s
5 requests
0 failed requests
5th percentile: 0.7842944622039795
10th percentile: 0.8252334117889404
20th percentile: 0.9071113109588623
30th percentile: 0.9777975082397461
40th percentile: 1.0372920036315918
50th percentile: 1.0967864990234375
60th percentile: 1.2154554367065429
70th percentile: 1.3341243743896485
80th percentile: 1.4276462554931642
90th percentile: 1.4960210800170899
95th percentile: 1.5302084922790526
99th percentile: 1.557558422088623
mean time: 1.1492094039916991
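The percentiles above appear reproducible from the five response times using linear interpolation between closest ranks (numpy's default percentile method, assumed here — the stress checker's own code is not shown):

```python
def percentile(sorted_xs, q):
    # Linear interpolation between closest ranks -- numpy's default
    # 'linear' method, which reproduces the figures in the log.
    k = (len(sorted_xs) - 1) * q / 100
    lo = int(k)
    hi = min(lo + 1, len(sorted_xs) - 1)
    return sorted_xs[lo] + (k - lo) * (sorted_xs[hi] - sorted_xs[lo])

# The five healthy response times reported above, sorted.
times = sorted([1.5643959045410156, 0.9480502605438232, 0.7433555126190186,
                1.3934588432312012, 1.0967864990234375])

print(percentile(times, 5))      # matches the log's 5th percentile
print(percentile(times, 90))     # matches the 90th percentile
print(sum(times) / len(times))   # matches the reported mean time
```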
Pipeline stage StressChecker completed in 7.46s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.76s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.99s
Shutdown handler de-registered
chaiml-dpo-training-alb_15532_v1 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.10s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.09s
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 2399.80s
Shutdown handler de-registered
chaiml-dpo-training-alb_15532_v1 status is now inactive due to auto deactivation of underperforming models
chaiml-dpo-training-alb_15532_v1 status is now torndown due to DeploymentManager action