developer_uid: zmeeks
submission_id: cognitivecomputations-do_9214_v6
model_name: dolph-c0909
model_group: cognitivecomputations/do
status: torndown
timestamp: 2025-06-28T20:26:18+00:00
num_battles: 5447
num_wins: 2156
celo_rating: 1193.69
family_friendly_score: 0.636
family_friendly_standard_error: 0.006804469119630127
submission_type: basic
model_repo: cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
model_architecture: MistralForCausalLM
model_num_parameters: 12772090880.0
best_of: 8
max_input_tokens: 1024
max_output_tokens: 64
reward_model: default
latencies: [{'batch_size': 1, 'throughput': 0.5950819202733987, 'latency_mean': 1.68026242852211, 'latency_p50': 1.6767306327819824, 'latency_p90': 1.845559310913086}, {'batch_size': 3, 'throughput': 1.074973525569674, 'latency_mean': 2.781644207239151, 'latency_p50': 2.787246584892273, 'latency_p90': 3.0846478223800657}, {'batch_size': 5, 'throughput': 1.281339521438979, 'latency_mean': 3.8806371665000916, 'latency_p50': 3.8962095975875854, 'latency_p90': 4.305924797058105}, {'batch_size': 6, 'throughput': 1.336546680421669, 'latency_mean': 4.461724126338959, 'latency_p50': 4.471400737762451, 'latency_p90': 4.975725674629212}, {'batch_size': 8, 'throughput': 1.3994879950208798, 'latency_mean': 5.690701376199723, 'latency_p50': 5.685524225234985, 'latency_p90': 6.363058066368103}, {'batch_size': 10, 'throughput': 1.4376566370665742, 'latency_mean': 6.902973840236664, 'latency_p50': 6.91237211227417, 'latency_p90': 7.775298452377319}]
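The latency table above pairs each batch size with a measured throughput and mean latency. The `throughput_3p7s` field further down plausibly comes from interpolating throughput at a 3.7 s mean-latency budget between the two bracketing batch sizes; whether it was computed exactly this way is an assumption, and the sketch below only illustrates that reading:

```python
# Hedged sketch: estimate sustainable throughput at a target mean latency by
# linear interpolation between the measured batch-size points above. Whether
# the logged throughput_3p7s was derived exactly this way is an assumption.
latencies = [
    {'batch_size': 1,  'throughput': 0.5950819202733987, 'latency_mean': 1.68026242852211},
    {'batch_size': 3,  'throughput': 1.074973525569674,  'latency_mean': 2.781644207239151},
    {'batch_size': 5,  'throughput': 1.281339521438979,  'latency_mean': 3.8806371665000916},
    {'batch_size': 6,  'throughput': 1.336546680421669,  'latency_mean': 4.461724126338959},
    {'batch_size': 8,  'throughput': 1.3994879950208798, 'latency_mean': 5.690701376199723},
    {'batch_size': 10, 'throughput': 1.4376566370665742, 'latency_mean': 6.902973840236664},
]

def throughput_at(target_latency, points):
    """Interpolate throughput at a target mean latency between bracketing rows."""
    pts = sorted(points, key=lambda r: r['latency_mean'])
    for lo, hi in zip(pts, pts[1:]):
        if lo['latency_mean'] <= target_latency <= hi['latency_mean']:
            frac = (target_latency - lo['latency_mean']) / (hi['latency_mean'] - lo['latency_mean'])
            return lo['throughput'] + frac * (hi['throughput'] - lo['throughput'])
    raise ValueError('target latency outside measured range')
```

Interpolating at 3.7 s lands between the batch-size-3 and batch-size-5 rows, close to (though not exactly) the logged `throughput_3p7s` of 1.26.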
gpu_counts: {'NVIDIA RTX A5000': 1}
display_name: dolph-c0909
is_internal_developer: False
language_model: cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
model_size: 13B
ranking_group: single
throughput_3p7s: 1.26
us_pacific_date: 2025-06-28
win_ratio: 0.3958142096566918
generation_params: {'temperature': 0.9, 'top_p': 0.9, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': '<|im_start|>system\n{memory}<|im_end|>\n', 'prompt_template': '<|im_start|>user\n{prompt}<|im_end|>\n', 'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n', 'user_template': '<|im_start|>user\n{user_name}: {message}<|im_end|>\n', 'response_template': '<|im_start|>assistant\n{bot_name}:', 'truncate_by_message': False}
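The `formatter` dict above defines how a conversation is flattened into a ChatML-style prompt: system memory, scenario prompt, alternating user/bot turns, then an open-ended response header the model completes. A minimal rendering sketch, using the templates verbatim (the field values such as the speaker names are hypothetical examples, not taken from the actual deployment), plus a consistency check of the leaderboard stats:

```python
# Templates copied verbatim from the formatter config above.
formatter = {
    'memory_template': '<|im_start|>system\n{memory}<|im_end|>\n',
    'prompt_template': '<|im_start|>user\n{prompt}<|im_end|>\n',
    'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n',
    'user_template': '<|im_start|>user\n{user_name}: {message}<|im_end|>\n',
    'response_template': '<|im_start|>assistant\n{bot_name}:',
}

def render(memory, prompt, turns, bot_name):
    """Assemble the prompt: memory, scenario, chat turns, then the
    open-ended assistant header that the model is asked to continue."""
    out = formatter['memory_template'].format(memory=memory)
    out += formatter['prompt_template'].format(prompt=prompt)
    for speaker, message in turns:
        if speaker == bot_name:
            out += formatter['bot_template'].format(bot_name=bot_name, message=message)
        else:
            out += formatter['user_template'].format(user_name=speaker, message=message)
    return out + formatter['response_template'].format(bot_name=bot_name)

# Sanity check on the metadata above: win_ratio = num_wins / num_battles.
assert abs(2156 / 5447 - 0.3958142096566918) < 1e-12
```

Note that `truncate_by_message: False` and `max_input_tokens: 1024` imply the rendered string is truncated at the token level rather than by dropping whole messages.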
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name cognitivecomputations-do-9214-v6-mkmlizer
Waiting for job on cognitivecomputations-do-9214-v6-mkmlizer to finish
cognitivecomputations-do-9214-v6-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
cognitivecomputations-do-9214-v6-mkmlizer: ║ ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ ██████ ██████ █████ ████ ████ ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ ░░██████ ██████ ░░███ ███░ ░░███ ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ ░███░█████░███ ░███ ███ ░███ ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ ░███░░███ ░███ ░███████ ░███ ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ ░███ ░░░ ░███ ░███░░███ ░███ ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ ░███ ░███ ░███ ░░███ ░███ ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ █████ █████ █████ ░░████ █████ ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ ░░░░░ ░░░░░ ░░░░░ ░░░░ ░░░░░ ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ Version: 0.29.3 ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ Features: FLYWHEEL, CUDA ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ Copyright 2023-2025 MK ONE TECHNOLOGIES Inc. ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ https://mk1.ai ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ The license key for the current software has been verified as ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ belonging to: ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ Chai Research Corp. ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ Expiration: 2028-03-31 23:59:59 ║
cognitivecomputations-do-9214-v6-mkmlizer: ║ ║
cognitivecomputations-do-9214-v6-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
cognitivecomputations-do-9214-v6-mkmlizer: Xet Storage is enabled for this repo, but the 'hf_xet' package is not installed. Falling back to regular HTTP download. For better performance, install the package with: `pip install huggingface_hub[hf_xet]` or `pip install hf_xet`
cognitivecomputations-do-9214-v6-mkmlizer: Downloaded to shared memory in 51.715s
cognitivecomputations-do-9214-v6-mkmlizer: Checking if cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b already exists in ChaiML
cognitivecomputations-do-9214-v6-mkmlizer: Creating repo ChaiML/dolphin-2.9.3-mistral-nemo-12b and uploading /tmp/tmp14kqujaj to it
cognitivecomputations-do-9214-v6-mkmlizer: 100%|██████████| 5/5 [00:29<00:00, 5.95s/it]
cognitivecomputations-do-9214-v6-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmp14kqujaj, device:0
cognitivecomputations-do-9214-v6-mkmlizer: Saving flywheel model at /dev/shm/model_cache
cognitivecomputations-do-9214-v6-mkmlizer: quantized model in 29.966s
cognitivecomputations-do-9214-v6-mkmlizer: Processed model cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b in 138.568s
cognitivecomputations-do-9214-v6-mkmlizer: creating bucket guanaco-mkml-models
cognitivecomputations-do-9214-v6-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
cognitivecomputations-do-9214-v6-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/cognitivecomputations-do-9214-v6
cognitivecomputations-do-9214-v6-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/cognitivecomputations-do-9214-v6/config.json
cognitivecomputations-do-9214-v6-mkmlizer: cp /dev/shm/model_cache/added_tokens.json s3://guanaco-mkml-models/cognitivecomputations-do-9214-v6/added_tokens.json
cognitivecomputations-do-9214-v6-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/cognitivecomputations-do-9214-v6/special_tokens_map.json
cognitivecomputations-do-9214-v6-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/cognitivecomputations-do-9214-v6/tokenizer_config.json
cognitivecomputations-do-9214-v6-mkmlizer: cp /dev/shm/model_cache/vocab.json s3://guanaco-mkml-models/cognitivecomputations-do-9214-v6/vocab.json
cognitivecomputations-do-9214-v6-mkmlizer: cp /dev/shm/model_cache/merges.txt s3://guanaco-mkml-models/cognitivecomputations-do-9214-v6/merges.txt
cognitivecomputations-do-9214-v6-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/cognitivecomputations-do-9214-v6/tokenizer.json
cognitivecomputations-do-9214-v6-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/cognitivecomputations-do-9214-v6/flywheel_model.0.safetensors
Job cognitivecomputations-do-9214-v6-mkmlizer completed after 167.22s with status: succeeded
Stopping job with name cognitivecomputations-do-9214-v6-mkmlizer
Pipeline stage MKMLizer completed in 167.71s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.16s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service cognitivecomputations-do-9214-v6
Waiting for inference service cognitivecomputations-do-9214-v6 to be ready
Inference service cognitivecomputations-do-9214-v6 ready after 171.48875999450684s
Pipeline stage MKMLDeployer completed in 172.03s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.850149154663086s
Received healthy response to inference request in 1.8555512428283691s
Received healthy response to inference request in 0.33681249618530273s
Received healthy response to inference request in 0.9558582305908203s
Received healthy response to inference request in 1.6035256385803223s
5 requests
0 failed requests
5th percentile: 0.46062164306640624
10th percentile: 0.5844307899475097
20th percentile: 0.8320490837097169
30th percentile: 1.0853917121887207
40th percentile: 1.3444586753845216
50th percentile: 1.6035256385803223
60th percentile: 1.7021750450134276
70th percentile: 1.8008244514465332
80th percentile: 1.8512295722961425
90th percentile: 1.8533904075622558
95th percentile: 1.8544708251953126
99th percentile: 1.8553351593017577
mean time: 1.3203793525695802
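The StressChecker summary above can be reproduced from the five response times it logged, assuming the percentiles use numpy-style linear interpolation (which the logged values are consistent with):

```python
# Hedged sketch: recompute the StressChecker percentile summary from the five
# response times logged above, assuming linear-interpolation percentiles
# (numpy's default 'linear' method), reimplemented here with the stdlib only.
def percentile(xs, p):
    """Percentile with linear interpolation between closest ranks."""
    xs = sorted(xs)
    rank = (p / 100) * (len(xs) - 1)
    lo = int(rank)
    frac = rank - lo
    if lo + 1 < len(xs):
        return xs[lo] + frac * (xs[lo + 1] - xs[lo])
    return xs[lo]

# The five healthy response times, in the order they were logged.
times = [1.850149154663086, 1.8555512428283691, 0.33681249618530273,
         0.9558582305908203, 1.6035256385803223]
```

With these five samples the 50th percentile is simply the median (1.6035 s), and the 5th percentile interpolates a fifth of the way from the fastest to the second-fastest request, matching the logged 0.4606 s.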
Pipeline stage StressChecker completed in 7.91s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.69s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.67s
Shutdown handler de-registered
cognitivecomputations-do_9214_v6 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.10s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.10s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service cognitivecomputations-do-9214-v6-profiler
Waiting for inference service cognitivecomputations-do-9214-v6-profiler to be ready
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 3145.83s
Shutdown handler de-registered
cognitivecomputations-do_9214_v6 status is now inactive due to auto-deactivation of underperforming models
cognitivecomputations-do_9214_v6 status is now torndown due to DeploymentManager action