developer_uid: junhua024
submission_id: mistralai-mistral-nem_93303_v520
model_name: mistralai-mistral-nem_93303_v520
model_group: mistralai/Mistral-Nemo-I
status: torndown
timestamp: 2025-07-20T11:31:45+00:00
num_battles: 9628
num_wins: 4422
celo_rating: 1244.1
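For context, an Elo-style rating converts to an expected win probability via E = 1 / (1 + 10^((R_b - R_a) / 400)). A minimal sketch, assuming the leaderboard uses the standard Elo formula (the 1280 opponent rating is a hypothetical illustration):

```python
def expected_win_prob(r_a: float, r_b: float) -> float:
    """Standard Elo expected score of player a against player b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# A 1244.1-rated model facing a hypothetical 1280-rated opponent would be
# expected to win roughly 45% of battles, in line with the win_ratio below.
print(expected_win_prob(1244.1, 1280.0))  # ~0.449
```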
family_friendly_score: 0.5964
family_friendly_standard_error: 0.006938400968522935
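A quick plausibility check on the reported standard error: if family_friendly_score is the mean of n binary ratings, its standard error should follow the binomial formula sqrt(p * (1 - p) / n). That formula is an assumption about how the score is computed; solving it for n suggests roughly 5,000 ratings behind this figure:

```python
# Assumption: family_friendly_score is a mean of n binary ratings, so
# SE = sqrt(p * (1 - p) / n); solve for the implied sample size n.
p = 0.5964
se = 0.006938400968522935
n = p * (1 - p) / se ** 2
print(f"implied sample size: {n:.0f}")  # ~5000
```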
submission_type: basic
model_repo: mistralai/Mistral-Nemo-Instruct-2407
model_architecture: MistralForCausalLM
model_num_parameters: 12772070400
best_of: 8
max_input_tokens: 1024
max_output_tokens: 64
reward_model: default
latencies (per batch size; latencies in seconds):
  batch_size  throughput  latency_mean  latency_p50  latency_p90
  1           0.600       1.668         1.659        1.843
  3           1.071       2.789         2.801        3.130
  5           1.277       3.898         3.905        4.266
  6           1.355       4.410         4.415        4.894
  8           1.397       5.680         5.720        6.349
  10          1.439       6.883         6.889        7.683
gpu_counts: {'NVIDIA RTX A5000': 1}
display_name: mistralai-mistral-nem_93303_v520
is_internal_developer: False
language_model: mistralai/Mistral-Nemo-Instruct-2407
model_size: 13B
ranking_group: single
throughput_3p7s: 1.25
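throughput_3p7s appears to be derived from the latency profile above. A minimal sketch, assuming it is the throughput linearly interpolated at a 3.7 s mean latency (this yields 1.24, close to the reported 1.25):

```python
import numpy as np

# Mean latency (s) and throughput pairs from the latencies table above.
latency_mean = [1.668, 2.789, 3.898, 4.410, 5.680, 6.883]
throughput = [0.600, 1.071, 1.277, 1.355, 1.397, 1.439]

# Assumption: throughput_3p7s = throughput interpolated at 3.7 s latency.
print(round(float(np.interp(3.7, latency_mean, throughput)), 2))  # 1.24
```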
us_pacific_date: 2025-07-20
win_ratio: 0.45928541753219776
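win_ratio is simply num_wins / num_battles from the counts above:

```python
num_wins, num_battles = 4422, 9628
print(num_wins / num_battles)  # 0.45928541753219776, matching win_ratio
```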
generation_params: {'temperature': 1.0, 'top_p': 0.88, 'min_p': 0.0, 'top_k': 10, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
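These decoding parameters map almost one-to-one onto a typical sampling API. A minimal sketch using vLLM's SamplingParams purely as an illustration (the actual serving stack is the MKML runtime built below, so the mapping is an assumption):

```python
from vllm import SamplingParams

# Hypothetical equivalent of generation_params; best_of=8 draws eight
# candidate completions, which the reward model can then rerank.
params = SamplingParams(
    temperature=1.0,
    top_p=0.88,
    top_k=10,
    min_p=0.0,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["\n"],     # stopping_words: cut generation at the first newline
    max_tokens=64,   # max_output_tokens
    best_of=8,
)
```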
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
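The formatter's templates assemble the persona, chat history, and response cue into one flat prompt string. A minimal sketch of how such a formatter could be applied; the template keys come from the dict above, while the assembly order and the build_prompt helper are assumptions:

```python
def build_prompt(fmt, bot_name, memory, prompt, turns, user_name):
    """Assemble a flat prompt from formatter templates (hypothetical helper)."""
    out = fmt["memory_template"].format(bot_name=bot_name, memory=memory)
    out += fmt["prompt_template"].format(prompt=prompt)
    for speaker, message in turns:  # turns: list of ("bot" | "user", text)
        template = fmt["bot_template"] if speaker == "bot" else fmt["user_template"]
        out += template.format(bot_name=bot_name, user_name=user_name,
                               message=message)
    # The model continues from "{bot_name}:" to produce the next reply.
    return out + fmt["response_template"].format(bot_name=bot_name)
```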
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name mistralai-mistral-nem-93303-v520-mkmlizer
Waiting for job on mistralai-mistral-nem-93303-v520-mkmlizer to finish
mistralai-mistral-nem-93303-v520-mkmlizer: Downloaded to shared memory in 50.179s
mistralai-mistral-nem-93303-v520-mkmlizer: Checking if mistralai/Mistral-Nemo-Instruct-2407 already exists in ChaiML
mistralai-mistral-nem-93303-v520-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpgzgc16_r, device:0
mistralai-mistral-nem-93303-v520-mkmlizer: Saving flywheel model at /dev/shm/model_cache
mistralai-mistral-nem-93303-v520-mkmlizer: quantized model in 35.849s
mistralai-mistral-nem-93303-v520-mkmlizer: Processed model mistralai/Mistral-Nemo-Instruct-2407 in 86.104s
mistralai-mistral-nem-93303-v520-mkmlizer: creating bucket guanaco-mkml-models
mistralai-mistral-nem-93303-v520-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
mistralai-mistral-nem-93303-v520-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/mistralai-mistral-nem-93303-v520/nvidia
mistralai-mistral-nem-93303-v520-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/mistralai-mistral-nem-93303-v520/nvidia/config.json
mistralai-mistral-nem-93303-v520-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/mistralai-mistral-nem-93303-v520/nvidia/special_tokens_map.json
mistralai-mistral-nem-93303-v520-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/mistralai-mistral-nem-93303-v520/nvidia/tokenizer_config.json
mistralai-mistral-nem-93303-v520-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/mistralai-mistral-nem-93303-v520/nvidia/tokenizer.json
Job mistralai-mistral-nem-93303-v520-mkmlizer completed after 153.18s with status: succeeded
Stopping job with name mistralai-mistral-nem-93303-v520-mkmlizer
Pipeline stage MKMLizer completed in 154.17s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.15s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service mistralai-mistral-nem-93303-v520
Waiting for inference service mistralai-mistral-nem-93303-v520 to be ready
Failed to get response for submission chaiml-nis-qwen32b-sim_98336_v34: HTTPConnectionPool(host='chaiml-nis-qwen32b-sim-98336-v34-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
Failed to get response for submission chaiml-nis-qwen32b-sim_98336_v34: HTTPConnectionPool(host='chaiml-nis-qwen32b-sim-98336-v34-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
Failed to get response for submission blend_fader_2025-07-10: HTTPConnectionPool(host='chaiml-mistral32-deepse-89261-v4-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Max retries exceeded with url: /v1/models/GPT-J-6B-lit-v2:predict (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x771340d53410>, 'Connection to chaiml-mistral32-deepse-89261-v4-predictor.tenant-chaiml-guanaco.k.chaiverse.com timed out. (connect timeout=12.0)'))
Failed to get response for submission chaiml-nis-qwen32b-sim_98336_v34: HTTPConnectionPool(host='chaiml-nis-qwen32b-sim-98336-v34-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
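The timeout errors above come from other submissions sharing the logger, not from this deployment. They follow the standard requests timeout pattern; a hypothetical reproduction, assuming a plain POST against a predictor endpoint (the payload shape is illustrative):

```python
import requests

url = ("http://chaiml-nis-qwen32b-sim-98336-v34-predictor"
       ".tenant-chaiml-guanaco.k.chaiverse.com/v1/models/model:predict")
try:
    # The 12-second limit matches the "read timeout=12.0" in the errors above.
    requests.post(url, json={"text": "hello"}, timeout=12.0)
except requests.exceptions.Timeout as err:
    print(f"Failed to get response: {err}")
```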
Inference service mistralai-mistral-nem-93303-v520 ready after 331.39s
Pipeline stage MKMLDeployer completed in 331.94s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.430459499359131s
Received healthy response to inference request in 1.6707134246826172s
Received healthy response to inference request in 1.65399169921875s
Received healthy response to inference request in 1.7353451251983643s
Received healthy response to inference request in 2.086935043334961s
5 requests
Failed to get response for submission chaiml-nis-qwen32b-sim_98336_v34: HTTPConnectionPool(host='chaiml-nis-qwen32b-sim-98336-v34-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
0 failed requests
5th percentile: 1.6573360443115235
10th percentile: 1.660680389404297
20th percentile: 1.6673690795898437
30th percentile: 1.6836397647857666
40th percentile: 1.7094924449920654
50th percentile: 1.7353451251983643
60th percentile: 1.875981092453003
70th percentile: 2.0166170597076416
80th percentile: 2.155639934539795
90th percentile: 2.293049716949463
95th percentile: 2.3617546081542966
99th percentile: 2.416718521118164
mean time: 1.9154889583587646
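The percentile block is consistent with numpy's default linear-interpolation percentiles over the five response times. A minimal sketch that reproduces the figures above:

```python
import numpy as np

# The five healthy response times (seconds) from the StressChecker stage.
times = [2.430459499359131, 1.6707134246826172, 1.65399169921875,
         1.7353451251983643, 2.086935043334961]

for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{q}th percentile: {np.percentile(times, q)}")
print(f"mean time: {np.mean(times)}")  # 1.9154889583587646
```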
Pipeline stage StressChecker completed in 12.59s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.71s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.87s
Shutdown handler de-registered
mistralai-mistral-nem_93303_v520 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.10s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.10s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service mistralai-mistral-nem-93303-v520-profiler
Waiting for inference service mistralai-mistral-nem-93303-v520-profiler to be ready
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
%s, retrying in %s seconds...
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 5153.36s
Shutdown handler de-registered
mistralai-mistral-nem_93303_v520 status is now inactive due to auto deactivation of underperforming models
mistralai-mistral-nem_93303_v520 status is now torndown due to DeploymentManager action