developer_uid: rirv938
submission_id: rirv938-97p-3ff-rirv938-_7194_v1
model_name: rirv938-97p-3ff-rirv938-_7194_v1
model_group: rirv938/97p_3ff_rirv938_
status: torndown
timestamp: 2025-04-03T15:55:35+00:00
num_battles: 9168
num_wins: 5100
celo_rating: 1342.83
family_friendly_score: 0.5107999999999999
family_friendly_standard_error: 0.007069418080719233
submission_type: basic
model_repo: rirv938/97p_3ff_rirv938_20k_100p_0ff_ri_19485_v1_cp1125_v3
model_architecture: MistralForCausalLM
model_num_parameters: 24096691200.0
best_of: 8
max_input_tokens: 768
max_output_tokens: 64
reward_model: default
latencies: [{'batch_size': 1, 'throughput': 0.3875321116298844, 'latency_mean': 2.580352635383606, 'latency_p50': 2.602152943611145, 'latency_p90': 2.8516461133956907}, {'batch_size': 3, 'throughput': 0.7925716371156615, 'latency_mean': 3.764594466686249, 'latency_p50': 3.766548275947571, 'latency_p90': 4.174997043609619}, {'batch_size': 5, 'throughput': 1.0535649487356604, 'latency_mean': 4.723513108491898, 'latency_p50': 4.706278324127197, 'latency_p90': 5.232630562782288}, {'batch_size': 6, 'throughput': 1.1285210747840644, 'latency_mean': 5.258730525970459, 'latency_p50': 5.223300337791443, 'latency_p90': 5.927357125282287}, {'batch_size': 10, 'throughput': 1.2966266729506974, 'latency_mean': 7.622187484502792, 'latency_p50': 7.6393702030181885, 'latency_p90': 8.52933874130249}]
gpu_counts: {'NVIDIA RTX A6000': 1}
display_name: rirv938-97p-3ff-rirv938-_7194_v1
is_internal_developer: True
language_model: rirv938/97p_3ff_rirv938_20k_100p_0ff_ri_19485_v1_cp1125_v3
model_size: 24B
ranking_group: single
throughput_3p7s: 0.78
us_pacific_date: 2025-04-03
win_ratio: 0.556282722513089
generation_params: {'temperature': 0.9, 'top_p': 0.9, 'min_p': 0.6, 'top_k': 80, 'presence_penalty': 0.5, 'frequency_penalty': 0.5, 'stopping_words': ['###', '</s>', 'You:', '\n'], 'max_input_tokens': 768, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': '[INST]', 'prompt_template': '', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '[/INST]{bot_name}:', 'truncate_by_message': False}
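The generation_params and formatter fields above record, respectively, the sampling settings applied at inference time and the templates used to render a conversation into a prompt (the win_ratio field is simply num_wins / num_battles, 5100 / 9168 ≈ 0.5563). A minimal sketch of how these templates might compose, assuming a hypothetical render_prompt helper and example messages that are not part of the submission's actual serving code:

# Illustrative only: assemble a prompt from the recorded formatter templates
# and pair it with the recorded sampling parameters. The helper and the
# example conversation are assumptions, not the real serving implementation.

FORMATTER = {
    "memory_template": "[INST]",
    "prompt_template": "",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "[/INST]{bot_name}:",
}

GENERATION_PARAMS = {
    "temperature": 0.9,
    "top_p": 0.9,
    "min_p": 0.6,
    "top_k": 80,
    "presence_penalty": 0.5,
    "frequency_penalty": 0.5,
    "stopping_words": ["###", "</s>", "You:", "\n"],
    "max_input_tokens": 768,
    "best_of": 8,
    "max_output_tokens": 64,
}

def render_prompt(history, bot_name):
    """Render (speaker, message) turns into a single prompt string.

    The memory/prompt templates are emitted first, each turn is formatted
    with the bot or user template, and the response template leaves the
    prompt open for the bot's next reply.
    """
    parts = [FORMATTER["memory_template"], FORMATTER["prompt_template"]]
    for speaker, message in history:
        if speaker == bot_name:
            parts.append(FORMATTER["bot_template"].format(bot_name=speaker, message=message))
        else:
            parts.append(FORMATTER["user_template"].format(user_name=speaker, message=message))
    parts.append(FORMATTER["response_template"].format(bot_name=bot_name))
    return "".join(parts)

if __name__ == "__main__":
    prompt = render_prompt(
        history=[("You", "hi there"), ("Bot", "hello!"), ("You", "how are you?")],
        bot_name="Bot",
    )
    print(prompt)
    # "[INST]You: hi there\nBot: hello!\nYou: how are you?\n[/INST]Bot:"
    print(GENERATION_PARAMS)

Under these assumptions, the prompt is truncated to max_input_tokens (768), best_of 8 candidate completions of at most max_output_tokens (64) are sampled with the parameters above, and generation halts at any of the stopping_words.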
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name rirv938-97p-3ff-rirv938-7194-v1-mkmlizer
Waiting for job on rirv938-97p-3ff-rirv938-7194-v1-mkmlizer to finish
Failed to get response for submission chaiml-sft-gemma2-28b-v_83370_v5: HTTPConnectionPool(host='chaiml-sft-gemma2-28b-v-83370-v5-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: Downloaded to shared memory in 166.231s
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpij2mj7u6, device:0
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: quantized model in 64.770s
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: Processed model rirv938/97p_3ff_rirv938_20k_100p_0ff_ri_19485_v1_cp1125_v3 in 231.001s
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: creating bucket guanaco-mkml-models
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/rirv938-97p-3ff-rirv938-7194-v1
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/rirv938-97p-3ff-rirv938-7194-v1/config.json
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/rirv938-97p-3ff-rirv938-7194-v1/special_tokens_map.json
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/rirv938-97p-3ff-rirv938-7194-v1/tokenizer_config.json
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/rirv938-97p-3ff-rirv938-7194-v1/tokenizer.json
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.1.safetensors s3://guanaco-mkml-models/rirv938-97p-3ff-rirv938-7194-v1/flywheel_model.1.safetensors
rirv938-97p-3ff-rirv938-7194-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/rirv938-97p-3ff-rirv938-7194-v1/flywheel_model.0.safetensors
Job rirv938-97p-3ff-rirv938-7194-v1-mkmlizer completed after 266.09s with status: succeeded
Stopping job with name rirv938-97p-3ff-rirv938-7194-v1-mkmlizer
Pipeline stage MKMLizer completed in 266.71s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.14s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service rirv938-97p-3ff-rirv938-7194-v1
Waiting for inference service rirv938-97p-3ff-rirv938-7194-v1 to be ready
Failed to get response for submission chaiml-sft-gemma2-28b-v_83370_v5: HTTPConnectionPool(host='chaiml-sft-gemma2-28b-v-83370-v5-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
Inference service rirv938-97p-3ff-rirv938-7194-v1 ready after 110.4219958782196s
Pipeline stage MKMLDeployer completed in 111.30s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.9165728092193604s
Received healthy response to inference request in 2.8970344066619873s
Received healthy response to inference request in 2.630767345428467s
Received healthy response to inference request in 2.3716044425964355s
Received healthy response to inference request in 2.512655735015869s
5 requests
0 failed requests
5th percentile: 2.3998147010803224
10th percentile: 2.428024959564209
20th percentile: 2.4844454765319823
30th percentile: 2.5362780570983885
40th percentile: 2.5835227012634276
50th percentile: 2.630767345428467
60th percentile: 2.737274169921875
70th percentile: 2.843780994415283
80th percentile: 2.900942087173462
90th percentile: 2.908757448196411
95th percentile: 2.9126651287078857
99th percentile: 2.9157912731170654
mean time: 2.6657269477844237
Pipeline stage StressChecker completed in 14.58s
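The StressChecker statistics above are consistent with linear-interpolation percentiles over the five healthy response times. A minimal sketch that reproduces the logged figures (numpy is assumed; the stage's actual aggregation code is not shown in the log):

import numpy as np

# The five healthy response times reported by the StressChecker stage (seconds).
response_times = [
    2.9165728092193604,
    2.8970344066619873,
    2.630767345428467,
    2.3716044425964355,
    2.512655735015869,
]

# numpy's default linear interpolation reproduces the logged percentiles,
# e.g. 5th percentile 2.3998147010803224 and 90th percentile 2.908757448196411.
for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{q}th percentile: {np.percentile(response_times, q)}")

print("mean time:", np.mean(response_times))  # 2.6657269477844237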
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.88s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.74s
Shutdown handler de-registered
rirv938-97p-3ff-rirv938-_7194_v1 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 2979.19s
Shutdown handler de-registered
rirv938-97p-3ff-rirv938-_7194_v1 status is now inactive due to auto deactivation removed underperforming models
rirv938-97p-3ff-rirv938-_7194_v1 status is now torndown due to DeploymentManager action