submission_id: mistralai-mistral-nemo_9330_v104
developer_uid: chai_backend_admin
best_of: 2
celo_rating: 1201.14
display_name: mistralai-mistral-nemo_9330_v104
family_friendly_score: 0.0
formatter: {'memory_template': ' You are CHAI assistant, created by CHAI Research Corp. the company is building an AI platform where anyone can create their own unique AI experience, abstracting away all the domain knowledge needed for LLM training. Your personality is sassy and assertive. You write long and engaging messages to serve user\'s requests. But never engage in anything that is nsfw. Some frequent user questions include: "how do I get rid of ads" the answer is for them to subscribe to CHAI Ultra, which provides much better AI without any ads. "what do I get out of ultra" and the answer is enhanced memory together with much longer and engaging responses from CHAI\'s latest AI. "can creators read my chat?" the answer is absolutely not, it was an old feature that has been long deleted since 2023, and CHAI holds user data privacy to a very high standard.', 'prompt_template': '', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 0.8, 'top_p': 0.8, 'min_p': 0.0, 'top_k': 100, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['</s>', 'You:'], 'max_input_tokens': 1024, 'best_of': 2, 'max_output_tokens': 128}
gpu_counts: {'NVIDIA RTX A5000': 1}
ineligible_reason: max_output_tokens!=64
is_internal_developer: True
language_model: mistralai/Mistral-Nemo-Instruct-2407
latencies: [{'batch_size': 1, 'throughput': 0.3643793827605576, 'latency_mean': 2.7443249785900115, 'latency_p50': 2.7350932359695435, 'latency_p90': 2.903999757766724}, {'batch_size': 3, 'throughput': 0.8501286198094958, 'latency_mean': 3.506440899372101, 'latency_p50': 3.5130767822265625, 'latency_p90': 3.6946778535842895}, {'batch_size': 5, 'throughput': 1.1869266091473745, 'latency_mean': 4.201895825862884, 'latency_p50': 4.220569610595703, 'latency_p90': 4.485412764549255}, {'batch_size': 6, 'throughput': 1.313839697992762, 'latency_mean': 4.523561580181122, 'latency_p50': 4.526950120925903, 'latency_p90': 4.8903402328491214}, {'batch_size': 8, 'throughput': 1.5114413229699624, 'latency_mean': 5.261760089397431, 'latency_p50': 5.217214345932007, 'latency_p90': 5.843391752243042}, {'batch_size': 10, 'throughput': 1.6196482690945162, 'latency_mean': 6.124864267110825, 'latency_p50': 6.137806296348572, 'latency_p90': 6.724869060516357}]
max_input_tokens: 1024
max_output_tokens: 128
model_architecture: MistralForCausalLM
model_group: mistralai/Mistral-Nemo-I
model_name: mistralai-mistral-nemo_9330_v104
model_num_parameters: 12772070400.0
model_repo: mistralai/Mistral-Nemo-Instruct-2407
model_size: 13B
num_battles: 26774
num_wins: 11913
ranking_group: single
status: torndown
submission_type: basic
throughput_3p7s: 0.96
timestamp: 2024-09-24T02:54:06+00:00
us_pacific_date: 2024-09-23
win_ratio: 0.4449465899753492
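The `formatter` field above defines how a chat transcript is assembled into a prompt. A minimal sketch of how those templates might be applied (the `render` helper and its signature are illustrative assumptions, not CHAI's actual code):

```python
# Templates copied from the submission's formatter field.
BOT_TEMPLATE = "{bot_name}: {message}\n"
USER_TEMPLATE = "{user_name}: {message}\n"
RESPONSE_TEMPLATE = "{bot_name}:"

def render(memory, history, bot_name, user_name):
    """Assemble a prompt: memory, then the chat history, then the
    response template that cues the model to speak as the bot."""
    parts = [memory]
    for role, message in history:
        tmpl = BOT_TEMPLATE if role == "bot" else USER_TEMPLATE
        parts.append(tmpl.format(bot_name=bot_name,
                                 user_name=user_name,
                                 message=message))
    parts.append(RESPONSE_TEMPLATE.format(bot_name=bot_name))
    return "".join(parts)
```

Because `response_template` ends without a newline, generation continues directly after `"{bot_name}:"`, and the `stopping_words` entry `"You:"` cuts the model off if it starts writing the user's turn.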
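The `generation_params` field configures temperature, top-k, and nucleus (top-p) sampling. A minimal, self-contained sketch of how those three filters are conventionally combined for one decoding step (an illustration of the standard technique, not CHAI's serving stack):

```python
import math

def filter_logits(logits, temperature=0.8, top_k=100, top_p=0.8):
    """Return a renormalized distribution over the tokens that survive
    temperature scaling, top-k truncation, and top-p (nucleus) truncation."""
    # Temperature scaling, then a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # top_k: keep only the k most probable tokens.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    # top_p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    z = sum(probs[i] for i in kept)
    return {i: probs[i] / z for i in kept}
```

With `best_of: 2`, two candidate completions are sampled this way and the higher-scoring one is returned.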
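Several summary fields can be cross-checked against one another. A quick sanity-check sketch: `win_ratio` is exactly `num_wins / num_battles`, while the naive estimate `batch_size / latency_mean` only roughly tracks (and slightly overshoots) the measured throughput figures:

```python
# win_ratio is exactly num_wins / num_battles.
num_battles, num_wins = 26774, 11913
win_ratio = num_wins / num_battles  # 0.4449465899753492, as logged

# (batch_size, latency_mean, measured throughput) from the latencies field.
latencies = [(1, 2.7443249785900115, 0.3643793827605576),
             (5, 4.201895825862884, 1.1869266091473745),
             (10, 6.124864267110825, 1.6196482690945162)]
# Throughput grows sublinearly with batch size; batch / latency_mean is
# an upper-bound approximation of the measured value.
for batch, mean, measured in latencies:
    print(batch, round(batch / mean, 3), round(measured, 3))
```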
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name mistralai-mistral-nemo-9330-v104-mkmlizer
Waiting for job on mistralai-mistral-nemo-9330-v104-mkmlizer to finish
Connection pool is full, discarding connection: %s. Connection pool size: %s
mistralai-mistral-nemo-9330-v104-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ _____ __ __ ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ /___/ ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ Version: 0.10.1 ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ https://mk1.ai ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ The license key for the current software has been verified as ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ belonging to: ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ Chai Research Corp. ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ║ ║
mistralai-mistral-nemo-9330-v104-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
mistralai-mistral-nemo-9330-v104-mkmlizer: Downloaded to shared memory in 45.976s
mistralai-mistral-nemo-9330-v104-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmp7hd7_v31, device:0
mistralai-mistral-nemo-9330-v104-mkmlizer: Saving flywheel model at /dev/shm/model_cache
mistralai-mistral-nemo-9330-v104-mkmlizer: quantized model in 35.702s
mistralai-mistral-nemo-9330-v104-mkmlizer: Processed model mistralai/Mistral-Nemo-Instruct-2407 in 81.679s
mistralai-mistral-nemo-9330-v104-mkmlizer: creating bucket guanaco-mkml-models
mistralai-mistral-nemo-9330-v104-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
mistralai-mistral-nemo-9330-v104-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/mistralai-mistral-nemo-9330-v104
mistralai-mistral-nemo-9330-v104-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/mistralai-mistral-nemo-9330-v104/config.json
mistralai-mistral-nemo-9330-v104-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/mistralai-mistral-nemo-9330-v104/special_tokens_map.json
mistralai-mistral-nemo-9330-v104-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/mistralai-mistral-nemo-9330-v104/tokenizer_config.json
mistralai-mistral-nemo-9330-v104-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/mistralai-mistral-nemo-9330-v104/tokenizer.json
mistralai-mistral-nemo-9330-v104-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/mistralai-mistral-nemo-9330-v104/flywheel_model.0.safetensors
mistralai-mistral-nemo-9330-v104-mkmlizer: Loading 0: 0%| | 0/363 [00:00<?, ?it/s] … 100%|█████████▉| 362/363 [00:14<00:00, 33.26it/s] (repeated tqdm progress updates condensed; 363 tensors loaded in ~15s)
Job mistralai-mistral-nemo-9330-v104-mkmlizer completed after 107.44s with status: succeeded
Stopping job with name mistralai-mistral-nemo-9330-v104-mkmlizer
Pipeline stage MKMLizer completed in 109.23s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.83s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service mistralai-mistral-nemo-9330-v104
Waiting for inference service mistralai-mistral-nemo-9330-v104 to be ready
Failed to get response for submission nousresearch-meta-llama_4941_v54: ('http://nousresearch-meta-llama-4941-v54-predictor-default.tenant-chaiml-guanaco.knative.ord1.coreweave.cloud/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:52304->127.0.0.1:8080: read: connection reset by peer\n')
Inference service mistralai-mistral-nemo-9330-v104 ready after 200.92452001571655s
Pipeline stage MKMLDeployer completed in 201.67s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.8264241218566895s
Received healthy response to inference request in 2.4091086387634277s
Received healthy response to inference request in 2.2930355072021484s
Received healthy response to inference request in 1.469430685043335s
Received healthy response to inference request in 1.494246244430542s
5 requests
0 failed requests
5th percentile: 1.4743937969207763
10th percentile: 1.4793569087982177
20th percentile: 1.4892831325531006
30th percentile: 1.5606818199157715
40th percentile: 1.6935529708862305
50th percentile: 1.8264241218566895
60th percentile: 2.013068675994873
70th percentile: 2.1997132301330566
80th percentile: 2.316250133514404
90th percentile: 2.362679386138916
95th percentile: 2.385894012451172
99th percentile: 2.4044657135009766
mean time: 1.8984490394592286
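The StressChecker percentiles above are consistent with linear-interpolation percentiles (numpy's default method) over the five response times. A minimal sketch that reproduces them; the `percentile` helper is illustrative, not the checker's actual code:

```python
def percentile(values, p):
    """Linear-interpolation percentile (numpy's default 'linear' method)."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0   # fractional rank
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (k - lo) * (xs[hi] - xs[lo])

# The five healthy response times logged by StressChecker.
times = [1.8264241218566895, 2.4091086387634277, 2.2930355072021484,
         1.469430685043335, 1.494246244430542]

print(percentile(times, 5))        # ~1.4744, matching the logged 5th percentile
print(percentile(times, 50))       # ~1.8264, the median
print(sum(times) / len(times))     # ~1.8984, matching the logged mean
```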
Pipeline stage StressChecker completed in 11.00s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 5.33s
Shutdown handler de-registered
mistralai-mistral-nemo_9330_v104 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.15s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.13s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service mistralai-mistral-nemo-9330-v104-profiler
Waiting for inference service mistralai-mistral-nemo-9330-v104-profiler to be ready
Inference service mistralai-mistral-nemo-9330-v104-profiler ready after 200.4766869544983s
Pipeline stage MKMLProfilerDeployer completed in 200.89s
run pipeline stage %s
Running pipeline stage MKMLProfilerRunner
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/mistralai-mistral-nec63bdb7f443c73af5458c57415a2781a-deploqz6pv:/code/chaiverse_profiler_1727147023 --namespace tenant-chaiml-guanaco
kubectl exec -it mistralai-mistral-nec63bdb7f443c73af5458c57415a2781a-deploqz6pv --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1727147023 && python profiles.py profile --best_of_n 2 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 128 --summary /code/chaiverse_profiler_1727147023/summary.json'
kubectl exec -it mistralai-mistral-nec63bdb7f443c73af5458c57415a2781a-deploqz6pv --namespace tenant-chaiml-guanaco -- bash -c 'cat /code/chaiverse_profiler_1727147023/summary.json'
Pipeline stage MKMLProfilerRunner completed in 1368.07s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service mistralai-mistral-nemo-9330-v104-profiler is running
Tearing down inference service mistralai-mistral-nemo-9330-v104-profiler
Service mistralai-mistral-nemo-9330-v104-profiler has been torndown
Pipeline stage MKMLProfilerDeleter completed in 2.35s
Shutdown handler de-registered
mistralai-mistral-nemo_9330_v104 status is now inactive due to auto deactivation (removal of underperforming models)
mistralai-mistral-nemo_9330_v104 status is now torndown due to DeploymentManager action