submission_id: fengfengzi14-mistral-testg3_v2
developer_uid: fengfengzi14_07790
alignment_samples: 11279
alignment_score: 0.33536903992903044
best_of: 1
celo_rating: 1089.65
display_name: fengfengzi14-mistral-testg3_v2
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
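The formatter above is a set of fill-in templates. Below is a minimal sketch, not the platform's actual serving code, of how those templates could be assembled into a single prompt string; the persona, scenario, and chat turns are hypothetical placeholders.

# Minimal sketch (assumed assembly order: memory, prompt, chat history, response stub).
formatter = {
    'memory_template': "{bot_name}'s Persona: {memory}\n####\n",
    'prompt_template': '{prompt}\n<START>\n',
    'bot_template': '{bot_name}: {message}\n',
    'user_template': '{user_name}: {message}\n',
    'response_template': '{bot_name}:',
}

def build_prompt(bot_name, memory, prompt, turns):
    """Concatenate memory, scenario prompt, chat history, and the response stub."""
    text = formatter['memory_template'].format(bot_name=bot_name, memory=memory)
    text += formatter['prompt_template'].format(prompt=prompt)
    for speaker, message, is_bot in turns:
        if is_bot:
            text += formatter['bot_template'].format(bot_name=speaker, message=message)
        else:
            text += formatter['user_template'].format(user_name=speaker, message=message)
    return text + formatter['response_template'].format(bot_name=bot_name)

# Hypothetical example values, for illustration only.
example = build_prompt(
    bot_name='Mistral',
    memory='A curious assistant.',
    prompt='A casual chat.',
    turns=[('User', 'Hi there!', False)],
)
print(example)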
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 1, 'max_output_tokens': 64}
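The generation_params are standard token-sampling controls. The following is a hedged, self-contained sketch of temperature scaling plus top-k, top-p, and min-p filtering over a logits vector; it illustrates what the parameters mean rather than the serving stack's implementation. With the values above (temperature 1.0, top_p 1.0, min_p 0.0), only top_k=40 actually restricts sampling.

import numpy as np

def sample_token(logits, temperature=1.0, top_p=1.0, min_p=0.0, top_k=40, rng=None):
    """Temperature-scale the logits, apply top-k / min-p / top-p filters, then sample."""
    if rng is None:
        rng = np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    keep = np.ones(probs.size, dtype=bool)
    if 0 < top_k < probs.size:
        keep &= probs >= np.sort(probs)[-top_k]   # keep the top_k most likely tokens
    if min_p > 0.0:
        keep &= probs >= min_p * probs.max()      # drop tokens far below the mode
    if top_p < 1.0:
        order = np.argsort(probs)[::-1]
        cumulative = np.cumsum(probs[order])
        cutoff = np.searchsorted(cumulative, top_p) + 1
        nucleus = np.zeros(probs.size, dtype=bool)
        nucleus[order[:cutoff]] = True
        keep &= nucleus                           # smallest set covering top_p mass

    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()
    return int(rng.choice(probs.size, p=probs))

# Toy usage with a random logits vector of vocabulary size 32000.
token_id = sample_token(np.random.randn(32000), top_k=40)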
gpu_counts: {'NVIDIA RTX A5000': 1}
is_internal_developer: False
language_model: fengfengzi14/mistral_testG3
latencies: [{'batch_size': 1, 'throughput': 1.125164571223685, 'latency_mean': 0.888690116405487, 'latency_p50': 0.8888425827026367, 'latency_p90': 0.9949434518814086}, {'batch_size': 5, 'throughput': 3.610959784111249, 'latency_mean': 1.3718048489093781, 'latency_p50': 1.372676134109497, 'latency_p90': 1.5287819623947143}, {'batch_size': 10, 'throughput': 5.274400576037483, 'latency_mean': 1.875470917224884, 'latency_p50': 1.8797107934951782, 'latency_p90': 2.1227442979812623}, {'batch_size': 15, 'throughput': 6.189704286838737, 'latency_mean': 2.3854563355445864, 'latency_p50': 2.402297258377075, 'latency_p90': 2.6897613286972044}, {'batch_size': 20, 'throughput': 6.781502182223493, 'latency_mean': 2.897797749042511, 'latency_p50': 2.886462688446045, 'latency_p90': 3.3340934991836546}, {'batch_size': 25, 'throughput': 7.1557610368440985, 'latency_mean': 3.416962295770645, 'latency_p50': 3.398791193962097, 'latency_p90': 4.003235387802124}, {'batch_size': 30, 'throughput': 7.473625018371998, 'latency_mean': 3.928154796361923, 'latency_p50': 3.87990939617157, 'latency_p90': 4.598503470420837}, {'batch_size': 35, 'throughput': 7.547412634323951, 'latency_mean': 4.519120894670486, 'latency_p50': 4.45664644241333, 'latency_p90': 5.472826504707336}, {'batch_size': 40, 'throughput': 7.623087214519354, 'latency_mean': 5.08000522851944, 'latency_p50': 5.0453362464904785, 'latency_p90': 5.958399534225464}]
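Each latencies entry pairs a batch size with its throughput and mean/p50/p90 latency. As an illustration only, the sketch below linearly interpolates a throughput at a 3.7 s mean latency from these nine summary points; it yields roughly 7.3, whereas the published throughput_3p7s of 7.5 presumably derives from the much denser profiler sweep run later in the pipeline (batch sizes up to 195).

# (latency_mean seconds, throughput) pairs copied from the summary list above.
points = [
    (0.889, 1.125), (1.372, 3.611), (1.875, 5.274), (2.385, 6.190),
    (2.898, 6.782), (3.417, 7.156), (3.928, 7.474), (4.519, 7.547),
    (5.080, 7.623),
]

def throughput_at(target_latency, points):
    """Linearly interpolate throughput at the given mean latency (assumed method)."""
    for (lat_lo, thr_lo), (lat_hi, thr_hi) in zip(points, points[1:]):
        if lat_lo <= target_latency <= lat_hi:
            frac = (target_latency - lat_lo) / (lat_hi - lat_lo)
            return thr_lo + frac * (thr_hi - thr_lo)
    return points[-1][1]

print(round(throughput_at(3.7, points), 2))  # ~7.33 from these nine points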
max_input_tokens: 512
max_output_tokens: 64
model_architecture: MistralForCausalLM
model_group: fengfengzi14/mistral_tes
model_name: fengfengzi14-mistral-testg3_v2
model_num_parameters: 7241732096.0
model_repo: fengfengzi14/mistral_testG3
model_size: 7B
num_battles: 11277
num_wins: 3508
propriety_score: 0.7241014799154334
propriety_total_count: 946.0
ranking_group: single
status: inactive
submission_type: basic
throughput_3p7s: 7.5
timestamp: 2024-09-07T04:25:54+00:00
us_pacific_date: 2024-09-06
win_ratio: 0.3110756406845792
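The win_ratio is consistent with the battle counters above; a one-line check:

num_wins, num_battles = 3508, 11277
print(num_wins / num_battles)  # 0.3110756406845792, matching win_ratio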
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name fengfengzi14-mistral-testg3-v2-mkmlizer
Waiting for job on fengfengzi14-mistral-testg3-v2-mkmlizer to finish
fengfengzi14-mistral-testg3-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ _____ __ __ ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ /___/ ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ Version: 0.10.1 ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ https://mk1.ai ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ The license key for the current software has been verified as ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ belonging to: ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ Chai Research Corp. ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ║ ║
fengfengzi14-mistral-testg3-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
Failed to get response for submission blend_fedek_2024-08-24: ('http://zonemercy-lexical-nemo-1518-v18-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:35118->127.0.0.1:8080: read: connection reset by peer\n')
fengfengzi14-mistral-testg3-v2-mkmlizer: Downloaded to shared memory in 34.157s
fengfengzi14-mistral-testg3-v2-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpw1f9170z, device:0
fengfengzi14-mistral-testg3-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
Failed to get response for submission chaiml-lexical-nemo-v4-1k1e5_v1: ('http://chaiml-lexical-nemo-v4-1k1e5-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:35004->127.0.0.1:8080: read: connection reset by peer\n')
fengfengzi14-mistral-testg3-v2-mkmlizer: quantized model in 18.099s
fengfengzi14-mistral-testg3-v2-mkmlizer: Processed model fengfengzi14/mistral_testG3 in 52.257s
fengfengzi14-mistral-testg3-v2-mkmlizer: creating bucket guanaco-mkml-models
fengfengzi14-mistral-testg3-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
fengfengzi14-mistral-testg3-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/fengfengzi14-mistral-testg3-v2
fengfengzi14-mistral-testg3-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/fengfengzi14-mistral-testg3-v2/config.json
fengfengzi14-mistral-testg3-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/fengfengzi14-mistral-testg3-v2/special_tokens_map.json
fengfengzi14-mistral-testg3-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/fengfengzi14-mistral-testg3-v2/tokenizer_config.json
fengfengzi14-mistral-testg3-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/fengfengzi14-mistral-testg3-v2/tokenizer.model
fengfengzi14-mistral-testg3-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/fengfengzi14-mistral-testg3-v2/tokenizer.json
fengfengzi14-mistral-testg3-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/fengfengzi14-mistral-testg3-v2/flywheel_model.0.safetensors
fengfengzi14-mistral-testg3-v2-mkmlizer: Loading 0: 100%|██████████| 291/291 [00:07<00:00, 17.03it/s]
Job fengfengzi14-mistral-testg3-v2-mkmlizer completed after 74.3s with status: succeeded
Stopping job with name fengfengzi14-mistral-testg3-v2-mkmlizer
Pipeline stage MKMLizer completed in 75.28s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.20s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service fengfengzi14-mistral-testg3-v2
Waiting for inference service fengfengzi14-mistral-testg3-v2 to be ready
Failed to get response for submission trace2333-mistral-trial6_v9: ('http://trace2333-mistral-trial6-v9-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:55204->127.0.0.1:8080: read: connection reset by peer\n')
Failed to get response for submission chaiml-llama-8b-pairwis_8189_v19: ('http://chaiml-llama-8b-pairwis-8189-v19-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:35306->127.0.0.1:8080: read: connection reset by peer\n')
Inference service fengfengzi14-mistral-testg3-v2 ready after 140.83653092384338s
Pipeline stage MKMLDeployer completed in 141.32s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.1924843788146973s
Received healthy response to inference request in 1.132002353668213s
Received healthy response to inference request in 1.191690444946289s
Received healthy response to inference request in 1.110459566116333s
Received healthy response to inference request in 2.2069029808044434s
5 requests
0 failed requests
5th percentile: 1.114768123626709
10th percentile: 1.119076681137085
20th percentile: 1.1276937961578368
30th percentile: 1.1439399719238281
40th percentile: 1.1678152084350586
50th percentile: 1.191690444946289
60th percentile: 1.1920080184936523
70th percentile: 1.1923255920410156
80th percentile: 1.3953680992126467
90th percentile: 1.801135540008545
95th percentile: 2.004019260406494
99th percentile: 2.1663262367248537
mean time: 1.366707944869995
Pipeline stage StressChecker completed in 8.41s
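The percentiles and mean reported by the StressChecker above are consistent with linear interpolation over the five response times; the short sketch below reproduces them (assuming NumPy's default percentile method).

import numpy as np

times = [1.1924843788146973, 1.132002353668213, 1.191690444946289,
         1.110459566116333, 2.2069029808044434]

for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{q}th percentile: {np.percentile(times, q)}")
print("mean time:", np.mean(times))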
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 5.41s
Shutdown handler de-registered
fengfengzi14-mistral-testg3_v2 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.14s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.10s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service fengfengzi14-mistral-testg3-v2-profiler
Waiting for inference service fengfengzi14-mistral-testg3-v2-profiler to be ready
Inference service fengfengzi14-mistral-testg3-v2-profiler ready after 150.3452446460724s
Pipeline stage MKMLProfilerDeployer completed in 151.24s
run pipeline stage %s
Running pipeline stage MKMLProfilerRunner
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/fengfengzi14-mistral4299f668e071354f37dace4a7423c168-deplorv9dr:/code/chaiverse_profiler_1725683574 --namespace tenant-chaiml-guanaco
kubectl exec -it fengfengzi14-mistral4299f668e071354f37dace4a7423c168-deplorv9dr --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1725683574 && python profiles.py profile --best_of_n 1 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 512 --output_tokens 64 --summary /code/chaiverse_profiler_1725683574/summary.json'
kubectl exec -it fengfengzi14-mistral4299f668e071354f37dace4a7423c168-deplorv9dr --namespace tenant-chaiml-guanaco -- bash -c 'cat /code/chaiverse_profiler_1725683574/summary.json'
Pipeline stage MKMLProfilerRunner completed in 446.90s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service fengfengzi14-mistral-testg3-v2-profiler is running
Tearing down inference service fengfengzi14-mistral-testg3-v2-profiler
Service fengfengzi14-mistral-testg3-v2-profiler has been torn down
Pipeline stage MKMLProfilerDeleter completed in 1.70s
Shutdown handler de-registered
fengfengzi14-mistral-testg3_v2 status is now inactive due to auto-deactivation of underperforming models
