submission_id: rinen0721-llama0828_v1
developer_uid: rinen0721
alignment_samples: 12957
alignment_score: -0.9103239413398599
best_of: 16
celo_rating: 1226.36
display_name: rinen0721-llama0828_v1
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
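Taken together, the formatter and generation_params describe how a reply is produced: the conversation is flattened into a single prompt (persona, scenario, chat history, then the "{bot_name}:" cue), truncated to 512 input tokens, and sampled at temperature 1.0 with top_k 40 for at most 64 new tokens, stopping at the first newline; best_of 16 means sixteen candidates are drawn and one is kept. The sketch below is a rough, non-authoritative approximation using Hugging Face transformers (the production pipeline serves the model through MK1's engine, as the MKMLizer log further down shows); the prompt-assembly order in build_prompt and the score_candidate ranking function are assumptions for illustration, not the platform's actual code.

# Rough sketch only: approximates the formatter + generation_params above with
# Hugging Face transformers. The real serving stack uses MK1's engine, and
# score_candidate is a hypothetical stand-in for however best_of ranks candidates.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

FORMATTER = {
    'memory_template': "{bot_name}'s Persona: {memory}\n####\n",
    'prompt_template': '{prompt}\n<START>\n',
    'bot_template': '{bot_name}: {message}\n',
    'user_template': '{user_name}: {message}\n',
    'response_template': '{bot_name}:',
}

def build_prompt(bot_name, memory, scenario, turns, user_name):
    # Persona, then scenario, then alternating chat turns, then the response cue.
    text = FORMATTER['memory_template'].format(bot_name=bot_name, memory=memory)
    text += FORMATTER['prompt_template'].format(prompt=scenario)
    for speaker, message in turns:  # turns: list of ('bot' | 'user', message) pairs
        if speaker == 'bot':
            text += FORMATTER['bot_template'].format(bot_name=bot_name, message=message)
        else:
            text += FORMATTER['user_template'].format(user_name=user_name, message=message)
    return text + FORMATTER['response_template'].format(bot_name=bot_name)

tokenizer = AutoTokenizer.from_pretrained("rinen0721/llama0828")
model = AutoModelForCausalLM.from_pretrained(
    "rinen0721/llama0828", torch_dtype=torch.float16, device_map="auto"
)

def generate_reply(prompt, score_candidate, best_of=16):
    # max_input_tokens=512: truncate the prompt; max_output_tokens=64: cap the reply.
    inputs = tokenizer(prompt, truncation=True, max_length=512, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        do_sample=True,
        temperature=1.0,   # temperature / top_p / top_k as in generation_params above
        top_p=1.0,
        top_k=40,
        max_new_tokens=64,
        num_return_sequences=best_of,
        pad_token_id=tokenizer.eos_token_id,
    )
    prompt_len = inputs["input_ids"].shape[1]
    candidates = [
        # stopping_words=['\n']: keep only the first line of each completion.
        tokenizer.decode(seq[prompt_len:], skip_special_tokens=True).split("\n")[0]
        for seq in outputs
    ]
    # best_of=16: return the candidate ranked highest by the (assumed) scoring function.
    return max(candidates, key=score_candidate)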
is_internal_developer: False
language_model: rinen0721/llama0828
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: rinen0721/llama0828
model_name: rinen0721-llama0828_v1
model_num_parameters: 8030261248.0
model_repo: rinen0721/llama0828
model_size: 8B
num_battles: 12957
num_wins: 6291
propriety_score: 0.7370727432077125
propriety_total_count: 1141.0
ranking_group: single
status: inactive
submission_type: basic
timestamp: 2024-08-28T12:32:12+00:00
us_pacific_date: 2024-08-28
win_ratio: 0.4855290576522343
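The battle statistics above are self-consistent: win_ratio is simply num_wins divided by num_battles.

# Sanity check: win_ratio = num_wins / num_battles
num_battles, num_wins = 12957, 6291
print(num_wins / num_battles)  # 0.4855290576522343, as reported above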
Running pipeline stage MKMLizer
Starting job with name rinen0721-llama0828-v1-mkmlizer
Waiting for job on rinen0721-llama0828-v1-mkmlizer to finish
Stopping job with name rinen0721-llama0828-v1-mkmlizer
%s, retrying in %s seconds...
Starting job with name rinen0721-llama0828-v1-mkmlizer
Waiting for job on rinen0721-llama0828-v1-mkmlizer to finish
Stopping job with name rinen0721-llama0828-v1-mkmlizer
%s, retrying in %s seconds...
Starting job with name rinen0721-llama0828-v1-mkmlizer
Waiting for job on rinen0721-llama0828-v1-mkmlizer to finish
rinen0721-llama0828-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
rinen0721-llama0828-v1-mkmlizer: ║                   [flywheel ASCII-art banner]                        ║
rinen0721-llama0828-v1-mkmlizer: ║ ║
rinen0721-llama0828-v1-mkmlizer: ║ Version: 0.10.1 ║
rinen0721-llama0828-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
rinen0721-llama0828-v1-mkmlizer: ║ https://mk1.ai ║
rinen0721-llama0828-v1-mkmlizer: ║ ║
rinen0721-llama0828-v1-mkmlizer: ║ The license key for the current software has been verified as ║
rinen0721-llama0828-v1-mkmlizer: ║ belonging to: ║
rinen0721-llama0828-v1-mkmlizer: ║ ║
rinen0721-llama0828-v1-mkmlizer: ║ Chai Research Corp. ║
rinen0721-llama0828-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
rinen0721-llama0828-v1-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
rinen0721-llama0828-v1-mkmlizer: ║ ║
rinen0721-llama0828-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
rinen0721-llama0828-v1-mkmlizer: quantized model in 25.527s
rinen0721-llama0828-v1-mkmlizer: Processed model rinen0721/llama0828 in 61.302s
rinen0721-llama0828-v1-mkmlizer: creating bucket guanaco-mkml-models
rinen0721-llama0828-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
rinen0721-llama0828-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/rinen0721-llama0828-v1
rinen0721-llama0828-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/rinen0721-llama0828-v1/config.json
rinen0721-llama0828-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/rinen0721-llama0828-v1/special_tokens_map.json
rinen0721-llama0828-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/rinen0721-llama0828-v1/tokenizer_config.json
rinen0721-llama0828-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/rinen0721-llama0828-v1/tokenizer.json
rinen0721-llama0828-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/rinen0721-llama0828-v1/flywheel_model.0.safetensors
rinen0721-llama0828-v1-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... Loading 0: 98%|█████████▊| 286/291 [00:05<00:00, 76.81it/s]
Job rinen0721-llama0828-v1-mkmlizer completed after 84.7s with status: succeeded
Stopping job with name rinen0721-llama0828-v1-mkmlizer
Pipeline stage MKMLizer completed in 87.63s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.14s
Running pipeline stage ISVCDeployer
Creating inference service rinen0721-llama0828-v1
Waiting for inference service rinen0721-llama0828-v1 to be ready
Connection pool is full, discarding connection: %s. Connection pool size: %s
Inference service rinen0721-llama0828-v1 ready after 170.89250826835632s
Pipeline stage ISVCDeployer completed in 171.66s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.72810435295105s
Received healthy response to inference request in 2.2138917446136475s
Received healthy response to inference request in 2.0475270748138428s
Received healthy response to inference request in 2.2475504875183105s
Received healthy response to inference request in 1.7963829040527344s
5 requests
0 failed requests
5th percentile: 1.846611738204956
10th percentile: 1.8968405723571777
20th percentile: 1.997298240661621
30th percentile: 2.080800008773804
40th percentile: 2.1473458766937257
50th percentile: 2.2138917446136475
60th percentile: 2.2273552417755127
70th percentile: 2.240818738937378
80th percentile: 2.3436612606048586
90th percentile: 2.535882806777954
95th percentile: 2.6319935798645018
99th percentile: 2.70888219833374
mean time: 2.206691312789917
Pipeline stage StressChecker completed in 12.96s
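The StressChecker summary follows directly from the five measured latencies: the percentiles match linear-interpolation percentiles (numpy's default method), and the mean is the plain average. Assuming that is indeed how the checker computes them, the numbers above can be reproduced as follows.

# Reproduce the StressChecker summary from the five healthy-response latencies.
# Assumes linear-interpolation percentiles (numpy's default), which matches the values above.
import numpy as np

latencies = [2.72810435295105, 2.2138917446136475, 2.0475270748138428,
             2.2475504875183105, 1.7963829040527344]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print("mean time:", np.mean(latencies))  # 2.206691312789917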
rinen0721-llama0828_v1 status is now deployed due to DeploymentManager action
rinen0721-llama0828_v1 status is now inactive due to auto deactivation of underperforming models

Usage Metrics

Latency Metrics