submission_id: rinen0721-llama0830_v2
developer_uid: rinen0721
best_of: 16
celo_rating: 1228.49
display_name: rinen0721-llama0830_v1
family_friendly_score: 0.0
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
is_internal_developer: False
language_model: rinen0721/llama0830
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: rinen0721/llama0830
model_name: rinen0721-llama0830_v1
model_num_parameters: 8030261248.0
model_repo: rinen0721/llama0830
model_size: 8B
num_battles: 10926
num_wins: 5357
ranking_group: single
status: torndown
submission_type: basic
timestamp: 2024-08-30T10:03:14+00:00
us_pacific_date: 2024-08-30
win_ratio: 0.49029837085850264
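A quick arithmetic check (a sketch, using only the fields above): the win_ratio is simply num_wins divided by num_battles.

```python
# Check that win_ratio == num_wins / num_battles for this submission.
num_wins, num_battles = 5357, 10926
print(num_wins / num_battles)  # 0.49029837085850264, matching the win_ratio field above
```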
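The formatter field above lists the templates used to render a conversation into a single model input. A minimal sketch of how those templates could be assembled is below; the bot name, user name, persona, and chat turns are illustrative placeholders, not data from this submission.

```python
# Minimal sketch of assembling a prompt from the formatter templates above.
# All names, persona text, and messages below are illustrative placeholders.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_input(bot_name, user_name, memory, prompt, turns):
    """Render persona memory, the prompt, the chat turns, and the response stub."""
    parts = [
        formatter["memory_template"].format(bot_name=bot_name, memory=memory),
        formatter["prompt_template"].format(prompt=prompt),
    ]
    for speaker, message in turns:
        if speaker == "bot":
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
    # The response template leaves the input ending in "{bot_name}:" so the model
    # continues with the bot's next message; generation stops at "\n" per stopping_words.
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

print(build_input("Bot", "User", "A friendly assistant.", "Casual chat.",
                  [("user", "Hi there!"), ("bot", "Hello!"), ("user", "How are you?")]))
```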
Resubmit model
Running pipeline stage MKMLizer
Starting job with name rinen0721-llama0830-v2-mkmlizer
Waiting for job on rinen0721-llama0830-v2-mkmlizer to finish
Stopping job with name rinen0721-llama0830-v2-mkmlizer
%s, retrying in %s seconds...
Starting job with name rinen0721-llama0830-v2-mkmlizer
Waiting for job on rinen0721-llama0830-v2-mkmlizer to finish
rinen0721-llama0830-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
rinen0721-llama0830-v2-mkmlizer: ║ _____ __ __ ║
rinen0721-llama0830-v2-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
rinen0721-llama0830-v2-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
rinen0721-llama0830-v2-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
rinen0721-llama0830-v2-mkmlizer: ║ /___/ ║
rinen0721-llama0830-v2-mkmlizer: ║ ║
rinen0721-llama0830-v2-mkmlizer: ║ Version: 0.10.1 ║
rinen0721-llama0830-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
rinen0721-llama0830-v2-mkmlizer: ║ https://mk1.ai ║
rinen0721-llama0830-v2-mkmlizer: ║ ║
rinen0721-llama0830-v2-mkmlizer: ║ The license key for the current software has been verified as ║
rinen0721-llama0830-v2-mkmlizer: ║ belonging to: ║
rinen0721-llama0830-v2-mkmlizer: ║ ║
rinen0721-llama0830-v2-mkmlizer: ║ Chai Research Corp. ║
rinen0721-llama0830-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
rinen0721-llama0830-v2-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
rinen0721-llama0830-v2-mkmlizer: ║ ║
rinen0721-llama0830-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
rinen0721-llama0830-v2-mkmlizer: Downloaded to shared memory in 29.034s
rinen0721-llama0830-v2-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmp7cxtpi69, device:0
rinen0721-llama0830-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
rinen0721-llama0830-v2-mkmlizer: quantized model in 26.598s
rinen0721-llama0830-v2-mkmlizer: Processed model rinen0721/llama0830 in 55.632s
rinen0721-llama0830-v2-mkmlizer: creating bucket guanaco-mkml-models
rinen0721-llama0830-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
rinen0721-llama0830-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/rinen0721-llama0830-v2
rinen0721-llama0830-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/rinen0721-llama0830-v2/special_tokens_map.json
rinen0721-llama0830-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/rinen0721-llama0830-v2/config.json
rinen0721-llama0830-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/rinen0721-llama0830-v2/tokenizer_config.json
rinen0721-llama0830-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/rinen0721-llama0830-v2/tokenizer.json
rinen0721-llama0830-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/rinen0721-llama0830-v2/flywheel_model.0.safetensors
rinen0721-llama0830-v2-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... Loading 0: 98%|█████████▊| 285/291 [00:05<00:00, 66.38it/s]
Job rinen0721-llama0830-v2-mkmlizer completed after 83.77s with status: succeeded
Stopping job with name rinen0721-llama0830-v2-mkmlizer
Pipeline stage MKMLizer completed in 85.24s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.21s
Running pipeline stage ISVCDeployer
Creating inference service rinen0721-llama0830-v2
Waiting for inference service rinen0721-llama0830-v2 to be ready
Connection pool is full, discarding connection: %s. Connection pool size: %s
Failed to get response for submission blend_jidor_2024-08-22: ('http://zonemercy-lexical-nemo-1518-v18-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'readfrom tcp 127.0.0.1:49552->127.0.0.1:8080: write tcp 127.0.0.1:49552->127.0.0.1:8080: write: connection reset by peer\n')
Inference service rinen0721-llama0830-v2 ready after 190.77600240707397s
Pipeline stage ISVCDeployer completed in 191.23s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.154439687728882s
Received healthy response to inference request in 2.0928795337677s
Received healthy response to inference request in 1.919438362121582s
Received healthy response to inference request in 2.296409845352173s
Received healthy response to inference request in 2.058751106262207s
5 requests
0 failed requests
5th percentile: 1.947300910949707
10th percentile: 1.975163459777832
20th percentile: 2.030888557434082
30th percentile: 2.0655767917633057
40th percentile: 2.079228162765503
50th percentile: 2.0928795337677
60th percentile: 2.117503595352173
70th percentile: 2.1421276569366454
80th percentile: 2.18283371925354
90th percentile: 2.2396217823028564
95th percentile: 2.268015813827515
99th percentile: 2.290731039047241
mean time: 2.104383707046509
Pipeline stage StressChecker completed in 11.29s
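The StressChecker summary above can be reproduced from the five healthy response times using linearly interpolated percentiles; the sketch below shows the presumed calculation (numpy-style interpolation), not the checker's actual code.

```python
# Reproduce the StressChecker percentiles and mean from the five latencies above,
# assuming numpy's default linearly interpolated percentile method.
import numpy as np

latencies = np.array([2.154439687728882, 2.0928795337677, 1.919438362121582,
                      2.296409845352173, 2.058751106262207])

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print(f"mean time: {latencies.mean()}")
```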
rinen0721-llama0830_v2 status is now deployed due to DeploymentManager action
rinen0721-llama0830_v2 status is now inactive due to auto deactivation (removal of underperforming models)
rinen0721-llama0830_v2 status is now torndown due to DeploymentManager action