submission_id: stark2000s-gpt3_v1
developer_uid: stark2000s
alignment_samples: 11792
alignment_score: 0.554001216396812
best_of: 1
celo_rating: 1111.29
display_name: stark2000s-gpt3_v1
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 1, 'max_output_tokens': 64}
is_internal_developer: False
language_model: stark2000s/gpt3
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: stark2000s/gpt3
model_name: stark2000s-gpt3_v1
model_num_parameters: 8030261248
model_repo: stark2000s/gpt3
model_size: 8B
num_battles: 11792
num_wins: 3927
propriety_score: 0.736896197327852
propriety_total_count: 973
ranking_group: single
status: inactive
submission_type: basic
timestamp: 2024-08-30T08:24:31+00:00
us_pacific_date: 2024-08-30
win_ratio: 0.33302238805970147
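The `formatter` field above defines how a conversation is rendered into the single prompt string the model sees. A minimal sketch of that assembly, assuming the usual template-concatenation scheme (the template strings are copied from the `formatter` field; `build_prompt` and the example persona, user name, and messages are hypothetical):

```python
# Sketch of prompt assembly from the formatter templates above.
# Template strings are copied verbatim from the `formatter` field; the
# persona, user name, and messages below are made-up examples.
MEMORY_TEMPLATE = "{bot_name}'s Persona: {memory}\n####\n"
PROMPT_TEMPLATE = "{prompt}\n<START>\n"
BOT_TEMPLATE = "{bot_name}: {message}\n"
USER_TEMPLATE = "{user_name}: {message}\n"
RESPONSE_TEMPLATE = "{bot_name}:"

def build_prompt(bot_name, memory, prompt, turns):
    """Join persona, scenario prompt, chat history, and the response cue."""
    parts = [
        MEMORY_TEMPLATE.format(bot_name=bot_name, memory=memory),
        PROMPT_TEMPLATE.format(prompt=prompt),
    ]
    for speaker, message in turns:
        template = BOT_TEMPLATE if speaker == bot_name else USER_TEMPLATE
        # str.format ignores unused keyword arguments, so both names are passed.
        parts.append(template.format(bot_name=speaker, user_name=speaker,
                                     message=message))
    # The model is asked to continue generation after "{bot_name}:".
    parts.append(RESPONSE_TEMPLATE.format(bot_name=bot_name))
    return "".join(parts)

text = build_prompt("Bot", "a friendly assistant", "Greet the user.",
                    [("User", "Hi there!")])
print(text)
```

With `max_input_tokens: 512`, a real serving stack would also truncate older turns to fit the context window (`truncate_by_message: False` suggests truncation by tokens rather than whole messages), which this sketch omits.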
Running pipeline stage MKMLizer
Starting job with name stark2000s-gpt3-v1-mkmlizer
Waiting for job on stark2000s-gpt3-v1-mkmlizer to finish
Stopping job with name stark2000s-gpt3-v1-mkmlizer
%s, retrying in %s seconds...
Starting job with name stark2000s-gpt3-v1-mkmlizer
Waiting for job on stark2000s-gpt3-v1-mkmlizer to finish
stark2000s-gpt3-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
stark2000s-gpt3-v1-mkmlizer: ║ flywheel ║
stark2000s-gpt3-v1-mkmlizer: ║ ║
stark2000s-gpt3-v1-mkmlizer: ║ Version: 0.10.1 ║
stark2000s-gpt3-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
stark2000s-gpt3-v1-mkmlizer: ║ https://mk1.ai ║
stark2000s-gpt3-v1-mkmlizer: ║ ║
stark2000s-gpt3-v1-mkmlizer: ║ The license key for the current software has been verified as ║
stark2000s-gpt3-v1-mkmlizer: ║ belonging to: ║
stark2000s-gpt3-v1-mkmlizer: ║ ║
stark2000s-gpt3-v1-mkmlizer: ║ Chai Research Corp. ║
stark2000s-gpt3-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
stark2000s-gpt3-v1-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
stark2000s-gpt3-v1-mkmlizer: ║ ║
stark2000s-gpt3-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
stark2000s-gpt3-v1-mkmlizer: Downloaded to shared memory in 35.707s
stark2000s-gpt3-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpbd7rnxaz, device:0
stark2000s-gpt3-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
stark2000s-gpt3-v1-mkmlizer: quantized model in 26.393s
stark2000s-gpt3-v1-mkmlizer: Processed model stark2000s/gpt3 in 62.099s
stark2000s-gpt3-v1-mkmlizer: creating bucket guanaco-mkml-models
stark2000s-gpt3-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
stark2000s-gpt3-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/stark2000s-gpt3-v1
stark2000s-gpt3-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/stark2000s-gpt3-v1/config.json
stark2000s-gpt3-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/stark2000s-gpt3-v1/special_tokens_map.json
stark2000s-gpt3-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/stark2000s-gpt3-v1/tokenizer_config.json
stark2000s-gpt3-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/stark2000s-gpt3-v1/tokenizer.json
stark2000s-gpt3-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/stark2000s-gpt3-v1/flywheel_model.0.safetensors
stark2000s-gpt3-v1-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] … Loading 0: 98%|█████████▊| 286/291 [00:05<00:00, 72.67it/s]
Job stark2000s-gpt3-v1-mkmlizer completed after 83.89s with status: succeeded
Stopping job with name stark2000s-gpt3-v1-mkmlizer
Pipeline stage MKMLizer completed in 85.35s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.08s
Running pipeline stage ISVCDeployer
Creating inference service stark2000s-gpt3-v1
Waiting for inference service stark2000s-gpt3-v1 to be ready
Inference service stark2000s-gpt3-v1 ready after 181.59304022789001s
Pipeline stage ISVCDeployer completed in 182.30s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.5006821155548096s
Received healthy response to inference request in 1.1572496891021729s
Received healthy response to inference request in 1.2662262916564941s
Received healthy response to inference request in 0.6658813953399658s
Received healthy response to inference request in 2.086796998977661s
5 requests
0 failed requests
5th percentile: 0.7641550540924072
10th percentile: 0.8624287128448487
20th percentile: 1.0589760303497315
30th percentile: 1.1790450096130372
40th percentile: 1.2226356506347655
50th percentile: 1.2662262916564941
60th percentile: 1.3600086212158202
70th percentile: 1.4537909507751465
80th percentile: 1.61790509223938
90th percentile: 1.8523510456085206
95th percentile: 1.9695740222930906
99th percentile: 2.063352403640747
mean time: 1.3353672981262208
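The percentile figures above follow from linear interpolation over the five sampled latencies. A sketch of that computation using numpy's default interpolation (the latency values are copied from the stress-check log lines above):

```python
import numpy as np

# Per-request latencies reported by the StressChecker stage, in seconds.
latencies = [1.5006821155548096, 1.1572496891021729, 1.2662262916564941,
             0.6658813953399658, 2.086796998977661]

# np.percentile defaults to linear interpolation between the sorted samples,
# which reproduces the percentile lines in the log.
for q in (5, 50, 95):
    print(f"{q}th percentile: {np.percentile(latencies, q)}")
print("mean time:", np.mean(latencies))
```

With only five samples, the high percentiles are dominated by the single slowest request (2.087s), so they should be read as rough indicators rather than stable tail-latency estimates.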
Pipeline stage StressChecker completed in 8.24s
stark2000s-gpt3_v1 status is now deployed due to DeploymentManager action
stark2000s-gpt3_v1 status is now inactive due to auto-deactivation of underperforming models
