submission_id: stark2000s-gpt3_v3
developer_uid: stark2000s
alignment_samples: 12509
alignment_score: -0.8538421513553897
best_of: 16
celo_rating: 1214.11
display_name: stark2000s-gpt3_v2
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
is_internal_developer: False
language_model: stark2000s/gpt3
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: stark2000s/gpt3
model_name: stark2000s-gpt3_v2
model_num_parameters: 8030261248.0
model_repo: stark2000s/gpt3
model_size: 8B
num_battles: 12509
num_wins: 5782
propriety_score: 0.7568897637795275
propriety_total_count: 1016.0
ranking_group: single
status: inactive
submission_type: basic
timestamp: 2024-09-04T12:26:05+00:00
us_pacific_date: 2024-09-04
win_ratio: 0.4622271964185786
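A few notes on the fields above. win_ratio is simply num_wins / num_battles: 5782 / 12509 = 0.4622271964185786, matching the logged value. Likewise, propriety_score × propriety_total_count = 0.7568897637795275 × 1016 = 769, consistent with the score being the fraction of samples judged proper (769 / 1016); the judging criterion itself is not shown in this log.

The formatter templates control how a conversation is flattened into a single prompt string. A minimal sketch of how they might compose — the assembly order here is an assumption for illustration, not the pipeline's confirmed implementation:

    # Hypothetical illustration; field names mirror the formatter dict above.
    formatter = {
        "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
        "prompt_template": "{prompt}\n<START>\n",
        "bot_template": "{bot_name}: {message}\n",
        "user_template": "{user_name}: {message}\n",
        "response_template": "{bot_name}:",
    }

    def build_prompt(bot_name, memory, prompt, turns):
        # turns: list of (speaker, is_bot, message) tuples, oldest first
        out = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
        out += formatter["prompt_template"].format(prompt=prompt)
        for speaker, is_bot, message in turns:
            template = formatter["bot_template"] if is_bot else formatter["user_template"]
            out += template.format(bot_name=speaker, user_name=speaker, message=message)
        return out + formatter["response_template"].format(bot_name=bot_name)

The generation_params describe best-of-16 sampling at temperature 1.0 with top_k 40 and no nucleus or min-p filtering (top_p 1.0, min_p 0.0), stopping on newline, with at most 512 input and 64 output tokens. A sketch of the per-token filtering step under the standard definitions of temperature, top-k, and top-p — not the serving stack's actual code:

    import numpy as np

    def sample_token(logits, temperature=1.0, top_k=40, top_p=1.0):
        # With top_p=1.0 and min_p=0.0, as in generation_params above,
        # only the top_k=40 cutoff actually filters anything.
        z = np.asarray(logits, dtype=float) / temperature
        top = np.argsort(z)[::-1][:top_k]                 # top-k candidate ids
        p = np.exp(z[top] - z[top].max())
        p /= p.sum()                                      # softmax over survivors
        keep = (np.cumsum(p) - p) < top_p                 # nucleus cut (no-op at 1.0)
        p = p[keep] / p[keep].sum()
        return int(np.random.choice(top[keep], p=p))

best_of: 16 means sixteen such completions are drawn per request and one is kept; the selection step (presumably a ranking model) is not visible in this log.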
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name stark2000s-gpt3-v3-mkmlizer
Waiting for job on stark2000s-gpt3-v3-mkmlizer to finish
stark2000s-gpt3-v3-mkmlizer: ╔═══════════════════════════════════════════════════════════════╗
stark2000s-gpt3-v3-mkmlizer: ║ [flywheel ASCII-art banner]                                   ║
stark2000s-gpt3-v3-mkmlizer: ║                                                               ║
stark2000s-gpt3-v3-mkmlizer: ║ Version: 0.10.1                                               ║
stark2000s-gpt3-v3-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc.                       ║
stark2000s-gpt3-v3-mkmlizer: ║ https://mk1.ai                                                ║
stark2000s-gpt3-v3-mkmlizer: ║                                                               ║
stark2000s-gpt3-v3-mkmlizer: ║ The license key for the current software has been verified as ║
stark2000s-gpt3-v3-mkmlizer: ║ belonging to:                                                 ║
stark2000s-gpt3-v3-mkmlizer: ║                                                               ║
stark2000s-gpt3-v3-mkmlizer: ║ Chai Research Corp.                                           ║
stark2000s-gpt3-v3-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f              ║
stark2000s-gpt3-v3-mkmlizer: ║ Expiration: 2024-10-15 23:59:59                               ║
stark2000s-gpt3-v3-mkmlizer: ║                                                               ║
stark2000s-gpt3-v3-mkmlizer: ╚═══════════════════════════════════════════════════════════════╝
Connection pool is full, discarding connection: %s. Connection pool size: %s
stark2000s-gpt3-v3-mkmlizer: Downloaded to shared memory in 416.109s
stark2000s-gpt3-v3-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpctvu8vy2, device:0
stark2000s-gpt3-v3-mkmlizer: Saving flywheel model at /dev/shm/model_cache
stark2000s-gpt3-v3-mkmlizer: quantized model in 26.308s
stark2000s-gpt3-v3-mkmlizer: Processed model stark2000s/gpt3 in 442.416s
stark2000s-gpt3-v3-mkmlizer: creating bucket guanaco-mkml-models
stark2000s-gpt3-v3-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
stark2000s-gpt3-v3-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/stark2000s-gpt3-v3
stark2000s-gpt3-v3-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/stark2000s-gpt3-v3/config.json
stark2000s-gpt3-v3-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/stark2000s-gpt3-v3/special_tokens_map.json
stark2000s-gpt3-v3-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/stark2000s-gpt3-v3/tokenizer_config.json
stark2000s-gpt3-v3-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/stark2000s-gpt3-v3/tokenizer.json
stark2000s-gpt3-v3-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/stark2000s-gpt3-v3/flywheel_model.0.safetensors
stark2000s-gpt3-v3-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] … Loading 0: 98%|█████████▊| 286/291 [00:05<00:00, 75.41it/s]
Job stark2000s-gpt3-v3-mkmlizer completed after 467.54s with status: succeeded
Stopping job with name stark2000s-gpt3-v3-mkmlizer
Pipeline stage MKMLizer completed in 468.44s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.62s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service stark2000s-gpt3-v3
Waiting for inference service stark2000s-gpt3-v3 to be ready
Inference service stark2000s-gpt3-v3 ready after 130.93806719779968s
Pipeline stage MKMLDeployer completed in 133.09s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.8647184371948242s
Received healthy response to inference request in 1.5598337650299072s
Received healthy response to inference request in 2.1534929275512695s
Received healthy response to inference request in 1.7411417961120605s
Received healthy response to inference request in 1.1636159420013428s
5 requests
0 failed requests
5th percentile: 1.2428595066070556
10th percentile: 1.3221030712127686
20th percentile: 1.4805902004241944
30th percentile: 1.596095371246338
40th percentile: 1.6686185836791991
50th percentile: 1.7411417961120605
60th percentile: 1.790572452545166
70th percentile: 1.8400031089782716
80th percentile: 1.9224733352661134
90th percentile: 2.0379831314086916
95th percentile: 2.0957380294799806
99th percentile: 2.1419419479370116
mean time: 1.6965605735778808
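For reference, the percentile table above can be reproduced from the five logged response times using linear interpolation (numpy's default percentile method); the values agree with the log up to floating-point rounding in the last digit:

    import numpy as np

    times = [1.8647184371948242, 1.5598337650299072, 2.1534929275512695,
             1.7411417961120605, 1.1636159420013428]  # latencies logged above
    for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{q}th percentile: {np.percentile(times, q)}")
    print("mean time:", np.mean(times))  # 1.6965605735778808, as logged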
Pipeline stage StressChecker completed in 9.24s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
starting trigger_guanaco_pipeline %s
Pipeline stage TriggerMKMLProfilingPipeline completed in 5.21s
stark2000s-gpt3_v3 status is now deployed due to DeploymentManager action
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.11s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.14s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service stark2000s-gpt3-v3-profiler
Waiting for inference service stark2000s-gpt3-v3-profiler to be ready
Inference service stark2000s-gpt3-v3-profiler ready after 130.33345532417297s
Pipeline stage MKMLProfilerDeployer completed in 130.72s
run pipeline stage %s
Running pipeline stage MKMLProfilerRunner
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/stark2000s-gpt3-v3-profiler-predictor-00001-deployment-d79xtkfq:/code/chaiverse_profiler_1725453563 --namespace tenant-chaiml-guanaco
kubectl exec -it stark2000s-gpt3-v3-profiler-predictor-00001-deployment-d79xtkfq --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1725453563 && python profiles.py profile --best_of_n 16 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 512 --output_tokens 64 --summary /code/chaiverse_profiler_1725453563/summary.json'
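Reading the invocation above: the profiler sweeps batch sizes 1 and 5 through 195 in steps of 5, replaying 200 sampled requests at 512 input / 64 output tokens with best_of 16, and writes aggregate results to summary.json. As the next line shows, this run was interrupted by signal 15 (SIGTERM) before the job log records a result.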
Received signal 15, running shutdown handler
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service stark2000s-gpt3-v3-profiler is running
Tearing down inference service stark2000s-gpt3-v3-profiler
Service stark2000s-gpt3-v3-profiler has been torn down
Pipeline stage MKMLProfilerDeleter completed in 1.80s
Shutdown handler unregistered and original handlers restored
stark2000s-gpt3_v3 status is now inactive due to auto deactivation (underperforming models are removed)

Usage Metrics: [interactive chart not captured in this export]

Latency Metrics: [interactive chart not captured in this export]