submission_id: pawankrd-cosmosrp-llama31_v6
developer_uid: PawanOsman
alignment_samples: 0
best_of: 16
celo_rating: 1205.3
display_name: cosmosrp-llama31-test
formatter: {'memory_template': "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 0.6, 'top_p': 0.95, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['<', '>'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64, 'reward_max_token_input': 128}
is_internal_developer: False
language_model: PawanKrd/cosmosrp-llama31
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: PawanKrd/cosmosrp-llama3
model_name: cosmosrp-llama31-test
model_num_parameters: 8030261248.0
model_repo: PawanKrd/cosmosrp-llama31
model_size: 8B
num_battles: 8770
num_wins: 4496
propriety_score: 0.7538314176245211
propriety_total_count: 1044.0
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: Jellywibble/gpt2_xl_pairwise_89m_step_347634
status: torndown
submission_type: basic
timestamp: 2024-07-27T18:44:52+00:00
us_pacific_date: 2024-07-27
win_ratio: 0.5126567844925883
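The formatter above is the Llama 3.1 chat template, and win_ratio is simply num_wins / num_battles. A minimal Python sketch of how these templates compose into a prompt — the persona, user name, and messages below are hypothetical, chosen only for illustration:

```python
# Templates copied from this submission's formatter field.
memory_template = "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n"
prompt_template = "{prompt}<|eot_id|>"
user_template = "<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>"
bot_template = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>"
response_template = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:"

# Hypothetical conversation values, for illustration only.
convo = (
    memory_template.format(bot_name="Luna", memory="A friendly stargazer.")
    + prompt_template.format(prompt="Luna chats about the night sky.")
    + user_template.format(user_name="Alex", message="Hi!")
    + bot_template.format(bot_name="Luna", message="Hello, Alex!")
    + user_template.format(user_name="Alex", message="What can we see tonight?")
    + response_template.format(bot_name="Luna")
)
print(convo)

# win_ratio = num_wins / num_battles from the fields above.
assert abs(4496 / 8770 - 0.5126567844925883) < 1e-12
```

The model then generates a completion after the final `Luna:` header, stopping at the configured stopping words.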
Running pipeline stage MKMLizer
Starting job with name pawankrd-cosmosrp-llama31-v6-mkmlizer
Waiting for job on pawankrd-cosmosrp-llama31-v6-mkmlizer to finish
Failed to get response for submission shuttleai-shuttle-2-5-1_5730_v10: ('http://shuttleai-shuttle-2-5-1-5730-v10-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'request timeout')
Failed to get response for submission shuttleai-shuttle-2-5-1-_5730_v8: ('http://shuttleai-shuttle-2-5-1-5730-v8-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'request timeout')
pawankrd-cosmosrp-llama31-v6-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ _____ __ __ ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ /___/ ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ Version: 0.9.7 ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ https://mk1.ai ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ The license key for the current software has been verified as ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ belonging to: ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ Chai Research Corp. ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ║ ║
pawankrd-cosmosrp-llama31-v6-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
Connection pool is full, discarding connection: %s. Connection pool size: %s
pawankrd-cosmosrp-llama31-v6-mkmlizer: Downloaded to shared memory in 20.112s
pawankrd-cosmosrp-llama31-v6-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpmedjjwmv, device:0
pawankrd-cosmosrp-llama31-v6-mkmlizer: Saving flywheel model at /dev/shm/model_cache
pawankrd-cosmosrp-llama31-v6-mkmlizer: quantized model in 25.967s
pawankrd-cosmosrp-llama31-v6-mkmlizer: Processed model PawanKrd/cosmosrp-llama31 in 46.079s
pawankrd-cosmosrp-llama31-v6-mkmlizer: creating bucket guanaco-mkml-models
pawankrd-cosmosrp-llama31-v6-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
pawankrd-cosmosrp-llama31-v6-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/pawankrd-cosmosrp-llama31-v6
pawankrd-cosmosrp-llama31-v6-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/pawankrd-cosmosrp-llama31-v6/config.json
pawankrd-cosmosrp-llama31-v6-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/pawankrd-cosmosrp-llama31-v6/special_tokens_map.json
pawankrd-cosmosrp-llama31-v6-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/pawankrd-cosmosrp-llama31-v6/tokenizer.json
pawankrd-cosmosrp-llama31-v6-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/pawankrd-cosmosrp-llama31-v6/flywheel_model.0.safetensors
pawankrd-cosmosrp-llama31-v6-mkmlizer: loading reward model from Jellywibble/gpt2_xl_pairwise_89m_step_347634
pawankrd-cosmosrp-llama31-v6-mkmlizer: Loading 0: 100%|█████████▉| 290/291 [00:11<00:00, 3.27it/s]
pawankrd-cosmosrp-llama31-v6-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:957: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
pawankrd-cosmosrp-llama31-v6-mkmlizer: warnings.warn(
pawankrd-cosmosrp-llama31-v6-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:785: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
pawankrd-cosmosrp-llama31-v6-mkmlizer: warnings.warn(
pawankrd-cosmosrp-llama31-v6-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
pawankrd-cosmosrp-llama31-v6-mkmlizer: Saving duration: 1.355s
pawankrd-cosmosrp-llama31-v6-mkmlizer: Processed model Jellywibble/gpt2_xl_pairwise_89m_step_347634 in 10.656s
pawankrd-cosmosrp-llama31-v6-mkmlizer: creating bucket guanaco-reward-models
pawankrd-cosmosrp-llama31-v6-mkmlizer: Bucket 's3://guanaco-reward-models/' created
pawankrd-cosmosrp-llama31-v6-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/pawankrd-cosmosrp-llama31-v6_reward
pawankrd-cosmosrp-llama31-v6-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/pawankrd-cosmosrp-llama31-v6_reward/config.json
pawankrd-cosmosrp-llama31-v6-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/pawankrd-cosmosrp-llama31-v6_reward/vocab.json
pawankrd-cosmosrp-llama31-v6-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/pawankrd-cosmosrp-llama31-v6_reward/tokenizer.json
pawankrd-cosmosrp-llama31-v6-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/pawankrd-cosmosrp-llama31-v6_reward/reward.tensors
Job pawankrd-cosmosrp-llama31-v6-mkmlizer completed after 85.51s with status: succeeded
Stopping job with name pawankrd-cosmosrp-llama31-v6-mkmlizer
Pipeline stage MKMLizer completed in 86.46s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.09s
Running pipeline stage ISVCDeployer
Creating inference service pawankrd-cosmosrp-llama31-v6
Waiting for inference service pawankrd-cosmosrp-llama31-v6 to be ready
Failed to get response for submission shuttleai-shuttle-2-5-1-_5730_v8: ('http://shuttleai-shuttle-2-5-1-5730-v8-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'activator request timeout')
Failed to get response for submission shuttleai-shuttle-2-5-1_5730_v10: ('http://shuttleai-shuttle-2-5-1-5730-v10-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'activator request timeout')
Inference service pawankrd-cosmosrp-llama31-v6 ready after 90.98969650268555s
Pipeline stage ISVCDeployer completed in 92.49s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.1499457359313965s
Received healthy response to inference request in 1.3698668479919434s
Received healthy response to inference request in 1.3712611198425293s
Received healthy response to inference request in 1.31075119972229s
Received healthy response to inference request in 1.2703204154968262s
5 requests
0 failed requests
5th percentile: 1.278406572341919
10th percentile: 1.2864927291870116
20th percentile: 1.3026650428771973
30th percentile: 1.3225743293762207
40th percentile: 1.3462205886840821
50th percentile: 1.3698668479919434
60th percentile: 1.3704245567321778
70th percentile: 1.3709822654724122
80th percentile: 1.526998043060303
90th percentile: 1.8384718894958496
95th percentile: 1.9942088127136228
99th percentile: 2.118798351287842
mean time: 1.494429063796997
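The percentile figures above are consistent with linear interpolation over the five sorted response times (the same method as NumPy's default `percentile`); a pure-Python sketch that reproduces them:

```python
# The five healthy response times (seconds) reported above.
times = [2.1499457359313965, 1.3698668479919434, 1.3712611198425293,
         1.31075119972229, 1.2703204154968262]

def percentile(xs, p):
    """Linear-interpolation percentile (NumPy's default method)."""
    xs = sorted(xs)
    k = (len(xs) - 1) * p / 100.0
    i, frac = int(k), k - int(k)
    j = min(i + 1, len(xs) - 1)
    return xs[i] + frac * (xs[j] - xs[i])

for p in (5, 50, 95, 99):
    print(f"{p}th percentile: {percentile(times, p)}")
print("mean time:", sum(times) / len(times))
```

With five samples the 50th percentile is just the median (the third sorted value), and the higher percentiles interpolate toward the single 2.15 s outlier.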
Pipeline stage StressChecker completed in 8.12s
pawankrd-cosmosrp-llama31_v6 status is now deployed due to DeploymentManager action
pawankrd-cosmosrp-llama31_v6 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of pawankrd-cosmosrp-llama31_v6
Running pipeline stage ISVCDeleter
Checking if service pawankrd-cosmosrp-llama31-v6 is running
Tearing down inference service pawankrd-cosmosrp-llama31-v6
Service pawankrd-cosmosrp-llama31-v6 has been torndown
Pipeline stage ISVCDeleter completed in 5.04s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key pawankrd-cosmosrp-llama31-v6/config.json from bucket guanaco-mkml-models
Deleting key pawankrd-cosmosrp-llama31-v6/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key pawankrd-cosmosrp-llama31-v6/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key pawankrd-cosmosrp-llama31-v6/tokenizer.json from bucket guanaco-mkml-models
Deleting key pawankrd-cosmosrp-llama31-v6/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key pawankrd-cosmosrp-llama31-v6_reward/config.json from bucket guanaco-reward-models
Deleting key pawankrd-cosmosrp-llama31-v6_reward/merges.txt from bucket guanaco-reward-models
Deleting key pawankrd-cosmosrp-llama31-v6_reward/reward.tensors from bucket guanaco-reward-models
Deleting key pawankrd-cosmosrp-llama31-v6_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key pawankrd-cosmosrp-llama31-v6_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key pawankrd-cosmosrp-llama31-v6_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key pawankrd-cosmosrp-llama31-v6_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 5.45s
pawankrd-cosmosrp-llama31_v6 status is now torndown due to DeploymentManager action

Usage Metrics / Latency Metrics: interactive charts, not captured in this log dump.