developer_uid: sao10k
submission_id: sao10k-l3-rp-v5-2_v1
model_name: RP-v5-Expr2
model_group: Sao10K/L3-RP-v5.2
status: torndown
timestamp: 2024-07-07T15:40:51+00:00
num_battles: 43381
num_wins: 23719
celo_rating: 1220.7
family_friendly_score: 0.0
submission_type: basic
model_repo: Sao10K/L3-RP-v5.2
model_architecture: LlamaForCausalLM
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: RP-v5-Expr2
is_internal_developer: False
language_model: Sao10K/L3-RP-v5.2
model_size: 8B
ranking_group: single
us_pacific_date: 2024-07-07
win_ratio: 0.5467601023489546
generation_params: {'temperature': 1.4, 'top_p': 1.0, 'min_p': 0.2, 'top_k': 50, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|end_header_id|>,', '<|eot_id|>,', '\n\n{user_name}'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
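The formatter and generation_params blocks above determine how a conversation is serialized into a Llama-3-style prompt and how completions are sampled (best_of: 16 candidates of up to 64 output tokens, presumably reranked by the reward model listed in reward_repo). Below is a minimal sketch of that prompt assembly using plain Python string formatting; the bot name, user name, persona, scenario, and messages are hypothetical example values, and the real serving stack may assemble prompts differently.

```python
# Minimal sketch: build a Llama-3-style prompt from the formatter templates above.
# All names, the persona, the scenario, and the message history are hypothetical.
formatter = {
    "memory_template": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n",
    "prompt_template": "{prompt}<|eot_id|>",
    "bot_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>",
    "user_template": "<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>",
    "response_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:",
}

bot_name, user_name = "Aria", "Traveler"                 # hypothetical names
memory = "A calm librarian who speaks in riddles."       # hypothetical persona
prompt = "Scenario: a quiet library at midnight."        # hypothetical scenario
history = [("user", "Hello?"), ("bot", "Shh... the books are sleeping.")]

parts = [
    formatter["memory_template"].format(bot_name=bot_name, memory=memory),
    formatter["prompt_template"].format(prompt=prompt),
]
for role, message in history:
    template = formatter["user_template"] if role == "user" else formatter["bot_template"]
    # str.format ignores unused keyword arguments, so both templates can share one call.
    parts.append(template.format(bot_name=bot_name, user_name=user_name, message=message))
parts.append(formatter["response_template"].format(bot_name=bot_name))

full_prompt = "".join(parts)
# Generation then samples with the settings from generation_params above
# (temperature 1.4, min_p 0.2, top_k 50, best_of 16, max_output_tokens 64).
print(full_prompt)
```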
Resubmit model
Running pipeline stage MKMLizer
Starting job with name sao10k-l3-rp-v5-2-v1-mkmlizer
Waiting for job on sao10k-l3-rp-v5-2-v1-mkmlizer to finish
sao10k-l3-rp-v5-2-v1-mkmlizer: [flywheel ASCII banner]
sao10k-l3-rp-v5-2-v1-mkmlizer: Version: 0.8.14
sao10k-l3-rp-v5-2-v1-mkmlizer: Copyright 2023 MK ONE TECHNOLOGIES Inc.
sao10k-l3-rp-v5-2-v1-mkmlizer: https://mk1.ai
sao10k-l3-rp-v5-2-v1-mkmlizer: The license key for the current software has been verified as belonging to:
sao10k-l3-rp-v5-2-v1-mkmlizer: Chai Research Corp.
sao10k-l3-rp-v5-2-v1-mkmlizer: Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f
sao10k-l3-rp-v5-2-v1-mkmlizer: Expiration: 2024-07-15 23:59:59
sao10k-l3-rp-v5-2-v1-mkmlizer: Downloaded to shared memory in 36.601s
sao10k-l3-rp-v5-2-v1-mkmlizer: quantizing model to /dev/shm/model_cache
sao10k-l3-rp-v5-2-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
sao10k-l3-rp-v5-2-v1-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... 96%|█████████▌| 278/291 [00:08<00:00, 59.95it/s]
sao10k-l3-rp-v5-2-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
sao10k-l3-rp-v5-2-v1-mkmlizer: quantized model in 29.476s
sao10k-l3-rp-v5-2-v1-mkmlizer: Processed model Sao10K/L3-RP-v5.2 in 66.077s
sao10k-l3-rp-v5-2-v1-mkmlizer: creating bucket guanaco-mkml-models
sao10k-l3-rp-v5-2-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
sao10k-l3-rp-v5-2-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/sao10k-l3-rp-v5-2-v1
Failed to get response for submission blend_pefis_2024-07-04: ('http://mistralai-mixtral-8x7b-3473-v33-predictor-default.tenant-chaiml-guanaco.knative.ord1.coreweave.cloud/v1/models/GPT-J-6B-lit-v2:predict', '{"error":"TypeError : SamplingParameters.__init__() got an unexpected keyword argument \'min_p\'"}')
sao10k-l3-rp-v5-2-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/sao10k-l3-rp-v5-2-v1/flywheel_model.0.safetensors
sao10k-l3-rp-v5-2-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
sao10k-l3-rp-v5-2-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:919: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v5-2-v1-mkmlizer: warnings.warn(
sao10k-l3-rp-v5-2-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
sao10k-l3-rp-v5-2-v1-mkmlizer: warnings.warn(
sao10k-l3-rp-v5-2-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:769: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v5-2-v1-mkmlizer: warnings.warn(
sao10k-l3-rp-v5-2-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v5-2-v1-mkmlizer: warnings.warn(
sao10k-l3-rp-v5-2-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
sao10k-l3-rp-v5-2-v1-mkmlizer: return self.fget.__get__(instance, owner)()
sao10k-l3-rp-v5-2-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
sao10k-l3-rp-v5-2-v1-mkmlizer: Saving duration: 0.510s
sao10k-l3-rp-v5-2-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 7.283s
sao10k-l3-rp-v5-2-v1-mkmlizer: creating bucket guanaco-reward-models
sao10k-l3-rp-v5-2-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
sao10k-l3-rp-v5-2-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/sao10k-l3-rp-v5-2-v1_reward
sao10k-l3-rp-v5-2-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/sao10k-l3-rp-v5-2-v1_reward/config.json
sao10k-l3-rp-v5-2-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/sao10k-l3-rp-v5-2-v1_reward/tokenizer_config.json
sao10k-l3-rp-v5-2-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/sao10k-l3-rp-v5-2-v1_reward/special_tokens_map.json
sao10k-l3-rp-v5-2-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/sao10k-l3-rp-v5-2-v1_reward/merges.txt
sao10k-l3-rp-v5-2-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/sao10k-l3-rp-v5-2-v1_reward/vocab.json
sao10k-l3-rp-v5-2-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/sao10k-l3-rp-v5-2-v1_reward/tokenizer.json
sao10k-l3-rp-v5-2-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/sao10k-l3-rp-v5-2-v1_reward/reward.tensors
Job sao10k-l3-rp-v5-2-v1-mkmlizer completed after 103.93s with status: succeeded
Stopping job with name sao10k-l3-rp-v5-2-v1-mkmlizer
Pipeline stage MKMLizer completed in 104.85s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service sao10k-l3-rp-v5-2-v1
Waiting for inference service sao10k-l3-rp-v5-2-v1 to be ready
Connection pool is full, discarding connection: %s
Inference service sao10k-l3-rp-v5-2-v1 ready after 100.4500572681427s
Pipeline stage ISVCDeployer completed in 107.56s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.9765071868896484s
Received healthy response to inference request in 1.3545830249786377s
Received healthy response to inference request in 1.316293716430664s
Received healthy response to inference request in 1.299543857574463s
Received healthy response to inference request in 1.3439135551452637s
5 requests
0 failed requests
5th percentile: 1.302893829345703
10th percentile: 1.3062438011169433
20th percentile: 1.3129437446594239
30th percentile: 1.321817684173584
40th percentile: 1.3328656196594237
50th percentile: 1.3439135551452637
60th percentile: 1.3481813430786134
70th percentile: 1.3524491310119628
80th percentile: 1.47896785736084
90th percentile: 1.7277375221252442
95th percentile: 1.8521223545074461
99th percentile: 1.9516302204132079
mean time: 1.4581682682037354
Pipeline stage StressChecker completed in 7.95s
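The latency statistics above are consistent with linear-interpolation percentiles over the five reported response times. A minimal sketch that reproduces them, assuming numpy's default percentile interpolation (the actual StressChecker implementation is not shown in this log):

```python
import numpy as np

# The five healthy response latencies (seconds) reported by the StressChecker stage.
latencies = [1.9765071868896484, 1.3545830249786377, 1.316293716430664,
             1.299543857574463, 1.3439135551452637]

print(f"mean time: {np.mean(latencies)}")
for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    # np.percentile's default linear interpolation reproduces the values logged above.
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
```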
sao10k-l3-rp-v5-2_v1 status is now deployed due to DeploymentManager action
sao10k-l3-rp-v5-2_v1 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of sao10k-l3-rp-v5-2_v1
Running pipeline stage ISVCDeleter
Checking if service sao10k-l3-rp-v5-2-v1 is running
Skipping teardown as no inference service was found
Pipeline stage ISVCDeleter completed in 4.57s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key sao10k-l3-rp-v5-2-v1/config.json from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v5-2-v1/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v5-2-v1/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v5-2-v1/tokenizer.json from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v5-2-v1/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key sao10k-l3-rp-v5-2-v1_reward/config.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v5-2-v1_reward/merges.txt from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v5-2-v1_reward/reward.tensors from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v5-2-v1_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v5-2-v1_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v5-2-v1_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v5-2-v1_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 5.79s
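The MKMLModelDeleter stage removes the uploaded model and reward artifacts key by key from the two buckets. A minimal sketch of an equivalent cleanup, assuming boto3 and the bucket/key names logged above (not the actual deleter implementation):

```python
import boto3

# Keys logged above by the MKMLModelDeleter stage, grouped per bucket.
to_delete = {
    "guanaco-mkml-models": [
        "sao10k-l3-rp-v5-2-v1/config.json",
        "sao10k-l3-rp-v5-2-v1/flywheel_model.0.safetensors",
        "sao10k-l3-rp-v5-2-v1/special_tokens_map.json",
        "sao10k-l3-rp-v5-2-v1/tokenizer.json",
        "sao10k-l3-rp-v5-2-v1/tokenizer_config.json",
    ],
    "guanaco-reward-models": [
        "sao10k-l3-rp-v5-2-v1_reward/config.json",
        "sao10k-l3-rp-v5-2-v1_reward/merges.txt",
        "sao10k-l3-rp-v5-2-v1_reward/reward.tensors",
        "sao10k-l3-rp-v5-2-v1_reward/special_tokens_map.json",
        "sao10k-l3-rp-v5-2-v1_reward/tokenizer.json",
        "sao10k-l3-rp-v5-2-v1_reward/tokenizer_config.json",
        "sao10k-l3-rp-v5-2-v1_reward/vocab.json",
    ],
}

s3 = boto3.client("s3")
for bucket, keys in to_delete.items():
    # delete_objects accepts up to 1000 keys per call; both lists are well under that limit.
    s3.delete_objects(Bucket=bucket, Delete={"Objects": [{"Key": k} for k in keys]})
```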
sao10k-l3-rp-v5-2_v1 status is now torndown due to DeploymentManager action