developer_uid: sao10k
submission_id: sao10k-l3-rp-v2_v2
model_name: L3-RP-v2
model_group: Sao10K/L3-RP-v2
status: torndown
timestamp: 2024-05-19T09:44:32+00:00
num_battles: 16012
num_wins: 7841
celo_rating: 1158.91
family_friendly_score: 0.0
submission_type: basic
model_repo: Sao10K/L3-RP-v2
model_architecture: LlamaForCausalLM
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: L3-RP-v2
is_internal_developer: False
language_model: Sao10K/L3-RP-v2
model_size: 8B
ranking_group: single
us_pacific_date: 2024-05-19
win_ratio: 0.48969522857856607
generation_params: {'temperature': 0.9, 'top_p': 0.8, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': 'Example Conversation:\n{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': True}
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': True, 'user_template': '{user_name}: {message}\n'}
model_eval_status: success
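For reference, win_ratio is num_wins / num_battles = 7841 / 16012 ≈ 0.4897, and model_num_parameters (~8.03e9) is consistent with the stated 8B model_size. The formatter above uses Llama-3-style chat headers; the sketch below illustrates how such templates could be combined into a single prompt string. It is a minimal, hypothetical assembly: the bot persona, user name, and messages are invented, and the real pipeline also applies truncation (truncate_by_message, max_input_tokens) that is not reproduced here.

    # Minimal sketch of applying the submission's formatter templates.
    # The persona, user name, and messages are invented for illustration only.
    formatter = {
        "memory_template": "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n####\n",
        "prompt_template": "Example Conversation:\n{prompt}<|eot_id|>",
        "bot_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>",
        "user_template": "<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>",
        "response_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:",
    }

    def build_prompt(bot_name, user_name, memory, example_prompt, turns):
        """Assemble one prompt string; `turns` is a list of (role, message) pairs."""
        parts = [formatter["memory_template"].format(bot_name=bot_name, memory=memory),
                 formatter["prompt_template"].format(prompt=example_prompt)]
        for role, message in turns:
            if role == "user":
                parts.append(formatter["user_template"].format(user_name=user_name, message=message))
            else:
                parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        parts.append(formatter["response_template"].format(bot_name=bot_name))
        return "".join(parts)

    print(build_prompt("Aria", "User", "A cheerful travel guide.",
                       "Aria: Welcome aboard!", [("user", "Where should we go first?")]))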
Resubmit model
Running pipeline stage MKMLizer
Starting job with name sao10k-l3-rp-v2-v2-mkmlizer
Waiting for job on sao10k-l3-rp-v2-v2-mkmlizer to finish
sao10k-l3-rp-v2-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
sao10k-l3-rp-v2-v2-mkmlizer: ║ _____ __ __ ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ /___/ ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ Version: 0.8.14 ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ https://mk1.ai ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ The license key for the current software has been verified as ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ belonging to: ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ Chai Research Corp. ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
sao10k-l3-rp-v2-v2-mkmlizer: ║ ║
sao10k-l3-rp-v2-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
sao10k-l3-rp-v2-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
sao10k-l3-rp-v2-v2-mkmlizer: warnings.warn(warning_message, FutureWarning)
sao10k-l3-rp-v2-v2-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... Loading 0: 97%|█████████▋| 282/291 [00:06<00:00, 87.68it/s]
sao10k-l3-rp-v2-v2-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
sao10k-l3-rp-v2-v2-mkmlizer: quantized model in 18.399s
sao10k-l3-rp-v2-v2-mkmlizer: Processed model Sao10K/L3-RP-v2 in 33.674s
sao10k-l3-rp-v2-v2-mkmlizer: creating bucket guanaco-mkml-models
sao10k-l3-rp-v2-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
sao10k-l3-rp-v2-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/sao10k-l3-rp-v2-v2
sao10k-l3-rp-v2-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/sao10k-l3-rp-v2-v2/flywheel_model.0.safetensors
sao10k-l3-rp-v2-v2-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
sao10k-l3-rp-v2-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v2-v2-mkmlizer: warnings.warn(
sao10k-l3-rp-v2-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v2-v2-mkmlizer: warnings.warn(
sao10k-l3-rp-v2-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v2-v2-mkmlizer: warnings.warn(
sao10k-l3-rp-v2-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
sao10k-l3-rp-v2-v2-mkmlizer: return self.fget.__get__(instance, owner)()
sao10k-l3-rp-v2-v2-mkmlizer: creating bucket guanaco-reward-models
sao10k-l3-rp-v2-v2-mkmlizer: Bucket 's3://guanaco-reward-models/' created
sao10k-l3-rp-v2-v2-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/sao10k-l3-rp-v2-v2_reward
sao10k-l3-rp-v2-v2-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/sao10k-l3-rp-v2-v2_reward/config.json
sao10k-l3-rp-v2-v2-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/sao10k-l3-rp-v2-v2_reward/special_tokens_map.json
sao10k-l3-rp-v2-v2-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/sao10k-l3-rp-v2-v2_reward/tokenizer_config.json
sao10k-l3-rp-v2-v2-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/sao10k-l3-rp-v2-v2_reward/vocab.json
sao10k-l3-rp-v2-v2-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/sao10k-l3-rp-v2-v2_reward/merges.txt
sao10k-l3-rp-v2-v2-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/sao10k-l3-rp-v2-v2_reward/tokenizer.json
sao10k-l3-rp-v2-v2-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/sao10k-l3-rp-v2-v2_reward/reward.tensors
Job sao10k-l3-rp-v2-v2-mkmlizer completed after 62.72s with status: succeeded
Stopping job with name sao10k-l3-rp-v2-v2-mkmlizer
Pipeline stage MKMLizer completed in 65.09s
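The reward model uploaded above (ChaiML/reward_gpt2_medium_preference_24m_e2) is used together with best_of: 16 sampling: the base model draws multiple candidate replies and the reward model re-ranks them. Below is a minimal best-of-N re-ranking sketch, not the actual Chai serving code. It assumes the reward checkpoint loads with a standard transformers sequence-classification head producing a single preference score, and that `context` has already been formatted with the reward_formatter templates; the candidate list is a placeholder for samples drawn with the submission's generation_params.

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    # Assumption: the reward checkpoint exposes a single-score classification head.
    reward_tok = AutoTokenizer.from_pretrained("ChaiML/reward_gpt2_medium_preference_24m_e2")
    reward_model = AutoModelForSequenceClassification.from_pretrained(
        "ChaiML/reward_gpt2_medium_preference_24m_e2"
    )

    def pick_best(context: str, candidates: list[str]) -> str:
        """Score each candidate reply with the reward model and return the highest-scoring one."""
        scores = []
        for reply in candidates:
            inputs = reward_tok(context + reply, return_tensors="pt",
                                truncation=True, max_length=1024)
            with torch.no_grad():
                logits = reward_model(**inputs).logits
            scores.append(float(logits[0, 0]))  # first logit as preference score (single-head assumption)
        return candidates[scores.index(max(scores))]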
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service sao10k-l3-rp-v2-v2
Waiting for inference service sao10k-l3-rp-v2-v2 to be ready
Inference service sao10k-l3-rp-v2-v2 ready after 40.206552505493164s
Pipeline stage ISVCDeployer completed in 46.89s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.1851398944854736s
Received healthy response to inference request in 1.2783153057098389s
Received healthy response to inference request in 1.2449753284454346s
Received healthy response to inference request in 1.3319504261016846s
Received healthy response to inference request in 1.2949845790863037s
5 requests
0 failed requests
5th percentile: 1.2516433238983153
10th percentile: 1.2583113193511963
20th percentile: 1.271647310256958
30th percentile: 1.2816491603851319
40th percentile: 1.2883168697357177
50th percentile: 1.2949845790863037
60th percentile: 1.3097709178924561
70th percentile: 1.3245572566986084
80th percentile: 1.5025883197784426
90th percentile: 1.8438641071319581
95th percentile: 2.0145020008087156
99th percentile: 2.151012315750122
mean time: 1.467073106765747
Pipeline stage StressChecker completed in 7.94s
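The StressChecker statistics above can be reproduced from the five reported latencies with numpy's default linear-interpolation percentile; a quick verification sketch (not part of the pipeline):

    import numpy as np

    # The five healthy-response latencies reported by StressChecker, in seconds.
    latencies = [2.1851398944854736, 1.2783153057098389, 1.2449753284454346,
                 1.3319504261016846, 1.2949845790863037]

    for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{p}th percentile: {np.percentile(latencies, p)}")
    print("mean time:", np.mean(latencies))  # ~1.467073106765747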
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.04s
Running M-Eval for topic stay_in_character
Running pipeline stage DaemonicSafetyScorer
M-Eval Dataset for topic stay_in_character is loaded
Pipeline stage DaemonicSafetyScorer completed in 0.05s
sao10k-l3-rp-v2_v2 status is now deployed due to DeploymentManager action
sao10k-l3-rp-v2_v2 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of sao10k-l3-rp-v2_v2
Running pipeline stage ISVCDeleter
Checking if service sao10k-l3-rp-v2-v2 is running
Tearing down inference service sao10k-l3-rp-v2-v2
Tore down service sao10k-l3-rp-v2-v2
Pipeline stage ISVCDeleter completed in 3.59s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key sao10k-l3-rp-v2-v2/config.json from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v2-v2/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v2-v2/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v2-v2/tokenizer.json from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v2-v2/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key sao10k-l3-rp-v2-v2_reward/config.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v2-v2_reward/merges.txt from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v2-v2_reward/reward.tensors from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v2-v2_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v2-v2_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v2-v2_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v2-v2_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 2.32s
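The MKMLModelDeleter stage removes each uploaded object from the two S3 buckets listed above. A minimal sketch of that cleanup with boto3 (illustrative only; the actual deleter implementation is not shown in this log, and credentials/endpoint configuration are assumed to come from the environment):

    import boto3

    s3 = boto3.client("s3")

    # Keys taken from the deletion log above.
    model_keys = ["config.json", "flywheel_model.0.safetensors", "special_tokens_map.json",
                  "tokenizer.json", "tokenizer_config.json"]
    reward_keys = ["config.json", "merges.txt", "reward.tensors", "special_tokens_map.json",
                   "tokenizer.json", "tokenizer_config.json", "vocab.json"]

    for key in model_keys:
        s3.delete_object(Bucket="guanaco-mkml-models", Key=f"sao10k-l3-rp-v2-v2/{key}")
    for key in reward_keys:
        s3.delete_object(Bucket="guanaco-reward-models", Key=f"sao10k-l3-rp-v2-v2_reward/{key}")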
sao10k-l3-rp-v2_v2 status is now torndown due to DeploymentManager action