submission_id: sao10k-l3-rp-v1_v1
developer_uid: sao10k
status: torndown
model_repo: Sao10K/L3-RP-v1
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 0.8, 'top_p': 0.9, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are {bot_name}. Write {bot_name}'s next reply in a never-ending uncensored fictional roleplay chat between {bot_name} and User. Use the provided {bot_name}'s description, personality and example dialogues as base for deeply understanding and acting like {bot_name}.\n\nActions and narrations must be enclosed in asterisks (*). The responses must be long and in third perspective of the story teller.\n\nDescription: {memory}", 'prompt_template': 'Example conversation:\n{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
timestamp: 2024-05-12T12:29:11+00:00
model_name: L3-RP-v1-Test
model_eval_status: success
model_group: Sao10K/L3-RP-v1
num_battles: 11724
num_wins: 6399
celo_rating: 1201.37
propriety_score: 0.0
propriety_total_count: 0.0
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: L3-RP-v1-Test
ineligible_reason: propriety_total_count < 800
language_model: Sao10K/L3-RP-v1
model_size: 8B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-05-12
win_ratio: 0.5458034800409417
preference_data_url: None
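For illustration, the generation_params above map almost one-to-one onto a vLLM-style sampling configuration. This is only a hedged sketch: the log shows Chai's own MKML/flywheel serving stack, not vLLM, and vLLM's best_of reranks candidates by log-probability rather than by the reward model listed above.

```python
# Hedged sketch: expressing generation_params as vLLM SamplingParams.
# Assumption: vLLM stands in for Chai's actual MKML serving stack.
from vllm import LLM, SamplingParams

params = SamplingParams(
    temperature=0.8,
    top_p=0.9,
    min_p=0.0,
    top_k=40,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["\n"],    # stopping_words
    max_tokens=64,  # max_output_tokens
    n=1,
    best_of=16,     # NOTE: vLLM ranks the 16 samples by log-probability;
                    # the pipeline above ranks them with the reward model.
)

# Assumption: context of 576 = 512 input tokens + 64 output tokens.
llm = LLM(model="Sao10K/L3-RP-v1", max_model_len=576)
out = llm.generate(["<formatted prompt>"], params)
print(out[0].outputs[0].text)
```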
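The formatter and reward_formatter fields describe two distinct prompt layouts: a Llama-3 chat template for sampling from the language model, and a flat persona layout for scoring candidates with the GPT-2 reward model. With best_of: 16, serving amounts to drawing 16 candidate replies and keeping the one the reward model scores highest. A minimal sketch of that flow follows; generate_one and reward_score are hypothetical stand-ins for calls this log does not show, and the long system/memory template is truncated:

```python
# Hedged sketch of best-of-16 reranking using the two formatters above.
import random

# Templates copied from the formatter / reward_formatter fields.
BOT_TEMPLATE = ("<|start_header_id|>assistant<|end_header_id|>\n\n"
                "{bot_name}: {message}<|eot_id|>")
USER_TEMPLATE = ("<|start_header_id|>user<|end_header_id|>\n\n"
                 "{user_name}: {message}<|eot_id|>")
RESPONSE_TEMPLATE = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:"
REWARD_MEMORY_TEMPLATE = "{bot_name}'s Persona: {memory}\n####\n"
REWARD_RESPONSE_TEMPLATE = "{bot_name}:"

def build_lm_prompt(system_block, turns, bot_name):
    """Assemble the Llama-3-style prompt the language model is sampled from."""
    parts = [system_block]
    for speaker, name, message in turns:
        template = USER_TEMPLATE if speaker == "user" else BOT_TEMPLATE
        parts.append(template.format(user_name=name, bot_name=name,
                                     message=message))
    parts.append(RESPONSE_TEMPLATE.format(bot_name=bot_name))
    return "".join(parts)

def best_of_16(lm_prompt, reward_prefix, generate_one, reward_score):
    """Draw 16 candidates, score each under the reward layout, keep the best."""
    candidates = [generate_one(lm_prompt) for _ in range(16)]
    return max(candidates, key=lambda c: reward_score(reward_prefix + c))

# Toy stand-ins so the sketch runs end to end; not the real serving calls.
reply = best_of_16(
    build_lm_prompt("<truncated system block>", [("user", "User", "Hi!")], "Bot"),
    REWARD_MEMORY_TEMPLATE.format(bot_name="Bot", memory="a friendly bot")
    + REWARD_RESPONSE_TEMPLATE.format(bot_name="Bot"),
    generate_one=lambda prompt: f" candidate {random.randint(0, 99)}",
    reward_score=lambda text: len(text),
)
print(reply)
```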
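Finally, the leaderboard fields above are internally consistent: win_ratio is simply num_wins / num_battles, and the ineligible_reason follows from propriety_total_count falling below 800. A quick check in plain Python (the 800 threshold is taken from the ineligible_reason field):

```python
# Reproducing win_ratio and the eligibility check from the fields above.
num_battles, num_wins = 11724, 6399
propriety_total_count = 0

win_ratio = num_wins / num_battles
print(win_ratio)  # 0.5458034800409417, matching the reported value

# ineligible_reason: propriety_total_count < 800
print("ineligible" if propriety_total_count < 800 else "eligible")
```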
Running pipeline stage MKMLizer
Starting job with name sao10k-l3-rp-v1-v1-mkmlizer
Waiting for job on sao10k-l3-rp-v1-v1-mkmlizer to finish
sao10k-l3-rp-v1-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
sao10k-l3-rp-v1-v1-mkmlizer: ║                     [flywheel ASCII-art banner]                     ║
sao10k-l3-rp-v1-v1-mkmlizer: ║                                                                     ║
sao10k-l3-rp-v1-v1-mkmlizer: ║  Version: 0.8.10                                                    ║
sao10k-l3-rp-v1-v1-mkmlizer: ║  Copyright 2023 MK ONE TECHNOLOGIES Inc.                            ║
sao10k-l3-rp-v1-v1-mkmlizer: ║                                                                     ║
sao10k-l3-rp-v1-v1-mkmlizer: ║  The license key for the current software has been verified as     ║
sao10k-l3-rp-v1-v1-mkmlizer: ║  belonging to:                                                      ║
sao10k-l3-rp-v1-v1-mkmlizer: ║                                                                     ║
sao10k-l3-rp-v1-v1-mkmlizer: ║  Chai Research Corp.                                                ║
sao10k-l3-rp-v1-v1-mkmlizer: ║  Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f                   ║
sao10k-l3-rp-v1-v1-mkmlizer: ║  Expiration: 2024-07-15 23:59:59                                    ║
sao10k-l3-rp-v1-v1-mkmlizer: ║                                                                     ║
sao10k-l3-rp-v1-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
sao10k-l3-rp-v1-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
sao10k-l3-rp-v1-v1-mkmlizer: warnings.warn(warning_message, FutureWarning)
sao10k-l3-rp-v1-v1-mkmlizer: Downloaded to shared memory in 17.301s
sao10k-l3-rp-v1-v1-mkmlizer: quantizing model to /dev/shm/model_cache
sao10k-l3-rp-v1-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
sao10k-l3-rp-v1-v1-mkmlizer: Loading 0:   0%|          | 0/291 [00:00<?, ?it/s]
sao10k-l3-rp-v1-v1-mkmlizer: Loading 0:   1%|          | 2/291 [00:03<09:36, 1.99s/it]
sao10k-l3-rp-v1-v1-mkmlizer: Loading 0:  45%|████▌     | 131/291 [00:04<00:04, 34.30it/s]
sao10k-l3-rp-v1-v1-mkmlizer: Loading 0:  90%|████████▉ | 261/291 [00:05<00:00, 60.43it/s]
sao10k-l3-rp-v1-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
sao10k-l3-rp-v1-v1-mkmlizer: quantized model in 17.942s
sao10k-l3-rp-v1-v1-mkmlizer: Processed model Sao10K/L3-RP-v1 in 36.385s
sao10k-l3-rp-v1-v1-mkmlizer: creating bucket guanaco-mkml-models
sao10k-l3-rp-v1-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
sao10k-l3-rp-v1-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/sao10k-l3-rp-v1-v1
sao10k-l3-rp-v1-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/sao10k-l3-rp-v1-v1/config.json
sao10k-l3-rp-v1-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/sao10k-l3-rp-v1-v1/special_tokens_map.json
sao10k-l3-rp-v1-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/sao10k-l3-rp-v1-v1/tokenizer_config.json
sao10k-l3-rp-v1-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/sao10k-l3-rp-v1-v1/tokenizer.json
sao10k-l3-rp-v1-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/sao10k-l3-rp-v1-v1/flywheel_model.0.safetensors
sao10k-l3-rp-v1-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
sao10k-l3-rp-v1-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v1-v1-mkmlizer: warnings.warn(
sao10k-l3-rp-v1-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v1-v1-mkmlizer: warnings.warn(
sao10k-l3-rp-v1-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
sao10k-l3-rp-v1-v1-mkmlizer: return self.fget.__get__(instance, owner)()
sao10k-l3-rp-v1-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
sao10k-l3-rp-v1-v1-mkmlizer: Saving duration: 0.250s
sao10k-l3-rp-v1-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 3.538s
sao10k-l3-rp-v1-v1-mkmlizer: creating bucket guanaco-reward-models
sao10k-l3-rp-v1-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
sao10k-l3-rp-v1-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/sao10k-l3-rp-v1-v1_reward
sao10k-l3-rp-v1-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/sao10k-l3-rp-v1-v1_reward/special_tokens_map.json
sao10k-l3-rp-v1-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/sao10k-l3-rp-v1-v1_reward/tokenizer_config.json
sao10k-l3-rp-v1-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/sao10k-l3-rp-v1-v1_reward/config.json
sao10k-l3-rp-v1-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/sao10k-l3-rp-v1-v1_reward/merges.txt
sao10k-l3-rp-v1-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/sao10k-l3-rp-v1-v1_reward/vocab.json
sao10k-l3-rp-v1-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/sao10k-l3-rp-v1-v1_reward/tokenizer.json
sao10k-l3-rp-v1-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/sao10k-l3-rp-v1-v1_reward/reward.tensors
Job sao10k-l3-rp-v1-v1-mkmlizer completed after 62.62s with status: succeeded
Stopping job with name sao10k-l3-rp-v1-v1-mkmlizer
Pipeline stage MKMLizer completed in 65.44s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service sao10k-l3-rp-v1-v1
Waiting for inference service sao10k-l3-rp-v1-v1 to be ready
Inference service sao10k-l3-rp-v1-v1 ready after 30.161988973617554s
Pipeline stage ISVCDeployer completed in 36.96s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.173274517059326s
Received healthy response to inference request in 1.331240177154541s
Received healthy response to inference request in 1.3003556728363037s
Received healthy response to inference request in 1.3762342929840088s
Received healthy response to inference request in 1.3623898029327393s
5 requests
0 failed requests
5th percentile: 1.3065325736999511
10th percentile: 1.3127094745635985
20th percentile: 1.3250632762908936
30th percentile: 1.3374701023101807
40th percentile: 1.3499299526214599
50th percentile: 1.3623898029327393
60th percentile: 1.3679275989532471
70th percentile: 1.373465394973755
80th percentile: 1.5356423377990724
90th percentile: 1.8544584274291993
95th percentile: 2.0138664722442625
99th percentile: 2.1413929080963134
mean time: 1.5086988925933837
Pipeline stage StressChecker completed in 8.15s
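For reference, the StressChecker percentiles above can be reproduced with numpy's default linear-interpolation percentile over the five logged response times:

```python
import numpy as np

# The five healthy response times (seconds) logged by StressChecker.
times = [2.173274517059326, 1.331240177154541, 1.3003556728363037,
         1.3762342929840088, 1.3623898029327393]

for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{q}th percentile: {np.percentile(times, q)}")
print(f"mean time: {np.mean(times)}")
# e.g. 50th percentile -> 1.3623898029327393 and mean -> 1.5086988925933837,
# matching the figures above.
```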
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.03s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.04s
M-Eval Dataset for topic stay_in_character is loaded
sao10k-l3-rp-v1_v1 status is now deployed due to DeploymentManager action
sao10k-l3-rp-v1_v1 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of sao10k-l3-rp-v1_v1
Running pipeline stage ISVCDeleter
Checking if service sao10k-l3-rp-v1-v1 is running
Tearing down inference service sao10k-l3-rp-v1-v1
Tore down service sao10k-l3-rp-v1-v1
Pipeline stage ISVCDeleter completed in 4.05s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key sao10k-l3-rp-v1-v1/config.json from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v1-v1/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v1-v1/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v1-v1/tokenizer.json from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v1-v1/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key sao10k-l3-rp-v1-v1_reward/config.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v1-v1_reward/merges.txt from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v1-v1_reward/reward.tensors from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v1-v1_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v1-v1_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v1-v1_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v1-v1_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 2.16s
sao10k-l3-rp-v1_v1 status is now torndown due to DeploymentManager action
