developer_uid: sao10k
submission_id: sao10k-l3-rp-v3-2_v2
model_name: V3-Expr1-Beta
model_group: Sao10K/L3-RP-v3.2
status: torndown
timestamp: 2024-06-05T04:51:20+00:00
num_battles: 13984
num_wins: 7887
celo_rating: 1222.27
family_friendly_score: 0.0
submission_type: basic
model_repo: Sao10K/L3-RP-v3.2
model_architecture: LlamaForCausalLM
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: V3-Expr1-Beta
is_internal_developer: False
language_model: Sao10K/L3-RP-v3.2
model_size: 8B
ranking_group: single
us_pacific_date: 2024-06-04
win_ratio: 0.5640017162471396
generation_params: {'temperature': 1.12, 'top_p': 0.95, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|end_header_id|>,', '<|eot_id|>,', '\n\n{user_name}'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
model_eval_status: success
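For context, the generation_params above are standard sampling settings. A minimal sketch of applying them at inference time; the serving engine is not named in this log, so vLLM (and the use of only the first stop word) is an assumption:

```python
# Hedged sketch: applying the submission's generation_params with vLLM.
# The actual serving stack behind the inference service is not shown in
# this log, so vLLM is an assumption.
from vllm import LLM, SamplingParams

sampling = SamplingParams(
    temperature=1.12,        # generation_params['temperature']
    top_p=0.95,
    min_p=0.05,
    top_k=80,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["\n"],             # first stopping word; template stop tokens omitted
    max_tokens=64,           # max_output_tokens
    n=16,                    # stand-in for best_of: 16
)

llm = LLM(model="Sao10K/L3-RP-v3.2")
outputs = llm.generate(["<rendered prompt>"], sampling)
print(outputs[0].outputs[0].text)
```

The formatter entry is a set of plain Python format strings that assemble a Llama-3-style chat prompt. A small rendering sketch; the concatenation order and the example names/messages are assumptions:

```python
# Rendering a model input from the formatter templates above.
# bot_name/user_name/messages are made-up example values, and the exact
# assembly order inside the serving stack is an assumption.
formatter = {
    "memory_template": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n",
    "prompt_template": "{prompt}<|eot_id|>",
    "user_template": "<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>",
    "bot_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>",
    "response_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:",
}

def render(memory, prompt, turns, bot_name="Aria", user_name="Sam"):
    """Concatenate the templates into a single model input string."""
    out = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
    out += formatter["prompt_template"].format(prompt=prompt)
    for role, message in turns:  # role is "user" or "bot"
        out += formatter[f"{role}_template"].format(
            bot_name=bot_name, user_name=user_name, message=message)
    return out + formatter["response_template"].format(bot_name=bot_name)

print(render("A cheerful pilot.", "Scene: a hangar at dawn.",
             [("user", "Ready for takeoff?")]))
```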
Resubmit model
Running pipeline stage MKMLizer
Starting job with name sao10k-l3-rp-v3-2-v2-mkmlizer
Waiting for job on sao10k-l3-rp-v3-2-v2-mkmlizer to finish
sao10k-l3-rp-v3-2-v2-mkmlizer: ╔══════════════════════════════════════════════════════╗
sao10k-l3-rp-v3-2-v2-mkmlizer: ║  [flywheel ASCII-art logo]                           ║
sao10k-l3-rp-v3-2-v2-mkmlizer: ║                                                      ║
sao10k-l3-rp-v3-2-v2-mkmlizer: ║  Version: 0.8.14                                     ║
sao10k-l3-rp-v3-2-v2-mkmlizer: ║  Copyright 2023 MK ONE TECHNOLOGIES Inc.             ║
sao10k-l3-rp-v3-2-v2-mkmlizer: ║  https://mk1.ai                                      ║
sao10k-l3-rp-v3-2-v2-mkmlizer: ║                                                      ║
sao10k-l3-rp-v3-2-v2-mkmlizer: ║  The license key for the current software has been   ║
sao10k-l3-rp-v3-2-v2-mkmlizer: ║  verified as belonging to:                           ║
sao10k-l3-rp-v3-2-v2-mkmlizer: ║                                                      ║
sao10k-l3-rp-v3-2-v2-mkmlizer: ║  Chai Research Corp.                                 ║
sao10k-l3-rp-v3-2-v2-mkmlizer: ║  Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f    ║
sao10k-l3-rp-v3-2-v2-mkmlizer: ║  Expiration: 2024-07-15 23:59:59                     ║
sao10k-l3-rp-v3-2-v2-mkmlizer: ╚══════════════════════════════════════════════════════╝
sao10k-l3-rp-v3-2-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
sao10k-l3-rp-v3-2-v2-mkmlizer: warnings.warn(warning_message, FutureWarning)
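The FutureWarning above is huggingface_hub deprecating `list_files_info`. A sketch of the replacement the warning itself points to; the mkmlizer's actual call site is not visible in this log:

```python
# The replacement the warning suggests: list repo files via `list_repo_tree`.
# Sketch only; mkmlizer's real call site is not shown in this log.
from huggingface_hub import HfApi

api = HfApi()
for entry in api.list_repo_tree("Sao10K/L3-RP-v3.2"):
    print(entry.path)  # entries are RepoFile/RepoFolder objects with .path
```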
sao10k-l3-rp-v3-2-v2-mkmlizer: Downloaded to shared memory in 13.711s
sao10k-l3-rp-v3-2-v2-mkmlizer: quantizing model to /dev/shm/model_cache
sao10k-l3-rp-v3-2-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
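"Downloaded to shared memory" plus the /dev/shm/model_cache path indicate the weights are staged in tmpfs before quantization. The mkmlizer is closed-source, so the following is only a generic equivalent of that staging step:

```python
# Generic equivalent of the staging step above: download the repo into tmpfs
# so the quantizer reads RAM-backed files. Not mkmlizer's actual code.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="Sao10K/L3-RP-v3.2",
    local_dir="/dev/shm/model_cache",  # tmpfs path from the log
)
print(f"weights staged at {path}")
```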
sao10k-l3-rp-v3-2-v2-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s]
sao10k-l3-rp-v3-2-v2-mkmlizer: Loading 0: 96%|█████████▌| 278/291 [00:06<00:00, 103.70it/s]
sao10k-l3-rp-v3-2-v2-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
sao10k-l3-rp-v3-2-v2-mkmlizer: quantized model in 20.897s
sao10k-l3-rp-v3-2-v2-mkmlizer: Processed model Sao10K/L3-RP-v3.2 in 35.546s
sao10k-l3-rp-v3-2-v2-mkmlizer: creating bucket guanaco-mkml-models
sao10k-l3-rp-v3-2-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
sao10k-l3-rp-v3-2-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/sao10k-l3-rp-v3-2-v2
sao10k-l3-rp-v3-2-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/sao10k-l3-rp-v3-2-v2/special_tokens_map.json
sao10k-l3-rp-v3-2-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/sao10k-l3-rp-v3-2-v2/tokenizer_config.json
sao10k-l3-rp-v3-2-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/sao10k-l3-rp-v3-2-v2/config.json
sao10k-l3-rp-v3-2-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/sao10k-l3-rp-v3-2-v2/tokenizer.json
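The `cp` lines above push the tokenizer and config files from the cache to S3. Expressed with boto3 (an assumption; the log does not identify the tool behind `cp`), the same uploads look like:

```python
# Hedged sketch of the S3 upload step; boto3 is an assumption, since the log
# does not say which client performs the `cp` operations.
import os
import boto3

s3 = boto3.client("s3")
bucket = "guanaco-mkml-models"
prefix = "sao10k-l3-rp-v3-2-v2"

for name in ("special_tokens_map.json", "tokenizer_config.json",
             "config.json", "tokenizer.json"):
    s3.upload_file(os.path.join("/dev/shm/model_cache", name),
                   bucket, f"{prefix}/{name}")
```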
sao10k-l3-rp-v3-2-v2-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
sao10k-l3-rp-v3-2-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v3-2-v2-mkmlizer: warnings.warn(
sao10k-l3-rp-v3-2-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v3-2-v2-mkmlizer: warnings.warn(
sao10k-l3-rp-v3-2-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v3-2-v2-mkmlizer: warnings.warn(
sao10k-l3-rp-v3-2-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
sao10k-l3-rp-v3-2-v2-mkmlizer: return self.fget.__get__(instance, owner)()
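The three FutureWarnings above all stem from passing `use_auth_token=` into `from_pretrained`; the fix transformers asks for is the `token=` keyword. A sketch with a placeholder token; loading via the Auto* classes is an assumption about how the reward model is loaded:

```python
# The rename the FutureWarnings request: use_auth_token= -> token=.
# "hf_..." is a placeholder credential; the Auto* classes are an assumption
# about how mkmlizer loads the reward model.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "ChaiML/reward_gpt2_medium_preference_24m_e2"
tokenizer = AutoTokenizer.from_pretrained(repo, token="hf_...")
model = AutoModelForSequenceClassification.from_pretrained(repo, token="hf_...")
```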
sao10k-l3-rp-v3-2-v2-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
sao10k-l3-rp-v3-2-v2-mkmlizer: Saving duration: 0.229s
sao10k-l3-rp-v3-2-v2-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 3.421s
sao10k-l3-rp-v3-2-v2-mkmlizer: creating bucket guanaco-reward-models
sao10k-l3-rp-v3-2-v2-mkmlizer: Bucket 's3://guanaco-reward-models/' created
sao10k-l3-rp-v3-2-v2-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/sao10k-l3-rp-v3-2-v2_reward
sao10k-l3-rp-v3-2-v2-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/sao10k-l3-rp-v3-2-v2_reward/config.json
sao10k-l3-rp-v3-2-v2-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/sao10k-l3-rp-v3-2-v2_reward/special_tokens_map.json
sao10k-l3-rp-v3-2-v2-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/sao10k-l3-rp-v3-2-v2_reward/merges.txt
sao10k-l3-rp-v3-2-v2-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/sao10k-l3-rp-v3-2-v2_reward/vocab.json
sao10k-l3-rp-v3-2-v2-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/sao10k-l3-rp-v3-2-v2_reward/tokenizer_config.json
sao10k-l3-rp-v3-2-v2-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/sao10k-l3-rp-v3-2-v2_reward/tokenizer.json
sao10k-l3-rp-v3-2-v2-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/sao10k-l3-rp-v3-2-v2_reward/reward.tensors
Job sao10k-l3-rp-v3-2-v2-mkmlizer completed after 63.55s with status: succeeded
Stopping job with name sao10k-l3-rp-v3-2-v2-mkmlizer
Pipeline stage MKMLizer completed in 68.94s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service sao10k-l3-rp-v3-2-v2
Waiting for inference service sao10k-l3-rp-v3-2-v2 to be ready
Inference service sao10k-l3-rp-v3-2-v2 ready after 30.153740644454956s
Pipeline stage ISVCDeployer completed in 38.01s
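At serving time, `best_of: 16` means each request fans out into 16 sampled completions and the reward model uploaded above decides which one is returned. A compact sketch of that selection loop; `generate_candidate` and `reward_score` are hypothetical stand-ins for the serving stack's real calls:

```python
# Best-of-N reranking as implied by best_of: 16 plus the reward model.
# generate_candidate / reward_score are hypothetical stand-ins; random values
# keep the sketch self-contained and runnable.
import random

def generate_candidate(prompt: str) -> str:
    # Stand-in for one sampled completion from Sao10K/L3-RP-v3.2.
    return f"candidate-{random.random():.3f}"

def reward_score(prompt: str, completion: str) -> float:
    # Stand-in for the GPT-2 reward model's scalar preference score.
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    """Sample n candidates and return the one the reward model ranks highest."""
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward_score(prompt, c))

print(best_of_n("Aria: Ready for takeoff?"))
```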
Running pipeline stage StressChecker
Received healthy response to inference request in 2.0874123573303223s
Received healthy response to inference request in 1.3123993873596191s
Received healthy response to inference request in 1.2997546195983887s
Received healthy response to inference request in 1.2934927940368652s
Received healthy response to inference request in 1.3426954746246338s
5 requests
0 failed requests
5th percentile: 1.2947451591491699
10th percentile: 1.2959975242614745
20th percentile: 1.298502254486084
30th percentile: 1.3022835731506348
40th percentile: 1.3073414802551269
50th percentile: 1.3123993873596191
60th percentile: 1.324517822265625
70th percentile: 1.3366362571716308
80th percentile: 1.4916388511657717
90th percentile: 1.789525604248047
95th percentile: 1.9384689807891844
99th percentile: 2.057623682022095
mean time: 1.4671509265899658
Pipeline stage StressChecker completed in 7.95s
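The StressChecker statistics above are plain linear-interpolation percentiles over the five response times; numpy reproduces them digit for digit:

```python
# Recomputing the StressChecker block from the five latencies above; numpy's
# default linear interpolation matches the printed values exactly
# (e.g. 5th percentile 1.2947451591491699, mean 1.4671509265899658).
import numpy as np

latencies = np.array([
    2.0874123573303223,
    1.3123993873596191,
    1.2997546195983887,
    1.2934927940368652,
    1.3426954746246338,
])

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print(f"mean time: {latencies.mean()}")
```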
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.03s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.04s
M-Eval Dataset for topic stay_in_character is loaded
sao10k-l3-rp-v3-2_v2 status is now deployed due to DeploymentManager action
sao10k-l3-rp-v3-2_v2 status is now inactive due to auto-deactivation of underperforming models
admin requested teardown of sao10k-l3-rp-v3-2_v2
Running pipeline stage ISVCDeleter
Checking if service sao10k-l3-rp-v3-2-v2 is running
Skipping teardown as no inference service was found
Pipeline stage ISVCDeleter completed in 4.51s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key sao10k-l3-rp-v3-2-v2/config.json from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v3-2-v2/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v3-2-v2/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v3-2-v2/tokenizer.json from bucket guanaco-mkml-models
Deleting key sao10k-l3-rp-v3-2-v2/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key sao10k-l3-rp-v3-2-v2_reward/config.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v3-2-v2_reward/merges.txt from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v3-2-v2_reward/reward.tensors from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v3-2-v2_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v3-2-v2_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v3-2-v2_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key sao10k-l3-rp-v3-2-v2_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 6.02s
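The MKMLModelDeleter stage removes each key individually. The same cleanup expressed as a boto3 prefix delete; boto3 is again an assumption, since the log's actual tooling is unnamed:

```python
# Hedged sketch of the MKMLModelDeleter stage as a prefix delete; boto3 is
# an assumption about the underlying tooling.
import boto3

s3 = boto3.client("s3")

def delete_prefix(bucket: str, prefix: str) -> None:
    """Delete every object under `prefix` in `bucket`."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            s3.delete_object(Bucket=bucket, Key=obj["Key"])

delete_prefix("guanaco-mkml-models", "sao10k-l3-rp-v3-2-v2/")
delete_prefix("guanaco-reward-models", "sao10k-l3-rp-v3-2-v2_reward/")
```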
sao10k-l3-rp-v3-2_v2 status is now torndown due to DeploymentManager action