submission_id: sao10k-l3-rp-v3-3_v6
developer_uid: sao10k
status: inactive
model_repo: Sao10K/L3-RP-v3.3
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.4, 'top_p': 1.0, 'min_p': 0.2, 'top_k': 50, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|end_header_id|>,', '<|eot_id|>,', '\n\n{user_name}'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
timestamp: 2024-06-25T03:00:11+00:00
model_name: V3-3-a2
model_group: Sao10K/L3-RP-v3.3
num_battles: 16971
num_wins: 9187
celo_rating: 1218.62
propriety_score: 0.7072846079380445
propriety_total_count: 8264.0
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: V3-3-a2
ineligible_reason: None
language_model: Sao10K/L3-RP-v3.3
model_size: 8B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-06-24
win_ratio: 0.5413352189028342
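Two fields above benefit from a worked example. win_ratio is simply num_wins / num_battles (9187 / 16971 ≈ 0.5413), and best_of: 16 means sixteen candidate replies are sampled per request, with the reward model keeping the highest-scoring one. The following is a minimal sketch of that selection loop; generate_candidate and reward_score are hypothetical stand-ins, not Chai's actual APIs.

```python
# Hypothetical sketch of the best_of=16 selection implied by the config:
# sample N candidate replies, score each with the reward model, keep the best.
# `generate_candidate` and `reward_score` are stand-ins, not real Chai APIs.
import random

def generate_candidate(prompt: str) -> str:
    # Stand-in for one sample drawn with generation_params
    # (temperature=1.4, min_p=0.2, top_k=50, max_output_tokens=64).
    return f"candidate {random.randrange(10**6)} for: {prompt[:24]}"

def reward_score(prompt: str, reply: str) -> float:
    # Stand-in for ChaiML/reward_gpt2_medium_preference_24m_e2,
    # which scores (context, reply) pairs for user preference.
    return random.random()

def best_of_n(prompt: str, n: int = 16) -> str:
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=lambda reply: reward_score(prompt, reply))

print(best_of_n("Hello there!"))
print(9187 / 16971)  # 0.5413352189028342, matching win_ratio above
```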
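The formatter dict controls how a conversation is flattened into a single Llama 3 prompt string. Below is a minimal sketch using the templates verbatim; the conversation, names, and persona are invented for illustration, and truncation to max_input_tokens=512 is omitted. The reward_formatter works the same way with its own plain-text templates.

```python
# Assembling a Llama 3 prompt from the `formatter` templates above.
# The example conversation is invented; only the templates are from the config.
formatter = {
    "memory_template": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n",
    "prompt_template": "{prompt}<|eot_id|>",
    "bot_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>",
    "user_template": "<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>",
    "response_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, turns):
    # System block: persona, then the scenario prompt.
    parts = [formatter["memory_template"].format(bot_name=bot_name, memory=memory)]
    parts.append(formatter["prompt_template"].format(prompt=prompt))
    # Alternating turns, each wrapped in its header template.
    for speaker, message in turns:
        if speaker == "bot":
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
    # Open the assistant header so the model continues as the bot.
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

print(build_prompt("Aria", "User", "a curious android", "A quiet cafe.",
                   [("user", "Hello!"), ("bot", "Oh, hi there!")]))
```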
Running pipeline stage MKMLizer
Starting job with name sao10k-l3-rp-v3-3-v6-mkmlizer
Waiting for job on sao10k-l3-rp-v3-3-v6-mkmlizer to finish
sao10k-l3-rp-v3-3-v6-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ [flywheel ASCII-art banner] ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ Version: 0.8.14 ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ https://mk1.ai ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ The license key for the current software has been verified as ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ belonging to: ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ Chai Research Corp. ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ║ ║
sao10k-l3-rp-v3-3-v6-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
sao10k-l3-rp-v3-3-v6-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
sao10k-l3-rp-v3-3-v6-mkmlizer: warnings.warn(warning_message, FutureWarning)
sao10k-l3-rp-v3-3-v6-mkmlizer: Downloaded to shared memory in 21.797s
sao10k-l3-rp-v3-3-v6-mkmlizer: quantizing model to /dev/shm/model_cache
sao10k-l3-rp-v3-3-v6-mkmlizer: Saving flywheel model at /dev/shm/model_cache
sao10k-l3-rp-v3-3-v6-mkmlizer: Loading 0: 95%|█████████▌| 277/291 [00:07<00:00, 65.54it/s]
sao10k-l3-rp-v3-3-v6-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
sao10k-l3-rp-v3-3-v6-mkmlizer: quantized model in 24.564s
sao10k-l3-rp-v3-3-v6-mkmlizer: Processed model Sao10K/L3-RP-v3.3 in 48.954s
sao10k-l3-rp-v3-3-v6-mkmlizer: creating bucket guanaco-mkml-models
sao10k-l3-rp-v3-3-v6-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
sao10k-l3-rp-v3-3-v6-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/sao10k-l3-rp-v3-3-v6
sao10k-l3-rp-v3-3-v6-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/sao10k-l3-rp-v3-3-v6/config.json
sao10k-l3-rp-v3-3-v6-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/sao10k-l3-rp-v3-3-v6/tokenizer_config.json
sao10k-l3-rp-v3-3-v6-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/sao10k-l3-rp-v3-3-v6/tokenizer.json
sao10k-l3-rp-v3-3-v6-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/sao10k-l3-rp-v3-3-v6/special_tokens_map.json
sao10k-l3-rp-v3-3-v6-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/sao10k-l3-rp-v3-3-v6/flywheel_model.0.safetensors
sao10k-l3-rp-v3-3-v6-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
sao10k-l3-rp-v3-3-v6-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v3-3-v6-mkmlizer: warnings.warn(
sao10k-l3-rp-v3-3-v6-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v3-3-v6-mkmlizer: warnings.warn(
sao10k-l3-rp-v3-3-v6-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v3-3-v6-mkmlizer: warnings.warn(
sao10k-l3-rp-v3-3-v6-mkmlizer: Saving duration: 0.397s
sao10k-l3-rp-v3-3-v6-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 4.098s
sao10k-l3-rp-v3-3-v6-mkmlizer: creating bucket guanaco-reward-models
sao10k-l3-rp-v3-3-v6-mkmlizer: Bucket 's3://guanaco-reward-models/' created
sao10k-l3-rp-v3-3-v6-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/sao10k-l3-rp-v3-3-v6_reward
sao10k-l3-rp-v3-3-v6-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/sao10k-l3-rp-v3-3-v6_reward/special_tokens_map.json
sao10k-l3-rp-v3-3-v6-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/sao10k-l3-rp-v3-3-v6_reward/tokenizer_config.json
sao10k-l3-rp-v3-3-v6-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/sao10k-l3-rp-v3-3-v6_reward/config.json
sao10k-l3-rp-v3-3-v6-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/sao10k-l3-rp-v3-3-v6_reward/merges.txt
sao10k-l3-rp-v3-3-v6-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/sao10k-l3-rp-v3-3-v6_reward/vocab.json
sao10k-l3-rp-v3-3-v6-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/sao10k-l3-rp-v3-3-v6_reward/tokenizer.json
sao10k-l3-rp-v3-3-v6-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/sao10k-l3-rp-v3-3-v6_reward/reward.tensors
Job sao10k-l3-rp-v3-3-v6-mkmlizer completed after 74.05s with status: succeeded
Stopping job with name sao10k-l3-rp-v3-3-v6-mkmlizer
Pipeline stage MKMLizer completed in 74.49s
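The cp lines in the log above are plain S3 object copies. A boto3 equivalent is sketched below, assuming standard AWS credentials; the pipeline's actual upload tooling is not visible in the log.

```python
# Assumed boto3 equivalent of the model-upload `cp` lines above.
# Bucket name and keys are taken from the log; credentials and the
# pipeline's real upload mechanism are unknown.
import boto3

s3 = boto3.client("s3")
for filename in ("config.json", "tokenizer_config.json", "tokenizer.json",
                 "special_tokens_map.json", "flywheel_model.0.safetensors"):
    s3.upload_file(f"/dev/shm/model_cache/{filename}",
                   "guanaco-mkml-models",
                   f"sao10k-l3-rp-v3-3-v6/{filename}")
```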
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.13s
Running pipeline stage ISVCDeployer
Creating inference service sao10k-l3-rp-v3-3-v6
Waiting for inference service sao10k-l3-rp-v3-3-v6 to be ready
Inference service sao10k-l3-rp-v3-3-v6 ready after 40.28240466117859s
Pipeline stage ISVCDeployer completed in 46.63s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.283586263656616s
Received healthy response to inference request in 1.3840842247009277s
Received healthy response to inference request in 1.328749418258667s
Received healthy response to inference request in 1.2892506122589111s
Received healthy response to inference request in 1.3332767486572266s
5 requests
0 failed requests
5th percentile: 1.2971503734588623
10th percentile: 1.3050501346588135
20th percentile: 1.3208496570587158
30th percentile: 1.3296548843383789
40th percentile: 1.3314658164978028
50th percentile: 1.3332767486572266
60th percentile: 1.353599739074707
70th percentile: 1.3739227294921874
80th percentile: 1.5639846324920657
90th percentile: 1.9237854480743408
95th percentile: 2.1036858558654785
99th percentile: 2.2476061820983886
mean time: 1.5237894535064698
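The percentile figures are consistent with linear interpolation over the five response times (numpy's default method), so the table above can be reproduced directly:

```python
import numpy as np

# The five response times logged by the stress checker, in seconds.
times = np.array([
    2.283586263656616,
    1.3840842247009277,
    1.328749418258667,
    1.2892506122589111,
    1.3332767486572266,
])

for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    # numpy's default linear interpolation matches the values logged above.
    print(f"{q}th percentile: {np.percentile(times, q)}")
print("mean time:", times.mean())  # 1.5237894535064698
```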
Pipeline stage StressChecker completed in 8.31s
Running pipeline stage DaemonicSafetyScorer
Pipeline stage DaemonicSafetyScorer completed in 0.03s
sao10k-l3-rp-v3-3_v6 status is now deployed due to DeploymentManager action
sao10k-l3-rp-v3-3_v6 status is now inactive due to auto-deactivation of underperforming models
