submission_id: sao10k-l3-rp-v5-4_v1
developer_uid: sao10k
status: inactive
model_repo: Sao10K/L3-RP-v5.4
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.4, 'top_p': 1.0, 'min_p': 0.1, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|end_header_id|>,', '<|eot_id|>,', '\n\n{user_name}'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
timestamp: 2024-07-11T10:57:35+00:00
model_name: RP-v5-4-1
model_group: Sao10K/L3-RP-v5.4
num_battles: 34297
num_wins: 18483
celo_rating: 1217.4
alignment_score: None
alignment_samples: 0
propriety_score: 0.6985396383866481
propriety_total_count: 5752.0
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: RP-v5-4-1
ineligible_reason: None
language_model: Sao10K/L3-RP-v5.4
model_size: 8B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-07-11
win_ratio: 0.5389101087558679
preference_data_url: None
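The formatter above maps Chai's conversation state onto Llama 3 chat markup. The sketch below shows how those templates compose into a single prompt string; the persona, messages, and helper name are made-up illustrations, not the pipeline's actual code.

```python
# Sketch: assemble a Llama 3 prompt from the formatter templates recorded above.
formatter = {
    "memory_template": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n",
    "prompt_template": "{prompt}<|eot_id|>",
    "bot_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>",
    "user_template": "<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>",
    "response_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, turns):
    """Compose persona, scenario, and chat turns, ending with the open
    assistant header the model must complete."""
    parts = [formatter["memory_template"].format(bot_name=bot_name, memory=memory),
             formatter["prompt_template"].format(prompt=prompt)]
    for speaker, message in turns:
        if speaker == "user":
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
        else:
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

print(build_prompt("Aria", "Traveler", "A stoic ranger.", "A forest at dusk.",
                   [("user", "Hello!"), ("bot", "Well met.")]))
```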
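The generation_params translate directly into common sampling APIs. As one hedged example, an equivalent vLLM SamplingParams object (assuming a 2024-era vLLM where min_p and best_of are accepted; this mirrors the recorded values and is not the pipeline's actual serving config):

```python
from vllm import SamplingParams

# Values copied verbatim from generation_params above. The "{user_name}"
# stop string is a placeholder substituted per conversation.
params = SamplingParams(
    temperature=1.4,
    top_p=1.0,
    min_p=0.1,
    top_k=40,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["\n", "<|end_header_id|>,", "<|eot_id|>,", "\n\n{user_name}"],
    max_tokens=64,   # max_output_tokens
    best_of=16,      # sample 16 candidates per request
)
```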
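Several of the fields above are derived quantities, and they check out arithmetically: win_ratio is num_wins / num_battles, and propriety_score looks like a pass fraction over propriety_total_count (that second interpretation is an assumption):

```python
num_wins, num_battles = 18483, 34297
print(num_wins / num_battles)   # 0.5389101087558679 == win_ratio

propriety_score, propriety_total = 0.6985396383866481, 5752
print(round(propriety_score * propriety_total))  # 4018 conversations judged proper (assumed meaning)
```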
Running pipeline stage MKMLizer
Starting job with name sao10k-l3-rp-v5-4-v1-mkmlizer
Waiting for job on sao10k-l3-rp-v5-4-v1-mkmlizer to finish
sao10k-l3-rp-v5-4-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
sao10k-l3-rp-v5-4-v1-mkmlizer: ║                     _____ __ __                                     ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║                    / _/ /_  ___      __/ /  ___ ___ / /             ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║                   / _/ / // / |/|/ / _ \/ -_) -_) /                 ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║                  /_//_/\_, /|__,__/_//_/\__/\__/_/                  ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║                        /___/                                        ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║                                                                     ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║    Version: 0.8.14                                                  ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║    Copyright 2023 MK ONE TECHNOLOGIES Inc.                          ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║    https://mk1.ai                                                   ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║                                                                     ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║    The license key for the current software has been verified as    ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║    belonging to:                                                    ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║                                                                     ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║    Chai Research Corp.                                              ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║    Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f                 ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║    Expiration: 2024-10-15 23:59:59                                  ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ║                                                                     ║
sao10k-l3-rp-v5-4-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
sao10k-l3-rp-v5-4-v1-mkmlizer: Downloaded to shared memory in 34.676s
sao10k-l3-rp-v5-4-v1-mkmlizer: quantizing model to /dev/shm/model_cache
sao10k-l3-rp-v5-4-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
sao10k-l3-rp-v5-4-v1-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... Loading 0: 98%|█████████▊| 284/291 [00:08<00:00, 68.59it/s]
sao10k-l3-rp-v5-4-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
sao10k-l3-rp-v5-4-v1-mkmlizer: quantized model in 29.079s
sao10k-l3-rp-v5-4-v1-mkmlizer: Processed model Sao10K/L3-RP-v5.4 in 63.755s
sao10k-l3-rp-v5-4-v1-mkmlizer: creating bucket guanaco-mkml-models
sao10k-l3-rp-v5-4-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
sao10k-l3-rp-v5-4-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/sao10k-l3-rp-v5-4-v1
sao10k-l3-rp-v5-4-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/sao10k-l3-rp-v5-4-v1/config.json
sao10k-l3-rp-v5-4-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/sao10k-l3-rp-v5-4-v1/tokenizer_config.json
sao10k-l3-rp-v5-4-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/sao10k-l3-rp-v5-4-v1/tokenizer.json
sao10k-l3-rp-v5-4-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/sao10k-l3-rp-v5-4-v1/special_tokens_map.json
sao10k-l3-rp-v5-4-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/sao10k-l3-rp-v5-4-v1/flywheel_model.0.safetensors
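The bucket creation and cp steps above are ordinary S3 operations. A minimal boto3 equivalent (a sketch only: credentials and endpoint configuration are omitted, and the real pipeline uses MKML's own tooling):

```python
import boto3

# Assumes default credentials/endpoint; the pipeline's actual S3 config is not in the log.
s3 = boto3.client("s3")

s3.create_bucket(Bucket="guanaco-mkml-models")
for fname in ["config.json", "tokenizer_config.json", "tokenizer.json",
              "special_tokens_map.json", "flywheel_model.0.safetensors"]:
    s3.upload_file(f"/dev/shm/model_cache/{fname}",
                   "guanaco-mkml-models",
                   f"sao10k-l3-rp-v5-4-v1/{fname}")
```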
sao10k-l3-rp-v5-4-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
sao10k-l3-rp-v5-4-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:919: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v5-4-v1-mkmlizer: warnings.warn(
sao10k-l3-rp-v5-4-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
sao10k-l3-rp-v5-4-v1-mkmlizer: warnings.warn(
sao10k-l3-rp-v5-4-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:769: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v5-4-v1-mkmlizer: warnings.warn(
sao10k-l3-rp-v5-4-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
sao10k-l3-rp-v5-4-v1-mkmlizer: warnings.warn(
sao10k-l3-rp-v5-4-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
sao10k-l3-rp-v5-4-v1-mkmlizer: return self.fget.__get__(instance, owner)()
sao10k-l3-rp-v5-4-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
sao10k-l3-rp-v5-4-v1-mkmlizer: Saving duration: 0.530s
sao10k-l3-rp-v5-4-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 4.536s
sao10k-l3-rp-v5-4-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/sao10k-l3-rp-v5-4-v1_reward/reward.tensors
Job sao10k-l3-rp-v5-4-v1-mkmlizer completed after 98.54s with status: succeeded
Stopping job with name sao10k-l3-rp-v5-4-v1-mkmlizer
Pipeline stage MKMLizer completed in 99.54s
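With best_of=16 in the generation params, the serving stack can pair the language model with the reward model packaged above: sample up to 16 candidate replies, score each with the reward model, and return the top scorer. A rough sketch of that rerank pattern (not Chai's actual serving code; loading the reward model as a sequence classifier is an assumption):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

REWARD_REPO = "ChaiML/reward_gpt2_medium_preference_24m_e2"

# Assumption: the reward model loads as a sequence classifier whose logit is the
# preference score; the prefix would be built with the reward_formatter templates
# recorded in the submission header.
tokenizer = AutoTokenizer.from_pretrained(REWARD_REPO)
reward_model = AutoModelForSequenceClassification.from_pretrained(REWARD_REPO)

def pick_best(reward_prefix: str, candidates: list[str]) -> str:
    """Return the candidate the reward model scores highest."""
    scores = []
    for text in candidates:
        inputs = tokenizer(reward_prefix + text, return_tensors="pt",
                           truncation=True, max_length=1024)
        with torch.no_grad():
            scores.append(reward_model(**inputs).logits[0, 0].item())
    return max(zip(scores, candidates), key=lambda pair: pair[0])[1]
```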
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.14s
Running pipeline stage ISVCDeployer
Creating inference service sao10k-l3-rp-v5-4-v1
Waiting for inference service sao10k-l3-rp-v5-4-v1 to be ready
Failed to get response for submission blend_pefis_2024-07-04: ('http://mistralai-mixtral-8x7b-3473-v33-predictor-default.tenant-chaiml-guanaco.knative.ord1.coreweave.cloud/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:50986->127.0.0.1:8080: read: connection reset by peer\n')
Inference service sao10k-l3-rp-v5-4-v1 ready after 30.169527292251587s
Pipeline stage ISVCDeployer completed in 37.32s
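The StressChecker timings that follow come from plain HTTP calls against the inference service. Judging by the :predict route in the earlier connection-reset line, the service speaks a KServe-style v1 protocol; a hedged sketch of one timed request (URL and payload shape are assumptions):

```python
import time
import requests

# Assumed KServe-style URL and payload; the actual address and request schema
# are not shown in this log.
URL = ("http://sao10k-l3-rp-v5-4-v1-predictor-default"
       ".tenant-chaiml-guanaco.knative.ord1.coreweave.cloud"
       "/v1/models/sao10k-l3-rp-v5-4-v1:predict")
payload = {"instances": [{"text": "Hello!"}]}

start = time.time()
response = requests.post(URL, json=payload, timeout=30)
response.raise_for_status()
print(f"Received healthy response to inference request in {time.time() - start}s")
```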
Running pipeline stage StressChecker
Received healthy response to inference request in 1.987971305847168s
Received healthy response to inference request in 1.3055999279022217s
Received healthy response to inference request in 1.2979216575622559s
Received healthy response to inference request in 1.2887318134307861s
Received healthy response to inference request in 1.3444573879241943s
5 requests
0 failed requests
5th percentile: 1.29056978225708
10th percentile: 1.292407751083374
20th percentile: 1.296083688735962
30th percentile: 1.299457311630249
40th percentile: 1.3025286197662354
50th percentile: 1.3055999279022217
60th percentile: 1.3211429119110107
70th percentile: 1.3366858959197998
80th percentile: 1.4731601715087892
90th percentile: 1.7305657386779787
95th percentile: 1.859268522262573
99th percentile: 1.962230749130249
mean time: 1.444936418533325
Pipeline stage StressChecker completed in 8.16s
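The percentiles and mean above follow from the five measured latencies by linear interpolation; numpy reproduces the logged values exactly:

```python
import numpy as np

# The five StressChecker latencies from the log, in seconds.
times = [1.987971305847168, 1.3055999279022217, 1.2979216575622559,
         1.2887318134307861, 1.3444573879241943]

for p in [5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99]:
    print(f"{p}th percentile: {np.percentile(times, p)}")  # default linear interpolation
print(f"mean time: {np.mean(times)}")
```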
sao10k-l3-rp-v5-4_v1 status is now deployed due to DeploymentManager action
sao10k-l3-rp-v5-4_v1 status is now inactive due to auto deactivation (removal of underperforming models)