submission_id: arlineka-pharaoh-8b_v1
developer_uid: arlineka
status: inactive
model_repo: arlineka/pharaoh-8b
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.1, 'top_p': 1.0, 'min_p': 0.08, 'top_k': 100, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
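For reference, these sampling knobs map one-to-one onto a standard sampling configuration. A minimal sketch, assuming a vLLM-style API (the production stack here is MK1's flywheel engine, so this mapping is illustrative only):

# Illustrative only: the same sampling parameters expressed as vLLM
# SamplingParams; the actual serving engine is MK1 flywheel, not vLLM.
from vllm import SamplingParams

params = SamplingParams(
    temperature=1.1,          # >1.0 slightly flattens the token distribution
    top_p=1.0,                # nucleus sampling effectively disabled
    min_p=0.08,               # drop tokens below 8% of the top token's probability
    top_k=100,                # consider at most 100 candidate tokens
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["\n"],              # cut generation at the first newline
    max_tokens=64,            # matches max_output_tokens
    best_of=16,               # 16 candidates; here presumably reranked by the reward model
)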
formatter: {'memory_template': "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
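The formatter templates above are plain Python format strings; a conversation is rendered by substituting names and messages into each template and concatenating the pieces. A minimal sketch of that assembly (the persona and messages below are invented for illustration):

# Minimal sketch: rendering a Llama-3-style prompt from the formatter
# templates above. Example persona and messages are made up.
memory_template = "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n"
prompt_template = "{prompt}<|eot_id|>"
user_template = "<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>"
response_template = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:"

parts = [
    memory_template.format(bot_name="Pharaoh", memory="An ancient ruler of Egypt."),
    prompt_template.format(prompt="Pharaoh greets you at the palace gates."),
    user_template.format(user_name="User", message="Who are you?"),
    response_template.format(bot_name="Pharaoh"),
]
prompt = "".join(parts)  # fed to the model; generation stops at '\n' or 64 tokens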
timestamp: 2024-06-01T15:48:42+00:00
model_name: arlineka-pharaoh-8b_v1
model_eval_status: success
model_group: arlineka/pharaoh-8b
num_battles: 13881
num_wins: 7472
celo_rating: 1200.54
propriety_score: 0.6617647058823529
propriety_total_count: 204.0
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 8030277632.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: arlineka-pharaoh-8b_v1
ineligible_reason: propriety_total_count < 5000
language_model: arlineka/pharaoh-8b
model_size: 8B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-06-01
win_ratio: 0.5382897485771918
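The derived metrics above are consistent with the raw counts: win_ratio is simply wins over battles, and propriety_score is exactly 135/204, suggesting 135 of the 204 evaluated responses were judged proper (that count is an inference, not part of the record):

num_wins, num_battles = 7472, 13881
print(num_wins / num_battles)  # 0.5382897485771918 == win_ratio
print(135 / 204)               # 0.6617647058823529 == propriety_score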
Running pipeline stage MKMLizer
Starting job with name arlineka-pharaoh-8b-v1-mkmlizer
Waiting for job on arlineka-pharaoh-8b-v1-mkmlizer to finish
arlineka-pharaoh-8b-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
arlineka-pharaoh-8b-v1-mkmlizer: ║                      [flywheel ASCII-art banner]                      ║
arlineka-pharaoh-8b-v1-mkmlizer: ║                                                                       ║
arlineka-pharaoh-8b-v1-mkmlizer: ║  Version: 0.8.14                                                      ║
arlineka-pharaoh-8b-v1-mkmlizer: ║  Copyright 2023 MK ONE TECHNOLOGIES Inc.                              ║
arlineka-pharaoh-8b-v1-mkmlizer: ║  https://mk1.ai                                                       ║
arlineka-pharaoh-8b-v1-mkmlizer: ║                                                                       ║
arlineka-pharaoh-8b-v1-mkmlizer: ║  The license key for the current software has been verified as        ║
arlineka-pharaoh-8b-v1-mkmlizer: ║  belonging to:                                                        ║
arlineka-pharaoh-8b-v1-mkmlizer: ║                                                                       ║
arlineka-pharaoh-8b-v1-mkmlizer: ║  Chai Research Corp.                                                  ║
arlineka-pharaoh-8b-v1-mkmlizer: ║  Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f                     ║
arlineka-pharaoh-8b-v1-mkmlizer: ║  Expiration: 2024-07-15 23:59:59                                      ║
arlineka-pharaoh-8b-v1-mkmlizer: ║                                                                       ║
arlineka-pharaoh-8b-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
arlineka-pharaoh-8b-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
arlineka-pharaoh-8b-v1-mkmlizer: warnings.warn(warning_message, FutureWarning)
arlineka-pharaoh-8b-v1-mkmlizer: Downloaded to shared memory in 54.706s
arlineka-pharaoh-8b-v1-mkmlizer: quantizing model to /dev/shm/model_cache
arlineka-pharaoh-8b-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
arlineka-pharaoh-8b-v1-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s]
arlineka-pharaoh-8b-v1-mkmlizer: Loading 0: 96%|█████████▌| 279/291 [00:02<00:00, 118.43it/s]
arlineka-pharaoh-8b-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
arlineka-pharaoh-8b-v1-mkmlizer: quantized model in 22.969s
arlineka-pharaoh-8b-v1-mkmlizer: Processed model arlineka/pharaoh-8b in 80.229s
arlineka-pharaoh-8b-v1-mkmlizer: creating bucket guanaco-mkml-models
arlineka-pharaoh-8b-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
arlineka-pharaoh-8b-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/arlineka-pharaoh-8b-v1
arlineka-pharaoh-8b-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/arlineka-pharaoh-8b-v1/config.json
arlineka-pharaoh-8b-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/arlineka-pharaoh-8b-v1/special_tokens_map.json
arlineka-pharaoh-8b-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/arlineka-pharaoh-8b-v1/tokenizer.json
arlineka-pharaoh-8b-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/arlineka-pharaoh-8b-v1/tokenizer_config.json
arlineka-pharaoh-8b-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/arlineka-pharaoh-8b-v1/flywheel_model.0.safetensors
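The uploads above are straightforward object copies into S3. A minimal boto3 sketch of the equivalent operation (bucket, prefix, and filenames taken from the log; credentials and error handling omitted, and the pipeline's own tooling may differ):

# Illustrative only: the pipeline uses its own cp tooling; this shows
# the equivalent boto3 calls for the same bucket and keys.
import boto3

s3 = boto3.client("s3")
bucket = "guanaco-mkml-models"
prefix = "arlineka-pharaoh-8b-v1"
for name in ["config.json", "special_tokens_map.json", "tokenizer.json",
             "tokenizer_config.json", "flywheel_model.0.safetensors"]:
    s3.upload_file(f"/dev/shm/model_cache/{name}", bucket, f"{prefix}/{name}")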
arlineka-pharaoh-8b-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
arlineka-pharaoh-8b-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
arlineka-pharaoh-8b-v1-mkmlizer: warnings.warn(
arlineka-pharaoh-8b-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
arlineka-pharaoh-8b-v1-mkmlizer: warnings.warn(
arlineka-pharaoh-8b-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
arlineka-pharaoh-8b-v1-mkmlizer: warnings.warn(
arlineka-pharaoh-8b-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
arlineka-pharaoh-8b-v1-mkmlizer: return self.fget.__get__(instance, owner)()
arlineka-pharaoh-8b-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
arlineka-pharaoh-8b-v1-mkmlizer: creating bucket guanaco-reward-models
arlineka-pharaoh-8b-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
arlineka-pharaoh-8b-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/arlineka-pharaoh-8b-v1_reward
arlineka-pharaoh-8b-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/arlineka-pharaoh-8b-v1_reward/config.json
arlineka-pharaoh-8b-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/arlineka-pharaoh-8b-v1_reward/special_tokens_map.json
arlineka-pharaoh-8b-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/arlineka-pharaoh-8b-v1_reward/tokenizer_config.json
arlineka-pharaoh-8b-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/arlineka-pharaoh-8b-v1_reward/vocab.json
arlineka-pharaoh-8b-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/arlineka-pharaoh-8b-v1_reward/merges.txt
arlineka-pharaoh-8b-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/arlineka-pharaoh-8b-v1_reward/tokenizer.json
arlineka-pharaoh-8b-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/arlineka-pharaoh-8b-v1_reward/reward.tensors
Job arlineka-pharaoh-8b-v1-mkmlizer completed after 114.06s with status: succeeded
Stopping job with name arlineka-pharaoh-8b-v1-mkmlizer
Pipeline stage MKMLizer completed in 117.59s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.12s
Running pipeline stage ISVCDeployer
Creating inference service arlineka-pharaoh-8b-v1
Waiting for inference service arlineka-pharaoh-8b-v1 to be ready
Inference service arlineka-pharaoh-8b-v1 ready after 40.21509790420532s
Pipeline stage ISVCDeployer completed in 47.25s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.159817934036255s
Received healthy response to inference request in 1.3427300453186035s
Received healthy response to inference request in 1.3079302310943604s
Received healthy response to inference request in 1.3092026710510254s
Received healthy response to inference request in 1.3700380325317383s
5 requests
0 failed requests
5th percentile: 1.3081847190856934
10th percentile: 1.3084392070770263
20th percentile: 1.3089481830596923
30th percentile: 1.315908145904541
40th percentile: 1.3293190956115724
50th percentile: 1.3427300453186035
60th percentile: 1.3536532402038575
70th percentile: 1.3645764350891114
80th percentile: 1.5279940128326417
90th percentile: 1.8439059734344483
95th percentile: 2.0018619537353515
99th percentile: 2.128226737976074
mean time: 1.4979437828063964
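The percentile figures above are consistent with linearly interpolated percentiles over the five request latencies; a sketch reproducing them with numpy (np.percentile's default "linear" method):

import numpy as np

latencies = [2.159817934036255, 1.3427300453186035, 1.3079302310943604,
             1.3092026710510254, 1.3700380325317383]
for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(p, np.percentile(latencies, p))  # matches the logged percentiles
print("mean", np.mean(latencies))          # 1.4979437828063964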
Pipeline stage StressChecker completed in 8.27s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.04s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.04s
M-Eval Dataset for topic stay_in_character is loaded
arlineka-pharaoh-8b_v1 status is now deployed due to DeploymentManager action
arlineka-pharaoh-8b_v1 status is now inactive due to auto-deactivation of underperforming models
