submission_id: hastagaras-jamet-8b-l3-m_1276_v1
developer_uid: Hastagaras
status: inactive
model_repo: Hastagaras/Jamet-8B-L3-MK.V-BR1
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 0.95, 'top_p': 1.0, 'min_p': 0.08, 'top_k': 100, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
timestamp: 2024-06-06T10:41:27+00:00
model_name: br1
model_eval_status: success
model_group: Hastagaras/Jamet-8B-L3-M
num_battles: 13837
num_wins: 7842
celo_rating: 1216.86
propriety_score: 0.6876712328767123
propriety_total_count: 730.0
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: br1
ineligible_reason: propriety_total_count < 5000
language_model: Hastagaras/Jamet-8B-L3-MK.V-BR1
model_size: 8B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-06-06
win_ratio: 0.5667413456674134
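Note that win_ratio is simply num_wins / num_battles (7842 / 13837 ≈ 0.56674). The formatter and reward_formatter fields above are plain Python template strings; below is a minimal sketch of how they could be assembled into the Llama-3-style prompt the chat model sees. Only the template strings come from this submission; build_prompt and the sample conversation are hypothetical illustrations.

# Minimal sketch; build_prompt and the example conversation are invented,
# the template strings are copied from the formatter field above.
FORMATTER = {
    'memory_template': "<|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n",
    'prompt_template': '{prompt}<|eot_id|>',
    'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>',
    'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>',
    'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:',
}

def build_prompt(memory, prompt, turns, bot_name, user_name):
    # Persona and scenario first, then the chat turns, then an open
    # assistant header ending in "{bot_name}:" for the model to complete.
    parts = [FORMATTER['memory_template'].format(bot_name=bot_name, memory=memory),
             FORMATTER['prompt_template'].format(prompt=prompt)]
    for speaker, message in turns:
        template = FORMATTER['bot_template'] if speaker == 'bot' else FORMATTER['user_template']
        parts.append(template.format(bot_name=bot_name, user_name=user_name, message=message))
    parts.append(FORMATTER['response_template'].format(bot_name=bot_name))
    return ''.join(parts)

print(build_prompt('A stoic bounty hunter.', 'A chance meeting in a desert town.',
                   [('user', 'Who are you?')], bot_name='Jamet', user_name='Traveler'))

The reward_formatter templates work the same way; their plainer, header-free format feeds the GPT-2 reward model rather than the chat model.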
Resubmit model
Running pipeline stage MKMLizer
Starting job with name hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer
Waiting for job on hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer to finish
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ║                      [flywheel ASCII banner]                          ║
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ║                                                                       ║
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ║ Version: 0.8.14                                                       ║
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc.                               ║
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ║ https://mk1.ai                                                        ║
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ║                                                                       ║
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ║ The license key for the current software has been verified as        ║
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ║ belonging to:                                                         ║
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ║                                                                       ║
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ║ Chai Research Corp.                                                   ║
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f                      ║
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ║ Expiration: 2024-07-15 23:59:59                                       ║
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: warnings.warn(warning_message, FutureWarning)
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: Downloaded to shared memory in 29.863s
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: quantizing model to /dev/shm/model_cache
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: Loading 0: 95%|█████████▍| 275/291 [00:06<00:00, 94.32it/s]
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: quantized model in 22.827s
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: Processed model Hastagaras/Jamet-8B-L3-MK.V-BR1 in 55.107s
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: creating bucket guanaco-mkml-models
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/hastagaras-jamet-8b-l3-m-1276-v1
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/hastagaras-jamet-8b-l3-m-1276-v1/config.json
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/hastagaras-jamet-8b-l3-m-1276-v1/special_tokens_map.json
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/hastagaras-jamet-8b-l3-m-1276-v1/tokenizer_config.json
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/hastagaras-jamet-8b-l3-m-1276-v1/tokenizer.json
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/hastagaras-jamet-8b-l3-m-1276-v1/flywheel_model.0.safetensors
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: warnings.warn(
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: warnings.warn(
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: warnings.warn(
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: return self.fget.__get__(instance, owner)()
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: Saving duration: 0.410s
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 3.870s
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/hastagaras-jamet-8b-l3-m-1276-v1_reward
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/hastagaras-jamet-8b-l3-m-1276-v1_reward/config.json
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/hastagaras-jamet-8b-l3-m-1276-v1_reward/special_tokens_map.json
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/hastagaras-jamet-8b-l3-m-1276-v1_reward/tokenizer_config.json
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/hastagaras-jamet-8b-l3-m-1276-v1_reward/merges.txt
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/hastagaras-jamet-8b-l3-m-1276-v1_reward/vocab.json
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/hastagaras-jamet-8b-l3-m-1276-v1_reward/tokenizer.json
hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/hastagaras-jamet-8b-l3-m-1276-v1_reward/reward.tensors
Job hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer completed after 82.86s with status: succeeded
Stopping job with name hastagaras-jamet-8b-l3-m-1276-v1-mkmlizer
Pipeline stage MKMLizer completed in 83.29s
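The reward model packaged above is paired with best_of: 16 from generation_params: at serving time, 16 candidate replies are sampled and the one scoring highest under the reward model is returned. The production MKML/flywheel serving path is proprietary, so the following is only a minimal sketch using public Hugging Face APIs. Assumptions beyond this submission's repo names and sampling values: the reward model loads as a sequence classifier whose final logit can be read as a preference score, min_p is available in a recent transformers release, and the reward_formatter-style formatting of the reward input is elided.

# Minimal best-of-16 sketch; an illustration, not the actual inference path.
import torch
from transformers import (AutoModelForCausalLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

LM_REPO = 'Hastagaras/Jamet-8B-L3-MK.V-BR1'
RM_REPO = 'ChaiML/reward_gpt2_medium_preference_24m_e2'

lm_tok = AutoTokenizer.from_pretrained(LM_REPO)
lm = AutoModelForCausalLM.from_pretrained(LM_REPO, torch_dtype=torch.bfloat16,
                                          device_map='auto')
rm_tok = AutoTokenizer.from_pretrained(RM_REPO)
rm = AutoModelForSequenceClassification.from_pretrained(RM_REPO)

def best_of_n(prompt, n=16):
    # max_input_tokens: 512 -> truncate the prompt, then sample n
    # candidates with the generation_params listed above.
    inputs = lm_tok(prompt, return_tensors='pt', truncation=True,
                    max_length=512).to(lm.device)
    out = lm.generate(**inputs, do_sample=True, temperature=0.95, top_p=1.0,
                      top_k=100, min_p=0.08, max_new_tokens=64,
                      num_return_sequences=n,
                      pad_token_id=lm_tok.eos_token_id)
    prompt_len = inputs['input_ids'].shape[1]
    # stopping_words: ['\n'] -> keep only the first line of each candidate.
    candidates = [lm_tok.decode(seq[prompt_len:], skip_special_tokens=True)
                  .split('\n')[0] for seq in out]
    scores = []
    for cand in candidates:
        enc = rm_tok(prompt + cand, return_tensors='pt', truncation=True)
        with torch.no_grad():
            # Assumption: the classification head's last logit is usable
            # as a scalar preference score.
            scores.append(rm(**enc).logits[0, -1].item())
    return candidates[scores.index(max(scores))]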
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.08s
Running pipeline stage ISVCDeployer
Creating inference service hastagaras-jamet-8b-l3-m-1276-v1
Waiting for inference service hastagaras-jamet-8b-l3-m-1276-v1 to be ready
Inference service hastagaras-jamet-8b-l3-m-1276-v1 ready after 101.54125785827637s
Pipeline stage ISVCDeployer completed in 107.27s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.0363357067108154s
Received healthy response to inference request in 1.3795864582061768s
Received healthy response to inference request in 1.3267090320587158s
Received healthy response to inference request in 1.28944993019104s
Received healthy response to inference request in 1.2417988777160645s
5 requests
0 failed requests
5th percentile: 1.2513290882110595
10th percentile: 1.2608592987060547
20th percentile: 1.279919719696045
30th percentile: 1.2969017505645752
40th percentile: 1.3118053913116454
50th percentile: 1.3267090320587158
60th percentile: 1.3478600025177
70th percentile: 1.3690109729766846
80th percentile: 1.5109363079071045
90th percentile: 1.77363600730896
95th percentile: 1.9049858570098877
99th percentile: 2.01006573677063
mean time: 1.4547760009765625
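The percentiles above are consistent with linear interpolation between order statistics (numpy's default method) over the five measured latencies; a minimal sketch that reproduces the reported summary:

# Recompute the StressChecker percentiles from the five latencies above.
import numpy as np

latencies = [2.0363357067108154, 1.3795864582061768, 1.3267090320587158,
             1.28944993019104, 1.2417988777160645]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f'{p}th percentile: {np.percentile(latencies, p)}')
print('mean time:', np.mean(latencies))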
Pipeline stage StressChecker completed in 7.92s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.04s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.04s
M-Eval Dataset for topic stay_in_character is loaded
hastagaras-jamet-8b-l3-m_1276_v1 status is now deployed due to DeploymentManager action
hastagaras-jamet-8b-l3-m_1276_v1 status is now inactive due to auto deactivation of underperforming models

[Charts: Usage Metrics, Latency Metrics]