submission_id: wespro-daring-samantha-l3-8b_v1
developer_uid: WesPro
best_of: 4
celo_rating: 1170.75
display_name: wespro-daring-samantha-l3-8b_v1
family_friendly_score: 0.0
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
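Note: the formatter is a set of plain template strings that the serving stack fills in to build the final prompt; the assembly code itself is not part of this record. A minimal sketch of the apparent scheme, using a hypothetical build_prompt helper and made-up persona text:

    def build_prompt(cfg, bot_name, user_name, memory, prompt, turns):
        """Fill the formatter templates in order: persona, scenario, turns, cue."""
        out = cfg["memory_template"].format(bot_name=bot_name, memory=memory)
        out += cfg["prompt_template"].format(prompt=prompt)
        for speaker, message in turns:
            if speaker == "bot":
                out += cfg["bot_template"].format(bot_name=bot_name, message=message)
            else:
                out += cfg["user_template"].format(user_name=user_name, message=message)
        # Generation continues from the bare "{bot_name}:" cue.
        return out + cfg["response_template"].format(bot_name=bot_name)

    formatter = {
        "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
        "prompt_template": "{prompt}\n<START>\n",
        "bot_template": "{bot_name}: {message}\n",
        "user_template": "{user_name}: {message}\n",
        "response_template": "{bot_name}:",
    }

    print(build_prompt(formatter, "Samantha", "You", "a daring, curious companion",
                       "A chat between Samantha and You.",
                       [("user", "Hi Samantha!"), ("bot", "Hello! How are you?")]))

The reward_formatter further down lists the same templates (keys in a different order), used to format candidate replies for the reward model.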
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 4, 'max_output_tokens': 64}
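Note: these are standard top-k/nucleus sampling knobs. The model is served by MK1's flywheel runtime, so the snippet below is only an illustrative mapping onto vLLM's SamplingParams, which happens to accept the same fields; in this pipeline the four best_of candidates are re-ranked by the reward model listed further down rather than by log-probability.

    # Illustrative mapping only: the production runtime is MK1 flywheel, not vLLM.
    from vllm import SamplingParams

    params = SamplingParams(
        temperature=1.0,        # distribution neither sharpened nor flattened
        top_p=1.0,              # nucleus cutoff disabled (full distribution)
        min_p=0.0,
        top_k=40,               # only the 40 most likely tokens are candidates
        presence_penalty=0.0,
        frequency_penalty=0.0,
        stop=["\n"],            # stopping_words: cut the reply at the first newline
        max_tokens=64,          # max_output_tokens
        best_of=4,              # draw four candidate completions per request
    )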
is_internal_developer: False
language_model: WesPro/Daring-Samantha-L3-8B
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_eval_status: success
model_group: WesPro/Daring-Samantha-L
model_name: wespro-daring-samantha-l3-8b_v1
model_num_parameters: 8030261248.0
model_repo: WesPro/Daring-Samantha-L3-8B
model_size: 8B
num_battles: 16595
num_wins: 8076
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
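Note: how the reward model is invoked is not shown in this record. A rough sketch of how a GPT-2-based preference model like this is typically scored with transformers follows; the sequence-classification head and the "higher logit = preferred" reading are assumptions, not something this log confirms.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Assumption: the repo loads as a GPT-2 sequence-classification head whose
    # logit serves as a preference score for a formatted candidate reply.
    repo = "ChaiML/reward_gpt2_medium_preference_24m_e2"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForSequenceClassification.from_pretrained(repo)

    candidate = "Samantha: I'd love to hear more about that!\n"
    inputs = tokenizer(candidate, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        score = model(**inputs).logits.squeeze().item()
    print(score)  # one scalar per candidate; the best_of winner is the max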
status: torndown
submission_type: basic
timestamp: 2024-06-05T16:30:49+00:00
us_pacific_date: 2024-06-05
win_ratio: 0.48665260620668876
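Note: win_ratio is simply num_wins divided by num_battles, as a quick check confirms:

    # win_ratio == num_wins / num_battles
    num_wins, num_battles = 8076, 16595
    print(num_wins / num_battles)  # 0.48665260620668876, as reported above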
Resubmit model
Running pipeline stage MKMLizer
Starting job with name wespro-daring-samantha-l3-8b-v1-mkmlizer
Waiting for job on wespro-daring-samantha-l3-8b-v1-mkmlizer to finish
wespro-daring-samantha-l3-8b-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ [flywheel ASCII-art logo] ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ Version: 0.8.14 ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ https://mk1.ai ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ The license key for the current software has been verified as ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ belonging to: ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ Chai Research Corp. ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ║ ║
wespro-daring-samantha-l3-8b-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
wespro-daring-samantha-l3-8b-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
wespro-daring-samantha-l3-8b-v1-mkmlizer: warnings.warn(warning_message, FutureWarning)
wespro-daring-samantha-l3-8b-v1-mkmlizer: Downloaded to shared memory in 33.619s
wespro-daring-samantha-l3-8b-v1-mkmlizer: quantizing model to /dev/shm/model_cache
wespro-daring-samantha-l3-8b-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
wespro-daring-samantha-l3-8b-v1-mkmlizer: Loading 0:   0%|          | 0/291 [00:00<?, ?it/s]
wespro-daring-samantha-l3-8b-v1-mkmlizer: Loading 0:  96%|█████████▌| 278/291 [00:07<00:00, 68.50it/s]
wespro-daring-samantha-l3-8b-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
wespro-daring-samantha-l3-8b-v1-mkmlizer: quantized model in 21.977s
wespro-daring-samantha-l3-8b-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/wespro-daring-samantha-l3-8b-v1/flywheel_model.0.safetensors
wespro-daring-samantha-l3-8b-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
wespro-daring-samantha-l3-8b-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
wespro-daring-samantha-l3-8b-v1-mkmlizer: warnings.warn(
wespro-daring-samantha-l3-8b-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
wespro-daring-samantha-l3-8b-v1-mkmlizer: warnings.warn(
wespro-daring-samantha-l3-8b-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
wespro-daring-samantha-l3-8b-v1-mkmlizer: warnings.warn(
wespro-daring-samantha-l3-8b-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
wespro-daring-samantha-l3-8b-v1-mkmlizer: return self.fget.__get__(instance, owner)()
wespro-daring-samantha-l3-8b-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
wespro-daring-samantha-l3-8b-v1-mkmlizer: Saving duration: 0.294s
wespro-daring-samantha-l3-8b-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 3.764s
wespro-daring-samantha-l3-8b-v1-mkmlizer: creating bucket guanaco-reward-models
wespro-daring-samantha-l3-8b-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
wespro-daring-samantha-l3-8b-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/wespro-daring-samantha-l3-8b-v1_reward
wespro-daring-samantha-l3-8b-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/wespro-daring-samantha-l3-8b-v1_reward/special_tokens_map.json
wespro-daring-samantha-l3-8b-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/wespro-daring-samantha-l3-8b-v1_reward/tokenizer_config.json
wespro-daring-samantha-l3-8b-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/wespro-daring-samantha-l3-8b-v1_reward/config.json
wespro-daring-samantha-l3-8b-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/wespro-daring-samantha-l3-8b-v1_reward/merges.txt
wespro-daring-samantha-l3-8b-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/wespro-daring-samantha-l3-8b-v1_reward/vocab.json
wespro-daring-samantha-l3-8b-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/wespro-daring-samantha-l3-8b-v1_reward/tokenizer.json
wespro-daring-samantha-l3-8b-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/wespro-daring-samantha-l3-8b-v1_reward/reward.tensors
Job wespro-daring-samantha-l3-8b-v1-mkmlizer completed after 90.3s with status: succeeded
Stopping job with name wespro-daring-samantha-l3-8b-v1-mkmlizer
Pipeline stage MKMLizer completed in 92.77s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.08s
Running pipeline stage ISVCDeployer
Creating inference service wespro-daring-samantha-l3-8b-v1
Waiting for inference service wespro-daring-samantha-l3-8b-v1 to be ready
Inference service wespro-daring-samantha-l3-8b-v1 ready after 473.38145542144775s
Pipeline stage ISVCDeployer completed in 480.24s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.9751412868499756s
Received healthy response to inference request in 1.1428248882293701s
Received healthy response to inference request in 1.1389591693878174s
Received healthy response to inference request in 1.1655235290527344s
Received healthy response to inference request in 1.166670322418213s
5 requests
0 failed requests
5th percentile: 1.1397323131561279
10th percentile: 1.1405054569244384
20th percentile: 1.1420517444610596
30th percentile: 1.147364616394043
40th percentile: 1.1564440727233887
50th percentile: 1.1655235290527344
60th percentile: 1.1659822463989258
70th percentile: 1.1664409637451172
80th percentile: 1.3283645153045656
90th percentile: 1.6517529010772707
95th percentile: 1.813447093963623
99th percentile: 1.942802448272705
mean time: 1.317823839187622
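Note: the reported percentiles follow from plain linear interpolation over the five sampled latencies; a quick numpy check (not part of the pipeline) reproduces every figure above, including the mean:

    import numpy as np

    # The five healthy-response latencies reported above, in seconds.
    latencies = [1.9751412868499756, 1.1428248882293701, 1.1389591693878174,
                 1.1655235290527344, 1.166670322418213]

    for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{q}th percentile: {np.percentile(latencies, q)}")  # linear interpolation
    print("mean time:", np.mean(latencies))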
Pipeline stage StressChecker completed in 7.22s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.04s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.03s
M-Eval Dataset for topic stay_in_character is loaded
wespro-daring-samantha-l3-8b_v1 status is now deployed due to DeploymentManager action
wespro-daring-samantha-l3-8b_v1 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of wespro-daring-samantha-l3-8b_v1
Running pipeline stage ISVCDeleter
Checking if service wespro-daring-samantha-l3-8b-v1 is running
Skipping teardown as no inference service was found
Pipeline stage ISVCDeleter completed in 4.36s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key wespro-daring-samantha-l3-8b-v1/config.json from bucket guanaco-mkml-models
Deleting key wespro-daring-samantha-l3-8b-v1/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key wespro-daring-samantha-l3-8b-v1/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key wespro-daring-samantha-l3-8b-v1/tokenizer.json from bucket guanaco-mkml-models
Deleting key wespro-daring-samantha-l3-8b-v1/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key wespro-daring-samantha-l3-8b-v1_reward/config.json from bucket guanaco-reward-models
Deleting key wespro-daring-samantha-l3-8b-v1_reward/merges.txt from bucket guanaco-reward-models
Deleting key wespro-daring-samantha-l3-8b-v1_reward/reward.tensors from bucket guanaco-reward-models
Deleting key wespro-daring-samantha-l3-8b-v1_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key wespro-daring-samantha-l3-8b-v1_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key wespro-daring-samantha-l3-8b-v1_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key wespro-daring-samantha-l3-8b-v1_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 6.39s
wespro-daring-samantha-l3-8b_v1 status is now torndown due to DeploymentManager action