submission_id: v000000-l3-8b-poppy-moon_7333_v1
developer_uid: v000000
best_of: 16
celo_rating: 1218.26
display_name: v000000-l3-8b-poppy-moon_7333_v1
family_friendly_score: 0.0
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
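The formatter dict above defines how persona memory, the scenario prompt, chat history, and the trailing response cue are concatenated into the model's input. A minimal assembly sketch, with a hypothetical `build_prompt` helper and placeholder persona/history data (the production prompt builder is not shown in this record):

```python
# Minimal sketch of applying the formatter templates above.
# The bot/user names, persona, and history here are placeholders.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, history):
    """Assemble memory, scenario prompt, and chat history, ending with
    the response template so the model completes the bot's next turn."""
    parts = [formatter["memory_template"].format(bot_name=bot_name, memory=memory)]
    parts.append(formatter["prompt_template"].format(prompt=prompt))
    for speaker, message in history:
        if speaker == "bot":
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

print(build_prompt("Poppy", "Anon", "A cheerful android.", "A quiet night in the city.",
                   [("user", "Hello!"), ("bot", "Hi there!")]))
```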
generation_params: {'temperature': 1.22, 'top_p': 0.95, 'min_p': 0.08, 'top_k': 80, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|eot_id|>', '<|end_of_text|>'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
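These sampling settings map naturally onto a vLLM-style sampling configuration; the actual serving backend is an assumption here, not something this log confirms. A hedged sketch:

```python
# Assumed mapping of generation_params onto vLLM's SamplingParams;
# the serving stack behind this submission is not shown in the log.
from vllm import SamplingParams

sampling = SamplingParams(
    n=1,
    best_of=16,                # sample 16 candidates per request
    temperature=1.22,          # fairly hot sampling for varied roleplay
    top_p=0.95,
    top_k=80,
    min_p=0.08,                # drop tokens below 8% of the top token's probability
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["\n", "<|eot_id|>", "<|end_of_text|>"],  # stopping_words
    max_tokens=64,             # max_output_tokens
)
```

The `\n` stop string matches the single-line chat turns the bot_template produces, and max_input_tokens: 512 caps the prompt side separately.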
is_internal_developer: False
language_model: v000000/L3-8B-Poppy-Moonfall-OG
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_eval_status: success
model_group: v000000/L3-8B-Poppy-Moon
model_name: v000000-l3-8b-poppy-moon_7333_v1
model_num_parameters: 8030261248.0
model_repo: v000000/L3-8B-Poppy-Moonfall-OG
model_size: 8B
num_battles: 8181
num_wins: 4405
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
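best_of: 16 together with the reward_formatter and reward_repo fields above suggest a best-of-N setup: each request samples 16 completions and the GPT-2 reward model picks the winner. A minimal re-ranking sketch, assuming the reward model loads via a sequence-classification head with a single preference logit (the production scoring code is not shown here):

```python
# Hedged best-of-N re-ranking sketch. Assumptions: the reward model loads
# with AutoModelForSequenceClassification and emits one preference logit;
# candidates are pre-formatted with the reward_formatter templates above.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

REWARD_REPO = "ChaiML/reward_gpt2_medium_preference_24m_e2"
tokenizer = AutoTokenizer.from_pretrained(REWARD_REPO)
reward_model = AutoModelForSequenceClassification.from_pretrained(REWARD_REPO)
reward_model.eval()

def pick_best(formatted_candidates: list[str]) -> str:
    """Score each formatted conversation and return the highest-reward one."""
    scores = []
    for text in formatted_candidates:
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            logit = reward_model(**inputs).logits[0, 0].item()  # assumed single logit
        scores.append(logit)
    return formatted_candidates[scores.index(max(scores))]
```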
status: torndown
submission_type: basic
timestamp: 2024-06-12T18:52:40+00:00
us_pacific_date: 2024-06-12
win_ratio: 0.5384427331622051
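win_ratio follows directly from the num_wins and num_battles fields above:

```python
# Sanity check: win_ratio = num_wins / num_battles.
num_wins, num_battles = 4405, 8181
assert abs(num_wins / num_battles - 0.5384427331622051) < 1e-12
```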
Resubmit model
Running pipeline stage MKMLizer
Starting job with name v000000-l3-8b-poppy-moon-7333-v1-mkmlizer
Waiting for job on v000000-l3-8b-poppy-moon-7333-v1-mkmlizer to finish
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ _____ __ __ ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ /___/ ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ Version: 0.8.14 ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ https://mk1.ai ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ The license key for the current software has been verified as ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ belonging to: ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ Chai Research Corp. ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ║ ║
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: warnings.warn(warning_message, FutureWarning)
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: Downloaded to shared memory in 37.844s
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: quantizing model to /dev/shm/model_cache
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: Loading 0: 99%|█████████▊| 287/291 [00:07<00:00, 94.86it/s]
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: quantized model in 23.759s
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: Processed model v000000/L3-8B-Poppy-Moonfall-OG in 64.283s
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/v000000-l3-8b-poppy-moon-7333-v1/special_tokens_map.json
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/v000000-l3-8b-poppy-moon-7333-v1/config.json
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/v000000-l3-8b-poppy-moon-7333-v1/tokenizer.json
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/v000000-l3-8b-poppy-moon-7333-v1/flywheel_model.0.safetensors
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: warnings.warn(
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: warnings.warn(
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: warnings.warn(
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: return self.fget.__get__(instance, owner)()
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: Saving duration: 0.392s
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 7.165s
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: creating bucket guanaco-reward-models
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/v000000-l3-8b-poppy-moon-7333-v1_reward
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/v000000-l3-8b-poppy-moon-7333-v1_reward/config.json
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/v000000-l3-8b-poppy-moon-7333-v1_reward/special_tokens_map.json
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/v000000-l3-8b-poppy-moon-7333-v1_reward/merges.txt
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/v000000-l3-8b-poppy-moon-7333-v1_reward/tokenizer_config.json
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/v000000-l3-8b-poppy-moon-7333-v1_reward/vocab.json
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/v000000-l3-8b-poppy-moon-7333-v1_reward/tokenizer.json
v000000-l3-8b-poppy-moon-7333-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/v000000-l3-8b-poppy-moon-7333-v1_reward/reward.tensors
Job v000000-l3-8b-poppy-moon-7333-v1-mkmlizer completed after 93.42s with status: succeeded
Stopping job with name v000000-l3-8b-poppy-moon-7333-v1-mkmlizer
Pipeline stage MKMLizer completed in 97.21s
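For reference, the cp commands in the MKMLizer stage above amount to uploading the flywheel artifacts to S3. A boto3 equivalent sketch, assuming standard S3 credentials (the actual tooling behind cp is not shown in the log):

```python
# Hedged boto3 sketch of the artifact upload; bucket and key layout are
# taken from the log lines above, the upload tooling is an assumption.
import boto3

s3 = boto3.client("s3")
bucket = "guanaco-mkml-models"
prefix = "v000000-l3-8b-poppy-moon-7333-v1"
for name in ["special_tokens_map.json", "config.json",
             "tokenizer.json", "flywheel_model.0.safetensors"]:
    s3.upload_file(f"/dev/shm/model_cache/{name}", bucket, f"{prefix}/{name}")
```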
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service v000000-l3-8b-poppy-moon-7333-v1
Waiting for inference service v000000-l3-8b-poppy-moon-7333-v1 to be ready
Inference service v000000-l3-8b-poppy-moon-7333-v1 ready after 40.23909831047058s
Pipeline stage ISVCDeployer completed in 47.74s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.1780169010162354s
Received healthy response to inference request in 1.2664976119995117s
Received healthy response to inference request in 1.2661464214324951s
Received healthy response to inference request in 1.2602040767669678s
Received healthy response to inference request in 1.2792322635650635s
5 requests
0 failed requests
5th percentile: 1.2613925457000732
10th percentile: 1.2625810146331786
20th percentile: 1.2649579524993897
30th percentile: 1.2662166595458983
40th percentile: 1.266357135772705
50th percentile: 1.2664976119995117
60th percentile: 1.2715914726257325
70th percentile: 1.2766853332519532
80th percentile: 1.4589891910552981
90th percentile: 1.8185030460357667
95th percentile: 1.9982599735260007
99th percentile: 2.1420655155181882
mean time: 1.4500194549560548
Pipeline stage StressChecker completed in 7.89s
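The StressChecker percentiles above are reproducible from the five response times with numpy's default linear interpolation; a quick check (the exact interpolation method used by the checker is an assumption):

```python
# Reproduce the StressChecker statistics from the five response times.
import numpy as np

times = [2.1780169010162354, 1.2664976119995117, 1.2661464214324951,
         1.2602040767669678, 1.2792322635650635]
for q in [5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99]:
    print(f"{q}th percentile: {np.percentile(times, q)}")
print("mean time:", np.mean(times))  # 1.4500194549560548
```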
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.03s
Running M-Eval for topic stay_in_character
Running pipeline stage DaemonicSafetyScorer
M-Eval Dataset for topic stay_in_character is loaded
Pipeline stage DaemonicSafetyScorer completed in 0.06s
v000000-l3-8b-poppy-moon_7333_v1 status is now deployed due to DeploymentManager action
v000000-l3-8b-poppy-moon_7333_v1 status is now inactive due to auto deactivation of underperforming models
admin requested teardown of v000000-l3-8b-poppy-moon_7333_v1
Running pipeline stage ISVCDeleter
Checking if service v000000-l3-8b-poppy-moon-7333-v1 is running
Skipping teardown as no inference service was found
Pipeline stage ISVCDeleter completed in 4.32s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key v000000-l3-8b-poppy-moon-7333-v1/config.json from bucket guanaco-mkml-models
Deleting key v000000-l3-8b-poppy-moon-7333-v1/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key v000000-l3-8b-poppy-moon-7333-v1/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key v000000-l3-8b-poppy-moon-7333-v1/tokenizer.json from bucket guanaco-mkml-models
Deleting key v000000-l3-8b-poppy-moon-7333-v1/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key v000000-l3-8b-poppy-moon-7333-v1_reward/config.json from bucket guanaco-reward-models
Deleting key v000000-l3-8b-poppy-moon-7333-v1_reward/merges.txt from bucket guanaco-reward-models
Deleting key v000000-l3-8b-poppy-moon-7333-v1_reward/reward.tensors from bucket guanaco-reward-models
Deleting key v000000-l3-8b-poppy-moon-7333-v1_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key v000000-l3-8b-poppy-moon-7333-v1_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key v000000-l3-8b-poppy-moon-7333-v1_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key v000000-l3-8b-poppy-moon-7333-v1_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 6.09s
v000000-l3-8b-poppy-moon_7333_v1 status is now torndown due to DeploymentManager action