submission_id: openlynn-llama-3-soliloq_8807_v2
developer_uid: Azazelle
best_of: 16
celo_rating: 1173.87
display_name: openlynn-Soliloquy-8B-v2
family_friendly_score: 0.0
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
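The formatter above wraps each conversation in Llama 3 chat headers. A minimal sketch of how these templates could be assembled into a single prompt string (the build_prompt helper and the sample turn structure are illustrative, not part of the submission):

# Illustrative sketch: assemble a Llama 3 chat prompt from the formatter templates above.
FORMATTER = {
    "memory_template": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n",
    "prompt_template": "{prompt}<|eot_id|>",
    "bot_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>",
    "user_template": "<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>",
    "response_template": "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, turns):
    """Concatenate persona memory, scenario prompt, chat turns, and the open response header."""
    text = FORMATTER["memory_template"].format(bot_name=bot_name, memory=memory)
    text += FORMATTER["prompt_template"].format(prompt=prompt)
    for role, message in turns:  # turns is a list of ("user" | "bot", message) pairs
        template = FORMATTER["bot_template"] if role == "bot" else FORMATTER["user_template"]
        text += template.format(bot_name=bot_name, user_name=user_name, message=message)
    # Leave the assistant header open so the model completes the bot's next line.
    text += FORMATTER["response_template"].format(bot_name=bot_name)
    return text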
generation_params: {'temperature': 0.95, 'top_p': 1.0, 'min_p': 0.08, 'top_k': 50, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|eot_id|>'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
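A hedged sketch of how these sampling parameters might map onto a plain Hugging Face generate() call; the actual serving stack (MKML, below) is not reproduced here, best_of=16 is approximated with num_return_sequences, and min_p support assumes a recent transformers release:

# Hedged sketch only: express the generation_params above as a generate() call.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openlynn/Llama-3-Soliloquy-8B-v2")
model = AutoModelForCausalLM.from_pretrained(
    "openlynn/Llama-3-Soliloquy-8B-v2", torch_dtype=torch.float16, device_map="auto"
)

prompt_text = "..."  # e.g. the output of the build_prompt sketch above
inputs = tokenizer(prompt_text, return_tensors="pt",
                   truncation=True, max_length=512).to(model.device)  # max_input_tokens=512
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.95,
    top_p=1.0,
    top_k=50,
    min_p=0.08,               # available in recent transformers releases
    max_new_tokens=64,        # max_output_tokens=64
    num_return_sequences=16,  # stand-in for best_of=16; candidates reranked by the reward model
    eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"),
)
# The '\n' stopping word is handled by the serving stack rather than generate().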
is_internal_developer: False
language_model: openlynn/Llama-3-Soliloquy-8B-v2
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_group: openlynn/Llama-3-Soliloq
model_name: openlynn-Soliloquy-8B-v2
model_num_parameters: 8030261248.0
model_repo: openlynn/Llama-3-Soliloquy-8B-v2
model_size: 8B
num_battles: 31633
num_wins: 15426
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
status: torndown
submission_type: basic
timestamp: 2024-07-06T04:53:14+00:00
us_pacific_date: 2024-07-05
win_ratio: 0.48765529668384283
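The win_ratio is simply num_wins divided by num_battles from the fields above:

# win_ratio check: 15426 wins out of 31633 battles.
num_wins, num_battles = 15426, 31633
print(num_wins / num_battles)  # 0.48765529668384283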
Resubmit model
Running pipeline stage MKMLizer
Starting job with name openlynn-llama-3-soliloq-8807-v2-mkmlizer
Waiting for job on openlynn-llama-3-soliloq-8807-v2-mkmlizer to finish
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║        ["flywheel" ASCII-art wordmark; spacing lost in transcription]        ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║ ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║ Version: 0.8.14 ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║ https://mk1.ai ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║ ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║ The license key for the current software has been verified as ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║ belonging to: ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║ ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║ Chai Research Corp. ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ║ ║
openlynn-llama-3-soliloq-8807-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
openlynn-llama-3-soliloq-8807-v2-mkmlizer: Downloaded to shared memory in 53.212s
openlynn-llama-3-soliloq-8807-v2-mkmlizer: quantizing model to /dev/shm/model_cache
openlynn-llama-3-soliloq-8807-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
openlynn-llama-3-soliloq-8807-v2-mkmlizer: Loading 0: 98%|█████████▊| 284/291 [00:02<00:00, 121.40it/s]
openlynn-llama-3-soliloq-8807-v2-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
openlynn-llama-3-soliloq-8807-v2-mkmlizer: quantized model in 23.795s
openlynn-llama-3-soliloq-8807-v2-mkmlizer: Processed model openlynn/Llama-3-Soliloquy-8B-v2 in 77.008s
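A hedged sketch of the download, quantize, and save flow the MKMLizer log describes above. The actual quantization is internal to MKMLizer and not visible in the log; this stand-in just downloads the repo, loads it in half precision, and writes a single safetensors shard to the shared-memory cache path:

# Hedged stand-in for the MKMLizer download/quantize/save step (internals not shown in the log).
import os
import torch
from huggingface_hub import snapshot_download
from safetensors.torch import save_file
from transformers import AutoModelForCausalLM

local_dir = snapshot_download("openlynn/Llama-3-Soliloquy-8B-v2", local_dir="/dev/shm/model_download")
model = AutoModelForCausalLM.from_pretrained(local_dir, torch_dtype=torch.float16)

os.makedirs("/dev/shm/model_cache", exist_ok=True)
state = {k: v.contiguous() for k, v in model.state_dict().items()}
save_file(state, "/dev/shm/model_cache/flywheel_model.0.safetensors")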
openlynn-llama-3-soliloq-8807-v2-mkmlizer: creating bucket guanaco-mkml-models
openlynn-llama-3-soliloq-8807-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
openlynn-llama-3-soliloq-8807-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/openlynn-llama-3-soliloq-8807-v2
openlynn-llama-3-soliloq-8807-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/openlynn-llama-3-soliloq-8807-v2/tokenizer_config.json
openlynn-llama-3-soliloq-8807-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/openlynn-llama-3-soliloq-8807-v2/special_tokens_map.json
openlynn-llama-3-soliloq-8807-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/openlynn-llama-3-soliloq-8807-v2/config.json
openlynn-llama-3-soliloq-8807-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/openlynn-llama-3-soliloq-8807-v2/tokenizer.json
openlynn-llama-3-soliloq-8807-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/openlynn-llama-3-soliloq-8807-v2/flywheel_model.0.safetensors
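The cp lines above copy the cached artifacts into S3. A hedged boto3 sketch of the same step (bucket and prefix taken from the log; credentials and endpoint configuration omitted):

# Hedged sketch: upload every file in the local model cache to the S3 prefix from the log.
import os
import boto3

s3 = boto3.client("s3")
bucket, prefix = "guanaco-mkml-models", "openlynn-llama-3-soliloq-8807-v2"
cache_dir = "/dev/shm/model_cache"
for name in sorted(os.listdir(cache_dir)):
    s3.upload_file(os.path.join(cache_dir, name), bucket, f"{prefix}/{name}")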
openlynn-llama-3-soliloq-8807-v2-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
openlynn-llama-3-soliloq-8807-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:919: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
openlynn-llama-3-soliloq-8807-v2-mkmlizer: warnings.warn(
openlynn-llama-3-soliloq-8807-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
openlynn-llama-3-soliloq-8807-v2-mkmlizer: warnings.warn(
openlynn-llama-3-soliloq-8807-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:769: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
openlynn-llama-3-soliloq-8807-v2-mkmlizer: warnings.warn(
openlynn-llama-3-soliloq-8807-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
openlynn-llama-3-soliloq-8807-v2-mkmlizer: warnings.warn(
openlynn-llama-3-soliloq-8807-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
openlynn-llama-3-soliloq-8807-v2-mkmlizer: return self.fget.__get__(instance, owner)()
openlynn-llama-3-soliloq-8807-v2-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
openlynn-llama-3-soliloq-8807-v2-mkmlizer: Saving duration: 0.412s
openlynn-llama-3-soliloq-8807-v2-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 12.789s
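A hedged sketch of the reward-model step: load the GPT-2-based reward checkpoint and write its weights to the reward.tensors path mentioned in the log. The head class the pipeline actually uses is not shown, so AutoModelForSequenceClassification is an assumption:

# Hedged stand-in: load the reward model and save its weights to /tmp/reward_cache/reward.tensors.
import os
from safetensors.torch import save_file
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_repo = "ChaiML/reward_gpt2_medium_preference_24m_e2"
reward_tokenizer = AutoTokenizer.from_pretrained(reward_repo)
reward_model = AutoModelForSequenceClassification.from_pretrained(reward_repo)  # head class assumed

os.makedirs("/tmp/reward_cache", exist_ok=True)
reward_tokenizer.save_pretrained("/tmp/reward_cache")
save_file({k: v.contiguous() for k, v in reward_model.state_dict().items()},
          "/tmp/reward_cache/reward.tensors")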
openlynn-llama-3-soliloq-8807-v2-mkmlizer: creating bucket guanaco-reward-models
openlynn-llama-3-soliloq-8807-v2-mkmlizer: Bucket 's3://guanaco-reward-models/' created
openlynn-llama-3-soliloq-8807-v2-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/openlynn-llama-3-soliloq-8807-v2_reward
openlynn-llama-3-soliloq-8807-v2-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/openlynn-llama-3-soliloq-8807-v2_reward/special_tokens_map.json
openlynn-llama-3-soliloq-8807-v2-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/openlynn-llama-3-soliloq-8807-v2_reward/tokenizer_config.json
openlynn-llama-3-soliloq-8807-v2-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/openlynn-llama-3-soliloq-8807-v2_reward/merges.txt
openlynn-llama-3-soliloq-8807-v2-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/openlynn-llama-3-soliloq-8807-v2_reward/config.json
openlynn-llama-3-soliloq-8807-v2-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/openlynn-llama-3-soliloq-8807-v2_reward/vocab.json
openlynn-llama-3-soliloq-8807-v2-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/openlynn-llama-3-soliloq-8807-v2_reward/tokenizer.json
Job openlynn-llama-3-soliloq-8807-v2-mkmlizer completed after 114.46s with status: succeeded
Stopping job with name openlynn-llama-3-soliloq-8807-v2-mkmlizer
Pipeline stage MKMLizer completed in 115.30s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service openlynn-llama-3-soliloq-8807-v2
Waiting for inference service openlynn-llama-3-soliloq-8807-v2 to be ready
Inference service openlynn-llama-3-soliloq-8807-v2 ready after 40.178550481796265s
Pipeline stage ISVCDeployer completed in 46.74s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.0758113861083984s
Received healthy response to inference request in 1.577347993850708s
Received healthy response to inference request in 1.3579456806182861s
Received healthy response to inference request in 1.3176655769348145s
Received healthy response to inference request in 1.334073543548584s
5 requests
0 failed requests
5th percentile: 1.3209471702575684
10th percentile: 1.3242287635803223
20th percentile: 1.33079195022583
30th percentile: 1.3388479709625245
40th percentile: 1.3483968257904053
50th percentile: 1.3579456806182861
60th percentile: 1.4457066059112549
70th percentile: 1.5334675312042236
80th percentile: 1.6770406723022462
90th percentile: 1.8764260292053223
95th percentile: 1.9761187076568603
99th percentile: 2.055872850418091
mean time: 1.5325688362121581
Pipeline stage StressChecker completed in 8.43s
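The StressChecker figures above are plain percentiles over the five request latencies; numpy's default linear interpolation reproduces them (the interpolation method is inferred from the numbers, not stated in the log):

# Reproduce the StressChecker percentiles and mean from the five latencies above.
import numpy as np

latencies = [2.0758113861083984, 1.577347993850708, 1.3579456806182861,
             1.3176655769348145, 1.334073543548584]
for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{q}th percentile: {np.percentile(latencies, q)}")
print("mean time:", np.mean(latencies))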
openlynn-llama-3-soliloq_8807_v2 status is now deployed due to DeploymentManager action
openlynn-llama-3-soliloq_8807_v2 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of openlynn-llama-3-soliloq_8807_v2
Running pipeline stage ISVCDeleter
Checking if service openlynn-llama-3-soliloq-8807-v2 is running
Skipping teardown as no inference service was found
Pipeline stage ISVCDeleter completed in 4.52s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key openlynn-llama-3-soliloq-8807-v2/config.json from bucket guanaco-mkml-models
Deleting key openlynn-llama-3-soliloq-8807-v2/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key openlynn-llama-3-soliloq-8807-v2/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key openlynn-llama-3-soliloq-8807-v2/tokenizer.json from bucket guanaco-mkml-models
Deleting key openlynn-llama-3-soliloq-8807-v2/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key openlynn-llama-3-soliloq-8807-v2_reward/config.json from bucket guanaco-reward-models
Deleting key openlynn-llama-3-soliloq-8807-v2_reward/merges.txt from bucket guanaco-reward-models
Deleting key openlynn-llama-3-soliloq-8807-v2_reward/reward.tensors from bucket guanaco-reward-models
Deleting key openlynn-llama-3-soliloq-8807-v2_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key openlynn-llama-3-soliloq-8807-v2_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key openlynn-llama-3-soliloq-8807-v2_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key openlynn-llama-3-soliloq-8807-v2_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 5.47s
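A hedged boto3 sketch of the cleanup performed above: delete the listed keys from both buckets (key lists taken from the log; credentials omitted):

# Hedged sketch: remove the uploaded model and reward-model keys from S3.
import boto3

s3 = boto3.client("s3")
model_keys = ["config.json", "flywheel_model.0.safetensors", "special_tokens_map.json",
              "tokenizer.json", "tokenizer_config.json"]
reward_keys = ["config.json", "merges.txt", "reward.tensors", "special_tokens_map.json",
               "tokenizer.json", "tokenizer_config.json", "vocab.json"]
for key in model_keys:
    s3.delete_object(Bucket="guanaco-mkml-models",
                     Key=f"openlynn-llama-3-soliloq-8807-v2/{key}")
for key in reward_keys:
    s3.delete_object(Bucket="guanaco-reward-models",
                     Key=f"openlynn-llama-3-soliloq-8807-v2_reward/{key}")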
openlynn-llama-3-soliloq_8807_v2 status is now torndown due to DeploymentManager action