developer_uid: v000000
submission_id: v000000-l3-8b-test2_v1
model_name: v000000-l3-8b-test2_v1
model_group: v000000/l3-8b-test2
status: torndown
timestamp: 2024-06-20T15:59:32+00:00
num_battles: 41907
num_wins: 23445
celo_rating: 1213.05
family_friendly_score: 0.0
submission_type: basic
model_repo: v000000/l3-8b-test2
model_architecture: LlamaForCausalLM
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: v000000-l3-8b-test2_v1
is_internal_developer: False
language_model: v000000/l3-8b-test2
model_size: 8B
ranking_group: single
us_pacific_date: 2024-06-20
win_ratio: 0.5594530746653303
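A quick sanity check on the figures above, purely illustrative: win_ratio is simply num_wins / num_battles, and model_size is model_num_parameters rounded to the nearest billion.

    # Illustrative check of the metadata above.
    num_battles, num_wins = 41907, 23445
    assert abs(num_wins / num_battles - 0.5594530746653303) < 1e-12  # win_ratio
    assert round(8030261248 / 1e9) == 8                              # "8B"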
generation_params: {'temperature': 0.95, 'top_p': 0.95, 'min_p': 0.1, 'top_k': 80, 'presence_penalty': 0.05, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
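These generation_params describe ordinary sampled decoding with best-of-16 candidate generation. Below is a minimal sketch of how the same settings could be applied with Hugging Face transformers; the helper sample_candidates is invented for illustration, this is not the production MKML serving path, presence_penalty has no direct equivalent in transformers' generate, and min_p requires a recent transformers release.

    # Illustrative sketch only, not the production MKML serving stack.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("v000000/l3-8b-test2")
    model = AutoModelForCausalLM.from_pretrained(
        "v000000/l3-8b-test2", torch_dtype=torch.bfloat16, device_map="auto")

    def sample_candidates(prompt: str, best_of: int = 16) -> list[str]:
        inputs = tokenizer(prompt, return_tensors="pt",
                           truncation=True, max_length=512).to(model.device)
        out = model.generate(
            **inputs,
            do_sample=True,
            temperature=0.95,
            top_p=0.95,
            min_p=0.1,            # supported in recent transformers releases
            top_k=80,
            max_new_tokens=64,
            num_return_sequences=best_of,
            pad_token_id=tokenizer.eos_token_id,
        )
        texts = tokenizer.batch_decode(out[:, inputs["input_ids"].shape[1]:],
                                       skip_special_tokens=True)
        # stopping_words=['\n']: cut each candidate at the first newline.
        return [t.split("\n", 1)[0] for t in texts]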
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}\n'}
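Both formatter blocks are plain Python format strings: formatter renders the Llama-3 chat prompt the language model sees, while reward_formatter renders each sampled candidate into the plain-text form the reward model scores when picking the best of the 16 samples. A rendering sketch follows; the render helper and all persona/message values are invented for illustration.

    # Sketch: render one exchange with the Llama-3 formatter above.
    # bot/user/memory/prompt values here are made up.
    fmt = {
        "memory_template": ("<|begin_of_text|><|start_header_id|>system"
                            "<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n"),
        "prompt_template": "{prompt}<|eot_id|>",
        "user_template": ("<|start_header_id|>user<|end_header_id|>\n\n"
                          "{user_name}: {message}<|eot_id|>"),
        "bot_template": ("<|start_header_id|>assistant<|end_header_id|>\n\n"
                         "{bot_name}: {message}<|eot_id|>"),
        "response_template": ("<|start_header_id|>assistant<|end_header_id|>\n\n"
                              "{bot_name}:"),
    }

    def render(fmt, bot, user, memory, prompt, turns):
        parts = [fmt["memory_template"].format(bot_name=bot, memory=memory),
                 fmt["prompt_template"].format(prompt=prompt)]
        for role, msg in turns:  # role is "user" or "bot"
            tpl = fmt["user_template" if role == "user" else "bot_template"]
            parts.append(tpl.format(user_name=user, bot_name=bot, message=msg))
        parts.append(fmt["response_template"].format(bot_name=bot))
        return "".join(parts)

    text = render(fmt, "Nova", "You", "A sardonic ship AI.",
                  "You wake on the bridge.", [("user", "Status report?")])

The reward_formatter works the same way, just with plain-text templates (a #### separator and <START> header) in place of the Llama-3 special tokens.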
Resubmit model
Running pipeline stage MKMLizer
Starting job with name v000000-l3-8b-test2-v1-mkmlizer
Waiting for job on v000000-l3-8b-test2-v1-mkmlizer to finish
v000000-l3-8b-test2-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
v000000-l3-8b-test2-v1-mkmlizer: ║ _____ __ __ ║
v000000-l3-8b-test2-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
v000000-l3-8b-test2-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
v000000-l3-8b-test2-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
v000000-l3-8b-test2-v1-mkmlizer: ║ /___/ ║
v000000-l3-8b-test2-v1-mkmlizer: ║ ║
v000000-l3-8b-test2-v1-mkmlizer: ║ Version: 0.8.14 ║
v000000-l3-8b-test2-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
v000000-l3-8b-test2-v1-mkmlizer: ║ https://mk1.ai ║
v000000-l3-8b-test2-v1-mkmlizer: ║ ║
v000000-l3-8b-test2-v1-mkmlizer: ║ The license key for the current software has been verified as ║
v000000-l3-8b-test2-v1-mkmlizer: ║ belonging to: ║
v000000-l3-8b-test2-v1-mkmlizer: ║ ║
v000000-l3-8b-test2-v1-mkmlizer: ║ Chai Research Corp. ║
v000000-l3-8b-test2-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
v000000-l3-8b-test2-v1-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
v000000-l3-8b-test2-v1-mkmlizer: ║ ║
v000000-l3-8b-test2-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
v000000-l3-8b-test2-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
v000000-l3-8b-test2-v1-mkmlizer: warnings.warn(warning_message, FutureWarning)
Job v000000-l3-8b-test2-v1-mkmlizer completed after 42.3s with status: failed
Stopping job with name v000000-l3-8b-test2-v1-mkmlizer
%s, retrying in %s seconds...
Starting job with name v000000-l3-8b-test2-v1-mkmlizer
Waiting for job on v000000-l3-8b-test2-v1-mkmlizer to finish
v000000-l3-8b-test2-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
v000000-l3-8b-test2-v1-mkmlizer: ║ _____ __ __ ║
v000000-l3-8b-test2-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
v000000-l3-8b-test2-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
v000000-l3-8b-test2-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
v000000-l3-8b-test2-v1-mkmlizer: ║ /___/ ║
v000000-l3-8b-test2-v1-mkmlizer: ║ ║
v000000-l3-8b-test2-v1-mkmlizer: ║ Version: 0.8.14 ║
v000000-l3-8b-test2-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
v000000-l3-8b-test2-v1-mkmlizer: ║ https://mk1.ai ║
v000000-l3-8b-test2-v1-mkmlizer: ║ ║
v000000-l3-8b-test2-v1-mkmlizer: ║ The license key for the current software has been verified as ║
v000000-l3-8b-test2-v1-mkmlizer: ║ belonging to: ║
v000000-l3-8b-test2-v1-mkmlizer: ║ ║
v000000-l3-8b-test2-v1-mkmlizer: ║ Chai Research Corp. ║
v000000-l3-8b-test2-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
v000000-l3-8b-test2-v1-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
v000000-l3-8b-test2-v1-mkmlizer: ║ ║
v000000-l3-8b-test2-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
v000000-l3-8b-test2-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
v000000-l3-8b-test2-v1-mkmlizer: warnings.warn(warning_message, FutureWarning)
v000000-l3-8b-test2-v1-mkmlizer: Downloaded to shared memory in 31.618s
v000000-l3-8b-test2-v1-mkmlizer: quantizing model to /dev/shm/model_cache
v000000-l3-8b-test2-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
v000000-l3-8b-test2-v1-mkmlizer: Loading 0: 99%|█████████▉| 289/291 [00:06<00:00, 89.63it/s]
v000000-l3-8b-test2-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
v000000-l3-8b-test2-v1-mkmlizer: quantized model in 18.950s
v000000-l3-8b-test2-v1-mkmlizer: Processed model v000000/l3-8b-test2 in 51.585s
v000000-l3-8b-test2-v1-mkmlizer: creating bucket guanaco-mkml-models
v000000-l3-8b-test2-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
v000000-l3-8b-test2-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/v000000-l3-8b-test2-v1
v000000-l3-8b-test2-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/v000000-l3-8b-test2-v1/config.json
v000000-l3-8b-test2-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/v000000-l3-8b-test2-v1/special_tokens_map.json
v000000-l3-8b-test2-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/v000000-l3-8b-test2-v1/tokenizer_config.json
v000000-l3-8b-test2-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/v000000-l3-8b-test2-v1/tokenizer.json
v000000-l3-8b-test2-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/v000000-l3-8b-test2-v1/flywheel_model.0.safetensors
v000000-l3-8b-test2-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
v000000-l3-8b-test2-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
v000000-l3-8b-test2-v1-mkmlizer: warnings.warn(
v000000-l3-8b-test2-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
v000000-l3-8b-test2-v1-mkmlizer: warnings.warn(
v000000-l3-8b-test2-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
v000000-l3-8b-test2-v1-mkmlizer: warnings.warn(
v000000-l3-8b-test2-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
v000000-l3-8b-test2-v1-mkmlizer: return self.fget.__get__(instance, owner)()
v000000-l3-8b-test2-v1-mkmlizer: creating bucket guanaco-reward-models
v000000-l3-8b-test2-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
v000000-l3-8b-test2-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/v000000-l3-8b-test2-v1_reward
v000000-l3-8b-test2-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/v000000-l3-8b-test2-v1_reward/config.json
v000000-l3-8b-test2-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/v000000-l3-8b-test2-v1_reward/tokenizer_config.json
v000000-l3-8b-test2-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/v000000-l3-8b-test2-v1_reward/special_tokens_map.json
v000000-l3-8b-test2-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/v000000-l3-8b-test2-v1_reward/merges.txt
v000000-l3-8b-test2-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/v000000-l3-8b-test2-v1_reward/vocab.json
v000000-l3-8b-test2-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/v000000-l3-8b-test2-v1_reward/tokenizer.json
v000000-l3-8b-test2-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/v000000-l3-8b-test2-v1_reward/reward.tensors
Job v000000-l3-8b-test2-v1-mkmlizer completed after 84.42s with status: succeeded
Stopping job with name v000000-l3-8b-test2-v1-mkmlizer
Pipeline stage MKMLizer completed in 128.10s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service v000000-l3-8b-test2-v1
Waiting for inference service v000000-l3-8b-test2-v1 to be ready
Inference service v000000-l3-8b-test2-v1 ready after 100.61932682991028s
Pipeline stage ISVCDeployer completed in 106.47s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.203857660293579s
Received healthy response to inference request in 1.3583557605743408s
%s, retrying in %s seconds...
Received healthy response to inference request in 1.3720951080322266s
Received healthy response to inference request in 1.3430900573730469s
Received healthy response to inference request in 1.3045194149017334s
Received healthy response to inference request in 1.2924048900604248s
Received healthy response to inference request in 1.3477809429168701s
5 requests
0 failed requests
5th percentile: 1.2948277950286866
10th percentile: 1.2972506999969482
20th percentile: 1.3020965099334716
30th percentile: 1.3122335433959962
40th percentile: 1.3276618003845215
50th percentile: 1.3430900573730469
60th percentile: 1.3449664115905762
70th percentile: 1.3468427658081055
80th percentile: 1.3526437759399415
90th percentile: 1.362369441986084
95th percentile: 1.3672322750091552
99th percentile: 1.3711225414276123
mean time: 1.3319780826568604
Pipeline stage StressChecker completed in 31.08s
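The percentile block above can be reproduced with numpy's default linear interpolation over five of the logged latencies; which five form the sample is inferred from the numbers themselves (the 2.20 s warm-up response and the request preceding the retry appear to be excluded). An illustrative check:

    # Reproduce the StressChecker stats (sample selection inferred).
    import numpy as np

    lat = np.array([1.2924048900604248, 1.3045194149017334,
                    1.3430900573730469, 1.3477809429168701,
                    1.3720951080322266])
    print("mean time:", lat.mean())  # 1.3319780826568604
    for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{p}th percentile:", np.percentile(lat, p))  # linear interpolation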
Running pipeline stage DaemonicSafetyScorer
Pipeline stage DaemonicSafetyScorer completed in 0.04s
v000000-l3-8b-test2_v1 status is now deployed due to DeploymentManager action
v000000-l3-8b-test2_v1 status is now inactive due to auto-deactivation of underperforming models
admin requested tearing down of v000000-l3-8b-test2_v1
Running pipeline stage ISVCDeleter
Checking if service v000000-l3-8b-test2-v1 is running
Skipping teardown as no inference service was found
Pipeline stage ISVCDeleter completed in 3.76s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key v000000-l3-8b-test2-v1/config.json from bucket guanaco-mkml-models
Deleting key v000000-l3-8b-test2-v1/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key v000000-l3-8b-test2-v1/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key v000000-l3-8b-test2-v1/tokenizer.json from bucket guanaco-mkml-models
Deleting key v000000-l3-8b-test2-v1/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key v000000-l3-8b-test2-v1_reward/config.json from bucket guanaco-reward-models
Deleting key v000000-l3-8b-test2-v1_reward/merges.txt from bucket guanaco-reward-models
Deleting key v000000-l3-8b-test2-v1_reward/reward.tensors from bucket guanaco-reward-models
Deleting key v000000-l3-8b-test2-v1_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key v000000-l3-8b-test2-v1_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key v000000-l3-8b-test2-v1_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key v000000-l3-8b-test2-v1_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 5.66s
v000000-l3-8b-test2_v1 status is now torndown due to DeploymentManager action