submission_id: v000000-mega-prototype_v2
developer_uid: v000000
status: inactive
model_repo: v000000/mega_prototype
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 0.95, 'top_p': 0.95, 'min_p': 0.1, 'top_k': 80, 'presence_penalty': 0.05, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{bot_name}'s Persona: {memory}\n\n", 'prompt_template': '{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
timestamp: 2024-06-18T12:48:32+00:00
model_name: mega_prototype
model_group: v000000/mega_prototype
num_battles: 17668
num_wins: 9997
celo_rating: 1213.53
propriety_score: 0.7157882958487188
propriety_total_count: 8937.0
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: mega_prototype
ineligible_reason: None
language_model: v000000/mega_prototype
model_size: 8B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-06-18
win_ratio: 0.5658252207380575
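A note for readers reproducing these settings: below is a minimal sketch of how the generation_params above could map onto vLLM's SamplingParams. The use of vLLM is an assumption (the log does not name the serving engine); the parameter values are taken verbatim from the field above.

    # Sketch only: generation_params mapped onto vLLM's SamplingParams.
    # The choice of vLLM is an assumption; values are from the log above.
    from vllm import SamplingParams

    params = SamplingParams(
        n=1,
        best_of=16,               # sample 16 candidates per request, keep the best-scoring one
        temperature=0.95,
        top_p=0.95,
        min_p=0.1,
        top_k=80,
        presence_penalty=0.05,
        frequency_penalty=0.0,
        stop=["\n"],              # stopping_words
        max_tokens=64,            # max_output_tokens
    )
    # max_input_tokens=512 is a prompt-truncation limit applied before
    # sampling and is not a SamplingParams field.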
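Similarly, a sketch of how the formatter templates above plausibly assemble a Llama-3-style prompt. The template strings are verbatim from the formatter field; the assembly order (memory, then prompt, then alternating turns, then the response header) is an assumption:

    # Sketch: assembling a prompt from the `formatter` templates above.
    MEMORY = ("<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
              "{bot_name}'s Persona: {memory}\n\n")
    PROMPT = "{prompt}<|eot_id|>"
    USER = "<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>"
    BOT = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>"
    RESPONSE = "<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:"

    def build_prompt(memory, prompt, turns, bot_name, user_name):
        """turns: list of (speaker, message) pairs, speaker in {"user", "bot"}."""
        out = MEMORY.format(bot_name=bot_name, memory=memory)
        out += PROMPT.format(prompt=prompt)
        for speaker, message in turns:
            tpl = USER if speaker == "user" else BOT
            out += tpl.format(user_name=user_name, bot_name=bot_name, message=message)
        return out + RESPONSE.format(bot_name=bot_name)

The reward_formatter templates would be applied the same way to build the plain-text conversation that the reward model scores.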
Resubmit model
Running pipeline stage MKMLizer
Starting job with name v000000-mega-prototype-v2-mkmlizer
Waiting for job on v000000-mega-prototype-v2-mkmlizer to finish
v000000-mega-prototype-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
v000000-mega-prototype-v2-mkmlizer: ║ _____ __ __ ║
v000000-mega-prototype-v2-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
v000000-mega-prototype-v2-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
v000000-mega-prototype-v2-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
v000000-mega-prototype-v2-mkmlizer: ║ /___/ ║
v000000-mega-prototype-v2-mkmlizer: ║ ║
v000000-mega-prototype-v2-mkmlizer: ║ Version: 0.8.14 ║
v000000-mega-prototype-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
v000000-mega-prototype-v2-mkmlizer: ║ https://mk1.ai ║
v000000-mega-prototype-v2-mkmlizer: ║ ║
v000000-mega-prototype-v2-mkmlizer: ║ The license key for the current software has been verified as ║
v000000-mega-prototype-v2-mkmlizer: ║ belonging to: ║
v000000-mega-prototype-v2-mkmlizer: ║ ║
v000000-mega-prototype-v2-mkmlizer: ║ Chai Research Corp. ║
v000000-mega-prototype-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
v000000-mega-prototype-v2-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
v000000-mega-prototype-v2-mkmlizer: ║ ║
v000000-mega-prototype-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
v000000-mega-prototype-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
v000000-mega-prototype-v2-mkmlizer: warnings.warn(warning_message, FutureWarning)
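The FutureWarning above names its own replacement; a minimal sketch using the suggested API (repo id taken from this submission):

    # Replacement for the deprecated list_files_info, per the warning above.
    from huggingface_hub import list_repo_tree

    for entry in list_repo_tree("v000000/mega_prototype"):
        print(entry.path)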
v000000-mega-prototype-v2-mkmlizer: Downloaded to shared memory in 52.857s
v000000-mega-prototype-v2-mkmlizer: quantizing model to /dev/shm/model_cache
v000000-mega-prototype-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
v000000-mega-prototype-v2-mkmlizer: Loading 0: 96%|█████████▌| 280/291 [00:08<00:00, 69.94it/s]
v000000-mega-prototype-v2-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
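The "Special tokens have been added" message is transformers' generic warning that the tokenizer vocabulary and the model's embedding matrix can fall out of sync. A sketch of the pattern it refers to, not of anything this pipeline itself does (the added token is hypothetical):

    # Generic pattern behind the warning: after adding tokens, resize the
    # embedding matrix (the new rows start untrained).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("v000000/mega_prototype")
    model = AutoModelForCausalLM.from_pretrained("v000000/mega_prototype")
    tok.add_special_tokens({"additional_special_tokens": ["<NEW_TOKEN>"]})  # hypothetical token
    model.resize_token_embeddings(len(tok))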
v000000-mega-prototype-v2-mkmlizer: quantized model in 25.800s
v000000-mega-prototype-v2-mkmlizer: Processed model v000000/mega_prototype in 81.531s
v000000-mega-prototype-v2-mkmlizer: creating bucket guanaco-mkml-models
v000000-mega-prototype-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
v000000-mega-prototype-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/v000000-mega-prototype-v2
v000000-mega-prototype-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/v000000-mega-prototype-v2/tokenizer_config.json
v000000-mega-prototype-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/v000000-mega-prototype-v2/special_tokens_map.json
v000000-mega-prototype-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/v000000-mega-prototype-v2/tokenizer.json
v000000-mega-prototype-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/v000000-mega-prototype-v2/config.json
v000000-mega-prototype-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/v000000-mega-prototype-v2/flywheel_model.0.safetensors
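The cp lines above look like s5cmd/awscli-style copies; an equivalent sketch with boto3 (the client choice is an assumption, while the paths and keys are from the log):

    # Equivalent of the `cp ... s3://...` lines above, sketched with boto3.
    import boto3

    s3 = boto3.client("s3")
    files = [
        "tokenizer_config.json", "special_tokens_map.json",
        "tokenizer.json", "config.json", "flywheel_model.0.safetensors",
    ]
    for name in files:
        s3.upload_file(
            f"/dev/shm/model_cache/{name}",            # local path from the log
            "guanaco-mkml-models",                     # bucket
            f"v000000-mega-prototype-v2/{name}",       # key prefix from the log
        )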
v000000-mega-prototype-v2-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
v000000-mega-prototype-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
v000000-mega-prototype-v2-mkmlizer: warnings.warn(
v000000-mega-prototype-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
v000000-mega-prototype-v2-mkmlizer: warnings.warn(
v000000-mega-prototype-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
v000000-mega-prototype-v2-mkmlizer: return self.fget.__get__(instance, owner)()
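Both FutureWarnings above prescribe the same fix: pass token instead of use_auth_token. A sketch (the model class and placeholder token are illustrative):

    # Fix for the use_auth_token FutureWarnings, per the warning text above.
    from transformers import AutoModel

    # Deprecated: AutoModel.from_pretrained(repo_id, use_auth_token=hf_token)
    model = AutoModel.from_pretrained(
        "ChaiML/reward_gpt2_medium_preference_24m_e2",
        token="hf_...",  # placeholder; supply a real HF access token
    )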
v000000-mega-prototype-v2-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
v000000-mega-prototype-v2-mkmlizer: Saving duration: 0.426s
v000000-mega-prototype-v2-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 12.390s
v000000-mega-prototype-v2-mkmlizer: creating bucket guanaco-reward-models
v000000-mega-prototype-v2-mkmlizer: Bucket 's3://guanaco-reward-models/' created
v000000-mega-prototype-v2-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/v000000-mega-prototype-v2_reward
v000000-mega-prototype-v2-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/v000000-mega-prototype-v2_reward/special_tokens_map.json
v000000-mega-prototype-v2-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/v000000-mega-prototype-v2_reward/merges.txt
v000000-mega-prototype-v2-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/v000000-mega-prototype-v2_reward/vocab.json
v000000-mega-prototype-v2-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/v000000-mega-prototype-v2_reward/tokenizer_config.json
v000000-mega-prototype-v2-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/v000000-mega-prototype-v2_reward/config.json
v000000-mega-prototype-v2-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/v000000-mega-prototype-v2_reward/tokenizer.json
v000000-mega-prototype-v2-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/v000000-mega-prototype-v2_reward/reward.tensors
Job v000000-mega-prototype-v2-mkmlizer completed after 113.82s with status: succeeded
Stopping job with name v000000-mega-prototype-v2-mkmlizer
Pipeline stage MKMLizer completed in 118.11s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service v000000-mega-prototype-v2
Waiting for inference service v000000-mega-prototype-v2 to be ready
Inference service v000000-mega-prototype-v2 ready after 50.26534605026245s
Pipeline stage ISVCDeployer completed in 58.09s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.1981685161590576s
Received healthy response to inference request in 1.4148445129394531s
Received healthy response to inference request in 1.4992563724517822s
Received healthy response to inference request in 1.3364529609680176s
Received healthy response to inference request in 1.4052050113677979s
5 requests
0 failed requests
5th percentile: 1.3502033710479737
10th percentile: 1.3639537811279296
20th percentile: 1.3914546012878417
30th percentile: 1.407132911682129
40th percentile: 1.410988712310791
50th percentile: 1.4148445129394531
60th percentile: 1.4486092567443847
70th percentile: 1.4823740005493165
80th percentile: 1.6390388011932373
90th percentile: 1.9186036586761475
95th percentile: 2.0583860874176025
99th percentile: 2.1702120304107666
mean time: 1.5707854747772216
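The percentile figures above can be reproduced from the five latencies with numpy's default linear interpolation; this is an observation about the numbers, not a claim about the StressChecker's implementation:

    # Reproduces the StressChecker percentiles and mean reported above.
    import numpy as np

    latencies = [
        2.1981685161590576, 1.4148445129394531, 1.4992563724517822,
        1.3364529609680176, 1.4052050113677979,
    ]
    for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{p}th percentile: {np.percentile(latencies, p)}")
    print("mean time:", np.mean(latencies))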
Pipeline stage StressChecker completed in 8.51s
Running pipeline stage DaemonicSafetyScorer
Pipeline stage DaemonicSafetyScorer completed in 0.03s
v000000-mega-prototype_v2 status is now deployed due to DeploymentManager action
v000000-mega-prototype_v2 status is now inactive due to auto-deactivation of underperforming models