submission_id: megumi21-megumi-chat-7b-v0-5_v2
developer_uid: megumi_10073
status: rejected
model_repo: megumi21/Megumi-Chat-7B-v0.5
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.1, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "### Instruction:\nYou are a creative agent roleplaying as a character called {bot_name}. Stay true to the persona given, reply with short and descriptive sentences. Do not be repetitive.\n{bot_name}'s Persona: {memory}\n", 'prompt_template': '### Input:\n# Example conversation:\n{prompt}\n# Actual conversation:\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '### Response: {bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
timestamp: 2024-03-28T09:14:02+00:00
model_name: megumi-chat-7b-v5
model_eval_status: success
model_group: megumi21/Megumi-Chat-7B-
num_battles: 35
num_wins: 14
celo_rating: None
propriety_score: 0.0
propriety_total_count: 0.0
submission_type: basic
model_architecture: MistralForCausalLM
model_num_parameters: 7241732096.0
best_of: 8
max_input_tokens: 512
max_output_tokens: 64
display_name: megumi-chat-7b-v5
ineligible_reason: model is not deployable
language_model: megumi21/Megumi-Chat-7B-v0.5
model_size: 7B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-03-28
win_ratio: 0.4
preference_data_url: None
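The formatter fields above are plain string templates. A minimal sketch of how a prompt could be assembled from them follows; the bot name, persona, and messages are illustrative, the real serving code is not shown in this log, and truncation to max_input_tokens: 512 is omitted.

```python
# Minimal sketch: assembling a prompt from the formatter templates above.
# bot_name, user_name, memory, and the messages are made-up examples.
MEMORY_TEMPLATE = (
    "### Instruction:\nYou are a creative agent roleplaying as a character "
    "called {bot_name}. Stay true to the persona given, reply with short and "
    "descriptive sentences. Do not be repetitive.\n"
    "{bot_name}'s Persona: {memory}\n"
)
PROMPT_TEMPLATE = "### Input:\n# Example conversation:\n{prompt}\n# Actual conversation:\n"
BOT_TEMPLATE = "{bot_name}: {message}\n"
USER_TEMPLATE = "{user_name}: {message}\n"
RESPONSE_TEMPLATE = "### Response: {bot_name}:"

def build_prompt(bot_name, user_name, memory, example, history):
    """history is a list of (speaker_name, message) tuples, oldest first."""
    parts = [
        MEMORY_TEMPLATE.format(bot_name=bot_name, memory=memory),
        PROMPT_TEMPLATE.format(prompt=example),
    ]
    for speaker, message in history:
        if speaker == bot_name:
            parts.append(BOT_TEMPLATE.format(bot_name=speaker, message=message))
        else:
            parts.append(USER_TEMPLATE.format(user_name=speaker, message=message))
    parts.append(RESPONSE_TEMPLATE.format(bot_name=bot_name))
    return "".join(parts)

print(build_prompt(
    bot_name="Megumi",                                    # illustrative
    user_name="Anon",                                     # illustrative
    memory="A cheerful sorceress who loves explosions.",  # illustrative
    example="Megumi: Hello!",
    history=[("Anon", "Hi, who are you?")],
))
```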
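generation_params and best_of: 8 suggest best-of-n sampling reranked by the reward model listed in reward_repo. A minimal sketch under that assumption follows; the vLLM engine, the AutoModelForSequenceClassification head, and the use of the reward model's first logit as the preference score are all assumptions, since the actual serving and scoring code is not shown in this log.

```python
# Sketch of best_of=8 sampling reranked by the reward model; engine and
# scoring-head choices are assumptions, not the pipeline's actual code.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from vllm import LLM, SamplingParams

sampling = SamplingParams(
    n=8,                      # best_of: 8 -> sample 8 candidates per turn
    temperature=1.1,
    top_p=1.0,
    min_p=0.0,
    top_k=40,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["\n"],              # stopping_words
    max_tokens=64,            # max_output_tokens
)
llm = LLM(model="megumi21/Megumi-Chat-7B-v0.5")

reward_tok = AutoTokenizer.from_pretrained("ChaiML/reward_gpt2_medium_preference_24m_e2")
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "ChaiML/reward_gpt2_medium_preference_24m_e2"
)

def best_response(prompt: str, reward_prefix: str) -> str:
    """reward_prefix is the conversation laid out with the reward_formatter
    templates, ending in '{bot_name}:' per its response_template."""
    candidates = llm.generate([prompt], sampling)[0].outputs
    scores = []
    for cand in candidates:
        inputs = reward_tok(reward_prefix + cand.text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            # Assumption: first logit acts as the preference score.
            scores.append(reward_model(**inputs).logits[0, 0].item())
    return max(zip(scores, candidates), key=lambda sc: sc[0])[1].text
```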
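As a quick consistency check on the battle statistics above: win_ratio = num_wins / num_battles = 14 / 35 = 0.4, matching the reported value.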
Running pipeline stage MKMLizer
Starting job with name megumi21-megumi-chat-7b-v0-5-v2-mkmlizer
Waiting for job on megumi21-megumi-chat-7b-v0-5-v2-mkmlizer to finish
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: [flywheel ASCII-art banner]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Version: 0.6.11
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Copyright 2023 MK ONE TECHNOLOGIES Inc.
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: The license key for the current software has been verified as belonging to: Chai Research Corp.
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Expiration: 2024-07-15 23:59:59
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: .gitattributes: 100%|██████████| 1.52k/1.52k [00:00<00:00, 17.3MB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: config.json: 100%|██████████| 654/654 [00:00<00:00, 9.69MB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: generation_config.json: 100%|██████████| 132/132 [00:00<00:00, 1.35MB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: pytorch_model-00001-of-00003.bin: 100%|█████████▉| 4.94G/4.94G [00:02<00:00, 1.67GB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: pytorch_model-00002-of-00003.bin: 100%|█████████▉| 5.00G/5.00G [00:03<00:00, 1.50GB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: pytorch_model-00003-of-00003.bin: 100%|█████████▉| 4.54G/4.54G [00:02<00:00, 1.79GB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: pytorch_model.bin.index.json: 100%|██████████| 23.9k/23.9k [00:00<00:00, 123MB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: special_tokens_map.json: 100%|██████████| 437/437 [00:00<00:00, 4.87MB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 23.9MB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: tokenizer_config.json: 100%|██████████| 1.51k/1.51k [00:00<00:00, 15.2MB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Downloaded to shared memory in 11.078s
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: quantizing model to /dev/shm/model_cache
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Saving mkml model at /dev/shm/model_cache
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Reading /tmp/tmp1p7eoeaf/pytorch_model.bin.index.json
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Profiling: 100%|██████████| 291/291 [00:04<00:00, 60.12it/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: quantized model in 15.120s
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Processed model megumi21/Megumi-Chat-7B-v0.5 in 27.062s
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: creating bucket guanaco-mkml-models
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v2
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v2/special_tokens_map.json
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v2/config.json
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v2/tokenizer.model
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v2/tokenizer.json
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: cp /dev/shm/model_cache/mkml_model.tensors s3://guanaco-mkml-models/megumi21-megumi-chat-7b-v0-5-v2/mkml_model.tensors
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1067: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: warnings.warn(
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: config.json: 100%|██████████| 1.05k/1.05k [00:00<00:00, 11.0MB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:690: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: warnings.warn(
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: tokenizer_config.json: 100%|██████████| 234/234 [00:00<00:00, 2.65MB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 21.5MB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 18.2MB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: warnings.warn(
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: pytorch_model.bin: 100%|█████████▉| 1.44G/1.44G [00:01<00:00, 1.07GB/s]
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Saving duration: 0.248s
megumi21-megumi-chat-7b-v0-5-v2-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 4.866s
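The FutureWarnings above are triggered by the deprecated `use_auth_token` argument, which recent transformers versions replace with `token`, as the warning text itself says. A minimal illustration of the caller-side fix; the AutoModel class and the token value are placeholders, since the mkmlizer's actual loading code is not shown.

```python
from transformers import AutoModel

# Old, deprecated style that triggers the FutureWarning:
#   AutoModel.from_pretrained("ChaiML/reward_gpt2_medium_preference_24m_e2",
#                             use_auth_token="hf_...")
# Current style:
model = AutoModel.from_pretrained(
    "ChaiML/reward_gpt2_medium_preference_24m_e2",
    token="hf_...",  # placeholder Hugging Face access token
)
```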
Job megumi21-megumi-chat-7b-v0-5-v2-mkmlizer completed after 53.77s with status: succeeded
Stopping job with name megumi21-megumi-chat-7b-v0-5-v2-mkmlizer
Pipeline stage MKMLizer completed in 60.43s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.12s
Running pipeline stage ISVCDeployer
Creating inference service megumi21-megumi-chat-7b-v0-5-v2
Waiting for inference service megumi21-megumi-chat-7b-v0-5-v2 to be ready
Inference service megumi21-megumi-chat-7b-v0-5-v2 ready after 40.2596492767334s
Pipeline stage ISVCDeployer completed in 48.54s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.6513586044311523s
Received healthy response to inference request in 1.1041052341461182s
Received healthy response to inference request in 1.0887043476104736s
Received healthy response to inference request in 1.1540875434875488s
Received healthy response to inference request in 1.0925798416137695s
5 requests
0 failed requests
5th percentile: 1.0894794464111328
10th percentile: 1.090254545211792
20th percentile: 1.0918047428131104
30th percentile: 1.0948849201202393
40th percentile: 1.0994950771331786
50th percentile: 1.1041052341461182
60th percentile: 1.1240981578826905
70th percentile: 1.1440910816192627
80th percentile: 1.2535417556762696
90th percentile: 1.452450180053711
95th percentile: 1.5519043922424316
99th percentile: 1.6314677619934081
mean time: 1.2181671142578125
Pipeline stage StressChecker completed in 6.93s
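The StressChecker percentile and mean figures above are exactly what linear-interpolation percentiles over the five measured latencies give. A minimal sketch to reproduce them, assuming numpy's default percentile method:

```python
# Reproducing the StressChecker statistics from the five latencies above.
import numpy as np

# The five healthy response latencies, in seconds, from the log above.
latencies = [
    1.6513586044311523,
    1.1041052341461182,
    1.0887043476104736,
    1.1540875434875488,
    1.0925798416137695,
]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print(f"mean time: {np.mean(latencies)}")
```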
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.06s
Running pipeline stage DaemonicSafetyScorer
Pipeline stage DaemonicSafetyScorer completed in 0.05s
Running M-Eval for topic stay_in_character
megumi21-megumi-chat-7b-v0-5_v2 status is now deployed due to DeploymentManager action
M-Eval Dataset for topic stay_in_character is loaded
megumi21-megumi-chat-7b-v0-5_v2 status is now rejected due to its ELO being less than the acceptable minimum of 6.5

Usage Metrics: [chart omitted]
Latency Metrics: [chart omitted]