submission_id: grimjim-kukulemon-7b_v3
developer_uid: zonemercy
status: inactive
model_repo: grimjim/kukulemon-7B
reward_repo: rirv938/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.2, 'top_p': 1.0, 'top_k': 50, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '</s>', '###'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
formatter: {'memory_template': 'Role-play as {bot_name} based on the Persona: {memory}. Engage user with detailed, creative messages that invite further discussion. Stay in character, keep responses moderately sized for an energetic exchange.', 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:'}
timestamp: 2024-03-22T06:02:59+00:00
model_name: grimjim-kukulemon-7b_v3
model_eval_status: success
safety_score: 0.97
entertaining: 7.24
stay_in_character: 8.62
user_preference: 7.56
double_thumbs_up: 1105
thumbs_up: 1554
thumbs_down: 619
num_battles: 111997
num_wins: 59669
win_ratio: 0.5327731992821236
celo_rating: 1180.8
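The formatter entry above stitches persona, scenario, chat history, and a generation cue into one prompt string. A minimal sketch of how those templates might compose — the persona, names, messages, and the exact joining/separators are hypothetical; the platform's real assembly code is not shown in this log:

```python
# Templates copied from the formatter metadata above.
formatter = {
    "memory_template": "Role-play as {bot_name} based on the Persona: {memory}. Engage user with detailed, creative messages that invite further discussion. Stay in character, keep responses moderately sized for an energetic exchange.",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}


def build_prompt(bot_name, user_name, memory, scenario, turns):
    """Compose a prompt from the template dict.

    turns is a list of (speaker, message) pairs, speaker being "bot"
    or "user". All example values below are hypothetical.
    """
    parts = [
        formatter["memory_template"].format(bot_name=bot_name, memory=memory),
        formatter["prompt_template"].format(prompt=scenario),
    ]
    for speaker, message in turns:
        if speaker == "bot":
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
    # The model continues generation right after "{bot_name}:".
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)


prompt = build_prompt(
    bot_name="Lemon",
    user_name="Alex",
    memory="A cheerful citrus spirit.",
    scenario="Lemon greets travellers at the orchard gate.",
    turns=[("user", "Hi there!"), ("bot", "Welcome, traveller!")],
)
```

Note that the stopping_words in generation_params ('\n', '</s>', '###') line up with this format: each turn is newline-terminated, so stopping at '\n' ends the bot's reply after one message.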
Resubmit model
Running pipeline stage MKMLizer
Starting job with name grimjim-kukulemon-7b-v3-mkmlizer
Waiting for job on grimjim-kukulemon-7b-v3-mkmlizer to finish
grimjim-kukulemon-7b-v3-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
grimjim-kukulemon-7b-v3-mkmlizer: ║ _____ __ __ ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ /___/ ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ Version: 0.6.11 ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ The license key for the current software has been verified as ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ belonging to: ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ Chai Research Corp. ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ Expiration: 2024-04-15 23:59:59 ║
grimjim-kukulemon-7b-v3-mkmlizer: ║ ║
grimjim-kukulemon-7b-v3-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
grimjim-kukulemon-7b-v3-mkmlizer: .gitattributes: 100%|██████████| 1.52k/1.52k [00:00<00:00, 20.0MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: README.md: 100%|██████████| 1.93k/1.93k [00:00<00:00, 31.5MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: config.json: 100%|██████████| 645/645 [00:00<00:00, 4.62MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: mergekit_config.yml: 100%|██████████| 481/481 [00:00<00:00, 7.73MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: model-00002-of-00002.safetensors: 100%|█████████▉| 4.62G/4.62G [00:02<00:00, 1.83GB/s]
grimjim-kukulemon-7b-v3-mkmlizer: model.safetensors.index.json: 100%|██████████| 22.8k/22.8k [00:00<00:00, 100MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: special_tokens_map.json: 100%|██████████| 414/414 [00:00<00:00, 4.51MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: tokenizer.json: 100%|██████████| 1.80M/1.80M [00:00<00:00, 6.42MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 58.8MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: tokenizer_config.json: 100%|██████████| 967/967 [00:00<00:00, 8.08MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: Downloaded to shared memory in 10.154s
grimjim-kukulemon-7b-v3-mkmlizer: quantizing model to /dev/shm/model_cache
grimjim-kukulemon-7b-v3-mkmlizer: Saving mkml model at /dev/shm/model_cache
grimjim-kukulemon-7b-v3-mkmlizer: Reading /tmp/tmpjerpvb7y/model.safetensors.index.json
grimjim-kukulemon-7b-v3-mkmlizer: Profiling: 100%|██████████| 291/291 [00:04<00:00, 66.48it/s]
grimjim-kukulemon-7b-v3-mkmlizer: quantized model in 14.020s
grimjim-kukulemon-7b-v3-mkmlizer: Processed model grimjim/kukulemon-7B in 25.047s
grimjim-kukulemon-7b-v3-mkmlizer: creating bucket guanaco-mkml-models
grimjim-kukulemon-7b-v3-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
grimjim-kukulemon-7b-v3-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/grimjim-kukulemon-7b-v3
grimjim-kukulemon-7b-v3-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/grimjim-kukulemon-7b-v3/config.json
grimjim-kukulemon-7b-v3-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/grimjim-kukulemon-7b-v3/tokenizer_config.json
grimjim-kukulemon-7b-v3-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/grimjim-kukulemon-7b-v3/tokenizer.json
grimjim-kukulemon-7b-v3-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/grimjim-kukulemon-7b-v3/special_tokens_map.json
grimjim-kukulemon-7b-v3-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/grimjim-kukulemon-7b-v3/tokenizer.model
grimjim-kukulemon-7b-v3-mkmlizer: cp /dev/shm/model_cache/mkml_model.tensors s3://guanaco-mkml-models/grimjim-kukulemon-7b-v3/mkml_model.tensors
grimjim-kukulemon-7b-v3-mkmlizer: loading reward model from rirv938/reward_gpt2_medium_preference_24m_e2
grimjim-kukulemon-7b-v3-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1067: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
grimjim-kukulemon-7b-v3-mkmlizer: warnings.warn(
grimjim-kukulemon-7b-v3-mkmlizer: config.json: 100%|██████████| 1.05k/1.05k [00:00<00:00, 289kB/s]
grimjim-kukulemon-7b-v3-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:690: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
grimjim-kukulemon-7b-v3-mkmlizer: warnings.warn(
grimjim-kukulemon-7b-v3-mkmlizer: tokenizer_config.json: 100%|██████████| 234/234 [00:00<00:00, 2.22MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 23.4MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: tokenizer.json: 100%|██████████| 2.11M/2.11M [00:00<00:00, 9.77MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
grimjim-kukulemon-7b-v3-mkmlizer: warnings.warn(
grimjim-kukulemon-7b-v3-mkmlizer: pytorch_model.bin: 100%|█████████▉| 1.44G/1.44G [00:05<00:00, 276MB/s]
grimjim-kukulemon-7b-v3-mkmlizer: creating bucket guanaco-reward-models
grimjim-kukulemon-7b-v3-mkmlizer: Bucket 's3://guanaco-reward-models/' created
grimjim-kukulemon-7b-v3-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/grimjim-kukulemon-7b-v3_reward
grimjim-kukulemon-7b-v3-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/grimjim-kukulemon-7b-v3_reward/config.json
grimjim-kukulemon-7b-v3-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/grimjim-kukulemon-7b-v3_reward/special_tokens_map.json
grimjim-kukulemon-7b-v3-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/grimjim-kukulemon-7b-v3_reward/tokenizer_config.json
grimjim-kukulemon-7b-v3-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/grimjim-kukulemon-7b-v3_reward/merges.txt
grimjim-kukulemon-7b-v3-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/grimjim-kukulemon-7b-v3_reward/vocab.json
grimjim-kukulemon-7b-v3-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/grimjim-kukulemon-7b-v3_reward/tokenizer.json
grimjim-kukulemon-7b-v3-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/grimjim-kukulemon-7b-v3_reward/reward.tensors
Job grimjim-kukulemon-7b-v3-mkmlizer completed after 64.41s with status: succeeded
Stopping job with name grimjim-kukulemon-7b-v3-mkmlizer
Pipeline stage MKMLizer completed in 68.42s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.13s
Running pipeline stage ISVCDeployer
Creating inference service grimjim-kukulemon-7b-v3
Waiting for inference service grimjim-kukulemon-7b-v3 to be ready
Inference service grimjim-kukulemon-7b-v3 ready after 30.175511837005615s
Pipeline stage ISVCDeployer completed in 37.43s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.6849253177642822s
Received healthy response to inference request in 1.1873712539672852s
Received healthy response to inference request in 1.1822638511657715s
Received healthy response to inference request in 1.1732704639434814s
Received healthy response to inference request in 1.197577714920044s
5 requests
0 failed requests
5th percentile: 1.1750691413879395
10th percentile: 1.1768678188323975
20th percentile: 1.1804651737213134
30th percentile: 1.1832853317260743
40th percentile: 1.1853282928466797
50th percentile: 1.1873712539672852
60th percentile: 1.1914538383483886
70th percentile: 1.1955364227294922
80th percentile: 1.2950472354888918
90th percentile: 1.489986276626587
95th percentile: 1.5874557971954344
99th percentile: 1.6654314136505126
mean time: 1.285081720352173
Pipeline stage StressChecker completed in 7.25s
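The StressChecker percentiles above are consistent with linear-interpolation percentiles over the five healthy response times. A sketch that reproduces the logged figures — assuming the checker uses the standard linear-interpolation method (NumPy's default); its actual implementation is not shown in this log:

```python
def percentile(values, q):
    """Percentile by linear interpolation between closest ranks."""
    xs = sorted(values)
    pos = (q / 100) * (len(xs) - 1)       # fractional rank of the q-th percentile
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])


# The five healthy response times from the log, in seconds.
latencies = [
    1.6849253177642822,
    1.1873712539672852,
    1.1822638511657715,
    1.1732704639434814,
    1.197577714920044,
]

for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{q}th percentile: {percentile(latencies, q)}")
print(f"mean time: {sum(latencies) / len(latencies)}")
```

The printed values match the logged percentiles and mean to floating-point precision, e.g. the 50th percentile is the median 1.1873712539672852 and the mean is 1.285081720352173.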
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.05s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.06s
M-Eval Dataset for topic stay_in_character is loaded
grimjim-kukulemon-7b_v3 status is now inactive due to auto-deactivation of underperforming models
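Deactivation is driven by battle performance, and the headline win_ratio in the metadata is simply num_wins / num_battles. A quick arithmetic check against the logged figures (this is just a sanity check, not the platform's scoring code):

```python
# Battle counts from the metadata block above.
num_battles = 111997
num_wins = 59669

win_ratio = num_wins / num_battles
print(win_ratio)  # ~0.53277, matching the logged win_ratio
```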

Usage Metrics

Latency Metrics