developer_uid: rinen0721
submission_id: rinen0721-llama1013_v1
model_name: rinen0721-llama1013_v1
model_group: rinen0721/llama1013
status: torndown
timestamp: 2024-10-13T11:14:36+00:00
num_battles: 7104
num_wins: 3130
celo_rating: 1215.68
family_friendly_score: 0.6000393952824149
family_friendly_standard_error: 0.0059324745079214316
submission_type: basic
model_repo: rinen0721/llama1013
model_architecture: LlamaForCausalLM
model_num_parameters: 8030261248.0
best_of: 8
max_input_tokens: 1024
max_output_tokens: 64
display_name: rinen0721-llama1013_v1
is_internal_developer: False
language_model: rinen0721/llama1013
model_size: 8B
ranking_group: single
us_pacific_date: 2024-10-13
win_ratio: 0.44059684684684686
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
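The `generation_params` and `formatter` entries above fully determine how each request is rendered and how completions are truncated. The sketch below illustrates that flow; the `render` and `apply_stop` helpers are hypothetical illustrations, not part of the actual serving code:

```python
# Sanity check: win_ratio is simply num_wins / num_battles.
num_battles, num_wins = 7104, 3130
assert abs(num_wins / num_battles - 0.44059684684684686) < 1e-12

# The formatter templates from the submission metadata.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def render(memory, prompt, turns, bot_name, user_name):
    """Assemble the full model input from persona, scene prompt, and chat turns."""
    out = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
    out += formatter["prompt_template"].format(prompt=prompt)
    for speaker, message in turns:
        if speaker == "bot":
            out += formatter["bot_template"].format(bot_name=bot_name, message=message)
        else:
            out += formatter["user_template"].format(user_name=user_name, message=message)
    # The response template ends the input right after "{bot_name}:",
    # so the model continues from there.
    return out + formatter["response_template"].format(bot_name=bot_name)

def apply_stop(completion, stops=("\n",)):
    """Truncate a completion at the first stopping word (here a newline)."""
    for stop in stops:
        idx = completion.find(stop)
        if idx != -1:
            completion = completion[:idx]
    return completion
```

With `stopping_words: ['\n']` and `max_output_tokens: 64`, each reply is effectively a single chat line of at most 64 tokens; `best_of: 8` means eight candidate completions are sampled per request and one is kept.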

Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name rinen0721-llama1013-v1-mkmlizer
Waiting for job on rinen0721-llama1013-v1-mkmlizer to finish
rinen0721-llama1013-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
rinen0721-llama1013-v1-mkmlizer: ║ _____ __ __ ║
rinen0721-llama1013-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
rinen0721-llama1013-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
rinen0721-llama1013-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
rinen0721-llama1013-v1-mkmlizer: ║ /___/ ║
rinen0721-llama1013-v1-mkmlizer: ║ ║
rinen0721-llama1013-v1-mkmlizer: ║ Version: 0.11.12 ║
rinen0721-llama1013-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
rinen0721-llama1013-v1-mkmlizer: ║ https://mk1.ai ║
rinen0721-llama1013-v1-mkmlizer: ║ ║
rinen0721-llama1013-v1-mkmlizer: ║ The license key for the current software has been verified as ║
rinen0721-llama1013-v1-mkmlizer: ║ belonging to: ║
rinen0721-llama1013-v1-mkmlizer: ║ ║
rinen0721-llama1013-v1-mkmlizer: ║ Chai Research Corp. ║
rinen0721-llama1013-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
rinen0721-llama1013-v1-mkmlizer: ║ Expiration: 2024-10-15 23:59:59 ║
rinen0721-llama1013-v1-mkmlizer: ║ ║
rinen0721-llama1013-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
rinen0721-llama1013-v1-mkmlizer: Downloaded to shared memory in 33.770s
rinen0721-llama1013-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmps1122aj6, device:0
rinen0721-llama1013-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
rinen0721-llama1013-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/mk1/flywheel/functional/loader.py:55: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
rinen0721-llama1013-v1-mkmlizer: tensors = torch.load(model_shard_filename, map_location=torch.device(self.device), mmap=True)
rinen0721-llama1013-v1-mkmlizer: quantized model in 26.328s
rinen0721-llama1013-v1-mkmlizer: Processed model rinen0721/llama1013 in 60.099s
rinen0721-llama1013-v1-mkmlizer: creating bucket guanaco-mkml-models
rinen0721-llama1013-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
rinen0721-llama1013-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/rinen0721-llama1013-v1
rinen0721-llama1013-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/rinen0721-llama1013-v1/config.json
rinen0721-llama1013-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/rinen0721-llama1013-v1/special_tokens_map.json
rinen0721-llama1013-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/rinen0721-llama1013-v1/tokenizer_config.json
rinen0721-llama1013-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/rinen0721-llama1013-v1/tokenizer.json
rinen0721-llama1013-v1-mkmlizer: Loading 0: 97%|█████████▋| 283/291 [00:05<00:00, 76.96it/s]
Job rinen0721-llama1013-v1-mkmlizer completed after 83.59s with status: succeeded
Stopping job with name rinen0721-llama1013-v1-mkmlizer
Pipeline stage MKMLizer completed in 84.09s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.16s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service rinen0721-llama1013-v1
Waiting for inference service rinen0721-llama1013-v1 to be ready
Inference service rinen0721-llama1013-v1 ready after 170.57566022872925s
Pipeline stage MKMLDeployer completed in 171.10s
run pipeline stage %s
Running pipeline stage StressChecker
{"detail":"(<class 'abc.InfernoUnion'>, <class 'cachetools.keys._HashedTuple'>, 'submission_id', 'blend_pifos_2024-10-07')"}
Received unhealthy response to inference request!
Received healthy response to inference request in 1.44838285446167s
Received healthy response to inference request in 1.5745372772216797s
Received healthy response to inference request in 1.362255573272705s
Received healthy response to inference request in 1.5859997272491455s
5 requests
1 failed requests
5th percentile: 1.379481029510498
10th percentile: 1.396706485748291
20th percentile: 1.431157398223877
30th percentile: 1.4736137390136719
40th percentile: 1.5240755081176758
50th percentile: 1.5745372772216797
60th percentile: 1.579122257232666
Failed to get response for submission sao10k-mn-12b-spicier_v1: ('http://sao10k-mn-12b-spicier-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:47934->127.0.0.1:8080: read: connection reset by peer\n')
70th percentile: 1.5837072372436523
80th percentile: 1.6140777111053468
90th percentile: 1.670233678817749
95th percentile: 1.69831166267395
99th percentile: 1.720774049758911
mean time: 1.5395130157470702
%s, retrying in %s seconds...
Received healthy response to inference request in 1.5075204372406006s
Received healthy response to inference request in 1.5091094970703125s
Received healthy response to inference request in 1.5892539024353027s
Received healthy response to inference request in 1.4463605880737305s
Received healthy response to inference request in 1.4239916801452637s
5 requests
0 failed requests
5th percentile: 1.4284654617309571
10th percentile: 1.4329392433166503
20th percentile: 1.441886806488037
30th percentile: 1.4585925579071044
40th percentile: 1.4830564975738525
50th percentile: 1.5075204372406006
60th percentile: 1.5081560611724854
70th percentile: 1.5087916851043701
80th percentile: 1.5251383781433105
90th percentile: 1.5571961402893066
95th percentile: 1.5732250213623047
99th percentile: 1.5860481262207031
mean time: 1.495247220993042
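The percentile lines above are ordinary linearly interpolated percentiles over the five healthy response times. A short pure-Python sketch reproducing the second run's figures, assuming numpy-style "linear" interpolation between closest ranks:

```python
def percentile(values, p):
    # Linear interpolation between closest ranks (numpy's default "linear" method).
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(xs) - 1)
    return xs[f] + (k - f) * (xs[c] - xs[f])

# Healthy response times from the second StressChecker run, in seconds.
times = [
    1.5075204372406006,
    1.5091094970703125,
    1.5892539024353027,
    1.4463605880737305,
    1.4239916801452637,
]

assert abs(percentile(times, 5) - 1.4284654617309571) < 1e-9
assert abs(percentile(times, 50) - 1.5075204372406006) < 1e-9
assert abs(percentile(times, 90) - 1.5571961402893066) < 1e-9
assert abs(sum(times) / len(times) - 1.495247220993042) < 1e-9
```

Note that with only five samples, the 50th percentile is exactly the median response time, and the tail percentiles (95th, 99th) are interpolated between the two slowest requests.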
Pipeline stage StressChecker completed in 17.92s
Shutdown handler de-registered
rinen0721-llama1013_v1 status is now deployed due to DeploymentManager action
rinen0721-llama1013_v1 status is now inactive due to auto deactivation removed underperforming models
rinen0721-llama1013_v1 status is now torndown due to DeploymentManager action