developer_uid: huohuo12
submission_id: koboldai-llama2-13b-tie_20758_v8
model_name: koboldai-llama2-13b-tie_20758_v8
model_group: KoboldAI/LLaMA2-13B-Tiefighter
status: torndown
timestamp: 2025-02-18T03:13:26+00:00
num_battles: 5996
num_wins: 2656
celo_rating: 1201.75
family_friendly_score: 0.5986
family_friendly_standard_error: 0.006932215230357464
submission_type: basic
model_repo: KoboldAI/LLaMA2-13B-Tiefighter
model_architecture: LlamaForCausalLM
model_num_parameters: 13015864320.0
best_of: 8
max_input_tokens: 1024
max_output_tokens: 64
display_name: koboldai-llama2-13b-tie_20758_v8
is_internal_developer: False
language_model: KoboldAI/LLaMA2-13B-Tiefighter
model_size: 13B
ranking_group: single
us_pacific_date: 2025-02-17
win_ratio: 0.4429619746497665
generation_params: {'temperature': 0.85, 'top_p': 0.88, 'min_p': 0.025, 'top_k': 55, 'presence_penalty': 0.6, 'frequency_penalty': 0.4, 'stopping_words': ['\n'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
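The formatter above is a set of plain Python format strings. Below is a minimal sketch of how they might be assembled into a single prompt, assuming the usual memory -> prompt -> chat history -> response-cue order; the ordering and the example persona/messages are illustrative assumptions, and only the template strings themselves come from the formatter config above.

# Sketch only: assembly order and example values are assumptions, templates are from the config above.
memory_template = "{bot_name}'s Persona: {memory}\n####\n"
prompt_template = "{prompt}\n<START>\n"
bot_template = "{bot_name}: {message}\n"
user_template = "{user_name}: {message}\n"
response_template = "{bot_name}:"

def build_prompt(bot_name, user_name, memory, prompt, history):
    """history is a list of (speaker, message) tuples, oldest first."""
    parts = [
        memory_template.format(bot_name=bot_name, memory=memory),
        prompt_template.format(prompt=prompt),
    ]
    for speaker, message in history:
        template = bot_template if speaker == "bot" else user_template
        name = bot_name if speaker == "bot" else user_name
        parts.append(template.format(bot_name=name, user_name=name, message=message))
    parts.append(response_template.format(bot_name=bot_name))  # model completes after "{bot_name}:"
    return "".join(parts)

print(build_prompt(
    bot_name="Tiefighter",
    user_name="Traveller",
    memory="A laconic starship pilot.",
    prompt="The two meet in a hangar bay.",
    history=[("user", "Is this ship yours?"), ("bot", "She is. Careful with the paint.")],
))

In production the assembled string would also be truncated to the configured max_input_tokens (1024) before generation, which this sketch does not attempt.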
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name koboldai-llama2-13b-tie-20758-v8-mkmlizer
Waiting for job on koboldai-llama2-13b-tie-20758-v8-mkmlizer to finish
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ _____ __ __ ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ /___/ ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ Version: 0.12.8 ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ https://mk1.ai ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ The license key for the current software has been verified as ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ belonging to: ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ Chai Research Corp. ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ Expiration: 2025-04-15 23:59:59 ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ║ ║
koboldai-llama2-13b-tie-20758-v8-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
Failed to get response for submission koboldai-llama2-13b-tie_20758_v5: HTTPConnectionPool(host='koboldai-llama2-13b-tie-20758-v5-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
koboldai-llama2-13b-tie-20758-v8-mkmlizer: Downloaded to shared memory in 24.239s
koboldai-llama2-13b-tie-20758-v8-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmp3k0ita9p, device:0
koboldai-llama2-13b-tie-20758-v8-mkmlizer: Saving flywheel model at /dev/shm/model_cache
koboldai-llama2-13b-tie-20758-v8-mkmlizer: /opt/conda/lib/python3.10/site-packages/mk1/flywheel/functional/loader.py:55: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
koboldai-llama2-13b-tie-20758-v8-mkmlizer: tensors = torch.load(model_shard_filename, map_location=torch.device(self.device), mmap=True)
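The FutureWarning above comes from the flywheel loader's torch.load call. A minimal sketch of the safer call the warning recommends, with weights_only=True added; the shard filename and the CPU map_location are placeholders, not the pipeline's actual values.

import torch

# Placeholder shard name; the mkmlizer loads the source model's own shard files.
model_shard_filename = "model_shard_0.bin"
tensors = torch.load(
    model_shard_filename,
    map_location=torch.device("cpu"),  # the logged call maps onto device 0; CPU keeps the sketch self-contained
    mmap=True,                         # memory-map the file, as in the logged call
    weights_only=True,                 # restrict unpickling to tensors/primitives, as the warning suggests
)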
koboldai-llama2-13b-tie-20758-v8-mkmlizer: creating bucket guanaco-mkml-models
koboldai-llama2-13b-tie-20758-v8-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
koboldai-llama2-13b-tie-20758-v8-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v8
koboldai-llama2-13b-tie-20758-v8-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v8/config.json
koboldai-llama2-13b-tie-20758-v8-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v8/special_tokens_map.json
koboldai-llama2-13b-tie-20758-v8-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v8/tokenizer_config.json
koboldai-llama2-13b-tie-20758-v8-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v8/tokenizer.model
koboldai-llama2-13b-tie-20758-v8-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v8/tokenizer.json
koboldai-llama2-13b-tie-20758-v8-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v8/flywheel_model.0.safetensors
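The cp lines above copy the flywheel output from /dev/shm/model_cache into the guanaco-mkml-models bucket. The log does not show which S3 tool backs that cp command, so the sketch below uses boto3 purely as an illustrative stand-in; the bucket name, key prefix, and file list are taken from the log.

import os
import boto3

s3 = boto3.client("s3")
bucket = "guanaco-mkml-models"
prefix = "koboldai-llama2-13b-tie-20758-v8"
local_dir = "/dev/shm/model_cache"

# Upload each artifact listed in the log to s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v8/
for name in ["config.json", "special_tokens_map.json", "tokenizer_config.json",
             "tokenizer.model", "tokenizer.json", "flywheel_model.0.safetensors"]:
    s3.upload_file(os.path.join(local_dir, name), bucket, f"{prefix}/{name}")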
koboldai-llama2-13b-tie-20758-v8-mkmlizer: Loading 0: 99%|█████████▊| 358/363 [00:07<00:00, 61.40it/s]
Job koboldai-llama2-13b-tie-20758-v8-mkmlizer completed after 83.38s with status: succeeded
Stopping job with name koboldai-llama2-13b-tie-20758-v8-mkmlizer
Pipeline stage MKMLizer completed in 83.80s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.17s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service koboldai-llama2-13b-tie-20758-v8
Waiting for inference service koboldai-llama2-13b-tie-20758-v8 to be ready
Failed to get response for submission chaiml-dn-20250217-c-4ep_8638_v1: HTTPConnectionPool(host='chaiml-dn-20250217-c-4ep-8638-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
Inference service koboldai-llama2-13b-tie-20758-v8 ready after 210.65156388282776s
Pipeline stage MKMLDeployer completed in 211.14s
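The "Read timed out. (read timeout=12.0)" entries above suggest inference calls are plain HTTP POSTs with a 12-second read timeout against per-submission predictor hosts. A minimal sketch with requests; the endpoint path, the payload schema, and the v8 host name (formed by analogy with the v5 host in the log) are hypothetical.

import requests

# Hypothetical endpoint path and payload; only the host-naming pattern and the 12 s read timeout come from the log.
url = "http://koboldai-llama2-13b-tie-20758-v8-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/predict"
payload = {"text": "Traveller: Is this ship yours?"}

try:
    resp = requests.post(url, json=payload, timeout=(3.0, 12.0))  # (connect, read) timeouts; 12 s read matches the log
    resp.raise_for_status()
    print(resp.json())
except requests.exceptions.ReadTimeout:
    print("Read timed out after 12 s, as logged for other submissions above")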
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.7047135829925537s
Received healthy response to inference request in 1.842256784439087s
Received healthy response to inference request in 1.8040313720703125s
Received healthy response to inference request in 1.805990219116211s
Received healthy response to inference request in 1.7500693798065186s
5 requests
0 failed requests
5th percentile: 1.7608617782592773
10th percentile: 1.7716541767120362
20th percentile: 1.7932389736175538
30th percentile: 1.8044231414794922
40th percentile: 1.8052066802978515
50th percentile: 1.805990219116211
60th percentile: 1.8204968452453614
70th percentile: 1.8350034713745118
80th percentile: 2.0147481441497805
90th percentile: 2.359730863571167
95th percentile: 2.53222222328186
99th percentile: 2.670215311050415
mean time: 1.9814122676849366
Pipeline stage StressChecker completed in 11.04s
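The StressChecker summary above can be reproduced from the five logged response times, assuming linearly interpolated percentiles (numpy's default method):

import numpy as np

# The five healthy response times logged above, in seconds.
latencies = [2.7047135829925537, 1.842256784439087, 1.8040313720703125,
             1.805990219116211, 1.7500693798065186]

for p in [5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99]:
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print("mean time:", np.mean(latencies))  # 1.9814122676849366, matching the logged mean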
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.60s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.63s
Shutdown handler de-registered
koboldai-llama2-13b-tie_20758_v8 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.10s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.07s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service koboldai-llama2-13b-tie-20758-v8-profiler
Waiting for inference service koboldai-llama2-13b-tie-20758-v8-profiler to be ready
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
%s, retrying in %s seconds...
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 4841.79s
Shutdown handler de-registered
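As a back-of-envelope check only (the log does not say how the offline scorer aggregates judgements): if the family_friendly_score of 0.5986 in the metadata above were the mean of n independent 0/1 judgements, its standard error would be sqrt(p*(1-p)/n), and the quoted 0.00693 standard error would correspond to roughly n = 5000.

# Unconfirmed assumption: the score is a simple mean of binary judgements.
p = 0.5986
se = 0.006932215230357464
n = p * (1 - p) / se ** 2  # solve se = sqrt(p*(1-p)/n) for n
print(round(n))            # ~5000 judgements under this assumption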
koboldai-llama2-13b-tie_20758_v8 status is now inactive due to auto deactivation of underperforming models
koboldai-llama2-13b-tie_20758_v8 status is now torndown due to DeploymentManager action