developer_uid: huohuo12
submission_id: koboldai-llama2-13b-tie_20758_v7
model_name: koboldai-llama2-13b-tie_20758_v7
model_group: KoboldAI/LLaMA2-13B-Tief
status: torndown
timestamp: 2025-02-18T03:09:45+00:00
num_battles: 6484
num_wins: 2923
celo_rating: 1207.1
family_friendly_score: 0.6018
family_friendly_standard_error: 0.006922958327189324
submission_type: basic
model_repo: KoboldAI/LLaMA2-13B-Tiefighter
model_architecture: LlamaForCausalLM
model_num_parameters: 13015864320.0
best_of: 8
max_input_tokens: 1024
max_output_tokens: 64
reward_model: default
display_name: koboldai-llama2-13b-tie_20758_v7
is_internal_developer: False
language_model: KoboldAI/LLaMA2-13B-Tiefighter
model_size: 13B
ranking_group: single
us_pacific_date: 2025-02-17
win_ratio: 0.45080197409006784
generation_params: {'temperature': 0.75, 'top_p': 0.92, 'min_p': 0.01, 'top_k': 45, 'presence_penalty': 0.65, 'frequency_penalty': 0.35, 'stopping_words': ['\n'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
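The win_ratio and family_friendly_standard_error in the record above are consistent with simple derived statistics. A minimal sketch, assuming win_ratio = num_wins / num_battles and a binomial standard error sqrt(p(1-p)/n); the implied sample size of roughly 5,000 rated conversations is an inference from the numbers, not a value reported in this record.

```python
import math

# Values copied from the submission record above.
num_wins, num_battles = 2923, 6484
print(num_wins / num_battles)            # 0.45080197409006784, matches win_ratio

p = 0.6018                               # family_friendly_score
se = 0.006922958327189324                # family_friendly_standard_error

# Assumed model: the score is a sample proportion with SE = sqrt(p*(1-p)/n).
# Solving for n suggests roughly 5,000 rated samples (an inference, not reported).
print(round(p * (1 - p) / se ** 2))      # ~5000
print(math.sqrt(p * (1 - p) / 5000))     # ≈ 0.0069230, close to the reported SE
```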
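The formatter field is a set of string templates used to assemble the model input. The sketch below builds a prompt from those exact templates; the persona and conversation passed in are purely illustrative and do not come from the log.

```python
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, turns):
    """Assemble the model input from the templates; turns is a list of (role, message)."""
    parts = [formatter["memory_template"].format(bot_name=bot_name, memory=memory),
             formatter["prompt_template"].format(prompt=prompt)]
    for role, message in turns:
        if role == "bot":
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
    # The response_template leaves the prompt ending with "{bot_name}:" for the model to continue.
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

# Illustrative values only; nothing below is taken from the log.
print(build_prompt("Tiefighter", "Alice", "A witty starship pilot.",
                   "Alice boards the ship.", [("user", "Hi there!")]))
```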
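The generation_params map directly onto a sampling request. A minimal sketch of posting them to a completions-style endpoint follows; the URL, route, and response shape are assumptions for illustration and are not the actual serving API, and the prompt string is made up.

```python
import requests

# Sampling settings copied from generation_params above. best_of=8 means eight
# candidate completions are sampled and a single one is returned.
payload = {
    "prompt": "Tiefighter's Persona: A witty starship pilot.\n####\nAlice: Hi there!\nTiefighter:",
    "temperature": 0.75,
    "top_p": 0.92,
    "min_p": 0.01,
    "top_k": 45,
    "presence_penalty": 0.65,
    "frequency_penalty": 0.35,
    "stop": ["\n"],            # stopping_words
    "max_tokens": 64,          # max_output_tokens
    "best_of": 8,
}

# Hypothetical endpoint; the real serving route is not shown in this log.
resp = requests.post("http://localhost:8080/v1/completions", json=payload, timeout=12)
print(resp.json())
```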
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name koboldai-llama2-13b-tie-20758-v7-mkmlizer
Waiting for job on koboldai-llama2-13b-tie-20758-v7-mkmlizer to finish
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ _____ __ __ ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ /___/ ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ Version: 0.12.8 ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ https://mk1.ai ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ The license key for the current software has been verified as ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ belonging to: ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ Chai Research Corp. ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ Expiration: 2025-04-15 23:59:59 ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ║ ║
koboldai-llama2-13b-tie-20758-v7-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
koboldai-llama2-13b-tie-20758-v7-mkmlizer: Downloaded to shared memory in 25.666s
koboldai-llama2-13b-tie-20758-v7-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpcj_dwwh6, device:0
koboldai-llama2-13b-tie-20758-v7-mkmlizer: Saving flywheel model at /dev/shm/model_cache
koboldai-llama2-13b-tie-20758-v7-mkmlizer: /opt/conda/lib/python3.10/site-packages/mk1/flywheel/functional/loader.py:55: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
koboldai-llama2-13b-tie-20758-v7-mkmlizer: tensors = torch.load(model_shard_filename, map_location=torch.device(self.device), mmap=True)
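The FutureWarning above is raised because torch.load is called without weights_only. A minimal sketch of the safer pattern the warning recommends; the filename is illustrative and this is not the flywheel loader itself.

```python
import torch

# Restrict unpickling to plain tensors and containers, as the warning suggests.
# The path below is illustrative; the real shard filename comes from the loader.
tensors = torch.load("model_shard.pt", map_location="cpu", mmap=True, weights_only=True)
```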
koboldai-llama2-13b-tie-20758-v7-mkmlizer: quantized model in 27.457s
koboldai-llama2-13b-tie-20758-v7-mkmlizer: Processed model KoboldAI/LLaMA2-13B-Tiefighter in 53.123s
koboldai-llama2-13b-tie-20758-v7-mkmlizer: creating bucket guanaco-mkml-models
koboldai-llama2-13b-tie-20758-v7-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
koboldai-llama2-13b-tie-20758-v7-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v7
koboldai-llama2-13b-tie-20758-v7-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v7/config.json
koboldai-llama2-13b-tie-20758-v7-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v7/special_tokens_map.json
koboldai-llama2-13b-tie-20758-v7-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v7/tokenizer_config.json
koboldai-llama2-13b-tie-20758-v7-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v7/tokenizer.model
koboldai-llama2-13b-tie-20758-v7-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/koboldai-llama2-13b-tie-20758-v7/flywheel_model.0.safetensors
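The cp lines above copy the quantized artifacts from /dev/shm/model_cache into the guanaco-mkml-models bucket. A minimal sketch of the same copies using boto3; the choice of boto3 is an assumption, since the log does not show which S3 client performs the upload.

```python
import boto3

s3 = boto3.client("s3")
bucket = "guanaco-mkml-models"
prefix = "koboldai-llama2-13b-tie-20758-v7"

for name in ["config.json", "special_tokens_map.json", "tokenizer_config.json",
             "tokenizer.model", "flywheel_model.0.safetensors"]:
    # Streams each local file to s3://guanaco-mkml-models/<prefix>/<name>
    s3.upload_file(f"/dev/shm/model_cache/{name}", bucket, f"{prefix}/{name}")
```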
koboldai-llama2-13b-tie-20758-v7-mkmlizer: Loading 0: 100%|██████████| 363/363 [00:09<00:00, 15.14it/s]
Failed to get response for submission chaiml-dn-20250217-c-4ep_8638_v1: HTTPConnectionPool(host='chaiml-dn-20250217-c-4ep-8638-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com', port=80): Read timed out. (read timeout=12.0)
Job koboldai-llama2-13b-tie-20758-v7-mkmlizer completed after 83.61s with status: succeeded
Stopping job with name koboldai-llama2-13b-tie-20758-v7-mkmlizer
Pipeline stage MKMLizer completed in 84.10s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.14s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service koboldai-llama2-13b-tie-20758-v7
Waiting for inference service koboldai-llama2-13b-tie-20758-v7 to be ready
Inference service koboldai-llama2-13b-tie-20758-v7 ready after 200.6568467617035s
Pipeline stage MKMLDeployer completed in 201.14s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.5717341899871826s
Received healthy response to inference request in 1.826598882675171s
Received healthy response to inference request in 1.7454485893249512s
Received healthy response to inference request in 1.714095115661621s
Received healthy response to inference request in 1.7496535778045654s
5 requests
0 failed requests
5th percentile: 1.7203658103942872
10th percentile: 1.7266365051269532
20th percentile: 1.7391778945922851
30th percentile: 1.7462895870208741
40th percentile: 1.7479715824127198
50th percentile: 1.7496535778045654
60th percentile: 1.7804316997528076
70th percentile: 1.8112098217010497
80th percentile: 1.9756259441375734
90th percentile: 2.273680067062378
95th percentile: 2.42270712852478
99th percentile: 2.541928777694702
mean time: 1.9215060710906982
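The StressChecker statistics above can be reproduced from the five response times, assuming linear-interpolation percentiles (numpy's default), which match the reported values.

```python
import numpy as np

# Response times (seconds) from the five healthy inference requests above.
latencies = [2.5717341899871826, 1.826598882675171, 1.7454485893249512,
             1.714095115661621, 1.7496535778045654]

for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{q}th percentile: {np.percentile(latencies, q)}")
print("mean time:", np.mean(latencies))   # ≈ 1.9215060710906982
```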
Pipeline stage StressChecker completed in 11.19s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.65s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.61s
Shutdown handler de-registered
koboldai-llama2-13b-tie_20758_v7 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
%s, retrying in %s seconds...
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 4856.91s
Shutdown handler de-registered
koboldai-llama2-13b-tie_20758_v7 status is now inactive due to auto deactivation (removal of underperforming models)
koboldai-llama2-13b-tie_20758_v7 status is now torndown due to DeploymentManager action
admin requested tearing down of teknium-airoboros-mistr_79536_v2
Shutdown handler not registered because Python interpreter is not running in the main thread