Shutdown handler not registered because Python interpreter is not running in the main thread
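This message appears because Python only allows signal handlers to be registered from the main thread of the main interpreter; `signal.signal()` raises `ValueError` anywhere else. The pipeline's actual handler code is not shown in this log; below is a minimal sketch of the usual guard, with a hypothetical `_cleanup` hook standing in for the real teardown logic.

```python
import signal
import threading

def _cleanup(signum, frame):
    # Hypothetical teardown hook; the pipeline's real shutdown logic is not shown in this log.
    print(f"received signal {signum}, shutting down")

# signal.signal() raises ValueError outside the main thread, which is why the
# log reports that the shutdown handler was not registered.
if threading.current_thread() is threading.main_thread():
    signal.signal(signal.SIGTERM, _cleanup)
    signal.signal(signal.SIGINT, _cleanup)
else:
    print("Shutdown handler not registered because Python interpreter "
          "is not running in the main thread")
```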
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name koboldai-llama2-13b-ti-20758-v11-mkmlizer
Waiting for job on koboldai-llama2-13b-ti-20758-v11-mkmlizer to finish
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ [flywheel ASCII-art logo] ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ Version: 0.12.8 ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ https://mk1.ai ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ The license key for the current software has been verified as ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ belonging to: ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ Chai Research Corp. ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ Expiration: 2025-04-15 23:59:59 ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ║ ║
koboldai-llama2-13b-ti-20758-v11-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
Failed to get response for submission mistralai-mistral-nem_93303_v348: pending request queue full
koboldai-llama2-13b-ti-20758-v11-mkmlizer: Downloaded to shared memory in 24.215s
koboldai-llama2-13b-ti-20758-v11-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpf8ojcgrb, device:0
koboldai-llama2-13b-ti-20758-v11-mkmlizer: Saving flywheel model at /dev/shm/model_cache
koboldai-llama2-13b-ti-20758-v11-mkmlizer: /opt/conda/lib/python3.10/site-packages/mk1/flywheel/functional/loader.py:55: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
koboldai-llama2-13b-ti-20758-v11-mkmlizer: tensors = torch.load(model_shard_filename, map_location=torch.device(self.device), mmap=True)
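The FutureWarning above points at the fix PyTorch itself recommends: pass `weights_only=True` (or allowlist trusted globals via `torch.serialization.add_safe_globals`) so unpickling cannot execute arbitrary code. A minimal sketch of the safer variant of the call shown above, assuming the shard holds only plain tensor data; the real shard filename is not shown in the log.

```python
import torch

# Safer variant of the loader call above: weights_only=True restricts unpickling
# to tensors and other allowlisted types, so a malicious checkpoint cannot run
# arbitrary code during load. Assumes the shard contains only plain tensor data.
model_shard_filename = "model_shard.0.pt"  # hypothetical path for illustration
tensors = torch.load(
    model_shard_filename,
    map_location=torch.device("cuda:0"),
    mmap=True,
    weights_only=True,
)
```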
koboldai-llama2-13b-ti-20758-v11-mkmlizer: quantized model in 26.493s
koboldai-llama2-13b-ti-20758-v11-mkmlizer: Processed model KoboldAI/LLaMA2-13B-Tiefighter in 50.708s
koboldai-llama2-13b-ti-20758-v11-mkmlizer: creating bucket guanaco-mkml-models
koboldai-llama2-13b-ti-20758-v11-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
koboldai-llama2-13b-ti-20758-v11-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/koboldai-llama2-13b-ti-20758-v11
koboldai-llama2-13b-ti-20758-v11-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/koboldai-llama2-13b-ti-20758-v11/config.json
koboldai-llama2-13b-ti-20758-v11-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/koboldai-llama2-13b-ti-20758-v11/special_tokens_map.json
koboldai-llama2-13b-ti-20758-v11-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/koboldai-llama2-13b-ti-20758-v11/tokenizer_config.json
koboldai-llama2-13b-ti-20758-v11-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/koboldai-llama2-13b-ti-20758-v11/tokenizer.model
koboldai-llama2-13b-ti-20758-v11-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/koboldai-llama2-13b-ti-20758-v11/tokenizer.json
koboldai-llama2-13b-ti-20758-v11-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/koboldai-llama2-13b-ti-20758-v11/flywheel_model.0.safetensors
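The `cp` lines above show each artifact in /dev/shm/model_cache being mirrored under the model's prefix in the guanaco-mkml-models bucket. The log does not reveal which S3 client mkmlizer uses; the following is only an equivalent upload sketch using boto3.

```python
import os
import boto3  # assumption: the actual S3 client used by mkmlizer is not shown in the log

s3 = boto3.client("s3")
bucket = "guanaco-mkml-models"
prefix = "koboldai-llama2-13b-ti-20758-v11"
cache_dir = "/dev/shm/model_cache"

# Mirror the cp commands above: upload every cached artifact under the model prefix.
for name in os.listdir(cache_dir):
    s3.upload_file(os.path.join(cache_dir, name), bucket, f"{prefix}/{name}")
```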
koboldai-llama2-13b-ti-20758-v11-mkmlizer:
Loading 0: 0%| | 0/363 [00:00<?, ?it/s]
Loading 0: 98%|█████████▊| 355/363 [00:07<00:00, 61.15it/s]
Job koboldai-llama2-13b-ti-20758-v11-mkmlizer completed after 73.64s with status: succeeded
Stopping job with name koboldai-llama2-13b-ti-20758-v11-mkmlizer
Pipeline stage MKMLizer completed in 74.08s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.15s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service koboldai-llama2-13b-ti-20758-v11
Waiting for inference service koboldai-llama2-13b-ti-20758-v11 to be ready
Inference service koboldai-llama2-13b-ti-20758-v11 ready after 210.820476770401s
Pipeline stage MKMLDeployer completed in 211.31s
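The deployer blocks until the new inference service reports ready (about 211 s here). How MKMLDeployer performs that check is not shown in the log; the sketch below assumes a KServe-style `GET /v1/models/<name>` status endpoint and a hypothetical service URL, polled until it reports ready.

```python
import time
import requests  # assumption: the real deployer likely watches the Kubernetes object instead

def wait_until_ready(base_url, model_name, timeout_s=600, poll_s=5):
    """Poll a KServe-style model status endpoint until it reports ready."""
    start = time.time()
    while time.time() - start < timeout_s:
        try:
            resp = requests.get(f"{base_url}/v1/models/{model_name}", timeout=5)
            if resp.ok and resp.json().get("ready"):
                return time.time() - start
        except requests.RequestException:
            pass  # service not reachable yet; keep polling
        time.sleep(poll_s)
    raise TimeoutError(f"{model_name} not ready after {timeout_s}s")

# Hypothetical usage; the actual service URL is not shown in the log:
# elapsed = wait_until_ready("http://koboldai-llama2-13b-ti-20758-v11.example.svc",
#                            "koboldai-llama2-13b-ti-20758-v11")
```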
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.493412971496582s
Received healthy response to inference request in 1.7576768398284912s
Received healthy response to inference request in 1.7298376560211182s
Received healthy response to inference request in 1.729496717453003s
Received healthy response to inference request in 1.7553520202636719s
5 requests
0 failed requests
5th percentile: 1.729564905166626
10th percentile: 1.729633092880249
20th percentile: 1.7297694683074951
30th percentile: 1.734940528869629
40th percentile: 1.7451462745666504
50th percentile: 1.7553520202636719
60th percentile: 1.7562819480895997
70th percentile: 1.7572118759155273
80th percentile: 1.9048240661621094
90th percentile: 2.1991185188293456
95th percentile: 2.3462657451629636
99th percentile: 2.463983526229858
mean time: 1.8931552410125732
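Whatever the StressChecker uses internally, the statistics above can be reproduced from the five response times with numpy's default linear-interpolation percentiles; the sketch below prints the same values logged above.

```python
import numpy as np

# The five healthy response times reported above, in seconds.
latencies = [2.493412971496582, 1.7576768398284912, 1.7298376560211182,
             1.729496717453003, 1.7553520202636719]

# numpy's default (linear interpolation) percentile matches the logged figures,
# e.g. 5th percentile 1.729564905166626 and mean 1.8931552410125732.
for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print(f"mean time: {np.mean(latencies)}")
```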
Pipeline stage StressChecker completed in 11.26s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.63s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.64s
Shutdown handler de-registered
koboldai-llama2-13b-ti_20758_v11 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.08s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.06s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service koboldai-llama2-13b-ti-20758-v11-profiler
Waiting for inference service koboldai-llama2-13b-ti-20758-v11-profiler to be ready
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
%s, retrying in %s seconds...
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 4735.30s
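The "%s, retrying in %s seconds..." line above indicates the scorer wraps its evaluation calls in a retry-on-failure loop before re-running the Family Friendly evaluation. A minimal sketch of that pattern, assuming a hypothetical `score_sample` callable and a fixed delay; the scorer's real retry policy is not shown in the log.

```python
import time

def evaluate_with_retry(score_sample, sample, retries=3, delay_s=30):
    """Retry a scoring call a few times before giving up, logging each retry."""
    for attempt in range(retries):
        try:
            return score_sample(sample)  # hypothetical scoring call; not shown in the log
        except Exception as err:
            if attempt == retries - 1:
                raise
            print(f"{err}, retrying in {delay_s} seconds...")
            time.sleep(delay_s)
```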
Shutdown handler de-registered
koboldai-llama2-13b-ti_20758_v11 status is now inactive due to auto deactivation (removed underperforming models)
koboldai-llama2-13b-ti_20758_v11 status is now torndown due to DeploymentManager action