Shutdown handler not registered because Python interpreter is not running in the main thread
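Note: this warning comes from the runner's signal handling. CPython only allows signal handlers to be installed from the main thread, so a pipeline launched on a worker thread must skip registration. A minimal sketch of the guard, with a hypothetical handler body (the real cleanup logic is not shown in this log):

    import signal
    import threading

    def _shutdown(signum, frame):
        # Hypothetical cleanup hook; the pipeline's actual handler is not in the log.
        print(f"received signal {signum}, shutting down")

    def register_shutdown_handler():
        if threading.current_thread() is not threading.main_thread():
            # signal.signal() raises ValueError off the main thread, so the
            # runner logs a warning and continues without a handler.
            print("Shutdown handler not registered because Python interpreter "
                  "is not running in the main thread")
            return
        signal.signal(signal.SIGTERM, _shutdown)
        signal.signal(signal.SIGINT, _shutdown)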
run pipeline %s
run pipeline stage MKMLizer
Running pipeline stage MKMLizer
Starting job with name stark2000s-utarchat-v1-3-v1-mkmlizer
Waiting for job on stark2000s-utarchat-v1-3-v1-mkmlizer to finish
stark2000s-utarchat-v1-3-v1-mkmlizer: ╔══════════════════════════════════════════════════════════════════════╗
stark2000s-utarchat-v1-3-v1-mkmlizer: ║                     [ flywheel ASCII-art logo ]                      ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ║                                                                      ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ║  Version: 0.11.12                                                    ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ║  Copyright 2023 MK ONE TECHNOLOGIES Inc.                             ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ║  https://mk1.ai                                                      ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ║                                                                      ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ║  The license key for the current software has been verified as      ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ║  belonging to:                                                       ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ║                                                                      ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ║  Chai Research Corp.                                                 ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ║  Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f                    ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ║  Expiration: 2025-01-15 23:59:59                                     ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ║                                                                      ║
stark2000s-utarchat-v1-3-v1-mkmlizer: ╚══════════════════════════════════════════════════════════════════════╝
Connection pool is full, discarding connection: %s. Connection pool size: %s
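Note: this is urllib3's connection-pool warning. More concurrent requests were in flight to one host than the pool holds, so surplus connections are discarded after use; it is benign but forces reconnects. If the client is requests-based (an assumption; the HTTP stack is not shown in the log), the pool can be widened:

    import requests
    from requests.adapters import HTTPAdapter

    session = requests.Session()
    # Default pool_maxsize is 10; raising it lets concurrent transfers
    # reuse connections instead of discarding them.
    adapter = HTTPAdapter(pool_connections=10, pool_maxsize=50)
    session.mount("https://", adapter)
    session.mount("http://", adapter)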
stark2000s-utarchat-v1-3-v1-mkmlizer: Downloaded to shared memory in 34.465s
stark2000s-utarchat-v1-3-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpxoqnnx38, device:0
stark2000s-utarchat-v1-3-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
stark2000s-utarchat-v1-3-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/mk1/flywheel/functional/loader.py:55: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
stark2000s-utarchat-v1-3-v1-mkmlizer: tensors = torch.load(model_shard_filename, map_location=torch.device(self.device), mmap=True)
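Note: the FutureWarning above is PyTorch's standard notice that torch.load still defaults to weights_only=False. For shards that contain only tensors, passing weights_only=True restricts unpickling and silences the warning; a sketch of the loader call with the flag applied (the shard path and device are placeholders):

    import torch

    # weights_only=True limits unpickling to tensors and allowlisted types;
    # mmap=True keeps the lazy memory-mapped load used above.
    tensors = torch.load(
        "model_shard.bin",                   # placeholder for model_shard_filename
        map_location=torch.device("cuda:0"), # placeholder for self.device
        mmap=True,
        weights_only=True,
    )
    # Checkpoints that pickle custom classes would first need them allowlisted,
    # e.g. torch.serialization.add_safe_globals([SomeConfigClass])  # hypothetical class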
stark2000s-utarchat-v1-3-v1-mkmlizer: quantized model in 26.074s
stark2000s-utarchat-v1-3-v1-mkmlizer: Processed model stark2000s/utarchat-v1.3 in 60.539s
stark2000s-utarchat-v1-3-v1-mkmlizer: creating bucket guanaco-mkml-models
stark2000s-utarchat-v1-3-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
stark2000s-utarchat-v1-3-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/stark2000s-utarchat-v1-3-v1
stark2000s-utarchat-v1-3-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/stark2000s-utarchat-v1-3-v1/config.json
stark2000s-utarchat-v1-3-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/stark2000s-utarchat-v1-3-v1/tokenizer.json
stark2000s-utarchat-v1-3-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/stark2000s-utarchat-v1-3-v1/flywheel_model.0.safetensors
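Note: the cp lines copy each artifact from the shared-memory cache into the model bucket. A minimal boto3 sketch of the same three uploads (bucket, prefix, and filenames are taken from the log; the boto3 client itself is an assumption, since the actual copy tool is not shown):

    import boto3

    s3 = boto3.client("s3")
    bucket = "guanaco-mkml-models"
    prefix = "stark2000s-utarchat-v1-3-v1"

    for name in ("config.json", "tokenizer.json", "flywheel_model.0.safetensors"):
        # Mirrors: cp /dev/shm/model_cache/<name> s3://<bucket>/<prefix>/<name>
        s3.upload_file(f"/dev/shm/model_cache/{name}", bucket, f"{prefix}/{name}")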
stark2000s-utarchat-v1-3-v1-mkmlizer:
Loading 0:   0%|          | 0/291 [00:00<?, ?it/s]
Loading 0:  97%|█████████▋| 283/291 [00:05<00:00, 79.40it/s]
[intermediate tqdm progress frames elided]
Job stark2000s-utarchat-v1-3-v1-mkmlizer completed after 83.42s with status: succeeded
Stopping job with name stark2000s-utarchat-v1-3-v1-mkmlizer
Pipeline stage MKMLizer completed in 83.98s
run pipeline stage MKMLTemplater
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.15s
run pipeline stage MKMLDeployer
Running pipeline stage MKMLDeployer
Creating inference service stark2000s-utarchat-v1-3-v1
Waiting for inference service stark2000s-utarchat-v1-3-v1 to be ready
Inference service stark2000s-utarchat-v1-3-v1 ready after 130.7627558708191s
Pipeline stage MKMLDeployer completed in 131.28s
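Note: the deployer blocked for ~131 s until the inference service reported ready. The log does not show how readiness is checked (for KServe-style services it is usually a status condition on the resource); a generic polling sketch under the assumption of an HTTP health endpoint:

    import time
    import requests

    def wait_until_ready(url, timeout=600.0, interval=5.0):
        # Poll a health endpoint until it returns HTTP 200; return elapsed seconds.
        start = time.monotonic()
        while time.monotonic() - start < timeout:
            try:
                if requests.get(url, timeout=5).status_code == 200:
                    return time.monotonic() - start
            except requests.RequestException:
                pass  # service not reachable yet; keep polling
            time.sleep(interval)
        raise TimeoutError(f"service at {url} not ready within {timeout}s")

    # elapsed = wait_until_ready("http://stark2000s-utarchat-v1-3-v1.example/healthz")  # hypothetical URL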
run pipeline stage StressChecker
Running pipeline stage StressChecker
Received healthy response to inference request in 1.7136170864105225s
Received healthy response to inference request in 1.3346426486968994s
Received healthy response to inference request in 1.4533398151397705s
Received healthy response to inference request in 1.6636152267456055s
Received healthy response to inference request in 1.4059324264526367s
5 requests
0 failed requests
5th percentile: 1.348900604248047
10th percentile: 1.3631585597991944
20th percentile: 1.3916744709014892
30th percentile: 1.4154139041900635
40th percentile: 1.434376859664917
50th percentile: 1.4533398151397705
60th percentile: 1.5374499797821044
70th percentile: 1.6215601444244385
80th percentile: 1.6736155986785888
90th percentile: 1.6936163425445556
95th percentile: 1.7036167144775392
99th percentile: 1.7116170120239258
mean time: 1.5142294406890868
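Note: the statistics above are consistent with linear interpolation over the five sorted latencies (numpy's default percentile method): the 50th percentile is the median sample, and the 95th falls 80% of the way between the two slowest. A sketch that reproduces the reported figures:

    import numpy as np

    latencies = [
        1.7136170864105225,
        1.3346426486968994,
        1.4533398151397705,
        1.6636152267456055,
        1.4059324264526367,
    ]

    for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        # Linear interpolation matches the log output, e.g.
        # 50th -> 1.4533398151397705 and 95th -> 1.7036167144775392.
        print(f"{p}th percentile: {np.percentile(latencies, p)}")
    print(f"mean time: {np.mean(latencies)}")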
Pipeline stage StressChecker completed in 8.85s
run pipeline stage OfflineFamilyFriendlyTriggerPipeline
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 2.52s
run pipeline stage TriggerMKMLProfilingPipeline
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 2.17s
Shutdown handler de-registered
stark2000s-utarchat-v1-3_v1 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage OfflineFamilyFriendlyScorer
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
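Note: the scorer fans the per-sample evaluation out over a worker pool (the thread count was not interpolated into the log line). A hedged sketch of the shape of such a stage, with a hypothetical per-sample scorer:

    from concurrent.futures import ThreadPoolExecutor

    def family_friendly_score(sample):
        # Hypothetical classifier call; the real scoring model is not shown.
        return 1.0

    def evaluate(samples, num_threads):
        # Score samples concurrently, then aggregate to a single mean score.
        with ThreadPoolExecutor(max_workers=num_threads) as pool:
            scores = list(pool.map(family_friendly_score, samples))
        return sum(scores) / len(scores)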
Pipeline stage OfflineFamilyFriendlyScorer completed in 2707.92s
Shutdown handler de-registered
stark2000s-utarchat-v1-3_v1 status is now inactive due to auto-deactivation of underperforming models