Shutdown handler not registered because Python interpreter is not running in the main thread
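The message above is the usual symptom of trying to install signal handlers off the main thread: CPython only allows signal.signal() to be called from the main thread of the main interpreter. A minimal sketch of such a guard that would emit a log line like this one (handler and logger names are hypothetical, not taken from the pipeline code):

```python
import logging
import signal
import threading

logger = logging.getLogger(__name__)

def _graceful_shutdown(signum, frame):
    # Hypothetical cleanup hook; the real pipeline's handler is not shown in the log.
    logger.info("Received signal %s, shutting down", signum)

def register_shutdown_handler():
    # signal.signal() raises ValueError when called outside the main thread,
    # so check the current thread before registering.
    if threading.current_thread() is not threading.main_thread():
        logger.warning(
            "Shutdown handler not registered because Python interpreter "
            "is not running in the main thread"
        )
        return
    signal.signal(signal.SIGTERM, _graceful_shutdown)
    signal.signal(signal.SIGINT, _graceful_shutdown)
    logger.info("Shutdown handler registered")
```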
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name rirv938-mistral-24b-dpo-30877-v1-mkmlizer
Waiting for job on rirv938-mistral-24b-dpo-30877-v1-mkmlizer to finish
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║              [flywheel ASCII-art wordmark]                          ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║ ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║ Version: 0.12.8 ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║ https://mk1.ai ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║ ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║ The license key for the current software has been verified as ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║ belonging to: ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║ ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║ Chai Research Corp. ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║ Expiration: 2025-04-15 23:59:59 ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ║ ║
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
Failed to get response for submission rirv938-mistral-24b-dpo_77960_v1: ('http://rirv938-mistral-24b-dpo-77960-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', '')
Failed to get response for submission rirv938-mistral-24b-dpo_35049_v1: ('http://rirv938-mistral-24b-dpo-35049-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', '')
Failed to get response for submission rirv938-mistral-24b-dpo_77960_v1: ('http://rirv938-mistral-24b-dpo-77960-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', '')
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: Downloaded to shared memory in 221.866s
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpwjk596tc, device:0
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: quantized model in 63.891s
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: Processed model rirv938/mistral_24b_dpo_40k_95w_pref_1250_instruct_lr_v2 in 285.764s
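For context, the "Downloaded to shared memory" step is consistent with pulling the Hugging Face repo named in the "Processed model" line above into tmpfs before quantization. A minimal sketch using huggingface_hub (the target directory and timing wrapper are assumptions, not taken from the log):

```python
import time
from huggingface_hub import snapshot_download

start = time.time()
# Repo id taken from the "Processed model" log line; the /dev/shm target is an assumption.
local_path = snapshot_download(
    repo_id="rirv938/mistral_24b_dpo_40k_95w_pref_1250_instruct_lr_v2",
    local_dir="/dev/shm/downloaded_model",
)
print(f"Downloaded to shared memory in {time.time() - start:.3f}s ({local_path})")
```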
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: creating bucket guanaco-mkml-models
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/rirv938-mistral-24b-dpo-30877-v1
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/rirv938-mistral-24b-dpo-30877-v1/config.json
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/rirv938-mistral-24b-dpo-30877-v1/special_tokens_map.json
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/rirv938-mistral-24b-dpo-30877-v1/tokenizer_config.json
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/rirv938-mistral-24b-dpo-30877-v1/tokenizer.json
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.1.safetensors s3://guanaco-mkml-models/rirv938-mistral-24b-dpo-30877-v1/flywheel_model.1.safetensors
rirv938-mistral-24b-dpo-30877-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/rirv938-mistral-24b-dpo-30877-v1/flywheel_model.0.safetensors
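The cp lines above copy the quantized artifacts under a per-submission prefix in the guanaco-mkml-models bucket. The log does not show which upload tool is used; a rough boto3 sketch of the same layout (credentials and endpoint configuration omitted, file list taken from the log):

```python
import os
import boto3

s3 = boto3.client("s3")
bucket = "guanaco-mkml-models"
prefix = "rirv938-mistral-24b-dpo-30877-v1"
model_dir = "/dev/shm/model_cache"

# Mirror the files shown in the log; a real uploader would likely walk the directory.
for name in (
    "config.json",
    "special_tokens_map.json",
    "tokenizer_config.json",
    "tokenizer.json",
    "flywheel_model.0.safetensors",
    "flywheel_model.1.safetensors",
):
    s3.upload_file(os.path.join(model_dir, name), bucket, f"{prefix}/{name}")
```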
Job rirv938-mistral-24b-dpo-30877-v1-mkmlizer completed after 338.15s with status: succeeded
Stopping job with name rirv938-mistral-24b-dpo-30877-v1-mkmlizer
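The start/wait/stop messages around the mkmlizer job follow the usual Kubernetes batch Job lifecycle. A hedged sketch of that polling loop with the Python Kubernetes client (namespace, polling interval, and config loading are assumptions):

```python
import time
from kubernetes import client, config

def wait_for_job(name: str, namespace: str, poll_seconds: int = 10) -> str:
    config.load_incluster_config()  # or config.load_kube_config() outside the cluster
    batch = client.BatchV1Api()
    start = time.time()
    while True:
        status = batch.read_namespaced_job_status(name, namespace).status
        if status.succeeded:
            outcome = "succeeded"
            break
        if status.failed:
            outcome = "failed"
            break
        time.sleep(poll_seconds)
    print(f"Job {name} completed after {time.time() - start:.2f}s with status: {outcome}")
    return outcome
```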
Pipeline stage MKMLizer completed in 338.59s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.14s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service rirv938-mistral-24b-dpo-30877-v1
Waiting for inference service rirv938-mistral-24b-dpo-30877-v1 to be ready
Failed to get response for submission rirv938-mistral-24b-dpo_77960_v1: ('http://rirv938-mistral-24b-dpo-77960-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', '')
Failed to get response for submission rirv938-mistral-24b-dpo_35049_v1: ('http://rirv938-mistral-24b-dpo-35049-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', '')
Inference service rirv938-mistral-24b-dpo-30877-v1 ready after 240.8005301952362s
Pipeline stage MKMLDeployer completed in 241.19s
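Waiting for the inference service to become ready maps to polling the KServe InferenceService's Ready condition. A sketch using the Kubernetes CustomObjects API (group/version per KServe v1beta1; the namespace is inferred from the predictor hostname in the log and is an assumption):

```python
import time
from kubernetes import client, config

def wait_for_inference_service(name: str, namespace: str, timeout: float = 1800.0) -> bool:
    config.load_incluster_config()
    api = client.CustomObjectsApi()
    start = time.time()
    while time.time() - start < timeout:
        isvc = api.get_namespaced_custom_object(
            group="serving.kserve.io",
            version="v1beta1",
            namespace=namespace,
            plural="inferenceservices",
            name=name,
        )
        conditions = isvc.get("status", {}).get("conditions", [])
        if any(c.get("type") == "Ready" and c.get("status") == "True" for c in conditions):
            print(f"Inference service {name} ready after {time.time() - start}s")
            return True
        time.sleep(10)
    return False

# Namespace inferred from "…-predictor.tenant-chaiml-guanaco.k.chaiverse.com" (an assumption):
# wait_for_inference_service("rirv938-mistral-24b-dpo-30877-v1", "tenant-chaiml-guanaco")
```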
run pipeline stage %s
Running pipeline stage StressChecker
HTTPConnectionPool(host='guanaco-submitter.guanaco-backend.k2.chaiverse.com', port=80): Read timed out. (read timeout=20)
Received unhealthy response to inference request!
Received healthy response to inference request in 2.482513189315796s
Received healthy response to inference request in 1.8006587028503418s
Received healthy response to inference request in 1.5392141342163086s
Received healthy response to inference request in 1.7479393482208252s
5 requests
1 failed requests
5th percentile: 1.580959177017212
10th percentile: 1.6227042198181152
20th percentile: 1.7061943054199218
30th percentile: 1.7584832191467286
40th percentile: 1.779570960998535
50th percentile: 1.8006587028503418
60th percentile: 2.0734004974365234
70th percentile: 2.346142292022705
80th percentile: 6.010718297958377
90th percentile: 13.067128515243532
95th percentile: 16.595333623886106
99th percentile: 19.41789771080017
mean time: 5.538772821426392
%s, retrying in %s seconds...
Received healthy response to inference request in 2.571474075317383s
Received healthy response to inference request in 1.7756354808807373s
Received healthy response to inference request in 2.1241543292999268s
Received healthy response to inference request in 2.011223793029785s
Received healthy response to inference request in 1.837907075881958s
5 requests
0 failed requests
5th percentile: 1.7880897998809815
10th percentile: 1.8005441188812257
20th percentile: 1.8254527568817138
30th percentile: 1.8725704193115233
40th percentile: 1.9418971061706543
50th percentile: 2.011223793029785
60th percentile: 2.056396007537842
70th percentile: 2.1015682220458984
80th percentile: 2.213618278503418
90th percentile: 2.3925461769104004
95th percentile: 2.4820101261138916
99th percentile: 2.5535812854766844
mean time: 2.064078950881958
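The percentile spread above is consistent with linear-interpolation percentiles over the five request latencies (the first batch's mean and upper percentiles also imply the timed-out request's roughly 20 s elapsed time was included in its statistics). A short numpy check that reproduces the second batch's numbers:

```python
import numpy as np

# The five healthy latencies from the second StressChecker batch (seconds).
latencies = [
    2.571474075317383,
    1.7756354808807373,
    2.1241543292999268,
    2.011223793029785,
    1.837907075881958,
]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    # numpy's default linear interpolation matches the logged values,
    # e.g. the 90th percentile comes out to ~2.3925461769104004.
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print(f"mean time: {np.mean(latencies)}")
```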
Pipeline stage StressChecker completed in 40.20s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.61s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.59s
Shutdown handler de-registered
rirv938-mistral-24b-dpo_30877_v1 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 3035.68s
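The scorer stage evaluates a family-friendly score across a batch of conversations using a thread pool. The actual classifier behind it is not shown in the log, so the following is only an illustrative sketch (score_conversation and its inputs are hypothetical placeholders):

```python
from concurrent.futures import ThreadPoolExecutor

def score_conversation(conversation: str) -> float:
    # Hypothetical placeholder returning a constant; the real model behind
    # OfflineFamilyFriendlyScorer is not shown in the log.
    return 1.0

def evaluate_family_friendly(conversations: list[str], num_threads: int = 8) -> float:
    print(f"Evaluating Family Friendly Score with {num_threads} threads")
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        scores = list(pool.map(score_conversation, conversations))
    return sum(scores) / len(scores) if scores else 0.0
```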
Shutdown handler de-registered
rirv938-mistral-24b-dpo_30877_v1 status is now inactive due to auto deactivation (removal of underperforming models)
rirv938-mistral-24b-dpo_30877_v1 status is now torndown due to DeploymentManager action