Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage StressChecker
Connection pool is full, discarding connection: %s. Connection pool size: %s
HTTPConnectionPool(host='guanaco-submitter.guanaco-backend.k2.chaiverse.com', port=80): Read timed out. (read timeout=20)
Received unhealthy response to inference request!
Received healthy response to inference request in 2.3229317665100098s
Received healthy response to inference request in 1.8196969032287598s
Received healthy response to inference request in 1.9940690994262695s
Received healthy response to inference request in 3.10288143157959s
Received healthy response to inference request in 1.874647617340088s
Received healthy response to inference request in 2.000699520111084s
Received healthy response to inference request in 1.7444114685058594s
Received healthy response to inference request in 2.1427478790283203s
Received healthy response to inference request in 2.4227864742279053s
10 requests
1 failed request
5th percentile: 1.7782899141311646
10th percentile: 1.8121683597564697
20th percentile: 1.8636574745178223
30th percentile: 1.958242654800415
40th percentile: 1.9980473518371582
50th percentile: 2.071723699569702
60th percentile: 2.214821434020996
70th percentile: 2.3528881788253786
80th percentile: 2.558805465698242
90th percentile: 4.803601932525629
95th percentile: 12.456844186782819
99th percentile: 18.5794379901886
mean time: 3.9534958600997925
%s, retrying in %s seconds...
Received healthy response to inference request in 2.4149668216705322s
Received healthy response to inference request in 2.260810375213623s
Received healthy response to inference request in 1.7264888286590576s
Received healthy response to inference request in 1.9329853057861328s
Received healthy response to inference request in 1.8857362270355225s
Received healthy response to inference request in 2.512599229812622s
Received healthy response to inference request in 2.8217666149139404s
Received healthy response to inference request in 1.9749970436096191s
Received healthy response to inference request in 2.927582025527954s
Received healthy response to inference request in 2.6010427474975586s
10 requests
0 failed requests
5th percentile: 1.7981501579284669
10th percentile: 1.869811487197876
20th percentile: 1.9235354900360107
30th percentile: 1.9623935222625732
40th percentile: 2.1464850425720217
50th percentile: 2.3378885984420776
60th percentile: 2.454019784927368
70th percentile: 2.539132285118103
80th percentile: 2.645187520980835
Connection pool is full, discarding connection: %s. Connection pool size: %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
90th percentile: 2.8323481559753416
95th percentile: 2.879965090751648
99th percentile: 2.9180586385726928
mean time: 2.305897521972656
Pipeline stage StressChecker completed in 71.75s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.51s
Shutdown handler de-registered
function_matol_2025-12-16 status is now deployed due to DeploymentManager action
function_matol_2025-12-16 status is now inactive due to auto deactivation (removed underperforming models)
function_matol_2025-12-16 status is now torndown due to DeploymentManager action