Shutdown handler not registered because Python interpreter is not running in the main thread
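This warning matches the CPython restriction that signal handlers can only be installed from the main thread of the main interpreter. A minimal sketch of such guarded registration, purely as an assumption about what produces the message (function names are illustrative, not the actual handler):

```python
import logging
import signal
import threading

logger = logging.getLogger(__name__)


def _on_shutdown(signum, frame):
    """Placeholder cleanup hook; what the real handler does is not shown in the log."""
    pass


def register_shutdown_handler() -> None:
    # signal.signal() may only be called from the main thread; elsewhere it
    # raises ValueError, hence the guard and the warning seen above.
    if threading.current_thread() is not threading.main_thread():
        logger.warning(
            "Shutdown handler not registered because Python interpreter is "
            "not running in the main thread"
        )
        return
    signal.signal(signal.SIGTERM, _on_shutdown)


def deregister_shutdown_handler() -> None:
    # Restore the default disposition and log it, as at the end of this run.
    signal.signal(signal.SIGTERM, signal.SIG_DFL)
    logger.info("Shutdown handler de-registered")
```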
run pipeline %s
run pipeline stage %s
Running pipeline stage StressChecker
HTTPConnectionPool(host='guanaco-submitter.guanaco-backend.k2.chaiverse.com', port=80): Read timed out. (read timeout=20)
Received unhealthy response to inference request!
Received healthy response to inference request in 2.3172264099121094s
Received healthy response to inference request in 2.2898833751678467s
Received healthy response to inference request in 1.9777207374572754s
Received healthy response to inference request in 2.783082962036133s
Received healthy response to inference request in 2.2924983501434326s
Received healthy response to inference request in 2.4490082263946533s
Received healthy response to inference request in 2.2234413623809814s
Received healthy response to inference request in 2.21700119972229s
Received healthy response to inference request in 1.8894901275634766s
10 requests
1 failed request
5th percentile: 1.929193902015686
10th percentile: 1.9688976764678956
20th percentile: 2.169145107269287
30th percentile: 2.221509313583374
40th percentile: 2.2633065700531008
50th percentile: 2.2911908626556396
60th percentile: 2.3023895740509035
70th percentile: 2.3567609548568726
80th percentile: 2.5158231735229495
90th percentile: 4.527514386177057
95th percentile: 12.37745579481123
99th percentile: 18.6574089217186
mean time: 4.066674995422363
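The statistics above can be reproduced from the individual timings; note that the 99th percentile (~18.7 s) and mean (~4.1 s) only work out if the elapsed time of the timed-out request (~20 s) is counted alongside the healthy ones. A minimal sketch that would produce these messages, assuming plain `requests` with a 20-second read timeout and `numpy` for the percentiles (the endpoint path and payload are placeholders, not the actual submitter API):

```python
import logging
import time

import numpy as np
import requests

logger = logging.getLogger(__name__)

# Host taken from the timeout message above; the path and payload shape are
# hypothetical placeholders, not the real submitter API.
INFERENCE_URL = "http://guanaco-submitter.guanaco-backend.k2.chaiverse.com/"
READ_TIMEOUT_S = 20
NUM_REQUESTS = 10
PERCENTILES = (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99)


def time_inference_request(payload: dict) -> tuple[float, bool]:
    """Send one inference request; return (elapsed seconds, healthy flag)."""
    start = time.time()
    try:
        response = requests.post(INFERENCE_URL, json=payload, timeout=READ_TIMEOUT_S)
        response.raise_for_status()
        healthy = True
    except requests.RequestException as error:
        logger.error(error)
        logger.error("Received unhealthy response to inference request!")
        healthy = False
    elapsed = time.time() - start
    if healthy:
        logger.info("Received healthy response to inference request in %ss", elapsed)
    return elapsed, healthy


def run_stress_check(payload: dict) -> None:
    """Time NUM_REQUESTS requests, then log counts, percentiles and mean as above."""
    results = [time_inference_request(payload) for _ in range(NUM_REQUESTS)]
    timings = [elapsed for elapsed, _ in results]       # failed requests included
    failed = sum(1 for _, healthy in results if not healthy)
    logger.info("%s requests", len(results))
    logger.info("%s failed requests", failed)
    for pct in PERCENTILES:
        logger.info("%sth percentile: %s", pct, np.percentile(timings, pct))
    logger.info("mean time: %s", np.mean(timings))
```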
%s, retrying in %s seconds...
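The line above is an un-interpolated logging template of the form "%s, retrying in %s seconds...": the first StressChecker attempt ended with a failed request, a retry was logged, and the second attempt below passes with 0 failed requests. A minimal sketch of such a retry wrapper, assuming a fixed delay and a capped attempt count (names and defaults are illustrative, not the actual implementation):

```python
import functools
import logging
import time

logger = logging.getLogger(__name__)


def retry(attempts: int = 3, delay_s: float = 5.0):
    """Re-run a callable on failure, logging '<error>, retrying in <delay> seconds...'."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except Exception as error:
                    if attempt == attempts - 1:
                        raise
                    logger.warning("%s, retrying in %s seconds...", error, delay_s)
                    time.sleep(delay_s)
        return wrapper
    return decorator
```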
Received healthy response to inference request in 2.289531946182251s
Received healthy response to inference request in 2.9063541889190674s
Received healthy response to inference request in 3.0938827991485596s
Received healthy response to inference request in 2.0450713634490967s
Received healthy response to inference request in 2.385995626449585s
Received healthy response to inference request in 2.781873941421509s
Received healthy response to inference request in 1.7307043075561523s
Received healthy response to inference request in 2.463789701461792s
Received healthy response to inference request in 2.8778138160705566s
Received healthy response to inference request in 2.3016839027404785s
10 requests
0 failed requests
5th percentile: 1.8721694827079773
10th percentile: 2.0136346578598023
20th percentile: 2.24063982963562
30th percentile: 2.2980383157730104
40th percentile: 2.3522709369659425
50th percentile: 2.4248926639556885
60th percentile: 2.5910233974456784
70th percentile: 2.8106559038162233
80th percentile: 2.883521890640259
90th percentile: 2.9251070499420164
95th percentile: 3.009494924545288
99th percentile: 3.0770052242279053
mean time: 2.487670159339905
Pipeline stage StressChecker completed in 70.30s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.64s
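Both stages follow the same pattern: a "run pipeline stage %s" / "Running pipeline stage <Name>" announcement, the stage's own output, and a "Pipeline stage <Name> completed in <N>s" line. A minimal sketch of a runner that emits those messages, assuming each stage exposes a run() method (the interface is an assumption, not the actual pipeline API):

```python
import logging
import time

logger = logging.getLogger(__name__)


def run_pipeline_stage(stage) -> None:
    """Run a single stage and log its wall-clock duration, mirroring the lines above."""
    name = type(stage).__name__
    logger.info("run pipeline stage %s", name)
    logger.info("Running pipeline stage %s", name)
    start = time.time()
    stage.run()
    logger.info("Pipeline stage %s completed in %.2fs", name, time.time() - start)


def run_pipeline(stages) -> None:
    """Run stages in order, e.g. StressChecker then OfflineFamilyFriendlyTriggerPipeline."""
    for stage in stages:
        run_pipeline_stage(stage)
```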
Shutdown handler de-registered
function_gufor_2025-12-16 status is now deployed due to DeploymentManager action
function_gufor_2025-12-16 status is now inactive due to auto deactivation (removed underperforming models)
function_gufor_2025-12-16 status is now torndown due to DeploymentManager action
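The last three lines trace the lifecycle of function_gufor_2025-12-16: deployed, then inactive (underperforming models removed by auto deactivation), then torndown. A minimal sketch of that status tracking, assuming the three statuses and the logged reasons above are the whole picture (class and enum names are illustrative):

```python
import logging
from enum import Enum

logger = logging.getLogger(__name__)


class DeploymentStatus(Enum):
    DEPLOYED = "deployed"
    INACTIVE = "inactive"
    TORNDOWN = "torndown"


class Deployment:
    """Tracks a deployed function's status and logs each transition, as above."""

    def __init__(self, name: str):
        self.name = name
        self.status: DeploymentStatus | None = None

    def set_status(self, status: DeploymentStatus, reason: str) -> None:
        self.status = status
        logger.info("%s status is now %s due to %s", self.name, status.value, reason)


# Reproduces the transitions seen in the log (reasons copied from the messages above).
deployment = Deployment("function_gufor_2025-12-16")
deployment.set_status(DeploymentStatus.DEPLOYED, "DeploymentManager action")
deployment.set_status(DeploymentStatus.INACTIVE, "auto deactivation (removed underperforming models)")
deployment.set_status(DeploymentStatus.TORNDOWN, "DeploymentManager action")
```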