Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage StressChecker
HTTPConnectionPool(host='guanaco-submitter.guanaco-backend.k2.chaiverse.com', port=80): Read timed out. (read timeout=20)
Received unhealthy response to inference request!
Received healthy response to inference request in 2.708475351333618s
Received healthy response to inference request in 2.27864146232605s
Received healthy response to inference request in 2.5425498485565186s
Received healthy response to inference request in 2.487780809402466s
Received healthy response to inference request in 1.676950454711914s
Received healthy response to inference request in 2.2664337158203125s
Received healthy response to inference request in 1.8961374759674072s
Received healthy response to inference request in 3.0807385444641113s
Received healthy response to inference request in 2.749784469604492s
10 requests
1 failed request
5th percentile: 1.775584614276886
10th percentile: 1.8742187738418579
20th percentile: 2.1923744678497314
30th percentile: 2.2749791383743285
40th percentile: 2.4041250705718995
50th percentile: 2.515165328979492
60th percentile: 2.608920049667358
70th percentile: 2.7208680868148805
80th percentile: 2.815975284576416
90th percentile: 4.851461505889886
95th percentile: 12.81971483230589
99th percentile: 19.194317493438724
mean time: 4.2475460290908815
%s, retrying in %s seconds...
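The retry line above suggests a retry-with-delay loop wrapped around the inference request. A minimal sketch of that pattern; the function names, the exception handling, and the fixed delay are assumptions for illustration, not the pipeline's actual code:

```python
import time

def request_with_retries(send_request, max_attempts=3, delay_seconds=5):
    """Call a flaky function, logging the error and delay before each retry."""
    for attempt in range(1, max_attempts + 1):
        try:
            return send_request()
        except Exception as err:
            if attempt == max_attempts:
                raise
            # Mirrors the "%s, retrying in %s seconds..." log line above.
            print(f"{err}, retrying in {delay_seconds} seconds...")
            time.sleep(delay_seconds)

# Illustrative usage: a call that times out once, then succeeds.
responses = iter([TimeoutError("Read timed out. (read timeout=20)"), "ok"])

def flaky():
    result = next(responses)
    if isinstance(result, Exception):
        raise result
    return result

print(request_with_retries(flaky, delay_seconds=0))
```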
Received healthy response to inference request in 1.8580470085144043s
Received healthy response to inference request in 2.162942886352539s
Received healthy response to inference request in 1.7617278099060059s
Received healthy response to inference request in 2.0137991905212402s
Received healthy response to inference request in 1.6550767421722412s
Received healthy response to inference request in 2.1517934799194336s
Received healthy response to inference request in 2.055102586746216s
Received healthy response to inference request in 3.202012538909912s
Received healthy response to inference request in 1.7747790813446045s
Received healthy response to inference request in 2.252596855163574s
10 requests
0 failed requests
5th percentile: 1.7030697226524354
10th percentile: 1.7510627031326294
20th percentile: 1.7721688270568847
30th percentile: 1.8330666303634644
40th percentile: 1.951498317718506
50th percentile: 2.034450888633728
60th percentile: 2.093778944015503
70th percentile: 2.155138301849365
80th percentile: 2.180873680114746
90th percentile: 2.3475384235382077
95th percentile: 2.774775481224059
99th percentile: 3.1165651273727417
mean time: 2.088787817955017
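The summary statistics above are consistent with linearly interpolated percentiles over the sorted per-request latencies (the default `"linear"` method of `numpy.percentile`). A minimal pure-Python sketch reproducing the second run's figures from its ten logged latencies; the helper name is illustrative:

```python
import math

# Per-request latencies (seconds) from the second StressChecker run above.
latencies = [
    1.8580470085144043, 2.162942886352539, 1.7617278099060059,
    2.0137991905212402, 1.6550767421722412, 2.1517934799194336,
    2.055102586746216, 3.202012538909912, 1.7747790813446045,
    2.252596855163574,
]

def percentile(values, p):
    """p-th percentile with linear interpolation between closest ranks."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo, hi = math.floor(k), math.ceil(k)
    if lo == hi:
        return s[int(k)]
    return s[lo] * (hi - k) + s[hi] * (k - lo)

for p in (5, 50, 95):
    print(f"{p}th percentile: {percentile(latencies, p)}")
print(f"mean time: {sum(latencies) / len(latencies)}")
```

Running this reproduces the logged values (e.g. a 50th percentile of about 2.0344508886 and a mean of about 2.0887878180), which supports the interpolation assumption.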
Pipeline stage StressChecker completed in 67.82s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.78s
Shutdown handler de-registered
function_latis_2025-12-05 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Generating Leaderboard row for %s
Generated Leaderboard row for %s
Pipeline stage OfflineFamilyFriendlyScorer completed in 2490.13s
Shutdown handler de-registered
function_latis_2025-12-05 status is now inactive due to auto deactivation (removed underperforming models)
function_latis_2025-12-05 status is now torndown due to DeploymentManager action
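The status lines above trace a deployment lifecycle of deployed → inactive → torndown. A minimal sketch of that state machine; the class, transition table, and reason strings are inferred from the log for illustration, not the DeploymentManager's actual implementation:

```python
# Allowed status transitions inferred from the log lines above; illustrative only.
TRANSITIONS = {
    "deployed": {"inactive", "torndown"},
    "inactive": {"torndown"},
    "torndown": set(),
}

class Deployment:
    def __init__(self, name, status="deployed"):
        self.name = name
        self.status = status

    def set_status(self, new_status, reason):
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status
        # Mirrors the "status is now ... due to ..." log lines above.
        print(f"{self.name} status is now {new_status} due to {reason}")

d = Deployment("function_latis_2025-12-05")
d.set_status("inactive", "auto deactivation (removed underperforming models)")
d.set_status("torndown", "DeploymentManager action")
```

One design choice here is that a torn-down deployment accepts no further transitions, which matches it being the last status seen in the log.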