submission_id: function_kabam_2024-10-05
developer_uid: chai_backend_admin
celo_rating: 1267.7
display_name: retune_with_base
family_friendly_score: 0.5942886953365996
family_friendly_standard_error: 0.005706917332463179
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 0.95, 'top_p': 1.0, 'min_p': 0.08, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n', '<|eot_id|>'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
is_internal_developer: True
model_group:
model_name: retune_with_base
num_battles: 7682
num_wins: 3984
ranking_group: single
status: torndown
submission_type: function
timestamp: 2024-10-05T19:26:41+00:00
us_pacific_date: 2024-10-05
win_ratio: 0.5186149440249935
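The battle statistics above are self-consistent: win_ratio is simply num_wins divided by num_battles. A quick arithmetic check:

```python
num_battles = 7682
num_wins = 3984

win_ratio = num_wins / num_battles
print(win_ratio)  # matches the win_ratio field above (~0.51861)
```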
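The formatter above is a set of plain-text templates that wrap the bot persona, chat history, and response prefix. A minimal sketch of how such templates could be filled in, assuming simple str.format substitution; the build_input helper and the persona/messages are invented for illustration, not Chaiverse code:

```python
# Sketch only: assemble a model input string from the formatter templates above.
# Example persona, user name, and messages are made up for illustration.

formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_input(bot_name, user_name, memory, prompt, turns):
    """turns is a list of (speaker, message) pairs, speaker in {'bot', 'user'}."""
    parts = [
        formatter["memory_template"].format(bot_name=bot_name, memory=memory),
        formatter["prompt_template"].format(prompt=prompt),
    ]
    for speaker, message in turns:
        if speaker == "bot":
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
    # End with the response prefix so the model continues as the bot.
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

text = build_input(
    bot_name="Aria",
    user_name="You",
    memory="A friendly travel guide.",
    prompt="Aria greets a new traveller.",
    turns=[("bot", "Welcome! Where are we headed today?"), ("user", "Somewhere warm.")],
)
print(text)
```

The generation_params dict would then accompany this string to the inference endpoint: the stopping words ['\n', '<|eot_id|>'] cut each reply at the first newline, max_output_tokens=64 caps its length, and best_of=16 has the backend sample sixteen candidate completions per request and keep only one.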
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage StressChecker
Failed to get response for submission blend_rebor_2024-10-05: ('http://zonemercy-virgo-edit-v3-7871-v1-predictor.tenant-chaiml-guanaco.k.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', '')
Received healthy response to inference request in 2.46889066696167s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Received healthy response to inference request in 2.9323487281799316s
Received healthy response to inference request in 4.086797714233398s
Received healthy response to inference request in 4.820611476898193s
Received healthy response to inference request in 4.244023561477661s
5 requests
0 failed requests
5th percentile: 2.5615822792053224
10th percentile: 2.6542738914489745
20th percentile: 2.839657115936279
30th percentile: 3.163238525390625
40th percentile: 3.6250181198120117
50th percentile: 4.086797714233398
60th percentile: 4.149688053131103
70th percentile: 4.212578392028808
80th percentile: 4.359341144561768
90th percentile: 4.5899763107299805
95th percentile: 4.705293893814087
99th percentile: 4.797547960281372
mean time: 3.710534429550171
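The summary above is a plain percentile table over the five response times just logged. A minimal sketch that reproduces it, assuming numpy's default linear interpolation (which matches the logged figures to within float rounding):

```python
import numpy as np

# The five response times logged above, in seconds.
response_times = [
    2.46889066696167,
    2.9323487281799316,
    4.086797714233398,
    4.820611476898193,
    4.244023561477661,
]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(response_times, p)}")
print(f"mean time: {np.mean(response_times)}")
```

The same computation reproduces the second batch below, where the single 9.6 s response pulls the mean (about 4.31 s) well above the median (3.44 s).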
%s, retrying in %s seconds...
Received healthy response to inference request in 9.64291524887085s
Received healthy response to inference request in 3.4415054321289062s
Received healthy response to inference request in 2.3046181201934814s
Received healthy response to inference request in 3.564878225326538s
Received healthy response to inference request in 2.579664945602417s
5 requests
0 failed requests
5th percentile: 2.3596274852752686
10th percentile: 2.4146368503570557
20th percentile: 2.52465558052063
30th percentile: 2.7520330429077147
40th percentile: 3.0967692375183105
50th percentile: 3.4415054321289062
60th percentile: 3.490854549407959
70th percentile: 3.540203666687012
80th percentile: 4.780485630035401
90th percentile: 7.211700439453125
95th percentile: 8.427307844161986
99th percentile: 9.399793767929078
mean time: 4.306716394424439
Pipeline stage StressChecker completed in 43.01s
Shutdown handler de-registered
function_kabam_2024-10-05 status is now deployed due to DeploymentManager action
function_kabam_2024-10-05 status is now inactive due to auto deactivation (removed underperforming models)
function_kabam_2024-10-05 status is now torndown due to DeploymentManager action
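The last three lines trace the submission's lifecycle, ending in the torndown status recorded in the metadata at the top. An illustrative sketch of that progression (the enum and transition list are my own naming, not the Chaiverse schema):

```python
from enum import Enum

class SubmissionStatus(Enum):
    DEPLOYED = "deployed"
    INACTIVE = "inactive"
    TORNDOWN = "torndown"

# Transitions observed in the log above, oldest first.
history = [
    (SubmissionStatus.DEPLOYED, "DeploymentManager action"),
    (SubmissionStatus.INACTIVE, "auto deactivation (removed underperforming models)"),
    (SubmissionStatus.TORNDOWN, "DeploymentManager action"),
]

for status, reason in history:
    print(f"function_kabam_2024-10-05 -> {status.value} ({reason})")
```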