developer_uid: chai_backend_admin
submission_id: function_donol_2024-12-12
model_name: retune_with_base
model_group:
status: torndown
timestamp: 2024-12-12T23:20:47+00:00
num_battles: 9009
num_wins: 4471
celo_rating: 1256.22
family_friendly_score: 0.5833999999999999
family_friendly_standard_error: 0.006972007458401059
submission_type: function
display_name: retune_with_base
is_internal_developer: True
ranking_group: single
us_pacific_date: 2024-12-12
win_ratio: 0.49628149628149626
generation_params: {'temperature': 0.9, 'top_p': 0.9, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.5, 'frequency_penalty': 0.5, 'stopping_words': ['\n', '</s>'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
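The formatter field above is a set of plain string templates; as a rough illustration, the sketch below assembles a model input from those templates and checks that the recorded win_ratio is simply num_wins / num_battles. The format_conversation helper and the example persona, user, and messages are hypothetical and not taken from this submission record; generation_params is the sampling configuration recorded above and is not reproduced here.

# Minimal sketch, assuming a hypothetical format_conversation helper.
formatter = {
    'memory_template': "{bot_name}'s Persona: {memory}\n####\n",
    'prompt_template': '{prompt}\n<START>\n',
    'bot_template': '{bot_name}: {message}\n',
    'user_template': '{user_name}: {message}\n',
    'response_template': '{bot_name}:',
}

def format_conversation(formatter, bot_name, user_name, memory, prompt, turns):
    """Assemble the model input from the templates above.

    turns is a list of (speaker, message) pairs, speaker being 'bot' or 'user'.
    Illustration only; the production formatter code is not shown in this log.
    """
    text = formatter['memory_template'].format(bot_name=bot_name, memory=memory)
    text += formatter['prompt_template'].format(prompt=prompt)
    for speaker, message in turns:
        if speaker == 'bot':
            text += formatter['bot_template'].format(bot_name=bot_name, message=message)
        else:
            text += formatter['user_template'].format(user_name=user_name, message=message)
    text += formatter['response_template'].format(bot_name=bot_name)
    return text

example = format_conversation(
    formatter,
    bot_name='Bot', user_name='User',
    memory='A friendly assistant.', prompt='Casual chat.',
    turns=[('user', 'Hi there!'), ('bot', 'Hello!')],
)
print(example)

# The recorded win_ratio follows directly from the battle counts above.
assert abs(4471 / 9009 - 0.49628149628149626) < 1e-12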
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.6035168170928955s
Received healthy response to inference request in 2.0751447677612305s
Received healthy response to inference request in 2.5281765460968018s
Received healthy response to inference request in 2.4391098022460938s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Received healthy response to inference request in 1.6585981845855713s
5 requests
0 failed requests
5th percentile: 1.6145330905914306
10th percentile: 1.6255493640899659
20th percentile: 1.6475819110870362
30th percentile: 1.741907501220703
40th percentile: 1.9085261344909668
50th percentile: 2.0751447677612305
60th percentile: 2.220730781555176
70th percentile: 2.366316795349121
80th percentile: 2.4569231510162353
90th percentile: 2.4925498485565187
95th percentile: 2.5103631973266602
99th percentile: 2.5246138763427735
mean time: 2.0609092235565187
Pipeline stage StressChecker completed in 11.64s
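The StressChecker summary above can be reproduced from the five healthy-response latencies it logged; the snippet below is only a sanity-check sketch using numpy's default linear-interpolation percentiles, not the actual StressChecker implementation.

# Sanity-check sketch: recompute the summary statistics from the five latencies logged above.
import numpy as np

latencies = [
    1.6035168170928955,
    2.0751447677612305,
    2.5281765460968018,
    2.4391098022460938,
    1.6585981845855713,
]

for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    # numpy's default linear interpolation matches the logged values,
    # e.g. the 90th percentile comes out to ~2.49255.
    print(f"{q}th percentile: {np.percentile(latencies, q)}")

print(f"mean time: {np.mean(latencies)}")  # ~2.0609092235565187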
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 3.00s
Shutdown handler de-registered
function_donol_2024-12-12 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 2969.44s
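For context, the family_friendly_standard_error recorded in the header is numerically consistent with the binomial standard error of a mean over roughly 5000 binary judgments; the sample count and the binary-judgment interpretation are assumptions, since the scorer's implementation is not shown in this log.

# Sketch under the assumptions above (sample count and scoring scheme are assumed, not logged).
import math

p = 0.5833999999999999   # family_friendly_score from the submission record
n = 5000                 # assumed number of scored samples
se = math.sqrt(p * (1 - p) / n)
print(se)                # ~0.006972, matching the recorded family_friendly_standard_error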
Shutdown handler de-registered
function_donol_2024-12-12 status is now inactive due to auto deactivation of underperforming models
function_donol_2024-12-12 status is now torndown due to DeploymentManager action