submission_id: function_nadem_2024-11-18
developer_uid: chai_backend_admin
celo_rating: 1280.94
display_name: retune_with_base
family_friendly_score: 0.5898
family_friendly_standard_error: 0.006956090281185258
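As a consistency check (not stated in the log), the reported standard error matches the binomial proportion formula sqrt(p*(1-p)/n) if roughly n ≈ 5000 conversations were scored; the sample count n is an assumption here.

```python
import math

# Hypothetical check: assuming the family-friendly score is a proportion
# estimated from n samples, the binomial standard error is sqrt(p*(1-p)/n).
# n = 5000 is a guess; the log does not record the sample count.
p = 0.5898
n = 5000
se = math.sqrt(p * (1 - p) / n)
# se ≈ 0.006956, agreeing with the logged standard error to ~5 decimal places
```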
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
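The formatter templates above can be assembled into a single prompt string with ordinary Python `str.format` substitution. A minimal sketch follows; the `build_prompt` helper and all example names (`Luna`, `Alex`, the messages) are illustrative, and truncation to `max_input_tokens` (and the `truncate_by_message` flag) is omitted.

```python
# The templates exactly as recorded in the submission's formatter field.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, turns):
    """Concatenate persona memory, scenario prompt, chat turns, and the
    response stub that cues the model to speak as the bot."""
    parts = [
        formatter["memory_template"].format(bot_name=bot_name, memory=memory),
        formatter["prompt_template"].format(prompt=prompt),
    ]
    for speaker, message in turns:
        if speaker == "bot":
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

example = build_prompt("Luna", "Alex", "A friendly guide.", "A chat begins.",
                       [("user", "Hi!"), ("bot", "Hello there!")])
```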
generation_params: {'temperature': 0.9, 'top_p': 0.9, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.5, 'frequency_penalty': 0.5, 'stopping_words': ['\n', '</s>'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
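The `top_k`, `min_p`, and `top_p` values above are token-filtering parameters applied to the model's next-token distribution at sampling time. The toy sketch below illustrates one plausible filtering order on a hand-made distribution; real inference engines differ in the order and interaction of these filters, and this is not the serving stack's actual implementation.

```python
def filter_probs(probs, top_k=80, min_p=0.05, top_p=0.9):
    """Toy illustration of the filters named in generation_params:
    keep the top_k most likely tokens, drop tokens with probability below
    min_p times the max probability, then truncate to the smallest nucleus
    whose cumulative mass reaches top_p, and renormalize."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    cutoff = min_p * ranked[0][1]
    ranked = [(tok, p) for tok, p in ranked if p >= cutoff]
    kept, total = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        total += p
        if total >= top_p:
            break
    z = sum(p for _, p in kept)
    return {tok: p / z for tok, p in kept}

# Hand-made next-token distribution, purely for illustration.
probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.04, "e": 0.01}
filtered = filter_probs(probs)
# "e" falls below min_p * 0.5 = 0.025; "d" is cut by the top_p nucleus.
```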
is_internal_developer: True
model_group:
model_name: retune_with_base
num_battles: 12697
num_wins: 6796
ranking_group: single
status: inactive
submission_type: function
timestamp: 2024-11-18T23:14:29+00:00
us_pacific_date: 2024-11-18
win_ratio: 0.5352445459557376
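The recorded win_ratio is simply num_wins divided by num_battles:

```python
num_battles = 12697
num_wins = 6796
win_ratio = num_wins / num_battles
# 6796 / 12697 ≈ 0.5352445, matching the recorded win_ratio
```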
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.593273878097534s
Received healthy response to inference request in 3.0029220581054688s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Received healthy response to inference request in 2.721590518951416s
Received healthy response to inference request in 3.4551079273223877s
Received healthy response to inference request in 2.7321043014526367s
5 requests
0 failed requests
5th percentile: 2.6189372062683107
10th percentile: 2.644600534439087
20th percentile: 2.6959271907806395
30th percentile: 2.72369327545166
40th percentile: 2.7278987884521486
50th percentile: 2.7321043014526367
60th percentile: 2.8404314041137697
70th percentile: 2.9487585067749023
80th percentile: 3.0933592319488525
90th percentile: 3.27423357963562
95th percentile: 3.364670753479004
99th percentile: 3.437020492553711
mean time: 2.900999736785889
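The percentile table above is consistent with linear-interpolation percentiles computed over the five logged response times. A minimal sketch (the `percentile` helper here is illustrative, equivalent to NumPy's default "linear" method):

```python
def percentile(samples, p):
    """Percentile with linear interpolation between order statistics
    (the same convention as numpy.percentile's default method)."""
    s = sorted(samples)
    idx = (p / 100) * (len(s) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (idx - lo) * (s[hi] - s[lo])

# The five healthy response times logged by the StressChecker stage.
latencies = [2.593273878097534, 3.0029220581054688, 2.721590518951416,
             3.4551079273223877, 2.7321043014526367]

p5 = percentile(latencies, 5)    # ≈ 2.6189, as logged
p50 = percentile(latencies, 50)  # ≈ 2.7321 (the median), as logged
mean = sum(latencies) / len(latencies)  # ≈ 2.9010, as logged
```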
Pipeline stage StressChecker completed in 15.66s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 2.56s
Shutdown handler de-registered
function_nadem_2024-11-18 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 3468.56s
Shutdown handler de-registered
function_nadem_2024-11-18 status is now inactive due to auto-deactivation of underperforming models