developer_uid: chai_backend_admin
submission_id: function_pejet_2024-11-19
model_name: retune_with_base
model_group:
status: inactive
timestamp: 2024-11-19T21:40:19+00:00
num_battles: 9117
num_wins: 4646
celo_rating: 1262.79
family_friendly_score: 0.6095999999999999
family_friendly_standard_error: 0.006899099071617974
submission_type: function
display_name: retune_with_base
is_internal_developer: True
ranking_group: single
us_pacific_date: 2024-11-19
win_ratio: 0.5095974553032796
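The win_ratio above is simply num_wins divided by num_battles from the stats logged earlier, as a quick sketch confirms:

```python
# Reproduce win_ratio from the logged battle counts.
num_battles = 9117
num_wins = 4646

win_ratio = num_wins / num_battles  # ≈ 0.5095974553032796, matching the log
```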
generation_params: {'temperature': 0.9, 'top_p': 0.9, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.5, 'frequency_penalty': 0.5, 'stopping_words': ['\n', '</s>'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
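The generation_params above combine temperature scaling with three token filters (top_k, top_p, min_p). As a rough illustration only (not the platform's actual sampling code), the filters could be applied to a logit vector like this:

```python
import numpy as np

def filter_logits(logits, temperature=0.9, top_k=80, top_p=0.9, min_p=0.05):
    """Illustrative combination of the logged sampling filters."""
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    keep = np.zeros(probs.shape, dtype=bool)
    # top_k: keep only the k most likely tokens
    keep[np.argsort(probs)[-top_k:]] = True
    # min_p: drop tokens below min_p * probability of the most likely token
    keep &= probs >= min_p * probs.max()
    # top_p (nucleus): keep the smallest set covering top_p of the mass
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    nucleus = np.zeros(probs.shape, dtype=bool)
    nucleus[order[: np.searchsorted(cum, top_p) + 1]] = True
    keep &= nucleus

    out = np.where(keep, probs, 0.0)
    return out / out.sum()  # renormalized distribution to sample from
```

With a toy 4-token vocabulary, `filter_logits([5.0, 4.0, 0.0, -10.0], temperature=1.0)` zeroes out the two unlikely tokens and renormalizes the remaining two.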
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
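The formatter templates above are standard Python format strings. A minimal sketch of how they could be assembled into a prompt, using a hypothetical conversation (the real pipeline's assembly code may differ, e.g. in how truncation is handled):

```python
# Templates copied from the logged formatter config.
FORMATTER = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, memory, prompt, turns):
    """Assemble a prompt string; `turns` is a list of (speaker, message)."""
    parts = [
        FORMATTER["memory_template"].format(bot_name=bot_name, memory=memory),
        FORMATTER["prompt_template"].format(prompt=prompt),
    ]
    for speaker, message in turns:
        if speaker == bot_name:
            parts.append(FORMATTER["bot_template"].format(bot_name=speaker, message=message))
        else:
            parts.append(FORMATTER["user_template"].format(user_name=speaker, message=message))
    # response_template leaves the bot's turn open for the model to complete
    parts.append(FORMATTER["response_template"].format(bot_name=bot_name))
    return "".join(parts)
```

For example, `build_prompt("Bot", "friendly", "Scenario", [("User", "hi"), ("Bot", "hello")])` ends with the open turn `"Bot:"`, which the model then completes; `'\n'` in stopping_words terminates generation at the end of that single line.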
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 3.45249080657959s
Received healthy response to inference request in 3.0225307941436768s
Received healthy response to inference request in 7.704570770263672s
Received healthy response to inference request in 3.286637544631958s
Received healthy response to inference request in 4.287220239639282s
5 requests
0 failed requests
5th percentile: 3.075352144241333
10th percentile: 3.128173494338989
20th percentile: 3.2338161945343016
30th percentile: 3.3198081970214846
40th percentile: 3.386149501800537
50th percentile: 3.45249080657959
60th percentile: 3.7863825798034667
70th percentile: 4.120274353027344
80th percentile: 4.970690345764161
90th percentile: 6.337630558013917
95th percentile: 7.021100664138793
99th percentile: 7.567876749038696
mean time: 4.350690031051636
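The percentile figures above are consistent with numpy's default linear interpolation over the five logged latencies, as this sketch reproduces:

```python
import numpy as np

# The five latencies logged in the first StressChecker batch.
latencies = [
    3.45249080657959,
    3.0225307941436768,
    7.704570770263672,
    3.286637544631958,
    4.287220239639282,
]

p5 = np.percentile(latencies, 5)    # ≈ 3.075352144241333, matching the log
p50 = np.percentile(latencies, 50)  # ≈ 3.45249080657959 (the median sample)
p90 = np.percentile(latencies, 90)  # ≈ 6.337630558013917
mean = np.mean(latencies)           # ≈ 4.350690031051636
```

With only five samples, the high percentiles (95th, 99th) are interpolations toward the single 7.70s outlier rather than independent measurements.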
%s, retrying in %s seconds...
Received healthy response to inference request in 3.664562225341797s
Received healthy response to inference request in 3.123272657394409s
Received healthy response to inference request in 2.995464563369751s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Received healthy response to inference request in 2.8384039402008057s
Received healthy response to inference request in 2.885291576385498s
5 requests
0 failed requests
5th percentile: 2.847781467437744
10th percentile: 2.8571589946746827
20th percentile: 2.8759140491485597
30th percentile: 2.907326173782349
40th percentile: 2.95139536857605
50th percentile: 2.995464563369751
60th percentile: 3.0465878009796143
70th percentile: 3.0977110385894777
80th percentile: 3.2315305709838866
90th percentile: 3.448046398162842
95th percentile: 3.5563043117523194
99th percentile: 3.6429106426239013
mean time: 3.101398992538452
Pipeline stage StressChecker completed in 39.85s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 2.70s
Shutdown handler de-registered
function_pejet_2024-11-19 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 3698.86s
Shutdown handler de-registered
function_pejet_2024-11-19 status is now inactive due to auto-deactivation of underperforming models