developer_uid: chai_backend_admin
submission_id: function_tegul_2024-11-24
model_name: retune_with_base
model_group:
status: inactive
timestamp: 2024-11-24T08:02:33+00:00
num_battles: 20224
num_wins: 10987
celo_rating: 1285.53
family_friendly_score: 0.5766
family_friendly_standard_error: 0.00698759529452014
submission_type: function
display_name: retune_with_base
is_internal_developer: True
ranking_group: single
us_pacific_date: 2024-11-24
win_ratio: 0.5432654272151899
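A minimal sketch, assuming win_ratio is derived as num_wins / num_battles from the counters above (the arithmetic reproduces the logged value):

```python
# Hedged sketch: win_ratio above appears to be num_wins / num_battles.
# Both counters are taken verbatim from this card's metadata.
num_battles = 20224
num_wins = 10987

win_ratio = num_wins / num_battles
print(win_ratio)  # ~0.5432654272151899, matching the logged field
```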
generation_params: {'temperature': 0.9, 'top_p': 0.9, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.5, 'frequency_penalty': 0.5, 'stopping_words': ['\n', '</s>'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
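A minimal sketch of how the formatter templates above might compose a single prompt string. Only the template strings come from the card; the bot name, user name, persona text, and messages below are invented placeholders, and the assembly order (memory, prompt, alternating turns, response header) is an assumption:

```python
# Templates copied verbatim from the formatter field above.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, turns):
    """Assemble a prompt: persona block, scenario, chat turns, response header.

    The ordering is an assumption; the card does not state it explicitly.
    """
    parts = [formatter["memory_template"].format(bot_name=bot_name, memory=memory)]
    parts.append(formatter["prompt_template"].format(prompt=prompt))
    for speaker, message in turns:
        if speaker == "bot":
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
    # The response template leaves the prompt ending at "{bot_name}:" so the
    # model completes the bot's next message.
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

# Placeholder conversation (all names/messages hypothetical):
text = build_prompt(
    "Tegul", "User",
    memory="a friendly assistant",
    prompt="An example scenario.",
    turns=[("user", "Hello!"), ("bot", "Hi there!")],
)
print(text)
```

Note the stopping_words `['\n', '</s>']` in generation_params pair naturally with this layout: each turn is newline-terminated, so generation stops at the end of the bot's single message.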
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 4.123232364654541s
Received healthy response to inference request in 3.6662306785583496s
Received healthy response to inference request in 4.125193357467651s
Received healthy response to inference request in 3.1858267784118652s
Received healthy response to inference request in 4.353338241577148s
5 requests
0 failed requests
5th percentile: 3.281907558441162
10th percentile: 3.377988338470459
20th percentile: 3.5701498985290527
30th percentile: 3.757631015777588
40th percentile: 3.9404316902160645
50th percentile: 4.123232364654541
60th percentile: 4.124016761779785
70th percentile: 4.124801158905029
80th percentile: 4.170822334289551
90th percentile: 4.262080287933349
95th percentile: 4.307709264755249
99th percentile: 4.344212446212769
mean time: 3.890764284133911
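The summary statistics above can be reproduced from the five logged response times. A minimal sketch, assuming the percentiles use linear interpolation between order statistics (NumPy's default method, which matches the logged values):

```python
# The five "healthy response" durations from the first StressChecker batch,
# taken verbatim from the log above.
samples = [
    4.123232364654541,
    3.6662306785583496,
    4.125193357467651,
    3.1858267784118652,
    4.353338241577148,
]

def percentile(data, q):
    """Percentile via linear interpolation between sorted samples.

    Assumed method (equivalent to numpy's default 'linear'); it reproduces
    the logged summary statistics.
    """
    xs = sorted(data)
    pos = (q / 100) * (len(xs) - 1)
    lo = int(pos)
    frac = pos - lo
    if lo + 1 < len(xs):
        return xs[lo] + frac * (xs[lo + 1] - xs[lo])
    return xs[lo]

print(f"5th percentile: {percentile(samples, 5)}")    # ~3.281907558441162
print(f"50th percentile: {percentile(samples, 50)}")  # ~4.123232364654541
print(f"mean time: {sum(samples) / len(samples)}")    # ~3.890764284133911
```

The second batch (after the retry) follows the same computation over its own five durations.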
%s, retrying in %s seconds...
Received healthy response to inference request in 3.286311388015747s
Received healthy response to inference request in 3.5790932178497314s
Received healthy response to inference request in 4.621640920639038s
Received healthy response to inference request in 3.507241725921631s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Received healthy response to inference request in 3.628448724746704s
5 requests
0 failed requests
5th percentile: 3.330497455596924
10th percentile: 3.3746835231781005
20th percentile: 3.463055658340454
30th percentile: 3.521612024307251
40th percentile: 3.550352621078491
50th percentile: 3.5790932178497314
60th percentile: 3.5988354206085207
70th percentile: 3.6185776233673095
80th percentile: 3.827087163925171
90th percentile: 4.2243640422821045
95th percentile: 4.423002481460571
99th percentile: 4.581913232803345
mean time: 3.72454719543457
Pipeline stage StressChecker completed in 40.45s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 2.34s
Shutdown handler de-registered
function_tegul_2024-11-24 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 3442.86s
Shutdown handler de-registered
function_tegul_2024-11-24 status is now inactive due to auto deactivation (removed underperforming models)