developer_uid: chai_backend_admin
submission_id: function_kemuk_2024-11-29
model_name: retune_with_base
model_group:
status: inactive
timestamp: 2024-11-29T03:06:32+00:00
num_battles: 11088
num_wins: 5802
celo_rating: 1283.71
family_friendly_score: 0.5934
family_friendly_standard_error: 0.006946602622865367
submission_type: function
display_name: retune_with_base
is_internal_developer: True
ranking_group: single
us_pacific_date: 2024-11-28
win_ratio: 0.5232683982683982
generation_params: {'temperature': 0.9, 'top_p': 0.9, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.5, 'frequency_penalty': 0.5, 'stopping_words': ['\n', '</s>'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
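The generation_params and formatter fields above describe the sampling configuration and the prompt layout used at inference time. A minimal sketch of how they could be wired together, assuming a vLLM-style SamplingParams object and a hypothetical build_prompt helper (the record itself does not name the serving engine):

    from vllm import SamplingParams  # assumption: a vLLM-style engine consumes these params

    # Direct mapping of generation_params; max_input_tokens (1024) would be
    # enforced by truncating the prompt before generation, not here.
    sampling_params = SamplingParams(
        temperature=0.9,
        top_p=0.9,
        min_p=0.05,
        top_k=80,
        presence_penalty=0.5,
        frequency_penalty=0.5,
        stop=["\n", "</s>"],
        max_tokens=64,  # max_output_tokens
        best_of=8,
    )

    # Hypothetical helper showing how the formatter templates compose a prompt.
    def build_prompt(bot_name, user_name, memory, prompt, history):
        text = "{bot_name}'s Persona: {memory}\n####\n".format(bot_name=bot_name, memory=memory)
        text += "{prompt}\n<START>\n".format(prompt=prompt)
        for speaker, message in history:
            if speaker == bot_name:
                text += "{bot_name}: {message}\n".format(bot_name=bot_name, message=message)
            else:
                text += "{user_name}: {message}\n".format(user_name=user_name, message=message)
        # response_template: generation continues from the bot-name prefix.
        return text + "{bot_name}:".format(bot_name=bot_name)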
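For reference, win_ratio is simply num_wins / num_battles, and the family_friendly_standard_error is consistent with a binomial standard error over roughly 5,000 scored samples; that sample count is an inference, not part of this record:

    num_battles, num_wins = 11088, 5802
    print(num_wins / num_battles)  # 0.5232683982683982, matching win_ratio above

    # Assumption: standard error = sqrt(p * (1 - p) / n). Solving for n gives a
    # rough idea of how many conversations the family-friendly scorer sampled.
    p, se = 0.5934, 0.006946602622865367
    print(round(p * (1 - p) / se ** 2))  # ~5000 (back-of-envelope, not from the record)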
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 3.818439483642578s
Received healthy response to inference request in 4.3525519371032715s
Received healthy response to inference request in 3.890821695327759s
Received healthy response to inference request in 4.713558197021484s
Received healthy response to inference request in 4.693199634552002s
5 requests
0 failed requests
5th percentile: 3.832915925979614
10th percentile: 3.84739236831665
20th percentile: 3.8763452529907227
30th percentile: 3.9831677436828614
40th percentile: 4.167859840393066
50th percentile: 4.3525519371032715
60th percentile: 4.488811016082764
70th percentile: 4.625070095062256
80th percentile: 4.697271347045898
90th percentile: 4.705414772033691
95th percentile: 4.709486484527588
99th percentile: 4.712743854522705
mean time: 4.293714189529419
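The percentile and mean figures above can be reproduced from the five logged response times with linear-interpolation percentiles (NumPy's default); a small sketch, assuming that is how the StressChecker stage aggregates its timings:

    import numpy as np

    # The five healthy response times (seconds) from the first stress-check round.
    times = [
        3.818439483642578,
        4.3525519371032715,
        3.890821695327759,
        4.713558197021484,
        4.693199634552002,
    ]

    for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
        print(f"{q}th percentile: {np.percentile(times, q)}")  # e.g. 5th -> 3.832915925979614

    print("mean time:", np.mean(times))  # 4.293714189529419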
%s, retrying in %s seconds...
Received healthy response to inference request in 3.44356369972229s
Received healthy response to inference request in 3.3136425018310547s
Received healthy response to inference request in 2.7455430030822754s
Received healthy response to inference request in 3.3224878311157227s
Received healthy response to inference request in 3.2702133655548096s
5 requests
0 failed requests
5th percentile: 2.8504770755767823
10th percentile: 2.9554111480712892
20th percentile: 3.1652792930603026
30th percentile: 3.2788991928100586
40th percentile: 3.2962708473205566
50th percentile: 3.3136425018310547
60th percentile: 3.317180633544922
70th percentile: 3.320718765258789
80th percentile: 3.3467030048370363
90th percentile: 3.395133352279663
95th percentile: 3.4193485260009764
99th percentile: 3.4387206649780273
mean time: 3.2190900802612306
Pipeline stage StressChecker completed in 39.95s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 2.91s
Shutdown handler de-registered
function_kemuk_2024-11-29 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 4262.43s
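The OfflineFamilyFriendlyScorer stage above reports evaluating the score with a pool of worker threads (the %s placeholders, left uninterpolated by the logger, are the submission name and thread count). A minimal sketch of that pattern; score_sample, the sample list, and the thread count are hypothetical stand-ins for whatever the real scorer uses:

    import math
    from concurrent.futures import ThreadPoolExecutor

    def score_sample(sample):
        # Hypothetical per-conversation check: 1.0 if family friendly, else 0.0.
        return 1.0 if "unsafe" not in sample else 0.0

    def evaluate_family_friendly(samples, num_threads=8):
        # Fan per-sample scoring out over a thread pool, as the log line suggests.
        with ThreadPoolExecutor(max_workers=num_threads) as pool:
            scores = list(pool.map(score_sample, samples))
        mean = sum(scores) / len(scores)
        # Binomial standard error, the same form implied by the
        # family_friendly_standard_error field in the record.
        stderr = math.sqrt(mean * (1 - mean) / len(scores))
        return mean, stderr

With a mean near 0.59 and on the order of 5,000 samples, this form of standard error comes out close to the 0.0069 reported above.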
Shutdown handler de-registered
function_kemuk_2024-11-29 status is now inactive due to auto-deactivation of underperforming models