developer_uid: rirv938
submission_id: function_ketal_2025-01-27
model_name: retune_with_base
model_group:
status: torndown
timestamp: 2025-01-27T23:31:06+00:00
num_battles: 7554
num_wins: 4120
celo_rating: 1302.44
family_friendly_score: 0.5589999999999999
family_friendly_standard_error: 0.007021666468866205
submission_type: function
display_name: retune_with_base
is_internal_developer: True
ranking_group: single
us_pacific_date: 2025-01-27
win_ratio: 0.5454064072014827
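The win_ratio field is simply num_wins divided by num_battles; a quick sanity check of the reported figure, assuming straight floating-point division:

```python
num_battles = 7554
num_wins = 4120

# win_ratio as reported in the submission record above
win_ratio = num_wins / num_battles
print(win_ratio)  # matches the logged 0.5454064072014827
```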
generation_params: {'temperature': 0.9, 'top_p': 0.9, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.5, 'frequency_penalty': 0.5, 'stopping_words': ['\n', '</s>'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
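Of the generation parameters, stopping_words has the least obvious mechanics: decoding is cut at the first occurrence of any stop string, with the stop itself excluded. A minimal sketch of that truncation step (truncate_at_stop is a hypothetical helper for illustration, not part of this pipeline):

```python
def truncate_at_stop(text: str, stopping_words: list[str]) -> str:
    """Cut `text` at the earliest occurrence of any stop string."""
    cut = len(text)
    for stop in stopping_words:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# The submission's stop list: newline and the end-of-sequence token.
stops = ["\n", "</s>"]
print(truncate_at_stop("Sure, let's talk.\nUser:", stops))  # Sure, let's talk.
```

With max_output_tokens at 64 and "\n" as a stop, each sampled completion is effectively a single chat line; best_of=8 then picks among eight such candidates.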
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
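The formatter's five templates compose into one prompt string: persona block, scenario prompt, alternating chat history, then the response stub the model completes. A sketch of that assembly, assuming plain str.format substitution and hypothetical sample values (build_prompt and the names are illustrative, not the pipeline's actual code):

```python
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, history):
    """history: list of (speaker, message) pairs, speaker in {'bot', 'user'}."""
    parts = [
        formatter["memory_template"].format(bot_name=bot_name, memory=memory),
        formatter["prompt_template"].format(prompt=prompt),
    ]
    for speaker, message in history:
        if speaker == "bot":
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
    # Response template has no trailing newline: the model continues from "{bot_name}:"
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

print(build_prompt("Luna", "Alex", "a curious stargazer",
                   "A chat under the night sky.",
                   [("user", "What's that bright star?")]))
```

Note truncate_by_message is False, so when the assembled prompt exceeds max_input_tokens (1024 here), it is truncated at the token level rather than by dropping whole messages.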
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 4.272201776504517s
Received healthy response to inference request in 2.750655174255371s
Received healthy response to inference request in 3.1263773441314697s
Received healthy response to inference request in 4.1077046394348145s
Received healthy response to inference request in 2.8356800079345703s
5 requests
0 failed requests
5th percentile: 2.767660140991211
10th percentile: 2.784665107727051
20th percentile: 2.8186750411987305
30th percentile: 2.89381947517395
40th percentile: 3.01009840965271
50th percentile: 3.1263773441314697
60th percentile: 3.5189082622528076
70th percentile: 3.9114391803741455
80th percentile: 4.140604066848755
90th percentile: 4.206402921676636
95th percentile: 4.239302349090576
99th percentile: 4.265621891021729
mean time: 3.4185237884521484
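The percentile rows above are consistent with linear interpolation between closest ranks over the sorted samples (numpy's default method). A pure-Python sketch that reproduces the first batch's figures from its five logged latencies:

```python
def percentile(samples, p):
    """p-th percentile via linear interpolation between closest ranks."""
    xs = sorted(samples)
    k = (len(xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

# The five healthy-response latencies from the first StressChecker batch.
latencies = [4.272201776504517, 2.750655174255371, 3.1263773441314697,
             4.1077046394348145, 2.8356800079345703]

print(percentile(latencies, 50))        # 3.1263773441314697 (the logged median)
print(percentile(latencies, 95))        # ~4.2393, matching the logged 95th percentile
print(sum(latencies) / len(latencies))  # ~3.4185, matching the logged mean time
```

With only five samples, every percentile is an interpolation between just two observations, so the tail percentiles (90th/95th/99th) are dominated by the single slowest request.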
%s, retrying in %s seconds...
Received healthy response to inference request in 2.342014789581299s
Received healthy response to inference request in 2.843024492263794s
Received healthy response to inference request in 1.927870750427246s
Received healthy response to inference request in 2.567979574203491s
Received healthy response to inference request in 2.635040044784546s
5 requests
0 failed requests
5th percentile: 2.0106995582580565
10th percentile: 2.0935283660888673
20th percentile: 2.2591859817504885
30th percentile: 2.3872077465057373
40th percentile: 2.4775936603546143
50th percentile: 2.567979574203491
60th percentile: 2.5948037624359133
70th percentile: 2.621627950668335
80th percentile: 2.6766369342803955
90th percentile: 2.7598307132720947
95th percentile: 2.8014276027679443
99th percentile: 2.834705114364624
mean time: 2.463185930252075
Pipeline stage StressChecker completed in 31.44s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.66s
Shutdown handler de-registered
function_ketal_2025-01-27 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 3487.75s
Shutdown handler de-registered
function_ketal_2025-01-27 status is now inactive due to auto-deactivation of underperforming models
function_ketal_2025-01-27 status is now torndown due to DeploymentManager action
ChatRequest
Generation Params
Prompt Formatter
Chat History
ChatMessage 1