submission_id: function_hagun_2024-11-15
developer_uid: chai_backend_admin
celo_rating: 1250.78
display_name: retune_with_base
family_friendly_score: 0.5852
family_friendly_standard_error: 0.006967653263474008
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 0.9, 'top_p': 0.9, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.5, 'frequency_penalty': 0.5, 'stopping_words': ['\n', '</s>'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
is_internal_developer: True
model_group:
model_name: retune_with_base
num_battles: 12274
num_wins: 6198
ranking_group: single
status: inactive
submission_type: function
timestamp: 2024-11-15T17:42:10+00:00
us_pacific_date: 2024-11-15
win_ratio: 0.5049698549780023
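For context, win_ratio above is simply num_wins / num_battles (6198 / 12274 ≈ 0.505). The sketch below shows one plausible way the formatter templates listed above could be assembled into a single prompt; the template strings are copied from the metadata, while build_prompt, the character names, and the example conversation are purely illustrative assumptions, not the production code.

# Minimal sketch (assumed, not the production pipeline) of assembling a prompt
# from the formatter templates in the metadata above.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, turns):
    """Concatenate persona memory, scenario prompt, chat history, and the response cue."""
    text = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
    text += formatter["prompt_template"].format(prompt=prompt)
    for speaker, message in turns:
        if speaker == "bot":
            text += formatter["bot_template"].format(bot_name=bot_name, message=message)
        else:
            text += formatter["user_template"].format(user_name=user_name, message=message)
    # The model is asked to continue from "{bot_name}:".
    text += formatter["response_template"].format(bot_name=bot_name)
    return text

# Hypothetical example values, only for illustration.
print(build_prompt("Luna", "Alex", "A curious astronomer.", "Late night at the observatory.",
                   [("user", "What are you looking at?"), ("bot", "Saturn is out tonight.")]))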
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 6.729035139083862s
Received healthy response to inference request in 3.6397323608398438s
Received healthy response to inference request in 3.2626543045043945s
Received healthy response to inference request in 3.8877339363098145s
Received healthy response to inference request in 2.346219062805176s
5 requests
0 failed requests
5th percentile: 2.5295061111450194
10th percentile: 2.712793159484863
20th percentile: 3.079367256164551
30th percentile: 3.3380699157714844
40th percentile: 3.488901138305664
50th percentile: 3.6397323608398438
60th percentile: 3.738932991027832
70th percentile: 3.83813362121582
80th percentile: 4.455994176864625
90th percentile: 5.5925146579742435
95th percentile: 6.160774898529052
99th percentile: 6.6153830909729
mean time: 3.973074960708618
%s, retrying in %s seconds...
Received healthy response to inference request in 2.939268112182617s
Received healthy response to inference request in 1.6235525608062744s
Received healthy response to inference request in 3.305225133895874s
Received healthy response to inference request in 3.1155662536621094s
Received healthy response to inference request in 2.8683085441589355s
5 requests
0 failed requests
5th percentile: 1.8725037574768066
10th percentile: 2.121454954147339
20th percentile: 2.6193573474884033
30th percentile: 2.8825004577636717
40th percentile: 2.9108842849731444
50th percentile: 2.939268112182617
60th percentile: 3.009787368774414
70th percentile: 3.080306625366211
80th percentile: 3.1534980297088624
90th percentile: 3.229361581802368
95th percentile: 3.267293357849121
99th percentile: 3.2976387786865233
mean time: 2.770384120941162
Pipeline stage StressChecker completed in 36.37s
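A minimal sketch of how the StressChecker summary above could be reproduced, assuming it is a plain percentile/mean computation over the logged per-request latencies; the numpy calls are an assumption (numpy's default linear interpolation happens to match the reported values), and only the latencies themselves are taken from the first run above.

# Sketch: recompute the first run's summary from its logged latencies.
import numpy as np

latencies = [
    6.729035139083862,
    3.6397323608398438,
    3.2626543045043945,
    3.8877339363098145,
    2.346219062805176,
]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    # np.percentile uses linear interpolation by default, which matches the log output.
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print(f"mean time: {np.mean(latencies)}")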
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 2.33s
Shutdown handler de-registered
function_hagun_2024-11-15 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 3403.60s
Shutdown handler de-registered
function_hagun_2024-11-15 status is now inactive due to auto deactivation of underperforming models