submission_id: function_jegon_2024-11-29
developer_uid: chai_backend_admin
celo_rating: 1257.05
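If celo_rating follows the standard Elo convention (an assumption; the rating system is not documented in this record), the expected win probability against an opponent can be sketched as:

```python
# Hedged sketch: standard Elo expected score. Whether celo_rating uses
# the conventional 400-point logistic scale is an assumption.
def expected_score(r_self: float, r_opponent: float) -> float:
    return 1 / (1 + 10 ** ((r_opponent - r_self) / 400))

# Two models at the listed rating would be expected to split battles evenly.
p_even = expected_score(1257.05, 1257.05)
```

A rating near the pool average is consistent with the roughly 50% win_ratio logged below.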
display_name: retune_with_base
family_friendly_score: 0.587
family_friendly_standard_error: 0.0069632032858448125
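The score and its standard error can be turned into an interval estimate; a minimal sketch, assuming a normal approximation (score ± 1.96 × SE), which is not stated in the record itself:

```python
# Hedged sketch: 95% confidence interval for the family-friendly score
# under an assumed normal approximation.
score = 0.587
standard_error = 0.0069632032858448125

margin = 1.96 * standard_error
ci_low, ci_high = score - margin, score + margin
```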
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
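The formatter templates above can be assembled into a full prompt string; a sketch assuming the placeholders map directly to `str.format` fields (the `build_prompt` helper and its signature are illustrative, not the serving code):

```python
# Illustrative only: composing a prompt from the submission's formatter
# templates. Field names are assumed to be str.format placeholders.
formatter = {
    'memory_template': "{bot_name}'s Persona: {memory}\n####\n",
    'prompt_template': '{prompt}\n<START>\n',
    'bot_template': '{bot_name}: {message}\n',
    'user_template': '{user_name}: {message}\n',
    'response_template': '{bot_name}:',
}

def build_prompt(bot_name, memory, prompt, turns):
    """turns: list of (speaker, message); speaker is 'user' or 'bot'."""
    parts = [
        formatter['memory_template'].format(bot_name=bot_name, memory=memory),
        formatter['prompt_template'].format(prompt=prompt),
    ]
    for speaker, message in turns:
        template = formatter['bot_template'] if speaker == 'bot' else formatter['user_template']
        parts.append(template.format(bot_name=bot_name, user_name='User', message=message))
    # response_template has no trailing newline: the model completes this line.
    parts.append(formatter['response_template'].format(bot_name=bot_name))
    return ''.join(parts)
```

The prompt ends at `{bot_name}:` so the model's completion is the bot's next message, and `'\n'` in the stopping words (below) ends generation at the end of that line.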
generation_params: {'temperature': 0.9, 'top_p': 0.9, 'min_p': 0.05, 'top_k': 80, 'presence_penalty': 0.5, 'frequency_penalty': 0.5, 'stopping_words': ['\n', '</s>'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
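Two of these knobs, temperature and top_k, can be illustrated with a toy sampler; a hedged sketch of the standard semantics, not the serving stack's actual implementation (min_p, top_p, and the penalties are omitted for brevity):

```python
import math
import random

# Toy illustration of temperature + top_k sampling semantics.
def sample_token(logits, temperature=0.9, top_k=80):
    # Keep only the indices of the top_k highest logits.
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:top_k]
    # Temperature-scaled softmax over the surviving candidates.
    scaled = [logits[i] / temperature for i in ranked]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(ranked, weights=weights, k=1)[0]
```

Lower temperature sharpens the distribution toward the top logit; `best_of: 8` means eight candidate completions are generated and one is selected.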
is_internal_developer: True
model_group:
model_name: retune_with_base
num_battles: 8935
num_wins: 4491
ranking_group: single
status: inactive
submission_type: function
timestamp: 2024-11-29T19:08:29+00:00
us_pacific_date: 2024-11-29
win_ratio: 0.5026301063234471
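The win_ratio field is consistent with the battle counts above; a quick sanity check:

```python
# win_ratio should equal num_wins / num_battles from the record above.
num_battles = 8935
num_wins = 4491
win_ratio = num_wins / num_battles
```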
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 3.654606819152832s
Received healthy response to inference request in 1.9811100959777832s
Connection pool is full, discarding connection: %s. Connection pool size: %s
Received healthy response to inference request in 2.104531764984131s
Received healthy response to inference request in 3.9881093502044678s
Received healthy response to inference request in 4.3483498096466064s
5 requests
0 failed requests
5th percentile: 2.0057944297790526
10th percentile: 2.030478763580322
20th percentile: 2.0798474311828614
30th percentile: 2.414546775817871
40th percentile: 3.034576797485352
50th percentile: 3.654606819152832
60th percentile: 3.7880078315734864
70th percentile: 3.9214088439941404
80th percentile: 4.060157442092896
90th percentile: 4.204253625869751
95th percentile: 4.276301717758178
99th percentile: 4.333940191268921
mean time: 3.215341567993164
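The statistics above can be reproduced from the five logged response times using linearly interpolated percentiles (the convention that matches the logged values; equivalent to numpy.percentile's default method):

```python
# Recomputing the StressChecker statistics from the five response times.
times = [3.654606819152832, 1.9811100959777832, 2.104531764984131,
         3.9881093502044678, 4.3483498096466064]

def percentile(samples, p):
    """Linearly interpolated percentile over sorted samples."""
    s = sorted(samples)
    rank = (p / 100) * (len(s) - 1)
    lower = int(rank)
    frac = rank - lower
    if lower + 1 < len(s):
        return s[lower] + frac * (s[lower + 1] - s[lower])
    return s[lower]

mean_time = sum(times) / len(times)
p50 = percentile(times, 50)  # with 5 samples this is the median
```

With only five samples, every percentile is an interpolation between adjacent order statistics, which is why the 50th percentile equals the third-slowest request exactly.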
%s, retrying in %s seconds...
Received healthy response to inference request in 2.731914758682251s
Received healthy response to inference request in 3.571359157562256s
Received healthy response to inference request in 2.7636115550994873s
Received healthy response to inference request in 3.273815631866455s
Received healthy response to inference request in 2.016326665878296s
5 requests
0 failed requests
5th percentile: 2.159444284439087
10th percentile: 2.302561902999878
20th percentile: 2.58879714012146
30th percentile: 2.738254117965698
40th percentile: 2.7509328365325927
50th percentile: 2.7636115550994873
60th percentile: 2.9676931858062745
70th percentile: 3.1717748165130613
80th percentile: 3.333324337005615
90th percentile: 3.4523417472839357
95th percentile: 3.511850452423096
99th percentile: 3.559457416534424
mean time: 2.871405553817749
Pipeline stage StressChecker completed in 32.57s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 2.12s
Shutdown handler de-registered
function_jegon_2024-11-29 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 3853.37s
Shutdown handler de-registered
function_jegon_2024-11-29 status is now inactive due to auto deactivation of underperforming models