developer_uid: chai_evaluation_service
submission_id: mylesgoose-llama-3-2-3b_82215_v2
model_name: mylesgoose-llama-3-2-3b_82215_v2
model_group: mylesgoose/Llama-3.2-3B-
status: inactive
timestamp: 2026-02-07T14:17:10+00:00
num_battles: 11864
num_wins: 3974
celo_rating: 9999.0
family_friendly_score: 0.0
family_friendly_standard_error: 0.0
submission_type: basic
model_repo: mylesgoose/Llama-3.2-3B-abliterated
model_architecture: LlamaForCausalLM
model_num_parameters: 3606752256.0
best_of: 8
max_input_tokens: 2048
max_output_tokens: 64
reward_model: default
display_name: mylesgoose-llama-3-2-3b_82215_v2
is_internal_developer: True
language_model: mylesgoose/Llama-3.2-3B-abliterated
model_size: 4B
ranking_group: single
us_pacific_date: 2026-02-07
win_ratio: 0.3349629130141605
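The win_ratio field above is consistent with num_wins / num_battles; a quick sanity check in Python, using the values copied from the fields above:

```python
# Sanity-check the reported win_ratio against num_wins / num_battles.
num_battles = 11864
num_wins = 3974

win_ratio = num_wins / num_battles
assert abs(win_ratio - 0.3349629130141605) < 1e-12
print(win_ratio)
```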
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 2048, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': '<|im_start|>system\n{memory}<|im_end|>\n', 'prompt_template': '<|im_start|>user\n{prompt}<|im_end|>\n', 'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n', 'user_template': '<|im_start|>user\n{user_name}: {message}<|im_end|>\n', 'response_template': '<|im_start|>assistant\n{bot_name}:', 'truncate_by_message': True}
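The formatter above is plain string templating; a minimal sketch of how these templates might be applied to a short conversation (the `render` helper and the sample names are illustrative, not the service's actual code):

```python
# Templates copied verbatim from the formatter field above.
formatter = {
    'memory_template': '<|im_start|>system\n{memory}<|im_end|>\n',
    'user_template': '<|im_start|>user\n{user_name}: {message}<|im_end|>\n',
    'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n',
    'response_template': '<|im_start|>assistant\n{bot_name}:',
}

def render(memory, turns, bot_name):
    # Illustrative: concatenate system memory, the alternating turns, then the
    # open-ended response template the model is asked to complete.
    out = formatter['memory_template'].format(memory=memory)
    for role, name, message in turns:
        if role == 'user':
            out += formatter['user_template'].format(user_name=name, message=message)
        else:
            out += formatter['bot_template'].format(bot_name=name, message=message)
    return out + formatter['response_template'].format(bot_name=bot_name)

prompt = render('You are Bot.', [('user', 'Alice', 'Hi!')], 'Bot')
print(prompt)
```

Note the response template is deliberately left unterminated (`{bot_name}:` with no `<|im_end|>`), so the model continues from the bot's name; the `'\n'` stopping word in generation_params then ends the completion at the first newline.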
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage VLLMUploader
Starting job with name mylesgoose-llama-3-2-3b-82215-v2-uploader
Waiting for job on mylesgoose-llama-3-2-3b-82215-v2-uploader to finish
HTTP Request: %s %s "%s %d %s"
mylesgoose-llama-3-2-3b-82215-v2-uploader: Using quantization_mode: none
mylesgoose-llama-3-2-3b-82215-v2-uploader: Downloading snapshot of mylesgoose/Llama-3.2-3B-abliterated...
mylesgoose-llama-3-2-3b-82215-v2-uploader: Fetching 10 files: 100%|██████████| 10/10 [00:04<00:00, 2.05it/s]
mylesgoose-llama-3-2-3b-82215-v2-uploader: Downloaded in 5.012s
mylesgoose-llama-3-2-3b-82215-v2-uploader: Bucket 's3://guanaco-vllm-models/' created
mylesgoose-llama-3-2-3b-82215-v2-uploader: uploading /dev/shm/model_output to s3://guanaco-vllm-models/mylesgoose-llama-3-2-3b-82215-v2
mylesgoose-llama-3-2-3b-82215-v2-uploader: cp /dev/shm/model_output/.gitattributes s3://guanaco-vllm-models/mylesgoose-llama-3-2-3b-82215-v2/.gitattributes
mylesgoose-llama-3-2-3b-82215-v2-uploader: cp /dev/shm/model_output/config.json s3://guanaco-vllm-models/mylesgoose-llama-3-2-3b-82215-v2/config.json
mylesgoose-llama-3-2-3b-82215-v2-uploader: cp /dev/shm/model_output/generation_config.json s3://guanaco-vllm-models/mylesgoose-llama-3-2-3b-82215-v2/generation_config.json
mylesgoose-llama-3-2-3b-82215-v2-uploader: cp /dev/shm/model_output/README.md s3://guanaco-vllm-models/mylesgoose-llama-3-2-3b-82215-v2/README.md
mylesgoose-llama-3-2-3b-82215-v2-uploader: cp /dev/shm/model_output/special_tokens_map.json s3://guanaco-vllm-models/mylesgoose-llama-3-2-3b-82215-v2/special_tokens_map.json
mylesgoose-llama-3-2-3b-82215-v2-uploader: cp /dev/shm/model_output/model.safetensors.index.json s3://guanaco-vllm-models/mylesgoose-llama-3-2-3b-82215-v2/model.safetensors.index.json
mylesgoose-llama-3-2-3b-82215-v2-uploader: cp /dev/shm/model_output/tokenizer_config.json s3://guanaco-vllm-models/mylesgoose-llama-3-2-3b-82215-v2/tokenizer_config.json
mylesgoose-llama-3-2-3b-82215-v2-uploader: cp /dev/shm/model_output/tokenizer.json s3://guanaco-vllm-models/mylesgoose-llama-3-2-3b-82215-v2/tokenizer.json
mylesgoose-llama-3-2-3b-82215-v2-uploader: cp /dev/shm/model_output/model-00001-of-00002.safetensors s3://guanaco-vllm-models/mylesgoose-llama-3-2-3b-82215-v2/model-00001-of-00002.safetensors
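Each `cp` line above copies one snapshot file from the local staging directory into the destination S3 prefix; a minimal sketch of how those per-file commands could be generated, assuming one copy command per file (the helper and the sample file list are illustrative):

```python
# Mirror the per-file "cp <local> <s3 key>" lines logged by the uploader.
LOCAL = '/dev/shm/model_output'
DEST = 's3://guanaco-vllm-models/mylesgoose-llama-3-2-3b-82215-v2'

def cp_commands(filenames):
    return [f'cp {LOCAL}/{name} {DEST}/{name}' for name in filenames]

cmds = cp_commands(['config.json', 'tokenizer.json'])
for cmd in cmds:
    print(cmd)
```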
Job mylesgoose-llama-3-2-3b-82215-v2-uploader completed after 181.6s with status: succeeded
Stopping job with name mylesgoose-llama-3-2-3b-82215-v2-uploader
Pipeline stage VLLMUploader completed in 182.74s
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 2.06s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service mylesgoose-llama-3-2-3b-82215-v2
Waiting for inference service mylesgoose-llama-3-2-3b-82215-v2 to be ready
HTTP Request: %s %s "%s %d %s"
Inference service mylesgoose-llama-3-2-3b-82215-v2 ready after 160.7114381790161s
Pipeline stage VLLMDeployer completed in 163.29s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 0.8406152725219727s
Received healthy response to inference request in 1.7137360572814941s
Received healthy response to inference request in 1.477095365524292s
Received healthy response to inference request in 1.195014238357544s
Received healthy response to inference request in 0.7240300178527832s
Received healthy response to inference request in 1.3867144584655762s
Received healthy response to inference request in 1.2608492374420166s
Received healthy response to inference request in 0.6489155292510986s
Received healthy response to inference request in 1.1937389373779297s
Received healthy response to inference request in 0.7710449695587158s
Received healthy response to inference request in 0.9953083992004395s
Received healthy response to inference request in 0.8338358402252197s
Received healthy response to inference request in 1.2907099723815918s
Received healthy response to inference request in 1.184077262878418s
Received healthy response to inference request in 1.2449443340301514s
Received healthy response to inference request in 1.277343511581421s
Received healthy response to inference request in 1.2415461540222168s
Received healthy response to inference request in 1.04876708984375s
Received healthy response to inference request in 1.1075310707092285s
Received healthy response to inference request in 0.8545126914978027s
Received healthy response to inference request in 1.289576530456543s
Received healthy response to inference request in 0.8678469657897949s
HTTP Request: %s %s "%s %d %s"
Received healthy response to inference request in 0.7113668918609619s
Received healthy response to inference request in 0.8938271999359131s
Received healthy response to inference request in 0.6578066349029541s
Received healthy response to inference request in 0.9537525177001953s
Received healthy response to inference request in 0.9400796890258789s
Received healthy response to inference request in 0.6397275924682617s
Received healthy response to inference request in 0.9043843746185303s
Received healthy response to inference request in 0.79404616355896s
30 requests
0 failed requests
5th percentile: 0.6529165267944336
10th percentile: 0.7060108661651612
20th percentile: 0.7894459247589112
30th percentile: 0.8503434658050537
40th percentile: 0.9001615047454834
50th percentile: 0.9745304584503174
60th percentile: 1.1381495475769041
70th percentile: 1.2089738130569456
80th percentile: 1.2641480922698975
90th percentile: 1.3003104209899903
95th percentile: 1.4364239573478697
99th percentile: 1.6451102566719058
mean time: 1.0314248323440551
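The summary statistics above are consistent with linear interpolation over the 30 sorted latencies (index = p/100 · (n−1), NumPy's default percentile method); a sketch reproducing the percentiles and the mean from the per-request timings logged above:

```python
# Reproduce the stress-check summary stats from the 30 logged latencies.
latencies = [
    0.8406152725219727, 1.7137360572814941, 1.477095365524292,
    1.195014238357544, 0.7240300178527832, 1.3867144584655762,
    1.2608492374420166, 0.6489155292510986, 1.1937389373779297,
    0.7710449695587158, 0.9953083992004395, 0.8338358402252197,
    1.2907099723815918, 1.184077262878418, 1.2449443340301514,
    1.277343511581421, 1.2415461540222168, 1.04876708984375,
    1.1075310707092285, 0.8545126914978027, 1.289576530456543,
    0.8678469657897949, 0.7113668918609619, 0.8938271999359131,
    0.6578066349029541, 0.9537525177001953, 0.9400796890258789,
    0.6397275924682617, 0.9043843746185303, 0.79404616355896,
]

def percentile(values, p):
    # Linear interpolation between closest ranks: index = p/100 * (n - 1).
    s = sorted(values)
    idx = p / 100 * (len(s) - 1)
    lo = int(idx)
    frac = idx - lo
    if lo + 1 == len(s):
        return s[lo]
    return s[lo] + frac * (s[lo + 1] - s[lo])

print(percentile(latencies, 5))        # ≈ 0.6529 (matches the logged 5th percentile)
print(percentile(latencies, 50))       # ≈ 0.9745 (matches the logged 50th percentile)
print(sum(latencies) / len(latencies)) # ≈ 1.0314 (matches the logged mean time)
```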
Pipeline stage StressChecker completed in 44.62s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.65s
Shutdown handler de-registered
mylesgoose-llama-3-2-3b_82215_v2 status is now deployed due to DeploymentManager action
mylesgoose-llama-3-2-3b_82215_v2 status is now inactive due to auto-deactivation of underperforming models