developer_uid: chai_evaluation_service
submission_id: evelyn777-chai-sft-3b-v4_v3
model_name: evelyn777-chai-sft-3b-v4_v3
model_group: evelyn777/chai-sft-3b-v4
status: inactive
timestamp: 2026-02-08T08:17:05+00:00
num_battles: 14143
num_wins: 4967
celo_rating: 1198.23
family_friendly_score: 0.0
family_friendly_standard_error: 0.0
submission_type: basic
model_repo: evelyn777/chai-sft-3b-v4
model_architecture: Qwen2ForCausalLM
model_num_parameters: 3397011456.0
best_of: 8
max_input_tokens: 2048
max_output_tokens: 64
reward_model: default
display_name: evelyn777-chai-sft-3b-v4_v3
is_internal_developer: True
language_model: evelyn777/chai-sft-3b-v4
model_size: 3B
ranking_group: single
us_pacific_date: 2026-02-08
win_ratio: 0.35119847274269955
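The `win_ratio` field is simply `num_wins / num_battles`, taken from the fields above. A quick pure-Python sanity check of that arithmetic:

```python
# Sanity-check the derived metric: win_ratio = num_wins / num_battles,
# using the values reported in the submission metadata above.
num_battles = 14143
num_wins = 4967

win_ratio = num_wins / num_battles
print(win_ratio)  # matches the logged 0.35119847274269955
```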
generation_params: {'temperature': 0.6, 'top_p': 0.9, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 2048, 'best_of': 8, 'max_output_tokens': 64}
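`best_of: 8` together with `reward_model: default` suggests best-of-n reranking: sample n completions per request, score each with a reward model, and return the highest-scoring one. A minimal sketch of that pattern, where `generate_candidates` and `reward` are hypothetical stand-ins (not the service's actual generator or reward model):

```python
import random

def generate_candidates(prompt, n=8, max_output_tokens=64):
    """Hypothetical stand-in for sampling n completions from the LM."""
    return [f"candidate {i} for {prompt!r}"[:max_output_tokens] for i in range(n)]

def reward(prompt, completion):
    """Hypothetical stand-in for the 'default' reward model's scalar score."""
    random.seed(hash((prompt, completion)) & 0xFFFF)  # deterministic per pair
    return random.random()

def best_of_n(prompt, n=8):
    # Sample n candidates, score each, and return the argmax.
    candidates = generate_candidates(prompt, n=n)
    return max(candidates, key=lambda c: reward(prompt, c))

print(best_of_n("Hello", n=8))
```

The trade-off is straightforward: with `best_of: 8`, each request costs eight forward generations plus eight reward-model scores, in exchange for returning the best-ranked sample.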
formatter: {'memory_template': '<|im_start|>system\n{memory}<|im_end|>\n', 'prompt_template': '<|im_start|>user\n{prompt}<|im_end|>\n', 'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n', 'user_template': '<|im_start|>user\n{user_name}: {message}<|im_end|>\n', 'response_template': '<|im_start|>assistant\n{bot_name}:', 'truncate_by_message': True}
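The formatter templates above are ChatML-style. A sketch of how they plausibly assemble a full prompt (system memory, then the turn history, then the open assistant turn the model completes); the `render` helper and assembly order are illustrative, and truncation (`truncate_by_message`, `max_input_tokens`) is omitted:

```python
# Templates copied verbatim from the submission's formatter config.
formatter = {
    'memory_template': '<|im_start|>system\n{memory}<|im_end|>\n',
    'prompt_template': '<|im_start|>user\n{prompt}<|im_end|>\n',
    'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n',
    'user_template': '<|im_start|>user\n{user_name}: {message}<|im_end|>\n',
    'response_template': '<|im_start|>assistant\n{bot_name}:',
}

def render(memory, messages, bot_name):
    """Assemble system memory, turn history, and the open response turn."""
    parts = [formatter['memory_template'].format(memory=memory)]
    for speaker, name, message in messages:
        if speaker == 'user':
            parts.append(formatter['user_template'].format(user_name=name, message=message))
        else:
            parts.append(formatter['bot_template'].format(bot_name=name, message=message))
    parts.append(formatter['response_template'].format(bot_name=bot_name))
    return ''.join(parts)

prompt = render(
    memory="You are Ava, a friendly assistant.",
    messages=[('user', 'Sam', 'Hi there!'), ('bot', 'Ava', 'Hello, Sam!')],
    bot_name='Ava',
)
print(prompt)
```

Note that `response_template` ends with `{bot_name}:` and no closing tag, which is why `stopping_words: ['\n']` in the generation params is enough to end the turn.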
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage VLLMUploader
Starting job with name evelyn777-chai-sft-3b-v4-v3-uploader
Waiting for job on evelyn777-chai-sft-3b-v4-v3-uploader to finish
evelyn777-chai-sft-3b-v4-v3-uploader: Using quantization_mode: none
evelyn777-chai-sft-3b-v4-v3-uploader: Downloading snapshot of evelyn777/chai-sft-3b-v4...
evelyn777-chai-sft-3b-v4-v3-uploader: Fetching 13 files: 100%|██████████| 13/13 [00:04<00:00, 2.81it/s]
evelyn777-chai-sft-3b-v4-v3-uploader: Downloaded in 4.772s
evelyn777-chai-sft-3b-v4-v3-uploader: Processed model evelyn777/chai-sft-3b-v4 in 7.344s
evelyn777-chai-sft-3b-v4-v3-uploader: creating bucket guanaco-vllm-models
evelyn777-chai-sft-3b-v4-v3-uploader: /usr/lib/python3/dist-packages/S3/BaseUtils.py:56: SyntaxWarning: invalid escape sequence '\.'
evelyn777-chai-sft-3b-v4-v3-uploader: RE_S3_DATESTRING = re.compile('\.[0-9]*(?:[Z\\-\\+]*?)')
evelyn777-chai-sft-3b-v4-v3-uploader: /usr/lib/python3/dist-packages/S3/BaseUtils.py:57: SyntaxWarning: invalid escape sequence '\s'
evelyn777-chai-sft-3b-v4-v3-uploader: RE_XML_NAMESPACE = re.compile(b'^(<?[^>]+?>\s*|\s*)(<\w+) xmlns=[\'"](https?://[^\'"]+)[\'"]', re.MULTILINE)
evelyn777-chai-sft-3b-v4-v3-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:240: SyntaxWarning: invalid escape sequence '\.'
evelyn777-chai-sft-3b-v4-v3-uploader: invalid = re.search("([^a-z0-9\.-])", bucket, re.UNICODE)
evelyn777-chai-sft-3b-v4-v3-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:244: SyntaxWarning: invalid escape sequence '\.'
evelyn777-chai-sft-3b-v4-v3-uploader: invalid = re.search("([^A-Za-z0-9\._-])", bucket, re.UNICODE)
evelyn777-chai-sft-3b-v4-v3-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:255: SyntaxWarning: invalid escape sequence '\.'
evelyn777-chai-sft-3b-v4-v3-uploader: if re.search("-\.", bucket, re.UNICODE):
evelyn777-chai-sft-3b-v4-v3-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:257: SyntaxWarning: invalid escape sequence '\.'
evelyn777-chai-sft-3b-v4-v3-uploader: if re.search("\.\.", bucket, re.UNICODE):
evelyn777-chai-sft-3b-v4-v3-uploader: /usr/lib/python3/dist-packages/S3/S3Uri.py:155: SyntaxWarning: invalid escape sequence '\w'
evelyn777-chai-sft-3b-v4-v3-uploader: _re = re.compile("^(\w+://)?(.*)", re.UNICODE)
evelyn777-chai-sft-3b-v4-v3-uploader: /usr/lib/python3/dist-packages/S3/FileLists.py:480: SyntaxWarning: invalid escape sequence '\*'
evelyn777-chai-sft-3b-v4-v3-uploader: wildcard_split_result = re.split("\*|\?", uri_str, maxsplit=1)
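The `SyntaxWarning: invalid escape sequence` messages above all have the same cause: the bundled s3cmd modules write regex patterns as plain string literals, where escapes like `'\.'` are invalid at the string level (a warning since Python 3.12, though Python still passes the backslash through, so the patterns keep working). The standard fix is a raw string literal; the pattern below is the one from the log's `BaseUtils.py` line:

```python
import re

# Plain literal '\.[0-9]*...' would warn; the raw-string form does not.
fixed = re.compile(r'\.[0-9]*(?:[Z\-\+]*?)')

# The pattern targets a fractional-seconds suffix in S3 date strings.
# Note the trailing (?:...*?) is a lazy star, so it matches empty and the
# overall match stops at the digits:
m = fixed.search('2026-02-08T08:17:05.123456Z')
print(m.group(0))  # '.123456'
```

These warnings are harmless noise in the upload step, but they would become hard `SyntaxError`s if CPython ever completes the long-planned escalation of invalid string escapes.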
evelyn777-chai-sft-3b-v4-v3-uploader: Bucket 's3://guanaco-vllm-models/' created
evelyn777-chai-sft-3b-v4-v3-uploader: uploading /dev/shm/model_output to s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/.gitattributes s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/.gitattributes
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/special_tokens_map.json s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/special_tokens_map.json
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/config.json s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/config.json
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/chat_template.jinja s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/chat_template.jinja
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/generation_config.json s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/generation_config.json
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/tokenizer_config.json s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/tokenizer_config.json
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/added_tokens.json s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/added_tokens.json
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/model.safetensors.index.json s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/model.safetensors.index.json
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/merges.txt s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/merges.txt
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/vocab.json s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/vocab.json
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/tokenizer.json s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/tokenizer.json
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/model-00002-of-00002.safetensors s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/model-00002-of-00002.safetensors
HTTP Request: %s %s "%s %d %s"
HTTP Request: %s %s "%s %d %s"
evelyn777-chai-sft-3b-v4-v3-uploader: cp /dev/shm/model_output/model-00001-of-00002.safetensors s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v4-v3/model-00001-of-00002.safetensors
Job evelyn777-chai-sft-3b-v4-v3-uploader completed after 84.04s with status: succeeded
Stopping job with name evelyn777-chai-sft-3b-v4-v3-uploader
Pipeline stage VLLMUploader completed in 84.61s
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 0.14s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service evelyn777-chai-sft-3b-v4-v3
Waiting for inference service evelyn777-chai-sft-3b-v4-v3 to be ready
Inference service evelyn777-chai-sft-3b-v4-v3 ready after 170.632244348526s
Pipeline stage VLLMDeployer completed in 171.15s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.0736229419708252s
Received healthy response to inference request in 1.254181146621704s
Received healthy response to inference request in 2.5942938327789307s
Received healthy response to inference request in 0.8502209186553955s
Received healthy response to inference request in 0.9153289794921875s
Received healthy response to inference request in 0.7003486156463623s
Received healthy response to inference request in 0.9575667381286621s
Received healthy response to inference request in 0.98006272315979s
Received healthy response to inference request in 0.7635464668273926s
Received healthy response to inference request in 0.7125091552734375s
Received healthy response to inference request in 0.8187193870544434s
Received healthy response to inference request in 0.6992402076721191s
Received healthy response to inference request in 0.45232462882995605s
Received healthy response to inference request in 0.6851260662078857s
Received healthy response to inference request in 0.6294965744018555s
Received healthy response to inference request in 0.8069412708282471s
Received healthy response to inference request in 0.8489241600036621s
Received healthy response to inference request in 0.4456484317779541s
Received healthy response to inference request in 0.9287111759185791s
Received healthy response to inference request in 0.7871882915496826s
Received healthy response to inference request in 0.4852910041809082s
Received healthy response to inference request in 0.6504313945770264s
Received healthy response to inference request in 0.7007248401641846s
Received healthy response to inference request in 0.5521621704101562s
Received healthy response to inference request in 0.7895786762237549s
Received healthy response to inference request in 0.6836385726928711s
Received healthy response to inference request in 0.4802532196044922s
Received healthy response to inference request in 0.6628191471099854s
Received healthy response to inference request in 1.4095771312713623s
Received healthy response to inference request in 0.5948536396026611s
30 requests
0 failed requests
5th percentile: 0.4648924946784973
10th percentile: 0.4847872257232666
20th percentile: 0.6225679874420166
30th percentile: 0.6773927450180054
40th percentile: 0.699905252456665
50th percentile: 0.738027811050415
60th percentile: 0.7965237140655518
70th percentile: 0.8493131875991822
80th percentile: 0.9344822883605958
90th percentile: 1.0916787624359134
95th percentile: 1.3396489381790158
99th percentile: 2.2507259893417366
mean time: 0.8304443836212159
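The StressChecker summary above is consistent with linear interpolation between order statistics (the same method as `numpy.percentile`'s default). A pure-Python reproduction, with the 30 latencies copied from the log:

```python
# The 30 per-request latencies (seconds) logged by the stress check.
latencies = [
    1.0736229419708252, 1.254181146621704, 2.5942938327789307,
    0.8502209186553955, 0.9153289794921875, 0.7003486156463623,
    0.9575667381286621, 0.98006272315979, 0.7635464668273926,
    0.7125091552734375, 0.8187193870544434, 0.6992402076721191,
    0.45232462882995605, 0.6851260662078857, 0.6294965744018555,
    0.8069412708282471, 0.8489241600036621, 0.4456484317779541,
    0.9287111759185791, 0.7871882915496826, 0.4852910041809082,
    0.6504313945770264, 0.7007248401641846, 0.5521621704101562,
    0.7895786762237549, 0.6836385726928711, 0.4802532196044922,
    0.6628191471099854, 1.4095771312713623, 0.5948536396026611,
]

def percentile(values, p):
    """p-th percentile via linear interpolation between closest ranks."""
    s = sorted(values)
    idx = (len(s) - 1) * p / 100
    lo = int(idx)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (idx - lo)

print(f"mean time: {sum(latencies) / len(latencies)}")
for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {percentile(latencies, p)}")
```

The large gap between the 95th (≈1.34s) and 99th (≈2.25s) percentiles is driven entirely by the single 2.59s outlier; with only 30 samples, the upper tail here rests on one or two observations.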
Pipeline stage StressChecker completed in 40.27s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.62s
Shutdown handler de-registered
evelyn777-chai-sft-3b-v4_v3 status is now deployed due to DeploymentManager action
evelyn777-chai-sft-3b-v4_v3 status is now inactive due to auto-deactivation of underperforming models