developer_uid: chai_evaluation_service
submission_id: evelyn777-chai-sft-3b_v1
model_name: evelyn777-chai-sft-3b_v1
model_group: evelyn777/chai-sft-3b
status: inactive
timestamp: 2026-02-07T21:37:34+00:00
num_battles: 11084
num_wins: 3322
celo_rating: 1156.65
family_friendly_score: 0.0
family_friendly_standard_error: 0.0
submission_type: basic
model_repo: evelyn777/chai-sft-3b
model_architecture: Qwen2ForCausalLM
model_num_parameters: 3397011456.0
best_of: 8
max_input_tokens: 2048
max_output_tokens: 64
reward_model: default
display_name: evelyn777-chai-sft-3b_v1
is_internal_developer: True
language_model: evelyn777/chai-sft-3b
model_size: 3B
ranking_group: single
us_pacific_date: 2026-02-07
win_ratio: 0.29971129556116927
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 2048, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': '<|im_start|>system\n{memory}<|im_end|>\n', 'prompt_template': '<|im_start|>user\n{prompt}<|im_end|>\n', 'bot_template': '<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n', 'user_template': '<|im_start|>user\n{user_name}: {message}<|im_end|>\n', 'response_template': '<|im_start|>assistant\n{bot_name}:', 'truncate_by_message': True}
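Note that the win_ratio above is simply num_wins / num_battles (3322 / 11084 ≈ 0.2997). Below is a minimal sketch of how the formatter templates and generation_params logged above could be applied when assembling a request. Only the template strings and the sampling dict come from this log; the character name, user name, memory, messages, and the render_prompt helper are hypothetical illustrations, and no specific inference engine's API is assumed.

```python
# Sketch: rendering the logged formatter templates into a ChatML-style prompt.
# Template strings and generation_params are copied from the submission log;
# all names and messages below are hypothetical examples.

formatter = {
    "memory_template": "<|im_start|>system\n{memory}<|im_end|>\n",
    "prompt_template": "<|im_start|>user\n{prompt}<|im_end|>\n",
    "bot_template": "<|im_start|>assistant\n{bot_name}: {message}<|im_end|>\n",
    "user_template": "<|im_start|>user\n{user_name}: {message}<|im_end|>\n",
    "response_template": "<|im_start|>assistant\n{bot_name}:",
}

def render_prompt(memory, prompt, turns, bot_name):
    """Concatenate system memory, scenario prompt, the chat turns, and the
    open-ended assistant header the model is asked to complete."""
    parts = [formatter["memory_template"].format(memory=memory),
             formatter["prompt_template"].format(prompt=prompt)]
    for speaker, message in turns:
        if speaker == bot_name:
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=speaker, message=message))
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

# Sampling settings as logged. The stop word "\n" together with
# max_output_tokens=64 keeps each sampled reply to a single short line;
# best_of=8 requests several candidates per turn (the reward_model field
# above suggests one is then selected, though that step is not shown here).
generation_params = {
    "temperature": 1.0, "top_p": 1.0, "min_p": 0.0, "top_k": 40,
    "presence_penalty": 0.0, "frequency_penalty": 0.0,
    "stopping_words": ["\n"], "max_input_tokens": 2048,
    "best_of": 8, "max_output_tokens": 64,
}

print(render_prompt(
    memory="Eve is a sarcastic barista.",        # hypothetical
    prompt="Eve is working the closing shift.",  # hypothetical
    turns=[("User", "One espresso, please."), ("Eve", "Coming right up.")],
    bot_name="Eve",
))
```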
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage VLLMUploader
Starting job with name evelyn777-chai-sft-3b-v1-uploader
Waiting for job on evelyn777-chai-sft-3b-v1-uploader to finish
evelyn777-chai-sft-3b-v1-uploader: Using quantization_mode: none
evelyn777-chai-sft-3b-v1-uploader: Downloading snapshot of evelyn777/chai-sft-3b...
evelyn777-chai-sft-3b-v1-uploader: Fetching 13 files: 100%|██████████| 13/13 [00:04<00:00, 2.99it/s]
evelyn777-chai-sft-3b-v1-uploader: Downloaded in 4.525s
evelyn777-chai-sft-3b-v1-uploader: Processed model evelyn777/chai-sft-3b in 6.885s
evelyn777-chai-sft-3b-v1-uploader: creating bucket guanaco-vllm-models
evelyn777-chai-sft-3b-v1-uploader: /usr/lib/python3/dist-packages/S3/BaseUtils.py:56: SyntaxWarning: invalid escape sequence '\.'
evelyn777-chai-sft-3b-v1-uploader: RE_S3_DATESTRING = re.compile('\.[0-9]*(?:[Z\\-\\+]*?)')
evelyn777-chai-sft-3b-v1-uploader: /usr/lib/python3/dist-packages/S3/BaseUtils.py:57: SyntaxWarning: invalid escape sequence '\s'
evelyn777-chai-sft-3b-v1-uploader: RE_XML_NAMESPACE = re.compile(b'^(<?[^>]+?>\s*|\s*)(<\w+) xmlns=[\'"](https?://[^\'"]+)[\'"]', re.MULTILINE)
evelyn777-chai-sft-3b-v1-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:240: SyntaxWarning: invalid escape sequence '\.'
evelyn777-chai-sft-3b-v1-uploader: invalid = re.search("([^a-z0-9\.-])", bucket, re.UNICODE)
evelyn777-chai-sft-3b-v1-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:244: SyntaxWarning: invalid escape sequence '\.'
evelyn777-chai-sft-3b-v1-uploader: invalid = re.search("([^A-Za-z0-9\._-])", bucket, re.UNICODE)
evelyn777-chai-sft-3b-v1-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:255: SyntaxWarning: invalid escape sequence '\.'
evelyn777-chai-sft-3b-v1-uploader: if re.search("-\.", bucket, re.UNICODE):
evelyn777-chai-sft-3b-v1-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:257: SyntaxWarning: invalid escape sequence '\.'
evelyn777-chai-sft-3b-v1-uploader: if re.search("\.\.", bucket, re.UNICODE):
evelyn777-chai-sft-3b-v1-uploader: /usr/lib/python3/dist-packages/S3/S3Uri.py:155: SyntaxWarning: invalid escape sequence '\w'
evelyn777-chai-sft-3b-v1-uploader: _re = re.compile("^(\w+://)?(.*)", re.UNICODE)
evelyn777-chai-sft-3b-v1-uploader: /usr/lib/python3/dist-packages/S3/FileLists.py:480: SyntaxWarning: invalid escape sequence '\*'
evelyn777-chai-sft-3b-v1-uploader: wildcard_split_result = re.split("\*|\?", uri_str, maxsplit=1)
evelyn777-chai-sft-3b-v1-uploader: Bucket 's3://guanaco-vllm-models/' created
evelyn777-chai-sft-3b-v1-uploader: uploading /dev/shm/model_output to s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v1
evelyn777-chai-sft-3b-v1-uploader: cp /dev/shm/model_output/model-00002-of-00002.safetensors s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v1/model-00002-of-00002.safetensors
evelyn777-chai-sft-3b-v1-uploader: cp /dev/shm/model_output/model-00001-of-00002.safetensors s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v1/model-00001-of-00002.safetensors
Job evelyn777-chai-sft-3b-v1-uploader completed after 82.77s with status: succeeded
Stopping job with name evelyn777-chai-sft-3b-v1-uploader
Pipeline stage VLLMUploader completed in 83.24s
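For reference, a rough sketch of what the VLLMUploader stage appears to do based only on the lines above: download a snapshot of the Hugging Face repo, then push the resulting files to the S3 bucket. The use of huggingface_hub and the s3cmd CLI here is an assumption for illustration, not the platform's actual implementation; only the repo id, local path, and bucket name are taken from the log.

```python
# Illustrative reconstruction of the uploader steps logged above (assumed,
# not the platform's real code). Requires huggingface_hub and the s3cmd CLI.
import subprocess
from huggingface_hub import snapshot_download

repo_id = "evelyn777/chai-sft-3b"
local_dir = "/dev/shm/model_output"
bucket = "s3://guanaco-vllm-models"
prefix = "evelyn777-chai-sft-3b-v1"

# "Downloading snapshot of evelyn777/chai-sft-3b..." / "Fetching 13 files"
snapshot_download(repo_id=repo_id, local_dir=local_dir)

# "creating bucket guanaco-vllm-models"; check=False so an already-existing
# bucket does not abort the sketch
subprocess.run(["s3cmd", "mb", f"{bucket}/"], check=False)

# "uploading /dev/shm/model_output to s3://guanaco-vllm-models/evelyn777-chai-sft-3b-v1"
subprocess.run(["s3cmd", "sync", f"{local_dir}/", f"{bucket}/{prefix}/"], check=True)
```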
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 0.14s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service evelyn777-chai-sft-3b-v1
Waiting for inference service evelyn777-chai-sft-3b-v1 to be ready
Inference service evelyn777-chai-sft-3b-v1 ready after 170.76970529556274s
Pipeline stage VLLMDeployer completed in 171.29s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 0.6721317768096924s
Received healthy response to inference request in 0.8720438480377197s
Received healthy response to inference request in 0.7304730415344238s
Received healthy response to inference request in 0.7404656410217285s
Received healthy response to inference request in 0.7488908767700195s
Received healthy response to inference request in 0.721635103225708s
Received healthy response to inference request in 0.8409514427185059s
Received healthy response to inference request in 0.4895286560058594s
Received healthy response to inference request in 0.5720233917236328s
Received healthy response to inference request in 1.1939568519592285s
Received healthy response to inference request in 0.6880626678466797s
Received healthy response to inference request in 1.0033984184265137s
Received healthy response to inference request in 0.5975267887115479s
Received healthy response to inference request in 0.7489712238311768s
Received healthy response to inference request in 0.6969168186187744s
Received healthy response to inference request in 1.1473219394683838s
Received healthy response to inference request in 0.6512856483459473s
Received healthy response to inference request in 0.6613233089447021s
Received healthy response to inference request in 0.880324125289917s
Received healthy response to inference request in 0.7138054370880127s
Received healthy response to inference request in 0.9654650688171387s
Received healthy response to inference request in 1.0523767471313477s
Received healthy response to inference request in 0.5916726589202881s
Received healthy response to inference request in 0.9470336437225342s
Received healthy response to inference request in 0.7251136302947998s
Received healthy response to inference request in 0.7148685455322266s
Received healthy response to inference request in 0.5994110107421875s
Received healthy response to inference request in 0.8321290016174316s
Received healthy response to inference request in 0.7778224945068359s
Received healthy response to inference request in 0.9930813312530518s
30 requests
0 failed requests
5th percentile: 0.5808655619621277
10th percentile: 0.5969413757324219
20th percentile: 0.6593157768249511
30th percentile: 0.694260573387146
40th percentile: 0.7189284801483155
50th percentile: 0.7354693412780762
60th percentile: 0.7605117321014404
70th percentile: 0.85027916431427
80th percentile: 0.9507199287414552
90th percentile: 1.008296251296997
95th percentile: 1.1045966029167174
99th percentile: 1.1804327273368835
mean time: 0.7856670379638672
Pipeline stage StressChecker completed in 26.36s
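The StressChecker summary can be reproduced directly from the 30 latencies logged above; a small sketch, assuming numpy-style linear percentile interpolation, which matches the logged values to within rounding:

```python
# Recompute the StressChecker summary from the logged response times (seconds,
# truncated to 4 decimals from the 30 "healthy response" lines above).
import numpy as np

latencies = [
    0.6721, 0.8720, 0.7305, 0.7405, 0.7489, 0.7216, 0.8410, 0.4895, 0.5720,
    1.1940, 0.6881, 1.0034, 0.5975, 0.7490, 0.6969, 1.1473, 0.6513, 0.6613,
    0.8803, 0.7138, 0.9655, 1.0524, 0.5917, 0.9470, 0.7251, 0.7149, 0.5994,
    0.8321, 0.7778, 0.9931,
]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p):.4f}")
print(f"mean time: {np.mean(latencies):.4f}")
```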
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.60s
Shutdown handler de-registered
evelyn777-chai-sft-3b_v1 status is now deployed due to DeploymentManager action
evelyn777-chai-sft-3b_v1 status is now inactive due to auto deactivation (removal of underperforming models)