developer_uid: fengjiayi
submission_id: wei123602-llama2-13b-fintune2_v1
model_name: wei123602-llama2-13b-fintune2_v1
model_group: wei123602/llama2-13b-fin
status: torndown
timestamp: 2025-01-23T05:33:01+00:00
num_battles: 7018
num_wins: 2843
celo_rating: 1157.36
family_friendly_score: 0.5956
family_friendly_standard_error: 0.006940614382026997
submission_type: basic
model_repo: wei123602/llama2-13b-fintune2
model_architecture: LlamaForCausalLM
model_num_parameters: 13015864320
best_of: 8
max_input_tokens: 1024
max_output_tokens: 64
display_name: wei123602-llama2-13b-fintune2_v1
is_internal_developer: False
language_model: wei123602/llama2-13b-fintune2
model_size: 13B
ranking_group: single
us_pacific_date: 2025-01-22
win_ratio: 0.4051011684240524
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
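The reported metrics are internally consistent and can be sanity-checked: win_ratio is num_wins / num_battles, and family_friendly_standard_error matches a binomial proportion standard error sqrt(p * (1 - p) / n). The evaluation sample size n = 5000 below is inferred from the logged values, not stated anywhere in the log.

```python
import math

# win_ratio is the raw win fraction over all battles.
num_battles, num_wins = 7018, 2843
win_ratio = num_wins / num_battles  # logged as 0.4051011684240524

# The family-friendly SE is consistent with a binomial proportion SE.
# n_eval = 5000 is an assumption inferred from the logged SE, not a
# value the log reports.
p = 0.5956
n_eval = 5000
se = math.sqrt(p * (1 - p) / n_eval)  # logged as 0.006940614382026997
```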
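The generation_params combine temperature scaling with top_k, top_p (nucleus), and min_p filtering. A minimal sketch of how these filters typically compose (this is the standard sampling recipe, not MK1's actual implementation):

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=40, top_p=1.0, min_p=0.0):
    # 1. Scale logits by temperature.
    scaled = [l / temperature for l in logits]
    # 2. Keep only the top_k highest logits.
    order = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)[:top_k]
    # 3. Softmax over the surviving candidates.
    m = max(scaled[i] for i in order)
    probs = {i: math.exp(scaled[i] - m) for i in order}
    z = sum(probs.values())
    probs = {i: p / z for i, p in probs.items()}
    # 4. Nucleus (top_p) and min_p filtering; always keep at least one token.
    kept, cum = [], 0.0
    pmax = max(probs.values())
    for i in sorted(probs, key=probs.get, reverse=True):
        if kept and (cum >= top_p or probs[i] < min_p * pmax):
            break
        kept.append(i)
        cum += probs[i]
    # 5. Sample from the renormalized survivors.
    z = sum(probs[i] for i in kept)
    r = random.random() * z
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With this submission's settings (temperature=1.0, top_p=1.0, min_p=0.0, top_k=40), only the top_k filter is active; best_of=8 then selects among 8 such completions.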
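The formatter templates assemble the model's input: persona memory, then the scenario prompt, then the chat history, ending with the bot-name prefix the model completes (the `\n` stopping word then ends generation at one line). The concatenation order below is the conventional one and is an assumption; build_prompt is an illustrative helper, not platform code.

```python
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, history):
    # history: list of (speaker, message) pairs, speaker in {"bot", "user"}
    parts = [
        formatter["memory_template"].format(bot_name=bot_name, memory=memory),
        formatter["prompt_template"].format(prompt=prompt),
    ]
    for speaker, message in history:
        tpl = formatter["bot_template"] if speaker == "bot" else formatter["user_template"]
        parts.append(tpl.format(bot_name=bot_name, user_name=user_name, message=message))
    # Trailing "{bot_name}:" cues the model to write the bot's next turn.
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)
```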
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name wei123602-llama2-13b-fintune2-v1-mkmlizer
Waiting for job on wei123602-llama2-13b-fintune2-v1-mkmlizer to finish
wei123602-llama2-13b-fintune2-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ _____ __ __ ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ /___/ ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ Version: 0.11.12 ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ https://mk1.ai ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ The license key for the current software has been verified as ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ belonging to: ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ Chai Research Corp. ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ Expiration: 2025-04-15 23:59:59 ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ║ ║
wei123602-llama2-13b-fintune2-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
wei123602-llama2-13b-fintune2-v1-mkmlizer: Downloaded to shared memory in 39.279s
wei123602-llama2-13b-fintune2-v1-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpzde_xc73, device:0
wei123602-llama2-13b-fintune2-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
wei123602-llama2-13b-fintune2-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/mk1/flywheel/functional/loader.py:55: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
wei123602-llama2-13b-fintune2-v1-mkmlizer: tensors = torch.load(model_shard_filename, map_location=torch.device(self.device), mmap=True)
wei123602-llama2-13b-fintune2-v1-mkmlizer: quantized model in 30.847s
wei123602-llama2-13b-fintune2-v1-mkmlizer: Processed model wei123602/llama2-13b-fintune2 in 70.125s
wei123602-llama2-13b-fintune2-v1-mkmlizer: creating bucket guanaco-mkml-models
wei123602-llama2-13b-fintune2-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
wei123602-llama2-13b-fintune2-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/wei123602-llama2-13b-fintune2-v1
wei123602-llama2-13b-fintune2-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/wei123602-llama2-13b-fintune2-v1/config.json
wei123602-llama2-13b-fintune2-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/wei123602-llama2-13b-fintune2-v1/special_tokens_map.json
wei123602-llama2-13b-fintune2-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/wei123602-llama2-13b-fintune2-v1/tokenizer_config.json
wei123602-llama2-13b-fintune2-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/wei123602-llama2-13b-fintune2-v1/tokenizer.model
wei123602-llama2-13b-fintune2-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/wei123602-llama2-13b-fintune2-v1/tokenizer.json
wei123602-llama2-13b-fintune2-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/wei123602-llama2-13b-fintune2-v1/flywheel_model.0.safetensors
wei123602-llama2-13b-fintune2-v1-mkmlizer: Loading 0: 0%| | 0/363 [00:00<?, ?it/s] ... Loading 0: 99%|█████████▊| 358/363 [00:10<00:00, 50.02it/s]
Job wei123602-llama2-13b-fintune2-v1-mkmlizer completed after 105.01s with status: succeeded
Stopping job with name wei123602-llama2-13b-fintune2-v1-mkmlizer
Pipeline stage MKMLizer completed in 105.54s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.16s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service wei123602-llama2-13b-fintune2-v1
Waiting for inference service wei123602-llama2-13b-fintune2-v1 to be ready
Inference service wei123602-llama2-13b-fintune2-v1 ready after 170.56485176086426s
Pipeline stage MKMLDeployer completed in 171.07s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.3941550254821777s
Received healthy response to inference request in 2.3678970336914062s
Received healthy response to inference request in 1.89811372756958s
Received healthy response to inference request in 2.018404245376587s
Received healthy response to inference request in 1.990067481994629s
5 requests
0 failed requests
5th percentile: 1.91650447845459
10th percentile: 1.9348952293395996
20th percentile: 1.971676731109619
30th percentile: 1.9957348346710204
40th percentile: 2.007069540023804
50th percentile: 2.018404245376587
60th percentile: 2.1582013607025146
70th percentile: 2.297998476028442
80th percentile: 2.3731486320495607
90th percentile: 2.3836518287658692
95th percentile: 2.3889034271240233
99th percentile: 2.393104705810547
mean time: 2.133727502822876
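The percentile figures above are reproducible from the five response times using linearly interpolated percentiles (numpy's default "linear" method); the sketch below assumes that is what StressChecker uses, which the matching values support:

```python
def percentile(values, p):
    # Linear interpolation between order statistics, matching
    # numpy.percentile's default "linear" method.
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(xs) - 1)
    return xs[f] + (k - f) * (xs[c] - xs[f])

latencies = [2.3941550254821777, 2.3678970336914062, 1.89811372756958,
             2.018404245376587, 1.990067481994629]
mean_time = sum(latencies) / len(latencies)  # logged as 2.133727502822876
```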
Pipeline stage StressChecker completed in 12.27s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.62s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.59s
Shutdown handler de-registered
wei123602-llama2-13b-fintune2_v1 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
%s, retrying in %s seconds...
Evaluating %s Family Friendly Score with %s threads
Pipeline stage OfflineFamilyFriendlyScorer completed in 5190.13s
Shutdown handler de-registered
wei123602-llama2-13b-fintune2_v1 status is now inactive due to auto-deactivation of underperforming models
wei123602-llama2-13b-fintune2_v1 status is now torndown due to DeploymentManager action