developer_uid: huohuo12
submission_id: qwen-qwen2-5-14b-instruct-1m_v2
model_name: qwen-qwen2-5-14b-instruct-1m_v2
model_group: Qwen/Qwen2.5-14B-Instruc
status: torndown
timestamp: 2025-02-14T07:25:28+00:00
num_battles: 5099
num_wins: 2228
celo_rating: 1217.44
family_friendly_score: 0.6546000000000001
family_friendly_standard_error: 0.006724564521216225
submission_type: basic
model_repo: Qwen/Qwen2.5-14B-Instruct-1M
model_architecture: Qwen2ForCausalLM
model_num_parameters: 14769689600.0
best_of: 8
max_input_tokens: 1024
max_output_tokens: 64
reward_model: default
display_name: qwen-qwen2-5-14b-instruct-1m_v2
is_internal_developer: False
language_model: Qwen/Qwen2.5-14B-Instruct-1M
model_size: 15B
ranking_group: single
us_pacific_date: 2025-02-13
win_ratio: 0.4369484212590704
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
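As a rough illustration (not the platform's actual code), the formatter templates above can be assembled with plain `str.format`; the conversation below is a made-up example, and the final assertion checks that the `win_ratio` field is consistent with `num_wins / num_battles` from the metadata:

```python
# Sketch of how the formatter templates above might assemble a prompt.
# The persona and messages are invented examples, not taken from the log.
formatter = {
    'memory_template': "{bot_name}'s Persona: {memory}\n####\n",
    'prompt_template': '{prompt}\n<START>\n',
    'bot_template': '{bot_name}: {message}\n',
    'user_template': '{user_name}: {message}\n',
    'response_template': '{bot_name}:',
}

def build_prompt(bot_name, memory, prompt, turns):
    """Concatenate memory, prompt, chat turns, and the response stub."""
    parts = [
        formatter['memory_template'].format(bot_name=bot_name, memory=memory),
        formatter['prompt_template'].format(prompt=prompt),
    ]
    for speaker, message in turns:
        if speaker == bot_name:
            parts.append(formatter['bot_template'].format(bot_name=speaker, message=message))
        else:
            parts.append(formatter['user_template'].format(user_name=speaker, message=message))
    # The response template leaves the prompt ending at "{bot_name}:" so the
    # model completes the bot's next message.
    parts.append(formatter['response_template'].format(bot_name=bot_name))
    return ''.join(parts)

text = build_prompt('Bot', 'a helpful assistant', 'Scene opens.', [('User', 'Hi')])

# Consistency check on the metadata above: win_ratio == num_wins / num_battles.
assert abs(2228 / 5099 - 0.4369484212590704) < 1e-12
```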
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer
Waiting for job on qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer to finish
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ _____ __ __ ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ /___/ ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ Version: 0.12.8 ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ https://mk1.ai ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ The license key for the current software has been verified as ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ belonging to: ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ Chai Research Corp. ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ Expiration: 2025-04-15 23:59:59 ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ║ ║
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: Downloaded to shared memory in 45.353s
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpz35u5tva, device:0
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: Saving flywheel model at /dev/shm/model_cache
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: quantized model in 38.569s
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: Processed model Qwen/Qwen2.5-14B-Instruct-1M in 83.922s
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: creating bucket guanaco-mkml-models
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/qwen-qwen2-5-14b-instruct-1m-v2
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/qwen-qwen2-5-14b-instruct-1m-v2/config.json
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: cp /dev/shm/model_cache/added_tokens.json s3://guanaco-mkml-models/qwen-qwen2-5-14b-instruct-1m-v2/added_tokens.json
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/qwen-qwen2-5-14b-instruct-1m-v2/special_tokens_map.json
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/qwen-qwen2-5-14b-instruct-1m-v2/tokenizer_config.json
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: cp /dev/shm/model_cache/merges.txt s3://guanaco-mkml-models/qwen-qwen2-5-14b-instruct-1m-v2/merges.txt
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: cp /dev/shm/model_cache/vocab.json s3://guanaco-mkml-models/qwen-qwen2-5-14b-instruct-1m-v2/vocab.json
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/qwen-qwen2-5-14b-instruct-1m-v2/tokenizer.json
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.1.safetensors s3://guanaco-mkml-models/qwen-qwen2-5-14b-instruct-1m-v2/flywheel_model.1.safetensors
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/qwen-qwen2-5-14b-instruct-1m-v2/flywheel_model.0.safetensors
qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer: Loading 0: 0%| | 0/579 [00:00<?, ?it/s] ... Loading 0: 98%|█████████▊| 567/579 [00:20<00:01, 6.61it/s]
Job qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer completed after 246.97s with status: succeeded
Stopping job with name qwen-qwen2-5-14b-instruct-1m-v2-mkmlizer
Pipeline stage MKMLizer completed in 247.41s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.14s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service qwen-qwen2-5-14b-instruct-1m-v2
Waiting for inference service qwen-qwen2-5-14b-instruct-1m-v2 to be ready
Failed to get response for submission function_farit_2025-02-13: ('http://chaiml-elo-alignment-run-3-v44-predictor.tenant-chaiml-guanaco.k2.chaiverse.com/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:35050->127.0.0.1:8080: read: connection reset by peer\n')
Inference service qwen-qwen2-5-14b-instruct-1m-v2 ready after 190.79771375656128s
Pipeline stage MKMLDeployer completed in 191.28s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.428788185119629s
Received healthy response to inference request in 1.675107717514038s
Received healthy response to inference request in 1.8834965229034424s
Received healthy response to inference request in 1.8504884243011475s
Received healthy response to inference request in 1.9959781169891357s
5 requests
0 failed requests
5th percentile: 1.71018385887146
10th percentile: 1.7452600002288818
20th percentile: 1.8154122829437256
30th percentile: 1.8570900440216065
40th percentile: 1.8702932834625243
50th percentile: 1.8834965229034424
60th percentile: 1.9284891605377197
70th percentile: 1.973481798171997
80th percentile: 2.0825401306152345
90th percentile: 2.2556641578674315
95th percentile: 2.34222617149353
99th percentile: 2.411475782394409
mean time: 1.9667717933654785
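The percentile figures above are consistent with linear interpolation over the five sorted latencies (the same "linear" method NumPy uses by default); a quick stdlib-only check:

```python
import math

# The five stress-check latencies reported above, in seconds.
latencies = [2.428788185119629, 1.675107717514038, 1.8834965229034424,
             1.8504884243011475, 1.9959781169891357]

def percentile(data, p):
    """Linearly interpolated percentile over sorted data."""
    xs = sorted(data)
    k = (len(xs) - 1) * p / 100.0
    lo, hi = math.floor(k), math.ceil(k)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

mean_time = sum(latencies) / len(latencies)
```

For example, `percentile(latencies, 95)` reproduces the reported 95th percentile of about 2.342 s, and `mean_time` matches the reported mean of about 1.967 s.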
Pipeline stage StressChecker completed in 11.00s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.62s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.62s
Shutdown handler de-registered
qwen-qwen2-5-14b-instruct-1m_v2 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.10s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.11s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service qwen-qwen2-5-14b-instruct-1m-v2-profiler
Waiting for inference service qwen-qwen2-5-14b-instruct-1m-v2-profiler to be ready
Inference service qwen-qwen2-5-14b-instruct-1m-v2-profiler ready after 291.2258813381195s
Pipeline stage MKMLProfilerDeployer completed in 291.70s
run pipeline stage %s
Running pipeline stage MKMLProfilerRunner
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deplohzd5r:/code/chaiverse_profiler_1739518710 --namespace tenant-chaiml-guanaco
kubectl exec -it qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deplohzd5r --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739518710 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739518710/summary.json'
%s, retrying in %s seconds...
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deplohzd5r:/code/chaiverse_profiler_1739519023 --namespace tenant-chaiml-guanaco
kubectl exec -it qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deplohzd5r --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739519023 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739519023/summary.json'
%s, retrying in %s seconds...
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deplohzd5r:/code/chaiverse_profiler_1739519336 --namespace tenant-chaiml-guanaco
kubectl exec -it qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deplohzd5r --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739519336 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739519336/summary.json'
clean up pipeline due to error=ISVCScriptError('Command failed with error: Defaulted container "kserve-container" out of: kserve-container, queue-proxy\nUnable to use a TTY - input is not a terminal or the right kind of file\nTraceback (most recent call last):\n File "/code/chaiverse_profiler_1739519336/profiles.py", line 602, in <module>\n cli()\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1157, in __call__\n return self.main(*args, **kwargs)\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1078, in main\n rv = self.invoke(ctx)\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1688, in invoke\n return _process_result(sub_ctx.command.invoke(sub_ctx))\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1434, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 783, in invoke\n return __callback(*args, **kwargs)\n File "/code/chaiverse_profiler_1739519336/profiles.py", line 103, in profile_batches\n client.wait_for_server_startup(target, max_wait=300)\n File "/code/inference_analysis/client.py", line 136, in wait_for_server_startup\n raise RuntimeError(msg)\nRuntimeError: Timed out after 300s waiting for startup\ncommand terminated with exit code 1\n, output: waiting for startup of TargetModel(endpoint=\'localhost\', route=\'GPT-J-6B-lit-v2\', namespace=\'tenant-chaiml-guanaco\', max_characters=9999, reward=False, url_format=\'{endpoint}-predictor-default.{namespace}.knative.ord1.coreweave.cloud\')\n')
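The profiler failure above comes from `wait_for_server_startup` timing out after 300 s. A minimal sketch of that kind of startup-polling loop, assuming a hypothetical `is_ready` probe and poll interval (not the actual `client.py` implementation):

```python
import time

def wait_for_server_startup(is_ready, max_wait=300, poll_interval=5):
    """Poll is_ready() until it returns True, or raise after max_wait seconds.

    is_ready is a zero-argument callable returning a bool (hypothetical probe,
    e.g. an HTTP health check against the predictor endpoint).
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        if is_ready():
            return
        # Sleep, but never past the deadline.
        time.sleep(min(poll_interval, max(0.0, deadline - time.monotonic())))
    raise RuntimeError(f"Timed out after {max_wait}s waiting for startup")
```

With a probe that never succeeds, the loop raises the same kind of `RuntimeError` seen in the traceback; a probe that immediately returns `True` lets it return at once.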
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service qwen-qwen2-5-14b-instruct-1m-v2-profiler is running
Tearing down inference service qwen-qwen2-5-14b-instruct-1m-v2-profiler
Service qwen-qwen2-5-14b-instruct-1m-v2-profiler has been torndown
Pipeline stage MKMLProfilerDeleter completed in 2.14s
Shutdown handler de-registered
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service qwen-qwen2-5-14b-instruct-1m-v2-profiler is running
Skipping teardown as no inference service was found
Pipeline stage MKMLProfilerDeleter completed in 2.14s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.11s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service qwen-qwen2-5-14b-instruct-1m-v2-profiler
Waiting for inference service qwen-qwen2-5-14b-instruct-1m-v2-profiler to be ready
Inference service qwen-qwen2-5-14b-instruct-1m-v2-profiler ready after 110.4348566532135s
Pipeline stage MKMLProfilerDeployer completed in 110.75s
run pipeline stage %s
Running pipeline stage MKMLProfilerRunner
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deploz9smh:/code/chaiverse_profiler_1739519804 --namespace tenant-chaiml-guanaco
kubectl exec -it qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deploz9smh --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739519804 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739519804/summary.json'
%s, retrying in %s seconds...
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deploz9smh:/code/chaiverse_profiler_1739520117 --namespace tenant-chaiml-guanaco
kubectl exec -it qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deploz9smh --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739520117 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739520117/summary.json'
%s, retrying in %s seconds...
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deploz9smh:/code/chaiverse_profiler_1739520429 --namespace tenant-chaiml-guanaco
kubectl exec -it qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deploz9smh --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739520429 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739520429/summary.json'
clean up pipeline due to error=ISVCScriptError('Command failed with error: Defaulted container "kserve-container" out of: kserve-container, queue-proxy\nUnable to use a TTY - input is not a terminal or the right kind of file\nTraceback (most recent call last):\n File "/code/chaiverse_profiler_1739520429/profiles.py", line 602, in <module>\n cli()\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1157, in __call__\n return self.main(*args, **kwargs)\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1078, in main\n rv = self.invoke(ctx)\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1688, in invoke\n return _process_result(sub_ctx.command.invoke(sub_ctx))\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1434, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 783, in invoke\n return __callback(*args, **kwargs)\n File "/code/chaiverse_profiler_1739520429/profiles.py", line 103, in profile_batches\n client.wait_for_server_startup(target, max_wait=300)\n File "/code/inference_analysis/client.py", line 136, in wait_for_server_startup\n raise RuntimeError(msg)\nRuntimeError: Timed out after 300s waiting for startup\ncommand terminated with exit code 1\n, output: waiting for startup of TargetModel(endpoint=\'localhost\', route=\'GPT-J-6B-lit-v2\', namespace=\'tenant-chaiml-guanaco\', max_characters=9999, reward=False, url_format=\'{endpoint}-predictor-default.{namespace}.knative.ord1.coreweave.cloud\')\n')
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service qwen-qwen2-5-14b-instruct-1m-v2-profiler is running
Tearing down inference service qwen-qwen2-5-14b-instruct-1m-v2-profiler
Service qwen-qwen2-5-14b-instruct-1m-v2-profiler has been torndown
Pipeline stage MKMLProfilerDeleter completed in 2.24s
Shutdown handler de-registered
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service qwen-qwen2-5-14b-instruct-1m-v2-profiler is running
Skipping teardown as no inference service was found
Pipeline stage MKMLProfilerDeleter completed in 2.29s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.12s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service qwen-qwen2-5-14b-instruct-1m-v2-profiler
Waiting for inference service qwen-qwen2-5-14b-instruct-1m-v2-profiler to be ready
Inference service qwen-qwen2-5-14b-instruct-1m-v2-profiler ready after 90.37782073020935s
Pipeline stage MKMLProfilerDeployer completed in 90.75s
run pipeline stage %s
Running pipeline stage MKMLProfilerRunner
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deplo5c7rf:/code/chaiverse_profiler_1739520875 --namespace tenant-chaiml-guanaco
kubectl exec -it qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deplo5c7rf --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739520875 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739520875/summary.json'
%s, retrying in %s seconds...
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deplo5c7rf:/code/chaiverse_profiler_1739521188 --namespace tenant-chaiml-guanaco
kubectl exec -it qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deplo5c7rf --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739521188 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739521188/summary.json'
%s, retrying in %s seconds...
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deplo5c7rf:/code/chaiverse_profiler_1739521501 --namespace tenant-chaiml-guanaco
kubectl exec -it qwen-qwen2-5-14b-ins9ec9dadabe90b41aa8ee569f42b3e1a9-deplo5c7rf --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739521501 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739521501/summary.json'
clean up pipeline due to error=ISVCScriptError('Command failed with error: Defaulted container "kserve-container" out of: kserve-container, queue-proxy\nUnable to use a TTY - input is not a terminal or the right kind of file\nTraceback (most recent call last):\n File "/code/chaiverse_profiler_1739521501/profiles.py", line 602, in <module>\n cli()\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1157, in __call__\n return self.main(*args, **kwargs)\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1078, in main\n rv = self.invoke(ctx)\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1688, in invoke\n return _process_result(sub_ctx.command.invoke(sub_ctx))\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1434, in invoke\n return ctx.invoke(self.callback, **ctx.params)\n File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 783, in invoke\n return __callback(*args, **kwargs)\n File "/code/chaiverse_profiler_1739521501/profiles.py", line 103, in profile_batches\n client.wait_for_server_startup(target, max_wait=300)\n File "/code/inference_analysis/client.py", line 136, in wait_for_server_startup\n raise RuntimeError(msg)\nRuntimeError: Timed out after 300s waiting for startup\ncommand terminated with exit code 1\n, output: waiting for startup of TargetModel(endpoint=\'localhost\', route=\'GPT-J-6B-lit-v2\', namespace=\'tenant-chaiml-guanaco\', max_characters=9999, reward=False, url_format=\'{endpoint}-predictor-default.{namespace}.knative.ord1.coreweave.cloud\')\n')
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service qwen-qwen2-5-14b-instruct-1m-v2-profiler is running
Tearing down inference service qwen-qwen2-5-14b-instruct-1m-v2-profiler
Service qwen-qwen2-5-14b-instruct-1m-v2-profiler has been torndown
Pipeline stage MKMLProfilerDeleter completed in 2.57s
Shutdown handler de-registered
qwen-qwen2-5-14b-instruct-1m_v2 status is now inactive due to auto deactivation (removal of underperforming models)
qwen-qwen2-5-14b-instruct-1m_v2 status is now torndown due to DeploymentManager action