developer_uid: huohuo12
submission_id: sometimesanotion-lamarck_6304_v4
model_name: sometimesanotion-lamarck_6304_v4
model_group: sometimesanotion/Lamarck
status: torndown
timestamp: 2025-02-18T09:52:49+00:00
num_battles: 7363
num_wins: 3507
celo_rating: 1235.04
family_friendly_score: 0.6146
family_friendly_standard_error: 0.006882831394128437
submission_type: basic
model_repo: sometimesanotion/Lamarck-14B-v0.7
model_architecture: Qwen2ForCausalLM
model_num_parameters: 14765603840
best_of: 8
max_input_tokens: 1024
max_output_tokens: 64
display_name: sometimesanotion-lamarck_6304_v4
is_internal_developer: False
language_model: sometimesanotion/Lamarck-14B-v0.7
model_size: 15B
ranking_group: single
us_pacific_date: 2025-02-18
win_ratio: 0.47630042102403913
generation_params: {'temperature': 1.0, 'top_p': 0.85, 'min_p': 0.02, 'top_k': 65, 'presence_penalty': 0.6, 'frequency_penalty': 0.3, 'stopping_words': ['\n'], 'max_input_tokens': 1024, 'best_of': 8, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
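The formatter entry above is a dict of plain Python format strings. A minimal sketch of how such templates could be assembled into a final prompt — the template strings are copied verbatim from the log, but `render_prompt` and the assembly order (memory, prompt, history, response prefix) are assumptions, not the actual serving code:

```python
# Sketch: rendering a chat prompt from the formatter templates above.
# Template strings are taken from the submission's formatter config;
# the assembly order is an assumption.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def render_prompt(bot_name, user_name, memory, prompt, history):
    """history is a list of (speaker, message) tuples, oldest first."""
    parts = [
        formatter["memory_template"].format(bot_name=bot_name, memory=memory),
        formatter["prompt_template"].format(prompt=prompt),
    ]
    for speaker, message in history:
        template = formatter["bot_template"] if speaker == "bot" else formatter["user_template"]
        name = bot_name if speaker == "bot" else user_name
        # str.format ignores unused keyword arguments, so we can pass both names.
        parts.append(template.format(bot_name=name, user_name=name, message=message))
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "".join(parts)

text = render_prompt("Lamarck", "Anon", "A curious assistant.", "Chat casually.",
                     [("user", "Hi!"), ("bot", "Hello!")])
print(text)
```

With `stopping_words: ['\n']` in the generation params, the model's completion after the `{bot_name}:` prefix ends at the first newline, yielding one chat turn per request.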
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name sometimesanotion-lamarck-6304-v4-mkmlizer
Waiting for job on sometimesanotion-lamarck-6304-v4-mkmlizer to finish
sometimesanotion-lamarck-6304-v4-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ _____ __ __ ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ /___/ ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ Version: 0.12.8 ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ https://mk1.ai ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ The license key for the current software has been verified as ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ belonging to: ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ Chai Research Corp. ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ Expiration: 2025-04-15 23:59:59 ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ║ ║
sometimesanotion-lamarck-6304-v4-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
sometimesanotion-lamarck-6304-v4-mkmlizer: Downloaded to shared memory in 56.007s
sometimesanotion-lamarck-6304-v4-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmpz6a3w400, device:0
sometimesanotion-lamarck-6304-v4-mkmlizer: Saving flywheel model at /dev/shm/model_cache
sometimesanotion-lamarck-6304-v4-mkmlizer: quantized model in 37.405s
sometimesanotion-lamarck-6304-v4-mkmlizer: Processed model sometimesanotion/Lamarck-14B-v0.7 in 93.413s
sometimesanotion-lamarck-6304-v4-mkmlizer: creating bucket guanaco-mkml-models
sometimesanotion-lamarck-6304-v4-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
sometimesanotion-lamarck-6304-v4-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/sometimesanotion-lamarck-6304-v4
sometimesanotion-lamarck-6304-v4-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/sometimesanotion-lamarck-6304-v4/config.json
sometimesanotion-lamarck-6304-v4-mkmlizer: cp /dev/shm/model_cache/added_tokens.json s3://guanaco-mkml-models/sometimesanotion-lamarck-6304-v4/added_tokens.json
sometimesanotion-lamarck-6304-v4-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/sometimesanotion-lamarck-6304-v4/special_tokens_map.json
sometimesanotion-lamarck-6304-v4-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/sometimesanotion-lamarck-6304-v4/tokenizer_config.json
sometimesanotion-lamarck-6304-v4-mkmlizer: cp /dev/shm/model_cache/merges.txt s3://guanaco-mkml-models/sometimesanotion-lamarck-6304-v4/merges.txt
sometimesanotion-lamarck-6304-v4-mkmlizer: cp /dev/shm/model_cache/vocab.json s3://guanaco-mkml-models/sometimesanotion-lamarck-6304-v4/vocab.json
sometimesanotion-lamarck-6304-v4-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/sometimesanotion-lamarck-6304-v4/tokenizer.json
sometimesanotion-lamarck-6304-v4-mkmlizer: cp /dev/shm/model_cache/flywheel_model.1.safetensors s3://guanaco-mkml-models/sometimesanotion-lamarck-6304-v4/flywheel_model.1.safetensors
sometimesanotion-lamarck-6304-v4-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/sometimesanotion-lamarck-6304-v4/flywheel_model.0.safetensors
Job sometimesanotion-lamarck-6304-v4-mkmlizer completed after 124.74s with status: succeeded
Stopping job with name sometimesanotion-lamarck-6304-v4-mkmlizer
Pipeline stage MKMLizer completed in 125.22s
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.16s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service sometimesanotion-lamarck-6304-v4
Waiting for inference service sometimesanotion-lamarck-6304-v4 to be ready
Inference service sometimesanotion-lamarck-6304-v4 ready after 230.80102062225342s
Pipeline stage MKMLDeployer completed in 231.34s
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.686544895172119s
Received healthy response to inference request in 2.284071445465088s
Received healthy response to inference request in 1.9119586944580078s
Received healthy response to inference request in 1.8919720649719238s
Received healthy response to inference request in 1.8240833282470703s
5 requests
0 failed requests
5th percentile: 1.837661075592041
10th percentile: 1.8512388229370118
20th percentile: 1.878394317626953
30th percentile: 1.8959693908691406
40th percentile: 1.9039640426635742
50th percentile: 1.9119586944580078
60th percentile: 2.0608037948608398
70th percentile: 2.2096488952636717
80th percentile: 2.3645661354064944
90th percentile: 2.5255555152893066
95th percentile: 2.6060502052307126
99th percentile: 2.670445957183838
mean time: 2.1197260856628417
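The percentile figures above are consistent with linear interpolation between the five sorted response times (NumPy's default percentile method); a small sketch that reproduces them:

```python
import numpy as np

# The five healthy response times reported by the stress checker, in seconds.
times = [
    2.686544895172119,
    2.284071445465088,
    1.9119586944580078,
    1.8919720649719238,
    1.8240833282470703,
]

# np.percentile interpolates linearly between order statistics by default,
# which matches the log (e.g. the 5th percentile interpolates between the
# two fastest requests; the 50th is exactly the middle sample).
for p in (5, 10, 50, 95, 99):
    print(f"{p}th percentile: {np.percentile(times, p)}")
print(f"mean time: {np.mean(times)}")
```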
Pipeline stage StressChecker completed in 11.86s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.65s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.67s
Shutdown handler de-registered
sometimesanotion-lamarck_6304_v4 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.09s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.07s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service sometimesanotion-lamarck-6304-v4-profiler
Waiting for inference service sometimesanotion-lamarck-6304-v4-profiler to be ready
Inference service sometimesanotion-lamarck-6304-v4-profiler ready after 220.82807278633118s
Pipeline stage MKMLProfilerDeployer completed in 221.19s
run pipeline stage %s
Running pipeline stage MKMLProfilerRunner
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplorl2p9:/code/chaiverse_profiler_1739873018 --namespace tenant-chaiml-guanaco
kubectl exec -it sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplorl2p9 --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739873018 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739873018/summary.json'
%s, retrying in %s seconds...
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplorl2p9:/code/chaiverse_profiler_1739873331 --namespace tenant-chaiml-guanaco
kubectl exec -it sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplorl2p9 --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739873331 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739873331/summary.json'
%s, retrying in %s seconds...
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplorl2p9:/code/chaiverse_profiler_1739873643 --namespace tenant-chaiml-guanaco
kubectl exec -it sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplorl2p9 --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739873643 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739873643/summary.json'
clean up pipeline due to error=ISVCScriptError: Command failed with error:
Defaulted container "kserve-container" out of: kserve-container, queue-proxy
Unable to use a TTY - input is not a terminal or the right kind of file
Traceback (most recent call last):
  File "/code/chaiverse_profiler_1739873643/profiles.py", line 602, in <module>
    cli()
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/code/chaiverse_profiler_1739873643/profiles.py", line 103, in profile_batches
    client.wait_for_server_startup(target, max_wait=300)
  File "/code/inference_analysis/client.py", line 136, in wait_for_server_startup
    raise RuntimeError(msg)
RuntimeError: Timed out after 300s waiting for startup
command terminated with exit code 1
output: waiting for startup of TargetModel(endpoint='localhost', route='GPT-J-6B-lit-v2', namespace='tenant-chaiml-guanaco', max_characters=9999, reward=False, url_format='{endpoint}-predictor-default.{namespace}.knative.ord1.coreweave.cloud')
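The profiler retried three times and hit the same 300-second startup timeout each time before the pipeline was cleaned up. A minimal sketch of the poll-until-deadline pattern the log implies — `wait_for_server_startup` here is a hypothetical stand-in, not the actual `client.py` implementation:

```python
import time

def wait_for_server_startup(is_ready, max_wait=300, poll_interval=5):
    """Poll is_ready() until it returns True or max_wait seconds elapse.

    Mirrors the behaviour implied by the log: once max_wait is exceeded,
    the caller sees a RuntimeError like "Timed out after 300s waiting
    for startup". is_ready, max_wait, and poll_interval are assumptions.
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        if is_ready():
            return
        time.sleep(poll_interval)
    raise RuntimeError(f"Timed out after {max_wait}s waiting for startup")

# Example: a server that never becomes ready times out quickly.
try:
    wait_for_server_startup(lambda: False, max_wait=1, poll_interval=0.2)
except RuntimeError as e:
    print(e)
```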
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service sometimesanotion-lamarck-6304-v4-profiler is running
Tearing down inference service sometimesanotion-lamarck-6304-v4-profiler
Service sometimesanotion-lamarck-6304-v4-profiler has been torndown
Pipeline stage MKMLProfilerDeleter completed in 2.10s
Shutdown handler de-registered
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service sometimesanotion-lamarck-6304-v4-profiler is running
Skipping teardown as no inference service was found
Pipeline stage MKMLProfilerDeleter completed in 2.61s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.11s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service sometimesanotion-lamarck-6304-v4-profiler
Waiting for inference service sometimesanotion-lamarck-6304-v4-profiler to be ready
Inference service sometimesanotion-lamarck-6304-v4-profiler ready after 220.82552814483643s
Pipeline stage MKMLProfilerDeployer completed in 221.17s
run pipeline stage %s
Running pipeline stage MKMLProfilerRunner
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplolnl5h:/code/chaiverse_profiler_1739874220 --namespace tenant-chaiml-guanaco
kubectl exec -it sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplolnl5h --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739874220 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739874220/summary.json'
%s, retrying in %s seconds...
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplolnl5h:/code/chaiverse_profiler_1739874533 --namespace tenant-chaiml-guanaco
kubectl exec -it sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplolnl5h --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739874533 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739874533/summary.json'
%s, retrying in %s seconds...
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplolnl5h:/code/chaiverse_profiler_1739874846 --namespace tenant-chaiml-guanaco
kubectl exec -it sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplolnl5h --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739874846 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739874846/summary.json'
clean up pipeline due to error=ISVCScriptError: Command failed with error:
Defaulted container "kserve-container" out of: kserve-container, queue-proxy
Unable to use a TTY - input is not a terminal or the right kind of file
Traceback (most recent call last):
  File "/code/chaiverse_profiler_1739874846/profiles.py", line 602, in <module>
    cli()
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/code/chaiverse_profiler_1739874846/profiles.py", line 103, in profile_batches
    client.wait_for_server_startup(target, max_wait=300)
  File "/code/inference_analysis/client.py", line 136, in wait_for_server_startup
    raise RuntimeError(msg)
RuntimeError: Timed out after 300s waiting for startup
command terminated with exit code 1
output: waiting for startup of TargetModel(endpoint='localhost', route='GPT-J-6B-lit-v2', namespace='tenant-chaiml-guanaco', max_characters=9999, reward=False, url_format='{endpoint}-predictor-default.{namespace}.knative.ord1.coreweave.cloud')
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service sometimesanotion-lamarck-6304-v4-profiler is running
Tearing down inference service sometimesanotion-lamarck-6304-v4-profiler
Service sometimesanotion-lamarck-6304-v4-profiler has been torndown
Pipeline stage MKMLProfilerDeleter completed in 2.44s
Shutdown handler de-registered
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service sometimesanotion-lamarck-6304-v4-profiler is running
Skipping teardown as no inference service was found
Pipeline stage MKMLProfilerDeleter completed in 2.53s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.10s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service sometimesanotion-lamarck-6304-v4-profiler
Waiting for inference service sometimesanotion-lamarck-6304-v4-profiler to be ready
Inference service sometimesanotion-lamarck-6304-v4-profiler ready after 220.9070200920105s
Pipeline stage MKMLProfilerDeployer completed in 221.24s
run pipeline stage %s
Running pipeline stage MKMLProfilerRunner
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplogzg7z:/code/chaiverse_profiler_1739875429 --namespace tenant-chaiml-guanaco
kubectl exec -it sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplogzg7z --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739875429 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739875429/summary.json'
%s, retrying in %s seconds...
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplogzg7z:/code/chaiverse_profiler_1739875742 --namespace tenant-chaiml-guanaco
kubectl exec -it sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplogzg7z --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739875742 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739875742/summary.json'
%s, retrying in %s seconds...
kubectl cp /code/guanaco/guanaco_inference_services/src/inference_scripts tenant-chaiml-guanaco/sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplogzg7z:/code/chaiverse_profiler_1739876054 --namespace tenant-chaiml-guanaco
kubectl exec -it sometimesanotion-lam84660aa87fbf4fc676e1ae4b0c4f8373-deplogzg7z --namespace tenant-chaiml-guanaco -- sh -c 'cd /code/chaiverse_profiler_1739876054 && python profiles.py profile --best_of_n 8 --auto_batch 5 --batches 1,5,10,15,20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100,105,110,115,120,125,130,135,140,145,150,155,160,165,170,175,180,185,190,195 --samples 200 --input_tokens 1024 --output_tokens 64 --summary /code/chaiverse_profiler_1739876054/summary.json'
clean up pipeline due to error=ISVCScriptError: Command failed with error:
Defaulted container "kserve-container" out of: kserve-container, queue-proxy
Unable to use a TTY - input is not a terminal or the right kind of file
Traceback (most recent call last):
  File "/code/chaiverse_profiler_1739876054/profiles.py", line 602, in <module>
    cli()
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/code/chaiverse_profiler_1739876054/profiles.py", line 103, in profile_batches
    client.wait_for_server_startup(target, max_wait=300)
  File "/code/inference_analysis/client.py", line 136, in wait_for_server_startup
    raise RuntimeError(msg)
RuntimeError: Timed out after 300s waiting for startup
command terminated with exit code 1
output: waiting for startup of TargetModel(endpoint='localhost', route='GPT-J-6B-lit-v2', namespace='tenant-chaiml-guanaco', max_characters=9999, reward=False, url_format='{endpoint}-predictor-default.{namespace}.knative.ord1.coreweave.cloud')
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Checking if service sometimesanotion-lamarck-6304-v4-profiler is running
Tearing down inference service sometimesanotion-lamarck-6304-v4-profiler
Service sometimesanotion-lamarck-6304-v4-profiler has been torndown
Pipeline stage MKMLProfilerDeleter completed in 2.41s
Shutdown handler de-registered
sometimesanotion-lamarck_6304_v4 status is now inactive due to auto deactivation of underperforming models
sometimesanotion-lamarck_6304_v4 status is now torndown due to DeploymentManager action