developer_uid: rirv938
submission_id: chaiml-llama-8b-multih_78780_v28
model_name: chaiml-llama-8b-multih_78780_v28
model_group: ChaiML/llama_8b_multihea
status: torndown
timestamp: 2025-02-14T01:48:30+00:00
num_battles: 5392
num_wins: 2947
celo_rating: 1295.15
family_friendly_score: 0.0
family_friendly_standard_error: 0.0
submission_type: basic
model_repo: ChaiML/llama_8b_multihead_204m_512_v3_tokens_step_398208
model_architecture: MultiHeadLlamaClassifier
model_num_parameters: 8030261248.0
best_of: 1
max_input_tokens: 2048
max_output_tokens: 1
display_name: chaiml-llama-8b-multih_78780_v28
ineligible_reason: max_output_tokens!=64
is_internal_developer: True
language_model: ChaiML/llama_8b_multihead_204m_512_v3_tokens_step_398208
model_size: 8B
ranking_group: single
us_pacific_date: 2025-02-13
win_ratio: 0.5465504451038575
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 2048, 'best_of': 1, 'max_output_tokens': 1}
formatter: {'memory_template': '', 'prompt_template': '', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:{message}', 'truncate_by_message': False}
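Two of the fields above can be sanity-checked directly: win_ratio is just num_wins / num_battles, and the formatter entries are ordinary Python format strings. A minimal sketch (the two-turn chat history is invented for illustration; everything else is copied from the metadata above):

    num_battles, num_wins = 5392, 2947
    print(num_wins / num_battles)  # 0.5465504451038575 -> the win_ratio field above

    formatter = {
        'memory_template': '',
        'prompt_template': '',
        'bot_template': '{bot_name}: {message}\n',
        'user_template': '{user_name}: {message}\n',
        'response_template': '{bot_name}:{message}',
    }

    # Invented two-turn history, just to show how the templates compose.
    history = [('user', 'Anon', 'hi there'), ('bot', 'Lila', 'hello!')]
    prompt = ''.join(
        formatter[role + '_template'].format(user_name=name, bot_name=name, message=msg)
        for role, name, msg in history
    )
    # The response template seeds the next bot turn with an empty message.
    prompt += formatter['response_template'].format(bot_name='Lila', message='')
    print(repr(prompt))  # 'Anon: hi there\nLila: hello!\nLila:'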
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLizer
Starting job with name chaiml-llama-8b-multih-78780-v28-mkmlizer
Waiting for job on chaiml-llama-8b-multih-78780-v28-mkmlizer to finish
chaiml-llama-8b-multih-78780-v28-mkmlizer: [flywheel ASCII art banner]
chaiml-llama-8b-multih-78780-v28-mkmlizer: Version: 0.12.8
chaiml-llama-8b-multih-78780-v28-mkmlizer: Copyright 2023 MK ONE TECHNOLOGIES Inc.
chaiml-llama-8b-multih-78780-v28-mkmlizer: https://mk1.ai
chaiml-llama-8b-multih-78780-v28-mkmlizer: The license key for the current software has been verified as belonging to:
chaiml-llama-8b-multih-78780-v28-mkmlizer: Chai Research Corp.
chaiml-llama-8b-multih-78780-v28-mkmlizer: Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f
chaiml-llama-8b-multih-78780-v28-mkmlizer: Expiration: 2025-04-15 23:59:59
chaiml-llama-8b-multih-78780-v28-mkmlizer: Downloaded to shared memory in 22.805s
chaiml-llama-8b-multih-78780-v28-mkmlizer: quantizing model to /dev/shm/model_cache, profile:s0, folder:/tmp/tmp5p8w9n2k, device:0
chaiml-llama-8b-multih-78780-v28-mkmlizer: Saving flywheel model at /dev/shm/model_cache
chaiml-llama-8b-multih-78780-v28-mkmlizer: quantized model in 15.717s
chaiml-llama-8b-multih-78780-v28-mkmlizer: Processed model ChaiML/llama_8b_multihead_204m_512_v3_tokens_step_398208 in 38.522s
chaiml-llama-8b-multih-78780-v28-mkmlizer: creating bucket guanaco-mkml-models
chaiml-llama-8b-multih-78780-v28-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
chaiml-llama-8b-multih-78780-v28-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/chaiml-llama-8b-multih-78780-v28
chaiml-llama-8b-multih-78780-v28-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/chaiml-llama-8b-multih-78780-v28/config.json
chaiml-llama-8b-multih-78780-v28-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/chaiml-llama-8b-multih-78780-v28/tokenizer.json
chaiml-llama-8b-multih-78780-v28-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/chaiml-llama-8b-multih-78780-v28/flywheel_model.0.safetensors
chaiml-llama-8b-multih-78780-v28-mkmlizer: Loading 0: 0%| | 0/294 [00:00<?, ?it/s]
chaiml-llama-8b-multih-78780-v28-mkmlizer: Loading 0: 99%|█████████▊| 290/294 [00:06<00:00, 32.14it/s]
Job chaiml-llama-8b-multih-78780-v28-mkmlizer completed after 166.29s with status: succeeded
Stopping job with name chaiml-llama-8b-multih-78780-v28-mkmlizer
Pipeline stage MKMLizer completed in 166.81s
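The MKMLizer log above boils down to: download the repo to shared memory, quantize it to /dev/shm/model_cache with profile s0, then copy three artifacts into s3://guanaco-mkml-models/. As a rough illustration, the three cp lines correspond to something like the following boto3 calls (the pipeline's actual upload tooling is not shown in the log, so treat this as an assumption):

    import boto3

    # Mirrors the three `cp /dev/shm/model_cache/... s3://...` lines above.
    SHM_CACHE = '/dev/shm/model_cache'
    BUCKET = 'guanaco-mkml-models'
    PREFIX = 'chaiml-llama-8b-multih-78780-v28'

    s3 = boto3.client('s3')
    for name in ('config.json', 'tokenizer.json', 'flywheel_model.0.safetensors'):
        s3.upload_file(f'{SHM_CACHE}/{name}', BUCKET, f'{PREFIX}/{name}')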
run pipeline stage %s
Running pipeline stage MKMLTemplater
Pipeline stage MKMLTemplater completed in 0.14s
run pipeline stage %s
Running pipeline stage MKMLDeployer
Creating inference service chaiml-llama-8b-multih-78780-v28
Waiting for inference service chaiml-llama-8b-multih-78780-v28 to be ready
Inference service chaiml-llama-8b-multih-78780-v28 ready after 190.69922804832458s
Pipeline stage MKMLDeployer completed in 191.18s
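"Waiting for inference service ... to be ready" (190.7 s here) and the later "Timeout to start the InferenceService" error imply a poll-until-ready loop with a deadline. A minimal sketch, assuming a caller-supplied is_ready probe and invented timeout/poll values (the real pipeline evidently wraps the failure in its own DeploymentError):

    import time

    def wait_for_ready(is_ready, timeout_s=600.0, poll_s=5.0):
        """Poll is_ready() until it returns True, or raise once timeout_s elapses."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if is_ready():
                return
            time.sleep(poll_s)
        raise TimeoutError('Timeout to start the InferenceService')

    # Usage: wait_for_ready(lambda: isvc_is_ready('chaiml-llama-8b-multih-78780-v28'))
    # where isvc_is_ready is a hypothetical probe checking the KServe Ready condition.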
run pipeline stage %s
Running pipeline stage StressChecker
Received healthy response to inference request in 4.292471885681152s
Received healthy response to inference request in 3.3669369220733643s
Received healthy response to inference request in 2.8943722248077393s
Received healthy response to inference request in 4.468791723251343s
Received healthy response to inference request in 3.086548328399658s
5 requests
0 failed requests
5th percentile: 2.932807445526123
10th percentile: 2.971242666244507
20th percentile: 3.0481131076812744
30th percentile: 3.1426260471343994
40th percentile: 3.254781484603882
50th percentile: 3.3669369220733643
60th percentile: 3.7371509075164795
70th percentile: 4.107364892959595
80th percentile: 4.327735853195191
90th percentile: 4.398263788223266
95th percentile: 4.4335277557373045
99th percentile: 4.4617389297485355
mean time: 3.6218242168426515
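The percentile and mean figures in each StressChecker summary can be reproduced from the five raw latencies with standard linear-interpolation percentiles, e.g. numpy's default:

    import numpy as np

    # The five healthy-response latencies from the first StressChecker round above.
    latencies = [4.292471885681152, 3.3669369220733643, 2.8943722248077393,
                 4.468791723251343, 3.086548328399658]

    print(np.percentile(latencies, 5))   # 2.932807445526123  (5th percentile)
    print(np.percentile(latencies, 50))  # 3.3669369220733643 (median)
    print(np.mean(latencies))            # 3.6218242168426515 (mean time)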
%s, retrying in %s seconds...
Received healthy response to inference request in 2.6399781703948975s
Received healthy response to inference request in 2.159672975540161s
Received healthy response to inference request in 1.6178207397460938s
Received healthy response to inference request in 2.322904109954834s
Received healthy response to inference request in 5.1547605991363525s
5 requests
0 failed requests
5th percentile: 1.7261911869049071
10th percentile: 1.8345616340637207
20th percentile: 2.0513025283813477
30th percentile: 2.1923192024230955
40th percentile: 2.2576116561889648
50th percentile: 2.322904109954834
60th percentile: 2.4497337341308594
70th percentile: 2.5765633583068848
80th percentile: 3.142934656143189
90th percentile: 4.148847627639771
95th percentile: 4.651804113388061
99th percentile: 5.054169301986694
mean time: 2.779027318954468
Pipeline stage StressChecker completed in 34.76s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyTriggerPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage OfflineFamilyFriendlyTriggerPipeline completed in 0.65s
run pipeline stage %s
Running pipeline stage TriggerMKMLProfilingPipeline
run_pipeline:run_in_cloud %s
starting trigger_guanaco_pipeline args=%s
triggered trigger_guanaco_pipeline args=%s
Pipeline stage TriggerMKMLProfilingPipeline completed in 0.69s
Shutdown handler de-registered
chaiml-llama-8b-multih_78780_v28 status is now deployed due to DeploymentManager action
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.10s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.09s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service chaiml-llama-8b-multih-78780-v28-profiler
Waiting for inference service chaiml-llama-8b-multih-78780-v28-profiler to be ready
Tearing down inference service chaiml-llama-8b-multih-78780-v28-profiler
%s, retrying in %s seconds...
Creating inference service chaiml-llama-8b-multih-78780-v28-profiler
Waiting for inference service chaiml-llama-8b-multih-78780-v28-profiler to be ready
Tearing down inference service chaiml-llama-8b-multih-78780-v28-profiler
%s, retrying in %s seconds...
Creating inference service chaiml-llama-8b-multih-78780-v28-profiler
Waiting for inference service chaiml-llama-8b-multih-78780-v28-profiler to be ready
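The repeated create / wait / tear down cycles followed by "%s, retrying in %s seconds..." look like a bounded retry loop. Note the literal %s placeholders: the message template is evidently logged without its arguments interpolated. A sketch of that pattern (the attempt count and delay are assumptions):

    import logging
    import time

    def run_with_retries(fn, attempts=3, delay_s=30.0):
        """Call fn(), retrying up to `attempts` times with a fixed delay."""
        for attempt in range(attempts):
            try:
                return fn()
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of retries: surface the error, pipeline cleans up
                # Logging the bare template reproduces the literal line seen above.
                logging.warning('%s, retrying in %s seconds...')
                time.sleep(delay_s)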
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage OfflineFamilyFriendlyScorer
Evaluating %s Family Friendly Score with %s threads
%s, retrying in %s seconds...
Evaluating %s Family Friendly Score with %s threads
%s, retrying in %s seconds...
(last message repeated 13 times)
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.16s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.11s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service chaiml-llama-8b-multih-78780-v28-profiler
Waiting for inference service chaiml-llama-8b-multih-78780-v28-profiler to be ready
Tearing down inference service chaiml-llama-8b-multih-78780-v28-profiler
%s, retrying in %s seconds...
Creating inference service chaiml-llama-8b-multih-78780-v28-profiler
Waiting for inference service chaiml-llama-8b-multih-78780-v28-profiler to be ready
Tearing down inference service chaiml-llama-8b-multih-78780-v28-profiler
%s, retrying in %s seconds...
Creating inference service chaiml-llama-8b-multih-78780-v28-profiler
Waiting for inference service chaiml-llama-8b-multih-78780-v28-profiler to be ready
Tearing down inference service chaiml-llama-8b-multih-78780-v28-profiler
clean up pipeline due to error=DeploymentError: Timeout to start the InferenceService chaiml-llama-8b-multih-78780-v28-profiler. The InferenceService was as follows:
  metadata: name=chaiml-llama-8b-multih-78780-v28-profiler, namespace=tenant-chaiml-guanaco, apiVersion=serving.kserve.io/v1beta1, kind=InferenceService, creationTimestamp=2025-02-14T02:46:56Z, resourceVersion=275908576, uid=abae27c2-4225-42a6-9c6f-81125314a91e, finalizers=[inferenceservice.finalizers]
  annotations (Knative autoscaling): class=hpa.autoscaling.knative.dev, container-concurrency-target-percentage=70, initial-scale=1, max-scale-down-rate=1.1, max-scale-up-rate=2, metric=mean_pod_latency_ms_v2, panic-threshold-percentage=650, panic-window-percentage=35, scale-down-delay=30s, scale-to-zero-grace-period=10m, stable-window=180s, target=3700, target-burst-capacity=-1, tick-interval=15s, features.knative.dev/http-full-duplex=Enabled, networking.knative.dev/ingress-class=istio.ingress.networking.knative.dev
  labels: knative.coreweave.cloud/ingress=istio.ingress.networking.knative.dev, prometheus.k.chaiverse.com=true, qos.coreweave.cloud/latency=low
  managedFields: server-side apply records from manager OpenAPI-Generator (spec, 2025-02-14T02:46:56Z) and manager "manager" (finalizers 2025-02-14T02:46:56Z; status 2025-02-14T02:47:20Z)
  spec.predictor:
    nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution prefers topology.kubernetes.io/region In [ORD1] (weight 5); requiredDuringSchedulingIgnoredDuringExecution requires gpu.nvidia.com/class In [RTX_A5000]
    containerConcurrency=0, minReplicas=1, maxReplicas=1, timeout=60, imagePullSecrets=[docker-creds], volumes: shared-memory-cache (emptyDir, medium=Memory)
    container kserve-container: image=gcr.io/chai-959f8/chai-guanaco/mkml:mkml_v0.11.12_dg, imagePullPolicy=IfNotPresent
      env: MAX_TOKEN_INPUT=2048, BEST_OF=1, TEMPERATURE=1.0, PRESENCE_PENALTY=0.0, FREQUENCY_PENALTY=0.0, TOP_P=1.0, MIN_P=0.0, TOP_K=40, STOPPING_WORDS=["\\n"], MAX_TOKENS=1, MAX_BATCH_SIZE=128, URL_ROUTE=GPT-J-6B-lit-v2, OBJ_ACCESS_KEY_ID=LETMTTRMLFFAMTBK, OBJ_SECRET_ACCESS_KEY=VwwZaqefOOoaouNxUk03oUmK9pVEfruJhjBHPGdgycK, OBJ_ENDPOINT=https://accel-object.ord1.coreweave.com, TENSORIZER_URI=s3://guanaco-mkml-models/chaiml-llama-8b-multih-78780-v28, RESERVE_MEMORY=2048, DOWNLOAD_TO_LOCAL=/dev/shm/model_cache, NUM_GPUS=1, MK1_MKML_LICENSE_KEY from secret mkml-license-key/key
      readinessProbe: exec ['cat', '/tmp/ready'], failureThreshold=1, initialDelaySeconds=10, periodSeconds=10, successThreshold=1, timeoutSeconds=5
      resources (requests = limits): cpu=2, memory=12Gi, nvidia.com/gpu=1; volumeMounts: shared-memory-cache at /dev/shm
  status:
    components.predictor.latestCreatedRevision: chaiml-llama-8b-multih-78780-v28-profiler-predictor-00001
    conditions (all status=False, lastTransitionTime=2025-02-14T02:47:20Z): LatestDeploymentReady (PredictorConfigurationReady not ready); PredictorConfigurationReady (RevisionFailed: Revision "chaiml-llama-8b-multih-78780-v28-profiler-predictor-00001" failed, container error below); PredictorReady, PredictorRouteReady, Ready (RevisionMissing: Configuration "chaiml-llama-8b-multih-78780-v28-profiler-predictor" does not have any ready Revision); RoutesReady (PredictorRouteReady not ready)
    modelStatus: reason=ModelLoadFailed, exitCode=1, activeModelState='', targetModelState=Pending, transitionStatus=InProgress, observedGeneration=1
    lastFailureInfo.message (container log, head truncated): ...quantization_profile=s0, all_reduce_profile=None, kv_cache_profile=None, calibration_samples=-1, sampling=SamplingParameters(temperature=1.0, top_p=1.0, min_p=0.0, top_k=40, max_input_tokens=2048, max_tokens=1, stop=['\n'], eos_token_ids=[], frequency_penalty=0.0, presence_penalty=0.0, reward_enabled=True, num_samples=1, reward_max_token_input=256, drop_incomplete_sentences=True, profile=False), url_route=GPT-J-6B-lit-v2, tensorizer_uri=s3://guanaco-mkml-models/chaiml-llama-8b-multih-78780-v28, s3_creds=S3Credentials(s3_access_key_id='LETMTTRMLFFAMTBK', s3_secret_access_key='VwwZaqefOOoaouNxUk03oUmK9pVEfruJhjBHPGdgycK', s3_endpoint='https://accel-object.ord1.coreweave.com', s3_uncached_endpoint='https://object.ord1.coreweave.com'), local_folder=/dev/shm/model_cache
      [INFO] Initialized device rank 0
      Traceback (most recent call last):
        File "/code/mkml_inference_service/main.py", line 95, in <module>
          model.load()
        File "/code/mkml_inference_service/main.py", line 31, in load
          self.engine = mkml_backend.AsyncInferenceService.from_folder(settings, settings.local_folder)
        File "/code/mkml_inference_service/mkml_backend.py", line 49, in from_folder
          return service._from_folder(settings, folder)
        File "/code/mkml_inference_service/mkml_backend.py", line 71, in _from_folder
          engine = mkml.ModelForInference.from_pretrained(
        File "/opt/conda/lib/python3.10/site-packages/mk1/flywheel/inference.py", line 66, in from_pretrained
          manifold = TensorManifold(model_path, tensor_parallel_size, batching_config, profile, s3_config)
        File "/opt/conda/lib/python3.10/site-packages/mk1/flywheel/manifold.py", line 152, in __init__
          self.model_actor.load(model_path, profile)
        File "/opt/conda/lib/python3.10/site-packages/mk1/flywheel/manifold.py", line 63, in load
          Factory = get_model_factory(self.config)
        File "/opt/conda/lib/python3.10/site-packages/mk1/flywheel/instrument.py", line 65, in get_model_factory
          raise NotImplementedError(config.architectures)
      NotImplementedError: ['MultiHeadLlamaClassifier']
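The root cause buried in the dump above: mk1 flywheel's get_model_factory dispatches on the config's architectures list and raises NotImplementedError for anything it has no factory for, and this submission's architecture is MultiHeadLlamaClassifier rather than a supported causal-LM class. A minimal sketch of that dispatch pattern (the registry contents here are illustrative assumptions, not mk1's actual supported list):

    # Factory lookup keyed on the HF config's `architectures` list; an
    # unregistered architecture fails exactly like the profiler pod above.
    MODEL_FACTORIES = {
        'LlamaForCausalLM': 'llama-factory',      # placeholder values for illustration
        'MistralForCausalLM': 'mistral-factory',
    }

    def get_model_factory(config):
        for arch in config.architectures:
            if arch in MODEL_FACTORIES:
                return MODEL_FACTORIES[arch]
        raise NotImplementedError(config.architectures)

    class Config:
        architectures = ['MultiHeadLlamaClassifier']

    try:
        get_model_factory(Config())
    except NotImplementedError as err:
        print(err)  # ['MultiHeadLlamaClassifier']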
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.15s
Shutdown handler de-registered
Shutdown handler registered
run pipeline %s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.14s
run pipeline stage %s
Running pipeline stage MKMLProfilerTemplater
Pipeline stage MKMLProfilerTemplater completed in 0.11s
run pipeline stage %s
Running pipeline stage MKMLProfilerDeployer
Creating inference service chaiml-llama-8b-multih-78780-v28-profiler
Waiting for inference service chaiml-llama-8b-multih-78780-v28-profiler to be ready
Tearing down inference service chaiml-llama-8b-multih-78780-v28-profiler
%s, retrying in %s seconds...
Creating inference service chaiml-llama-8b-multih-78780-v28-profiler
Waiting for inference service chaiml-llama-8b-multih-78780-v28-profiler to be ready
Tearing down inference service chaiml-llama-8b-multih-78780-v28-profiler
%s, retrying in %s seconds...
Creating inference service chaiml-llama-8b-multih-78780-v28-profiler
Waiting for inference service chaiml-llama-8b-multih-78780-v28-profiler to be ready
Tearing down inference service chaiml-llama-8b-multih-78780-v28-profiler
clean up pipeline due to error=DeploymentError: Timeout to start the InferenceService chaiml-llama-8b-multih-78780-v28-profiler. The InferenceService dump is identical in spec to the 02:46:56Z attempt above, with metadata creationTimestamp=2025-02-14T03:17:54Z, resourceVersion=275940798, uid=81bb43b7-3736-48b5-9fde-680ee507b8c6, and status conditions stamped 2025-02-14T03:19:35Z. The container again failed to load the model with the same traceback, ending in:
  NotImplementedError: ['MultiHeadLlamaClassifier']
This time modelStatus records reason=ModelLoadFailed, exitCode=1, activeModelState='', targetModelState=FailedToLoad, transitionStatus=BlockedByFailedLoad.
run pipeline stage %s
Running pipeline stage MKMLProfilerDeleter
Skipping teardown as no inference service was successfully deployed
Pipeline stage MKMLProfilerDeleter completed in 0.16s
Shutdown handler de-registered
chaiml-llama-8b-multih_78780_v28 status is now inactive due to auto deactivation removed underperforming models
chaiml-llama-8b-multih_78780_v28 status is now torndown due to DeploymentManager action