submission_id: zai-org-glm-4-7_v7
developer_uid: zonemercy
status: torndown
model_repo: zai-org/GLM-4.7
generation_params: {'temperature': 1.0, 'top_p': 0.95, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['</s>', '<|assistant|>', '<|user|>', '####', '<|im_end|>'], 'max_input_tokens': 2048, 'best_of': 8, 'max_output_tokens': 80}
formatter: {'memory_template': "[gMASK]<sop><|system|>\n{bot_name}'s persona: {memory}", 'prompt_template': '', 'bot_template': '<|assistant|>{bot_name}: {message}', 'user_template': '<|user|>{message}', 'response_template': '<|assistant|></think>{bot_name}:', 'truncate_by_message': True}
timestamp: 2026-04-13T12:21:03+00:00
model_name: zai-org-glm-4-7_v7
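For context, a minimal sketch of how the formatter templates and generation_params above could be applied at serving time. This is an illustrative reconstruction, not the pipeline's actual formatter: build_prompt, the example conversation, and the plain kwargs dict are assumptions; prompt_template is empty in the header, and per-message truncation (truncate_by_message) is not modelled.

    # Sketch: assemble a GLM-style prompt from the submission's templates and map the
    # generation_params onto the keyword arguments a vLLM SamplingParams would take.
    MEMORY_TEMPLATE = "[gMASK]<sop><|system|>\n{bot_name}'s persona: {memory}"
    BOT_TEMPLATE = "<|assistant|>{bot_name}: {message}"
    USER_TEMPLATE = "<|user|>{message}"
    RESPONSE_TEMPLATE = "<|assistant|></think>{bot_name}:"

    def build_prompt(bot_name, memory, turns):
        """turns: list of (role, message) tuples, oldest first; role is 'user' or 'bot'."""
        parts = [MEMORY_TEMPLATE.format(bot_name=bot_name, memory=memory)]
        for role, message in turns:
            template = USER_TEMPLATE if role == "user" else BOT_TEMPLATE
            parts.append(template.format(bot_name=bot_name, message=message))
        parts.append(RESPONSE_TEMPLATE.format(bot_name=bot_name))
        return "".join(parts)

    # generation_params from the header; best_of=8 and max_input_tokens are handled by
    # the serving layer, so only per-request sampling settings are shown here.
    sampling_kwargs = dict(
        temperature=1.0,
        top_p=0.95,
        top_k=40,
        min_p=0.0,
        presence_penalty=0.0,
        frequency_penalty=0.0,
        max_tokens=80,
        stop=["</s>", "<|assistant|>", "<|user|>", "####", "<|im_end|>"],
    )

    print(build_prompt("Luna", "a curious android", [("user", "hey, are you awake?")]))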
Resubmit model
Shutdown handler not registered because Python interpreter is not running in the main thread
run pipeline %s
run pipeline stage %s
Running pipeline stage VLLMUploader
Starting job with name zai-org-glm-4-7-v7-uploader
Waiting for job on zai-org-glm-4-7-v7-uploader to finish
2026-04-13T11:26:17.861883+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:27:17.946382+00:00 monitor updated for zai-org-glm-4-7_v7
zai-org-glm-4-7-v7-uploader: Using quantization_mode: fp8
zai-org-glm-4-7-v7-uploader: Checking if ChaiML/GLM-4.7-FP8 already exists in ChaiML
zai-org-glm-4-7-v7-uploader: Model already exists. Downloading to /tmp/model_output...
zai-org-glm-4-7-v7-uploader: Downloading snapshot of ChaiML/GLM-4.7-FP8...
2026-04-13T11:28:18.059001+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:29:18.148341+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:30:18.237402+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:31:18.321203+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:32:18.403723+00:00 monitor updated for zai-org-glm-4-7_v7
zai-org-glm-4-7-v7-uploader: Downloaded in 293.204s
zai-org-glm-4-7-v7-uploader: Processed model zai-org/GLM-4.7 in 295.926s
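The quantization/download step above follows a standard huggingface_hub pattern: check whether the pre-quantized FP8 repo already exists, and pull a snapshot of it if so. A minimal sketch under that assumption (the repo id and target directory are taken from the log; timing, retries and the fallback quantization path are not shown, and a token would be needed if the repo is private):

    # Sketch: reproduce the "already exists -> download snapshot" branch with huggingface_hub.
    import time
    from huggingface_hub import HfApi, snapshot_download

    REPO_ID = "ChaiML/GLM-4.7-FP8"    # quantized artifact named in the log
    LOCAL_DIR = "/tmp/model_output"

    start = time.time()
    if HfApi().repo_exists(REPO_ID, repo_type="model"):
        # Model already exists: download it instead of re-quantizing zai-org/GLM-4.7 to fp8.
        snapshot_download(repo_id=REPO_ID, local_dir=LOCAL_DIR)
        print(f"Downloaded in {time.time() - start:.3f}s")
    else:
        print(f"{REPO_ID} not found; the uploader would quantize and push it first.")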
zai-org-glm-4-7-v7-uploader: creating bucket guanaco-vllm-models
zai-org-glm-4-7-v7-uploader: /usr/lib/python3/dist-packages/S3/BaseUtils.py:56: SyntaxWarning: invalid escape sequence '\.'
zai-org-glm-4-7-v7-uploader: RE_S3_DATESTRING = re.compile('\.[0-9]*(?:[Z\\-\\+]*?)')
zai-org-glm-4-7-v7-uploader: /usr/lib/python3/dist-packages/S3/BaseUtils.py:57: SyntaxWarning: invalid escape sequence '\s'
zai-org-glm-4-7-v7-uploader: RE_XML_NAMESPACE = re.compile(b'^(<?[^>]+?>\s*|\s*)(<\w+) xmlns=[\'"](https?://[^\'"]+)[\'"]', re.MULTILINE)
zai-org-glm-4-7-v7-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:240: SyntaxWarning: invalid escape sequence '\.'
zai-org-glm-4-7-v7-uploader: invalid = re.search("([^a-z0-9\.-])", bucket, re.UNICODE)
zai-org-glm-4-7-v7-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:244: SyntaxWarning: invalid escape sequence '\.'
zai-org-glm-4-7-v7-uploader: invalid = re.search("([^A-Za-z0-9\._-])", bucket, re.UNICODE)
zai-org-glm-4-7-v7-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:255: SyntaxWarning: invalid escape sequence '\.'
zai-org-glm-4-7-v7-uploader: if re.search("-\.", bucket, re.UNICODE):
zai-org-glm-4-7-v7-uploader: /usr/lib/python3/dist-packages/S3/Utils.py:257: SyntaxWarning: invalid escape sequence '\.'
zai-org-glm-4-7-v7-uploader: if re.search("\.\.", bucket, re.UNICODE):
zai-org-glm-4-7-v7-uploader: /usr/lib/python3/dist-packages/S3/S3Uri.py:155: SyntaxWarning: invalid escape sequence '\w'
zai-org-glm-4-7-v7-uploader: _re = re.compile("^(\w+://)?(.*)", re.UNICODE)
zai-org-glm-4-7-v7-uploader: /usr/lib/python3/dist-packages/S3/FileLists.py:480: SyntaxWarning: invalid escape sequence '\*'
zai-org-glm-4-7-v7-uploader: wildcard_split_result = re.split("\*|\?", uri_str, maxsplit=1)
zai-org-glm-4-7-v7-uploader: Bucket 's3://guanaco-vllm-models/' created
zai-org-glm-4-7-v7-uploader: uploading /tmp/model_output to s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/.gitattributes s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/.gitattributes
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/config.json s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/config.json
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/recipe.yaml s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/recipe.yaml
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/generation_config.json s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/generation_config.json
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/chat_template.jinja s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/chat_template.jinja
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/tokenizer_config.json s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/tokenizer_config.json
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model.safetensors.index.json s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model.safetensors.index.json
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/tokenizer.json s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/tokenizer.json
2026-04-13T11:33:18.493097+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:34:18.593708+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:35:18.685339+00:00 monitor updated for zai-org-glm-4-7_v7
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00015-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00015-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00013-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00013-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00011-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00011-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00014-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00014-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00010-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00010-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00012-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00012-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00007-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00007-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00008-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00008-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00005-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00005-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00006-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00006-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00003-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00003-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00004-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00004-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00001-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00001-of-00015.safetensors
zai-org-glm-4-7-v7-uploader: cp /tmp/model_output/model-00002-of-00015.safetensors s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default/model-00002-of-00015.safetensors
2026-04-13T11:36:18.805981+00:00 monitor updated for zai-org-glm-4-7_v7
Job zai-org-glm-4-7-v7-uploader completed after 663.09s with status: succeeded
Stopping job with name zai-org-glm-4-7-v7-uploader
Pipeline stage VLLMUploader completed in 663.52s
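The bucket creation and the per-file cp lines above are a routine object-store upload (the pipeline itself appears to shell out to s3cmd, which is where the SyntaxWarning lines come from). A minimal boto3 sketch of the same step, assuming the endpoint URL and credentials are supplied via the usual AWS_* environment variables; this is not the pipeline's actual uploader.

    # Sketch: ensure the bucket exists, then mirror the flat /tmp/model_output directory
    # into the submission's prefix on the S3-compatible store.
    import os
    from pathlib import Path

    import boto3
    from botocore.exceptions import ClientError

    BUCKET = "guanaco-vllm-models"
    PREFIX = "zai-org-glm-4-7-v7/default"
    SOURCE = Path("/tmp/model_output")

    s3 = boto3.client("s3", endpoint_url=os.environ.get("AWS_ENDPOINT_URL"))

    try:
        s3.head_bucket(Bucket=BUCKET)
    except ClientError:
        s3.create_bucket(Bucket=BUCKET)      # "creating bucket guanaco-vllm-models"

    for path in sorted(SOURCE.iterdir()):    # model_output is flat, per the cp lines above
        if path.is_file():
            key = f"{PREFIX}/{path.name}"
            print(f"cp {path} s3://{BUCKET}/{key}")
            s3.upload_file(str(path), BUCKET, key)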
run pipeline stage %s
Running pipeline stage VLLMUploaderAMD
Pipeline stage vllm_upload_amd skipped, reason=not amd cluster
Pipeline stage VLLMUploaderAMD completed in 0.09s
run pipeline stage %s
Running pipeline stage VLLMTemplater
Pipeline stage VLLMTemplater completed in 1.10s
run pipeline stage %s
Running pipeline stage VLLMDeployer
Creating inference service zai-org-glm-4-7-v7
Waiting for inference service zai-org-glm-4-7-v7 to be ready
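Creating the inference service and waiting for it to become ready is a KServe custom-resource operation against the serving.kserve.io/v1beta1 API. A minimal sketch with the official kubernetes Python client; the predictor spec is deliberately elided here (the full one, with the vLLM container and autoscaling annotations, appears in the error dump further down), and the 40-minute deadline mirrors the serving.knative.dev/progress-deadline annotation. Assumes kubeconfig or in-cluster credentials; not the deployer's actual code.

    # Sketch: create an InferenceService and poll its Ready condition until a deadline.
    import time
    from kubernetes import client, config

    GROUP, VERSION, PLURAL = "serving.kserve.io", "v1beta1", "inferenceservices"
    NAMESPACE, NAME = "tenant-chaiml-guanaco", "zai-org-glm-4-7-v7"

    config.load_kube_config()                 # or config.load_incluster_config()
    api = client.CustomObjectsApi()

    isvc = {
        "apiVersion": f"{GROUP}/{VERSION}",
        "kind": "InferenceService",
        "metadata": {"name": NAME, "namespace": NAMESPACE},
        "spec": {"predictor": {"containers": [{"name": "kserve-container"}]}},  # elided
    }
    api.create_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, isvc)

    deadline = time.time() + 40 * 60          # progress-deadline from the annotations
    while time.time() < deadline:
        obj = api.get_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, NAME)
        conditions = obj.get("status", {}).get("conditions", [])
        if any(c["type"] == "Ready" and c["status"] == "True" for c in conditions):
            print("InferenceService is ready")
            break
        time.sleep(60)
    else:
        raise TimeoutError(f"Timeout to start the InferenceService {NAME}")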
2026-04-13T11:37:18.893398+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:38:18.985097+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:39:19.081663+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:40:19.716624+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:41:19.811707+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:42:20.327601+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:43:20.488393+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:44:20.660390+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:45:20.756954+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:46:20.924947+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:47:21.024238+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:48:21.121378+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:49:21.345452+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:50:21.473815+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:51:33.929872+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:52:34.552372+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:53:34.701160+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:54:35.718327+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:55:35.893467+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:56:35.990426+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:57:36.092160+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:58:36.184108+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T11:59:36.313004+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:00:36.484942+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:01:36.578803+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:02:36.679114+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:03:36.774368+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:04:36.875827+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:05:36.992025+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:06:37.126767+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:07:37.613726+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:08:37.746404+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:09:37.850345+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:10:37.964990+00:00 monitor updated for zai-org-glm-4-7_v7
Failed to get request counts for guanaco-submitter. Falling back to default
2026-04-13T12:11:38.058948+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:12:38.155702+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:13:38.264830+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:14:38.373336+00:00 monitor updated for zai-org-glm-4-7_v7
2026-04-13T12:15:38.485897+00:00 monitor updated for zai-org-glm-4-7_v7
Tearing down inference service zai-org-glm-4-7-v7
clean up pipeline due to error=DeploymentError('Timeout to start the InferenceService zai-org-glm-4-7-v7. The InferenceService is as following: {\'apiVersion\': \'serving.kserve.io/v1beta1\', \'kind\': \'InferenceService\', \'metadata\': {\'annotations\': {\'autoscaling.knative.dev/class\': \'hpa.autoscaling.knative.dev\', \'autoscaling.knative.dev/container-concurrency-target-percentage\': \'70\', \'autoscaling.knative.dev/initial-scale\': \'5\', \'autoscaling.knative.dev/max-scale-down-rate\': \'1.1\', \'autoscaling.knative.dev/max-scale-up-rate\': \'2\', \'autoscaling.knative.dev/metric\': \'mean_pod_latency_ms_v2\', \'autoscaling.knative.dev/panic-threshold-percentage\': \'650\', \'autoscaling.knative.dev/panic-window-percentage\': \'35\', \'autoscaling.knative.dev/scale-down-delay\': \'30s\', \'autoscaling.knative.dev/scale-to-zero-grace-period\': \'10m\', \'autoscaling.knative.dev/stable-window\': \'180s\', \'autoscaling.knative.dev/target\': \'4000\', \'autoscaling.knative.dev/target-burst-capacity\': \'-1\', \'autoscaling.knative.dev/tick-interval\': \'15s\', \'features.knative.dev/http-full-duplex\': \'Enabled\', \'networking.knative.dev/ingress-class\': \'istio.ingress.networking.knative.dev\', \'serving.knative.dev/progress-deadline\': \'40m\'}, \'creationTimestamp\': \'2026-04-13T11:36:24Z\', \'finalizers\': [\'inferenceservice.finalizers\'], \'generation\': 1, \'labels\': {\'istio.io/rev\': \'prod-canary\', \'knative.coreweave.cloud/ingress\': \'istio.ingress.networking.knative.dev\', \'prometheus.k.chaiverse.com\': \'true\', \'qos.coreweave.cloud/latency\': \'low\'}, \'managedFields\': [{\'apiVersion\': \'serving.kserve.io/v1beta1\', \'fieldsType\': \'FieldsV1\', \'fieldsV1\': {\'f:metadata\': {\'f:annotations\': {\'.\': {}, \'f:autoscaling.knative.dev/class\': {}, \'f:autoscaling.knative.dev/container-concurrency-target-percentage\': {}, \'f:autoscaling.knative.dev/initial-scale\': {}, \'f:autoscaling.knative.dev/max-scale-down-rate\': {}, \'f:autoscaling.knative.dev/max-scale-up-rate\': {}, \'f:autoscaling.knative.dev/metric\': {}, \'f:autoscaling.knative.dev/panic-threshold-percentage\': {}, \'f:autoscaling.knative.dev/panic-window-percentage\': {}, \'f:autoscaling.knative.dev/scale-down-delay\': {}, \'f:autoscaling.knative.dev/scale-to-zero-grace-period\': {}, \'f:autoscaling.knative.dev/stable-window\': {}, \'f:autoscaling.knative.dev/target\': {}, \'f:autoscaling.knative.dev/target-burst-capacity\': {}, \'f:autoscaling.knative.dev/tick-interval\': {}, \'f:features.knative.dev/http-full-duplex\': {}, \'f:networking.knative.dev/ingress-class\': {}, \'f:serving.knative.dev/progress-deadline\': {}}, \'f:labels\': {\'.\': {}, \'f:istio.io/rev\': {}, \'f:knative.coreweave.cloud/ingress\': {}, \'f:prometheus.k.chaiverse.com\': {}, \'f:qos.coreweave.cloud/latency\': {}}}, \'f:spec\': {\'.\': {}, \'f:predictor\': {\'.\': {}, \'f:affinity\': {\'.\': {}, \'f:nodeAffinity\': {\'.\': {}, \'f:tion\': {}, \'f:requiredDuringSchedulingIgnoredDuringExecution\': {}}, \'f:podAffinity\': {\'.\': {}, \'f:tion\': {}}}, \'f:containerConcurrency\': {}, \'f:containers\': {}, \'f:imagePullSecrets\': {}, \'f:maxReplicas\': {}, \'f:minReplicas\': {}, \'f:priorityClassName\': {}, \'f:timeout\': {}, \'f:volumes\': {}}}}, \'manager\': \'OpenAPI-Generator\', \'operation\': \'Update\', \'time\': \'2026-04-13T11:36:24Z\'}, {\'apiVersion\': \'serving.kserve.io/v1beta1\', \'fieldsType\': \'FieldsV1\', \'fieldsV1\': {\'f:metadata\': {\'f:finalizers\': {\'.\': {}, \'v:"inferenceservice.finalizers"\': {}}}}, 
\'manager\': \'manager\', \'operation\': \'Update\', \'time\': \'2026-04-13T11:36:24Z\'}, {\'apiVersion\': \'serving.kserve.io/v1beta1\', \'fieldsType\': \'FieldsV1\', \'fieldsV1\': {\'f:status\': {\'.\': {}, \'f:components\': {\'.\': {}, \'f:predictor\': {\'.\': {}, \'f:latestCreatedRevision\': {}}}, \'f:conditions\': {}, \'f:modelStatus\': {\'.\': {}, \'f:lastFailureInfo\': {\'.\': {}, \'f:exitCode\': {}, \'f:message\': {}, \'f:reason\': {}}, \'f:states\': {\'.\': {}, \'f:activeModelState\': {}, \'f:targetModelState\': {}}, \'f:transitionStatus\': {}}, \'f:observedGeneration\': {}}}, \'manager\': \'manager\', \'operation\': \'Update\', \'subresource\': \'status\', \'time\': \'2026-04-13T12:16:29Z\'}], \'name\': \'zai-org-glm-4-7-v7\', \'namespace\': \'tenant-chaiml-guanaco\', \'resourceVersion\': \'1349795190\', \'uid\': \'94374188-e9de-498e-b21a-1af0f3e47bc0\'}, \'spec\': {\'predictor\': {\'affinity\': {\'nodeAffinity\': {\'tion\': [{\'preference\': {\'matchExpressions\': [{\'key\': \'gpu.nvidia.com/class\', \'operator\': \'In\', \'values\': [\'A100_NVLINK_80GB\']}]}, \'weight\': 5}], \'requiredDuringSchedulingIgnoredDuringExecution\': {\'nodeSelectorTerms\': [{\'matchExpressions\': [{\'key\': \'gpu.nvidia.com/class\', \'operator\': \'In\', \'values\': [\'A100_NVLINK_80GB\']}]}]}}, \'podAffinity\': {\'tion\': [{\'podAffinityTerm\': {\'labelSelector\': {\'matchLabels\': {\'serving.kserve.io/inferenceservice\': \'zai-org-glm-4-7-v7\'}}, \'topologyKey\': \'kubernetes.io/hostname\'}, \'weight\': 100}]}}, \'containerConcurrency\': 0, \'containers\': [{\'args\': [\'serve\', \'s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default\', \'--port\', \'8080\', \'--tensor-parallel-size\', \'4\', \'--gpu-memory-utilization\', \'0.9\', \'--max-model-len\', \'8192\', \'--max-num-batched-tokens\', \'8192\', \'--max-num-seqs\', \'64\', \'--trust-remote-code\', \'--load-format\', \'runai_streamer\', \'--served-model-name\', \'zai-org/GLM-4.7\', \'--model-loader-extra-config\', \'{"distributed": true, "concurrency": 2}\'], \'env\': [{\'name\': \'RESERVE_MEMORY\', \'value\': \'2048\'}, {\'name\': \'DOWNLOAD_TO_LOCAL\', \'value\': \'/dev/shm/model_cache\'}, {\'name\': \'NUM_GPUS\', \'value\': \'4\'}, {\'name\': \'VLLM_ASSETS_CACHE\', \'value\': \'/code/vllm_assets_cache\'}, {\'name\': \'RUNAI_STREAMER_S3_USE_VIRTUAL_ADDRESSING\', \'value\': \'1\'}, {\'name\': \'RUNAI_STREAMER_CONCURRENCY\', \'value\': \'1\'}, {\'name\': \'AWS_EC2_METADATA_DISABLED\', \'value\': \'true\'}, {\'name\': \'AWS_ACCESS_KEY_ID\', \'value\': \'CWZAGMHZXKZRFGJK\'}, {\'name\': \'AWS_SECRET_ACCESS_KEY\', \'value\': \'cwoAeWzp46q4O0sTNXOEuZ1MvZzKEFlS9DtEhnTldKp\'}, {\'name\': \'AWS_ENDPOINT_URL\', \'value\': \'https://cwobject.com\'}, {\'name\': \'HF_TOKEN\', \'valueFrom\': {\'secretKeyRef\': {\'key\': \'token\', \'name\': \'hf-token\'}}}, {\'name\': \'RUNAI_STREAMER_CONCURRENCY\', \'value\': \'1\'}], \'image\': \'gcr.io/chai-959f8/vllm:v0.17.1.transformers-5.3.0-dsa_patch\', \'imagePullPolicy\': \'IfNotPresent\', \'name\': \'kserve-container\', \'readinessProbe\': {\'failureThreshold\': 1, \'httpGet\': {\'path\': \'/v1/models\', \'port\': 8080}, \'initialDelaySeconds\': 60, \'periodSeconds\': 10, \'successThreshold\': 1, \'timeoutSeconds\': 5}, \'resources\': {\'limits\': {\'cpu\': \'8\', \'memory\': \'722Gi\', \'nvidia.com/gpu\': \'4\'}, \'requests\': {\'cpu\': \'8\', \'memory\': \'722Gi\', \'nvidia.com/gpu\': \'4\'}}, \'volumeMounts\': [{\'mountPath\': \'/dev/shm\', \'name\': \'shared-memory-cache\'}]}], \'imagePullSecrets\': [{\'name\': 
\'docker-creds\'}], \'maxReplicas\': 10, \'minReplicas\': 0, \'priorityClassName\': \'chaiverse\', \'timeout\': 20, \'volumes\': [{\'emptyDir\': {\'medium\': \'Memory\', \'sizeLimit\': \'722Gi\'}, \'name\': \'shared-memory-cache\'}]}}, \'status\': {\'components\': {\'predictor\': {\'latestCreatedRevision\': \'zai-org-glm-4-7-v7-predictor-00001\'}}, \'conditions\': [{\'lastTransitionTime\': \'2026-04-13T11:36:25Z\', \'reason\': \'PredictorConfigurationReady not ready\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'LatestDeploymentReady\'}, {\'lastTransitionTime\': \'2026-04-13T12:16:29Z\', \'message\': \'Revision "zai-org-glm-4-7-v7-predictor-00001" failed with message: Container failed with: m.py", line 154, in __init__\\n(APIServer pid=1) self.engine_core = EngineCoreClient.make_async_mp_client(\\n(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/tracing/otel.py", line 178, in sync_wrapper\\n(APIServer pid=1) return func(*args, **kwargs)\\n(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 127, in make_async_mp_client\\n(APIServer pid=1) return AsyncMPClient(*client_args)\\n(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/tracing/otel.py", line 178, in sync_wrapper\\n(APIServer pid=1) return func(*args, **kwargs)\\n(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 911, in __init__\\n(APIServer pid=1) super().__init__(\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 569, in __init__\\n(APIServer pid=1) with launch_core_engines(\\n(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^\\n(APIServer pid=1) File "/usr/lib/python3.12/contextlib.py", line 144, in __exit__\\n(APIServer pid=1) next(self.gen)\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/utils.py", line 951, in launch_core_engines\\n(APIServer pid=1) wait_for_engine_startup(\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/utils.py", line 1010, in wait_for_engine_startup\\n(APIServer pid=1) raise RuntimeError(\\n(APIServer pid=1) RuntimeError: Engine core initialization failed. See root cause above. 
Failed core proc(s): {}\\n/usr/lib/python3.12/multiprocessing/resource_tracker.py:279: UserWarning: resource_tracker: There appear to be 5 leaked shared_memory objects to clean up at shutdown\\n warnings.warn(\\\'resource_tracker: There appear to be %!!(MISSING)d(MISSING) \\\'\\n.\', \'reason\': \'RevisionFailed\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'PredictorConfigurationReady\'}, {\'lastTransitionTime\': \'2026-04-13T11:36:25Z\', \'message\': \'Configuration "zai-org-glm-4-7-v7-predictor" does not have any ready Revision.\', \'reason\': \'RevisionMissing\', \'status\': \'False\', \'type\': \'PredictorReady\'}, {\'lastTransitionTime\': \'2026-04-13T11:36:25Z\', \'message\': \'Configuration "zai-org-glm-4-7-v7-predictor" does not have any ready Revision.\', \'reason\': \'RevisionMissing\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'PredictorRouteReady\'}, {\'lastTransitionTime\': \'2026-04-13T11:36:25Z\', \'message\': \'Configuration "zai-org-glm-4-7-v7-predictor" does not have any ready Revision.\', \'reason\': \'RevisionMissing\', \'status\': \'False\', \'type\': \'Ready\'}, {\'lastTransitionTime\': \'2026-04-13T11:36:25Z\', \'reason\': \'PredictorRouteReady not ready\', \'severity\': \'Info\', \'status\': \'False\', \'type\': \'RoutesReady\'}], \'modelStatus\': {\'lastFailureInfo\': {\'exitCode\': 1, \'message\': \'m.py", line 154, in __init__\\n(APIServer pid=1) self.engine_core = EngineCoreClient.make_async_mp_client(\\n(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/tracing/otel.py", line 178, in sync_wrapper\\n(APIServer pid=1) return func(*args, **kwargs)\\n(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 127, in make_async_mp_client\\n(APIServer pid=1) return AsyncMPClient(*client_args)\\n(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/tracing/otel.py", line 178, in sync_wrapper\\n(APIServer pid=1) return func(*args, **kwargs)\\n(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 911, in __init__\\n(APIServer pid=1) super().__init__(\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 569, in __init__\\n(APIServer pid=1) with launch_core_engines(\\n(APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^\\n(APIServer pid=1) File "/usr/lib/python3.12/contextlib.py", line 144, in __exit__\\n(APIServer pid=1) next(self.gen)\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/utils.py", line 951, in launch_core_engines\\n(APIServer pid=1) wait_for_engine_startup(\\n(APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/utils.py", line 1010, in wait_for_engine_startup\\n(APIServer pid=1) raise RuntimeError(\\n(APIServer pid=1) RuntimeError: Engine core initialization failed. See root cause above. 
Failed core proc(s): {}\\n/usr/lib/python3.12/multiprocessing/resource_tracker.py:279: UserWarning: resource_tracker: There appear to be 5 leaked shared_memory objects to clean up at shutdown\\n warnings.warn(\\\'resource_tracker: There appear to be %d \\\'\\n\', \'reason\': \'ModelLoadFailed\'}, \'states\': {\'activeModelState\': \'\', \'targetModelState\': \'FailedToLoad\'}, \'transitionStatus\': \'BlockedByFailedLoad\'}, \'observedGeneration\': 1}}')
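What fails here: the vLLM engine core dies during startup inside the predictor pod ("Engine core initialization failed"), so the /v1/models readiness probe never passes, the progress deadline eventually lapses, and the deployer raises DeploymentError and tears the service down. For reproducing the launch outside Kubernetes, the container args in the dump translate roughly to the invocation below; this assumes the image's entrypoint is the vllm CLI and that the same S3 endpoint and credential environment variables are set.

    # Sketch: the vLLM launch implied by spec.containers[0].args in the error above.
    import subprocess

    cmd = [
        "vllm", "serve", "s3://guanaco-vllm-models/zai-org-glm-4-7-v7/default",
        "--port", "8080",
        "--tensor-parallel-size", "4",
        "--gpu-memory-utilization", "0.9",
        "--max-model-len", "8192",
        "--max-num-batched-tokens", "8192",
        "--max-num-seqs", "64",
        "--trust-remote-code",
        "--load-format", "runai_streamer",
        "--served-model-name", "zai-org/GLM-4.7",
        "--model-loader-extra-config", '{"distributed": true, "concurrency": 2}',
    ]
    subprocess.run(cmd, check=True)   # needs 4 GPUs and the runai_streamer S3 settings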
run pipeline stage %s
Running pipeline stage VLLMDeleter
Checking if service zai-org-glm-4-7-v7 is running
Skipping teardown as no inference service was found
Pipeline stage VLLMDeleter completed in 1.02s
run pipeline stage %s
Running pipeline stage VLLMModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key zai-org-glm-4-7-v7/default/.gitattributes from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/chat_template.jinja from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/config.json from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/generation_config.json from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00001-of-00015.safetensors from bucket guanaco-vllm-models
2026-04-13T12:16:38.583201+00:00 monitor updated for zai-org-glm-4-7_v7
Deleting key zai-org-glm-4-7-v7/default/model-00002-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00003-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00004-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00005-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00006-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00007-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00008-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00009-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00010-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00011-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00012-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00013-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00014-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model-00015-of-00015.safetensors from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/model.safetensors.index.json from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/recipe.yaml from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/tokenizer.json from bucket guanaco-vllm-models
Deleting key zai-org-glm-4-7-v7/default/tokenizer_config.json from bucket guanaco-vllm-models
Pipeline stage VLLMModelDeleter completed in 51.35s
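The per-key deletions above are the usual "list the prefix, delete each object" cleanup. A minimal boto3 sketch of the same step (again assuming endpoint and credentials come from the environment; the model-cache cleanup on the node is separate and not shown):

    # Sketch: remove every object under the submission's prefix from the bucket.
    import os
    import boto3

    BUCKET = "guanaco-vllm-models"
    PREFIX = "zai-org-glm-4-7-v7/default/"

    s3 = boto3.client("s3", endpoint_url=os.environ.get("AWS_ENDPOINT_URL"))
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            print(f"Deleting key {obj['Key']} from bucket {BUCKET}")
            s3.delete_object(Bucket=BUCKET, Key=obj["Key"])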
Shutdown handler de-registered
zai-org-glm-4-7_v7 status is now failed due to DeploymentManager action
zai-org-glm-4-7_v7 status is now torndown due to DeploymentManager action