Running pipeline stage MKMLizer
Starting job with name meta-llama-meta-llama-3-5386-v5-mkmlizer
Waiting for job on meta-llama-meta-llama-3-5386-v5-mkmlizer to finish
meta-llama-meta-llama-3-5386-v5-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ _____ __ __ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ /___/ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ Version: 0.8.14 ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ https://mk1.ai ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ The license key for the current software has been verified as ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ belonging to: ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ Chai Research Corp. ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
meta-llama-meta-llama-3-5386-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
meta-llama-meta-llama-3-5386-v5-mkmlizer: warnings.warn(warning_message, FutureWarning)
meta-llama-meta-llama-3-5386-v5-mkmlizer: Downloaded to shared memory in 90.883s
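The download step above pulls the full Hugging Face repo into shared memory before quantization. A minimal sketch of that step, assuming the huggingface_hub snapshot API is used (the repo id is taken from the log; the destination path is a placeholder, not the mkmlizer's actual layout):

```python
from huggingface_hub import snapshot_download

# Fetch the whole model repo into shared memory so the quantizer can read it
# without hitting disk. The repo id matches the log; the exact download
# directory used by the mkmlizer is not shown, so this path is an assumption.
local_path = snapshot_download(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    local_dir="/dev/shm/downloads/meta-llama-3-8b-instruct",
)
print(f"Downloaded to shared memory at {local_path}")
```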
meta-llama-meta-llama-3-5386-v5-mkmlizer: quantizing model to /dev/shm/model_cache
meta-llama-meta-llama-3-5386-v5-mkmlizer: Saving flywheel model at /dev/shm/model_cache
meta-llama-meta-llama-3-5386-v5-mkmlizer:
Loading 0: 0%| | 0/291 [00:00<?, ?it/s]
Loading 0: 4%|▍ | 12/291 [00:00<00:02, 106.74it/s]
Loading 0: 8%|▊ | 23/291 [00:00<00:02, 94.35it/s]
Loading 0: 13%|█▎ | 39/291 [00:00<00:02, 111.89it/s]
Loading 0: 18%|█▊ | 51/291 [00:00<00:02, 105.38it/s]
Loading 0: 23%|██▎ | 66/291 [00:00<00:02, 100.82it/s]
Loading 0: 26%|██▋ | 77/291 [00:00<00:02, 98.88it/s]
Loading 0: 30%|██▉ | 87/291 [00:01<00:03, 51.55it/s]
Loading 0: 35%|███▌ | 102/291 [00:01<00:02, 66.09it/s]
Loading 0: 39%|███▉ | 113/291 [00:01<00:02, 71.92it/s]
Loading 0: 44%|████▍ | 129/291 [00:01<00:01, 88.62it/s]
Loading 0: 48%|████▊ | 140/291 [00:01<00:01, 91.08it/s]
Loading 0: 54%|█████▎ | 156/291 [00:01<00:01, 105.86it/s]
Loading 0: 58%|█████▊ | 168/291 [00:01<00:01, 106.17it/s]
Loading 0: 62%|██████▏ | 180/291 [00:01<00:01, 106.13it/s]
Loading 0: 66%|██████▌ | 192/291 [00:02<00:01, 58.37it/s]
Loading 0: 69%|██████▉ | 202/291 [00:02<00:01, 64.97it/s]
Loading 0: 73%|███████▎ | 212/291 [00:02<00:01, 71.46it/s]
Loading 0: 78%|███████▊ | 228/291 [00:02<00:00, 87.46it/s]
Loading 0: 82%|████████▏ | 239/291 [00:02<00:00, 88.24it/s]
Loading 0: 86%|████████▋ | 251/291 [00:02<00:00, 94.87it/s]
Loading 0: 91%|█████████ | 264/291 [00:03<00:00, 98.65it/s]
Loading 0: 95%|█████████▍| 275/291 [00:03<00:00, 96.70it/s]
Loading 0: 99%|█████████▊| 287/291 [00:09<00:00, 6.33it/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
meta-llama-meta-llama-3-5386-v5-mkmlizer: quantized model in 26.094s
meta-llama-meta-llama-3-5386-v5-mkmlizer: Processed model meta-llama/Meta-Llama-3-8B-Instruct in 123.583s
meta-llama-meta-llama-3-5386-v5-mkmlizer: creating bucket guanaco-mkml-models
meta-llama-meta-llama-3-5386-v5-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
meta-llama-meta-llama-3-5386-v5-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5/config.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5/tokenizer_config.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5/tokenizer.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5/special_tokens_map.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5/flywheel_model.0.safetensors
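The cp lines above show the quantized artifacts being copied to S3 one file at a time. A rough boto3 equivalent of that upload, assuming standard AWS credentials are configured (the bucket and key prefix come from the log; the client setup and file enumeration are assumptions):

```python
import os
import boto3

s3 = boto3.client("s3")
cache_dir = "/dev/shm/model_cache"
bucket = "guanaco-mkml-models"
prefix = "meta-llama-meta-llama-3-5386-v5"

# Upload every cached file under the model's prefix, mirroring the cp lines.
for fname in os.listdir(cache_dir):
    src = os.path.join(cache_dir, fname)
    if os.path.isfile(src):
        s3.upload_file(src, bucket, f"{prefix}/{fname}")
        print(f"cp {src} s3://{bucket}/{prefix}/{fname}")
```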
meta-llama-meta-llama-3-5386-v5-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
meta-llama-meta-llama-3-5386-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-3-5386-v5-mkmlizer: warnings.warn(
meta-llama-meta-llama-3-5386-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-3-5386-v5-mkmlizer: warnings.warn(
meta-llama-meta-llama-3-5386-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-3-5386-v5-mkmlizer: warnings.warn(
meta-llama-meta-llama-3-5386-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
meta-llama-meta-llama-3-5386-v5-mkmlizer: return self.fget.__get__(instance, owner)()
meta-llama-meta-llama-3-5386-v5-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
meta-llama-meta-llama-3-5386-v5-mkmlizer: Saving duration: 0.497s
meta-llama-meta-llama-3-5386-v5-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 12.866s
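The reward-model step loads ChaiML/reward_gpt2_medium_preference_24m_e2 and serializes it to /tmp/reward_cache/reward.tensors. A hedged sketch of what that serialization might look like; loading via AutoModel (rather than a task-specific reward head class) and the safetensors call are assumptions, not the mkmlizer's actual code:

```python
from safetensors.torch import save_file
from transformers import AutoModel

# Load the reward model named in the log and dump its weights as a single
# safetensors file at the path the log reports. AutoModel is an assumption;
# the real pipeline may use a specific classification/reward head class.
model = AutoModel.from_pretrained("ChaiML/reward_gpt2_medium_preference_24m_e2")
state_dict = {name: tensor.clone().contiguous()
              for name, tensor in model.state_dict().items()}
save_file(state_dict, "/tmp/reward_cache/reward.tensors")
```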
meta-llama-meta-llama-3-5386-v5-mkmlizer: creating bucket guanaco-reward-models
meta-llama-meta-llama-3-5386-v5-mkmlizer: ERROR: [Errno -3] Temporary failure in name resolution
meta-llama-meta-llama-3-5386-v5-mkmlizer: ERROR: Connection Error: Error resolving a server hostname.
meta-llama-meta-llama-3-5386-v5-mkmlizer: Please check the servers address specified in 'host_base', 'host_bucket', 'cloudfront_host', 'website_endpoint'
Job meta-llama-meta-llama-3-5386-v5-mkmlizer completed after 185.96s with status: failed
Stopping job with name meta-llama-meta-llama-3-5386-v5-mkmlizer
%s, retrying in %s seconds...
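The literal "%s, retrying in %s seconds..." line above indicates a printf-style log template whose arguments were never interpolated into the rendered message. A minimal sketch of the kind of retry wrapper that would emit it; the function names, attempt count, and delay are assumptions, not the pipeline's real code:

```python
import logging
import time

log = logging.getLogger(__name__)

def run_with_retries(job_fn, max_attempts=3, delay_s=30):
    # Hypothetical wrapper: rerun a failed pipeline job a few times before
    # giving up.
    for attempt in range(1, max_attempts + 1):
        try:
            return job_fn()
        except Exception as err:
            if attempt == max_attempts:
                raise
            # Passing the values as logging arguments is what yields a literal
            # "%s, retrying in %s seconds..." message when the template is
            # printed without its arguments being interpolated.
            log.warning("%s, retrying in %s seconds...", err, delay_s)
            time.sleep(delay_s)
```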
Starting job with name meta-llama-meta-llama-3-5386-v5-mkmlizer
Waiting for job on meta-llama-meta-llama-3-5386-v5-mkmlizer to finish
meta-llama-meta-llama-3-5386-v5-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ _____ __ __ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ /___/ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ Version: 0.8.14 ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ https://mk1.ai ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ The license key for the current software has been verified as ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ belonging to: ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ Chai Research Corp. ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ║ ║
meta-llama-meta-llama-3-5386-v5-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
meta-llama-meta-llama-3-5386-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
meta-llama-meta-llama-3-5386-v5-mkmlizer: warnings.warn(warning_message, FutureWarning)
meta-llama-meta-llama-3-5386-v5-mkmlizer: Downloaded to shared memory in 28.858s
meta-llama-meta-llama-3-5386-v5-mkmlizer: quantizing model to /dev/shm/model_cache
meta-llama-meta-llama-3-5386-v5-mkmlizer: Saving flywheel model at /dev/shm/model_cache
meta-llama-meta-llama-3-5386-v5-mkmlizer:
Loading 0: 0%| | 0/291 [00:00<?, ?it/s]
Loading 0: 7%|▋ | 21/291 [00:00<00:01, 191.72it/s]
Loading 0: 14%|█▍ | 41/291 [00:00<00:01, 176.57it/s]
Loading 0: 22%|██▏ | 63/291 [00:00<00:01, 192.65it/s]
Loading 0: 29%|██▊ | 83/291 [00:00<00:02, 86.57it/s]
Loading 0: 35%|███▌ | 102/291 [00:00<00:01, 104.67it/s]
Loading 0: 42%|████▏ | 121/291 [00:00<00:01, 121.72it/s]
Loading 0: 48%|████▊ | 140/291 [00:01<00:01, 136.54it/s]
Loading 0: 56%|█████▌ | 162/291 [00:01<00:00, 156.95it/s]
Loading 0: 62%|██████▏ | 181/291 [00:01<00:00, 163.96it/s]
Loading 0: 69%|██████▊ | 200/291 [00:01<00:00, 97.55it/s]
Loading 0: 75%|███████▌ | 219/291 [00:01<00:00, 114.01it/s]
Loading 0: 82%|████████▏ | 238/291 [00:01<00:00, 128.13it/s]
Loading 0: 88%|████████▊ | 257/291 [00:01<00:00, 141.06it/s]
Loading 0: 96%|█████████▌| 280/291 [00:02<00:00, 162.29it/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
meta-llama-meta-llama-3-5386-v5-mkmlizer: quantized model in 17.423s
meta-llama-meta-llama-3-5386-v5-mkmlizer: Processed model meta-llama/Meta-Llama-3-8B-Instruct in 48.397s
meta-llama-meta-llama-3-5386-v5-mkmlizer: creating bucket guanaco-mkml-models
meta-llama-meta-llama-3-5386-v5-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
meta-llama-meta-llama-3-5386-v5-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5/config.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5/special_tokens_map.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5/tokenizer_config.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5/tokenizer.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5/flywheel_model.0.safetensors
meta-llama-meta-llama-3-5386-v5-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
meta-llama-meta-llama-3-5386-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-3-5386-v5-mkmlizer: warnings.warn(
meta-llama-meta-llama-3-5386-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-3-5386-v5-mkmlizer: warnings.warn(
meta-llama-meta-llama-3-5386-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
meta-llama-meta-llama-3-5386-v5-mkmlizer: return self.fget.__get__(instance, owner)()
meta-llama-meta-llama-3-5386-v5-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
meta-llama-meta-llama-3-5386-v5-mkmlizer: Saving duration: 0.230s
meta-llama-meta-llama-3-5386-v5-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 6.444s
meta-llama-meta-llama-3-5386-v5-mkmlizer: creating bucket guanaco-reward-models
meta-llama-meta-llama-3-5386-v5-mkmlizer: Bucket 's3://guanaco-reward-models/' created
meta-llama-meta-llama-3-5386-v5-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/meta-llama-meta-llama-3-5386-v5_reward
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/meta-llama-meta-llama-3-5386-v5_reward/config.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/meta-llama-meta-llama-3-5386-v5_reward/special_tokens_map.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/meta-llama-meta-llama-3-5386-v5_reward/tokenizer_config.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/meta-llama-meta-llama-3-5386-v5_reward/merges.txt
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/meta-llama-meta-llama-3-5386-v5_reward/vocab.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/meta-llama-meta-llama-3-5386-v5_reward/tokenizer.json
meta-llama-meta-llama-3-5386-v5-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/meta-llama-meta-llama-3-5386-v5_reward/reward.tensors
Job meta-llama-meta-llama-3-5386-v5-mkmlizer completed after 84.71s with status: succeeded
Stopping job with name meta-llama-meta-llama-3-5386-v5-mkmlizer
Pipeline stage MKMLizer completed in 275.17s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
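The MKMLKubeTemplater stage finishes in a fraction of a second, which is consistent with simply rendering a Kubernetes manifest from a template. A purely illustrative sketch of such templating; the manifest fields, container image, and template variables are all assumptions, since the real template is not shown in the log:

```python
from string import Template

# Hypothetical InferenceService manifest template; every field is assumed.
ISVC_TEMPLATE = Template("""\
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: $name
spec:
  predictor:
    containers:
      - name: kserve-container
        image: $image
        env:
          - name: MODEL_S3_URI
            value: $model_uri
""")

manifest = ISVC_TEMPLATE.substitute(
    name="meta-llama-meta-llama-3-5386-v5",
    image="example.registry/mkml-server:latest",  # placeholder image
    model_uri="s3://guanaco-mkml-models/meta-llama-meta-llama-3-5386-v5",
)
print(manifest)
```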
Running pipeline stage ISVCDeployer
Creating inference service meta-llama-meta-llama-3-5386-v5
Waiting for inference service meta-llama-meta-llama-3-5386-v5 to be ready
Inference service meta-llama-meta-llama-3-5386-v5 ready after 40.26767611503601s
Pipeline stage ISVCDeployer completed in 47.81s
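Waiting for the inference service to become ready, as in the ISVCDeployer stage above, can be done by polling the InferenceService's Ready condition. A sketch using the Kubernetes Python client, assuming a KServe-style custom resource (the group/version, namespace, and timeouts are assumptions):

```python
import time
from kubernetes import client, config

def wait_for_isvc_ready(name, namespace="default", timeout_s=600, poll_s=5):
    # Hypothetical readiness poll; the pipeline's real deployer is not shown
    # in the log, so resource coordinates and namespace are assumptions.
    config.load_kube_config()
    api = client.CustomObjectsApi()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        isvc = api.get_namespaced_custom_object(
            group="serving.kserve.io", version="v1beta1",
            namespace=namespace, plural="inferenceservices", name=name)
        conditions = isvc.get("status", {}).get("conditions", [])
        if any(c.get("type") == "Ready" and c.get("status") == "True"
               for c in conditions):
            return True
        time.sleep(poll_s)
    raise TimeoutError(f"{name} not ready after {timeout_s}s")

# wait_for_isvc_ready("meta-llama-meta-llama-3-5386-v5")
```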
Running pipeline stage StressChecker
%s, retrying in %s seconds...
%s, retrying in %s seconds...
Received healthy response to inference request in 2.2302193641662598s
Received healthy response to inference request in 1.3290762901306152s
Received healthy response to inference request in 1.2812440395355225s
Received healthy response to inference request in 1.2489566802978516s
Received healthy response to inference request in 1.6204454898834229s
5 requests
0 failed requests
5th percentile: 1.2554141521453857
10th percentile: 1.26187162399292
20th percentile: 1.2747865676879884
30th percentile: 1.290810489654541
40th percentile: 1.309943389892578
50th percentile: 1.3290762901306152
60th percentile: 1.4456239700317384
70th percentile: 1.5621716499328613
80th percentile: 1.7424002647399903
90th percentile: 1.986309814453125
95th percentile: 2.1082645893096923
99th percentile: 2.2058284091949463
mean time: 1.5419883728027344
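The StressChecker statistics above can be reproduced from the five response times using linearly interpolated percentiles; the following reproduces, for example, the 5th percentile of 1.2554... and the mean of 1.5419... exactly (numpy's default interpolation is assumed to match the checker's):

```python
import numpy as np

# Latencies of the five healthy stress-check responses above (seconds).
latencies = [2.2302193641662598, 1.3290762901306152, 1.2812440395355225,
             1.2489566802978516, 1.6204454898834229]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print("mean time:", np.mean(latencies))
```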
Pipeline stage StressChecker completed in 33.92s
Running pipeline stage DaemonicSafetyScorer
Pipeline stage DaemonicSafetyScorer completed in 0.05s
meta-llama-meta-llama-3-_5386_v5 status is now deployed due to DeploymentManager action
meta-llama-meta-llama-3-_5386_v5 status is now inactive due to auto deactivation (removal of underperforming models)
admin requested tearing down of meta-llama-meta-llama-3-_5386_v5
Running pipeline stage ISVCDeleter
Checking if service meta-llama-meta-llama-3-5386-v5 is running
Skipping teardown as no inference service was found
Pipeline stage ISVCDeleter completed in 3.57s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key meta-llama-meta-llama-3-5386-v5/config.json from bucket guanaco-mkml-models
Deleting key meta-llama-meta-llama-3-5386-v5/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key meta-llama-meta-llama-3-5386-v5/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key meta-llama-meta-llama-3-5386-v5/tokenizer.json from bucket guanaco-mkml-models
Deleting key meta-llama-meta-llama-3-5386-v5/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key meta-llama-meta-llama-3-5386-v5_reward/config.json from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-3-5386-v5_reward/merges.txt from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-3-5386-v5_reward/reward.tensors from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-3-5386-v5_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-3-5386-v5_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-3-5386-v5_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key meta-llama-meta-llama-3-5386-v5_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 5.70s
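The MKMLModelDeleter stage above removes each uploaded key from both buckets. A rough boto3 sketch of that cleanup, using the bucket names and key prefixes from the log (listing via pagination and the client setup are assumptions):

```python
import boto3

def delete_model_keys(bucket, prefix):
    # Hypothetical cleanup mirroring the MKMLModelDeleter stage: list every
    # object under the model's prefix and delete it.
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            print(f"Deleting key {obj['Key']} from bucket {bucket}")
            s3.delete_object(Bucket=bucket, Key=obj["Key"])

# delete_model_keys("guanaco-mkml-models", "meta-llama-meta-llama-3-5386-v5/")
# delete_model_keys("guanaco-reward-models", "meta-llama-meta-llama-3-5386-v5_reward/")
```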
meta-llama-meta-llama-3-_5386_v5 status is now torndown due to DeploymentManager action