Running pipeline stage MKMLizer
Starting job with name nousresearch-meta-llama-4941-v44-mkmlizer
Waiting for job on nousresearch-meta-llama-4941-v44-mkmlizer to finish
nousresearch-meta-llama-4941-v44-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
nousresearch-meta-llama-4941-v44-mkmlizer: ║ [flywheel ASCII-art logo] ║
nousresearch-meta-llama-4941-v44-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v44-mkmlizer: ║ Version: 0.8.10 ║
nousresearch-meta-llama-4941-v44-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
nousresearch-meta-llama-4941-v44-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v44-mkmlizer: ║ The license key for the current software has been verified as ║
nousresearch-meta-llama-4941-v44-mkmlizer: ║ belonging to: ║
nousresearch-meta-llama-4941-v44-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v44-mkmlizer: ║ Chai Research Corp. ║
nousresearch-meta-llama-4941-v44-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
nousresearch-meta-llama-4941-v44-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
nousresearch-meta-llama-4941-v44-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v44-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
nousresearch-meta-llama-4941-v44-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
nousresearch-meta-llama-4941-v44-mkmlizer: warnings.warn(warning_message, FutureWarning)
nousresearch-meta-llama-4941-v44-mkmlizer: Downloaded to shared memory in 15.980s
nousresearch-meta-llama-4941-v44-mkmlizer: quantizing model to /dev/shm/model_cache
nousresearch-meta-llama-4941-v44-mkmlizer: Saving flywheel model at /dev/shm/model_cache
nousresearch-meta-llama-4941-v44-mkmlizer:
Loading 0: 0%| | 0/291 [00:00<?, ?it/s]
Loading 0: 23%|██▎ | 66/291 [00:01<00:03, 65.85it/s]
Loading 0: 36%|███▌ | 104/291 [00:02<00:03, 49.05it/s]
Loading 0: 64%|██████▍ | 187/291 [00:03<00:01, 60.57it/s]
Loading 0: 99%|█████████▊| 287/291 [00:09<00:00, 23.68it/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
nousresearch-meta-llama-4941-v44-mkmlizer: quantized model in 22.098s
nousresearch-meta-llama-4941-v44-mkmlizer: Processed model NousResearch/Meta-Llama-3-8B-Instruct in 39.836s
nousresearch-meta-llama-4941-v44-mkmlizer: creating bucket guanaco-mkml-models
nousresearch-meta-llama-4941-v44-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
nousresearch-meta-llama-4941-v44-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v44
nousresearch-meta-llama-4941-v44-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v44/config.json
nousresearch-meta-llama-4941-v44-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v44/tokenizer_config.json
nousresearch-meta-llama-4941-v44-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v44/special_tokens_map.json
nousresearch-meta-llama-4941-v44-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v44/tokenizer.json
nousresearch-meta-llama-4941-v44-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v44/flywheel_model.0.safetensors
nousresearch-meta-llama-4941-v44-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
nousresearch-meta-llama-4941-v44-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
nousresearch-meta-llama-4941-v44-mkmlizer: warnings.warn(
nousresearch-meta-llama-4941-v44-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
nousresearch-meta-llama-4941-v44-mkmlizer: warnings.warn(
nousresearch-meta-llama-4941-v44-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
nousresearch-meta-llama-4941-v44-mkmlizer: warnings.warn(
nousresearch-meta-llama-4941-v44-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
nousresearch-meta-llama-4941-v44-mkmlizer: return self.fget.__get__(instance, owner)()
nousresearch-meta-llama-4941-v44-mkmlizer: Bucket 's3://guanaco-reward-models/' created
nousresearch-meta-llama-4941-v44-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/nousresearch-meta-llama-4941-v44_reward
nousresearch-meta-llama-4941-v44-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v44_reward/config.json
nousresearch-meta-llama-4941-v44-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v44_reward/tokenizer_config.json
nousresearch-meta-llama-4941-v44-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v44_reward/vocab.json
nousresearch-meta-llama-4941-v44-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/nousresearch-meta-llama-4941-v44_reward/merges.txt
nousresearch-meta-llama-4941-v44-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v44_reward/special_tokens_map.json
nousresearch-meta-llama-4941-v44-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v44_reward/tokenizer.json
nousresearch-meta-llama-4941-v44-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/nousresearch-meta-llama-4941-v44_reward/reward.tensors
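The `cp` lines above copy each file in the local cache directory to a matching key under the model's S3 prefix. A minimal sketch of that one-to-one mapping, using the paths from the log (the helper function itself is hypothetical, not the tool's actual implementation):

```python
import os

def s3_destinations(cache_dir, bucket, prefix, filenames):
    """Map local cache files to S3 URIs, mirroring the logged
    `cp <cache>/<name> s3://<bucket>/<prefix>/<name>` lines.
    Hypothetical helper -- the real tool's internals are not shown."""
    return [
        (os.path.join(cache_dir, name), f"s3://{bucket}/{prefix}/{name}")
        for name in filenames
    ]

# Reward-model cache files from the log above
pairs = s3_destinations(
    "/tmp/reward_cache",
    "guanaco-reward-models",
    "nousresearch-meta-llama-4941-v44_reward",
    ["config.json", "tokenizer.json", "reward.tensors"],
)
```

The actual transfer would typically go through an S3 client (e.g. boto3's `upload_file`) or the AWS CLI; only the source/destination pairing is reconstructed here.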
Job nousresearch-meta-llama-4941-v44-mkmlizer completed after 64.57s with status: succeeded
Stopping job with name nousresearch-meta-llama-4941-v44-mkmlizer
Pipeline stage MKMLizer completed in 69.95s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service nousresearch-meta-llama-4941-v44
Waiting for inference service nousresearch-meta-llama-4941-v44 to be ready
Inference service nousresearch-meta-llama-4941-v44 ready after 30.213018655776978s
Pipeline stage ISVCDeployer completed in 38.09s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.1857521533966064s
Received healthy response to inference request in 1.2769651412963867s
Received healthy response to inference request in 1.2714793682098389s
Received healthy response to inference request in 1.2984015941619873s
Received healthy response to inference request in 1.283719778060913s
5 requests
0 failed requests
5th percentile: 1.2725765228271484
10th percentile: 1.273673677444458
20th percentile: 1.2758679866790772
30th percentile: 1.278316068649292
40th percentile: 1.2810179233551025
50th percentile: 1.283719778060913
60th percentile: 1.2895925045013428
70th percentile: 1.2954652309417725
80th percentile: 1.4758717060089113
90th percentile: 1.830811929702759
95th percentile: 2.0082820415496823
99th percentile: 2.1502581310272215
mean time: 1.4632636070251466
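The StressChecker summary above can be reproduced from the five logged latencies, assuming linear interpolation between order statistics (the convention numpy's default percentile method uses); the checker's actual formula is not shown in the log, so this is an inference from the numbers matching:

```python
# Latency samples copied verbatim from the five healthy responses above.
latencies = [
    2.1857521533966064,
    1.2769651412963867,
    1.2714793682098389,
    1.2984015941619873,
    1.283719778060913,
]

def percentile(values, p):
    """Linearly interpolated p-th percentile of `values`."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (k - lo) * (s[hi] - s[lo])

mean_time = sum(latencies) / len(latencies)  # ~1.4633s, as logged
p50 = percentile(latencies, 50)              # the median, ~1.2837s
```

With only five samples, every reported percentile below the 80th sits between the four fastest responses; the single ~2.19s outlier dominates the 90th/95th/99th percentiles and pulls the mean well above the median.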
Pipeline stage StressChecker completed in 8.00s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.03s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.04s
M-Eval Dataset for topic stay_in_character is loaded
nousresearch-meta-llama_4941_v44 status is now deployed due to DeploymentManager action
nousresearch-meta-llama_4941_v44 status is now inactive due to auto deactivation of underperforming models
admin requested tearing down of nousresearch-meta-llama_4941_v44
Running pipeline stage ISVCDeleter
Checking if service nousresearch-meta-llama-4941-v44 is running
Tearing down inference service nousresearch-meta-llama-4941-v44
Tore down service nousresearch-meta-llama-4941-v44
Pipeline stage ISVCDeleter completed in 3.23s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key nousresearch-meta-llama-4941-v44/config.json from bucket guanaco-mkml-models
Deleting key nousresearch-meta-llama-4941-v44/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key nousresearch-meta-llama-4941-v44/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key nousresearch-meta-llama-4941-v44/tokenizer.json from bucket guanaco-mkml-models
Deleting key nousresearch-meta-llama-4941-v44/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key nousresearch-meta-llama-4941-v44_reward/config.json from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v44_reward/merges.txt from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v44_reward/reward.tensors from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v44_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v44_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v44_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v44_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 1.72s
nousresearch-meta-llama_4941_v44 status is now torndown due to DeploymentManager action