Running pipeline stage MKMLizer
Starting job with name vicgalle-roleplay-llama-3-8b-v19-mkmlizer
Waiting for job on vicgalle-roleplay-llama-3-8b-v19-mkmlizer to finish
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ _____ __ __ ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ /___/ ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ Version: 0.8.10 ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ The license key for the current software has been verified as ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ belonging to: ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ Chai Research Corp. ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ║ ║
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: warnings.warn(warning_message, FutureWarning)
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: Downloaded to shared memory in 45.733s
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: quantizing model to /dev/shm/model_cache
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: Saving flywheel model at /dev/shm/model_cache
vicgalle-roleplay-llama-3-8b-v19-mkmlizer:
Loading 0: 0%| | 0/291 [00:00<?, ?it/s]
Loading 0: 45%|████▌ | 131/291 [00:01<00:01, 130.65it/s]
Loading 0: 95%|█████████▍| 275/291 [00:02<00:00, 137.88it/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: quantized model in 17.401s
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: Processed model vicgalle/Roleplay-Llama-3-8B in 64.194s
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: creating bucket guanaco-mkml-models
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/vicgalle-roleplay-llama-3-8b-v19
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/vicgalle-roleplay-llama-3-8b-v19/config.json
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/vicgalle-roleplay-llama-3-8b-v19/special_tokens_map.json
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/vicgalle-roleplay-llama-3-8b-v19/tokenizer_config.json
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/vicgalle-roleplay-llama-3-8b-v19/tokenizer.json
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/vicgalle-roleplay-llama-3-8b-v19/flywheel_model.0.safetensors
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: warnings.warn(
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: warnings.warn(
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: creating bucket guanaco-reward-models
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: Bucket 's3://guanaco-reward-models/' created
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/vicgalle-roleplay-llama-3-8b-v19_reward
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/vicgalle-roleplay-llama-3-8b-v19_reward/config.json
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/vicgalle-roleplay-llama-3-8b-v19_reward/special_tokens_map.json
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/vicgalle-roleplay-llama-3-8b-v19_reward/tokenizer_config.json
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/vicgalle-roleplay-llama-3-8b-v19_reward/merges.txt
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/vicgalle-roleplay-llama-3-8b-v19_reward/vocab.json
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/vicgalle-roleplay-llama-3-8b-v19_reward/tokenizer.json
vicgalle-roleplay-llama-3-8b-v19-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/vicgalle-roleplay-llama-3-8b-v19_reward/reward.tensors
Job vicgalle-roleplay-llama-3-8b-v19-mkmlizer completed after 101.2s with status: succeeded
Stopping job with name vicgalle-roleplay-llama-3-8b-v19-mkmlizer
Pipeline stage MKMLizer completed in 102.73s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.44s
Running pipeline stage ISVCDeployer
Creating inference service vicgalle-roleplay-llama-3-8b-v19
Waiting for inference service vicgalle-roleplay-llama-3-8b-v19 to be ready
Inference service vicgalle-roleplay-llama-3-8b-v19 ready after 30.570614099502563s
Pipeline stage ISVCDeployer completed in 32.32s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.6361889839172363s
Received healthy response to inference request in 2.7188589572906494s
Received healthy response to inference request in 2.5649349689483643s
Received healthy response to inference request in 1.6319670677185059s
Received healthy response to inference request in 1.6466879844665527s
5 requests
0 failed requests
5th percentile: 1.6349112510681152
10th percentile: 1.6378554344177245
20th percentile: 1.6437438011169434
30th percentile: 1.830337381362915
40th percentile: 2.19763617515564
50th percentile: 2.5649349689483643
60th percentile: 2.593436574935913
70th percentile: 2.621938180923462
80th percentile: 2.652722978591919
90th percentile: 2.6857909679412844
95th percentile: 2.702324962615967
99th percentile: 2.7155521583557127
mean time: 2.239727592468262
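The percentile figures above are consistent with linear interpolation over the five sampled latencies (the default method of NumPy's `np.percentile`). The sketch below reproduces them from the logged per-request times; the variable names are illustrative and not taken from the pipeline's code:

```python
import numpy as np

# The five per-request latencies logged above, in seconds.
latencies = [
    2.6361889839172363,
    2.7188589572906494,
    2.5649349689483643,
    1.6319670677185059,
    1.6466879844665527,
]

# Linear-interpolation percentiles (NumPy's default) match the log's figures.
p5 = np.percentile(latencies, 5)    # ~1.6349112510681152
p50 = np.percentile(latencies, 50)  # 2.5649349689483643 (the middle sample)
p99 = np.percentile(latencies, 99)  # ~2.7155521583557127
mean = np.mean(latencies)           # ~2.239727592468262
```

With only five samples, every reported percentile is an interpolation between at most two observations, so the tail percentiles (p90–p99) are dominated by the single slowest request.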
Pipeline stage StressChecker completed in 15.75s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.16s
Running M-Eval for topic stay_in_character
Running pipeline stage DaemonicSafetyScorer
M-Eval Dataset for topic stay_in_character is loaded
Pipeline stage DaemonicSafetyScorer completed in 0.38s
%s, retrying in %s seconds...
vicgalle-roleplay-llama-3-8b_v19 status is now deployed due to DeploymentManager action
vicgalle-roleplay-llama-3-8b_v19 status is now rejected due to a failure to get M-Eval score. Please try again in five minutes.
admin requested tearing down of vicgalle-roleplay-llama-3-8b_v19
Running pipeline stage ISVCDeleter
Checking if service vicgalle-roleplay-llama-3-8b-v19 is running
Skipping teardown as no inference service was found
Pipeline stage ISVCDeleter completed in 5.89s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key vicgalle-roleplay-llama-3-8b-v19/config.json from bucket guanaco-mkml-models
Deleting key vicgalle-roleplay-llama-3-8b-v19/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key vicgalle-roleplay-llama-3-8b-v19/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key vicgalle-roleplay-llama-3-8b-v19/tokenizer.json from bucket guanaco-mkml-models
Deleting key vicgalle-roleplay-llama-3-8b-v19/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key vicgalle-roleplay-llama-3-8b-v19_reward/config.json from bucket guanaco-reward-models
Deleting key vicgalle-roleplay-llama-3-8b-v19_reward/merges.txt from bucket guanaco-reward-models
Deleting key vicgalle-roleplay-llama-3-8b-v19_reward/reward.tensors from bucket guanaco-reward-models
Deleting key vicgalle-roleplay-llama-3-8b-v19_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key vicgalle-roleplay-llama-3-8b-v19_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key vicgalle-roleplay-llama-3-8b-v19_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key vicgalle-roleplay-llama-3-8b-v19_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 1.89s
vicgalle-roleplay-llama-3-8b_v19 status is now torndown due to DeploymentManager action