submission_id: mlabonne-neuralmarcoro14-7b_v50
developer_uid: end_to_end_test
best_of: 4
celo_rating: 1104.89
display_name: mlabonne-neuralmarcoro14-7b_v50
family_friendly_score: 0.0
formatter: {'memory_template': 'character: {bot_name} {memory}\n', 'prompt_template': '{prompt}', 'bot_template': '{bot_name}: {message}', 'user_template': '{user_name}: {message}', 'response_template': '{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.0, 'top_p': 0.99, 'min_p': 0.1, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 4, 'max_output_tokens': 64}
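The formatter and generation_params fields above define how a conversation is templated into a single prompt and how sampling is configured. Below is a minimal sketch of how those templates could be applied; the render_prompt helper, the newline joining, and the example conversation are illustrative assumptions, not the actual serving code.

```python
# Sketch of applying the formatter templates above to build a prompt.
# render_prompt, the join logic, and the example conversation are
# assumptions for illustration only.

formatter = {
    "memory_template": "character: {bot_name} {memory}\n",
    "prompt_template": "{prompt}",
    "bot_template": "{bot_name}: {message}",
    "user_template": "{user_name}: {message}",
    "response_template": "{bot_name}:",
}

def render_prompt(bot_name, memory, prompt, turns, user_name="User"):
    """Assemble the final prompt string from the templates above."""
    parts = [formatter["memory_template"].format(bot_name=bot_name, memory=memory)]
    parts.append(formatter["prompt_template"].format(prompt=prompt))
    for speaker, message in turns:
        if speaker == "bot":
            parts.append(formatter["bot_template"].format(bot_name=bot_name, message=message))
        else:
            parts.append(formatter["user_template"].format(user_name=user_name, message=message))
    # response_template cues the model to continue as the bot.
    parts.append(formatter["response_template"].format(bot_name=bot_name))
    return "\n".join(parts)

print(render_prompt(
    bot_name="Nera",
    memory="a curious explorer",
    prompt="You are chatting with Nera.",
    turns=[("user", "Hi!"), ("bot", "Hello there."), ("user", "Where are you headed?")],
))
```

The generation_params then govern decoding: temperature 1.0 with top_p 0.99, min_p 0.1, and top_k 40, stopping at the first newline, with input truncated to 512 tokens, responses capped at 64 tokens, and 4 candidates generated per request (best_of).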
ineligible_reason: model is only for e2e test
is_internal_developer: True
language_model: mlabonne/NeuralMarcoro14-7B
max_input_tokens: 512
max_output_tokens: 64
model_architecture: MistralForCausalLM
model_eval_status: pending
model_group: mlabonne/NeuralMarcoro14
model_name: mlabonne-neuralmarcoro14-7b_v50
model_num_parameters: 7241732096.0
model_repo: mlabonne/NeuralMarcoro14-7B
model_size: 7B
num_battles: 15621
num_wins: 6389
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}', 'memory_template': 'character: {bot_name} {memory}\n', 'prompt_template': '{prompt}', 'response_template': '{bot_name}:', 'truncate_by_message': False, 'user_template': '{user_name}: {message}'}
reward_repo: ChaiML/reward_models_100_170000000_cp_498032
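The best_of value, together with reward_repo and reward_formatter, points at a best-of-N setup: several candidate replies are generated per request and a separate reward model selects the one to return. A rough sketch of that idea, assuming the reward repo loads as a Hugging Face sequence-classification model; pick_best and the scoring convention are assumptions, not the production scorer.

```python
# Best-of-N reranking sketch; the real scorer and scoring convention may differ.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

REWARD_REPO = "ChaiML/reward_models_100_170000000_cp_498032"

tokenizer = AutoTokenizer.from_pretrained(REWARD_REPO)
reward_model = AutoModelForSequenceClassification.from_pretrained(REWARD_REPO).eval()

def pick_best(prompt: str, candidates: list[str]) -> str:
    """Score each candidate continuation and return the highest-scoring one."""
    scores = []
    for candidate in candidates:
        inputs = tokenizer(prompt + candidate, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = reward_model(**inputs).logits
        # Use the last label's logit as a scalar score (assumption about the head).
        scores.append(logits[0, -1].item())
    return candidates[scores.index(max(scores))]
```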
status: torndown
submission_type: basic
timestamp: 2024-05-14T21:23:05+00:00
us_pacific_date: 2024-05-14
win_ratio: 0.4090007041802701
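win_ratio is consistent with num_wins divided by num_battles:

```python
num_battles = 15621
num_wins = 6389
print(num_wins / num_battles)  # ~0.4090007, matching the reported win_ratio
```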
Resubmit model
Running pipeline stage MKMLizer
Starting job with name mlabonne-neuralmarcoro14-7b-v50-mkmlizer
Waiting for job on mlabonne-neuralmarcoro14-7b-v50-mkmlizer to finish
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ _____ __ __ ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ /___/ ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ Version: 0.8.14 ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ https://mk1.ai ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ The license key for the current software has been verified as ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ belonging to: ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ Chai Research Corp. ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ║ ║
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: warnings.warn(warning_message, FutureWarning)
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: quantized model in 9.924s
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: Processed model mlabonne/NeuralMarcoro14-7B in 22.515s
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: creating bucket guanaco-mkml-models
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/mlabonne-neuralmarcoro14-7b-v50/flywheel_model.0.safetensors
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: loading reward model from ChaiML/reward_models_100_170000000_cp_498032
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: Loading 0: 98%|█████████▊| 285/291 [00:03<00:00, 97.04it/s]
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: warnings.warn(
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: warnings.warn(
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: warnings.warn(
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: return self.fget.__get__(instance, owner)()
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: Saving duration: 0.085s
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: Processed model ChaiML/reward_models_100_170000000_cp_498032 in 2.265s
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: creating bucket guanaco-reward-models
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: Bucket 's3://guanaco-reward-models/' created
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/mlabonne-neuralmarcoro14-7b-v50_reward
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/mlabonne-neuralmarcoro14-7b-v50_reward/special_tokens_map.json
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/mlabonne-neuralmarcoro14-7b-v50_reward/tokenizer_config.json
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/mlabonne-neuralmarcoro14-7b-v50_reward/merges.txt
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/mlabonne-neuralmarcoro14-7b-v50_reward/vocab.json
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/mlabonne-neuralmarcoro14-7b-v50_reward/config.json
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/mlabonne-neuralmarcoro14-7b-v50_reward/tokenizer.json
mlabonne-neuralmarcoro14-7b-v50-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/mlabonne-neuralmarcoro14-7b-v50_reward/reward.tensors
Job mlabonne-neuralmarcoro14-7b-v50-mkmlizer completed after 123.57s with status: succeeded
Stopping job with name mlabonne-neuralmarcoro14-7b-v50-mkmlizer
Pipeline stage MKMLizer completed in 125.75s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.54s
Running pipeline stage ISVCDeployer
Creating inference service mlabonne-neuralmarcoro14-7b-v50
Waiting for inference service mlabonne-neuralmarcoro14-7b-v50 to be ready
Inference service mlabonne-neuralmarcoro14-7b-v50 ready after 152.24059987068176s
Pipeline stage ISVCDeployer completed in 159.21s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.288684844970703s
Received healthy response to inference request in 1.2114880084991455s
Received healthy response to inference request in 1.0616018772125244s
Received healthy response to inference request in 1.1956522464752197s
Received healthy response to inference request in 1.2698681354522705s
5 requests
0 failed requests
5th percentile: 1.0884119510650634
10th percentile: 1.1152220249176026
20th percentile: 1.1688421726226808
30th percentile: 1.1988193988800049
40th percentile: 1.2051537036895752
50th percentile: 1.2114880084991455
60th percentile: 1.2348400592803954
70th percentile: 1.2581921100616456
80th percentile: 1.4736314773559571
90th percentile: 1.8811581611633301
95th percentile: 2.0849215030670165
99th percentile: 2.247932176589966
mean time: 1.4054590225219727
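The percentile figures above can be reproduced from the five response times with standard linear-interpolation percentiles, for example via numpy (how the pipeline itself computes them is not stated in the log):

```python
import numpy as np

# The five healthy response times (seconds) reported above.
times = [
    2.288684844970703,
    1.2114880084991455,
    1.0616018772125244,
    1.1956522464752197,
    1.2698681354522705,
]

for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(times, p)}")
print("mean time:", np.mean(times))
```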
Pipeline stage StressChecker completed in 10.44s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.15s
Running M-Eval for topic stay_in_character
Running pipeline stage DaemonicSafetyScorer
M-Eval Dataset for topic stay_in_character is loaded
Pipeline stage DaemonicSafetyScorer completed in 0.32s
%s, retrying in %s seconds...
mlabonne-neuralmarcoro14-7b_v50 status is now deployed due to DeploymentManager action
%s, retrying in %s seconds...
Scoring model output for bot %s
Scoring model output for bot %s
Scoring model output for bot %s
Scoring model output for bot %s
Scoring model output for bot %s
mlabonne-neuralmarcoro14-7b_v50 status is now inactive due to auto deactivation, which removes underperforming models
admin requested teardown of mlabonne-neuralmarcoro14-7b_v50
Running pipeline stage ISVCDeleter
Checking if service mlabonne-neuralmarcoro14-7b-v50 is running
Tearing down inference service mlabonne-neuralmarcoro14-7b-v50
Tore down service mlabonne-neuralmarcoro14-7b-v50
Pipeline stage ISVCDeleter completed in 3.93s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key mlabonne-neuralmarcoro14-7b-v50/config.json from bucket guanaco-mkml-models
Deleting key mlabonne-neuralmarcoro14-7b-v50/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key mlabonne-neuralmarcoro14-7b-v50/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key mlabonne-neuralmarcoro14-7b-v50/tokenizer.json from bucket guanaco-mkml-models
Deleting key mlabonne-neuralmarcoro14-7b-v50/tokenizer.model from bucket guanaco-mkml-models
Deleting key mlabonne-neuralmarcoro14-7b-v50/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key mlabonne-neuralmarcoro14-7b-v50_reward/config.json from bucket guanaco-reward-models
Deleting key mlabonne-neuralmarcoro14-7b-v50_reward/merges.txt from bucket guanaco-reward-models
Deleting key mlabonne-neuralmarcoro14-7b-v50_reward/reward.tensors from bucket guanaco-reward-models
Deleting key mlabonne-neuralmarcoro14-7b-v50_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key mlabonne-neuralmarcoro14-7b-v50_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key mlabonne-neuralmarcoro14-7b-v50_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key mlabonne-neuralmarcoro14-7b-v50_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 2.14s
mlabonne-neuralmarcoro14-7b_v50 status is now torndown due to DeploymentManager action