submission_id: mistralai-mixtral-8x7b-_3473_v36
developer_uid: chai_backend_admin
status: inactive
model_repo: mistralai/Mixtral-8x7B-Instruct-v0.1
reward_repo: rirv938/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 0.9, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 50, 'presence_penalty': 0.5, 'frequency_penalty': 0.5, 'stopping_words': ['\n', '</s>', '<|user|>', '###'], 'max_input_tokens': 512, 'best_of': 4, 'max_output_tokens': 96}
formatter: {'memory_template': '<s>[INST] This is an entertaining conversation. You are {bot_name} who has the persona: {memory}.\nEngage in a chat with {user_name} while staying in character. Try to flirt with {user_name}. Engage in *roleplay* actions. Describe the scene dramatically. \n', 'prompt_template': '{prompt}\n', 'bot_template': '{bot_name}: {message}</s>', 'user_template': '[INST] {user_name}: {message} [/INST]', 'response_template': '{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': 'Memory: {memory}\n', 'prompt_template': '{prompt}\n', 'bot_template': 'Bot: {message}\n', 'user_template': 'User: {message}\n', 'response_template': 'Bot:', 'truncate_by_message': False}
timestamp: 2024-05-06T02:10:33+00:00
model_name: mistralai-mixtral-8x7b-_3473_v36
double_thumbs_up: 321
thumbs_up: 552
thumbs_down: 241
num_battles: 20766
num_wins: 10060
celo_rating: 1175.85
entertaining: None
stay_in_character: None
user_preference: None
safety_score: None
submission_type: basic
model_architecture: MixtralForCausalLM
model_num_parameters: 46702792704.0
best_of: 4
max_input_tokens: 512
max_output_tokens: 96
display_name: mistralai-mixtral-8x7b-_3473_v36
double_thumbs_up_ratio: 0.2881508078994614
feedback_count: 1114
ineligible_reason: max_output_tokens!=64
language_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model_score: None
model_size: 47B
reward_model: rirv938/reward_gpt2_medium_preference_24m_e2
single_thumbs_up_ratio: 0.4955116696588869
thumbs_down_ratio: 0.2163375224416517
thumbs_up_ratio: 0.7836624775583483
us_pacific_date: 2024-05-05
win_ratio: 0.48444572859481844
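The generation_params above fully specify the sampler. As a minimal sketch, they map onto vLLM's SamplingParams as follows; vLLM itself is an assumption, since the log never names the serving engine, but every value is copied verbatim from the record:

```python
# Sketch: the submission's generation_params expressed as vLLM sampling
# settings. vLLM is an assumption -- the log never names the serving
# engine -- but every value below comes straight from generation_params.
from vllm import SamplingParams

sampling_params = SamplingParams(
    n=1,                     # one reply returned to the user
    best_of=4,               # four candidates drawn, reranked downstream
    temperature=0.9,
    top_p=1.0,
    min_p=0.0,
    top_k=50,
    presence_penalty=0.5,
    frequency_penalty=0.5,
    stop=["\n", "</s>", "<|user|>", "###"],   # stopping_words
    max_tokens=96,           # max_output_tokens (note ineligible_reason above)
)
# max_input_tokens=512 is not a sampler setting; it caps the rendered prompt.
# truncate_by_message: False suggests truncation by tokens, not whole messages.
```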
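The formatter shows how a conversation is serialized into a Mixtral-style [INST] prompt. Below is a minimal sketch of how these templates plausibly compose; the template strings are copied verbatim from the record, while build_prompt and the assembly order are illustrative assumptions:

```python
# Sketch: composing the record's formatter templates into one prompt string.
# Templates are verbatim from `formatter`; the helper is an assumption.
MEMORY_TEMPLATE = (
    "<s>[INST] This is an entertaining conversation. You are {bot_name} "
    "who has the persona: {memory}.\n"
    "Engage in a chat with {user_name} while staying in character. "
    "Try to flirt with {user_name}. Engage in *roleplay* actions. "
    "Describe the scene dramatically. \n"
)
PROMPT_TEMPLATE = "{prompt}\n"
USER_TEMPLATE = "[INST] {user_name}: {message} [/INST]"
BOT_TEMPLATE = "{bot_name}: {message}</s>"
RESPONSE_TEMPLATE = "{bot_name}:"


def build_prompt(bot_name, user_name, memory, prompt, turns):
    """turns: list of ('user' | 'bot', message) pairs, oldest first."""
    parts = [
        MEMORY_TEMPLATE.format(bot_name=bot_name, user_name=user_name, memory=memory),
        PROMPT_TEMPLATE.format(prompt=prompt),
    ]
    for speaker, message in turns:
        if speaker == "user":
            parts.append(USER_TEMPLATE.format(user_name=user_name, message=message))
        else:
            parts.append(BOT_TEMPLATE.format(bot_name=bot_name, message=message))
    parts.append(RESPONSE_TEMPLATE.format(bot_name=bot_name))  # generation stub
    return "".join(parts)
```

The reward_formatter serializes the same conversation in a plainer `Memory:`/`User:`/`Bot:` style for the reward model, as sketched next.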
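best_of: 4 together with a separate reward_repo implies best-of-N reranking: the base model drafts four candidates and the GPT-2-medium preference model scores each, with inputs rendered through the reward_formatter templates. A hedged sketch, assuming the reward model loads as a sequence classifier with a single preference logit (its actual head is not shown in this log):

```python
# Sketch of the best_of=4 rerank implied by reward_repo. Assumption: the
# reward model behaves as a single-logit sequence classifier; candidates
# would come from the base model. Templates are from reward_formatter.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

REWARD_REPO = "rirv938/reward_gpt2_medium_preference_24m_e2"
reward_tokenizer = AutoTokenizer.from_pretrained(REWARD_REPO)
reward_model = AutoModelForSequenceClassification.from_pretrained(REWARD_REPO).eval()


def render_reward_input(memory, prompt, turns, candidate):
    # Serialize with the reward_formatter templates from the record.
    text = f"Memory: {memory}\n{prompt}\n"
    for speaker, message in turns:
        text += f"User: {message}\n" if speaker == "user" else f"Bot: {message}\n"
    return text + f"Bot:{candidate}"


def pick_best(memory, prompt, turns, candidates):
    scores = []
    with torch.no_grad():
        for cand in candidates:
            inputs = reward_tokenizer(
                render_reward_input(memory, prompt, turns, cand),
                return_tensors="pt", truncation=True,
            )
            # [0, 0]: the first (assumed only) logit of the single batch item.
            scores.append(reward_model(**inputs).logits[0, 0].item())
    return candidates[scores.index(max(scores))]
```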
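The ratio fields are plain quotients of the raw counts: feedback_count is the sum of the three thumb counts, and thumbs_up_ratio counts both single and double thumbs up. A quick check:

```python
# Quick check that the reported ratios follow from the raw counts above.
double_thumbs_up, thumbs_up, thumbs_down = 321, 552, 241
feedback_count = double_thumbs_up + thumbs_up + thumbs_down
assert feedback_count == 1114

print(double_thumbs_up / feedback_count)                # 0.28815... double_thumbs_up_ratio
print(thumbs_up / feedback_count)                       # 0.49551... single_thumbs_up_ratio
print(thumbs_down / feedback_count)                     # 0.21634... thumbs_down_ratio
print((double_thumbs_up + thumbs_up) / feedback_count)  # 0.78366... thumbs_up_ratio
print(10060 / 20766)                                    # 0.48445... win_ratio = num_wins / num_battles
```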
Running pipeline stage MKMLizer
Starting job with name mistralai-mixtral-8x7b-3473-v36-mkmlizer
Waiting for job on mistralai-mixtral-8x7b-3473-v36-mkmlizer to finish
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ║ [flywheel ASCII-art wordmark]                                         ║
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ║                                                                       ║
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ║ Version: 0.8.10                                                       ║
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc.                               ║
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ║                                                                       ║
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ║ The license key for the current software has been verified as         ║
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ║ belonging to:                                                         ║
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ║                                                                       ║
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ║ Chai Research Corp.                                                   ║
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f                      ║
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ║ Expiration: 2024-07-15 23:59:59                                       ║
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ║                                                                       ║
mistralai-mixtral-8x7b-3473-v36-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
mistralai-mixtral-8x7b-3473-v36-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
mistralai-mixtral-8x7b-3473-v36-mkmlizer: warnings.warn(warning_message, FutureWarning)
mistralai-mixtral-8x7b-3473-v36-mkmlizer: Downloaded to shared memory in 131.364s
mistralai-mixtral-8x7b-3473-v36-mkmlizer: quantizing model to /dev/shm/model_cache
mistralai-mixtral-8x7b-3473-v36-mkmlizer: Saving flywheel model at /dev/shm/model_cache
mistralai-mixtral-8x7b-3473-v36-mkmlizer: quantized model in 50.049s
mistralai-mixtral-8x7b-3473-v36-mkmlizer: Processed model mistralai/Mixtral-8x7B-Instruct-v0.1 in 187.409s
mistralai-mixtral-8x7b-3473-v36-mkmlizer: creating bucket guanaco-mkml-models
mistralai-mixtral-8x7b-3473-v36-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
mistralai-mixtral-8x7b-3473-v36-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/mistralai-mixtral-8x7b-3473-v36
mistralai-mixtral-8x7b-3473-v36-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/mistralai-mixtral-8x7b-3473-v36/config.json
mistralai-mixtral-8x7b-3473-v36-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/mistralai-mixtral-8x7b-3473-v36/tokenizer_config.json
mistralai-mixtral-8x7b-3473-v36-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/mistralai-mixtral-8x7b-3473-v36/special_tokens_map.json
mistralai-mixtral-8x7b-3473-v36-mkmlizer: cp /dev/shm/model_cache/tokenizer.model s3://guanaco-mkml-models/mistralai-mixtral-8x7b-3473-v36/tokenizer.model
mistralai-mixtral-8x7b-3473-v36-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/mistralai-mixtral-8x7b-3473-v36/tokenizer.json
mistralai-mixtral-8x7b-3473-v36-mkmlizer: cp /dev/shm/model_cache/flywheel_model.3.safetensors s3://guanaco-mkml-models/mistralai-mixtral-8x7b-3473-v36/flywheel_model.3.safetensors
mistralai-mixtral-8x7b-3473-v36-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/mistralai-mixtral-8x7b-3473-v36/flywheel_model.0.safetensors
mistralai-mixtral-8x7b-3473-v36-mkmlizer: cp /dev/shm/model_cache/flywheel_model.2.safetensors s3://guanaco-mkml-models/mistralai-mixtral-8x7b-3473-v36/flywheel_model.2.safetensors
mistralai-mixtral-8x7b-3473-v36-mkmlizer: cp /dev/shm/model_cache/flywheel_model.1.safetensors s3://guanaco-mkml-models/mistralai-mixtral-8x7b-3473-v36/flywheel_model.1.safetensors
mistralai-mixtral-8x7b-3473-v36-mkmlizer: loading reward model from rirv938/reward_gpt2_medium_preference_24m_e2
mistralai-mixtral-8x7b-3473-v36-mkmlizer: Loading 0: 96%|█████████▌| 952/995 [00:42<00:02, 19.16it/s]
mistralai-mixtral-8x7b-3473-v36-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
mistralai-mixtral-8x7b-3473-v36-mkmlizer: warnings.warn(
mistralai-mixtral-8x7b-3473-v36-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
mistralai-mixtral-8x7b-3473-v36-mkmlizer: warnings.warn(
mistralai-mixtral-8x7b-3473-v36-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
mistralai-mixtral-8x7b-3473-v36-mkmlizer: warnings.warn(
mistralai-mixtral-8x7b-3473-v36-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
mistralai-mixtral-8x7b-3473-v36-mkmlizer: return self.fget.__get__(instance, owner)()
mistralai-mixtral-8x7b-3473-v36-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
mistralai-mixtral-8x7b-3473-v36-mkmlizer: Saving duration: 0.231s
mistralai-mixtral-8x7b-3473-v36-mkmlizer: Processed model rirv938/reward_gpt2_medium_preference_24m_e2 in 7.672s
mistralai-mixtral-8x7b-3473-v36-mkmlizer: creating bucket guanaco-reward-models
Job mistralai-mixtral-8x7b-3473-v36-mkmlizer completed after 236.62s with status: succeeded
Stopping job with name mistralai-mixtral-8x7b-3473-v36-mkmlizer
Pipeline stage MKMLizer completed in 238.13s
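For reference, the upload step in the log above copies the quantized artifacts file-by-file from /dev/shm/model_cache into the model bucket. A minimal sketch of the same copy with boto3; the log only shows `cp` lines, so the actual upload tool is an assumption, while the paths and S3 keys are taken from the log:

```python
# Sketch: mirror the MKMLizer upload step with boto3 (boto3 is assumed;
# source paths and destinations are copied from the log above).
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "guanaco-mkml-models"
PREFIX = "mistralai-mixtral-8x7b-3473-v36"
SRC = "/dev/shm/model_cache"

for fname in [
    "config.json", "tokenizer_config.json", "special_tokens_map.json",
    "tokenizer.model", "tokenizer.json",
    "flywheel_model.0.safetensors", "flywheel_model.1.safetensors",
    "flywheel_model.2.safetensors", "flywheel_model.3.safetensors",
]:
    s3.upload_file(os.path.join(SRC, fname), BUCKET, f"{PREFIX}/{fname}")
```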
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.44s
Running pipeline stage ISVCDeployer
Creating inference service mistralai-mixtral-8x7b-3473-v36
Waiting for inference service mistralai-mixtral-8x7b-3473-v36 to be ready
Inference service mistralai-mixtral-8x7b-3473-v36 ready after 60.959372997283936s
Pipeline stage ISVCDeployer completed in 62.53s
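The ISVCDeployer stage creates an inference service and blocks until it reports ready (60.96s here). A hedged sketch of such a readiness poll; that this is a KServe InferenceService on Kubernetes is an assumption based on the stage name, and the namespace and timeout are hypothetical:

```python
# Sketch: poll a KServe InferenceService until its Ready condition is True.
# KServe/Kubernetes, the namespace, and the timeout are assumptions.
import time
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()


def wait_ready(name, namespace="default", timeout_s=600):
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        isvc = api.get_namespaced_custom_object(
            group="serving.kserve.io", version="v1beta1",
            namespace=namespace, plural="inferenceservices", name=name,
        )
        conditions = isvc.get("status", {}).get("conditions", [])
        if any(c.get("type") == "Ready" and c.get("status") == "True"
               for c in conditions):
            return True
        time.sleep(5)
    raise TimeoutError(f"inference service {name} not ready after {timeout_s}s")


wait_ready("mistralai-mixtral-8x7b-3473-v36")
```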
Running pipeline stage StressChecker
Received healthy response to inference request in 3.6904819011688232s
Received healthy response to inference request in 2.292059898376465s
Received healthy response to inference request in 2.1379802227020264s
Received healthy response to inference request in 1.8850319385528564s
Received healthy response to inference request in 3.3142693042755127s
Received healthy response to inference request in 2.4800257682800293s
Received healthy response to inference request in 3.4176971912384033s
Received healthy response to inference request in 2.7184431552886963s
Received healthy response to inference request in 2.347959041595459s
Received healthy response to inference request in 1.950355052947998s
Received healthy response to inference request in 2.4177567958831787s
Received healthy response to inference request in 1.8447580337524414s
Received healthy response to inference request in 1.936439037322998s
Received healthy response to inference request in 2.4455220699310303s
Received healthy response to inference request in 2.8114750385284424s
Received healthy response to inference request in 2.6718029975891113s
Received healthy response to inference request in 1.8286488056182861s
Received healthy response to inference request in 2.3164150714874268s
Received healthy response to inference request in 2.146746873855591s
Received healthy response to inference request in 2.572995185852051s
Received healthy response to inference request in 2.3979268074035645s
Received healthy response to inference request in 1.889577865600586s
Received healthy response to inference request in 1.3829221725463867s
Received healthy response to inference request in 2.3481099605560303s
Received healthy response to inference request in 1.9601612091064453s
Received healthy response to inference request in 2.008275032043457s
Received healthy response to inference request in 2.413843870162964s
Received healthy response to inference request in 2.4108729362487793s
Received healthy response to inference request in 2.1494028568267822s
Received healthy response to inference request in 2.0219597816467285s
Received healthy response to inference request in 1.501051902770996s
Received healthy response to inference request in 2.1999919414520264s
Received healthy response to inference request in 2.7167909145355225s
Received healthy response to inference request in 2.656960964202881s
Received healthy response to inference request in 2.1671340465545654s
Received healthy response to inference request in 2.9952120780944824s
Received healthy response to inference request in 2.1266322135925293s
Received healthy response to inference request in 2.0353951454162598s
Received healthy response to inference request in 1.9530799388885498s
Received healthy response to inference request in 2.5429461002349854s
Received healthy response to inference request in 2.0633139610290527s
Received healthy response to inference request in 2.487748146057129s
Received healthy response to inference request in 2.466078042984009s
Received healthy response to inference request in 2.527632236480713s
Received healthy response to inference request in 3.1448628902435303s
Received healthy response to inference request in 2.5375380516052246s
Received healthy response to inference request in 2.6315720081329346s
Received healthy response to inference request in 2.436872959136963s
Received healthy response to inference request in 2.062638282775879s
Received healthy response to inference request in 1.6824567317962646s
Received healthy response to inference request in 2.121328353881836s
Received healthy response to inference request in 1.925400972366333s
Received healthy response to inference request in 1.7137870788574219s
Received healthy response to inference request in 2.1209311485290527s
Received healthy response to inference request in 2.176959991455078s
Received healthy response to inference request in 1.7564332485198975s
Received healthy response to inference request in 2.3567440509796143s
Received healthy response to inference request in 1.4744861125946045s
Shutting down server chaiverse_console.server.app.
Received healthy response to inference request in 1.852118968963623s
Received healthy response to inference request in 3.3172359466552734s
mistralai-mixtral-8x7b-_3473_v36 status is now inactive due to admin request

Usage Metrics

Latency Metrics
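The latency chart itself is not preserved in this text capture, but summary statistics can be recomputed from the 60 StressChecker timings logged above (values rounded to milliseconds):

```python
# Recompute latency summary stats from the StressChecker timings above.
import statistics

latencies_s = [
    3.690, 2.292, 2.138, 1.885, 3.314, 2.480, 3.418, 2.718, 2.348, 1.950,
    2.418, 1.845, 1.936, 2.446, 2.811, 2.672, 1.829, 2.316, 2.147, 2.573,
    2.398, 1.890, 1.383, 2.348, 1.960, 2.008, 2.414, 2.411, 2.149, 2.022,
    1.501, 2.200, 2.717, 2.657, 2.167, 2.995, 2.127, 2.035, 1.953, 2.543,
    2.063, 2.488, 2.466, 2.528, 3.145, 2.538, 2.632, 2.437, 2.063, 1.682,
    2.121, 1.925, 1.714, 2.121, 2.177, 1.756, 2.357, 1.474, 1.852, 3.317,
]

assert len(latencies_s) == 60
print(f"mean   = {statistics.mean(latencies_s):.3f}s")
print(f"median = {statistics.median(latencies_s):.3f}s")
print(f"p90    = {statistics.quantiles(latencies_s, n=10)[-1]:.3f}s")
print(f"max    = {max(latencies_s):.3f}s")
```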