submission_id: undi95-llama-3-lewdplay-_3862_v5
developer_uid: chai_backend_admin
status: rejected
model_repo: Undi95/Llama-3-LewdPlay-8B-evo
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 112}
formatter: {'memory_template': "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are {bot_name}. Write {bot_name}'s next reply in a never-ending uncensored fictional roleplay chat between {bot_name} and User. Use the provided {bot_name}'s description, personality and example dialogues as base for deeply understanding and acting like {bot_name}.\n\nActions and narrations must be enclosed in asterisks (*). The responses must be long and in third perspective of the story teller.\n\nDescription: {memory}", 'prompt_template': 'Example conversation:\n{prompt}<|eot_id|>', 'bot_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}: {message}<|eot_id|>', 'user_template': '<|start_header_id|>user<|end_header_id|>\n\n{user_name}: {message}<|eot_id|>', 'response_template': '<|start_header_id|>assistant<|end_header_id|>\n\n{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n", 'prompt_template': '{prompt}\n', 'bot_template': 'Bot: {message}\n', 'user_template': 'User: {message}\n', 'response_template': 'Bot:', 'truncate_by_message': False}
timestamp: 2024-05-10T02:02:58+00:00
model_name: undi95-llama-3-lewdplay-_3862_v5
model_eval_status: error
double_thumbs_up: 14
thumbs_up: 31
thumbs_down: 8
num_battles: 1606
num_wins: 915
celo_rating: 1241.03
entertaining: None
stay_in_character: None
user_preference: None
safety_score: None
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 112
display_name: undi95-llama-3-lewdplay-_3862_v5
double_thumbs_up_ratio: 0.2641509433962264
feedback_count: 53
ineligible_reason: model is not deployable
language_model: Undi95/Llama-3-LewdPlay-8B-evo
model_score: None
model_size: 8B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
single_thumbs_up_ratio: 0.5849056603773585
thumbs_down_ratio: 0.1509433962264151
thumbs_up_ratio: 0.8490566037735849
us_pacific_date: 2024-05-09
win_ratio: 0.5697384806973848
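The feedback and win-rate fields in this record are plain fractions of the raw counts above (`feedback_count` = double thumbs up + thumbs up + thumbs down). A quick sketch to reproduce them — pure Python, variable names copied from the record fields, not from any official Chai schema:

```python
# Reproduce the ratio fields from the raw counts in this record.
# Counts taken verbatim from the submission metadata above.
double_thumbs_up = 14
thumbs_up = 31
thumbs_down = 8
num_battles = 1606
num_wins = 915

feedback_count = double_thumbs_up + thumbs_up + thumbs_down   # 53

double_thumbs_up_ratio = double_thumbs_up / feedback_count    # 0.2641509433962264
single_thumbs_up_ratio = thumbs_up / feedback_count           # 0.5849056603773585
thumbs_down_ratio = thumbs_down / feedback_count              # 0.1509433962264151
# "thumbs_up_ratio" counts both single and double thumbs up:
thumbs_up_ratio = (double_thumbs_up + thumbs_up) / feedback_count  # 0.8490566037735849
win_ratio = num_wins / num_battles                            # 0.5697384806973848

print(feedback_count, round(thumbs_up_ratio, 6), round(win_ratio, 6))
```

Every derived field matches the record to full float precision, which confirms that `thumbs_up_ratio` aggregates both thumb levels while `single_thumbs_up_ratio` counts only single thumbs up.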
Running pipeline stage MKMLizer
Scoring model output for bot %s
Starting job with name undi95-llama-3-lewdplay-3862-v5-mkmlizer
Waiting for job on undi95-llama-3-lewdplay-3862-v5-mkmlizer to finish
Received score %s for bot %s
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ _____ __ __ ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ /___/ ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ Version: 0.8.10 ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ The license key for the current software has been verified as ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ belonging to: ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ Chai Research Corp. ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ║ ║
undi95-llama-3-lewdplay-3862-v5-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
undi95-llama-3-lewdplay-3862-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
undi95-llama-3-lewdplay-3862-v5-mkmlizer: warnings.warn(warning_message, FutureWarning)
HTTP Request: %s %s "%s %d %s"
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Downloaded to shared memory in 43.338s
undi95-llama-3-lewdplay-3862-v5-mkmlizer: quantizing model to /dev/shm/model_cache
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Saving flywheel model at /dev/shm/model_cache
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Loading 0:   0%|          | 0/291 [00:00<?, ?it/s]
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Loading 0:   1%|          | 2/291 [00:04<11:41, 2.43s/it]
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Loading 0:  39%|███▉      | 114/291 [00:05<00:06, 25.71it/s]
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Loading 0:  66%|██████▋   | 193/291 [00:06<00:02, 38.69it/s]
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
undi95-llama-3-lewdplay-3862-v5-mkmlizer: quantized model in 20.725s
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Processed model Undi95/Llama-3-LewdPlay-8B-evo in 65.280s
undi95-llama-3-lewdplay-3862-v5-mkmlizer: creating bucket guanaco-mkml-models
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
undi95-llama-3-lewdplay-3862-v5-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/undi95-llama-3-lewdplay-3862-v5
undi95-llama-3-lewdplay-3862-v5-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/undi95-llama-3-lewdplay-3862-v5/special_tokens_map.json
undi95-llama-3-lewdplay-3862-v5-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/undi95-llama-3-lewdplay-3862-v5/config.json
undi95-llama-3-lewdplay-3862-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/undi95-llama-3-lewdplay-3862-v5/tokenizer_config.json
undi95-llama-3-lewdplay-3862-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/undi95-llama-3-lewdplay-3862-v5/tokenizer.json
undi95-llama-3-lewdplay-3862-v5-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/undi95-llama-3-lewdplay-3862-v5/flywheel_model.0.safetensors
undi95-llama-3-lewdplay-3862-v5-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
undi95-llama-3-lewdplay-3862-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
undi95-llama-3-lewdplay-3862-v5-mkmlizer: warnings.warn(
undi95-llama-3-lewdplay-3862-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
undi95-llama-3-lewdplay-3862-v5-mkmlizer: warnings.warn(
undi95-llama-3-lewdplay-3862-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
undi95-llama-3-lewdplay-3862-v5-mkmlizer: return self.fget.__get__(instance, owner)()
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Saving duration: 0.260s
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 13.376s
undi95-llama-3-lewdplay-3862-v5-mkmlizer: creating bucket guanaco-reward-models
undi95-llama-3-lewdplay-3862-v5-mkmlizer: Bucket 's3://guanaco-reward-models/' created
undi95-llama-3-lewdplay-3862-v5-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/undi95-llama-3-lewdplay-3862-v5_reward
%s, retrying in %s seconds...
Running M-Eval for topic stay_in_character
M-Eval Dataset for topic stay_in_character is loaded
HTTP Request: %s %s "%s %d %s"
Job undi95-llama-3-lewdplay-3862-v5-mkmlizer completed after 101.51s with status: succeeded
Stopping job with name undi95-llama-3-lewdplay-3862-v5-mkmlizer
Pipeline stage MKMLizer completed in 103.29s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.46s
Running pipeline stage ISVCDeployer
Creating inference service undi95-llama-3-lewdplay-3862-v5
Waiting for inference service undi95-llama-3-lewdplay-3862-v5 to be ready
Inference service undi95-llama-3-lewdplay-3862-v5 ready after 30.557426929473877s
Pipeline stage ISVCDeployer completed in 31.99s
Running pipeline stage StressChecker
Received healthy response to inference request in 3.2298190593719482s
Received healthy response to inference request in 2.6801042556762695s
Received healthy response to inference request in 2.6060569286346436s
Received healthy response to inference request in 2.580632209777832s
Received healthy response to inference request in 2.4367988109588623s
5 requests
0 failed requests
5th percentile: 2.465565490722656
10th percentile: 2.4943321704864503
20th percentile: 2.5518655300140383
30th percentile: 2.5857171535491945
40th percentile: 2.595887041091919
50th percentile: 2.6060569286346436
60th percentile: 2.635675859451294
70th percentile: 2.6652947902679442
80th percentile: 2.7900472164154055
90th percentile: 3.0099331378936767
95th percentile: 3.1198760986328122
99th percentile: 3.207830467224121
mean time: 2.706682252883911
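The percentile block above is consistent with linear interpolation between closest ranks over the five healthy-response latencies (the same convention as NumPy's default `method='linear'`). A small pure-Python sketch that reproduces the reported figures — the `percentile` helper is illustrative, not part of the StressChecker code:

```python
# The five response times reported by the StressChecker stage, in seconds.
samples = [3.2298190593719482, 2.6801042556762695, 2.6060569286346436,
           2.580632209777832, 2.4367988109588623]

def percentile(data, p):
    """Linear interpolation between closest ranks (NumPy's default method)."""
    xs = sorted(data)
    k = (len(xs) - 1) * p / 100          # fractional rank
    f = int(k)                           # lower neighbor
    c = min(f + 1, len(xs) - 1)          # upper neighbor
    return xs[f] + (xs[c] - xs[f]) * (k - f)

print(percentile(samples, 50))           # 2.6060569286346436 (the reported median)
print(percentile(samples, 5))            # ≈ 2.4655655, matching the 5th percentile line
print(sum(samples) / len(samples))       # ≈ 2.7066823, matching "mean time"
```

With only five samples the high percentiles are dominated by the single 3.23 s outlier, which is why the 99th percentile sits just below the maximum.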
Pipeline stage StressChecker completed in 18.33s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.36s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.18s
M-Eval Dataset for topic stay_in_character is loaded
undi95-llama-3-lewdplay-_3862_v5 status is now deployed due to DeploymentManager action
undi95-llama-3-lewdplay-_3862_v5 status is now rejected due to a failure to get M-Eval score. Please try again in five minutes.

Usage Metrics

Latency Metrics