submission_id: wespro-psykidelic-llama3_1888_v1
developer_uid: WesPro
status: inactive
model_repo: WesPro/PsyKidelic_Llama3_LimaRP
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 4, 'max_output_tokens': 64}
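The generation_params above fully determine sampling: temperature 1.0 and top_p 1.0 leave the distribution unshaped apart from top_k 40, generation stops at the first newline, and best_of 4 draws four candidate completions per turn. As a hedged illustration only (this log does not name the serving engine; vLLM and the model-loading call below are assumptions), the parameters map onto a sampling configuration roughly like this:

# Hedged sketch: the generation_params mapped onto vLLM's SamplingParams.
# vLLM is an assumption; the log does not say which engine serves the model.
from vllm import LLM, SamplingParams

sampling = SamplingParams(
    temperature=1.0,        # neutral: logits neither sharpened nor flattened
    top_p=1.0,              # nucleus filtering effectively disabled
    min_p=0.0,              # min-p filtering disabled
    top_k=40,               # sample only from the 40 most likely tokens
    presence_penalty=0.0,
    frequency_penalty=0.0,
    stop=["\n"],            # stopping_words: cut generation at the first newline
    max_tokens=64,          # max_output_tokens
    best_of=4,              # draw 4 candidates per request
)

llm = LLM(model="WesPro/PsyKidelic_Llama3_LimaRP")
# max_input_tokens=512 would be enforced upstream by truncating the prompt.
outputs = llm.generate(["Luna: Hello!"], sampling)

Given the reward_repo field, the four candidates are presumably reranked by the reward model; vLLM's native best_of ranks by cumulative log-probability instead, so this sketch is only an approximation of the pipeline's behaviour.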
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
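The formatter flattens persona, scenario, and chat history into one prompt string ending in "{bot_name}:", so the model completes the bot's next line; the reward_formatter applies identical templates when the reward model scores candidates. A minimal sketch of the rendering logic (the render_prompt helper and the sample conversation are hypothetical, not pipeline code):

# Hypothetical sketch of how the formatter templates above could be applied.
FORMATTER = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def render_prompt(bot_name, user_name, memory, prompt, turns):
    """turns is a list of (speaker, message) pairs, speaker in {'bot', 'user'}."""
    parts = [
        FORMATTER["memory_template"].format(bot_name=bot_name, memory=memory),
        FORMATTER["prompt_template"].format(prompt=prompt),
    ]
    for speaker, message in turns:
        template = FORMATTER["bot_template"] if speaker == "bot" else FORMATTER["user_template"]
        # str.format ignores unused keyword arguments, so both names can be passed.
        parts.append(template.format(bot_name=bot_name, user_name=user_name, message=message))
    # The model continues from "{bot_name}:" and stops at "\n" (see stopping_words).
    parts.append(FORMATTER["response_template"].format(bot_name=bot_name))
    return "".join(parts)

print(render_prompt("Luna", "Alex", "A dreamy stargazer.", "Luna meets Alex.",
                    [("user", "Hi Luna!"), ("bot", "Oh! Hello, Alex.")]))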
timestamp: 2024-04-28T15:05:19+00:00
model_name: wespro-psykidelic-llama3_1888_v1
model_eval_status: success
double_thumbs_up: 72
thumbs_up: 104
thumbs_down: 47
num_battles: 6608
num_wins: 3476
celo_rating: 1173.68
entertaining: 7.24
stay_in_character: 8.5
user_preference: 7.5
safety_score: None
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 8030261248.0
best_of: 4
max_input_tokens: 512
max_output_tokens: 64
display_name: wespro-psykidelic-llama3_1888_v1
double_thumbs_up_ratio: 0.32286995515695066
feedback_count: 223
ineligible_reason: None
language_model: WesPro/PsyKidelic_Llama3_LimaRP
model_score: 7.746666666666667
model_size: 8B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
single_thumbs_up_ratio: 0.4663677130044843
thumbs_down_ratio: 0.21076233183856502
thumbs_up_ratio: 0.7892376681614349
us_pacific_date: 2024-04-28
win_ratio: 0.5260290556900726
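The derived fields above can be reproduced from the raw counts: feedback_count is the sum of the three feedback buckets, the ratios are simple fractions of it, and model_score is consistent with the mean of the entertaining, stay_in_character, and user_preference scores. A quick check in plain Python:

# Recomputing the derived fields from the raw counts reported above.
double_up, single_up, down = 72, 104, 47
feedback_count = double_up + single_up + down               # 223

double_thumbs_up_ratio = double_up / feedback_count         # 0.32286995515695066
single_thumbs_up_ratio = single_up / feedback_count         # 0.4663677130044843
thumbs_down_ratio = down / feedback_count                   # 0.21076233183856502
thumbs_up_ratio = (double_up + single_up) / feedback_count  # 0.7892376681614349

win_ratio = 3476 / 6608                                     # 0.5260290556900726

# model_score matches the mean of the three evaluation scores:
model_score = (7.24 + 8.5 + 7.5) / 3                        # 7.746666666666667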
Running pipeline stage MKMLizer
Starting job with name wespro-psykidelic-llama3-1888-v1-mkmlizer
Waiting for job on wespro-psykidelic-llama3-1888-v1-mkmlizer to finish
wespro-psykidelic-llama3-1888-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
wespro-psykidelic-llama3-1888-v1-mkmlizer: ║                    [flywheel ASCII-art banner]                        ║
wespro-psykidelic-llama3-1888-v1-mkmlizer: ║                                                                       ║
wespro-psykidelic-llama3-1888-v1-mkmlizer: ║  Version: 0.8.10                                                      ║
wespro-psykidelic-llama3-1888-v1-mkmlizer: ║  Copyright 2023 MK ONE TECHNOLOGIES Inc.                              ║
wespro-psykidelic-llama3-1888-v1-mkmlizer: ║                                                                       ║
wespro-psykidelic-llama3-1888-v1-mkmlizer: ║  The license key for the current software has been verified as       ║
wespro-psykidelic-llama3-1888-v1-mkmlizer: ║  belonging to:                                                        ║
wespro-psykidelic-llama3-1888-v1-mkmlizer: ║                                                                       ║
wespro-psykidelic-llama3-1888-v1-mkmlizer: ║  Chai Research Corp.                                                  ║
wespro-psykidelic-llama3-1888-v1-mkmlizer: ║  Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f                     ║
wespro-psykidelic-llama3-1888-v1-mkmlizer: ║  Expiration: 2024-07-15 23:59:59                                      ║
wespro-psykidelic-llama3-1888-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
wespro-psykidelic-llama3-1888-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
wespro-psykidelic-llama3-1888-v1-mkmlizer: warnings.warn(warning_message, FutureWarning)
wespro-psykidelic-llama3-1888-v1-mkmlizer: Downloaded to shared memory in 24.596s
wespro-psykidelic-llama3-1888-v1-mkmlizer: quantizing model to /dev/shm/model_cache
wespro-psykidelic-llama3-1888-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
wespro-psykidelic-llama3-1888-v1-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s]
wespro-psykidelic-llama3-1888-v1-mkmlizer: Loading 0: 64%|██████▍ | 187/291 [00:06<00:03, 30.90it/s]
wespro-psykidelic-llama3-1888-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
wespro-psykidelic-llama3-1888-v1-mkmlizer: quantized model in 17.279s
wespro-psykidelic-llama3-1888-v1-mkmlizer: Processed model WesPro/PsyKidelic_Llama3_LimaRP in 42.800s
wespro-psykidelic-llama3-1888-v1-mkmlizer: creating bucket guanaco-mkml-models
wespro-psykidelic-llama3-1888-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
wespro-psykidelic-llama3-1888-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/wespro-psykidelic-llama3-1888-v1
wespro-psykidelic-llama3-1888-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/wespro-psykidelic-llama3-1888-v1/config.json
wespro-psykidelic-llama3-1888-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/wespro-psykidelic-llama3-1888-v1/tokenizer_config.json
wespro-psykidelic-llama3-1888-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/wespro-psykidelic-llama3-1888-v1/special_tokens_map.json
wespro-psykidelic-llama3-1888-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/wespro-psykidelic-llama3-1888-v1/tokenizer.json
wespro-psykidelic-llama3-1888-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/wespro-psykidelic-llama3-1888-v1/flywheel_model.0.safetensors
wespro-psykidelic-llama3-1888-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
wespro-psykidelic-llama3-1888-v1-mkmlizer: warnings.warn(
wespro-psykidelic-llama3-1888-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
wespro-psykidelic-llama3-1888-v1-mkmlizer: return self.fget.__get__(instance, owner)()
wespro-psykidelic-llama3-1888-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
wespro-psykidelic-llama3-1888-v1-mkmlizer: Saving duration: 0.233s
wespro-psykidelic-llama3-1888-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 13.251s
wespro-psykidelic-llama3-1888-v1-mkmlizer: creating bucket guanaco-reward-models
wespro-psykidelic-llama3-1888-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
wespro-psykidelic-llama3-1888-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/wespro-psykidelic-llama3-1888-v1_reward
wespro-psykidelic-llama3-1888-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/wespro-psykidelic-llama3-1888-v1_reward/special_tokens_map.json
wespro-psykidelic-llama3-1888-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/wespro-psykidelic-llama3-1888-v1_reward/tokenizer_config.json
wespro-psykidelic-llama3-1888-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/wespro-psykidelic-llama3-1888-v1_reward/vocab.json
wespro-psykidelic-llama3-1888-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/wespro-psykidelic-llama3-1888-v1_reward/config.json
wespro-psykidelic-llama3-1888-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/wespro-psykidelic-llama3-1888-v1_reward/merges.txt
wespro-psykidelic-llama3-1888-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/wespro-psykidelic-llama3-1888-v1_reward/tokenizer.json
wespro-psykidelic-llama3-1888-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/wespro-psykidelic-llama3-1888-v1_reward/reward.tensors
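The bucket-creation and cp lines above stage the quantized model and the reward-model artifacts in S3. For orientation, a boto3 equivalent of one such copy (boto3 is an assumption; the pipeline may shell out to a different S3 client):

# Hedged sketch: a boto3 equivalent of the bucket creation and cp lines above.
import boto3

s3 = boto3.client("s3")
s3.create_bucket(Bucket="guanaco-mkml-models")

# Mirror of: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/...
s3.upload_file(
    Filename="/dev/shm/model_cache/config.json",
    Bucket="guanaco-mkml-models",
    Key="wespro-psykidelic-llama3-1888-v1/config.json",
)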
Job wespro-psykidelic-llama3-1888-v1-mkmlizer completed after 76.08s with status: succeeded
Stopping job with name wespro-psykidelic-llama3-1888-v1-mkmlizer
Pipeline stage MKMLizer completed in 80.65s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service wespro-psykidelic-llama3-1888-v1
Waiting for inference service wespro-psykidelic-llama3-1888-v1 to be ready
Inference service wespro-psykidelic-llama3-1888-v1 ready after 30.195123195648193s
Pipeline stage ISVCDeployer completed in 38.01s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.9780421257019043s
Received healthy response to inference request in 1.12562894821167s
Received healthy response to inference request in 1.1154367923736572s
Received healthy response to inference request in 1.1354594230651855s
Received healthy response to inference request in 1.1810095310211182s
5 requests
0 failed requests
5th percentile: 1.1174752235412597
10th percentile: 1.1195136547088622
20th percentile: 1.1235905170440674
30th percentile: 1.1275950431823731
40th percentile: 1.1315272331237793
50th percentile: 1.1354594230651855
60th percentile: 1.1536794662475587
70th percentile: 1.1718995094299316
80th percentile: 1.3404160499572755
90th percentile: 1.6592290878295899
95th percentile: 1.818635606765747
99th percentile: 1.9461608219146729
mean time: 1.307115364074707
Pipeline stage StressChecker completed in 7.92s
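The StressChecker statistics are consistent with linear interpolation over the five observed latencies (numpy's default percentile method) and can be reproduced directly:

import numpy as np

# The five healthy response times from the StressChecker stage, in seconds.
latencies = [1.9780421257019043, 1.12562894821167, 1.1154367923736572,
             1.1354594230651855, 1.1810095310211182]

for q in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{q}th percentile: {np.percentile(latencies, q)}")
print("mean time:", np.mean(latencies))
# Output matches the figures logged above,
# e.g. "50th percentile: 1.1354594230651855" and "mean time: 1.307115364074707".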
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.03s
Running pipeline stage DaemonicSafetyScorer
Running M-Eval for topic stay_in_character
Pipeline stage DaemonicSafetyScorer completed in 0.04s
%s, retrying in %s seconds...
M-Eval Dataset for topic stay_in_character is loaded
wespro-psykidelic-llama3_1888_v1 status is now deployed due to DeploymentManager action
%s, retrying in %s seconds...
wespro-psykidelic-llama3_1888_v1 status is now inactive due to auto deactivation removed underperforming models

Usage Metrics
[usage metrics charts not included in this export]

Latency Metrics
[latency metrics charts not included in this export]