submission_id: nousresearch-meta-llama_4941_v39
developer_uid: chai_backend_admin
best_of: 16
celo_rating: 1180.69
display_name: nousresearch-meta-llama-3-8b_v1
family_friendly_score: 0.0
formatter: {'memory_template': '### Instruction:\n{memory}\n', 'prompt_template': '### Input:\n{prompt}\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '### Response:\n{bot_name}:', 'truncate_by_message': False}
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
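The formatter templates above describe how memory, scenario, and conversation turns are stitched into a single model input. A minimal sketch of that assembly, assuming a simple in-order concatenation (the function name and argument shapes here are illustrative, not the actual serving code):

```python
# Hypothetical sketch of how the formatter templates might be assembled.
# The templates themselves are taken verbatim from the record above;
# build_prompt and its signature are assumptions for illustration.

def build_prompt(memory, prompt, turns, bot_name):
    """Fill memory/prompt/turn/response templates in order.

    `turns` is a list of (speaker_name, message) pairs; bot and user
    turns share the same "{name}: {message}\n" shape in this formatter.
    """
    parts = ["### Instruction:\n{}\n".format(memory),
             "### Input:\n{}\n".format(prompt)]
    for speaker, message in turns:
        parts.append("{}: {}\n".format(speaker, message))
    # response_template ends with "{bot_name}:" so the model completes the turn
    parts.append("### Response:\n{}:".format(bot_name))
    return "".join(parts)

example = build_prompt(
    memory="You are a helpful pirate.",
    prompt="A chat on the high seas.",
    turns=[("User", "Ahoy!"), ("Redbeard", "Ahoy, matey!")],
    bot_name="Redbeard",
)
print(example)
```

Note that `max_input_tokens: 512` means the assembled string would still be truncated to the most recent 512 tokens before generation.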
is_internal_developer: True
language_model: NousResearch/Meta-Llama-3-8B-Instruct
max_input_tokens: 512
max_output_tokens: 64
model_architecture: LlamaForCausalLM
model_eval_status: success
model_group: NousResearch/Meta-Llama-
model_name: nousresearch-meta-llama-3-8b_v1
model_num_parameters: 8030261248.0
model_repo: NousResearch/Meta-Llama-3-8B-Instruct
model_size: 8B
num_battles: 6779
num_wins: 3744
ranking_group: single
reward_formatter: {'bot_template': '{bot_name}: {message}\n', 'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'response_template': '{bot_name}:', 'user_template': '{user_name}: {message}\n'}
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
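The combination of `best_of: 16` and a reward model points to best-of-n sampling: draw 16 candidate completions, score each with the reward model, and serve the highest-scoring one. A minimal sketch of that selection loop, with stand-in `generate` and `score` callables (the real pipeline would use the language model and the GPT-2 reward model named above):

```python
# Best-of-n selection sketch. `generate` and `score` are hypothetical
# stand-ins; in production these would be the LLM sampler and the
# reward model's scoring call.

def best_of_n(generate, score, prompt, n=16):
    """Sample n candidates for `prompt` and return the highest-scoring one."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Toy demonstration with deterministic stand-ins: string length as "reward".
fake_outputs = iter(["meh", "good reply", "ok"])
pick = best_of_n(lambda p: next(fake_outputs), len, "hi", n=3)
print(pick)  # -> "good reply" (longest string wins under the length reward)
```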
status: torndown
submission_type: basic
timestamp: 2024-04-20T09:23:00+00:00
us_pacific_date: 2024-04-20
win_ratio: 0.5522938486502434
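The recorded win ratio is just wins over battles; a quick consistency check against the fields above:

```python
# Sanity check: win_ratio should equal num_wins / num_battles.
num_battles = 6779
num_wins = 3744
win_ratio = num_wins / num_battles
print(win_ratio)  # ~0.5523, matching the recorded 0.5522938486502434
```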
Resubmit model
Running pipeline stage MKMLizer
Starting job with name nousresearch-meta-llama-4941-v39-mkmlizer
Waiting for job on nousresearch-meta-llama-4941-v39-mkmlizer to finish
nousresearch-meta-llama-4941-v39-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
nousresearch-meta-llama-4941-v39-mkmlizer: ║ [ "flywheel" ASCII-art logo ] ║
nousresearch-meta-llama-4941-v39-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v39-mkmlizer: ║ Version: 0.8.10 ║
nousresearch-meta-llama-4941-v39-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
nousresearch-meta-llama-4941-v39-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v39-mkmlizer: ║ The license key for the current software has been verified as ║
nousresearch-meta-llama-4941-v39-mkmlizer: ║ belonging to: ║
nousresearch-meta-llama-4941-v39-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v39-mkmlizer: ║ Chai Research Corp. ║
nousresearch-meta-llama-4941-v39-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
nousresearch-meta-llama-4941-v39-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
nousresearch-meta-llama-4941-v39-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v39-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
nousresearch-meta-llama-4941-v39-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
nousresearch-meta-llama-4941-v39-mkmlizer: warnings.warn(warning_message, FutureWarning)
nousresearch-meta-llama-4941-v39-mkmlizer: Downloaded to shared memory in 15.164s
nousresearch-meta-llama-4941-v39-mkmlizer: quantizing model to /dev/shm/model_cache
nousresearch-meta-llama-4941-v39-mkmlizer: Saving flywheel model at /dev/shm/model_cache
nousresearch-meta-llama-4941-v39-mkmlizer: Loading 0: 99%|█████████▊| 287/291 [00:07<00:00, 30.68it/s]
nousresearch-meta-llama-4941-v39-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
nousresearch-meta-llama-4941-v39-mkmlizer: quantized model in 18.074s
nousresearch-meta-llama-4941-v39-mkmlizer: Processed model NousResearch/Meta-Llama-3-8B-Instruct in 34.177s
nousresearch-meta-llama-4941-v39-mkmlizer: creating bucket guanaco-mkml-models
nousresearch-meta-llama-4941-v39-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
nousresearch-meta-llama-4941-v39-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v39
nousresearch-meta-llama-4941-v39-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v39/config.json
nousresearch-meta-llama-4941-v39-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v39/special_tokens_map.json
nousresearch-meta-llama-4941-v39-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v39/flywheel_model.0.safetensors
nousresearch-meta-llama-4941-v39-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
nousresearch-meta-llama-4941-v39-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
nousresearch-meta-llama-4941-v39-mkmlizer: warnings.warn(
nousresearch-meta-llama-4941-v39-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
nousresearch-meta-llama-4941-v39-mkmlizer: warnings.warn(
nousresearch-meta-llama-4941-v39-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
nousresearch-meta-llama-4941-v39-mkmlizer: warnings.warn(
Job nousresearch-meta-llama-4941-v39-mkmlizer completed after 112.53s with status: succeeded
Stopping job with name nousresearch-meta-llama-4941-v39-mkmlizer
Pipeline stage MKMLizer completed in 113.53s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.26s
Running pipeline stage ISVCDeployer
Creating inference service nousresearch-meta-llama-4941-v39
Waiting for inference service nousresearch-meta-llama-4941-v39 to be ready
Inference service nousresearch-meta-llama-4941-v39 ready after 30.33526611328125s
Pipeline stage ISVCDeployer completed in 36.47s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.329508066177368s
Received healthy response to inference request in 1.8449993133544922s
Received healthy response to inference request in 1.5140159130096436s
Received healthy response to inference request in 1.5190889835357666s
Received healthy response to inference request in 1.523481845855713s
5 requests
0 failed requests
5th percentile: 1.515030527114868
10th percentile: 1.5160451412200928
20th percentile: 1.518074369430542
30th percentile: 1.5199675559997559
40th percentile: 1.5217247009277344
50th percentile: 1.523481845855713
60th percentile: 1.6520888328552246
70th percentile: 1.7806958198547362
80th percentile: 1.9419010639190675
90th percentile: 2.1357045650482176
95th percentile: 2.232606315612793
99th percentile: 2.310127716064453
mean time: 1.7462188243865966
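The percentile figures above are consistent with linear interpolation between closest ranks over the five sorted request latencies (the same method as `numpy.percentile`'s default). A self-contained sketch that reproduces them:

```python
# Reproduce the stress-check percentiles with linear interpolation.
# The five latencies are copied from the log lines above.

def percentile(samples, p):
    """p-th percentile with linear interpolation between closest ranks."""
    xs = sorted(samples)
    k = (len(xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (k - lo) * (xs[hi] - xs[lo])

latencies = [2.329508066177368, 1.8449993133544922, 1.5140159130096436,
             1.5190889835357666, 1.523481845855713]

print(percentile(latencies, 50))        # 1.523481845855713 (the median)
print(percentile(latencies, 95))        # ~2.2326, matching the log
print(sum(latencies) / len(latencies))  # ~1.7462 mean time
```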
Pipeline stage StressChecker completed in 10.92s
Running pipeline stage DaemonicModelEvalScorer
Pipeline stage DaemonicModelEvalScorer completed in 0.10s
Running M-Eval for topic stay_in_character
Running pipeline stage DaemonicSafetyScorer
M-Eval Dataset for topic stay_in_character is loaded
Pipeline stage DaemonicSafetyScorer completed in 0.20s
nousresearch-meta-llama_4941_v39 status is now deployed due to DeploymentManager action
nousresearch-meta-llama_4941_v39 status is now inactive due to auto-deactivation of underperforming models
admin requested tearing down of nousresearch-meta-llama_4941_v39
Running pipeline stage ISVCDeleter
Checking if service nousresearch-meta-llama-4941-v39 is running
Tearing down inference service nousresearch-meta-llama-4941-v39
Tore down service nousresearch-meta-llama-4941-v39
Pipeline stage ISVCDeleter completed in 3.30s
Running pipeline stage MKMLModelDeleter
Cleaning model data from S3
Cleaning model data from model cache
Deleting key nousresearch-meta-llama-4941-v39/config.json from bucket guanaco-mkml-models
Deleting key nousresearch-meta-llama-4941-v39/flywheel_model.0.safetensors from bucket guanaco-mkml-models
Deleting key nousresearch-meta-llama-4941-v39/special_tokens_map.json from bucket guanaco-mkml-models
Deleting key nousresearch-meta-llama-4941-v39/tokenizer.json from bucket guanaco-mkml-models
Deleting key nousresearch-meta-llama-4941-v39/tokenizer_config.json from bucket guanaco-mkml-models
Cleaning model data from model cache
Deleting key nousresearch-meta-llama-4941-v39_reward/config.json from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v39_reward/merges.txt from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v39_reward/reward.tensors from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v39_reward/special_tokens_map.json from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v39_reward/tokenizer.json from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v39_reward/tokenizer_config.json from bucket guanaco-reward-models
Deleting key nousresearch-meta-llama-4941-v39_reward/vocab.json from bucket guanaco-reward-models
Pipeline stage MKMLModelDeleter completed in 2.20s
nousresearch-meta-llama_4941_v39 status is now torndown due to DeploymentManager action