submission_id: nousresearch-meta-llama_4941_v68
developer_uid: robert_irvine
status: inactive
model_repo: NousResearch/Meta-Llama-3-8B-Instruct
reward_repo: rirv938/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['</s>', '<|user|>', '###', '\n'], 'max_input_tokens': 512, 'best_of': 1, 'max_output_tokens': 64}
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': 'Memory: {memory}\n', 'prompt_template': '{prompt}\n', 'bot_template': 'Bot: {message}\n', 'user_template': 'User: {message}\n', 'response_template': 'Bot:', 'truncate_by_message': False}
timestamp: 2024-07-03T18:18:35+00:00
model_name: nousresearch-meta-llama_4941_v68
model_group: NousResearch/Meta-Llama-
num_battles: 14327
num_wins: 5949
celo_rating: 1112.63
propriety_score: 0.7521679598356915
propriety_total_count: 6573
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 8030261248
best_of: 1
max_input_tokens: 512
max_output_tokens: 64
display_name: nousresearch-meta-llama_4941_v68
ineligible_reason: None
language_model: NousResearch/Meta-Llama-3-8B-Instruct
model_size: 8B
reward_model: rirv938/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-07-03
win_ratio: 0.41522998534236055
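As a sanity check on the fields above, win_ratio is simply num_wins / num_battles: 5949 / 14327 ≈ 0.4152, matching the reported value. The formatter dict likewise fully determines how a conversation is serialized before sampling. The sketch below shows one plausible way the templates compose; the build_prompt helper and the example conversation are hypothetical, and only the template strings come from the formatter field.

# Sketch of prompt assembly from the formatter field above. The helper
# and example data are illustrative, not the submission pipeline's code.
formatter = {
    "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
    "prompt_template": "{prompt}\n<START>\n",
    "bot_template": "{bot_name}: {message}\n",
    "user_template": "{user_name}: {message}\n",
    "response_template": "{bot_name}:",
}

def build_prompt(bot_name, user_name, memory, prompt, turns):
    """Serialize persona, scenario, and chat turns into one string."""
    text = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
    text += formatter["prompt_template"].format(prompt=prompt)
    for speaker, message in turns:
        if speaker == "bot":
            text += formatter["bot_template"].format(bot_name=bot_name, message=message)
        else:
            text += formatter["user_template"].format(user_name=user_name, message=message)
    # The model is asked to continue from "{bot_name}:".
    return text + formatter["response_template"].format(bot_name=bot_name)

# Hypothetical example conversation:
print(build_prompt(
    bot_name="Aria", user_name="You",
    memory="A cheerful travel guide.",
    prompt="Aria meets a new traveler.",
    turns=[("user", "Hi!"), ("bot", "Welcome aboard!"), ("user", "Where to first?")],
))

Per generation_params, the model then samples the continuation at temperature 1.0, top_p 1.0, top_k 40, stopping on any of the listed stop strings, with input truncated to 512 tokens and output capped at 64 tokens.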
Running pipeline stage MKMLizer
Starting job with name nousresearch-meta-llama-4941-v68-mkmlizer
Waiting for job on nousresearch-meta-llama-4941-v68-mkmlizer to finish
nousresearch-meta-llama-4941-v68-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
nousresearch-meta-llama-4941-v68-mkmlizer: ║ [flywheel ASCII-art banner] ║
nousresearch-meta-llama-4941-v68-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v68-mkmlizer: ║ Version: 0.8.14 ║
nousresearch-meta-llama-4941-v68-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
nousresearch-meta-llama-4941-v68-mkmlizer: ║ https://mk1.ai ║
nousresearch-meta-llama-4941-v68-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v68-mkmlizer: ║ The license key for the current software has been verified as ║
nousresearch-meta-llama-4941-v68-mkmlizer: ║ belonging to: ║
nousresearch-meta-llama-4941-v68-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v68-mkmlizer: ║ Chai Research Corp. ║
nousresearch-meta-llama-4941-v68-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
nousresearch-meta-llama-4941-v68-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
nousresearch-meta-llama-4941-v68-mkmlizer: ║ ║
nousresearch-meta-llama-4941-v68-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
nousresearch-meta-llama-4941-v68-mkmlizer: Downloaded to shared memory in 27.752s
nousresearch-meta-llama-4941-v68-mkmlizer: quantizing model to /dev/shm/model_cache
nousresearch-meta-llama-4941-v68-mkmlizer: Saving flywheel model at /dev/shm/model_cache
nousresearch-meta-llama-4941-v68-mkmlizer: Loading 0: 99%|█████████▉| 289/291 [00:09<00:00, 6.50it/s]
nousresearch-meta-llama-4941-v68-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
nousresearch-meta-llama-4941-v68-mkmlizer: quantized model in 24.573s
nousresearch-meta-llama-4941-v68-mkmlizer: Processed model NousResearch/Meta-Llama-3-8B-Instruct in 52.325s
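MK1's flywheel quantizer is proprietary, so nothing in this log shows its internals; the final step, though, writes a single flywheel_model.0.safetensors shard under /dev/shm. A generic, purely illustrative sketch of that save step (the tensor contents here are placeholders, not the real quantized weights):

import os
import torch
from safetensors.torch import save_file

# Illustrative only: serialize a dict of tensors into one .safetensors
# shard at the path the log reports. Real weights come from the quantizer.
os.makedirs("/dev/shm/model_cache", exist_ok=True)
tensors = {"layer.0.weight": torch.zeros(8, 8)}  # placeholder tensors
save_file(tensors, "/dev/shm/model_cache/flywheel_model.0.safetensors")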
nousresearch-meta-llama-4941-v68-mkmlizer: creating bucket guanaco-mkml-models
nousresearch-meta-llama-4941-v68-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
nousresearch-meta-llama-4941-v68-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v68
nousresearch-meta-llama-4941-v68-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v68/special_tokens_map.json
nousresearch-meta-llama-4941-v68-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v68/tokenizer_config.json
nousresearch-meta-llama-4941-v68-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v68/config.json
nousresearch-meta-llama-4941-v68-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v68/tokenizer.json
nousresearch-meta-llama-4941-v68-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/nousresearch-meta-llama-4941-v68/flywheel_model.0.safetensors
nousresearch-meta-llama-4941-v68-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:769: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
nousresearch-meta-llama-4941-v68-mkmlizer: warnings.warn(
nousresearch-meta-llama-4941-v68-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
nousresearch-meta-llama-4941-v68-mkmlizer: warnings.warn(
nousresearch-meta-llama-4941-v68-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
nousresearch-meta-llama-4941-v68-mkmlizer: return self.fget.__get__(instance, owner)()
nousresearch-meta-llama-4941-v68-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
nousresearch-meta-llama-4941-v68-mkmlizer: Saving duration: 0.428s
nousresearch-meta-llama-4941-v68-mkmlizer: Processed model rirv938/reward_gpt2_medium_preference_24m_e2 in 6.045s
nousresearch-meta-llama-4941-v68-mkmlizer: creating bucket guanaco-reward-models
nousresearch-meta-llama-4941-v68-mkmlizer: Bucket 's3://guanaco-reward-models/' created
nousresearch-meta-llama-4941-v68-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/nousresearch-meta-llama-4941-v68_reward
nousresearch-meta-llama-4941-v68-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v68_reward/config.json
nousresearch-meta-llama-4941-v68-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v68_reward/special_tokens_map.json
nousresearch-meta-llama-4941-v68-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v68_reward/tokenizer_config.json
nousresearch-meta-llama-4941-v68-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v68_reward/vocab.json
nousresearch-meta-llama-4941-v68-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/nousresearch-meta-llama-4941-v68_reward/merges.txt
nousresearch-meta-llama-4941-v68-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/nousresearch-meta-llama-4941-v68_reward/tokenizer.json
nousresearch-meta-llama-4941-v68-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/nousresearch-meta-llama-4941-v68_reward/reward.tensors
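Each cp line above copies one artifact from the local cache into the destination bucket. A minimal boto3 sketch of the same upload step, with bucket name, key prefix, and cache path taken from the log; the loop itself is illustrative, not mkmlizer's actual code:

import os
import boto3

# Re-creation of the upload step logged above, for illustration.
s3 = boto3.client("s3")
bucket = "guanaco-reward-models"
prefix = "nousresearch-meta-llama-4941-v68_reward"
cache_dir = "/tmp/reward_cache"

for name in os.listdir(cache_dir):
    # e.g. config.json, tokenizer.json, reward.tensors
    s3.upload_file(os.path.join(cache_dir, name), bucket, f"{prefix}/{name}")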
Job nousresearch-meta-llama-4941-v68-mkmlizer completed after 84.24s with status: succeeded
Stopping job with name nousresearch-meta-llama-4941-v68-mkmlizer
Pipeline stage MKMLizer completed in 85.29s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.12s
Running pipeline stage ISVCDeployer
Creating inference service nousresearch-meta-llama-4941-v68
Waiting for inference service nousresearch-meta-llama-4941-v68 to be ready
Inference service nousresearch-meta-llama-4941-v68 ready after 40.2234628200531s
Pipeline stage ISVCDeployer completed in 47.44s
Running pipeline stage StressChecker
Received healthy response to inference request in 1.8439924716949463s
Received healthy response to inference request in 0.385575532913208s
Received healthy response to inference request in 1.0842113494873047s
Received healthy response to inference request in 1.093280553817749s
Received healthy response to inference request in 0.3908698558807373s
5 requests
0 failed requests
5th percentile: 0.38663439750671386
10th percentile: 0.3876932621002197
20th percentile: 0.38981099128723146
30th percentile: 0.5295381546020508
40th percentile: 0.8068747520446777
50th percentile: 1.0842113494873047
60th percentile: 1.0878390312194823
70th percentile: 1.0914667129516602
80th percentile: 1.2434229373931887
90th percentile: 1.5437077045440675
95th percentile: 1.6938500881195067
99th percentile: 1.8139639949798583
mean time: 0.9595859527587891
Pipeline stage StressChecker completed in 5.54s
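The percentile block above can be reproduced from the five reported latencies with standard linear interpolation between order statistics (numpy's default percentile method). Whether the stress checker actually uses numpy is an assumption, but the figures match to the digit:

import numpy as np

# The five request latencies reported above, in seconds.
latencies = [
    1.8439924716949463,
    0.385575532913208,
    1.0842113494873047,
    1.093280553817749,
    0.3908698558807373,
]

# numpy's default linear-interpolation percentile reproduces the log,
# e.g. 5th percentile 0.38663439750671386 and 90th 1.5437077045440675.
for p in (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99):
    print(f"{p}th percentile: {np.percentile(latencies, p)}")
print("mean time:", np.mean(latencies))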
nousresearch-meta-llama_4941_v68 status is now deployed due to DeploymentManager action
nousresearch-meta-llama_4941_v68 status is now inactive due to auto-deactivation of underperforming models

Usage Metrics: [chart not captured in this log]

Latency Metrics: [chart not captured in this log]