submission_id: meta-llama-meta-llama-3-8b_v5
developer_uid: Meliodia
status: inactive
model_repo: meta-llama/Meta-Llama-3-8B
reward_repo: ChaiML/gpt2_medium_pairwise_60m_step_937500
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
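The generation_params above are standard best-of-N sampling settings. A minimal sketch of how they might be expressed against a vLLM-style backend (the serving stack is an assumption; parameter names follow vLLM's public SamplingParams API), with the 16 candidates then reranked externally by the reward model listed in reward_repo:

    from vllm import SamplingParams  # assumption: a vLLM-style sampling backend

    # Mirrors generation_params: 16 candidate completions per request, stop on
    # newline, and at most 64 generated tokens. max_input_tokens (512) would be
    # enforced by truncating the prompt before generation, not here.
    sampling_params = SamplingParams(
        n=16,                      # best_of=16 candidates, scored by the reward model afterwards
        temperature=1.0,
        top_p=1.0,
        top_k=40,
        min_p=0.0,
        presence_penalty=0.0,
        frequency_penalty=0.0,
        stop=["\n"],
        max_tokens=64,
    )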
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
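The formatter and reward_formatter entries are plain Python format strings. A hedged sketch of how a conversation could be assembled from them (function and variable names here are illustrative, not the pipeline's actual code):

    formatter = {
        "memory_template": "{bot_name}'s Persona: {memory}\n####\n",
        "prompt_template": "{prompt}\n<START>\n",
        "bot_template": "{bot_name}: {message}\n",
        "user_template": "{user_name}: {message}\n",
        "response_template": "{bot_name}:",
    }

    def build_prompt(bot_name, user_name, memory, prompt, turns):
        # turns: list of (speaker, message) pairs, speaker in {"bot", "user"}
        text = formatter["memory_template"].format(bot_name=bot_name, memory=memory)
        text += formatter["prompt_template"].format(prompt=prompt)
        for speaker, message in turns:
            if speaker == "bot":
                text += formatter["bot_template"].format(bot_name=bot_name, message=message)
            else:
                text += formatter["user_template"].format(user_name=user_name, message=message)
        # The model completes from here; generation stops at "\n" per stopping_words.
        return text + formatter["response_template"].format(bot_name=bot_name)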
timestamp: 2024-07-02T21:14:35+00:00
model_name: meta-base-model
model_group: meta-llama/Meta-Llama-3-
num_battles: 13983
num_wins: 6929
celo_rating: 1173.5
propriety_score: 0.6946730186227804
propriety_total_count: 6927.0
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: meta-base-model
ineligible_reason: None
language_model: meta-llama/Meta-Llama-3-8B
model_size: 8B
reward_model: ChaiML/gpt2_medium_pairwise_60m_step_937500
us_pacific_date: 2024-07-02
win_ratio: 0.4955302867768004
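Note: win_ratio is simply num_wins / num_battles = 6929 / 13983 ≈ 0.4955.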
Resubmit model
Running pipeline stage MKMLizer
Starting job with name meta-llama-meta-llama-3-8b-v5-mkmlizer
Waiting for job on meta-llama-meta-llama-3-8b-v5-mkmlizer to finish
meta-llama-meta-llama-3-8b-v5-mkmlizer: Downloaded to shared memory in 83.558s
meta-llama-meta-llama-3-8b-v5-mkmlizer: quantizing model to /dev/shm/model_cache
meta-llama-meta-llama-3-8b-v5-mkmlizer: Saving flywheel model at /dev/shm/model_cache
meta-llama-meta-llama-3-8b-v5-mkmlizer: Loading 0: 0/291 → 281/291 shards (progress bar output condensed)
meta-llama-meta-llama-3-8b-v5-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
meta-llama-meta-llama-3-8b-v5-mkmlizer: quantized model in 26.697s
meta-llama-meta-llama-3-8b-v5-mkmlizer: Processed model meta-llama/Meta-Llama-3-8B in 110.255s
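(110.255s total = 83.558s download to shared memory + 26.697s quantization.)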
meta-llama-meta-llama-3-8b-v5-mkmlizer: creating bucket guanaco-mkml-models
meta-llama-meta-llama-3-8b-v5-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
meta-llama-meta-llama-3-8b-v5-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v5
meta-llama-meta-llama-3-8b-v5-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v5/special_tokens_map.json
meta-llama-meta-llama-3-8b-v5-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v5/config.json
meta-llama-meta-llama-3-8b-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v5/tokenizer_config.json
meta-llama-meta-llama-3-8b-v5-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v5/tokenizer.json
meta-llama-meta-llama-3-8b-v5-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/meta-llama-meta-llama-3-8b-v5/flywheel_model.0.safetensors
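The cp lines above correspond to uploading each cached file into the model bucket. A minimal sketch with boto3 (bucket, prefix, and file names are taken from the log; the client code itself is illustrative):

    import os
    import boto3

    s3 = boto3.client("s3")
    bucket = "guanaco-mkml-models"
    prefix = "meta-llama-meta-llama-3-8b-v5"
    cache_dir = "/dev/shm/model_cache"

    for name in ["special_tokens_map.json", "config.json", "tokenizer_config.json",
                 "tokenizer.json", "flywheel_model.0.safetensors"]:
        # Mirrors: cp /dev/shm/model_cache/<name> s3://guanaco-mkml-models/<prefix>/<name>
        s3.upload_file(os.path.join(cache_dir, name), bucket, f"{prefix}/{name}")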
meta-llama-meta-llama-3-8b-v5-mkmlizer: loading reward model from ChaiML/gpt2_medium_pairwise_60m_step_937500
meta-llama-meta-llama-3-8b-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:919: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-3-8b-v5-mkmlizer: warnings.warn(
meta-llama-meta-llama-3-8b-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
meta-llama-meta-llama-3-8b-v5-mkmlizer: warnings.warn(
meta-llama-meta-llama-3-8b-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:769: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-3-8b-v5-mkmlizer: warnings.warn(
meta-llama-meta-llama-3-8b-v5-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
meta-llama-meta-llama-3-8b-v5-mkmlizer: warnings.warn(
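The FutureWarnings above come from passing the deprecated use_auth_token argument while loading the reward model. A hedged sketch of the fix the warning prescribes (the model class and token handling are assumptions about the loader, not the pipeline's actual code):

    import os
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    reward_repo = "ChaiML/gpt2_medium_pairwise_60m_step_937500"
    hf_token = os.environ.get("HF_TOKEN")  # assumption: token provided via environment

    # Pass token= instead of the deprecated use_auth_token= to silence the FutureWarning.
    tokenizer = AutoTokenizer.from_pretrained(reward_repo, token=hf_token)
    reward_model = AutoModelForSequenceClassification.from_pretrained(reward_repo, token=hf_token)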
meta-llama-meta-llama-3-8b-v5-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
meta-llama-meta-llama-3-8b-v5-mkmlizer: Saving duration: 0.397s
meta-llama-meta-llama-3-8b-v5-mkmlizer: Processed model ChaiML/gpt2_medium_pairwise_60m_step_937500 in 12.568s
meta-llama-meta-llama-3-8b-v5-mkmlizer: creating bucket guanaco-reward-models
meta-llama-meta-llama-3-8b-v5-mkmlizer: Bucket 's3://guanaco-reward-models/' created
meta-llama-meta-llama-3-8b-v5-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v5_reward
Failed to get response for submission blend_fojit_2024-07-01: ('http://mistralai-mixtral-8x7b-3473-v33-predictor-default.tenant-chaiml-guanaco.knative.ord1.coreweave.cloud/v1/models/GPT-J-6B-lit-v2:predict', 'read tcp 127.0.0.1:38050->127.0.0.1:8080: read: connection reset by peer\n')
meta-llama-meta-llama-3-8b-v5-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v5_reward/config.json
meta-llama-meta-llama-3-8b-v5-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v5_reward/tokenizer_config.json
meta-llama-meta-llama-3-8b-v5-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v5_reward/special_tokens_map.json
meta-llama-meta-llama-3-8b-v5-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v5_reward/merges.txt
meta-llama-meta-llama-3-8b-v5-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v5_reward/vocab.json
meta-llama-meta-llama-3-8b-v5-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v5_reward/tokenizer.json
meta-llama-meta-llama-3-8b-v5-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/meta-llama-meta-llama-3-8b-v5_reward/reward.tensors
Connection pool is full, discarding connection
Job meta-llama-meta-llama-3-8b-v5-mkmlizer completed after 154.84s with status: succeeded
Stopping job with name meta-llama-meta-llama-3-8b-v5-mkmlizer
Pipeline stage MKMLizer completed in 155.80s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service meta-llama-meta-llama-3-8b-v5
Waiting for inference service meta-llama-meta-llama-3-8b-v5 to be ready
Inference service meta-llama-meta-llama-3-8b-v5 ready after 40.22654438018799s
Pipeline stage ISVCDeployer completed in 47.72s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.0838983058929443s
Received healthy response to inference request in 1.2167162895202637s
Received healthy response to inference request in 1.0962610244750977s
Received healthy response to inference request in 1.2089500427246094s
Received healthy response to inference request in 1.2124125957489014s
5 requests
0 failed requests
5th percentile: 1.118798828125
10th percentile: 1.1413366317749023
20th percentile: 1.186412239074707
30th percentile: 1.2096425533294677
40th percentile: 1.2110275745391845
50th percentile: 1.2124125957489014
60th percentile: 1.2141340732574464
70th percentile: 1.2158555507659912
80th percentile: 1.3901526927948
90th percentile: 1.737025499343872
95th percentile: 1.910461902618408
99th percentile: 2.049211025238037
mean time: 1.3636476516723632
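The percentile and mean figures above follow directly from the five response times with linear interpolation, which is numpy's default. A small reproduction sketch:

    import numpy as np

    latencies = [2.0838983058929443, 1.2167162895202637, 1.0962610244750977,
                 1.2089500427246094, 1.2124125957489014]

    for q in [5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99]:
        print(f"{q}th percentile: {np.percentile(latencies, q)}")
    print(f"mean time: {np.mean(latencies)}")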
Pipeline stage StressChecker completed in 7.44s
meta-llama-meta-llama-3-8b_v5 status is now deployed due to DeploymentManager action
meta-llama-meta-llama-3-8b_v5 status is now inactive due to auto-deactivation of underperforming models
