submission_id: rochatai-llama3-8b-cn-ro_9773_v1
developer_uid: Meliodia
status: inactive
model_repo: RochatAI/llama3-8B-cn-rochat-v1
reward_repo: ChaiML/reward_gpt2_medium_preference_24m_e2
generation_params: {'temperature': 1.0, 'top_p': 1.0, 'min_p': 0.0, 'top_k': 40, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'stopping_words': ['\n'], 'max_input_tokens': 512, 'best_of': 16, 'max_output_tokens': 64}
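The `best_of: 16` setting above pairs with the reward model listed in this record: 16 candidate completions are drawn and the one the reward model scores highest is served. A minimal sketch of that selection step (the `generate` and `score` callables here are illustrative stand-ins, not the platform's API):

```python
def pick_best_of_n(generate, score, n=16):
    # Best-of-n reranking: draw n candidate completions and keep
    # the one the reward model scores highest.
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)

# Illustration with stub callables: the "reward" is the value itself,
# so the largest draw should win.
cands = iter([0.2, 0.9, 0.5])
best = pick_best_of_n(lambda: next(cands), lambda x: x, n=3)
```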
formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
reward_formatter: {'memory_template': "{bot_name}'s Persona: {memory}\n####\n", 'prompt_template': '{prompt}\n<START>\n', 'bot_template': '{bot_name}: {message}\n', 'user_template': '{user_name}: {message}\n', 'response_template': '{bot_name}:', 'truncate_by_message': False}
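Both formatters use the same templates; concatenated in order (memory, prompt, one line per message, then the response stub) they yield the string fed to the model. A sketch of that assembly, assuming plain string concatenation (`build_prompt` is a hypothetical helper, not the production code):

```python
def build_prompt(memory, prompt, turns, bot_name, user_name):
    # Mirrors the templates above: memory_template, prompt_template,
    # then bot_template/user_template per message, then response_template.
    out = f"{bot_name}'s Persona: {memory}\n####\n"
    out += f"{prompt}\n<START>\n"
    for speaker, message in turns:
        out += f"{speaker}: {message}\n"
    out += f"{bot_name}:"   # open stub the model completes
    return out

result = build_prompt("kind", "hello",
                      [("User", "hi"), ("Bot", "hey")],
                      bot_name="Bot", user_name="User")
```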
timestamp: 2024-06-28T22:29:27+00:00
model_name: rochatai-llama3-8b-cn-ro_9773_v1
model_group: RochatAI/llama3-8B-cn-ro
num_battles: 24420
num_wins: 11828
celo_rating: 1168.42
propriety_score: 0.7112999739831758
propriety_total_count: 11531.0
submission_type: basic
model_architecture: LlamaForCausalLM
model_num_parameters: 8030261248.0
best_of: 16
max_input_tokens: 512
max_output_tokens: 64
display_name: rochatai-llama3-8b-cn-ro_9773_v1
ineligible_reason: None
language_model: RochatAI/llama3-8B-cn-rochat-v1
model_size: 8B
reward_model: ChaiML/reward_gpt2_medium_preference_24m_e2
us_pacific_date: 2024-06-28
win_ratio: 0.48435708435708436
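The reported win_ratio is simply num_wins divided by num_battles, which can be checked directly:

```python
num_battles = 24420
num_wins = 11828
win_ratio = num_wins / num_battles
# 11828 / 24420 ≈ 0.48435708435708436, matching the value above
```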
Resubmit model
Running pipeline stage MKMLizer
Starting job with name rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer
Waiting for job on rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer to finish
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ╔═════════════════════════════════════════════════════════════════════╗
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ _____ __ __ ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ / _/ /_ ___ __/ / ___ ___ / / ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ / _/ / // / |/|/ / _ \/ -_) -_) / ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ /_//_/\_, /|__,__/_//_/\__/\__/_/ ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ /___/ ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ Version: 0.8.14 ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ Copyright 2023 MK ONE TECHNOLOGIES Inc. ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ https://mk1.ai ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ The license key for the current software has been verified as ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ belonging to: ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ Chai Research Corp. ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ Account ID: 7997a29f-0ceb-4cc7-9adf-840c57b4ae6f ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ Expiration: 2024-07-15 23:59:59 ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ║ ║
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: ╚═════════════════════════════════════════════════════════════════════╝
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py:131: FutureWarning: 'list_files_info' (from 'huggingface_hub.hf_api') is deprecated and will be removed from version '0.23'. Use `list_repo_tree` and `get_paths_info` instead.
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: warnings.warn(warning_message, FutureWarning)
Connection pool is full, discarding connection: %s
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: Downloaded to shared memory in 48.767s
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: quantizing model to /dev/shm/model_cache
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: Saving flywheel model at /dev/shm/model_cache
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: Loading 0: 0%| | 0/291 [00:00<?, ?it/s] ... Loading 0: 99%|█████████▊| 287/291 [00:02<00:00, 111.67it/s]
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: quantized model in 23.334s
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: Processed model RochatAI/llama3-8B-cn-rochat-v1 in 74.761s
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: creating bucket guanaco-mkml-models
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: Bucket 's3://guanaco-mkml-models/' created
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: uploading /dev/shm/model_cache to s3://guanaco-mkml-models/rochatai-llama3-8b-cn-ro-9773-v1
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: cp /dev/shm/model_cache/config.json s3://guanaco-mkml-models/rochatai-llama3-8b-cn-ro-9773-v1/config.json
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: cp /dev/shm/model_cache/special_tokens_map.json s3://guanaco-mkml-models/rochatai-llama3-8b-cn-ro-9773-v1/special_tokens_map.json
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer.json s3://guanaco-mkml-models/rochatai-llama3-8b-cn-ro-9773-v1/tokenizer.json
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: cp /dev/shm/model_cache/tokenizer_config.json s3://guanaco-mkml-models/rochatai-llama3-8b-cn-ro-9773-v1/tokenizer_config.json
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: cp /dev/shm/model_cache/flywheel_model.0.safetensors s3://guanaco-mkml-models/rochatai-llama3-8b-cn-ro-9773-v1/flywheel_model.0.safetensors
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: loading reward model from ChaiML/reward_gpt2_medium_preference_24m_e2
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:913: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: warnings.warn(
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:757: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: warnings.warn(
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:468: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: warnings.warn(
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: return self.fget.__get__(instance, owner)()
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: Saving model to /tmp/reward_cache/reward.tensors
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: Saving duration: 0.434s
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: Processed model ChaiML/reward_gpt2_medium_preference_24m_e2 in 4.051s
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: creating bucket guanaco-reward-models
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: Bucket 's3://guanaco-reward-models/' created
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: uploading /tmp/reward_cache to s3://guanaco-reward-models/rochatai-llama3-8b-cn-ro-9773-v1_reward
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: cp /tmp/reward_cache/config.json s3://guanaco-reward-models/rochatai-llama3-8b-cn-ro-9773-v1_reward/config.json
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: cp /tmp/reward_cache/merges.txt s3://guanaco-reward-models/rochatai-llama3-8b-cn-ro-9773-v1_reward/merges.txt
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: cp /tmp/reward_cache/special_tokens_map.json s3://guanaco-reward-models/rochatai-llama3-8b-cn-ro-9773-v1_reward/special_tokens_map.json
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: cp /tmp/reward_cache/tokenizer.json s3://guanaco-reward-models/rochatai-llama3-8b-cn-ro-9773-v1_reward/tokenizer.json
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: cp /tmp/reward_cache/vocab.json s3://guanaco-reward-models/rochatai-llama3-8b-cn-ro-9773-v1_reward/vocab.json
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: cp /tmp/reward_cache/tokenizer_config.json s3://guanaco-reward-models/rochatai-llama3-8b-cn-ro-9773-v1_reward/tokenizer_config.json
rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer: cp /tmp/reward_cache/reward.tensors s3://guanaco-reward-models/rochatai-llama3-8b-cn-ro-9773-v1_reward/reward.tensors
Job rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer completed after 104.54s with status: succeeded
Stopping job with name rochatai-llama3-8b-cn-ro-9773-v1-mkmlizer
Pipeline stage MKMLizer completed in 105.61s
Running pipeline stage MKMLKubeTemplater
Pipeline stage MKMLKubeTemplater completed in 0.10s
Running pipeline stage ISVCDeployer
Creating inference service rochatai-llama3-8b-cn-ro-9773-v1
Waiting for inference service rochatai-llama3-8b-cn-ro-9773-v1 to be ready
Inference service rochatai-llama3-8b-cn-ro-9773-v1 ready after 40.26698160171509s
Pipeline stage ISVCDeployer completed in 46.83s
Running pipeline stage StressChecker
Received healthy response to inference request in 2.2245402336120605s
Received healthy response to inference request in 1.2666306495666504s
Received healthy response to inference request in 1.4852025508880615s
Received healthy response to inference request in 1.082104206085205s
Received healthy response to inference request in 1.2401401996612549s
5 requests
0 failed requests
5th percentile: 1.113711404800415
10th percentile: 1.145318603515625
20th percentile: 1.2085330009460449
30th percentile: 1.245438289642334
40th percentile: 1.2560344696044923
50th percentile: 1.2666306495666504
60th percentile: 1.3540594100952148
70th percentile: 1.4414881706237792
80th percentile: 1.6330700874328614
90th percentile: 1.928805160522461
95th percentile: 2.0766726970672607
99th percentile: 2.1949667263031007
mean time: 1.4597235679626466
Pipeline stage StressChecker completed in 8.04s
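The percentiles above are consistent with linear interpolation over the five sorted latencies (the same method as numpy's default percentile); a short sketch that reproduces them:

```python
def percentile(sorted_xs, p):
    # Linear-interpolation percentile over already-sorted data.
    k = (len(sorted_xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(sorted_xs) - 1)
    return sorted_xs[lo] + (k - lo) * (sorted_xs[hi] - sorted_xs[lo])

times = sorted([2.2245402336120605, 1.2666306495666504,
                1.4852025508880615, 1.082104206085205,
                1.2401401996612549])
mean_time = sum(times) / len(times)
```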
rochatai-llama3-8b-cn-ro_9773_v1 status is now deployed due to DeploymentManager action
rochatai-llama3-8b-cn-ro_9773_v1 status is now inactive due to auto deactivation of underperforming models

Usage Metrics

Latency Metrics