Combining Adapter and Pretrained Model in Hugging Face Transformers for Enhanced Model Deployment


Merging a fine-tuned adapter with a pretrained model in Hugging Face Transformers and pushing the resulting model to the Hugging Face Model Hub involves a few steps.

This guide assumes you are already familiar with Hugging Face Transformers and have fine-tuned an adapter. If not, you can refer to the official documentation for more details: https://huggingface.co/transformers/

Follow these steps:

1. Install the required package. Note that adapter-transformers is a drop-in replacement for transformers (it installs under the same transformers package name), so it is the only install you need:

pip install adapter-transformers

2. Import the necessary libraries:

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# With adapter-transformers installed, these transformers classes already
# carry the adapter methods used below (load_adapter, set_active_adapters,
# train_adapter), so no separate adapter import is needed.

3. Load the pretrained model and tokenizer:

pretrained_model_name = ""  # e.g. "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_model_name)

4. Load the fine-tuned adapter and activate it:

adapter_path = ""  # local path or Hub identifier of your fine-tuned adapter

# load_adapter() reads the adapter's saved configuration automatically
# and returns the adapter's name, so no explicit AdapterConfig is needed.
adapter_name = model.load_adapter(adapter_path)
model.set_active_adapters(adapter_name)

# Optional: train only the adapter weights; this also freezes the
# pretrained model's weights.
model.train_adapter([adapter_name])
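
5. Merge the adapter into the base model and push to the Hub. Note that adapter-transformers can only fold LoRA-family adapters into the base weights via merge_adapter(); bottleneck adapters cannot be merged and are shipped alongside the model instead. A minimal sketch, assuming a LoRA-style adapter, that you are logged in via huggingface-cli login, and a placeholder repository id:

# Fold the LoRA-style adapter weights back into the base model.
model.merge_adapter(adapter_name)

# Push the merged model and tokenizer ("my-org/my-merged-model" is a
# placeholder repository id).
model.push_to_hub("my-org/my-merged-model")
tokenizer.push_to_hub("my-org/my-merged-model")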

6. Add a Model Card (Optional): You may want to add a model card describing your model's performance, intended use, and any other relevant information. You can write the model card in Markdown and include it as README.md in your model repository, as sketched below.
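
For example, assuming you have written a README.md locally and are logged in, one way to upload it is with the huggingface_hub client (the repository id is again a placeholder):

from huggingface_hub import upload_file

# Upload the locally written model card to the repository root.
upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="my-org/my-merged-model",  # placeholder repository id
)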

7. Publish the Model: After pushing your model to the Hugging Face Model Hub, you can finish publishing it (for example, setting its visibility and tags) by following the instructions on the Hugging Face website.
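
To verify the upload, you can list the files now stored in the repository (placeholder id again):

from huggingface_hub import model_info

# List the files stored in the model repository.
info = model_info("my-org/my-merged-model")
print([f.rfilename for f in info.siblings])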

That's it! Your fine-tuned adapter merged with the pretrained model should now be available on the Hugging Face Model Hub for others to use. Make sure to follow the Hugging Face guidelines and policies when publishing your model.

How to get the size of a Hugging Face pretrained model?

You can get the size (in terms of disk storage) of a Hugging Face pretrained model using the transformers library in Python. Here's how you can do it:

import os
import shutil
from transformers import AutoModel

model_name = "your-pretrained-model-name"  # Replace with the pretrained model you want to check
model = AutoModel.from_pretrained(model_name)

# Save the model temporarily, then sum the sizes of the saved files.
# (save_pretrained() does not return a size, so we measure the directory.)
model.save_pretrained("./temp_model")
model_size_bytes = sum(
    os.path.getsize(os.path.join(root, name))
    for root, _, files in os.walk("./temp_model")
    for name in files
)
shutil.rmtree("./temp_model")  # Remove the temporary directory

# Convert to megabytes (MB)
model_size_MB = model_size_bytes / (1024 * 1024)
print(f"Size of {model_name}: {model_size_MB:.2f} MB")

Replace "your-pretrained-model-name" with the name or path to the pretrained model you want to check. This code loads the model, saves it temporarily to a directory, measures the size of the saved files, and then converts the size to megabytes for easy readability. Finally, it prints out the model size in MB.
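
If you only need the in-memory weight size and want to skip the temporary save, you can sum the parameter and buffer sizes directly. Note this counts only the weights, not the tokenizer or config files that would sit on disk:

from transformers import AutoModel

model = AutoModel.from_pretrained("your-pretrained-model-name")

# Estimate size from parameters and buffers (weights only).
param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
buffer_bytes = sum(b.numel() * b.element_size() for b in model.buffers())
print(f"Approximate weight size: {(param_bytes + buffer_bytes) / (1024 ** 2):.2f} MB")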
