Hello! 😃 In this tutorial I will show you how to use AnimeGAN2 models to apply various anime-style filters to an input image using PyTorch. AnimeGAN2 is a generative adversarial network (GAN) based model that can generate anime-style images from real-world images. We will be using the "bryandlee/animegan2-pytorch" repository, which provides pre-trained AnimeGAN2 models for several different anime styles.
Before proceeding with this tutorial, you should have Python 3 and pip installed, along with some basic Python knowledge.
First, it's recommended to set up a virtual environment, which can be created via the following command:
python3 -m venv env
And then activated via:
source env/bin/activate
(On Windows, use env\Scripts\activate instead.)
Next we need to install the dependencies. Create a file called "requirements.txt" and populate it with the following:
torchvision
torch
Pillow
To install the dependencies, run the following command:
pip install -r requirements.txt
Done! Now we can actually start coding. 😆
Create a file called "main.py". First we need to declare the imports and load the pre-trained AnimeGAN2 models like so:
from PIL import Image
import torch
import argparse

MODEL_CELEBA = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="celeba_distill")
MODEL_FACEV1 = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="face_paint_512_v1")
MODEL_FACEV2 = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="face_paint_512_v2")
MODEL_PAPRIKA = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="paprika")
FACE2PAINT = torch.hub.load("bryandlee/animegan2-pytorch:main", "face2paint", size=512)
In the above code we use the torch.hub.load method to download the pre-trained AnimeGAN2 generator models, one for each filter style. We also load "FACE2PAINT", a helper function that runs a given generator over a real-world image and returns the painted result. We will use these together to apply the filters to the input image. 😺
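If you only need one of the filters, loading all four generators is wasteful. As a small sketch (not part of the original script), you could map short style names to the pretrained weight strings used above and load just the one you need; the `weights_for` helper name is hypothetical:

```python
# Hypothetical helper (not in the original script): map short style names to
# the `pretrained` strings passed to torch.hub.load above, so only the model
# the user asks for needs to be downloaded.
PRETRAINED = {
    "celeba": "celeba_distill",
    "facev1": "face_paint_512_v1",
    "facev2": "face_paint_512_v2",
    "paprika": "paprika",
}

def weights_for(style):
    """Return the torch.hub `pretrained` string for a given style name."""
    try:
        return PRETRAINED[style]
    except KeyError:
        raise ValueError(f"unknown style {style!r}; choose from {sorted(PRETRAINED)}")
```

The returned string could then be passed straight to torch.hub.load in place of the hard-coded values.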
Next we will define a method to apply the filters to the image:
def applyFilters(image):
    out_celeba = FACE2PAINT(MODEL_CELEBA, image)
    out_facev1 = FACE2PAINT(MODEL_FACEV1, image)
    out_facev2 = FACE2PAINT(MODEL_FACEV2, image)
    out_paprika = FACE2PAINT(MODEL_PAPRIKA, image)
    # save images
    out_celeba.save("out_celeba.jpg")
    out_facev1.save("out_facev1.jpg")
    out_facev2.save("out_facev2.jpg")
    out_paprika.save("out_paprika.jpg")
The above method takes a PIL Image object as input, runs it through each of the four models, and saves the resulting output images to disk.
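As a possible refactor (my own sketch, not part of the original tutorial), the four near-identical calls could be collapsed into a loop over a dict of models. The `apply_filters` function below takes the models and the face2paint callable as parameters, so adding a new style only takes one dict entry:

```python
import os

# Hypothetical loop-based alternative to applyFilters (not from the original
# tutorial): run every model in `models` over the image and save each result.
def apply_filters(image, models, face2paint, out_dir="."):
    """Save one output file per style and return the paths written.

    `models` maps a style name to a loaded generator; `face2paint` is the
    helper loaded from torch.hub that returns a PIL image.
    """
    paths = []
    for name, model in models.items():
        out = face2paint(model, image)
        path = os.path.join(out_dir, f"out_{name}.jpg")
        out.save(path)
        paths.append(path)
    return paths
```

Called with `apply_filters(image, {"celeba": MODEL_CELEBA, "paprika": MODEL_PAPRIKA}, FACE2PAINT)`, it would produce the same kind of files as the original method.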
Finally, we add the main entry point, which is as follows:
if __name__ == "__main__":
    ap = argparse.ArgumentParser()
    ap.add_argument("-i", "--image", required=True, help="Path to input file")
    args = vars(ap.parse_args())
    image = Image.open(args["image"]).convert("RGB")
    applyFilters(image)
    print("Done!")
The above code uses the argparse module to parse an "image" command line argument, which allows the user to specify the input file to apply the filters to.
To run the script you can use the following command:
python main.py -i input.jpg
Replace input.jpg with the image you want to apply the filters to. If done correctly, you should now see 4 new images in the current directory. 🤓
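If you want to filter a whole folder of images rather than a single file, a small traversal helper would do it. This is a hypothetical extension (the original script only takes one file); `process` stands in for the applyFilters call so the traversal logic can be shown on its own:

```python
from pathlib import Path

# Hypothetical batch helper, not part of the original tutorial: call `process`
# on every matching image in `folder` and return the file names handled.
def filter_directory(folder, process, pattern="*.jpg"):
    paths = sorted(Path(folder).glob(pattern))
    for path in paths:
        process(path)
    return [p.name for p in paths]
```

In the real script, `process` would open each path with PIL and call applyFilters on it.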
It works not only with people, but also with animals; here is my cat with the filters applied. 😸
In this tutorial I showed how to apply various anime-style filters to an input image. I was actually surprised you could do this with such a small amount of code. 😮 I had a lot of fun trying this out. Hopefully it's been helpful to you, and I hope you try it out with various images.
Happy Coding! 😁
Like my work? I post about a variety of topics, so if you would like to see more, please like and follow me. Also, I love coffee.
If you are looking to learn algorithm patterns to ace the coding interview, I recommend the following course.