Applying Anime Style Filters With Python


Hello! 😃 In this tutorial I will show you how to use AnimeGAN2 models to apply various anime-style filters to an input image using PyTorch. AnimeGAN2 is a generative adversarial network (GAN) based model that can generate anime-style images from real-world images. We will be using the "bryandlee/animegan2-pytorch" package, which provides pre-trained AnimeGAN2 models for several different anime filter styles.


Before proceeding with this tutorial, it is recommended to have the following:

  • Basic knowledge of Python
  • Some familiarity with PyTorch and neural networks

Setting Up The Virtual Environment

First, it's recommended to set up a virtual environment, which can be created via the following command:

python3 -m venv env

And then activated via:

source env/bin/activate

Installing The Dependencies

Next we need to install the dependencies. Create a file called "requirements.txt" and populate it with the following:

torch
torchvision
Pillow

To install the dependencies, run the following command:

pip install -r requirements.txt

Done! Now we can actually start coding. 😆

Coding The Project

Create a file for the script. First we need to declare the imports and load the pre-trained AnimeGAN2 models like so:

from PIL import Image
import torch
import argparse

MODEL_CELEBA = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="celeba_distill")
MODEL_FACEV1 = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="face_paint_512_v1")
MODEL_FACEV2 = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="face_paint_512_v2")
MODEL_PAPRIKA = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="paprika")

FACE2PAINT = torch.hub.load("bryandlee/animegan2-pytorch:main", "face2paint", size=512)

In the above code we use the torch.hub.load method to download the pre-trained AnimeGAN2 models for the different anime filter styles. We also load the "FACE2PAINT" model, which converts a real-world image into a painting-style image. We will use these models to apply the filters to the input image. 😺
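As a side note (this is not from the original script), the loaded generators can be switched to eval mode and moved to a GPU when one is available, which speeds up inference. The sketch below uses torch.nn.Identity as a stand-in for a downloaded AnimeGAN2 generator so it runs without any network access:

```python
import torch

# Pick the GPU when one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for one of the AnimeGAN2 generators loaded via torch.hub
model = torch.nn.Identity().to(device).eval()

with torch.inference_mode():  # no gradients are needed for inference
    x = torch.rand(1, 3, 512, 512, device=device)  # a dummy 512x512 RGB batch
    out = model(x)

print(out.shape)  # -> torch.Size([1, 3, 512, 512])
```

In the real script you would apply `.eval()` the same way to MODEL_CELEBA, MODEL_FACEV1, MODEL_FACEV2 and MODEL_PAPRIKA after loading them.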

Next we will define a method to apply the filters to the image:

def applyFilters(image):
    out_celeba = FACE2PAINT(MODEL_CELEBA, image)
    out_facev1 = FACE2PAINT(MODEL_FACEV1, image)
    out_facev2 = FACE2PAINT(MODEL_FACEV2, image)
    out_paprika = FACE2PAINT(MODEL_PAPRIKA, image)

    # save the output images"out_celeba.jpg")"out_facev1.jpg")"out_facev2.jpg")"out_paprika.jpg")

The above method takes an Image object as input, runs each model on it, and saves the four output images to the current directory.

The final method is the main method which is as follows:

if __name__ == "__main__":
    ap = argparse.ArgumentParser()
    ap.add_argument("-i", "--image", required = True, help = "Path to input file")
    args = vars(ap.parse_args())

    image =["image"]).convert("RGB")

The above block uses the argparse module to parse an "image" command line argument, allowing the user to provide the input file the filters will be applied to.
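If you want to try the argument parsing on its own, parse_args also accepts an explicit list of arguments instead of reading sys.argv, which is handy for quick experiments:

```python
import argparse

# Same parser as in the main block
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="Path to input file")

# Pass an explicit argument list instead of reading sys.argv
args = vars(ap.parse_args(["-i", "input.jpg"]))
print(args["image"])  # -> input.jpg
```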

Running The Script

To run the script you can use the following command:

python -i input.jpg

Replace input.jpg with the image you want to apply the filters to. If done correctly you should now see 4 new images in the current directory. 🤓

It doesn't only work with people; it also works with animals. Here is my cat with the filters applied. 😸

Celeba: Celeba filter

FaceV1: FaceV1 filter

FaceV2: FaceV2 filter

Paprika: Paprika filter


In this tutorial I showed how to apply various anime-style filters to an input image. I was actually surprised that you could do this with such a small amount of code. 😮 But I had a lot of fun trying this out. Hopefully it's been helpful to you, and I hope you try it out with various images.

Happy Coding! 😁

Like my work? I post about a variety of topics, so if you would like to see more, please like and follow me. Also I love coffee.

"Buy Me A Coffee"

If you are looking to learn Algorithm Patterns to ace the coding interview, I recommend the following course.