min(DALL·E)

Open In Colab   Replicate   Join us on Discord

This is a fast, minimal implementation of Boris Dayma's DALL·E Mega. It has been stripped down for inference and converted to PyTorch. The only third-party dependencies are numpy, requests, pillow, and torch.

It takes

  • 35 seconds to generate a 3x3 grid with a P100 in Colab
  • 16 seconds to generate a 4x4 grid with an A100 on Replicate
  • TBD to generate a 4x4 grid with an H100 (@NVIDIA?)

The flax model and code for converting it to torch can be found here.

Install

$ pip install min-dalle

Usage

Load the model parameters once and reuse the model to generate multiple images.

from min_dalle import MinDalle

model = MinDalle(is_mega=True, models_root='./pretrained')

The required models will be downloaded to models_root if they are not already there. Once everything has finished initializing, call generate_image with some text and a seed as many times as you want.
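
Outside of a notebook, where IPython's display is unavailable, the returned image can be saved to disk instead. A minimal sketch (the loop, grid size, and file names are illustrative and not from this README; generate_image is assumed to return a PIL image, which the display(image) calls in the examples below suggest):

# Reuse the already-loaded model across several seeds and save each grid to disk.
# Hypothetical example: the seed sweep and output file names are illustrative.
for seed in range(3):
    image = model.generate_image('Dali painting of WALL·E', seed=seed, grid_size=2)
    image.save('wall_e_seed_{}.png'.format(seed))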

text = 'Dali painting of WALL·E'
image = model.generate_image(text, seed=0, grid_size=4)
display(image)

text = 'Rusty Iron Man suit found abandoned in the woods being reclaimed by nature'
image = model.generate_image(text, seed=0, grid_size=3)
display(image)

text = 'court sketch of godzilla on trial'
image = model.generate_image(text, seed=6, grid_size=3)
display(image)

text = 'a funeral at Whole Foods'
image = model.generate_image(text, seed=10, grid_size=3)
display(image)

text = 'Jesus turning water into wine on Americas Got Talent'
image = model.generate_image(text, seed=2, grid_size=3)
display(image)

text = 'cctv footage of Yoda robbing a liquor store'
image = model.generate_image(text, seed=0, grid_size=3)
display(image)

Command Line

Use image_from_text.py to generate images from the command line.

$ python image_from_text.py --text='artificial intelligence' --no-mega --seed=7

$ python image_from_text.py --text='trail cam footage of gollum eating watermelon' --mega --seed=1 --grid-size=3