min(DALL·E)

Open In Colab   Replicate   Join us on Discord

This is a fast, minimal implementation of Boris Dayma's DALL·E Mini. It has been stripped down for inference and converted to PyTorch. The only third-party dependencies are numpy, requests, pillow, and torch.

Generating a 3x3 grid of DALL·E Mega images takes:

  • 35 seconds with a P100 in Colab
  • 15 seconds with an A100 on Replicate
  • TBD with an H100 (@NVIDIA?)
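
These figures vary with hardware and drivers. A minimal sketch for timing a run on your own setup (it assumes a model constructed as in the Usage section below and uses only the generate_image arguments shown there):

import time
from min_dalle import MinDalle

model = MinDalle(is_mega=True, models_root='./pretrained')

# time a single 3x3 grid, excluding model download and initialization
start = time.perf_counter()
image = model.generate_image('a comfy chair that looks like an avocado', seed=0, grid_size=3)
print(f'generated a 3x3 grid in {time.perf_counter() - start:.1f} s')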

The Flax model and the code for converting it to PyTorch can be found here.

Install

$ pip install min-dalle
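
After installing, a quick sanity check that the package imports and that a GPU is visible (a sketch; it relies only on the import shown in the Usage section and PyTorch's standard CUDA query):

import torch
from min_dalle import MinDalle

# confirm the class can be imported and report whether a CUDA device is available
print('MinDalle import OK:', MinDalle.__name__)
print('torch', torch.__version__)
print('CUDA available:', torch.cuda.is_available())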

Usage

Python

Load the model parameters once and reuse the model to generate multiple images.

from min_dalle import MinDalle

model = MinDalle(is_mega=True, models_root='./pretrained')

The required models will be downloaded to models_root if they are not already there. Once everything has finished initializing, call generate_image with some text and a seed as many times as you want.

text = 'a comfy chair that looks like an avocado'
image = model.generate_image(text)
display(image)

text = 'court sketch of godzilla on trial'
image = model.generate_image(text, seed=6, grid_size=3)
display(image)

text = 'Rusty Iron Man suit found abandoned in the woods being reclaimed by nature'
image = model.generate_image(text, seed=0, grid_size=3)
display(image)

text = 'a funeral at Whole Foods'
image = model.generate_image(text, seed=10, grid_size=3)
display(image)

text = 'Jesus turning water into wine on Americas Got Talent'
image = model.generate_image(text, seed=2, grid_size=3)
display(image)

text = 'cctv footage of Yoda robbing a liquor store'
image = model.generate_image(text, seed=0, grid_size=3)
display(image)
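
Because the model stays loaded, a batch of prompts can be rendered in one process and saved to disk. A minimal sketch (the constructor and arguments mirror the examples above; it assumes generate_image returns a PIL image, as display(image) suggests):

from min_dalle import MinDalle

model = MinDalle(is_mega=True, models_root='./pretrained')

prompts = [
    'a comfy chair that looks like an avocado',
    'court sketch of godzilla on trial',
]

for i, text in enumerate(prompts):
    # reuse the already loaded model for every prompt
    image = model.generate_image(text, seed=0, grid_size=3)
    # assuming a PIL image is returned, write each grid to a file
    image.save(f'grid_{i}.png')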

Command Line

Use image_from_text.py to generate images from the command line.

$ python image_from_text.py --text='artificial intelligence' --seed=7

$ python image_from_text.py --text='trail cam footage of gollum eating watermelon' --mega --seed=1 --grid-size=3
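
To render several prompts unattended, a small wrapper can call the script once per prompt. A sketch that uses only the flags demonstrated above and assumes it is run from the repository root:

import subprocess

prompts = [
    'artificial intelligence',
    'trail cam footage of gollum eating watermelon',
]

for seed, text in enumerate(prompts):
    # one subprocess per prompt, passing the same flags as the examples above
    subprocess.run(
        ['python', 'image_from_text.py', f'--text={text}', '--mega', f'--seed={seed}', '--grid-size=3'],
        check=True,
    )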