
min(DALL·E)

Open In Colab

This is a minimal implementation of DALL·E Mini. It has been stripped to the bare essentials necessary for doing inference, and converted to PyTorch. The only third-party dependencies are numpy, torch, and flax (and optionally wandb to download the models).

DALL·E Mega inference with PyTorch takes 7.3 seconds in Colab to generate an avocado armchair

Setup

Run sh setup.sh to install dependencies and download the pretrained models. The models can also be downloaded manually here: VQGan, DALL·E Mini, DALL·E Mega.

Usage

Use the Python script image_from_text.py to generate images from the command line. Note: the command-line script loads the models and parameters on each run. To load a model once and generate multiple times, initialize either MinDalleTorch or MinDalleFlax, then call generate_image with some text and a seed. See the colab for an example.
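The load-once pattern described above could be sketched roughly as follows. This is an assumption-laden sketch, not the project's exact API: the import path, the is_mega flag, and the generate_image(text, seed) signature are inferred from the description above, so check the colab for the real interface.

```python
def generate_images(text, seeds, mega=False):
    """Load the model parameters once, then generate one image per seed.

    Sketch only: the module path min_dalle.min_dalle_torch, the is_mega
    constructor flag, and the generate_image(text, seed) call are
    assumptions based on the README's description, not a verified API.
    """
    from min_dalle.min_dalle_torch import MinDalleTorch  # assumed import path

    model = MinDalleTorch(is_mega=mega)  # parameters are loaded only here
    # Reusing the same model avoids reloading weights for every prompt/seed.
    return [model.generate_image(text, seed) for seed in seeds]
```

Called as, e.g., generate_images('court sketch of godzilla on trial', [100, 101], mega=True), this would produce one image per seed while paying the model-loading cost only once.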

Examples

python image_from_text.py --text='artificial intelligence' --torch

Alien

python image_from_text.py --text='a comfy chair that looks like an avocado' --torch --mega --seed=10

Avocado Armchair

python image_from_text.py --text='court sketch of godzilla on trial' --mega --seed=100

Godzilla Trial