
min(DALL·E)


This is a minimal implementation of DALL·E Mini. It has been stripped down to the bare essentials necessary for inference and converted to PyTorch. The only third-party dependencies are numpy, torch, and flax. PyTorch inference with DALL·E Mega takes about 10 seconds in Colab.

Setup

Run sh setup.sh to install dependencies and download the pretrained models. The wandb Python package is installed in order to download DALL·E Mini and DALL·E Mega. Alternatively, the models can be downloaded manually here: VQGan, DALL·E Mini, DALL·E Mega

Usage

Use the Python script image_from_text.py to generate images from the command line. Here are some example runs:

python3 image_from_text.py --text='artificial intelligence' --torch

Alien

python image_from_text.py --text='a comfy chair that looks like an avocado' --torch --mega --seed=10

Avocado Armchair

python image_from_text.py --text='court sketch of godzilla on trial' --mega --seed=100

Godzilla Trial

Note: the command-line script loads the models and parameters each time it is run. The Colab notebook demonstrates how to load the models once and generate images multiple times.
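
The load-once pattern the notebook uses can be sketched in plain Python. Note that FakeModel and load_model below are illustrative stand-ins, not the actual min_dalle API; the point is only that the expensive weight-loading step happens once, while generation can be repeated cheaply:

```python
from functools import lru_cache

class FakeModel:
    """Stand-in for a text-to-image model; loading is the expensive step."""
    load_count = 0

    def __init__(self, is_mega):
        FakeModel.load_count += 1  # expensive weight load happens here
        self.is_mega = is_mega

    def generate(self, text, seed=0):
        # a real model would return an image; return a label instead
        return f"image for {text!r} (mega={self.is_mega}, seed={seed})"

@lru_cache(maxsize=None)
def load_model(is_mega: bool) -> FakeModel:
    """Cache the loaded model so repeated calls reuse the same instance."""
    return FakeModel(is_mega)

model = load_model(is_mega=True)
first = model.generate("artificial intelligence", seed=7)
second = load_model(is_mega=True).generate("godzilla on trial", seed=100)
```

Running the command-line script twice pays the loading cost twice; with a cached loader like this, only the first call is slow.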