min(DALL·E)

Colab   Replicate   Discord

This is a fast, minimal port of Boris Dayma's DALL·E Mini (with mega weights). It has been stripped down for inference and converted to PyTorch. The only third-party dependencies are numpy, requests, pillow, and torch.

Generating a 4x4 grid of DALL·E Mega images takes:

  • 89 sec with a T4 in Colab
  • 48 sec with a P100 in Colab
  • 13 sec with an A100 on Replicate

Here's a more detailed breakdown of performance on an A100. Credit to @technobird22 and his NeoGen Discord bot for the graph.

The Flax model and code for converting it to PyTorch can be found here.

Install

$ pip install min-dalle
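
Alternatively, to run from a local clone (the repository ships a setup.py), something like the following should work:

$ git clone https://github.com/kuprel/min-dalle
$ cd min-dalle
$ pip install -e .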

Usage

Load the model parameters once and reuse the model to generate multiple images.

import torch
from min_dalle import MinDalle

model = MinDalle(
    models_root='./pretrained',
    dtype=torch.float32,
    device='cuda',
    is_mega=True,
    is_reusable=True
)

The required models will be downloaded to models_root if they are not already there. Set the dtype to torch.float16 to save GPU memory. If you have an Ampere-architecture GPU you can use torch.bfloat16. Set the device to either "cuda" or "cpu".

Once everything has finished initializing, call generate_image with some text as many times as you want. Use a positive seed for reproducible results. Higher values of supercondition_factor give better agreement with the text but a narrower variety of generated images. Each image token is sampled from the top_k most probable tokens: the largest logit is subtracted from the logits to avoid infs, and the result is then divided by the temperature.
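
For intuition, here is a minimal sketch of that per-token sampling step. It is a standalone illustration with made-up names, not the model's actual decoder code:

import torch

def sample_token(logits, top_k=256, temperature=1.0):
    # Subtract the largest logit to avoid infs when exponentiating
    logits = logits - logits.max()
    # Scale by the temperature, then keep only the top_k tokens
    logits = logits / temperature
    top_logits, top_indices = logits.topk(top_k)
    probs = top_logits.softmax(dim=-1)
    # Draw one token from the truncated distribution
    return top_indices[torch.multinomial(probs, 1)]

With the model loaded, generate an image: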

image = model.generate_image(
    text='Nuclear explosion broccoli',
    seed=-1,
    grid_size=4,
    temperature=1,
    top_k=256,
    supercondition_factor=32,
    is_verbose=False
)

display(image)

Credit to @hardmaru for the example
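
The returned image is a PIL image, so outside a notebook (where display is not available) the grid can be written straight to disk:

image.save('broccoli.png')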

Progressive Outputs

If the model is being used interactively (e.g. in a notebook), generate_image_stream can be used to generate a stream of images as the model decodes. The detokenizer adds a slight delay for each image. Set progressive_outputs to True to enable this. An example is implemented in the Colab.

image_stream = model.generate_image_stream(
    text='Dali painting of WALL·E',
    seed=-1,
    grid_size=3,
    log2_mid_count=3,
    temperature=1,
    top_k=256,
    supercondition_factor=16,
    is_verbose=False
)

for image in image_stream:
    display(image)
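
Outside a notebook, the same loop can save each partial grid instead. A small sketch, assuming log2_mid_count=3 yields 2**3 = 8 progressively refined grids (the file names are arbitrary):

for i, image in enumerate(image_stream):
    image.save('stream_{}.png'.format(i))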

Command Line

Use image_from_text.py to generate images from the command line.

$ python image_from_text.py --text='artificial intelligence' --no-mega
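
The script accepts further flags, such as a seed for reproducible output; run python image_from_text.py --help for the full list. A hypothetical seeded mega run:

$ python image_from_text.py --text='court sketch of godzilla on trial' --mega --seed=100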