min(DALL·E)
This is a fast, minimal implementation of Boris Dayma's DALL·E Mega. It has been stripped down for inference and converted to PyTorch. The only third-party dependencies are numpy, requests, pillow, and torch.
Generating a 4x4 grid of DALL·E Mega images takes:
- 89 sec with a T4 in Colab
- 48 sec with a P100 in Colab
- 14 sec with an A100 on Replicate
- TBD with an H100 (@NVIDIA?)
The Flax model, and the code for converting it to torch, can be found here.
Install
$ pip install min-dalle
Usage
Load the model parameters once and reuse the model to generate multiple images.
from min_dalle import MinDalle

model = MinDalle(
    is_mega=True,
    is_reusable=True,
    models_root='./pretrained'
)
The required models will be downloaded to models_root if they are not already there. Once everything has finished initializing, call generate_image with some text as many times as you want.
image = model.generate_image(
    'Dali painting of WALL·E',
    seed=-1,
    grid_size=4,
    log2_supercondition_factor=3
)
display(image)
Use a positive seed for reproducible results. Higher values for log2_supercondition_factor result in better agreement with the text but a narrower variety of generated images.
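Since pillow is a dependency, the returned grid is presumably a PIL Image, so it can be saved or split into individual tiles with standard PIL calls. A minimal sketch, with a blank placeholder standing in for the grid returned by model.generate_image(...) and a per-tile size of 256x256 assumed:

```python
from PIL import Image

# Placeholder for a 4x4 grid from model.generate_image(...);
# each tile is assumed to be 256x256 pixels, so the grid is 1024x1024.
grid = Image.new('RGB', (1024, 1024))

# Save the whole grid
grid.save('grid.png')

# Crop out and save the top-left tile
tile = grid.crop((0, 0, 256, 256))
tile.save('tile_0_0.png')
```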
If the model is being used interactively (e.g. in a notebook), generate_image_stream can be used to generate a stream of images as the model decodes. The detokenizer adds a slight delay for each intermediate image.
image_stream = model.generate_image_stream(
    text='Dali painting of WALL·E',
    seed=-1,
    grid_size=3,
    log2_mid_count=3,
    log2_supercondition_factor=3
)

is_first = True
for image in image_stream:
    display_image = display if is_first else update_display
    display_image(image, display_id=1)
    is_first = False
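The display and update_display names above come from IPython's notebook machinery: the first frame creates an output slot, and each later frame overwrites it in place. The same first-frame-then-update pattern can be sketched outside a notebook with a stub generator (purely illustrative, standing in for model.generate_image_stream(...)):

```python
# Stub generator standing in for model.generate_image_stream(...)
def fake_image_stream(steps=3):
    for step in range(1, steps + 1):
        yield f'intermediate image {step}'

calls = []
is_first = True
for image in fake_image_stream():
    # First frame creates the output slot; later frames overwrite it,
    # mirroring display(...) vs update_display(...) in a notebook.
    action = 'display' if is_first else 'update_display'
    calls.append((action, image))
    is_first = False

print(calls)
```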
Command Line
Use image_from_text.py to generate images from the command line.
$ python image_from_text.py --text='artificial intelligence' --no-mega