# min(DALL·E)
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/kuprel/min-dalle/blob/main/min_dalle.ipynb)
[![Replicate](https://replicate.com/kuprel/min-dalle/badge)](https://replicate.com/kuprel/min-dalle)
[![Discord](https://img.shields.io/discord/823813159592001537?color=5865F2&logo=discord&logoColor=white)](https://discord.com/channels/823813159592001537/912729332311556136)

This is a fast, minimal port of Boris Dayma's [DALL·E Mega](https://github.com/borisdayma/dalle-mini). It has been stripped down for inference and converted to PyTorch. The only third-party dependencies are numpy, requests, pillow, and torch.
To generate a 4x4 grid of DALL·E Mega images, it takes:
- 89 sec with a T4 in Colab
- 48 sec with a P100 in Colab
- 14 sec with an A100 on Replicate

The Flax model and the code for converting it to PyTorch can be found [here](https://github.com/kuprel/min-dalle-flax).
## Install
```bash
$ pip install min-dalle
```
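
Alternatively, to try changes that have not yet been published to PyPI, pip can install straight from the repository (standard pip usage; this assumes the default branch is packaged the same way as the PyPI release):

```bash
$ pip install git+https://github.com/kuprel/min-dalle.git
```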
## Usage
Load the model parameters once and reuse the model to generate multiple images.
```python
import torch  # needed for the dtype argument below
from min_dalle import MinDalle

model = MinDalle(
    models_root='./pretrained',
    dtype=torch.float32,
    is_mega=True,
    is_reusable=True
)
```
The required models will be downloaded to `models_root` if they are not already there. If you have an Ampere-architecture GPU, you can set `dtype=torch.bfloat16` to save GPU memory. There is still an issue with `dtype=torch.float16` that needs to be sorted out.

Once everything has finished initializing, call `generate_image` with some text as many times as you want. Use a positive `seed` for reproducible results. Higher values of `log2_supercondition_factor` result in better agreement with the text but a narrower variety of generated images. Each image token is sampled from the top-$k$ most probable tokens, with `k = 2 ** log2_k`.
```python
image = model.generate_image(
    text='Nuclear explosion broccoli',
    seed=-1,
    grid_size=4,
    log2_k=6,
    log2_supercondition_factor=5,
    is_verbose=False
)

display(image)
```
<img src="https://github.com/kuprel/min-dalle/raw/main/examples/nuclear_broccoli.jpg" alt="min-dalle" width="400"/>

credit: https://twitter.com/hardmaru/status/1544354119527596034
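
Outside a notebook, `display` is not available. Assuming `generate_image` returns a `PIL.Image` (as the `display` call above suggests), a minimal sketch for a plain script saves the grid to disk instead (the filename is just an example):

```python
# Save the generated grid to disk instead of displaying it (example path).
image.save('nuclear_broccoli.png')
```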
### Interactive
If the model is being used interactively (e.g. in a notebook), `generate_image_stream` can be used to generate a stream of images as the model decodes. The detokenizer adds a slight delay for each intermediate image. Setting `log2_mid_count` to 3 results in a total of `2 ** 3 = 8` generated images. The only valid values for `log2_mid_count` are 0, 1, 2, 3, and 4. This is implemented in the Colab notebook.
```python
image_stream = model.generate_image_stream(
    text='Dali painting of WALL·E',
    seed=-1,
    grid_size=3,
    log2_mid_count=3,
    log2_k=6,
    log2_supercondition_factor=3,
    is_verbose=False
)

for image in image_stream:
    display(image)
```
<img src="https://github.com/kuprel/min-dalle/raw/main/examples/dali_walle_animated.gif" alt="min-dalle" width="300"/>
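
The same idea works for the stream in a plain script: assuming each yielded item is a `PIL.Image` as above, a minimal sketch saves every intermediate grid with an index instead of displaying it (filenames are just examples):

```python
# Save each intermediate grid as it is decoded (example paths).
for i, image in enumerate(image_stream):
    image.save('dali_walle_{}.png'.format(i))
```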
### Command Line
Use `image_from_text.py` to generate images from the command line.
```bash
$ python image_from_text.py --text='artificial intelligence' --no-mega
```
<img src="https://github.com/kuprel/min-dalle/raw/main/examples/artificial_intelligence.jpg" alt="min-dalle" width="200"/>