mega works with latest flax version 0.5.2 now, removing 0.4.2 pin

main
Brett Kuprel 2 years ago
parent eaee59a1ef
commit b40fd83a0d
  1. README.md (4 changes)
  2. examples/godzilla_trial.png (BIN)
  3. min_dalle.ipynb (42 changes)
  4. min_dalle/models/dalle_bart_decoder_flax.py (4 changes)
  5. requirements.txt (2 changes)

README.md (4 changes)

@@ -3,7 +3,7 @@
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/kuprel/min-dalle/blob/main/min_dalle.ipynb)  
 [![Replicate](https://replicate.com/kuprel/min-dalle/badge)](https://replicate.com/kuprel/min-dalle)
-This is a minimal implementation of Boris Dayma's [DALL·E Mini](https://github.com/borisdayma/dalle-mini). It has been stripped to the bare essentials necessary for doing inference, and converted to PyTorch. To run the torch model, the only third party dependencies are numpy and torch. Flax is used to convert the weights (which are saved with `torch.save` the first time the model is loaded), and wandb is only used to download the models.
+This is a minimal implementation of Boris Dayma's [DALL·E Mini](https://github.com/borisdayma/dalle-mini). It has been stripped to the bare essentials necessary for doing inference, and converted to PyTorch. To run the torch model, the only third party dependencies are numpy and torch. Flax is used to convert the weights (which are saved the first time the model is loaded), and wandb is only used to download the models.
 It currently takes **7.4 seconds** to generate an image with DALL·E Mega with PyTorch on a standard GPU runtime in Colab
@@ -33,7 +33,7 @@ python image_from_text.py --text='a comfy chair that looks like an avocado' --to
 ```
-python image_from_text.py --text='court sketch of godzilla on trial' --mega --seed=100
+python image_from_text.py --text='court sketch of godzilla on trial' --torch --mega --seed=40
 ```
 ![Godzilla Trial](examples/godzilla_trial.png)

examples/godzilla_trial.png (BIN)

Binary file not shown. Before: 155 KiB | After: 139 KiB

min_dalle.ipynb (42 changes)

File diff suppressed because one or more lines are too long

min_dalle/models/dalle_bart_decoder_flax.py (4 changes)

@@ -33,7 +33,7 @@ class DecoderSelfAttentionFlax(AttentionFlax):
         attention_state = lax.dynamic_update_slice(
             attention_state,
-            jnp.concatenate([keys, values]),
+            jnp.concatenate([keys, values]).astype(jnp.float32),
             state_index
         )
         batch_count = decoder_state.shape[0]
@@ -44,7 +44,7 @@ class DecoderSelfAttentionFlax(AttentionFlax):
             values,
             queries,
             attention_mask
-        )
+        ).astype(decoder_state.dtype)
         return decoder_state, attention_state
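The two casts in this file keep dtypes consistent under the newer flax/jax stack: `lax.dynamic_update_slice` requires its operand and its update to share a dtype, so the concatenated keys/values are cast to match the float32 attention state before being written into it, and the attention output is cast back to the decoder state's dtype on the way out. A minimal sketch of that constraint, with hypothetical shapes (not the actual model dimensions), assuming jax is installed:

```python
import jax.numpy as jnp
from jax import lax

# The cached attention state is kept in float32; incoming keys/values may
# arrive in a lower precision, so they are cast before the slice update --
# lax.dynamic_update_slice rejects an update whose dtype differs from the
# operand's.
attention_state = jnp.zeros((4, 8), dtype=jnp.float32)
keys_values = jnp.ones((2, 8), dtype=jnp.bfloat16)  # hypothetical shapes/dtype

attention_state = lax.dynamic_update_slice(
    attention_state,
    keys_values.astype(jnp.float32),  # without the cast, jax raises an error
    (0, 0),  # start indices of the slice being overwritten
)
```

After the update, the first two rows of `attention_state` hold the cast keys/values while the dtype stays float32 throughout.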

requirements.txt (2 changes)

@@ -1,3 +1,3 @@
 torch
-flax==0.4.2
+flax
 wandb
