mega works with latest flax version 0.5.2 now, removing 0.4.2 pin

Brett Kuprel 2022-07-01 02:58:43 -04:00
parent eaee59a1ef
commit b40fd83a0d
5 changed files with 25 additions and 27 deletions

README.md vendored (4 changed lines)

@@ -3,7 +3,7 @@
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/kuprel/min-dalle/blob/main/min_dalle.ipynb)  
 [![Replicate](https://replicate.com/kuprel/min-dalle/badge)](https://replicate.com/kuprel/min-dalle)
-This is a minimal implementation of Boris Dayma's [DALL·E Mini](https://github.com/borisdayma/dalle-mini). It has been stripped to the bare essentials necessary for doing inference, and converted to PyTorch. To run the torch model, the only third party dependencies are numpy and torch. Flax is used to convert the weights (which are saved with `torch.save` the first time the model is loaded), and wandb is only used to download the models.
+This is a minimal implementation of Boris Dayma's [DALL·E Mini](https://github.com/borisdayma/dalle-mini). It has been stripped to the bare essentials necessary for doing inference, and converted to PyTorch. To run the torch model, the only third party dependencies are numpy and torch. Flax is used to convert the weights (which are saved the first time the model is loaded), and wandb is only used to download the models.
 It currently takes **7.4 seconds** to generate an image with DALL·E Mega with PyTorch on a standard GPU runtime in Colab
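
The flow this paragraph describes (load the flax weights once, convert them, cache them as torch tensors) can be sketched roughly as below. The helper name `convert_and_cache` and the flat `dict` of arrays are hypothetical stand-ins for illustration, not the repo's actual API:

```python
import os
import numpy
import torch

def convert_and_cache(flax_params: dict, path: str = "pytorch_model.pt") -> dict:
    # Reuse the torch weights saved on a previous run.
    if os.path.exists(path):
        return torch.load(path)
    # Flax params are plain arrays, so conversion is mostly array -> torch.Tensor.
    torch_params = {
        name: torch.from_numpy(numpy.asarray(param))
        for name, param in flax_params.items()
    }
    # Saved the first time the model is loaded, so flax is only needed once.
    torch.save(torch_params, path)
    return torch_params
```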
@@ -33,7 +33,7 @@ python image_from_text.py --text='a comfy chair that looks like an avocado' --to
 ```
-python image_from_text.py --text='court sketch of godzilla on trial' --mega --seed=100
+python image_from_text.py --text='court sketch of godzilla on trial' --torch --mega --seed=40
 ```
 ![Godzilla Trial](examples/godzilla_trial.png)

Binary file not shown.
Before: 155 KiB | After: 139 KiB

min_dalle.ipynb vendored (42 changed lines)

File diff suppressed because one or more lines are too long

@@ -33,7 +33,7 @@ class DecoderSelfAttentionFlax(AttentionFlax):
         attention_state = lax.dynamic_update_slice(
             attention_state,
-            jnp.concatenate([keys, values]),
+            jnp.concatenate([keys, values]).astype(jnp.float32),
             state_index
         )
         batch_count = decoder_state.shape[0]
@@ -44,7 +44,7 @@ class DecoderSelfAttentionFlax(AttentionFlax):
             values,
             queries,
             attention_mask
-        )
+        ).astype(decoder_state.dtype)
         return decoder_state, attention_state
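
These two casts address a dtype strictness in `lax.dynamic_update_slice`: the update must have the same dtype as the operand, so writes into the float32 `attention_state` cache need an explicit cast (presumably because the keys and values come out in half precision under newer flax), and the attention output is cast back to `decoder_state.dtype` on the way out. A standalone sketch of the failure mode, with the float16 source as an assumption:

```python
import jax.numpy as jnp
from jax import lax

cache = jnp.zeros((4, 8), dtype=jnp.float32)   # stands in for attention_state
update = jnp.ones((2, 8), dtype=jnp.float16)   # e.g. half-precision keys/values

# Mismatched dtypes raise a TypeError: lax.dynamic_update_slice requires the
# update's dtype to match the operand's dtype exactly (no implicit promotion).
# cache = lax.dynamic_update_slice(cache, update, (0, 0))

# Casting on write keeps the cache dtype stable across decoding steps:
cache = lax.dynamic_update_slice(cache, update.astype(cache.dtype), (0, 0))
```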

requirements.txt vendored (2 changed lines)

@@ -1,3 +1,3 @@
 torch
-flax==0.4.2
+flax
 wandb