mega works with latest flax version 0.5.2 now, removing 0.4.2 pin
parent eaee59a1ef
commit b40fd83a0d
README.md (vendored, 4 changes)

@@ -3,7 +3,7 @@
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/kuprel/min-dalle/blob/main/min_dalle.ipynb)
 [![Replicate](https://replicate.com/kuprel/min-dalle/badge)](https://replicate.com/kuprel/min-dalle)
 
-This is a minimal implementation of Boris Dayma's [DALL·E Mini](https://github.com/borisdayma/dalle-mini). It has been stripped to the bare essentials necessary for doing inference, and converted to PyTorch. To run the torch model, the only third party dependencies are numpy and torch. Flax is used to convert the weights (which are saved with `torch.save` the first time the model is loaded), and wandb is only used to download the models.
+This is a minimal implementation of Boris Dayma's [DALL·E Mini](https://github.com/borisdayma/dalle-mini). It has been stripped to the bare essentials necessary for doing inference, and converted to PyTorch. To run the torch model, the only third party dependencies are numpy and torch. Flax is used to convert the weights (which are saved the first time the model is loaded), and wandb is only used to download the models.
 
 It currently takes **7.4 seconds** to generate an image with DALL·E Mega with PyTorch on a standard GPU runtime in Colab
 
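The README paragraph in the hunk above describes a load-once, cache-forever pattern: Flax weights are converted on the first load and the torch copy is reused afterwards. A minimal sketch of that pattern follows; the function name `convert_flax_weights`, the tensor contents, and the cache path are illustrative assumptions, not the repo's actual API:

```python
import os
import tempfile

import torch


def convert_flax_weights():
    # Stand-in for the one-time flax-to-torch conversion (hypothetical).
    return {"embedding.weight": torch.zeros(2, 4)}


def load_params(path):
    if os.path.exists(path):
        # Fast path: reuse the cached torch copy.
        return torch.load(path)
    params = convert_flax_weights()
    torch.save(params, path)  # cache for subsequent loads
    return params


path = os.path.join(tempfile.mkdtemp(), "dalle_bart.pt")
first = load_params(path)   # converts and caches
second = load_params(path)  # loads the cached copy
```

The payoff is that flax (and the conversion cost) is only needed once per machine; later runs touch only numpy and torch.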
@@ -33,7 +33,7 @@ python image_from_text.py --text='a comfy chair that looks like an avocado' --to
 
 
 ```
-python image_from_text.py --text='court sketch of godzilla on trial' --mega --seed=100
+python image_from_text.py --text='court sketch of godzilla on trial' --torch --mega --seed=40
 ```
 
 ![Godzilla Trial](examples/godzilla_trial.png)
examples/godzilla_trial.png (vendored, binary)

Binary file not shown. Before: 155 KiB | After: 139 KiB
min_dalle.ipynb (vendored, 42 changes)

File diff suppressed because one or more lines are too long
@@ -33,7 +33,7 @@ class DecoderSelfAttentionFlax(AttentionFlax):
         attention_state = lax.dynamic_update_slice(
             attention_state,
-            jnp.concatenate([keys, values]),
+            jnp.concatenate([keys, values]).astype(jnp.float32),
             state_index
         )
         batch_count = decoder_state.shape[0]
@@ -44,7 +44,7 @@ class DecoderSelfAttentionFlax(AttentionFlax):
             values,
             queries,
             attention_mask
-        )
+        ).astype(decoder_state.dtype)
         return decoder_state, attention_state
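The two casts in the hunks above keep dtypes consistent across the attention cache update: `lax.dynamic_update_slice` requires the update to have the same dtype as the operand, so a float32 cache must receive float32 updates even when the surrounding activations are half precision. A minimal standalone sketch of that constraint (the shapes and names here are illustrative, not taken from the repo):

```python
import jax.numpy as jnp
from jax import lax

# A float32 key/value cache, as in the commit.
attention_state = jnp.zeros((4, 8), dtype=jnp.float32)

# New keys/values arriving in half precision.
new_keys_values = jnp.ones((2, 8), dtype=jnp.float16)

# Casting the update to the cache dtype mirrors the .astype(jnp.float32)
# added in this commit; without it, mismatched dtypes are rejected.
updated = lax.dynamic_update_slice(
    attention_state,
    new_keys_values.astype(jnp.float32),
    (0, 0),
)
print(updated.dtype)   # float32
print(updated[0, 0])   # 1.0
```

The reverse cast, `.astype(decoder_state.dtype)`, does the symmetric job on the way out: attention output computed in float32 is cast back to whatever dtype the decoder state carries.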
requirements.txt (vendored, 2 changes)

@@ -1,3 +1,3 @@
 torch
-flax==0.4.2
+flax
 wandb