simplified attention and keys_values state, reducing inference time to 7.3 seconds (from ~10 seconds)
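
The commit message refers to a keys/values cache carried through attention during decoding. For readers unfamiliar with the technique, here is a minimal sketch of incremental decoder self-attention with a preallocated keys_values state; it is illustrative only, and the tensor shapes and names are assumptions rather than code from this repo.

```python
# Illustrative sketch only, not this repo's code: incremental decoder
# self-attention with a preallocated keys/values cache. Shapes and
# names are assumptions.
import torch

def attend_with_cache(query, new_key, new_value, keys_values, step):
    # query, new_key, new_value: (batch, 1, dim) projections of the current token
    # keys_values: (2, batch, max_len, dim) cache carried across decoding steps
    keys, values = keys_values
    keys[:, step] = new_key[:, 0]      # write this step's key into the cache
    values[:, step] = new_value[:, 0]  # write this step's value into the cache
    # attend only over the positions filled so far
    scores = query @ keys[:, :step + 1].transpose(-1, -2)
    scores = scores / query.shape[-1] ** 0.5
    weights = torch.softmax(scores, dim=-1)
    return weights @ values[:, :step + 1], keys_values

# one decoding step
batch, max_len, dim = 1, 256, 64
keys_values = torch.zeros(2, batch, max_len, dim)
q = k = v = torch.randn(batch, 1, dim)
out, keys_values = attend_with_cache(q, k, v, keys_values, step=0)
```

Reusing one preallocated cache tensor avoids reallocating and re-attending over the full sequence at every step, which is the kind of simplification that would account for the reported speedup.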

main
Brett Kuprel 2 years ago
parent 661ec976ac
commit a4df279fd2
README.md

@@ -2,7 +2,9 @@
 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/kuprel/min-dalle/blob/main/min_dalle.ipynb)
-This is a minimal implementation of [DALL·E Mini](https://github.com/borisdayma/dalle-mini). It has been stripped to the bare essentials necessary for doing inference, and converted to PyTorch. The only third party dependencies are numpy, torch, and flax (and optionally wandb to download the models). DALL·E Mega inference with PyTorch takes about 10 seconds in Colab.
+This is a minimal implementation of [DALL·E Mini](https://github.com/borisdayma/dalle-mini). It has been stripped to the bare essentials necessary for doing inference, and converted to PyTorch. The only third-party dependencies are numpy, torch, and flax (and optionally wandb to download the models).
+
+DALL·E Mega inference with PyTorch takes 7.3 seconds in Colab to generate an avocado armchair.
 ### Setup
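
Since the new README text quotes a concrete prompt, a usage sketch may help. The `MinDalle` class name, constructor argument, and `generate_image` signature below are assumptions based on the repo's Python API, not part of this diff.

```python
# Hypothetical usage; the class name and arguments are assumptions,
# not taken from this commit.
from min_dalle import MinDalle

model = MinDalle(is_mega=True)                     # load the DALL·E Mega weights
image = model.generate_image('avocado armchair')   # returns a PIL image
image.save('avocado_armchair.png')
```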
