
Version 1.9.72 (Release Notes)


Tiny Dream is scheduled for release in June/July 2023. Documentation is still in progress. Downloads will be available once the library is released.

Tiny Dream is a header-only, dependency-free Stable Diffusion 2 implementation written in C++ from scratch, with a primary focus on CPU performance and memory footprint. Tiny Dream runs reasonably fast on average consumer hardware, requires only 5.5 GB of RAM to execute, does not require an Nvidia GPU, and is designed to be embedded in larger codebases (host programs) with an easy-to-use C++ API. The possibilities are literally endless (or at least extend to the boundaries of Stable Diffusion's latent manifold).

[Image: sample output generated with the Tiny Dream C++ API using the prompt: "pyramid, desert, palm trees, river, sun, (landscape), (high quality)"]

[Figure: Principle of Diffusion Models. Illustration courtesy of Binxu Wang, Ph.D. candidate, Harvard University.]

Stable Diffusion is a powerful, open-source text-to-image generation model released publicly by Stability.ai in August 2022, designed to produce images matching an input text prompt. Stable Diffusion relies on the latent diffusion model architecture, a variant of the diffusion model which maps to the latent space using a fixed Markov chain. The goal of diffusion models is to learn the latent structure of a dataset by modeling the way in which data points diffuse through the latent space. They are designed to learn the underlying structure of a dataset by mapping it to a lower-dimensional latent space, in which the relationships between different data points are more easily understood and analyzed.
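
For background, the fixed Markov chain mentioned above refers to the forward (noising) process of a diffusion model, which gradually corrupts the data with Gaussian noise according to a fixed variance schedule. The standard textbook formulation (not specific to Tiny Dream) is, in LaTeX notation:

q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\big), \qquad q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1})

where \beta_t is the fixed noise schedule; the model is trained to reverse this chain step by step, which is what the sampling steps of Stable Diffusion inference iterate over during image generation.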


Features


  • Lowest Memory Footprint (as of this release) for a Stable Diffusion Implementation.
    • Only 5.5 GB of RAM is needed for inference.
    • Tiny Dream maximizes memory efficiency, allowing for seamless image generation with minimal memory footprint.
  • OpenCV Dependency Free
    • You do not need to link to OpenCV or any other bloated image processing library.
    • Only stb_image_write.h from the stb single-file public domain libraries is required for saving images.
  • Easy to integrate on existing code bases
    • Just drop tinydream.hpp and stb_image_write.h into your repository along with the pre-trained assets.
    • The library exposes a single C++ class named tinyDream with only 7 public methods.
    • No output to stdout/stderr. You just supply your log consumer callback, and redirect the generated log messages to your favorite stream (e.g., terminal, disk file, or network socket).
  • Reasonably fast on Intel/AMD CPUs
    • With TBB threading and SSE/AVX vectorization.
    • Intel i9-9990XE: 1.98s ~ 3.00s (Clock: 4.0GHz ~ 5.0GHz)
    • Intel i7-13700: 3.76s ~ 4.90s (Clock: 2.8GHz ~ 3.9GHz)
    • Ryzen 9 5900X: 2.52s ~ 3.38s (Clock: 3.7GHz ~ 4.8GHz)
      Lowest to highest processing time, expressed in seconds, with clock speed (base & turbo-boost), under various background loads, for a standard 512x512, non-upscaled PNG image. Please note that PixLab maintains a private fork of NCNN that is extremely optimized for the x86 architecture, so processing speed may vary if reproduced on your machine.
  • Support for Real-ESRGAN, a Super Resolution Network Upscaler
    • Generate extremely high resolution output images thanks to this network. This extra step is CPU intensive, and takes a few seconds to complete.
  • Full Support for Negative Words
    • An extra text field (set of keywords) that allows you to list what you don't want to see generated such as gore or NSFW content.
  • Full Support for Keywords Priority
    • Instruct the model to pay attention, and give higher priority, to tokens (keywords) surrounded by parentheses.
  • Support for Image Output Metadata
    • Instruct the model to attach metadata to your output images, such as a copyright notice, comments, or any other meta information you would like to see linked to your image.
  • Full Support for Stable Diffusion Extra Parameters
    • Seed resizing, a way to generate the same image but at slightly different resolution.
    • Control how much the image generation process follows the text prompt via the guidance scale.
    • Adjust sampling steps during Stable Diffusion Inference.

Getting Started


Integrating Tiny Dream into your existing code base is straightforward. Here is what to do, without a lot of tedious reading and configuration:

1. Download The Code

Download the latest public release of Tiny Dream, and extract the package on a directory of your choice. Refer to the download section below to get a copy of the Tiny Dream source code as well as the Pre-Trained Models & Assets.

2. Embedding Tiny-Dream

The Tiny Dream source code comprises only two header files: tinydream.hpp and stb_image_write.h. All you have to do is drop these two C/C++ header files into your source tree, and instantiate a new tinyDream object as shown in the pseudo C++ code below:

#include "tinydream.hpp"
int main(void){
 // Instantiate a new Tiny Dream Object
 tinyDream *td = new tinyDream("./assetsPath");
 .
 .
 .
 td->dream(...); //Stable Diffusion Inference
 delete td;
 return 0;
}

👏 You have successfully integrated Tiny Dream. To build & generate an executable for a sample application, run the following commands:

git clone https://github.com/symisc/tiny-dream.git
cd tiny-dream
g++ -o tinydream sample.cpp -funsafe-math-optimizations -Ofast -flto=auto  -funroll-all-loops -pipe -march=native -std=c++17 -Wall -Wextra `pkg-config --cflags --libs ncnn` -lstdc++ -pthread -Wl -flto -fopt-info-vec-optimized
./tinydream "pyramid, desert, palm trees, river, (landscape), (high quality)"

💡 You need to link against NCNN, the backend tensor library, in order to generate the executable. On our roadmap, we plan to move from NCNN to a less bloated tensor library such as SOD or GGML, with a focus on CPU performance.

3. Get the Pre-Trained Models & Assets
  • Once your executable is generated, you will need to make the Tiny Dream pre-trained models & assets path accessible to your executable.
  • The Tiny Dream assets comprise all pre-trained models (over 2GB as of this release) required by the tinyDream::dream() method in order to run stable diffusion inference (image generation).
  • You can download the pre-trained models from the Download section below.
  • Once downloaded, extract the assets ZIP archive in a directory of your choice (usually the directory where your executable is located), and set the full path via tinyDream::setAssetsPath() or from the Tiny Dream constructor.
4. C++ Usage Example

The C++ Gist below highlights a typical integration usage of Tiny Dream on an existing C++ code base:

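The Gist itself is not reproduced here; the snippet below is a minimal sketch of what such an integration typically looks like, using only the methods documented in the C++ API Reference section. The assets path, output path, and prompts are placeholder values:

#include "tinydream.hpp"
#include <iostream>

int main(void){
    // Instantiate a new Tiny Dream object (the assets path defaults to "./assets")
    tinyDream td;

    // Point the engine to the directory holding the pre-trained models & assets (over 2GB)
    td.setAssetsPath("./assets");

    // Register a log consumer callback: print each generated log message to the terminal
    td.setLogCallback([](const char *zLogMsg, void *pUserData){
        (void)pUserData; // opaque user pointer, unused in this sketch
        std::cout << zLogMsg << std::endl;
    }, nullptr);

    // Where the generated PNG image should be saved
    td.setImageOutputPath("./dream.png");

    // Run Stable Diffusion inference: positive prompt, negative tokens, steps, seed
    bool ok = td.dream(
        "pyramid, desert, palm trees, river, (landscape), (high quality)", // positive prompt
        "gore, nudity", // negative tokens
        30,             // sampling steps
        42              // seed
    );
    return ok ? 0 : 1;
}
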
  • The code above is self-explanatory, and should be easy to understand for the average C++ programmer. A new tinyDream object is instantiated on line xx of the Gist above.
  • The assets & pre-trained models path is set via a call to tinyDream::setAssetsPath(). The Tiny Dream assets comprise all pre-trained models (over 2GB as of this release) required by the tinyDream::dream() method in order to run Stable Diffusion inference (image generation). You can download the pre-trained assets from the Download section below. You can also set the path directly from the constructor without calling this method.
  • Please note that if your Tiny Dream assets are located in the same directory as your executable (e.g. "./"), there is no need to specify any path; the default argument of the constructor handles this specific case for you.
  • Optionally, a log consumer callback is registered via a call to the tinyDream::setLogCallback() method on line xx. Stable Diffusion inference may take time to execute depending on the available resources, so it makes sense to log everything to the terminal or a text file during execution for a better user experience.
  • The desired output path for the PNG image yet to be generated is set via a call to tinyDream::setImageOutputPath() on line xx.
  • Finally, Stable Diffusion inference takes place on line xx via a single call to tinyDream::dream(). You supply the positive prompt, the negative tokens, and optionally the sampling steps and seed. On successful execution, the generated PNG image is saved to the output path set earlier.
5. Continue with The C++ API Reference Guide

The Tiny Dream C++ Interface provides detailed specifications for all of the various methods the Tiny Dream class exports. Once the reader understands the basic principles of operation of Tiny Dream, that section should serve as a reference guide.

Downloads


Tiny Dream Source Code

Release 1.75

This ZIP archive contains all C++ source code for Tiny Dream combined into a single header file for easier integration on your existing code base. You may refer to the Getting Started section above for a step-by-step integration guide.


Tiny Dream Pre-Trained Models & Assets

2GB of Assets

This ZIP archive contains all Pre-Trained Models & 2GB of Assets required for Stable Diffusion Inference. Once downloaded, extract the assets ZIP archive in a directory of your choice (usually the directory where your executable is located), and set the path via the Tiny Dream constructor or from the tinyDream::setAssetsPath() method. The assets package is available to download for a one-time fee of $29.

Roadmap


As we continue to develop and improve Tiny Dream, we have an exciting roadmap of future addons and enhancements planned. Here's a glimpse into what you can expect in the future:

  • Highest Priority Provide a Cross-Platform GUI to Tiny Dream implemented in Dear ImGui.
  • Highest Priority Move the tensor library from NCNN to a less bloated one such as SOD or GGML, with a focus on CPU performance.
  • Hot Output SVG and other easy-to-alter image formats rather than static PNGs.
  • Hot Provide a WebAssembly port of the library once the future tensor library (SOD or GGML) is ported to WASM.
  • Low Provide an Android proof-of-concept, show-case APK.

You may refer to the PixLab Blog for the upcoming updates & announcements, and to the Tiny Dream repository for the roadmap progress & implementation.

Licensing


Tiny Dream is released under the GNU Affero General Public License
  • Tiny Dream is a dual-licensed, open source product. The complete source code of the library and related utilities is freely available on GitHub.
  • Tiny Dream is released under the GNU Affero General Public License (AGPLv3) .
  • The AGPLv3 license permits you to use Tiny Dream at no charge under the condition that if you use the library in a host application, the complete source code for your application must be available and freely redistributable under reasonable conditions.
  • If you wish to derive a commercial advantage by not releasing your application under the AGPLv3 or any other compatible open source license, you must purchase a non-exclusive commercial Tiny Dream license.
  • By purchasing a commercial license, you no longer have to release your application's source code. Please reach out to licensing@pixlab.io to place an order and/or for additional information about licensing.

C++ API Reference


This section defines the C++ language interface to Tiny Dream. For a tutorial introduction, please refer to the Getting Started section above. As of this release, the library exposes a single class with just seven methods, making it easy to use & integrate into your existing C++ projects. The exposed methods are documented below.

tinyDream public methods:

Syntax
tinyDream::tinyDream(const std::string& assetsPath = "./assets");
Description

Constructor for the tinyDream class.

Instantiate a new tinyDream object ready for Stable Diffusion Inference. The constructor takes an optional argument assetsPath which specifies the path to the pre-trained assets required by the tinyDream::dream() method in order to accept prompts and generate images.

You can download the pre-trained assets from the Download section above. Once downloaded, extract the assets ZIP archive in a directory of your choice (usually the directory where your executable is located), and set the path via the constructor or tinyDream::setAssetsPath().

Parameters
const std::string& assetsPath

Optional Parameter: Full path to where the pre-trained models (downloadable from here) are located. If this parameter is omitted, the current path where the executable is running is assumed.

Return Value
None
Syntax
void tinyDream::setLogCallback(tdLogHandler xLogHandler, void* pUserData);
Description

Consume log messages via an external log handler callback. The main task of the supplied callback is to consume log messages generated during Stable Diffusion inference via a call to the tinyDream::dream() method with the desired prompts. Stable Diffusion inference may take time to execute depending on the available resources, so it makes sense to log everything to the terminal or a text file during execution.

Parameters
tdLogHandler xLogHandler

Log consumer callback. The supplied callback must have the following signature:

std::function<void(const char *zLogMsg, void *pUserData)> tdLogHandler;

The log function must accept two arguments:

  • The first argument is a pointer to a null terminated string holding the generated log message.
  • The last argument is the pUserData opaque pointer forwarded verbatim to your callback whenever a log message is generated.

Please note that depending on the load, the log consumer callback may be called hundreds if not thousands of times, so make sure your supplied callback runs without blocking, and as fast as possible to avoid wasting CPU cycles.

void *pUserData

Arbitrary user pointer which is forwarded verbatim by the engine to the supplied callback as its second argument.

Return Value
None
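As a hedged illustration of the expected signature, the sketch below registers a free function that appends each message to a disk file; the file name is an arbitrary example, and error handling is kept minimal:

#include <cstdio>
#include <ctime>
#include "tinydream.hpp"

// Log consumer matching the documented tdLogHandler signature:
// appends each generated message to the FILE* passed in as pUserData.
static void logToFile(const char *zLogMsg, void *pUserData)
{
    std::FILE *pFile = static_cast<std::FILE *>(pUserData);
    std::fprintf(pFile, "[%ld] %s\n", static_cast<long>(std::time(nullptr)), zLogMsg);
    std::fflush(pFile); // keep the log file current during long inference runs
}

int main(void)
{
    tinyDream td; // assets path defaults to "./assets"
    std::FILE *pLog = std::fopen("tinydream.log", "a");
    if (pLog == nullptr) return 1;
    td.setLogCallback(logToFile, pLog); // pLog is forwarded verbatim as the second argument
    // ... configure the output path and call td.dream() here ...
    std::fclose(pLog);
    return 0;
}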
Syntax
void tinyDream::setAssetsPath(const std::string& assetsPath);
Description

Set the pre-trained models (assets) path location. The Tiny Dream assets comprise all pre-trained models (over 2GB as of this release) required by the tinyDream::dream() method in order to run Stable Diffusion inference (image generation).

You can download the pre-trained assets from the Download section above. Once downloaded, extract the assets ZIP archive in a directory of your choice (usually the directory where your executable is located), and set the full path via this method or from the Tiny Dream constructor.

Parameters
const std::string& assetsPath

Full path to where the pre-trained models (downloadable from here) are located.

Return Value
None
Syntax
void tinyDream::setImageOutputPath(const std::string& outputPngImgPath);
Description

Set the output path for the PNG image yet to be generated following a successful call to the tinyDream::dream() method with the desired prompts.

Parameters
const std::string& outputPngImgPath

Desired output path (e.g. /path/to/dream.png) for the PNG image yet to be generated following a successful call to the tinyDream::dream() method.

Return Value
None
Syntax
bool tinyDream::dream(const std::string& positivePrompt, const std::string& negativeTokens,
int step = 30, int seed = 42);
Description

Stable Diffusion inference - Generate a high definition, 512x512 PNG image, conditioned by the supplied text prompts, that can be further upscaled to a higher resolution image via Real-ESRGAN. (Positive) prompts in this implementation are tokens (keywords) separated by commas, where each word describes the thing you'd like to see generated.

Prior to calling this method, an image output path must be set via tinyDream::setImageOutputPath(), and the pre-trained models & assets path must be set either from the constructor or via a call to tinyDream::setAssetsPath(). Depending on the step and seed parameters, it may make sense to call this method more than once (i.e., a dozen times) to achieve the desired result. In that case, we recommend that you install a log consumer callback via a call to tinyDream::setLogCallback() to capture the log messages generated during Stable Diffusion inference and get a detailed overview of what's going on under the hood.

You can download the pre-trained assets from the Download section above. Once downloaded, extract the assets ZIP archive in a directory of your choice (usually the directory where your executable is located), and set the full path via tinyDream::setAssetsPath() or from the Tiny Dream constructor.

Parameters
const std::string& positivePrompt

Describe something you'd like to see generated using words separated by commas. Meta instructions (e.g., image quality) must be surrounded by parentheses.

Example: the following prompts will generate a high quality, landscape picture of a pyramid surrounded by palm trees and a river in the middle of the desert:
"pyramid, desert, palm trees, river, (landscape), (high quality)".


const std::string& negativeTokens

Optional string, defaults to the empty string: An extra set of tokens that allows you to list what you don't want to see in the yet-to-be-generated image. Examples of such tokens are: blood, mutilation, gore, nudity, etc.


int step

Optional integer, defaults to 30. An integer for adjusting the inference steps in Stable Diffusion. The more steps you use, the better quality you'll achieve, but you shouldn't set the step count as high as possible. Around 30 sampling steps (the default value) are usually enough to achieve high-quality images. Using more may produce a slightly different picture, but not necessarily of better quality. In addition, the iterative nature of the process makes generation slow; the more steps you use, the more time it will take to generate an image. In most cases, it's not worth the additional wait time.


int seed

Optional integer, defaults to 42. The seed in Stable Diffusion is a number used to initialize the generation. Controlling the seed can help you generate reproducible images, experiment with other parameters, or try prompt variations.

Return Value

Boolean true is returned on successful Stable Diffusion inference, false otherwise. In that case, the log messages captured by your log consumer callback should give you an insight into what went wrong.
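
Since the description above suggests calling the method several times with different step and seed values, here is a minimal sketch of such a loop; it assumes td is an already configured tinyDream instance, and the prompts, output paths, and seed range are placeholder values:

#include <string>
#include "tinydream.hpp"

// Generate a handful of variations of the same prompt by varying the seed.
// Assumes `td` already points to a valid assets path.
static void dreamVariations(tinyDream &td)
{
    for (int seed = 1; seed <= 5; ++seed) {
        // Use a distinct output path per seed so earlier results are not overwritten
        td.setImageOutputPath("./dream_" + std::to_string(seed) + ".png");
        bool ok = td.dream(
            "pyramid, desert, palm trees, river, (landscape), (high quality)", // positive prompt
            "gore, nudity", // negative tokens
            30,             // sampling steps (default value)
            seed            // a different seed yields a different image
        );
        if (!ok) {
            break; // check the messages delivered to your log callback for details
        }
    }
}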


Syntax
 static std::pair<std::string /*Positive Prompt */, std::string /* Negative Prompt*/> tinyDream::promptExample();
Description

Return a hard-coded prompt example template to be passed to the tinyDream::dream() method. This static method is of no particular interest except to familiarize the developer (lacking imagination) with the library's prompt inputs.

Parameters
None
Return Value

This static method never fails, and always returns a standard std::pair object holding the positive prompt in its first field and the negative prompt in its second field.
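
For instance, the returned pair can be fed straight into tinyDream::dream(); in this short sketch, td is assumed to be an already configured tinyDream instance:

// Fetch the built-in example prompts and run inference with them (default steps & seed)
std::pair<std::string, std::string> prompts = tinyDream::promptExample();
td.dream(prompts.first /* positive prompt */, prompts.second /* negative prompt */);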


Syntax
static const char * tinyDream::about();
Description

Return copyright notice, library identification and version number.

Parameters
None
Return Value

This static method never fails, and always returns a pointer to a null-terminated string holding the copyright notice.


You may find the following open source, production-ready products developed & maintained by PixLab | Symisc Systems useful:

  • SOD - An Embedded, Dependency-Free, Computer Vision C/C++ Library.
  • ASCII Art - Real-Time ASCII Art Rendering C Library.
  • UnQLite - An Embedded, Transactional Key/Value Database Engine.
  • FACEIO - Cross Browser, Passwordless Facial Authentication Framework.
  • Annotate - Online Image Annotation, Labeling & Segmentation Tool.