This post is a developer diary, kind of. I'm making an improved CLIP interrogator using nearest-neighbor decoding: https://huggingface.co/codeShare/JupyterNotebooks/blob/main/sd_token_similarity_calculator.ipynb
Unlike the Pharmapsychotic model, aka the "vanilla" CLIP interrogator (https://huggingface.co/spaces/pharmapsychotic/CLIP-Interrogator/discussions), it doesn't require a GPU to run and is super quick. The reason is that the text_encodings are calculated ahead of time. I have plans to make this a Huggingface module.
//----//
This post is going to be a bit haphazard, but that's the way things are before I get the Huggingface Gradio module up and running.
Then it can be a fancy "feature" post, but I have no clue when I will be able to code that.
So it's better to give an update on the ad-hoc solution I have now.
The NND method I'm using is described in this paper, which presents various ways to improve CLIP interrogators: https://arxiv.org/pdf/2303.03032
It's easier to just use the notebook than to follow this gibberish. We pre-encode a bunch of prompt items, then select the most similar one using the dot product. That's the TLDR.
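Roughly, the core idea in code (a minimal sketch with random placeholder tensors, not the notebook's actual variables):

```python
import torch
import torch.nn.functional as F

# Placeholder data: in the notebook these are pre-computed CLIP text encodings of the prompt items.
item_texts = ["a photo of a cat", "a watercolor landscape", "cyberpunk city at night"]
item_encodings = torch.randn(len(item_texts), 768)   # [N, 768] pre-encoded prompt items
target = torch.randn(768)                            # [768] encoding of the image/text to describe

# Normalize so the dot product equals cosine similarity
item_encodings = F.normalize(item_encodings, dim=-1)
target = F.normalize(target, dim=-1)

scores = item_encodings @ target                     # one dot product per pre-encoded item
best = scores.argmax().item()
print(item_texts[best], f"{scores[best].item():.2%}")
```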
Right now the resources available are the ones you see in the image.
I'll try to showcase it at some point. But really, I'm mostly building this tool because it is very convenient for me, plus it's a fun challenge to use CLIP.
It's more complicated than the regular CLIP interrogator, but we get a whole bunch of items to select from, and can select exactly "how similar" we want them to be to the target image/text encoding.
The \{itemA|itemB|itemC\} format is used because it selects an item at random when run on the Perchance text-to-image servers, where I have a generator that uses the full dataset: https://perchance.org/fusion-ai-image-generator
NOTE: I've realized new users get errors when loading the fusion gen for the first time.
It takes minutes to load a fraction of the sets from the Perchance servers before this generator is "up and running", so to speak.
I plan to migrate the database to a Huggingface repo to solve this : https://huggingface.co/datasets/codeShare/text-to-image-prompts
The \{itemA|itemB|itemC\} format is also a built-in random-selection feature in ComfyUI.
Links/Resources posted here might be useful to someone in the meantime.
You can find tons of strange modules on the Huggingface page: https://huggingface.co/spaces
text_encoding_converter (also in the NND notebook): https://huggingface.co/codeShare/JupyterNotebooks/blob/main/indexed_text_encoding_converter.ipynb
I'm using this to batch process JSON files into json + text_encoding paired files. Really useful (for me at least) when building the interrogator. Runs on either the Colab GPU or on Kaggle for added speed: https://www.kaggle.com/
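For reference, a minimal sketch of what such a batch conversion looks like (this is not the converter notebook itself; the input file "prompts.json" is assumed to be a plain JSON list of prompt strings):

```python
import json
import torch
from transformers import CLIPTokenizer, CLIPModel
from safetensors.torch import save_file

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").to(device)

prompts = json.load(open("prompts.json"))            # hypothetical input: a JSON list of prompt strings

encodings = {}
with torch.no_grad():
    for i, text in enumerate(prompts):
        tokens = tokenizer(text, truncation=True, return_tensors="pt").to(device)
        encodings[str(i)] = model.get_text_features(**tokens)[0].cpu()   # 1x768 text encoding

save_file(encodings, "text_encodings.safetensors")    # paired with the JSON of prompt strings
json.dump(prompts, open("prompts_indexed.json", "w"))
```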
Here is the dataset folder: https://huggingface.co/datasets/codeShare/text-to-image-prompts
Inside these folders you can see the auto-generated safetensor + json pairings in the "text" and "text_encodings" folders.
The JSON file(s) of prompt items from which these were processed are in the "raw" folder.
The text_encodings are stored as safetensors. These all represent 100K female first names, with 1K items in each file.
By splitting the files this way, far less RAM/VRAM is used, since lists of 1K items can be processed one at a time.
I can process roughly 50K text encodings in about the time it takes to write this post (I'm currently processing a set of 100K female first names into text encodings for the NND CLIP interrogator).
EDIT: Here is the output, uploaded: https://huggingface.co/datasets/codeShare/text-to-image-prompts/tree/main/names/firstnames
I've updated the notebook to include a similarity search for ~100K female first names, 100K last names, and a randomized 36K mix of female first names + last names.
It's a JSON + safetensor pairing with 1K items in each. Inside the JSON is the name of the .safetensors file it corresponds to. This system is super quick :)!
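A minimal sketch of how such a chunked dataset can be searched one 1K file at a time (the key names "prompts" and "safetensors" inside the JSON are my assumptions, not necessarily the exact layout of the repo):

```python
import glob
import json
import torch
import torch.nn.functional as F
from safetensors.torch import load_file

def search_chunks(target, folder, top_k=5):
    """Search JSON + safetensors pairs chunk by chunk, so only ~1K encodings sit in RAM at once."""
    target = F.normalize(target, dim=-1)
    results = []
    for json_path in glob.glob(f"{folder}/*.json"):
        meta = json.load(open(json_path))                     # assumed: {"prompts": [...], "safetensors": "xyz.safetensors"}
        chunk = load_file(f"{folder}/{meta['safetensors']}")  # dict of 1x768 tensors
        encodings = F.normalize(torch.stack([v.flatten() for v in chunk.values()]), dim=-1)
        scores = encodings @ target
        vals, idx = scores.topk(min(top_k, len(scores)))
        results += [(v, meta["prompts"][i]) for v, i in zip(vals.tolist(), idx.tolist())]
    return sorted(results, reverse=True)[:top_k]
```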
I plan to make the NND image interrogator a public resource on Huggingface later down the line, using these sets. I will likely use the repo for Perchance imports as well: https://huggingface.co/datasets/codeShare/text-to-image-prompts
Source for first names: https://huggingface.co/datasets/jbrazzy/baby_names
A list of the most popular names given to people in the US, by year.
Source for last names: https://github.com/Debdut/names.io
An international list of pretty much every first name + last name in existence. It's kinda borked, as it is biased towards non-Western names. Unfortunately, I haven't been able to filter it by nationality.
//----//
The TLDR: You can run a prompt, or an image, through CLIP to get its encoding. Then sample the above sets (>400K items, at the moment) to get prompt items similar to that thing.
On a related note, the T5 encoder is also used in text-to-image generation.
So surely there must be good uses for the T5 in creating a better CLIP interrogator?
Ideas/examples on how to do this?
I have 0% knowledge of the T5, so feel free to just send me a link someplace if you don't want to type out an essay.
//----//
For context:
I'm making my own version of a CLIP interrogator: https://colab.research.google.com/#fileId=https%3A//huggingface.co/codeShare/JupyterNotebooks/blob/main/sd_token_similarity_calculator.ipynb
The key difference is that this one samples the CLIP-ViT-large-patch14 tokens directly instead of using pre-written prompts.
I text-encode the tokens individually and store them in a list for later use.
I'm using the method shown in this paper: "NND - nearest-neighbor decoding".
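Concretely, those token vectors are just the embedding table of the CLIP text model, so they can be pulled out like this (a sketch using the transformers library; the variable names are mine):

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

# One 1x768 vector per entry in vocab.json (~49K of them)
token_vectors = text_model.text_model.embeddings.token_embedding.weight.detach()
print(token_vectors.shape)            # torch.Size([49408, 768])

# Map IDs back to the human-readable vocab entries ("banana</w>", "post", ...)
vocab = {token_id: token for token, token_id in tokenizer.get_vocab().items()}
print(vocab[1000])                    # whichever vocab entry has ID 1000
```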
Methods for making better CLIP interrogators: https://arxiv.org/pdf/2303.03032
T5 encoder paper : https://arxiv.org/pdf/1910.10683
Example from the notebook where I'm using the NND method on 49K CLIP tokens (Roman girl image):
It finds the “most similar tokens” in the list. Similarity is measured by the angle θ between the token vectors.
The angle is calculated using cosine similarity, where 1 = 100% similarity (parallel vectors) and 0 = 0% similarity (perpendicular vectors).
Negative similarity is also possible.
How can I use it?
If you are bored of prompting “girl” and want something similar, you can run this notebook and use the “chick” token at 21.88% similarity, for example.
You can also run a mixed search, like “cute+girl”/2, where for example “kpop” has 16.71% similarity.
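As a sketch of what such a search boils down to (the percentages quoted above come from the notebook runs, not from this snippet; the helper names here are mine):

```python
import torch
import torch.nn.functional as F
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
token_vectors = text_model.text_model.embeddings.token_embedding.weight.detach()
vocab = {token_id: token for token, token_id in tokenizer.get_vocab().items()}

def token_vector(word):
    """Raw 1x768 token vector of a word (assumes the word maps to a single vocab entry)."""
    return token_vectors[tokenizer(word, add_special_tokens=False).input_ids[0]]

def most_similar(query, top_k=5):
    sims = F.cosine_similarity(query.unsqueeze(0), token_vectors, dim=-1)
    vals, ids = sims.topk(top_k)
    return [(vocab[i], f"{v:.2%}") for v, i in zip(vals.tolist(), ids.tolist())]

print(most_similar(token_vector("girl")))                                  # plain search
print(most_similar((token_vector("cute") + token_vector("girl")) / 2))     # mixed "cute+girl"/2 search
```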
There are some strange tokens the further down the list you go. Example: tokens similar to the token "pewdiepie</w>" (yes, this is an actual token that exists in CLIP).
Each of these corresponds to a unique 1x768 token vector.
The higher the ID value, the less often the token appeared in the CLIP training data.
To reiterate: this is the CLIP model's training data, not the SD model's training data.
So for certain models, tokens with high IDs can give very consistent results, if the SD model is trained to handle them.
An example of this is anime models, where Japanese artist names can affect the output greatly.
Tokens with high ID will often give the "fun" output when used in very short prompts.
What about token vector length?
If you are wondering about token magnitude:
Prompt weights like (banana:1.2) scale the magnitude of the corresponding 1x768 tensor(s) by 1.2. That's how prompt token magnitude works.
So, TLDR: vector direction = “what to generate”, vector magnitude = “prompt weight”.
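A tiny sketch of what that scaling does to a vector (direction stays the same, only the length changes); note that exactly where the weight gets applied varies between UIs, so this is just the general idea:

```python
import torch
import torch.nn.functional as F

vec = torch.randn(768)          # stand-in for a 1x768 token vector for "banana"
weighted = vec * 1.2            # (banana:1.2) -> scale the magnitude by 1.2

print(vec.norm().item(), weighted.norm().item())              # magnitude grows by a factor of 1.2
print(F.cosine_similarity(vec, weighted, dim=0).item())       # 1.0: the direction ("what to generate") is unchanged
```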
How prompting works (technical summary)
There is no correct way to prompt.
Stable Diffusion reads your prompt left to right, one token at a time, finding associations from the previous token to the current token and to the image generated thus far (Cross-Attention Rule).
Stable Diffusion is an optimization problem that seeks to maximize similarity to the prompt and minimize similarity to the negatives (Optimization Rule).
Reference material (it covers all of SD, so not great source material really, but the info is there): https://youtu.be/sFztPP9qPRc?si=ge2Ty7wnpPGmB0gi
The SD pipeline
For every step (20 in total by default) for SD 1.5:
Prompt text => (tokenizer)
=> Nx768 token vectors => (CLIP model)
=> 1x768 encoding => (the SD model / UNet)
=> desired image per the Optimization Rule => (sampler)
=> paint a section of the image => (image)
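The same chain, expressed with the diffusers library (just a sketch of standard usage; the model ID and settings are examples, not something this notebook requires):

```python
import torch
from diffusers import StableDiffusionPipeline

# Example SD 1.5 checkpoint; any SD 1.5 model works the same way
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Tokenizer, CLIP text encoder, UNet and sampler all run inside this one call,
# looping over num_inference_steps denoising steps.
image = pipe(
    prompt="photo of a banana",
    negative_prompt="blurry, low quality",
    num_inference_steps=20,
    guidance_scale=7.5,
).images[0]
image.save("banana.png")
```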
Disclaimer / Trivia
This notebook should be seen as a "dictionary search tool" for the vocab.json, which is the same for SD 1.5, SDXL and FLUX. Feel free to verify this by checking the 'tokenizer' folder under each model.
vocab.json in the FLUX model, for example (1 of 2 copies): https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/tokenizer
I'm using Clip-vit-large-patch14, which is used in SD 1.5, and is one of the two tokenizers for SDXL and FLUX: https://huggingface.co/openai/clip-vit-large-patch14/blob/main/README.md
This set of tokens has dimension 1x768.
SDXL and FLUX use an additional set of token vectors of higher dimension (1x1280 for SDXL's CLIP-ViT-bigG text encoder).
These are not included in this notebook. Feel free to include them yourself (I would appreciate that).
To do so, you will have to download a FLUX and/or SDXL model, copy the token-embedding tensor stored within the model (49408x1280 for SDXL's second encoder), and then save it as a .pt file.
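If you prefer not to dig through the raw checkpoint, a sketch of pulling that second embedding table out via transformers (the repo ID / subfolder follow the standard SDXL diffusers layout; this is my assumption, not something the notebook does):

```python
import torch
from transformers import CLIPTextModelWithProjection

# SDXL's second text encoder (OpenCLIP ViT-bigG), loaded from the diffusers-style repo layout
encoder_2 = CLIPTextModelWithProjection.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="text_encoder_2"
)

token_vectors_2 = encoder_2.text_model.embeddings.token_embedding.weight.detach()
print(token_vectors_2.shape)    # one vector per vocab.json entry
torch.save(token_vectors_2, "sdxl_text_encoder_2_token_vectors.pt")
```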
//---//
I am aware it is actually the 1x768 text_encoding being processed into an image for the SD models + FLUX.
As such, I've included a text_encoding comparison at the bottom of the notebook.
I am also aware that SDXL and FLUX use additional encodings, which are not included in this notebook.
Clip-vit-bigG for SDXL: https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k/blob/main/README.md
And the T5 text encoder for FLUX. I have 0% understanding of the FLUX T5 text_encoder.
//---//
If you want them, feel free to include them yourself and share the results (cuz I probably won't) :)!
That being said, being an encoding, I reckon the CLIP Nx768 => 1x768 mapping should be "linear" (or whatever one might call it).
So exchange a few tokens in the Nx768 for something similar, and the resulting 1x768 ought to be kinda similar to the 1x768 we had earlier. Hopefully.
I feel it's important to mention this, in case some wonder why the token-to-token similarity doesn't match the text-encoding-to-text-encoding similarity.
Note regarding CLIP text encoding vs. token
To make this disclaimer clear: token-to-token similarity is not the same as text_encoding similarity.
I have to say this, since it will otherwise get (even more) confusing, as both the individual tokens and the text_encoding have dimension 1x768.
They are separate things. Separate results. etc.
As such, you will not get anything useful if you start comparing similarity between a token and a text_encoding. So don't do that :)!
What about the CLIP image encoding?
The CLIP model can also do an image_encoding of an image, where the output will be a 1x768 tensor. These can be compared with the text_encoding.
Comparing the CLIP image_encoding with the CLIP text_encoding of a bunch of random prompts until you find the "highest similarity" is a method used in the CLIP interrogator: https://huggingface.co/spaces/pharmapsychotic/CLIP-Interrogator
The list of random prompts for the CLIP interrogator can be found here, for reference: https://github.com/pharmapsychotic/clip-interrogator/tree/main/clip_interrogator/data
The CLIP image_encoding is not included in this Notebook.
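For completeness, a minimal sketch of that image-vs-text comparison (the image path and the candidate prompts are placeholders):

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

image = Image.open("my_image.png")                                        # placeholder image
candidates = ["a photo of a cat", "an oil painting of a forest"]          # placeholder prompts

with torch.no_grad():
    img_enc = model.get_image_features(**processor(images=image, return_tensors="pt"))                   # [1, 768]
    txt_enc = model.get_text_features(**processor(text=candidates, padding=True, return_tensors="pt"))   # [N, 768]

sims = F.cosine_similarity(img_enc, txt_enc)      # one similarity score per candidate prompt
best = sims.argmax().item()
print(candidates[best], sims[best].item())
```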
If you spot errors or have ideas for improvements, feel free to fix the code in your own notebook and post the results.
I'd appreciate that over people saying "your math is wrong you n00b!" with no constructive feedback.
//---//
Regarding output
What are the </w> symbols?
The whitespace symbol indicates whether the tokenized item ends with whitespace (the suffix "banana</w>" => "banana ") or not (the prefix "post" in "post-apocalyptic").
For ease of reference, I call them prefix-tokens and suffix-tokens.
Sidenote:
Prefix tokens have the unique property that they "mutate" suffix tokens.
Example: "photo of a #prefix#-banana"
where #prefix# is a randomly selected prefix-token from the vocab.json
The hyphen "-" exists to guarantee the tokenized text splits into the written #prefix# and #suffix# token respectively. The "-" hypen symbol can be replaced by any other special character of your choosing.
Capital letters work too , e.g "photo of a #prefix#Abanana" since the capital letters A-Z are only listed once in the entire vocab.json.
You can also choose to omit any separator and just rawdog it with the prompt "photo of a #prefix#banana" , however know that this may , on occasion , be tokenized as completely different tokens of lower ID:s.
Curiously , common NSFW terms found online have in the CLIP model have been purposefully fragmented into separate #prefix# and #suffix# counterparts in the vocab.json. Likely for PR-reasons.
You can verify the results using this online tokenizer: https://sd-tokenizer.rocker.boo/
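You can also check this with the tokenizer in the transformers library (a quick sketch; the exact split you get can vary with the string):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

# Suffix tokens end in </w> (they are followed by whitespace); prefix tokens do not.
for text in ["banana", "post-apocalyptic", "photo of a post-banana"]:
    print(text, "->", tokenizer.tokenize(text))
```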
What are those gibberish tokens that show up?
The gibberish tokens like "ðŁĺħ\</w>" are actually emojis!
Try writing some emojis in this online tokenizer to see the results: https://sd-tokenizer.rocker.boo/
It is a bit borked as it can't process capital letters properly.
Also note that this is not reversible.
If tokenization maps "😅" => "ðŁĺħ</w>", then you can't prompt "ðŁĺħ" and expect to get the same result as the tokenized original emoji, "😅".
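A quick way to see this for yourself with the tokenizer (a sketch; the printed tokens will show the byte-level "gibberish" form):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

print(tokenizer.tokenize("😅"))       # the emoji's UTF-8 bytes appear as a "gibberish" token
print(tokenizer.tokenize("ðŁĺħ"))     # typing the gibberish literally tokenizes into different tokens
```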
SD 1.5 models actually have training for emojis.
But you have to set CLIP skip to 1 for this to work as intended.
For example, this is the result from "photo of a 🧔🏻♂️"
And that concludes this tutorial on stuff you can do with the vocab list.
Anyways, have fun with the notebook.
There might be some updates in the future with features not mentioned here.