Streaming text generation demo with @huggingface/inference

First, enter your Hugging Face access token if you have one; otherwise you may run into rate limiting. You can create a token for free at hf.co/settings/tokens
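For reference, here is a minimal sketch of how a token is typically passed to the @huggingface/inference client. The variable names are illustrative, not the demo's actual code; in the demo the token comes from the input field above.

```ts
import { HfInference } from "@huggingface/inference";

// The token is optional; without it, requests may be rate-limited.
const token = undefined; // e.g. an "hf_..." token from hf.co/settings/tokens
const hf = new HfInference(token);
```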

Next, pick the model you want to run. There are over 10k text-to-text generation models to browse on the Hugging Face Hub.
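The model is just a repo id string passed with each request. Below is a sketch of a plain (non-streaming) call to show where the model id goes; "google/flan-t5-xxl" is only an example, any text-to-text generation model id works.

```ts
import { HfInference } from "@huggingface/inference";

const hf = new HfInference(); // pass a token here to avoid rate limiting

// "google/flan-t5-xxl" is only an example model id.
const result = await hf.textGeneration({
  model: "google/flan-t5-xxl",
  inputs: "Please answer the following question: what is the capital of France?",
});
console.log(result.generated_text);
```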

Finally, enter the prompt you want the model to complete.
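Streaming itself works by iterating over the chunks returned from textGenerationStream. This is a sketch under the same assumptions as above (example model id and prompt), not the demo's exact code:

```ts
import { HfInference } from "@huggingface/inference";

const hf = new HfInference(); // pass a token here to avoid rate limiting

let output = "";
for await (const chunk of hf.textGenerationStream({
  model: "google/flan-t5-xxl", // example model id
  inputs: "Write a short poem about streaming APIs.",
  parameters: { max_new_tokens: 100 },
})) {
  // Each chunk carries the newly generated token; skip special tokens.
  if (!chunk.token.special) {
    output += chunk.token.text;
    console.log(output); // the demo updates its output panel here instead
  }
}
```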

Output logs

The generated text will stream here as the model produces it.

Check out the source code