Articles on: Blocks Creator
🟦 AI Text Query


The AI Text Query block lets you connect a scenario to text-based AI models (LLMs) to generate dynamic, context-aware responses, providing fluid and personalized interactivity.


This block analyzes the conversation history, the role assigned to the AI, and the user's prompt to produce a new response adapted to the context. It removes the limitations of predefined responses by allowing free, open-ended exchanges.


📥 Inputs


  • Prompt (Text): The instruction or question provided by the user (example: "How do you make a potion?").
  • Role (Context - Optional): The personality or behavior rules that the AI must follow (example: "You are a wise old sarcastic wizard who refuses to help without flattery").
  • History (List - Optional): A portion or the entirety of the previous conversation, used to maintain contextual consistency. Without history, the AI treats each prompt as an independent request.
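The three inputs map naturally onto the message structure used by most chat-based LLM APIs. The block's internals are not published, so the following is a minimal sketch with hypothetical names: the Role becomes a system message, the History supplies the prior turns, and the Prompt arrives last as the user message.

```python
def build_messages(prompt, role=None, history=None):
    """Assemble a chat-style message list from the block's three inputs."""
    messages = []
    if role:
        # Role -> system message: the personality and behavior rules.
        messages.append({"role": "system", "content": role})
    # History -> prior exchanges, kept in order for context (optional).
    messages.extend(history or [])
    # Prompt -> the user's current instruction or question.
    messages.append({"role": "user", "content": prompt})
    return messages

msgs = build_messages("How do you make a potion?",
                      role="You are a wise old sarcastic wizard.")
```

Without a History input, the list contains only the role and the new prompt, which is why each request is then treated independently.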


📤 Outputs


  • Generated response (Text): The text produced by the AI in response to the prompt and the context (example: "A potion? *Sigh.* Another amateur. Show a little enthusiasm, and we'll see.").
  • Success (Stream): Signal triggered when the response is fully generated and ready to be used.
  • Error (Stream): Signal activated in the event of a problem (latency, access restrictions, moderation, etc.).
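Exactly one of the two stream outputs fires per request: Success when the response is complete, Error when something goes wrong. A minimal sketch of that branching, with illustrative names (`send_request` stands in for whatever actually calls the model):

```python
def run_query(send_request, prompt, on_success, on_error):
    """Fire the request, then trigger exactly one of the two output streams."""
    try:
        # May raise on excessive latency, access restrictions, or moderation.
        response = send_request(prompt)
    except Exception as exc:
        on_error(str(exc))    # Error stream
        return None
    on_success(response)      # Success stream: response fully generated
    return response
```

Wiring the downstream scenario to `on_success` and `on_error` guarantees it always reacts, even when the model call fails.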


💡 Example of use


Scenario: Dialogue with a traveling merchant


  1. The player interacts with a character via a Text entry block: "What are you offering for sale?".
  2. The entered text is passed to the AI Text Query block via the Prompt input.
  3. The Role is defined as: "Clever merchant who exaggerates the value of his items and responds in rhymes".
  4. The block generates a response after a few seconds of processing.
  5. On Success, the Generated Response is displayed: "Rare treasures, my friend! A sword that shines like day, or a potion that cures love!".
  6. The flow continues based on the player's response (purchase, negotiation, etc.).
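The steps above can be condensed into one dialogue turn. This is an illustrative sketch, not the block's actual implementation: `query_model` is a hypothetical stand-in for the AI Text Query block, stubbed here so the flow is visible.

```python
ROLE = ("Clever merchant who exaggerates the value of his items "
        "and responds in rhymes")

def merchant_turn(player_text, query_model):
    """One dialogue turn: player text in (steps 1-2), merchant reply out (step 5)."""
    # The player's text is passed as the Prompt, with the merchant Role (step 3).
    reply = query_model(prompt=player_text, role=ROLE)
    # On Success, the Generated Response is shown; the flow continues (step 6).
    return reply

# Stub standing in for the real model call.
def fake_model(prompt, role):
    return "Rare treasures, my friend! A sword that shines like day!"
```

In a real scenario, `query_model` would be the block itself and the return value would feed the next block (purchase, negotiation, etc.).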


βš™οΈ Technical Details

  • Token management: History consumes tokens (AI memory). To optimize resources, limit the history to recent and relevant exchanges.
  • Latency and moderation: Errors can occur in the event of a response time that is too long or content blocked by moderation filters.
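A simple way to apply the token-management advice is to trim the History input before passing it to the block. This sketch uses message count as a rough proxy for tokens; a real implementation would measure with the model's tokenizer.

```python
def trim_history(history, max_messages=10):
    """Keep only the most recent exchanges to limit token consumption.

    Oldest messages are dropped first, preserving the recent, relevant
    context the block actually needs.
    """
    if max_messages <= 0:
        return []
    return history[-max_messages:]
```

Feeding the trimmed list into the History input keeps responses contextually consistent while capping memory cost per request.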

Updated on: 04/03/2026
