📦 Text AI Query
The Text AI Query block connects a scenario to text-based AI models (LLMs) to generate dynamic, context-aware responses, enabling fluid and personalized interactivity.
This block analyzes the conversation history, the role assigned to the AI, and the user's prompt to produce a new response adapted to the context. It removes the limitations of predefined responses by allowing free-form, evolving exchanges.
📥 Inputs
- Prompt (Text): The instruction or question provided by the user (example: "How do you make a potion?").
- Role (Context - Optional): The personality or behavior rules the AI must follow (example: "You are a wise old sarcastic wizard who refuses to help without flattery").
- History (List - Optional): Part or all of the previous conversation, used to maintain contextual consistency. Without history, the AI treats each prompt as an independent request.
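The block itself is configured visually, but its three inputs map naturally onto the role/history/prompt structure most chat-style LLM APIs expect. The sketch below is illustrative only; the function name and message format are assumptions, not part of the block:

```python
def build_messages(prompt, role=None, history=None):
    """Assemble a chat-style message list from the block's three inputs.

    Hypothetical helper: `role` becomes a system message, `history`
    (prior turns, oldest first) is replayed, and `prompt` is appended
    as the latest user message.
    """
    messages = []
    if role:  # optional personality / behavior rules
        messages.append({"role": "system", "content": role})
    messages.extend(history or [])  # optional prior exchanges
    messages.append({"role": "user", "content": prompt})
    return messages

# Example with no history: each prompt is an independent request.
msgs = build_messages(
    "How do you make a potion?",
    role="You are a wise old sarcastic wizard.",
)
```

With an empty History input, only the system message and the new user message are sent, which is why the AI cannot refer back to earlier turns.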
📤 Outputs
- Generated response (Text): The text produced by the AI in response to the prompt and the context (example: "A potion? *Sigh. Another amateur. Show a little enthusiasm, and we'll see.*").
- Success (Stream): Signal triggered when the response is fully generated and ready to be used.
- Error (Stream): Signal activated in the event of a problem (latency, access restrictions, moderation, etc.).
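The Success and Error streams are mutually exclusive: one or the other fires after each query. A minimal sketch of that branching, with a stubbed model call standing in for the real LLM (all names here are hypothetical):

```python
def run_text_ai_query(ask_model, prompt):
    """Sketch of the block's two output streams.

    `ask_model` stands in for the actual model call. On success the
    block fires Success along with the Generated response; on any
    failure (timeout, access restriction, moderation) it fires Error
    with a reason instead.
    """
    try:
        response = ask_model(prompt)
    except Exception as err:
        return "Error", str(err)  # Error stream fires
    return "Success", response    # Success stream + Generated response

def moderated_model(prompt):
    # Stand-in for a request blocked by moderation filters.
    raise RuntimeError("content blocked by moderation")

ok_stream, text = run_text_ai_query(
    lambda p: "A potion? How quaint.", "How do you make a potion?"
)
err_stream, reason = run_text_ai_query(moderated_model, "forbidden topic")
```

Wiring the Error stream to a fallback message (rather than leaving it unconnected) keeps the scenario responsive even when the model call fails.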
💡 Example of use
Scenario: Dialogue with a traveling merchant
- The player interacts with a character via a Text entry block: "What are you offering for sale?".
- The entered text is passed to the Text AI Query block via the Prompt input.
- The Role is defined as: "Clever merchant who exaggerates the value of his items and responds in rhymes".
- The block generates a response after a few seconds of processing.
- On Success, the Generated Response is displayed: "Rare treasures, my friend! A sword that shines like day, or a potion that cures love!".
- The flow continues based on the player's response (purchase, negotiation, etc.).
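The steps above can be sketched end to end. The `generate` callback below is a stub standing in for the Text AI Query block, so the flow can be followed without a real model; every name in this snippet is illustrative:

```python
def merchant_dialogue(player_text, generate):
    """End-to-end sketch of the merchant scenario above.

    `generate(role, prompt)` plays the part of the Text AI Query
    block: Role and Prompt go in, the Generated response comes out
    and is displayed on Success.
    """
    role = ("Clever merchant who exaggerates the value of his items "
            "and responds in rhymes")
    reply = generate(role, player_text)   # Prompt + Role -> response
    return f'Merchant: "{reply}"'         # shown to the player on Success

# Stubbed model in place of the real LLM call:
stub = lambda role, prompt: "Rare treasures, my friend!"
line = merchant_dialogue("What are you offering for sale?", stub)
```

In the real scenario the player's next message would loop back into the same block, with the exchange appended to the History input.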
⚙️ Technical Details
- Token management: History consumes tokens (AI memory). To optimize resources, limit the history to recent and relevant exchanges.
- Latency and moderation: Errors can occur in the event of a response time that is too long or content blocked by moderation filters.
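One simple way to apply the token-management advice is to cap the history before feeding it to the History input. Counting turns is only a rough proxy for counting tokens, and this helper is a hypothetical sketch rather than a feature of the block:

```python
def trim_history(history, max_turns=6):
    """Keep only the most recent turns of the conversation.

    A real token budget would count tokens with the model's
    tokenizer; capping the number of turns is a simpler
    approximation that still bounds memory use.
    """
    if max_turns <= 0:
        return []
    return history[-max_turns:]  # most recent exchanges only

recent = trim_history([f"turn {i}" for i in range(10)], max_turns=4)
```

Dropping old turns trades long-range memory for cost: the AI stays consistent over recent exchanges but forgets anything trimmed away.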
Updated on: 04/03/2026
