🤳 AI Generation — Player Side
Celestory combines the power of nocode (creating applications without writing code) with vibe coding (describing what you want in plain language and letting the AI generate it) so you can test the best AI models directly in your player experiences — including sovereign models (developed in Europe or open source, independent of US tech giants) such as Mistral (text) or Flux (image). Once a model is validated in the Cloud, you can download it for local execution.
General Principles
During an experience (game, narrative application, serious game, training module…), the player can interact directly with AI engines configured by the creator, thanks to two dedicated blocks available in the graph:
- 🟦 AI text request block — Dynamic text generation
- 🟦 AI image request block — On-the-fly image generation
⚠️ AI Credits: Any request triggered by a player (text or image) draws directly from the creator's AI credits. Monitoring and mastering monthly quotas are essential when deploying an experience at a large scale.
🟦 AI text request block
The AI text request block connects your scenario to language models (LLMs — AI programs capable of understanding and generating text, like a very advanced virtual assistant) to generate dynamic and contextual responses in real time.
📥 Inputs
| Input | Type | Description |
|---|---|---|
| Prompt | Text | The player's instruction or question (e.g., "How do you make a potion?") |
| Role (System Prompt) | Text (optional) | Personality and behavior rules for the AI (e.g., "You are a sarcastic old wizard who refuses to help without flattery") |
| History | List (optional) | Previous conversation, used to maintain contextual coherence. Without history, the AI treats each prompt as an independent request |
| Model | Text | The AI model to use for generation |
| Temperature | Number (optional) | Controls the creativity of responses (0 = predictable, factual responses; 1 = creative, surprising responses). Default value: 0.5 |
| Images | List of URLs (optional) | Images to provide as visual context to the model (for multimodal models, i.e., models capable of "reading" images in addition to text) |
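The inputs above can be sketched as a chat-style request body. This is a hypothetical mapping — the field names follow the common chat-completion schema used by services like OpenRouter, not Celestory's internal API:

```python
# Illustrative sketch: assembling the AI text request block's inputs into a
# chat-style payload. Field names are assumptions, not Celestory's actual API.

def build_text_request(prompt, model, role=None, history=None,
                       temperature=0.5, images=None):
    """Map the block's inputs (Prompt, Role, History, Model, Temperature,
    Images) to a single request body."""
    messages = []
    if role:  # Role (System Prompt): personality and behavior rules
        messages.append({"role": "system", "content": role})
    for turn in history or []:  # History: previous exchanges, oldest first
        messages.append(turn)
    user_content = prompt
    if images:  # multimodal models accept image URLs alongside the text
        user_content = [{"type": "text", "text": prompt}] + [
            {"type": "image_url", "image_url": {"url": u}} for u in images
        ]
    messages.append({"role": "user", "content": user_content})
    return {"model": model, "messages": messages, "temperature": temperature}

req = build_text_request(
    "How do you make a potion?",
    model="some-model",  # placeholder: use the model configured in the block
    role="You are a sarcastic old wizard.",
)
```

Note how the Temperature default of 0.5 from the table is mirrored in the function signature, and how omitting History yields a single independent user message.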
📤 Outputs
| Output | Type | Description |
|---|---|---|
| Generated Response | Text | The text produced by the AI in response to the prompt and context |
| Success | Stream (a signal that triggers once) | Signal triggered when the response is ready |
| Error | Stream | Signal activated in case of a problem (latency, moderation, quota exhausted…) |
Text Models Available In-Game
The creator can configure one of the following models (via OpenRouter, a service that provides access to many different AI models from a single interface):
| Provider | Model | Context Window (max conversation memory) |
|---|---|---|
| Mistral 🇫🇷 | | 256,000 tokens |
| Mistral 🇫🇷 | | 256,000 tokens |
| | | 1,000,000 tokens |
| OpenAI | | 400,000 tokens |
| Anthropic | | 200,000 tokens |
| Meta | | 1,000,000 tokens |
| xAI | | 2,000,000 tokens |
Mistral and Llama (Meta, open source) models are sovereign and open models, downloadable for local execution after validation.
💡 Use Case Example
Scenario: Dialogue with a traveling merchant
- The player interacts via a Text Input block: "What do you have for sale?"
- The text is passed to the Prompt input of the AI text request block.
- The Role is configured: "Sly merchant who exaggerates the value of his items and answers in rhymes".
- The History contains previous exchanges to maintain dialogue continuity.
- On Success, the response is displayed: "Rare treasures, my friend! A sword that glows like day, or a potion that heals in every way!"
- The flow continues according to the player's choices.
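The steps above can be sketched as a dialogue loop in which each turn is appended to History so the next request stays coherent. Here `generate()` is only a stub standing in for the AI text request block:

```python
# Illustrative sketch of the merchant-dialogue flow. generate() is a stub for
# the AI text request block; a real graph would call the configured model.

ROLE = ("Sly merchant who exaggerates the value of his items "
        "and answers in rhymes")

def generate(prompt, role, history):
    """Stub: returns a canned reply instead of calling a model."""
    return f"[merchant replies to: {prompt}]"

def merchant_turn(player_text, history):
    """One dialogue turn: send the prompt with History, then record both
    sides so the next turn keeps contextual coherence."""
    reply = generate(player_text, ROLE, history)
    # On Success: append the exchange to History for the following turn
    history.append({"role": "user", "content": player_text})
    history.append({"role": "assistant", "content": reply})
    return reply, history

history = []
reply, history = merchant_turn("What do you have for sale?", history)
```

The key point mirrored from the scenario is that History is an accumulating list the creator maintains between turns — the block itself does not remember past prompts.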
⚙️ Technical Details
- Token Management: History consumes tokens (the text units processed by the AI — "hello" = 1 token; a sentence ≈ 13 tokens). Limit it to recent, relevant exchanges to optimize costs.
- Quota Validation: Before each generation, the server checks that the creator has not exceeded their monthly generation limit. If the quota is reached, an `AI_GENERATION_LIMIT_REACHED` error is returned.
- Latency and Moderation: Errors can occur when a response takes too long or when content is blocked by moderation filters.
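The token-management advice above — keep History limited to recent, relevant exchanges — can be sketched with a rough character-based heuristic. The ~4 characters per token ratio is an assumption; real tokenizers vary by model:

```python
# Minimal sketch of trimming History to a token budget before sending it.
# estimate_tokens() uses a crude ~4 chars/token heuristic (an assumption;
# the actual tokenizer differs per model).

def estimate_tokens(text):
    return max(1, len(text) // 4)

def trim_history(history, budget_tokens=500):
    """Keep only the most recent turns that fit within the token budget,
    preserving chronological order."""
    kept, used = [], 0
    for turn in reversed(history):  # walk from newest to oldest
        cost = estimate_tokens(turn["content"])
        if used + cost > budget_tokens:
            break  # older turns no longer fit: drop them
        kept.append(turn)
        used += cost
    return list(reversed(kept))  # restore oldest-first order

# 20 turns of ~100 tokens each; only the 5 newest fit a 500-token budget
long_history = [{"role": "user", "content": "x" * 400} for _ in range(20)]
trimmed = trim_history(long_history, budget_tokens=500)
```

Dropping the oldest turns first keeps the freshest context, which is usually what dialogue continuity needs most.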
🟦 AI image request block
The AI image request block transforms textual descriptions into dynamically generated images, integrated at the heart of the interactive experience.
📥 Inputs
| Input | Type | Description |
|---|---|---|
| Prompt | Text | Detailed description of the image to generate (e.g., "A cat flying in space in cyberpunk pixel art style") |
| Model | Text | The AI model to use for generation |
| Image | URL (optional) | Reference image to guide generation (image-to-image mode) |
| Negative prompt | Text (optional) | Elements to exclude from generation — what you do NOT want in the image (e.g., "blur, low resolution, text") |
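As with the text block, the image inputs can be sketched as a request body. The field names here are hypothetical, chosen to mirror common image-generation APIs rather than Celestory's internals:

```python
# Illustrative sketch: mapping the AI image request block's inputs to a
# request body. All field names are assumptions, not Celestory's actual API.

def build_image_request(prompt, model, image=None, negative_prompt=None):
    """Map Prompt, Model, Image, and Negative prompt to one request dict.
    Optional inputs are simply omitted when not provided."""
    body = {"model": model, "prompt": prompt}
    if image:  # image-to-image mode: a reference image guides generation
        body["image"] = image
    if negative_prompt:  # elements to exclude from the result
        body["negative_prompt"] = negative_prompt
    return body

req = build_image_request(
    "A cat flying in space in cyberpunk pixel art style",
    model="some-image-model",  # placeholder for the configured model
    negative_prompt="blur, low resolution, text",
)
```

Leaving `image` out selects plain text-to-image generation; supplying it would switch the request to the image-to-image mode described in the table.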
📤 Outputs
| Output | Type | Description |
|---|---|---|
| Image URL(s) | Text / List | Network address(es) of the generated image(s), usable in an image display or background block |
| Success | Stream | Signal triggered when the image is ready |
| Error | Stream | Signal activated in case of failure (server overload, moderation, quota exhausted…) |
Image Models Available In-Game
| Model | Specificity |
|---|---|
| Nano Banana | Fast, multiple ratios, up to 4 images |
| Flux 2 Klein 4B | High quality, multiple formats |
| GPT Image 1.5 | Adjustable quality, transparent background, inpainting (targeted retouching) |
Flux is an open-source and sovereign image generation model, downloadable for local use after prototyping in Celestory.
💡 Use Case Example
Scenario: Creating a personalized fantasy landscape
- The player is asked to describe a location via a Text Input block: "An enchanted forest with golden-leaved trees and a crystal river under a purple sky"
- The text is passed to the Prompt input of the AI image request block.
- The model is configured to Nano Banana for fast generation.
- A loading indicator is displayed during generation (~10 seconds).
- On Success, the Image URL is retrieved and passed to an Image Display block.
- The generated image is displayed on the screen for the player.
⚙️ Technical Details
- Performance and Costs: Image generation is resource-intensive. Plan for smooth loading screens and monitor AI quotas.
- Quota Validation: Same as text request — check before each generation.
- Error Handling: The Error signal allows for an alternative (retrying or displaying a default image).
⚠️ AI Credit Consumption
This is a crucial aspect of the AI usage model in player experiences:
Any request sent from the experience — whether it's an AI text request or AI image request block triggered by a player's action — draws directly from the Creator's monthly AI credits.
Creator Responsibilities
- Limit and Secure Requests: Limit the number of possible requests per session, per player, or per scene to avoid abuse.
- Use Strict Prompts: Constrain the AI with precise Role/System Prompts (see Competency Prompts in the Creator documentation) to frame responses.
- Monitor Consumption: Track spending and token consumption in the Previous generations tab (filterable by "in-game" source) and in the main quota counter.
- Choose the Right Model: Opt for lightweight models (`ministral-3b`, Nano Banana) for frequent interactions, and reserve premium models for key moments.
- Handle Errors Gracefully: Systematically use the Error output of AI blocks to provide fallback behaviors (pre-defined responses, default images).
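The first responsibility — limiting requests per session or per player — can be sketched as a simple counter guarding each AI call. The cap value is illustrative; an appropriate limit depends on the experience and the creator's quota:

```python
# Minimal sketch of capping AI requests per player session to protect the
# creator's monthly AI credits. The limit of 10 is an illustrative value.

MAX_REQUESTS_PER_SESSION = 10

class SessionBudget:
    """Counts AI requests made during one player session and refuses
    further requests once the cap is reached."""

    def __init__(self, limit=MAX_REQUESTS_PER_SESSION):
        self.limit = limit
        self.used = 0

    def try_request(self):
        """Return True and count the request if the cap allows it;
        otherwise return False so the graph can branch to a non-AI
        fallback (pre-written dialogue, static image)."""
        if self.used >= self.limit:
            return False
        self.used += 1
        return True

budget = SessionBudget(limit=2)
results = [budget.try_request() for _ in range(3)]  # third call is refused
```

Checking the budget *before* triggering the AI block means refused requests never touch the creator's credits at all.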
Updated on: 04/03/2026
