Agents
"Agents" orchestrate the interactions between models, Skills, and automation flows within the Artificial Intelligence layer of Skyone Studio. They act as "interpreters" that process commands, execute established configurations, and return the final response to the user.
Each agent:
Is linked to a model
Can use one or more Skills
Can operate standalone as an assistant or in cooperation with other agents.
Difference between Assistant and Cooperation
To optimize the use of Artificial Intelligence, it is fundamental to understand the distinction between an Assistant Agent and a Cooperation architecture. The choice between the two depends on the complexity of the task and the need for specialization.
Assistant Agent
The Assistant Agent is a centralized interaction model. It acts as a generalist assistant that has access to a broad set of tools and knowledge to resolve various demands within a single flow.
Single point of contact: The user interacts with a single agent that processes the request from start to finish.
Broad Context: It is ideal for linear tasks where the volume of information does not compromise the AI's assertiveness.
Operation: It receives the command, consults its knowledge base or configured Skills, and delivers the final response directly.
Cooperation Agent
The Cooperation Agent uses a multi-agent architecture, in which it manages the cooperative interaction between several specialized agents.
Division of Specialties: Instead of a single agent processing all topics, the workload is segmented. In a customer service structure, for example, the Cooperation Agent identifies whether the request refers to Logistics or Support, instantly directing the task to the corresponding specialized agent.
Decision Making (Routing): The Cooperation Agent acts as a logical triage. It analyzes the user's intent and delegates execution to the agent with the best technical mastery of that topic.
Efficiency and Precision: Since each agent works within a reduced scope of data, the risk of errors (hallucinations) decreases and response speed increases, as the search for information is targeted rather than global.
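The triage described above can be sketched as simple intent routing. The snippet below is a hypothetical illustration only: Skyone Studio performs this routing internally, and the specialist names and keyword rules here are assumptions, not platform APIs.

```python
# Hypothetical sketch of the triage a Cooperation Agent performs.
# The specialist names and keyword rules are illustrative assumptions.

SPECIALISTS = {
    "logistics": ["delivery", "shipping", "tracking", "warehouse"],
    "support": ["error", "login", "password", "bug"],
}

def route(user_message: str) -> str:
    """Return the specialist agent best matching the user's intent."""
    text = user_message.lower()
    scores = {
        name: sum(keyword in text for keyword in keywords)
        for name, keywords in SPECIALISTS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to a generalist agent when no specialist matches.
    return best if scores[best] > 0 else "generalist"

print(route("Where is my shipping tracking code?"))  # -> logistics
print(route("I cannot reset my password"))           # -> support
```

In practice, the Cooperation Agent uses the language model itself (not keyword matching) to classify intent, but the delegation pattern is the same: classify first, then hand the task to the agent with the narrowest relevant scope.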
Create an Agent
Go to the side menu and click on “Agents.”

Click “New Agent.”
Choose the agent type: Assistant or Cooperation.

Select the corresponding tab (Assistant or Cooperation) to access the specific creation instructions for each modality:
Fill in or select the following fields:
Agent name: Name used to identify the agent.
System prompt: Write the prompt for the agent. If you wish to add more prompts, simply click “Add prompt”.
Using different prompts for the same agent is useful for separating tasks and facilitating maintenance.
Model: Select an existing model. It will serve as the agent's interpretation base.
Response type: The agent can respond in three different modes:
Text: The agent will respond via text.
Audio: The agent will respond via audio. When selecting this option, the following voice settings are required:
Agent voice: Select the vocal identity from the platform’s available options that best represents your agent.
Speech instructions: Describe the desired tone of voice (e.g., empathetic, formal, enthusiastic) and any other delivery guidelines.
Hybrid: Capable of switching between text and audio, delivering a mixed experience within the same interaction. When selecting this option, the voice settings mentioned above are required.
Response Type availability is directly linked to the selected model. Hybrid mode is not compatible with all AI models available on the platform.
Enable chart generation: By enabling this flag, charts can be created from the agent's data, except for radar (spider) charts.
Chart generation availability is directly linked to the selected model and is not compatible with all models.
Default error message: Define a message for times when the agent cannot interpret the prompt or find a suitable answer.
Manage your tools: Add one or more Skills to define the technical capabilities and functions the agent can execute during the automation flow.
Knowledge Base: Allows you to link documents (such as PDF files) for the agent to use as a reference source. You can select existing files on the platform or upload new documents directly during this step.
Advanced setting options may vary depending on the chosen AI model.
Advanced settings: In this area, you can adjust the technical parameters that govern the AI model's response generation. These settings let you balance creativity, precision, speed, and operational cost:
top_k: Limits the model's vocabulary to the k most likely words at each stage of generation. For example, if set to 50, the model will only choose the next word from the top 50 options ranked by context.
top_p: Nucleus Sampling. Filters candidate words based on the cumulative sum of their probabilities. For example: if set to 0.9, the model considers only the subset of words that together account for 90% of the occurrence probability, discarding irrelevant options.
Temperature: Defines the predictability level of the response. Low values (e.g., 0.2) result in more direct and exact responses, ideal for technical analysis. High values (e.g., 0.8) encourage more free and creative responses.
token_limit: Determines the token generation limit. It defines how many tokens (words or word fragments) the model can generate in the response, helping control costs and preventing excessively long answers.
context_limit: Defines the conversation memory limit. This determines the maximum volume of tokens (message history and instructions) the model can process simultaneously to maintain coherence and conversation flow.
repeat_penalty: Adjusts the model's strictness to avoid excessive repetition of identical words, terms, or phrases within the same response.
max_completion_tokens: Defines the maximum limit of tokens the model can generate in a single response.
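The interaction between top_k, top_p, and temperature can be hard to picture from the definitions alone. The sketch below is an illustrative toy example (the five-word vocabulary and logit values are made up; real models sample over tens of thousands of tokens), showing how each parameter progressively filters the candidates for the next token:

```python
import math

# Toy illustration of top_k, top_p, and temperature.
# The vocabulary and logit values are invented for demonstration.
logits = {"the": 2.0, "a": 1.5, "cat": 0.5, "runs": 0.2, "xyzzy": -1.0}

def next_token_distribution(logits, temperature=1.0, top_k=3, top_p=0.9):
    # Temperature rescales logits: low values sharpen the distribution
    # (more predictable), high values flatten it (more creative).
    scaled = {t: l / temperature for t, l in logits.items()}
    # Softmax into probabilities.
    z = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / z for t, v in scaled.items()}
    # top_k: keep only the k most likely tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # top_p (nucleus sampling): keep the smallest prefix of tokens whose
    # cumulative probability reaches top_p, discarding the long tail.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    # Renormalize the surviving candidates.
    total = sum(p for _, p in kept)
    return {t: p / total for t, p in kept}

dist = next_token_distribution(logits, temperature=0.8, top_k=3, top_p=0.9)
print(dist)  # only the most likely tokens survive both filters
```

In this example, low-probability tokens ("runs", "xyzzy") never reach the sampler: top_k removes them first, and top_p would cut any remaining tail. This is why tightening these parameters tends to make responses more focused and repeatable.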
Privacy settings: Configure the agent's privacy by choosing between the following options:
Public: Anyone can view and edit the agent settings.
Private: Only selected users will be able to view details and edit the current agent.
Configure users: When the Private option is selected, this option becomes active. Choose the users who will have access to the agent and then click “Finish.”

Unauthorized users can use the agents, but they cannot view technical details or edit them.
Fill in or select the following fields:
Agent name: Name used to identify the agent.
System prompt: Write the prompt for the agent. If you wish to add more prompts, simply click “Add prompt”.
Using different prompts for the same agent is useful for separating tasks and facilitating maintenance.
Model: Select an existing model. It will serve as the agent's interpretation base.
Response type: The agent can respond in three different modes:
Text: The agent will respond via text.
Audio: The agent will respond via audio. When selecting this option, the following voice settings are required:
Agent voice: Select the vocal identity from the platform’s available options that best represents your agent.
Speech instructions: Describe the desired tone of voice (e.g., empathetic, formal, enthusiastic) and any other delivery guidelines.
Hybrid: Capable of switching between text and audio, delivering a mixed experience within the same interaction. When selecting this option, the voice settings mentioned above are required.
Response Type availability is directly linked to the selected model. Hybrid mode is not compatible with all AI models available on the platform.
Manage agents for cooperation: select two or more previously created agents. To do this:
Click “Select agents.”
In the modal, select two or more by clicking their respective checkboxes.
Default error message: Define a message for times when the agent cannot interpret the prompt or find a suitable answer.
Advanced settings: In this area, you can adjust the technical parameters governing the AI model's response generation. These settings allow you to balance creativity, precision, speed, and operational cost.
Advanced setting options may vary depending on the chosen AI model.
The available parameters are:
top_k: Limits the model's vocabulary to the k most likely words at each stage of generation. For example, if set to 50, the model will only choose the next word from the top 50 options ranked by context.
top_p: Nucleus Sampling. Filters candidate words based on the cumulative sum of their probabilities. For example: if set to 0.9, the model considers only the subset of words that together account for 90% of the occurrence probability, discarding irrelevant options.
Temperature: Defines the predictability level of the response. Low values (e.g., 0.2) result in more direct and exact responses, ideal for technical analysis. High values (e.g., 0.8) encourage more free and creative responses.
token_limit: Determines the token generation limit. It defines how many tokens (words or word fragments) the model can generate in the response, helping control costs and preventing excessively long answers.
context_limit: Defines the conversation memory limit. This determines the maximum volume of tokens (message history and instructions) the model can process simultaneously to maintain coherence and conversation flow.
repeat_penalty: Adjusts the model's strictness to avoid excessive repetition of identical words, terms, or phrases within the same response.
max_completion_tokens: Defines the maximum limit of tokens the model can generate in a single response.
To finish, click “Create agent.”
Done! Your agent has been created. Test your agents in the Playground.
Read also: How to edit or delete your agents.
FAQ - AI Agents
What is an AI agent?
It is an entity configured to perform tasks, answer questions, and interact autonomously by combining language models, rules, and skills.
Can I create my own AI agent in Skyone Studio?
Yes. The platform offers full autonomy to create custom agents aligned with your business strategy.
What is the difference between a native agent and a custom agent?
Native: Comes ready to use with default settings.
Custom: You define the behavior, tone, skills, data, and language model.
Can an agent have more than one skill?
Yes. You can assign multiple skills to an agent and define when and how each one will be triggered.
Can the agent access my internal data?
Yes, provided that the necessary integrations and permissions are configured.
Can I change the agent's language model after it has been created?
Yes. You can change the model, adjust parameters, and update skills without recreating the entire agent.
Does the agent learn on its own?
The agent does not "learn" in the traditional sense. It operates based on rules, data sources, and configured models. Its performance can be improved by refining prompts and skills.