Explain the problem as you see it
I've struggled to understand the difference between system prompts and initial prompts when configuring Tana agents. This creates confusion about which prompt to use for what purpose, leading to ineffective agent setups and frustration when agents don't behave as expected. The current documentation and UI don't provide sufficient guidance on how to properly configure these different prompt types.
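For context, the distinction between the two prompt types generally follows the pattern used by chat-style AI APIs. A minimal sketch of that general pattern (an illustration only, with hypothetical names; not Tana's actual internals):

```python
# Sketch of how the two prompt types typically map onto a chat-style
# message list. `build_conversation` is a hypothetical helper, not a
# Tana API.

def build_conversation(system_prompt: str, initial_prompt: str) -> list[dict]:
    """Assemble the opening messages for an agent run.

    - system_prompt: persistent instructions that shape the agent's
      role, tone, and constraints for the entire conversation.
    - initial_prompt: the first task-specific message, as if the user
      had typed it to kick off the session.
    """
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": initial_prompt},
    ]

messages = build_conversation(
    system_prompt="You are a research helper. Answer concisely and cite sources.",
    initial_prompt="Summarize the key arguments in the linked node.",
)
```

In this framing, the system prompt defines *who the agent is* across every turn, while the initial prompt defines *what it should do first*; conflating the two is exactly the mistake the current UI makes easy.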
Why is this a problem for you?
This confusion significantly impacts my productivity when working with Tana's AI capabilities:
- I waste time on trial and error, repeatedly adjusting prompts without understanding the underlying system
- Many users likely abandon creating custom agents altogether due to this friction
- Those who do create agents might end up with suboptimal configurations that don't leverage Tana's full AI potential (I know this is true for me, so far)
- The learning curve feels steeper than it needs to be
- When creating complex workflows with multiple connected agents, the confusion compounds, making advanced use cases practically inaccessible (at least in the early stages; I expect this improves as I learn more, much like the curve for search queries)
Suggest a solution
A two-part solution: 1) an AI-powered assistant that guides users through creating effective prompts for their agents, and 2) a comprehensive examples library with annotated, real-world agent configurations that demonstrate best practices and common patterns. (Both linked directly from the agent builder, to reduce the friction of hunting through the docs.)
- Part 1: AI Prompt Configuration Assistant
- Add an "AI Help" button next to each prompt field in the agent configuration panel
- When clicked, this opens a guided experience where the AI asks the user questions about their intended agent purpose and behavior
- Based on responses, the AI suggests appropriate system and initial prompts, explaining the difference between them in context
- The assistant explains which parts of the prompt handle which responsibilities (e.g., "This part defines the agent's personality, while this part establishes the task parameters")
- Provides real-time feedback on user-written prompts with suggestions for improvement
- Part 2: Agent Examples Library
- Create a dedicated section in the help documentation with fully-configured example agents
- Include a variety of use cases (writing assistant, research helper, data analyzer, etc.)
- For each example, provide annotated prompts that explain each component's purpose
- Show hierarchical examples of main agents with subagents, clearly illustrating how they connect and communicate
- Allow users to clone these example agents directly into their workspace with a single click
- Include a visual diagram for complex agent setups showing information flow between agents
Integration with Existing Functionality
- The examples library would be accessible both from the help documentation and directly from the agent creation interface
- When creating a new agent, offer a "Start from example" option that presents relevant templates
- The AI assistant would leverage the user's existing nodes and supertags to suggest contextually relevant prompts
- Tooltips would be added to the prompt fields explaining their purpose with links to relevant examples