Comment #7

Feedback on AI Chat User Experience

I found that AI chat freezes due to an infinite loop in the child nodes under a search node. Specifically, the search node contains another search node, which in turn searches for the current AI chat node; this creates a cycle that hangs the client and requires a restart.
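A minimal sketch of how this kind of cycle could be guarded against when expanding search nodes, using a visited set; everything here is hypothetical and not Tana's actual internals:

```python
def expand_search_node(node_id, get_results, visited=None):
    """Expand a search node's results, refusing to revisit nodes already on the path."""
    if visited is None:
        visited = set()
    if node_id in visited:
        # Cycle detected (e.g. a nested search node that matches the AI chat node
        # it lives under): stop here instead of looping forever.
        return []
    visited.add(node_id)

    expanded = []
    for child_id in get_results(node_id):
        expanded.append(child_id)
        # Reuse the same visited set so nested searches cannot re-enter an ancestor.
        expanded.extend(expand_search_node(child_id, get_results, visited))
    return expanded


# Two search nodes that match each other -- the situation described above:
links = {"A": ["B"], "B": ["A"]}
print(expand_search_node("A", lambda n: links.get(n, [])))  # ['B', 'A'], terminates
```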

In theory, under the current implementation logic, if the search conditions are sufficiently complex and precise, and the AI is given relevant content within a suitable scope, it should be able to answer questions effectively. In practice, however, the process is not smooth. The main issues are concentrated in two areas:

  1. Providing context to AI chat by adding a search node raises the barrier to using AI chat, because writing complex search node expressions is itself a high-threshold task;

  2. Search node results cannot be ranked by relevance using techniques such as embeddings and reranking, so there is no way to keep only results above a relevance threshold (say, 0.85) as context. If all of the search results are thrown at the AI, the program crashes easily and the quality of the output is hard to guarantee (see the sketch below).
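As an illustration of what point 2 asks for, here is a minimal sketch that scores each result against the question with an embedding model, keeps only results at or above 0.85, and caps the amount of context; `embed` is a stand-in for any real embedding API, and none of this reflects how Tana works today:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def select_context(question, results, embed, threshold=0.85, max_items=20):
    """Keep only the search results relevant enough to hand to the AI as context."""
    q_vec = embed(question)
    scored = [(cosine(q_vec, embed(text)), text) for text in results]
    relevant = [(score, text) for score, text in scored if score >= threshold]
    relevant.sort(key=lambda pair: pair[0], reverse=True)   # most relevant first
    return [text for _, text in relevant[:max_items]]       # cap the context size
```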

If the design keeps going in its current direction, it may eventually be necessary to design an agent that can generate its own search conditions. For example, if I ask, "What did I talk about with someone recently?", the agent should automatically search for "communication-related" tags associated with that person and provide the search results as context to the AI.
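Roughly, such an agent would ask the model to translate the question into a structured search condition, run that search, and answer from the hits. The sketch below assumes hypothetical `call_llm` and `run_search` helpers rather than any existing Tana API:

```python
import json

QUERY_PROMPT = """Turn the user's question into a search condition, as JSON with keys
"tags" (list of tag names), "person" (a name or null), and "days_back" (an integer).
Question: {question}"""

def build_search_condition(question, call_llm):
    raw = call_llm(QUERY_PROMPT.format(question=question))
    # Assumes the model returns plain JSON, e.g.
    # {"tags": ["communication"], "person": "Alice", "days_back": 30}
    return json.loads(raw)

def answer_with_generated_search(question, call_llm, run_search):
    condition = build_search_condition(question, call_llm)
    hits = run_search(condition)                 # nodes matching the generated condition
    context = "\n".join(hits)
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}")
```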

This approach seems likely to get more and more complicated, unlike the popular RAG approach, which lets you ask questions directly against all of your note content; that experience should be much better.
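For contrast, a bare-bones RAG loop over the whole workspace might look like the sketch below, reusing the `cosine` helper from the earlier sketch; `embed` and `call_llm` are again hypothetical stand-ins:

```python
def rag_answer(question, notes, embed, call_llm, k=5):
    """Answer a question from the k most similar notes in the whole workspace."""
    q_vec = embed(question)
    ranked = sorted(notes, key=lambda note: cosine(q_vec, embed(note)), reverse=True)
    context = "\n".join(ranked[:k])              # top-k notes, no hand-written search needed
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}")
```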

Perhaps the RAG (Retrieval-Augmented Generation) approach is unavoidable. I'm a loyal Tana user, and for the past year I've been eagerly looking forward to the moment Tana supports RAG.