Substantiated findings
LLMs are great at organizing narratives and findings. Showing the sources that support those conclusions makes it easier to understand the analysis and where it comes from.


When reviewing findings, I want to see the supporting sources so I can understand how the ideas are substantiated and trust the conclusions more easily.


- Comparing Source Materials: Seeing related information collected in one place makes it easy to compare how the source materials agree or differ.
- Building Trust through Original Sources: Direct access to the original sources builds trust in the findings, since users can review the basis of each conclusion themselves.
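One way to sketch this pattern: attach each finding to the source excerpts that support it and render them together, so the claim is never shown without its evidence. The `Finding` and `Source` shapes below are hypothetical, not from any particular product.

```python
from dataclasses import dataclass, field


@dataclass
class Source:
    title: str
    excerpt: str


@dataclass
class Finding:
    claim: str
    sources: list[Source] = field(default_factory=list)


def render_finding(finding: Finding) -> str:
    """Render a claim followed by the numbered excerpts that substantiate it."""
    lines = [finding.claim]
    for i, src in enumerate(finding.sources, start=1):
        lines.append(f'  [{i}] {src.title}: "{src.excerpt}"')
    return "\n".join(lines)
```

Keeping the numbered references next to the claim also gives the comparison view above for free: all excerpts for one finding sit side by side.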

More of the Witlist

Presenting multiple outputs helps users explore options and identify their preferences, and the choices they make can double as feedback for model improvement.
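A minimal sketch of turning a user's pick among several outputs into feedback data: each pick yields (chosen, other, chosen_preferred) pairs of the kind preference-tuning pipelines consume. The function name and tuple shape are illustrative assumptions.

```python
def record_preference(outputs: list[str], chosen_index: int) -> list[tuple[str, str, bool]]:
    """Convert one user choice among N outputs into labeled preference pairs.

    Each tuple is (chosen_output, other_output, chosen_preferred); the flag is
    always True here because the user explicitly picked the first element.
    """
    chosen = outputs[chosen_index]
    return [
        (chosen, other, True)
        for i, other in enumerate(outputs)
        if i != chosen_index
    ]
```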

Let users input design concepts in small bits and see the cumulative output in real time. They can explore different combinations and immediately visualize the results, making the creative process interactive and flexible.
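The interaction loop might look like the sketch below: every time a bit is added or removed, the combined prompt is recomputed and a fresh preview is produced. `compose_preview` is a stand-in for a real model call, not an actual API.

```python
def compose_preview(bits: list[str]) -> str:
    """Combine the current design bits into one prompt and return a preview.

    Hypothetical: a real system would send the combined prompt to a
    generative model; here the "preview" is just a formatted string.
    """
    prompt = ", ".join(bits)
    return f"preview({prompt})"


# Recompute the preview each time the user adds a bit, so the
# cumulative result is always visible.
bits: list[str] = []
for new_bit in ["a logo", "in blue", "rounded corners"]:
    bits.append(new_bit)
    current_preview = compose_preview(bits)
```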

Generating multiple outputs and iteratively using selected ones as new inputs helps people uncover ideas and solutions, even without clear direction.
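The generate-select-regenerate loop above can be sketched as follows; `generate_variants` is a placeholder for a model call, and `pick` stands in for the user's selection.

```python
def generate_variants(prompt: str, n: int = 3) -> list[str]:
    # Stand-in for a model call: returns n labeled variations of the prompt.
    return [f"{prompt} / variant {i}" for i in range(1, n + 1)]


def explore(seed: str, rounds: int, pick) -> str:
    """Generate variants, let the user pick one, and feed it back as input.

    Repeating this even a few times lets someone converge on an idea
    without having a clear direction at the start.
    """
    current = seed
    for _ in range(rounds):
        variants = generate_variants(current)
        current = pick(variants)
    return current
```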

AI actions often take time to complete. To improve user experience, use descriptions of what is happening combined with basic animations that represent different types of actions.
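One simple way to implement this: a lookup from action type to a user-facing description plus an animation style, with a safe default for unknown actions. The action names and animation labels here are invented for illustration.

```python
# Hypothetical mapping of long-running AI actions to user-facing
# status text and an animation style the UI could play.
ACTION_FEEDBACK = {
    "searching": ("Searching your sources…", "pulse"),
    "summarizing": ("Summarizing findings…", "typing"),
    "generating_image": ("Rendering your image…", "shimmer"),
}


def feedback_for(action: str) -> tuple[str, str]:
    """Return (description, animation) for an action, with a generic fallback."""
    return ACTION_FEEDBACK.get(action, ("Working…", "spinner"))
```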

Using the source input as ground truth helps users trust the system and makes it easier to interpret its process and diagnose what might have gone wrong.
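A crude sketch of checking output against the source input: mark each output sentence as grounded only if it appears verbatim in the source. Real systems use fuzzier matching; the verbatim check is an assumption made to keep the example small.

```python
def grounded_spans(output: str, source: str) -> list[tuple[str, bool]]:
    """Label each output sentence as grounded (found in the source) or not.

    Ungrounded sentences are exactly the spans to flag when users ask
    "where did this come from?" or "what went wrong?".
    """
    sentences = [s.strip() for s in output.split(".") if s.strip()]
    return [(s, s in source) for s in sentences]
```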

Automatic model switching in AI can boost efficiency by selecting the most appropriate model for each query, ensuring a balance between quick and accurate responses.
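A routing layer like this can be as simple as a heuristic: short, simple queries go to a fast model, while long or reasoning-heavy ones go to a stronger, slower one. The model names, word-count threshold, and marker words below are all illustrative assumptions.

```python
def pick_model(query: str) -> str:
    """Route a query to a hypothetical fast or large model by rough complexity."""
    reasoning_markers = ("why", "explain", "compare", "prove")
    words = query.lower().split()
    if len(words) > 30 or any(m in query.lower() for m in reasoning_markers):
        return "large-model"  # slower but more accurate
    return "fast-model"  # quick responses for simple lookups
```

In practice the router itself can be a small model or a learned classifier; the point is that the user gets speed or depth without choosing manually.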