A way to view the context used to generate text

A key to working efficiently with LLMs is understanding their context windows. Especially once the story / context grows and exceeds the LLM's context limit, it is essential to have an idea of what the model still "knows" in order to get coherent responses. It is also important to know where the different bits of information are positioned in the context. Without knowing what gets dropped, there's no way to tell which crucial information needs to be reinforced / repeated in a place that is still within the current context, whether through "Key Details" on the "Write" function, temporary comments in the actual document, or (hopefully soon) through a lorebook / memory / author's note type of feature that is said to be in development (see ryan_mather1321's comment on [this feature suggestion](https://feedback.sudowrite.com/feedback/34472)).

I understand that you probably won't ever show us the actual context sent to the LLM, since you likely want to protect your secret sauce. But it would still be incredibly helpful to at least see which of the myriad available input fields have been considered for any kind of text generation (Write, Describe, beat and chapter / prose generation, ...), ideally with a way to see in advance what the context for a given generation feature is going to be. In other words: before I click the Auto Write button, I'd like to know whether the LLM is still going to be aware of the silk scarf around the murder victim's neck, or whether I have to repeat that information somewhere.
Two examples:

* If you run your own LLM, you can usually see in the application's console output or log files exactly what has been sent, including everything that whatever UI is being used hides from the user.
* NovelAI has a very nice way of visualizing the context (both for the last text generation and for what would be sent next). A graphical overview in the shape of a stacked bar chart tells you how much of the context each source takes up, but more importantly it shows exactly **which parts** are still inside the context window, **where** the different sources (story, author's note, lorebook entries, ...) end up, and **why** they are included. Attached is a screenshot from NovelAI's context viewer.

I feel that especially with pricing plans like Sudowrite's, where each generation and regeneration essentially costs money, users should be given all the tools they need to avoid endlessly experimenting (and regenerating) while tweaking their inputs to get the model to "remember" important details.