There's something about the LLM's need to predict a response that makes me so apprehensive about using it. Its response will always be predictably mid-curve based on the context, but I don't yet know how to shift that context more permanently or easily.
I feel like a reference file, like you did, is probably the way. I don't want to avoid the tech, or end up creating an arse-kissing robot companion. Some of the value is in having my own beliefs & thinking challenged.
What a problem to have, though. Interesting times!
