> Confidence Signaling: AI assistants could provide clear, reliable indicators of their confidence
I'm not sure this is possible: in most LLM applications, we humans perceive the (un)confidence from the lines given to a fictional robotic story character. Simply change the character's name to Zapp Brannigan and it becomes a confident expert on everything.
What we want is the author's confidence that their character is proposing good ideas that will actually work... However, the only confidence the real-world LLM can offer is "how much trouble did I have finding a next word that seems to fit."
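That "trouble finding a next word" notion can be made concrete as perplexity over the model's own next-token probabilities. A minimal sketch with made-up probability numbers (the values are illustrative, not from any real model) shows why this signal measures fluency, not whether the ideas are any good:

```python
import math

def sequence_perplexity(token_probs):
    """Perplexity over the probabilities a model assigned to each
    next token it emitted. Lower perplexity means the model had an
    'easy' time finding each next word -- this is essentially the
    only native confidence signal it has, and it says nothing about
    whether the proposed ideas will work."""
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(-log_sum / len(token_probs))

# Hypothetical per-token probabilities for two continuations:
fluent = [0.9, 0.8, 0.95, 0.85]    # smooth, confident-sounding text
hesitant = [0.3, 0.2, 0.4, 0.25]   # model struggled at every step

print(sequence_perplexity(fluent))    # low perplexity
print(sequence_perplexity(hesitant))  # high perplexity
```

A confidently written Zapp Brannigan monologue can score very low perplexity while being entirely wrong, which is the gap between fluency and epistemic confidence described above.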