The Secrets and Mysteries of AI Interpretability
Various papers, for example one by Dario Amodei of Anthropic, the maker of the Claude models, emphasize the importance of a concept called “interpretability”: understanding how an LLM or any AI system thinks, how it interprets requests and constructs outputs. And, basically, we don’t understand it at all. Dario Amodei on…

