

Interesting, thanks for doing the research!
As an extreme non-expert, I would say “deliberate removal of a part of a model in order to study the structure of that model” is a somewhat different concept from “intrinsic and inexorable averaging of language by LLM tools as they currently exist”, but they may well involve similar mechanisms, and that may be what the OP is referencing. I don’t know enough of the technical side to say.
That paper looks pretty interesting in itself; other issues aside, LLMs are really fascinating in the way they build (statistical) representations of language.


It’s a screenshot of this report from a review of the UK’s security and terrorism legislation, published in December.
TechRadar article discussing the specific encryption issue here.
I was skeptical given the grammar issues others have pointed out, but it seems legitimate.