• 0 Posts
  • 8 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • It really mostly doesn’t, and Quanta Magazine is (as is typical for them) full of sh*t.

    Ternary is most efficient if the space (power, etc.) needed to implement an operation on a base-b digit is proportional to b. (Then the cost is b * log(n) / log(b), and b / log(b) is minimized at e, but is lower with b=3 than with b=2.) However, in practice most operations take space that increases more than proportionally to b. For example, saturated transistors are either on or off, which is enough to implement binary logic, but ternary logic typically needs several more transistors. Transistors, and especially CMOS-style implementations, are generally well suited to binary. If future computers use a different implementation style (neurons! who knows) then something other than binary logic might be best.
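    To make that cost argument concrete, here's a quick sketch of the model above (the cost function and the choice of n are just the ones stated in the parenthetical, nothing more):

    ```python
    import math

    def digit_cost(b: int, n: int = 10**6) -> float:
        # Cost model from the argument above: representing n values takes
        # log(n)/log(b) base-b digits, and each digit is assumed to cost
        # ~b units of space/power. Total: b * log(n) / log(b).
        return b * math.log(n) / math.log(b)

    # b / log(b) is minimized at b = e ≈ 2.718, so among integer bases
    # b = 3 narrowly beats b = 2 (and b = 4 exactly ties b = 2):
    for b in (2, 3, 4):
        print(b, round(digit_cost(b), 1))
    ```

    Under this model ternary wins by only about 5%, which is exactly why it loses as soon as a trit costs more than 1.5× a bit to implement.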

    Storing and transmitting data is different: this is often most efficient in bases other than 2. For example, if a flash cell of a certain size can reliably store 4 different amounts of charge, and the difference between these can reliably be read out, then flash manufacturers will store two bits per cell. This has already been done for years. It’s most often done in bases that are powers of 2, but not always.
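    The arithmetic behind "4 charge levels = two bits per cell" is just log base 2 of the level count, and it also shows what a non-power-of-2 cell buys you (level counts here are illustrative):

    ```python
    import math

    def bits_per_cell(levels: int) -> float:
        # A cell that reliably distinguishes `levels` charge states stores
        # log2(levels) bits: 4 levels -> 2 bits (MLC), 8 -> 3 (TLC),
        # 16 -> 4 (QLC).
        return math.log2(levels)

    print(bits_per_cell(4))  # exactly 2 bits
    # A non-power-of-2 level count yields a fractional bit per cell,
    # which a controller can still recover by coding across cell groups:
    print(bits_per_cell(3))  # ~1.585 bits per ternary cell
    ```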

    Ternary calculations are occasionally used in cryptography, but as far as I can tell, at least the first ternary crypto paper the article cites is garbage.

    There are also other architectures like clockless logic, which uses a third value for “not done calculating yet”, but that’s different from ordinary ternary logic (and is generally implemented using binary anyway). It also showed a lot of promise for saving power, and some for reducing interference, but in most settings the increased complexity and circuit size required have been too much to deliver those savings.
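    A common way that third “not done yet” value gets implemented in binary is dual-rail encoding: each logical wire becomes two physical wires, and (0, 0) means “no data yet”. A toy sketch of the idea (the function names are mine, not from any real async-logic toolkit):

    ```python
    # Dual-rail encoding: one logical bit -> two wires.
    # (0, 0) = NULL ("not done calculating yet"),
    # (1, 0) = logical 0, (0, 1) = logical 1, (1, 1) = invalid.
    NULL = (0, 0)

    def encode(bit: int):
        return (0, 1) if bit else (1, 0)

    def dual_rail_and(a, b):
        # The output stays NULL until BOTH inputs carry data, so
        # completion detection falls out of the encoding — no clock
        # needed to know when the result is valid.
        if a == NULL or b == NULL:
            return NULL
        return encode(a == (0, 1) and b == (0, 1))

    print(dual_rail_and(encode(1), NULL))       # still waiting: (0, 0)
    print(dual_rail_and(encode(1), encode(1)))  # done: logical 1
    ```

    The circuit-size cost is visible even in the toy: every wire doubled, plus completion-detection logic, which is the overhead the comment above is pointing at.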





  • Sure, it’s hard to say whether a computer program can “know” anything or what that even means. But the paper isn’t arguing that. It assumes very little about how LLMs actually work, and it defines “hallucination” as “not giving the right answer” with no option for the machine to answer “I don’t know”. Then the proof follows basically from the fact that the LLM-or-whatever can’t know everything.

    The result is not very surprising, and saying that it means hallucination is inevitable is an oversell. It’s possible that hallucinations, or at least wrong answers, are inevitable for different reasons, though.


  • So I wrote a long-ass rundown of this but it won’t post for some reason (too long)? So TLDR: this is a 17,600-word nothingburger.

    DJB is a brilliant, thorough and accomplished cryptographer. He has also spent the past 5 years burning his reputation to the ground, largely by exhaustively arguing for positions that correlate more with his ego than with the truth. Not just this position. It’s been a whole thing.

    DJB’s accusation, that NSA is manipulating this process to promote a weaker outcome, is plausible. They might have! It’s a worrisome possibility! The community must be on guard against it! But his argument that it actually happened is rambling, nitpicky and dishonest, and as far as I can tell the other experts in the community do not agree with it.

    So yes, take NIST’s recommendation for Kyber with a grain of salt. Use Kyber768 + X448 or whatever instead of just Kyber512. But also take DJB’s accusations with a grain of salt.
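    For anyone wondering what “Kyber768 + X448” means in practice: a hybrid setup runs both key exchanges and derives the session key from both shared secrets, so breaking it requires breaking both. A rough sketch of the combiner idea only (placeholder byte strings, not a real KEM API; real protocols also bind in ciphertexts and public keys):

    ```python
    import hashlib

    def combine_shared_secrets(ss_pq: bytes, ss_classical: bytes) -> bytes:
        # Hybrid KEM combiner sketch: hash the concatenation of the
        # post-quantum (Kyber) and classical (X448) shared secrets.
        # An attacker must recover BOTH inputs to learn the output.
        # Real hybrid schemes (e.g. TLS drafts) also mix in transcript
        # data such as ciphertexts and public keys; omitted here.
        return hashlib.sha3_256(ss_pq + ss_classical).digest()

    # Placeholder secrets standing in for real KEM/ECDH outputs:
    key = combine_shared_secrets(b"\x01" * 32, b"\x02" * 56)
    print(key.hex())
    ```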