Current systems look like 'genius' but need strategies to think through problems, says Microsoft co-founder
Reporting and writing about AI has given me a new appreciation for how remarkable the human brain is. While large language models (LLMs) are impressive, they lack entire dimensions of thought that we humans take for granted.
Bill Gates addressed this idea on the Next Big Idea Club podcast. Speaking with host Rufus Griscom, he spoke at length about “metacognition,” which refers to a system that can think about its own thinking.
Gates defined metacognition as the ability to “think about a problem in a big picture way, step back and say, OK, how important is it to answer this? How could I check my answer, and what external tools would help me do that?”
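That loop Gates describes (plan, answer, then check the answer) can be sketched in a few lines of code. Everything below is illustrative: `ask_model` is a hypothetical stand-in for a real LLM call, stubbed out so the example is self-contained.

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for an actual LLM API call.
    # Stubbed with canned responses so the sketch runs on its own.
    if prompt.startswith("PLAN"):
        return "1. restate the question 2. answer it 3. verify"
    if prompt.startswith("CHECK"):
        return "PASS"
    return "Paris"


def metacognitive_answer(question: str) -> str:
    # Step back first: ask how to approach the problem at all.
    plan = ask_model(f"PLAN: how should I approach: {question}")
    # Produce an answer following that plan.
    answer = ask_model(f"ANSWER (following: {plan}): {question}")
    # Self-check: have the model judge its own answer before returning it.
    verdict = ask_model(f"CHECK: is '{answer}' correct for: {question}")
    if verdict != "PASS":
        # On failure, try again (a real system might consult external tools here).
        answer = ask_model(f"RETRY: {question}")
    return answer


print(metacognitive_answer("What is the capital of France?"))
```

The point of the sketch is only the structure: today's LLMs do none of this stepping back on their own; the plan-check-retry scaffolding has to be bolted on from outside.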
The Microsoft co-founder said the overall “cognitive strategy” of existing LLMs, such as GPT-4 or Llama, is still unsophisticated. “They’re just generating through constant computation every token and sequence, and it’s amazing that it works,” Gates said.
“The model doesn’t step back like a human and think, ‘I’m going to write this article and here’s what I want to cover; okay, I’m going to put some text here and here’s what I want to do for the abstract.’”
Gates believes that AI researchers’ preferred method for improving LLMs—expanding the training data set and computing power—will yield a few more big breakthroughs, and that’s it. After that, researchers will have to employ metacognition strategies to teach AI models to think smarter, not harder.
Research into metacognition could be key to solving the most vexing problem with LLMs: their lack of reliability and accuracy, Gates said. “This technology … will reach superhuman levels. We’re not there yet, considering the low reliability,” he said.
“Much of the new work is to add a level of metacognition that, if done correctly, will address the somewhat erratic nature of genius.”