In One Useful Thing, Ethan Mollick argues that making AI more powerful (both more useful and more dangerous) is a weak-link problem: improving the abilities models already excel at, such as math, reasoning, or writing, matters less than improving the abilities they still lack, such as learning between training runs.