To a first approximation, bigger brains = more neurons = smarter. Dig deeper, though, and it turns out to be more complicated than that. Honeybees have about a tenth as many neurons as zebrafish, yet by some measures they are just as smart.
Even discounting species-specific differences, the relationship between neuron count and intelligence isn’t straightforward. For starters, it’s not clear why more neurons are better. Part of the answer is that more neurons allow us to have more fine-grained concepts, and that in turn makes it easier to connect them and match them against each other.
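To make that intuition a bit more concrete, here is a toy sketch (my own illustration, not an argument from the neuroscience literature): if we model concepts as prototypes that observations get snapped to, a larger concept vocabulary leaves less “rounding error” per observation, so descriptions become more precise and genuinely different things are less likely to collapse onto the same concept.

```python
# Toy sketch: concepts as prototypes on a one-dimensional "space of observations".
# More concepts => each observation lands closer to some concept, i.e. finer-grained
# descriptions. (Purely illustrative; the model and numbers are assumptions.)
import random

def nearest_concept(x, concepts):
    """Snap an observation to the closest concept (prototype)."""
    return min(concepts, key=lambda c: abs(c - x))

def mean_quantization_error(n_concepts, n_samples=10_000, seed=0):
    random.seed(seed)
    # concepts spread evenly over the observation space [0, 1]
    concepts = [(i + 0.5) / n_concepts for i in range(n_concepts)]
    observations = [random.random() for _ in range(n_samples)]
    return sum(abs(x - nearest_concept(x, concepts)) for x in observations) / n_samples

for n in (4, 16, 64, 256):
    print(n, round(mean_quantization_error(n), 4))
# The error shrinks roughly as 1/(4n): quadrupling the vocabulary
# quarters the average distance between an observation and its concept.
```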
One way to get around the limited number of concepts we can encode in our minds is specialization. By focusing on only a small part of the world, we can develop more fine-grained concepts about it. Jargon, i.e. specialized words for niche concepts, is one manifestation of this.
What if we built an artificial mind that could hold many more concepts? Would this translate to higher intelligence?
One counterargument is that some things are very hard to learn and instead seem to be addressable only with general intelligence. Why does practice improve performance at pattern matching, such as the kind needed to solve Raven’s Progressive Matrices, only a little, and never to the level of a genius who didn’t practice at all?
Another question is whether the relationship between the number of concepts and intelligence continues indefinitely, or tapers off at some point.
Here’s Stephen Wolfram in a recent blog post:
The world at large is full of computational irreducibility – where the only general way to work out what will happen in a system is just to run the underlying rules for that system step by step and see what comes out […] But brains, for the things most important to them, somehow seem to routinely manage to “jump ahead” without in effect simulating every detail. And what makes this possible is the fundamental fact that within any system that shows overall computational irreducibility there must inevitably be an infinite number of “pockets of computational reducibility”, in effect associated with “simplifying features” of the behavior of the system […] We can think of brains as fundamentally serving to “compress” the complexity of the world, and extract from it just certain features – associated with pockets of reducibility – that we care about. And for us a key manifestation of this is the idea of concepts, and of language that uses them.
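To make “computational irreducibility” concrete, here is a minimal sketch using elementary cellular automata, Wolfram’s standard example (the framing follows his writings; the script itself is my illustration). Rule 254 sits in a pocket of reducibility: from a single black cell, the number of black cells after t steps is simply 2t + 1, so we can jump ahead with a formula. Rule 30 has no known comparable shortcut: to learn its state at step t, we have to run all t steps.

```python
# Elementary cellular automata: reducible (Rule 254) vs. irreducible (Rule 30).
# Illustrative sketch under the assumptions stated above.

def step(cells, rule):
    """Apply one step of an elementary cellular automaton (the row grows by 2 cells)."""
    padded = [0, 0] + cells + [0, 0]
    out = []
    for i in range(1, len(padded) - 1):
        neighborhood = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]
        out.append((rule >> neighborhood) & 1)   # look up the rule's bit for this pattern
    return out

def run(rule, steps):
    cells = [1]                      # single black cell as the initial condition
    for _ in range(steps):
        cells = step(cells, rule)
    return cells

t = 20
print(sum(run(254, t)), 2 * t + 1)   # reducible: simulation agrees with the formula 2t + 1
print(sum(run(30, t)))               # irreducible: no known formula, we must simulate step by step
```

The “pockets of reducibility” in the quote are exactly the Rule-254-like cases: features of the world where a compressed description (a formula, a concept) lets a brain skip the step-by-step simulation.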
What would bigger brains enable us to do? Wolfram again:
There will be some that correspond to concepts (and words) we’re familiar with. But the vast majority will effectively lie in “interconcept space”: places where we could have concepts, but don’t, at least yet. So what could bigger brains do with all this? Potentially they could handle more features, and more concepts. Full computational irreducibility will always in effect ultimately overpower them. But when it comes to handling pockets of reducibility, they’ll presumably be able to deal with more of them.