Given infinite compute, Bayesian optimization like this doesn’t make sense (at least for well-defined objective functions), because you can just select the single best point in the search space.
What makes you confident that evolutionary search under computational resource scarcity selects for anything like an explicit Bayesian optimizer or long-term planner? (I say “explicit” because the Bayesian formalism has enough free parameters that you can post-hoc recast ~any successful algorithm as an approximation to a Bayesian ideal.)
I would not argue for “explicit”. If I had to argue for “explicit” I would say: because biological organisms do in fact have differentiated organs which serve somewhat comprehensible purposes, and even the brain has somewhat distinct regions serving specific purposes. However, I think the argument for explicit-or-implicit is much stronger.
Even so, I would not argue that evolutionary search under computational resource scarcity selects for a long-term planner, be it explicit or implicit. This would seem to depend on the objective function used. For example, I would not expect something trained on an image-recognition objective to exhibit long-term planning.
I’m curious why you specify evolutionary search rather than some more general category that includes gradient descent and other common techniques which are not Bayesian optimization. Do you expect it to be different in this regard?
I’m not sure why you asked the question, but it seems probable that you thought a “confident belief that [...]” followed from my view expressed in the previous comment? I’m curious about your reasoning there. To me, it seems unrelated.
These issues are tricky to discuss, in part because the term “optimization” is used in several different ways, which have rich interrelationships. I conceptually make a firm distinction between search-style optimization (gradient descent, genetic algorithms, natural selection, etc) vs agent-style optimization (control theory, reinforcement learning, brains, etc). I say more about that here.
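To make the distinction concrete, here is a toy sketch of my own (not from the linked post; all names and parameters are illustrative). The first loop only queries an objective and keeps the best sample it has seen; the second repeatedly observes a system and acts on it to drive the state toward a target:

```python
import numpy as np

# Search-style optimization: sample candidate points and keep the best one.
# The optimizer never acts in an environment; it only queries the objective.
def random_search(f, dim, n_iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    best_x, best_val = None, -np.inf
    for _ in range(n_iters):
        x = rng.uniform(-1.0, 1.0, size=dim)
        val = f(x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Agent-style optimization: a proportional controller that repeatedly
# observes the system state and acts to push it toward a target.
def proportional_control(step, target, x0, gain=0.5, n_steps=100):
    x = x0
    for _ in range(n_steps):
        x = step(x, gain * (target - x))  # act, then observe the new state
    return x
```

E.g. `random_search(lambda x: -np.sum(x**2), dim=3)` climbs a fixed objective, while `proportional_control(lambda x, u: x + u, target=1.0, x0=0.0)` steers a system through feedback; the two loops have quite different shapes even though both “optimize”.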
The proposal of Bayesian Optimization, as I understand it, is to use the second (agentic optimization) in the inner loop of the first (search). This seems like a sane approach in principle, but of course it is handicapped by the fact that Bayesian ideas don’t represent the resource-boundedness of intelligence particularly well, which is extremely critical for this specific application (you want your inner loop to be fast). I suspect this is the problem you’re trying to comment on?
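For concreteness, here is a minimal sketch of a standard Bayesian optimization loop (GP surrogate plus a UCB acquisition; the specific choices and parameters are my own assumptions, not anything claimed above). The acquisition maximization is the agentic inner loop; the repeated evaluate-and-refit cycle is the outer search:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def bayesian_optimization(f, bounds, n_init=5, n_iters=20, kappa=2.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))      # initial experiments
    y = np.array([f(x[0]) for x in X])
    for _ in range(n_iters):
        gp = GaussianProcessRegressor().fit(X, y)  # belief about f so far
        # Inner (agentic) step: choose the next experiment by maximizing a
        # UCB acquisition over cheap candidate points. This is where the
        # resource-boundedness bites: the inner loop has to be fast.
        cand = rng.uniform(lo, hi, size=(1000, 1))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(mu + kappa * sigma)]
        # Outer (search) step: pay for one real evaluation, update the data.
        X = np.vstack([X, x_next[None, :]])
        y = np.append(y, f(x_next[0]))
    return X[np.argmax(y)], y.max()
```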
I think the right way to handle that in principle is to keep the Bayesian ideal as the objective function (in a search sense, not an agency sense) and search for a good search policy (accounting for speed as well as quality of decision-making), which you then use for many specific searches going forward.
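A toy illustration of what I mean (again my own sketch, reusing the `bayesian_optimization` function above): each “policy” is just a hyperparameter setting for the inner search, and the meta-objective scores average solution quality minus a charge for the resources the policy spends. The winner is then reused for many specific searches going forward:

```python
import numpy as np

def score_policy(policy, training_objectives, cost_per_iter=0.01):
    # Quality of decision-making, minus a penalty for compute/evaluations
    # used: the Bayesian ideal stays in the objective, not in the policy.
    kappa, n_iters = policy
    quality = np.mean([
        bayesian_optimization(f, bounds=(-1.0, 1.0),
                              n_iters=n_iters, kappa=kappa)[1]
        for f in training_objectives
    ])
    return quality - cost_per_iter * n_iters

def meta_search(training_objectives, seed=1):
    # Outer search over candidate search policies.
    rng = np.random.default_rng(seed)
    policies = [(rng.uniform(0.1, 5.0), int(rng.integers(5, 50)))
                for _ in range(20)]
    return max(policies, key=lambda p: score_policy(p, training_objectives))
```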
Minor points just to get them out of the way:

- I think Bayesian optimization still makes sense with infinite compute if you have limited data (infinite compute doesn’t imply perfect knowledge; you still have to run experiments in the world outside of your computer).
- The reason I specified evolutionary search is that it’s the claim I see Lehman & Stanley as making—that algorithms pursuing simple objectives tend not to be robust in an evolutionary sense. I’m less confident making claims about broader classes of optimization, but I’m not intentionally excluding them.
Meta point: it feels like we’re bouncing between incompatible and partly-specified formalisms before we even know what the high-level worldview diff is.
To that end, I’m curious what you think the implications of the Lehman & Stanley hypothesis would be—supposing it were shown even for architectures that allow planning to search, which I agree their paper does not do. So yes, you can trivially exhibit a “goal-oriented search over good search policies” that does better than their naive novelty search, but what if it turns out a “novelty-oriented search over novelty-oriented search policies” does better still? Would this be a crux for you, or is this not even a coherent hypothetical in your ontology of optimization?
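For reference, the core of Lehman & Stanley–style novelty search is simple enough to sketch: fitness is replaced by a novelty score, the mean behavioral distance to the k nearest neighbors among everything seen so far. A toy version (the behavior function, mutation scheme, and all parameters here are placeholder assumptions of mine):

```python
import numpy as np

def novelty(behavior, archive, k=15):
    # Novelty = mean distance to the k nearest behaviors seen so far.
    # `behavior` is a feature vector characterizing what the individual
    # does, not how well it scores on any task objective.
    dists = np.sort(np.linalg.norm(np.asarray(archive) - behavior, axis=1))
    return dists[:k].mean()

def novelty_search(behavior_fn, dim, pop_size=50, n_gens=100, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))
    archive = [behavior_fn(g) for g in pop]
    for _ in range(n_gens):
        scores = [novelty(behavior_fn(g), archive) for g in pop]
        parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep most novel
        pop = np.concatenate([parents,
                              parents + 0.1 * rng.normal(size=parents.shape)])
        archive.extend(behavior_fn(g) for g in pop)
    return pop, archive
```

The “novelty-oriented search over novelty-oriented search policies” hypothetical would then amount to running something like the meta-search above, but with a novelty score rather than a quality score at the meta level too.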