Reasoning (or planning, or rational decision making) seems to be a core aspect of intelligence, but what exactly does that mean? If we observe clever behavior in an animal, can we claim that it is based on reasoning? And doesn't the success of deep RL show us that we, as engineers, do not need reasoning? I will discuss reasoning as a means of representing behavior, and what the point of such a representation might be.
The Zoom link will be sent the day before the lecture.