Reward is the driving force for reinforcement learning (RL) agents. Given its central role in RL, reward is often assumed to be suitably general in its expressivity, as summarized by Sutton and Littman's reward hypothesis:
"…all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward)."
– SUTTON (2004), LITTMAN (2017)
In our work, we take first steps toward a systematic study of this hypothesis. To do so, we consider the following thought experiment involving Alice, a designer, and Bob, a learning agent:
We suppose that Alice thinks of a task she might like Bob to learn to solve. This task could come in the form of a natural language description ("balance this pole"), an imagined scenario ("reach any of the winning configurations of a chess board"), or something more traditional like a reward or value function. Then, we imagine Alice translates her choice of task into some generator that can provide learning signal (such as reward) to Bob, who will learn from this signal throughout his lifetime. We then ground our study of the reward hypothesis by addressing the following question: given Alice's choice of task, is there always a reward function that can convey this task to Bob?
What’s a job?
To make our study of this question concrete, we first restrict focus to three kinds of task. Specifically, we introduce three task types that we believe capture sensible kinds of tasks: 1) A set of acceptable policies (SOAP), 2) A policy order (PO), and 3) A trajectory order (TO). These three forms of task represent concrete instances of the kinds of task we might want an agent to learn to solve.
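To fix intuition, here is a minimal sketch of how these three task types might be represented for a finite environment. The Python type names below are our own illustrative choices, not definitions from the paper.

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

# A deterministic policy over a finite state space: policy[s] is the action
# taken in state s (states and actions are indexed by integers here).
Policy = Tuple[int, ...]

# A finite trajectory: a sequence of (state, action) pairs.
Trajectory = Tuple[Tuple[int, int], ...]


@dataclass(frozen=True)
class SOAP:
    """Set Of Acceptable Policies: solving the task means behaving like any one of them."""
    acceptable: FrozenSet[Policy]


@dataclass(frozen=True)
class PolicyOrder:
    """Policy Order: a partial ordering over policies, listed as (better, worse) pairs."""
    preferences: FrozenSet[Tuple[Policy, Policy]]


@dataclass(frozen=True)
class TrajectoryOrder:
    """Trajectory Order: a partial ordering over trajectories, listed as (better, worse) pairs."""
    preferences: FrozenSet[Tuple[Trajectory, Trajectory]]
```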
We then study whether reward is capable of capturing each of these task types in finite environments. Crucially, we restrict attention to Markov reward functions; for instance, given a state space that is sufficient to support a task, such as the (x, y) pairs of a grid world, is there a reward function that depends only on this same state space that can capture the task?
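Concretely, the Markov restriction can be read as a constraint on the reward function's type signature: it may look only at the current transition, never at the agent's history. A small sketch, with hypothetical names of our own:

```python
from typing import Callable, Tuple

State = Tuple[int, int]   # e.g. the (x, y) coordinates of a grid world cell
Action = int

# A Markov reward function sees only the current state, the action taken, and
# the next state; it has no access to how the agent arrived at that state.
MarkovRewardFn = Callable[[State, Action, State], float]

def corner_reward(s: State, a: Action, s_next: State) -> float:
    """A hypothetical Markov reward: +1 for stepping onto the cell (3, 3)."""
    return 1.0 if s_next == (3, 3) else 0.0
```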
First Main Result
Our first main result shows that for each of the three task types, there are environment-task pairs for which there is no Markov reward function that can capture the task. One example of such a pair is the "go all the way around the grid clockwise or counterclockwise" task in a typical grid world.
This task is naturally captured by a SOAP that consists of two acceptable policies: one that moves clockwise around the grid and one that moves counterclockwise. For a Markov reward function to express this task, it would need to make these two policies strictly higher in value than all other deterministic policies. However, there is no such Markov reward function: the optimality of a single "move clockwise" action depends on whether the agent was already moving in that direction in the past. Since the reward function must be Markov, it cannot convey this kind of information. Similar examples demonstrate that Markov reward cannot capture every policy order and trajectory order, either.
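To make the example concrete, here is a minimal sketch of a simplified version of this task: a four-cell ring stands in for the grid, and the cell indices, action names, and helper functions are our own illustrative choices rather than anything from the paper.

```python
# A four-cell ring standing in for the grid loop: action 0 moves clockwise
# (cell i -> i+1 mod 4), action 1 moves counterclockwise (cell i -> i-1 mod 4).
N_CELLS = 4
CW, CCW = 0, 1

def next_cell(cell: int, action: int) -> int:
    """Deterministic transition: CW moves to the next cell, CCW to the previous one."""
    return (cell + 1) % N_CELLS if action == CW else (cell - 1) % N_CELLS

# The SOAP for "go all the way around": exactly two acceptable policies,
# written as tuples where policy[cell] is the action taken in that cell.
always_cw  = (CW,)  * N_CELLS
always_ccw = (CCW,) * N_CELLS
acceptable = {always_cw, always_ccw}

# A "mixed" policy that, in every cell, takes an action one of the acceptable
# policies would take, yet never completes a loop: it bounces between 0 and 1.
mixed = (CW, CCW, CCW, CCW)

def cells_visited(policy, start=0, steps=N_CELLS):
    """Cells reached when following `policy` from `start` for `steps` moves."""
    cell, visited = start, {start}
    for _ in range(steps):
        cell = next_cell(cell, policy[cell])
        visited.add(cell)
    return visited

assert cells_visited(always_cw)  == {0, 1, 2, 3}   # completes the loop
assert cells_visited(always_ccw) == {0, 1, 2, 3}   # completes the loop
assert cells_visited(mixed)      == {0, 1}         # bounces, never goes around
```

The mixed policy always takes an action that one of the two acceptable policies would take in that cell, yet it never goes all the way around; intuitively, a Markov reward function cannot favor the two full loops while disfavoring the bouncing behavior, because it cannot see which direction the agent has been traveling.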
Second Main Result
Given that some tasks can be captured and some cannot, we next explore whether there is an efficient procedure for determining whether a given task can be captured by reward in a given environment. Further, if there is a reward function that captures the given task, we would ideally like to be able to output such a reward function. Our second result is a positive one: for any finite environment-task pair, there is a procedure that can 1) decide whether the task can be captured by Markov reward in the given environment, and 2) output the desired reward function that exactly conveys the task, when such a function exists.
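To make the flavor of such a procedure concrete, here is a hedged sketch for the SOAP case in the small ring environment from above. It relies on the fact that a deterministic policy's start-state value is linear in the reward vector, so "does a suitable Markov reward exist?" becomes a linear-program feasibility question. This is our own illustrative construction; the environment, discount factor, and function names are assumptions, and the paper's actual procedure may differ in its details.

```python
import itertools

import numpy as np
from scipy.optimize import linprog

# A tiny deterministic environment standing in for the grid loop: four cells
# on a ring, two actions (0 = clockwise, 1 = counterclockwise), discount 0.9.
N_S, N_A, GAMMA = 4, 2, 0.9
NEXT_STATE = [[(s + 1) % N_S, (s - 1) % N_S] for s in range(N_S)]


def policy_values(policy, reward):
    """State values of a deterministic policy under a state-action reward
    array of shape [N_S, N_A]: solves (I - gamma * P_pi) v = r_pi."""
    P = np.zeros((N_S, N_S))
    r = np.zeros(N_S)
    for s, a in enumerate(policy):
        P[s, NEXT_STATE[s][a]] = 1.0
        r[s] = reward[s, a]
    return np.linalg.solve(np.eye(N_S) - GAMMA * P, r)


def soap_expressible(acceptable, start=0, margin=1e-3):
    """Decide whether some Markov reward in [-1, 1] makes every acceptable
    policy at least `margin` better at `start` than every other policy."""
    policies = list(itertools.product(range(N_A), repeat=N_S))
    # Each policy's start-state value is linear in the reward vector, so we
    # collect its coefficients by evaluating it on basis reward vectors.
    coeff = {pi: np.array([policy_values(pi, e.reshape(N_S, N_A))[start]
                           for e in np.eye(N_S * N_A)])
             for pi in policies}
    A_ub, b_ub = [], []
    for good in acceptable:
        for other in policies:
            if other in acceptable:
                continue
            # V_good(start) >= V_other(start) + margin, written as
            # (coeff_other - coeff_good) . reward <= -margin.
            A_ub.append(coeff[other] - coeff[good])
            b_ub.append(-margin)
    result = linprog(c=np.zeros(N_S * N_A), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                     bounds=[(-1.0, 1.0)] * (N_S * N_A), method="highs")
    return result.success  # feasible <=> such a Markov reward function exists


# The loop SOAP from the sketch above: no Markov reward function captures it.
print(soap_expressible({(0, 0, 0, 0), (1, 1, 1, 1)}))  # expected: False
# A SOAP with a single acceptable policy, by contrast, is easy to capture.
print(soap_expressible({(0, 0, 0, 0)}))                # expected: True
```

In this sketch, when the linear program is feasible, any feasible point is itself a Markov reward function satisfying the SOAP's strictness constraints, which illustrates the sense in which such a procedure can also output the desired reward function.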
This work establishes initial pathways toward understanding the scope of the reward hypothesis, but there is much still to be done to generalize these results beyond finite environments, Markov rewards, and simple notions of "task" and "expressivity". We hope this work provides new conceptual perspectives on reward and its place in reinforcement learning.