Author: Denis, Nicholas
Date: 2019-08-27
Handle: http://hdl.handle.net/10393/39552
DOI: http://dx.doi.org/10.20381/ruor-23795

Title: On Hierarchical Goal Based Reinforcement Learning
Type: Thesis
Language: en
Keywords: Markov decision process; Reinforcement learning; Options framework; Temporal abstraction; Macro actions

Abstract: Discrete-time sequential decision processes require that an agent select an action at each time step. As humans, we plan over long time horizons and use temporal abstraction by selecting temporally extended actions such as “make lunch” or “get a master's degree”, each of which is composed of more granular actions. This thesis concerns such hierarchical temporal abstractions in the form of macro actions and options, as they apply to goal-based Markov Decision Processes. Two novel algorithms are introduced: one for discovering hierarchical macro actions in goal-based MDPs, and one that uses landmark options for transfer learning in multi-task goal-based reinforcement learning settings. Theoretical properties regarding the life-long regret of an agent executing the latter algorithm are also discussed.
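The abstract refers to the options framework for temporal abstraction. As background, an option is conventionally a triple of an initiation set, an intra-option policy, and a termination condition. The sketch below is an illustrative Python rendering of that triple on a hypothetical deterministic chain environment; the state numbering, the `go_to_landmark` option, and the `run_option` helper are assumptions for illustration, not constructs from the thesis.

```python
from dataclasses import dataclass
from typing import Callable, Set

State = int
Action = int

@dataclass
class Option:
    """An option <I, pi, beta>: initiation set, intra-option policy,
    and termination condition (illustrative sketch)."""
    initiation_set: Set[State]             # states where the option may be invoked
    policy: Callable[[State], Action]      # intra-option policy pi(s) -> a
    termination: Callable[[State], float]  # beta(s): probability of terminating in s

# Hypothetical example: an option available in states {0, 1, 2} that always
# takes action 1 ("move right") and terminates at a landmark state 3.
go_to_landmark = Option(
    initiation_set={0, 1, 2},
    policy=lambda s: 1,
    termination=lambda s: 1.0 if s == 3 else 0.0,
)

def run_option(option: Option, s: State,
               step: Callable[[State, Action], State]) -> State:
    """Execute the option from state s under transition function `step`
    until the termination condition fires; return the final state."""
    assert s in option.initiation_set, "option not available in this state"
    while True:
        s = step(s, option.policy(s))
        if option.termination(s) >= 1.0:
            return s

# Deterministic chain: action 1 moves one state to the right.
final = run_option(go_to_landmark, 0, lambda s, a: s + 1)
```

Invoking the option from state 0 walks the chain until the landmark state triggers termination, so `final` is the landmark state 3.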