Bounding the Problem

Dhruv Srikanth
January 7th, 2023

Abstract

Bounding a problem often helps to identify its characteristics and to eliminate approaches that do not fit its nature. This narrows the space of tools, techniques, and ideas available to solve the problem. Meta learning deals with learning to learn. Can we train a model, using meta learning, to learn the bounds of different problem spaces?

As mentioned above, we can define a meta learning problem as one of learning to learn. This can be done in several ways, such as learning to learn by performing different tasks, or learning from the gradients of different models on different tasks. Instead of gradients, we can also learn from the parameters, or even from the meta information, i.e., the characteristics of a model with respect to each task. It becomes increasingly difficult to learn to learn, to transfer learning, and to generalize learning as the variance between tasks increases.
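One of the approaches above, learning across tasks through gradients, can be sketched with a toy first-order MAML-style loop. Everything here (the linear task family, the step sizes, and the `fomaml` helper) is a hypothetical illustration under simple assumptions, not a method proposed in this note:

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(a, n=20):
    # Data for a toy linear task y = a * x.
    x = rng.uniform(-1.0, 1.0, n)
    return x, a * x

def grad(w, x, y):
    # Gradient of the mean squared error of the model y_hat = w * x.
    return 2.0 * np.mean(x * (w * x - y))

def fomaml(meta_steps=300, alpha=0.1, beta=0.05):
    # First-order MAML sketch: adapt to each sampled task with one inner
    # gradient step, then update the shared initialization using the
    # adapted parameter's gradient on fresh data from the same task.
    w = 0.0
    for _ in range(meta_steps):
        a = rng.uniform(1.0, 3.0)                  # sample a task
        x_s, y_s = task_batch(a)                   # support set
        w_adapted = w - alpha * grad(w, x_s, y_s)  # inner adaptation
        x_q, y_q = task_batch(a)                   # query set
        w -= beta * grad(w_adapted, x_q, y_q)      # outer meta-update
    return w
```

Starting a new task from the meta-learned initialization, a single gradient step reaches a much lower loss than the same step taken from an arbitrary initialization; that gap is the sense in which the model has learned to learn this task family.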

As shown in animal learning, and subsequently in reinforcement learning, we must explore a problem's space before finding the optimal policy for navigating and exploiting it. Could we learn to learn the bounds of the problem space? If so, would this help us learn more efficiently? We could certainly use it to pretrain meta learning models for faster downstream learning, and we could view it as learning a better partitioning of the task space. Would this help meta learning agents generalize better? If the universality assumption does not hold, then the ability of a meta learning agent to generalize decreases as the variance between tasks increases. Would learning problem bounds enable us to address this problem? And if so, would it contribute to more capable deep reinforcement learning agents? These are some of the open questions I think it would be interesting to explore.
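The explore-before-exploit dynamic mentioned above can be shown with a minimal epsilon-greedy bandit sketch; the arm values and the `epsilon_greedy` helper below are arbitrary choices for illustration, not anything defined in the text:

```python
import numpy as np

def epsilon_greedy(means, steps=2000, epsilon=0.1, seed=1):
    # Minimal epsilon-greedy bandit: with probability epsilon explore a
    # random arm, otherwise exploit the arm with the best current estimate.
    rng = np.random.default_rng(seed)
    q = np.zeros(len(means))       # value estimates per arm
    counts = np.zeros(len(means))  # pulls per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            a = int(rng.integers(len(means)))  # explore
        else:
            a = int(np.argmax(q))              # exploit
        reward = means[a]                      # deterministic toy reward
        counts[a] += 1
        q[a] += (reward - q[a]) / counts[a]    # incremental mean update
    return q, counts
```

Without the exploratory epsilon steps, the agent can latch onto the first arm it happens to try; with them, it eventually discovers the best arm and spends most of its remaining pulls exploiting it.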

Contact