Gradient-based meta-learning has established itself as a promising research direction for few-shot learning: data-constrained training on an ensemble of tasks, followed by rapid adaptation to a task unseen during training. Until recently, however, little has been understood about why model-agnostic meta-learning (MAML), which is particularly well suited to few-shot learning, succeeds. We shed light on the phenomenon of feature reuse in MAML-style learners through empirical visualizations of the loss landscapes of a variety of tasks. We develop meta-visualization, an efficient framework for visualizing any gradient-based meta-learning algorithm, which can be used to generate hypotheses and guide model design. Our contributions range from augmenting research in meta-learning with a new tool to explaining the method's success through a geometric lens.
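To make the bi-level structure of gradient-based meta-learning concrete, the following is a minimal sketch of the first-order MAML approximation (FOMAML) on a toy family of scalar linear-regression tasks. The task family, function names, and hyperparameters are illustrative assumptions for exposition, not details from the paper; the full MAML algorithm would additionally differentiate through the inner update.

```python
# Minimal first-order MAML (FOMAML) sketch on toy 1-D regression tasks.
# Task family, names, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def loss(theta, x, y):
    return np.mean((theta * x - y) ** 2)

def grad(theta, x, y):
    # analytic gradient of the squared-error loss w.r.t. theta
    return np.mean(2.0 * (theta * x - y) * x)

def sample_task():
    # a task is a 1-D linear map y = w * x with a random slope w
    return rng.uniform(-2.0, 2.0)

def sample_data(w, n=10):
    x = rng.uniform(-1.0, 1.0, size=n)
    return x, w * x

def fomaml(steps=300, inner_lr=0.1, outer_lr=0.05, tasks_per_batch=4):
    theta = rng.normal()
    for _ in range(steps):
        outer_grad = 0.0
        for _ in range(tasks_per_batch):
            w = sample_task()
            xs, ys = sample_data(w)          # support set: inner adaptation
            theta_i = theta - inner_lr * grad(theta, xs, ys)
            xq, yq = sample_data(w)          # query set: outer objective
            # first-order approximation: gradient taken at adapted params,
            # ignoring second derivatives through the inner step
            outer_grad += grad(theta_i, xq, yq)
        theta -= outer_lr * outer_grad / tasks_per_batch
    return theta

theta_meta = fomaml()

# quick check: one inner step from the meta-initialization lowers task loss
w_new = 1.5
xs, ys = sample_data(w_new)
before = loss(theta_meta, xs, ys)
after = loss(theta_meta - 0.1 * grad(theta_meta, xs, ys), xs, ys)
```

Because the toy task distribution is symmetric around a zero slope, the learned initialization `theta_meta` settles near zero, from which a single inner gradient step adapts quickly to any sampled task.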