Localizing content in neural networks provides a bridge to understanding how the brain stores and processes information. Feedforward neural networks offer a relatively powerful yet accessible architectural setting in which to ask whether and how such networks house content. Within this setting, we establish the existence of polytopes in the state space of a network's hidden layer that serve as vehicles of content. We analyze these geometric structures from an information-theoretic point of view, invoking mutual information to define the content they store. We test our proposal in a suite of experiments on synthetic data, on sonar signals, and on data related to the diagnosis of myocardial perfusion studies. We show how the proposal addresses the problem of misclassification, and we offer a new solution to the disjunction problem, one that hinges on the precise nature of the causal-informational framework for content advocated here.