Thus far, work has included literature reviews and computational modeling and analysis. Two review articles on the use of artificial neural networks to study complex sensory processing were published.

Modeling work explored different forms of recurrence in visual processing, motivated by behavioral studies suggesting that recurrence in the visual system is important for processing degraded stimuli. Specifically, we added four different kinds of recurrence (two of each anatomical form, lateral and feedback) to a feedforward convolutional neural network and found all of them capable of increasing the network's ability to classify noisy digit images. Two of these forms take inspiration from findings in biology: predictive coding-based feedback and lateral surround suppression. To compare these forms of recurrence to anatomically matched counterparts, we also trained feedback and lateral connections directly to classify degraded images. Counter-intuitively, we found that the anatomy of the recurrence is not related to its function: both forms of task-trained recurrence change neural activity and behavior similarly to each other and differently from their bio-inspired anatomical counterparts.

Using several analysis tools frequently applied to neural data, we identified the distinct strategies used by the predictive coding versus task-trained networks. Specifically, predictive coding de-noises the representation of noisy images at the first layer of the network and decreases its dimensionality, leading to an expected increase in classification performance. Surprisingly, in the task-trained networks, representations are not de-noised over time at the first layer (in fact, they become noisier and their dimensionality increases), yet these dynamics do lead to de-noising at later layers. The analyses used here can be applied to real neural recordings to identify the strategies at play in the brain. Our analysis of an fMRI dataset weakly supports the predictive coding model but points to a need for higher-resolution cross-regional data to understand recurrent visual processing.
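To make one of these analyses concrete, the sketch below computes a participation-ratio estimate of representational dimensionality from an (images x units) activation matrix. It is a generic NumPy illustration of the metric, not the exact code used in the study, and the variable names are hypothetical.

```python
import numpy as np

def participation_ratio(activations):
    """Dimensionality of an (n_samples, n_units) activation matrix,
    computed from the eigenvalues of its covariance as
    (sum of eigenvalues)^2 / sum of squared eigenvalues.
    Ranges from 1 (one dominant direction) up to n_units."""
    centered = activations - activations.mean(axis=0, keepdims=True)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)  # guard against tiny negative values
    return eigvals.sum() ** 2 / np.sum(eigvals ** 2)

# Hypothetical usage: track how the dimensionality of a layer's
# response to noisy images changes across recurrent time steps.
# dims = [participation_ratio(acts) for acts in activations_per_timestep]
```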
In a separate study, we explored how the objective function used to train a network shapes its visual representations. Artificial neural systems trained using reinforcement, supervised, and unsupervised learning all acquire internal representations of high-dimensional input, but to what extent these representations depend on the different learning objectives is largely unknown. We therefore compared the representations learned by eight convolutional neural networks, each with an identical ResNet architecture and trained on the same family of egocentric images, but embedded within different learning systems. Specifically, the representations are trained to guide action in a compound reinforcement learning task; to predict one or a combination of three task-related targets with supervision; or using one of three different unsupervised objectives. Using representational similarity analysis, we find that the network trained with reinforcement learning differs most from the other networks. Through further analysis with metrics inspired by the neuroscience literature, we find that the model trained with reinforcement learning has a sparse and high-dimensional representation in which individual images are represented with very different patterns of neural activity. Further analysis suggests these representations may arise in order to guide long-term behavior and goal-seeking in the RL agent. Our results provide insight into how the properties of neural representations are influenced by objective functions and can inform transfer learning approaches.

These results are in the process of being disseminated in open-access venues and have been discussed at several conferences and seminars.
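For concreteness, the sketch below illustrates the kind of representational similarity and sparseness comparisons described above, assuming each network's responses to a shared image set are available as an (images x units) matrix. The function and variable names (e.g., responses_by_model) are illustrative rather than taken from the study.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Condensed representational dissimilarity matrix: 1 - Pearson
    correlation between the activity patterns evoked by every pair of
    images, given an (n_images, n_units) response matrix."""
    return pdist(responses, metric="correlation")

def rdm_similarity(rdm_a, rdm_b):
    """Second-order (network-to-network) similarity: Spearman rank
    correlation between two RDMs built from the same image set."""
    return spearmanr(rdm_a, rdm_b).correlation

def population_sparseness(responses):
    """Treves-Rolls sparseness of the response to each image, averaged
    over images; values nearer 0 indicate sparser codes."""
    r = np.abs(responses)
    return ((r.mean(axis=1) ** 2) / (r ** 2).mean(axis=1)).mean()

# Hypothetical usage, with responses_by_model mapping each of the eight
# networks to an (images x units) activation matrix for a shared image set:
# rdms = {name: rdm(acts) for name, acts in responses_by_model.items()}
# sim = rdm_similarity(rdms["reinforcement"], rdms["supervised"])
```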