During Period 2, major progress was made in developing and integrating the CAVAA architecture across multiple cognitive layers. In WP1, three biologically inspired modules were implemented: (1) the Reactive Layer uses a neural mass allostatic model to replicate hypothalamic dynamics and support self-regulatory behavior; (2) the Adaptive Layer introduces a Motivational Hippocampal Autoencoder, allowing artificial agents to build self-referential cognitive maps; and (3) the Contextual Layer includes Sequential Episodic Control, a hippocampal-inspired memory algorithm that enhances sample-efficient reinforcement learning (illustrative sketches of the Adaptive- and Contextual-Layer mechanisms are given at the end of this section). These modules have been benchmarked and integrated into the full CAVAA architecture. The Virtualization Model, now fully operational, explains a wide range of hippocampal replay phenomena and has been successfully embedded into the system.

In WP2, novel models at the interface of cognitive science and deep learning were developed, including models of attention, goal-directed decision-making, and learning under virtualization. These contribute to both model-based and model-free learning, leveraging cognitive mechanisms such as visual attention and memory. Progress in explainable AI and self-supervised vision transformers resulted in two peer-reviewed publications.

In WP3, key components were developed to support physical embodiment and interaction with real or simulated environments. A sensory acquisition interface was implemented to feed environmental data into the architecture, and a motor command interface was built to translate the architecture’s outputs into discrete or continuous motion commands (a sketch of this perception-action boundary also follows below). The Motivational Hippocampal Autoencoder was trained in a warehouse environment.

WP4 delivered new standardized measures of awareness for biological and artificial systems, aligned with human subjective evaluations.

WP5 enabled rich interdisciplinary collaboration among engineers, neuroscientists, ethicists, and philosophers. This led to several notable publications on technical, ethical, and conceptual aspects of AI awareness, including a new classification of strong vs. weak AI alignment with human values; conceptual clarifications on artificial consciousness; and innovative work on value alignment in moral dilemmas using large language models and probabilistic reinforcement learning. A two-day workshop on AI awareness and privacy was held at the University of Oxford in May 2025, bringing together academic and industry stakeholders to advance the scientific and societal understanding of artificial awareness, particularly in relation to agency, responsibility, and privacy.
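
For readers less familiar with the components named above, the following sketches illustrate the general ideas; they are minimal expository assumptions, not the project's actual code. First, the Adaptive Layer's notion of a self-referential cognitive map: the sketch below couples a standard autoencoder's latent code to an internal motivational signal via an auxiliary prediction head, so the learned representation reflects the agent's own needs as well as its environment. The class name, layer sizes, and loss weighting (`beta`) are all hypothetical.

```python
import torch
import torch.nn as nn

class MotivationalAutoencoder(nn.Module):
    """Illustrative autoencoder whose latent code is shaped jointly by
    sensory reconstruction and an internal motivational signal, so the
    learned map encodes the agent's needs, not only the environment."""

    def __init__(self, obs_dim: int, motive_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + motive_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, obs_dim),
        )
        # Auxiliary head: predict the motivational state from the latent,
        # forcing the cognitive map to carry self-referential information.
        self.motive_head = nn.Linear(latent_dim, motive_dim)

    def forward(self, obs, motive):
        z = self.encoder(torch.cat([obs, motive], dim=-1))
        return self.decoder(z), self.motive_head(z), z

def loss_fn(model, obs, motive, beta: float = 1.0):
    """Reconstruction loss plus a motivational-prediction term."""
    recon, motive_pred, _ = model(obs, motive)
    return (nn.functional.mse_loss(recon, obs)
            + beta * nn.functional.mse_loss(motive_pred, motive))
```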
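
Second, the Contextual Layer's Sequential Episodic Control belongs to the episodic-control family of algorithms, which keep a non-parametric memory of the best returns obtained from visited (state, action) pairs and select actions by nearest-neighbour lookup. The sketch below shows only that generic loop; the sequential aspects specific to CAVAA's algorithm are not reproduced, and all names and design choices (FIFO eviction, k-nearest averaging) are assumptions.

```python
import numpy as np

class EpisodicController:
    """Toy episodic-control memory: stores the best return observed for
    (state, action) pairs and acts via nearest-neighbour value estimates."""

    def __init__(self, n_actions: int, capacity: int = 10_000, k: int = 5):
        self.n_actions = n_actions
        self.capacity = capacity
        self.k = k
        # One memory table per action: parallel lists of states and returns.
        self.keys = [[] for _ in range(n_actions)]
        self.values = [[] for _ in range(n_actions)]

    def write(self, state, action, episodic_return):
        """Store the return achieved from (state, action), keeping the
        best value seen when the same state is revisited."""
        keys, values = self.keys[action], self.values[action]
        if keys:
            dists = np.linalg.norm(np.asarray(keys) - state, axis=1)
            nearest = int(np.argmin(dists))
            if dists[nearest] < 1e-6:  # near-duplicate state: keep the max
                values[nearest] = max(values[nearest], episodic_return)
                return
        if len(keys) >= self.capacity:  # simple FIFO eviction
            keys.pop(0)
            values.pop(0)
        keys.append(np.asarray(state, dtype=float))
        values.append(float(episodic_return))

    def estimate(self, state, action):
        """Value estimate: mean return of the k nearest stored states."""
        keys, values = self.keys[action], self.values[action]
        if not keys:
            return 0.0
        dists = np.linalg.norm(np.asarray(keys) - state, axis=1)
        nearest = np.argsort(dists)[: self.k]
        return float(np.mean(np.asarray(values)[nearest]))

    def act(self, state, epsilon: float = 0.1):
        """Epsilon-greedy action selection over episodic value estimates."""
        if np.random.rand() < epsilon:
            return np.random.randint(self.n_actions)
        q = [self.estimate(state, a) for a in range(self.n_actions)]
        return int(np.argmax(q))
```

In use, an agent would call `act` at each step and `write` once returns are known at the end of an episode, which is what makes this family of methods sample-efficient: a single good episode is immediately reusable.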
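
Finally, the WP3 sensory acquisition and motor command interfaces: the sketch below shows one plausible shape for that boundary, with a sensor abstraction that yields observations and a motor abstraction that accepts either discrete or continuous commands. The names (`SensoryInterface`, `MotorInterface`, `MotionCommand`) and the single-step control loop are illustrative assumptions about the real interfaces.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Callable, Optional

import numpy as np

class SensoryInterface(ABC):
    """Acquires environmental data and feeds it into the architecture."""

    @abstractmethod
    def read(self) -> np.ndarray:
        """Return the latest observation as a feature vector."""

@dataclass
class MotionCommand:
    """Either a discrete action index or a continuous control vector."""
    discrete: Optional[int] = None
    continuous: Optional[np.ndarray] = None

class MotorInterface(ABC):
    """Translates the architecture's outputs into motion commands."""

    @abstractmethod
    def send(self, command: MotionCommand) -> None:
        """Dispatch the command to the robot or simulator."""

def control_step(sensors: SensoryInterface,
                 motors: MotorInterface,
                 policy: Callable[[np.ndarray], MotionCommand]) -> None:
    """One perception-action cycle: sense, decide, act."""
    motors.send(policy(sensors.read()))
```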