
Modelling of morphology Development of micro- and Nano Structures

Final Report Summary - MODENA (Modelling of morphology Development of micro- and Nano Structures)

Executive Summary:
Whilst process design is largely done by computational methods, in many areas product design still requires a great deal of experimental effort, which is not only labour-intensive and expensive, but also limited in terms of the domain in which experiments can be performed. Computational approaches essentially remove these limits: one can evaluate more designs in a shorter time, at lower cost and over a much larger domain than is possible in the “real world”. The idea is thus very attractive and engineering uses it increasingly, but in most cases this involves quite a spectrum of methods tailored to the different scales and materials involved, encompassing essentially everything from atoms to the macro scale of the end product.
Motivated by the chemical industry, MoDeNa took on the task of constructing a generic computational environment that allows an arbitrary number of computational methods to be combined, each solving problems local to its scale and material, and lets different software modules interact so that the behaviour and properties of a product can be simulated and predicted. Polyurethanes, and specifically their foams, took centre stage, but it should be stressed that the software platform is not limited to this application, or even to materials modelling. Simulating these foams requires models for the polymerisation, the kinetics of the reaction system, the relevant physical properties, the kinetics of the gas phase blowing the bubbles, and the physical properties of all involved pure materials and mixtures, since the material, the polymer, changes from monomers to a solid. A special challenge is the coupling of the two phases when modelling on the macroscopic scale. All these submodels had to be collected or generated. Some of these tasks used commercial solvers, whilst many required the construction of special tools solving the given subproblem.
This resulted in on the order of twenty self-contained computational activities, which are connected to solve the overall task of modelling the foam product. Whilst it is mostly numerical information being transferred, the framework also generates surrogate models, replacing complex behaviour descriptions by simpler ones.
This resulted in predictions of laboratory experiments for rising foams in simple geometrical containers, which were subsequently verified. Some of the computations yielded surprisingly accurate predictions of the quantities of interest, within 5 %, whilst others showed clear shortcomings that have been traced back to individual model components, specifically the kinetics of the involved reactive systems. The platform, which was constructed from scratch, proved to enable dynamic refinement, the utilisation of surrogate models and easy substitution of model components.
The project takes the view of the overall model as an onion where each shell represents a layer of models on its scale, with the smallest scale in the centre and the largest as the outer shell. The framework then implements this recursive structure with the additional feature of replacing parts of the onion with a surrogate that captures the behaviour of the underlying lower-scale models. These surrogate models can be dynamically identified as part of the overall modelling process, steered by policies defined by the user when assembling the workflow for the computation of the overall system. These two structural components make the MoDeNa framework unique. In addition, the framework uses MongoDB, a NoSQL database, providing a great deal of flexibility, not least the ability to work in a distributed network of computing facilities: the user is not required to run the framework on a single computation node, but can spread the computational load over the network. The framework is also able to work with commercial tools and with software modules realised in various languages, including but not limited to C, C++, Python and R.
Project Context and Objectives:
The quality of industrial products is determined by the material properties, that is, the properties on all length scales, from nano to macro. The properties are also affected by the production conditions and, obviously, the chosen ingredients. “Quality” itself is determined by the application: a product must serve a particular purpose. For industrial foams the determining quality measures can, for example, be mechanical or thermal properties. Currently those quality-determining properties are measured on physical samples, which must in turn be generated by physical experiments, a process which is not only costly and labour-intensive, but also constrained by the available physical means. Replacing these experiments by computational experiments is thus of great interest even if they give only a relatively rough idea of the material’s properties.

The main target is therefore to enable product design and, secondarily, the design of processes or process units to produce the desired product.

The fact that all levels of scale affect the properties of the product implies that any computational tool must provide information about all scales either explicitly as models of the material behaviour in the form of a mechanistic description on the given scale or implicitly as surrogate models capturing the behaviour of “everything” below the current scale.

The surrogates are of particular interest, or even required, when a simulation on a larger scale needs information about the behaviour at a much smaller scale, specifically the molecular scale, where large assemblies are required to simulate the behaviour and obtain the sought properties. These computations are notoriously demanding in terms of time and resources. A direct coupling, implying computation of material properties on the fly based on molecular code, is not a feasible undertaking. The generation of simplified, surrogate models is a must. The MoDeNa framework thus provides the means of generating such surrogates from detailed models.
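The principle can be illustrated with a minimal sketch (all names and the closed-form "detailed model" are hypothetical, used only for illustration): an expensive lower-scale model is sampled at a few design points and replaced by a cheap polynomial surrogate that is valid within the sampled domain.

```python
import numpy as np

def detailed_model(T):
    """Stand-in for an expensive lower-scale computation
    (e.g. a molecular simulation of a property vs. temperature).
    Hypothetical closed form, for illustration only."""
    return 1e-3 * np.exp(2000.0 / T)

# Design of experiments: sample the detailed model at a few points
T_samples = np.linspace(300.0, 400.0, 8)
y_samples = detailed_model(T_samples)

# Fit a cheap surrogate (polynomial in 1/T) valid only on [300, 400] K
coeffs = np.polyfit(1.0 / T_samples, np.log(y_samples), 2)

def surrogate_model(T):
    """Cheap replacement, to be called by the larger-scale code."""
    return np.exp(np.polyval(coeffs, 1.0 / T))

# Inside its domain the surrogate reproduces the detailed model closely
T_test = 350.0
rel_err = abs(surrogate_model(T_test) - detailed_model(T_test)) / detailed_model(T_test)
```

Outside the sampled domain the surrogate is untrustworthy, which is exactly why the framework couples such fits to an updating policy.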

MoDeNa aimed at providing an example of a scale-integrated simulation of a complex material, specifically polyurethane and, in particular, its foams. The project constructed an event-driven software platform that provides a framework for the orchestration of numerical models on all scales and coordinates their interactions. For the generation of surrogate models capturing the behaviour of complex models in a given domain, a mechanism was designed and implemented for fitting surrogate models on demand, governed by a user-defined policy. The platform captures the workflow including all hierarchical model components, each generally realised as a separate software tool. The integrated NoSQL database provides the flexibility of storing documents, models and hierarchical data sets.

The project demonstrates the feasibility of integrating across the scales and of performing the overall process of simulating a complex material using in-house and commercial computational tools in a distributed computational environment.
Whilst some of these tools were available, a good number of specialised tools needed to be generated, primarily for the computation of properties such as diffusivity, polymerisation and blowing reactions, viscosity of the polymer in different stages, viscosity of the pseudo-phase of bubbles and polymer, and mechanical properties. In total, 5 nano-scale tools, 5 meso-scale tools and 7 macro-scale tools exchange 55 pieces of information, ranging from scalar parameters and state variables to complete models.

The project uses the following tools:

WP1 Material properties (UNITS): Materials Studio (Accelrys BIOVIA) for basic property calculations (e.g. density, diffusivity, transport and mechanical properties as a function of polymer composition, temperature, and degree of cross-linking) using atomistic molecular dynamics simulations (based on the COMPASS force field), mesoscopic molecular simulations based on dissipative particle dynamics (DPD, as implemented in Materials Studio), and finite element calculations (using the code Palmyra, a gift of Prof. Gusev, ETH Zurich, Switzerland). The atomistic and mesoscale computational recipes developed within Materials Studio can be readily ported to any other open-source software for multiscale molecular simulation, such as LAMMPS. The atomistic-level calculations alone can also be ported to the freely distributed platform NAMD. Finally, any open-source finite element program (e.g. CalculiX) can be employed for, e.g., mechanical property estimation. Licence: Commercial

WP1 Kinetics (BASF):
Turbomole A quantum chemistry simulation tool which uses DFT or coupled-cluster (CC) methods to analyse molecules of up to one hundred atoms. It was used for the calculation of rate coefficients for the blowing and gelling reactions, with and without amine catalyst, based on the DFT method under “gas phase” conditions. Licence: Commercial. Alternatives: Gaussian, ADF, NWChem
COSMOtherm Describes the effects of solvation in chemical systems based on the COSMO-RS theory. The effect of solvation on the chemical rate coefficients, which was not considered within the Turbomole calculations, was determined with COSMOtherm. Licence: Commercial. Alternatives: ADF (to a limited degree)
Predici A simulation tool that allows the setup and simulation of the large reaction systems typical of polymerisation processes. It uses population balance methods to determine the molecular weight distribution of polymers or their statistical parameters. The software was used for the simulation of the overall chemical kinetics of the polyurethane polymerisation process. Predici has an export functionality that allows the transfer of kinetic models to other software packages as C code. Licence: Commercial. Alternatives: Aspen Polymers offers some of Predici's features

WP1 In-house EOS tool (US). This tool uses the PC-SAFT equation of state to calculate the solubility of the blowing agents in the polymer-rich phase. It is implemented as a self-contained open-source Fortran code and integrated into the MoDeNa framework as a backward-mapping model with connections to several mesoscale tools. License: US in-house. Alternatives: ThermoC

WP1 In-house thermo tool, DFT (US). This tool uses a density functional theory consistent with PC-SAFT to predict surface tension. It is implemented as an open-source Fortran code with a dependence on the numerical library PETSc. The tool is integrated in a backward-mapping fashion into the MoDeNa framework. License: US in-house. Alternatives: Tramonto

WP2 Bubble growth tool (VSCHT). This tool is based on the bubble-shell model. It implements the reaction kinetics for the calculation of temperature and the production of the chemical blowing agent. It then simulates the diffusion of blowing agents in the reaction mixture and their evaporation at the bubble interface. Thus, it predicts the bubble growth rate, which is further utilised in the CFD tool. It is implemented as open-source Fortran code. License: VSCHT in-house. Alternatives: None known
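A strongly simplified sketch of the diffusion-driven growth step may help to fix ideas. This is not the VSCHT implementation: the rate law (growth limited by diffusion of blowing agent to the interface) and every parameter value below are illustrative assumptions.

```python
# Minimal Euler integration of diffusion-limited bubble growth,
#   dR/dt = D * (c_bulk - c_sat) / (rho_g * R)   -- illustrative rate law only
D = 1e-9        # diffusivity of blowing agent in the mixture [m^2/s] (assumed)
c_bulk = 5.0    # blowing agent concentration far from the bubble [kg/m^3] (assumed)
c_sat = 1.0     # saturation concentration at the interface [kg/m^3] (assumed)
rho_g = 1.5     # gas density inside the bubble [kg/m^3] (assumed)

def grow_bubble(R0, t_end, dt=1e-3):
    """Return the bubble radius [m] after t_end seconds, starting from R0."""
    R, t = R0, 0.0
    while t < t_end:
        R += dt * D * (c_bulk - c_sat) / (rho_g * R)  # explicit Euler step
        t += dt
    return R

# A 1-micron nucleus grows to the tens-of-microns range within a second
R_final = grow_bubble(R0=1e-6, t_end=1.0)
```

The real tool couples this growth to the reaction kinetics (heat release, blowing-agent production) and to the rheology of the surrounding shell.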

WP2 Coalescence Kernel tool (TUE) License: TUE in-house. Alternatives: None known

WP2 Polymer viscosity tool (VSCHT). The tool is an implementation of the Castro-Macosko model. The parameters of the model were obtained for different polyurethane systems from the open literature and from experimental data generated during the project. It is used mainly by the Bubble growth, Wall drainage and Foam rheology tools. It is implemented as a forward-mapping model in the MoDeNa framework. License: VSCHT in-house. Alternatives: None known
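The Castro-Macosko model gives the viscosity of the reacting mixture as a function of temperature and conversion, diverging at the gel point. A compact sketch follows; the functional form is the standard one from the literature, while all parameter values are illustrative, not the fitted project data.

```python
import math

R_GAS = 8.314  # universal gas constant [J/(mol K)]

def castro_macosko(T, X, A_mu=1e-5, E_mu=3.0e4, X_gel=0.6, a=1.5, b=1.0):
    """Castro-Macosko viscosity model:
        eta = A_mu * exp(E_mu / (R*T)) * (X_gel / (X_gel - X))**(a + b*X)
    T: temperature [K]; X: conversion (0 <= X < X_gel).
    Parameter values here are illustrative placeholders."""
    if X >= X_gel:
        return float("inf")  # viscosity diverges at the gel point
    eta0 = A_mu * math.exp(E_mu / (R_GAS * T))
    return eta0 * (X_gel / (X_gel - X)) ** (a + b * X)

# Viscosity rises steeply as conversion approaches the gel point
eta_low = castro_macosko(T=330.0, X=0.1)
eta_high = castro_macosko(T=330.0, X=0.55)
```

This steep rise near gelation is what the Bubble growth and Wall drainage tools must resolve.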

WP2 Wall drainage tool (VSCHT). The tool simulates the formation of walls and struts during the foaming process. It is based on the thin-film hydrodynamics equations. It models the flow of the reaction mixture around the bubbles due to capillary forces and the growth of the bubbles. It uses the evolution of bubble size calculated by the Bubble growth or CFD tool. It is used for the prediction of the wall thickness profile. It is implemented as open-source Fortran code. License: VSCHT in-house. Alternatives: None known

WP3 CFD tool. The CFD tool simulates the expansion of a reacting polymer foam by using the volume-of-fluid method to describe the detailed evolution of the interface between the polymer foam and the surrounding air. The evolution of the cell size distribution inside the foam is described via a population balance equation. The CFD tool accounts for the kinetics of the polymerisation reaction, the growth of the foam cells, and changes in foam density, foam conductivity and foam rheology. The CFD tool was implemented both in Ansys Fluent (commercial code) and OpenFOAM (open source) but was released in the final version only in OpenFOAM. The CFD tool can be used as an independent stand-alone code or integrated into the MoDeNa platform (POLITO). Licence: in-house. Alternatives: None known

WP3 Foam conductivity tool (VSCHT). This tool predicts the heat insulation properties of the foam, i.e. the equivalent foam conductivity. It is based on the homogeneous phase approach. First, it estimates the effective conductive and radiative properties of the foam based on its morphology and the properties of the gas and polymer. It then performs a coupled conductive-radiative heat transfer simulation to determine the equivalent foam conductivity. This property is a valuable output for the user, but it is also used by other models through the MoDeNa framework, namely the Foam aging and CFD tools. It is implemented as an open-source Fortran code. License: VSCHT in-house. Alternatives: None known
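The homogeneous-phase idea can be illustrated with a textbook-style estimate: volume-weighted conduction through gas and polymer plus a Rosseland-type radiative contribution. This sketch is not the VSCHT tool's model (which resolves morphology and couples conduction and radiation); all values are typical illustrative numbers.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W/(m^2 K^4)]

def foam_conductivity(phi_gas, k_gas, k_solid, T, beta):
    """Illustrative homogeneous-phase estimate of equivalent foam conductivity:
    volume-weighted conduction plus a Rosseland radiative term
        k_rad = 16 * sigma * T^3 / (3 * beta).
    phi_gas: gas volume fraction; beta: extinction coefficient [1/m].
    A textbook-style sketch, not the project's actual model."""
    k_cond = phi_gas * k_gas + (1.0 - phi_gas) * k_solid
    k_rad = 16.0 * SIGMA * T**3 / (3.0 * beta)
    return k_cond + k_rad

# Typical closed-cell PU foam values (illustrative)
k_eff = foam_conductivity(phi_gas=0.97, k_gas=0.014, k_solid=0.25, T=300.0, beta=5000.0)
```

Even this crude estimate lands in the 20-30 mW/(m K) range typical of PU insulation foam, showing why the gas composition (handled by the Foam ageing tool) dominates the insulation value.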

WP3 Foam rheology tool (TUE) License: TUE in-house. Alternatives: None known

WP3 Foam ageing tool (VSCHT). The tool simulates the permeation of blowing agents out of the foam and the simultaneous permeation of air into the foam. Since the largest obstacle to gas permeation is the polymer walls, the tool is based on a model of consecutive plane-parallel walls separated by gas cells. It uses the thermodynamic tools from WP1 to determine the solubility and diffusivity of the gases. It then predicts the evolution of the gas composition in the foam cells over time and, with the help of the Foam conductivity tool, the gradual degradation of the heat insulation properties of the foam. It is implemented as an open-source Fortran code. License: VSCHT in-house. Alternatives: None known
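The essence of the ageing problem is a slow exchange of cell gases with the ambient. The sketch below lumps the wall-by-wall permeation into a single first-order rate per gas; both the rate law and every coefficient are simplifying assumptions, not the tool's plane-parallel-wall model.

```python
def age_foam(p0_ba, p0_air, P_ba, P_air, t_end, dt,
             d_wall=1e-6, L_cell=2e-4, p_atm=101325.0):
    """Illustrative first-order exchange of cell gases with the ambient:
        dp_i/dt = (P_i / (d_wall * L_cell)) * (p_ambient_i - p_i)
    P_i is a lumped permeability coefficient; geometry is lumped into the
    wall thickness d_wall and cell size L_cell. All values and the rate
    law itself are simplifying assumptions for illustration."""
    p_ba, p_air, t = p0_ba, p0_air, 0.0
    while t < t_end:
        p_ba += dt * (P_ba / (d_wall * L_cell)) * (0.0 - p_ba)       # blowing agent escapes
        p_air += dt * (P_air / (d_wall * L_cell)) * (p_atm - p_air)  # air diffuses in
        t += dt
    return p_ba, p_air

# After one year (~3.15e7 s): part of the blowing agent is gone, air has entered
p_ba, p_air = age_foam(p0_ba=101325.0, p0_air=0.0,
                       P_ba=1e-18, P_air=5e-18, t_end=3.15e7, dt=1e5)
```

Because air typically permeates faster than the blowing agent, the cell gas composition shifts towards air, which is what degrades the conductivity over the years.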

WP3 Foam reconstruction tool (IMDEA, VSCHT). This tool creates a computer-generated image of a representative volume element of the foam based on input morphology descriptors (cell size distribution, foam density, strut content). It uses a close-packing algorithm to generate closely packed spheres with the desired size distribution. It then uses Laguerre tessellation, in which the spheres act as seeds, to divide the periodic domain into cells. The tool can then create a computational mesh for the Foam conductivity (finite volume method) or Mechanical properties (finite element method) tool. The tool is a collection of mesh creation and manipulation utilities, most of them open-source C++ and Python code. The interface was created in Python. License: IMDEA & VSCHT in-house. Alternatives: None known
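The seeding step can be illustrated with a crude stand-in: random sequential addition of non-overlapping spheres in a periodic box. The actual tool uses a proper close-packing algorithm to reach realistic densities; this sketch only shows the idea of generating seeds for the Laguerre tessellation.

```python
import random

def pack_spheres(radii, box=1.0, max_tries=20000, seed=0):
    """Random sequential addition of non-overlapping spheres in a periodic
    cubic box -- a crude stand-in for the close-packing step that seeds
    the Laguerre tessellation (not the IMDEA/VSCHT algorithm)."""
    rng = random.Random(seed)
    placed = []  # list of (x, y, z, r)
    for r in sorted(radii, reverse=True):  # place large spheres first
        for _ in range(max_tries):
            x, y, z = (rng.random() * box for _ in range(3))
            ok = True
            for (px, py, pz, pr) in placed:
                # squared minimum-image distance in the periodic box
                d2 = sum(min(abs(a - b), box - abs(a - b)) ** 2
                         for a, b in ((x, px), (y, py), (z, pz)))
                if d2 < (r + pr) ** 2:
                    ok = False
                    break
            if ok:
                placed.append((x, y, z, r))
                break
    return placed

# A bidisperse size distribution at low packing fraction (illustrative)
seeds = pack_spheres([0.08] * 20 + [0.05] * 30)
```

Each placed sphere then becomes a seed of one Laguerre cell, with its radius acting as the cell weight, so the cell size distribution follows the sphere size distribution.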

WP3 Mechanical properties and mechanical simulation (IMDEA). A continuum simulation tool has been developed to predict the mechanical properties of foams from the mechanical behaviour of the solid material and the microstructural features of the foam. The simulation strategy is based on computational homogenisation by means of the finite element analysis of a representative volume element of the microstructure. The tool can be used to model any type of foam (metallic, ceramic or polymeric, open or closed cell, etc.) and it can take into account the main microstructural features of the foam (density, cell size distribution and anisotropy, mass distribution between struts and cell walls, cell wall thickness, strut shape, etc.). The input parameters for the model can be obtained from experiments and/or simulations, and the tool has been incorporated into the MoDeNa orchestrator. License: IMDEA in-house. Alternatives: None known

Multi-scale MoDeNa simulation platform - properties and features:

+ The MoDeNa software framework is a generic software platform for multi-scale modelling in general. Its application is not limited to polyurethane or materials modelling.
+ The MoDeNa software framework is unique in that it uses adaptive surrogate models to bridge scales.
+ Library structure supports relocation, auto loading and instantiation of models through search algorithm.
+ Model substitution mechanism supports variable indexing, including model substitution with index variables and support of indices in the exact task object.
+ Support of jinja2 (template engine) for input file generation and automatic code generation for surrogate model C functions.
+ A two-level software framework consisting of a high level Python library and a low-level C library with additional Fortran, C++ bindings and OpenFOAM field interface.
+ The software components and documentation have been released continuously to the partners and the public through the public GitHub repository.
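The template-based input-file generation mentioned above can be illustrated with the Python standard library. MoDeNa itself uses the jinja2 template engine; `string.Template` is used here only to keep the sketch dependency-free, and the input-file format shown is invented.

```python
from string import Template

# A solver input file as a template; MoDeNa uses jinja2 for this, the
# stdlib Template class illustrates the same idea without the dependency.
# The input format below is invented for illustration.
solver_input = Template("""\
simulation $name
temperature $T_K K
surrogate_coeffs $coeffs
""")

# Fill in the placeholders from the current workflow state
text = solver_input.substitute(name="foam_rise", T_K=330.0,
                               coeffs="0.12 1.8e-3 -4.1e-6")
```

The same mechanism lets the framework regenerate input files automatically each time a surrogate model's coefficients are updated.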

Project Results:
This part has been uploaded as a separate PDF.
Potential Impact:
Product design
The project demonstrated that generating models and integrating the model-specific tools into a generic platform is not only feasible, but also industrially applicable. This demonstrates that complex products can be “computed” and that it is feasible to design products employing these types of computational tools. This opens a world of opportunities which industry is invited to explore. It should, though, be noted that the results for polyurethane foaming are of varying quality and that a solid framework for validation is still required, as confidence in the predictive quality of individual parts varies greatly from domain to domain and material to material.
The expected impact is not only a reduction of costs due to reduced experimental work, but also improved products in terms of quality and performance, as well as an increased number of different products, i.e. an increase in versatility.

Process design
The simulation tool, once equipped with the required models to provide reliable predictions, can be used to carry out sensitivity studies which provide essential information on where improvements are possible. This will define requirements for the production process, both in operations and in design. In the case of MoDeNa, the initial bubble size distribution is one of the quality-determining factors. Generating the product-optimal initial bubbles has consequences for both the operation and the design of the mixing unit.

The project gave new impetus to the effort of generating models in coded form. The problem is not immediately apparent, but arises from the multiplicity of tasks associated with a given process: many different problems should or must be solved using computational tools, all of which use models of the same process in one form or another. With each “problem” being associated with a specific community, the models tend to be regenerated over and over again, resulting in multiple implementations and a vertical integration with models at the top of each pillar. There is no mechanism to enforce any kind of compatibility between the different models, the different pieces of code that represent the mathematical description of the predicted behaviour of the modelled system. This quite obviously also makes any comparison of results from the different pillars questionable. This forms the background for the call for a centralised approach to model representation in a computational environment. It directly raises the question of interoperability, going beyond data exchange to enable model exchange.
MoDeNa has contributed on two levels: on the low level, where code is structured so as to accommodate models generated from the top level, where meta-data structures are generated and organised in ontologies, the latter designed to capture domain-specific information.
The low level has been resolved and proved to be useful and structurally robust. It is hoped that this will have an impact on how mathematical models are structured in code, thereby improving the interoperability between different platforms utilising these models.
MoDeNa gave new impetus to the discipline, which not least is starting to gain visibility in the materials modelling community. This is the result of several workshops organised by the ICMEg and CECAM. Moreover, the multiscale molecular simulation recipes and procedures developed in MoDeNa WP1 by UniTS will be exploited in the EU H2020 project COMPOSELECTOR, funded under Topic NMBP-23-2016, in which, after modification and adaptation to the systems of interest (polymer-based nanocomposites for automotive and aerospace applications), they will be implemented in the MuPiF platform (a distributed multi-physics integration tool) developed during the FP7 EU project “Multiscale Modelling Platform: Smart design of nano-enabled products in green technologies”, project number 604279.

The CEN Workshop Agreement (CWA) on Materials modelling terminology, classification and metadata was initiated as a result of the MoDeNa project.
Due to the complexity of materials and the wide range of applications, the materials modelling field consists of several communities that have over the years developed models and expertise in their areas. These communities typically focus on particular types of models (electronic, atomistic, mesoscopic and continuum) as well as applications of these models to certain areas. Along with these domains a wide range of domain specific software codes has evolved as well as domain related terminology. Applications to industrial problems in nanotechnology and advanced materials require a strong interdisciplinary approach between these fields and communities.
Once the CWA is finished and published, a common terminology and classification in materials modelling will be publicly available. The document will be a first step towards standardisation of the vocabulary and taxonomy of computational materials modelling in the repository of standardisation documents published by officially recognised standards bodies.
The document seeks to establish a common terminology in materials modelling which will lead to greatly simplified and much more efficient communication, especially benefitting industrial end users in their understanding and lowering the barrier to utilising materials modelling. Furthermore, a materials modelling metadata schema shall be defined, which will simplify communicating, disseminating, storing, retrieving and mining data about materials modelling. The metadata can also be used as a basis to develop future standards, especially in the field of interoperability between models.

Process of generating a complex simulation
MoDeNa demonstrates the process of generating a complex simulation with more than 10 computational activities and more than 50 interactions.
When assembling such a complex simulation process, the first step is to provide a global overview of what is required. MoDeNa approached this by generating a block diagram indicating the main activities and interactions. This requires a small group, possibly only one individual, who assembles the diagram. The members of this group require a solid background in modelling the product on all levels, which implies that for each relevant subject an expert must have input into the diagram-generation process.
Once established, the overall framework is exposed to the people overseeing the solution of each identified block, turning it into a set of well-defined, identifiable computational activities. For each of these activities, a team for the execution of the identified task is to be established.
Each team then communicates with the neighbouring teams, where “neighbouring” denotes a team whose task exchanges information with one's own, whether data required for one's own task (receiving) or data required by the neighbour (sending). The information being exchanged may be numerical data or complete models.
Each task may require either the employment of an available, possibly commercial, tool, or a specialised tool to be generated. In both cases adaptors must be written by the respective team to interface with the platform, which in this project is the MoDeNa platform. Each platform currently has its own definition of adaptors, which poses the portability and interoperability questions raised above. The responsibility for generating the adaptor is to be placed with the team generating or utilising the tool, in contrast to the team generating the platform.
In terms of the MoDeNa platform, this implies that future developments must follow the MoDeNa design principles. This includes the following steps:
+ identify the appropriate functional form (equations) of the surrogate models and implement them within the framework. The surrogate model should cover the required physics while being computationally efficient. This work may be carried out by the sending or receiving team.
+ implement code specific adaptors to drive lower-scale codes. This work is usually carried out by the sending team.
+ embed surrogate models into higher-level code using the MoDeNa API. This work is usually carried out by the receiving team.

Once a simulation has been assembled, it may be passed on to the application domain. Depending on the competence level of the application team, they can exchange computational tools themselves, or they may require the help of the tool-generating team. The experience from the first applications showed that exchanging tools is quite easily achieved by the application team, though considerable competence was available in the observed case.

The MoDeNa platform
The MoDeNa software is the first generic framework to integrate software across multiple scales through surrogate functions. The framework provides infrastructure for distributed computing, so computational jobs can be executed on any accessible computation node. The information associated with the simulation, including models, data and recipes, is stored in a NoSQL database, which can also be operated over a distributed network. The platform is event-driven, enabling the implementation of dynamic workflows. The platform also provides the infrastructure to replace computationally expensive models with simpler surrogates, and can be configured to dynamically update a surrogate based on optimised computational experiments performed on the complex models, following user-specified updating policies. Standard, exhaustively tested software is employed for this task: for the design of experiments and the identification of the surrogate models, the statistics community's R language is used. This enables the experiment-and-identification loop to be developed off-line and, once in place, simply integrated into the platform. These surrogate procedures are unique to this platform and represent a significant improvement over existing platforms.
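The update cycle just described can be sketched in a few lines. All names here are hypothetical, not the actual MoDeNa API (which delegates design of experiments and identification to R); the "detailed model" is a stand-in, and the updating policy shown is the simplest possible one: refit whenever a request falls outside the trusted domain.

```python
import numpy as np

def detailed(x):
    """Stand-in for an expensive detailed model (hypothetical)."""
    return np.exp(x)

class AdaptiveSurrogate:
    """Sketch of the update cycle: the surrogate answers requests inside
    its trusted domain; a request outside it triggers new computational
    experiments on the detailed model and a refit (a minimal
    out-of-bounds updating policy). Names are hypothetical."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.refit()

    def refit(self, n=10):
        xs = np.linspace(self.lo, self.hi, n)          # design of experiments
        self.coeffs = np.polyfit(xs, detailed(xs), 3)  # model identification

    def __call__(self, x):
        if not (self.lo <= x <= self.hi):              # updating policy
            self.lo, self.hi = min(self.lo, x), max(self.hi, x)
            self.refit()
        return np.polyval(self.coeffs, x)

s = AdaptiveSurrogate(0.0, 1.0)
y_inside = s(0.5)    # served by the existing fit
y_outside = s(2.0)   # triggers re-sampling and refitting first
```

In the real platform the policy, the design of experiments and the fitting procedure are all configurable, and the refit may launch jobs on remote computation nodes.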
The platform has been released open-source. Wikki offers support, training, consultancy and customisation for potential users. In addition, the framework offers great potential for workflow automation, which is daily business for Wikki and in this way enables Wikki to offer improved world-class services to its customers.

Platform - where to go
The properties required for a platform are:
+ “plug and play” for tools
+ “tools”, representing simulations of well-defined subsystems, must also be executable as stand-alone tasks.
+ “adaptors” - plug and play requires adaptors for each tool and platform, raising the issue of communication protocols. Whilst protocols may be unique to the platform, the generation of the adaptors must be formalised, calling for the definition of a meta language.
+ “meta terminology” - is in the first place domain-specific and should thus be captured in a domain-specific ontology. Comparing ontologies from different domains should then be the source for defining a translation table (an aliasing table) and two remaining, domain-specific lists of meta terms. The latter define the incompatible part, which needs to be added to one or the other domain if a connection is indeed to be established.
+ “centralised model repository” - whilst not trivial and domain-specific, one should aim at a feasible form of model representation that is easy to compile into any target language. A library per domain, or per group of domains, would enhance model construction very significantly. The main issue is to find a suitable structure; more importantly, such a repository is inhibited by commercial interests.
Properties that may be desirable:
+ “canonical” - a representation based on a canonical set of variables, as is common practice in thermodynamics. This could serve as a common basis, and building the protocols on it would likely result in a minimal protocol definition space.
+ “architecture” - it appears that two basic architectures may be considered: one that links models together in a type of daisy chain, that is, models ask for results from subordinate models; or a type of bus structure, where the tools attach to a data bus and are controlled by a global supervisor.
What is not likely achievable:
+ “global platform” - reason: the tools and the associated people are strongly bound to the application domain.
+ “global terminology” - reason: the multiplicity of disciplines involved is too large to reach a common agreement soon. However, the approach of first defining the terms in the various domains and then combining them, as suggested above, is seen as feasible.

List of Websites: