A collection of Open Source routines, built on Linux, Apache, Zope, Plone, MySQL, and Scilab among others, that allows users to browse and perform a first analysis of multiparametric data stored in heterogeneous formats.
From the user's point of view, the main advantage is the possibility of browsing through datasets recorded on different volcanoes, with different instruments, at different sampling frequencies, and stored in different formats, all via a consistent, user-friendly interface that transparently runs queries against the database, fetches the data from the main storage units, generates the graphs, and produces dynamically generated web pages to interact with the user. By "user" we mean a whole class of people who may be interested in using the dynamic web interface. In fact, the Zope application server allows user privileges to be defined, both for data access and for the procedures that may be run. In this way we can offer a similar interface to the routine operator who does the monitoring, to the researcher, and to the volcano observatory director, but each will be able to act on a different level of detail on the data, perhaps using a different set of routines.
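The idea of one interface with per-role privileges can be sketched as a simple role-to-permission mapping. This is an illustrative sketch only, not the project's actual Zope security configuration; the role names and permission strings here are hypothetical stand-ins for whatever the observatory defines.

```python
# Hypothetical role/permission table, analogous in spirit to the
# privilege definitions the Zope application server provides.
# All names below are illustrative, not taken from the real system.
ROLE_PERMISSIONS = {
    "operator":   {"view_plots", "run_monitoring_routines"},
    "researcher": {"view_plots", "run_monitoring_routines",
                   "download_raw_data", "run_analysis_routines"},
    "director":   {"view_plots", "view_summary_reports"},
}

def can(role, action):
    """Return True if the given role is allowed to perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

With such a table, the same web pages can be served to everyone while each request is filtered through `can(role, action)`, so the operator, the researcher, and the director see the interface at different levels of detail.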
Handling a volcanic crisis and doing research on volcanic data are completely different tasks. However, they share common problems, one of which is the handling of huge amounts of heterogeneous data. At present, this handling is done in a completely different way at every university and every volcano observatory. This means that, for example, during a crisis a researcher arriving in a foreign country to help the local observatory staff (if any...) has to find his/her way among the "usual" problems of how the data is stored, where it is, how it is time-stamped, and so on, stealing precious time from higher-level and more important tasks, such as those linked to answering the questions "What is going on?" and "What is going to happen next?".
Even if the foreign researcher decides to use his/her own software, there is always the issue of how to convert the existing data into a format that software can handle. Moreover, there is the issue of licensing, i.e. paying license fees to the owners of the software, which most of the time is proprietary and commercial. And so-called "pirating" is neither a nice nor a fair solution!
The construction of a full and consistent set of tools based on an open source framework is therefore a major result of MULTIMO. We hope this is just the beginning, i.e. that the routines developed within MULTIMO will become the base of a continuously growing environment that may be further specialized and expanded.
We consider particularly important the realization of a fully dynamic web environment that allows many different approaches to be run and their results monitored, including the stochastic one, for which commercial software is the standard, probably because this kind of time series analysis was mostly developed for processing economic time series. In this case, the choice of "going open source" has particularly important social implications, especially for third world countries.
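To make the "stochastic approach" concrete, a minimal example of the kind of time-series routine involved is fitting an AR(1) model, x[t] = phi * x[t-1] + e[t], by least squares. The sketch below uses only the Python standard library and simulated data; it is an illustration of the technique, not code from the MULTIMO environment (which uses Scilab for numerics).

```python
import random

def fit_ar1(x):
    """Least-squares estimate of phi in the AR(1) model
    x[t] = phi * x[t-1] + e[t]."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

# Simulate 5000 samples of an AR(1) process with known phi = 0.7,
# then recover phi from the data alone.
random.seed(0)
phi_true = 0.7
x = [0.0]
for _ in range(5000):
    x.append(phi_true * x[-1] + random.gauss(0.0, 1.0))

phi_hat = fit_ar1(x)
```

The same estimation idea, applied to volcanic rather than economic series, is what the open-source routines make available without license fees.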