Abstract: |
Models can be used to gain insights into the modeled system through analyses. As soon as the model changes,
these analyses have to be recomputed for their results to remain valid. Incremental model analyses can save time
by reevaluating only those parts of a model analysis that are actually affected by a given model change. Deriving
such incremental model analyses implicitly saves development effort and preserves the understandability
of the analysis. However, current approaches to implicit incremental model analyses are either restricted
to a certain class of analyses or unable to incorporate optimized incrementalizations of frequently reused functions.
In addition, they do not take the model composition hierarchy into account, which could offer additional
efficiency gains. In the proposed PhD project, these problems are tackled by a combination of 1. an approach to
integrate optimized incrementalizations of commonly used analysis operators through a formal representation
of the incrementalization process in category theory, 2. an approach to simplify dynamic dependency graphs
using the model composition hierarchy, 3. an approach to automatically assemble an incrementalization profile
optimized for a given scenario, and 4. an approach for lock-free parallel change propagation of model
updates. These approaches are applied to a range of case studies, including analyses formulated as model
transformations. Furthermore, the PhD project analyzes how the performance of an incremental model analysis
is influenced by the metamodel design and whether relevant design criteria are already met by metamodel
designers. Finally, a new modeling paradigm based on Deep Modeling ideas is proposed to reduce accidental
complexity and thereby improve the performance of incremental model analyses.