MODELSWARD 2025 Abstracts


Area 1 - Methodologies, Processes and Platforms

Full Papers
Paper Nr: 21
Title:

Towards the Model-Driven Development of Adaptive Cloud Applications by Leveraging UML-RT and Container Orchestration

Authors:

Mufasir Muthaher Mohammed, Karim Jahed, Juergen Dingel and David Lamb

Abstract: Containers are self-contained units of code that can be executed in various computing environments. Container orchestration tools such as Kubernetes (K8s) assist in deploying, scaling, and managing containers, permitting alterations to the execution platform (environment) at runtime. Container orchestration and model-driven engineering (MDE) both offer concepts, techniques, and tools that facilitate the realization of self-adaptation capabilities. Yet, their joint use for the design, implementation, and maintenance of adaptive cloud applications appears to be underexplored. This paper presents the results of our investigation of how container orchestration can complement an extension of existing MDE techniques (based on UML-RT, a UML 2 profile) for the effective design, implementation, and maintenance of adaptive cloud applications. We describe an approach and toolchain for automatically generating and deploying a fully containerized distributed application from a UML-RT model and leveraging both model- and platform-level dynamic adaptation and failure recovery capabilities to allow the application to respond to runtime changes in requirements or to failures. The application of the approach to an exemplar, using a prototype implementation of our toolchain, is described. The evaluation results show the feasibility and effectiveness of the approach.
Download

Paper Nr: 49
Title:

From Plain English to XACML Policies: An AI-Based Pipeline Approach

Authors:

Maria Teresa Paratore, Eda Marchetti and Antonello Calabrò

Abstract: The increasing adoption of generative artificial intelligence, particularly conversational Large Language Models (LLMs), has presented new opportunities for addressing challenges in software development. This paper explores the potential of LLMs in generating eXtensible Access Control Markup Language (XACML) policies. It investigates current solutions and strategies for leveraging LLMs to produce verified, secure, and compliant access control policies. Specifically, by discussing current methods for enhancing LLM performance in generating structured text, it introduces a pipeline approach that integrates conversational LLMs with syntactic and semantic validators. This approach ensures the correctness and reliability of the generated policies. Our proposal is showcased using real policies, and the performance of various LLMs (ChatGPT, Claude, Gemini, and LLaMA) is compared. Our findings suggest a promising direction for future developments in automated access control policy formulation, bridging the gap between human intent and machine interpretation.
Download
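A minimal sketch of the generate-and-validate loop such a pipeline could follow, assuming a hypothetical ask_llm helper and a deliberately simplistic placeholder for the semantic check; the paper's actual prompts and validators are not reproduced here.

```python
import xml.etree.ElementTree as ET

def ask_llm(prompt: str) -> str:
    """Hypothetical helper that sends the prompt to a conversational LLM
    and returns its raw text answer (provider-specific code omitted)."""
    raise NotImplementedError

def semantically_valid(policy_xml: str, requirement: str) -> bool:
    """Placeholder semantic validator, e.g. checking that the elements named
    in the plain-English requirement actually appear in the policy."""
    return all(term in policy_xml for term in ("Permit", "Rule"))  # illustrative only

def english_to_xacml(requirement: str, max_attempts: int = 3) -> str:
    prompt = f"Translate this access-control requirement into an XACML 3.0 policy:\n{requirement}"
    for _ in range(max_attempts):
        candidate = ask_llm(prompt)
        try:
            ET.fromstring(candidate)          # syntactic check: well-formed XML
        except ET.ParseError as err:
            prompt += f"\nThe previous answer was not well-formed XML ({err}); please fix it."
            continue
        if semantically_valid(candidate, requirement):  # semantic check
            return candidate
        prompt += "\nThe previous policy did not reflect the requirement; please revise it."
    raise RuntimeError("no valid policy produced within the attempt budget")
```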

Paper Nr: 53
Title:

An Automated and Intelligent Interface Embracing Process Awareness into User Workspace

Authors:

Minh Khoi Nguyen, Hanh Nhi Tran, Ileana Ober and Razan Abualsaud

Abstract: This paper presents an AI-augmented framework for automated and intelligent process monitoring, addressing the inefficiencies of manual progress reporting in Process Management Systems (PMS), which leads to potential inaccuracies and consumes valuable user time. Our research proposes a novel solution that bridges users’ workspaces and the PMS, enabling automatic progress reporting based on users’ actions within their preferred tools. The core innovation of our framework pMage lies in employing Artificial Intelligence (AI) techniques to analyze and interpret sequences of user actions, translating them into accurate task progress updates, which significantly reduce manual input and enhance the accuracy of the reporting, thus making the integration of a PMS smoother and more effective. We demonstrate our framework’s applicability through a case study that uses our pMage prototype to monitor a brake system manufacturing process. As a smart interface, pMage provides a no-code solution to connect a wide range of user applications to various PMS via their respective APIs. This versatility ensures broad applicability across different organizational contexts and toolsets. Our AI-augmented framework offers a more reliable, efficient, and user-friendly approach than existing monitoring methods.
Download

Short Papers
Paper Nr: 8
Title:

Safe Behavior Model Synthesis: From STPA to LTL to SCCharts

Authors:

Jette Petzold and Reinhard von Hanxleden

Abstract: In Model-Driven Engineering, developers create a model of the system. Typically, such a model is verified to be safe by using model checking. For this, the developers need to create Linear Temporal Logic (LTL) formulas. Determining these formulas and modeling the system in the first place is time-consuming and error-prone. We propose to automatically create the LTL formulas based on a risk analysis that has to be done anyway. This reduces errors and the time needed to create the formulas. Furthermore, we use these formulas to automatically synthesize a behavior model of the analyzed system that is safe by construction. The presented approach is implemented in the open-source tool PASTA. A case study with a simplified Adaptive Cruise Control system shows the applicability of the Safe Behavior Model synthesis.
Download
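As an illustration (not taken from the paper) of how a risk-analysis result can become a checkable property, an STPA unsafe control action such as "acceleration is provided while the distance to the vehicle ahead is below the safe threshold" maps to a simple LTL safety formula:

```latex
% Illustrative LTL safety property for an Adaptive Cruise Control system,
% where G is the "globally" (always) operator:
\[
  \mathbf{G}\bigl(\mathit{distance} < d_{\mathrm{safe}} \;\rightarrow\; \neg\,\mathit{accelerate}\bigr)
\]
```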

Paper Nr: 50
Title:

How to Leverage Digital Twin for System Design?

Authors:

Jean-Sébastien Sottet, Pierre Brimont, Cedric Pruski and Faima Abbasi

Abstract: Digital Twins (DT) are deeply rooted in digital simulation environments. Today, they are still considered data-driven constructs aimed at supporting simulation, optimization, and prediction for a physical system. However, data alone may not completely describe a system. This necessitates additional knowledge, encapsulated within models, which forms the foundation of the Model-Driven Digital Twins (MDDT) paradigm. At the start of a DT life-cycle, or when dealing with a system under construction, models become the primary artifacts enabling the DT due to the lack of available data. This paper explores the advantages of simultaneously engaging in model-driven system design while preparing its corresponding DT. Using a real-world case study focused on developing a hydrogen valley, we demonstrate the substantial benefits of integrating models at the earliest stages of the DT’s design and implementation process. This covers preparing data collection and sensors, and incorporating human knowledge throughout the system lifecycle, enhancing replicability.
Download

Paper Nr: 51
Title:

An Engineer-Friendly Terminology of White, Black and Grey-Box Models

Authors:

Eugen Boos, Mauritz Mälzer, Felix Conrad, Hajo Wiemer and Steffen Ihlenfeldt

Abstract: In engineering modeling, white-box and black-box concepts represent two fundamental approaches for modeling systems. White-box models rely on detailed prior knowledge of the physical system, enabling transparent and explainable representations. Black-box models, on the other hand, have opaque internal workings and decision-making processes that prevent immediate interpretability. They are mainly data-driven, relying on statistical methods to capture system behavior. Depending on the literature at hand, the exact definitions of these two approaches differ. With the continuous emergence of machine learning algorithms in engineering and their move towards enhanced explainability and usability, the exact definition and assignment of white- and black-box properties soften. Grey-box modeling provides a hybrid approach. However, this term, as widely as it is used, has no clear definition either. This paper proposes a novel model of the relation between white-, black- and grey-box modeling, offering an improved categorization of conventional vanilla models and state-of-the-art hybrid models, as well as the derivation of recommendations for action for targeted model improvement.
Download

Paper Nr: 54
Title:

A Systematic Method to Derive Software Services and Requirements from Business Models

Authors:

Abderrahmane Leshob, Raqeebir Rab and Omar K. Hussain

Abstract: Service-Oriented Architecture (SOA) is an established architectural style to design modular, flexible and scalable software solutions. It provides design principles based on the service concept. Organizations use SOA to design software solutions to support their business models, including business process models and value models. Creating such software is a challenging endeavor that demands expertise in both software and business (process) engineering. On the other hand, Requirements Engineering (RE) plays a key role in building SOA-based solutions. RE ensures that the services are aligned with stakeholders’ needs. This research proposes a transformational approach to i) identify SOA services that automate the collaboration between business partners through their business models and ii) design a goal-based model that connects identified services to the functional requirements they implement and the business and non-functional requirements they satisfy. The obtained model makes it possible to compute a score that measures the effectiveness of the services in satisfying the requirements. The contribution of this work is twofold. First, it generates services that can be used to build software from business models. Second, it builds a model that computes satisfaction scores, allowing architecture and business analysis practitioners to i) measure the effectiveness of the services and ii) compare and select the most appropriate services from various implementations based on the satisfaction of the requirements.
Download

Paper Nr: 28
Title:

Enhancing Simscape Models Reusability Through Semantics and Word Embedding Representations

Authors:

Eduardo Cibrián, Jose María Álvarez-Rodriguez and Roy Mendieta

Abstract: Digital engineering represents a paradigm shift in current Systems Engineering practice. The use of digital technologies to create, manage, and integrate data across the entire life-cycle of a system is a key enabler for building a collaborative engineering environment. Models, both logical/descriptive and physical/analytical, are becoming the primary system artefacts used to represent fundamental system structure and behaviour. More specifically, physical models are extensively used for different technical activities, ranging from early requirements verification to system verification in a virtual environment. In this context, the reuse of such models becomes a cornerstone activity, as many similar models are continuously created using different tools which do not always provide a common internal representation. This situation results in wasted time, resources, and a potential source of inconsistency across the organization. In this work, an enriched representation of physical models using semantics and word embeddings is presented as the basis for building a search service that helps in the first stage of any reuse process: the identification and retrieval of relevant models based on semantic similarity. To validate the presented approach, Simscape models were used in a case study where a word-embedding-based search engine was applied, demonstrating improved precision and recall in model retrieval, and thus enhancing the reusability of these models across different domains.

Area 2 - Modeling Languages, Tools and Architectures

Full Papers
Paper Nr: 9
Title:

LLFSMs to TLA+: A Model-to-Text Transformation of Executable Models Enabling Specification and Verification of Multi-Threaded and Concurrent Systems

Authors:

Vladimir Estivill-Castro, Miguel Carrillo and David A. Rosenblueth

Abstract: As the complexity of software systems increases, ensuring reliability becomes ever more crucial. Despite advances, behaviour-modelling techniques still face challenges due to semantic gaps. This work focuses on translating Logic-Labelled Finite-State Machines (LLFSMs) to the Temporal Logic of Actions (TLA), bridging the gap between a time-triggered formalism and a common temporal logic for model checking. The translation is innovative as multi-threaded and distributed systems can now be designed using LLFSMs. We illustrate the translation with Fischer’s protocol (for multi-threaded systems), and release tools with examples for distributed systems. The approach addresses semantic gaps from three sources: differing finite-state machine semantics, variations in translating to executable models versus models for checking, and discrepancies between abstract and executable model translations.
Download

Paper Nr: 17
Title:

Designing a Meta-Model for the Eclipse Qrisp eDSL for High-Level Quantum Programming

Authors:

Sebastian Bock, Raphael Seidel, Matic Petrič, Nikolay Tcholtchev, Andreas Hoffmann and Niklas Porges

Abstract: Eclipse Qrisp is a high-level programming language designed to simplify quantum programming and make it accessible to a wider range of developers and end users. Initially developed at Fraunhofer FOKUS and now part of the Eclipse Foundation, Eclipse Qrisp abstracts complex quantum operations into user-friendly constructs, enhancing code readability and structure. Currently, Eclipse Qrisp is realized as an extension of the Python programming language, in the form of an embedded Domain Specific Language (eDSL), making it possible to develop hybrid quantum algorithms while at the same time utilizing the potential of the overall Python ecosystem in terms of libraries and available developer resources. We firmly believe that the eDSL approach to high-level quantum programming will prevail over the idea of defining specific languages - with their own grammar and ecosystem - due to its ease of integration within available ICT products and services. However, in order to reach higher levels of scalability and market penetration, the Eclipse Qrisp eDSL should be available for various platforms and programming languages beyond Python, e.g. C/C++, Java or Rust. In order to provide the means for implementing Eclipse Qrisp in other programming languages, this paper specifies a meta-model, thereby outlining the pursued design philosophy, architecture, and key features, including compatibility with existing frameworks. The purpose of such a Qrisp meta-model is twofold: on the one hand, it formalizes and standardizes the Eclipse Qrisp programming model; on the other hand, such a meta-model can be used to formally extend other programming languages and platforms with the capabilities and concepts specified and implemented within Eclipse Qrisp.
Download

Paper Nr: 18
Title:

Towards a Domain-Specific Modelling Environment for Reinforcement Learning

Authors:

Natalie Sinani, Sahil Salma, Paul Boutot and Sadaf Mustafiz

Abstract: In recent years, machine learning technologies have gained immense popularity and are being used in a wide range of domains. However, due to the complexity associated with machine learning algorithms, it is a challenge to make them user-friendly and easy to understand and apply. In particular, reinforcement learning (RL) applications are especially challenging for users who do not have proficiency in this area. In this paper, we use model-driven engineering (MDE) methods and tools for developing a framework for abstracting RL technologies to improve the learning curve for RL users. Our domain-specific modelling environment for reinforcement learning supports syntax-directed editing, constraint checking, and code synthesis, and enables comparative analysis of results generated with multiple RL algorithms. We demonstrate our framework with the use of several reinforcement learning applications.
Download

Paper Nr: 24
Title:

HyperGraphOS: A Meta Operating System for Science and Engineering

Authors:

Antonello Ceravola, Frank Joublin, Ahmed R. Sadik, Bram Bolder and Juha-Pekka Tolvanen

Abstract: This paper presents HyperGraphOS, an innovative Operating System (OS) designed for the scientific and engineering domains. It combines model-based engineering, graph modeling, data containers, and computational tools, offering users a dynamic workspace for creating and managing complex models represented as customizable graphs. Using a web-based architecture, HyperGraphOS requires only a modern browser to organize knowledge, documents, and content into interconnected models. Domain-Specific Languages (DSLs) drive workspace navigation, code generation, AI integration, and process organization. The platform’s models function as both visual drawings and data structures, enabling dynamic modifications and inspection, both interactively and programmatically. HyperGraphOS was evaluated across various domains, including virtual avatars, robotic task planning using Large Language Models (LLMs), and meta-modeling for feature-based code development. Results show significant improvements in flexibility, data management, computation, and document handling. By bridging traditional OS functionality with innovative UX design to fulfill the needs of modern applications, HyperGraphOS delivers enhanced productivity and efficiency. Its graph-based model representation and integration with DSLs create a highly flexible, user-friendly environment, making it ideal for a wide range of scientific and engineering contexts.
Download

Paper Nr: 35
Title:

Automated Generation of Standardised Digital Twins Based on MBSE Models

Authors:

Philippe Barbie, Andreas Pollom, Rene-Pascal Fischer and Martin Becker

Abstract: Digital Twins have emerged as a key technology to enable a mirrored digital representation of physical systems. The Asset Administration Shell (AAS) is a standardised concept for implementing these Digital Twins. However, the implementation of Digital Twins for existing systems poses an enormous challenge, as many physical systems were not originally developed for the integration of Digital Twins. Our aim is to generate ready-to-use standardised Digital Twins by automatically evaluating MBSE models of system components that are already in productive use. To achieve this, a tool was created that analyzes existing MBSE models and then generates AASs using the established open-source middleware Eclipse BaSyx. Model-Based Systems Engineering (MBSE) is an approach that has been used successfully for many years and uses systematic modeling to plan and support the logical and physical structure of systems over their entire life cycle. Building on this established methodology, we aim to extend its application to create and manage an accessible Digital Twin, ensuring its functionality and alignment with the represented system throughout its entire lifecycle. As part of this paper, we demonstrate how a simplified space satellite system, documented as an MBSE model, can be automatically transferred into a fully functional AAS within our application prototype. The integration of MBSE principles not only increases the accuracy of the generated Digital Twins, but also improves their scalability and maintainability. This is why our solution has the potential to convince those who currently have reservations about adopting the novel Digital Twin technology for systems within their company.
Download

Short Papers
Paper Nr: 7
Title:

Navigating Dimensionality Through State Machines in Automotive System Validation

Authors:

Laurenz Adolph, Barbara Schütt, David Kraus and Eric Sax

Abstract: The increasing automation of vehicles is resulting in the integration of more extensive in-vehicle sensor systems, electronic control units, and software. Additionally, vehicle-to-everything communication is seen as an opportunity to extend automated driving capabilities through information from a source outside the ego vehicle. However, the validation and verification of automated driving functions already pose a challenge due to the number of possible scenarios that can occur for a driving function, which makes it difficult to achieve comprehensive test coverage. Currently, the establishment of Safety Of The Intended Functionality (SOTIF) mandates the implementation of scenario-based testing. The introduction of additional external systems through vehicle-to-everything further complicates the problem and increases the scenario space. In this paper, a methodology based on state charts is proposed for modeling the interaction with external systems, which may remain as black boxes. This approach leverages the testability and coverage analysis inherent in state charts by combining them with scenario-based testing. The overall objective is to reduce the space of scenarios necessary for testing a networked driving function and to streamline validation and verification. The utilization of this approach is demonstrated using a simulated, signalized intersection with a roadside unit that detects vulnerable road users.
Download

Paper Nr: 10
Title:

Efficient Modelling with Logic-Labelled Finite-State Machines of IEC 61499 Function Blocks: Simulation, Execution and Verification

Authors:

Vladimir Estivill-Castro, Miguel Carrillo and David A. Rosenblueth

Abstract: As automation grows, so does the complexity of software systems. Hence the urgent and pressing need for software verification, particularly for distributed systems, as they are notoriously difficult to verify. The widespread adoption of verification techniques such as model checking, however, has been hindered by the significant level of expertise they require. In the realm of industrial automation, on the other hand, the IEC 61499 function block architecture has gained prominence for modelling intricate distributed automation systems, especially in demanding scenarios such as process control. However, it suffers from being event-driven, forcing semantic interpretations and the use of timed events by a central clock to produce input for model checkers. We argue that this situation can be remedied by logic-labelled finite-state machines and control-status messages. This is the first time that these concepts have been used for producing executable and verifiable models of distributed systems for industrial automation with communication delays, as is typical of current IEC 61499 application environments.
Download

Paper Nr: 13
Title:

Towards Synthesis-Based Engineering for Cyber-Physical Production Systems

Authors:

Wytse Oortwijn, Yuri Blankenstein, Jos Hegge, Dennis Hendriks, Pierre van de Laar, Bram van der Sanden, Laura van Veen and Nan Yang

Abstract: Supervisory control is a key part of Cyber-Physical Production Systems (CPPSs), orchestrating all system resources to work together in a safe, correct, and optimal way. Engineering reliable supervisors for industrial CPPSs is highly challenging due to their complex nature. Synthesis-Based Engineering (SBE) is an engineering approach centered around supervisory controller synthesis, a technique for automatically computing correct-by-construction supervisors out of formal system requirements and plant models that describe unrestricted system behavior. Even though SBE may lead to higher degrees of automation and faster feedback cycles, it may be difficult to integrate into existing ways of working since it is different from traditional engineering. This article contributes a three-step approach to gradually introduce SBE in industrial settings. We are instantiating this approach in a research case together with ASML and VDL-ETG, by developing a proof-of-principle workflow. In this workflow, control is specified as UML activities, for which we contribute a formal execution semantics since that is missing in current practice. Moreover, we discuss the design assistance provided in the workflow as well as its evaluation with domain experts. The domain experts see the value of automated design assistance and are willing to take further steps towards the adoption of SBE.
Download

Paper Nr: 19
Title:

A Taxonomy of Change Types for Textual DSL Grammars

Authors:

Hossain Muhammad Muctadir, Jérôme Pfeiffer, Judith Houdijk, Loek Cleophas and Andreas Wortmann

Abstract: Domain-Specific Languages (DSLs) bridge the gap between the domain-specific problem space and the solution space of software engineering. Engineering DSLs is a complex and time-intensive iterative process involving exchanges with stakeholders who, amongst other things, decide on the DSL’s syntax. Since in this process the stakeholder requirements change frequently, so can the corresponding DSL. The subsequent changes to the language specification may produce conflicts that language engineers need to be aware of and resolve. Current research has not adequately answered the question of which change operations for grammar-based syntax exist and what impact they have at the meta-model and model level. To answer this question, we develop a taxonomy of change types for grammars of textual DSLs that includes the concepts typically found in grammar-based language workbenches such as Xtext, MontiCore, and Neverlang, and lists the possible change operations that can be performed. The taxonomy was built iteratively based on an Xtext-based implementation of the Systems Modeling Language v2 and evaluated in a case study that leverages the taxonomy to perform impact analysis. The taxonomy presented in this paper will help language engineers to analyse the impact of changes to the grammar-based syntax specification of a language and to utilize this analysis, e.g., to perform historical change impact analysis.
Download

Paper Nr: 27
Title:

An Automata-Based Method to Formalize Psychological Theories: The Case Study of Lazarus and Folkman’s Stress Theory

Authors:

Alain Finkel, Gaspard Fougea and Stéphane Le Roux

Abstract: Formal models are important for theory-building, enhancing the precision of predictions and promoting collaboration. Researchers have argued that there is a lack of formal models in psychology. We present an automata-based method to formalize psychological theories, i.e. to transform verbal theories into formal models. This approach leverages the tools of theoretical computer science for formal theory development, for verification, comparison, collaboration, and modularity. We exemplify our method on Lazarus and Folkman’s theory of stress, showcasing a step-by-step modeling of the theory.
Download

Paper Nr: 31
Title:

ReMoDeL: A Pure Functional Object-Oriented Concept Language for Models, Metamodels and Model Transformation

Authors:

Anthony J. H. Simons

Abstract: Model-Driven Engineering (MDE) is a broad discipline concerned with curating all aspects of system design using models. Model-Driven Architecture (MDA) is a highly publicised approach focusing on the generation of software systems from models. However, MDA consists of a large collection of complex, interlocking standards, which together are difficult to master and have only partial implementations. This motivated us to devise a much simpler language and toolset for MDE. The result is ReMoDeL (Reusable Model Design Language), a pure functional object-oriented language for describing concepts and relationships. ReMoDeL supports the creation of metamodels, models and model transformations. It leverages skills already known to programmers, such as inheritance and pure functional mapping. It integrates with any standard Java IDE and cross-compiles to Java, although ReMoDeL is more succinct (by 4x). ReMoDeL’s pure functional transformations are in principle amenable to formal proof by induction. Practically, it offers a convenient and fast way to prototype different metamodels and transformations. We are using ReMoDeL to develop alternatives to UML and MDA (with different models and abstraction levels), with promising results.
Download

Paper Nr: 32
Title:

On the Generation of Input Space Model for Model-Driven Requirements-Based Testing

Authors:

Ikram Darif, Ghizlane El Boussaidi, Sègla Kpodjedo, Pratibha Padmanabhan and Andrés Paz

Abstract: Safety-Critical Software (SCS) is characterized by complex specifications with a high number of requirements due to certification constraints. For such systems, requirements can be specified semi-formally using Controlled Natural Language (CNL) to mitigate the inherent ambiguity of natural language, and to be understandable by certification agents. Requirements serve as artifacts for software testing, where Combinatorial Interaction Testing (CIT) emerges as a relevant testing technique for SCS. CIT requires as a first step the generation of an Input Space Model (ISM) from input specifications. In this paper, we propose an approach that leverages Model-Driven Engineering (MDE) techniques for the generation of ISMs from semi-formal CNL requirements constrained by templates that are specified by template models. To automatically generate the ISM, we define rules that map the template models to a generic input space model. The generated ISMs include test parameters, their test values, and inter-input constraints. Our approach ensures traceability between the generated ISM and the originating requirements, which is crucial for the certification of SCS. We implemented our approach, and we evaluated it through a case study from the avionics domain. The case study shows that our approach can support the DO-178C certification needs in terms of requirements-based testing and provides multiple advantages over manual modeling.
Download
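A minimal sketch, with assumed names, of what a generated Input Space Model can look like as a data structure: test parameters with their candidate values, inter-input constraints, and a trace link back to the originating requirement.

```python
from dataclasses import dataclass, field

@dataclass
class Parameter:
    name: str
    values: list          # candidate test values for this parameter
    requirement_id: str   # traceability to the CNL requirement it came from

@dataclass
class InputSpaceModel:
    parameters: list = field(default_factory=list)
    constraints: list = field(default_factory=list)  # inter-input constraints

# Hypothetical example for an avionics-style requirement "REQ-042":
ism = InputSpaceModel(
    parameters=[
        Parameter("mode", ["takeoff", "cruise", "landing"], "REQ-042"),
        Parameter("gear", ["up", "down"], "REQ-042"),
    ],
    constraints=["mode == 'landing' => gear == 'down'"],
)
```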

Paper Nr: 38
Title:

Towards a Classification Framework for the Digital Twin Tools: A Taxonomy

Authors:

Mert Ozkaya and Alper Turunc

Abstract: Digital twins (DTs) have gained ever-increasing popularity in many industries for the real-time monitoring of physical systems and performing useful operations such as predictive maintenance and early fault detection. Many commercial and open-source DT tools are available on the market, which offer services or development environments for developing and using DTs. However, our research shows that practitioners still use conventional programming technologies for developing DTs in-house rather than using the DT-specific tools. We believe that one reason is the lack of a taxonomy for determining which DT tools are of use to practitioners. In this paper, we propose a classification framework that can be used for analysing and comparing different DT tools. Having reviewed the literature and practitioners’ needs, we identified twelve main features for DT tools, which are divided into sub-features. These are (i) domain, (ii) deployment, (iii) type, (iv) maturity, (v) knowledge management, (vi) system integration, (vii) quality assurance, (viii) reusability, (ix) extensibility, (x) abstraction, (xi) enabling technology, and (xii) development platform. We believe that our classification framework will be useful for different stakeholders: (i) practitioners who wish to develop and use DTs, (ii) tool vendors who can determine the strengths and weaknesses of their tools, and (iii) researchers who aim to address the tools’ weaknesses.
Download

Paper Nr: 39
Title:

Digital Twin System of Systems: A Layered Architecture Proposal

Authors:

Meriem Smati, Vincent Cheutet, Christophe Danjou and Jannik Laval

Abstract: Integrating Digital Twins (DTs) with Systems of Systems (SoSs) offers transformative potential for optimizing complex, interconnected systems. However, implementing a DT for an SoS poses several challenges due to the independence and diversity of Constituent Systems (CSs), as well as SoS-specific characteristics such as geographic distribution, evolutionary development, and emergent behavior. This study proposes a novel architectural framework for an SoS DT, featuring a layered design that combines individual DTs for each CS with a global SoS DT layer to oversee and coordinate their interactions. By bridging limitations found in standalone DTs, this structure enables a cohesive and adaptive digital representation of the SoS, addressing the challenges of autonomy and extensibility. The framework aligns with fundamental SoS characteristics, paving the way for enhanced system management, predictive analysis, and performance monitoring, while also underscoring the need for a standardized metamodel to support resilient SoS DT development.
Download

Paper Nr: 40
Title:

Hierarchical System of Digital Twins: A Holistic Architecture for Swarm System Analysis

Authors:

Mouhamadou F. Ball, Jannik Laval and Loïc Lagadec

Abstract: Swarm systems are being increasingly adopted for their operational capabilities and are now assigned more sensitive missions, often in unpredictable environments. Therefore, it is crucial to evaluate their performance in the face of natural or human-induced uncertainties before deployment and to enhance their resilience during missions. To enable a comprehensive analysis of such systems, a multi-level analysis must be conducted to capture the dynamics at the component, cluster, and swarm levels. The Digital Twin (DT) offers a promising solution to this challenge. While there are existing approaches that use digital twins to analyze complex systems, they do not take into account the specific requirements introduced by swarm configurations. This paper presents a holistic reference architecture, the Hierarchical System of Digital Twins (HSDT), which lays the groundwork for creating digital twins of swarm systems. To support this framework, we introduce the concepts of functional and aggregation hierarchies and propose a goal-oriented method for instantiating DTs with a specific level of sophistication. Additionally, we present a metamodel that integrates elements of the Asset Administration Shell (AAS) data model to ensure interoperability with external standards. A prototype of HSDT was developed, and a case study was presented, focusing on analyzing spatial parameters within a swarm of Unmanned Vehicles (UVs).
Download

Paper Nr: 52
Title:

Early Fault-Detection in the Development of Exceedingly Complex Reactive Systems

Authors:

Assaf Marron and David Harel

Abstract: Finding hidden faults in reactive systems early in planning and development is critical for human safety, the environment, society and the economy. However, the ever-growing complexity of reactive systems and their interactions, combined with the absence of adequate technical details in early development stages, poses a great obstacle. The problem is exacerbated by the constant evolution of systems, and by their extensive and growing interwovenness with other systems and the physical world. Appropriately, such systems may be termed super-reactive. We propose an architecture for models and tools that help overcome such barriers and enable simulation, systematic analysis, and fault detection and handling, early in the development of super-reactive systems. The main innovations are: (i) allowing natural language (NL) specifications in elements of otherwise standard models and specification formalisms, while deferring the interpretation of such NL elements to simulation and validation time; and (ii) a focus on early formalization of tacit interdependencies among seemingly orthogonal requirements. The approach is facilitated by combining newly specialized tools with standard development and verification facilities, and with the inference and abstraction capabilities of large language models (LLMs) and associated AI techniques. An important ingredient in the approach is the domain knowledge embedded in LLMs. Special methodological measures are proposed to mitigate well-known limitations of LLMs.
Download

Paper Nr: 55
Title:

Optimizing Python Code Metrics Feature Reduction Through Meta-Analysis and Swarm Intelligence

Authors:

Marina Ivanova, Zamira Kholmatova and Nikolay Pavlenko

Abstract: Feature selection plays an important role in reducing the complexity of datasets while preserving the integrity of data for analysis and predictive tasks. This study tackles this problem in the context of optimizing metrics for Python source code quality assessment. We propose a combination of meta-analysis with the Modified Discrete Artificial Bee Colony (MDisABC) algorithm to identify an optimal subset of metrics for evaluating code repositories. A systematic preprocessing step using correlation-based thresholds (0.7, 0.8, 0.9) through random-effects meta-analysis effectively reduces redundancy while retaining relevant metrics. The MDisABC algorithm is then employed to minimize the Sammon error, ensuring the preservation of structural properties in the reduced feature space. Our results demonstrate significant error reductions, faster convergence, and consistent identification of key metrics that are critical for assessing code quality. This work highlights the utility of integrating meta-analysis and nature-inspired algorithms for feature selection and establishes a foundation for scalable, accurate, and interpretable models in software quality assessment. Future research could expand this methodology to other programming languages and explore alternative algorithms or cost functions for more comprehensive evaluations. All the relevant code can be found in our GitHub repository.
Download
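For reference, the Sammon error (stress) minimized by the MDisABC search is the standard quantity below, where d*_ij is the pairwise distance between data points in the original metric space and d_ij the corresponding distance in the reduced feature space.

```latex
E \;=\; \frac{1}{\sum_{i<j} d^{*}_{ij}} \sum_{i<j} \frac{\bigl(d^{*}_{ij} - d_{ij}\bigr)^{2}}{d^{*}_{ij}}
```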

Paper Nr: 26
Title:

A Domain Specific Language to Design New Control Architectures for Smart Grids

Authors:

Asma Smaoui, Mathilde Arnaud, Stéphane Salmons and Guillaume Giraud

Abstract: Model-Based Systems Engineering is widely used for the development of Cyber-Physical Systems and in particular Smart Grids (SG). SysML/UML have been used for several years to develop Domain-Specific Modeling Languages (DSMLs), each tackling one or several aspects/viewpoints of the SG. In this paper, we do not just present yet another DSML for SG control design; we also discuss the different modeling patterns adopted to define the DSML and the added value of next-generation languages and tools, mainly SysML v2 and web tools, in the development of DSMLs. Our DSML constitutes the first building block of a modeling tool integrated into the new RTE (the French energy transmission company) platform to design, simulate and evaluate the new control architectures of the French electrical transmission network.
Download

Paper Nr: 29
Title:

Enabling Incremental SysML Model Verification: Managing Variability and Complexity Through Tagging and Model Reduction

Authors:

Bastien Sultan, Ludovic Apvrille, Oana Hotescu and Pierre de Saqui-Sannes

Abstract: Designing complex software systems with model-based approaches encounters the recognized state space explosion problem. Typically, only a subset of models can be formally verified, forcing reliance on simulation or testing to verify the entire system. Furthermore, most formal verification tools require a complete reevaluation of properties after even minor modifications to a model. Although incremental formal verification, particularly the incremental model-checking approach of TTool, has been proposed, it still requires modelers to manually select sub-models not facing state space explosion. Unfortunately, this manual model selection is susceptible to errors. This paper presents a twofold contribution to SysML models of software product lines. First, we introduce a SysML model tagging feature that enables designers to explicitly differentiate between various subsystems, such as core and optional features. Second, we develop and implement a model reduction algorithm using dependency graphs (DGs). This algorithm automatically deactivates model elements linked to specific tags, removing both the specified elements and all their logical dependencies, provided the DG is acyclic. These two contributions are evaluated for their effectiveness in generating model variants. Together, they facilitate the creation of a core model and an associated set of models, each extended by additional model elements, and make it possible to rely on incremental model-checking. We have implemented the contributions in TTool and applied them to an integrated modular avionics system. This application enables a comparison of both manual and automated model reduction strategies and an assessment of their benefits for TTool users.
Download
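A minimal sketch of the reduction idea, assuming the dependency graph is given as a map from each model element to the elements linked to it (whether the edges point to dependencies or dependents is a modelling choice the abstract leaves open); deactivating a tagged element then removes everything transitively reachable from it, which is only safe when the graph is acyclic, as the paper requires.

```python
def reduce_model(edges: dict[str, set[str]], tags_of: dict[str, set[str]],
                 deactivated_tags: set[str]) -> set[str]:
    """Return the set of model elements to remove: every element carrying a
    deactivated tag plus everything transitively reachable from it in `edges`."""
    to_remove = {e for e, tags in tags_of.items() if tags & deactivated_tags}
    frontier = list(to_remove)
    while frontier:
        elem = frontier.pop()
        for linked in edges.get(elem, ()):
            if linked not in to_remove:
                to_remove.add(linked)
                frontier.append(linked)
    return to_remove

# Hypothetical example: deactivating the "optional" tag removes B and the element linked to it.
print(reduce_model(edges={"B": {"C"}},
                   tags_of={"A": {"core"}, "B": {"optional"}, "C": set()},
                   deactivated_tags={"optional"}))
```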

Paper Nr: 58
Title:

Vulnerability Mapping and Mitigation Through AI Code Analysis and Testing

Authors:

Tauheed Waheed, Eda Marchetti and Antonello Calabrò

Abstract: The research addresses the significant and complex challenge of mapping and repairing code vulnerabilities, which is critical for enhancing cybersecurity in our increasingly technology-driven society. This paper presents an in-depth methodology and framework for effectively mapping software vulnerabilities through AI-driven code analysis and testing techniques. The proposed method and framework provide an automated environment that facilitates identifying and mitigating security vulnerabilities. This innovative framework benefits prosumers and developers, empowering them to confidently produce secure code, even without extensive cybersecurity knowledge or testing experience. By leveraging AI, the methodology streamlines the process of vulnerability detection and enhances overall software security.
Download

Paper Nr: 59
Title:

Integrating Large Language Models with Enterprise Architecture for Enhanced Information Retrieval of System Engineering Models: A Case Study

Authors:

Walt Melo

Abstract: Many organizations, including the Department of Defense, employ system engineering techniques like SysML to model their enterprise architecture (EA). Regardless of the EA framework in use, such as DoDAF, system engineers develop supplementary EA views to facilitate decision-makers in comprehending their EA landscape and making informed decisions about the evolution of their IT infrastructure. However, creating these EA views requires specialized skills that are often hard to acquire. Moreover, the process is labor-intensive. Furthermore, not all decision-makers may be familiar with system engineering techniques, making it challenging to understand the system engineering models related to their business. In this study, we adopted a different approach. We built a Large Language Model (LLM)-based tool called Respondēo that allows users to retrieve information from EA repositories populated with standards-based system models using natural language. The Respondēo tool enables users, such as decision-makers, to retrieve information about their enterprise architectures using their domain-specific vocabulary. Decision-makers do not need to understand the SysML or EA framework lexicon, such as DoDAF views, as they can express their queries using domain-specific vernacular. Respondēo then transforms their domain-specific queries into EA repository requests using the EA Repository API. Respondēo utilizes an open-source LLM to parse natural language queries and convert them into the EA repository query language. In-context learning techniques were employed to tailor the LLM to our specific EA domain. This paper discusses the results of a case study where Respondēo was implemented to enhance the information retrieval process for a large U.S. federal agency. Preliminary findings indicate that Respondēo effectively transforms EA queries expressed in natural language into accurate EA repository queries and converts the outputs into comprehensible results.

Area 3 - Applications and System Development

Full Papers
Paper Nr: 62
Title:

Advancing IoT Architectures Using Collaborative Computing Paradigms for Dynamic and Scalable Systems

Authors:

Prashant G. Joshi and Bharat M. Deshpande

Abstract: The Internet of Things (IoT) continues to push the boundaries of traditional system architecture models, necessitating more agile, scalable, and efficient frameworks to handle complex, large, real-time applications. While conventional architectures, like layered ones, provide a structured approach, they often suffer from limitations in adaptability, latency, and dynamic resource utilisation. This paper presents the comprehensive design and implementation of a Collaborative Computing Paradigms (CCP)-based IoT Reference Architecture, defined by five core attributes: interconnection and interplay across paradigms, dynamic distribution of data processing, computing fluidity enabling seamless transitions across layers, collaborative data storage and management, and scalability and extensibility of computational resources. To validate the proposed architecture, experiments were conducted in data center management, automotive telematics, smart building (fire safety, environment) and asset tracking (shopping carts in a mall) scenarios. Comparative analysis against traditional layered IoT architectures reveals significant improvements in latency reduction, resource utilisation, scalability under variable workloads, and interoperability between computational layers. The CCP-based architecture leverages dynamic orchestration of Edge, Fog, and Cloud paradigms, allowing for adaptive load balancing, real-time analytics, and distributed decision-making, thereby overcoming the rigidity of layered models. The results highlight the superiority of CCP architectures in enabling low-latency processing, high fault tolerance, and dynamic resource optimisation in highly fluid and demanding IoT environments. We believe this work underscores the paradigm shift towards collaborative architectural models, establishing CCP as a benchmark for next-generation IoT system design, particularly in domains requiring high degrees of responsiveness and cross-layer integration.
Download

Short Papers
Paper Nr: 12
Title:

Evaluating the Quality of Class Diagrams Created by a Generative AI: Findings, Guidelines and Automation Options

Authors:

Christian Kop

Abstract: Working with a generative AI such as ChatGPT to create conceptual models, and particularly Class Diagrams, has recently become very popular in the modelling community. The objectives of this paper are therefore the following: it analyses previous scientific work to summarize the findings about the quality of AI-generated Class Diagrams, and it reports on our own tests as well. Based on these findings, the paper provides guidelines for manual quality evaluation. It also discusses automation options for evaluating the quality.
Download

Paper Nr: 25
Title:

Energy Monitoring Systems Analysis and Development: A Case Study for Graph-Based Modelling

Authors:

Tiago Carvalho, Tobias Müller, Sebastian Reiter, Luis Miguel Pinho and André Oliveira

Abstract: The Internet of Things (IoT) enables everyday objects to connect and communicate remotely, transforming areas such as smart homes and industrial automation. IoT systems can be standalone or interconnected in a System of Systems, where multiple devices work together towards a common goal. A key application is Energy Monitoring Systems (EMS), which track energy use within communities by monitoring energy production and consumption. Designing this type of IoT system remains complex and requires careful consideration of heterogeneous devices, their limitations, software, communication protocols, data management, and security. This paper presents a design approach for EMS communities, with a focus on house-level IoT systems. We introduce a model-driven development methodology, a holistic and flexible framework for designing IoT systems across the development and operations lifecycle. In particular, the concept of projectors enables an easy shift between domain assets and provides automation support. The approach is validated with a real-life use case, for which an analysis phase was developed, showing the benefits of using our approach for managing EMS and for automating the analysis configuration.
Download

Paper Nr: 30
Title:

Next-Generation Design Tools for Intelligent Transportation Systems

Authors:

Dominik Ascher and Georg Hackenberg

Abstract: Intelligent Transportation Systems (ITS) promote new transportation paradigms such as connected and autonomous vehicles (CAV), multi-modal and demand-responsive transport systems, and enable transportation electrification through the sustainable operation of electric vehicles. Methods and tools are needed to explore the possible design space for emerging transportation paradigms, which support the evaluation of system design alternatives and the verification of system properties. In this work, we propose a model- and simulation-based systems engineering framework for capturing design decisions and evaluating control strategies for ITS design. In addition to capturing and evaluating different design decisions, the proposed solution allows users to guide design decisions by systematic comparison and evaluation of system configurations and control strategies.
Download

Paper Nr: 46
Title:

Test Adapter Generation Based on Assume/Guarantee Contracts for Verification of Cyber-Physical Systems

Authors:

Jishu Guin, Jüri Vain and Leonidas Tsiopoulos

Abstract: Test adapter generation forms an essential but often least automated part of Model-Based Testing (MBT). The difficulty of adapter generation is due to the ambiguous or loosely defined mapping between the executable test interface and the abstracted one in the test model. The novel method proposed in this work uses saturated Assume/Guarantee (A/G) contracts to specify test interfaces and to generate adapters from them. As a contribution, firstly, we define a generic saturated A/G contract template that supports a uniform approach to the specification and verification of test configuration components. Secondly, we demonstrate how the adapter component model is derived by refining the test interface contracts and how its correctness is verified. Finally, the adapter code is generated from the verified model as a set of abstract-to-concrete and concrete-to-abstract symbol transformers. The approach is exemplified and validated on a real climate control system example.
Download
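For context, the standard contract-theory notion of saturation the method builds on: in the usual set-based formulation, a contract C = (A, G) with assumption A and guarantee G is saturated by weakening the guarantee so that it covers every behaviour outside the assumption.

```latex
% Saturation of an assume/guarantee contract, and the refinement order
% for saturated contracts (C_1 \preceq C_2 reads "C_1 refines C_2"):
C_{\mathrm{sat}} \;=\; \bigl(A,\; G \cup \lnot A\bigr)
\qquad\qquad
C_1 \preceq C_2 \;\iff\; A_1 \supseteq A_2 \ \text{ and }\ G_1 \subseteq G_2 .
```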

Paper Nr: 56
Title:

An Integrated Building Management Platform for Investment into Renewable Energy System and SRI Compliance

Authors:

Giuseppe Rocco Rana, Giuseppe Mastandrea, Marco Antonio Insabato, Reshma Penjerla and Luigi D’Oriano

Abstract: The goals of the ecological transition in buildings require an increasing number of considerations to ensure that newly installed systems or building management solutions are economically advantageous and effective in terms of energy savings and production. The increasing variety and supply of renewable energy systems, and the increasing demand for them, require tools that meet the needs of building stakeholders (e.g., building owners and facility managers) to ease the transition, provide consistent metrics to measure its validity, and offer integrated simulation to facilitate investment decisions and track ecological transition progress over time. This paper introduces a comprehensive toolset with multiple features, including the simulation and management of Renewable Energy Systems (RES), Building Management System (BMS) integration, and the calculation and simulation of the Smart Readiness Indicator (SRI). This toolset collectively assesses the readiness of a building for the ecological transition. Specifically, the system includes: (1) an Advanced SRI Calculation Engine, which implements both simplified (Method A) and detailed (Method B) SRI calculations for various European regions, providing precise evaluations of smart building capabilities across domains such as heating, cooling, ventilation, lighting, and energy monitoring; (2) seamless BMS integration that allows real-time monitoring of building systems and continuous tracking of a building's smart readiness evolution; and finally (3) an Optimized Investment Advisor, which offers tailored recommendations for investments in smart building upgrades, renewable energy installations, and energy storage systems, employing advanced optimization algorithms to ensure cost-effectiveness and energy efficiency. Developed as part of INSPIRE, an experiment under the SUSTAIN EU project's Open Call for smart building innovations, this toolset aims to enhance decision-making processes, improve resource allocation, and foster a holistic approach to achieving smart, sustainable, and energy-efficient buildings.
Download

Paper Nr: 57
Title:

A Model-Based Approach to Experiment-Driven Evolution of ML Workflows

Authors:

Petr Hnětynka, Tomáš Bureš, Ilias Gerostathopoulos, Milad Abdullah and Keerthiga Rajenthiram

Abstract: Machine Learning (ML) has advanced significantly, yet the development of ML workflows still relies heavily on expert intuition, limiting standardization. MLOps integrates ML workflows for reliability, while AutoML automates tasks like hyperparameter tuning. However, these approaches often overlook the iterative and experimental nature of the development of ML workflows. Within the ongoing ExtremeXP project (Horizon Europe), we propose an experiment-driven approach where systematic experimentation becomes central to ML workflow evolution. The framework created within the project supports transparent, reproducible, and adaptive experimentation through a formal metamodel and related domain-specific language. Key principles include traceable experiments for transparency, empowered decision-making for data scientists, and adaptive evolution through continuous feedback. In this paper, we present the framework from the model-based approach perspective. We discuss the lessons learned from the use of the metamodel-centric approach within the project—especially with use-case partners without prior modeling expertise.
Download

Paper Nr: 23
Title:

Automatic Evaluation and Partitioning of Algorithms for Heterogeneous Systems

Authors:

Simon Heimbach and Stephan Rudolph

Abstract: The ever-growing demand for performance and power efficiency can only be met by multiple specialised compute engines for individual tasks, while cost and time-to-market constraints force the development of programs for a single known micro-controller or of configurations for an FPGA. With our proposition, an executable logic can be designed in an integral project development effort and then partitioned by an algorithm across different compute engines depending on the user’s demand, thus generating a heterogeneous system. The timing evaluation is based not only on different sources such as data sheets, simulation and benchmarks, but also on the parallelism offered by the FPGA. With exporters, the code for these different devices can be automatically generated, including communication channels between them to transfer all necessary data. The paper explains the algorithm’s fundamentals and demonstrates its benefits using an example algorithm running on a micro-controller paired with an FPGA. This shows that not only the algorithm but also the amount of data processed is crucial for balancing a heterogeneous system.
Download
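A minimal sketch of the balancing idea under assumed cost figures: each task goes to the compute engine whose estimated execution time plus the cost of moving its input data is smallest. The paper derives such figures from data sheets, simulation and benchmarks and additionally accounts for FPGA parallelism, which this toy greedy assignment does not.

```python
def partition(tasks, exec_time, transfer_cost, data_size):
    """Greedy assignment of tasks to compute engines.

    exec_time[(task, engine)] -- estimated execution time of `task` on `engine`
    transfer_cost[engine]     -- cost per unit of data moved onto `engine`
    data_size[task]           -- amount of input data the task consumes
    All figures are assumed estimates (data sheet, simulation or benchmark)."""
    assignment = {}
    for task in tasks:
        engines = {e for (t, e) in exec_time if t == task}
        assignment[task] = min(
            engines,
            key=lambda e: exec_time[(task, e)] + transfer_cost[e] * data_size[task],
        )
    return assignment

# Hypothetical example: a filter runs faster on the FPGA despite the transfer cost.
print(partition(
    tasks=["fir_filter"],
    exec_time={("fir_filter", "mcu"): 12.0, ("fir_filter", "fpga"): 1.5},
    transfer_cost={"mcu": 0.0, "fpga": 0.02},
    data_size={"fir_filter": 256},
))
```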

Paper Nr: 42
Title:

Validation of Requirements Models Using a Graph

Authors:

Alexander Rauh

Abstract: Validation of system requirements models is essential for success in system development. Especially in regulated engineering domains like automotive or healthcare, organisations have to prove their compliance with regulations. One part of this compliance is the assurance of high-quality system requirements. Today’s approaches often require high effort from requirements analysts or more formal extensions of common requirements documentation methods. This paper proposes a novel approach that validates requirements models without any formal extensions like the Object Constraint Language (OCL) by utilizing a graph structure and graph transformations. In the first step, the requirements model is imported into a graph and is transformed according to a common meta-model for requirements. The integration of a natural language processing (NLP) pipeline provides possibilities to analyse the natural language parts during transformation. In the second step, the structure of the graph is validated using patterns derived from rules for high-quality system requirements. A constructed example shows feasibility and helps to get early feedback on the graph-based concept.
Download
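A minimal sketch of the validation step, assuming a toy graph encoding in which each requirement node lists the typed edges produced by the NLP pipeline during import; a quality rule such as "every requirement names exactly one actor and at least one action" then becomes a simple structural pattern check.

```python
def check_requirements(graph: dict[str, list[tuple[str, str]]]) -> list[str]:
    """graph maps a requirement node to (edge_type, target_node) pairs.
    Returns a list of human-readable rule violations."""
    violations = []
    for req, edges in graph.items():
        actors = [t for (kind, t) in edges if kind == "actor"]
        actions = [t for (kind, t) in edges if kind == "action"]
        if len(actors) != 1:
            violations.append(f"{req}: expected exactly one actor, found {len(actors)}")
        if not actions:
            violations.append(f"{req}: no action identified")
    return violations

# Hypothetical example:
print(check_requirements({
    "REQ-1": [("actor", "driver"), ("action", "press_brake")],
    "REQ-2": [("action", "display_warning")],   # missing actor -> violation
}))
```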