MODELSWARD 2026 Abstracts


Area 1 - Methodologies, Processes and Platforms

Full Papers
Paper Nr: 14
Title:

On Integrating Large Language Models and Scenario-Based Modeling for Improving Software Reliability

Authors:

Ayelet Berzack and Guy Katz

Abstract: Large Language Models (LLMs) are fast becoming indispensable tools for software developers, assisting or even partnering with them in crafting complex programs. The advantages are evident - LLMs can significantly reduce development time, generate well-organized and comprehensible code, and occasionally suggest innovative ideas that developers might not conceive on their own. However, despite their strengths, LLMs will often introduce significant errors and present incorrect code with persuasive confidence, potentially misleading developers into accepting flawed solutions. In order to bring LLMs into the software development cycle in a more reliable manner, we propose a methodology for combining them with “traditional” software engineering techniques in a structured way, with the goal of streamlining the development process, reducing errors, and enabling users to verify crucial program properties with increased confidence. Specifically, we focus on the Scenario-Based Modeling (SBM) paradigm - an event-driven, scenario-based approach for software engineering - to allow human developers to pour their expert knowledge into the LLM, as well as to inspect and verify its outputs. To evaluate our methodology, we conducted a significant case study, and used it to design and implement the Connect4 game. By combining LLMs and SBM we were able to create a highly capable agent, which could defeat various strong existing agents. Further, in some cases, we were able to formally verify the correctness of our agent. Finally, our experience reveals interesting insights regarding the ease of use of our proposed approach. The full code of our case study will be made publicly available with the final version of this paper.

Paper Nr: 35
Title:

Continuous AI Assistance for Model-Driven Engineering

Authors:

Ludovic Apvrille and Bastien Sultan

Abstract: Proactive AI-based assistants are now common in software engineering tools; however, few exist for Model-Driven Engineering (MDE) environments. Most existing AI assistants for MDE, particularly those based on large language models, require user interactions that can interrupt the modeling workflow. However, MDE is inherently a continuous process, involving successive cycles of diagram construction, verification, and modification. Relying on supplementary tools that require intensive interaction can therefore be time-consuming and disrupt engineers' focus. Consequently, there is a need to shift AI-based modeling assistance paradigms to mechanisms that integrate naturally into the continuous MDE workflow. To address this need, the paper introduces ContinuousAI, a framework for AI-based continuous MDE assistance. Working alongside MDE engineers, ContinuousAI generates modeling suggestions either on demand or continuously, supporting the improvement of model quality throughout the engineering process. ContinuousAI has been implemented within the MDE toolkit TTool. Evaluation results show that ContinuousAI provides highly relevant suggestions while maintaining computation times and environmental footprints compatible with real-world continuous MDE usage.

Paper Nr: 48
Title:

Verified Design of Robotic Autonomous Systems Using Probabilistic Model Checking

Authors:

Atef Azaiez and David A. Anisi

Abstract: Safety and reliability play a crucial role when designing Robotic Autonomous Systems (RAS). Early consideration of hazards, risks and mitigation actions – already in the concept study phase – is an important step in building a solid foundation for the subsequent steps in the system engineering life cycle. The complex nature of RAS, as well as the uncertain and dynamic environments the robots operate within, not only affects fault management and operational robustness but also makes the task of system design concept selection a hard problem to address. Approaches to tackle the mentioned challenges and their implications for system design range from ad-hoc concept development and design practices to the systematic, statistical and analytical techniques of Model-Based Systems Engineering. In this paper, we propose a methodology to apply a formal method, namely Probabilistic Model Checking (PMC), to enable systematic evaluation and analysis of a given set of system design concepts, ultimately leading to a set of Verified Designs (VD). We illustrate the application of the suggested methodology – using PRISM as the probabilistic model checker – on a practical RAS concept selection use case from agricultural robotics. Along the way, we also develop and present domain-specific Design Evaluation Criteria for agri-RAS.

Paper Nr: 60
Title:

A Semantically-Grounded Agentic Framework for Assisting BPMN Model Instance Execution

Authors:

Tiago Sousa, Nicolas Guelfi and Benoît Ries

Abstract: While Large Language Models can efficiently learn the syntactic patterns of Business Process Model and Notation (BPMN), their probabilistic nature prevents reliable adherence to the deterministic execution rules governing process behavior. Drawing on the distinction between syntactic form and operational meaning, we argue that LLMs approximate BPMN’s structural grammar but lack grounding in its formal semantics. This work presents an agent-based system that increases the syntactic and semantic correctness of engineered BPMN execution traces. This is achieved through role-specialized components and continuous validation against BPMN’s operational semantics, increasing correctness during generation rather than as post-hoc verification. A process simulator verifies each intermediate BPMN trace and produces CoT-articulated diagnostic feedback when violations occur, guiding automated correction. Experimental evaluation shows marked improvements in conformance to semantic rules compared to baseline approaches, confirming the necessity of external semantic enforcement in model-driven generation tasks.

Paper Nr: 80
Title:

Hierarchical Analysis of Data Clump Model Smells through Subgroup-Based Structural Metrics

Authors:

Padma Iyenghar, Nils Baumgartner and Elke Pulvermüller

Abstract: Data clumps, recurring groups of attributes that appear together across classes or methods, are widely recognised as indicators of missing abstractions and design degradation. Existing approaches analyse clumps as monolithic structures, relying on name-based similarity or metric-based ranking to prioritise refactoring. However, many clumps in real systems contain internally coherent subsets of attributes that reflect distinct conceptual roles, and treating them as single units risks producing overly broad or semantically mixed abstractions. This paper introduces a subgroup-oriented analysis that forms the local component of a hierarchical prioritisation framework. The method applies a hybrid name-based similarity measure, threshold-graph clustering, and three quantitative metrics - subgroup cohesion (SC), inter-group independence (IC), and subgroup count (SCt) - to detect and characterise fine-grained structure within clumps. An empirical study across 23 real-world software systems shows that multi-subgroup clumps are common in large, domain-rich frameworks but rare in compact utility libraries. SC and SCt vary substantially across projects, while IC remains consistently high, indicating that discovered subgroups constitute well-separated conceptual units. These findings establish that subgroup-level analysis captures design behaviour beyond the reach of traditional clump-level approaches and provides a more reliable basis for targeted, semantically aligned refactoring.

Paper Nr: 103
Title:

Large Language Models for Quality Control of Large Language Models

Authors:

Preethika Chandrasekaran, Imron Shajahan, Saran Sankaran, Benjamin Nast and Kurt Sandkuhl

Abstract: Large language models (LLMs) are used in many fields today, including medicine, education, finance, and engineering. The quality of output generated by LLMs is a crucial factor for their successful application. Recent work has shown that LLMs can be a productive support in enterprise modeling (EM) for modelers and domain experts in modeling the current situation of enterprises. In this paper, we explore the capabilities of LLMs to control the quality of LLM-generated output in the context of EM. We utilize three different LLMs to evaluate their own output (self-validation) and that of the other LLMs (cross-validation). The main contributions of this paper are (1) an approach for using LLMs for quality control of LLMs in EM, including quality criteria and their operationalization, and (2) quasi-experiments demonstrating the applicability of the approach. The results confirm that our approach can be applied to the defined scenarios and demonstrate that cross-validation yields more comprehensive and reliable outputs than self-validation. Combining multiple LLMs leads to clearer, more trustworthy results, allowing for greater confidence in LLM usage in EM.

Short Papers
Paper Nr: 29
Title:

Identifying Incentives for More Systematic Modeling of Industrial Software-Intensive Systems

Authors:

Ifrah Qaisar, Jan Carlson, Robbert Jongeling, Antonio Cicchetti, Malvina Latifaj and Federico Ciccozzi

Abstract: Software-intensive systems are characterized by intricate interactions, distributed structures, and ongoing evolution, posing notable challenges for traceability, change analysis, and decision support. Modeling and versioning practices are widely used to manage this complexity; however, in practice, they are often fragmented, limiting the value that organizations can derive from them. To address this challenge, we map modeling and versioning practices to the benefits they enable, providing a structured framework for practitioners to assess their current practices and identify opportunities for incremental improvement. Building on findings from our previous interview study with multiple companies, together with insights from the literature, we identify four levels of modeling formality, five levels of versioning sophistication, and four categories of benefits, summarized in two mapping tables. Our contributions include a practitioner-oriented roadmap that helps teams reflect on current modeling and versioning practices, anticipate achievable benefits, and identify incremental adjustments that can add value with minimal overhead. An initial validation with industry representatives confirms the practical relevance of the roadmap and highlights its potential to guide structured reflection and improvement of modeling and versioning practices.

Paper Nr: 32
Title:

A Specification's Realm: Characterizing the Knowledge Required for Executing a Given Algorithm Specification

Authors:

Assaf Marron and David Harel

Abstract: An algorithm specification in natural language or pseudocode is expected to be clear and explicit enough to enable mechanical execution. In this position paper we contribute an initial characterization of the knowledge that an executing agent, human or machine, should possess in order to be able to carry out the instructions of a given algorithm specification as a stand-alone entity, independent of any system implementation. We argue that, for that algorithm specification, such prerequisite knowledge, whether unique or shared with other specifications, can be summarized in a document of practical size. We term this document the realm of the algorithm specification. The generation of such a realm is itself a systematic analytical process, significant parts of which can be automated with the help of large language models and the reuse of existing documents. The algorithm specification’s realm would consist of specification language syntax and semantics, domain knowledge restricted to the referenced entities, inter-entity relationships, relevant underlying cause-and-effect rules, and detailed instructions and means for carrying out certain operations. Such characterization of the realm can contribute to methodological implementation of the algorithm specification in diverse systems, to formalization of the algorithm for mechanical verification, and to incorporating the algorithm and its environment in comprehensive system models. The paper also touches upon the question of assessing execution faithfulness, which is distinct from correctness: in the absence of a reference interpretation of a natural language or pseudocode specification with a given vocabulary, how can we determine whether an observed agent’s execution indeed complies with the input specification?

Paper Nr: 57
Title:

From Values to Policies: A Value-Based Decision Model for Data Trustees in Data Spaces

Authors:

Michael Steinert, Simon Geller and Florian Lauf

Abstract: As data ecosystems become more complex and regulatory demands intensify, obtaining informed and user-centric consent for data sharing remains a challenge. Data trustees are emerging as intermediaries with a fiduciary duty to ensure responsible and trust-based data handling, operating within data spaces that provide the usage-restricted infrastructure. However, traditional consent mechanisms lack the flexibility and granularity required. This research investigates Value-Based Consent (VBC), which is generalized into a Value-Based Decision (VBD) model, as a dynamic approach to consent management facilitated by data trustees. The VBD model derives specific, consistent decisions by grounding them in the data provider’s stable values and preferences. This research explores how data trustees can leverage the VBD model to not only enhance informed decisions and user control but also to generate machine-interpretable data usage policies for governance and enforcement within data spaces. This process makes value-derived usage rights explicit and unambiguous, which in turn supports accountability. By examining the interplay between the VBD model, data trustees, and data spaces, this research highlights a pathway towards more trustworthy, transparent, and user-driven data sharing, effectively bridging the gap between abstract values and operational data governance.

Paper Nr: 67
Title:

Towards Model Compliance Using Generative Agents: A NetLogo to Sequence Diagrams Experiment

Authors:

Benoît Ries, Nicolas Guelfi and Tiago Sousa

Abstract: Simulation helps software engineering teams explore complex system behavior, yet NetLogo code remains hard to interpret without design-level documentation. Reverse engineering using generative agents offers a solution to reconstruct models that visually document design. This paper reports on an experiment using generative AI to reverse engineer NetLogo simulations into restricted UML sequence diagrams that represent execution scenarios, produced by specialized generative agents. To ensure compliant model generation, we orchestrate the generative AI task in multiple specialized steps with intermediate compliance audits, each step guided by personas and domain-specific rules. We evaluate the approach on ten public NetLogo simulations paired with eight generative AI models, for a total of 80 experimental runs. Results show that Gemini 2.5 Flash achieves the best results, followed by GPT-5-mini, GPT-5, and Devstral; Qwen3 and Maverick remain promising, whereas GPT-5-nano underperforms. The experiment shows that orchestrating generative agents with iterative compliance audits improves model compliance.

Paper Nr: 73
Title:

Software Space Analytics: Towards Visualization and Statistics of Internal Software Execution

Authors:

Shinobu Saito

Abstract: In software maintenance work, software architects and programmers need to identify modules that require modification or deletion. Whilst user requests and bug reports are utilised for this purpose, evaluating the execution status of modules within the software is also crucial. This paper, therefore, applies spatial statistics to assess internal software execution data. First, we define a software space dataset, viewing the software’s internal structure as a space based on module call relationships. Then, using spatial statistics, we conduct the visualization of spatial clusters and the statistical testing using spatial measures. Finally, we consider the usefulness of spatial statistics in the software engineering domain and future challenges.

Paper Nr: 98
Title:

Model-Driven Approaches for Serverless Software Development: Evaluation and Future Directions

Authors:

Mehdi Eidi and Raman Ramsin

Abstract: Serverless computing abstracts server management, enabling developers to focus on application logic. However, it introduces challenges across the software development lifecycle that necessitate the use of specialized methodologies. Model-Driven Development (MDD) is a promising approach, yet research on serverless-specific MDD remains underexplored, whereas methodologies for microservices are comparatively mature. This paper reviews selected MDD approaches for serverless and microservices development using a process-centered template and proposes an evaluation framework inspired by feature analysis, encompassing general, MDD, and serverless-related criteria. Applying the framework reveals gaps in existing serverless approaches and highlights adaptable strengths of microservices methodologies. These findings suggest directions toward mature MDD methodologies for serverless software development.

Paper Nr: 99
Title:

Driver-Based Multivariate Time Series Forecasting: A Comparative Analysis of Meta-Learning vs Ensemble Performances

Authors:

Nuno M. C. da Costa, Filipe Novais, Luis Santos, Francisco Franco, Vaibhav Shah, Ricardo Rodrigues, Duarte Fernandes and Emanuel Gouveia

Abstract: Driver-based multivariate time series forecasting aims to predict a target time series - the “driven” - using multiple influencing variables - the “drivers”. Traditional linear models often struggle to capture the complex, non-linear relationships in such data. While ensemble methods and meta-learning have been applied, they face limitations like high computational complexity and a narrow focus on model selection. This study addresses these challenges by introducing a new ensemble forecasting approach and extending meta-learning to predict not only the forecasting model but also other critical parameters such as input window size and feature selection methods. We conducted experiments using the publicly available “Air Quality Monitoring in European Cities” dataset, comparing the proposed ensemble and meta-learning methods across various time series lengths and forecasting horizons (1, 12, and 24 hours). Results demonstrate that the Meta-Learner Forecasting approach outperforms the Ensemble Forecasting approach, especially in smaller datasets and shorter forecasting horizons, achieving improved forecasting accuracy. By extending meta-learning to predict multiple forecasting parameters, this research enhances the versatility and efficiency of multivariate time series forecasting, highlighting the importance of tailoring forecasting parameters to specific data characteristics. The Meta-Learner not only improves accuracy but also reduces computational costs by efficiently narrowing the search space for optimal parameters, making it applicable to more complex forecasting environments.

Paper Nr: 23
Title:

Mapping a System of Systems Core Ontology to a Foundational Ontology

Authors:

Joyce Martin, Jakob Axelsson and Jan Carlson

Abstract: This study proposes a mapping of a core ontology for missions and capabilities in Systems of Systems to a foundational ontology. It gives a brief overview of the core ontology, compares various foundational ontologies, outlines suitability characteristics, and maps the two ontologies. The mapping process is based on a superclass-subclass mapping. To ensure that the mapping is at the highest level of granularity, this process goes beyond the selected foundational ontology and includes extensions provided by common core ontologies. The outcome of the mapping provides insights into the richness of ontology realism, the connectedness of concepts and ontologies, the consistency this mapping adds to systems engineering efforts by supporting the methodological approach of Model-Based Systems Engineering, and the necessity of conformance in ontological research and implementation. This mapping supports alignment in knowledge representation by explicitly highlighting the kind of input expected by each core ontology concept.

Paper Nr: 53
Title:

Generating Class Diagrams from Structured Use Case Descriptions with LLMs

Authors:

Evin Aslan Oğuz, Jochen M. Küster and Felix Lennart Schildmann

Abstract: This paper explores the use of large language models (LLMs) for automating the generation of domain class diagrams from structured use case descriptions. While existing research projects have applied LLMs to generate domain class diagrams using unstructured input, this work introduces another approach that leverages standardized use case description forms and OpenAI’s structured outputs feature to improve the generated class diagrams. The proposed approach generates PlantUML code that can be directly visualized as a domain class diagram. The approach is quantitatively evaluated on a small set of structured use case descriptions and the resulting domain class diagrams. The system achieved a macro-average F1 score of 0.91, demonstrating strong overall performance, although challenges remain, particularly in modeling relationships, where accuracy was significantly lower. The results highlight the potential of LLMs to support early-stage software modeling and provide a foundation for future improvements, though challenges remain in correctly inferring relationship multiplicities in more complex scenarios.

Paper Nr: 75
Title:

Automatic Program Repair Using Large Language Models in Model-Based Development

Authors:

Ren Ajiki and Kenji Hisazumi

Abstract: While Automatic Program Repair (APR) has been extensively studied for traditional code-based development, research on APR for Model-Based Development (MBD) remains underdeveloped, with existing approaches largely confined to rule-based methods that have not yet leveraged modern Large Language Models (LLMs). This paper presents SimLLMRepair, an LLM-based APR system specifically designed for Simulink models. SimLLMRepair converts Simulink models into structured JSON format and employs design intent-driven Retrieval-Augmented Generation (RAG) in a four-phase hybrid architecture that combines mechanical fault detection with LLM semantic validation while strategically minimizing API costs. Evaluation on 50 systematically generated mutants across 10 fault categories demonstrated 88.8% fault localization and 50.0% repair success rates (with fault detection rules optimized for our dataset). Parameter-related faults showed particularly high repair rates (60-96%), while structural faults presented greater challenges. These results establish baseline feasibility for LLM-based MBD repair, identify specific fault categories where the approach is most effective, and reveal key challenges including LLM reasoning stability and hallucination-induced spurious modifications.

Area 2 - Modeling Languages, Tools and Architectures

Full Papers
Paper Nr: 17
Title:

Bridging MDE and LLM-Based Agent Frameworks for Multi-Agent Systems: A Quasi-Systematic Review and Metamodel

Authors:

James Pontes Miranda, Ansgar Radermacher, Fabien Baligand, Julie Bonnail, Sebastien Gérard, Pascal Bannerot and Marcos Didonet Del Fabro

Abstract: The emergence of Artificial Intelligence (AI)-enabled systems based on Large Language Models (LLMs) has increased their capabilities to perceive and act in their environments through sophisticated natural language processing. LLM-based agents and Multi-Agent Systems (MAS) are attracting increasing interest in both research and industry, enabling the development of numerous development tools and specialized frameworks to handle their complexities. The problem that arises from this scenario is the lack of common standards to help with the planning, documentation, and informed creation of MAS. Model-Driven Engineering (MDE) offers support in this context, particularly by enhancing communication and documentation through formalized representations. In this paper, we take a first step towards bridging MDE with agent engineering by proposing a metamodel derived from an analysis of available agent frameworks. Our study follows a quasi-systematic review approach to identify widely used frameworks, extract recurring concepts, and establish a common terminology. We systematically analyze 26 LLM-based agent frameworks to extract common terminology and propose an Ecore-based metamodel that provides a unified conceptual foundation for agent engineering. The metamodel captures essential construction and communication aspects, enabling standardized documentation and tooling development for LLM-based MAS.

Paper Nr: 47
Title:

Environment and Scenario Viewpoints to Execute SysML-Based Architectural Models

Authors:

Tales Viglioni, Jair Leite, Eder Xavier, Thais Batista, Everton Cavalcante and Flavio Oquendo

Abstract: The correctness of a software-intensive system encompasses analyzing its behavior in the operational environment. Although the ISO/IEC/IEEE 42010:2022 International Standard emphasizes system-environment interactions within software architecture descriptions, the literature lacks methods to explicitly model operational environments and relate them to the system behavior. This paper proposes extending a SysML-based architectural language with two new viewpoints: operational environment and scenarios. These viewpoints support modeling executable software architectures, including the system’s behavior and interaction within the operational context. They also serve as a basis for a simulation-driven verification and validation approach that incorporates operational environments and usage scenarios into software architecture models. We validate this approach by modeling the software architecture of an automated guided vehicle system using the extended architectural language and the proposed viewpoints, while also enabling its execution.

Paper Nr: 88
Title:

A Model-Driven Catalogue of Dark Pattern Smells

Authors:

Padma Iyenghar

Abstract: Dark patterns are manipulative interface strategies that steer users toward choices misaligned with their preferences, especially in e-commerce consent, checkout, and subscription flows. Existing work and regulation describe dark patterns mainly through visual examples or behavioural studies; the underlying interaction logic remains weakly formalised. Model-Driven Engineering (MDE) offers notations for such logic, but there is currently no model-based catalogue or modelling profile for expressing dark patterns as structural defects. This paper proposes a model-driven approach that treats dark patterns as dark pattern model smells. A dedicated Unified Modeling Language (UML) profile, DarkPatternUIProfile, is introduced to represent screens, action buttons, consent elements, form fields, and navigation flows via stereotypes and manipulation-relevant tagged values. On this basis, a catalogue of 13 dark pattern model smells is defined, each characterised by intent, structural symptoms, and model-level trigger conditions, and illustrated in a unified reference model. A cookie-consent case study shows how dark-pattern-laden flows can be instantiated, diagnosed, and refactored through systematic model transformations while preserving legitimate functionality. The approach enables early detection and structural correction of manipulative interaction designs, providing a foundation for ethically aligned quality assurance in model-driven interaction engineering.

Short Papers
Paper Nr: 18
Title:

Automating Compliance Verification through Generative AI: An LLM-Based Approach for Nuclear Systems Engineering

Authors:

Mouna El Alaoui, Sagar Jose, Feriel Bouchakour, Dorian De Oliveira, Clara De Kerautem, Quentin Lesigne, Pauline Suchet, Berenger Fister, Loic Montagne, Nicolas Bureau and Robert Plana

Abstract: Compliance verification is a critical activity in Systems Engineering (SE), particularly in safety-critical domains such as the nuclear industry, where large volumes of heterogeneous documentation must be checked against evolving regulatory requirements. The sheer volume of documents and the difficulty of interpreting the content of nuclear regulations make compliance verification a complex, time-consuming task. In this work, we investigate the use of Large Language Models (LLMs) to automate compliance verification and provide decision support within Model-Based Systems Engineering (MBSE) workflows. We propose a tool-agnostic and non-intrusive approach that processes engineering documents to generate a compliance matrix containing, for each requirement, a compliance status that defines whether the requirement is met, partially met, or not met by the design decisions. For each compliance status, a justification is produced, supported by sources extracted directly from the processed engineering documents. The approach was validated in an experiment involving representative requirements, where results were compared against manual expert analysis. The automated process demonstrated a substantial reduction in execution time while maintaining satisfactory quality. The residual incomplete or incorrect justifications confirm that human verification remains essential to interpret and validate the results, positioning the system as a copilot for engineering activities.

Paper Nr: 19
Title:

Physics4All DSL: A Domain-Specific Language for Democratising Physics Simulations and Advancing DSL Engineering with JetBrains MPS

Authors:

Sofia Meacham, Clément de La Bourdonnaye, Vaclav Pech and Hessa Alfraihi

Abstract: Simulations are essential in physics education but remain difficult for non-programmers to design and adapt. Existing tools often limit customisation, hindering teachers and students from tailoring experiments to their needs. This paper presents Physics4All, a domain-specific language (DSL) built with JetBrains MPS to democratise the creation of physics simulations. Physics4All introduces domain-specific constructs - worlds, objects, forces, dimensions, and vectors - expressed in familiar mathematical notation. Its modular generation pipeline supports multiple targets (Java and JavaScript), enabling simulations to run across platforms without altering models. Key innovations include implicit unit conversion, reusable forces and objects, and live type checking for correctness. Beyond the educational domain, these features illustrate generalisable DSL engineering principles for modularity, abstraction, and reusability. We evaluated Physics4All through a metrics-based comparison with a GPL baseline and an empirical case study involving secondary school teachers and educational technology developers. The comparison highlighted substantial reductions in implementation effort, while the case study confirmed high suitability, expressiveness, and productivity, with usability and maintainability identified as areas for improvement. Compared to widely used tools such as PhET, Algodoo, and COMSOL, Physics4All offers greater customisation while remaining accessible to non-programmers. The results demonstrate how DSLs can expand the reach and impact of simulation-based education while contributing to broader discussions on domain-specific language engineering.

Paper Nr: 21
Title:

Requirements-Driven Evaluation of Model-Based Low-Code Platforms for GDPR-Compliant Health Applications: A Comparative Study of Mendix and OutSystems

Authors:

Sofia Meacham and Chukwuebuka Obiora

Abstract: Low-code/no-code (LCNC) platforms are increasingly promoted for healthcare applications, enabling non-technical professionals to prototype digital solutions. In regulated domains, however, compliance with the General Data Protection Regulation (GDPR) is critical, and it is unclear whether LCNC platforms provide adequate support for such requirements. This paper introduces a requirements-driven evaluation framework that operationalises five GDPR provisions – data minimisation (Art. 5), lawfulness of processing (Art. 6), consent (Art. 7), privacy by design/default (Art. 25), and security of processing (Art. 32) – into concrete modelling tasks. The framework is applied in a comparative study of two leading LCNC platforms, Mendix and OutSystems, using a benchmark chronic disease management application. Findings show that Mendix offers more accessible support for non-technical users, particularly for consent and privacy-by-default, while OutSystems provides greater flexibility in data handling at the cost of higher configuration effort. The study contributes a structured framework for linking legal obligations to model-based development tasks and provides practical insights for selecting LCNC platforms in GDPR-regulated healthcare contexts.

Paper Nr: 24
Title:

Enhancing Educational Support for JetBrains MPS with a Retrieval-Augmented LLM Chatbot: A Structured Knowledge Integration Approach

Authors:

Sofia Meacham and Keith Phalp

Abstract: Model-Based Software Engineering (MBSE) with JetBrains MPS is challenging primarily because language engineering goes beyond using programming languages to designing them – working with meta-concepts, generators, and composition – so the learning curve is steep even with detailed documentation. We present an LLM-powered, retrieval-augmented chatbot for MPS education that combines official docs with expert-curated material, organizing both via composable graph indexes in LlamaIndex. We evaluate five configurations across two phases using the RAGAs framework along four dimensions: faithfulness, answer relevancy, context utilization, and harmfulness. Compared to a documentation-only baseline, faithfulness improves from 0.42 to 0.99; best context utilization reaches 0.71; answer relevancy remains 0.50–0.64 in the larger study; and harmfulness is as low as 0.05 (0.08 in the final configuration). These results indicate that (i) curated expert knowledge – beyond official docs – is crucial for onboarding to meta-level concepts, (ii) composable graphs materially improve grounding, and (iii) lightweight, targeted index summaries further boost reliability while remaining scalable. The approach generalizes to other MBSE tools where steep learning curves limit adoption, and we provide code and configuration artifacts to facilitate replication and classroom use.

Paper Nr: 30
Title:

Incremental Formalization for Informal Architectural Diagramming

Authors:

Malvina Latifaj, Jan Carlson, Antonio Cicchetti, Robbert Jongeling and Ifrah Qaisar

Abstract: Informal diagrams created in general-purpose diagramming tools are widely used in software architecture because they are quick to produce and easy to share. However, the lack of constraints in such tools often yields inconsistent notations and ad-hoc conventions, which in turn invite misinterpretation when diagrams are read outside their original context. Dedicated modeling languages and environments can mitigate these issues but are frequently resisted due to steep learning curves and disruptive adoption costs. Building on the flexible modeling paradigm, this paper proposes an approach to the lightweight formalization of informal diagrams. Grounded in observed industrial challenges and prior work on flexible modeling, we derive a set of design principles and instantiate them in an approach realized as a Draw.io plugin. The approach addresses challenges from industrial settings by enabling practitioners to introduce structure incrementally into the informal diagrams they already create, thereby helping resolve notational inconsistency and clarify meaning. Moreover, we show how our approach satisfies the guiding flexible modeling principles by introducing these model-like benefits without compromising the accessibility and speed of informal diagramming as valued in practice. The contribution enhances clarity and consistency within familiar workflows and lays the groundwork for subsequent capabilities such as rapid dissemination of conventions, enterprise-level aggregation, and additional quality checks should organizations choose to adopt them.

Paper Nr: 31
Title:

NOMAD: A Multi-Agent LLM System for UML Class Diagram Generation from Natural Language Requirements

Authors:

Polydoros Giannouris and Sophia Ananiadou

Abstract: Large Language Models (LLMs) are increasingly utilised in software engineering, yet their ability to generate structured artefacts such as UML diagrams remains underexplored. In this work, we present NOMAD, a cognitively inspired, modular multi-agent framework that decomposes UML generation into a series of role-specialised subtasks. Each agent handles a distinct modelling activity, such as entity extraction, relationship classification, and diagram synthesis, mirroring the goal-directed reasoning processes of an engineer. This decomposition improves interpretability and allows for targeted verification strategies. We evaluate NOMAD through a mixed design: a large case study (Northwind) for in-depth probing and error analysis, and human-authored UML exercises for breadth and realism. NOMAD outperforms all selected baselines, while revealing persistent challenges in fine-grained attribute extraction. Building on these observations, we introduce the first systematic taxonomy of errors in LLM-generated UML diagrams, categorising structural, relationship, and logical errors. Finally, we examine verification as a design probe, showing its mixed effects and outlining adaptive strategies as promising directions.

Paper Nr: 36
Title:

Model-Driven Quality Analysis of Cyber-Physical Systems: State of the Art and Perspectives

Authors:

Vittorio Cortellessa, Davide Di Ruscio, Tiziano Lombardi and Alfonso Pierantonio

Abstract: Analyzing quality attributes in Cyber-Physical Systems (CPS) is essential for ensuring their efficiency and resilience. This paper presents an exploratory study of the current state of the art in CPS quality analysis, with a particular emphasis on the use of annotated models and the role of Digital Twins (DTs). Through a rigorous selection and filtering process, 11 key publications from the past six years were identified as foundational contributions to the field. These studies offer valuable insights into prevailing methodologies, recurring challenges, and emerging trends, illustrating how annotated models and DTs can enhance the assessment and management of CPS quality attributes. This work lays the groundwork for future research into more effective model-driven approaches and techniques for evaluating and improving CPS quality, potentially leveraging the capabilities of DTs.

Paper Nr: 42
Title:

Scalable Microservices for LLM-vs-LLM Interaction in Board Games

Authors:

Paulina Morillo, Kevin Bastidas, Bryan Guevara, Alex Terreros and Julio Proaño

Abstract: Board games provide structured, rule-based environments that can be used to study LLM behavior in turn-based decision settings. Deploying two LLM instances to play board games under concurrent workloads requires coordinating turn-based interactions, maintaining consistent game states, and managing external API calls within a cloud environment. In this work, we propose a microservices-based architecture deployed with Docker containers on Google Cloud Platform (GCP) to support LLM-vs-LLM simulations and to characterize system behavior under load. The architecture integrates LangChain and OpenRouter to manage multi-provider model access, prompt-driven interaction flows, and per-agent conversational context. Our evaluation focuses on infrastructure-level resource consumption, reporting CPU and memory utilization across increasing concurrency levels. The proposed design supports configurable simulations and provides an implementation basis for further work on scalable LLM interaction systems.

Paper Nr: 43
Title:

DemIstifyCPS: A Domain-Specific Language for Influence Modeling in Cyber-Physical Systems

Authors:

Barbara da Silva Oliveira, Nicolas Ferry and Julien Deantoni

Abstract: Cyber-Physical Systems (CPS) are integrated systems composed of various parts that combine physical and computational processes. CPS development generates multiple artifacts, including design models and implementation code developed by several stakeholders, that may interact with each other and the environment in which the CPS operates. Teams rely on Model-Based Systems Engineering (MBSE) to manage this complexity. Standard MBSE relations capture functional exchanges between artifacts and between the system and its environment. However, many couplings across artifacts and the environment remain implicit, only partially understood, or unknown at design time. The absence of explicit modeling of these couplings makes it hard to understand the effects of design or environmental changes. In this paper, we model these hidden couplings as Influences. We propose DEMISTIFYCPS, a domain-specific language that captures design influences as first-class relations. A mobile robot use case demonstrates the language, showing how it surfaces influences and offers actionable feedback. Overall, our work indicates that making Influences explicit helps demystify CPS behavior and supports better decision-making.

Paper Nr: 49
Title:

Goal Models on Type and Instance Level for Groups of Multi-Instance Actors

Authors:

Torsten Bandyszak, Marian Daun and Jennifer Brings

Abstract: Nowadays, software-intensive systems are more and more connected and form complex system groups in which several software-intensive systems but also other actors such as humans and technical systems collaborate. During requirements engineering, the goals of “multi-instance actors”, i.e., types of actors that represent a set of similar actor instances, are specified. Since the number of concrete actor instances may not be known or not defined yet during early requirements engineering, current goal modeling approaches for such systems merely focus on actor types. In this paper, we argue that goal models are crucial on both the type as well as the instance level to foster the analysis of complex groups of collaborating actors, which can have various possible incarnations. We introduce explicit instance goal models and relate type- and instance-level goal modeling concepts. We applied our approach to a case study of vehicle platooning on highways, which demonstrates the usefulness of instance goal models for detecting defects in type-level goal models.

Paper Nr: 52
Title:

Towards an Ontology-Driven MBSE Framework for Life Cycle Assessment

Authors:

Fatima Danash, Imen Azouzi, Saadia Dhouib and Chokri Mraidha

Abstract: Evaluating the environmental impact of products across their life cycle requires structured data and systematic modeling. Ontologies provide a formal foundation for ensuring semantic consistency, interoperability, and data reusability in Life Cycle Assessment (LCA). However, existing LCA ontologies often lack conceptual coverage and remain inaccessible to practitioners unfamiliar with ontology engineering. This paper proposes an ontology-driven framework integrated into a model-based systems engineering (MBSE) environment and tailored to the needs of LCA practitioners. The approach extends SysML v2 via a dedicated LCA-specific library and embeds formal semantics through the SmartLCA ontology, which covers activities, flows, impact methods, spatiotemporal dimensions, and more. A semantic integration architecture bridges SysML v2 models and the OWL-based ontology, enabling reasoning, validation, and knowledge reuse. The framework is built into the SySON editor and uses visual modeling to make formal LCA modeling more accessible, while ensuring consistency and traceability and enabling advanced analysis.

Paper Nr: 59
Title:

Towards a Traceability Framework for Multi-Paradigm Modeling of Cyber-Physical Systems

Authors:

HaPhan Tran, Moharram Challenger and Sadaf Mustafiz

Abstract: Multi-Paradigm Modeling (MPM) is increasingly vital for developing complex systems like Cyber-Physical Systems (CPS), as it integrates diverse modeling formalisms, abstraction levels, and workflows. However, this inherent heterogeneity introduces significant challenges in managing complexity and ensuring consistency across various development artifacts. A critical problem in MPM environments is the establishment and maintenance of comprehensive traceability, which is essential to understand artifact evolution, perform impact analysis, and verify system properties. This paper presents a conceptual framework and vision for embedding traceability into MPM methodologies through systematic model management. We propose a megamodel-based approach that captures artifacts and their relationships across MPM dimensions (multi-abstraction, multi-formalism, multi-view, and workflow), and discuss how different traceability information sources, including semantic links, transformation-generated traces, and megamodels, can be integrated. The aim is to lay the groundwork for a holistic traceability approach integrated with model management that enables powerful analyses, such as cross-paradigm consistency checking and comprehensive impact analysis, thereby addressing key challenges in managing the development of complex systems with MPM. We demonstrate the feasibility of our approach through tool support built on Sirius Web for megamodel instantiation and automated global trace composition, validated using a Wireless Sensor Network (WSN) Internet of Things (IoT) system example.

Paper Nr: 63
Title:

Explicit Energy Quantification in Wireless Sensor Networks Using Petri Nets

Authors:

Amel Berrachedi, Malika Ioualalen and Ahmed Hammad

Abstract: Recent advances in Wireless Sensor Networks (WSNs) have introduced several challenges related to their limited processing, storage, and especially energy capacity. These constraints necessitate preliminary verification to ensure the reliability required by such networks or similar systems. This paper focuses on quantifying energy consumption using a Petri Net model. To this end, we propose a formalism called Energy Petri Nets (EgPN), which explicitly models energy within sensor networks. In addition to the formal definition, we provide an algorithmic implementation enabling automated evaluation of network lifetime. We demonstrate its applicability through a case study on clustering, one of the most widespread techniques for energy conservation.

Paper Nr: 68
Title:

Orchestrating Smart Health: A Nets-within-Nets Approach for Hybrid Fall Detection Systems

Authors:

João Pica and João-Paulo Barros

Abstract: Modern fall detection systems frequently lack a formal specification, which makes it hard to validate, scale, or adapt them to new technologies. This creates a gap between theoretical models and the need for robust systems that connect distinct, changing external components, such as sensors, decision-making algorithms, and warning systems. The global population is aging, and delayed care after a fall might have profound effects, raising the importance of reliable, flexible detection systems. We propose a structured, three-tiered behavioural framework for fall detection via Coloured Petri Nets (CPNs). Separating the system’s fundamental logic from its outward components allows different modules to function together and supports decisions based on sensor inputs and on data from calls to external code. We use simulation to illustrate and validate our model’s decision flows in various scenarios. This gives us a tested design that works well with IoT components, such as edge devices and servers. This CPN design provides a viable foundation for building new health systems for the next generation. It allows upgrading or replacing sensors, as well as AI-based deciders and actuators, without affecting the system’s validated core logic.

Paper Nr: 76
Title:

A SysML v2-Based Modeling Framework for LCA Integration in MBSE

Authors:

Onur Angin and Detlef Gerhard

Abstract: The so-called “double transformation” – the simultaneous integration of sustainability and digitalization – is becoming increasingly relevant across all industrial sectors. For complex products and systems, the major challenge is to embed sustainability considerations holistically from the very beginning of development rather than evaluating them retrospectively. Model-Based Systems Engineering (MBSE) provides strong potential for this purpose, but it is still predominantly applied to structure technical requirements instead of systematically linking them with ecological objectives. This work presents a framework that merges the domains of sustainability for Life Cycle Assessment (LCA) and systems engineering within a single integrated model using SysML v2. Leveraging the enhanced modeling and interoperability capabilities of SysML v2, sustainability metrics, circularity targets, and life-cycle criteria can be represented as explicit model elements of the system architecture. As a result, sustainability becomes an inherent part of the development process rather than an isolated task handled within separate software tools. The work emphasizes the necessity of establishing a well-designed architecture early on to consistently orchestrate multiple domains and to enable informed decision-making. The proposed framework is demonstrated using a simplified 3D printer example as a proof of concept, highlighting relevant architectural elements and key considerations.

Paper Nr: 85
Title:

An Operational Semantics for Extended OCL

Authors:

Kevin Lano

Abstract: The Object Constraint Language (OCL) has become an essential part of many model-driven engineering (MDE) approaches and tools, adding precision and semantic detail to software models such as class diagrams, and defining transformation rules in model transformation languages. The many uses of OCL have led to its extension beyond the official standard (version 2.4), to add new types and language elements, such as procedural statements. In this paper we identify how the operational semantics of the OCL standard can be extended to include map, reference and function types and procedural statements, and we describe applications of the semantics to emulation, analysis and validation of OCL specifications.

Paper Nr: 90
Title:

Model-Driven Development of Fuzzy-BDI Multi-Agent Systems

Authors:

Burak Karaduman, Baris Tekin Tezel and Moharram Challenger

Abstract: This paper introduces a domain-specific modelling language (DSML) called DSML4FJaCaMo for the development of fuzzy belief–desire–intention agents. The language follows a meta-modelling approach and extends the core elements of Jason and Cartago to represent agent and artefact perspectives. By incorporating fuzzy logic into the BDI paradigm, DSML4FJaCaMo enables the modelling of graded beliefs, uncertain perceptions, and adaptive decision-making processes. The proposed DSML provides a graphical modelling environment that supports the representation of fuzzy-BDI agents’ beliefs, desires, and intentions using Jason, while considering Cartago artefacts. Its operational semantics supports integration between these components, allowing for automatic code generation and artefact construction to create executable JaCaMo-based fuzzy systems. Overall, DSML4FJaCaMo enhances the JaCaMo ecosystem by providing a model-driven and hybrid reasoning framework that increases both the level of abstraction and the level of intelligence.

Paper Nr: 96
Title:

Modelling Food and Mood Relation with Dynamic Personas: An Ontology-Driven RAG-Based Recommendation Approach

Authors:

Donika Xhani, Kathleen Guan, Ausrine Ratkute, Caroline Figueroa, Renata Guizzardi, Jos van Hillegersberg and Gayane Sedrakyan

Abstract: Persona is not a static demographic label (“25-year-old student”) but an adaptive, evolving representation of the user. We propose a dynamic persona model that acts as a living mirror of the user, constantly adapting to their mental state, context, and behavior, and feeding that into food suggestions that support emotional wellbeing. The model combines relatively stable attributes (e.g., dietary preferences, allergies) with time-sensitive states (e.g., stress). We operationalize this through state charts and ontologies that link mental states with nutrition recommendations grounded in nutritional psychiatry. The resulting hybrid pipeline integrates ontological reasoning with adaptive learning to continuously update the state and recommend context-appropriate foods aimed at stabilizing or improving well-being. A proof-of-feasibility prototype demonstrates how state transitions can trigger timely adjustments in food suggestions without compromising nutritional adequacy or violating user constraints. This work positions dynamic personas as contextual twins that evolve with the user, enabling explainable and responsive food recommendations. The work also establishes the feasibility of integrating multimodal data streams from smart devices (e.g., wearables, smart kitchen tools, smart plates, smart fridges) to capture daily fluctuations relevant to mental health and link them semantically to food-related ontologies.

Paper Nr: 101
Title:

A SysML v2 Based Modeling Language and Tool for Task Planning and Runtime Verification with Digital Twins

Authors:

Luca Cristoforetti, Alessandro Flori, Tommaso Fonda, Kostantinos Kapellos, Andrea Micheli, Stefano Tonetta and Alessandro Valentini

Abstract: Digital Twins (DTs) are key enablers for autonomous and adaptive systems, providing a virtual counterpart that mirrors, predicts, and verifies the behavior of a physical asset. In space applications, where communication latency and uncertainty are significant, DTs enable support for automated task planning and runtime verification with formal methods. This paper presents the model-based design approach developed within the ExploDTwin project, introducing the Digital Twin Formal Modeling Language (DTFML), a SysML v2-based modeling language for specifying and analyzing planning and monitoring problems on top of a DT. DTFML extends SysML v2 with constructs for plannable durative activities, observable variables that can be read from telemetry, and external functions that simulate the resource consumption of activities. In this way, DTFML enables rigorous definition of planning, plan monitoring, and runtime verification problems. The associated SAWS2 (Safety Analysis, Validation and Verification for SysML v2) tool enables automated validation and model checking through integration with formal verification engines. The approach is demonstrated on a space-exploration rover case study, showing its applicability to industrial applications.

Paper Nr: 104
Title:

Towards a Unified Conceptual Model of the Large Language Model Architecture

Authors:

Jesús Carreño-Bolufer, Giovanni Giachetti and Oscar Pastor

Abstract: Large Language Models (LLMs) have achieved great results in tasks such as answering questions, analysing sentiment and translating text. However, the nature of the domain often causes the development process to overlook the application of software engineering practices throughout the life cycle. This is particularly evident during the model design phase, where there are no standards in place to accurately define LLM architecture or communicate design decisions. Current documentation methods, such as Model Cards, are incomplete and rely on code to provide full details of the architecture, necessitating additional analytical effort. To address this challenge, this work proposes a preliminary conceptual model based on the original Transformer architecture, on which most LLMs are built. We expect our proposal to improve communication, facilitate knowledge reuse and enable model-driven solutions through architecture design specifications.

Paper Nr: 22
Title:

LLM-Supported Generation of Class Diagrams with Test Cases

Authors:

Christian Kop

Abstract: Large language models (LLMs) have become an indispensable part of modeling. Several application scenarios for modeling with LLMs have already been presented for obtaining class diagrams from natural language texts. This paper presents an approach in which natural language queries are the starting point for generating a class diagram. Such an approach could be used when the class diagram models the data that should be available in the database of an information system. The intention is that the stakeholders of such a design process have both the class diagram and typical queries that the database should handle. The paper examines whether an LLM performs well in such an approach.

Paper Nr: 34
Title:

Enhancing Cybersecurity with Ontology-Based Whitelists: A Graph-Driven Approach to Proactive Threat Mitigation

Authors:

Pedro Alves, Miguel Ferreira, Rui Gonçalves, Tiago F. Pereira, Manuel Santos, Jorge Meira, João Routar, Pedro Fortuna and Ricardo J. Machado

Abstract: Cybersecurity faces growing challenges due to the sophistication of attacks and evasion techniques, making traditional blacklist-based methods insufficient for blocking threats. This paper proposes an innovative approach for constructing whitelists using complex networks and graph databases, enabling more precise and proactive access control. The methodology employed is based on modeling legitimate script behavior captured from navigating real websites, which are stored in a NoSQL database. From these records, an ontology was defined to represent relationships between entities, actions, and events executed by scripts in web environments. This structure was transposed into a Neo4j graph database, allowing dynamic queries to validate authorized behaviors. The outcome of this approach is the creation of a behavior-based whitelist, stored in the Neo4j database. Although the ontology itself does not perform blocking, it provides the foundational structure that enables, in combination with additional enforcement mechanisms, the identification and potential blocking of scripts exhibiting behaviors not included in the whitelist.
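The behaviour-based whitelisting idea can be illustrated with a minimal Python sketch. All names and the triple-based data model here are hypothetical simplifications for illustration; the paper itself uses a full ontology stored in a Neo4j graph database rather than an in-memory set:

```python
# Minimal sketch of behaviour-based whitelisting (hypothetical data model,
# not the paper's actual ontology or Neo4j schema): each legitimate script
# behaviour is an (entity, action, target) triple, and an observed trace is
# flagged when any of its triples is absent from the whitelist.
whitelist = {
    ("script:analytics", "reads", "dom:title"),
    ("script:analytics", "sends", "net:collector"),
    ("script:cart", "writes", "storage:cart"),
}

def flag_unknown(trace):
    """Return the observed behaviours that are not whitelisted."""
    return [t for t in trace if t not in whitelist]

observed = [
    ("script:analytics", "reads", "dom:title"),
    ("script:analytics", "reads", "storage:credentials"),  # not whitelisted
]
print(flag_unknown(observed))
```

In the paper's setting, the membership test would instead be a dynamic query over the graph database, and blocking would be delegated to separate enforcement mechanisms.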

Paper Nr: 38
Title:

MDPNML: A Multidimensional Petri Net Markup Language Enabling Construction and Simulation of Comprehensive Digital Twin Models

Authors:

Atieh Khodadadi and Sanja Lazarova-Molnar

Abstract: A Digital Twin (DT) is a data-driven virtual representation of a physical system that updates in near-real-time and incorporates models of system physics and behaviors. Multidimensional Stochastic Petri Nets (MDSPNs), as an extension of traditional Stochastic Petri Nets (SPNs), provide an intuitive formalism for modeling and analyzing complex systems across multiple dimensions, enabling the development of comprehensive DTs. In MDSPNs, system objectives can be associated with different relevant dimensions, including time, energy, and waste. The Petri Net Markup Language (PNML), a standard XML-based interchange format, is widely used for sharing and executing Petri net models across tools. PNML, however, lacks multidimensional semantics. A PNML-compatible format for MDSPNs would enable portable model exchange across tools and allow conversion to time-oriented SPNs for generic PNML tools, supporting end-to-end traditional and comprehensive DT workflows. Such an extended PNML format also enhances model reproducibility and reduces the effort required to integrate dimension-specific behavior. In this paper, we introduce the Multidimensional Petri Net Markup Language (MDPNML) and identify necessary adaptations to the PNML format to represent MDSPNs. MDPNML supports multidimensional attributes, different dimensions in transitions, and simulation parameters for MDSPNs. Through an illustrative case study, we demonstrate MDPNML generation and execution for multidimensional simulation.
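The multidimensional semantics described above can be sketched in a few lines of Python. This is an illustrative toy, not the authors' MDPNML format or tooling; all class and dimension names are hypothetical. The point is that a transition carries a cost sampler per dimension (time, energy, waste, ...), and firing updates both the marking and the per-dimension accumulators:

```python
import random

class MDTransition:
    """A transition with per-dimension stochastic costs (hypothetical sketch)."""
    def __init__(self, name, inputs, outputs, costs):
        self.name = name
        self.inputs = inputs      # {place: tokens consumed}
        self.outputs = outputs    # {place: tokens produced}
        self.costs = costs        # {dimension: zero-arg sampler returning a cost}

    def enabled(self, marking):
        return all(marking.get(p, 0) >= n for p, n in self.inputs.items())

    def fire(self, marking, totals):
        # Update the marking as in an ordinary Petri net...
        for p, n in self.inputs.items():
            marking[p] -= n
        for p, n in self.outputs.items():
            marking[p] = marking.get(p, 0) + n
        # ...and additionally accrue a sampled cost in every dimension.
        for dim, sample in self.costs.items():
            totals[dim] = totals.get(dim, 0.0) + sample()

# Example: a 'sense' transition consumes an idle token, produces a busy token,
# and accrues stochastic time plus deterministic energy on each firing.
random.seed(0)
sense = MDTransition(
    "sense",
    inputs={"idle": 1}, outputs={"busy": 1},
    costs={"time": lambda: random.expovariate(2.0),   # seconds, stochastic
           "energy": lambda: 0.5})                    # joules, deterministic
marking = {"idle": 3}
totals = {}
while sense.enabled(marking):
    sense.fire(marking, totals)
print(marking["busy"], round(totals["energy"], 2))   # fires 3 times
```

An MDPNML document would serialise exactly this kind of information (dimensions, per-transition costs, simulation parameters) in a PNML-compatible XML form so that tools can exchange it.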

Paper Nr: 45
Title:

INEBLA: A Lightweight Model-Driven Framework for Transforming User Intent into Executable Business Process Models

Authors:

Omnia Saidani Neffati

Abstract: Business process modelling has utilized structured notations such as BPMN, and execution specifications such as BPEL, for many years. However, converting user needs into executable process models remains a technical undertaking that trained analysts must perform. This technical requirement inhibits the accessibility of business process modelling for non-technical stakeholders, hampering the speed at which organizations can adapt. While prior research has explored process discovery from structured logs or templates, little attention has been given to bridging natural language instructions and executable process logic. We introduce INEBLA (Intention-to-NLP-to-Execution via BPMN with Lightweight Automation), a lightweight and explainable framework that translates user intent, expressed in free-form natural language, into executable business process models. Through a rule-based Natural Language Processing (NLP) pipeline, the framework extracts user intent, tasks, constraints, and conditional logic from free-form natural language input and then automates the transformation from BPMN models to executable BPEL workflows. Finally, the approach is demonstrated with a real-world hotel booking scenario, which highlights its ability to handle both linear and conditional instructions.
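To give a flavour of what a rule-based extraction step might look like, the following Python sketch splits an instruction into sequential tasks and pulls out an "if ... then ..." branch. The rules and the `extract_process` helper are hypothetical illustrations, far simpler than the framework's actual pipeline:

```python
import re

def extract_process(text):
    """Toy rule-based extraction (hypothetical, not the framework's real rules):
    split on sentence/clause boundaries, then classify each clause as a
    plain task or an 'if <condition> then <action>' branch."""
    steps = []
    for clause in re.split(r"[.,]\s*", text.lower()):
        clause = clause.strip()
        if not clause:
            continue
        m = re.match(r"if\s+(.+?)\s+then\s+(.+)", clause)
        if m:
            steps.append(("branch", m.group(1), m.group(2)))
        else:
            steps.append(("task", clause))
    return steps

steps = extract_process(
    "Book a room. If no room is available then send an apology email. Confirm the booking.")
for step in steps:
    print(step)
```

Each extracted task or branch would then be mapped to a BPMN element (task or exclusive gateway) before generation of the executable BPEL workflow.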

Paper Nr: 51
Title:

Towards an Architecture Modeling Language for Smart Car Parking Management Systems

Authors:

Mert Ozkaya, Alper Turunc and Umut Cobanoglu

Abstract: Car parking has become one of the most crucial challenges in densely populated urban environments, where limited parking space and high demand cause serious traffic congestion. While several efforts have been made on the implementation and integration of smart car parking management systems (SPMSs) for addressing the parking challenges, very little effort has been made on the architecture modeling and standardised representation of SPMS architectures. In this paper, we propose a novel architecture modeling language called ParkLang for the analysis and high-level design of SPMS architectures. ParkLang is based on our comprehensive analysis of the parking management domain and provides a detailed conceptual framework based on the C4 architectural style. ParkLang supports three interrelated and traceable viewpoints, which are the context, container, and component, each supported with a distinct graphical notation set. We developed a modeling editor for ParkLang using the MetaEdit+ meta-modeling technology and applied it to the real-world parking management problem in Istanbul, Türkiye. Our case study revealed that ParkLang supports the effective specification of complex SPMS architectures, consisting of diverse stakeholders, business processes, and application components, which can easily be traced and managed throughout the SPMS product lifecycle.

Paper Nr: 58
Title:

Natural Kinds Expressed Symbolically

Authors:

Steve McKeever

Abstract: Maxwell introduced the concept of systems of quantities and units in the late 19th century, defining a set of base units corresponding to observable physical phenomena, along with rules for combining them into compound units. In science and engineering, quantities are usually expressed as values paired with their units. The concept of a kind of quantity clarifies when two quantities can be meaningfully compared or combined, for example, torque and work share identical units but differ in kind. Existing unit of measure systems and static checkers typically fail to capture this distinction. We present a symbolic framework that models quantities, units, and kinds within a unified algebraic system, enabling safe and semantically aware arithmetic. The approach supports explicitly named dimensionless quantities and includes a unification algorithm that infers generic quantity types, reducing annotation overhead. The system is intended for integration with modelling languages and pluggable type checkers.
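The torque-versus-work distinction mentioned above can be made concrete with a small Python sketch. This is only an illustration of the idea of kind-aware arithmetic, with hypothetical names; the paper's symbolic framework and unification algorithm are considerably richer:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """A value with SI dimension exponents (m, kg, s) and a named kind.
    Illustrative sketch only; not the paper's actual algebra."""
    value: float
    dim: tuple      # exponents of (metre, kilogram, second)
    kind: str

    def __add__(self, other):
        # Addition requires both the same dimensions and the same kind.
        if self.dim != other.dim:
            raise TypeError("incompatible dimensions")
        if self.kind != other.kind:
            raise TypeError(f"incompatible kinds: {self.kind} vs {other.kind}")
        return Quantity(self.value + other.value, self.dim, self.kind)

# Torque and work share the unit N*m = kg*m^2*s^-2 but differ in kind.
torque = Quantity(5.0, (2, 1, -2), "torque")
work   = Quantity(3.0, (2, 1, -2), "energy")
print((work + work).value)          # fine: same dimensions, same kind
try:
    torque + work                   # same unit, different kind -> rejected
except TypeError as e:
    print(e)
```

A purely dimension-based checker would accept `torque + work` because the exponent tuples match; tracking the kind alongside the dimensions is what rejects the semantically meaningless sum.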

Paper Nr: 61
Title:

On the Practical Expressiveness of Triple Graph Grammars

Authors:

Anthony Anjorin and Thomas Buchmann

Abstract: Triple Graph Grammars (TGGs) are a visual, intuitive approach for specifying model transformations, allowing the automatic derivation of model management operations including forward/backward transformations and incremental synchronisation with guaranteed, desirable properties. The conceptual simplicity of TGGs comes at a price, however, as all TGG tools impose substantial limits on practical expressiveness (measured by ease of specification, size, and readability in this paper), rendering TGGs unsuitable for real-world transformations and representing a major barrier to their mainstream adoption. This paper discusses excerpts of model transformations that are exceedingly difficult (and perhaps even impossible) to specify using TGGs, analyses the underlying causes, and suggests suitable extensions of existing language features. Our goal is to inspire research that improves the practical expressiveness of TGGs and facilitates applications of the approach.

Paper Nr: 62
Title:

A Conceptual Model-Based Platform for Semi-Automatic Report Generation to Streamline Medical Diagnostics of Lung Cancer and PET/CT Data Management

Authors:

Francisco Miralles Ferrer, José Fabián Reyes Román, Pedro Abreu Sánchez, Jesús Carreño Bolufer, Elisa Verónica Caballero Calabuig and Oscar Pastor

Abstract: Positron Emission Tomography combined with Computed Tomography is a diagnostic test used in nuclear medicine services that provides both a functional and an anatomical view of the patient’s body. This technique is particularly valuable in oncology, as it enables healthcare professionals to detect and diagnose tumors with high accuracy. However, it is the doctors who, after carrying out the tests, must manage the data obtained and write a report following a specific structure. To efficiently manage this large amount of data and facilitate the writing task, an Information System is required. To this end, a conceptual model within the domain of nuclear medicine applied to lung cancer has been developed, serving as the foundation for a robust model-based application. In this work, the conceptual model is presented and extensively explored, as well as IE-PETer, a model-based software platform derived from the conceptual model. IE-PETer facilitates the generation of structured medical reports for nuclear medicine diagnostic tests and systematically stores data for future research and analysis.

Paper Nr: 69
Title:

Integrating Blockchain and IoT for Business Process Design and Traceability in Short Honey Supply Chains

Authors:

Imen Chaouachi Allani, Ilhem Abdelhedi Abdelmoula and Hella Kaffel Ben Ayed

Abstract: Ensuring the authenticity and quality of high-value products like honey has become a growing challenge in local and global agri-food markets that necessitates a paradigm shift in supply chain design. While Blockchain and Internet of Things (IoT) technologies have been widely explored for agri-food traceability, existing studies primarily focus on technical architectures or implementations, with limited attention to formal business process modeling. This paper addresses this gap by proposing a novel conceptual Business Process Model for a short honey supply chain. The main contribution is a process-centric redesign of the supply chain that conceptually integrates Blockchain (BC) and IoT, under the validation of an Authority of Certification (AC). An AS-IS BPMN model identifies key vulnerabilities, such as manual record-keeping, limited verification, and exposure to fraud. The TO-BE BPMN model embeds trust and traceability, with IoT sensors capturing real-time quality and logistics data and Blockchain securing it in an immutable ledger accessible to all stakeholders, including the AC. The novelty of this work includes using BPMN as a unifying formalism, tokenizing quality certificates as process artifacts, and a process lock ensuring workflow progression only with validated certification and sensor-confirmed conditions. This technology-agnostic model offers a reusable framework to enhance transparency and trust in short agri-food supply chains.

Paper Nr: 91
Title:

EMF Model Generator: A Configurable Library for Generating Valid and Reproducible Model Instances

Authors:

Lorenzo Bettini

Abstract: Model-Driven Engineering tools require model instances for testing, validation, and demonstration purposes, yet manually creating such instances is tedious and error-prone. This paper presents the EMF Model Generator, a lightweight Java library for programmatically generating valid model instances from Eclipse Modeling Framework (EMF) metamodels. The tool uses a deterministic, configurable approach based on specialized setter components for attributes, containment references, cross-references, and feature maps, producing reproducible results. It handles complex EMF semantics including bidirectional references, multiplicity constraints, abstract classes with polymorphic instantiation, recursive containment structures, and heterogeneous feature maps. The architecture provides multiple customization levels, from high-level configuration to per-feature functions and complete setter replacement. The tool integrates directly with EMF’s reflective API without external dependencies, making it suitable for testing, prototyping, and demonstration purposes where predictable, valid model instances are needed.
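The deterministic, per-feature-setter idea described in the abstract can be illustrated with a toy sketch. This is not the EMF Model Generator's Java API; it is a hypothetical Python analog in which a minimal "metamodel" dictionary, a per-feature counter, and the `generate` function are all assumptions made for illustration of how seed-free, reproducible instance generation can work.

```python
# Toy metamodel: each class maps features to either a primitive type
# or a containment spec ("contains", target class, multiplicity).
METAMODEL = {
    "Library": {"name": "string", "books": ("contains", "Book", 2)},
    "Book": {"title": "string", "pages": "int"},
}

_counters = {}

def _next(feature):
    # Deterministic counter per feature name: 0, 1, 2, ...
    n = _counters.get(feature, 0)
    _counters[feature] = n + 1
    return n

def generate(cls):
    """Generate a reproducible instance of `cls` from the toy metamodel."""
    obj = {"eClass": cls}
    for feature, spec in METAMODEL[cls].items():
        if spec == "string":
            obj[feature] = f"{feature}_{_next(feature)}"
        elif spec == "int":
            obj[feature] = _next(feature)
        else:  # containment reference
            _, target, n = spec
            obj[feature] = [generate(target) for _ in range(n)]
    return obj

lib = generate("Library")  # identical output on every run
```

Because values come from deterministic counters rather than random draws, repeated runs yield identical instances, which is the property the library targets for testing and demonstration.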

Paper Nr: 92
Title:

Empirically Evaluating the Accessibility of a PoN-Enabled Feature Diagrams Notation by the Red-Green Colorblind Community

Authors:

Mohamed El-Attar, Sarah Kohail and Rima Grati

Abstract: In 2016, an enhanced version of a feature diagram notation developed using the Physics of Notations (PoN) framework was introduced. Empirical evidence demonstrated that this revised notation was more cognitively effective than the original. However, the new notation relies on color, specifically red, which poses accessibility challenges for individuals with red–green color vision deficiency, as they cannot perceive the notation as originally intended. Consequently, the cognitive effectiveness of a red–green–deficient (RGD) version of the new notation relative to the original notation remained unknown. Although the PoN framework specifies several principles that may be satisfied with or without the use of color, it is unclear whether the absence of color would sufficiently hinder comprehension to diminish the cognitive advantage of the PoN-enabled feature diagram notation. It was also possible that the original notation might prove more cognitively effective for users with red–green color blindness. To investigate this, an empirical study involving 37 Information Technology professionals was conducted to evaluate the cognitive effectiveness of the RGD version of the PoN-enabled feature diagrams notation in comparison to the original notation. The experimental data were analyzed to identify any statistically significant differences. Both quantitative and qualitative results indicate that the RGD version of the PoN-enabled notation retains its cognitive effectiveness advantage over the original notation. The findings further reveal a division among participants regarding the aesthetic appeal of the two notations. Overall, adherence to the full set of PoN principles enabled the revised notation to maintain its cognitive effectiveness superiority despite the reduced use of color.

Paper Nr: 93
Title:

Utilizing Hybrid Bond Graphs to Define a Machine Model for Cyber-Physical Systems

Authors:

Martin Richter, Albrecht Stoye and Matthias Werner

Abstract: Frequently, Cyber-Physical Systems are programmed from a digital perspective. From this perspective, the semantics of applications for Cyber-Physical Systems are often not clear regarding their influence on the physical world. We argue that an adequate programming model should provide a physical perspective to the application developers. This enables them to describe the application target in terms of a desired behaviour of the physical system. The paper at hand presents a machine model for Cyber-Physical Systems, based on Hybrid Bond Graphs, which forms the foundation for this view. Additionally, it makes it possible to describe the semantics of the application with respect to the influence devices have on their physical environment. A prototype for a model simulator is presented and validated.

Paper Nr: 97
Title:

An Instrumented BPMN Transformation to Support Scenario Modelling and Simulation

Authors:

Charlotte Strobbe, Rob Vingerhoeds, Stéphane Valès, Sophia Salas Cordero and Aurore Puissegur

Abstract: In the current industrial context, organizations involved in the design, implementation, integration, verification, validation, and qualification (IVVQ) of systems seek to maintain flexibility in design alternatives, limit resource consumption during early project phases, and enable rapid iteration throughout system development. To achieve these objectives, scenario modelling and simulation represent a promising means to support system design while laying the foundations for implementation and IVVQ activities. This paper proposes an instrumented transformation of BPMN models aimed at extending the scenario modelling and simulation capabilities offered by BPMN and its associated tools. In addition, the proposal is illustrated through an automotive use case, which demonstrates its applicability and highlights both its benefits for scenario-based design, implementation and IVVQ and the remaining limitations that motivate future work.

Area 3 - Software and System Engineering

Full Papers
Paper Nr: 33
Title:

A Metamodel for Enumerating Off-Nominal Scenarios in Operational Scenario Review through Bounded Model Checking

Authors:

Kazunori Someya and Toshiaki Aoki

Abstract: This study proposes a SysML-based metamodel that integrates model-based systems engineering (MBSE), safety analysis, and bounded model checking to support the review and verification of spacecraft operational scenarios, including abnormal states in off-nominal situations. The proposed metamodel enables centralized information management within the MBSE framework and allows the derivation of multiple off-nominal scenarios from a single nominal scenario represented in SysML activity diagrams by incorporating the results of STAMP/STPA safety analysis. A major challenge in reviewing off-nominal scenarios is that activity diagrams represent only a single execution example, while numerous execution variations can occur in practice depending on failure timing, making comprehensive coverage difficult. In the proposed approach, each scenario is transformed into state transition constraints that are analyzed using bounded model checking with a SAT solver. This process systematically enumerates all executable sequences of actions and identifies unexpected or missing transitions. A case study demonstrates that the proposed method enhances traceability among scenarios, safety analyses, and system design. Furthermore, it helps uncover new requirements derived from SAT solver results.
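The coverage problem the abstract describes, namely that an activity diagram shows one execution while many variations exist, can be sketched with a toy bounded enumeration. The paper uses a SAT solver over state-transition constraints; the brute-force recursion below, along with the example states and actions, is only an illustrative stand-in for that idea.

```python
# Toy transition relation: (state, action) -> next state.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "idle",
    ("running", "finish"): "done",
}
ACTIONS = ["start", "pause", "finish"]

def enumerate_sequences(state, bound):
    """All executable action sequences of length <= bound from `state`."""
    if bound == 0:
        return [[]]
    seqs = [[]]
    for a in ACTIONS:
        nxt = TRANSITIONS.get((state, a))
        if nxt is not None:
            seqs += [[a] + rest for rest in enumerate_sequences(nxt, bound - 1)]
    return seqs

paths = enumerate_sequences("idle", 3)
```

Comparing the enumerated sequences against the single nominal scenario then surfaces the unexpected or missing transitions that a manual review of one diagram would overlook; a SAT-based encoding does the same exhaustively and scalably.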

Paper Nr: 100
Title:

A DSL for Integrating Engineering Artifacts and Behavior into the Asset Administration Shell

Authors:

Harish Kumar Pakala, Bianca Wiesmayr, Christian Diedrich and David Cameron

Abstract: Discrete and process automation (DPA) systems span multiple engineering domains. They also evolve across different stages of the system life cycle, which leads to semantic heterogeneity and challenges for model-based systems engineering (MBSE). The Asset Administration Shell (AAS) defines a generic meta-model that enables semantic annotation via external vocabularies for information representation. However, AAS primarily supports static, hierarchical representations and lacks dynamic and behavioral constructs needed to capture workflows, orchestration, and batch production strategies. In this regard, this paper introduces SAFSL, the Semantically Annotated Functional Specification Language, which extends the AAS meta-model with dynamic and behavioral abstractions, thus, supporting modular, reusable, and semantically enriched specifications. SAFSL enables assume-guarantee reasoning and provides a foundation for automatic code generation, thereby reducing errors and improving consistency in DPA system modeling. The approach is demonstrated through a batch process plant case study, highlighting a) semantic integration of DEXPI, the digital P&ID representation, with AAS submodel templates, b) functional specification of batch-phase operations, c) the need for integrating electrical CAD diagrams with AAS submodel templates, and d) potential connections between DEXPI and ECAD representations. These insights reveal open challenges in bridging semantic gaps across heterogeneous submodel templates while illustrating the applicability of SAFSL for MBSE of DPA systems.

Short Papers
Paper Nr: 27
Title:

Using Attack and Failure Propagation Analysis for Context-Aware Security Control Suggestions

Authors:

Roman Trentinaglia, Thorsten Koch and Eric Bodden

Abstract: Cybersecurity is becoming increasingly important, especially in safety-critical domains, where cyber attacks can pose significant safety risks. In response, standards and laws such as the Cyber Resilience Act (CRA) require product teams to conduct comprehensive assessments of cybersecurity threats and implement appropriate security controls throughout the product lifecycle. Despite the availability of structured catalogs for requirements and mitigations, there is currently no automated method for integrating threat analysis results with these catalogs or for determining optimal control deployment strategies. Furthermore, addressing threats in isolation often results in long and redundant lists of potential controls, which increases development costs and complexity. To bridge this gap, we propose a semi-automated, model-based approach to suggest security controls. Our approach utilizes Security-enhanced Component Fault Trees (SeCFT) to analyze attack and failure propagation and employs a structured catalog to generate context-specific control recommendations along with appropriate deployment locations. This approach helps engineers efficiently select a coherent set of controls, enabling them to build a robust, multi-layered defense. We validated our approach through a proof-of-concept implementation in a real-world case study.

Paper Nr: 55
Title:

Towards OntoUML for Software Engineering: Evaluation of Constraints Performance in Relational Databases

Authors:

Jakub Jabůrek, Zdeněk Rybola and Petr Kroha

Abstract: In the Model-Driven Development approach to software engineering, models of the software system are elaborated and transformed into the final implementation. OntoUML is a conceptual modeling language suitable for this purpose, as it can be used to construct ontologically well-founded data models. Relational databases are commonly used for the storage of application data. In existing research, transformation of OntoUML to various relational databases was proposed, with a focus on preserving the domain constraints captured by the conceptual model. However, a trade-off exists between the enforcement of constraints (and thus improved data integrity) and database performance. In this paper, we present a series of experiments and measurements of the performance impact of each type of constraint for various database sizes and in various relational database implementations. As a result, we provide empirical data that can be used to estimate the performance impact in concrete scenarios and decide between the trade-offs.

Paper Nr: 56
Title:

Enabling Consistent Recombination of Heterogeneous Artifacts in Reactive Consistency Restoration Mechanisms

Authors:

Andreas Domanowski, Christoph Seidl, Marie Clausnitzer, Karl Kegel and Uwe Aßmann

Abstract: Reactive consistency restoration mechanisms (RCRM) maintain consistency between development artifacts by propagating repairing changes automatically. Currently, revision control systems do not distinguish between proactive changes intentionally authored by developers and automated reactive changes, making it difficult to understand and trace changes, as well as their interdependencies. It becomes unclear which artifacts in which revisions remain consistent, making it impossible to recombine, i.e., recompose and reuse them safely regarding their mutual consistency. In the long run, the inability to decide which revisions are mutually consistent and can thus be composed makes development a matter of guesswork. This work presents our ongoing research on augmenting the evolution history of systems that employ RCRM with consistency information. We propose a model for tracking relationships between proactive and reactive changes applied to development artifacts as well as consistency relationships between them. The benefit of our contribution is twofold: First, we make explicit which artifacts in which revisions are mutually consistent. Second, we enable the inductive expansion of consistency contexts, i.e., sets of mutually consistent revisions, by analysis of changes between revisions. Our approach makes it possible to determine precisely whether specific revisions can be recomposed safely, enabling consistent recombination.

Paper Nr: 64
Title:

Validity Frames for Self-Adaptive Systems

Authors:

Raheleh Biglari, Joachim Denil and Claudio Gomes

Abstract: Validity Frames are a key concept in modelling and simulation, encoding the contexts in which models provide valid results. This position paper explores how Validity Frames propagate through optimisation processes and affect self-adaptive systems. We demonstrate that optimisers inherit constraints from model Validity Frames and generate their own validity conditions for the configurations they produce. These propagated Validity Frames can be formalised as software contracts with explicit assumptions and guarantees. Using a semi-active suspension system as a running example, we show how Validity Frames constrain design space exploration, enable runtime monitoring of validity boundaries, and support informed adaptation strategies. Our approach reveals a fundamental tradeoff between optimality and applicability: configurations optimised for narrow scenarios may achieve peak performance but fail when conditions change, while configurations evaluated across broader Validity Frames sacrifice peak performance for robustness. By explicitly tracking Validity Frames throughout the optimisation process, self-adaptive systems can reason about when adaptation is needed and select configurations appropriate to their current operational context. The full formalisation of Validity Frames into contracts and their composition, which allows for the precise integration of assumptions and guarantees across models, optimisers, and adaptive components, will be our main future work path.
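The contract view of a Validity Frame, explicit assumptions gating explicit guarantees, can be sketched minimally. The paper's full formalisation is future work; the `make_contract` helper, the 80 km/h bound, and the `comfort_index` metric below are hypothetical values invented for the suspension example, not figures from the paper.

```python
def make_contract(assumption, guarantee):
    """A Validity-Frame-style contract: guarantees are only claimed
    inside the frame; outside it, the contract is simply inapplicable."""
    def check(context, outcome):
        if not assumption(context):
            return "inapplicable"  # outside the Validity Frame
        return "ok" if guarantee(outcome) else "violated"
    return check

# Hypothetical semi-active suspension configuration, assumed valid
# only for speeds up to 80 km/h.
suspension = make_contract(
    assumption=lambda ctx: ctx["speed_kmh"] <= 80,
    guarantee=lambda out: out["comfort_index"] >= 0.7,
)
```

The three-valued result mirrors the optimality-versus-applicability trade-off: a configuration tuned to a narrow frame answers "inapplicable" as soon as conditions drift, which is exactly the signal a self-adaptive system can use to trigger adaptation.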

Paper Nr: 82
Title:

Discovering Relationships among Code Smells through Association and Temporal Analysis

Authors:

Padma Iyenghar, Nils Baumgartner, Fynn Degen and Elke Pulvermüller

Abstract: Code smells impair software maintainability, yet their joint occurrence and temporal behaviour remain insufficiently understood. This study investigates co-occurrence patterns and evolutionary trends among twenty-one code smell types across seventeen open-source Java projects. Using association rule mining on SonarQube detections, supported by an external Data Clumps dataset, the analysis identifies recurrent smell combinations and probabilistic dependencies. Established relations such as Data Clumps → Long Parameter List are confirmed, while new strong associations emerge, including Dead Code ↔ Uncommunicative Name. A temporal correlation analysis shows that smell introductions consistently exceed removals, resulting in a monotonic accumulation of smells over project lifecycles. The findings provide empirical evidence of structured interdependencies among code smells and highlight characteristic growth patterns that can inform predictive refactoring and maintainability assessment.
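The association-rule metrics behind findings like Data Clumps → Long Parameter List can be illustrated with a small sketch. The per-file smell sets below are fabricated toy data, not the study's SonarQube detections; only the support/confidence definitions are standard.

```python
# Toy "transactions": the set of smells detected per file.
detections = [
    {"DataClumps", "LongParameterList"},
    {"DataClumps", "LongParameterList", "DeadCode"},
    {"DeadCode", "UncommunicativeName"},
    {"LongParameterList"},
]

def rule_metrics(antecedent, consequent, transactions):
    """Support and confidence of the rule antecedent -> consequent."""
    n = len(transactions)
    both = sum(1 for t in transactions if antecedent <= t and consequent <= t)
    ante = sum(1 for t in transactions if antecedent <= t)
    support = both / n
    confidence = both / ante if ante else 0.0
    return support, confidence

s, c = rule_metrics({"DataClumps"}, {"LongParameterList"}, detections)
```

On this toy data the rule holds in half of all files (support 0.5) and in every file containing Data Clumps (confidence 1.0); mining over real detections searches for exactly such high-confidence dependencies between smell types.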