MODELSWARD 2024 Abstracts


Area 1 - Methodologies, Processes and Platforms

Full Papers
Paper Nr: 35
Title:

Large Language Models in Enterprise Modeling: Case Study and Experiences

Authors:

Leon Görgen, Eric Müller, Marcus Triller, Benjamin Nast and Kurt Sandkuhl

Abstract: In many engineering disciplines, modeling is considered an essential part of the development process. Examples are model-based development in software engineering, enterprise engineering in industrial organization, or digital twin engineering in manufacturing. In these engineering disciplines, the application of modeling usually includes different phases such as target setting, requirements elicitation, architecture specification, system design, or test case development. The focus of the work presented in this paper is on the early phases of systems development, specifically on requirements engineering (RE). More specifically, we address the question of whether domain experts can be substituted by the use of artificial intelligence (AI). The aim of our work is to contribute to a more detailed understanding of the limits of large language models (LLMs). In this work, we widen the investigation to include not only processes but also required roles, legal framework conditions, and resources. Furthermore, we aim to develop not only a rough process overview but also a detailed process description. For this purpose, we use a process from hospitality management and compare the output of ChatGPT, currently one of the most popular LLMs, with the view of a domain expert.

Short Papers
Paper Nr: 12
Title:

A Model-Based Framework for News Content Analysis

Authors:

Fazle Rabbi, Bahareh Fatemi, Yngve Lamo and Andreas L. Opdahl

Abstract: News articles are published all over the world to cover important events. Journalists need to keep track of ongoing events in a fair and accountable manner and analyze them for newsworthiness. It requires an enormous amount of time and effort for journalists to process information coming from mainstream news media and social media from all over the world, as well as policy and law circulated by governments and international organizations. News articles published by different news providers and reporters may also be subjective due to the influence of reporters’ backgrounds, world views and opinions. In today’s journalistic practice there is a lack of computational methods to support journalists in investigating fairness and in monitoring and analyzing massive information streams. In this paper we present a model-based approach to analyze the perspectives of news publishers and monitor the progression of news events from various perspectives. The key concepts in the news domain, such as news events and their contextual information, are represented across various dimensions in a knowledge graph. We present a multi-dimensional and comparative news event analysis method for analyzing news article variants and for uncovering underlying storylines. To show the applicability of the proposed method in real life, we also demonstrate a running example. The utilization of a model-based approach ensures the adaptability of our proposed method for representing a wide array of domain concepts within the news domain.

Paper Nr: 31
Title:

Multi-Dimensional Process Analysis of Software Development Projects

Authors:

Thanh Nguyen, Saimir Bala and Jan Mendling

Abstract: Software processes are complex as they involve multiple actors and data which interplay with one another over time. Process science is the discipline that studies processes. Works in this area already use multi-dimensional analysis approaches to provide new insights into business processes that go beyond the discovery of control flow via process mining. In this paper, we investigate the applicability of multi-dimensional process analysis. More specifically, we extract data from open-source GitHub repositories that was generated during software development, and evaluate diverse software development metrics. Our results help to explain performance issues by revealing multiple contributing factors, such as side-work, that hinder progress in completing a development task. With this work, we pave the way for multi-dimensional process analysis on software development data.

Paper Nr: 49
Title:

On Augmenting Scenario-Based Modeling with Generative AI

Authors:

David Harel, Guy Katz, Assaf Marron and Smadar Szekely

Abstract: The manual modeling of complex systems is a daunting task, and although a plethora of methods exist that mitigate this issue, the problem remains very difficult. Recent advances in generative AI have allowed the creation of general-purpose chatbots, capable of assisting software engineers in various modeling tasks. However, these chatbots are often inaccurate, and an unstructured use thereof could result in erroneous system models. In this paper, we outline a method for the safer and more structured use of chatbots as part of the modeling process. To streamline this integration, we propose leveraging scenario-based modeling techniques, which are known to facilitate the automated analysis of models. We argue that through iterative invocations of the chatbot and the manual and automatic inspection of the resulting models, a more accurate system model can eventually be obtained. We describe favorable preliminary results, which highlight the potential of this approach.

Paper Nr: 50
Title:

Model-Driven Methodology for Developing Chatbots Based on Microservice Architecture

Authors:

Adel Vahdati and Raman Ramsin

Abstract: With recent advancements in natural language processing algorithms and the emergence of natural language understanding services, chatbots have become a popular conversational user interface integrated into social networks and messaging services, providing businesses with new ways to engage with customers. Various tools and frameworks have been developed to create chatbots and integrate them with artificial intelligence services and different communication channels. However, developing chatbots is complex and requires expertise in various fields. Studies have shown that model-driven engineering can help overcome certain challenges of chatbot development. We propose a model-driven methodology that systematically manages the creation of an intelligent conversational agent. The methodology uses metamodels at different abstraction levels that enable the description of the problem domain and solution space. By providing a high-level structure based on microservice architecture, it improves maintainability, flexibility, scalability, and interoperability. A criteria-based analysis method has been used to evaluate the proposed methodology.

Paper Nr: 52
Title:

A Tool for Modeling and Tailoring Hybrid Software Processes

Authors:

Andrés Wallberg, Daniel González, Luis Silvestre and María C. Bastarrica

Abstract: Hybrid software processes that combine agile and traditional practices are currently the most frequently used in industry. Most of the time, development is addressed with agile practices while management activities apply more traditional methods. However, the best combination of practices does not only depend on project attributes like project or team size, but also on the characteristics that need to be emphasized, e.g. time to market or early value added. DynaTail has been proposed as a method that combines hybrid process tailoring and evaluation according to an intended characteristic to be optimized. It was evaluated in industry and, although it was well received, practitioners highlighted the need for a supporting tool so that software developers only need to deal with elements of their processes and not with technicalities of the method. In this paper we present DynaTool, a model-based tool to support the DynaTail method. We formalize it and illustrate its application by replicating the same case. We found that DynaTool can fully support DynaTail. Nevertheless, we still need to go back to industry to confirm its potential adoption.

Paper Nr: 54
Title:

MBSE to Support Engineering of Trustworthy AI-Based Critical Systems

Authors:

Afef Awadid, Boris Robert and Benoît Langlois

Abstract: Because of the multidisciplinary nature of the engineering of a critical system and the inherent uncertainties and risks introduced by Artificial Intelligence (AI), the overall engineering lifecycle of an AI-based critical system requires the support of sound processes, methods, and tools. To tackle this issue, the Confiance.ai research program intends to provide a methodological end-to-end engineering approach and a set of relevant tools. Against this background, an MBSE approach is proposed to establish the methodological guidelines and to structure a tooled workbench consistently. In this approach, the system of interest is referred to as the "Trustworthiness Environment" (i.e. the Confiance.ai workbench). The approach is an adaptation of the Arcadia method and hence built around four perspectives: Operational Analysis (the engineering methods and processes: the operational need around the Trustworthiness Environment), System Analysis (the functions of the Trustworthiness Environment), Logical Architecture and Physical Architecture (abstract and concrete resources of the Trustworthiness Environment). Given the current progress of the Confiance.ai program, this paper focuses particularly on the Operational Analysis, leading to the modeling of engineering activities and processes. The approach is illustrated with an example of a machine learning model robustness evaluation process.

Area 2 - Modeling Languages, Tools and Architectures

Full Papers
Paper Nr: 19
Title:

System Architects Are not Alone Anymore: Automatic System Modeling with AI

Authors:

Ludovic Apvrille and Bastien Sultan

Abstract: System development cycles typically follow a V-cycle, where modelers first analyze a system specification before proposing its design. When utilizing SysML, this process predominantly involves transforming natural language (the system specification) into various structural and behavioral views employing SysML diagrams. With their proficiency in interpreting natural text and generating results in predetermined formats, Large Language Models (LLMs) could assist such development cycles. This paper introduces a framework where LLMs can be leveraged to automatically construct both structural and behavioral SysML diagrams from system specifications. Through multiple examples, the paper underscores the potential utility of LLMs in this context, highlighting the necessity for feeding these models with a well-defined knowledge base and an automated feedback loop for better outcomes.

Paper Nr: 21
Title:

Fault Tree Reliability Analysis via Squarefree Polynomials

Authors:

Milan Lopuhaä-Zwakenberg

Abstract: Fault tree (FT) analysis is a prominent risk assessment method in industrial systems. Unreliability is one of the key safety metrics in quantitative FT analysis. Existing algorithms for unreliability analysis are based on binary decision diagrams, for which it is hard to give time complexity guarantees beyond a worst-case exponential bound. In this paper, we present a novel method to calculate FT unreliability based on algebras of squarefree polynomials and prove its validity. We furthermore prove that time complexity is low when the number of multiparent nodes is limited. Experiments show that our method is competitive with the state-of-the-art and outperforms it for FTs with few multiparent nodes.
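The squarefree-polynomial idea can be illustrated with a toy sketch (our own illustrative encoding, not the paper's algorithm): since each basic event is Boolean, x·x = x, so gate probabilities expand into polynomials whose monomials are sets of basic events, which handles shared (multiparent) events exactly.

```python
import math
from itertools import product

# A squarefree polynomial over basic events, encoded as
# {frozenset(events): coefficient}; because every event variable x is
# Boolean, x*x = x, so multiplying monomials unions their event sets.
ONE = {frozenset(): 1}

def var(x):
    return {frozenset([x]): 1}

def pmul(p, q):
    r = {}
    for (a, ca), (b, cb) in product(p.items(), q.items()):
        k = a | b  # squarefree: repeated events collapse
        r[k] = r.get(k, 0) + ca * cb
    return r

def padd(p, q, sign=1):
    r = dict(p)
    for k, c in q.items():
        r[k] = r.get(k, 0) + sign * c
    return r

def and_gate(*children):
    acc = ONE
    for c in children:
        acc = pmul(acc, c)
    return acc

def or_gate(*children):
    # P(A or B) = 1 - (1 - A)(1 - B), expanded over the polynomials
    acc = ONE
    for c in children:
        acc = pmul(acc, padd(ONE, c, -1))
    return padd(ONE, acc, -1)

def unreliability(poly, probs):
    # substitute each basic event's failure probability
    return sum(c * math.prod(probs[x] for x in k) for k, c in poly.items())

# Shared (multiparent) basic event 'a': OR(AND(a, b), a) simplifies to a,
# so its unreliability is exactly P(a).
top = or_gate(and_gate(var("a"), var("b")), var("a"))
print(unreliability(top, {"a": 0.1, "b": 0.2}))  # prints 0.1
```

Note how the shared event is handled correctly without any independence assumption between the gate inputs, which is where a naive bottom-up probability calculation would go wrong.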

Paper Nr: 27
Title:

An Analysis and Simulation Framework for Systems with Classification Components

Authors:

Francesco Bedini, Tino Jungebloud, Ralph Maschotta and Armin Zimmermann

Abstract: Machine learning solutions are becoming more widespread as they can solve some classes of problems better than traditional software. Hence, industries look forward to integrating this new technology into their products and workflows. However, this calls for new models and analysis concepts in systems design that can incorporate the properties and effects of machine learning components. In this paper, we propose a framework that allows designing, analyzing, and simulating hardware-software systems that contain deep learning classification components. We focus on modeling and predicting uncertainty aspects, which are typical of machine-learning applications. They may lead to incorrect results that may negatively affect the entire system’s dependability, reliability, and even safety. This issue is receiving increasing attention as “explainable” or “certifiable” AI. We propose a Domain-Specific Language with a precise stochastic colored Petri net semantics to model such systems, which can then be simulated and analyzed to compute performance and reliability measures. The language is extensible and allows adding parameters to any of its elements, supporting the definition of additional analysis methods for future modular extensions.

Paper Nr: 34
Title:

Kant: A Domain-Specific Language for Modeling Security Protocols

Authors:

C. Braghin, M. Lilli, E. K. Notari and Marian Baba

Abstract: Designing a security protocol is a complex process that requires a deep understanding of security principles and best practices. To ensure protocol effectiveness and resilience against attacks, it is important to strengthen security by design by supporting the designer with an easy-to-use, concise, and simple notation for designing security protocols, such that the protocol model can be easily mapped into the input model of a verification tool to guarantee security properties. To achieve the goal of developing a DSL for security protocol design, working as the front-end, easy-to-use language of a formal framework able to support different back-end tools for security protocol analysis, we present the abstract and concrete syntaxes of the Kant (Knowledge ANalysis of Trace) language. We also present a set of validation rules that we have defined to help designers, already at design time, to avoid common security errors or to warn them about choices that might lead to protocol vulnerabilities. Kant’s expressiveness is discussed in terms of a number of case studies in which Kant has been used for modeling protocols.

Short Papers
Paper Nr: 11
Title:

DecSup: An Architecture Description Language for Specifying and Simulating the Decision Support System Architectures

Authors:

Mert Ozkaya, Mehmet A. Kose and Egehan Asal

Abstract: Decision support systems (DSSs) exist to automate decision-making processes and to reach the optimum decision(s) from a data set as quickly as possible. Despite the importance of DSSs, no architecture description language (ADL) has been proposed for the high-level specification and analysis of DSS architectures. In this paper, we propose a new ADL called DecSup, which enables the graphical specification of DSS architectures in terms of problem, diagnosis, and action components that interact with each other in an event-based manner. Problem components represent the domain data sets whose initialisation/change triggers an event for the diagnosis component. Diagnosis components include pattern predicates for making diagnoses from the occurring events. Whenever a diagnosis is made, another event is emitted for the action components to take any necessary actions. DecSup is supported with a prototype toolset for specifying architecture models and transforming them into the Modelica simulation language. The transformed Modelica code can be used to simulate the DSS architecture models and test the architectural decisions via some scenarios. We evaluated DecSup using a case study based on contagious respiratory illnesses (i.e., cold, flu, and Covid-19).

Paper Nr: 13
Title:

Concept of Automated Testing of Interactions with a Domain-Specific Modeling Framework with a Combination of Class and Syntax Diagrams

Authors:

Vanessa Tietz and Bjoern Annighoefer

Abstract: Domain-specific modeling (DSM) is a powerful approach for efficient system and software development. However, its use in safety-critical avionics is still limited due to the rigorous software and system safety requirements. Regardless of whether DSM is used as a development tool or directly in flight software, the software developer must ensure that no unexpected misbehavior occurs. This has to be proven by defined certification processes. For this reason, DOMAINES, a DSM framework specifically adapted to the needs of safety-critical (avionics) systems, is currently being developed. While it is possible to create and process domain-specific languages and models, the challenge lies in ensuring that the framework consistently performs as intended, providing the foundation for certification. For this purpose, a novel approach is employed: the introduction of a meta-meta-modeling language that combines syntax diagrams with a class diagram. This language serves as a comprehensive reference for the generation of test cases and the formal linking of grammar, meta-modeling language and implementation. This allows the implementation to be tested with every conceivable command. In addition, mechanisms ensure that this set of commands to be tested is a closed set.

Paper Nr: 17
Title:

Comparative Evaluation of NLP Approaches for Requirements Formalisation

Authors:

Shekoufeh K. Rahimi, Kevin Lano, Sobhan Y. Tehrani, Chenghua Lin, Yiqi Liu and Muhammad A. Umar

Abstract: Many approaches have been proposed for the automated formalisation of software requirements from semi-formal or informal requirements documents. However, this research field lacks established case studies upon which different approaches can be compared, and there is also a lack of accepted criteria for comparing the results of formalisation approaches. As a consequence, it is difficult to determine which approaches are more appropriate for different kinds of formalisation task. In this paper we define benchmark case studies and a framework for comparative evaluation of requirements formalisation approaches, thus contributing to improving the rigour of this research field. We apply the approach to compare four example requirements formalisation methods.

Paper Nr: 18
Title:

Torque not Work, Representing Kinds of Quantities

Authors:

Steve McKeever

Abstract: A system of units, such as the SI system, will have a number of fundamental units representing observable phenomena and a means of combining them to create compound units. In scientific and engineering disciplines, a quantity would typically be a value with an associated unit. Managing quantities in software systems is often left to the programmer, resulting in well-known failures when manipulated inappropriately. While there are a large number of tools and libraries for validating expressions denoting units of measurement, none allow the kind of quantity to be specified. In this paper we explore the problem of quantities that might share the same units of measurement but denote different kinds of quantities, such as work and torque. We develop a data type that represents compound units in a tree structure rather than as a tuple. When performing arithmetic, this structure maintains the compound definition allowing for a richer static analysis, and a complete definition of arithmetic on kinds of quantities.
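The tree-versus-tuple distinction can be sketched in a few lines; the type and kind names below are our own illustrative assumptions, not the paper's data type. Work and torque flatten to the same SI dimension (N·m), yet their kind trees remain distinguishable.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Base:
    name: str   # a base kind of quantity, e.g. "force" or "lever-arm"
    dim: str    # its unit symbol, e.g. "N" or "m"

@dataclass(frozen=True)
class Mul:
    left: "Base | Mul"
    right: "Base | Mul"

def dimension(kind):
    # collapse a kind tree to a bag of unit symbols: this is the lossy
    # tuple view, which cannot tell work and torque apart
    if isinstance(kind, Base):
        return Counter([kind.dim])
    return dimension(kind.left) + dimension(kind.right)

work   = Mul(Base("force", "N"), Base("displacement", "m"))
torque = Mul(Base("force", "N"), Base("lever-arm", "m"))

print(dimension(work) == dimension(torque))  # prints True: both are N·m
print(work == torque)                        # prints False: distinct kinds
```

A static analysis working on the tree representation can therefore reject `work + torque` while still accepting it under a purely unit-based check, which is the failure mode the paper targets.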

Paper Nr: 25
Title:

MDE-Based Graphical Tool for Modeling Data Provenance According to the W3C PROV Standard

Authors:

Marcos A. Vieira and Sergio T. Carvalho

Abstract: The rise of the Internet of Things (IoT) and ubiquitous computing has led to a significant increase in data volumes, necessitating robust management. Data provenance is crucial for ensuring data reliability, integrity, and quality, tracking the origins, transformations, and movements of data. The W3C PROV standard, with syntaxes like PROV-N, provides textual and graphical representations for expressing and storing data provenance. However, despite its importance, there is a lack of user-friendly graphical tools for developers, particularly in IoT and ubiquitous computing. This paper addresses this gap by introducing an innovative graphical tool that enables the creation of user-friendly graphical data provenance models adhering to the W3C PROV standard. The tool offers an intuitive interface for developers, simplifying the process of obtaining PROV-N code from the generated provenance graph. We demonstrate the tool’s versatility across diverse domains, emphasizing its role in bridging the gap in graphical provenance modeling. The paper outlines the Model-Driven Engineering (MDE) methodology used in the tool development, and introduces its underlying Ecore metamodel aligned with the PROV data model (PROV-DM). Evaluation results of the metamodel are presented, and potential applications of the tool are discussed, emphasizing its contribution to enhancing provenance-aware applications.

Paper Nr: 26
Title:

Coding by Design: GPT-4 Empowers Agile Model Driven Development

Authors:

Ahmed R. Sadik, Sebastian Brulin and Markus Olhofer

Abstract: Generating code from natural language using Large Language Models (LLMs) such as ChatGPT seems groundbreaking. Yet, with more extensive use, it is evident that this approach has its own limitations. The inherent ambiguity of natural language poses challenges to auto-generating synergistically structured artifacts that can be deployed. Model Driven Development (MDD) is therefore highlighted in this research as a proper approach to overcome these challenges. Accordingly, we introduce an Agile Model-Driven Development (AMDD) approach that enhances code auto-generation using OpenAI’s GPT-4. Our work emphasizes "Agility" as a significant contribution to the current MDD approach, particularly when the model undergoes changes or needs deployment in a different programming language. Thus, we present a case study showcasing a multi-agent simulation system of an Unmanned Vehicle Fleet (UVF). In the first and second layers of our proposed approach, we model the structural and behavioural aspects of the case study using the Unified Modeling Language (UML). In the next layer, we introduce two sets of meta-modelling constraints that minimize model ambiguity. The Object Constraint Language (OCL) is applied to fine-tune code construction details, while the FIPA ontology is used to shape the communication semantics. Ultimately, GPT-4 is used to auto-generate code from the model in both Java and Python. The Java code is deployed within the JADE framework, while the Python code is deployed in the PADE framework. Concluding our research, we engaged in a comprehensive evaluation of the generated code. From a behavioural standpoint, the auto-generated code not only aligned with the expected UML sequence diagram, but also added new behaviours that improved the interaction among the classes. Structurally, we compared the complexity of code derived from UML diagrams constrained solely by OCL to that influenced by both OCL and the FIPA ontology. Results showed that the ontology-constrained model produced inherently more intricate code; however, it remains manageable. Thus, other constraints can still be added to the model without exceeding a high-risk complexity threshold.

Paper Nr: 29
Title:

Defining KPIs for Executable DSLs: A Manufacturing System Case Study

Authors:

Hiba Ajabri, Jean-Marie Mottu and Erwan Bousse

Abstract: Early performance evaluation is essential when designing systems in order to enable decision making. This requires both a way to simulate the system in an early state of design and a set of relevant Key Performance Indicators (KPIs). Model-Driven Engineering and Domain-Specific Languages (DSLs) are well suited for this endeavor, e.g. using executable DSLs fit for early simulation. However, KPIs are commonly tailored to a particular system, and therefore need to be redefined for each of its variations. In light of these problems, this paper examines how KPIs can be defined directly at the level of a DSL, thus making them available to domain experts at the model level. We demonstrate this idea through a case study centered on a DSL to define, simulate, and evaluate the performance of simple manufacturing systems. Model simulation is performed by the DSL operational semantics, and yields execution traces that can then be analyzed by KPIs defined at the DSL level. Performance results are captured using the Structured Metrics Meta-model. We illustrate the usefulness of the proposed approach and KPIs by evaluating a simple hammer factory model and its subsequent reconfiguration.
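The idea of a trace-level KPI can be sketched as follows; the trace format and event names are illustrative assumptions, not the paper's metamodel. A KPI such as throughput is written once against generic trace events, so it applies to any model conforming to the DSL rather than to one particular factory.

```python
# A trace is assumed here to be a list of (time, event_type, payload)
# records produced by the DSL's operational semantics during simulation.
trace = [
    (0.0, "start",  "hammer-1"),
    (4.0, "finish", "hammer-1"),
    (2.0, "start",  "hammer-2"),
    (9.0, "finish", "hammer-2"),
]

def makespan(trace):
    # total simulated time span covered by the trace
    times = [t for t, _, _ in trace]
    return max(times) - min(times)

def throughput(trace):
    # finished items per unit of simulated time: a KPI defined at the
    # DSL level, independent of which manufacturing model produced it
    finished = sum(1 for _, ev, _ in trace if ev == "finish")
    return finished / makespan(trace)

print(throughput(trace))  # 2 items over 9 time units
```

Reconfiguring the model only changes the trace that the semantics emits; the KPI definitions stay untouched, which is the reuse benefit the abstract describes.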

Paper Nr: 39
Title:

Jabuti CE: A Tool for Specifying Smart Contracts in the Domain of Enterprise Application Integration

Authors:

Mailson Teles-Borges, Jose Bocanegra, Eldair F. Dornelles, Sandro Sawicki, Antonia M. Reina-Quintero, Carlos Molina-Jimenez, Fabricia Roos-Frantz and Rafael Z. Frantz

Abstract: Some decentralised applications (such as blockchains) take advantage of the services that smart contracts provide. Currently, each blockchain platform is tightly coupled to a particular contract language; for example, Ethereum supports Serpent and Solidity, while Hyperledger prefers Go. To ease contract reuse, contracts can be specified in platform-independent languages and automatically translated into the languages of the target platforms. With this approach, the task is reduced to specifying the contract using the language’s statements. This can be tedious and error-prone unless the language is accompanied by supporting tools. This paper presents Jabuti CE, a model-driven tool that assists users of Jabuti DSL in specifying platform-independent contracts for Enterprise Application Integration. We have implemented Jabuti CE as an extension for Visual Studio Code.

Paper Nr: 46
Title:

AI-Based Recognition of Sketched Class Diagrams

Authors:

Thomas Buchmann and Jonas Fraas

Abstract: Class diagrams are at the core of object-oriented modeling. They are the foundation of model-driven software engineering and backed up by a wide range of supporting tools. In most cases, source code may be generated from class diagrams, which increases developer productivity. In this paper we present an approach that allows the automatic conversion of hand-drawn sketches of class diagrams into corresponding UML models and thus can help to speed up the development process significantly.

Paper Nr: 53
Title:

Automatic Generation of Models from Their Metamodels Using Multilayer Perceptron Network

Authors:

Karima Berramla, El Abbassia Deba and Abou El Hassene Benyamina

Abstract: Model-Driven Engineering (MDE) is one of the most recent disciplines of software development that enables us to use models and their transformations during the software life-cycle, from requirements to implementation and maintenance, instead of using classic programming languages. In this context, the generation of models is generally done manually to ensure conformity to their metamodels. Several works (Batot and Sahraoui, 2016; Fleurey et al., 2009; Ben Fadhel et al., 2012) have been proposed to automate this process, but none of them can ensure complete automation without starting from a set of models already defined by the user or with a good verification of all conformity constraints. In this paper, our objective is not only (i) to generate the models automatically and verify all conformity constraints, but also (ii) to use one of the most popular machine learning techniques to solve modeling problems in the MDE context.

Paper Nr: 43
Title:

Qualitative Reasoning and Design Space Exploration

Authors:

Baptiste Gueuziec, Jean-Pierre Gallois and Frédéric Boulanger

Abstract: The design of complex systems is a challenging task, combining optimization and testing techniques. Design space exploration allows the designer to optimize the parameters of the system, but is usually very costly and induces many inherent difficulties that require specific computation techniques to be solved. Qualitative reasoning was first introduced to describe system structures and causality, and was developed to study the behavior of a system and discretize its state space into qualitative states. It is now used in diagnosis and verification to reduce computation time and required precision while still preserving the major properties of the behavior. This article presents how we think qualitative reasoning can be applied to design space exploration in addition to state space discretization, i.e. how it may help in the choice of parameters for a system by reducing the computational cost of this exploration.

Paper Nr: 57
Title:

Towards a Domain Model for Learning and Teaching

Authors:

Oleg Shvets, Kristina Murtazin, Martijn Meeter and Gunnar Piho

Abstract: In software engineering, a domain model is a conceptual model that represents the concepts and relationships within a particular domain (education, banking, transportation, etc.) by modelling the behaviour and data of these concepts and relationships. This paper presents a novel domain model for learning and teaching in higher educational institutions, addressing the evolving needs of digitized education systems. We integrate concepts from domain engineering, model-driven architecture, and business archetypes to construct a comprehensive framework. Our model, developed by a team of experts from Tallinn University of Technology and Vrije Universiteit Amsterdam, focuses on the interplay between various educational components, including curriculum design, student assessment, and personalized feedback mechanisms. Two educational scenarios – providing feedback on student assignments and managing workplace-based learning – are examined to evaluate the model’s usability. Our contribution lies in offering a structured framework for modelling educational processes, thereby paving the way for more universal and interoperable domain models and ontologies that also support personalized learning and feedback.

Area 3 - Applications and System Development

Full Papers
Paper Nr: 16
Title:

The Lifecycle of Data Clumps: A Longitudinal Case Study in Open-Source Projects

Authors:

Nils Baumgartner and Elke Pulvermüller

Abstract: This study explores the characteristics of data clumps, a specific type of code smell, in software projects. Code smells are characteristics in source code which indicate a deeper problem. Data clumps are identical groups of variables in different parts of the code. The lack of datasets for data clumps can make it difficult to identify and manage these sets in software projects. We developed a tool to parse source code projects into an abstract syntax tree, facilitating detailed analysis of data clumps. Our findings reveal a notable presence of data clumps forming clusters, complicating manual refactoring. In this paper, we propose a unified reporting format for data clump detection and provide a granular dataset for data clumps. Additionally, we outline a detection methodology that can be applied across different programming languages and frameworks. We also provide a first look into the lifecycle and evolution of data clumps, showing that data clumps either remain in projects or accumulate over time. This work provides a foundation for further research aimed at enhancing software quality through identifying and refactoring data clumps, offering a starting point for discussions and improvements in this domain.

Short Papers
Paper Nr: 28
Title:

Virtual61850: A Model-Driven Tool to Support the Design and Validation of Virtualized Controllers in Power Industry

Authors:

Nadine Kabbara, Timothe Grisot and Jerome Cantenot

Abstract: Model-driven engineering (MDE) has seen rising interest from the power industry, particularly for supporting application and standards development. Modern power systems still involve a great deal of manual, error-prone configuration work; examples include specification development and the configuration of communicating controllers. Recently, new concepts such as virtualized controllers (deployed in virtual machines or containers) have emerged. However, the concept remains rather new for power system experts, and integrating virtualized controllers into their existing engineered information systems is far from straightforward. This study therefore proposes to tackle some of the resulting engineering and integration problems by exploiting the benefits offered by MDE. Virtual61850 is a model-driven tool that supports the configuration of future virtualized controllers within the power industry’s information systems. It supports basic industry requirements including platform independence, standardized legacy configuration languages (the IEC 61850 standard), modularity, and integrated testing and validation. The application was benchmarked for the scalable model creation, editing, and validation necessary for advanced industrial simulation and field deployment of virtualized controllers.
Download

Paper Nr: 15
Title:

Model-Based Assessment of Conformance to Acknowledged Security-Related Software Architecture Good Practices

Authors:

Monica Buitrago, Isabelle Borne and Jérémy Buisson

Abstract: Security-by-design considers security throughout the whole development lifecycle, to detect and fix potential issues as early as possible. With this approach, the software architect should assess the security level of the software architecture, to predict whether the software under development will have security issues. Previous works proposed several metrics to measure the attack surface, the attackability, and the satisfaction of security requirements on the software architecture. However, proving the correlation between these metrics and security is far from trivial. To circumvent this difficulty, we propose new metrics rooted in CWE, NIST guidelines and security patterns. Our four novel metrics therefore measure the conformance of the software architecture to these acknowledged security-related recommendations. The use of our metrics is evaluated through case studies.
Download

Paper Nr: 33
Title:

Cycle-Accurate Virtual Prototyping with Multiplicity

Authors:

Daniela Genius and Ludovic Apvrille

Abstract: Model-based design for large applications, especially the mapping of applications’ tasks to execution nodes, remains a challenge. In this paper, we explore applications comprising multiple identical software tasks intended for deployment across diverse execution nodes. While these tasks are expected to have a unified representation in their SysML-like block diagrams, each must be specifically mapped to individual processor cores to achieve granular performance optimization. Additionally, inter-task communications should be allocated across multiple channels. We further demonstrate a method for automatically generating parallel POSIX C code suitable for a multiprocessor-on-chip. Our approach has proven especially effective for high-performance streaming applications, notably when such applications have a master-worker task structure.
Download
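The mapping step the abstract describes, identical tasks modelled once but each mapped to an individual core, with communications spread across channels, can be sketched as follows. This is a hypothetical illustration; the core and channel names are invented, not output of the authors' SysML-based tool:

```python
# Four identical worker tasks, all stemming from one SysML-like block.
workers = [f"worker_{i}" for i in range(4)]
cores = ["cpu0", "cpu1", "cpu2", "cpu3"]
channels = ["bus_a", "bus_b"]

# Each task instance is mapped to an individual core, enabling
# per-core performance optimization despite the unified model.
task_map = {w: cores[i % len(cores)] for i, w in enumerate(workers)}

# Inter-task (worker-to-master) communications are allocated
# round-robin across multiple channels.
comm_map = {(w, "master"): channels[i % len(channels)]
            for i, w in enumerate(workers)}
```

A round-robin assignment is only one possible policy; the point is that the multiplicity of a single modelled task is resolved into distinct per-instance mappings.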

Paper Nr: 44
Title:

Constructive Assertions with Abstract Models

Authors:

Yoonsik Cheon

Abstract: An assertion is a statement that specifies a condition that must be true at a particular point during program execution. It serves as a tool to ensure the program functions as intended, reducing the risk of introducing subtle errors. Usually expressed algebraically, an assertion uses Boolean expressions to specify permissible relationships among program variables. In complex scenarios, calculating the expected value of a program variable often proves more effective than specifying the constraints it must adhere to. In this paper, we present an approach to formulating assertions using abstract models in a constructive manner, which complements the traditional algebraic style. Constructive assertions empower programmers to articulate comprehensive assertions, including pre- and postconditions, in a succinct, comprehensible, reusable, and maintainable manner.
Download
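The contrast the abstract draws, constraining a result algebraically versus constructing its expected value from an abstract model, can be illustrated with a small hypothetical Python example (the paper's own notation and model library may differ):

```python
def remove_duplicates(xs: list) -> list:
    """Keep the first occurrence of each element, preserving order."""
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

xs = [3, 1, 3, 2, 1]
result = remove_duplicates(xs)

# Algebraic style: Boolean constraints the result must satisfy.
assert all(result.count(v) == 1 for v in result)  # no duplicates
assert set(result) == set(xs)                     # same elements

# Constructive style: compute the expected value from an abstract
# model (here an insertion-ordered set) and compare directly.
expected = list(dict.fromkeys(xs))
assert result == expected
```

The constructive assertion pins down the exact expected value in one comparison, whereas the algebraic constraints above are individually weaker (neither alone guarantees the order is preserved).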

Paper Nr: 45
Title:

Single-Sourcing for Desktop and Web Applications with EMF Parsley

Authors:

Lorenzo Bettini

Abstract: While Java allows a compiled program to run on different operating systems where a Java Virtual Machine is installed, a Java desktop application cannot be directly executed as a web application and vice versa. Additional tools and techniques must be employed to achieve a “single-sourcing” mechanism. The Eclipse project EMF Parsley, built on top of EMF, aims to simplify implementing EMF applications by hiding most EMF internal details, providing reusable UI (User Interface) components, and providing declarative customization mechanisms through a DSL with IDE support. In this paper, we show how EMF Parsley allows the developer to achieve “single-sourcing” for desktop and web applications: the developer can implement a desktop application that can also be deployed as a web application, reusing most of the source code, including the UI code, with minimal effort to specify a small set of specific classes that start the application for the particular target platform.
Download

Paper Nr: 51
Title:

Using Personalised Authentication Flows to Address Issues with Traditional Authentication Methods

Authors:

Jack Holden and Deniz Cetinkaya

Abstract: Nowadays, a huge proportion of people’s data and files is stored online behind a password. While better and more secure methods exist, traditional password-based authentication remains the predominant approach. Given current trends in computing power and advances in emerging technologies, the need to migrate to improved authentication frameworks is becoming ever more pressing. This paper explores the limitations of password-based authentication and how a gradual migration could begin using model-driven approaches, reducing the significance of passwords in authentication and encouraging the adoption of newer, more secure methods whilst still ensuring a low barrier to access. The paper proposes a new model-based authentication approach that returns choice to the user. Users would be given the ability to choose their own authentication flow, helping to bridge the digital divide by ensuring that people of all technical proficiencies, demographics, and socio-economic classes can use more secure authentication flows without sacrificing usability or accessibility. This would be achieved through a modular technological solution that allows developers to add more secure authentication methods as they emerge. This modularity, combined with user choice, will ultimately play a major role in improving the uptake of, and migration to, newer authentication methods, helping to mitigate future risks.
Download
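The modular, user-chosen flow the abstract describes can be sketched as a registry of pluggable authentication steps. This is a hypothetical illustration, the method names, credentials, and flow model are invented for the sketch and are not the paper's API:

```python
from typing import Callable, Dict, List

# Registry of pluggable authentication methods; developers can
# register newer, more secure methods here as they emerge.
METHODS: Dict[str, Callable[[dict], bool]] = {
    "password": lambda creds: creds.get("password") == "s3cret",
    "otp":      lambda creds: creds.get("otp") == "123456",
}

def authenticate(flow: List[str], creds: dict) -> bool:
    """Run the user's personalised flow: every chosen step must pass."""
    return all(METHODS[step](creds) for step in flow)

# One user keeps a password-plus-OTP flow; another opts out of
# passwords entirely, without the application changing.
assert authenticate(["password", "otp"],
                    {"password": "s3cret", "otp": "123456"})
assert authenticate(["otp"], {"otp": "123456"})
```

Because the flow is data rather than code, each user can strengthen (or simplify) their own authentication without any change to the application, which is the low-access-barrier property the paper argues for.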

Paper Nr: 58
Title:

Collaborative Computing Paradigms: A Software Systems Architecture for Dynamic IoT Environments

Authors:

Prashant G. Joshi and Bharat M. Deshpande

Abstract: Connected systems are omnipresent: they are used to monitor and control remotely and to collect data and information. A variety of software system architectures have been designed that exploit computing paradigms – Edge, Fog, Mobile, and Cloud – to process and analyse these data. Such analysis is pivotal to decision making and increases operational efficiency. IoT has transformed industries such as logistics, healthcare, industrial automation, and agriculture, and continues to refine decision-making processes, ultimately enhancing system operations and efficiency. Expanding the service capability of systems has been a topic of academic research; it rests on the foundation of bringing all resources into a unified resource pool and making the different computing facilities collaborate, which has been the subject of many theoretical and experimental studies. Industry applications have adopted such system architectures, enhancing the applications’ capabilities. This paper proposes a collaborative and unified method of system software architecture for IoT environments that leverages collaboration among computing paradigms. With a view to expanding services as and when needed, a unified, dynamic, and distributed analytics software system architecture was explored and evaluated experimentally. The proposed collaborative method is validated through its application to vehicle and driver behaviour analysis and data-center cooling systems.
Download