Limitations of Model Repositories
Philippe Desfray, Independent Researcher, France
The Role of Foundational Ontologies in Deep Modeling
Colin Atkinson, University of Mannheim, Germany
Models in Software Architecture Derivation and Evaluation - Challenges and Opportunities
Silvia Abrahão, Universitat Politecnica de Valencia, Spain
Verification and Performance Analysis of Embedded and Cyber-Physical Systems using UPPAAL
Kim G. Larsen, Aalborg University, Denmark
On Interaction in Data Mining
Andreas Holzinger, Medical University Graz, Austria
Limitations of Model Repositories
Philippe Desfray
Independent Researcher
France
Brief Bio
Philippe Desfray has been involved in model-driven development research, standardization, and tooling for 17 years. Co-founder of the SOFTEAM company (650 people in 2013), he was a precursor of object-oriented modeling (inventor of the Class Relation OO model in 1990, already supported at that time by a CASE tool) and of MDA (1994, with a dedicated tool and an Addison-Wesley book on the subject). He has been one of the main contributors to the UML standardization team at the OMG since its origin, and has contributed to many other modeling language standardization efforts (BPMN, SOA, ...). Philippe serves as VP for R&D at SOFTEAM, is a key expert in the modeling and methodology fields, drives international R&D projects, and leads the Modelio UML/MDA CASE tool development team (www.modeliosoft.com; www.modelio.org).
Abstract
In today’s era of data sharing, immediate communication and world-wide distribution of participants, at a time when teams are asked to be ever more agile, the traditional approach of model repositories no longer meets expectations. Centralized organization has become inconsistent with the way in which the world and its companies function.
In today’s world, it is virtually impossible to set up a model repository for different enterprise entities, large-scale systems or projects, which can be accessed by all participants (readers, contributors, partners, and so on). Standard techniques based on a centralized repository with a designated manager come up against a vast variety of situations, with participants who neither want nor are able to conform to uniform rules and management.
This does not allow model-based knowledge management at an enterprise or global level. It inhibits agility and open team cooperation. We believe that this is a major hurdle to the dissemination of model-based approaches; the reality of heavyweight model management hinders the most appealing of model-based approaches.
Based on the latest technologies and research for model repositories, this talk will explain why current model repository technologies are a major drawback and will present a way of supporting highly decentralized organizations, and agile and open team cooperation. Scaling up and widening the scope of model repositories will enable modeling support to be applied to the "extended enterprise", which incorporates its eco-system (providers, partners, and so on).
The Role of Foundational Ontologies in Deep Modeling
Colin Atkinson
University of Mannheim
Germany
http://swt.informatik.uni-mannheim.de/
Brief Bio
Colin Atkinson has been the leader of the Software Engineering Group at the University of Mannheim since April 2003. Before that he held positions at the University of Kaiserslautern, the Fraunhofer Institute for Experimental Software Engineering and the University of Houston – Clear Lake. His research interests are focused on the use of model-driven and component-based approaches in the development of dependable and adaptable computing systems. He was a contributor to the original UML development process and is one of the original developers of the deep (multi-level) approach to conceptual modelling. He received his Ph.D. and M.Sc. in computer science from Imperial College, London, in 1990 and 1985 respectively, and his B.Sc. in Mathematical Physics from the University of Nottingham in 1983.
Abstract
Given the fundamental role of domain models in software and knowledge engineering, it is important that they be as precise and detailed as possible whilst at the same time being simple and easy to understand. This presents a challenge, however, because higher precision and detail often lead to more complexity and obscurity. To overcome this tension it is necessary to reconcile approaches focused on maximizing the richness of domain models with approaches focused on maximizing their simplicity. The former (richness) is addressed by vocabularies (i.e. sets of linguistic modeling concepts, a.k.a. metamodels) optimized to capture the full range of real-world phenomena perceived by human beings. This is the role of foundational (a.k.a. upper-level) ontologies such as BWW or UFO. The latter (simplicity) is achieved by modelling infrastructures that allow the basic ingredients of conceptual modelling (classification and generalization) to be expressed with minimum accidental complexity. This is the focus of deep (i.e. multi-level) modelling. In this talk Colin Atkinson will discuss the challenges faced in reconciling the goals of richness and simplicity in conceptual modelling and the benefits that can be gained through a synergetic integration of foundational ontologies and deep modelling.
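As a rough illustration of the multi-level classification at the heart of deep modelling (a generic sketch, not any specific deep-modelling framework), Python's metaclasses can mimic a "clabject"-like entity that is simultaneously an instance of the level above and a type for the level below. The product-type domain and all names here are invented for illustration.

```python
class ProductType(type):
    """Level O2: classifies kinds of products (a metaclass)."""
    def __new__(mcls, name, bases, ns, tax_rate=0.0):
        cls = super().__new__(mcls, name, bases, ns)
        cls.tax_rate = tax_rate          # an attribute of the O1 class itself
        return cls

    def __init__(cls, name, bases, ns, tax_rate=0.0):
        super().__init__(name, bases, ns)

class Book(metaclass=ProductType, tax_rate=0.07):
    """Level O1: an instance of ProductType AND a class for concrete books."""
    def __init__(self, title, price):
        self.title = title
        self.price = price

moby = Book("Moby-Dick", 9.99)           # Level O0: a concrete book

print(isinstance(Book, ProductType))     # True: Book is classified one level up
print(isinstance(moby, Book))            # True: moby is classified at level O1
```

The point of the sketch is only that one entity (Book) participates in two classification relationships at once; dedicated deep-modelling infrastructures make such chains of arbitrary depth first-class, rather than relying on a host language's two built-in levels.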
Models in Software Architecture Derivation and Evaluation - Challenges and Opportunities
Silvia Abrahão
Universitat Politecnica de Valencia
Spain
http://www.dsic.upv.es/~sabrahao/
Brief Bio
Silvia Abrahão is an Associate Professor at the Department of Information Systems and Computation at the Universitat Politècnica de València (UPV) in Spain. She received a PhD in Computer Science from the UPV in 2004. Her research interests include quality assurance in model-driven software development, integration of usability in software development, model-driven software product line development, size and effort estimation in model-driven development, and empirical investigation of the benefits of model-based software development approaches. She has led and participated in several research projects on these topics. In particular, she is currently the principal researcher of the SOPLE (Service-Oriented Product Line Engineering) project funded by the Spanish and Brazilian governments and the MULTIPLE (Multimodeling Approach for Quality-Aware Software Product Lines) project funded by the Spanish Ministry of Science and Innovation, in close collaboration with the Software Engineering Institute (SEI) and the Rolls-Royce Software Centre of Excellence. Silvia has been a visiting scholar at the SEI, Carnegie Mellon University (USA) in 2010 and 2012, the Belgian Laboratory of Computer-Human Interaction at the Université catholique de Louvain (Belgium) in 2007, and Ghent University (Belgium) in 2004 and 2006. She regularly serves on the program committees of several major international conferences related to software engineering, human-computer interaction, web engineering and empirical software engineering. She is currently the General Co-Chair of MODELS 2014.
Abstract
Product architecture derivation is a crucial activity in Software Product Line (SPL) development, since an inadequate decision during architecture design directly impacts the quality of the product under development. Deriving individual products from shared software assets is a time-consuming and expensive activity. Although some methods for architecture derivation and evaluation have been proposed over the past years, a number of challenging issues remain as to how to derive product architectures that meet the required quality attributes of the system. In this talk, I will give an overview of the state of the art in using models for architecture derivation and evaluation, discuss the challenges faced in deriving and evaluating product architectures, and present the benefits that can be gained from the use of a multimodel (i.e., a set of interrelated models that represent different viewpoints of a particular system), which allows the software engineer to analyze the impact of different design choices on software architectures before they are reified into runnable code.
Verification and Performance Analysis of Embedded and Cyber-Physical Systems using UPPAAL
Kim G. Larsen
Aalborg University
Denmark
Brief Bio
Kim Guldstrand Larsen is a full professor in computer science and director of the Centre for Embedded Software Systems (CISS). He received his Ph.D. in Computer Science from Edinburgh University in 1986, and holds honorary doctorates (Honoris Causa) from Uppsala University (1999) and from École Normale Supérieure de Cachan, Paris (2007). He has also been an Industrial Professor in the Formal Methods and Tools Group at Twente University, The Netherlands. His research interests include modeling, verification and performance analysis of real-time and embedded systems, with applications and contributions to concurrency theory and model checking. In particular, since 1995 he has been prime investigator of the tool UPPAAL and co-founder of the company UP4ALL International. He has published more than 230 publications in international journals and conferences, has co-authored 6 software tools, and is involved in a number of Danish and European projects on embedded and real-time systems. His h-index (according to Harzing's Publish or Perish, January 2012) is 63. He is a life-long member of the Royal Danish Academy of Sciences and Letters, Copenhagen, a member of the Danish Academy of Technical Sciences, and a member of Academia Europaea. Since January 2008 he has been a member of the board of the Danish Independent Research Councils, as well as Danish National Expert for the European ICT programme.
Abstract
Timed automata, priced timed automata and energy automata have emerged as useful formalisms for modeling real-time and energy-aware systems as found in several embedded and cyber-physical systems. Whereas the real-time model checker UPPAAL allows for efficient verification of hard timing constraints of timed automata, model checking of priced timed automata and energy automata is in general undecidable -- a notable exception being cost-optimal reachability for priced timed automata, as supported by the branch UPPAAL Cora. These obstacles are overcome by UPPAAL-SMC, the new highly scalable engine of UPPAAL, which supports (distributed) statistical model checking of stochastic hybrid systems with respect to weighted metric temporal logic. The talk will review UPPAAL-SMC and its applications to the domains of energy-harvesting wireless sensor networks, schedulability and execution-time analysis for mixed-criticality systems, and battery-aware scheduling with respect to correctness, fault and performance analysis. In the talk I will also indicate how UPPAAL-SMC may be of benefit for counterexample generation, refinement checking, testing, controller synthesis and optimization.
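The core idea behind statistical model checking, in contrast to exhaustive state-space exploration, can be sketched in a few lines: simulate the stochastic model many times and estimate the probability of a property from the fraction of satisfying runs. The toy model below (a two-step stochastic task chain, invented for illustration; it is not an UPPAAL model or API) estimates the probability that the chain meets a deadline.

```python
import random

def run_once(rng):
    """One simulation run: total duration of two sequential tasks,
    each with a duration drawn uniformly from [1, 3] time units."""
    return rng.uniform(1.0, 3.0) + rng.uniform(1.0, 3.0)

def estimate_deadline_probability(deadline, runs=10_000, seed=42):
    """Monte Carlo estimate of P(total duration <= deadline)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(runs) if run_once(rng) <= deadline)
    return hits / runs

if __name__ == "__main__":
    p = estimate_deadline_probability(deadline=5.0)
    print(f"Estimated P(total time <= 5.0) = {p:.3f}")
```

For this toy model the exact answer is computable analytically (0.875), so the estimate can be sanity-checked; the appeal of the simulation-based approach is that it still works when the model is far too rich for exact analysis, which is precisely the regime UPPAAL-SMC targets.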
On Interaction in Data Mining
Andreas Holzinger
Medical University Graz
Austria
https://www.aholzinger.at/
Brief Bio
Andreas Holzinger is head of the Holzinger Group (Human-Centered AI) at the Medical University Graz and Visiting Professor for explainable AI at the Alberta Machine Intelligence Institute in Edmonton, Canada. Since 2016 he has been Visiting Professor for Machine Learning in Health Informatics at Vienna University of Technology. Andreas was Visiting Professor for Machine Learning and Knowledge Extraction in Verona, at RWTH Aachen, University College London and Middlesex University London. He serves as consultant for the Canadian, US, UK, Swiss, French, Italian and Dutch governments, for the German Excellence Initiative, and as national expert in the European Commission. Andreas obtained a Ph.D. in Cognitive Science from Graz University in 1998 and a second Ph.D. (Habilitation) in Computer Science from TU Graz in 2003. Andreas Holzinger works on Human-Centered AI (HCAI), motivated by efforts to improve human health. Andreas pioneered interactive machine learning with the human-in-the-loop. For his achievements, he was elected a member of Academia Europaea in 2019. Andreas is paving the way towards multimodal causability, promoting robust interpretable machine learning, and advocating a synergistic approach to put the human in control of AI and align AI with human values, privacy, security, and safety.
Abstract
One of the grand challenges in our networked world is the handling of large, weakly structured and unstructured data sets. This is most evident in Biomedicine (Medical Informatics + Bioinformatics): the trend towards personalized medicine results in increasingly large amounts of (-omics) data. In the life sciences domain, most data models are characterized by complexity, which makes manual analysis very time-consuming and often practically impossible. To deal with such data, solutions from the machine learning community are indispensable, and it is marvelous what sophisticated algorithms can do within high-dimensional spaces. We want to enable a domain-expert end user to interactively deal with these algorithms and data, so as to enable novel discoveries and previously unknown insights. Our quest is to make such approaches interactive, hence to enable a computational non-expert to gain insight into the data, and yet to find a starting point: "What is interesting?". When mapping the results back from arbitrarily high-dimensional spaces R^n into R^2, there is always the danger of modeling artifacts, which may be interpreted wrongly. A synergistic combination of the methodologies and approaches of two areas offers ideal conditions for working on solutions to such problems: Human-Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of supporting human intelligence with machine intelligence. Both fields have many unexplored, complementary intersections, and the aim is to combine the strengths of automatic, computer-based methods, both in time and space, with the strengths of human perception and cognition, e.g. in discovering patterns, trends, similarities, anomalies etc. in data.
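The danger of projection artifacts mentioned above can be made concrete with a generic PCA sketch (not any specific KDD tool; the data here is synthetic): two points that are clearly separated in R^10 can land almost on top of each other in the 2-D projection when their difference lies along a low-variance direction.

```python
import numpy as np

def pca_2d(points):
    """Project an (m, n) point cloud onto its first two principal components."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt = PCs
    return centered @ vt[:2].T

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 10))
cloud[:, 2:] *= 0.05        # axes 3..10 carry almost no variance

# Two points identical except along the (nearly flat) last axis:
x, y = cloud[0].copy(), cloud[0].copy()
y[9] += 1.0                 # unit distance apart in R^10
proj = pca_2d(np.vstack([cloud, x, y]))

d_high = np.linalg.norm(x - y)                  # distance in R^10: 1.0
d_low = np.linalg.norm(proj[-2] - proj[-1])     # nearly 0 after projection
print(f"distance in R^10: {d_high:.2f}, after PCA to R^2: {d_low:.3f}")
```

A viewer inspecting only the 2-D plot would see one point where there are two, i.e. the projection itself has manufactured an apparent similarity, which is exactly why interactive inspection of the original space matters.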