Program
At RET'19, we will continue with the presentation format introduced at RET'18: the X-ray sessions. The goal is to facilitate conversation between the workshop audience and presenters. At the end, we hope that all participants leave the workshop with the feeling (in the best case, even with evidence) that they have learned something.
Each paper presentation is allotted between 45 and 60 minutes. Presenters will be coached by a mentor (a member of the RET'19 OC). Each presentation consists of:
- 15-20 minutes paper presentation (as in regular conferences)
- 20-30 minutes X-ray, consisting of one or more of the following activities (non-exhaustive list):
- provide a demo of the presented tool / approach
- explain and demonstrate pitfalls in the presented analysis / technique
- involve the audience in a quick tutorial / exercise
- use polling to get input from the audience (see directpoll.com)
- gather data from the audience as data points for future studies
- pilot ideas with the audience / use their expertise
- demonstrate / discuss what has happened since the study presented in the paper was written up. New developments? Unforeseen obstacles? New ideas?
- elaborate on how the study presented in the paper contributes to RET
- 10 minutes discussion/questions
The OC is looking forward to meeting you in Montréal!
Session 1 |
09:00 - 09:30 |
Welcome! Who's who
Gregory Gay, General chair for RET'19
|
09:30 - 10:30 |
Keynote 1: Bernd Lehnert, Chief Development Architect, SAP
|
Bernd Lehnert (SAP, Canada)
After a short overview of SAP, major software engineering processes at SAP will be introduced. New business models or new business scenarios sometimes lead to disruptive changes to these processes. Based on two disruptive changes from the recent past, their impact on software engineering processes at SAP will be explained with real-world examples.
Coffee Break (10:30-11:00) |
Session 2 |
11:00 - 12:00 |
Keynote 2: Betty H.C. Cheng, Professor, Michigan State University, USA
|
Betty H.C. Cheng (Michigan State University, USA)
Increasingly, software is expected to autonomously adapt its behavior at run-time in response to changing conditions in the supporting computing infrastructure and the surrounding physical environment. In order for an adaptive system to be trusted, it is important to have mechanisms to ensure that the program functions correctly during and after adaptations. It is challenging to develop and validate a self-adaptive system (SAS) that satisfies requirements, particularly when requirements can change at run time. Testing at design time can help verify and validate that an SAS satisfies its specified requirements and constraints. While offline tests may demonstrate that an SAS is capable of satisfying its requirements before deployment, an SAS may encounter unanticipated system and environmental conditions that can prevent it from achieving its objectives. In working towards a requirements-aware SAS, this talk overviews a framework that supports run-time monitoring and adaptation of tests for evaluating whether an SAS satisfies, or is even capable of satisfying, its requirements given its current execution context. Then we describe specific techniques that instantiate this framework, which apply a multidisciplinary approach to support requirements-based adaptive testing of an SAS at run time.
Lunch (12:00-14:00) |
Session 3 |
14:00 - 14:45 |
Mentor: TBD
Simone Vuotto, Massimo Narizzano, Luca Pulina and Armando Tacchella
Download: Paper
|
Simone Vuotto, Massimo Narizzano, Luca Pulina and Armando Tacchella (University of Sassari and University of Genoa, Italy)
Test-Driven Development (TDD) is a software development process that relies on test case generation from requirements. As is customary, a test represents an expected behavior of the system under development. In the TDD development loop, if a test fails on a partial implementation of the system, the implementation needs to be refined in order to comply with the test; otherwise, further tests need to be created until the required coverage of the requirements is reached. The main bottleneck of TDD is that tests are generated manually, often on the basis of informally stated requirements.
In this paper we introduce a new automata-based test generation algorithm implemented in SpecPro, our library for supporting the analysis and development of formal requirements in cyber-physical systems. We consider specifications written in Linear Temporal Logic (LTL), from which we automatically extract trap properties representing the expected behaviour of the system under development.
With respect to manual generation, the main advantage of SpecPro is that it frees the developer from the burden of generating tests in order to achieve stated coverage targets. Our preliminary experiments show that SpecPro can handle specifications of small-but-critical components in an effective way.
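The TDD loop described in the abstract above can be illustrated with a small, self-contained sketch. All names here (`tdd_loop`, `run_test`, `refine`) are hypothetical illustrations of the general process, not part of SpecPro:

```python
# Minimal sketch of the TDD loop: tests are derived from requirements,
# a failing test triggers refinement of the implementation, and the loop
# continues until a coverage target over the requirements is reached.

def tdd_loop(requirements, implementation, run_test, refine, target_coverage):
    """Drive development by tests derived from requirements."""
    covered = set()
    for req in requirements:
        test = req["test"]  # a test derived from one requirement
        while not run_test(implementation, test):
            # Test fails on the partial implementation: refine it.
            implementation = refine(implementation, req)
        covered.add(req["id"])
        if len(covered) / len(requirements) >= target_coverage:
            break
    return implementation, covered

# Toy example: requirements over a 'double' function.
requirements = [
    {"id": "R1", "test": (2, 4)},  # double(2) == 4
    {"id": "R2", "test": (0, 0)},  # double(0) == 0
]

def run_test(impl, test):
    inp, expected = test
    return impl(inp) == expected

def refine(impl, req):
    # Hypothetical refinement step: replace the stub with a correct version.
    return lambda x: 2 * x

impl = lambda x: 0  # initial (partial) implementation
impl, covered = tdd_loop(requirements, impl, run_test, refine, 1.0)
print(sorted(covered))  # -> ['R1', 'R2']
```

In SpecPro's setting, the manually written tests of this sketch would instead be derived automatically from LTL trap properties.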
14:45 - 15:30 |
Mentor: TBD
Palash Bera and Abhimanyu Gupta
Download: Paper
|
Palash Bera and Abhimanyu Gupta (Saint Louis University, USA)
Since software test design is a manual process, test cases are generally prepared by hand. We use a structured method to extract business requirements and feed them into a tool (TestAlgo). The tool translates these requirements and automatically creates test cases as well as business requirements models, such as process models. We suggest that this tool can not only save time and effort in creating automated test cases but also handle changes in requirements efficiently.
Coffee Break (15:30-16:00) |
Session 4 |
16:00 - 17:00 |
Mentor: TBD
Andrés Paz and Ghizlane El Boussaidi
Download: Paper
|
Andrés Paz and Ghizlane El Boussaidi (École de Technologie Supérieure, Canada)
Engineering avionics software is a complex task, even more so due to its safety-critical nature. Aviation authorities require avionics software suppliers to provide appropriate evidence of achieving DO-178C objectives for the verification of the outputs of the requirements and design processes, and for requirements-based testing. This concern is leading suppliers to consider and incorporate more effective engineering methods that can support them in their verification and certification endeavours. This paper presents SpecML, a modelling language providing a requirements specification infrastructure for avionics software. The goal of SpecML is threefold: 1) enforce certification information mandated by DO-178C, 2) capture requirements in natural language to encourage adoption in industry, and 3) capture requirements in a structured, semantically rich formalism to enable requirements-based analyses and testing. The modelling language has been developed as a UML profile extending SysML Requirements. A reference implementation has been developed, and an empirical validation was performed in the context of an industrial avionics case study.
17:00 - 17:30 |
Lessons learned from the X-Ray sessions: a model for future workshops? (Mentors / Participants)
Workshop closing
|
Key Dates
Paper Submission: February 1, 2019
Deadline Extension: February 8, 2019
Author Notification: March 1, 2019
Camera-Ready Due: March 15, 2019
Workshop Date: May 28, 2019
Co-located with