Object-centric reflection with Bifrost

On Monday 10 December 2012, 13:30-14:30 INRIA Lille room B31 (new building), Jorge Ressia (University of Bern) will give a talk on “Object-centric reflection with Bifrost”


Abstract:
Reflective applications are able to query and manipulate the structure and behavior of a running system. This is essential for highly dynamic software that needs to interact with objects whose structure and behavior are not known when the application is written. Software analysis tools, like debuggers, are a typical example. Oddly, although reflection essentially concerns run-time entities, reflective applications tend to focus on static abstractions, like classes and methods, rather than objects. This is a phenomenon we call the object paradox, which makes developers less effective by drawing their attention away from run-time objects. To counteract this phenomenon, we propose a purely object-centric approach to reflection. Existing reflective mechanisms provide object-specific capabilities only as a secondary feature. Object-centric reflection turns this around and makes object-specific capabilities the central reflection mechanism. This change in the reflection architecture allows a unification of various reflection mechanisms and a solution to the object paradox.

Bifröst is an object-centric reflective system based on first-class meta-objects. Development itself is enhanced with this new approach: talents are dynamically composable units of reuse, and object-centric debugging avoids the object paradox when debugging. Software analysis also benefits from object-centric reflection, through Chameleon, a framework for building object-centric analysis tools, and MetaSpy, a domain-specific profiler.
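
To give a flavour of the idea in a widely available language (a hedged Python sketch, not Bifröst's actual Smalltalk API), the snippet below attaches a hypothetical MetaObject to a single instance so that only that object's behaviour is adapted; all class, method and variable names are invented.

```python
# Hypothetical sketch of object-centric reflection: a meta-object adapts the
# behaviour of ONE object, leaving its class and every other instance untouched.
import types

class MetaObject:
    """Intercepts selected message sends to a single target object."""
    def __init__(self, target):
        self.target = target

    def intercept(self, method_name, before):
        original = getattr(type(self.target), method_name)

        def wrapper(this, *args, **kwargs):
            before(this, *args, **kwargs)           # object-specific behaviour
            return original(this, *args, **kwargs)  # then the normal method

        # Bind the wrapper to this one instance only, not to the class.
        setattr(self.target, method_name, types.MethodType(wrapper, self.target))

class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount

suspect = Account()   # the one run-time object we want to reason about
other = Account()     # stays completely unaffected

MetaObject(suspect).intercept(
    "deposit", lambda obj, amount: print(f"deposit({amount}) on {obj!r}"))

suspect.deposit(10)   # traced: reflection applies to this object only
other.deposit(10)     # not traced
```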

From problems over Boolean variables to software dependency management

On Friday 30 November 2012, 14:00-14:30 INRIA Lille room B31 (new building), Daniel Le Berre (Université d’Artois) will give a talk on “From problems over Boolean variables to software dependency management”


Abstract:
Over the past ten years or so, the number of applications of tools based on Boolean satisfiability (SAT) has kept growing: circuit verification, software verification, bioinformatics, etc. Since June 2008, dependency management for the open Eclipse platform has been performed by a Boolean optimisation tool. Our Linux distributions could follow a similar approach to manage their packages. The goal of this seminar is to present different kinds of problems over Boolean variables (SAT, MAXSAT, Pseudo-Boolean, MUS) and their practical application to managing dependencies between software components. The modelling used in Eclipse’s p2 dependency manager will be detailed. The workings of the Linux package manager p2cudf, based on p2, will then be introduced. The talk will conclude with recent work on integrating Boolean-based approaches into the configuration of manufactured products. All the problems presented in this talk can be handled with Sat4j (www.sat4j.org), our Boolean satisfaction and optimisation engine.
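
To make the Boolean encoding concrete (a minimal sketch, not p2's actual encoding nor Sat4j's API), the fragment below expresses a tiny dependency problem as CNF clauses and checks satisfiability by brute force; the package names and clauses are invented for the example.

```python
# Toy encoding of dependency management as SAT (illustrative only).
# Variables: 1 = app, 2 = libA, 3 = libB, 4 = libA-old
# A positive literal means "install this package", a negative one "do not".
from itertools import product

clauses = [
    [1],            # we want to install app
    [-1, 2, 3],     # app depends on libA or libB
    [-2, -4],       # libA conflicts with libA-old
    [4],            # libA-old is already pinned on the system
]

def satisfiable(clauses, n_vars):
    """Brute-force SAT check: try every assignment of the n_vars variables."""
    for assignment in product([False, True], repeat=n_vars):
        def value(lit):
            v = assignment[abs(lit) - 1]
            return v if lit > 0 else not v
        if all(any(value(lit) for lit in clause) for clause in clauses):
            return assignment
    return None

model = satisfiable(clauses, n_vars=4)
print(model)   # app and libB installed, libA refused because of the conflict
```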

Code Generation for Reactive Systems

On Wednesday 28 November 2012, 10:00-10:30 INRIA Lille room B31 (new building), Nestor Catano (The University of Madeira) will give a talk on “Code Generation for Reactive Systems”

Abstract: Event-B provides a formal language notation for the modelling and development of Reactive Systems. The notation is supported by the Rodin platform, an open-source Eclipse IDE that provides a set of tools for working with Event-B models, e.g. an editor, a proof generator, (semi-)automatic provers, and a model-checking animator. I will present ongoing work on code generation for Event-B models and discuss how I envision extending this work to support the code generation of Interactive Systems. Code generation begins with an abstract Event-B model of the particular system. The model can be refined (and the associated refinement proof obligations discharged in Rodin) as needed to add more functionality, but the goal of these refinements is not to bring the model closer to code level, as would normally be done in an Event-B development process with Rodin. Rather, once a refinement that includes adequate functionality is achieved, it is translated to a JML (short for Java Modeling Language) class specification. Then, reactive Java code is automatically generated from the JML specification.
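
As a rough analogue of the target reactive code (a hedged sketch, not the output of the Rodin/JML tool chain described above), an Event-B event with a guard and an action could map onto code like the following; the machine, its variable and its single event are invented.

```python
# Hypothetical translation of a tiny Event-B machine into reactive code.
# Machine variable:  level : 0 .. MAX
# Invariant:         0 <= level <= MAX
# Event fill:        WHEN level < MAX THEN level := level + 1 END
MAX = 10

class TankMachine:
    def __init__(self):
        self.level = 0              # INITIALISATION event

    def invariant(self):
        # In the JML-based approach this would become a class invariant clause.
        return 0 <= self.level <= MAX

    # Event 'fill': a guard plus an action, as in the Event-B model.
    def fill_guard(self):
        return self.level < MAX

    def fill_action(self):
        self.level += 1

    def step(self):
        """Reactive step: fire an event whose guard holds, then re-check the invariant."""
        if self.fill_guard():
            self.fill_action()
        assert self.invariant(), "invariant violated"

machine = TankMachine()
for _ in range(3):
    machine.step()
print(machine.level)    # 3
```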

Towards a foundation for engineering decentralized self-adaptive software systems

On Wednesday 31 October 2012, 14:00-14:30 INRIA Lille room B31 (new building), Danny Weyns (Linnaeus University, Sweden) will give a talk on “Towards a foundation for engineering decentralized self-adaptive software systems”

Abstract: The qualities of many software systems are of critical importance for our society. Examples are the openness of software for business collaborations and the robustness of e-health systems. Engineering such systems and ensuring the required qualities is complex due to the inherent distribution of information and decision-making, and uncertainties about the system and its environment. Self-adaptation is generally considered a promising approach to deal with these complexities. Self-adaptation enables a system to adapt itself autonomously to changes in order to achieve particular quality goals. Central to the realization of self-adaptation are feedback control loops. Despite the vast body of research, a comprehensive approach to engineering self-adaptive software systems is lacking, in particular for decentralized systems. The long-term goal of my research is to study and develop a foundation for engineering decentralized self-adaptive software systems that is grounded in control theory. In this talk, I will elaborate on our ongoing research in decentralized self-adaptive software systems and outline plans for future work.
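
As a minimal sketch of the feedback control loop that is central to self-adaptation (a generic MAPE-style loop, not the speaker's approach), the following monitors a property of an invented managed system, detects a goal violation, plans an adaptation and executes it.

```python
# Minimal MAPE-style feedback loop: Monitor, Analyze, Plan, Execute.
import random

class ManagedSystem:
    """Invented managed system: response time improves as servers are added."""
    def __init__(self):
        self.servers = 1

    def response_time_ms(self):
        return random.uniform(50, 400) / self.servers   # pretend load is random

def feedback_loop(system, goal_ms=150, steps=5):
    for _ in range(steps):
        observed = system.response_time_ms()            # Monitor
        violated = observed > goal_ms                    # Analyze against the goal
        if violated:
            plan = {"servers": system.servers + 1}       # Plan: scale out by one
            system.servers = plan["servers"]             # Execute the adaptation
        print(f"observed={observed:.0f}ms servers={system.servers}")

feedback_loop(ManagedSystem())
```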

SmarterDeals: A Context-aware Deal Recommendation System based on the SmarterContext Engine

On Friday 19 October 2012, 15:00-15:30 INRIA Lille room B31 (new building), Norha M. Villegas (University of Victoria) will give a talk on “SmarterDeals: A Context-aware Deal Recommendation System based on the SmarterContext Engine”

Abstract: Daily-deal applications are popular implementations of on-line advertising strategies that offer products and services to users based on their personal profiles. The current implementations are effective but can frustrate users with irrelevant deals due to stale profiles. To exploit these applications fully, deals must become smarter and context-aware. This talk presents SmarterDeals, our deal recommendation system that exploits users’ changing personal context information to deliver highly relevant offers. SmarterDeals relies on recommendation algorithms based on collaborative filtering (CF), and on SmarterContext, our adaptive context management framework. SmarterContext provides SmarterDeals with up-to-date information about users’ locations and product preferences gathered from their past and present web interactions. The validation results demonstrate the suitability of our approach. For many deal categories, the accuracy of SmarterDeals is between 3% and 8% better than that of the baseline approaches. In terms of multiplicative relative performance, SmarterDeals outperforms the baselines by 173.4% for some categories and by 37.5% on average.
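
To illustrate the collaborative-filtering idea behind such a recommender (a hedged sketch, not SmarterDeals' actual algorithm or the SmarterContext API), the snippet below scores unseen deals by the ratings of similar users and filters them by the user's current location; all users, deals and ratings are invented.

```python
# Toy user-based collaborative filtering with a context (location) filter.
# ratings: user -> {deal: rating}; deal_city: deal -> city where it is redeemable.
from math import sqrt

ratings = {
    "alice": {"sushi": 5, "spa": 3, "museum": 4},
    "bob":   {"sushi": 4, "spa": 2, "kayak": 5},
    "carol": {"spa": 5, "museum": 2, "kayak": 4},
}
deal_city = {"sushi": "Lille", "spa": "Lille", "museum": "Paris", "kayak": "Lille"}

def similarity(u, v):
    """Cosine similarity over the deals both users rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(ratings[u][d] * ratings[v][d] for d in common)
    den = sqrt(sum(ratings[u][d] ** 2 for d in common)) * \
          sqrt(sum(ratings[v][d] ** 2 for d in common))
    return num / den

def recommend(user, current_city):
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for deal, rating in ratings[other].items():
            if deal in ratings[user] or deal_city[deal] != current_city:
                continue                 # skip already-rated and wrong-city deals
            scores[deal] = scores.get(deal, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice", "Lille"))   # ['kayak']
```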

A Framework to Compare Alert Ranking Algorithms

On Friday 12 October 2012, 14:00-14:30 INRIA Lille room B31 (new building), Simon Allier (INRIA) will give a talk on “A Framework to Compare Alert Ranking Algorithms”

Abstract: To improve software quality, rule checkers statically check whether a piece of software contains violations of good programming practices. On a real-sized system, the alerts (rule violations detected by the tool) may number in the thousands. Unfortunately, these tools generate a high proportion of “false alerts”, which, in the context of a specific piece of software, should not be fixed. Huge numbers of false alerts may make it impossible to find and correct the “true alerts” and dissuade developers from using these tools. To overcome this problem, the literature provides different ranking methods that aim at computing the probability of an alert being a “true” one. In this paper, we propose a framework for comparing these ranking algorithms and identifying the best approach to rank alerts. We have selected six algorithms described in the literature. For the comparison, we use a benchmark covering two programming languages (Java and Smalltalk) and three rule checkers (FindBugs, PMD, SmallLint). Results show that the best ranking methods are based on the history of past alerts and their location. We could not identify any significant advantage in using statistical tools, such as linear regression or Bayesian networks, or ad hoc methods.
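
To make the history-based ranking concrete (a simplified sketch, not the paper's framework or any of the six algorithms it compares), the snippet below orders open alerts by how often alerts of the same rule in the same package were confirmed as true alerts in the past; the alert data and field names are invented.

```python
# Toy history-based alert ranking: alerts whose (rule, package) pair was most
# often confirmed as a real problem in the past are ranked first.
from collections import defaultdict

# Past alerts with their eventual verdict (True = real defect, fixed by developers).
history = [
    {"rule": "NullCheck", "package": "core", "true_alert": True},
    {"rule": "NullCheck", "package": "core", "true_alert": True},
    {"rule": "NullCheck", "package": "ui",   "true_alert": False},
    {"rule": "DeadCode",  "package": "core", "true_alert": False},
]

# New alerts reported by the rule checker on the current version.
open_alerts = [
    {"id": 1, "rule": "NullCheck", "package": "core"},
    {"id": 2, "rule": "DeadCode",  "package": "core"},
    {"id": 3, "rule": "NullCheck", "package": "ui"},
]

def historical_precision(history):
    """Fraction of past alerts per (rule, package) that were true alerts."""
    counts = defaultdict(lambda: [0, 0])          # (rule, pkg) -> [true, total]
    for a in history:
        key = (a["rule"], a["package"])
        counts[key][0] += a["true_alert"]
        counts[key][1] += 1
    return {k: t / n for k, (t, n) in counts.items()}

def rank(open_alerts, history):
    precision = historical_precision(history)
    return sorted(open_alerts,
                  key=lambda a: precision.get((a["rule"], a["package"]), 0.0),
                  reverse=True)

for alert in rank(open_alerts, history):
    print(alert["id"], alert["rule"], alert["package"])
# Alert 1 comes first (2 of 2 past NullCheck alerts in 'core' were true).
```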

Using Feature Model to build Model Transformation Chains

On Friday 28 September 2012, 14:00-14:30 INRIA Lille room B31 (new building), Anne Etien (LIFL) will give a talk on “Using Feature Model to build Model Transformation Chains”
Abstract: Model transformations are intrinsically related to model-driven engineering. Given the increasing size of standardised meta-models, large transformations need to be developed to cover them. Several approaches promote separation of concerns in this context, that is, the definition of small transformations in order to master the overall complexity. Unfortunately, the decomposition of transformations into smaller ones raises new issues: organising the increasing number of transformations and ensuring their composition (i.e. their chaining). In this paper, we propose to use feature models to classify the model transformations dedicated to a given business domain. Based on these feature models, automated techniques are used to support the designer along two axes: (i) the definition of a valid set of model transformations and (ii) the generation of an executable chain of model transformations that accurately implements the designer’s intention. This approach is validated on Gaspard2, a tool dedicated to the design of embedded systems.
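
As a hedged sketch of the idea (not the actual Gaspard2 tooling), the fragment below validates a selection of transformation features against simple requires/excludes constraints and then orders the selected transformations into an executable chain by matching output and input metamodels; the features, constraints and metamodel names are invented.

```python
# Toy feature model over model transformations, plus chain construction.
# Each feature is a transformation described by its input and output metamodels.
transformations = {
    "uml2marte":    ("UML",   "MARTE"),
    "marte2rtl":    ("MARTE", "RTL"),
    "rtl2vhdl":     ("RTL",   "VHDL"),
    "marte2opencl": ("MARTE", "OpenCL"),
}

# Feature-model constraints: 'requires' and 'excludes' between features.
requires = {"marte2rtl": {"uml2marte"}, "rtl2vhdl": {"marte2rtl"}}
excludes = {("rtl2vhdl", "marte2opencl")}       # two alternative back-ends

def valid(selection):
    """Check the selection against the feature model's constraints."""
    for feat in selection:
        if not requires.get(feat, set()) <= selection:
            return False
    return not any(a in selection and b in selection for a, b in excludes)

def chain(selection, start="UML"):
    """Order the selected transformations so each output feeds the next input."""
    ordered, current, remaining = [], start, set(selection)
    while remaining:
        step = next(t for t in remaining if transformations[t][0] == current)
        ordered.append(step)
        current = transformations[step][1]
        remaining.remove(step)
    return ordered

selection = {"uml2marte", "marte2rtl", "rtl2vhdl"}
assert valid(selection)
print(chain(selection))   # ['uml2marte', 'marte2rtl', 'rtl2vhdl']
```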

Domain Specific Warnings: Are They Any Better?

On Friday 21 2012, 14:00-14:45 LIFL room 116, Andre Hora (RMOD) will give a talk on “Domain Specific Warnings: Are They Any Better?”
Abstract: Tools to detect coding standard violations in source code are commonly used to improve code quality. One of their original goals is to prevent bugs, yet a high number of false positives is generated by the rules of these tools, i.e., most warnings do not indicate real bugs. There is empirical evidence supporting the intuition that the rules enforced by such tools do not prevent the introduction of bugs in software. This may occur because the rules are too generic and do not focus on domain-specific problems of the software under analysis. We investigated rules created for a specific domain, based on expert opinion, to understand whether such rules are worth enforcing in the context of defect prevention. In this paper, we perform a systematic study of the relation between generic and domain-specific warnings and observed defects. From our experiment on a real case of long-term software evolution, we found that domain-specific rules provide better defect prevention than generic ones.

Introducing Rascal for meta programming

On Friday 11 2012, 14:00-14:45 LIFL room 111, Jurgen Vinju from CWI will present Rascal.
Abstract: Rascal is a domain specific language for meta programming in general. It supports parsing, model extraction, model analysis, code generation, visualization, etc. The first part of this talk introduces Rascal and motivates its existence and its language design.
The second part of the talk is about ongoing work. We have recently been applying Rascal to try and observe what the Cyclomatic Complexity metric actually means for understandability of Java methods. We parsed lots of Java code, then reduced the methods to their “control flow patterns”. We then used very basic statistical methods to observe that the CC metric may not be very informative about the intricacies of control flow in Java methods of open-source projects.
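
The talk targets Java via Rascal, but the metric itself is easy to illustrate; the sketch below computes a common approximation of cyclomatic complexity (1 plus the number of decision points) for Python functions with the standard ast module, as a stand-in for the Java analysis described above.

```python
# Approximate cyclomatic complexity: 1 + number of decision points per function.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                ast.BoolOp, ast.ExceptHandler, ast.comprehension)

def cyclomatic_complexity(source):
    """Return {function name: complexity} for every function in `source`."""
    results = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            results[node.name] = 1 + branches
    return results

example = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0:
            print(i)
    return "non-negative"
"""
print(cyclomatic_complexity(example))   # {'classify': 4}
```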

Claim Monitoring for Tackling Uncertainty in Adaptive Systems (N. Bencomo)

On April 20, 14:00-14:45 (INRIA Lille, Room W21), Nelly Bencomo will present her work on “Claim Monitoring for Tackling Uncertainty in Adaptive Systems”.

Abstract: There is an increasing need for software systems that are able to adapt dynamically to changes in their environment. However, a challenging characteristic of self-adaptive systems is uncertainty: a full understanding of all the environmental contexts they will encounter at runtime may be unobtainable at design time. Thus assumptions may have to be made that risk being wrong, and this may lead to problems at runtime. In this paper we describe REAssuRE, which uses the concept of claims to explicitly represent such assumptions in goal models of the system. We define a semantics for claims in terms of their impact on how alternative goal operationalization strategies satisfice the system’s non-functional requirements, or softgoals. We describe our implementation of REAssuRE, which includes automatic claim value propagation and goal model evaluation, using in-memory representations of the goal models and associated claims. We demonstrate how claims can be monitored and verified at run time, and how falsified claims can trigger principled adaptation. We evaluate REAssuRE using an adaptive flood warning system.

Keywords: requirements awareness; self-adaptive systems; goals; claims
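
As a hedged sketch of the claim-monitoring idea (not the REAssuRE implementation), the fragment below attaches a claim to the contribution of one goal operationalization strategy to a softgoal; when a runtime monitor falsifies the claim, the contribution is discounted and re-evaluation selects a different strategy. All goals, claims and numbers are invented.

```python
# Toy goal model: two strategies operationalize the same goal; a claim justifies
# the high contribution of one of them to the softgoal "energy efficiency".
contributions = {
    "send_every_reading": 0.3,
    "send_batched":       0.8,   # justified by the claim below
}

claims = {
    # Claim: "batching does not delay flood warnings beyond the allowed limit".
    "batching_is_timely": {"supports": "send_batched", "penalty_if_false": 0.6},
}

claim_status = {"batching_is_timely": True}   # updated by runtime monitors

def effective_contribution(strategy):
    """Softgoal contribution, discounted when a supporting claim is falsified."""
    score = contributions[strategy]
    for name, claim in claims.items():
        if claim["supports"] == strategy and not claim_status[name]:
            score -= claim["penalty_if_false"]
    return score

def select_strategy():
    """Re-evaluate the goal model and pick the best operationalization."""
    return max(contributions, key=effective_contribution)

print(select_strategy())                      # 'send_batched' while the claim holds

# A monitor observes warnings arriving too late: the claim is falsified at run time.
claim_status["batching_is_timely"] = False
print(select_strategy())                      # adapts to 'send_every_reading'
```
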
Presenter: Dr. Nelly Bencomo
Currently, Nelly is a Marie Curie Fellow at INRIA Paris-Rocquencourt. Her Marie Curie project is called Requirements-aware Systems (Requirements@run.time).

More info about her and her work at http://www.nellybencomo.me