Automation


Wal, and others, are interested in automating the detection and correction of violations of OO design rules, smells, and heuristics.

If we can build tools to automate evaluation against OO design rules, we should write better code. We can't fit all the smells, maxims, and heuristics in our heads at any one time. However, we have to think carefully about how (and whether) we can automate particular rules. It would also be great if the tools could automatically fix the issues they find.

This page is largely based on the work done for Code Critick, a system built here at UC.


Automatability

We have to look at design rules individually to pick the ones suited to automation.

Well defined

Some rules, such as Abstract classes should be base classes and the Acyclic dependencies principle, are already explicitly defined. As long as we can accurately parse and model the source code, these rules can be perfectly automated.
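For instance, the Acyclic dependencies principle reduces to cycle detection over a dependency graph. Below is a minimal Python sketch, assuming the graph has already been extracted from a parsed code model; the find_cycle function and the example deps graph are illustrative, not part of any particular tool.

 def find_cycle(deps):
     """Return one dependency cycle as a list of nodes, or None."""
     WHITE, GREY, BLACK = 0, 1, 2
     colour = {node: WHITE for node in deps}
     path = []

     def visit(node):
         colour[node] = GREY
         path.append(node)
         for target in deps.get(node, ()):
             if colour.get(target, WHITE) == GREY:
                 # Back edge: the cycle runs from `target` around to here.
                 return path[path.index(target):] + [target]
             if colour.get(target, WHITE) == WHITE:
                 cycle = visit(target)
                 if cycle:
                     return cycle
         colour[node] = BLACK
         path.pop()
         return None

     for node in list(deps):
         if colour[node] == WHITE:
             cycle = visit(node)
             if cycle:
                 return cycle
     return None

 # Hypothetical package dependencies: ui -> domain <-> persistence
 deps = {"ui": ["domain"], "domain": ["persistence"], "persistence": ["domain"]}
 print(find_cycle(deps))  # ['domain', 'persistence', 'domain']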

Using metrics

Other rules are not as clear-cut, but could still be automated using metrics. Certain heuristics, such as the Long parameter list smell, can be detected using a straightforward metric (in this case, the number of parameters) and an allowable value range (perhaps fewer than 6).
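As a sketch of how such a check might look, the Python fragment below counts parameters with the standard ast module. The threshold of 5 is an assumption for illustration, not a value prescribed by the literature.

 import ast

 MAX_PARAMS = 5  # assumed threshold: flag 6 or more parameters

 def long_parameter_lists(source):
     """Yield (function name, parameter count) for each offender."""
     for node in ast.walk(ast.parse(source)):
         if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
             args = node.args
             count = len(args.posonlyargs) + len(args.args) + len(args.kwonlyargs)
             if count > MAX_PARAMS:
                 yield node.name, count

 sample = "def place_order(a, b, c, d, e, f, g): pass"
 for name, count in long_parameter_lists(sample):
     print(f"{name}: {count} parameters")  # place_order: 7 parameters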

Fowler said, 'When a class is trying to do too much, it often shows up as too many instance variables,' and '...a class with too much code is prime breeding ground for duplicated code, chaos and death.' This indicates that there are measurable aspects of the code (in this case, instance variables and amount of code) that relate to particular rules or smells.
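In the same spirit, a Large Class check might count distinct instance variables and methods per class. The sketch below does this for Python code; the thresholds are again illustrative assumptions, not canonical values.

 import ast

 MAX_FIELDS, MAX_METHODS = 10, 20  # assumed thresholds

 def large_classes(source):
     """Yield (class name, field count, method count) for suspect classes."""
     for node in ast.walk(ast.parse(source)):
         if isinstance(node, ast.ClassDef):
             methods = [n for n in node.body if isinstance(n, ast.FunctionDef)]
             # Treat distinct `self.x = ...` targets as instance variables.
             fields = {t.attr
                       for m in methods
                       for stmt in ast.walk(m)
                       if isinstance(stmt, ast.Assign)
                       for t in stmt.targets
                       if isinstance(t, ast.Attribute)
                       and isinstance(t.value, ast.Name)
                       and t.value.id == "self"}
             if len(fields) > MAX_FIELDS or len(methods) > MAX_METHODS:
                 yield node.name, len(fields), len(methods)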

Not Automatable

Some rules are probably impossible to automate. For example, Riel's Model the real world is something that requires domain knowledge.

Presentation

We also have to consider how to present results back to the user once we've automatically evaluated code. Options include:

  • Squiggly green underlines / in situ code visualisation, similar to Microsoft Word's spelling errors
  • Compile-time errors & warnings built into an IDE (see the sketch after this list)
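As a sketch of the second option, a checker could emit diagnostics in the conventional compiler format (file:line: warning: message), which IDEs already know how to parse and display. The Violation record below is hypothetical.

 from dataclasses import dataclass

 @dataclass
 class Violation:
     file: str
     line: int
     rule: str
     message: str

 def report(violations):
     """Print diagnostics in a format IDEs can pick up."""
     for v in violations:
         print(f"{v.file}:{v.line}: warning: [{v.rule}] {v.message}")

 report([Violation("Order.py", 12, "LongParameterList",
                   "place_order() takes 7 parameters (threshold 5)")])
 # Order.py:12: warning: [LongParameterList] place_order() takes 7 parameters (threshold 5)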

The biggest problem with existing tools is information overload. Style checkers commonly dump out thousands of style errors, and it's hard to know where to start. An automated OO evaluation is no different. OO design by nature involves working around conflicting forces, and often involves compromise.

Workarounds in the Code Critick research included an 'ignore once' list, severity ranking based on CodeRank, customisable metric thresholds, and grouping of violations that indicated the same root cause.
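A minimal sketch of how such triage might fit together, assuming the workarounds above; the names, severity scores and thresholds are illustrative, and Code Critick's actual CodeRank-based ranking is not reproduced here.

 # 'Ignore once' entries and per-rule settings, all user-customisable.
 ignored = {("Order.py", 12, "LongParameterList")}
 severity = {"CyclicDependency": 3, "LargeClass": 2, "LongParameterList": 1}

 def triage(violations):
     """Drop ignored violations and present the worst offenders first."""
     kept = [v for v in violations
             if (v["file"], v["line"], v["rule"]) not in ignored]
     return sorted(kept, key=lambda v: severity.get(v["rule"], 0), reverse=True)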

Existing Work

Research

  • Baer [1] categorized several of Riel's popular heuristics into the categories 'exactly testable', 'partially testable', 'vaguely testable' and 'not testable', way back in 1998.
  • Slinger [2] wrote an Eclipse plug-in for his thesis, which detected code smells such as Switch statement smell, Lazy Class and Refused Bequest based on the AST.
  • Josh Oosterman & Warwick Irwin [3] built a tool which automated the detection of various smells, maxims and heuristics.

Commercial Tools

There are many code validation tools available, but most focus on issues such as logic errors, duplicate code detection, and code style & formatting. ArgoUML quietly implements some design rules, such as 'Circular Association' and 'Remove reference to specific subclass'.

References

  1. ^ Bär, H. and Ciupke, O. Exploiting Design Heuristics for Automatic Problem Detection. Proceedings of the ECOOP Workshop on Experiences in Object-Oriented Re-Engineering, FZI Report 6/7/98, 1998.
  2. ^ Slinger, S. Code Smell detection in Eclipse. Delft University of Technology, 2005.
  3. ^ Oosterman, J., Irwin, W. and Churcher, N. Code Critick: Using Metrics to Inform Design. 21st Australian Software Engineering Conference (ASWEC 2010), Auckland, New Zealand, 6-9 Apr 2010, pp. 159-162.