###### March 20, 2013

# Control Theory Adaptations Could Improve Critical Software: MIT

**redOrbit Staff & Wire Reports - Your Universe Online**

Researchers from MIT and Georgia Tech have adapted techniques that ensure airplanes won't stall out during flight to prove that computer programs will not divide by zero.

Formal verification is common in hardware design, and in the development of critical control software that cannot tolerate bugs. The methods are also widely used in academic research and are slowly making their way into commercial software.

The current work could potentially expand the portfolio of formal verification techniques. Such an advancement would be particularly useful in approximate computation, in which computer systems designers swap small amounts of computational accuracy for large gains in speed or power efficiency.

The researchers adapted something known as a Lyapunov function, a mainstay of control theory. The graph of a standard Lyapunov function slopes everywhere toward its minimum value, and can be envisioned as something resembling a bowl.

If the function characterizes the dynamics of a physical system, and the minimum value represents a stable state of the system, then the curve of the graph guarantees that the system will move toward greater stability.
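In standard textbook notation (a summary added here for concreteness; the article itself states this only in words), a Lyapunov function V for a system with equilibrium x* satisfies:

```latex
% Textbook Lyapunov conditions; symbols are the standard ones, not taken from the article.
% V vanishes only at the equilibrium and decreases along every trajectory.
V(x^*) = 0, \qquad V(x) > 0 \ \text{for } x \neq x^*, \qquad
\dot{V}(x) = \nabla V(x) \cdot f(x) \le 0 \ \text{along trajectories of } \dot{x} = f(x).
```

The first two conditions make the graph "bowl shaped" around the equilibrium; the third says the state can only slide down the bowl, never climb it.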

“The most basic example of a Lyapunov function is a pendulum swinging and its energy decaying,” said Mardavij Roozbehani, a principal research scientist at MIT's Laboratory for Information and Decision Systems (LIDS) and lead author of a paper about the work.

“This decay of energy becomes a certificate of stability, or ‘good behavior,’ of the pendulum system,” he told MIT News.
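The pendulum example can be made concrete with a short simulation. The sketch below (an illustration of the idea, not code from the paper; all parameter values are arbitrary choices for the demo) integrates a damped pendulum and checks that its total energy, sampled once per simulated second, only decreases — a numerical flavor of the "certificate of stability" Roozbehani describes.

```python
import math

# Damped pendulum:  theta'' = -(G/L) sin(theta) - B theta'
# Total mechanical energy serves as the Lyapunov function.
G, L, B = 9.81, 1.0, 0.5            # gravity, length, damping coefficient
DT, STEPS, SAMPLE = 1e-3, 10_000, 1_000

def energy(theta, omega):
    """Kinetic + potential energy (unit mass)."""
    return 0.5 * (L * omega) ** 2 + G * L * (1.0 - math.cos(theta))

theta, omega = 1.0, 0.0             # initial angle (rad) and angular velocity
energies = [energy(theta, omega)]
for k in range(1, STEPS + 1):
    # Semi-implicit Euler step: update velocity first, then position.
    omega += DT * (-(G / L) * math.sin(theta) - B * omega)
    theta += DT * omega
    if k % SAMPLE == 0:             # sample the energy once per simulated second
        energies.append(energy(theta, omega))

# The sampled energy decays monotonically: the "certificate of good behavior."
assert all(later < earlier for earlier, later in zip(energies, energies[1:]))
print(f"energy: {energies[0]:.3f} J -> {energies[-1]:.3f} J over 10 s")
```

Because damping removes energy at rate proportional to the squared velocity, the energy can never increase, which is exactly the bowl-shaped guarantee described above.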

Most dynamical systems are more complex than pendulums, and finding Lyapunov functions that characterize them can be challenging. However, there is a significant literature on Lyapunov functions in control theory, and Roozbehani is hopeful that much of that work can be carried over to software verification.

In a paper about their work, Roozbehani and colleagues -- MIT professor of electrical engineering Alexandre Megretski and Eric Feron, a professor of aerospace software engineering at Georgia Tech -- envision a computer program as a set of rules for navigating a space defined by the variables in the program and the memory locations of the program instructions. Any state of the program constitutes a point in that space. Problems with a program's execution, such as dividing by zero or overloading the memory, can be thought of as regions in the space.

In that context, formal verification is a matter of demonstrating that the program will never steer its variables into any of these danger zones.

To accomplish that task, the researchers introduced an analogue of Lyapunov functions they dubbed Lyapunov invariants. If the graph of a Lyapunov invariant is in some sense bowl shaped, then the task is to find a Lyapunov invariant such that the initial values of the program's variables lie in the basin of the bowl, and all of the danger zones lie farther up the bowl's walls.
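As a toy illustration of that construction (the example program and the invariant below are invented for this sketch, not taken from the paper), a Lyapunov invariant can certify that a simple loop never divides by zero: the initial states sit low in the bowl, the invariant never increases along execution, and the danger zone sits higher up the walls.

```python
import random

# Program under analysis (toy):
#     x = x/2 + 1      # loop body, repeated
#     r = 1/x          # safe only if x != 0
#
# Candidate Lyapunov invariant: V(x) = (x - 2)^2.
# Along the loop, x' - 2 = (x - 2)/2, so V(x') = V(x)/4: V never increases.
# Initial set: x0 in [1, 3], so V(x0) <= 1 (basin of the bowl).
# Danger zone:  x <= 0, where V(x) >= 4 (farther up the bowl's walls).
# Since V starts at most 1 and can only shrink, the state never reaches
# the danger zone, so the division 1/x is provably safe.

def V(x):
    return (x - 2.0) ** 2

def step(x):
    return x / 2.0 + 1.0

# Numerically spot-check the three certificate conditions on sampled runs.
random.seed(0)
for _ in range(1000):
    x0 = random.uniform(1.0, 3.0)
    assert V(x0) <= 1.0                  # initial states lie low in the bowl
    x = x0
    for _ in range(50):
        nx = step(x)
        assert V(nx) <= V(x) + 1e-12     # V non-increasing along execution
        x = nx
assert V(0.0) >= 4.0                      # danger zone sits higher up the bowl
print("certificate conditions hold on all sampled executions")
```

The sampling here is only a sanity check; the point of the researchers' approach is that the three conditions can be established symbolically, once and for all, rather than by testing runs.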

Veering toward the danger zones would then be analogous to a pendulum suddenly swinging out farther than it did on its previous swing.

In practice, finding a Lyapunov invariant with the desired properties means systematically investigating different classes of functions, since there is no general way to predict in advance what type of function it will be, or even whether one exists. But Roozbehani theorizes that, if his and his colleagues' approach takes root, researchers will begin to identify algorithms that lend themselves to particular types of Lyapunov invariants, as has happened with control problems and Lyapunov functions.

Furthermore, since many of the critical software systems that require formal verification implement control systems designed using Lyapunov functions, “there are intuitive reasons to believe that, at least for control-system software, these methods will work well,” Roozbehani said.

He is also optimistic about possible applications in approximate computation.

Many control systems are based on mathematical models that can't capture all of the complexity of real dynamical systems. As a result, control theorists have developed analytic methods that can account for model inaccuracies and provide guarantees of stability, even in the presence of uncertainty.

Those techniques could be well suited for verifying code that exploits approximate computation, Roozbehani said.

“Computer scientists are not used to thinking about robustness of software,” said George Pappas, chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania.

“This is the first work that is formalizing the notion of robustness for software. It's a paradigm shift, from the more exhaustive, combinatorial view of checking for bugs in software, to a view where you try to see how robust your software is to changes in the input or the internal state of computation and so on.”

“The idea may not apply in all possible kinds of software … but if you're thinking about software that implements, say, controller or sensor functionality, I think there's no question that these types of ideas will have a lot of impact.”

The research is published in the current issue of the journal *IEEE Transactions on Automatic Control*.