Current software engineering methodologies, such as the object-oriented methods espoused by Shlaer & Mellor [4] and Rumbaugh [5], are limited by their underlying processing model, in each case a finite state machine. Further, these methods require the finite state machine to be constructed manually; that is, all of the states and transitions must be enumerated by hand. Although adequate for simple real-time applications, such as consumer electronics products, these methods are inappropriate for large-scale systems, such as satellite ground stations or mission-critical electronics and control software. They are also heavily oriented toward `data-flow' analysis, in which the actual state architecture is relegated to exception handling.
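To make the scaling problem concrete, the following minimal sketch (our illustration, not taken from the methods cited) shows what manual enumeration of a finite state machine looks like: every state and every transition must be written out explicitly, which is tractable for a small device controller but not for systems with thousands of states.

```python
# Hypothetical controller for a simple consumer device; all state and
# event names are illustrative. Each (state, event) -> state pair must
# be enumerated by hand -- nothing is derived automatically.
TRANSITIONS = {
    ("off",     "power"): "idle",
    ("idle",    "power"): "off",
    ("idle",    "start"): "running",
    ("running", "stop"):  "idle",
    ("running", "fault"): "error",
    ("error",   "reset"): "idle",
}

def step(state, event):
    """Advance the machine one transition; unlisted pairs are errors."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")

state = "off"
for event in ["power", "start", "fault", "reset"]:
    state = step(state, event)
# state is now "idle"
```

At six transitions the table is readable; at the scale of a satellite ground station the same by-hand enumeration becomes the bottleneck the text describes.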
There have been other attempts at raising the awareness of the software engineering community to the value of formal specification in component design. The paper by Bentley [7] is noteworthy. He takes the position that small languages are useful for specifying a large number of `small' problems, which is true. In Bentley's context, Ada would be considered a language, while the PIC graphics language would be considered a small language. A parser for the former would consist of several thousand states and for the latter perhaps a few hundred - both approachable with current technology. Unfortunately, our work shows that there are other classes of problems: the examples outlined above are at least an order of magnitude larger than an Ada compiler. Thus these problems are neither small nor simple. A completely new level of technology must be developed if they are to be solved with the same level of confidence that compiler developers have grown to expect.
The now classic paper by Strachey [8] is the earliest paper found promoting the construction of systems to perform complex tasks through the application of symbol processing. The philosophical approach was later adopted by Xerox [9] in the TIP (Terminal Interface Package), which used a symbolic (or formal) specification to generate parsing modules that transform concrete user actions into higher-level abstractions meaningful to a system designer. The TIP package defined the concept of a Trigger as a change in the state of the device of interest, usually the keyboard. Although this formalism provided a highly effective means for each application designer to give various key combinations a `local context', the model did not provide a method for handling more complex situations, such as longer sequences of keystrokes, or even the power to represent the context-sensitive problem discussed above.
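The Trigger idea can be sketched as follows. This is our own hedged illustration of the style of mapping described, not code from the Xerox TIP package; all names are invented. A trigger is a device state change (a key going down), and the application's local context maps concrete key combinations to abstractions meaningful to the designer.

```python
# Illustrative local context for one application: each concrete key
# combination (a single trigger) maps to a designer-level abstraction.
# Names are hypothetical, not drawn from the actual TIP package.
LOCAL_CONTEXT = {
    ("CTRL", "c"): "CopySelection",
    ("CTRL", "v"): "PasteSelection",
    ("META", "q"): "QuitApplication",
}

def dispatch(modifier, key):
    """Translate one concrete key event into a higher-level abstraction."""
    return LOCAL_CONTEXT.get((modifier, key), "Unhandled")
```

The limitation noted in the text is visible in the sketch: the table consumes one trigger at a time, so it cannot express longer keystroke sequences or the context-sensitive cases discussed above.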
The explicitly state-driven formal specification appears to show the most promise for giving designers of large-scale systems the opportunity to build systems with a high degree of mission surety. We intend to extend the state of the art in modeling formalisms capable of dealing with context-sensitive situations, and in algorithms that must deal with ambiguous situations. We define ambiguous here to mean situations where the data is inherently inconsistent by the nature of the methods used to acquire it.
The work we are doing is intended to provide new methods of developing components by providing new computational models aimed at the types of applications outlined above. The technology we are starting from has attracted attention through interesting demonstrations in the Artificial Intelligence community, but it has not achieved wide acceptance, owing to the inefficiency of current Lisp implementations and to the lack of adequate formal specification languages to form the basis of the interface between the application and the backtracking technology.
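The backtracking style referred to here can be sketched as follows. This is a generic chronological-backtracking search of our own devising, offered only as an illustration of the technique, not as the authors' system: when the data admits several mutually inconsistent interpretations, the search commits tentatively to one, and undoes the choice as soon as it leads to a contradiction.

```python
def backtrack(assignment, choices, consistent):
    """Depth-first search with chronological backtracking.

    assignment -- interpretations committed to so far
    choices    -- remaining slots, each a list of candidate interpretations
    consistent -- predicate accepting a partial assignment
    """
    if not choices:
        return assignment                        # every slot resolved
    options, rest = choices[0], choices[1:]
    for option in options:
        candidate = assignment + [option]
        if consistent(candidate):                # prune inconsistent branches
            result = backtrack(candidate, rest, consistent)
            if result is not None:
                return result
    return None                                  # all options failed: back up

# Toy use: pick one value per slot so no two adjacent values are equal.
slots = [[1, 2], [2], [2, 3]]
no_adjacent_equal = lambda a: all(x != y for x, y in zip(a, a[1:]))
solution = backtrack([], slots, no_adjacent_equal)
# solution is [1, 2, 3]
```

A naive Lisp-style interpretation of this search is exactly where the efficiency concerns mentioned above arise; the point of a formal specification language is to let the application state the choices and the consistency predicate declaratively.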
Stephen J. Bespalko and Alexander Sindt