Finite State Machines - A Note to Firmware Authors
Much has been written about finite state machines ("FSMs") by mathematicians, engineers and scientists around the world, and many fine articles are available on the internet (just go to Google and type in "finite state machine"). Even so, over the years we've seen just about every kind of instrument system you can imagine, and relatively few, actually, have been well thought out or executed. When we do find one, our job as engineering and scientific programmers is a LOT easier and projects accelerate noticeably, saving both our time and the client's money.

The problem of building a coherent finite state machine in C or assembler is a huge topic in and of itself, and the concern of the firmware programmer. But keep in mind that how your system behaves toward the outside world ultimately becomes the main issue, doesn't it? After all, once built, any instrument system "behaves" and "communicates" in certain ways. Those "ways" are a whole different issue. The instrument may contain the best FSM internally and still flunk the course communicating with the world outside of it, due to poor linguistic design.

The point we wish to make is that there's more (or should be more) to good FSMs than internally complete and closed logic (which usually produces FSMs incapable of making mistakes). There's also the LANGUAGE of their commands and responses, and a set of RULES governing both. So wrapped around all of the FSM models is an even larger issue: language and communication. Good programmers have folded the communications issue properly into their FSM constructions. Poor programmers have not.

Large-scale integration of instruments into a responding and reporting system generally involves a great deal of sequencing, which, by definition, involves time management (and which, incidentally, doesn't lend itself easily to "object oriented programming" (OOP)). The temporal nature of these systems is often of key importance in the scientific community, where all measurements are time-based owing, no doubt, to science's interest in cause and effect, which is by definition sequential. And often, not only is time important, but so too is the order of events.

The timing and ordering of events leads to the need to time-stamp and buffer all measurements and the events themselves, including communications events with the world. Buffering - or "queuing" - guarantees that the order is preserved, and since FSMs routinely output their "state" to the world, the buffering of their output is vital to whoever acquires the data. Well-constructed systems then become self-documenting: for example, whether retrieved from a disk file or obtained live via a communications channel, the data in a well-constructed system is byte-for-byte the same.
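The time-stamped buffering described above is typically a fixed-size ring buffer in firmware. The following is a minimal sketch, not a prescribed design; the type names, queue size and millisecond timestamp are all illustrative assumptions:

```c
#include <stdint.h>
#include <stddef.h>

/* One time-stamped measurement event. Field choices are illustrative. */
typedef struct {
    uint32_t timestamp;   /* e.g. milliseconds since boot */
    double   value;
} event_t;

#define QUEUE_SIZE 64

typedef struct {
    event_t slot[QUEUE_SIZE];
    size_t  head, tail;   /* head = next write, tail = next read */
} event_queue_t;

/* Returns 0 on success, -1 if the queue is full (new event refused,
   so already-buffered data and its ordering are preserved). */
static int queue_push(event_queue_t *q, uint32_t ts, double value)
{
    size_t next = (q->head + 1) % QUEUE_SIZE;
    if (next == q->tail)
        return -1;                       /* full */
    q->slot[q->head].timestamp = ts;
    q->slot[q->head].value = value;
    q->head = next;
    return 0;
}

/* Returns 0 and fills *out with the OLDEST event, or -1 if empty.
   First-in, first-out: the order of events is never disturbed. */
static int queue_pop(event_queue_t *q, event_t *out)
{
    if (q->tail == q->head)
        return -1;                       /* empty */
    *out = q->slot[q->tail];
    q->tail = (q->tail + 1) % QUEUE_SIZE;
    return 0;
}
```

Because pops always return the oldest entry, draining this queue to a disk file or a communications channel yields the same byte sequence either way - which is exactly the self-documenting property described above.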

Now, think about it. If the FSM's output can ever be AMBIGUOUS (i.e., have more than one meaning), then what's the point of the output? This leads to:

RULE #1: Allow NO AMBIGUITY in communications either to or from an instrument system.

So, for example, suppose you have commands to an instrument which cause it to measure various things and report the results. You might have:

M1(cr)(lf)

for measuring sensor #1 (with n sensors). If the instrument responds with:


1.0(cr)(lf)

but can also respond the same way to a different command, then VERY close coordination must exist between the command and the response in the acquisition program, and a potential for error remains. Instead, if the instrument had simply responded with:

M1 1.0(cr)(lf)

then no ambiguity exists. You would be amazed at how many instruments aren't programmed to avoid this simple problem. Why is the latter response not ambiguous? Because of the next rule:

RULE #2: All commands and responses must be UNIQUE.

The letter "M" as the sentence "header", the sensor number "1" associated with M, the single space separating the sensor number from the value, and the carriage return and linefeed characters terminating the sentence can occur in no other way in the design of the instrument. That exact sentence is issued only in response to a single command.
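Because every header is unique, the receiving side can dispatch on the first byte alone and reject anything else outright. A minimal sketch, using the "M" and "S" headers from the examples in this note (the return convention is an assumption):

```c
#include <stdio.h>

/* Dispatch on the unique header byte of a received sentence.
   'M' = measurement response with sensor number and value,
   'S' = set acknowledgement. Returns 0 if the sentence is
   recognized and well-formed, -1 otherwise - never guess. */
static int dispatch_sentence(const char *sentence)
{
    switch (sentence[0]) {
    case 'M': {
        int sensor;
        double value;
        /* e.g. "M1 1.0" -> sensor = 1, value = 1.0 */
        if (sscanf(sentence, "M%d %lf", &sensor, &value) == 2)
            return 0;
        return -1;       /* header ok but body malformed */
    }
    case 'S':
        return 0;        /* set acknowledgement, no value field */
    default:
        return -1;       /* unknown header */
    }
}
```

Note that an unknown or malformed sentence is refused rather than interpreted; that refusal is what Rules #1 and #2 buy you.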

OK, you might be saying "bonehead" at this point, as this seems pretty obvious. But consider that in practice, Rules #1 and #2 are not enough. A noisy communications channel can cause errors in those bytes and, for example, change the M to another character. What if you had another command called "set", signified by the letter "S":


S1(cr)(lf)

which could mean turn sensor #1 on. If the letter "S" were scrambled during transmission and arrived as an "M" - well, you get the picture. This leads to the next rule:

RULE #3: Reduce the chances of ERROR during transmission.

This is accomplished by selecting some error checking scheme and inserting the error bytes in the sentence at the point which makes the most sense. There are many schemes available...from simple checksumming (adding up the ASCII values of all the bytes preceding the checksum bytes) to more advanced methods referred to as cyclic redundancy checks, or CRCs. In any case, error checking at least raises the probability that we have received the data as originally constructed. So our sentences would now look like:

M1**(cr)(lf) (command, where ** are CRC bytes)
M1 1.0**(cr)(lf) (response, where ** are CRC bytes)
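The simple additive checksum takes only a few lines. The sketch below assumes the ** bytes are the low 8 bits of the sum rendered as two hex digits; the note leaves the exact representation open, so that encoding is a choice, not a requirement:

```c
#include <stdio.h>
#include <string.h>

/* Sum the ASCII values of every byte before the checksum field,
   keep the low 8 bits, and append them as two hex digits.
   (The modulo-256 truncation and hex encoding are assumptions.) */
static void append_checksum(char *sentence)
{
    unsigned sum = 0;
    for (const char *p = sentence; *p; ++p)
        sum += (unsigned char)*p;
    sprintf(sentence + strlen(sentence), "%02X", sum & 0xFF);
}

/* Recompute the sum over everything before the last two bytes and
   compare. Returns 1 if the checksum matches, 0 if not. */
static int check_checksum(const char *sentence)
{
    size_t len = strlen(sentence);
    unsigned sum = 0;
    char expect[3];

    if (len < 2)
        return 0;
    for (size_t i = 0; i < len - 2; ++i)
        sum += (unsigned char)sentence[i];
    sprintf(expect, "%02X", sum & 0xFF);
    return strncmp(sentence + len - 2, expect, 2) == 0;
}
```

With this scheme, a scrambled header ("S" arriving as "M") no longer matches its checksum, and the receiver can discard the sentence instead of acting on it.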

The notion of an unambiguous and unique command header, of fields separated by a unique and unambiguous character (like a comma, tab or space), of error checking characters, and of a terminator built from unique and unambiguous characters (like carriage returns, linefeeds, etc.) significantly improves our ability to correctly interpret any message.

Whether it's within the instrument itself or the controlling system outside of it, BOTH systems need to be able to correctly parse the data coming into them. The set of rules described above is a good start but by no means complete (and only applies to ASCII data - binary data is a whole 'nother subject!).
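A first parsing pass on either side is simply enforcing the frame shape: unique header byte at the front, (cr)(lf) at the end. A minimal sketch under those assumptions (checksum verification is a separate step, as described above):

```c
#include <string.h>
#include <ctype.h>

/* Minimal frame check: a well-formed sentence starts with an
   uppercase header letter and ends with (cr)(lf). Returns the
   payload length (terminator excluded), or -1 on any violation.
   Error checking bytes are assumed to be verified separately. */
static int frame_payload_len(const char *frame)
{
    size_t len = strlen(frame);

    if (len < 3)
        return -1;                            /* too short */
    if (!isupper((unsigned char)frame[0]))
        return -1;                            /* bad header */
    if (frame[len - 2] != '\r' || frame[len - 1] != '\n')
        return -1;                            /* bad terminator */
    return (int)(len - 2);
}
```

Rejecting a frame at this stage is cheap; interpreting a malformed one is not.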

But getting back to FSMs. We like to see complete "status sentences," i.e., the instrument's state report issued at a time interval of our choosing (if possible). In that status string is everything we want to know, including the instrument's clock and calendar, so that we can match it against a world clock / calendar and maintain the exact sequence of events so important to scientists and engineers.
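A status sentence of this kind might be rendered as in the sketch below. The "T" header, the field layout (date, time, then state fields) and the struct contents are all invented for illustration; a real instrument would report far more state:

```c
#include <stdio.h>

/* Illustrative status snapshot; field names are assumptions. */
typedef struct {
    int    year, month, day;
    int    hour, minute, second;
    int    sensor1_on;
    double last_reading;
} status_t;

/* Render a complete status sentence: unique header "T", the
   instrument's own clock/calendar, then the state fields, all
   terminated with (cr)(lf). Returns the byte count written. */
static int format_status(char *out, size_t cap, const status_t *s)
{
    return snprintf(out, cap,
                    "T %04d-%02d-%02d %02d:%02d:%02d %d %.1f\r\n",
                    s->year, s->month, s->day,
                    s->hour, s->minute, s->second,
                    s->sensor1_on, s->last_reading);
}
```

Because the instrument's clock rides along in every report, the controller can align each sentence against a world clock and reconstruct the exact sequence of events later.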

In general, all messages to and from any system need to be properly framed with unique header and terminator characters, have unambiguous fields, possess robust error checking and, in general, be self-documenting. The same goes for responses to commands. Any controller of such a system can then simply log a sequential list of all commands issued to the instrument and all responses from it. Such a list describes the time-domain behavior of the system forever more.