How old is the "severity" paradigm in logging?

Years of sysadmin work left syslog's severity levels, as described by The BSD Syslog Protocol, clearly imprinted in my mind. You know the drill: Emergency, Alert, Critical, Error, Warning, Notice, Informational, and Debug. The scheme left traces elsewhere, such as Java's logger with its Severe, Warning, Info, Config, and Fine/Finer/Finest.

While discussing it with someone who was under the impression that Java's scheme was a quick hack, a bad fit, and telling of the mindset, I wondered how old this actually is -- syslog itself dates back to the 1980s and Sendmail. A quick search reveals that REXX has Termination, Severe, Error, Warning, Informational & Response, which seems to confirm my suspicion that the paradigm predates syslog.

I'm sure it has its origins in the real world, probably in an army's or a railway company's procedures, and it would be interesting to hear about that, but what I'd most like to know is the origin and lineage of severity levels in the computer business, along with the notion of filtering "up to" a level that comes with them.
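
To make concrete what I mean by levels and "up to" filtering, here is a minimal java.util.logging sketch (the logger name and messages are just illustrative): setting a threshold publishes everything at that severity or above and drops the rest.

    import java.util.logging.ConsoleHandler;
    import java.util.logging.Handler;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class SeverityDemo {
        public static void main(String[] args) {
            Logger log = Logger.getLogger("demo");
            log.setUseParentHandlers(false);  // don't double-log via the root logger

            Handler console = new ConsoleHandler();
            console.setLevel(Level.WARNING);  // the handler passes WARNING and above...
            log.addHandler(console);
            log.setLevel(Level.WARNING);      // ...and so does the logger itself

            log.severe("printed: SEVERE is at least as severe as WARNING");
            log.warning("printed: WARNING meets the threshold");
            log.info("dropped: INFO is below the threshold");
            log.fine("dropped: so is FINE");
        }
    }

With the threshold at WARNING, the first two calls reach the console and the last two are discarded.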


Solution 1:

In computing, I'm able to trace this back to about 1966 and System/360 mainframes. This antique JCL manual describes how each program returns a code, which could then be tested with the COND=(,) clause for being equal to, higher than, or lower than a given value. The informal convention for interpreting the return code was:

  • 0 successful execution
  • 4 warning
  • 8 error
  • 16 fatal error
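
As a rough modern analogue (Java rather than JCL; the step bodies and the threshold are invented for illustration), coding COND=(8,LE) on a step meant "bypass this step if 8 is less than or equal to the return code of a preceding step":

    import java.util.List;
    import java.util.function.IntSupplier;

    public class CondDemo {
        // The informal S/360 return-code convention.
        static final int OK = 0, WARNING = 4, ERROR = 8, FATAL = 16;

        public static void main(String[] args) {
            int highestRc = OK;
            // Three pretend job steps; the second one hits an error.
            List<IntSupplier> steps = List.of(() -> OK, () -> ERROR, () -> OK);
            int stepNo = 0;
            for (IntSupplier step : steps) {
                stepNo++;
                // Analogue of COND=(8,LE): bypass the step when
                // 8 <= the highest return code seen so far.
                if (ERROR <= highestRc) {
                    System.out.println("step " + stepNo + " bypassed");
                    continue;
                }
                int rc = step.getAsInt();
                highestRc = Math.max(highestRc, rc);
                System.out.println("step " + stepNo + " ended, RC=" + rc);
            }
            System.exit(highestRc);  // becomes the job's overall return code
        }
    }

Run as above, step 3 is bypassed and the job ends with an overall return code of 8.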

I'm sure this convention emerged around that time. Various IBM-supplied utilities returned such codes, but right now I cannot find any manual to back this up.

Of course this convention could have been inherited from older systems, but I really have no idea about anything pre-OS/360.

Whenever I feel like learning some history, I usually end up on bitsavers.org :)