It is interesting to me that one can draw parallels between what Edward Scheidt describes in the fourth part of Kryptos and what can be done with computer code.
If we consider his career from SigInt to CIA cryptology and now into the private sector at TecSec, it wouldn’t be surprising to find him approaching the problem more from a coding standpoint than as a cipher problem. (To clarify, I use “code” here to mean computer code, not code in the espionage sense.)
I had previously posted on the idea of obfuscated language/code, but here is a little more to consider. I wouldn’t pretend to understand this better than an actual programmer, so I’ve taken the following from a different source (not hard to find). In my mind, it’s not so much that we’ll find exact similarities between what is done with computer code and what has been done with K4; rather, the intent, the spirit, the goal is what is similar. If we understand the one, perhaps we can understand the other.
Obfuscated code is source code in a computer programming language that has been made difficult to understand. Programmers may deliberately obfuscate code to conceal its purpose (a form of security through obscurity), to deter reverse engineering, or as a puzzle or recreational challenge for readers. Programs known as obfuscators transform human-readable code into obfuscated code using various techniques.
Some languages may be more prone to obfuscation than others. C, C++, and Perl are most often cited as easy to obfuscate. Macro pre-processors are often used to create hard-to-read code by masking the standard language syntax and grammar from the main body of code. The term shrouded code has also been used.
Obfuscating code to prevent reverse engineering is typically done to manage risks that stem from unauthorized access to source code. These risks include loss of intellectual property, ease of probing for application vulnerabilities, and loss of revenue that can result when applications are reverse engineered, modified to circumvent metering or usage control, and then recompiled. Obfuscating code is, therefore, also a compensating control to manage these risks. The risk is greater in computing environments such as Java and Microsoft’s .NET, which take advantage of just-in-time compilation technology that allows developers to deploy an application as intermediate code rather than as code compiled into machine language before deployment.
At best, obfuscation merely makes it time-consuming, but not impossible, to reverse engineer a program. When security is important, measures other than obfuscation should be used. The same trade-offs are made in branches of cryptography: an algorithm may be known to be fast but weak, but if the information is very short-lived there is little incentive, except as an intellectual exercise, for anyone to break it: the information becomes useless before it is broken.
No one can guarantee that obfuscation will present any particular level of difficulty to a reverse engineer.
Obfuscators do not provide security of a level similar to modern encryption schemes. Even obfuscation with encryption can have flaws. Any program or data that is encrypted must be decrypted before it can be used by the computer. So it must exist, unencrypted, somewhere in memory; a reverse engineer can take a snapshot of that memory. Also, any strong encryption requires a key for decryption. For the program to be executable the key must be provided, leaving another avenue open for reverse engineering.
Reverse engineering is partly a study in pattern recognition, and the good engineer quickly learns the quirks of a particular compiler, processor, or even programmer, and can make educated guesses about the original code.
Ultimately this is what we’ll have to do with Scheidt to understand the masking technique he used.