What We Believe But Cannot Prove
ed. John Brockman, 2006
Some interesting essays by interesting people.
The one that sticks with me is Charles Simonyi's essay "I believe we are writing software the wrong way. ..." Simonyi likens modern programming to an error-prone form of encryption, and argues that programmers should instead write "program generators" that "separate issues of subject matter from issues of software engineering".
Boy howdy. The pesky little problem, of course, is ... how?
Simonyi's generator applied to chip design
I've designed chips with a "chip generator program" - specifically, chips that can be described as arrays, such as crossbars and displays and identity generators. If I need a 256x256 array of something, I design a parameterized program to generate a 4x4 first. Study the hell out of it, look for interactions and bugs, develop a test generator module for the program that produces 100% fault coverage. Synthesize all the outputs and study those. Incorporate lots of built-in testability into the design. Too much is better than not enough - Moore's Law cures most diseases related to transistor size.
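The parameterized-generator idea can be sketched in a few lines. This is an illustrative toy, not any real chip-design tool: `generate_crossbar` and `generate_tests` are hypothetical names, and the "cells" and "test vectors" are placeholder records. The point is that one program, parameterized by size, emits both the design and a test set with full switch coverage by construction.

```python
# A minimal sketch of the "chip generator" idea: one parameterized
# program emits both an N x N crossbar description and exhaustive
# test vectors for it. All names are illustrative.

def generate_crossbar(n):
    """Emit a list of (row, col) switch cells for an n x n crossbar."""
    return [(r, c) for r in range(n) for c in range(n)]

def generate_tests(n):
    """One test per switch: close (r, c), then check input r reaches output c."""
    return [{"close": (r, c), "drive": r, "expect": c}
            for r in range(n) for c in range(n)]

# Study a small instance exhaustively first ...
cells = generate_crossbar(4)
tests = generate_tests(4)
assert len(tests) == len(cells)   # 100% switch coverage by construction

# ... then the same program, unchanged, scales to the production size.
big = generate_crossbar(256)
assert len(big) == 256 * 256
```

The design choice that matters here is that the tests come from the same parameterized source as the design, so scaling the size parameter scales both together.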
If possible, run the 4x4 all the way through the process, even including mask making and fab. That tiny little array may not connect to anything, but sitting in a blank patch of somebody else's test or production circuit, it acts as photolithographic area fill and gives you an early look at how the cells actually turn out. Even a run that stops at the last stage before commitment to mask reticles may uncover bugs that would otherwise prevent first-silicon success.
Now, scale up to an 8x8. The number of interactions goes up as the square of the number of elements, so this array takes 16 times as long to analyze as the 4x4 case. Lots more computation, a little more human work. If it works, then scale again to 16x16, and do something else for a few days while that much larger program cranks away. Then comes the big leap of faith - scale to 256x256, synthesize layouts and Verilog descriptions and test vectors. Sample-test a tiny fraction of all possible test vectors, layout checks, and circuit simulations, then ship masks and cross your fingers. The chips that come back will be a lot more likely to perform in your test fixture, and your test fixture will be ready for them. You can ship samples to customers the same day that your prototypes show up.
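The scaling arithmetic above checks out under a simple cost model (my model, not the essay's): if analysis time is proportional to pairwise cell interactions, it grows as the square of the cell count.

```python
# Hypothetical cost model: analysis time proportional to the number of
# pairwise cell interactions, which grows as the square of the cell count.

def interactions(side):
    cells = side * side
    return cells * cells          # every cell checked against every cell

print(interactions(8) / interactions(4))       # 16.0: the 8x8 costs 16x the 4x4
print(interactions(256) / interactions(16))    # 65536.0
```

That last ratio is why the final step has to be a sampled leap of faith rather than an exhaustive analysis.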
Back to Simonyi's idea ...
In the beginning, program generators may only work for similar "array" problems, but a lot of procedures can be reduced to arrays. Big, bloaty programs with a lot of sparse entries perhaps. But in software, you can instrument the arrays and the outputs; if your software test harness doesn't tickle all the array nodes, the software test is incomplete. If you rewrite the production program compactly and algorithmically, you can compare it to the array program by running both through the test harness.
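Instrumenting the array and cross-checking the compact rewrite might look like this sketch. Both versions here are hypothetical stand-ins (a lookup table for the "array" program, a one-liner for the algorithmic rewrite); the mechanism is what matters: record which nodes each test touches, run both programs through the same harness, and fail if any node went untickled.

```python
# Sketch: instrument an "array" program so the harness can report whether
# every node was exercised, and check a compact rewrite against it.
import itertools

SIZE = 4
# The "array" program: a table with one entry per node (hypothetical).
table = {(a, b): (a + b) % SIZE for a in range(SIZE) for b in range(SIZE)}
touched = set()

def array_version(a, b):
    touched.add((a, b))           # instrumentation: record which node fired
    return table[(a, b)]

def compact_version(a, b):
    return (a + b) % SIZE         # the algorithmic rewrite under test

# Run both through the same test harness and compare outputs.
for a, b in itertools.product(range(SIZE), repeat=2):
    assert array_version(a, b) == compact_version(a, b)

# Coverage check: the harness is incomplete unless every node was tickled.
assert touched == set(table), "test harness missed some array nodes"
```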
WHO CARES if a production program designed for millions of seats uses millions of processors for months during test? Yes, that is costly, but it is far more costly to inflict a buggy program on millions of users. That cost works its way back to the software company through lower sales and a lower price per (semi-worthless) unit. Further, no copycat can afford the million-processor test system. If they copy last month's software, or even get the source code, they will not be able to keep up with the software flow from the original designer. Only buggy, static, inhumane software needs "copy protection" - customer-partnered, rapidly evolving, bug-free software improves faster than counterfeiters can keep up with.
So ... I wish Simonyi (you ex-Microsoft devil, you!) luck. Building the tools for industrial-scale six-sigma software production won't be cheap or easy. But escaping from cottage production of cheesy software - even if that amateurish production fills acres of buildings in Redmond - will make a modern software industry possible. Designing chips to accelerate that software will be a pleasure.