The Usual Madness

Cause or effect? I can’t decide whether (a) bad people simply like using certain technologies in bad ways, or (b) the technologies themselves are toxic, causing brain damage and moral decay in anyone who uses them.

Data-bearing technologies seem particularly susceptible to madness; things sound good at the start of a project but (like a creeping evil) gradually subsume every spare cycle in the data center, growing to monsters that eat entire products, swallowing whole groups of engineers and spitting out beet farmers and potters who “ain’t gonna touch a computer again, ever, man.”

Case in point: Almost every time I see a product with an SQL component, the database schema and the routines that access it are screwed up to a degree that beggars the imagination. I’m not going to get into a whole litany of things that have been wrong (“This doesn’t need a re-write, it needs a can of gasoline and a match”), but it seems that very few people actually know how to use a database without some kind of Bertold Ray radiation infecting their minds.

Rules of thumb for databases:

If your stored procedure takes more than twelve hours to run, and you think that’s okay to put into production, you should seriously think of doing something else.  Like maybe digging holes for fenceposts.

If you are using rows as a means of remote procedure call (where everybody is polling for new messages), consider doing something else.  See above.

If your application has three thousand views on fifteen hundred tables, your installer sets up over two thousand stored procedures, and query times are measured in cups of coffee and consequent trips to the john, maybe you should use something else to hold the data on a hundred employees. I’m just sayin’.

If you thought “Um, what’s the problem with that?” while reading any of the above, I don’t want to know about it.  Please go get a job at a competitor.
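For the curious, the rows-as-RPC pattern mocked above usually boils down to something like this toy sketch (table and column names invented for illustration; real offenders do this across a network, from dozens of workers at once):

```python
import sqlite3
import time

# A toy model of the anti-pattern: a table used as a message queue,
# with every consumer polling it for new rows. All names are invented.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT, done INTEGER DEFAULT 0)"
)
db.execute("INSERT INTO messages (body) VALUES ('do_the_thing')")
db.commit()

def poll_once(conn):
    # Every worker runs this in a loop, hammering the database even
    # when nothing has changed -- that is the complaint above.
    row = conn.execute(
        "SELECT id, body FROM messages WHERE done = 0 ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        time.sleep(0.1)  # nothing yet; burn a cycle and ask again
        return None
    conn.execute("UPDATE messages SET done = 1 WHERE id = ?", (row[0],))
    conn.commit()
    return row[1]

msg = poll_once(db)
print(msg)  # → do_the_thing
```

A real message queue does the waiting for you; the database just gets to answer “nope, nothing new” a few thousand times a second.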

Case 2: XML.

When Christ returns from vacation he’ll be suspended from angels trumpeting the glories of XML, bearing DTDs that work and XML parsers that don’t suck (or at least, that don’t completely suck; even Heaven has technological limits). Until that happy time, we’re pretty much in the stink.

Don’t get me wrong. XML is great at some things, just not the things that it’s mostly being used for. It’s terrible for RPC, it’s a hideous substitute for INI files, and it just bloody blows great globby, orbiting chunks as a makefile replacement. Twice the obscurity and eight times the syntax is a poor substitute for data portability (which it ain’t) and clarity (hoo ha).

I can think of maybe three legitimate uses of XML, and they mostly involve document markup.
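The INI complaint is easy to make concrete. Here’s the same two facts expressed both ways (a made-up example; the exact tag layout is my invention, but it’s representative):

```python
import configparser
import xml.etree.ElementTree as ET

ini_text = """[server]
host = example.com
port = 8080
"""

xml_text = """<?xml version="1.0"?>
<configuration>
  <server>
    <host>example.com</host>
    <port>8080</port>
  </server>
</configuration>
"""

# Both carry the same two facts; one of them made you type a prolog,
# angle brackets, and closing tags to do it.
cfg = configparser.ConfigParser()
cfg.read_string(ini_text)
root = ET.fromstring(xml_text)

assert cfg["server"]["host"] == root.findtext("server/host") == "example.com"
print(len(ini_text), len(xml_text))  # "eight times the syntax," give or take
```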

Case 3: HTTP.

You haven’t lived, you haven’t really tasted the breadth of life’s possibilities, the wonders of wandering freely on this fantastic globe we live on, until you’ve given up and walked away from your attempt at an HTTP proxy. That first clean breath of pure air in the parking lot as you realize “Hey, the nightmare is over!” is a memorable experience.  The thrill as you burn all of the backup tapes on the grill in the back yard (lots of lighter fluid, natch) is nearly as enjoyable.  The knowledge that you’ve purged the planet of another great evil . . . well, no one will thank you, but just imagine silent applause from the Shadowy Guardians of Freedom.  If only people knew.  But they won’t, because you’ve destroyed it all, every last cancerous bit, and it can trouble us no more.

The problem isn’t so much HTTP; it’s an okay standard – not the best thing, not the most efficient thing in the world, but adequate. The problem is everyone else’s idea of what HTTP should be. Or rather, how HTTP wound up to be, in their hands. Sometimes the hands are caring, thoughtful and responsible hands; hands you can trust attached to brains you can count on. But sometimes the hands are cunningly evil, leading you down a pretty path until, deep in the woods, in obscure corner conditions, they turn on you and do unspeakable, bad things to your sanity.

What you wind up with are edge conditions and misinterpretations that cause you to wonder how many people read standards.  “Hey, it’s got GET HTTP/1.1 and a bunch of lines with stuff . . . [sniffs the air] . . . looks good, let’s ship it!”  Not that the standards documents are all that clear sometimes.  But still.
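The “looks good, let’s ship it!” parser usually amounts to something like this deliberately naive sketch, with comments marking a few of the corner conditions the HTTP spec actually cares about:

```python
def naive_parse(request: bytes):
    # The "sniffs the air" version: split on newlines, split on colons.
    # Each comment marks a place where real HTTP disagrees.
    lines = request.decode("ascii").split("\r\n")  # real clients sometimes send bare \n
    method, path, version = lines[0].split(" ")    # request lines can have odd spacing
    headers = {}
    for line in lines[1:]:
        if not line:
            break
        name, _, value = line.partition(":")
        headers[name] = value.strip()  # header names are case-insensitive; this dict isn't
        # ...and repeated headers clobber each other, and obsolete
        # line folding (a continuation line starting with whitespace)
        # gets silently mangled into a bogus header name
    return method, path, version, headers

req = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\ncontent-length: 0\r\n\r\n"
print(naive_parse(req))
```

It works on the demo request, ships, and then meets a real client.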


I have a stored-up rant on RPC, but first I’m going to bathe the cat, just to get in the finishing mood.


This entry was posted in Rantage.

10 Responses to The Usual Madness

  1. Captain Confusion says:

    I was working on a project where the entire configuration was stored in XML. The idea was that eventually there would be a GUI configuration tool and the user would never see any XML.

    Of course the budget ran out so setting up a project using the software meant editing hundreds of arcane XML files in a crap freeware XML editor (no budget remember). It didn’t help that every time MS rolled out a new version or an update for IE it would steal the XML file-type associations.

    Also validating a 50MB XML file against its schema and then loading it into memory as a document takes a lot of CPU and RAM. Starting up the app and waiting for it to load was one of those times you could go get a cup of coffee and go to the loo. Of course you never did because you anxiously watched the console output for error messages.

    Starting a new project now. XML suggested as a scripting/configuration solution. I may swing by the security shop to pick up a cattle prod. They are really useful for persuading silly people of the error of their ways.

  2. SDC says:

    You’ve probably heard this one, but for those who haven’t, I always thought this quote about XML was accurate:

    XML is like violence – if a little doesn’t solve your problem, use some more.

    It explains a lot about how XML is generally (mis)used, anyway…

  3. Barry says:

    I think the main reason XML is so popular as a kitchen sink is because (a) there are libraries for it that ship with things like PHP, and (b) the libraries take you from XML to organized data automagically, meaning you never have to worry about arbitrary ordering of the data.

  4. landon says:

    @Barry: The libraries I’ve used have been callback-based and iterator-based, which means (essentially) that you’re enumerating over the elements of your XML schema and that you have to write additional code to make sense of the stuff as it “flies by.” So typically I’ve written hash-table-based adapters, which completely wipe out order. This would seem to go against the original design of XML, which was a mark-up language rather than a data-bearing language, but it’s what most people use XML for these days.

    Maybe I need to get a better set of XML parsers.

    The problem with this is that a mapping from an XML schema to a hash table is not necessarily easy; what you wind up with (often) is a complex set of nested hash tables, vectors and similar gorp that are nearly as bad to navigate as the original XML. Furthermore, there are different “styles” of data-bearing to deal with (e.g., key-value pairs in tags, or putting keys in tags and the data as “plain text,” and probably other variants). So we’re no longer talking a nice, simple dictionary now, but something that has to adapt to fit all the many ways that XML can be used to express data.
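    A toy version of that kind of adapter might look like this (my own sketch, not anyone’s production code), and it hits the attribute-vs-text ambiguity immediately:

```python
import xml.etree.ElementTree as ET

def to_dict(elem):
    # Merge attributes and child elements into one dict; text-only
    # leaves become plain strings. Sibling order is simply discarded,
    # and repeated sibling tags clobber each other -- exactly the
    # complaints above.
    children = list(elem)
    if not children and not elem.attrib:
        return (elem.text or "").strip()
    d = dict(elem.attrib)              # style 1: key-value pairs in the tag
    for child in children:
        d[child.tag] = to_dict(child)  # style 2: keys as tags, data as text
    return d

doc = ET.fromstring('<user id="42"><name>Ada</name><role>admin</role></user>')
print(to_dict(doc))  # → {'id': '42', 'name': 'Ada', 'role': 'admin'}
```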

  5. Thomas says:

    Fully agree with the XML rant – it seems to get worse with every project, too! And the freedom to use either plain text or tag attributes for data – or to even mix both – completely kills the idea of pre-built code libraries that will always work. You’ve got to reinvent the wheel every time.

  6. landon says:

    I once worked on an embedded system that had bandwidth to its file server on the order of 1K/second. Customers were *waiting* while this little box dragged down configuration file after configuration file.

    Someone insisted on an XML-based configuration file. Given the limited ROM space we had, I wrote a parser of a subset of XML that fit in about 1K of code. Didn’t support DTDs, pretty obviously.

    I nearly lost it when someone complained that the custom parser didn’t support comments. _Customers are waiting_ and you want to blather away about stuff in a file that no one but you will ever see?

    In retrospect, I should have ripped out the “XML” support and just given them a bunch of structs with fixed-size strings. Would have saved space all around. Go ahead, _comment that_….
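    For flavor, the kind of XML *subset* parser described above can be sketched in a couple dozen lines (this is a toy in Python, nothing like the original embedded C; no DTDs, no attributes, no entities, and no comments — that was the whole fight):

```python
import re

# Tokens: an open/close/self-closing tag, or a run of text.
TOKEN = re.compile(r"<(/?)([A-Za-z_][\w-]*)[^>]*?(/?)>|([^<]+)")

def parse(src):
    # Build (tag, children) tuples with a plain stack; children are
    # nested tuples or stripped text runs.
    stack = [("root", [])]
    for close, tag, selfclose, text in TOKEN.findall(src):
        if text:
            if text.strip():
                stack[-1][1].append(text.strip())
        elif close:
            node = stack.pop()
            stack[-1][1].append(node)
        elif selfclose:
            stack[-1][1].append((tag, []))
        else:
            stack.append((tag, []))
    return stack[0][1][0]

print(parse("<cfg><port>8080</port></cfg>"))  # → ('cfg', [('port', ['8080'])])
```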

  7. SecretSquirrel says:

    Yep, XML is rubbish. Totally agree, what’s wrong with a database or struct for data, and what’s wrong with good old fashioned config files?

    It’s a sad day when you see people testing and debugging code that only has to read a config file…

  8. Paul says:

    Great post, and your comments on XML seem to have really touched a raw nerve!

    My pet peeve: using XML for log files. I have in mind a particular enterprise software vendor that has been infected with this mutant virus over the past few years.

    The touted benefits – standardization, integration and extensibility – are dubious at best, and the longer-term benefits accrue more to the vendor than to the user. Of course there’s a stiff price to pay: storage bloat and vastly diminished usability from a sysadmin perspective (forget all those quick tail -f | grep tricks).

  9. Sgt Turmeric says:

    I thought writing an HTTP proxy was bad. Then I had to write a MIME parser.

  10. Aaaah! says:

    There is no such thing as RPC. RPC is a delusion that’s been with us since the early days of networked computers and standardised in CORBA and its descendants. What actually happens is you send messages and hope for responses. If you don’t get a response your process is buggered and you need to find out what actually happened on the other computer. Spend any time reading about distributed computing and you’ll understand half the problems (or like Landon experience them from someone’s misguided design instead).
