This comment deserved some expansion.  Let the fur fly.

Does this mean we can stop now?

No.

Can the pain finally end?

What pain?  If it’s painful, stop doing that.  Feel better now?  Great!

Can we have integers that really are integers and not integers modulo 2 to the power of (something indefinite-1)?

We did that in the 60s.  Remember BCD-based computing?  It sucked real hard.  A modern “bignum” architecture might be interesting, but its numerics would be toasted in performance-sensitive applications like graphics.  You could go “floats everywhere” as long as you don’t care about correct answers.
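
To put a number on “correct answers”: a float carries a 24-bit significand, so past 2^24 the integers start silently merging into each other.  A quick sketch (plain C, nothing exotic assumed):

    #include <stdio.h>

    int main(void)
    {
        /* A float has a 24-bit significand; integers above 2^24
           can no longer all be represented exactly. */
        float big    = 16777216.0f;   /* 2^24 */
        float bigger = big + 1.0f;    /* rounds straight back to 2^24 */

        printf("%d\n", big == bigger);   /* prints 1: 2^24 + 1 is already gone */
        return 0;
    }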

Really, numerics are a small part of life to a programmer.  Mostly what I deal with are packing and unpacking issues; I don’t worry about addition or multiplication, I worry about this-fitting-into-that, what transformations are made on data, and how to abstract stuff.  At the end of the day, if I need a 64 bit something, I’ll make it so.

Can we have char’s that are signed, or unsigned, I mean can’t we just decide!

Agree with you here.  This is a continuing irritation (however, it does not keep me awake at night).

Can we finally admit “unsigned” is just a storage space optimization?

Only if you can cram an extra kilobyte or so into the embedded system I worked on six years ago.  [This will seem to contradict my earlier statement about “just make that sucker 64 bits,” but engineering is all about cost tradeoffs, and knowing when to make them.]

Can we have garbage collection please? That stuff was done and sorted a decade ago!

Excellent, I can allocate memory in my interrupt handlers now!  It’s solved only if you have a runtime available that supports GC — if you are writing performance-sensitive code, or systems-level stuff, you don’t have this facility available.  [Now would be a great time to point out the research paper I missed on doing incremental GC from within an interrupt handler or something.]

I still seriously like LISP machines . . . but I only learned a few years ago that most of them spent their lives running with GC turned off.

Can we have a sane way of specifying, packing and unpacking serialized data?

This is a well-solved problem.  Actually, it’s a great example of what I like to call a “too-solved” problem; in all probability there are millions of ways to pack and unpack serialized data, thousands that are robust enough to call “real,” and of these a few are standard.  However, the standard ones mostly suck.
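
Hand-rolling it is easy enough, which is probably why there are millions of ways; here is a sketch of the kind of fixed-width, explicit-byte-order pack/unpack code that gets written over and over (no struct overlays, no surprises):

    #include <stdint.h>

    /* Pack a 32-bit value into a buffer, big-endian, one byte at a time. */
    static void put_u32be(uint8_t *p, uint32_t v)
    {
        p[0] = (uint8_t)(v >> 24);
        p[1] = (uint8_t)(v >> 16);
        p[2] = (uint8_t)(v >> 8);
        p[3] = (uint8_t)v;
    }

    /* And unpack it again; works the same on any host, any endianness. */
    static uint32_t get_u32be(const uint8_t *p)
    {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }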

The way to stop this collective madness is, of course, to get a committee on it and put a decent standard together.  No, wait . . . you wanted sane.

[kills self]

The last time we did that we got XML, and no, the pain has not ended. Indeed, I would echo your whole comment, but change “C” to “XML” and poise a knife over my wrist.

Can we all just stop using C!  Pretty Please!

Thank you for asking nicely.  However, the answer is: No, not today.  That said, many problems that would once have been madness to use a higher-level language for are now perfectly reasonable to write in (say) C# or Java or Erlang. Or Visual Basic, for that matter.

I expect the use of C to erode over the next few decades, but I do not think we will ever be utterly rid of it.  I have seen interesting environments (LISP or C# on bare metal) that look promising, and the idea is frankly exciting, but we’re not there yet.

20 Responses

  1. Sharkey says:

    You could go “floats everywhere” as long as you don’t care about correct answers.

    Just a quick comment: Lua defaults to “doubles everywhere”, which has precision equivalent to 32-bit integer math (better, in fact, since the mantissa is 53 bits). On systems with a floating-point coprocessor, it works quite well. Granted, Lua has found a niche as an embedded language, not systems programming, but doubles everywhere doesn’t imply incorrect answers.
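
    The same point in C, for the sceptical (a sketch): every integer up to 2^53 is exact in a double, which is strictly more range than a 32-bit int gives you.

        #include <stdio.h>

        int main(void)
        {
            double d = 4294967295.0;      /* UINT32_MAX, exact in a double */
            d += 1.0;                     /* still exact: no wraparound */
            printf("%.0f\n", d);          /* 4294967296 */

            double big = 9007199254740992.0;   /* 2^53, where the gaps start */
            printf("%d\n", big + 1.0 == big);  /* 1: precision finally runs out */
            return 0;
        }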

    W.r.t. unsigned int vs signed int, the strongly-typed side of me believes that the typing is the important part, not the extra bit you get to play with. The typing allows one to conclude that unsigned cannot be negative, eliminating additional checking; of course, the typing of the code needs to be correct, which leads me to my next point…

    I don’t think C is going anywhere either, more’s the pity. I think the benefits of speed and executable size are more than outweighed by the costs of the security and stability errors inherent in C and its libraries. C has remained popular because that trade-off stayed hidden for so long; connecting these devices to a malicious network has finally uncovered those dormant costs.

  2. Interested C# Dev says:

    “C# on bare metal”? Do you have any published (on the Internet, not just in journals) information about this? It sounds very interesting. Maybe a project name or something I can google? Thanks.

  3. foo says:

    The Lisp Machine GC was only turned off by some users during the first years, when there was no good GC. Later the GCs improved, and one would have been stupid not to turn it on.

  4. SeanJA says:

    Some context for that first comment would be nice ;)

  5. Ze says:

    Any comments on Go as a candidate to substitute for C?

  6. Tobie says:

    @Interested C# Dev

    Check out the netmf.com site

  7. ilowry says:

    Isn’t D (http://www.dprogramming.com/) going to dethrone C? :)

  8. Your big holdup appears to be tiny embedded systems. Why are you writing C there?!? That’s what Forth is for!

  9. Phil says:

    Personally, I hope D will take over from C. However, it has a lot of maturing to do first. It is garbage-collected, but the GC is written in D itself and manual memory management is still supported.

    @Interested C# Dev: I don’t know if this is the best example, but check out http://en.wikipedia.org/wiki/Singularity_(operating_system).

  10. Someone says:

    “Proper integers” does not imply either of “BCD-based computing” or “bignums”.

    The simplest way of doing it would be to make over- and underflow (and, for division, inexactness) harder to ignore, for example by throwing an exception and/or by returning a ‘NaN’ value. With signed two’s complement arithmetic, taking a bit pattern for ‘NaN’ would also fix the issue that -x may overflow, and could also be a way to get rid of that pesky overflow in abs(x).

    I am also not sure that the performance impact of computing the right value, rather than computing something fast, would have to be that large; in many cases a good compiler could, possibly with some help from the programmer, prove that a variable will always fit in a C-style int.
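
    Even without new hardware, the check can be spelled out in portable C today; here is a rough sketch of the usual pre-check idiom (no compiler extensions assumed):

        #include <limits.h>
        #include <stdbool.h>

        /* Add two ints, refusing to hand back a wrapped-around answer. */
        static bool add_checked(int a, int b, int *result)
        {
            if ((b > 0 && a > INT_MAX - b) ||
                (b < 0 && a < INT_MIN - b))
                return false;       /* would overflow: caller must cope */
            *result = a + b;
            return true;
        }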

  11. John Carter says:

    I’m the original commenter that Dad Hacker is responding to…

    I’ll agree with “Someone” when he says Proper Integers != BCD. I’m thinking more like Scheme’s or Ruby’s numeric tower, but I will settle for “fall over and die noisily” if it wraps.

    Sharkey sayeth… “W.r.t. unsigned int vs signed int, the strongly-typed side of me believes that the typing is the important part, not the extra bit you get to play with.”

    Alas, in a language like Pascal/Modula-N/… you may be right. But C permits implicit conversion between signed, unsigned and enum types according to a rather arcane set of rules (which I have yet to find a C programmer who can quote accurately!).

    i.e. you don’t get type safety at all. (Newer compilers warn only about bad behaviour with pointers to these types, due to strict aliasing rules and optimization.)
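
    A two-line demonstration of those rules biting (a sketch; any conforming compiler will do this):

        #include <stdio.h>

        int main(void)
        {
            int      balance = -1;
            unsigned limit   = 1;

            /* Usual arithmetic conversions: balance is silently
               converted to unsigned, i.e. to UINT_MAX. */
            if (balance > limit)
                printf("-1 > 1, apparently\n");
            return 0;
        }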

    I was thinking about this all after I wrote my rant.

    Structs are an abortive mix between an Object instance and an attempt to specify a binary layout. Why should you _ever_ use uint16_t? Because you want to conserve space and/or pick apart a binary layout.

    For example, you may argue you want to use uint16_t because it is faster than uint32_t on a 16-bit-word embedded device.

    If that’s what you meant, you should have used uint_fast16_t!

    Using structs to specify the binary layout doesn’t work, because the C standard doesn’t permit you to specify the alignment, endianness or padding of struct fields! Yet many C programs I have seen (partly because I work in the comms field) spend pages and pages picking apart bit packings.
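
    The padding alone sinks the struct-overlay approach (a sketch; the exact number depends on the ABI, which is rather the point):

        #include <stdint.h>
        #include <stdio.h>

        /* Looks like a 7-byte wire header... */
        struct header {
            uint8_t  flags;
            uint32_t length;     /* the compiler may insert padding before this */
            uint16_t checksum;
        };

        int main(void)
        {
            /* Typically prints 12, not 7: the struct is not the wire format. */
            printf("%zu\n", sizeof(struct header));
            return 0;
        }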

    D looks promising, but misses several things I have touched on here.

    As for GC in interrupt handlers… malloc tends to invoke mutex_lock/unlock, which tends not to be nice when writing ISRs. Mostly, for ISRs you should have statically allocated buffers.
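
    i.e. the usual shape is something like this (a sketch of a statically allocated ring buffer an ISR can push into without ever touching the heap):

        #include <stdbool.h>
        #include <stdint.h>

        /* All storage reserved at compile time: nothing for the ISR to allocate. */
        #define RING_SIZE 64u
        static volatile uint8_t  ring[RING_SIZE];
        static volatile uint32_t head, tail;

        /* Called from the interrupt handler: drops the byte if the buffer is full. */
        bool ring_push_from_isr(uint8_t byte)
        {
            uint32_t next = (head + 1u) % RING_SIZE;
            if (next == tail)
                return false;       /* full: never block, never malloc */
            ring[head] = byte;
            head = next;
            return true;
        }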

  12. James says:

    What Ze said. Will I be wasting my time learning and utilizing Go rather than honing my skills with C?

  13. Phil says:

    If I understand Go correctly, it would be difficult if not impossible to write a garbage collector in the language. I don’t think anything can replace C unless you could also write a garbage collector in it and slip out from under its own GC if necessary.

    That’s why I mentioned that D’s GC is written in D; I think it might get some special compiler support right now, but there was some talk of the possibility of removing the new keyword and replacing it with a generic function, so the GC might really be completely implemented within the language. The syntax would have to change from “new T()” to “new!T()”, which is the D equivalent of new() in languages that use angle brackets for template parameters. D does link to the C runtime library and the D runtime library, if I understand correctly, so malloc and free are always available.

  14. Phil says:

    s/new()/new<T&rt;/

    The blog stripped out the <T&rt; when I posted.

  15. Phil says:

    Sorry for posting so many times in a row; feel free to delete the two follow-up posts and edit my original with the fix. I meant:

    s/new()/new<T>()/

    I’m not reposting if I get that wrong this time.

  16. Leahn Novash says:

    I’ve been reading your archives. It is a slow moment at work, after all, and Reddit is too busy worrying about dogs being thrown off bridges to show anything interesting or worth reading. So this question is way off topic, since it is about something you posted 7 years or so ago, but what do you have against sscanf?

  17. Anon says:

    I don’t really like the mindset “I want to be a programmer, but I want to work with high-level concepts like architecture. I don’t want to bother with low-level stuff like integer limits, memory allocation or performance. Testing is menial work for the test department; I am too busy being creative.”

    Game programmers, embedded programmers, people trying to serve 100 bajillion hits per second from a cheap web server: these people think in terms of “how do I make sure the inner loop is really efficient”. I’ve seen some wonderful designs like this – they seem overcomplicated, bloated even, when they are starting up, but once they’re up they are almost preternaturally efficient. Of course the price of doing this is that you’re very, very aware of low-level issues, because those are the ones that give you a nasty surprise when your code moves out of your test environment and into the live one.

    I can sort of understand that most people don’t think this way. Still, it is irritating when you see people building websites that are going to get millions of hits per day in some fashionable new language which is orders of magnitude slower than C on the system they are using, or merging code into the baseline of an embedded system’s version-control tree with a data segment that is four times the size of SDRAM.

    I think there are engineers and academics in computer science, and many people who claim to be academics (or, worse, architects) are staggeringly poor engineers. You can even see this with website designers. I’ve met people who build websites out of fashionable but bleeding-edge technologies. Their sites fail on everything but FF. With IE they have an excuse, since it doesn’t support bleeding-edge CSS, but often the site will fail on Opera and Chrome too. Basically they’ve used an advanced technology badly and in an inappropriate way, and they haven’t tested properly. The worst thing is that this sort of propellerheadery looks much better on a resume than a more pedestrian technology choice, even though for most of their users it is a much worse one.

    And yeah, I’m ranting about straw men.

  18. Ian Farquhar says:

    C is like DOS, RC4, GIF and cockroaches. No matter how much we try to kill it, it will never ever truly die.

    Yeah, I’m as fascinated by Go as anyone else, and as I write security code it’s got some nice features from my PoV. OTOH, I can’t see myself reaching for anything but gcc next time I need to code something efficient and fast.

  19. Shannon says:

    Some of us older programmers still like C. Perhaps that’s indicative of a character flaw? Not enough bits for our char?

  20. ashleigh says:

    Agree with Shannon. Nothing wrong with C – and doubly so on small embedded micros (Pah to Forth – don’t be silly). C won’t go away when you don’t want to write assembler and you HAVE to fit into a 4 K micro with 200 bytes of RAM.
