This comment deserved some expansion. Let the fur fly.
Does this mean we can stop now?
Can the pain finally end?
What pain? If it’s painful, stop doing that. Feel better now? Great!
Can we have integers that really are integers and not integers modulo 2 to the power of (something indefinite-1)?
We did that in the 60s. Remember BCD-based computing? It sucked real hard. A modern “bignum” architecture might be interesting, but its numerics would be toasted in performance-sensitive applications like graphics. You could go “floats everywhere” as long as you don’t care about correct answers.
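For the record, that modulo behavior is not even consistent in C: unsigned arithmetic is defined to wrap, while signed overflow is outright undefined behavior. A minimal sketch of the wraparound you’re grousing about:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void) {
        uint32_t x = UINT32_MAX;     /* 2^32 - 1, the largest uint32_t */
        x = x + 1;                   /* defined to wrap: arithmetic modulo 2^32 */
        printf("%" PRIu32 "\n", x);  /* prints 0 */
        return 0;
    }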
Really, numerics are a small part of life to a programmer. Mostly what I deal with are packing and unpacking issues; I don’t worry about addition or multiplication, I worry about this-fitting-into-that, what transformations are made on data, and how to abstract stuff. At the end of the day, if I need a 64-bit something, I’ll make it so.
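Making it so is a one-liner these days, assuming a C99-or-later compiler that ships <stdint.h> (the variable names here are just illustration):

    #include <stdint.h>

    /* When I need a 64-bit something, I say so explicitly. */
    int64_t  file_offset;    /* exactly 64 bits, signed   */
    uint64_t packed_fields;  /* exactly 64 bits, unsigned */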
Can we have char’s that are signed, or unsigned, I mean can’t we just decide!
Agree with you here. This is a continuing irritation (however, it does not keep me awake at night).
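If you want to know which way your compiler jumped, <limits.h> will tell you; a minimal sketch (the signedness of plain char is implementation-defined, which is exactly the irritation):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* CHAR_MIN is 0 where plain char is unsigned, negative where it is signed. */
        if (CHAR_MIN < 0)
            printf("plain char is signed here\n");
        else
            printf("plain char is unsigned here\n");
        return 0;
    }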
Can we finally admit “unsigned” is just a storage space optimization?
Only if you can cram an extra kilobyte or so into the embedded system I worked on six years ago. [This will seem to contradict my earlier statement about “just make that sucker 64 bits,” but engineering is all about cost tradeoffs, and knowing when to make them.]
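For scale, this is the kind of thing that kilobyte hinges on. A sketch, assuming a typical 32-bit target with 4-byte ints and natural alignment; the struct names are mine:

    #include <stdint.h>

    /* "Just a storage optimization": on a part with 2K of RAM,
       it is the optimization that matters. */
    struct sample_roomy { int     channel; int      value; };  /* typically 8 bytes */
    struct sample_tight { uint8_t channel; uint16_t value; };  /* typically 4 bytes */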
Can we have garbage collection please? That stuff was done and sorted a decade ago!
Excellent, I can allocate memory in my interrupt handlers now! It’s solved only if you have a runtime available that supports GC — if you are writing performance-sensitive code, or systems-level stuff, you don’t have this facility available. [Now would be a great time to point out the research paper I missed on doing incremental GC from within an interrupt handler or something.]
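What interrupt-level code typically does instead is allocate everything up front. A minimal static-pool sketch of my own devising, not anyone’s production allocator; real code would also disable interrupts or use atomics around the in_use flags:

    #include <stddef.h>
    #include <stdint.h>

    #define POOL_SLOTS 8
    #define SLOT_BYTES 64

    static uint8_t pool[POOL_SLOTS][SLOT_BYTES];  /* carved out at build time */
    static uint8_t in_use[POOL_SLOTS];

    /* No heap, no GC: hand out a preallocated slot, or fail fast. */
    void *pool_alloc(void) {
        for (size_t i = 0; i < POOL_SLOTS; i++) {
            if (!in_use[i]) {
                in_use[i] = 1;
                return pool[i];
            }
        }
        return NULL;  /* pool exhausted; the caller must cope */
    }

    void pool_free(void *p) {
        for (size_t i = 0; i < POOL_SLOTS; i++) {
            if (p == pool[i]) {
                in_use[i] = 0;
                return;
            }
        }
    }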
I still seriously like LISP machines . . . but I only learned a few years ago that most of them spent their lives running with GC turned off.
Can we have a sane way of specifying, packing and unpacking serialized data?
This is a well-solved problem. Actually, it’s a great example of what I like to call a “too-solved” problem; in all probability there are millions of ways to pack and unpack serialized data, thousands that are robust enough to call “real,” and of these a few are standard. However, the standard ones mostly suck.
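For flavor, here is the hand-rolled end of that spectrum: a sketch (helper names mine) that packs a 32-bit field in a fixed little-endian order, so it unpacks the same on any host, whatever the native byte order:

    #include <stdint.h>

    /* Pack a 32-bit value little-endian, regardless of host byte order. */
    static void put_u32le(uint8_t *buf, uint32_t v) {
        buf[0] = (uint8_t)(v);
        buf[1] = (uint8_t)(v >> 8);
        buf[2] = (uint8_t)(v >> 16);
        buf[3] = (uint8_t)(v >> 24);
    }

    /* And unpack it again. */
    static uint32_t get_u32le(const uint8_t *buf) {
        return (uint32_t)buf[0]
             | ((uint32_t)buf[1] << 8)
             | ((uint32_t)buf[2] << 16)
             | ((uint32_t)buf[3] << 24);
    }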
The way to stop this collective madness is, of course, to get a committee together and hammer out a decent standard. No, wait . . . you wanted sane.
The last time we did that we got XML, and no, the pain has not ended. Indeed, I would echo your whole comment, but change “C” to “XML” and poise a knife over my wrist.
Can we all just stop using C! Pretty Please!
Thank you for asking nicely. However, the answer is: No, not today. That said, many problems that would once have been madness to attempt in a higher-level language are now perfectly reasonable to write in (say) C# or Java or Erlang. Or Visual Basic, for that matter.
I expect the use of C to erode over the next few decades, but I do not think we will ever be utterly rid of it. I have seen interesting environments (LISP or C# on bare metal) that look promising, and the idea is frankly exciting, but we’re not there yet.