Once upon a time I thought it would be “good for me” to learn denotational semantics. People seemed to be doing cool things with it, and languages that leaned on it heavily (like ML) seemed worth learning, so I got a couple of books and started reading.

A couple of books later I was still as confused as ever. And mad, too.

Even in an introductory text I got the sense that the authors were trying to make themselves look smart at their readers’ expense. A page into a new chapter and they’d whip out a completely new symbol (a crippled-looking M, or a Q doubled-over in agony), surround it with a constellation of superscripts and italic subscripts, then incant “… it is therefore obvious that…” and be off into the galactic void, sky-writing with half of the goddamn Greek and Egyptian alphabets. I felt stuck in mud, dumb as a sack of bricks, and seriously doubting that giving the authors unsupervised access to TeX had been a good idea.

I can see myself in a course using that book as a text . . . well, no I can’t. I’d wind up in a corner muttering about wacko square-brackets with candy-striped uprights and the semantics of lambda-something-or-other under zeta-prime reduction.

I think my head is built wrong when it comes to mathematical notation. I see a gaggle of hieroglyphs in close formation and I have to think hard about operator precedence and how the particles in the statements need to be parsed, and in what order. Vanilla math, no problem. Calculus, I have to think about the ‘dx’ stuff. Denotational semantics, well, I nearly threw one book across the room when the author started using undefined operators out of the blue; no definition, *nothing* to help me understand the particular rabbits on that page. Okay, professor, I hereby declare (goofy squiggle) to mean “my prof is a poopy head, x dx,” put that in your theorem prover and smoke it. But you’ll never figure out what (goofy squiggle) really means, because I won’t define it anywhere. It seems only fair that way.

Is this how academic feuds start?

So I went back to writing tools and operating systems, an ignorant man, a person upon whom attempts at enlightenment had been squandered, but a happier one.

At least math notation makes a good argument for understandable variable names 🙂

Somebody is using one- or two-letter methods in public interfaces? Just shove that math book in his face – chances are he’s dreaded it already – and ask, “Do you really wanna be like this guy?”

I think the problem with a lot of mathematical notation, and the reason a lot of programmers take issue with it, is that it’s largely inconsistent. Different ways of doing things were all invented by different people, and, in the name of tradition, very little ‘rationalisation’ of the notation has happened over time.

Thank god it’s not just me then that finds maths completely incomprehensible.

Show me any “normal” programming language (yes, even VB and JavaScript – but not COBOL, I have a mental affliction that makes learning COBOL impossible) or even Perl and I can work out what it does and feel happy that at some level I understand what the original programmer was thinking.

Show me a page of maths gibberish and something fuses in my brain, kind of like when you wander into a high street computer shop and start asking about RAM and PCI-Express buses and the salesdroid you’re talking to goes cross-eyed and unresponsive.

I try to parse maths like I do code, but then I too come across too many undefined wiggly hieroglyphs and the words “thus” and “therefore” and the strange phrase “this proves for” (proves /what/ for?), and my brain spews pages of errors that seem to boil down to “does not understand at all”.

I do understand algebra though, it’s just like coding – somewhere in my brain is a ‘pythagoras’ function and a ‘calculate the area of a circle’ function. I can’t rearrange equations though, and proofs never made any sense to me.
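Taken literally, those two mental “functions” really are one-liners (the names below are just the comment’s own, rendered as code):

```python
import math

def pythagoras(a, b):
    """Length of the hypotenuse for legs a and b."""
    return math.hypot(a, b)

def circle_area(r):
    """Area of a circle with radius r."""
    return math.pi * r ** 2

print(pythagoras(3, 4))  # 5.0
print(circle_area(2))    # ~12.566
```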

Now to really confuse things, mix in some physics…

Very well stated. I’ve had the exact same experience in higher math courses, though I managed to avoid “simian ticks”. Group theory, in particular, gives me flashback nightmares, as does field theory. I struggled and struggled in those classes, and never got more than a surface understanding of manipulating the symbols.

How about outing the books?

I’m a student myself, and this sort of thing bothers me too. I usually have to sit down and mentally whip myself into the state of mind required to parse the math into an English explanation, and then try to understand that too.

I struggled with this same problem throughout undergrad and into grad school. I knew I was a good programmer and I couldn’t understand why these things seemed so foreign to me. Then some smart professor came along and actually explained the truth to me: I was a great programmer and a really lousy computer scientist. After finally understanding this I stopped taking programming classes and loaded up on CS theory and math classes instead. No (physical) computers. No code.

For example, you will find that taking one theoretical language course will explain the motivation behind almost all programming languages. Take it from someone who learned the hard way. Learn the abstraction first!!! Everything else is just a more specific, simpler example of some abstraction.

I learned that while there are a great many bad textbooks, developing mathematical intuition and not getting scared off by notation takes time and effort.

Yeah. “So it is clear that

(expression one)

is equivalent to

(expression two).”

And I’m like — it IS? I stare at it for a while and realize they’ve done about 8 substitutions in between two lines.

Check out the R5RS standard sometime, it provides a nice clean denotational semantics for the core of Scheme expressed in lambda calculus. LC is a set of reduction rules simple enough for a middle-schooler to pick up (provided their brains aren’t daunted by the Greek letters), and Scheme is close enough to it that its core semantics finds ready expression in it. It’s really quite remarkable and exemplary of how sometimes things seem more complicated than they are (or are made more complicated than they should be).
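For what it’s worth, those reduction rules really do fit in a few lines of code. Here is a sketch (my own encoding, not anything from R5RS) of normal-order beta reduction for the untyped lambda calculus; capture-avoiding substitution is skipped for brevity, so it assumes distinct variable names:

```python
# Terms are tuples: ("var", name), ("lam", param, body), ("app", fn, arg).

def subst(term, name, value):
    """Replace free occurrences of name in term with value (naive)."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        _, param, body = term
        if param == name:               # bound variable shadows the name
            return term
        return ("lam", param, subst(body, name, value))
    _, fn, arg = term
    return ("app", subst(fn, name, value), subst(arg, name, value))

def reduce_once(term):
    """One normal-order beta step; returns None if term is in normal form."""
    if term[0] == "app":
        fn, arg = term[1], term[2]
        if fn[0] == "lam":              # redex: (lambda x. body) arg
            return subst(fn[2], fn[1], arg)
        step = reduce_once(fn)
        if step is not None:
            return ("app", step, arg)
        step = reduce_once(arg)
        if step is not None:
            return ("app", fn, step)
    elif term[0] == "lam":
        step = reduce_once(term[2])
        if step is not None:
            return ("lam", term[1], step)
    return None

def normalize(term):
    """Apply beta steps until no redex remains."""
    while (step := reduce_once(term)) is not None:
        term = step
    return term

# (lambda x. x) y  reduces to  y
print(normalize(("app", ("lam", "x", ("var", "x")), ("var", "y"))))
```

That whole evaluator is the “semantics” the Greek letters are describing; the notation is denser, but it is saying no more than this.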

Yes, the R5RS denotational semantics are nice, but they come with an absolute minimum of explanation. I don’t think it will make sense to anyone who doesn’t already grok denotational semantics.

I liked Gordon’s “The denotational description of programming languages”. YMMV.

I’ll out a book: Du and Ko, Problem Solving in Automata, Languages, and Complexity

Look, I’m cool with the material being hard to understand. But like Langdon said, things would suddenly appear that the authors were clearly taking for granted you just knew or would find elsewhere. That’s just BAD textbook writing right there. We did the whole book in one semester even though the authors recommended it for two. In the review for the course, I told the professor that using that book made me question how thoroughly the school actually wanted us students to learn this material. Somehow, I managed a B+.

I don’t really know a lot about denotational semantics, but operational semantics serves a similar purpose. I recommend reading Benjamin C. Pierce: Types and Programming Languages. I think it requires only high school math, and it defines everything it uses, including sets, lambda calculus and operational semantics. The focus is on type checking (including advanced type reconstruction/inference). Probably the best CS book I have ever read.
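To give a flavor of the kind of thing Pierce builds up (this is my own toy sketch in the spirit of the book’s early chapters, not code from it): the typing rules for a tiny language of booleans and numbers translate almost line-for-line into a checker.

```python
# Terms are nested tuples, e.g. ("succ", ("zero",)); types are the
# strings "Bool" and "Nat". Each branch below mirrors one typing rule.

def type_of(term):
    kind = term[0]
    if kind in ("true", "false"):
        return "Bool"
    if kind == "zero":
        return "Nat"
    if kind in ("succ", "pred"):
        if type_of(term[1]) == "Nat":
            return "Nat"
        raise TypeError(f"{kind} expects a Nat")
    if kind == "iszero":
        if type_of(term[1]) == "Nat":
            return "Bool"
        raise TypeError("iszero expects a Nat")
    if kind == "if":
        _, cond, then, alt = term
        if type_of(cond) != "Bool":
            raise TypeError("condition must be Bool")
        t1, t2 = type_of(then), type_of(alt)
        if t1 != t2:
            raise TypeError("branches must have the same type")
        return t1
    raise TypeError(f"unknown term {kind}")

# if iszero 0 then true else false  :  Bool
print(type_of(("if", ("iszero", ("zero",)), ("true",), ("false",))))  # Bool
```

The inference-rule notation in the book is exactly this function written sideways, one rule per branch.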

Sounds like my Algorithm Analysis class in school… that textbook, and the Romanian professor along with it, was just strange. I halfway wondered if we should be sacrificing live chickens or chanting mantras at some point.

And yet, somehow in some weird way I was grasping the concepts, because I got an A- and a personal congratulations from the professor who said I “got it” better than most. Er, ok…

Glad to see there are others who quickly get lost in the crazy symbols of certain higher level math topics. I had a hell of a time with advanced calculus, for some reason I would look at the symbols, and they would stare back at me… were they shaking? Were they shaking with mocking laughter? Once an excessive number of greek symbols started entering a course, it started to literally become “all greek to me”. Everything took me an incredibly long time because I had to remember every time what function a given symbol stood for, then how to use it… then the next one.. and the next one… After finally parsing the symbols I had to look at the whole equation again to try to figure out what it was supposed to do. This was after looking at hundreds of them.

I think a large portion of the problem is that math notation is quite old, created by people who understood Greek (either natively or through its study). So when they saw lambdas and sigmas and all these other distorted horrible symbols with strange names running around, they weren’t frightened at all, because they had seen them many times before.

I was faced with “I define some concept you have never heard of as being represented by this symbol: />|”

Oh my stars and garters yes.

As someone who’s trying to pull himself up by his bootstraps from a liberal arts degree to programming competence, I have to say… Oh my GOD I wish SICP and TAoCP didn’t require such strong familiarity with mathematical symbols to even begin to make use of them.

I’m inching my way through SICP, but sometimes I have to look at someone else’s solution just to understand the notation for the PROBLEM.

Thought I’d save time by starting some parts of my CS degree during my gap year.

Java and the Basics of Computer Science was a blast, so I figured what the hey and took Calculus next.

That was a year ago and I’m still fighting the trauma, beginning with Algebra I this next period.

Real university starts in a year and I’m still pretty horrified by it all.

Nice to see I’m not the only one..

My main problem with mathematical notation is that there isn’t (AFAIK) any easy way to look up the meaning of a symbol that you don’t know.

How much space would it take up in a book to at least map the weird symbols to their English pronunciations? Then, at least I could google them.

“I’ll out a book: Du and Ko, Problem Solving in Automata, Languages, and Complexity”

That book is evil. I haven’t used it for coursework, but I did a research project on P, NP and NP-completeness, and at the start of it my supervisor said, “I can give you a book on the subject” – it was Du and Ko – “but I recommend you go to the library and get one that you can actually read.”

That said, once you know what’s going on, it’s a good reference, and I ended up citing it in my project.

I stopped studying math during Functional Analysis II – when I realized that I had completely lost track of what anything in the course could conceivably be good for, and made the mistake of asking the prof, who turned out never to have considered the question. Interestingly enough, he originally came from physics, so posing the question was not as nonsensical as it might have seemed.

That said, I was reminded of that moment a few days ago, upon finding an article that heralded the clarity and straightforwardness of APL.

Here’s an implementation of the game of life in APL, thankfully with explanations. I still struggle with the idea of that string of symbols at the top being “readable”.

http://catpad.net/michael/apl/