# Prototyping thought

Published: Sat 28 October 2006

In a recent post, pozorvlak reminded me of one of the reasons it is important to have a good, obvious, and quick-to-write programming language around.

He, like me, is a mathematician, spending his time thinking, finding patterns, and trying to formulate (more or less) rigorous proofs that his patterns always hold -- or, failing that, ways to demonstrate that they are not universal.

In the post linked above, he starts with a neat little exercise, gets interested, and goes out to look at more examples. These show a very clear pattern, and after following the pattern quite some way out, he finally believes in it enough to start searching for a proof: which he also finds.

Now, this is one central heuristic in mathematics. Calculate examples. Look at the examples. All of a sudden a pattern appears, and you think up a reason why such a pattern could be there. While proving the "correctness", if you will, of the pattern, you'll either end up formulating what's going on in enough detail that you can possibly expand the statement to cover even more -- or you'll realize that it does not hold, and find a way to construct a counterexample. Either way, the mass of knowledge increases.

Now, what does this have to do with programming languages?

I started looking at Haskell due to a sequence of blog posts from sigfpe, who was describing Haskell programming from a highly algebraic point of view. And even after immersing myself in the subject, one thing that stands out is the relatively small distance between thought expressed in my ordinary day-to-day mathematical discourse and thought expressed in Haskell code. For the ubiquitous example, let's take the factorial function. Mathematically, I think about factorials as

"n! is the product of all integers from 1 to n"

or

"n! = Γ(n+1)" (though not so often, I must admit).

Then again, "the product of all", when I introduce it strictly for the first time, tends to be expressed along the lines of

"∏_{n=1}^{k} f(n) = f(k) · ∏_{n=1}^{k-1} f(n)"

or something along those lines.

So, what happens if I want to start playing around a bit with factorials, and see whether I can say anything interesting about them?

In C or similar languages, I'll fire up an editor, and a command shell, and start writing code that looks something like

```
#include <stdio.h>

int main(void) {
    int i, j, f = 1;
    scanf("%d", &i);
    for (j = 1; j <= i; j++)
        f *= j;
    printf("%d\n", f);
    return 0;
}
```

and I will spend more time writing scaffolding and reworking the problem into something I can write in C than I will actually spend working with my mathematical intuition about it.
C++, Java, PHP, Perl -- it wouldn't be much different in any of these. None of them has an interpreter I like, and none of them differs all that much from the mindset of C. (Yes, I say this knowing full well the difference between imperative and object-oriented programming...)
In functional languages, the way I write the code is much more reminiscent of the way I define it in my head. I could dig out enough Lisp or Scheme from the back of my mind to give you an example, but it's getting late and I can't be bothered. So instead I'll go straight on to the thingie I want to praise.
In Haskell, I'll fire up my interpreter, and write at the prompt something very much like

```
> let fac 0 = 1; fac n = n*fac (n-1)
```

and then start firing off actual calculations into the box.

```
> (fac 8)/((fac 3)*(fac 5))
56.0
```
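To give a flavour of how such a session tends to grow, here is a sketch of the sort of definitions one types in next; `choose` is my own hypothetical name for the binomial coefficient, not anything from the post, and the `Integer` signatures are just there to keep the arithmetic exact rather than defaulting to `Double`.

```
-- The same factorial, written out as one would in a source file.
fac :: Integer -> Integer
fac 0 = 1
fac n = n * fac (n - 1)

-- Binomial coefficients, defined straight from the factorial formula.
choose :: Integer -> Integer -> Integer
choose n k = fac n `div` (fac k * fac (n - k))
```

Typed at the prompt, `[choose 8 k | k <- [0..8]]` then hands back a whole row of Pascal's triangle in one go -- exactly the kind of "calculate examples, look at the examples" loop described earlier.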

During my time so far, I've used Haskell quite a bit in my research to generate lists of test cases. Not necessarily because Haskell is the best possible language for it, but because I could very quickly write down my thoughts, get working code out of them, and just go ahead and do what I wanted. Most languages I've worked with have their uses -- and I'm not out to bash anything right now. But with Haskell, I've had a much lower threshold from thought to code, and much less distance between my coder head and my mathematician head, than with anything else I've seen so far.
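As an entirely hypothetical illustration of what generating such test cases can look like, here is a one-liner that cross-checks the factorial against a brute-force count of permutations; the name and the range are my own choices, kept small because the number of permutations grows, well, factorially.

```
import Data.List (permutations)

-- Check n! against a brute-force count of the permutations of [1..n],
-- for each n from 0 to 6.
factorialMatchesPermutations :: [Bool]
factorialMatchesPermutations =
  [ length (permutations [1..n]) == product [1..n] | n <- [0..6] ]
```

Evaluating `factorialMatchesPermutations` at the prompt should give back a list of seven `True`s -- and if it ever didn't, that would itself be the interesting result worth chasing.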