Why is writing a compiler in a functional language easier? [closed]
I've been thinking about this question for a long time, but I couldn't really find an answer on Google, nor a similar question on Stack Overflow. If there is a duplicate, I'm sorry for that.
A lot of people seem to say that writing compilers and other language tools in functional languages such as OCaml and Haskell is much more efficient and easier than writing them in imperative languages.
Is this true? And if so, why is it so efficient and easy to write them in a functional language instead of an imperative language like C? Also, isn't a language tool written in a functional language slower than one written in a low-level language like C?
Solution 1:
Oftentimes a compiler works a lot with trees. The source code is parsed into a syntax tree. That tree might then be transformed into another tree with type annotations in order to perform type checking. Next you might convert that tree into a tree containing only core language elements (converting syntactic sugar into an unsugared form). Then you might perform various optimizations that are basically transformations on the tree. After that you would probably put the tree into some normal form and then iterate over it to create the target (assembly) code.
Functional languages have features like pattern matching and good support for efficient recursion, which make it easy to work with trees, so that's why they're generally considered good languages for writing compilers.
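For illustration, here is a minimal OCaml sketch of the kind of tree-to-tree pass described above; the AST and the Neg sugar it removes are made up for the example:

```ocaml
(* A hypothetical surface AST with one piece of syntactic sugar. *)
type expr =
  | Int of int
  | Var of string
  | Add of expr * expr
  | Sub of expr * expr
  | Neg of expr                      (* sugar: -e stands for 0 - e *)
  | Let of string * expr * expr

(* One pass over the tree that rewrites the sugar away. *)
let rec desugar (e : expr) : expr =
  match e with
  | Int _ | Var _ -> e
  | Add (a, b) -> Add (desugar a, desugar b)
  | Sub (a, b) -> Sub (desugar a, desugar b)
  | Neg a -> Sub (Int 0, desugar a)
  | Let (x, rhs, body) -> Let (x, desugar rhs, desugar body)
```

Each later phase (type annotation, optimization, code generation) has the same shape: recurse over the tree, match on each node, and build the result.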
Solution 2:
A lot of compiler tasks are pattern matching on tree structures.
Both OCaml and Haskell have powerful and concise pattern matching capabilities.
It's harder to add pattern matching to imperative languages, because whatever value is being evaluated or destructured to match against a pattern must be side-effect free.
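To give a sense of how concise this gets, here is a toy constant-folding pass in OCaml; the expression type and the rewrite rules are invented for illustration:

```ocaml
(* A toy expression type and a constant-folding optimization. *)
type expr =
  | Num of int
  | Add of expr * expr
  | Mul of expr * expr

let rec fold (e : expr) : expr =
  match fold_children e with
  | Add (Num a, Num b) -> Num (a + b)            (* fold constant sums     *)
  | Mul (Num a, Num b) -> Num (a * b)            (* fold constant products *)
  | Mul (Num 0, _) | Mul (_, Num 0) -> Num 0     (* multiplying by zero    *)
  | e' -> e'

(* Fold the children first, then look at the rebuilt node. *)
and fold_children = function
  | Num n -> Num n
  | Add (a, b) -> Add (fold a, fold b)
  | Mul (a, b) -> Mul (fold a, fold b)
```

Each rewrite rule is a single clause; adding a new optimization means adding another line to the match.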
Solution 3:
One important factor to consider is that a big milestone in any compiler project is when you can self-host the compiler and "eat your own dog food." For this reason, when you look at languages like OCaml, which were designed for language research, you find that they tend to have great features for compiler-type problems.
In my last compiler-esque job we used OCaml for exactly this reason while manipulating C code; it was simply the best tool around for the task. If the folks at INRIA had built OCaml with different priorities, it might not have been such a good fit.
That said, functional languages are the best tool for solving any problem, so it logically follows that they are the best tool for solving this particular problem. QED.
/me: crawls back to my Java tasks a little less joyfully...
Solution 4:
Basically, a compiler is a series of transformations from one representation of a program to another: from source to IR, from IR to optimized IR, from IR to assembly, and so on. This is precisely the sort of thing functional languages are designed for: a pure function is just a transformation from one thing to another. Imperative functions don't have this quality. Although you can write this kind of code in an imperative language, functional languages are specialized for it.
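A sketch of that shape in OCaml, with the phases stubbed out and the types invented purely for illustration:

```ocaml
(* Each phase is a pure function from one representation to the next.
   The types and phase bodies are placeholders, not a real compiler. *)
type ast = Ast of string
type ir  = Ir of string
type asm = Asm of string

let parse (src : string) : ast  = Ast src    (* source text -> syntax tree  *)
let lower (Ast s : ast) : ir    = Ir s       (* syntax tree -> IR           *)
let optimize (Ir s : ir) : ir   = Ir s       (* IR -> (hopefully) better IR *)
let emit (Ir s : ir) : asm      = Asm s      (* IR -> assembly              *)

(* The whole compiler is just the composition of its phases. *)
let compile (src : string) : asm =
  src |> parse |> lower |> optimize |> emit
```

Because every phase is a value-to-value function, the pipeline is easy to rearrange, test phase by phase, or extend with another pass.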
Solution 5:
One possibility is that a compiler tends to have to deal very carefully with a whole host of corner cases. Getting the code right is often made easier by using design patterns that structure the implementation in a way that parallels the rules it implements. Often that ends up being a declarative (pattern matching, "where") rather than imperative (sequencing, "when") design and thus easier to implement in a declarative language (and most of them are functional).
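As a concrete (and invented) example of code that parallels the rules it implements, the typing rules of a tiny expression language can be written so that each clause mirrors one rule of the specification:

```ocaml
(* A tiny language: each match clause below corresponds to one typing rule. *)
type ty = TInt | TBool

type expr =
  | Int of int
  | Bool of bool
  | Add of expr * expr
  | If of expr * expr * expr

exception Type_error of string

let rec typeof (e : expr) : ty =
  match e with
  | Int _  -> TInt                                               (* integer literals *)
  | Bool _ -> TBool                                              (* boolean literals *)
  | Add (a, b) when typeof a = TInt && typeof b = TInt -> TInt   (* + needs two Ints *)
  | Add _ -> raise (Type_error "+ expects two integers")
  | If (c, t, f) when typeof c = TBool && typeof t = typeof f -> typeof t
  | If _ -> raise (Type_error "ill-typed if expression")
```

The corner cases show up as explicit clauses rather than being scattered through control flow, which is exactly the declarative "where" rather than imperative "when" style described above.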