Should do-notation be avoided in Haskell?
Solution 1:
do notation in Haskell desugars in a pretty simple way.

    do
      x <- foo
      e1
      e2
      ...

turns into

    foo >>= \x ->
      do
        e1
        e2
        ...

and

    do
      x
      e1
      e2
      ...

into

    x >>
      do
        e1
        e2
        ...
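As a concrete instance of these rules, here is a small IO action (the names greet and greet' are just made up for this sketch) written with do notation next to its hand-desugared form; both denote the same action:

    -- do-notation version
    greet :: IO ()
    greet = do
      name <- getLine
      putStrLn ("Hello, " ++ name)
      putStrLn "Goodbye"

    -- the same action after applying the rules above by hand
    greet' :: IO ()
    greet' =
      getLine >>= \name ->
      putStrLn ("Hello, " ++ name) >>
      putStrLn "Goodbye"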
This means you can really write any monadic computation with >>= and return. The only reason we usually don't is that the syntax is more painful. Monads are useful for imitating imperative code, and do notation makes the code look imperative.
The C-ish syntax makes it far easier for beginners to understand. You're right that it doesn't look as functional, but requiring someone to grok monads properly before they can use IO is a pretty big deterrent.
The reason we'd use >>= and return, on the other hand, is that they're much more compact for one- or two-liners. They do tend to get unreadable for anything much bigger, though. So, to directly answer your question: no, please don't avoid do notation when it's appropriate.
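As a rough illustration of that trade-off (the function names and file names here are invented for the example), a one-liner reads fine with the operators, while a slightly longer sequence is easier to follow in do form:

    import Data.Char (toUpper)

    -- compact one-liner: read a line and echo it back upper-cased
    echoUpper :: IO ()
    echoUpper = getLine >>= putStrLn . map toUpper

    -- a slightly longer sequence of effects reads more clearly with do
    copyGreeting :: IO ()
    copyGreeting = do
      name     <- getLine
      contents <- readFile "greeting.txt"
      writeFile "out.txt" (contents ++ name)
      putStrLn "done"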
Lastly, the two operators you saw, <$> and <*>, are fmap and applicative application respectively, not monadic operators. They can't represent a lot of what do notation does: they're more compact, to be sure, but they don't let you easily name intermediate values. Personally, I use them about 80% of the time, mostly because I tend to write very small composable functions anyway, which applicative style is great for.
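For instance (a minimal sketch; readPair and readRange are invented names, and <$> / <*> are exported by the Prelude in current GHC versions), applicative style works nicely when you only combine results positionally, but as soon as a later step needs to inspect an earlier result you want the named bindings of do:

    -- applicative style: combine two results positionally, no names needed
    readPair :: IO (Int, Int)
    readPair = (,) <$> readLn <*> readLn

    -- monadic/do style: the intermediate values get names,
    -- and a later step can depend on them
    readRange :: IO (Int, Int)
    readRange = do
      lo <- readLn
      hi <- readLn
      if hi < lo then pure (hi, lo) else pure (lo, hi)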
Solution 2:
"In my opinion, <$> and <*> make the code more FP than IO."
Haskell is not a purely functional language because that "looks better". Sometimes it does, often it doesn't. The reason for staying functional is not its syntax but its semantics: it equips us with referential transparency, which makes it far easier to prove invariants, allows very high-level optimisations, makes it easy to write general-purpose code, etc.
None of this has much to do with syntax. Monadic computations are still purely functional – regardless of whether you write them with do notation or with <$>, <*> and >>= – so we get Haskell's benefits either way.
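As a small sketch of that claim (halveSumDo and halveSumOps are invented names), the same Maybe computation written both ways denotes exactly the same pure value, so referential transparency holds for either spelling:

    -- both definitions denote the same pure value of type Maybe Int
    halveSumDo :: Maybe Int -> Maybe Int -> Maybe Int
    halveSumDo mx my = do
      x <- mx
      y <- my
      if even (x + y) then Just ((x + y) `div` 2) else Nothing

    halveSumOps :: Maybe Int -> Maybe Int -> Maybe Int
    halveSumOps mx my =
      mx >>= \x ->
      my >>= \y ->
      if even (x + y) then Just ((x + y) `div` 2) else Nothing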
However, notwithstanding the aforementioned FP benefits, it is often more intuitive to think about algorithms from an imperative-like point of view – even if you're accustomed to how this is implemented through monads. In these cases, do notation gives you that quick insight into "order of computation", "origin of data" and "point of modification", yet it's trivial to mentally desugar it to the >>= version to grasp what's going on functionally.
Applicative style is certainly great in many ways, but it is inherently point-free. That is often a good thing, yet especially in more complex problems it can be very helpful to give names to "temporary" variables. When using only "FP" Haskell syntax, this requires either lambdas or explicitly named functions. Both have good use cases, but the former introduces quite a bit of noise right in the middle of your code, and the latter rather disrupts the "flow", since it requires a where or let placed somewhere away from where you use it. do, on the other hand, allows you to introduce a named variable right where you need it, without introducing any noise at all.
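To make that concrete (digitSummary and reportDigits are invented names for this sketch), compare naming a temporary via a lambda in point-free-ish code with naming it right in the middle of a do block:

    import Data.Char (isDigit)

    -- point-free-ish: the temporary has no name of its own,
    -- so we reach for a lambda right in the middle of the expression
    digitSummary :: String -> String
    digitSummary s =
      (\ds -> show (length ds) ++ " digits: " ++ ds) (filter isDigit s)

    -- in a do block the temporary is named right where it is needed
    reportDigits :: IO ()
    reportDigits = do
      line <- getLine
      let ds = filter isDigit line
      putStrLn (show (length ds) ++ " digits: " ++ ds)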