What can C++ offer as far as functional programming?

Solution 1:

Let me start by noting that most of these are not "intrinsic", or shall we say, "required"; many of these are absent from notable functional languages, and in theory, many of these features can be used to implement the others (such as higher order functions in untyped lambda calculus).

However, let's go through these:

Closures

Closures are not necessary, and are syntactic sugar: by the process of lambda lifting, you can convert any closure into a function object (or even just a free function).
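For illustration, here is roughly what that lifting looks like by hand (a small sketch of my own; the names are made up): the captured variable simply becomes a member of a function object.

// (inside a function body)
int threshold = 5;
auto is_big = [threshold](int x) { return x > threshold; };

// The lifted equivalent: the captured variable becomes an ordinary member
struct is_big_t
{
    int threshold;
    explicit is_big_t(int t) : threshold(t) {}
    bool operator()(int x) const { return x > threshold; }
};
is_big_t is_big_lifted(threshold);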

Named Functors (C++03)

Just to show that this isn't a problem to begin with, here's a simple way to do this without lambdas in C++03:

Isn't A Problem:

struct named_functor 
{
    void operator()( int val ) { std::cout << val; }
};
vector<int> v;
for_each( v.begin(), v.end(), named_functor());

Anonymous functions (C++11)

However, anonymous functions in C++11 (also called lambda functions, a name inherited from the LISP tradition), which are implemented as function objects with unique, unnamed types, provide the same usability (and are in fact referred to as closures, so yes, C++11 does have closures):

No problem:

vector<int> v;
for_each( v.begin(), v.end(), [] (int val)
{
    std::cout << val;
} );

Polymorphic anonymous functions (C++14)

Even less of a problem, we don't need to care about the parameter types anymore in C++14:

Even Less Problem:

auto lammy = [] (auto val) { std::cout << val; };

vector<int> v;
for_each( v.begin(), v.end(), lammy);

forward_list<double> w;
for_each( w.begin(), w.end(), lammy);

I should note that lambdas fully support closure semantics, such as capturing variables from the enclosing scope, both by reference and by value, as well as capturing ALL variables rather than only specified ones. Lambdas are implicitly defined as function objects, which provides the necessary context for these captures to work; conceptually, this is lambda lifting.
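A quick sketch of the capture modes (the variable names are mine):

// (inside a function body)
int x = 1, y = 2;

auto by_value   = [x]  { return x; };        // x is copied into the closure
auto by_ref     = [&y] { return ++y; };      // refers to (and mutates) the original y
auto all_by_val = [=]  { return x + y; };    // captures everything it uses, by value
auto all_by_ref = [&]  { return x + ++y; };  // captures everything it uses, by reference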

Higher Order Functions No problem:

std::function<void(int)> foo_returns_fun( void );

Is that not sufficient for you? Here's a lambda factory:

std::function<void(void)> foo_lambda( int foo ) { return [=] () { std::cout << foo; }; }

You can't create new functions at run time, but you can create function objects, which can be passed around as std::function just as normal functions can. So all the functionality is there; it's just up to you to put it together. I might add that much of the STL is designed around giving you reusable components with which to form ad-hoc function objects, approximating creating functions out of whole cloth.
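For instance, nothing stops you from writing your own higher order helpers. Here is a sketch (compose is not a standard function, merely an illustration) of a function that takes two callables and returns a new one:

#include <functional>

// compose(f, g) yields a new callable computing f(g(x))
std::function<int(int)> compose(std::function<int(int)> f, std::function<int(int)> g)
{
    return [=](int x) { return f(g(x)); };
}

auto add_one = [](int x) { return x + 1; };
auto square  = [](int x) { return x * x; };
auto square_then_add = compose(add_one, square);   // square_then_add(3) == 10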

Partial Function Applications No problem

std::bind fully supports this feature, and is quite adept at transformations of functions into arbitrarily different ones as well:

void f(int n1, int n2, int n3, const int& n4, int n5)
{
    std::cout << n1 << ' ' << n2 << ' ' << n3 << ' ' << n4 << ' ' << n5 << '\n';
}

int n = 7;
// (_1 and _2 are from std::placeholders, and represent future
// arguments that will be passed to f1)
auto f1 = std::bind(f, _2, _1, 42, std::cref(n), n);
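Calling the bound object then supplies only the remaining arguments; continuing the example above:

// (inside a function body, with f, n, and f1 as defined above)
f1(1, 2);   // calls f(2, 1, 42, n, 7), printing "2 1 42 7 7"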

For memoization and similar function-transformation techniques, you have to code it yourself using a wrapper:

template <typename ReturnType, typename... Args>
std::function<ReturnType (Args...)>
memoize(ReturnType (*func) (Args...))
{
    auto cache = std::make_shared<std::map<std::tuple<Args...>, ReturnType>>();
    return ([=](Args... args) mutable  
    {
        std::tuple<Args...> t(args...);
        if (cache->find(t) == cache->end())
            (*cache)[t] = func(args...);

        return (*cache)[t];
    });
}
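Usage is then just a matter of wrapping an existing function; slow_square below is a made-up stand-in for something expensive:

int slow_square(int x) { return x * x; }   // stand-in for something expensive

// (inside a function body)
auto fast_square = memoize(&slow_square);
fast_square(12);   // computed and cached
fast_square(12);   // answered straight from the cache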

It can be done, and in fact it can be done relatively automatically, but no one has yet done it for you.

Combinators No problem:

Let's start with the classics: map, filter, fold.

vector<int> startvec(100,5);
vector<int> endvec(100,1);

// map startvec through negate
std::transform(startvec.begin(), startvec.end(), endvec.begin(), std::negate<int>());

// fold startvec through add
int sum =  std::accumulate(startvec.begin(), startvec.end(), 0, std::plus<int>());

// filter startvec, copying only the non-zero elements
std::copy_if (startvec.begin(), startvec.end(), endvec.begin(), [](int i){return i != 0;} );

These are quite simple, but the headers <functional>, <algorithm>, and <numeric> provide dozens of functors (objects callable as functions) which can be plugged into these and other generic algorithms. Together, they form a powerful ability to compose features and behavior.

Let's try something more functional though: the SKI combinator calculus can easily be implemented, and is very functional, deriving as it does from untyped lambda calculus:

template < typename T >
T I(T arg)
{
    return arg;
}

template < typename T >
std::function<T(void*)> K(T arg)
{
    return [=](void*) -> T { return arg; };
}

// S f g x = (f x)(g x)
template < typename T >
T S(T arg1, T arg2, T arg3)
{
    return arg1(arg3)(arg2(arg3));
}

These are very fragile; in effect, the arguments must be of a type which returns its own type and takes a single argument of that same type; such constraints would then allow all the functional reasoning of the SKI system to be applied safely to the composition of these. With a little work and some template metaprogramming, much of this could even be done at compile time through the magic of expression templates to form highly optimized code.

Expression templates, as an aside, are a technique in which an expression, usually a series of operations, is encoded as a template argument rather than evaluated immediately. Expression templates therefore are compile time combinators; they are highly efficient, type safe, and effectively allow domain specific languages to be embedded directly into C++. While these are high level topics, they are put to good use in the standard library and in boost::spirit, as shown below.
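As a deliberately tiny sketch of the idea (my own illustration; real expression template libraries overload the arithmetic operators and manage lifetimes far more carefully), here is a lazy elementwise addition whose work is deferred until an element is accessed:

#include <vector>
#include <cstddef>

struct vec
{
    std::vector<double> data;
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }
};

// An unevaluated "lhs + rhs": building it performs no arithmetic at all
template <typename L, typename R>
struct add_expr
{
    const L& lhs;
    const R& rhs;
    double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
    std::size_t size() const { return lhs.size(); }
};

template <typename L, typename R>
add_expr<L, R> make_add(const L& lhs, const R& rhs) { return add_expr<L, R>{lhs, rhs}; }

vec a{{1, 2}}, b{{3, 4}}, c{{5, 6}};

// (a + b) + c is encoded in the type add_expr<add_expr<vec,vec>,vec>; each element
// is computed on demand, with no temporary vectors ever created
double first = make_add(make_add(a, b), c)[0];   // 1 + 3 + 5 == 9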

Spirit Parser Combinators

template <typename Iterator>
bool parse_numbers(Iterator first, Iterator last)
{
    using qi::double_;
    using qi::char_;
    using qi::phrase_parse;
    using ascii::space;

    bool r = phrase_parse(
        first,
        last,
        double_ >> *(char_(',') >> double_),
        space
    );

    if (first != last) // fail if we did not get a full match
        return false;
    return r;
}

This identifies a comma delimited list of numbers. double_ and char_ are individual parsers that match a single double or a single char, respectively. Using the >> operator, each parser is combined with the next, forming a single large combined parser. They pass themselves along via templates, building up the "expression" of their combined action. This is exactly analogous to traditional combinators, and is fully compile time checked.

Valarray

valarray, a part of the C++11 standard, is allowed (but, for some odd reason, not required) to use expression templates in order to make its transformations efficient. In theory, any number of operations could be strung together, forming one large expression which can then be aggressively inlined for speed. This is another form of combinator.
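For example (the element values here are mine), whole-array arithmetic reads like the underlying math, and an implementation is free to fuse it into a single loop:

#include <valarray>

std::valarray<double> a = {1.0, 2.0, 3.0};
std::valarray<double> b = {4.0, 5.0, 6.0};

// One logical expression over whole arrays; no intermediate array need ever exist
std::valarray<double> c = 2.0 * a + b;                // {6, 9, 12}
std::valarray<double> h = std::sqrt(a * a + b * b);   // elementwise hypotenuse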

I suggest this resource if you wish to know more about expression templates; they are absolutely fantastic at getting all the compile time checks you wish done, as well as improving the re-usability of code. They are hard to program, however, which is why I would advise you find a library that contains the idioms you want instead of rolling your own.

Function Signatures As Types No problem

void my_int_func(int x)
{
    printf( "%d\n", x );
}

void (*foo)(int) = &my_int_func;

or, in C++, we'd use std::function:

std::function<void(int)> func_ptr = &my_int_func;

Type Inference No problem

Simple variables typed by inference:

// var is int, inferred via constant
auto var = 10;

// y is int, inferred via var
decltype(var) y = var;

Generic type inference in templates:

template < typename T, typename S >
auto multiply (const T t, const S s) -> decltype( t * s )
{
    return t * s;
}

Furthermore, this can be used in lambdas and function objects; basically, any compile time expression can make use of decltype for compile time type inference.

But that's not what you are really after here, are you? You want type deduction as well as type restriction, you want type reconstruction and type derivations. All of this can be done with concepts, but they are not part of the language yet.

So, why don't we just implement them? boost::concepts, boost::typeerasure, and type traits (descended from boost::tti and boost::typetraits) can do all of this.

Want to restrict a function based on some type? std::enable_if to the rescue!
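A small sketch of what that looks like (the function name is mine):

#include <type_traits>

// Participates in overload resolution only when T is an integral type
template <typename T>
typename std::enable_if<std::is_integral<T>::value, T>::type
twice(T value)
{
    return value * 2;
}

int forty_two = twice(21);   // fine: int is integral
// twice(1.5);               // would not compile: no overload accepts a double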

Ah, but that's ad hoc right? That would mean for any new type you'd want to construct, you'd need to do boilerplate, etc etc. Well, no, but here's a better way!

template<typename RanIter>
BOOST_CONCEPT_REQUIRES(
    ((Mutable_RandomAccessIterator<RanIter>))
    ((LessThanComparable<typename Mutable_RandomAccessIterator<RanIter>::value_type>)),
    (void)) // return type
stable_sort(RanIter,RanIter);

Now your stable_sort can only work on types that match your stringent requirements. boost::concept has tons of prebuilt ones, you just need to put them in the right place.

If you want to call different functions or do different things based on types, or disallow certain types, use type traits; they're now standard. Need to select based on parts of a type, rather than the full type? Or allow many different types, which share a common interface, to be treated as a single type with that same interface? Well, then you need type erasure, illustrated below:

Type Polymorphism No problem

Templates, for compile time type polymorphism:

std::vector<int> intvector;
std::vector<float> floatvector;
...

Type erasure, for run time and adaptor based type polymorphism:

boost::any can_contain_any_type;
std::function<void()> can_call_any_function;   // any callable with a matching signature
any_iterator can_iterate_any_container;
...

Type erasure is possible in any OO language, and involves setting up small function objects which derive from a common interface and translate internal objects to it. With a little boost MPL boilerplate, this is fast, easy, and effective. Expect to see this become really popular soon.
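In outline, the technique looks like this (a hand-rolled sketch of the idiom, not boost's actual machinery):

#include <memory>
#include <iostream>

// Any type with a suitable draw() member can hide behind this single interface
class drawable
{
    struct concept_t
    {
        virtual ~concept_t() {}
        virtual void draw() const = 0;
    };

    template <typename T>
    struct model_t : concept_t
    {
        T object;
        model_t(T obj) : object(obj) {}
        void draw() const { object.draw(); }
    };

    std::shared_ptr<const concept_t> self;

public:
    template <typename T>
    drawable(T obj) : self(new model_t<T>(obj)) {}

    void draw() const { self->draw(); }
};

struct circle { void draw() const { std::cout << "circle\n"; } };
struct square { void draw() const { std::cout << "square\n"; } };

drawable d1 = circle();   // unrelated types, one erased interface
drawable d2 = square();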

Immutable Datastructures No syntax for explicit construction, but possible:

This can be done by simply not exposing mutators, or via template metaprogramming. As this is a lot of code (a full ADT can be quite large), I will link you here, to show how to make an immutable singly linked list.

To do this at compile time would require a good amount of template magic, but can be done more easily with constexpr. This is an exercise for the reader; I don't know of any compile time libraries for this off the top of my head.
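For a taste of what the run-time version looks like, here is a minimal persistent (immutable) singly linked list, sketched by hand rather than taken from any library:

#include <memory>

// Every node is immutable; "adding" an element creates a new head that shares
// the old tail, so previous versions of the list remain valid and unchanged
template <typename T>
class plist
{
    struct node
    {
        T value;
        std::shared_ptr<const node> next;
        node(const T& v, std::shared_ptr<const node> n) : value(v), next(n) {}
    };

    std::shared_ptr<const node> head;

    explicit plist(std::shared_ptr<const node> h) : head(h) {}

public:
    plist() {}

    plist push_front(const T& value) const
    {
        return plist(std::make_shared<const node>(value, head));
    }
    bool     empty() const     { return !head; }
    const T& front() const     { return head->value; }
    plist    pop_front() const { return plist(head->next); }
};

plist<int> empty_list;
plist<int> one = empty_list.push_front(1);
plist<int> two = one.push_front(2);   // 'one' is untouched and still usable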

However, making an immutable datastructure from the STL is quite easy:

const vector<int> myvector;

There you are; a data structure that cannot be changed! In all seriousness, finger tree implementations do exist and are probably your best bet for associative array functionality. It's just not done for you by default.

Algebraic data types No problem:

The amazing boost::mpl allows you to constrain uses of types, and along with boost::fusion and boost::functional it lets you do anything at compile time that you would want with regard to ADTs. In fact, most of it is done for you:

#include <boost/mpl/void.hpp>
//A := 1
typedef boost::mpl::void_ A;

As stated earlier, a lot of the work isn't done for you in a single place; for example, you'd need to use boost::optional to get optional types, and mpl to get the unit type, as seen above. But using relatively simple compile time template mechanics, you can build recursive ADT types, which means you can implement generalized ADTs. As the template system is Turing complete, you have a Turing complete type checker and ADT generator at your disposal.
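For example, option types really are a drop-in with boost::optional (find_first_even is a made-up example):

#include <boost/optional.hpp>
#include <vector>
#include <cstddef>

// "Maybe int": either a value or nothing, with no magic sentinel values
boost::optional<int> find_first_even(const std::vector<int>& v)
{
    for (std::size_t i = 0; i < v.size(); ++i)
        if (v[i] % 2 == 0) return v[i];
    return boost::none;
}

std::vector<int> values;
boost::optional<int> found = find_first_even(values);
int result = found ? *found : -1;   // unwrap with an explicit default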

It's just waiting for you to bring the pieces together.

Variant based ADT's

boost::variant provides type checked unions, in addition to the original unions in the language. These can be used with no fuss, drop in:

boost::variant< int, std::string > v;

This variant, which can be int or string, can be assigned either way with checking, and you can even do run time variant based visitation:

class times_two_visitor
    : public boost::static_visitor<>
{
public:
    void operator()(int & i) const
    {
        i *= 2;
    }
    void operator()(std::string & str) const
    {
        str += str;
    }
};
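Applying the visitor then dispatches on whichever type the variant currently holds:

// (inside a function body)
boost::variant< int, std::string > v = 21;
boost::apply_visitor(times_two_visitor(), v);   // v now holds 42

v = std::string("ha");
boost::apply_visitor(times_two_visitor(), v);   // v now holds "haha"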

Anonymous/Ad-hoc data structures No problem:

Of course we have tuples! You could use structs if you like, or:

std::tuple<int,char> foo (10,'x');

You can also perform a good deal of operations on tuples:

// Make them
auto mytuple = std::make_tuple(3.14,"pi");
std::pair<int,char> mypair (10,'a');

// Concatenate them
auto mycat = std::tuple_cat ( mytuple, std::tuple<int,char>(mypair) );

// Unpack them
double a; int b;
std::tie (a, std::ignore, b, std::ignore) = mycat;   // a == 3.14, b == 10

Tail Recursion No explicit support, iteration is sufficient

Tail-call optimization is not mandated by Common LISP, though it is by Scheme, so I don't know if you can say it's required. However, you can easily write tail-recursive code in C++:

std::size_t get_a_zero(vector<int>& myints, std::size_t a ) {
   if ( myints.at(a) == 0 ) {
      return a;
   }
   if(a == 0) return myints.size() + 1;

   return get_a_zero(myints, a - 1 );   // tail recursion
}

Oh, and GCC will compile this into an iterative loop, no harm no foul. While this behavior is not mandated, it is allowable and is done in at least one compiler I know of (possibly Clang as well). But we don't need tail recursion: C++ is perfectly fine with mutation:

std::size_t get_a_zero(vector<int>& myints, std::size_t a ) {
   for(std::size_t i = 0; i < myints.size(); ++i){
       if(myints.at(i) == 0) return i;
    }
    return myints.size() + 1;
}

Tail recursion is optimized into iteration, so you have exactly as much power. Furthermore, through boost::coroutine one can easily provide user defined stacks and allow for unbounded recursion, making tail recursion unnecessary. The language is not actively hostile to recursion nor to tail recursion; it merely demands you provide the safety yourself.

Pattern Matching No problem:

This can easily be done via boost::variant, as detailed elsewhere in this answer, via the visitor pattern:

class Match : public boost::static_visitor<> {
public:
    Match();//I'm leaving this part out for brevity!
    void operator()(const int& _value) const {
       boost::unordered_map<int,boost::function<void(void)> >::const_iterator operand 
           = m_IntMatch.find(_value);
       if(operand != m_IntMatch.end()){
           (operand->second)();
        }
        else{
            defaultCase();
        }
    }
private:
    void defaultCase() const { std::cout << "Hey, what the..." << std::endl; }
    boost::unordered_map<int,boost::function<void(void)> > m_IntMatch;
};

This example, from this very charming website shows how to gain all the power of Scala pattern matching, merely using boost::variant. There is more boilerplate, but with a nice template and macro library, much of that would go away.

In fact, here is a library that has done all that for you:

#include <utility>
#include "match.hpp"                // Support for Match statement

typedef std::pair<double,double> loc;

// An Algebraic Data Type implemented through inheritance
struct Shape
{
    virtual ~Shape() {}
};

struct Circle : Shape
{
    Circle(const loc& c, const double& r) : center(c), radius(r) {}
    loc    center;
    double radius;
};

struct Square : Shape
{
    Square(const loc& c, const double& s) : upper_left(c), side(s) {}
    loc    upper_left;
    double side;
};

struct Triangle : Shape
{
    Triangle(const loc& a, const loc& b, const loc& c) : first(a), second(b), third(c) {}
    loc first;
    loc second;
    loc third;
};

loc point_within(const Shape* shape)
{
    Match(shape)
    {
       Case(Circle)   return matched->center;
       Case(Square)   return matched->upper_left;
       Case(Triangle) return matched->first;
       Otherwise()    return loc(0,0);
    }
    EndMatch
}

int main()
{
    point_within(new Triangle(loc(0,0),loc(1,0),loc(0,1)));
    point_within(new Square(loc(1,0),1));
    point_within(new Circle(loc(0,0),1));
}

As provided by this lovely Stack Overflow answer. As you can see, it is not merely possible but also pretty.

Garbage Collection Future standard, allocators, RAII, and shared_ptr are sufficient

While C++ does not have a GC, there is a proposal for one that was voted down for C++11, but which may be included in C++1y. There are a wide variety of user defined collectors you can use, but C++ does not need garbage collection.

C++ has an idiom known as RAII to deal with resources and memory; for this reason, C++ has no need for a GC, as it does not produce garbage: everything is cleaned up promptly and in the correct order by default. This does introduce the problem of who owns what, but that is largely solved in C++11 via shared pointers, weak pointers, and unique pointers:

// One shared pointer to some shared resource
std::shared_ptr<int> my_int (new int);

// Now we both own it!
std::shared_ptr<int> shared_int(my_int);

// I can use this int, but I cannot prevent its destruction
std::weak_ptr<int> weak_int (shared_int);

// Only I can ever own this int
std::unique_ptr<int> unique_int (new int);

These allow you to provide a much more deterministic and user controlled form of garbage collection, one that does not invoke any stop-the-world behavior.

Not easy enough for you? Use a custom allocator, such as boost::pool, or roll your own; it's relatively easy to use a pool or arena based allocator to get the best of both worlds: you can allocate as freely as you like, then simply delete the pool or arena when you are done. No fuss, no muss, and no stopping the world.
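A toy arena (my own sketch, which ignores alignment and oversized requests, and is not the boost::pool interface) shows the idea: allocate freely, then let everything go at once:

#include <vector>
#include <cstddef>
#include <new>

// Hands out memory from large blocks; nothing is freed until the arena is
// destroyed, at which point everything goes at once with no per-object bookkeeping
class arena
{
    std::vector<char*> blocks;
    std::size_t used;
    static const std::size_t block_size = 4096;

public:
    arena() : used(block_size) {}

    void* allocate(std::size_t n)
    {
        if (used + n > block_size)
        {
            blocks.push_back(new char[block_size]);
            used = 0;
        }
        void* p = blocks.back() + used;
        used += n;
        return p;
    }

    ~arena()
    {
        for (std::size_t i = 0; i < blocks.size(); ++i)
            delete[] blocks[i];
    }
};

arena a;
int* x = new (a.allocate(sizeof(int))) int(42);   // placement new into the arena
// use x freely; it disappears when 'a' does, with no individual delete needed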

However, in modern C++11 design you would almost never use new directly anyway, except when allocating into a *_ptr, so the wish for a GC is largely moot.

In Summary

C++ has plenty of functional language features, and all of the ones you listed can be done, with the same power and expressiveness as Haskell or Lisp. However, most of these features are not built in by default; this is changing with the introduction of lambdas (which fill in the functional parts of the STL), and with the absorption of boost libraries into the standard.

Not all of these idioms are the most palatable, but none of them are particularly onerous to me, or beyond the help of a few macros to make them easier to swallow. But anyone who says they are not possible has not done their research, and would seem to me to have limited experience with actual C++ programming.

Solution 2:

From your list, C++ can do:

  • function signatures as types
  • type polymorphism (but not first-class like in many functional languages)
  • immutable data structures (but they require more work)

It can do only very limited forms of:

  • higher order functions / closures (basically, without GC most of the more interesting higher-order functional idioms are unusable)
  • ad-hoc data structures (if you mean in the form of light-weight structural types)

You can essentially forget about:

  • algebraic data types & pattern matching
  • partial function applications (requires implicit closures in general)
  • type inference (despite what people call "type inference" in C++ land, it's a far cry from what you get with Hindley/Milner a la ML or Haskell)
  • tail calls (some compilers can optimise some limited cases of tail self-recursion, but there is no guarantee, and the language is actively hostile to the general case (pointers to the stack, destructors, and all that))
  • garbage collection (you can use Boehm's conservative collector, but it's no real substitute and rather unlikely to coexist peacefully with third-party code)

Overall, trying to do anything functional that goes beyond trivialities will be either a major pain in C++ or outright unusable. And even the things that are easy enough often require so much boilerplate and heavy notation that they are not very attractive. (Some C++ aficionados like to claim the opposite, but frankly, most of them seem to have rather limited experience with actual functional programming.)

Solution 3:

(Just to add a little to Alice's answer, which is excellent.)

I'm far from a functional programming expert, but the compile-time template metaprogramming language in C++ is often seen as being "functional", albeit with a very arcane syntax. In this language, "functions" become (often recursive) class template instantiations. Partial specialisation serves the purpose of pattern matching, to terminate recursion and so on. So a compile-time factorial might look something like so:

template <int I>
struct fact
{
    static const int value = I * fact<I-1>::value;
};

template <>
struct fact<1>
{
    static const int value = 1;
};
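The resulting value is an ordinary compile-time constant, usable anywhere one is required (here with C++11's static_assert for illustration):

// The whole computation happens during compilation
static_assert(fact<5>::value == 120, "5! should be 120");
int lookup_table[fact<4>::value];   // an array of 24 elements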

Of course, this is pretty hideous, but many people (particularly the Boost developers) have done incredibly clever and complex things with just these tools.

It's possibly also worth mentioning the C++11 keyword constexpr, which denotes functions which may be evaluated at compile time. In C++11, constexpr functions are restricted to (basically) just a bare return statement; but the ternary operator and recursion are allowed, so the above compile-time factorial can be restated much more succinctly (and understandably) as:

constexpr int fact(int i)
{
    return i == 1 ? 1 : i * fact(i-1);
}

with the added benefit that fact() can now be called at run-time too. Whether this constitutes programming in a functional style is left for the reader to decide :-)
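For instance, the very same definition now serves both worlds:

static_assert(fact(5) == 120, "evaluated by the compiler");   // compile time
int x = 6;
int y = fact(x);                                              // ordinary run-time call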

(C++14 looks likely to remove many of the restrictions from constexpr functions, allowing a very large subset of C++ to be called at compile-time)