What's the difference between: Asynchronous, Non-Blocking, Event-Based architectures?
Solution 1:
Asynchronous: Asynchronous literally means not synchronous. Email is asynchronous: you send a mail and you don't expect to get a response NOW. But asynchronous is not the same as non-blocking. Essentially it describes an architecture where "components" send messages to each other without expecting a response immediately. HTTP requests are synchronous: send a request and get a response.
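To make that concrete, here is a minimal sketch of that message-passing style in Python (the mailbox and worker names are just made up for illustration):

```python
import queue
import threading

# Asynchronous messaging: the sender drops a message into a mailbox and
# moves on; it does not wait for an answer right away.
mailbox = queue.Queue()

def worker():
    while True:
        msg = mailbox.get()        # the receiver handles mail whenever it gets to it
        if msg is None:
            return
        print("handled:", msg)

t = threading.Thread(target=worker)
t.start()

mailbox.put("order #42")           # "send the email" and keep going
print("sender is free to do other things in the meantime")
mailbox.put(None)                  # tell the worker to stop
t.join()
```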
Non-blocking: This term is mostly used with I/O. What it means is that when you make a system call, it returns immediately with whatever result it has, without putting your thread to sleep (with high probability). For example, non-blocking read/write calls return with whatever they can do and expect the caller to execute the call again later. try_lock, for example, is a non-blocking call: it takes the lock only if the lock can be acquired. The usual semantics for system calls is blocking: read will wait until it has some data and put the calling thread to sleep.
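Here is a small Python sketch of both ideas; the host and the GET request are just placeholders:

```python
import socket
import threading

# Non-blocking I/O: recv() returns immediately with whatever it has
# instead of putting the thread to sleep.
sock = socket.create_connection(("example.com", 80))   # placeholder host
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
sock.setblocking(False)

try:
    data = sock.recv(4096)          # may return less than you asked for
    print("got", len(data), "bytes")
except BlockingIOError:
    print("nothing yet - the caller is expected to retry later")

# The try_lock equivalent: take the lock only if it is free right now.
lock = threading.Lock()
if lock.acquire(blocking=False):
    lock.release()
else:
    print("lock was busy, not going to sleep waiting for it")
```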
Event-based: This term comes from libevent. Non-blocking read/write calls in themselves are useless, because they don't tell you when you should call them back (retry). select/epoll/IO completion ports, etc. are different mechanisms for finding out from the OS when these calls are expected to return "interesting" data. libevent and other such libraries provide wrappers over these event-monitoring facilities offered by various OSes and give a consistent API that works across operating systems. Non-blocking I/O goes hand in hand with the event-based approach.
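For instance, Python's selectors module wraps select/epoll/kqueue in the same spirit as libevent. A minimal sketch (again with a placeholder host and request):

```python
import selectors
import socket

sel = selectors.DefaultSelector()    # epoll on Linux, kqueue on BSD/macOS, ...

sock = socket.create_connection(("example.com", 80))   # placeholder host
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
sock.setblocking(False)
sel.register(sock, selectors.EVENT_READ)

done = False
while not done:
    for key, _ in sel.select():      # ask the OS *when* a retry is worthwhile
        chunk = key.fileobj.recv(4096)
        if chunk:
            print("readable:", len(chunk), "bytes")
        else:                        # peer closed the connection
            sel.unregister(key.fileobj)
            key.fileobj.close()
            done = True
```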
I think these terms overlap. For example, the HTTP protocol is synchronous, but an HTTP implementation using non-blocking I/O can be asynchronous. Again, a non-blocking API call like read/write/try_lock is synchronous (it gives a response immediately), but the "data handling" is asynchronous.
Solution 2:
In an asynchronous architecture, code asks some entity to do something and is free to do other things while the action gets done; once the action is complete, the entity will typically signal the code in some fashion. A non-blocking architecture will make note of spontaneously-occurring actions which code might be interested in, and allow code to ask what such actions have occurred, but the code only becomes aware of such actions when it explicitly asks about them. An event-based architecture will affirmatively notify code when events spontaneously occur.
Consider a serial port, from which code will want to receive 1,000 bytes.
In a blocking-read architecture, the code will wait until either 1,000 bytes have arrived or it decides to give up.
In an asynchronous-read architecture, the code will tell the driver it wants 1,000 bytes, and will be notified when 1,000 bytes have arrived.
In a non-blocking architecture, the code may ask at any time how many bytes have arrived, and can read any or all of that data when it sees fit, but the only way it can know when all the data has arrived is to ask; if the code wants to find out within a quarter second that the 1,000th byte has arrived, it must check every quarter-second or so (a rough sketch of such a polling loop follows these scenarios).
In an event-based architecture, the serial port driver will notify the application any time any data arrives. The driver won't know how many bytes the application wants, so the application must be able to deal with notifications for amounts that are smaller or larger than what the application wants.
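To make the non-blocking case concrete, here is a rough Python sketch of that quarter-second polling loop; port.bytes_available() and port.read() are hypothetical stand-ins for whatever the serial driver actually exposes:

```python
import time

def read_exactly(port, wanted=1000, poll_interval=0.25):
    """Non-blocking style: the only way to learn that the 1,000th byte
    has arrived is to keep asking, here every quarter-second."""
    buf = bytearray()
    while len(buf) < wanted:
        available = port.bytes_available()    # hypothetical driver call
        if available:
            buf += port.read(min(available, wanted - len(buf)))   # hypothetical
        else:
            time.sleep(poll_interval)         # nothing yet; check again later
    return bytes(buf)
```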
Solution 3:
So, to answer your first and second questions:
Non-blocking is effectively the same as asynchronous - you make the call, and you'll get a result later, but while that's happening you can do something else. Blocking is the opposite. You wait for the call to return before you continue your journey.
Now, async/non-blocking code sounds absolutely fantastic, and it is. But I have a few words of warning. Async/non-blocking is great when working in constrained environments, such as on a mobile phone with limited CPU and memory. It's also good for front-end development, where your code needs to react to a UI widget in some way.
Async is fundamental to how all operating systems need to work - they get shit done for you in the background and wake your code up when they've done what you asked for, and when that call fails, you're told it didn't work, either by an exception or some kind of return code / error object.
At the point when your code asks for something that will take a while to respond, your OS knows it can get busy doing other stuff. Your code - a process, thread or equivalent - blocks. Your code is totally oblivious to what else is going on in the OS while it waits for that network connection to be made, for that response to an HTTP request, or for that file read/write to finish, and so on. Your code could "simply" be waiting for a mouse click. What was actually going on during that time is that your OS was seamlessly managing, scheduling and reacting to "events" - things the OS is looking out for, such as managing memory, I/O (keyboard, mouse, disk, network), other tasks, failure recovery, etc.
Operating systems are frickin' hard-core. They are really good at hiding all of the complicated async / non-blocking stuff from you, the programmer. And that's how most programmers got to where we are today with software. Now that we're hitting CPU limits, people are saying things can be done in parallel to improve performance. This makes async / non-blocking seem like a very favourable thing to do, and yes, if your software demands it, I can agree.
If you're writing a back-end web server, then proceed with caution. Remember you can scale horizontally much more cheaply. Netflix / Amazon / Google / Facebook are obvious exceptions to this rule though, purely because it works out cheaper for them to use less hardware.
I'll tell you why async / non-blocking code is a nightmare with back-end systems....
1) It becomes a denial of service on productivity... you have to think MUCH more, and you make a lot of mistakes along the way.
2) Stack traces in reactive code become undecipherable - it's hard to know what called what, when, why and how. Good luck with debugging.
3) You have to think more about how things fail, especially when many things come back in a different order to how you sent them. In the old world, you did one thing at a time.
4) It's harder to test.
5) It's harder to maintain.
6) It's painful. Programming should be a joy and fun. Only masochists like pain. People who write concurrent/reactive frameworks are sadists.
And yes, I've written both sync and async. I prefer synchronous, as 99.99% of back-end applications can get by with this paradigm. Front-end apps need reactive code, without question, and that's always been the way.
Yes, code can be asynchronous, non-blocking AND event-based.
The most important thing in programming is to make sure your code works and responds in an acceptable amount of time. Stick to that key principle and you can't go wrong.