What is the difference between all types of Markov Chains?
Solution 1:
A Markov chain is a discrete-valued Markov process. Discrete-valued means that the state space of possible values of the Markov chain is finite or countable. A Markov process is basically a stochastic process in which the past history of the process is irrelevant if you know the current system state. In other words, all information about the past and present that would be useful in saying something about the future is contained in the present state.
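To make the "only the current state matters" idea concrete, here is a minimal sketch of a discrete-valued Markov chain (the two-state weather chain and its transition probabilities are made-up illustration values, not something from the answer above): the distribution of the next state is read off a single row of a transition matrix, indexed by the current state alone.

```python
import random

# Hypothetical two-state chain: 0 = "sunny", 1 = "rainy".
# P[i][j] = probability of moving from state i to state j (illustrative numbers).
P = [
    [0.9, 0.1],   # from sunny: stay sunny 90%, turn rainy 10%
    [0.5, 0.5],   # from rainy: 50/50
]

def next_state(current):
    """Sample the next state using only the current state (the Markov property)."""
    return random.choices([0, 1], weights=P[current])[0]

# Simulate a short trajectory; note that the update never looks at past states.
state = 0
path = [state]
for _ in range(10):
    state = next_state(state)
    path.append(state)
print(path)
```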
A discrete-time Markov chain is one in which the system evolves through discrete time steps. So changes to the system can only happen at one of those discrete time values. An example is a board game like Chutes and Ladders (apparently called "Snakes and Ladders" outside the U.S.) in which pieces move around on the board according to a die roll. If you are looking at the board at the beginning of someone's turn and wondering what the board will look like at the beginning of the next person's turn, it doesn't matter how the pieces arrived at their current positions (the past history of the system). All that matters is where the pieces currently are (the current system state) and the upcoming die roll (the probabilistic aspect). This is discrete because changes to the system state can only happen on someone's turn.
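As a rough sketch of the board-game example (the board size and the chute/ladder squares below are invented for illustration, not the real game), one turn computes the next position from the current position and the die roll only:

```python
import random

BOARD_SIZE = 20                        # hypothetical small board
JUMPS = {3: 11, 6: 14, 10: 2, 17: 5}   # made-up ladders (up) and chutes (down)

def take_turn(position):
    """Advance a piece by one die roll; uses only the current position."""
    roll = random.randint(1, 6)
    new_pos = position + roll
    if new_pos > BOARD_SIZE:           # overshoot: stay put (one common house rule)
        new_pos = position
    return JUMPS.get(new_pos, new_pos)

# Simulate one game from the start square.
pos, turns = 0, 0
while pos < BOARD_SIZE:
    pos = take_turn(pos)
    turns += 1
print(f"Finished in {turns} turns")
```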
A continuous-time Markov chain is one in which changes to the system can happen at any time along a continuous interval. An example is the number of cars that have visited a drive-through at a local fast-food restaurant during the day. A car can arrive at any time $t$ rather than at discrete time intervals. Since arrivals are basically independent, if you know the number of cars that have gone through by 10:00 a.m., what happened before 10:00 a.m. doesn't give you any additional information that would be useful in predicting the number of cars that will have visited the drive-through by, say, noon. (This is under the usual but reasonable assumption that the arrivals to the drive-through follow a Poisson process.)
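Here is a quick sketch of the drive-through example under the Poisson assumption (the arrival rate of 0.5 cars per minute is an arbitrary illustrative value): inter-arrival times are independent exponentials, so the count at noon is just the count at 10:00 a.m. plus fresh, independent arrivals, with nothing before 10:00 a.m. entering the calculation.

```python
import random

RATE = 0.5   # hypothetical arrival rate: cars per minute

def arrivals_between(t_start, t_end):
    """Count Poisson arrivals in [t_start, t_end) by summing exponential gaps."""
    t, count = t_start, 0
    while True:
        t += random.expovariate(RATE)   # exponential inter-arrival time
        if t >= t_end:
            return count
        count += 1

# Count at 10:00 a.m. (600 minutes after midnight) ...
count_10am = arrivals_between(0, 600)
# ... plus independent arrivals between 10:00 a.m. and noon gives the count at noon.
count_noon = count_10am + arrivals_between(600, 720)
print(count_10am, count_noon)
```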
A Markov decision process is just a Markov chain that includes an agent that makes decisions that affect the evolution of the system over time. (So I don't think of it as a separate kind of Markov chain, since the usual Markov chain definition doesn't include such an agent.) A (continuous-time) example would be the potato chip inventory for a local grocery store. If you know the inventory level at 10:00 a.m. and are trying to predict the inventory level at noon, the inventory levels before 10:00 a.m. don't tell you anything beyond what you already know about the level at 10:00 a.m. The decision aspect arises because the manager can decide when to place orders so that bags arrive at certain times. Thus the inventory level at any time $t$ depends not just on (the probabilistic aspect of) customers arriving randomly and taking bags off the shelf but also on the manager's (deterministic) decisions. (An example of a discrete-time Markov decision process is the board game Parcheesi. The board position at the beginning of the next player's turn depends only on the current board position, the current player's dice roll (the Markov chain aspect) and the current player's decision as to which pieces to move based on the dice roll (the decision process aspect).)
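Here is a rough discrete-time sketch of the potato chip inventory idea as a Markov decision process (the shelf capacity, demand distribution, and reorder policy are all invented for illustration): the manager's decision (the order quantity) enters the transition alongside the random demand, but the next inventory level still depends only on the current level, the action taken, and the demand.

```python
import random

CAPACITY = 20          # hypothetical shelf capacity
REORDER_POINT = 5      # hypothetical policy: order up to capacity when stock is low

def demand():
    """Random number of bags customers take this period (made-up distribution)."""
    return random.randint(0, 6)

def policy(inventory):
    """Manager's (deterministic) decision: how many bags to order."""
    return CAPACITY - inventory if inventory <= REORDER_POINT else 0

def step(inventory):
    """One period: apply the decision, then the random demand."""
    order = policy(inventory)                    # decision aspect
    stocked = min(CAPACITY, inventory + order)   # order arrives on the shelf
    sold = min(stocked, demand())                # probabilistic aspect
    return stocked - sold

# Simulate a few periods; each step uses only the current inventory level.
level = 12
history = [level]
for _ in range(10):
    level = step(level)
    history.append(level)
print(history)
```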
Solution 2:
A Markov chain is a special type of stochastic process in which the outcome of each experiment depends only on the outcome of the previous experiment. Markov chains arise in the natural and social sciences, e.g., a random walker, or the number of individuals of each species in an ecosystem in a given year.
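For the random walker mentioned above, here is a minimal sketch of a simple symmetric walk on the integers (the step count and start position are arbitrary): each step is +1 or -1 with equal probability, so the next position depends only on the current one.

```python
import random

def random_walk(steps, start=0):
    """Simple symmetric random walk on the integers."""
    position = start
    path = [position]
    for _ in range(steps):
        position += random.choice([-1, 1])   # next position uses only the current one
        path.append(position)
    return path

print(random_walk(20))
```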