Why does a full deployment require a minimum of 10 servers?

Solution 1:

The reason is that OpenStack is not really meant for a 2-node cluster; it is designed to scale to thousands of nodes. It consists of many separate components, including MySQL, RabbitMQ, several API services, and so on. For your case of "just trying OpenStack out" you can simply use the local provider to install everything on one box, rather than MaaS, which wants to put every component of OpenStack on its own machine (and there are about 9 components, plus 1 for Juju/MaaS).

Have a look at this page to help configure the local provider:

https://jujucharms.com/docs/stable/clouds-LXD

And then follow these instructions:

https://help.ubuntu.com/community/UbuntuCloudInfrastructure

Except that, instead of setting up MaaS and generating/downloading the environments.yaml from it, you just put this in your environments.yaml:

default: local
environments:
  local:
    type: local
    default-series: precise
    data-dir: /home/youruser/.juju/data

All of the services will end up in their own containers on the same box, which has some limitations (for example, nova volumes will not work correctly).
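If it helps, putting that config in place can be sketched in a few shell commands. This assumes the older Juju 1.x layout, where the file lives at ~/.juju/environments.yaml (adjust the data-dir path for your user):

```shell
# Write the local-provider config shown above (Juju 1.x layout).
mkdir -p ~/.juju
cat > ~/.juju/environments.yaml <<'EOF'
default: local
environments:
  local:
    type: local
    default-series: precise
    data-dir: /home/youruser/.juju/data
EOF

# Sanity-check that the provider type was written correctly.
grep -q 'type: local' ~/.juju/environments.yaml && echo "environments.yaml ready"

# From here you would bootstrap and deploy as in the
# UbuntuCloudInfrastructure guide, e.g. "juju bootstrap".
```

After that, every `juju deploy` lands in a container on the local machine instead of a MaaS-provisioned node.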

Solution 2:

Besides that, can you please explain what exactly MaaS is going to install on each server?

You can use community-contributed charms, where each service such as MySQL or RabbitMQ requires a dedicated node, or write your own charms in which you combine them.
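As a rough sketch of the difference, under Juju 1.x the commands look like this (the machine number is illustrative; these require a bootstrapped Juju environment to actually run):

```shell
juju deploy mysql                    # charm gets its own machine by default
juju deploy rabbitmq-server --to 1   # colocate on machine 1 instead
```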

Does it have some RAID-like mechanism inside?

You can use RAID when deploying a node with MaaS.

If one or two servers go down, can it handle everything?

MaaS doesn't provide built-in redundancy or high availability for deployed services. If you are talking about OpenStack, the answer is yes: nova-compute can relaunch instances from failed nodes. Best practice for Swift is to keep 3 copies of your data, so 2 failed nodes are not a problem.
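To make the replica arithmetic concrete, here is a tiny illustrative sketch (the replica count of 3 is Swift's recommended default; the variable names are my own):

```shell
# With 3 replicas kept on distinct nodes, any 2 node failures still
# leave at least one copy of every object.
replicas=3
failed=2
surviving=$(( replicas - failed ))   # worst case: both failed nodes held a replica
echo "copies surviving: $surviving"  # prints "copies surviving: 1"
```

Only if all 3 nodes holding an object's replicas failed at once would data be lost, which is why 3 copies is the usual baseline.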

My final question: it says somewhere in the Ubuntu docs that each server should have at least 16 GB of RAM. Is that a must, or optional?

No, this is not required. You are probably referring to this statement from Mark's blog: "Add another node to the Hadoop cluster, and make sure it has at least 16GB RAM".