Understanding OpenStack physical architecture [closed]

So, please correct me if I am wrong:

I want to build my own cloud computing platform using OpenStack.

I buy a rack server: https://www.serverstack.in/

I install, say, a 1 TB hard disk and 30 GB of RAM; the processor is already built in.

I install Red Hat Linux on this. I install the required software. I use a deployment tool to install OpenStack. I open the port on which it hosts OpenStack to the public. Is this correct, or am I missing something? I understand there is more to billing and stuff in OpenStack.

If there is more traffic and people want more computing power, how can a dual-core CPU balance all of them? Would I require a more expensive server with a more powerful CPU?

In case I need to expand my deployment, do I simply replicate the whole setup and add an extra entry server that load-balances across the OpenStack deployments?

I know I am confusing a lot of things here.


Solution 1:

Before I answer your questions, I am curious what you are planning to achieve, and why you think OpenStack is the right tool for it.

I buy a rack server

You don't run a production OpenStack cloud on a single server. Typically, you need three controllers to ensure high availability, plus as many servers as you need to run your workloads (they are the compute nodes of your cloud).
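To make the controller/compute split more concrete, here is a minimal sketch using the openstacksdk Python library; the cloud name "mycloud" is an assumed entry in your clouds.yaml, not something OpenStack provides by default. Controllers run control-plane services such as nova-scheduler and nova-conductor, while compute nodes run nova-compute, and listing the compute services per host makes that visible.

```python
import openstack

# Assumes a "mycloud" entry in clouds.yaml with admin credentials.
conn = openstack.connect(cloud="mycloud")

# Controllers host services like nova-scheduler and nova-conductor;
# compute nodes host nova-compute. Listing the services shows the split.
for service in conn.compute.services():
    print(f"{service.host:25} {service.binary:20} {service.state}")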

I install Red Hat Linux on this. I install the required software. I use a deployment tool to install OpenStack.

At a very high level, this is correct. However, there is no easy way to install and configure OpenStack, since there are so many options depending on your datacenter environment. It's very different from inserting an Ubuntu DVD, answering a few questions, and ending up with a functioning Linux installation after 20 minutes.

I open the port on which it hosts OpenStack to the public. Is this correct, or am I missing something?

Networking is the most complex part of a simple OpenStack cloud. It's a bit more than just opening a port.
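As a small taste of what "opening a port" actually looks like in OpenStack, here is a hedged sketch with openstacksdk that adds an SSH rule to the default security group. The cloud name and the choice to open port 22 to the whole Internet are assumptions for illustration only; in a real deployment you would also need external networks, routers and floating IPs before anything is reachable from outside.

```python
import openstack

# Assumes a "mycloud" entry in clouds.yaml.
conn = openstack.connect(cloud="mycloud")

# Traffic to instances is blocked until a security group rule allows it.
# This example opens TCP port 22 (SSH) to everyone on the "default" group.
default_group = conn.network.find_security_group("default")

conn.network.create_security_group_rule(
    security_group_id=default_group.id,
    direction="ingress",
    ethertype="IPv4",
    protocol="tcp",
    port_range_min=22,
    port_range_max=22,
    remote_ip_prefix="0.0.0.0/0",
)
```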

I understand there is more to billing and stuff in OpenStack.

OpenStack doesn't have a billing module, although CloudKitty provides some foundation for billing. You will still have to develop your own billing system or integrate a third-party one.
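If you do roll your own billing, the raw data has to come from the OpenStack APIs or from a metering service such as Ceilometer/Gnocchi. Purely to illustrate the kind of plumbing involved, here is a sketch that counts active instances per project with openstacksdk; the cloud name is an assumption, and a point-in-time count is nowhere near real billing, which needs time-based usage records.

```python
import openstack
from collections import Counter

# Assumes a "mycloud" entry in clouds.yaml with admin credentials.
conn = openstack.connect(cloud="mycloud")

# Point-in-time count of ACTIVE instances per project -- only a crude
# starting point; real billing needs time-based usage records.
usage = Counter()
for server in conn.compute.servers(details=True, all_projects=True):
    if server.status == "ACTIVE":
        usage[server.project_id] += 1

for project_id, count in usage.items():
    print(f"{project_id}: {count} active instances")
```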

And yes, there is a lot of "stuff" in OpenStack.

If there is more traffic and people want more computing power, how can a dual-core CPU balance all of them? Would I require a more expensive server with a more powerful CPU?

Your cloud can handle a certain workload. If you see that your workload is too much for your current cloud setup, you add a compute node. Larger installations (starting with perhaps 50 or 100 compute nodes) may also have to add controllers and upgrade their network infrastructure.
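How do you know the workload is "too much"? One crude signal is the aggregate vCPU and RAM usage reported by the hypervisors, sketched below with openstacksdk. The cloud name is assumed, and on newer Nova API microversions some of these fields have moved to the Placement service, so treat this as illustrative rather than definitive.

```python
import openstack

# Assumes a "mycloud" entry in clouds.yaml with admin credentials.
conn = openstack.connect(cloud="mycloud")

# Aggregate capacity vs. usage across all compute nodes. If usage is
# consistently close to capacity, it is time to add a compute node.
total_vcpus = used_vcpus = total_ram = used_ram = 0
for hv in conn.compute.hypervisors(details=True):
    total_vcpus += hv.vcpus or 0
    used_vcpus += hv.vcpus_used or 0
    total_ram += hv.memory_size or 0   # MB
    used_ram += hv.memory_used or 0    # MB

print(f"vCPUs:    {used_vcpus}/{total_vcpus}")
print(f"RAM (MB): {used_ram}/{total_ram}")
```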

In case I need to expand my deployment, do I simply replicate the whole setup and add an extra entry server that load-balances across the OpenStack deployments?

Load balancing has a very specific meaning, and it's not quite the right term here. But in a way, yes: the OpenStack controllers balance the workload (i.e., the virtual machines) across the compute nodes, and adding a compute node is not hard.
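To see that scheduling in action: when you boot an instance, the scheduler on a controller picks a compute node for it. Here is a hedged sketch with openstacksdk; the cloud name and the image, flavor and network names are placeholders for whatever exists in your cloud.

```python
import openstack

# Assumes a "mycloud" entry in clouds.yaml; the image, flavor and
# network names below are placeholders for whatever your cloud has.
conn = openstack.connect(cloud="mycloud")

image = conn.image.find_image("cirros")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# The nova-scheduler service on the controllers chooses a compute node.
server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)

# With admin credentials you can see which compute node was chosen.
print(server.compute_host)
```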

In my opinion, the best way to learn about OpenStack is to install it manually. Be warned: it's a steep learning curve, but it's rewarding. When I did it, I was forced to learn a lot about fundamental subjects like networking and open-source culture.