In this post I will try to shed some light on the capabilities of Vert.x and why you might need this exciting framework.
I have been following this framework for the last few years, and it is great to see it now being adopted by many large companies, a sign that it has matured.
Backed by Red Hat, Vert.x is a very lightweight library for distributed computing.
Today almost all applications are n-tier. This is not an invention anymore but the need of the hour. You cannot scale a monolithic application, and the days are long gone when you could just create a scaffolding app in Ruby on Rails and keep modifying the same app to run your business.
There are many reasons why an application should be divided into different components, a.k.a. services (or microservices). To me, the ability to iterate and roll out new features independently is the most important reason to create a distributed system.
So today the business layer is divided into any number of different components. This is nothing new; many enterprise applications have been built that way.
What is changing now is the tooling, which lets you create this distributed architecture without getting into the nitty-gritty of distributed software design.
Let’s consider a scenario. You have a web application (say, a Ruby on Rails app).
You start getting more traffic than your single server can handle.
So what do you do?
You hide behind a load balancer and spawn a new instance. Incoming requests get redirected to one of the servers using some load-balancing strategy (round-robin?).
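For concreteness, here is a minimal sketch of what a round-robin strategy does (the server names are hypothetical placeholders; a real load balancer would also handle health checks and connection state):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// A minimal round-robin selector: each request goes to the next
// server in rotation, wrapping around at the end of the list.
public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Hands back the next server in rotation, thread-safely.
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }
}
```

Adding capacity here just means adding another entry to the list, which is exactly the "spawn a new instance" step above.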
If traffic spikes again, you repeat the process.
There are problems here:
a. It works only when you have a monolithic application.
b. It does not scale indefinitely.
If you are building a high-traffic website, you can’t just keep adding servers to the load balancer and assume everything will work fine.
So many large applications divide the business layer, the layer that does most of the useful work, into multiple components that expose APIs consumed by the web layer. Each of these business components does a specific task and is scaled independently.
Great…seems like a scalable solution. But now we have created another problem.
How do we manage these different components, or services? How do we discover them in the system? We could go to our good old load balancer, assign each service a DNS name, and let the load balancer do the job.
But this seems unmanageable. How many DNS names would we need, and how many times would we have to reconfigure the load balancer? And why in this world would I want a DNS name assigned to each of these services? I certainly do not want to expose them to the outside world; that job belongs to my web application. So I need something better. CORBA is out of the question, and God save me from Java RMI. Thrift is a possibility, but it does not tick all the boxes outlined below.
Is there any other way?
What if we could handle this at the software layer rather than the hardware layer? What if we could do it dynamically? What if these services could interact without the overhead of HTTP? It turns out all of this is now possible with frameworks like Vert.x.
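To give a taste of what "interaction without HTTP" looks like, here is a minimal sketch using the Vert.x event bus (assuming Vert.x 4.x on the classpath; the address "greetings" and class name are made up for illustration):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

// A sketch of two components talking over the Vert.x event bus
// instead of HTTP. One verticle registers a consumer on an address;
// any other component can then send it a request and get a reply.
public class GreetingVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // This is one "service": it listens on a logical address,
        // not a DNS name or port.
        vertx.eventBus().consumer("greetings",
                msg -> msg.reply("Hello, " + msg.body()));
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(new GreetingVerticle())
             // Once deployed, another component addresses it by name.
             .onSuccess(id -> vertx.eventBus()
                     .request("greetings", "world")
                     .onSuccess(reply -> {
                         System.out.println(reply.body());
                         vertx.close();
                     }));
    }
}
```

In clustered mode the same `request` call works across JVMs and machines, which is what makes the discovery dynamic rather than DNS-and-load-balancer based.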
This is what we want to achieve with Vert.x:
a. A distributed architecture
b. Fault-tolerant applications
c. A highly scalable application layer
d. Dynamic discovery of (micro?) services
e. No HTTP overhead between services
f. Interaction between services without installing separate software such as RabbitMQ
Now that we have laid the foundation and defined our objectives, we can start writing some code for Vert.x.
Happy coding !!