Why We’re Going Serverless and You Should Too

We’re going serverless at Droplr, and you should, too. Droplr is, without a doubt, an innovative company. We’re not only driving innovation in how our customers work; we’re also empowering our own team with bleeding-edge technologies that let us grow faster and ship new features while keeping an agile pace.

The thing that most often blocks companies from scaling up is infrastructure. More customers mean more servers to handle all that new traffic. The more complex your infrastructure becomes, the harder it is to deploy new features at a steady tempo. And you need more developers just to maintain the growing Cloud.

All of this made us think: “If our infrastructure is hindering the rapid growth of our business, why don’t we just kill it and go serverless?”

Why We Love our Infrastructure and Why We Want to Kill It

From the very beginning, Droplr has been using the AWS Cloud for its underlying infrastructure. We store your drops in Amazon S3, which lets us store an almost unlimited number of files while applying top security standards.
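To make that concrete, here is a minimal sketch of how a file could be stored in S3 and shared through a short-lived signed link using boto3. The bucket and key names are purely illustrative, not Droplr’s actual configuration.

    import boto3

    # Illustrative names only; not Droplr's real bucket or object keys.
    s3 = boto3.client("s3")
    BUCKET = "example-drops-bucket"

    def store_drop(local_path: str, drop_key: str) -> str:
        # Upload the file with server-side encryption enabled.
        s3.upload_file(
            local_path,
            BUCKET,
            drop_key,
            ExtraArgs={"ServerSideEncryption": "AES256"},
        )
        # Return a signed URL that expires after five minutes,
        # so the drop can be shared without making the object public.
        return s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": drop_key},
            ExpiresIn=300,
        )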

To make drops quickly accessible in all corners of the world, we use the Amazon CloudFront CDN (Content Delivery Network), which distributes them geographically and guarantees both high availability and high performance.


We also operate a whole fleet of Amazon EC2 instances. This gives us enough computing power to run our APIs, the Dashboard, and all the processing that constantly goes on in the background. All of these microservices run in Docker containers within Amazon ECS clusters and are deployed automatically by our Jenkins Continuous Integration server.

They run redundantly, behind Elastic Load Balancers, so that we can ensure both the high availability and top-notch performance you expect from Droplr.
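As a rough illustration of what one of those containerized services looks like on the AWS side, here is a minimal boto3 sketch that registers an ECS task definition. The service name, image, and port are hypothetical, not our real configuration, which is managed through Jenkins.

    import boto3

    ecs = boto3.client("ecs")

    # Hypothetical microservice; every real service has its own family,
    # image, and resource limits.
    response = ecs.register_task_definition(
        family="api-service",
        containerDefinitions=[
            {
                "name": "api",
                "image": "registry.example.com/api-service:latest",
                "memory": 512,  # hard memory limit in MiB
                "portMappings": [{"containerPort": 8080}],
                "essential": True,
            }
        ],
    )
    print(response["taskDefinition"]["taskDefinitionArn"])

An ECS service then keeps the desired number of these tasks running behind the load balancer. That is exactly the machinery we would rather not manage ourselves.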

It all sounds great, doesn’t it? This model, however, forces us to maintain the underlying virtual machines ourselves: upgrading, scaling, and securing them properly.

The problem is that we only need that much CPU power at certain moments, when our application has to handle traffic spikes during your working hours. The rest of the time, we’re wasting resources.


Entering the Future of Cloud With Serverless

And here comes Serverless, the most disruptive thing to happen in the world of technology lately and one of the hottest topics in Silicon Valley. Accenture dubs it the next generation of Cloud, and having used AWS Lambda for some time now, we can testify to that.

Why the buzz? With Serverless:

  • All you need to worry about is the code in your web application. Once that’s taken care of, you’re ready to scale.
  • The technology lets you scale the system from one user to millions of them from day one.
  • There are no servers to crash and, even better, you only pay for the actual duration of your code execution (see the sketch below).
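To give a feel for what “only the code” means in practice, here is a minimal sketch of an AWS Lambda handler in Python. The function and payload field are illustrative, not one of our production functions.

    import json

    def handler(event, context):
        # AWS provisions and scales the compute that runs this function;
        # you are billed only for the milliseconds it actually executes.
        name = event.get("name", "world")  # illustrative payload field
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

Put behind API Gateway or invoked directly, a function like this goes from a single request to thousands of concurrent ones without a server fleet to patch, scale, or babysit.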

We don’t want to go into the details of our migration to AWS Lambda right now. All we want to say is that it’s been well worth it. In just a few months since the migration, we’ve managed to:

  • Reduce the costs of maintaining our infrastructure 10-fold.
  • Shorten the time needed to deploy new production-grade microservices to one minute.
  • Bring our capacity to scale to a whole new level.

In the next posts, we’re going to tell you how we did all that. We’ll also talk you through our proof-of-concept phase and where we are now in terms of stabilizing the technology. We’ve also been invited to write a guest blog post for Serverless.com, so stay tuned!
