Mitigating Micro DDoS and Sudden Traffic Spikes in Web Applications with Limited Resources

4

There is vast documentation on how to mitigate (proactively reduce the impact of, as it happens) denial-of-service attacks on web applications. People typically point to services like Cloudflare, or to hosting the application on servers like Amazon EC2 and doing load balancing.

This theoretical question asks about alternatives for a problem on a much smaller scale, with no option to use a more complex solution, but with the advantage that the problem is simpler and a one-off event that the sysadmin or developer is watching happen.

Situation A: Micro DDoS (small-scale denial of service attack)

A typical example of such an attack (only the part that matters; the rest has already been filtered out before):

  • There are only a few attacking IPs and they do not change; the text logs show them.

Situation B: High sudden access

For some reason your site becomes popular, say after being mentioned by someone famous or on television, and goes through the following:

  • Hundreds of people access a few pages, most of them only the home page and one other page, both of which just display information and do nothing special on the server side.

Common to both situations

In both cases the site goes down: it is cut off by the shared hosting company, or your company's small server is knocked over by the high CPU usage. The cost of generating pages without caching them exceeds what the server can sustain. There is no network problem, since there is enough bandwidth to meet the demand, but your application can only generate something around 15-25 req/s.

Assume that you do not have root access, cannot install new modules, and cannot change the operating system's firewall. You also cannot migrate the site to another server, for financial or time reasons, since both situations are one-off events lasting at most 1 to 3 hours.

Besides the language used in your application, you also have any tool that an ordinary user would have, such as access to .htaccess on an Apache server or web.config on IIS.

    

asked by anonymous 18.02.2014 / 01:54

1 answer

4

Situation A is the easier of the two to handle.

Create a wrapper to monitor all accesses to your application (a Servlet Filter in Java, an HTTP Module/Filter in ASP.NET), count requests per source IP, define a maximum request rate, and ban the IP for a period if that limit is exceeded.
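A minimal sketch of such a filter in Java, assuming the standard Servlet API is available; the request limit, time window, and ban duration are illustrative values you would tune to your own 15-25 req/s ceiling:

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// Counts requests per source IP in fixed time windows and temporarily bans
// IPs that exceed the limit. All thresholds below are illustrative only.
public class RateLimitFilter implements Filter {
    private static final int  MAX_REQUESTS_PER_WINDOW = 60;           // per IP
    private static final long WINDOW_MILLIS           = 60_000L;      // 1 minute
    private static final long BAN_MILLIS              = 15 * 60_000L; // 15 minutes

    private final Map<String, long[]> counters = new ConcurrentHashMap<>(); // ip -> {windowStart, count}
    private final Map<String, Long>   banned   = new ConcurrentHashMap<>(); // ip -> ban expiry

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String ip = req.getRemoteAddr();
        long now = System.currentTimeMillis();

        Long banExpiry = banned.get(ip);
        if (banExpiry != null && banExpiry > now) {
            ((HttpServletResponse) res).sendError(429); // Too Many Requests
            return;
        }

        long[] entry = counters.compute(ip, (k, v) -> {
            if (v == null || now - v[0] > WINDOW_MILLIS) return new long[]{now, 1};
            v[1]++;
            return v;
        });

        if (entry[1] > MAX_REQUESTS_PER_WINDOW) {
            banned.put(ip, now + BAN_MILLIS);
            ((HttpServletResponse) res).sendError(429);
            return;
        }
        chain.doFilter(req, res);
    }

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}
}
```

Since the attacking IPs do not change, even this simple in-memory counter is enough; no shared or persistent store is needed for a 1-3 hour event.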

Situation B is a little more complicated, but still optimizable. You mentioned that the pages are informational, with little backend processing - basically high read, low write (HR/LW).

Homework

We can assume that basic performance points are covered (static content like CSS, JS and images is set to be cached).
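If that homework is not yet done, one way to cover it without touching the server configuration is a tiny Java filter that stamps a Cache-Control header on the static asset paths; the mapping and max-age below are placeholders:

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// Adds a long-lived Cache-Control header to static assets so browsers and
// proxies stop re-requesting them. Map it to /css/*, /js/*, /img/* in web.xml.
public class StaticCacheFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        ((HttpServletResponse) res).setHeader("Cache-Control", "public, max-age=86400"); // 1 day
        chain.doFilter(req, res);
    }
    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}
}
```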

Ahead of time, you can run stress tests to pinpoint the bottlenecks (database, rendering, shared resources). That alone will eliminate 90% of the likely saturation points.
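If no stress-testing tool is at hand, a crude load generator is enough to find the ceiling; the sketch below fires concurrent GETs at one page and reports throughput (URL, thread count, and request count are placeholders):

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Fires N concurrent GET requests at one page and reports requests per second.
// Target URL, thread count and request count are placeholders for a real test.
public class MiniLoadTest {
    public static void main(String[] args) throws Exception {
        String target = "http://localhost:8080/";   // page under test (placeholder)
        int threads = 20, totalRequests = 500;

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicInteger ok = new AtomicInteger();
        long start = System.currentTimeMillis();

        for (int i = 0; i < totalRequests; i++) {
            pool.submit(() -> {
                try {
                    HttpURLConnection con = (HttpURLConnection) new URL(target).openConnection();
                    if (con.getResponseCode() == 200) ok.incrementAndGet();
                    con.disconnect();
                } catch (Exception ignored) { }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);

        double seconds = (System.currentTimeMillis() - start) / 1000.0;
        System.out.printf("%d/%d OK in %.1fs (~%.1f req/s)%n",
                ok.get(), totalRequests, seconds, totalRequests / seconds);
    }
}
```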

Before the crisis

Depending on how your application is implemented, you can define layers of temporary storage (caching). From the most basic to the most impactful (a sketch of the last layer follows the list):

  • Shared objects kept in memory
  • Resources stored locally rather than fetched remotely
  • Rendered HTML stored for as long as the underlying objects have not been updated
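A minimal sketch of that last layer in Java: rendered HTML is kept in a map and only re-rendered when the data behind the page has changed. The `lastModified` value is assumed to come from some helper your application already has (hypothetical here):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Keeps rendered HTML in memory and only re-renders a page when the
// underlying data has changed since the cached copy was produced.
public class HtmlCache {
    private static class Entry {
        final String html;
        final long renderedAt;
        Entry(String html, long renderedAt) { this.html = html; this.renderedAt = renderedAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    /**
     * @param key          page identifier (e.g. the request path)
     * @param lastModified when the data behind the page last changed
     * @param renderer     produces the HTML when the cached copy is stale
     */
    public String get(String key, long lastModified, Supplier<String> renderer) {
        Entry e = cache.get(key);
        if (e == null || e.renderedAt < lastModified) {
            e = new Entry(renderer.get(), System.currentTimeMillis());
            cache.put(key, e);
        }
        return e.html;
    }
}
```

Usage would look like `htmlCache.get("/home", articles.lastUpdate(), () -> renderHomePage())`, where `articles.lastUpdate()` and `renderHomePage()` stand in for whatever your application already provides.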

During the crisis

You have cached everything you could and synchronized as little as possible, yet your server still hits 100% CPU. One possible solution:

  • Take a snapshot (rendered HTML) of the offending pages. Leave a Filter/Module ready to intercept all calls to those pages and serve the captured snapshots as the response (see the sketch after this item). If any content needs to stay dynamic (login information, for example), have it run in an IFRAME or be loaded via AJAX.
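A sketch of such a snapshot filter in Java, assuming the snapshots were written to disk beforehand; the URLs and file paths are placeholders:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// During the spike, intercepts the hottest pages and answers them with a
// pre-rendered HTML snapshot loaded from disk, skipping all server-side work.
public class SnapshotFilter implements Filter {
    private static final Map<String, String> SNAPSHOTS = new ConcurrentHashMap<>();

    @Override
    public void init(FilterConfig cfg) throws ServletException {
        try {
            // Map offending URLs to snapshot files captured beforehand (placeholders).
            SNAPSHOTS.put("/", new String(Files.readAllBytes(Paths.get("/tmp/snapshots/home.html")), StandardCharsets.UTF_8));
            SNAPSHOTS.put("/news", new String(Files.readAllBytes(Paths.get("/tmp/snapshots/news.html")), StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new ServletException("Could not load snapshots", e);
        }
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        String path = ((HttpServletRequest) req).getRequestURI();
        String snapshot = SNAPSHOTS.get(path);
        if (snapshot != null) {
            HttpServletResponse resp = (HttpServletResponse) res;
            resp.setContentType("text/html; charset=UTF-8");
            resp.getWriter().write(snapshot);   // user-specific bits load later via AJAX/IFRAME
            return;
        }
        chain.doFilter(req, res);               // everything else runs normally
    }

    @Override public void destroy() {}
}
```

Because the filter answers before the rest of the chain runs, the snapshot pages cost almost nothing on the server, which is exactly what keeps the CPU under control during the spike.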

Many services use this mechanism - for example, NewEgg Flash ( link ) or DealExtreme ( link ). You may notice that the page initially loads without the user's credentials, even though the user is signed in.

    
18.02.2014 / 05:40