Category Archives: Imperva


DevOps in the Cloud: How Data Masking Helps Speed Development, Securely

Category : Imperva

Many articles have discussed the benefits of DevOps in the cloud. For example, the centralization of cloud computing provides DevOps automation with a standard platform for testing and development; the tight integration between DevOps tools and cloud platforms lowers the cost associated with on-prem DevOps automation technology; and cloud-based DevOps reduces the accounting burden, since the cloud tracks resource usage by data, application, and so on. With all these benefits, cloud-based DevOps gives organizations more flexibility and scalability, allowing software developers to produce better applications and bring them to market faster.

However, moving the entire application testing, development, and production process to the cloud may cause security issues. In this post, we discuss the security issues associated with a fast-moving, cloud-based DevOps environment and ways to mitigate those issues without impacting speed to market.

Protect Data from Breaches

If the recent Uber data breach taught us anything, it’s that protection around production data disappears as soon as you make a copy of that data. In the case of the Uber breach, the hackers worked their way in via the software engineering side of the house. Software engineers then became compromised users as their login credentials were stolen, giving hackers access to an archive of sensitive rider and driver data (a copy of production data).

Get the Realistic Data You Need, When You Need It

As a developer, you may get frustrated with the security restrictions placed on using production data for testing and development. But consider that a data breach could cost both you and the security team your jobs when the finger of guilt points your way. Still, while it is important to protect sensitive data from breaches, it is also critical for companies to deliver software to market faster while maintaining high quality, especially when competitors are adopting the cloud to increase the pace of software development. As a developer, your mission is to deliver quality code on time, and to do so you need realistic data to put your code through its paces. Yet it can be time-consuming to get approvals from the security team and wait for DBAs to extract data from production databases.

Data Masking Removes Sensitive Information

The good news is there’s technology available to balance the needs of both sides. Data masking has proven to be the best practice for removing sensitive information while maintaining data utility. Data masking (or pseudonymization) has been cited by Gartner (account required) and other industry analysts as a required element of data protection. This technology replaces sensitive data (access to which should be limited to a need-to-know basis) with fictional but realistic values to support DevOps in the cloud without putting sensitive data at risk. The masked data maintains referential integrity and is statistically and operationally accurate. For example, let’s say a data record shows that Terry Thompson is 52 years old and that his social security number (SSN) is 123-00-4567. After the data is masked, that record may become John Smith with SSN 321-98-7654. The masked data retains the exact format of the original (real) data, maintaining the data richness that allows developers to do their jobs.

Data masking replaces original data with fictitious, realistic data
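
To make the substitution concrete, here is a minimal Python sketch of deterministic, format-preserving masking. It illustrates the concept only and is not Imperva's algorithm; the secret key, name lists and helper names are hypothetical.

# Minimal illustration of deterministic, format-preserving masking.
# Concept sketch only -- not Imperva's implementation.
import hashlib
import hmac

SECRET = b"masking-key"                      # hypothetical per-environment secret
FIRST_NAMES = ["John", "Mary", "Alex", "Dana"]
LAST_NAMES = ["Smith", "Jones", "Lee", "Brown"]

def _digest(value: str) -> int:
    mac = hmac.new(SECRET, value.encode(), hashlib.sha256).digest()
    return int.from_bytes(mac[:8], "big")

def mask_name(name: str) -> str:
    d = _digest(name)
    return f"{FIRST_NAMES[d % len(FIRST_NAMES)]} {LAST_NAMES[(d // 7) % len(LAST_NAMES)]}"

def mask_ssn(ssn: str) -> str:
    # Replace the digits but keep the 3-2-4 layout so applications still parse it.
    digits = str(_digest(ssn)).zfill(9)[:9]
    return f"{digits[0:3]}-{digits[3:5]}-{digits[5:9]}"

print(mask_name("Terry Thompson"), mask_ssn("123-00-4567"))

Because the transform is keyed and deterministic, the same original value always masks to the same fictional value, which is what preserves referential integrity across tables and test runs.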

Security and Productivity Go Hand in Hand

With data masking, companies don’t have to choose between security and productivity, one of the most common dilemmas. Data masking ensures the data being used is anonymized and always protected—regardless of how it is being used, by whom, and how often it is copied. It’s the key that lets developers embrace all the benefits of the cloud. Masking sensitive information in the cloud gives you peace of mind when producing better applications and lets you bring those apps to market faster without getting a red light from the security team. Better still, the finger of guilt can’t point in your direction in the event a hacker breaks in, because you never had the real data to begin with.

Watch our whiteboard video session to learn more about data masking and how it works

Source: https://www.imperva.com/blog/2017/12/devops-in-the-cloud-how-data-masking-helps-speed-development-securely/?utm_source=linkedIn&utm_medium=organic-social&utm_content=devops-data-masking&utm_campaign=2017-Q4-linkedin-awareness

Author: Sara Pan



Q3 2017 Global DDoS Threat Landscape Report

Category : Imperva

Today we are releasing our latest Global DDoS Threat Landscape Report, a statistical analysis of 5,765 network and application layer DDoS attacks mitigated by Imperva Incapsula services during Q3 2017.

Before diving into the report’s highlights, it should be mentioned that this quarter was marked by an adjustment to our methodology, which reflects the changes we’ve observed in the threat landscape for the past few quarters.

Specifically, we readjusted the definition of a DDoS attack to compensate for the growing prevalence of short-lived repeat assaults. We also significantly expanded the scope of our analysis, more than doubling the data points in our report.

Read the full report >>

For a more detailed overview of our new methodology and sampling methods, click here.

Report Highlights

Three Out of Four Bitcoin Sites Attacked

This quarter, amidst a spike in the price of bitcoin, 73.9 percent of all bitcoin exchanges and related sites on our service were attacked. As a result, the relatively small and young industry made the top-10 attacked industry list.

Figure: DDoS attacks by industry, Q3 2017

The campaign against bitcoin exchanges exemplifies the way DDoS offenders are drawn to successful online industries, especially new and under-protected ones.

From extortionists who launch ransom DDoS attacks to more professional offenders working as hired guns for competitors, DDoS attackers are always “following the money”. In this specific case, DDoS attacks could also be an attempt to manipulate bitcoin prices, a tactic attackers have been known to use in the past.

Organizations in the cryptocurrency industry and other high-growth digital fields should take notice.

High Packet Rate Attacks Grow More Common

Q3 2017 saw the number of high packet rate, network layer attacks—assaults in which the packet forwarding rate escalated above 50 Mpps—continue to grow, reaching five percent of all network layer assaults.

What’s more, 144 attacks went above 100 Mpps, while the highest-rate assault of the quarter peaked at 238 Mpps, up from 190 Mpps in Q2 2017. In comparison, in Q1 2017 we mitigated only six DDoS attacks that escalated above the 100 Mpps mark.

Figure: Network layer attack packet rates

This paints a worrisome picture, as not all mitigation solutions are able to handle attacks at this rate.

Faced with the steep increase in the number of high packet rate attacks, mitigation vendors should upgrade their scrubbing equipment, as we recently did ourselves when we introduced the Behemoth v2 scrubbers, which have a 650 Mpps processing capacity.

Buyers are not exempt from responsibility either; it’s up to them to ask about the processing capability of the mitigation solution they’re about to purchase.

Network Layer Attacks Are Extremely Persistent

In Q3 2017, half of network layer targets were attacked at least twice, while nearly a third were hit more than ten times.

Figure: Network layer attack persistence

Repeat application layer attacks dropped from 75.8 percent to 46.7 percent quarter over quarter, largely because of how we now measure individual DDoS events. Even with these changes, however, Q3 2017 saw nearly 16 percent of targets exposed to six or more attacks.

These statistics are a reminder of something that most businesses targeted by DDoS attacks already know: if you’re hit once, chances are that you’ll be hit again. Considering how relatively easy and cheap it is to mount multiple attacks, most offenders won’t hesitate to stage repeat assaults, even after several attempts have been blocked by a mitigation provider.

This also highlights the need for a hands-off DDoS protection solution, the kind that doesn’t require an IT department to go into ‘alert mode’ every time an attack hits. Otherwise, the implication of multiple repeat attacks can be dire—not necessarily due to a domain or service going down, but because of work routine disruptions and important inner processes being brought to a halt.

Botnet Activity Out of India and Turkey Continues to Grow

Following an increase in Q2 2017, Indian and Turkish botnet traffic continued to rise this quarter. In India, traffic more than doubled from 1.8 percent in the prior quarter to 4.0 percent in Q3 2017. The uplift was even more dramatic for Turkey, where botnet activity more than tripled from 2.1 percent to 7.2 percent quarter over quarter.

In total, India and Turkey represented more than ten percent of total botnet activity this quarter and were home to more than five percent of all attacking devices. Also appearing on the short list of attacking countries are the more common culprits, including Russia, the US and Vietnam.

Figure: Botnet activity by country of origin

So far, we are unable to attribute the increased activity out of India and Turkey to any particular botnet, type of malware or attacking device. We’ll continue to monitor the situation and, if the trend continues, will allocate further resources to try and identify the reasons behind the uptick in botnet activity in the two countries.

Short-lived Attacks Drive Methodology Changes

Security researchers have different methods for classifying when a DDoS attack has taken place.  Some count every attack individually, while others view a string of assaults as a single event. The differences largely come down to defining when an attack has started and when it has stopped.

In previous reports, using a methodology that dates back to 2015, we considered a quiet (attack-free) period of at least ten minutes to mark the end of an assault.

Over the last year, however, we witnessed a growing number of cases (e.g., pulse wave assaults) in which perpetrators launched attack bursts in rapid succession, thereby rendering the old way of measuring individual assaults less reliable. At times, this caused several DDoS bursts occurring only slightly more than 10 minutes apart to be considered as separate attack events.

In response, we updated our methodology’s quiet period from ten minutes to 60 minutes. This allows us to better aggregate successive attacks against the same target, significantly minimizing statistical bias towards repeated short-lived attacks.

Figure: Updated attack measurement methodology
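
To illustrate the new rule, here is a rough Python sketch of the aggregation logic: bursts against the same target separated by less than the quiet period count as a single attack event. It is an illustration only, not our production pipeline.

# Group attack bursts into events using a 60-minute quiet period
# (previously 10 minutes). Illustrative sketch only.
from datetime import datetime, timedelta

QUIET_PERIOD = timedelta(minutes=60)

def count_attack_events(burst_times):
    """burst_times: sorted datetimes of bursts against a single target."""
    events = 0
    last_burst = None
    for t in burst_times:
        if last_burst is None or t - last_burst >= QUIET_PERIOD:
            events += 1                 # quiet period elapsed -> new attack event
        last_burst = t
    return events

# Four pulse-wave bursts 15 minutes apart: four "attacks" under the old rule,
# one attack event under the new 60-minute quiet period.
bursts = [datetime(2017, 10, 1, 12, 0) + timedelta(minutes=15 * i) for i in range(4)]
print(count_attack_events(bursts))      # -> 1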

Changing our methodology was also an opportunity to revamp our sampling methods and expand the scope and quality of the information we provide.

The result, following a substantial investment in manpower and resources, is a report that is not only easier to read and navigate but also more than twice as detailed as its predecessors, now showing 19 data points compared to nine before.

These new data points include:

  • Analysis of network layer attack persistence
  • Statistics about network layer attack rates and sizes
  • Information about the most attacked industries
  • And more

Read the full report >>

Source: https://www.imperva.com/blog/2017/12/q3-2017-global-ddos-threat-landscape-report/?utm_source=linkedIn&utm_medium=organic-social&utm_content=q317-ddos-report&utm_campaign=2017-Q4-linkedin-awareness

Author: Igal Zeifman



Build-Your-Own Data Masking. Yes or No?

Category : Imperva

A lot of organizations are taking great strides to protect their sensitive data with a multi-layered strategy—one that includes data masking. We’ve even seen many tackling this critical data security component in DIY fashion, often tasking one resource with developing and implementing scripts to ensure the box gets “checked” on this key data protection layer.

You might be thinking: if it’s that simple, then why invest in a purpose-built data masking solution? In fact, a lot of customers have tried their hand at DIY data masking, usually for a one-off project…which invariably explodes into a full-time job (or jobs). That’s where we begin to see the crux of the build-versus-buy issue. What starts out seemingly simple can quickly become complex.

The DIY Approach

When evaluating the DIY approach, certain risks, challenges and opportunities must be factored into the build-your-own data masking cost/benefit analysis, including:

  • The typical nature of DIY data masking — i.e., largely simplistic and insecure masking techniques.
  • The lack of data consistency, and the growing size and complexity of data being maintained relative to the DIY capabilities for masking.
  • The fact that DIY data masking is typically poorly documented and difficult to maintain in the face of growing data sets, evolving requirements and changing personnel.
  • The need to manually discover sensitive data, which limits the effectiveness of DIY scripts from the outset.
  • The opportunity costs associated with tying up resources on a critical, but non-core business function like data masking when less expensive, more effective technology options are available.

The Case for Purpose-Built Data Masking Software

The case for commercial-off-the-shelf (COTS) software is straightforward. As with any purpose-built software, COTS data masking offers numerous advantages over the homegrown approach, including an expert, repeatable, consistent data masking application with high-quality data transformation algorithms that are non-reversible and secure.

Consistent Masking. Everywhere.

A few homegrown scripts may offer similar levels of security, but most use simplistic masking techniques that are much less secure. Homegrown scripts sometimes provide the ability to maintain referential integrity within the target database, but very few allow consistent masking across different databases, and over time, so that the same data is masked consistently everywhere. This is industry standard functionality that comes out-of-the-box in commercial offerings.
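
As a hypothetical illustration of that consistency requirement, the sketch below applies the same keyed, deterministic transform in two different data stores, so masked identifiers still join across them. It demonstrates the concept, not any particular product's implementation.

# Consistent masking across two data stores: the same keyed transform is
# applied in both, so masked values still match. Hypothetical sketch only.
import hashlib
import hmac

KEY = b"shared-masking-key"              # placeholder secret shared by both runs

def mask(value: str) -> str:
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

crm_customers = [{"cust_id": "C1001", "name": "Terry Thompson"}]
billing_rows = [{"cust_id": "C1001", "amount": 42.50}]

masked_crm = [{**row, "cust_id": mask(row["cust_id"])} for row in crm_customers]
masked_billing = [{**row, "cust_id": mask(row["cust_id"])} for row in billing_rows]

# Referential integrity survives masking across both stores.
assert masked_crm[0]["cust_id"] == masked_billing[0]["cust_id"]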

Easy Configuration and Maintenance

Data masking should be easy to configure and maintain, with good documentation and support. Homegrown scripts, on the other hand, are usually poorly documented and difficult to maintain. Typically only one person within the organization understands how the masking works, so scripts often aren’t maintained and new sensitive data doesn’t get masked. The effort involved in writing new scripts also means that many databases never get masked at all, because it’s too difficult to set up masking for them. COTS data masking makes it easy to configure masking and to see what data you’re masking, as well as how you’re masking it.

Performance Optimization

Best practice is built into commercial data masking software, including the implementation of automated sensitive data discovery as a lead-in to masking, and a consistent interface across all supported database platforms. The masking engine is optimized for performance and every data masking run benefits from those optimizations. Homegrown scripts must be optimized manually, with every column or table optimized individually. And more often than not, this does not occur on a timely basis as the one person who knows how it works could be off on another project, sick, or on vacation. COTS applications don’t take a week off to hit the beach in Mexico!

DIY Can Cost

In summary, DIY scripts may be better than doing nothing, but as outlined above, your team’s resources could be put to better use. You can better protect your critical data and potentially save valuable budget by using purpose-built data masking software.

Source: https://www.imperva.com/blog/2017/11/build-your-own-data-masking-yes-or-no/?utm_source=linkedIn&utm_medium=organic-social&utm_content=diy-data-masking&utm_campaign=2017-Q4-linkedin-awareness

Author: Steve Pomroy



Sith Spam Bots Take a Page from a Star Wars Novel(s)

Category : Imperva

Online excitement is at an all-time high, as the December release of Star Wars: The Last Jedi, the latest installment in the Star Wars saga, draws closer. With the internet abuzz about recent trailers, looming figures appearing in posters and spoilers in beer glasses, it looks like no one is immune to Star Wars fever, not even cybercriminals.

This is what we recently witnessed firsthand while mitigating a wave of send-to-a-friend spam attacks with an interesting thematic twist.

Form-Filler Bots and Send-to-a-Friend Spam

Send-to-a-friend (a.k.a. share-with-a-friend) is a social sharing module commonly found on commercial websites. As the name suggests, these modules help users email the details of a product or service to their friends. To make sharing easy, the email is sent directly from the website itself, typically by filling in a short online form.

What many fail to consider, however, is that these modules tend to draw the interest of spammers.

For them, hijacking such modules has several benefits, including:

  • IP obfuscation – Email services rely on IP reputation for spam filtering. Sending messages from a server that has no prior record of malicious activity is a way of bypassing such filters. It’s also a way of avoiding the risk of being tracked by law enforcement.
  • Spam link obfuscation – Send-to-a-friend emails are issued by legitimate senders and feature design templates and details of real offerings. Spam links in such emails are more likely to be clicked by their recipients.
  • No operational costs – Sending emails en masse is expensive. Abusing third-party email services is a good way to avoid these high costs.

Motivated by these benefits, spammers actively search for sites whose send-to-a-friend functions can be exploited for their own purposes.

Fig 1. Example send-to-a-friend form as filled in by a spam bot

Once a target is found, spammers will unleash a host of form-filler bots on the send-to-a-friend form, using them to send thousands of emails with embedded malicious links.

It was while monitoring the activity of such bots that we saw spammers turning to the dark side of the Force.

A Page from a Star Wars Novel

We first felt a “disturbance” in mid-October, when several of our customers were bombarded with suspicious WinHTTP POST requests from as-yet-unidentified bots.

The high rate of these requests, in addition to the considerable number of targets, caught our attention. Moreover, the similarities between the attacks showed them all to be part of a larger coordinated assault.

During the first week of the assault, from October 10th to 16th, 33 unrelated domains on our network were hit by over 275,000 attack requests. A week later, the number of targets had increased to 60, and the volume of the attack had almost tripled—reaching a total of over a million requests.

Fig 2. Number of attack requests blocked per day

The assault was carried out via a botnet, which enabled the attackers to spread the request output. This was likely an attempt to avoid rate-limiting mechanisms, which are commonly used to protect online forms. In total, in the first two weeks of the assault, we were able to identify 6,915 devices participating in the attacks, 98.9% of which were located in China.

The thing that truly piqued our interest, however, was the content that the bots were stuffing into the comment section of the send-to-a-friend emails.

There, alongside a link or two to a sleazy-looking website offering a selection of gambling apps, we found snippets of text that didn’t look like the randomized content we’re used to seeing in spam emails.

One swift Google search later, we discovered that these snippets were in fact quotes taken from Star Wars novels, chopped up into incomprehensible chunks and used to peddle mobile slots.

Here, for example, is a POST request crafted to inject spam links with an excerpt from Path of Destruction, a Star Wars Legends novel.

… 
propertyId=XXXXXX&unitId= XXXXXX &systemId=vrbo&toEmail=XXXXXX@XXXXXX&share
Comments= … [spam link] there's no reason for us to move so soon," Des replied, 
struggling to remain calm. "If they start at dusk, it's going to take at least three hours 
&referrer=[website targeted by form-filler bots]
…
Fig 3. Example of a POST request used in the spam campaign

Fig 4. Source material for the spam comment

At present, we can only speculate about the reasons for the unwarranted quotes from Star Wars literature.

Most likely, however, the spammers were trying to add some uniqueness to their emails, and further hinder detection by filtering mechanisms scanning for content patterns. In the process of doing so, the culprits probably also decided to pay homage to one of their passions.

One way or another, much like the rest of us, these scruffy-looking nerf herders have Star Wars on their mind.

Pass on What You Have Learned

As these very words are typed, the aforementioned attacks continue to target our customers. A sampling of some of the more recent attack requests shows that attackers have now moved away from using content from Star Wars novels and expanded their range to include quotes from “Jane Eyre” and the works of Edgar Allan Poe.

While our customers have nothing to worry about, the scope of the attack makes us believe that many other sites are also being targeted outside of our deflector shields.

For these unprotected targets, the repercussions of an attack could be dire, as they are at risk of being blacklisted by major email service providers. This can have a severe impact on the day-to-day of an online business—not only hampering email marketing campaigns but also preventing any sort of reliable email communication with customers.

For an online service that often relies on emails for billing, support and other mission-crucial activities, this immediately translates into a lot of overhead. Not to mention, plenty of headaches for everyone involved.

To stay ahead of the attackers, developers and operators need to recognize the security risks that come with having an email sharing option on their service and take steps to prevent it from being abused by form-filler bots.

At a minimum, they should include a rate-limiting mechanism that will prevent an IP address from issuing unreasonable numbers of requests over a specific period of time. Other DIY solutions are to have all users fill in CAPTCHAs and to enforce registration as a prerequisite to sending out an email message.
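
As a rough sketch of the first measure, here is a minimal per-IP sliding-window rate limiter in Python. The window, threshold and function names are hypothetical; a production version would persist state and combine this with CAPTCHAs, registration checks or reputation data.

# Per-IP sliding-window rate limiter for a share-by-email endpoint.
# Illustrative sketch only; thresholds are placeholders.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600
MAX_REQUESTS = 5                         # e.g., at most 5 share emails per IP per hour

_history = defaultdict(deque)

def allow_share_request(ip, now=None):
    now = time.time() if now is None else now
    recent = _history[ip]
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()                 # drop requests that fell out of the window
    if len(recent) >= MAX_REQUESTS:
        return False                     # over the limit: reject or challenge with a CAPTCHA
    recent.append(now)
    return True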

That said, our experience shows that persistent attackers will eventually circumvent any and all of these halfway measures. Your best bet, as you would expect, is to go with a purpose-made bot filtering solution, or as Master Yoda would say: “Do or do not, there is no try”.

Source: https://www.incapsula.com/blog/form-filler-bots-do-star-wars.html?utm_source=linkedin&utm_medium=organic_social&utm_campaign=2017_q4_starwarsbots

Authors: Igal Zeifman, Avishay Zawoznik



Can a License Solve Your Cloud Migration Problem?

Category : Imperva

No, but it can certainly reduce friction.

Cloud adoption is no longer an if, but a when. Even Gartner says there’s no such thing as a ‘no cloud policy.’

The winds of technology change are blowing, but no enterprise is talking about 100% cloud adoption in the near term. Hybrid IT environments are and will be the norm, as organizations continue to rationalize which systems, databases and apps will migrate to the cloud and when, and which will stay on-prem. Cloud versus on-prem decisions for how to deploy security services only compound this complexity.

Managing the technical aspects of that migration is challenge enough, without running into administrative roadblocks like licensing issues. In this post, we’ll explore how Imperva takes licensing out of the cloud migration equation so our customers are protected, no matter where the winds of technology change take their apps.

Managing a Moving Target

Most enterprise IT organizations are configuring and maintaining systems to secure hundreds of applications and databases. In many cases, digital transformation is growing this count exponentially. That’s a moving target all by itself.

In a hybrid environment, nothing is static. Network topology changes and process fluctuations are the norm, as organizations plan and execute their migrations in real-time. However, legacy vendor licensing models are frequently tied to:

  • Whether the protected asset is based in a data center or in the cloud.
  • Whether the security service itself is deployed on-prem or runs as-a-service.

That’s a vendor-centric rather than a customer-centric approach that can add budgetary and procurement complexity to what may be an already complex migration problem.

Furthermore, as renewal dates loom, determining which systems should be licensed for a data center implementation – and how many are moving off that license and onto a cloud license – can sabotage migration plans.

Verisk Analytics is a leading data analytics provider encompassing more than 40 companies. CISO Ted Cooney explains how they faced this issue, and took deployment into their own hands:

“The size and nature of our company makes it hard to project exactly what solutions we are going to need as we migrate to the cloud. We don’t want to be stuck with the wrong mix of cloud or on-premises solutions as this could hold up our migration given that our procurement process to add new licenses can take a month.”

To stay agile in its migration to the cloud, Verisk Analytics opted for Imperva’s innovative licensing model, FlexProtect.

“Imperva FlexProtect gives me the agility I need to move solutions around quickly, and it’s one less worry as I migrate to the cloud,” continued Cooney.

FlexProtect for Applications is a flexible licensing approach, providing the choice to deploy Imperva application security products when and where they are needed — on-prem, in the cloud, or both.

FlexProtect for Data offers licensing for a set number of database servers, regardless of network topology or process fluctuations. Imperva software can be deployed and re-deployed to meet evolving network and capacity requirements for those licensed database servers. Think of it as ‘mix and match.’

Innovation Doesn’t Stop with Technology

Innovation is often about simplification, and not just in the tech realm. In creating FlexProtect as a flexible, single-license approach to securing apps, databases and files, we’ve ensured that the right mix of Imperva data security products can be deployed whenever and however works best for each unique enterprise architecture.

For example, one Imperva customer, a large mobile operator, used FlexProtect to augment its existing portfolio of Imperva solutions. SecureSphere WAF had already been deployed in the data center. FlexProtect allows them to implement Incapsula DDoS, CDN and load balancing on top of the original SecureSphere deployment.

Another customer, a large healthcare company, used FlexProtect to mix and match solutions based on what specific business units needed. Imperva Incapsula was deployed for its web properties and SecureSphere WAF for its on-premises applications.

And finally, a sports apparel company chose FlexProtect so that its IT team could use either Incapsula cloud-based services or SecureSphere on-premises solutions. They know they need to move to the cloud; they just don’t know when, or how long the migration will take. Imperva’s FlexProtect simplifies the process. Now the IT team can make decisions about their cloud migration based on what makes sense for their organization, rather than pacing their progress to licensing renewal dates.

Cloud Migration Next Steps

Data and apps are at the top of the cloud migration list for most organizations, but in the meantime, hybrid IT is the rule rather than the exception. And, by its very definition, hybrid IT is an interim state. Every organization is at a different place on the migration path from data center to cloud.

Imperva’s FlexProtect aims to take licensing out of the cloud migration equation with a single license at one price whether Imperva security solutions are deployed in a rack or in the cloud.

Source: https://www.imperva.com/blog/2017/11/license-solve-cloud-migration-problem/?utm_source=linkedin&utm_medium=organic-social&utm_content=innovation-flexprotect&utm_campaign=2017-Q4-linkedin-awareness

Author: Morgan Gerhart



Cloud WAF Versus On-Premises WAF

Category : Imperva

“The Times They Are a-Changin’.” Bob Dylan knew it in 1964, and what was true then is even more true today. There is ongoing debate about web application firewalls (WAFs), specifically whether on-premises solutions or those in the ever-changing cloud are better for the enterprise.

When searching for a WAF for your business, you will find dozens of products to select from. As you evaluate your options, one of the key decisions you will need to make is whether to select a cloud or on-premises solution. However, don’t consider this an “either-or” decision. It’s not necessarily a matter of choosing only one—cloud or on-prem. In many cases, it makes sense to utilize both in a hybrid deployment.

In this post we’ll share the benefits of a hybrid WAF deployment and review the advantages and disadvantages of both cloud and on-prem WAFs.

Hybrid WAF Deployment

Typically, as part of their transition to the cloud we see customers move workloads to the cloud over time, or move only specific workloads to the cloud and leave others on-prem. In this case you need adequate app security in both locations and a hybrid WAF deployment is best.

To eliminate threats as close to their origin as possible, it makes sense to deploy a cloud-based WAF at the edges of your network, and regionally, to clean and scrub connections before they enter your network. This ensures bad actors and cyber threats are stopped before they breach your outer perimeter. The added benefit is a reduction in threat-related traffic that might otherwise degrade your network, which in turn drives down network-related expenses. Your on-premises WAF can then focus on more complex, business-related and internal threats.

When moving to the cloud, flexible licensing is important. Trying to estimate exactly the right amount of app protection you’ll need in the cloud versus on-prem at any one time can be challenging – and potentially expensive as you might over-invest out of caution. Look for a single license that offers you the ability to deploy products how, when and where you need them. Imperva FlexProtect lets customers move applications among Imperva on-prem and cloud solutions without incurring additional costs.

Cloud WAF Versus On-Prem WAF

The fundamental difference between the two options is how they’re deployed. An on-prem WAF runs either in your data center, or potentially as a virtual machine within your infrastructure-as-a-service (IaaS) cloud presence—and is then managed by your internal technical staff, accessed through LAN and VPN when outside the local area network. A cloud WAF is provided as software as a service (SaaS) and accessed through a web interface or mobile app.

Let’s review how they compare on a number of key factors.

Infrastructure

With a cloud WAF, complexities and cost of capacity planning are fully managed by your cloud provider, but with an on-premises solution you’re responsible for these activities. This usually means that on-premises solutions are more expensive in terms of hardware, maintenance and administration. But not always. In some cases, depending upon data center topology and amount of app traffic, on-prem can be less expensive.

Here are a few things to consider when it comes to infrastructure:

Hardware: With an on-prem WAF, purchasing hardware to support peak traffic calculations commonly results in excess security capacity. The other side of the equation is no better: if you get capacity planning wrong and your solution can’t handle the traffic, most WAF solutions are designed to “fail open” (the WAF fails and allows all traffic, good and bad, through), which leaves your organization exposed. At a minimum you will need to consider the following expenses:

  • Compute costs
  • Networking costs
  • Disk storage costs
  • Backup/recovery/failover costs
  • Infrastructure labor costs

Maintenance: Updates within cloud environments tend to occur more regularly than on-prem due in large part to the service provider needing to align to a common maintenance schedule, resource availability and solution standards. Additionally, with an on-prem WAF your in-house technical team is responsible for making timely updates to the WAF, whereas updates to a cloud WAF solution are completely managed by the cloud provider.

A cloud WAF replaces the upfront and ongoing costs associated with maintaining an on-premises system with simple, usage-based, pay-as-you-go pricing. You pay a regular fee based on the bandwidth utilized.

Scalability

Cloud-based solutions were designed to leverage efficiency via scalability. Cloud WAFs have compute capacity that far exceeds any on-premises solution, so functionality like bot detection, account takeover protection and fraud prevention becomes far more effective. Consider that six months down the road your capacity needs double. With a cloud WAF it’s literally point-and-click, on demand. With an on-prem WAF it requires hardware procurement, installation and configuration.

The ability to scale seamlessly and without consideration of additional hardware and infrastructure changes is key to cloud offerings.

Cost

Cloud WAF solutions are generally priced as a monthly or annual subscription, with additional cost for training and support. The advantage of this pricing model is your expenditure can be categorized as OPEX instead of CAPEX. There’s minimal initial investment and it’s easy to forecast. You avoid hardware and unforeseen maintenance costs. All the hardware, backups and maintenance are managed by the vendor.

Traditional, appliance-based on-prem WAF licensing usually requires, at a minimum, a one-time investment for the license (a perpetual license), which is usually based on appliance capacity and/or throughput. Generally, on-prem WAF solutions will be CAPEX. You will also need to identify an implementation partner and account for those implementation costs.

Again, if you’re looking at a hybrid WAF deployment, you might want to consider a flexible, subscription-based licensing model that spans both on-prem and cloud deployments.
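
As a back-of-the-envelope illustration of how the two pricing models compare, the sketch below totals three years of spend. Every figure is a hypothetical placeholder; substitute your own vendor quotes and labor estimates.

# Three-year cost comparison: subscription (OPEX) vs. perpetual license
# plus hardware (CAPEX). All numbers are hypothetical placeholders.
YEARS = 3

cloud_annual_subscription = 60_000       # includes maintenance and support
onprem_perpetual_license = 90_000
onprem_hardware = 40_000
onprem_annual_maintenance = 25_000       # support, updates, admin labor

cloud_total = cloud_annual_subscription * YEARS
onprem_total = (onprem_perpetual_license + onprem_hardware
                + onprem_annual_maintenance * YEARS)

print(f"Cloud WAF (OPEX) over {YEARS} years:    ${cloud_total:,}")
print(f"On-prem WAF (CAPEX) over {YEARS} years: ${onprem_total:,}")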

Implementation

Typically, cloud-based WAF implementations are considerably faster than on-prem WAF deployments. The average time for a cloud deployment is measured in weeks, whereas an on-premises WAF implementation can take weeks or months depending on company size, number of users, locations, and required customizations.

Security

Maintaining data security is critical regardless of which option you go with. With a cloud WAF, software is hosted within highly secured data centers and the cloud services provider is responsible for data security.

An on-prem WAF is only as good as the company’s ability to secure access to that data. In many organizations data security is not a primary focus, which in turn creates data vulnerabilities, exposure and increased risk. Because the server and software are installed locally on the company’s premises, access can be closely monitored and controlled, as long as data security and physical security are taken seriously and reviewed regularly.

Policy Management and Customization

Cloud WAF solutions come with standard features, such as DDoS protection, a content delivery network (CDN), load balancing, APIs, application delivery rules and standard rule sets. Minimal customization is possible because, as a customer, you have less access to the source code. Most enterprise-level on-prem WAF providers, by contrast, offer deep policy development and delivery rules to customize your deployment, giving customers the ability to control behavior at a granular level.

On-prem WAF solutions tend to be more customizable, allowing you to customize the interaction between the applications and the WAF at a more detailed level. For instance, let’s say you have built special functionality on top of your HR system to extract data, compile that data, enrich it and then move it for later consumption. This custom internal process falls outside of the “typical” product behavior. On-prem solutions are going to be able to drill down and have the flexibility to capture this new process easily. The cloud WAF solution may have difficulty as the products don’t typically allow for unique process development.

Which WAF is Right for Your Organization?

Both on-prem and cloud WAFs have their own advantages and disadvantages, which often drive the decision for a hybrid WAF deployment. Selecting the right deployment for your organization’s architecture is dependent on your company’s management and stakeholder preferences, security policy and priorities, budget, and vision.

For more information on WAF requirements and solutions, download Gartner’s 2017 Magic Quadrant for Web Application Firewalls.

Source: https://www.imperva.com/blog/2017/11/cloud-waf-versus-on-premises-waf/?utm_source=linkedIn&utm_medium=organic-social&utm_content=cloud-waf-vs-onprem&utm_campaign=2017-Q4-linkedin-awareness

Author: Jon Burton



Detecting Data Breaches, Why Understanding Database Types Matters

Category : Imperva

Different data characteristics and access patterns found in different database systems lead to different ways of detecting suspicious data access, which are indicators of potential data breaches. To accurately detect data access abuse we need to classify the database processing type. Is it a transactional database (OLTP) or a data warehouse (OLAP)?

OLTP vs. OLAP – What’s the Difference?

Today, in the relational database world there are two types of systems. The first is online transactional processing (OLTP) and the second is online analytical processing (OLAP). Although they look the same, they have different purposes. OLTP systems are used in business applications. These databases are the classic systems that process data transactions. The queries in these databases are simple, short online transactions and the data is up-to-date. Examples include retail sales, financial transaction and order entry systems.

OLAP systems are used in data warehouse environments whose purpose is to analyze data efficiently and effectively. OLAP systems work with very large amounts of data and allow users to find trends, crunch numbers, and extract a ‘big picture’ from the data. OLAP systems are widely used for data mining and the data in them is historic. As OLAP’s number-crunching usually involves a large data set, the interactions with the database last longer. Furthermore, with OLAP databases it’s not possible to predict what the interactions (SQL queries) will look like beforehand.

Figure 1: OLAP and OLTP data flow

The different nature of OLTP and OLAP database systems leads to differences in users’ access patterns and variations in the characteristics of the data that is stored there.

Comparing Access Patterns

With OLTP we expect that users will access the business data stored in the database through the application interface. Interactive (or human) users are not supposed to access the business application data directly through the database. One exception might be DBAs who maintain the database, but even then there is no real reason for a DBA to access business application data directly. It is more likely that DBAs will only access the system tables (which store the data store’s metadata).

With OLAP the situation is different. Many BI users and analysts regularly access the data in the database directly and not through the application interface to produce reports and analyze and manipulate the data.

The Imperva Defense Center worked with dozens of databases across Imperva enterprise customers to analyze the data access patterns for OLTP and OLAP databases over a four-week period. We used audit data collected by SecureSphere and insights gathered from CounterBreach. Figure 2 shows the average number of new interactive users who accessed these databases during the four-week period.

Figure 2: The number of new interactive users who accessed OLTP and OLAP databases over time.

As indicated in Figure 2, there were almost no new interactive (or human) users who accessed OLTP databases over time. However, this was not the situation for OLAP databases.

Comparing Data Characteristics

The data in OLTP systems is up-to-date. In most cases, the tables that hold the business application data are not deleted and repeatedly re-created – they’re stable.

On the other hand, in OLAP systems the data saved in the database is historic. ETL (extract, transform, load) processes upload and manipulate data in the database periodically (hourly/daily/weekly). In many cases the data is uploaded to new tables each time: for example, each day’s data goes into a new table identified by the date of the upload. This leads to many new tables in the database, including temporary tables that help manipulate the data and tables that are deleted over time.

Again, the Imperva Defense Center analyzed the characteristics of data stored in OLTP and OLAP databases using Imperva enterprise customers’ audit data collected by SecureSphere and insights gathered from CounterBreach. Figure 3 shows the average number of new business application tables accessed by interactive users over a four-week period. The average number of new business application tables in OLTP is very low, whereas in OLAP it is much higher.

Figure 3: The number of new business application tables in the database over time.

Incorporating OLTP and OLAP Differences to Improve Detection of Suspicious Data Access

Detecting potential data breaches in a relational database requires identifying suspicious activity in the database. To identify suspicious activity successfully—without missing attacks on one hand and not identifying many false positive incidents on the other—the detection should be based on the story behind the database.

We need to ask ourselves, what is the purpose of the database? How should we expect interactive users to act in the database? How do we expect applications to act in the database? What can we tell about the data in the database? To answer these questions a deep understanding of databases – user types, data types and database types – is required.

The latest release of Imperva CounterBreach adds further understanding of database types and factors them into its detection methods. Leveraging the Imperva Defense Center research on the behavior of interactive users in OLTP and OLAP databases, CounterBreach uses machine learning to classify database types based on the access patterns of interactive users. The machine learning algorithm analyzes a number of different aspects: the number of business intelligence (BI) users and DBAs who access the database, which data those interactive users access, the amount of new business application data created in the database, and more.
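
As a simplified illustration of the idea, the sketch below labels a database OLTP-like or OLAP-like from a handful of access-pattern features. The features, thresholds and scoring rule are hypothetical stand-ins; CounterBreach itself applies machine learning over many more signals.

# Toy classifier: OLTP-like vs. OLAP-like based on interactive-user behavior.
# Features and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AccessProfile:
    new_interactive_users_per_week: float    # newly seen BI users / DBAs
    new_app_tables_per_week: float           # tables created by ETL-style jobs
    interactive_share_of_app_reads: float    # fraction of app-data reads by humans

def classify(profile: AccessProfile) -> str:
    score = 0
    score += profile.new_interactive_users_per_week > 1
    score += profile.new_app_tables_per_week > 5
    score += profile.interactive_share_of_app_reads > 0.2
    return "OLAP" if score >= 2 else "OLTP"

print(classify(AccessProfile(0.1, 0.5, 0.01)))   # -> OLTP
print(classify(AccessProfile(4.0, 30.0, 0.6)))   # -> OLAP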

With an understanding of the database type, CounterBreach determines the best method to detect suspicious activity. In databases that act like OLTP systems, it detects and alerts on any abnormal access by an interactive user to business application data.

In OLAP systems where interactive users access business application data as part of their day-to-day work, CounterBreach won’t alert on such behavior because it’s legitimate. In these systems, it will let BI users do their jobs and use other indicators, such as an abnormal amount of records exfiltrated from the database’s business application tables, to detect data abuse. This helps keep data driven business processes functioning and reduces the number of false positives detected.

Ongoing Research

Imperva data scientists continue to research and identify additional characteristics that distinguish OLTP and OLAP systems. These characteristics go beyond the access patterns of interactive users and the data stored in the database. They include the names of the tables stored in the database, the source applications used to access the database, ETL processes, the diversity of operations in the database, the ratio between different entities’ access to the database and much more. This ongoing research will further refine the accuracy needed to detect potential data breaches.

Learn more about data breach detection. Read our paper on the top ten indicators of data abuse and understand how to identify insider threats.

Source: https://www.imperva.com/blog/2017/10/detecting-data-breaches-why-understanding-database-types-matters/?utm_source=linkedIn&utm_medium=organic-social&utm_content=database-types&utm_campaign=2017-Q4-linkedin-awareness

Author: Shiri Margel



Tuning Capacity Tips for SecureSphere Database Activity Monitoring

Category : Imperva

You have Imperva SecureSphere Database Activity Monitoring (DAM) up and running. You’ve deployed the system and configured your business audit policies. So, what’s next?

In a previous post I discussed the capacity management challenges of database monitoring solutions; in this post I’ll elaborate on the solutions SecureSphere offers for managing and resolving those challenges. I’ll explain the importance of managing your DAM capacity over time, review how you can discover capacity issues and share ways to mitigate any problems.

Why the Need for Tuning?

As discussed in the do’s and don’ts post on capacity estimation, it is very difficult to get an accurate estimate on the expected capacity per database. It’s better to estimate for the entire deployment.

The problem with estimates is, well, they’re estimates. Things can change or perform differently than you expect. You could find out, for instance, that your 20-core MySQL database has less activity than your 8-core MSSQL database. Why? It could be due to several reasons, such as:

  • The application that uses the MySQL database is much more efficient and caches data
  • The DBA responsible for the MySQL database doesn’t back up everything, while the MSSQL DBA does
  • The MySQL server contains other databases or hosted applications as well

Even if you did a great job estimating your capacity needs when you first deployed, it’s just a matter of time before the estimate loses accuracy. Whether it’s upgrading your database, changing your applications, adding more users, or adding more databases to your deployment – your capacity requirements WILL change over time. These changes affect not only your overall capacity requirements, but also the capacity per database.

SecureSphere Database Activity Monitoring Terminology

Before I drill down into SecureSphere solutions for capacity management, there are a few terms that would be helpful to understand. The basic operation of DAM (usually) involves agents, which are installed on the database servers, and gateways, which are used to process the database activity sent from the agents. This means that every agent needs at least one connection to at least one gateway.

There are two primary methods to manage multiple gateways:

  • Gateway Group – simple grouping of gateways with no scale out or capacity related logic
  • Cluster – more advanced grouping of gateways with additional logic for scale out and redundancy

Every gateway has a maximum capacity measured in IPU (Imperva Performance Units), and every agent places a relative load on its gateway, also measured in IPU. IPU measurements let you compare and estimate the capacity impact of assigning agents to gateways.
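
To illustrate how IPU figures can drive assignment decisions, here is a hypothetical Python sketch that places each agent on the gateway with the most remaining headroom. It is not SecureSphere's algorithm, and the agent loads and gateway capacities are made-up values.

# Greedy assignment of agents to gateways by estimated IPU headroom.
# Hypothetical sketch; numbers and names are placeholders.
def assign_agents(agents, gateways):
    """agents: {name: estimated_ipu}; gateways: {name: max_ipu}."""
    load = {gw: 0 for gw in gateways}
    assignment = {}
    for agent, ipu in sorted(agents.items(), key=lambda kv: -kv[1]):
        # Pick the gateway with the most remaining capacity.
        gw = max(gateways, key=lambda g: gateways[g] - load[g])
        if load[gw] + ipu > gateways[gw]:
            raise RuntimeError(f"Not enough capacity for {agent}: scale up or out")
        load[gw] += ipu
        assignment[agent] = gw
    return assignment, load

agents = {"mysql-prod": 300, "mssql-hr": 180, "oracle-erp": 450}
gateways = {"gw1": 600, "gw2": 600}
print(assign_agents(agents, gateways))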

With the basic terminology down, let’s find out if you have capacity issues.

Discovering If You Have Capacity Issues

The best way to identify most capacity problems in SecureSphere DAM is through the health monitoring feature, which displays alarms for various issues. A few of them indicate an overload problem, either at the gateway level or at the cluster level (see Figure 1):

  • Gateway capacity warning – major warning, gateway at a high load state
  • Gateway capacity alert – critical warning, gateway at a critical load state
  • Cluster capacity warning – major warning, cluster of gateways at a high load state
  • Cluster capacity alert – critical warning, cluster of gateways at a critical load state

Figure 1: SecureSphere displaying current status of SecureSphere components

There are also alarms for special scenarios. For example, if an agent load is more than your gateway can handle (depending on the gateway model) you will receive an alarm with a recommendation to scale out.

These alarms are based on real time measurements of the overall load of each gateway and the relative load of each of its corresponding agents. Each alarm contains a detailed explanation with recommendations for mitigation.

You can be more proactive by analyzing the current agent load and total gateway(s) load via the cluster management feature (Figure 2), which displays detailed information for all types of clusters and gateway groups. You can see which agents are assigned to which gateways, the capacity information, versions, status, etc.

Figure 2: SecureSphere DAM cluster management feature enabling cluster maintenance

Four Ways to Solve SecureSphere Database Activity Monitoring Capacity Issues

There are four ways to solve for SecureSphere DAM capacity issues. Let’s take a look at each one.

Manual Load Balancing

You can choose to analyze your deployment and manually change the assignment of agents to gateways. It is also possible to set a threshold to prevent assigning an agent to a gateway if that gateway is overloaded according to its current assignment. It is important to note that this threshold is based on the estimates given to each agent upon initial assignment and not according to real time measurements.

Automatic Load Balancing

Another way to solve a load balancing issue is to let SecureSphere do it automatically. The automatic load balancing feature (Figure 3) ensures the cluster is optimized for the long term. Its aim is NOT to solve a momentary peak load, but to balance the cluster over time. It doesn’t change the agent assignment unless doing so improves the overall load scenario.

Figure 3: Configuring automatic load balancing in SecureSphere

Scale Up/Out

In other scenarios, you might discover that you need more gateways (scale out), or more powerful gateways (scale up) (Figure 4). This means that load balancing is not a relevant solution – you simply don’t have enough capacity. The alarms will guide you with recommendations, but in some cases it will still be beneficial to contact support to make the best decision.

Figure 4: Scale out (add more gateways) versus scale up (add more powerful gateways)

Large Server Cluster

There are a few special scenarios which can lead to capacity-related alarms. One of them is discovering that a certain agent’s required capacity is larger than a “full gateway”, and that multiple gateways are required to handle this single database. In this scenario, you will see the appropriate alarm with a recommendation to create a large server cluster. The large server cluster is used to solve the problem of monitoring very large databases (minimum of 128 cores might be considered large – depending on various factors).

Helpful Guidelines

As you can see, SecureSphere DAM supplies multiple tools to handle capacity management before there’s a problem and mitigate any existing ones. It is highly recommended not to wait for the next alarm to pop up, but to follow these guidelines:

  • Use clusters when you can, and let them do the load balancing for you
  • Be proactive – check the actual IPU measurements per gateway and per agent to fine tune your current deployment and improve your future capacity estimations
  • Be attentive to all alarms and follow recommendations

By utilizing these solutions and best practices you will improve the foundation for future growth plans and keep the capacity management overhead of your SecureSphere DAM deployment to a minimum.

 

Source: https://www.imperva.com/blog/2017/10/tuning-capacity-tips-securesphere-database-activity-monitoring/?utm_source=linkedin&utm_medium=organic-social&utm_content=tuning-dam&utm_campaign=2017-Q4-linkedin-awareness

Author: Yoni Nave



Ransomware Attacks on MySQL and MongoDB

Category : Imperva

Ransomware is arguably one of the most vicious types of attack cyber security experts are dealing with today. The impact ransomware attacks can have on an organization is huge and costly. A ransomware payment alone does not reflect the total expense of an attack—the more significant costs come from downtime, data recovery and partial or total business paralysis. Following the recent NotPetya ransomware attacks, Maersk estimated its losses at $200-$300 million, while FedEx estimated theirs at $300 million. Needless to say, ransomware-related losses seem to be growing in size.

It is well known that typical ransomware encrypts files—but what about ransomware targeted at databases? We’ve previously written about it, yet database ransomware remains less talked about even though it introduces a potentially larger risk, since an organization’s core applications rely on the data in its databases.

In this post we’ll explain how database ransomware attacks work and provide analysis of two database ransomware attacks recently monitored by our systems: one on MySQL and another on NoSQL (MongoDB).

Methods Used to Attack Databases with Ransomware

There are three primary methods used to attack databases with the goal of corrupting or tampering with data:

1) SQL/NoSQL – inside attack

Assuming access to the database has already been obtained (whether through brute force, a compromised DBA account or even a malicious insider who already has access), an attacker can drop, insert or update data, and hence corrupt it. This can be done with a few simple SQL transactions or NoSQL commands.

2) SQL/NoSQL – external attack

A web app vulnerability, like SQL injection or NoSQL injection, allows attackers to execute any SQL statement they wish. Although we’ve already seen ransomware attacking web apps, we haven’t yet seen this method used against databases in the wild, but it’s likely to happen.

Another method for external attackers is to target databases with public IP. This can be easily done with online services like Shodan.

3) Encrypting the database file

The database file is where the database schema and data are stored. This type of attack is exactly the same as traditional ransomware attacks that target files. The only caveat (from the ransomware’s point of view) is that it must terminate the database process before encrypting, as the running process holds the database file and makes it unmodifiable by other processes while in use.

Analysis of Database Ransomware Attacks in the Wild

Let’s take a look at two SQL/NoSQL transaction-based attacks that were recently monitored by our systems.

MySQL

The attacker successfully gained access to the databases by brute forcing user/password combinations. The next step was to run “show databases”. Then each of the enumerated databases was deleted with the “drop database” statement.

It is important to note that database monitoring and enforcement systems cannot rely on accumulating suspicious activities per connection (stream). In this attack, after every SQL statement the attacker’s client logged out before issuing the next statement. So deleting ten databases would end up spread across eleven sequential connections (the extra one for listing the databases). Likewise, the “Follow TCP Stream” feature in Wireshark will show one malicious activity at a time and not the entire attack sequence.
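
To illustrate why correlation has to happen across connections, here is a hypothetical detection sketch that groups audit events by client address rather than by stream and flags a burst of "drop database" statements. The record layout and thresholds are assumptions for the example, not SecureSphere's implementation.

# Correlate destructive statements per client IP across many short-lived
# connections. Hypothetical sketch; field names and thresholds are assumptions.
from collections import defaultdict

def flag_drop_bursts(audit_events, threshold=3, window_seconds=300):
    """audit_events: dicts with 'client_ip', 'timestamp' (epoch seconds), 'statement'."""
    drops_by_client = defaultdict(list)
    for event in audit_events:
        if event["statement"].strip().lower().startswith("drop database"):
            drops_by_client[event["client_ip"]].append(event["timestamp"])
    suspicious = []
    for ip, times in drops_by_client.items():
        times.sort()
        for start in times:
            # Count drops inside a sliding window, regardless of which connection issued them.
            if sum(1 for t in times if 0 <= t - start <= window_seconds) >= threshold:
                suspicious.append(ip)
                break
    return suspicious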

Figures 1-3 show how the attacker listed the databases and dropped one of them.

Figure 1: The attack lists the databases

Figure 2: The attacker ends the connection before proceeding to the next phase

Figure 3: The attacker deletes a database

After disposing of the data in this database, the attacker created a table named “Readme” and left the ransom note there (Figures 4 and 5).

Figure 4: Creating a “Readme” table

Figure 5: Inserting the ransomware note that explains to the victim what happened and how to pay

And this is how it looks in Imperva SecureSphere database activity monitoring (Figure 6):

Figure 6: SecureSphere audit screen shows the entire attack stack

The ransom note details (as described in Figure 5):

– eMail: cru3lty@safe-mail.net
– BitCoin: 1By1QF7dy9x1EDBdaqvMVzw47Z4JZhocVh
– Reference: https://localbitcoins.com
– Description: Your DataBase is downloaded and backed up on our secured servers. To recover your lost data: Send 0.2 BTC to our BitCoin Address and Contact us by eMail with your MySQL server IP Address and a Proof of Payment. Any eMail without your MySQL server IP Address and a Proof of Payment together will be ignored. You are welcome.

Note: with this attack the attacker didn’t even bother to read the data before deleting it.

It appears this group is changing its bitcoin address every few weeks. The above bitcoin address was used in an attack that took place three weeks ago, while our systems observed a new bitcoin payment address just a few days ago: 1G5tfypKqHGDs8WsYe1HR5JxiwffRzUUas (see Figure 7).

Figure 7: New bitcoin address for MySQL ransomware monitored by Imperva SecureSphere

MongoDB

MongoDB is a NoSQL database, but the attack logic is very much the same. Login was easier for the attacker this time, as no authentication was required. Access control is not enabled by default on MongoDB, so the entrance ticket was simply knowing the IP address and the (well-known) port. According to Shodan, there are roughly 20,000 MongoDB instances with public IP addresses and no authentication, which is about 40% of all public-facing MongoDB instances.
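
If you run MongoDB yourself, it takes one short script to confirm whether an instance accepts anonymous clients. The sketch below uses pymongo; the hostname is a placeholder, and the check simply asks whether an unauthenticated client is allowed to list databases.

# Does this MongoDB instance let an anonymous client list its databases?
# Host and port are placeholders. Requires pymongo.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def check_anonymous_access(host: str, port: int = 27017) -> None:
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        dbs = client.list_database_names()
        print(f"WARNING: {host}:{port} allows unauthenticated access, databases: {dbs}")
    except OperationFailure:
        print(f"OK: {host}:{port} requires authentication")
    except ServerSelectionTimeoutError:
        print(f"Could not reach {host}:{port}")

# check_anonymous_access("mongo.internal.example.com")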

Figures 8 and 9 show how the attacker listed the databases and deleted one of them.

Figure 8: The attacker lists the databases

Figure 9: The attacker deletes one of the databases

To let the victim know about the attack (and how to pay), the attacker created a "Warning" database with a "Readme" document inside. This is the JSON generated by MongoDB's native audit…

Figure 10: Creating the Readme document to store the ransom note

And here’s the message itself…

Figure 11: Writing the ransom note and bitcoin account

The ransom note details (as described in Figure 11):

– eMail: cru3lty@safe-mail.net
– BitCoin: 1Ptza47PgMtFMA6fZpLNzacb1EPkWDAv6n
– Solution: Your DataBase is downloaded and backed up on our secured servers. To recover your lost data: Send 0.2 BTC to our BitCoin Address and Contact us by eMail with your MongoDB server IP Address and a Proof of Payment. Any eMail without your MongoDB server IP Address and a Proof of Payment together will be ignored. You are welcome!

Although this is a different bitcoin (BTC) address than the one used in the MySQL attack, note the attacker's contact info: it's the same group, and also the top group mentioned in this article on 26K victims of MongoDB attacks. Our systems also indicated that both attacks originated from the same IP address (in China).

To Pay or Not to Pay?

At the time of writing, there had been two payments to the MySQL address (none for the latest attack) and three payments to the MongoDB address: a total of 1 BTC, roughly $4,800.

Imperva doesn’t suggest customers pay the ransom (although that is a dilemma when no backup is in place), and with these specific attacks we’d highly recommend not paying it, even without a backup. In both of these recorded and audited attacks, the attacker did not even read the data before disposing of it: the databases were listed and immediately dropped without being backed up, so restoring the data is impossible (for the attacker).

Takeaways

Enforcing behavior-based policies is effective at detecting these kinds of attacks: you can identify brute force attacks, login attempts with known database user dictionaries, abnormal behavior of an application user (e.g., via SQL audit profiling), and so on. But here are a few items you can implement right away for quick security wins:

  • Make sure your database cannot be accessed from the internet. Usually there is no real need to expose a database; only the web app server and a jump server for the DBAs should have access to the database’s isolated network (VPN/VPC).
  • Make sure firewall rules are in place, whitelisting approved IPs only
  • Have audit enabled (using a database activity monitoring solution or even native audit)
  • Alert on failed logins (to catch brute force attempts), preferably with a minimal threshold – see the sketch after this list
  • Take regular backups
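
As an example of the failed-login alerting mentioned above, here is a minimal sketch that counts authentication failures per client IP over a five-minute window and alerts above a threshold. The line-delimited JSON log format ("ts", "client_ip", "event") and the threshold value are assumptions for illustration; adapt the parser to whatever audit source you actually have.

# Count authentication failures per client IP within a sliding window and
# alert above a threshold. Log format and threshold are placeholders.
import json
from collections import Counter
from datetime import datetime, timedelta

THRESHOLD = 10
WINDOW = timedelta(minutes=5)

def failed_login_alerts(log_path: str) -> None:
    now = datetime.utcnow()
    failures = Counter()
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            ts = datetime.fromisoformat(event["ts"])
            if event["event"] == "login_failed" and now - ts <= WINDOW:
                failures[event["client_ip"]] += 1
    for ip, count in failures.items():
        if count >= THRESHOLD:
            print(f"ALERT: {count} failed logins from {ip} in the last {WINDOW}")

# failed_login_alerts("db_auth.log")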

Source: https://www.imperva.com/blog/2017/10/ransomware-attacks-on-mysql-and-mongodb/?utm_source=linkedin&utm_medium=organic-social&utm_content=database-ransomware&utm_campaign=2017-Q4-linkedin-awareness

Author: Elad Erez


  • 0

How to Protect AWS ECS with SecureSphere WAF

Category : Imperva

Adoption of container technology is growing rapidly. More and more workloads are being moved from traditional EC2 compute instances to container-based services. However, the need to secure web traffic remains the same regardless of the chosen platform.

In this post, we’ll take a deep dive into protecting web applications running on AWS ECS with SecureSphere WAF. While protecting ECS with SecureSphere is very similar to a classic SecureSphere WAF deployment on AWS, we’ll cover the differences and offer tips on the recommended way to protect an ECS cluster.

ECS Cluster Configuration

Amazon’s container web services run on ECS instances inside a VPC. It is important to place the ECS instances on private subnets to ensure that web traffic is only reachable through SecureSphere. It is also recommended to use an internal application load balancer (ALB) to access ECS services through a single DNS name – that way you can provision new services that will automatically be protected, without making any changes in SecureSphere.

Figure 1: Unprotected AWS ECS environment

In the above diagram (Figure 1), we have:

  • An ECS cluster with ECS instances in two availability zones and in private subnets
  • A public NAT instance/gateway configuration for the ECS instances to communicate with AWS API (ECS requirement)
  • A green service, with containers spread across both ECS instances
    • The green service is registered to a target group on our internal ALB. Using ALB host-based rules, we can register multiple ECS services to the same ALB behind the same DNS endpoint

At this point our service is only accessible from inside the VPC, so we need to deploy SecureSphere WAF to provide external access.
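
A simple way to verify that the cluster really is private is to list the cluster’s container instances and flag any that carry a public IP address. The boto3 sketch below is illustrative only; the cluster name is a placeholder, and AWS credentials are assumed to be configured.

# Flag ECS container instances that have a public IP (they should all sit on
# private subnets). Cluster name is a placeholder; requires boto3.
import boto3

def find_public_ecs_instances(cluster_name: str = "my-ecs-cluster") -> None:
    ecs = boto3.client("ecs")
    ec2 = boto3.client("ec2")

    arns = ecs.list_container_instances(cluster=cluster_name)["containerInstanceArns"]
    if not arns:
        return
    described = ecs.describe_container_instances(cluster=cluster_name, containerInstances=arns)
    instance_ids = [ci["ec2InstanceId"] for ci in described["containerInstances"]]

    reservations = ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]
    for res in reservations:
        for inst in res["Instances"]:
            if inst.get("PublicIpAddress"):
                print(f"WARNING: {inst['InstanceId']} has public IP {inst['PublicIpAddress']}")

find_public_ecs_instances()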

Deploying SecureSphere WAF

SecureSphere WAF is deployed using CloudFormation templates provided by Imperva. For more information on deployment, check out this blog post.

Before deploying SecureSphere we need to set up the following resources:

  • WAF private subnets (with outbound Internet routing to access AWS API)
  • External load balancer (ELB)

After the deployment, our environment should look something like this (Figure 2):

Figure 2: ECS environment protected by SecureSphere WAF

 

Notes about the deployment:

  • You can see that this deployment is suited for any web endpoint inside the VPC, not just ECS
  • We used the “1 Subnet” GW template; a dual-subnet template is also available
  • The management server (MX) is in a private subnet, so you will not be able to access it from the Internet. You can access it from a jump box or using NAT routing
  • The external ELB acts as our public endpoint. We need to configure DNS so that our green service hostname will be routed to the ELB. Usually our SSL termination will be on the ELB using an HTTPS listener

SecureSphere Configuration

In our example environment, our networking configuration is simple – all web traffic passes through the ELB to our gateway scaling group, and from the gateways to the internal ALB. The ALB is responsible for routing to the appropriate ECS service based on host rules.

All we now have to do is configure a reverse proxy rule in the MX to route the traffic to the internal ALB:

configure a reverse proxy rule in SecureSphere MX

Provisioning Additional ECS Services

We can now spin up new tasks and services in ECS that will automatically be protected without making any network changes in SecureSphere. If our new service, red, uses the same SSL certificate as green, we can simply do the following (a boto3 sketch of these steps appears after the list):

  • Attach the red service to a new target group in the internal ALB
  • Route the red DNS to our external ELB
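
Here is a rough boto3 sketch of the first step: creating a target group and a host-header rule on the internal ALB for the red service. The VPC ID, listener ARN, hostname, and rule priority are placeholders, and the DNS change that routes red to the external ELB (e.g., via Route 53) is not shown.

# Register a new ECS service behind the existing internal ALB: a target group
# plus a host-based listener rule. All identifiers are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

# 1) Target group the red ECS service will register into (specified in the
#    ECS service's load balancer configuration).
tg = elbv2.create_target_group(
    Name="red-service-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",   # placeholder
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# 2) Host-based rule on the internal ALB listener.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/internal-alb/...",  # placeholder
    Priority=20,
    Conditions=[{"Field": "host-header", "Values": ["red.example.com"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)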

Because AWS load balancers (both classic and application) don’t support SNI, if we want to use a different certificate for a new service (blue), we’ll need to create a new ELB to terminate HTTPS and connect it to the gateway auto scaling group. After that, we can use the same GW stack and internal ALB – without making any changes to SecureSphere (Figure 3).

Figure 3: Multiple ECS services protected by a single SecureSphere WAF stack
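
A hedged sketch of what adding the blue endpoint might look like with boto3 is shown below: a new classic ELB terminating HTTPS with blue’s certificate, attached to the existing gateway auto scaling group. Every name, ARN, subnet, security group, and port in it is a placeholder for your own environment, not a value from the original post.

# Create a second public classic ELB that terminates HTTPS with blue's
# certificate and attach it to the SecureSphere gateway auto scaling group.
import boto3

elb = boto3.client("elb")
autoscaling = boto3.client("autoscaling")

elb.create_load_balancer(
    LoadBalancerName="blue-public-elb",
    Listeners=[{
        "Protocol": "HTTPS",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,   # gateway listening port - placeholder
        "SSLCertificateId": "arn:aws:acm:us-east-1:123456789012:certificate/blue-cert",  # placeholder
    }],
    Subnets=["subnet-aaa111", "subnet-bbb222"],   # public subnets (placeholders)
    SecurityGroups=["sg-0123456789abcdef0"],      # placeholder
)

# Attach the new ELB to the gateway scaling group so blue traffic is inspected
# by the same SecureSphere gateways.
autoscaling.attach_load_balancers(
    AutoScalingGroupName="securesphere-gw-asg",   # placeholder
    LoadBalancerNames=["blue-public-elb"],
)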

Notes on SecureSphere Automation

In this blog post we demonstrated how to provision ECS services automatically without making any changes to the SecureSphere configuration. There are, however, scenarios where this is not the case:

  • Deploying a dedicated gateway stack (with/without MX) for an ECS service
  • Updating reverse proxy rules to route to a newly added internal load balancer
  • Uploading a new SSL certificate in the event SecureSphere terminates HTTPS

We’ll cover how to automate the SecureSphere configuration for these deployment scenarios in future blog posts. To get started deploying SecureSphere in your ECS environment today, try our SecureSphere offering on the AWS Marketplace.

Source: https://www.imperva.com/blog/2017/10/protect-aws-ecs-securesphere-waf/?utm_source=linkedIn&utm_medium=organic-social&utm_content=aws-ecs&utm_campaign=2017-Q4-linkedin-awareness

 

