Category Archives: Imperva


Ransomware Attacks on MySQL and MongoDB

Category : Imperva

Ransomware is arguably one of the most vicious types of attack cyber security experts are dealing with today. The impact a ransomware attack can have on an organization is huge and costly. The ransom payment alone does not reflect the total expense of an attack—the more significant costs come from downtime, data recovery and partial or total business paralysis. Following the recent NotPetya ransomware attacks, Maersk estimated its losses at $200-$300 million, while FedEx estimated its losses at $300 million. Needless to say, ransomware-related losses seem to be growing in size.

It is well known that typical ransomware encrypts files—but what about ransomware targeted at databases? We’ve previously written about it, yet database ransomware remains less talked about even though it introduces a potentially larger risk, since an organization’s core applications depend on the data in its databases.

In this post we’ll explain how database ransomware attacks work and provide analysis of two database ransomware attacks recently monitored by our systems: one on MySQL and another on NoSQL (MongoDB).

Methods Used to Attack Databases with Ransomware

There are three primary methods used to attack databases with the goal of corrupting or tampering with data:

1) SQL/NoSQL – inside attack

Assuming access to the database has already been obtained (whether through brute force, a compromised DBA account or a malicious insider who already has access), an attacker can drop, insert or update data and thus tamper with it. This can be done with a few simple SQL transactions or NoSQL commands.

2) SQL/NoSQL – external attack

A web application vulnerability, like SQL injection or NoSQL injection, allows attackers to execute any SQL statement they wish. Although we’ve already seen ransomware attack web apps, we haven’t yet seen this method used against databases in the wild—but it’s likely to happen.

Another method for external attackers is to target databases with public IP addresses. These are easily found with online services like Shodan.
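As a defensive flip side, the same service can be used to check your own exposure. Below is a minimal sketch using the official shodan Python library; the API key, query strings and network range are illustrative assumptions, not details from the attacks described here.

```python
# A minimal exposure self-check sketch (pip install shodan).
# The API key and the example network range are hypothetical placeholders.
import shodan

api = shodan.Shodan("YOUR_SHODAN_API_KEY")  # hypothetical key

# Searching within your own address space keeps the check relevant;
# 203.0.113.0/24 is a documentation range used here as an example.
for query in ("product:MySQL net:203.0.113.0/24",
              "product:MongoDB net:203.0.113.0/24"):
    results = api.search(query)
    print(f"{query!r}: {results['total']} exposed host(s)")
    for match in results["matches"]:
        print(f"  {match['ip_str']}:{match['port']}")
```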

3) Encrypting the database file

The database file is where the database schema and data are stored. This type of attack is exactly the same as a traditional ransomware attack on files. The only caveat (from the ransomware’s point of view) is that it must terminate the database process before encrypting, since the running process holds the database file open, making it unmodifiable by other processes while in use.

Analysis of Database Ransomware Attacks in the Wild

Let’s take a look at two SQL/NoSQL transaction-based attacks that were recently monitored by our systems.

MySQL

The attacker gained access to the databases by brute forcing user/password combinations. The next step was a “show databases” statement; each of the enumerated databases was then deleted with a “drop database” statement.

It is important to note that database monitoring and enforcement systems cannot rely on accumulating suspicious activities per connection (stream). In this attack, the attacker’s client logged out after every SQL statement and reconnected before issuing the next one, so deleting a 10-table database ended up spanning 11 sequential connections (the extra one for listing the tables). Likewise, the “Follow TCP Stream” feature in Wireshark shows only one malicious activity at a time, not the entire attack sequence.
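Detection therefore has to correlate events across connections. Here is a minimal sketch of that idea, assuming audit events are available as (timestamp, client IP, SQL text) tuples; the event format, time window and sample data are illustrative assumptions, not SecureSphere internals.

```python
# Correlate "show databases" followed by "drop database" from the same
# client across separate, short-lived connections.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)  # assumed correlation window

def flag_enumerate_then_drop(events):
    by_client = defaultdict(list)
    for ts, client_ip, sql in events:
        by_client[client_ip].append((ts, sql.strip().lower()))

    suspects = []
    for client_ip, stmts in by_client.items():
        stmts.sort()
        listed_at = None
        for ts, sql in stmts:
            if sql.startswith("show databases"):
                listed_at = ts
            elif (sql.startswith("drop database") and listed_at
                    and ts - listed_at <= WINDOW):
                suspects.append(client_ip)
                break
    return suspects

events = [
    (datetime(2017, 10, 1, 3, 0, 0), "198.51.100.7", "SHOW DATABASES"),
    (datetime(2017, 10, 1, 3, 0, 9), "198.51.100.7", "DROP DATABASE shop"),
]
print(flag_enumerate_then_drop(events))  # ['198.51.100.7']
```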

Figures 1-3 show how the attacker listed the databases and dropped one of them.


Figure 1: The attack lists the databases


Figure 2: The attacker ends the connection before proceeding to the next phase


Figure 3: The attacker deletes a database

After disposing of the data in this database, the attacker created a table named “Readme” and left the ransom note there (Figures 4 and 5).


Figure 4: Creating a “Readme” table


Figure 5: Inserting the ransomware note that explains to the victim what happened and how to pay

And this is how it looks in Imperva SecureSphere database activity monitoring (Figure 6):

Figure 6: SecureSphere audit screen shows the entire attack stack

The ransom note details (as described in Figure 5):

– eMail: cru3lty@safe-mail.net
– BitCoin: 1By1QF7dy9x1EDBdaqvMVzw47Z4JZhocVh
– Reference: https://localbitcoins.com
– Description: Your DataBase is downloaded and backed up on our secured servers. To recover your lost data: Send 0.2 BTC to our BitCoin Address and Contact us by eMail with your MySQL server IP Address and a Proof of Payment. Any eMail without your MySQL server IP Address and a Proof of Payment together will be ignored. You are welcome.

Note: with this attack the attacker didn’t even bother to read the data before deleting it.

It appears this group is changing its bitcoin address every few weeks. The above bitcoin address was used in an attack that took place three weeks ago, while our systems observed a new bitcoin payment address just a few days ago: 1G5tfypKqHGDs8WsYe1HR5JxiwffRzUUas (see Figure 7).


Figure 7: New bitcoin address for MySQL ransomware monitored by Imperva SecureSphere

MongoDB

MongoDB is a NoSQL database, but the attack logic is much the same. Login was easier for the attacker this time, as no authentication was required: access control is not enabled by default on MongoDB, so the entrance ticket was just knowing the IP address and the (well-known) port. According to Shodan, there are ~20,000 MongoDB instances with public IP addresses and no authentication—roughly 40% of all public-facing MongoDB instances.
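If you run MongoDB yourself, a quick self-test for this misconfiguration is straightforward. The sketch below uses pymongo; the host address is a hypothetical placeholder.

```python
# Check whether a MongoDB instance accepts unauthenticated access
# (pip install pymongo).
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def allows_anonymous_access(host, port=27017):
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        # Listing databases needs privileges; succeeding without
        # credentials means the server is open to the attack above.
        names = client.list_database_names()
        print(f"{host}:{port} is OPEN without auth, databases: {names}")
        return True
    except OperationFailure:
        print(f"{host}:{port} requires authentication")
    except ServerSelectionTimeoutError:
        print(f"{host}:{port} is unreachable")
    return False

allows_anonymous_access("203.0.113.10")  # example address
```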

Figures 8 and 9 show where the attacker listed the databases and deleted one of them.


Figure 8: The attacker lists the databases


Figure 9: The attacker deletes one of the databases

In order to let the victim know about the attack (and how to pay), the attacker created a “Warning” database with a “Readme” document inside. This is the JSON generated by MongoDB’s native audit…


Figure 10: Creating the Readme document to store the ransom note

And here’s the message itself…


Figure 11: Writing the ransom note and bitcoin account

The ransom note details (as described in Figure 11):

– eMail: cru3lty@safe-mail.net
– BitCoin: 1Ptza47PgMtFMA6fZpLNzacb1EPkWDAv6n
– Solution: Your DataBase is downloaded and backed up on our secured servers. To recover your lost data: Send 0.2 BTC to our BitCoin Address and Contact us by eMail with your MongoDB server IP Address and a Proof of Payment. Any eMail without your MongoDB server IP Address and a Proof of Payment together will be ignored. You are welcome!

Although this is a different bitcoin (BTC) address than the one used in the MySQL attack, note the attacker’s contact info—it’s the same group as in the MySQL attack, and also the top group mentioned in this article on 26K victims of MongoDB attacks. Our systems also indicated that both attacks originated from the same IP address (in China).

To Pay or Not to Pay?

At the time of writing, there were two payments to the MySQL account (none for the latest attack) and three payments to the MongoDB account—a total of 1 BTC, which is $4,800.

Imperva doesn’t suggest customers pay the ransom (although that is a dilemma when no backup is in place), and with these specific attacks we’d highly recommend not paying, even without a backup. In both of these recorded and audited attacks, the attacker did not even read the data before disposing of it: the databases were listed and immediately dropped, without being backed up, so restoring the data is impossible (for the attacker).

Takeaways

Enforcing behavioral-based policies is effective at detecting these kinds of attacks—you can identify brute force attacks, login attempts with known database user dictionaries, abnormal behavior of an application user via an SQL audit profiler, and so on. But here are a few items you can implement right away for quick security wins:

  • Make sure your database cannot be accessed from the internet. Usually there is no real need to expose a database; only the web app server and a jump server for the DBAs should have access to the database’s isolated network (VPN/VPC).
  • Make sure firewall rules are in place, whitelisting approved IPs only
  • Have audit enabled (using a database activity monitoring solution or even native audit)
  • Alert on failed logins (to catch brute force attempts), preferably with some minimal threshold—see the sketch below
  • Take regular backups
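
As one concrete way to implement the failed-login alert, here is a minimal sketch that scans a MySQL error log for “Access denied” entries; the log path, threshold and alerting mechanism are illustrative assumptions.

```python
# Count failed MySQL logins per (user, source) and alert past a threshold.
import re
import sys
from collections import Counter

LOG_PATH = "/var/log/mysql/error.log"  # adjust to your environment
THRESHOLD = 10                         # failures before alerting

# Typical failed-login line:
#   [Note] Access denied for user 'root'@'198.51.100.7' (using password: YES)
PATTERN = re.compile(r"Access denied for user '([^']+)'@'([^']+)'")

failures = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            failures[match.groups()] += 1

for (user, source), count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed logins as {user!r} from {source}",
              file=sys.stderr)
```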

Source: https://www.imperva.com/blog/2017/10/ransomware-attacks-on-mysql-and-mongodb/?utm_source=linkedin&utm_medium=organic-social&utm_content=database-ransomware&utm_campaign=2017-Q4-linkedin-awareness

Author: Elad Erez



How to Protect AWS ECS with SecureSphere WAF

Category : Imperva

Adoption of container technology is growing rapidly. More and more workloads are being transferred from traditional EC2 compute instances to container-based services. However, the need to secure the web traffic remains the same regardless of the chosen platform.

In this post, we’ll take a deep dive into protecting web applications running on AWS ECS with SecureSphere WAF. While protecting ECS with SecureSphere is very similar to a classic SecureSphere WAF deployment on AWS, we will cover the differences and provide hints on the recommended way to protect an ECS cluster.

ECS Cluster Configuration

Amazon’s container web services run on ECS instances inside a VPC. It is important to configure the ECS instances on private subnets to ensure that the web traffic is only accessible through SecureSphere. It is also recommended to use an internal application load balancer (ALB) to access ECS services from a single DNS name—that way you can provision new services that will automatically be protected, without making any changes in SecureSphere.


Figure 1: Unprotected AWS ECS environment

In the above diagram (Figure 1), we have:

  • An ECS cluster with ECS instances in two availability zones and in private subnets
  • A public NAT instance/gateway configuration for the ECS instances to communicate with AWS API (ECS requirement)
  • A green service, with containers spread across both ECS instances
    • The green service is registered to a target group in our internal ALB. Using ALB rules, we can register multiple ECS services to the same ALB behind the same DNS endpoint (a scripted example follows below).

Here our service is only accessible from inside the VPC, so we need to deploy SecureSphere WAF for external access.
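The host-based registration described above can also be scripted. Below is a minimal sketch using boto3; the ARNs, hostname and rule priority are hypothetical placeholders, not values from this environment.

```python
# Register a service's target group behind a host-header rule on the
# internal ALB (pip install boto3).
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Hypothetical placeholders; use your listener and target group ARNs.
LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/internal/..."
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/green/..."

elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,  # must be unique per listener
    Conditions=[{"Field": "host-header", "Values": ["green.example.com"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": TARGET_GROUP_ARN}],
)
```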

Deploying SecureSphere WAF

Deploying SecureSphere WAF is done using CloudFormation templates provided by Imperva. For more information on deployment check out this blog post.

Before deploying SecureSphere we need to set up the following resources:

  • WAF private subnets (with outbound Internet routing to access AWS API)
  • External load balancer (ELB)

After the deployment, our environment should look something like this (Figure 2):


Figure 2: ECS environment protected by SecureSphere WAF

 

Notes about the deployment:

  • You can see that this deployment is suited for any web endpoint inside the VPC, not just ECS
  • We used the “1 Subnet” GW template; a dual-subnet template is also available
  • The management server (MX) is in a private subnet, so you will not be able to access it from the Internet. You can access it from a jump box or using NAT routing
  • The external ELB acts as our public endpoint. We need to configure DNS so that our green service hostname will be routed to the ELB. Usually our SSL termination will be on the ELB using an HTTPS listener

SecureSphere Configuration

In our example environment, our networking configuration is simple – all web traffic passes through the ELB to our gateway scaling group, and from the gateways to the internal ALB. The ALB is responsible for routing to the appropriate ECS service based on host rules.

All we now have to do is configure a reverse proxy rule in the MX to route the traffic to the internal ALB.


Provisioning Additional ECS Services

We can now spin up new tasks and services in ECS that will automatically be protected, without making any network changes in SecureSphere. If our new service, red, uses the same SSL certificate as green, we can simply:

  • Attach the red service to a new target group in the internal ALB
  • Route the red DNS to our external ELB

Because AWS load balancers don’t support SNI (neither classic nor application), if we want to use a different certificate for a new service (blue), we’ll need to create a new ELB to terminate the HTTPS and connect it to the gateway auto scaling group. After that, we can use the same GW stack and internal ALB—without making any changes to SecureSphere (Figure 3).


Figure 3: Multiple ECS services protected by a single SecureSphere WAF stack

Notes on SecureSphere Automation

In this blog post we demonstrated how to provision ECS services automatically without making any changes to the SecureSphere configuration. There will be different scenarios where this is not the case:

  • Deploying a dedicated gateway stack (with/without MX) for an ECS service
  • Updating reverse proxy rules to route to a newly added internal load balancer
  • Uploading a new SSL certificate in the event SecureSphere terminates HTTPS

We’ll cover how to automate SecureSphere configuration for these deployment scenarios in future blog posts. To get started deploying SecureSphere in your ECS environment today, try our SecureSphere offering on the AWS Marketplace.

Source: https://www.imperva.com/blog/2017/10/protect-aws-ecs-securesphere-waf/?utm_source=linkedIn&utm_medium=organic-social&utm_content=aws-ecs&utm_campaign=2017-Q4-linkedin-awareness

 



Practical Tips for Personal Online Security

Category : Imperva

As a cybersecurity professional I write about enterprise security on a daily basis. But with the start of National Cyber Security Awareness Month (NCSAM), I was inspired to switch gears and write about personal security, given this week’s theme of simple steps to online safety for consumers. So, with pen in hand (a keyboard, actually), here are some practical tips and best practices for protecting your personal information and identity online.

Simple Steps to Online Safety

Obviously there are many things one can do to stay safe in today’s cyber world; however, many of them require deep knowledge of security and/or a significant level of effort.

Ideally you should select a different password for every site you use and store them all in a secure password manager with two-factor authentication every time you open the password manager, but this is not practical for most people. For me personally—and I live and breathe security every day—I started this practice about three years ago, and 174 of my 403 passwords are still duplicated.

So, with “simple” being the operative word, the tips here are for everyone, not just cybersecurity pros, and shouldn’t take more than 10 minutes to implement (plus two minutes to read this blog post!).

Two Quick Things You Can Do to Easily Boost Your Personal Cybersecurity

Many times in life we need to focus on what matters most, and cybersecurity is no different. From an online security standpoint, the two things that matter most are your smartphone and your email accounts.

Our smartphones are the center of our online life and identity. We use them anywhere from dozens to hundreds of times a day. They hold our most personal data in emails, contacts and cherished memories—pictures and digital assets, such as a chat from someone we care about or a video of a grandchild. And they connect us to the many different services we use through apps.

Protect Your Most Important Accounts

That said, I’m not going to tell you to install an antivirus program on your smartphone—that would be maybe number 25 on my list, and today I want to focus on the top two items. Those are to protect access to your most important accounts, which are:

  • Your mobile carrier account
  • Your primary and secondary email accounts

The reason I recommend that everybody take significant steps to protect these specific accounts is that they serve as the second factor for authentication into so many sites—online banking, investment, shopping, healthcare, education, travel, you name it.

The steps to implement are simple. For your online mobile carrier account and your primary and secondary email accounts:

  • Choose a strong, unique password (for each account) that you don’t use anywhere else and either remember it or save it in your password management application. Length is key: a password with 14 or more characters is very strong. Here are some other tips for creating a strong password.
  • Enable two-factor authentication on the accounts.

For those of you who have not set up two-factor authentication: most online businesses automatically assign your mobile device or a secondary email address as your second factor of authentication—that’s why protecting them is so important. Take your mobile carrier account: if it’s not well secured, adversaries can log into your account, request a new SIM card, pop it into their phone, and voila…now they own your second authentication factor. That may sound far-fetched, but it’s not. Cybercrime is everywhere, and consumers and businesses alike need to stay vigilant about protecting themselves.

Security Is for Everyone

With these two or three accounts extremely well protected, you can easily boost your personal cybersecurity.

Overall, maintaining clean password hygiene is very time consuming and not everyone has the knowledge or resources to do it, but implementing these two best practices for those key accounts is something anyone can do. Even my grandparents!

Source: https://www.imperva.com/blog/2017/10/practical-tips-personal-online-security/?utm_source=linkedIn&utm_medium=organic-social&utm_content=ncsam-security-tips&utm_campaign=2017-Q4-linkedin-awareness

Author: Shahar Ben Hador



Building a Security Risk Management Program

Category : Imperva

The frequency of data breaches today highlights the need to peel back the onion on security programs and identify a laser-focused mission and ultimate goal. As a compliance manager, I know the horror stories firsthand.

Let’s take a deeper dive into security and risk management basics to enable your program to add value for your business and help prevent breaches.

Security Risk Management Foundations

It all starts with a fundamental management-supported, skilled and budgeted security program. Security programs are not a cut-and-paste of your neighbor’s security program. Each program is unique and must be tailored to your organization and its risks, forming an integral component of an enterprise risk management (ERM) program.

The goal is to identify areas of risk to the organization—its people, processes, technology and environment—and to drive management to implement controls that limit the exposure. This, like any risk program, is a three-way balancing act between risk, cost and benefit.

How ISO 27001 Can Help

ISO 27001 jump-starts a program by providing a well-structured framework for developing an Information Security Management System (ISMS), driven by solid corporate requirements (see Figure 1). ISO 27001 covers the key areas required by a security program, as well as the details required within each area. Well accepted internationally, it helps satisfy customer requests for solid security programs—and future certification (if that is your goal).


Figure 1: Corporate risk drivers help determine the requirements for your security risk management program.

Knowing your organization gives you a huge advantage: the people, the culture, the IT infrastructure, the assets—you may even know the regulatory and legal requirements for information security. Knowing the processes and how things work (or how you think they do) is a big advantage, but it is only the beginning.

Meetings with key management and teams across the organization solidify connections and generate a detailed picture of the organization, its risk status and the controls across existing processes. The data collected, once analyzed, provides input into the ISMS program, including assets, risk owners, control owners, risks and more. You will find that time brings success if you stay engaged in the process.

I have found that employees and consultants who do not spend 100% of their time on the ISMS provide limited value to a continuous risk program, unless the organization is very small. Business is constantly changing, and with it the risks; not being in the “zone” constantly depletes the value of your risk program. So, where to from here?

Identify Your Assets / Ask the Right Questions

Breached organizations might ask these questions:

  • Did we focus on ‘the gold’—or assets, as we like to call them in the info security world—such as customer credit card numbers and biometric data?
  • Did we fully understand the threats and risks to the business?
  • Was management made aware of the output of the above (if any)?

Risk management exists to protect an organization’s key assets, so start identifying them. Assets include the information, processes, systems, infrastructure and people in the organization.

A significant impact to any of these can affect the core business and ultimately management’s core objectives. And threats are not only IT’s problem—threats take advantage of vulnerabilities in any area of the business (Figure 2).


Figure 2: Layered security is key. Threats take advantage of vulnerabilities in any area of the business.

Start by asking:

  • What are my most sensitive assets?
  • What are my areas of highest risk to them?
  • How are we protecting those assets? Think people, process, technology and physical security.
  • Does that approach make sense?
  • What risk (residual risk) remains and is that acceptable to management?

Answering these fundamental questions and making decisions leads to building a security risk program. Outlined below are solid building blocks for a program, with a focus on three key areas.

Building the Protection Program

ISO 27002

The first makes use of the ISO 27002 standard controls to focus on the relevant business areas and their baseline implementation guidelines. With over 100 controls outlined in detail, it provides an excellent starting point.

Documentation

This is a tough one, but establishing a solid program requires documentation that aligns with the business processes and is reviewed and approved by management. There are three main reasons why documentation helps build integrated security controls (and why not to treat it as an add-on):

  • Gaining clarity of the actual processes
  • Officially assigning the control owners responsibility to perform the process and related controls, as outlined
  • Management review and approval of the new processes, generating commitment to the process via responsibility

NOTE: Generic purchased or provided documents are a great start, but unless tailored to your organization they serve little purpose. Refer to ISO 27001 for more details on the documentation process.

Team Effort

One cannot work in isolation when building the ISMS program. You need a team effort, relying on other risk-focused areas of the organization similar to yours. Your risk buddies include legal, IT and finance; there may be other risk partners, depending on the size of your organization. Experienced internal or external assessors are an integral part of the team, performing the technical assessments, audits and reviews that identify gaps and the places where threats could claim a victory.

Be Kind, Tough and Smart

Info security professionals must be kind, tough and smart (in no specific order). When building the security risk program, many look at us—the auditor or compliance manager—as the enemy (or worse!). But like any good relationship, you must appreciate what each party brings to the table—understand that each person has their own responsibilities and unique challenges in performing their job for the organization.

As the security SME, you will often need to stand your ground on matters of security recommendations and best practices, but strive to do so in a matter-of-fact way. Once the security program begins to show value to the business and stakeholders, any adversarial feelings usually start to change. This is not an overnight reaction—it takes dedication and focus—but as the subject matter expert, you are the one helping them manage their risks so they stay out of trouble.

Your smarts will help you shine: you will begin to gain the compliance managers’ trust, help keep them honest, and reduce the threats and business risks—which is the ultimate goal. In some cases, you will even help support their budget requests for resources, additional infrastructure and more.

Useful Resources

Hopefully these tips prove helpful as you build out a security program and work with internal stakeholders and compliance team members. Below you’ll find links to additional information that could be useful.

ISO 31000 Risk Management

COSO Enterprise Risk Management

 

Source: https://www.imperva.com/blog/2017/09/building-a-security-risk-management-program/?utm_source=linkedIn&utm_medium=organic-social&utm_content=security-program&utm_campaign=2017-Q3-linkedin-awareness

Author: David Lewis

 



How to Deploy SecureSphere WAF on Azure

Category : Imperva

If you host apps in the cloud, then you need security in the cloud. The Imperva SecureSphere Web Application Firewall (WAF) identifies and acts upon dangers maliciously woven into innocent-looking website traffic, both on-premises and in the cloud, protecting against:

  • Technical attacks such as SQL injection, cross-site scripting and remote file inclusion that exploit vulnerabilities in web applications;
  • Business logic attacks such as site scraping and comment spam;
  • Botnets and DDoS attacks; and
  • Account takeover attempts, blocked in real time before fraudulent transactions can be performed.

In this post, we’ll walk through the steps needed to deploy a SecureSphere WAF to protect an existing Azure-based web environment. Imperva also provides a quick-start Azure Resource Manager (ARM) deployment template which could be useful as a reference for automating the deployment process (for more details see “Deployment Kit ARM Template” below).

Deploying SecureSphere WAF on Azure

General Architecture

A typical deployment of SecureSphere WAF on the Azure platform includes the following elements (see Figure 1):

  • SecureSphere Management Console (MX): The MX is required to specify networking rules, manage security configurations, handle security violations and produce reports.
  • Layer of SecureSphere WAF Gateways: These are the WAF instances that process the traffic and apply the security.
  • External Load Balancer: The load balancer will distribute the traffic between the deployed WAF gateway instances.
  • Internal Load Balancer: The internal load balancer distributes traffic from the WAF gateways among the deployed web servers.


Figure 1: Typical Azure deployment environment with SecureSphere WAF

Before You Begin

Make sure you have the following prerequisites on hand/in place before you begin deployment:

  • Imperva License File. The license can be obtained through Imperva.
  • Virtual Network and Subnets, in which the WAF instances will be deployed.

Deploying SecureSphere Management Server

The first step in deploying the Imperva SecureSphere WAF is to deploy the SecureSphere Management Server. Below are the steps.

  1. Navigate to the Azure portal: https://portal.azure.com
  2. From the tiles, select Marketplace (or select Browse > Marketplace), and search for Imperva. Select the latest version of SecureSphere Web Application Firewall.
  3. Create a new deployment. Specify the required parameters, including:
    1. Deployment name; user name; authentication details; resource group.
    2. Networking settings.
    3. Security settings.
      Hint: Make sure that the MX is not accessible from the internet.
    4. Select the desired machine type. Instance type A3 and above is recommended. More powerful instance types may improve the WAF performance.
  4. Launch the “First Time Login” operation:
    1. Once the virtual machine is ready, connect to it using SSH.
    2. Launch the operation by typing ftl at the command prompt. Follow the instructions on screen, and make sure to select component type Management. For detailed information on the FTL process, consult the SecureSphere deployment guide.


  5. After the FTL operation finishes successfully, upload the license:
    1. Log in to the web console: point your browser to https://<Your Management IP address>:8083.
      Hint: Accessing the web console may require using a hop server if access from the internet is not allowed.
    2. Enter admin username credentials.
    3. Upload the license file in the license window.

Deploying SecureSphere WAF Gateways

After you’ve completed the steps above, follow these steps to deploy a highly available stack of SecureSphere WAF gateways.

  1. Deploy SecureSphere WAF Gateway instances. Repeat steps 1-4 from the “Deploying SecureSphere Management Server” section.
  2. To ensure high availability, add all WAF Gateways to an Availability Set.
    Azure guarantees that machines in the same Availability Set are in totally separate fault domains, and therefore are not vulnerable to the same local failure.
  3. Launch the “First Time Login” operation on each Gateway machine:
    1. Once the virtual machine is ready, connect to it using SSH.
    2. Launch the operation by typing ftl at the command prompt and follow the instructions on screen.
      Make sure to select component type Gateway.
      This operation will also bind the Gateway to the Management Server (by specifying the MX IP address). For detailed information on the FTL process, refer to the SecureSphere deployment guide.

Configure Networking

After the SecureSphere WAF MX and Gateways are in place, it’s time to configure the networking and allow the traffic to flow from the External Load Balancer to the Internal Load Balancer(s):

  1. Access SecureSphere console via a web browser using the following path: https://<Your Management IP address>:8083 and log on.
  2. Place the Gateways in the same Gateway Group; this way all the gateways will apply the same routing and security policies.


  3. For each Gateway in the Gateway Group, create an alias—a mapping of network interfaces. Give all the aliases in the same Gateway Group the same name. The alias will be used when configuring networking rules.
  4. Within the Site Tree configurations, create a Server Group and an HTTP Service.
    Server groups are a representation of one or more servers located in a specific site.
    Web services represent the services that SecureSphere monitors.
  5. Configure routing:
    1. In the HTTP Service Reverse Proxy configuration, create Reverse Proxy rules. Every rule created will direct the traffic to a different destination (for example, a different Internal Load Balancer). For detailed information on the configuration process refer to the SecureSphere deployment guide.


Testing the Deployment

Once the deployment process is finished, you then validate that everything is configured properly.

  1. To test the deployment, generate valid HTTP calls to the external load balancer and make sure you receive the expected response from the web servers (a scripted version of this check appears after this list).
  2. To test the security configuration, generate malicious HTTP requests. Log into the SecureSphere MX and check the alerts dashboard to see whether new violations are generated. This is the time to tune the security configuration.
    Hint: make sure that the security policies are applied to the web services you created.
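
For reference, here is a minimal smoke-test sketch of both checks using the requests library; the endpoint URL and the deliberately malformed parameter are illustrative assumptions, not part of Imperva’s documented procedure.

```python
# Send one benign request and one crude SQL-injection-style probe
# (pip install requests).
import requests

ENDPOINT = "https://app.example.com/"  # external load balancer DNS name

# 1) A benign request should pass through the WAF to the web servers.
benign = requests.get(ENDPOINT, timeout=10)
print("benign request:", benign.status_code)

# 2) The probe should raise a violation on the MX (and be blocked once
#    the WAF runs in active mode).
probe = requests.get(ENDPOINT, params={"id": "1' OR '1'='1"}, timeout=10)
print("malicious probe:", probe.status_code)
```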

When security is properly tuned, you can switch to “active” mode and start blocking malicious traffic.

Deployment Tips

Consider these helpful hints for your deployment:

  • Auto-Scaling: It’s possible to deploy an Auto-Scaling SecureSphere Environment. Auto-scaling allows you to automatically launch new WAF instances as the load increases. More information is available in the SecureSphere documentation.
  • Static IP Addresses: By default, new machines in Azure are created with a dynamic private IP address. Since this can cause communication problems between the different SecureSphere elements if the IP address changes after restart, you must configure the IP address assignment as static after the machine is deployed.
  • External Load Balancers and Session Stickiness: As the SecureSphere WAF Gateway is sensitive to session state, it is highly recommended that you enable session stickiness on the external load balancer.
  • VNet-to-VNet Connection: In a situation where there is more than one VNet, and there are Gateways on all the VNets but only one Management Server on one of the VNets, you may use a VNet-to-VNet connection to enable communication between the VNets to enable the Management Server to connect with all the Gateways. For more information, see the Microsoft Azure documentation.
  • TLS/SSL Termination: TLS/SSL termination can have a major impact on performance. Terminating the encryption before the WAF will significantly improve performance. For that purpose, Azure Application Gateway can be used (or any other load balancing solution).
  • Automated Deployment: The deployment process can be automated. Azure Resource Manager or any other orchestration tool can be used. SecureSphere provides vast REST API support for automation purposes.
  • Deployment Kit ARM Template: Imperva provides a Deployment Kit ARM template for quick deployment. The kit will create a virtual network, SecureSphere management console and scalable WAF, including a Load Balancer. This template could be useful as a reference for automating the deployment process. To deploy the Deployment Kit:
    • Navigate to the Azure portal: https://portal.azure.com
    • From the tiles, select Marketplace (or select Browse > Marketplace). Search for SecureSphere WAF Deployment Kit and follow the instructions.

Learn more about Imperva SecureSphere for Azure.

Source: https://www.imperva.com/blog/2017/09/secure-azure-deployments-securesphere-waf/?utm_source=linkedIn&utm_medium=organic-social&utm_content=azure-waf&utm_campaign=2017-Q3-linkedin-awareness

Author: Offir Zigelman



Gartner Again Named Us a Leader in WAF

Category : Imperva

We added flexible pricing. You get the easy decision.

Gartner, Inc. has released the 2017 Magic Quadrant for Web Application Firewalls (WAF). Read the new report and see why Imperva has been named a WAF leader for four consecutive years.

Download



Discovering a Session Hijacking Vulnerability in GitLab

Category : Imperva

GitLab is a widely used SaaS provider that focuses on developer-related needs, including Git repository management, issue tracking and code review. During a recent pen test of GitLab (I wanted to see if the service was a good fit for us at Incapsula), I was surprised to come across a vulnerability that leaves users exposed to session hijacking attacks.

While the vulnerability was discovered a while back, we wanted to wait until GitLab had a chance to review and address the issue. They’ve since initiated a series of fixes and approved our disclosure of the vulnerability.

In the following post, I will describe the vulnerability and the steps that GitLab has taken to fix it. I hope that this can be of value to other service providers who might be dealing with similar issues.

First, a bit about session hijacking.

What is Session Hijacking?

Session hijacking is a well-known attack involving the interception of session tokens that identify individual users logged into a website. An attacker can use a hijacked token to access a user’s account, make illegal purchases, change login credentials and access credit card details, just to name a few of the potential consequences.

Methods for stealing session tokens include: man in the middle (MITM) attacks, in which forged authentication keys are used to pass off a connection as secure; brute force attacks, in which a botnet executes millions of requests using random session IDs until an authorized token is found; and SQL injections, in which malicious SQL code is used to access sensitive data.

Uncovering the GitLab Vulnerability

My first indication that there might be an issue with the GitLab service came when I saw that my session token was fully visible in the URL.

Fig. 1: Token is visible in URL.

 

A simple copy/paste of the token granted me access to every actionable item on the GitLab platform, e.g., user dashboards, account information, individual projects and website code. To make sure this wasn’t a simple glitch, I used the same token on different browsers and machines—all with the same result.

Fig. 2: Token is private, meaning it does not expire.

 

What’s more, I saw that GitLab authenticates its users with persistent private session tokens. Once issued, they never expire, no matter how long a user has been inactive, or even if they log out of their account.

Practically speaking, this means that session tokens are left exposed and vulnerable to any of the attack methods outlined above. Because they do not expire, a stolen token can be used at any time, even months after the theft.
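
For contrast, here is a minimal sketch of the control whose absence is described above—session tokens that expire server-side. The token format, TTL and in-memory store are illustrative assumptions, not GitLab’s implementation.

```python
# Issue and validate expiring session tokens.
import secrets
import time

SESSION_TTL = 3600  # seconds; expire tokens an hour after issuance
_sessions = {}      # token -> (user_id, issued_at); a database in practice

def issue_token(user_id):
    token = secrets.token_urlsafe(32)  # ~256 bits of randomness
    _sessions[token] = (user_id, time.time())
    return token

def validate_token(token):
    record = _sessions.get(token)
    if record is None:
        return None
    user_id, issued_at = record
    if time.time() - issued_at > SESSION_TTL:
        del _sessions[token]  # a stolen-but-expired token is useless
        return None
    return user_id

tok = issue_token("daniel")
assert validate_token(tok) == "daniel"
```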

To make matters worse, the tokens were only 20 characters long, leaving them susceptible to brute force attacks. Given their persistent nature and the admin-level access they granted, this added up to a real security concern.

Patch Plans

I first contacted GitLab about the vulnerability on May 18th of this year. They informed me that I wasn’t the first to point out the threat—later, I even saw it mentioned on one of their support forums.

Since then, I’ve been in ongoing communication with GitLab regarding their patch plans. By now, they’ve implemented the following measures:

  1. Replacing private tokens with RSS tokens for fetching RSS feeds to avoid exposing session IDs.
  2. Expanding personal access tokens that offer role-based access controls. These provide the same functionality as private tokens, albeit with better security.

Additionally, GitLab is gradually phasing out private tokens altogether, a process that is set to be completed in the near future.

Session Hijacking Protection IncapRules

Session hijacking is a serious threat to online users’ privacy, money and identity. Protection comes not only from plugging site vulnerabilities, such as the one discussed above, but also from safeguarding against the attacks (e.g., MITM, brute force) that attempt to exploit them.

Incapsula mitigates these attacks using our proprietary rule engine, IncapRules.

While waiting for GitLab to issue a patch, we created one such rule that protects against brute force attacks by allowing private tokens only on specific types of requests (i.e., atom requests).

This means that an attacker who has obtained a session token will only be able to read, not add or modify, user information.

If you’re one of our enterprise customers and aren’t using IncapRules, we strongly suggest you talk to your sales rep.

This free feature grants granular control over your security settings, allowing you to safely mitigate numerous attack scenarios, including the one we talked about in this post.

 Source: https://www.incapsula.com/blog/blocking-session-hijacking-on-gitlab.html?utm_source=linkedin&utm_medium=organic&utm_campaign=2017_q3_gitlabhijacking
Author: Daniel Svartman


Data Protection and the GDPR Job Market

Category : Imperva

The May 2018 deadline for full GDPR compliance will be upon us all before we know it. The GDPR will affect all organizations—regardless of their location—that handle personal data coming out of the EU. Article 37 of the GDPR requires organizations to retain a data protection officer (DPO) if, among other reasons, the organization’s core activities require “regular and systematic monitoring” of personal data on a “large scale.”

Article 39 of the GDPR requires a DPO to monitor an organization’s compliance with the GDPR and its own internal policies to ensure the proper care and use of personal data. To do so, DPOs must remain current regarding data protection laws and practices, conduct internal privacy assessments, and ensure that an organization’s data compliance matters are up-to-date.

Given the number of positions that need to be filled and a global skills shortage, time is starting to run short. According to the International Association of Privacy Professionals (IAPP):

“…once the GDPR takes effect, at least 28,000 DPOs will be needed in Europe and the United States alone. Applying a similar methodology, we now estimate that as many as 75,000 DPO positions will be created in response to the GDPR around the globe.”

All-encompassing DPO Responsibilities

A typical DPO will need to be able to address the following areas of data privacy and data security:

  • Data retention
  • Data anonymization and pseudonymization
  • Security risk assessment of current business practices involving personal data
  • Privacy impact assessment of new products, platforms, services or processes; vendor assessments and audits
  • IoT and breach management

The same IAPP article says:

“A single DPO may represent a group of undertakings or multiple public authorities or bodies. The GDPR requires a DPO to be ‘designated on the basis of professional qualities and, in particular, expert knowledge of data protection law and practices’ and the ability to fulfill the tasks designated under Article 39. These tasks involve regulatory compliance, training staff on proper data handling, and coordinating with the supervisory authority, with an ability to understand and balance data processing risks.”

Misleading Number of Advertised Positions

Being curious about this enforceable GDPR requirement, Imperva conducted a cursory survey of IT security professionals at the recent Infosecurity Europe event. Of the 310 respondents, 79% acknowledged that their organization is already preparing to meet the GDPR, and 67% already had a DPO on staff.

But what about the 21% of organizations that aren’t presently working toward GDPR compliance, and the 22% that haven’t yet hired a DPO? It was eye-opening to learn that 52% weren’t planning on hiring a DPO until the second half of 2018 or beyond—after GDPR enforcement commences.

Research into GDPR Job Listings

That surprising data aside, Imperva decided to investigate online job listings as organizations worldwide seek to name a designated DPO. We learned that the DPO isn’t the only GDPR-related position that organizations are looking to fill.

Here are some key findings from our research:

  • There will be a growing demand to fill DPO openings, especially contract positions.
  • In our analysis of a prefiltered subset of over 18K Indeed.com job postings from 32 countries, nearly 5.8K matched the search terms GDPR, DPO, data protection or data privacy.
  • The US has the second-most job listings behind the UK—ahead of all other European countries.
  • DPO recruitment will likely accelerate later this year and on into next as the enforcement deadline fast approaches.
  • This is especially true for big data positions: there is a growing expectation that IT and business staff will take on increased data privacy and protection responsibilities. For example, one European data scientist at Amgen—whose primary responsibility is clinical studies—is also expected to be “assessing, developing and executing data privacy compliance programs.”
  • With the focus on hiring information security, compliance and IT staff to support the GDPR, technology capabilities—such as data and records management, process automation and impact assessment tools—become essential to achieving compliance.
  • Our survey revealed that 55% of respondents expect AI or machine learning solutions to bolster DPO efforts, although they don’t foresee this happening until three to five years from now.

GDPR Salaries and Demand Growth

Due to the impending regulatory enforcement, there is high demand—and there are correspondingly high salaries—associated with GDPR jobs. For example, the UK IT Jobs Watch list (Figure 1) shows percentile ranges coupled with salaries that can approach £100K (nearly $130K USD). A related IAPP survey shows a global annual median salary of $106,500.


Figure 1: GDPR-related salaries in the UK

Figure 2 shows that the number of GDPR-related job vacancies in the UK has grown from zero in 2015 to over 300 this year, while Figure 3 shows the rise in GDPR-related job listings as a percentage of all advertised IT positions. Organizations are quickly looking to fill the requirement with contract jobs/consultants, and UK salaries are on the rise.


Figure 2: GDPR-related job vacancies in the UK


Figure 3: Postings citing GDPR as a percentage of all advertised IT positions (permanent and contract)

The UK leads the list of countries posting GDPR-specific jobs. The US is second, with roughly a quarter of the UK’s listings (Figure 4).


Figure 4: Top countries posting GDPR-specific jobs (breakdown of 455 job postings, shown as purple dots on the interactive map). [1]

Roles, Descriptions and Certifications

Of the nearly 5.8K Indeed.com openings we analyzed, fewer than 300 cited DPO as the primary role. All the others can be termed supporting positions—for example, a legal counsel or IT pro who will be responsible for data privacy and GDPR in addition to other duties (Figure 5).


Figure 5: Job roles distribution

We then used text analytics to get an overall perspective of the 455 GDPR-specific jobs. As shown in Figure 6, high-ranking counts of word groupings let us create bigrams and trigrams (n-grams for n=2 and n=3, respectively). Both aided us in our statistical analysis as we examined keywords used in the job postings.


Figure 6: Word count analysis. [2]
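
The original analysis used R’s text mining package (see footnote [2]); a minimal Python analogue of the bigram/trigram counting, run here on a two-line toy corpus, looks like this.

```python
# Count bigrams and trigrams across a (toy) corpus of job postings.
from collections import Counter
import re

postings = [
    "data protection officer for GDPR compliance programs",
    "monitor GDPR compliance and data protection impact assessments",
]

def ngrams(text, n):
    words = re.findall(r"[a-z]+", text.lower())
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

bigrams = Counter(g for p in postings for g in ngrams(p, 2))
trigrams = Counter(g for p in postings for g in ngrams(p, 3))

print(bigrams.most_common(3))   # e.g. [('data protection', 2), ...]
print(trigrams.most_common(3))
```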

Our study then looked at certifications cited in the Indeed.com postings (Figure 7). The relevant certifications are issued by (ISC)2, ISACA and the IAPP. But while the latter is the largest international privacy organization, there is no official GDPR or DPO certification. This could make it difficult for hiring companies and job seekers to identify the “right” or “best” certification in relation to filling a DPO position.


Figure 7: Specified certifications in job listings

Desired Skill Sets

Next, we looked at the specified tools, database knowledge and programming languages. Somewhat surprisingly, Microsoft Excel topped the list (Figure 8). Among databases and programming languages, Hadoop and Scala are cited—clearly the domain of big data and data science pros.

Figure 8: Specified skill sets

One UK ManpowerGroup report points to the GDPR as driving big data jobs. Our analysis suggests that such skill set demand comes from a privacy-by-design objective, with responsibilities including GDPR compliance support.

What GDPR Compliance Means to You

Where does your organization stand with respect to GDPR compliance? Whether you’re on the inside, looking to hire a DPO, or are on the outside seeking a DPO position, this job analysis should provide you with good insights about factors and timing in addressing the GDPR challenge.

While it’s expensive to hire GDPR and DPO professionals, organizations need to budget accordingly. In addition, certain technologies can help you address the GDPR business need—delivering benefits in relation to data security, process efficiencies, records management and risk assessment.

[1] We created the interactive, GDPR-specific job openings map using the open source Leaflet for R package that uses the public OpenStreetMap GIS initiative.

[2] In our study we used the R text mining package to create a text corpus and a document-term matrix. From counts of the high-ranking words, we were able to derive the bigrams and trigrams.



Challenges of Big Data Security

Category : Imperva

Database security best practices also apply to big data environments. The question is how to achieve security and compliance for big data environments given the challenges they present: volume, scale and multiple layers/technologies/instances make for a uniquely complex environment. Not to mention that some of the big data stored and processed can be sensitive data. Who has access to that data within your big data environment? Are the environment and the data vulnerable to cyber threats? Do your big data deployments meet compliance mandates (e.g., GDPR, HIPAA, PCI and SOX)?

Drew Schuil, Vice President of Global Product Strategy, returns to talk about big data security in today’s Whiteboard Wednesday. Learn about the challenges associated with securing big data and requirements for protecting it as you build out your plan.


Video Transcription

Hi, welcome to Whiteboard Wednesday. My name is Drew Schuil, Vice President of Global Product Strategy at Imperva, and today’s topic is Challenges of Securing Big Data.

I meet with a lot of customers and chief security officers, and we talk about protecting databases and file systems—structured and unstructured data. And when I bring up big data, it’s often an afterthought or really hasn’t been looked at yet. So we want to talk about some of the issues and how to get in front of this problem—this opportunity—as it arises.


The Big Data Trend

Let’s look at some of the trends. The biggest thing to note here is that big data is growing, and it’s coming fast. IDC is predicting double-digit growth in big data lakes within large enterprises, and part of the reason data collection is exploding is the proliferation of IoT (Internet of Things) devices. Whether it’s the consumer market or the business environment, these devices are collecting metadata that’s very valuable to organizations—for data analytics, market trends and consumer activity. More and more of the data being collected is thrown into these big data lakes.

That leads us to our next trend, which is sensitive data. Most organizations I talk to say, “Look, we’re not storing credit card numbers. We’re fairly/100% certain about that.” However, when we start looking at some of the newer regulations that have teeth, like Europe’s GDPR, the scope is potentially wider when we talk about personally identifiable information (PII). Things like first name, last name and email address—little pieces of information that perhaps were benign before—are coming into compliance scope for data protection, and [it’s important to make] sure that we’ve got a security strategy in mind.

Big Data Security Requirements

Let’s look at the framework. As you can see—access control, threat filtering, etc.—these are really the same concepts we had [for relational database security], but with some new twists when we talk about big data.

  • Access Control and Threat Filtering: Specifically, with the first one, access control. When we talk about database environments as an example, they are fairly locked down. You’ve got DBAs, you’ve got least permission, auditing and entitlements reviews if you’re in financial services. However, within big data environments, because of the nature of big data and the analytics and the people that need to have access to it, a lot of times permissions are granted on a very wide basis. It’s a little different when we’re thinking about production databases versus production big data environments, because more and more people have access. With that, it increases the landscape for threats. Whether it’s endpoint threats and malware infections and account takeover, whether it’s malicious insider use cases—someone gaining access to data that they shouldn’t have. Or a DDoS attack, someone that says, “Hey, this is a big data environment that’s critical to the business, I’m going to extort you by threatening to DDoS that environment.” The same types of threats that we see with other business applications.
  • Activity Monitoring and Alerts: That leads to activity monitoring. I mentioned GDPR, Europe’s data privacy regulation, and activity monitoring and auditing. Being able to understand who is accessing what. Is that appropriate? Does that violate some regulation or data security standard within the organization? And then being able to get this information to the team that’s responsible for securing it. A lot of times that means feeding it from the monitoring tools into a SIEM or into a SOC or some other monitoring mechanism (a minimal forwarding sketch follows after this list).
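
As one illustration of that last point, here is a minimal sketch of shipping an audit alert to a SIEM over syslog using only Python’s standard library; the collector address and event fields are illustrative assumptions.

```python
# Forward a policy-violation event to a syslog-based SIEM collector.
import json
import logging
import logging.handlers

SIEM_ADDRESS = ("siem.example.com", 514)  # hypothetical UDP collector

logger = logging.getLogger("bigdata-audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=SIEM_ADDRESS))

event = {
    "source": "hadoop-cluster-01",
    "user": "analyst42",
    "action": "read",
    "object": "/datalake/pii/customers",
    "policy": "PII access outside business hours",
}
logger.warning(json.dumps(event))  # SIEM-side parsing is assumed
```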

[We’ve got the] trends—it’s taking off; big data is not something to ignore. And we’ve got the same requirements. In the next section we’ll talk about some of the challenges introduced inherently by big data.

Big Data Security Challenge #1 – the Data Itself:

So, we have the same security requirements but some very different challenges when looking at how to secure big data, and it starts with the three V’s: volume, velocity and variety.

  • Volume: One of the benefits of a big data environment is that it can handle massive amounts of data and actually make sense of it and crunch it in a lot of different ways to produce valuable results for the business.
  • Velocity: The next challenge is velocity. Particularly within high tech environments or retail or banking, where decisions need to be made very quickly on this data, having a security solution that is real time—not only for alerting and monitoring, but also for blocking—and that can keep up becomes a challenge when we’re balancing cost versus risk.
  • Variety: The third issue is variety. Because of, I’d say, the relaxed permissions that we have within big data—the number of people from different departments and access points coming in and doing different things to the data—it really becomes a challenge when we start talking about data discovery and classification. Which data is sensitive, so that I can have some focus and scope? And then how do I apply policies against that if I’m having a challenge classifying the data…and I’m also having a challenge classifying the users and permissions and whether they should or shouldn’t be accessing the data? It really compounds the problem when we look at big data in the context of these three V’s.

Big Data Security Challenge #2 – the Environment:

The second challenge here is the environment.

  • Multiple Layers: When we look at big data environments, it’s not as simple as our traditional, let’s say, database environment, where we’ve got an application talking to an Oracle database and a pretty clear, crisp understanding of where we need to put in controls and blocking points. If we look at our diagram over here, we’ve got multiple different layers from distributed storage and querying layers to different management applications. Look at this environment and just the complexity of it. It should look much more difficult from a security perspective than something, again, like an Oracle or DB2 or SQL server stand-alone type of an application to protect.
  • Different Technologies: We’ve also got different technology mixed into each big data environment—NoSQL, NewSQL, data warehouse, BI tools. You’ve got all these different types of technologies within the environment, so again it’s not as cookie-cutter as we’re used to from the past.
  • Multiple Instances/Dispersed Data Stores: You may also have multiple instances dispersed over a wide geography, particularly if we’re dealing with a large multinational, like a retailer, crunching data across multiple regions. Now you’ve got to consider not only the complexity of a single environment, but replicate that and have each security deployment talk to the others across a wide geography. You start to see the challenges of securing big data environments.

Big Data Security Challenge #3 – People:

All right. The third challenge is people, and as we’ve discussed in other sessions, people can often be the weakest link in security, especially in an environment as complex as big data. The people most adept at administering a big data environment are computer scientists, PhD types, people who will be focused on anything but security: making the system work fast and getting accurate results. Security is the last thing on their minds, and compounding that, again, is the privileged access problem.

The nature of these environments is very different from a traditional database environment, where production is very locked down and you might be doing data masking, a best practice, for pre-production and test. A big data environment is often much more open: you’ve got developers with largely unrestricted access to potentially sensitive data, and security, again, is an afterthought when we talk about people accessing these systems.

Where to Start?

We’ve talked about big data trends and some of the challenges in securing big data. Now let’s talk about what you can do next and where you can start. This section is a little bit of motherhood and apple pie, but it includes some interesting tidbits we’ve heard from the customers we’ve talked to.

  • Raise Awareness: We’re actually seeing financial services and retail, some of the early adopters, implement solutions like Imperva to address the security and compliance requirements for big data. And what they’ve told us is they’ve started with raising awareness within the organization. Basically saying, “Hey, we’ve got databases, we’ve got file systems, we’ve got cloud that we need to deal with. Big data also falls into our data security strategy.” Just raising awareness within the organization, so it doesn’t come as a surprise.
  • Proactively Interview Business Units: Then, proactively interview the business units. Talk to marketing, the CRM teams, the customer support teams, any of the business units that may be early adopters of big data, so that you can get ahead of those projects instead of reacting later to surprises.
  • Develop a Strategy/Build a Plan: Finally, develop a strategy and build a plan. It’s much easier to respond quickly to the executives and say, “Hey, I knew this was coming. Here’s our plan; I’ve had it in place for the last six months, 18 months.” The point is simply to get ahead of these issues.

Additional Security Requirements for Big Data

Some of the requirements we’ve heard from early adopters rolling out Imperva to protect big data go back to those complexities: the solution has to address the three V’s. It’s got to be scalable, handle very high-performance environments, and deploy in a distributed fashion across multiple geographies. It also has to integrate with the rest of the security ecosystem, much as we see in protecting structured and unstructured data: integrate with SIEM, and pull in information about the risk profiles of users interacting with data, profiles that may contain information gathered not by Imperva but by some of your other security tools.

And they want to leverage existing solutions. If they’ve already deployed a solution like Imperva to audit their databases, their file systems and SharePoint, to audit their cloud systems, and to secure their web applications, why not take that same console, the same policy engines, the same framework they’ve already developed, and apply it to the next data type: big data? That’s what a lot of organizations are looking for: not yet another vendor, but a way to consolidate their vendor portfolio.

Then, finally, actionable alerts. This really goes back to providing context. In a database environment we’re talking about millions and millions of events, say in Oracle or DB2; when we shift to big data, it could be billions or trillions of events. So it becomes even more important to have things like machine learning that can make sense of good versus bad and inappropriate behavior, so that we can send actionable alerts, a single-digit number of them rather than a flood, to the rest of the ecosystem: the SIEM, the SOC and so forth.
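
As a toy illustration of that reduction, and not a description of how any particular product’s models work, here is a sketch that baselines daily query volume per user and surfaces only extreme outliers, turning a pile of raw events into a short list of alert candidates:

    from statistics import median

    def anomalous_users(daily_query_counts, factor=10):
        # Flag users whose daily query volume exceeds `factor` times the
        # population median -- a crude stand-in for behavioral baselining.
        m = median(daily_query_counts.values())
        return [u for u, c in daily_query_counts.items() if m > 0 and c > factor * m]

    # Hypothetical counts: one account queries far more than its peers
    counts = {"alice": 120, "bob": 95, "carol": 110, "svc-etl": 9800}
    print(anomalous_users(counts))  # ['svc-etl']

Real systems profile many more features (time of day, tables touched, query shapes), but the principle is the same: score behavior against a baseline and alert only on the exceptional.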

So, that’s our big data talk. I hope you found it helpful and please tune in for additional whiteboard sessions. Thanks.


  • 0

Addressing Data Across Borders for the GDPR

Category : Imperva

Most enterprises today do business across the globe, with databases in multiple countries and DBAs or users in different regions who have access to those databases. With the GDPR mandating privacy requirements for the personal data of European Union (EU) residents and visitors, it is important for an organization to know and control who accesses that data and what those with access authority can do with it.

Chapter 5 of the GDPR addresses transfers of personal data to third countries or international organizations, and Article 44 of Chapter 5, “General principle for transfers”, outlines the requirement to prevent unauthorized data transfers outside of EU member states.

Compliance with GDPR Article 44 requires either:

  • Blocking transfer of personal data outside the EU; or
  • Ensuring adequate data protection

In both cases, the starting point for compliance with the GDPR is data discovery and data classification followed by implementation of strong security policies, audit policies and reporting.

Imperva SecureSphere can help organizations comply with the GDPR by blocking the transfer of personal data outside the EU and ensuring adequate data protection. In this post, I’ll review how the SecureSphere database security solution can not only classify sensitive data and prevent it from crossing a specific geographic location to meet the Article 44 requirement, but also generate audit logs and reports that can assist with investigations, reporting mandates and data forensics (Figure 1).


Figure 1: Imperva SecureSphere helps enforce cross-border data transfers by mapping to GDPR requirements

Database Discovery

Many organizations are not aware of all the databases that exist on their network. Oftentimes a DBA may create a database, for example to test an upgrade, then forget to take it down, leaving a database containing potentially sensitive data unsecured and unmonitored. SecureSphere Database Discovery scans and reports on all the databases that exist on the network, providing detailed information on each, including IP address, port number, OS type and version (Figure 2). A conceptual sketch of such a sweep follows the figure.


Figure 2: Database Discovery scan results
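
Conceptually, discovery starts with a sweep of the network for listeners on well-known database ports. The Python sketch below shows only that basic idea; SecureSphere’s discovery is far richer, adding details like OS type and version as shown in Figure 2. The host addresses and port list are illustrative.

    import socket

    # Default ports for a few popular database engines (illustrative subset)
    DB_PORTS = {3306: "MySQL", 5432: "PostgreSQL", 1521: "Oracle",
                1433: "SQL Server", 27017: "MongoDB"}

    def discover_databases(hosts, timeout=0.5):
        # Attempt a TCP connection to each well-known port; an accepted
        # connection suggests a database listener worth investigating.
        findings = []
        for host in hosts:
            for port, engine in DB_PORTS.items():
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                    sock.settimeout(timeout)
                    if sock.connect_ex((host, port)) == 0:  # 0 means connected
                        findings.append((host, port, engine))
        return findings

    print(discover_databases(["10.0.0.5", "10.0.0.6"]))  # hypothetical hosts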

Data Classification

After database discovery, it is important to understand what kind of data exists in your databases; the goal is to find any sensitive or privileged information. SecureSphere can identify sensitive data using column names or a content-based search with regular expressions, making it highly accurate (Figure 3). A simplified illustration of content-based matching follows the figure.


Figure 3: Data classification scan results
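
Here is that simplified illustration of content-based matching. The patterns are deliberately loose; production classification rules are more precise and are typically combined with column-name hints to reduce false positives.

    import re

    # Illustrative content patterns for a few sensitive data types
    PATTERNS = {
        "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "iban":        re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    }

    def classify_value(value):
        # Return the sensitive-data categories a single column value matches.
        return [name for name, rx in PATTERNS.items() if rx.search(str(value))]

    print(classify_value("jane.doe@example.com"))    # ['email']
    print(classify_value("4111 1111 1111 1111"))     # ['credit_card']
    print(classify_value("DE44500105175407324931"))  # ['iban']

Scanning a sample of rows from each discovered column with rules like these, weighted by the column name, is the general approach behind the accuracy claim above.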

Security Policy

Security policies play a key role in protecting against known and unknown attacks and threats, and in complying with regulations and organizational guidelines. Say, for example, you have two DBAs in different countries trying to access a database in Germany. You would need to define and enforce security policies that ensure the DBAs access only the data they are authorized to access based on their location (Figure 4).

You can set up a security policy in SecureSphere that allows Mark, a DBA in Germany, to access the database in Germany, but blocks access by Franc, a DBA in Singapore, as Franc should not be allowed access due to his geolocation (Figure 5). A toy version of this check appears after the figure.


Figure 4: User role and location mapping

In our example, SecureSphere’s security policy tracks and blocks based on:

  • The user’s first name, last name and role
  • The country from which they are accessing the data
  • The query they are trying to run
  • The database they are trying to access, and whether that database contains any sensitive information


Figure 5: SecureSphere security policy blocks a DBA in Singapore from accessing a German database
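
The toy version of that geo-based check follows; the rule structure is invented for illustration and is not SecureSphere’s actual policy schema.

    # Per-database rule: which source countries may access it
    POLICY = {
        "db-germany": {"allowed_countries": {"DE"}, "contains_sensitive": True},
    }

    SESSIONS = [
        {"user": "Mark",  "role": "DBA", "country": "DE", "db": "db-germany"},
        {"user": "Franc", "role": "DBA", "country": "SG", "db": "db-germany"},
    ]

    def evaluate(session):
        # Allow the query only if the user's source country is permitted
        # for the target database; otherwise block (and alert).
        rule = POLICY.get(session["db"])
        if rule and session["country"] not in rule["allowed_countries"]:
            return "BLOCK"
        return "ALLOW"

    for s in SESSIONS:
        print(s["user"], evaluate(s))  # Mark ALLOW, Franc BLOCK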

Audit Policy

Auditing is necessary as it records all user activities, provides visibility into transactions, and creates an audit trail that can assist in analyzing data theft and sensitive data exposure.

In the snapshot below, you see a response size of “0” for the DBA in Singapore, confirming he was unable to run his query against the database in Germany, whereas the DBA from Germany has a response size of “178”, indicating he was able to execute the query and access the database (Figure 6). A sketch of how such records could be consumed follows the figure.


Figure 6: SecureSphere audit logs showing database activity
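
To show how such records can be consumed downstream, here is a minimal sketch that scans simplified audit rows shaped like the Figure 6 fields (the rows themselves are invented) and flags blocked attempts:

    # Simplified audit records modeled on the Figure 6 fields; values invented
    AUDIT_ROWS = [
        {"user": "Mark",  "country": "DE", "db": "db-germany", "response_size": 178},
        {"user": "Franc", "country": "SG", "db": "db-germany", "response_size": 0},
    ]

    def blocked_attempts(rows):
        # In this simplified log, a response size of 0 means the query
        # returned nothing -- here, because the security policy blocked it.
        return [r for r in rows if r["response_size"] == 0]

    for r in blocked_attempts(AUDIT_ROWS):
        print(f"blocked: {r['user']} from {r['country']} -> {r['db']}")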

Measurement and Reporting

SecureSphere can also create detailed reports with charts using multiple parameters such as user, database, schema, query, operation, response size, sensitive data access, affected rows and more (Figure 7). This information can be used to report on activity and helps maintain compliance with various regulations. A toy rollup in that spirit follows the figure.


Figure 7: Create and manage reports on database activity
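
And here is that toy rollup over the same kind of simplified records, the sort of per-user summary a compliance report might chart (fields invented for illustration):

    from collections import Counter

    rows = [
        {"user": "Mark",  "sensitive": True,  "blocked": False},
        {"user": "Franc", "sensitive": True,  "blocked": True},
        {"user": "Mark",  "sensitive": False, "blocked": False},
    ]

    # Count sensitive-data touches per user and total blocked attempts
    by_user = Counter(r["user"] for r in rows if r["sensitive"])
    print("sensitive-data touches per user:", dict(by_user))
    print("blocked attempts:", sum(r["blocked"] for r in rows))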

Watch our demo to learn more about how SecureSphere can address the GDPR requirement of preventing data from crossing a specific geographic location.

Source: https://www.imperva.com/blog/2017/08/data-across-borders-gdpr/?utm_source=linkedIn&utm_medium=organic&utm_campaign=2017_Q3_bordersgdpr

Author: Sumit Bahl

