Category Archives: Imperva


Pushing Incapsula SIEM Logs Directly to an Amazon S3 Bucket

Category : Imperva

Incapsula allows you to push your account’s SIEM logs directly to a designated bucket in Amazon S3. Pushing your Incapsula SIEM logs to cloud storage lets you examine your log data in new ways. For example, your Incapsula SIEM logs can be combined with SIEM logs from other platforms to give you a single source of security issues across your entire tech stack.

We’ll demonstrate how to configure Incapsula to push SIEM logs to an Amazon S3 bucket by following these five major steps:

  • Step 1 – Create an Amazon S3 bucket for your Incapsula SIEM logs
  • Step 2 – Create access keys for your AWS account
  • Step 3 – Copy a test file to your Amazon S3 bucket
  • Step 4 – Check your Amazon S3 bucket for the copied test file
  • Step 5 – Configure Incapsula to push SIEM logs to Amazon S3

Step 1 – Create an Amazon S3 Bucket for Your Incapsula SIEM Logs

As a first step, let’s create a new Amazon S3 bucket to hold our Incapsula SIEM log files.

  1. Use your web browser to sign in to your AWS account and go to the AWS Management Console.
  2. Select All services > Storage > S3.
  3. Click Create bucket to start the Create bucket wizard.
  4. In the Name and region step, enter a unique Bucket name, and select the Region where you want to store your bucket. Note: You cannot use the bucket name shown in the following illustration, incapsula-siem-logs, because it has already been used. Your bucket name must be globally unique. A best practice for avoiding bucket naming issues is to use a DNS-compliant name, such as incapsula-siem-logs.company_name.com.
  5. Click Next to go to the Set properties step.
  6. Recommended: Enable logging by clicking the Disabled link and specifying a target bucket and prefix for your logs. You can choose to store your log files in the same bucket as your SIEM logs or in a separate bucket. The optional target prefix you specify can help you identify access requests to your SIEM log bucket. Access log information can be useful in security and access audits. Click Learn more for additional information.


  7. Click Next to go to the Set permissions step, and then expand the Manage users section.
  8. Under Objects and Object permissions, make sure Read and Write permissions are enabled for the account Owner, and then click Next to go to the Review step.
  9. Check your configuration settings. If you need to make changes, click the corresponding Edit. When you are satisfied with your settings, click Create bucket.

You’ve now created a bucket with the configuration you need for holding your Incapsula SIEM log files.
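
If you prefer to script this step instead of using the console wizard, the bucket can also be created with the AWS SDK for Python (boto3). The following is a minimal sketch only; the bucket name, region, and access-log prefix are placeholder values you would replace with your own, and the logging call assumes the target bucket already permits S3 log delivery.

    import boto3

    BUCKET = "incapsula-siem-logs.example.com"   # placeholder; bucket names must be globally unique
    REGION = "us-west-1"                         # placeholder region

    s3 = boto3.client("s3", region_name=REGION)

    # Create the bucket in the chosen region.
    s3.create_bucket(
        Bucket=BUCKET,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )

    # Optionally enable server access logging into the same bucket under an
    # "access-logs/" prefix (the target bucket must allow S3 log delivery).
    s3.put_bucket_logging(
        Bucket=BUCKET,
        BucketLoggingStatus={
            "LoggingEnabled": {"TargetBucket": BUCKET, "TargetPrefix": "access-logs/"}
        },
    )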

Step 2 – Create Access Keys for Your AWS Account

Although as the account owner you can freely copy files to and from your new S3 bucket, enabling Incapsula to programmatically write to your Amazon S3 SIEM bucket requires that you use access keys for your AWS account. You can use one of the following two options to obtain access keys:

  • Use the IAM access keys of your AWS account – You can get these access keys by signing in to your AWS account and selecting IAM.
  • Create an access key based on the IAM account – You can create an access key separate from the ones already associated with your account.

Use the following steps to create an access key for your AWS root account:

Use your AWS account email address and password to sign in to the AWS Management Console.

Note: If you previously signed in to the console with IAM user credentials, your browser might open your IAM user sign-in page. You can’t use the IAM user sign-in page to sign in with your AWS account credentials. Instead, choose Sign-in using root account credentials to go to the AWS account sign-in page.

  1. In the top left, select Services > IAM (or, from the account menu at the top right, select My Security Credentials).
  2. Choose Continue to Security Credentials.
  3. Choose your account user name.
  4. Select the Security credentials tab.
  5. Scroll down and either use an existing access key or click Create access key.


  6. Choose your desired action.

To create an access key:

Choose Create access key. Then save the access key ID and secret access key to a file on your computer. After you close the dialog box, you can’t retrieve this secret access key again.


  7. Be sure to copy the Access key ID and Secret access key, or click Download .csv file.


You’ve now created an access key to use.
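
If you manage keys for a dedicated IAM user rather than the root account, the same key pair can also be generated programmatically. This boto3 sketch uses a hypothetical user name; note that the secret is returned only by this one call and cannot be retrieved later, exactly as in the console flow above.

    import boto3

    iam = boto3.client("iam")

    # Create a new access key for a hypothetical IAM user.
    resp = iam.create_access_key(UserName="incapsula-log-writer")
    key = resp["AccessKey"]

    # Record both values now: AWS will not return the secret access key again.
    print("Access key ID:    ", key["AccessKeyId"])
    print("Secret access key:", key["SecretAccessKey"])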

Step 3 – (Optional) Copy a Test File to Your Amazon S3 Bucket

At this point, it’s a good idea to make sure everything is working. You can do this by using the AWS command-line tools to copy a file from your computer to your S3 bucket. Following these steps also confirms that your AWS access key ID and secret access key are working.

  1. Install the AWS Command Line Interface. For step-by-step instructions and links to the AWS CLI for Linux, Microsoft Windows, and macOS, go to http://docs.aws.amazon.com/cli/latest/userguide/installing.html.
  2. From a command prompt, run aws configure.

Fill in the requested information as the AWS CLI prompts you for the following:

  • AWS Access Key ID – The access key ID that you generated. The access key ID is listed on the Your Security Credentials page.
  • AWS Secret Access Key – The secret key that you downloaded or copied and pasted for safekeeping. If you did not save your secret key, you cannot retrieve it from AWS – you must generate a new one.
  • Default region name – The region whose name you specified for your S3 bucket. This parameter must be specified using the region code with no spaces, such as us-west-1. For a current list of S3 region codes, go to http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region.
  • Default output format – Specify json, text, or table. For the purposes of pushing files from Incapsula, this setting does not matter.

You only need to specify these configuration parameters once per CLI installation. They remain in effect until you change them.

  3. Execute a directory listing of your bucket with the following command:
    aws s3 ls s3://bucket_name
    If successful, this command returns a list of zero or more files, depending on various settings, such as whether you have enabled access logs and whether any access has occurred that would result in log files.
  4. Copy a file to your bucket with the following command:
    aws s3 cp path_name/file_name s3://bucket_name
    If successful, this command returns the message:
    upload: path_name/file_name to s3://bucket_name/file_name

You’ve now installed and configured the AWS CLI, confirmed your AWS key ID and secret key, and copied a file from your local computer to your S3 bucket.
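
The same connectivity test can be run from Python with boto3, which picks up the credentials saved by aws configure. The bucket and file names below are placeholders; this is just a sketch of the upload-and-list check, not part of the Incapsula setup itself.

    import boto3

    BUCKET = "bucket_name"      # replace with your bucket name
    s3 = boto3.client("s3")     # uses the credentials saved by `aws configure`

    # Write a small local test file, upload it, then list the bucket contents.
    with open("test-file.txt", "w") as f:
        f.write("SIEM bucket connectivity test\n")
    s3.upload_file("test-file.txt", BUCKET, "test-file.txt")

    for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
        print(obj["Key"], obj["Size"])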

Step 4 – (Optional) Check Your Amazon S3 Bucket for the Copied Test File

To confirm that your file is in your S3 bucket, you can perform the following steps:

  1. Execute a directory listing of your bucket with the following command:
    aws s3 ls s3://bucket_name
    Among the list of files in your bucket, make sure that the list contains the file you copied in the previous step.
  2. Sign in to your AWS account and go to the AWS Management Console.
  3. Select All services > Storage > S3.
  4. On the Amazon S3 page, under Bucket name, click the name of the bucket you created for your Incapsula SIEM logs.
  5. Verify that the file you copied is listed.

Step 5 – Configure Incapsula to Push SIEM Logs to Amazon S3

Now that Amazon S3 is properly configured and you have your AWS access key, you’re ready to set up Incapsula to start pushing your SIEM log files to your S3 bucket.

  1. Use your web browser to go to https://my.incapsula.com/login, and then enter your Incapsula login credentials and click Sign in.
  2. Click Logs in the navigation panel.
  3. In the Logs Setup page, select Amazon S3.
  4. Enter the following:
  • AWS Access Key ID in the Access key field.
  • AWS Secret Access Key in the Secret key field.
  • Path name for your S3 bucket location in the Path field.
  5. Click Test connection to verify that all your entries are correct.

That’s all there is to configuring Incapsula to push your SIEM logs to an Amazon S3 bucket.
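
Once Incapsula begins pushing logs, you will likely want to pull them back out of the bucket for downstream processing, for example to combine them with SIEM logs from other platforms as mentioned earlier. The post does not describe the log file naming or format, so this sketch simply downloads every object under an assumed prefix.

    import boto3

    BUCKET = "bucket_name"   # your SIEM log bucket
    PREFIX = ""              # assumed; adjust if you configured a path in Incapsula

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    # Download each pushed log object to the local working directory.
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            local_name = obj["Key"].replace("/", "_")
            s3.download_file(BUCKET, obj["Key"], local_name)
            print("downloaded", obj["Key"])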

 

Source: https://www.incapsula.com/blog/incapsula-siem-logs-to-amazon-s3.html?utm_source=linkedin&utm_medium=organic&utm_campaign=2017_q2_siembuckets

Author: Farzam Ebadypour



DDoS Attacks Can Lead to Large Outages

Category : Imperva

The focus of the news media has been on massive DDoS attacks, with recent headlines proclaiming attacks in excess of 500Gbps. In this webinar, DDoS testing expert and NimbusDDOS founder Andy Shoemaker will use a live DDoS attack to demonstrate how even a tiny attack can cause a significant outage, including:

1. How a small gap can be responsible for a big outage
2. A demo of a live attack on a site
3. Methods you can use to identify and fix these risk areas
4. How to detect all layer 7 attacks automatically

Attend



Today’s File Security is So ‘80s, Part 2, Detect Suspicious File Access with Dynamic Peer Groups

Category : Imperva

In a previous post, we shared three primary reasons why the traditional, static approach to file security no longer works for today’s modern enterprises. Working groups are formed organically and are cross-functional by nature, making a black and white approach to file access control outdated—it can’t keep pace with a constantly changing environment and creates security gaps. Files can be lost, stolen or misused by malicious, careless, or compromised users.

We also introduced a new file security approach—one that leverages machine learning to build dynamic peer groups within an organization based on how users actually access files. By automatically identifying groups based on behavior, file access permissions can be accurately defined for each user and dynamically removed based on changes in user interaction with enterprise files over time.

In this post, we’ll review the algorithms used to create dynamic peer groups that identify suspicious file access activity and help solve the traditional access control problem.

Building Dynamic Peer Groups to Detect Suspicious File Access

Several steps are required to dynamically place users in virtual peer groups according to how they access data (see Figure 1).

First, granular file access data is collected and processed. Next, a behavioral baseline is established that accounts for every file and folder accessed by each user. Based on how they access enterprise files, the dynamic peer group algorithm assigns users who may belong to different Active Directory (AD) groups into virtual peer groups. If the algorithm does not have enough information to associate a user with a specific peer group, the user is placed in a new peer group in which they are the sole member. Once virtual peer groups are established, access to resources by unrelated users can be flagged; this enables IT personnel to immediately follow up on such incidents.


Figure 1 – Overview of suspicious file access detection process

Granular data inputs

Algorithm input comes from Imperva SecureSphere audit logs. These contain access activity that provides full visibility regarding which files users access over time. Each event contains the following fields:

  • Date and Time – Date and time of file request
  • User Name – Username used to identify requesting user
  • User Department – Department to which user belongs (as registered in Active Directory)
  • User Domain – Domain in which the user is a member
  • Source IP – IP that initiated the file request
  • Destination IP – IP to which the file request was sent
  • File Path – Path of requested file
  • File Name – Requested file name
  • File Extension – Requested file extension
  • Operation – Requested file operation (e.g., create, delete)

Architecture

The behavioral models are created daily and simulate a sliding window on the audit data. This lets the profile dynamically learn new behavioral patterns and ignore old and irrelevant ones. Additionally, the audit files are periodically transferred to a behavior analytics engine. This improves existing behavioral models and reports suspicious incidents.

The behavior analytics engine is divided into two components:

  • Learning process (profilers) – Initially run over a baseline period, profilers are algorithms that profile the objects and activity in the file ecosystem and relate it to normal user behavior. These include users, peer groups, and folders, as well as the correlation between the objects. Profilers are activated daily afterward, both to enhance the profile as more data becomes available, and to keep pace with environmental changes (e.g., when new users are introduced).
  • Detection (detectors) – Audit data is usually aggregated over a short period (less than one day) before being processed by the detector. Activated when new data is received, detectors pass file access data from the profiler through predefined rules to identify anomalies. They then classify suspicious requests, reporting each as an incident.

Create peer groups using machine learning algorithms

To build peer groups, data must first be cleansed of irrelevant information—including files accessed by automatic processes, those that are accessed by a single user, and popular files frequently opened by many users in the organization.

Now with clean data, Imperva builds a matrix of the different users (rows) and folders accessed over time (columns). Each entry contains the number of times a user has accessed a given folder in the input data time frame.

The matrix is very sparse because the majority of users do not access most folders; therefore, dimensionality reduction is performed on that matrix to reduce both the sparsity and the noise in the data. This leaves meaningful data access patterns, which become the input to the clustering algorithm.

A density-based clustering algorithm is used to divide the users within the organization into homogeneous groups called clusters. Members of a given cluster have all accessed similar folders, with a typical cluster containing about four to nine users. The process also ensures that clusters do not overlap: each user belongs to at most one cluster.
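
To make the pipeline concrete, here is a toy illustration of this stage, not Imperva's production code: a small user-by-folder count matrix is reduced with truncated SVD and then clustered with DBSCAN, a density-based algorithm. The matrix contents, component count, and DBSCAN parameters are all arbitrary assumptions.

    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(0)

    # Toy user-by-folder access-count matrix: 30 users, 200 folders.
    # Two synthetic "departments" each favor a different block of folders.
    counts = np.zeros((30, 200))
    counts[:15, :40] = rng.poisson(3, size=(15, 40))
    counts[15:, 150:] = rng.poisson(3, size=(15, 50))

    # Dimensionality reduction to tame the sparsity and noise.
    embedding = TruncatedSVD(n_components=5, random_state=0).fit_transform(counts)

    # Density-based clustering; a label of -1 means no peer group was found,
    # which corresponds to the single-member peer groups described above.
    labels = DBSCAN(eps=0.3, min_samples=3, metric="cosine").fit_predict(embedding)
    print(labels)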

Define virtual permissions to enterprise files

The notion of “close” and “far” clusters is used to define the virtual permissions model of each user. For every cluster, the algorithm determines which peer groups are close and which are far, based on the similarity between it and the other clusters. Distances are partitioned into two groups using a k-means algorithm; a smaller distance designates a closer cluster.

Each user is permitted access to folders accessed by others within their own cluster, or by users belonging to close clusters.
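
A minimal sketch of that close/far split, assuming you already have the distances from one cluster to every other cluster; the distance values here are invented for illustration.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical distances from one peer-group cluster to the other clusters.
    distances = np.array([0.2, 0.3, 0.25, 1.4, 1.6, 1.5]).reshape(-1, 1)

    # Partition the distances into two groups; the group with the smaller
    # center is treated as the set of "close" clusters.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(distances)
    close_label = int(np.argmin(km.cluster_centers_))
    close_clusters = np.where(km.labels_ == close_label)[0]
    print("close clusters:", close_clusters)   # indices 0, 1, 2 in this toy example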

Detect suspicious file access

The detector aspect of the algorithm identifies suspicious folder access. Within a profiling period, for example, user John’s access to a given folder is considered suspicious if the folder is only accessed by users belonging to clusters far from his.

Imperva CounterBreach automatically determines the “true” peer groups in the organization and then detects unauthorized access from unauthorized users.

Incident severity (e.g., high, medium or low) is a function of the number of users and clusters having accessed the folder during the learning period. The ratio between the first and second quantities implies severity; higher values indicate higher severity (many users grouped in a small number of clusters). Lower values (close to 1) indicate reduced confidence, as the number of users equals or approaches the number of clusters. Personal folders and files are given careful consideration when ranking severity.
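
The severity logic amounts to a simple ratio. A sketch is shown below; the thresholds are illustrative assumptions, since the post does not state the exact cut-off values.

    def incident_severity(num_users: int, num_clusters: int) -> str:
        """Rank severity by how many users, grouped into how few clusters,
        accessed the folder during the learning period. A ratio close to 1
        means lower confidence. Threshold values here are illustrative only."""
        ratio = num_users / max(num_clusters, 1)
        if ratio >= 5:
            return "high"
        if ratio >= 2:
            return "medium"
        return "low"

    print(incident_severity(20, 3))   # many users in few clusters -> "high"
    print(incident_severity(4, 4))    # ratio of 1 -> "low"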

Adding context to accessed files with dynamic labels

With the goal of providing sufficient context to security teams so they can understand and validate each incident, Imperva presents typical behavior of the user who performed the suspicious file access activity. In addition, a label is applied to each folder accessed during the incident; this helps SOC teams evaluate the content or relevance of the files in question.

In assigning a label to a folder, the algorithm assesses the users who accessed it during the profiling period, as well as those from their peer groups. It then looks for the group (or groups) in Active Directory (AD) that best fits this set of users. The fit has two aspects: the first, called precision, is the share of users in the set who are also in the AD group; the second, called recall, is the share of users in the AD group who are also in the user set. The best AD group (or groups) becomes the folder label—for example, Finance-Users, EnterpriseManagementTeam, or G&A-Administration. The label provides security teams with more context about the nature of the files pertaining to an incident.
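
A sketch of that labeling step: for each candidate AD group, compute precision and recall against the set of users who accessed the folder, then keep the best-scoring group. Combining the two scores with an F1-style harmonic mean is an assumption; the post does not say how the two aspects are weighed.

    def precision_recall(user_set: set, ad_group: set) -> tuple:
        precision = len(user_set & ad_group) / len(user_set) if user_set else 0.0
        recall = len(user_set & ad_group) / len(ad_group) if ad_group else 0.0
        return precision, recall

    def best_label(user_set: set, ad_groups: dict) -> str:
        """Pick the AD group whose membership best matches the accessing users."""
        def f1(group):
            p, r = precision_recall(user_set, group)
            return 2 * p * r / (p + r) if (p + r) else 0.0
        return max(ad_groups, key=lambda name: f1(ad_groups[name]))

    users = {"alice", "bob", "carol"}
    groups = {"Finance-Users": {"alice", "bob", "carol", "dave"},
              "G&A-Administration": {"alice", "eve", "frank"}}
    print(best_label(users, groups))   # -> "Finance-Users"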

Up Next: Examples from Customer Data

To validate the algorithms explained above, several Imperva customers allowed us to leverage production data from their SecureSphere audit logs. Containing highly granular data access activity, the log data provided full visibility into which files users accessed over a given duration—we saw the algorithms identify some very interesting real-life file access examples.

In our next post in this series we’ll review those examples and demonstrate the effectiveness of this automated approach to file access security.

For additional information on detecting suspicious file access with dynamic peer groups read the full Imperva Hacker Intelligence Initiative (HII) report: Today’s File Security is So ‘80s.

– See more at: https://www.imperva.com/blog/2017/06/detect-suspicious-file-access-with-dynamic-peer-groups/?utm_source=linkedIn&utm_medium=organic&utm_campaign=2017_Q2_HIIreport2#sthash.FTmLI8ra.dpuf



Why the Traditional Approach to File Security is Broken

Category : Imperva

In today’s knowledge-driven economy, modern enterprises have a fluid organizational structure in which most employees have access to most data to do their jobs. Working groups are formed organically and are cross-functional by nature. The amount of unstructured data organizations create is growing exponentially. Traditional, black and white file access control can’t keep pace with the ever-changing environment. This creates a security gap in which data contained in files can be lost, stolen or misused by malicious, careless, or compromised users.

Almost any security team will confirm that the traditional static approach to file security, centered on individually granting users access to files based on their department and function, is not effective because it places too much of an administrative burden on enterprise IT teams. Setting up, maintaining and enforcing permissions to grant and deny access has proven to be ineffective when it comes to securing enterprise files.

A new Hacker Intelligence Initiative (HII) report based on research from the Imperva Defense Center reveals three primary reasons why the traditional approach to file security no longer works. We review each of them below, as well as introduce a new way to secure files in today’s dynamic, modern enterprise.

Permissions are granted, but rarely revoked

A key problem is that file permissions are easily granted but are rarely rescinded. With permissions increasing 26% after an employee’s first year and 11% annually afterward, they rapidly accumulate for every user. Yet statistics reveal that users only ever access less than 1% of the resources to which they are granted permission. In other words, the vast majority of resources to which users had access held no interest or only very temporary interest to them.

Theoretically, granting permissions should include resource owners (typically a senior person related to the relevant resource) who define the access policy, coupled with IT teams serving as the enforcer. But the trigger for revoking permissions is not well defined, causing permissions to accumulate.

To better understand the implications, Imperva researched the relationship between employment duration and granted permissions within an organization having more than 1,000 employees. Figure 1 shows the strong correlation between employment date and the number of folders each user is permitted to access. On average, permissions increase by 11% annually, with the biggest jump—26%—occurring after a single year of employment.


Figure 1: The effect of employment duration on folder permissions
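
To make those growth figures concrete, here is a small worked example that compounds the reported rates; whether the underlying data compounds or grows linearly is not stated, so treat the numbers as illustrative only.

    # Start from a hypothetical 1,000 permitted folders at hire.
    folders = 1000.0
    for year in range(1, 6):
        rate = 0.26 if year == 1 else 0.11    # 26% after year one, 11% each year after
        folders *= 1 + rate
        print(f"after year {year}: ~{folders:,.0f} permitted folders")
    # Under this compounding assumption: ~1,260 after year 1, ~1,913 after year 5.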

Users do not touch most files to which they have permitted access

In assessing the effectiveness of the traditional permissions model, the research team compared the number of folders opened by the organization’s users in a specific month to the number of folders which they had permission to access. Figure 2 shows that 75% of the users accessed fewer than 36 folders per month; 50% opened fewer than 10 folders monthly. The number of folders opened has a very low correlation with the number of permitted folders. This reveals that typical user behavior is independent of the resource permissions they are granted.


Figure 2: Number of folders used during one month per user

On average, users are permitted access to 370,000 folders, each containing seven files. All users in our case study used less than 1% of their granted permissions (see Figure 3).


Figure 3: Percentage of permissions used

Enterprise-level file permissions have become increasingly complex

The complexity of managing enterprise-level file permissions on a daily basis makes it increasingly difficult at any given moment for security teams to keep track of who has access to what. Permission inheritance between folder structures further complicates effective oversight of unstructured data. This is one reason why so many data breaches happen as a result of insiders within an organization.

Folder use varies over time. It can be separated into two categories:

  • Folders having a continual interest. They are repeatedly used by the user and permission should be granted.
  • Folders having a temporary interest. These are used over a specific duration, e.g., those containing information regarding a specific project or financial quarter. More than 99% of all shared folders are in this category.

To reduce risk, static permissions should remain for folders continually in use. But without dynamic permissions management for folders having only temporary interest (where 99% of files are stored), it is unmanageable for IT teams to keep pace with constant permission changes.

Instead, the permissions model should be dynamic and based on user behavior. This serves to relieve IT staff of a near-impossible task while simultaneously protecting an organization’s data assets.

A New Approach – Dynamic Peer Groups

Some resources in the file share should never be accessed by certain users. For example, developers do not need access to financial data. Traditional, static access controls can still be used to define resources to which a user or group should never have access. But all other files require a new security approach that is flexible, agile, dynamic and easier to maintain over time.

Backed by research, Imperva has developed an improved file security approach based on how users actually access files. Using highly granular input data collected from Imperva SecureSphere audit logs, coupled with advanced machine learning algorithms, Imperva is able to build dynamic peer groups and determine appropriate permissions based on how users actually access files within an organization. This also allows IT teams to dynamically remove permissions as changes in user interaction with enterprise files occurs over time.

Once virtual peer groups have been established for an organization, suspicious folder access by unauthorized users is identified and flagged, allowing security teams to immediately follow up on critical incidents pertaining to file access. This approach results in less work for IT teams, more appropriate access in today’s dynamic work environment, and a higher level of security.

Learn more about dynamic peer group functionality available in Imperva CounterBreach.

Read the full HII report: “Today’s File Security is So ’80s”.

– See more at: https://www.imperva.com/blog/2017/06/why-the-traditional-approach-to-file-security-is-broken/#sthash.LV4BspJC.dpuf



Meeting SOC 2 Type II Compliance with Incapsula

Category : Imperva

We are pleased to announce that Imperva has released an audited SOC 2 Type II report for the Incapsula service. A SOC 2 Type II report establishes trust, and not all companies in the space are endorsed by AICPA, the governing standards body.

SOC 2 Type II Compliance and What It Means for You

The AICPA (American Institute of CPAs) is the world’s largest member association representing the accounting profession. One of its key functions is to set global auditing standards for companies, organizations and governments. Auditors are able to offer an audit opinion based on these rigorous professional standards of compliance.

The SOC 2 standard is a set of non-financial principles that measure how well a service organization, like Imperva Incapsula, controls its information. This certification helps build customer trust in organizations without customers having to perform their own compliance investigations. The Trust Services Principles (TSP) include five criteria for controls:

  • Security
  • Availability
  • Processing integrity
  • Confidentiality
  • Privacy

An example would be that our service is available when needed and that personal information passing through it is kept confidential at all times.

SOC 2 replaced SAS 70 some years ago as the standard for service providers. A SOC 2 Type I report reviews controls and evidence in place at a point in time. A Type II report includes a review of controls and evidence over a period of six to 12 months. This makes the evidence collection and audit process far more comprehensive and intensive.

The SOC certification is just one of three third-party certifications (see below for more details) that we maintain for our services to meet stringent compliance and regulation standards. Our security services secure your customers’ interactions with you and help you gain their trust.

Achieving Certification for Incapsula

The Imperva Incapsula audit was conducted by a third-party compliance audit firm, covering the following principles and related criteria that are most relevant to our service:

  • Security — The system is protected against unauthorized access, use, or modification to meet the entity’s commitments and system requirements.
  • Availability — The system is available for operation and use to meet the entity’s commitments and system requirements.

Each of the related TSP Common Criteria has risks associated with it.  The TSP also provides illustrative controls which relate to those risks. The Imperva Incapsula service has approximately 100 controls based on the Common Criteria and the two principal areas.  The audit reviewed controls from May 1, 2016, to November 30, 2016.

Our auditors reviewed our controls for the testing period and issued an unqualified opinion (no significant exceptions). The report accurately reflects our documented controls and our operational implementation of those controls.

Questions to Ask Your Provider

If you are with a cloud security provider, it’s time to find out if it is SOC 2 Type II certified. Perhaps you are in the process of selecting a cloud service and looking into its commitment to security, availability and privacy. If that’s the case, take the following steps to see how your provider is protecting your data and transactions.

  • Request a current SOC 2 Type II report.
  • Identify the principles and controls that you feel are needed in the report.
  • Validate that the audit report includes these controls.
  • Check the current status of the controls according to the auditors’ findings.
  • Discuss any questions or concerns you may have about the report.
  • Document your expectations in your service provider agreement.

Incapsula is built on policies, standards, processes and controls measuring up to the requirements in SOC, PCI and ISO 27001. You are welcome to contact us for more information on our SOC 2 Type II or other certifications.



Top 5 GDPR Myths: Get the Facts

Category : Imperva

The General Data Protection Regulation (GDPR) has been garnering much attention since its formal adoption in April 2016.  With the effective date of May 25, 2018 fast approaching, some popular myths have emerged surrounding the regulation.

In this blog post, we’ll examine and debunk a few of the most notable ones.

Myth #1: “We’re a US-based company so the GDPR doesn’t apply to us.”

In short, the GDPR will apply to US-based companies that offer goods or services to individuals in the European Union (EU) or monitor the behavior of individuals if the behavior occurs in the EU.  Even US-based companies that have no physical presence in the EU will be subject to the GDPR if they process an EU resident or visitor’s personal data in connection with goods or services offered to those individuals or if those companies monitor the behavior of EU residents or visitors while those individuals are within the EU. The GDPR could apply, for example, if a US citizen visits a US-based website while vacationing in Spain and that website monitors that citizen’s behavior while in Spain.

Given the cross-border nature of the modern-day economy, it’s also not unusual to see US-based companies with offices overseas, including in the EU.  Personal data processed, whether the processing occurs in the EU or not, in the context of the activities of a US-based company’s EU establishment will be subject to the GDPR.

Myth #2: “Since the UK is leaving the EU, we don’t need to worry about GDPR compliance.”

According to this Information Age article, about 25% of UK businesses have stopped preparing for GDPR compliance as they feel it won’t apply to them given the upcoming UK departure from the EU in 2019.

The reality is that GDPR enforcement will begin a good ten months before Brexit occurs.  And, even after the UK leaves the EU, there is still a very high probability UK businesses will be subject to GDPR compliance requirements because the GDPR applies to the personal data of all EU residents.  Given there are many EU residents living in the UK and UK businesses will continue to do business with residents of EU countries, the GDPR requirements will still apply to UK businesses long after Brexit is completed.

Myth #3: “Personal data that is already in our database isn’t subject to the GDPR.”

The GDPR applies to personal data, regardless of when that data was collected.  In other words, if the data was collected before the GDPR goes into effect (May 25, 2018), the company and relevant data will still be subject to GDPR requirements.

As long as the data can be traced back or associated with an individual who was in the EU at the time the data was collected (a “data subject”) via a name, ID number, or some other physiological, genetic, or similar factor, then that data will be considered within the scope of GDPR protection.  As an example, contact information gathered from prospective customers must have been gathered in compliance with the GDPR notice and consent requirements to be used for marketing purposes after May 25th, 2018.

Myth #4: “My data is stored with my cloud service provider so it’s their responsibility to remain compliant with the GDPR, not mine.”

The GDPR imposes a high duty of care upon data controllers in selecting their personal data processing service providers. Similar duties are imposed if a service provider contracts with a sub-processor. Businesses utilizing personal data for business purposes cannot “pass the buck” to their cloud or security service providers that are processing or storing personal data on their behalf.

So, even if a data controller is not storing personal data (i.e., it uses a third party to store such data), the data controller will still be held responsible for compliance with the GDPR.  Both controllers and processors share responsibility for meeting GDPR requirements.

Myth #5: “Our company uses pseudonymization and encryption to protect personal data, so that should be enough for GDPR purposes.”

Given the rapid pace of innovation, simply pseudonymizing (aka data masking) or encrypting the data, while useful, may not be enough to fully secure the data and meet the requirements of the GDPR.

Specifically, Article 32 of the regulation requires companies to implement appropriate technical and organizational measures to ensure a level of security appropriate to the risks that are presented by a company’s data processing activities.  In assessing the appropriate level of security, companies are required to pay particular attention to the risk of accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to personal data that is transmitted, stored or otherwise processed.

In determining what technical and organizational measures would be appropriate, companies must take into account the current state of the art, costs of implementation, and the nature, scope, context and purposes of the processing as well as the risk of varying likelihood and severity to the rights and freedoms of the individuals whose data is being processed.

Under this article, businesses must do what is appropriate, including but not limited to and likely more than, just pseudonymization and encryption to ensure data security.  Information governance technologies that address data retention and defensible disposition issues are examples of additional measures that enhance data security.

Next steps

The issues discussed above are currently top-of-mind for many security, compliance, and IT professionals tasked with meeting GDPR requirements. To assess your organization’s readiness, review this blog post for a planning timeline and identify the next steps that make the most sense for you.

Wondering how your organization compares to others when it comes to GDPR readiness? Read the results of our GDPR survey.

– See more at: https://www.imperva.com/blog/2017/05/top-5-gdpr-myths/#sthash.bEQCFUt4.dpuf



Dynamic Application Profiling: What It Is and Why You Want Your WAF to Have It

Category : Imperva

Because web applications are unique, they have distinct structures and dynamics, and – unfortunately – different vulnerabilities. A web application security device, therefore, must understand the structure and usage of the protected applications. Depending on the complexity of the protected application, this task can entail managing thousands or even hundreds of thousands of constantly changing variables including users, URLs, directories, parameters, cookies, and HTTP methods.

Dynamic application profiling automates management by learning application structure and usage with little-to-no manual tuning. It streamlines configuration, provides up-to-date and accurate security, and substantially reduces administrative overhead.

We refer to Imperva’s patented dynamic application profiling technology as Dynamic Profiling. In this post, we’ll explain our approach, what makes us different, and why we consider dynamic application profiling must-have functionality when evaluating web application firewall solutions.

What is Dynamic Profiling?

SecureSphere’s Dynamic Profiling automatically models an application’s structure and elements to learn legitimate user behavior such as acceptable form field values and protected cookies. Valid application changes are automatically detected and incorporated into the profile over time. Because web applications often change, SecureSphere’s automatic learning capability ensures that the application profile is always up-to-date. By comparing profiled elements to actual traffic, SecureSphere can detect unacceptable behavior and prevent malicious activity with pinpoint precision. In this way, Dynamic Profiling delivers completely automated security without requiring manual configuration or tuning.

While Dynamic Profiling automatically builds the profile of protected web applications and detects application changes over time, it also allows organizations to manually adjust the application profile (although we hear from customers that they rarely manually modify the dynamic profile due to its accuracy).

All aspects of SecureSphere’s application profile are customizable – meaning security managers can modify the application profiles to bridge any differences between actual usage and corporate security policies.

Why Do You Need Dynamic Profiling?

Any application security architecture that relies upon manual rule creation by a security administrator requires constant rule-based tuning to account for changes to the applications. For example, many web application firewalls require manually-created rules to define expected behaviors for client-side scripts. These manual rules specify detailed application variables such as allowed URLs, parameters, parameter types, and parameter constraints. Maintenance of these rules can be a major source of operational overhead as many sites rely on hundreds of scripts. Any script change requires a parallel rule change to avoid false positives.

Considering that many security managers are not kept abreast of all application changes, manually maintaining a white list security model is untenable.

Dynamic Profiling overcomes the biggest drawback of other web application firewall solutions, manual rule creation and maintenance, by automatically recognizing and incorporating application changes into its profile over time.

Dynamic Profiling in Depth: Technical Specifications

SecureSphere uses machine learning to create a positive security model of the application’s profile. While doing so, it avoids false positives by learning the common ways users interact with the application. The positive security model is also used to detect anomalies from that common interaction. By learning and analyzing web requests and responses to the production web applications, SecureSphere dynamically models the application structure, elements and expected application usage (see Figure 1). It takes approximately 2-5 days of analyzing live traffic to build the application profile.

SecureSphere automatically profiles the following elements:

  • URLs
  • Directories
  • Cookies
  • Form fields and URL parameters
  • HTTP methods
  • Referrers
  • User authentication forms and fields for application user tracking
  • XML elements
  • SOAP actions

Dynamic Application Profiling screenshot

Figure 1: Dynamic Profiling automatically builds a profile of application elements, structure and usage, including URLs, form fields, parameters, cookies and expected user input.

In addition, SecureSphere determines expected user behavior by analyzing many different web users and their usage patterns. Expected usage attributes include:

  • Form field and parameter value length (approximate minimum and maximum length)
  • If a parameter is required or if it is optional
  • If a parameter can be modified by the end user
  • The parameter value type (for example: numeric, Latin characters, foreign characters)
  • Allowed character groups (slash, white spaces, quotes, periods, commas, etc.)
  • If a cookie is protected or if it can be modified by the user
  • If a cookie must be set by the web server or if it can be stored in the browser cache
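
As one way to picture what such a learned profile might contain (a hypothetical structure, not SecureSphere's internal representation), consider the following sketch:

    from dataclasses import dataclass, field

    @dataclass
    class ParameterProfile:
        """Learned expectations for one form field or URL parameter."""
        name: str
        min_length: int
        max_length: int
        required: bool
        user_modifiable: bool
        value_type: str                                    # e.g. "numeric", "latin", "foreign"
        allowed_chars: set = field(default_factory=set)    # extra groups: quotes, slashes, ...

        def matches(self, value: str) -> bool:
            # Flag values that fall outside the learned length range or type.
            if not (self.min_length <= len(value) <= self.max_length):
                return False
            if self.value_type == "numeric" and not value.isdigit():
                return False
            return True

    profile = ParameterProfile("account_id", 4, 12, True, False, "numeric")
    print(profile.matches("12345"))      # True: fits the learned profile
    print(profile.matches("1; DROP--"))  # False: a deviation worth flagging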

Differentiating Between Legitimate and Illegitimate Activity

Because SecureSphere automatically learns application elements, structure and usage based on real web traffic, it must differentiate between acceptable user requests and application attacks. Otherwise, it might be possible for SecureSphere to add illegitimate requests to the application profile. SecureSphere uses the following techniques to differentiate between acceptable and malicious activity:

  • SecureSphere ignores known malicious behavior (HTTP protocol violations, known attack signatures like SQL injection, double encoding, etc.) when building the profile
  • SecureSphere analyzes server responses. If the web server replies with an error code such as “404: Not Found” or “500: Internal Server Error”, then SecureSphere will ignore the request
  • SecureSphere ignores web requests that have no referrer or have an external referrer unless the server generates a “200: OK” or a “304: Not Modified” response code. These types of requests may be generated by robots or scripts.
  • SecureSphere analyzes many different attributes when developing the web application profile. It builds the profile based on the number of occurrences, length of time, and uniformity of requests.
  • In addition, customers can restrict learning to trusted source IP addresses. Or customers can ignore non-trusted IP addresses.

SecureSphere automatically updates the profile over time.

  • The administrator can be automatically alerted every time the profile changes.
  • For profile updates, the behavior must be repeated by multiple sources. In addition, the behavior must be seen a certain amount of times per hour during a minimum number of hours. By default, most elements will be learned if the element is accessed by at least 50 different IP addresses or users and if the behavior is repeated at least 50 times for at least 12 different hours. All of these profile settings are configurable.
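
Those default thresholds translate naturally into a simple check. The sketch below illustrates the rule only; it is not Imperva's implementation, and the shape of each observation record is assumed.

    from datetime import datetime, timedelta

    def should_learn(observations: list) -> bool:
        """Decide whether a newly seen element has been observed widely enough
        to be added to the profile, using the default thresholds quoted above:
        at least 50 distinct sources, 50 occurrences, and 12 distinct hours.
        Each observation is assumed to be a (source_ip_or_user, timestamp) pair."""
        sources = {src for src, _ in observations}
        hours = {ts.replace(minute=0, second=0, microsecond=0) for _, ts in observations}
        return len(sources) >= 50 and len(observations) >= 50 and len(hours) >= 12

    # Toy usage: 60 distinct sources, one request each, spread over about 15 hours.
    start = datetime(2017, 6, 1)
    obs = [(f"10.0.0.{i}", start + timedelta(minutes=15 * i)) for i in range(60)]
    print(should_learn(obs))   # True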

Maintain Security, Reduce IT Overhead, Keep Users Productive

Dynamic application profiling addresses the taxing manageability issues of traditional web application firewalls. With Imperva’s Dynamic Profiling, organizations can protect their sensitive web applications and back-end data in an automated way, without introducing excessive IT overhead and without blocking legitimate web users.

– See more at: https://www.imperva.com/blog/2017/05/dynamic-application-profiling/#sthash.rIQQc1TZ.dpuf



Database Activity Monitoring: A Do’s and Don’ts Checklist for DBAs

Category : Imperva

In a previous post, we looked at the limitations of native audit, the free tool often used by database administrators (DBAs) for logging database activity. While it has its appeal—it’s already part of the database server and does not require additional cost for third-party appliances or software—native audit has issues when it comes to performance at scale, carries hidden costs, and fails to meet several compliance requirements.

In this post, we look at the benefits of database activity monitoring as another approach to implementing data-centric security measures.

Database Activity Monitoring, Defined

Gartner states that database activity monitoring (DAM) “refers to a suite of tools that… support the ability to identify and report on fraudulent, illegal or other undesirable behavior, with minimal impact on user operations and productivity.” These tools have evolved from basic user activity analysis to include robust data-centric security measures, such as data discovery and classification, user rights management, privileged user monitoring, data protection and loss prevention, etc.

According to the Securosis white paper, “Understanding and Selecting a Database Activity Monitoring Solution,” a database activity monitoring solution, at a minimum, is able to:

  • Independently monitor and audit all database activity, including administrator activity and SELECT query transactions. These tools can record all SQL transactions: DML, DDL, DCL (and sometimes TCL). They can do this without relying on local database logs, thus reducing performance degradation to 0% – 2%, depending on the data collection method.
  • Securely store the audit logs to a central server outside the audited database.
  • Monitor, aggregate, and correlate activity from multiple heterogeneous Database Management Systems (DBMSs). Tools can work with multiple DBMSs (e.g., Oracle Database, Microsoft SQL Server, and IBM DB2) and normalize transactions from different DBMSs, despite differences between SQL flavors.
  • Ensure that a service account only accesses a database from a defined source IP, and only runs a narrow group of authorized queries. This can alert you to compromises of a service account either from the system that normally uses it, or if the account credentials show up in a connection from an unexpected system.
  • Enforce separation of duties, by monitoring and logging database administrator activities.
  • Generate alerts for rule-based or heuristic-based policy violations. For example, you might create a rule to generate an alert each time a privileged user performs a SELECT query that returns more than 5 results from a credit card column. The trigger alerts you to the possibility that the application has been compromised via SQL injection or other attack.
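
As a sketch of that example rule (the event fields are hypothetical; real DAM products express such policies in their own configuration language):

    def credit_card_select_alert(event: dict) -> bool:
        """Alert when a privileged user's SELECT that touches a credit card
        column returns more than 5 rows. The event layout is an assumption."""
        return (
            event.get("user_role") == "privileged"
            and event.get("operation") == "SELECT"
            and "credit_card_number" in event.get("columns", [])
            and event.get("rows_returned", 0) > 5
        )

    event = {"user_role": "privileged", "operation": "SELECT",
             "columns": ["credit_card_number", "name"], "rows_returned": 120}
    print(credit_card_select_alert(event))   # True: raise an incident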

Some DAM tools also:

  • Discover and provide visibility into the location, volume, and context of data on premises, in the cloud, and in legacy databases.
  • Classify the discovered data according to its personal information data type (credit card number, email address, medical records, etc.) and its security risk level.
  • Provide pre-defined policies for PCI, SOX, and other generic compliance requirements.
  • Offer closed-loop integration with external change management tools to track approved database changes implemented in SQL. Other tools can then track administrator activity and provide change management reports for manual reconciliation.

Build Your Evaluation Checklist

Every organization wants a database activity monitoring solution designed for minimal impact on their databases. With that in mind, we’ve developed a checklist that DBAs and other stakeholders can use when evaluating solutions.

Here are five things you want a database security monitoring solution to do and five things you don’t:

The Do’s

  • Consumes 1–3 percent of CPU and disk resources, using an agent-only collection method. (You can cap the resource consumption, if needed.) Using an agent-only collection method, rather than a non-inline ‘sniffer’ or an inline bridge deployment, allows you to cluster gateways. And that helps ensure high-availability performance of your database.
    • Note: This consumption is significantly lower than the approximately 20 percent associated with native database auditing.
  • Provides continuous, real-time monitoring of local SQL traffic, such as IPC and Bequeath. It can also optionally monitor all incoming network-based SQL traffic to the database.
  • Issues a TCP reset on a blocked session, which appears as if the client lost a network connection. As a result, nothing changes in the database and normal database client connection cleanup occurs as usual.
  • Consumes minimal network bandwidth for monitoring incoming SQL statements to the gateway, plus some metadata such as response time or number of rows returned.
    • Note: You can also monitor outbound network traffic via a separate interface, but that may create security issues if you trap sensitive data. It also creates a high volume of network traffic data.
  • Provides a single, graphical interface for troubleshooting. You can quickly see what resources the agent is currently consuming, as well as view a history of resource consumption. If blocking is enabled, you can specify sending an email to the database activity monitoring tool, Security Information and Event Manager (SIEM), or other notification system.

The Don’ts

  • Won’t require installation of any objects in your database. No script to install. No credentials to install, other than operating credentials.
  • Won’t alter or require altering of your database, database configuration files or database parameters. The agent is not touching your databases or doing anything to your databases.
  • Won’t require a host reboot, except in rare use cases such as a DB2 on AIX database bounce.
  • Won’t require a new or existing database user account for installation, monitoring, or blocking.
  • Won’t write to the file system, except in the case of communication loss to gateway due to a block. And that can be curtailed as soon as the communication is re-established.

Summary

Given today’s ever-evolving security threats, combined with the exponential growth in both volume and use of sensitive data, it’s critical that data-centric security measures be deployed. These measures, which focus on safeguarding data as it moves across networks, servers, applications, or endpoints, come in two flavors: native database auditing tools and database activity monitoring.

Native database auditing tools, although a free part of the database, generate numerous hidden costs — performance degradation and extra hardware, software, storage, and labor expenses — while failing to meet either compliance or security requirements. Your data is still at risk.

Database activity monitoring provides the robust compliance and security coverage necessary for protecting your data, without the costs associated with native database auditing.

– See more at: https://www.imperva.com/blog/2017/05/database-activity-monitoring-checklist/#sthash.VdAEDunD.dpuf



Why Care About Data-Centric Security?

Category : Imperva

It’s no surprise that data breaches are evolving and becoming increasingly complex. According to the Verizon 2017 Data Breach Investigation Report, data breaches are “complex affairs often involving some combination of human factors, hardware devices, exploited configurations or malicious software.” In today’s interconnected world, a breach can involve one or more paths to your data, including:

  • Excessive, inappropriate, and unused user privileges
  • Privileged user abuse
  • Insufficient web application security
  • Database misconfigurations and/or missing patches
  • Query injections — SQL injections that target traditional databases and NoSQL injections that target Big Data platforms
  • Malware-infected devices and unsecured storage media
  • Social engineering — baiting, phishing, pharming, pretexting, ransomware, tailgating, and others

For example, a multi-vector attack can use team and system silos — a DDoS attack distracts, while another vector utilizes compromised user credentials obtained via a spear phishing email and a malware-infected device — to circumvent security and steal thousands of data records.

Data breaches are further helped by weak audit trails that make it difficult to determine the ‘who, what, where, and when’ of a data breach. This allows aggressors to repeatedly exploit security gaps and attack the weakest prey via the path of least resistance. Case-in-point: According to the New York Times, Yahoo was attacked in August 2013 (exposing one billion user accounts) and again in late 2014 (exposing 500 million user accounts) because they were not even aware that they were attacked until 2016, when the stolen records were offered for sale on the Tor network.

The Data Protection Struggle is Real

Each high-profile data breach brings increased pressure for organizations to properly protect their sensitive data. In addition, compliance regulations such as SOX, HIPAA, and PCI require complete visibility and an uninterrupted record of what data is accessed, when, and by whom. The new GDPR has similar requirements.

However, many companies struggle to implement the cohesive, multi-layered, and multi-stakeholder approach necessary for defending against complex data breaches. Some of the challenges they face include:

  • Exponential growth in both the volume and use of sensitive data
  • Variety of data repositories — heterogeneous databases, big data platforms, file servers, data collaboration systems, cloud-based file-sharing services, etc. — that need to be protected
  • Duplication and migration of data across repositories, as organizations try to extract maximum value from data by using it to support an ever-expanding array of business processes
  • Tight budgets that require people to do more with less

Because of these, and maybe other challenges, many organizations typically focus their attention on protecting the enterprise’s networks, devices, and applications. Their security measures include next-gen firewalls, anti-virus programs, spam filters, malware blockers, network auditing, and similar security tools.

Unfortunately, if an attacker gets past your firewalls or malware blockers or other security defenses, and there are limited or no data layer protections in place, your data is at risk.

Data-Centric Security Measures — A Fighting Chance

Given today’s ever-evolving security threats, it’s critical that data-centric security measures be deployed — it’s your last chance to stop an in-progress data attack. These data-centric security measures, which focus on safeguarding data before it moves across networks, servers, applications, or endpoints, include (see Table 1):

  • Data discovery and classification – Discovers and provides visibility into the location, volume, and context of data on premises, in the cloud, and in legacy databases. Classifies the discovered data according to its personal information data type (credit card number, email address, medical records, etc.) and its security risk level.
  • User rights management – Identifies excessive, inappropriate, and unused privileges. Analyzes individuals’ activities against their peers’ behavior, looking for anomalies and excessive rights.
  • Privileged user monitoring – Monitors privileged user database access and activities. Enforces separation of duties.
  • Data protection – Ensures data integrity and confidentiality through change control reconciliation, data-across-borders controls, query whitelisting, etc.
  • Data loss prevention – Monitors and protects data in motion. Blocks attacks, privilege abuse, unauthorized access, malicious web requests, and unusual activity to prevent data theft.
  • Data access across borders management – Limits which data can be accessed by users outside the borders defined by international privacy regulations or internal governance.
  • Change management – Monitors, logs, and reports on data structure changes. Shows compliance auditors that changes to the database can be traced to accepted change tickets.
  • VIP data privacy – Maintains strict access control on highly sensitive company data, including data stored in multi-tier enterprise applications such as SAP and PeopleSoft.
  • Ethical walls – Maintains strict separation between business groups to comply with M&A requirements, government clearance, etc.
  • User tracking – Maps the web application end user to the shared application/database user to the final data accessed.
  • Secure audit trail archiving – Secures the audit trail from tampering, modification, or deletion, and provides forensic visibility.

Table 1: Data-centric security measures

Implementing these measures helps answer questions such as:

  • Where is sensitive data located? What is its at-risk level? How do we ensure that our data is not corrupted or exposed?
  • Who is accessing the data and how are they accessing it?
  • Are we compliant with industry regulations such as SOX, HIPAA, PCI—and soon, GDPR? Do we have the right level of auditing? Are we enforcing separation of duties?
  • Are we applying the right policies across the right databases and Big Data in a uniform and consistent manner?
  • How do we differentiate between authorized and unauthorized access? And how do we block unauthorized access? What happens if someone’s credentials are compromised?

For more information about data-centric security, read our white paper: “Seven Keys to a Security Data Solution.”

See the full post at https://www.imperva.com/blog/2017/05/data-centric-security/


  • 0

The Successful CISO: Tips for Paving the Way to Job Security

Category : Imperva

Seasoned CISOs know that failure to plan past a two-year window is dangerous—to both their company and their job security. But it’s all too common for security strategies to look only two years out.

Imperva CISO Shahar Ben-Hador has been with Imperva for eight-and-a-half years—the last two-and-a-half in the role of CISO, just past that infamous two-year mark. He recently joined Paul Steen, Imperva’s Vice President for Global Product Strategy, to discuss the phenomenon of the “Two-Year Trap”, how he works to avoid it, and his thoughts on how CISOs can extend their job life expectancy with a long-term view.

Read highlights from their conversation below and listen to the complete recorded webcast here: “How Not to Get Fired as a CISO: Building a Long-Term CISO Strategy”

——————————

Paul: When asked to describe the role of the CISO in one sentence, you say “it’s to make sure that the company does not get breached.” But what if a company does get breached? Does that CISO get fired?

Shahar: I think a lot of InfoSec professionals used to believe that would be the immediate consequence, but I know of several CISOs who experienced a breach and, because they were so essential to their company and performed their job well—doing all the right things, both before and after the breach—remained in their role.

That said, it’s important to identify the threats you are working to protect against and agree on the priorities with your management team. Once you agree on the most important assets, develop great programs for those. If an asset outside of that gets breached, the probability is good that you’ll continue in your role because you focused your efforts on the right priorities for the company.

Paul: What sort of person is needed to be a CISO? What are the important qualities?

Shahar: A successful CISO needs to be both strategic—planning long term, collaborating with teams, communicating with executive management and the board—and tactical. The devil is in the details. You can pick a great technology that’s right for the business, but you can also completely screw up the implementation. I’m not saying the CISO needs to implement it on their own and know every aspect of every implementation, but a CISO needs to work with their team to make sure projects are managed down to the smallest detail.

Another quality would be embracing innovation. I think those who don’t will have a difficult time being a successful CISO. Threats evolve and adversaries innovate all the time, so defenses to prevent attacks have to innovate and evolve too. And businesses evolve as well. Think about a coffee shop chain that used to only take payments in store, but now offers mobile payment options and customer portals. As a company’s infrastructure evolves, so does their threat landscape.

Paul: There’s still that requirement to maintain the long view and look far enough down the road. How do you balance that?

Shahar: There’s a lot of hype about the next big thing, mostly from vendors who have a great new product to offer. As a CISO, I have to assess what’s going to be a fundamental technology over a longer period of time and what’s fundamental for our business. Sometimes it’s not easy, and everybody makes mistakes. But the longer you remain in your role, the better you become at predicting what’s going to be a long-term success versus just a short-term trend.

Paul: It can be beneficial for a company to have a CISO who stays in the role for a while; they have the potential to be strategic with time. Why do CISOs change jobs? When do you typically see that jump happening?

Shahar: I’ve talked with a lot of colleagues about this and found some trends. I see CISOs who stay on the job for 15 years or more—typically very successful, seasoned people who are doing a great job for their companies. Then I see other CISOs who stay for about two years or leave shortly after. I think the reason for that is it’s natural for people to focus on and fix the immediate gaps, and it takes about two years to close the primary gaps you identify. Then I see CISOs who do not plan for a longer tenure because they think they may get fired. But the reality is if you don’t plan for a longer tenure, you very well may be fired! You can do a terrific job in two years, but if you haven’t planned for the third or fourth year, your role is at risk.

Paul: How can a CISO avoid that “Two-Year Trap”?

Shahar: Here’s what I do for myself and with my team. Every six months, I imagine that I just got hired into my job. Every six months, I look back to see what was done before my time—before my “new” time—and then review with my team what might need to be changed. Maybe something was right a year ago, but not anymore. This process keeps us very, very focused, on both the practical and the strategic level. We always have a plan for the next two years.

Paul: Of course, as a CISO, you can’t do the job without your team. If you can’t maintain a team, then that doesn’t bode well for your own longevity.

Shahar: Exactly. I think that part of a CISO’s role, and I’m not saying it’s an easy one, is to educate the company that there is essentially zero unemployment in InfoSec, and that they need to be open to being more flexible and offering competitive packages. Employees want to reap the benefits of their efforts, and many want to stay at the same company for longer than two years. In that case, it’s the CISO’s job to keep their team intact and work on keeping things competitive for those professionals.

Paul: What about looking at the long-term view of security?

Shahar: Here’s how I view it—every company has their own DNA. Every company has things they care about more than anything else for their business, their management and employees. That DNA is different for each company, and I think being a CISO for many years helps you better understand that DNA and how to protect it—whether it’s healthcare information, credit card information, a proprietary application, or what an application looks like. For some, it’s their cars, whether they’re vulnerable to attack and can be taken over. It’s fundamental for the CISO to understand what those critical assets are and focus on protecting them. The DNA may evolve over time, but more than likely not as fast as the attacks.

Paul: Would you give us some tips that you’ve followed to be successful when it comes to this long-term planning?

Shahar: Sure. My favorite is what I call red [and blue] team activity. Red team activity is pen testing performed internally, typically by very skilled employees who are kind of “hackers on license”. They are given free rein to break you wherever they want. My number one recommendation is to use this strategy as much as you can, and the more often the better. Exercise them on their own. Exercise them against the blue team—the blue team is the defense, the internal response function. It’s always better if they find something and not the adversaries. Yes, it creates a lot of work to fix their findings, but it’s always been successful and energizing for me and the teams.

See the full post at https://www.imperva.com/blog/2017/04/the-successful-ciso-tips-for-paving-the-way-to-job-security/


Support