Category Archives: Palo Alto


Palo Alto Networks Now a Six-Time Gartner Magic Quadrant Leader!

Category : Palo Alto

Gartner’s 2017 Magic Quadrant for Enterprise Network Firewalls has been released, and Palo Alto Networks is proud to be positioned in the Leaders quadrant for the sixth consecutive year. I invite you to read the 2017 Magic Quadrant for Enterprise Network Firewalls report.

Gartner’s Magic Quadrant provides a graphical competitive positioning of technology providers in markets where growth is high and provider differentiation is distinct. Leaders execute well against their stated visions and are well-positioned for tomorrow. Gartner researchers continue to highlight both our ability to execute and the completeness of our vision. You can find more details in the report.

More than 39,500 customers in more than 150 countries have chosen Palo Alto Networks to realize the benefits of a truly next-generation security platform, safeguard critical assets, and prevent known and unknown threats. To protect our customers and stay ahead of sophisticated cyberattackers, we maintain a steadfast commitment to innovation. We recently introduced several more disruptive capabilities:

  • Application Framework: With a SaaS-based consumption model, Palo Alto Networks Application Framework allows customers to use new apps to solve the most challenging security use cases with the best technology available, without the cost and operational burden of deploying new infrastructure.
  • GlobalProtect cloud service: GlobalProtect cloud service eases your next-generation firewall and GlobalProtect deployment by leveraging cloud-based security infrastructure operated by Palo Alto Networks.
  • Logging Service: Palo Alto Networks Logging Service is a cloud-based offering for context-rich, enhanced network logs generated by our security offerings, including those of our next-generation firewalls and GlobalProtect cloud service.

Source: https://researchcenter.paloaltonetworks.com/2017/07/palo-alto-networks-now-six-time-gartner-magic-quadrant-leader/




Palo Alto Networks Unit 42 Vulnerability Research November 2017 Disclosures – Adobe

Category : Palo Alto

As part of Unit 42’s ongoing threat research, we can now disclose that Palo Alto Networks Unit 42 researchers have discovered seven vulnerabilities addressed by the Adobe Product Security Incident Response Team (PSIRT) as part of their November 2017 security update release.

CVE | Vulnerability Name | Affected Products | Maximum Severity Rating | Impact | Researcher(s)
CVE-2017-16388 | Use after free | Adobe Acrobat | Critical | Remote Code Execution | Gal De Leon
CVE-2017-16389 | Use after free | Adobe Acrobat | Critical | Remote Code Execution | Gal De Leon
CVE-2017-16390 | Use after free | Adobe Acrobat | Critical | Remote Code Execution | Gal De Leon
CVE-2017-16393 | Use after free | Adobe Acrobat | Critical | Remote Code Execution | Gal De Leon
CVE-2017-16398 | Use after free | Adobe Acrobat | Critical | Remote Code Execution | Gal De Leon
CVE-2017-16414 | Out-of-bounds read | Adobe Acrobat | Critical | Remote Code Execution | Gal De Leon
CVE-2017-16420 | Out-of-bounds read | Adobe Acrobat | Critical | Remote Code Execution | Gal De Leon

Palo Alto Networks customers who deploy our Next-Generation Security Platform are protected from zero-day vulnerabilities such as these. Weaponized exploits for these vulnerabilities are prevented by Traps' multi-layered exploit prevention capabilities. Threat prevention capabilities such as application control, IPS, and WildFire provide our customers with comprehensive protection and automatic updates against previously unknown threats.

Palo Alto Networks is a regular contributor to vulnerability research in Microsoft, Adobe, Apple, Google Android and other ecosystems. By proactively identifying these vulnerabilities, developing protections for our customers, and sharing the information with the security community, we are removing weapons used by attackers to threaten users and compromise enterprise, government, and service provider networks.

Source: https://researchcenter.paloaltonetworks.com/2017/12/unit42-palo-alto-networks-unit-42-vulnerability-research-november-2017-disclosures-adobe/
Author: Christopher Budd


Palo Alto Networks at AWS re:Invent: Amazon GuardDuty Integration and Networking Competency Achieved

Category : Palo Alto

There’s been a lot of action at AWS re:Invent. First off, Palo Alto Networks was included in the Amazon GuardDuty announcement as an integration partner.  Amazon GuardDuty is a new threat detection service that identifies potentially unauthorized and malicious activity such as escalation of privileges, use of exposed credentials, or communication with malicious IPs, URLs, or domains. Findings generated by Amazon GuardDuty provide customers with an accurate and easy way to continuously monitor and protect their AWS accounts and workloads. Highlighting the power of the VM-Series automation and management features, we have integrated with Amazon GuardDuty to allow Palo Alto Networks customers to proactively protect their AWS environments.

The VM-Series integration with Amazon GuardDuty uses an AWS Lambda function to collect threat findings such as malicious IP addresses. The Lambda function feeds the malicious IP address findings to the VM-Series, using the XML API to create a dynamic address group within a security policy that blocks any activity emanating from the IP address. When Amazon GuardDuty updates the list of malicious IP addresses, the dynamic address group and security policy are automatically updated, without administrative intervention. The integration demonstrates how threat intelligence findings generated by Amazon GuardDuty can be used by the VM-Series to protect business critical workloads on AWS. (Watch this video to learn more.)
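
To make that flow concrete, here is a minimal sketch of the pattern in Python; it is not the published Lambda function. The firewall hostname, API key environment variables and tag name are assumptions for illustration, and the GuardDuty finding fields are read from the CloudWatch event that delivers the finding.

import os
import urllib.parse
import urllib.request
from xml.sax.saxutils import escape

# Minimal sketch of the integration pattern described above; NOT the published
# Lambda function. FW_HOST, FW_API_KEY and the tag name are illustrative
# assumptions for this sketch.
FW_HOST = os.environ.get("FW_HOST", "vm-series.example.com")
FW_API_KEY = os.environ.get("FW_API_KEY", "")
TAG = "guardduty-malicious"


def extract_remote_ip(detail):
    # Pull a remote IP address out of a GuardDuty finding, if one is present.
    action = detail.get("service", {}).get("action", {})
    for key in ("networkConnectionAction", "awsApiCallAction"):
        ip = action.get(key, {}).get("remoteIpDetails", {}).get("ipAddressV4")
        if ip:
            return ip
    return None


def register_ip(ip):
    # Tag the IP via the PAN-OS XML API so a dynamic address group picks it up.
    cmd = ("<uid-message><version>1.0</version><type>update</type><payload>"
           "<register><entry ip=\"{}\"><tag><member>{}</member></tag></entry>"
           "</register></payload></uid-message>").format(escape(ip), TAG)
    params = urllib.parse.urlencode({"type": "user-id", "key": FW_API_KEY, "cmd": cmd})
    # A production function would verify the firewall certificate and parse the
    # XML response status instead of discarding it.
    with urllib.request.urlopen("https://{}/api/?{}".format(FW_HOST, params)) as resp:
        resp.read()


def lambda_handler(event, context):
    ip = extract_remote_ip(event.get("detail", {}))
    if ip:
        register_ip(ip)
    return {"registered_ip": ip}

On the firewall side, a dynamic address group matching the guardduty-malicious tag would be referenced by a blocking security policy, so newly registered IPs take effect without administrative intervention.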


AWS Networking Competency Approved

In addition to our Amazon GuardDuty integration, we were also recognized at re:Invent as achieving AWS’ Networking Competency designation with an emphasis on Networking Connectivity. This designation complements the Security Competency that Palo Alto Networks achieved in 2016, and recognizes that Palo Alto Networks provides proven technology and deep expertise to help customers adopt, develop and deploy networks on AWS.

Achieving the AWS Networking Competency with the VM-Series differentiates Palo Alto Networks as an AWS Partner Network (APN) member that provides technical proficiency and proven customer success with a specific focus on networking connectivity, including:

  • Acting as a router and intelligently forwarding packets
  • Managing routing and availability between different network paths
  • Providing Virtual Private Network (VPN) functionality

To learn more about our Amazon GuardDuty integration or to see the VM-Series in action, swing by booth #2409 all this week at AWS re:Invent.

Here’s a summary of all the ways to engage with Palo Alto Networks at AWS re:Invent.

Source: https://researchcenter.paloaltonetworks.com/2017/11/palo-alto-networks-aws-reinvent-amazon-guardduty-integration-networking-competency-achieved/
Author: Matt Keil


2018 Predictions & Recommendations: Automated Threat Response Technology in OT Grows Up

Category : Palo Alto

Automated Threat Response and Relevance to IT and OT

Automated threat response, which we'll simply refer to as ATR, is the process of automating the action taken on detected cyber incidents, particularly those deemed malicious or anomalous. For each type of incident there is a predefined action for containment or prevention, and newer technologies, such as behavioral analytics and artificial intelligence, are used to bring incidents of interest to the surface. With these technologies, the goal is to automate the process of detection and implement an equally automated, closed-loop process of prevention. This not only reduces the burden on SecOps teams but also shortens the response time. Over recent years, IT organizations have needed to adopt ATR technologies, such as our WildFire and behavioral analytics offerings, to better combat advanced attacks that have increased in frequency and capability.

So how applicable is this technology in protecting Industrial Control Systems (ICS) and Operational Technology (OT) environments from advanced threats? It is clearly relevant for the adjacent corporate and business networks of the OT environment, which are often internet-connected and used by threat actors as a pivot point for an attack. But what I'm more interested in is the relevance to the core areas of ICS: Levels 3, 2 and 1 of the Purdue model, and the DMZs between them. ATR is very relevant there, in fact.

Consider the scenario where an HMI station in an electric utility Energy Management System is suddenly used to issue an unusually high number of DNP3 operate commands (to open breakers), far above its baseline. This could constitute a malicious event or, at minimum, an anomalous one. ATR systems could detect such events and automatically respond, whether by blocking the rogue device or limiting the connection; for example, by giving the device of interest read-only access for the DNP3 protocol.
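
Purely to illustrate that detection logic (this is a sketch, not any product's implementation; the function-code constant, field names and response hook are assumptions), the baseline check can be as simple as counting operate commands per source over a sliding window:

from collections import defaultdict, deque
import time

WINDOW_SECONDS = 300
BASELINE_OPERATE = {"hmi-01": 4}      # learned per-device count per window
ANOMALY_MULTIPLIER = 5                # "much higher than the baseline"
DNP3_OPERATE = 0x04                   # DNP3 application-layer OPERATE function code

operate_events = defaultdict(deque)   # device -> timestamps of OPERATE commands


def on_dnp3_message(src, function_code, now=None):
    # Feed each parsed DNP3 message; flag a spike in OPERATE commands.
    if function_code != DNP3_OPERATE:
        return
    now = now if now is not None else time.time()
    window = operate_events[src]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()

    baseline = BASELINE_OPERATE.get(src, 1)
    if len(window) > baseline * ANOMALY_MULTIPLIER:
        propose_response(src, "{} OPERATE commands in {}s (baseline {})".format(
            len(window), WINDOW_SECONDS, baseline))


def propose_response(src, reason):
    # In a semi-automated deployment this would queue the action for operator
    # approval, e.g. moving the device into a read-only DNP3 policy.
    print("[ATR] {}: {} -> propose read-only DNP3 access".format(src, reason))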

So why has this technology not been adopted yet? There are several reasons. First, most OT organizations' current cybersecurity initiatives focus on visibility and access control; advanced threat prevention is a longer-term initiative. Second, the newer AI and machine learning technologies used to baseline ICS-specific traffic and detect anomalies have mostly been confined to R&D or proof-of-concept environments. Third, ICS/OT asset owners and operators tend to be very conservative. The idea of allowing a system to respond automatically to detected threat incidents is unsettling for most OT teams because of the fear of accidentally blocking legitimate traffic or devices and causing downtime. Finally, the use cases and response actions for incidents detected in OT have not been well defined.

2018 Is the Year of ATR in OT

My prediction for 2018 is that ATR in OT will reach production-level maturity and be deployed in a meaningful way. “Meaningful” means that we will start seeing large-scale deployments by leading operators of ICS in critical infrastructure and manufacturing environments.

There are several reasons I believe this will be the case. Some leading organizations have matured beyond visibility and segmentation, and have completed their PoCs of the technology. In addition, a strong ecosystem of OT-specific profiling, behavioral analytics and anomaly detection offerings has emerged. Some of these solutions exist as dedicated sensors or as modules that supplement SIEM devices. Initially deployed as stand-alone detection tools, these ICS network-monitoring solutions are starting to be integrated with enforcement devices, such as our next-generation firewalls, which are then used to carry out the appropriate threat response.

Further driving adoption are recent high-profile cyber-physical attacks, such as those on the Ukrainian power grid in 2015 and 2016, which many believe could have been mitigated or even prevented with ICS-specific ATR technologies. The scope of ATR in OT can also extend to threats typically associated with IT, such as ransomware, which can still impact OT; the downtime WannaCry caused in some manufacturing plants in 2017 is an example. I also see the development of OT incident response playbooks and semi-automated approaches, which make adoption more palatable for resource-constrained and risk-averse OT teams.

To be sure, ATR in OT will initially be limited to cases deemed less risky in terms of accidentally causing process downtime or safety issues. What defines "less risky" is certainly debatable, is still being worked out, and will differ between organizations. However, some ATR in OT scenarios come up often in my discussions with OT security teams as seemingly amenable. These include limiting the access of an existing host that suddenly issues unusual commands; for example, restricting an HMI or engineering workstation to read-only access to the PLCs. They may also include blocking new devices that were not part of any installation plan of record.

One other such scenario would be to quarantine a non-critical host, such as a redundant HMI, found to be infected with ransomware. Another aspect of how I see gradual adoption is in how OT users will want the option to manually accept or reject a proposed threat response. This isn’t a fully automated approach, but is likely a necessary intermediate step toward proving full automation. Integrators developing these systems will be wise to develop user interfaces and workflows to support this semi-automated approach.

Palo Alto Networks Enables Automated Response

In anticipation of this growing use case, Palo Alto Networks has been engaged closely with the ATR ecosystem, customers and industry organizations to put in place the integration required to facilitate adoption. A key enabler for the integration and automation is our application programming interface, which makes interacting with sensors, SIEMs and other system elements straightforward. Furthermore, the flexibility and granularity of the controls that can be automatically implemented are immense. Specifically, for OT environments, users can apply the App-IDs we have for ICS to implement protocol-level responses down to functional commands; for example, to the DNP3 operate command mentioned earlier. Couple that with User-ID for role-based access, Content-ID for control over payloads and threats and, of course, more basic controls based on IP and port, and you have a very flexible ATR platform that can accommodate a range of response templates tied to an organization's risk tolerance. Organizations may use a hazard and operability study (HAZOP) to determine the appropriate ATR for a given scenario, applying more conservative responses, which may include redundant systems, to sensitive processes and operations.

Whether it is in 2018 or later that our users decide to implement ATR in OT, they will be happy to know we have the capabilities in place, and the ecosystem to support their initiatives.

Source: https://researchcenter.paloaltonetworks.com/2017/11/2018-predictions-recommendations-automated-threat-response-technology-ot-grows/




Why is Ransomware Still a Huge Problem for Organizations?

Category : Palo Alto

Ransomware has been around for nearly 30 years, but it remains one of the biggest business disruptors today. Why is ransomware still such a big problem for organizations?

Unit 42 discusses why the ransomware industry continues to thrive. Unit 42 is the Palo Alto Networks threat intelligence team. Made up of accomplished cybersecurity researchers and industry experts, Unit 42 gathers, researches, analyzes, and provides insights into the latest cyber threats, then shares those insights with Palo Alto Networks customers, partners and the broader community to better protect enterprise, service provider and government computing environments.



SILVERTERRIER: The Next Evolution in Nigerian Cybercrime

Category : Palo Alto

Nigerian threat actors have long been considered a nuisance rather than a threat. Palo Alto Networks Unit 42 returns to the topic that launched our research in 2014 with our latest report, "SILVERTERRIER: The Next Evolution in Nigerian Cybercrime." This report shows that Nigerian threat actors are capable and formidable adversaries, successfully attacking major companies and governments using cheap, off-the-shelf commodity malware.

The history of Nigerian threat actors and their use of unsophisticated technology makes it easy to underestimate the threat. This report shows why it’s not just wrong but dangerous to take Nigerian threat actors lightly.

 




Automatic Static Detection of Malicious JavaScript

Category : Palo Alto

JavaScript, alongside HTML and CSS, is considered a core technology for building web content. As an influential scripting language found nearly everywhere on the web, it gives malicious developers many opportunities to attack unsuspecting users and infect otherwise legitimate and safe websites. There is a clear and pressing need for users of the web to be protected against such threats.

Methodologies for judging JavaScript safety can be separated into two broad categories: static and dynamic. Static analysis treats the textual information in the script as the sole source of raw data. Computation can take place on this text to calculate features, estimate probabilities and serve other functions, but no code is ever executed. Dynamic analysis, on the other hand, includes evaluation of the script through internet browser emulation. Depending on the complexity and breadth of the emulation, this has the potential to provide much more insightful information about the JavaScript's true functionality and, thus, its safety. However, this comes at the cost of increased processing time and memory usage.

“Obfuscation” is the intentional concealing of a program’s functionality, making it difficult to interpret at a textual level. Obfuscation is a common problem for static analysis; dynamic analysis is much more effective at overcoming it. Minor obfuscation can include things like random or misleading variable names. However, heavier amounts of obfuscation aren’t so simple. Here is an example of a heavily obfuscated script, abridged for brevity:

[Image: heavily obfuscated JavaScript sample]

As you can see, there is no way to infer the script’s functionality from a textual standpoint. Here is the original script, before obfuscation:

[Image: the original script, before obfuscation]

A human can easily interpret this original script, and there is much more readily available information about its level of safety. Note that, textually, the obfuscated and original scripts look almost nothing alike. Any text-based features extracted from the two files would likely look completely different.

It is important to note that both benign and malicious developers use obfuscation. Well-intentioned developers will often still obfuscate their scripts for the sake of privacy. This makes the automatic detection of malicious JavaScript tricky since, after a script has been obfuscated, malicious and benign code can look nearly the same. This problem is pervasive. In a randomly sampled set of 1.8 million scripts, approximately 22 percent used some significant form of obfuscation. However, in practice, we’ve found the use of obfuscation to be largely disproportionate between malicious and benign developers. In a labeled sample of about 200,000 JavaScript files, over 75 percent of known malicious scripts used obfuscation, while under 20 percent of known benign scripts used it.

A natural concern is that traditional machine learning techniques, trained on hand-engineered static textual features generated from scripts, will unintentionally become simple obfuscation detectors. Indeed, using the presence of obfuscation as a determining factor of maliciousness wouldn’t give you bad results on any evenly distributed dataset, so heavily weighting the presence of obfuscation is a likely outcome of training algorithms to maximize accuracy. However, this is not desirable. As mentioned, legitimate developers use obfuscation in a completely benign manner, so obfuscated benign samples need to be rightfully classified as benign to avoid too many false positives.

Static Analysis

Extracting Hand-Engineered Features

Despite these challenges, we’ve found the use of static textual features still has the potential to perform well. Our experiments suggest static analysis can produce acceptable results with the added benefit of simplicity, speed and low memory consumption.

Our experiments took place on approximately 200,000 scripts, around 60 percent of them benign and the other 40 percent malicious. This skew in the distribution was intentional. In any natural sample taken from crawling the internet, the percentage of scripts that are malicious would be around 0.1 percent, whereas our training set used 40 percent malicious samples. If we trained with the natural 0.1 percent distribution, we would have problems with our results. For example, a classifier that always said “benign” would be right 99.9 percent of the time, yet it wouldn’t be very useful.

Using a training set with more malicious samples than benign samples runs the risk of developing an obfuscation detector, since most malicious samples are obfuscated. If they represent most of the dataset, obfuscation will likely be learned as a strong predictor to get good accuracy. We instead introduced a distribution only slightly skewed toward benign samples, which forces the model to learn to better detect benign samples without overpowering the need to detect malicious samples. This also maximizes the number of obfuscated benign samples, which we are particularly concerned with and want to train on as much as possible.

Here is a visualization of a uniformly selected, random sample of our 128-dimensional data:

[Image: t-SNE visualization of a random sample of the 128-dimensional feature data]

In the visualization, blue is benign and red is malicious. Although it’s not perfect (as is the case with any real-world data), notice the local separability between the benign and malicious clusters. This approximated visualization, created using the technique known as t-SNE, was a good omen to continue our analysis.

With a 70 percent training and 30 percent testing split, a random forest with just 25 trees could achieve 98 percent total accuracy on the test set. As mentioned before, test set accuracy is not always very informative. The more interesting numbers attained are the 0.01 percent false positive rate and the 92 percent malicious class recall. In English, this means only 0.01 percent of benign samples were wrongly classified as malicious, and out of the entire pool of malicious samples, 92 percent of them were correctly detected. The false positive ratio was manually decided by adjusting the decision threshold to ensure certain quality standards. The fact that we maintained 92 percent malicious class recall while enforcing this low false positive ratio is a strong result. For comparison, typical acceptable malicious class recall scores fall between 50 and 60 percent.
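
For readers who want to reproduce the general shape of this experiment, here is a minimal sketch assuming a feature matrix X and labels y have already been extracted from the scripts; the exact features are not described in this post, so treat it as illustrative rather than a reimplementation of our pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def train_and_tune(X, y, target_fpr=0.0001):
    # X: n_samples x 128 feature matrix, y: 0 = benign, 1 = malicious.
    y = np.asarray(y)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=42)

    clf = RandomForestClassifier(n_estimators=25, random_state=42)
    clf.fit(X_train, y_train)

    # Pick the decision threshold that holds the false positive rate on the
    # benign test samples to roughly the target (0.01 percent here).
    scores = clf.predict_proba(X_test)[:, 1]
    threshold = np.quantile(scores[y_test == 0], 1.0 - target_fpr)

    flagged = scores > threshold
    recall = flagged[y_test == 1].mean()   # malicious-class recall
    fpr = flagged[y_test == 0].mean()      # achieved false positive rate
    return clf, threshold, recall, fpr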

We hypothesize that certain obfuscation software is more commonly used among benign developers than malicious developers and vice versa. Aside from the fact that randomly generated strings will all look different anyway, more interestingly, certain obfuscation techniques result in code that is structured differently from other obfuscation techniques at a higher level. Distributions of characters will also likely change with different obfuscation methods. We believe our features might be picking up on these differences to help overcome the obfuscation problem. However, we believe there is a much better solution to the problem, which we will detail here.

Composite Word-Type Statistical Language Model

As opposed to hand-engineered feature extraction, a more robust and general approach to static analysis of text files is to build statistical language models for each class. The language models, which for simplicity’s sake can be thought of as probability distributions, can then be used to predict the likelihood that a script belongs to each class. Let’s discuss a sample methodology for building such a system, though many variations are possible.

The language model can be defined over n-grams to make use of all information in the script. More formally, we can write a script as a collection of J n-grams as such (the value of n is unimportant):

[Equation: a script expressed as a collection of J n-grams]

Then, we can build a malicious class model (call it M) and a benign class model (call it B). The weight of each n-gram in both models can be estimated from the data. Once these weights are determined, they can be used to predict the likelihood of a script belonging to either class. One possible way to measure these weights from a set of scripts all belonging to the same class is simply to divide the number of times an n-gram appears across all scripts in the set by the total number of n-grams found in all scripts in the set. This can be interpreted as the probability of an n-gram appearing in a script of a given class. However, because naturally common n-grams end up weighted heavily in both classes despite being uninformative, one may instead seek out a measurement such as term frequency-inverse document frequency (TF-IDF), a powerful statistic, commonly used in information retrieval, that helps alleviate this problem.

Once the language models have been defined, we can use them for the sake of calculating the likelihood of a script belonging to either class. If your methods of model construction build the model as a probability distribution, the following equations will do just that:

 

[Equation: class likelihoods computed from the benign and malicious language models]

In the above, C(J) represents the true class of J, which is either 0 or 1 for benign and malicious respectively. The class with the highest probability can be chosen as the predicted class. A different variation entirely could be to use the weights of n-grams as features to calculate a feature vector for each script. These feature vectors can be fed into any modern machine learning algorithm to build a model that way.
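
As a minimal sketch of the count-based variant described above (TF-IDF weighting or a discriminative model could replace the simple frequency estimates), assuming each script has already been converted into a list of n-grams:

import math
from collections import Counter


def build_model(scripts):
    # scripts: iterable of scripts, each already converted to a list of n-grams.
    counts = Counter()
    for ngrams in scripts:
        counts.update(ngrams)
    return counts, sum(counts.values()), len(counts)


def log_likelihood(ngrams, model):
    counts, total, vocab = model
    # Add-one smoothing keeps unseen n-grams from zeroing out the probability.
    return sum(math.log((counts[g] + 1) / (total + vocab + 1)) for g in ngrams)


def classify(ngrams, benign_model, malicious_model):
    # The class with the higher likelihood wins; a production system would also
    # fold in class priors and a tunable decision threshold.
    return int(log_likelihood(ngrams, malicious_model) >
               log_likelihood(ngrams, benign_model))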

However, note that the space of possible n-grams in JavaScript code is massive; much larger than standard English text. This is especially true in the presence of obfuscation since randomized strings of letters, numbers and special characters are very common. In its unaltered form, the n-gram space from which the models are built is likely too sparse to be useful. Inspired by recent research, a potential solution to this problem is to introduce what is known as composite word-type. This is a mapping of the original n-gram space onto a much smaller and more abstract n-gram space. Concretely, the idea is to have a predefined set of several classes into which possible characters or character sequences can fall. Consider this string of JavaScript code as a demonstrative example:

var x = 5;

A naïve character-level unigram formation of this statement would look like this:

[‘v’, ‘a’, ‘r’, ‘ ‘, ‘x’, ‘ ‘, ‘=’, ‘ ‘, ‘5’, ‘;’]

Alternatively, one could define classes, such as whitespace, individual keywords, alphanumeric, digit, punctuation, etc., to reduce this level of randomness. Using those classes, the unigram formation would look like this:

[‘var’, ‘whitespace‘, ‘alphanumeric’, ‘whitespace‘, ‘punctuation’, ‘whitespace‘, ‘digit’, ‘punctuation’]

Notice that the randomness has been significantly reduced in this new space. Many possible statements, which would all look very different from the perspective of a character-level unigram, could all fit into the above abstraction. All the possible fits to the abstraction have their underlying meaning expressed while ignoring ad hoc randomness. This increases the difficulty for malicious developers to undermine the detection system since this is very robust to string randomization and variance in general.

It makes sense to have a unique class for each JavaScript keyword since those are informative pieces of information that must occur in a standard form to compile. Other alphanumeric strings may also contain useful information, and thus it is not advisable to abstract away all instances into one class. Instead, you might make a list of predictive keywords you expect to find and add them as classes or derive them from the data itself. That is, count the number of occurrences of alphanumeric strings across malicious and benign scripts separately, and discover which strings have the largest difference in frequency between the two.
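
Putting the mapping into code, a first-pass tokenizer might look like the following sketch; the keyword list is deliberately abbreviated, and the class names simply follow the example above.

import re

# Abbreviated keyword list for illustration; a real implementation would
# enumerate all JavaScript keywords plus any predictive strings learned from data.
JS_KEYWORDS = {"var", "let", "const", "function", "return", "if", "else",
               "for", "while", "new", "typeof", "eval"}

TOKEN_PATTERN = re.compile(r"\s+|[A-Za-z_$][A-Za-z0-9_$]*|\d+(?:\.\d+)?|.")


def composite_word_type(script):
    # Map raw JavaScript text onto the abstract composite word-type classes.
    classes = []
    for match in TOKEN_PATTERN.finditer(script):
        token = match.group(0)
        if token.isspace():
            classes.append("whitespace")
        elif token in JS_KEYWORDS:
            classes.append(token)           # keywords keep a unique class
        elif token.isdigit() or re.fullmatch(r"\d+\.\d+", token):
            classes.append("digit")
        elif re.fullmatch(r"[A-Za-z_$][A-Za-z0-9_$]*", token):
            classes.append("alphanumeric")
        else:
            classes.append("punctuation")
    return classes


print(composite_word_type("var x = 5;"))
# ['var', 'whitespace', 'alphanumeric', 'whitespace', 'punctuation',
#  'whitespace', 'digit', 'punctuation']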

Shallow Dynamic Analysis

Despite the strong potential of static analysis, the problem is only alleviated, not completely solved. Benign, obfuscated samples are still under greater suspicion than is desirable. This is confirmed by manual inspection of false positives, which are almost all obfuscated benign samples. The only way to completely overcome obfuscation is with dynamic analysis. A shallow dynamic strategy, generally known as deobfuscation, comprises a family of techniques used to evaluate and unpack obfuscated code back into its original form. More complex dynamic analysis techniques exist that typically consist of tracking all actions taken by a script in an emulated browser environment. We won’t discuss those methods in this post, since we’re aiming to demonstrate that strong, dependable behavior can come from simpler, quicker methods.

There are many open source tools meant for JavaScript deobfuscation. Utilizing these tools as a pre-processing step on a script-by-script basis can ensure we generate features from strictly deobfuscated script. Of course, this changes the appearance of the data and demands either a new set of textual features or recomputed statistical language models. As mentioned before, a large increase in robustness is to be expected when working with deobfuscated script compared to obfuscated script. The verbosity and detail of each script is often greatly increased, which machine learning or language models can leverage to gain better insight and give better predictions.

Source: https://researchcenter.paloaltonetworks.com/2017/10/engineers-work-automatic-static-detection-malicious-javascript/




Only YOU Can Secure Your Data in the Public Cloud (In My Best Smokey the Bear Voice)

Category : Palo Alto

Public cloud security is a shared responsibility but exactly who is accountable for what when it comes to the public cloud? Let’s begin with the facts:

  1. Public cloud refers to a set of virtualized resources (compute, networking, applications) operating on someone else’s computer, but that you control.
  2. Public cloud provides tremendous benefits, including agility, scalability, and faster access to innovative technologies.
  3. Security challenges in the public cloud mirror those faced within an on-premises data center (e.g. how to protect your applications and data from successful cyberattacks).
  4. Attackers are location agnostic. Their intent is to gain access to your network, navigate to a target (be it data, intellectual property or excess compute resources), then execute their end goal, regardless of whether that target sits on the network or in the cloud.

I think we can all agree on these points. But let’s take a closer look at security and determine who is accountable when it comes to the public cloud.

Public cloud vendors, such as Amazon Web Services (AWS) and Microsoft Azure, profess that “public cloud data center infrastructures are more secure,” but what they are talking about is their data center infrastructure, on which you are deploying your applications and data. YOU are responsible for protecting the applications, access to those applications, and the associated data.

[Image: breakdown of shared security responsibility in the public cloud]

Let that sink in. YOU are in complete control of what security to implement and you must take steps to safeguard your content.

How Can You Secure Your Workloads in the Public Cloud?

Following are some of the key security capabilities required to ensure your applications and data in the public cloud are protected:

  • Visibility and control over all traffic and applications in the public cloud, irrespective of port. Comprehensive traffic insight and control enables more informed policy decisions and better security.
  • Safely enable applications, users and content. Allow the traffic and applications you want, deny all others, and grant access based on user need and credentials.
  • Block lateral movement of cyberthreats (e.g., malware). Exerting application-level controls in between VMs reduces the threat footprint; policies can be applied to block known and unknown threats.
  • Deploy new applications and next-generation security in an automated manner. Native management features (e.g. bootstrapping, dynamic address groups, fully documented XML API) enable automated policy updates and deployments.
  • Policy consistency and cohesiveness across virtual and physical firewall form factors. A simplified, centrally managed network security management offering is a must.

The  Palo Alto Networks VM-Series can help you accomplish all these things and more in the public cloud. But don’t take our word for it.  Take a FREE Virtual Ultimate Test Drive and witness the virtual firewalls live and in action.

Source: https://researchcenter.paloaltonetworks.com/2017/10/can-secure-data-public-cloud-best-smokey-bear-voice/




8 Months to Go – Are You Getting the Most Out of What You Already Own?

Category : Palo Alto

As we move closer to the May 2018 deadline for GDPR, more and more businesses are focused on ensuring their ability to meet the requirements set out by the regulation. All too often, people assume this requires additional investment, which at some level will be true; but you should also be challenging your organisation to get more from what you already have.

Equally, I hear many looking to data loss prevention (DLP) and encryption tools as primary requirements for protecting data. However, having worked with both for many years during previous stages of my career, I would highlight that these, like every capability, come with their own implementation challenges. Often, adjacent technologies can help reduce the scope of these challenges and the associated costs.  To cover all of these would be a mammoth blog; as such I’m only going to pick a couple, just to start your creative thinking and help you on your GDPR journey.

Here, I’m going to focus on how your firewall can help reduce the effort and cost of better securing the personally identifiable information (PII) data lifecycle. How do you validate if you are getting the most from what you have already, and where could you join up security processes that today may be owned and implemented by different teams in the business?

The all too common first thought with PII data lifecycle management is to try and classify all your data – a task that has the potential to take until the end of time – as every day we generate new data or new instances of existing data. Often, much of this focus is on how to bring clarity to the volume of unstructured data.

You need a change of mindset, which is not to try and define where all your PII data is, but instead to define – and then enforce – where your PII data can and should be. This reduces the scope of where you need to then apply DLP controls.

Most organisations already have insight on PII data in known business processes. In practice, this could include customer relationship management (CRM) tools, threat intelligence that may include some form of PII data, and HR systems. However, there is often still a significant gap between where structured data is and where it is thought to be. Being able to truly define where it is will allow you to start identifying the points where it becomes unstructured data.

If you are using good Layer 7 firewalls in your business, you can use these to help map your real-time application usage. This will allow you to see which apps are talking with which others, those that are communicating outside the business, and which users are doing this and at what volume.

The core goal here is to think in terms of the Zero Trust model: can you segment your organisation’s PII data, reducing where it can move and become unstructured and, more importantly, reducing the scope of what you need to apply security to? Likewise, such visibility can help you define the points at which you need encryption versus the points where the data should never reside or be accessed in the first place.

Now pivot to managing the PII itself: a good Layer 7 firewall will typically include some level of content inspection. If you do have a DLP solution in place that tags data, your firewall should also be able to leverage the tags inserted into the data. You may wonder why that’s of value; well, your firewall can typically inspect inside encrypted traffic, which may give access to data the DLP solution could not otherwise analyse. Also, depending on your configuration, you may be able to leverage your firewall to give you more enforcement points if you have used it to segment your organisation’s system traffic.

If you don’t have a DLP tool in place but are looking to enforce your PII dataflows, once again, a Layer 7 firewall will likely have some content inspection. While not as rich as that provided by DLP, many will allow you to look for regular expressions (words, common structured forms such as banking card data, etc.) in common file types, giving you a lighter-weight capability that may suffice, in some cases, to both map and enforce data usage.
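
To give a feel for what that lighter-weight inspection can look like, here is a small illustrative sketch that flags candidate payment card numbers with a regular expression and validates them with the Luhn checksum to cut down on false matches; it is in no way a substitute for a full DLP engine.

import re

# Candidate card numbers: 13-19 digits, optionally separated by spaces or hyphens.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")


def luhn_valid(number):
    # Standard Luhn checksum over the digit string.
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[0::2])
    total += sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0


def find_card_numbers(text):
    hits = []
    for match in CARD_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group(0))
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(digits)
    return hits


print(find_card_numbers("invoice ref 4111 1111 1111 1111, PO 1234-5678"))
# ['4111111111111111']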

So, what are the takeaways here?

Take anything to the Nth degree and it can solve a problem. DLP and encryption are key to managing the PII data lifecycle. However, they can be expensive from both a Capex and Opex perspective. You can use other tools and processes to reduce dependence on them.

GDPR is a rare opportunity to take a step back. It’s amazing to see how many organisations invest in state-of-the-art technology, such as a next-generation firewall, that can do all these things, but then still use it like their 20-year-old port and protocol firewall that works at Layer 3.

Before you spend any more of your valuable budget, challenge yourself and your organisation on what you already have. Ensure you know the capabilities at the core, as well as the additional components. Map them out and then consider how they could streamline your processes to reduce the scope and effort. In this instance, looking at how you zone your traffic or go to a full Zero Trust model is not a specific GDPR callout, but it does align to the notion of what is state-of-the-art best practice and, more importantly, what could reduce where your PII data proliferates. This means less to then secure, and less risk to the organisation.

 Source: https://researchcenter.paloaltonetworks.com/2017/10/cso-gdpr-8-months-go-getting-already/


Palo Alto Networks Unit 42 Vulnerability Research September and October 2017 Disclosures

Category : Palo Alto

As part of Unit 42’s ongoing threat research, we can now disclose that Palo Alto Networks Unit 42 researchers have discovered vulnerabilities that have been addressed by Microsoft in their September and October security update releases.

CVE | Vulnerability Name | Affected Products | Researcher
CVE-2017-8567 | Microsoft Office Remote Code Execution | Microsoft Excel for Mac 2011 | Jin Chen
CVE-2017-8749 | Internet Explorer Memory Corruption Vulnerability | Internet Explorer 10, Internet Explorer 11 | Hui Gao
CVE-2017-11793 | Scripting Engine Memory Corruption Vulnerability | Internet Explorer 9, Internet Explorer 10, Internet Explorer 11 | Hui Gao
CVE-2017-11822 | Internet Explorer Memory Corruption Vulnerability | Internet Explorer 9, Internet Explorer 11 | Hui Gao

For current customers with a Threat Prevention subscription, Palo Alto Networks has also released IPS signatures providing proactive protection against these vulnerabilities. Traps, Palo Alto Networks' advanced endpoint protection, can block memory corruption-based exploits of this nature.

Palo Alto Networks is a regular contributor to vulnerability research in Microsoft, Adobe, Apple, Google Android and other ecosystems. By proactively identifying these vulnerabilities, developing protections for our customers, and sharing the information with the security community, we are removing weapons used by attackers to threaten users and compromise enterprise, government, and service provider networks.

Source: https://researchcenter.paloaltonetworks.com/2017/10/unit42-palo-alto-networks-unit-42-vulnerability-research-september-october-2017-disclosures/


