Category Archives: Palo Alto


Preventing Cybercrime Through Collaboration and Information Sharing

Category : Palo Alto

Cybercrime is not something any one entity can tackle alone, and companies increasingly understand the importance of information sharing in preventing successful cyberattacks.

According to our recent State of Cybersecurity in Asia-Pacific survey, 44 per cent of organisations across the region have already started sharing threat information with other companies in their industry. The exposure of malicious cyber actors and their techniques plays an important role in changing their behaviour. The more broadly information about threats is shared, the more efficiently organisations can work to prevent cyberattacks.

This week, we became the first cybersecurity company to sign a Data Exchange Agreement (DEA) with INTERPOL. Aimed at combating criminal trends in cyberspace, cyberthreats and cybercrime, this agreement marks a mutual commitment to openly share threat intelligence and equip law enforcement officers with the powerful information needed to prevent cybercrime.

In addition to our involvement in the Cyber Threat Alliance and our role earlier this year in the INTERPOL-led operation targeting cybercrime across the ASEAN region, this agreement underscores our commitment to threat intelligence sharing.

For more information, please read our press release about our collaboration with INTERPOL.

 Source: https://researchcenter.paloaltonetworks.com/2017/08/cso-preventing-cybercrime-collaboration-information-sharing/


Risk Remediation is Easy with Aperture

Category : Palo Alto

Learn more about SaaS security:



How the Next-Generation Security Platform Contributes to GDPR Compliance

Category : Palo Alto

The General Data Protection Regulation is the European Union’s forthcoming personal data protection law. In May 2018, the GDPR will replace the 1995 Data Protection Directive, significantly changing the rules surrounding protection of personal data of EU residents.

The Palo Alto Networks Next-Generation Security Platform can help with organisations’ security and data protection efforts related to GDPR compliance by assisting in securing personal data at the application, network and endpoint level, as well as in the cloud. It can also assist in understanding what data was compromised in the unfortunate instance of a breach, but first and foremost it will help organisations prevent data breaches from happening at all.


Source: https://www.paloaltonetworks.com/resources/whitepapers/gdpr-compliance-next-generation-security-platform.html

 



How Does Credential Theft Affect Your Organization? Find out in today’s Breach Prevention Week webinar

Category : Palo Alto

The effects of a credential-based attack differ by organization and by job function. In this session, we look at how these attacks affect different types of organizations, along with analysis and a demonstration of how an attack is carried out.

In this session, hear about:
  • Credential theft industry research coverage
  • Industry analysis of the problem space
  • Application of the credential theft lifecycle in light of recent attacks

 

Brian Tokuyoshi, Sr. Product Marketing Manager, Palo Alto Networks | 46 mins



Finding the Value of ‘Intangibles’ in Business

Category : Palo Alto

We modeled the Cybersecurity Canon after the Baseball or Rock & Roll Hall of Fame, except for cybersecurity books. We have more than 25 books on the initial candidate list, but we are soliciting help from the cybersecurity community to grow that list much further. Please write a review and nominate your favorite.

The Cybersecurity Canon is a real thing for our community. We have designed it so that you can directly participate in the process. Please do so!

Book review by Canon Committee Member, Rick Howard: “How to Measure Anything: Finding the Value of ‘Intangibles’ in Business” (2011), by Douglas W. Hubbard.

Executive Summary

Douglas Hubbard’s “How to Measure Anything: Finding the Value of ‘Intangibles’ in Business” is an excellent candidate for the Cybersecurity Canon Hall of Fame. He describes how it is possible to collect data to support risk decisions for even the hardest kinds of questions. He says that network defenders do not have to have 100 percent accuracy in our models to help support these risk decisions. We can strive to simply reduce our uncertainty about ranges of possibilities. He writes that this particular view of probability is called Bayesian, and it has been out of favor within the statistical community until just recently, when it became obvious that it worked for a certain set of really hard problems. He describes a few simple math tricks that all network defenders can use to make predictions about risk decisions for our organizations. He even demonstrates how easy it is for network defenders to run our own Monte Carlo simulations using nothing more than a spreadsheet. Because of all of that, “How to Measure Anything: Finding the Value of ‘Intangibles’ in Business” is indeed a Cybersecurity Canon Hall of Fame candidate, and you should have read it by now.

Introduction

The Cybersecurity Canon project is a “curated list of must-read books for all cybersecurity practitioners – be they from industry, government or academia — where the content is timeless, genuinely represents an aspect of the community that is true and precise, reflects the highest quality and, if not read, will leave a hole in the cybersecurity professional’s education that will make the practitioner incomplete.” [1]

This year, the Canon review committee inducted this book into the Canon Hall of Fame: “How to Measure Anything in Cybersecurity Risk,” by Douglas W. Hubbard and Richard Seiersen. [2] [3]

According to Canon Committee member Steve Winterfeld, “How to Measure Anything in Cybersecurity Risk” is an extension of Hubbard’s successful first book, “How to Measure Anything: Finding the Value of ‘Intangibles’ in Business.” It lays out why statistical models beat expertise every time. It is a book anyone who is responsible for measuring risk, developing metrics, or determining return on investment should read. It provides a strong foundation in qualitative analytics with practical application guidance.” [4]

I personally believe that precision risk assessment is a key, and currently missing, element in the CISO’s bag of tricks. As a community, network defenders, in general, are not good at transforming technical risk into business risk for the senior leadership team. For my entire career, I have gotten away with listing the 100+ security weaknesses within my purview and giving them a red, yellow, or green label to mean bad, kind-of-bad, or not bad. If any of my bosses would have bothered to ask me why I gave one weakness a red label vs. a green label, I would have said something like: “25 years of experience…blah, blah, blah…trust me…blah, blah, blah…can I have the money, please?”

I believe the network defender’s inability to translate technical risk into business risk with precision is the reason that the CISO is not considered at the same level as other senior C-suite executives, such as the CEO, CFO, CTO, and CMO. Most of those leaders have no idea what the CISO is talking about. For years, network defenders have blamed these senior leaders for not being smart enough to understand the significance of the security weaknesses we bring to them. But I assert that it is the other way around. The network defenders have not been smart enough to convey the technical risks to business leaders in a way they might understand.

This CISO inability is the reason the Canon Committee inducted “How to Measure Anything in Cybersecurity Risk” and another precision risk book, “Measuring and Managing Information Risk: A FAIR Approach,” into the Canon Hall of Fame. [5][4][3][6][7] These books are the places to start if you want to educate yourself on this new way of thinking about risk to the business.

For me though, this is not an easy subject. I slogged my way through both of these books because basic statistical models completely baffle me. I took stat courses in college and grad school but sneaked through them by the skin of my teeth. All I remember about stats was that it was hard. When I read these two books, I think I only understood about three-quarters of what I was reading, not because they were written badly, but because I struggled with the material. I decided to get back to the basics and read Hubbard’s original book that Winterfeld referenced in his review, “How to Measure Anything: Finding the Value of ‘Intangibles’ in Business,” to see if it was also Canon-worthy.

The Network Defender’s Misunderstanding of Metrics, Risk Reduction and Probabilities

Throughout the book, Hubbard emphasizes that seemingly dense and complicated risk questions are not as hard to measure as you might think. Drawing on scholars of the early twentieth century, such as Edward Lee Thorndike and Paul Meehl, he reasons through Clarification Chains:

If it matters at all, it is detectable/observable.
If it is detectable, it can be detected as an amount (or range of possible amounts).
If it can be detected as a range of possible amounts, it can be measured. [8]

As a network defender, whenever I think about capturing metrics that will inform how well my security program is doing, my head begins to hurt. Oh, there are many things that we could collect – like outside IP addresses hitting my infrastructure, security control logs, employee network behavior, time to detect malicious behavior, time to eradicate malicious behavior, how many people must react to new detections, etc. – but it is difficult to see how that collection of potential badness demonstrates that I am reducing material risk to my business with precision. Most network defenders in the past, including me, have simply thrown our hands up in surrender. We seem to say to ourselves that if we can’t know something with 100 percent accuracy, or if there are countless intangible variables with many veracity problems, then it is impossible to make any kind of accurate prediction about the success or failure of our programs.

Hubbard makes the point that we are not looking for 100 percent accuracy. What we are really looking for is a reduction in uncertainty. He says that the concept of measurement is not the elimination of uncertainty but the abatement of it. If we can collect a metric that helps us reduce that uncertainty, even if it is just by a little bit, then we have improved our situation from not knowing anything to knowing something. He says that you can learn something from measuring with very small random samples of a very large population. You can measure the size of a mostly unseen population. You can measure even when you have many, sometimes unknown, variables. You can measure the risk of rare events. Finally, Hubbard says that you can measure the value of subjective preferences, like art or free time, or of life in general.

According to Hubbard, “We quantify this initial uncertainty and the change in uncertainty from observations by using probabilities.” [8] These probabilities refer to our uncertainty state about a specific question. The math trick that we all need to understand is allowing for ranges of possibilities within which we are 90 percent sure the true value lies.

For example, we may be trying to reduce the number of humans who have to respond to a cyberattack. In this fictitious example, last year the Incident Response team handled 100 incidents with three people each – a total of 300 people. We think that installing a next-generation firewall will reduce that number. We don’t know exactly how many, but some. We start here to bracket the question.

Do we think that installing the firewall will eliminate the need for all humans to respond? Absolutely not. What about reducing the number to three incidents with three people for a total of nine? Maybe. What about reducing the number to 10 incidents with three people for a total of 30? That might be possible. That is our lower limit.

Let’s go to the high side. Do you think that installing the firewall will have zero impact on reducing the number? No. What about 90 attacks with three people for a total of 270? Maybe. What about 85 attacks with three people for a total of 255? That seems reasonable. That is our upper limit.

By doing this bracketing we can say that we are 90 percent sure that installing the next-generation firewall will reduce the number of humans who have to respond to cyber incidents from 300 to between 30 and 255. Astute network defenders will point out that this range is pretty wide. How is that helpful? Hubbard says that first, you now know this, where before you didn’t know anything. Second, this is the start. You can now collect other metrics, perhaps, that might help you reduce the gap.
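Here is a minimal sketch, in Python rather than a spreadsheet, of what you can do with that calibrated 90 percent interval once you have it. The normal assumption and the 3.29 divisor (a 90 percent interval spans roughly 3.29 standard deviations of a normal distribution) follow the spirit of Hubbard’s approach, but the code, the variable names and the “fewer than 100 responders” question are my own illustrative assumptions, not taken from the book.

```python
from statistics import NormalDist

# Calibrated 90% confidence interval for the number of responders
# after installing the next-generation firewall (from the example above).
lower, upper = 30, 255

# Under a normal assumption, a 90% interval spans about 3.29 standard deviations,
# so the interval implies a mean and standard deviation we can work with.
mean = (lower + upper) / 2
std_dev = (upper - lower) / 3.29
dist = NormalDist(mu=mean, sigma=std_dev)

print(f"Implied mean: {mean:.0f}, implied standard deviation: {std_dev:.0f}")
# An illustrative question: how likely is it that fewer than 100 people respond?
print(f"P(responders < 100) ~ {dist.cdf(100):.2f}")
```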

The History of Scientific Measurement Evolution

This particular view of probabilities, the idea that there is a range of outcomes that you can be 90 percent sure about, is the Bayesian interpretation of probabilities. Interestingly, this view of statistics has been out of favor almost since its inception, when Thomas Bayes penned the original formula back in the 1740s. The naysayers were the Frequentists, whose theory said that the probability of an event can only be determined by how many times it has happened in the past. To them, modern science requires both objectivity and precise answers. According to Hubbard:

“The term ‘statistics’ was introduced by the philosopher, economist, and legal expert Gottfried Achenwall in 1749. He derived the word from the Latin statisticum, meaning ‘pertaining to the state.’ Statistics was literally the quantitative study of the state.” [8]

In the Frequentist view, the Bayesian philosophy requires a measure of “belief and approximations. It is subjectivity run amok, ignorance coined into science.” [7] But the real world has problems where the data is scant. Leaders worry about potential events that have never happened before. Bayesians were able to provide real answers to these kinds of problems, like defeating the Enigma encryption machine in World War II and finding a lost and sunken nuclear submarine that was the basis for the movie “The Hunt for Red October.” But it wasn’t until the early 1990s that the theory became commonly accepted. [7]

Hubbard walks the reader through this historical research about the current state in scientific measurement. He explains how Paul Meehl, in the early 1900s, demonstrated time and again that statistical models outperformed human experts. He describes the birth of information theory with Claude Shannon in the late 1940s and credits Stanley Smith Stevens, around the same time, with crystalizing different scales of measurement from sets to ordinals to ratios and intervals. He reports how Amos Tversky and Daniel Kahneman, through their research in the 1960s and 1970s, demonstrated that we can improve our measurements around subjective probabilities.

In the end, Hubbard defines “measurement” as this:

  • Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations. [8]

Simple Math Tricks

Hubbard explains two math tricks that, at first reading, seem as if they cannot possibly be true but, when used by Bayesian proponents, greatly simplify measurement-taking for difficult problems:

  • The Power of Small Samples: The Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population. [8]
  • The Single Sample Majority Rule (i.e., The Urn of Mystery Rule): Given maximum uncertainty about a population proportion – such that you believe the proportion could be anything between 0% and 100% with all values being equally likely – there is a 75% chance that a single randomly selected sample is from the majority of the population. [8]
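I could not derive these numbers myself, but both rules are easy to sanity-check with a quick simulation. The sketch below is my own illustration in Python, not anything from the book: it draws an arbitrary skewed population for the Rule of Five, and a uniformly uncertain proportion for the Urn of Mystery rule, then counts how often each rule holds.

```python
import random

random.seed(42)
trials = 100_000

# --- The Rule of Five ---
# An arbitrary, skewed 'population'; the rule does not depend on its shape.
population = [random.lognormvariate(0, 1) for _ in range(100_000)]
true_median = sorted(population)[len(population) // 2]

hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1
print(f"Rule of Five: median captured in {hits / trials:.4f} of trials (theory: 0.9375)")

# --- The Single Sample Majority Rule (Urn of Mystery) ---
# The proportion of green marbles is equally likely to be anything from 0% to 100%.
majority_hits = 0
for _ in range(trials):
    proportion_green = random.random()
    drew_green = random.random() < proportion_green   # one random draw from the urn
    if drew_green == (proportion_green > 0.5):        # did the draw match the majority?
        majority_hits += 1
print(f"Urn of Mystery: draw matched the majority in {majority_hits / trials:.3f} of trials (theory: 0.75)")
```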

I admit that the math behind these rules escapes me. But I don’t have to understand the math to use the tools. It reminds me of a moving scene from one of my favorite movies: “Lincoln.” President Lincoln, played brilliantly by Daniel Day-Lewis, discusses his reasoning for keeping the southern agents – who want to discuss peace before the 13th Amendment is passed – away from Washington.

“Euclid’s first common notion is this: Things which are equal to the same thing are equal to each other. That’s a rule of mathematical reasoning. It’s true because it works. Has done and always will do.” [9]

The bottom line is that “statistically significant” does not mean a large number of samples. Hubbard says that statistical significance has a precise mathematical meaning that most lay people do not understand and many scientists get wrong most of the time. For the purposes of risk reduction, stick to the idea of a 90 percent confidence interval regarding potential outcomes. The Power of Small Samples and the Single Sample Majority Rule are rules of mathematical reasoning that all network defenders should keep handy in their utility belts as they measure risk in their organizations.

Simple Measurement Best Practices and Definitions

As I said before, most network defenders think that measuring risk in terms of cybersecurity is too hard. Hubbard explains four rules of thumb that every practitioner should consider before giving up:

  • It’s been measured before.
  • You have far more data than you think.
  • You need far less data than you think.
  • Useful, new observations are more accessible than you think. [8]

He then defines “uncertainty” and “risk” through the lens of possibilities and probabilities:

  • Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility.
  • Measurement of Uncertainty: A set of probabilities assigned to a set of possibilities.
  • Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
  • Measurement of Risk: A set of possibilities, each with quantified probabilities and quantified losses. [8]

In the network defender world, we tend to define risk in terms of threats, vulnerabilities and consequences. [10] Hubbard’s relatively new take gives us a much more precise way to think about these terms.
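Hubbard’s definition of a measurement of risk translates naturally into a small data structure. The sketch below is my own illustration of the idea in Python, not something from the book; the scenarios, probabilities and loss figures are invented purely for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Possibility:
    description: str
    probability: float   # quantified probability of the outcome occurring this year
    loss: float          # quantified loss in dollars if it occurs

# A toy 'measurement of risk' in Hubbard's sense: a set of possibilities,
# each with a quantified probability and a quantified loss (all numbers invented).
risk = [
    Possibility("Ransomware outbreak on file servers", 0.05, 750_000),
    Possibility("Credential theft leading to a data breach", 0.10, 1_200_000),
    Possibility("Business email compromise", 0.20, 90_000),
]

expected_annual_loss = sum(p.probability * p.loss for p in risk)
print(f"Expected annual loss across these possibilities: ${expected_annual_loss:,.0f}")
```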

Monte Carlo Simulations

According to Hubbard, the invention of the computer made it possible for scientists to run thousands of experimental trials based on probabilities for inputs. These trials are called Monte Carlo simulations. In the 1930s, Enrico Fermi used the method to calculate neutron diffusion by hand, with human mathematicians calculating the probabilities. In the 1940s, Stanislaw Ulam, John von Neumann, and Nicholas Metropolis realized that the computer could automate the Monte Carlo method and help them design the atomic and hydrogen bombs. Today, everybody who has access to a spreadsheet can run their own Monte Carlo simulations.

For example, take my previous example of trying to reduce the number of humans who have to respond to a cyberattack. We said that, during the previous year, 300 people responded to a cyberattack. We said that we were 90 percent certain that the installation of a next-generation firewall would result in a reduction in the number of humans who have to respond to an incident to between 30 and 255.

We can refine that estimate even more by simulating hundreds or even thousands of scenarios inside a spreadsheet. I did this myself by setting up 100 scenarios, in each of which I randomly picked a number between 0 and 300. I calculated the mean to be 162 and the standard deviation to be 86. Remember that the standard deviation is nothing more than a measure of spread from the mean. [11][12][13] The 68–95–99.7 rule says that 68 percent of the recorded values will fall within one standard deviation of the mean, 95 percent within two standard deviations, and 99.7 percent within three. [8] With our original estimate, we said there was a 90 percent chance that the number is between 30 and 255. After running the Monte Carlo simulation, we can say that there is a 68 percent chance that the number is between 76 and 248.
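That spreadsheet exercise translates directly into a few lines of Python. This is a rough sketch of the same experiment, not the spreadsheet I actually used; because the inputs are random, the mean, standard deviation and resulting 68 percent band will differ a little from run to run and from the figures quoted above.

```python
import random
import statistics

random.seed(1)  # remove this to get a fresh run, like recalculating the spreadsheet

# 100 scenarios, each picking a random responder count between 0 and 300,
# mirroring the spreadsheet exercise described above.
scenarios = [random.uniform(0, 300) for _ in range(100)]

mean = statistics.mean(scenarios)
std_dev = statistics.stdev(scenarios)

# 68-95-99.7 rule: about 68% of the values fall within one standard deviation of the mean.
print(f"Mean: {mean:.0f}, standard deviation: {std_dev:.0f}")
print(f"~68% band: {mean - std_dev:.0f} to {mean + std_dev:.0f}")
```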

How about that? Even a statistical luddite like me can run his own Monte Carlo simulation.

Conclusion

After reading Hubbard’s second book in the series, “How to Measure Anything in Cybersecurity Risk,” I decided to go back to the original to see if I could understand with a bit more clarity exactly how the statistical models worked and to determine if the original was Canon-worthy too. I learned that there was probably a way to collect data to support risk decisions for even the hardest kinds of questions. I learned that we, network defenders, do not have to have 100 percent accuracy in our models to help support these risk decisions. We can strive to simply reduce our uncertainty about ranges of possibilities. I learned that this particular view of probability is called Bayesian, and it has been out of favor within the statistical community until just recently, when it became obvious that it worked for a certain set of really hard problems. I learned that there are a few simple math tricks that we can all use to make predictions about these really hard problems that will help us make risk decisions for our organizations. And I even learned how to build my own Monte Carlo simulations to support those efforts. Because of all of that, “How to Measure Anything: Finding the Value of ‘Intangibles’ in Business” is indeed Canon-worthy, and you should have read it by now.

Source: https://researchcenter.paloaltonetworks.com/2017/07/cybersecurity-canon-measure-anything-finding-value-intangibles-business/

Author: Rick Howard



Palo Alto Networks Now a Six-Time Gartner Magic Quadrant Leader

Category : Palo Alto

Gartner’s 2017 Magic Quadrant for Enterprise Network Firewalls has been released, and Palo Alto Networks is proud to be positioned in the Leaders quadrant for the sixth consecutive year. I invite you to read the 2017 Magic Quadrant for Enterprise Network Firewalls report.

Gartner’s Magic Quadrant provides a graphical competitive positioning of technology providers in markets where growth is high and provider differentiation is distinct. Leaders execute well against their stated visions and are well-positioned for tomorrow. Gartner researchers continue to highlight both our ability to execute and the completeness of our vision. You can find more details in the report.

More than 39,500 customers in more than 150 countries have chosen Palo Alto Networks to realize the benefits of a truly next-generation security platform, safeguard critical assets, and prevent known and unknown threats. To protect our customers and stay ahead of sophisticated cyberattackers, we maintain a steadfast commitment to innovation. We recently introduced several more disruptive capabilities:

  • Application Framework: With a SaaS-based consumption model, Palo Alto Networks Application Framework allows customers to use new apps to solve the most challenging security use cases with the best technology available, without the cost and operational burden of deploying new infrastructure.
  • GlobalProtect cloud service: GlobalProtect cloud service eases your next-generation firewall and GlobalProtect deployment by leveraging cloud-based security infrastructure operated by Palo Alto Networks.
  • Logging Service: Palo Alto Networks Logging Service is a cloud-based offering for context-rich, enhanced network logs generated by our security offerings, including those of our next-generation firewalls and GlobalProtect cloud service.

DISCLAIMER: Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Source: https://researchcenter.paloaltonetworks.com/2017/07/palo-alto-networks-now-six-time-gartner-magic-quadrant-leader/




How Traps Protects Against Astrum

Category : Palo Alto

Astrum is a relatively old exploit kit (EK) that is also known as Stegano EK. We noted in January 2017 how Stegano/Astrum had reappeared in recent months and talked about how Traps protects against it.

Since then, researchers have seen Astrum updated with new specific countermeasures that target security products and seek to evade detection, making it one of the most evolved threats out there today.

How Does It Work?

Astrum is currently being used as part of the AdGholas malvertising campaign. The AdGholas campaign uses malicious scripts in banner ads on legitimate websites. Behind the scenes, the malicious scripts direct users to an Astrum exploit kit server, which then attacks the user’s system.

Astrum uses malicious Adobe Flash files that attempt to exploit vulnerabilities in Adobe Flash Player (CVE-2015-8651, CVE-2016-1019 and CVE-2016-4117) and Microsoft Internet Explorer (CVE-2016-0189). While these vulnerabilities have been patched, users with older versions of Flash and missing Microsoft patches are still at risk of successful attacks against them.

If the malicious Flash file is successful, it will download the payload onto the victim’s machine. Astrum has been known to deliver banking Trojans, including Ursnif. However, recent payloads include ransomware and other malware. Most recently, researchers have seen Astrum spreading the Mole ransomware.

Why Is It Unique?

Since March 2017, researchers have seen Astrum updated with tactics that specifically target detection and analysis.

Astrum exploits an information disclosure vulnerability (CVE-2017-0022) to identify and evade antivirus products. Astrum also utilizes Diffie-Hellman key exchange to incorporate an anti-replay feature to prevent security researchers from reviewing and diverting malicious network activity. Further adding to the challenge of detecting and analyzing Astrum is its use of HTTPS to encrypt its traffic.

And finally, Astrum encrypts the malicious Flash file so that the bulk of the malicious content is encrypted and only a small decryption stub is unencrypted. Astrum takes additional steps to defeat decryption in a sandbox environment by making the ability to decrypt the malicious Flash file machine-specific: the file cannot be decrypted anywhere but on the targeted system.

How Do You Stop It?

Taken together, these recent updates allow Astrum to thwart most security protections. Its use of HTTPS challenges firewall-based protections. Its use of encryption for the malicious payload bypasses most traditional signature-based antivirus solutions. And the machine-specific decryption countermeasure thwarts the sandboxing found on many more advanced security products.

With the advanced evasion techniques Astrum utilizes, endpoint security needs real-time protections to stop Astrum on the target system after the malicious Flash file is decrypted but before it successfully executes.

Palo Alto Networks Traps advanced endpoint protection offers DLL security to prevent access to crucial DLL metadata from untrusted code locations. Traps also offers JIT mitigation to prevent JIT code from calling out-of-the-norm operating system functions. Traps offers unique protections against advanced exploitation capabilities, successfully preventing Astrum and exploit kits of its like.

Attackers will try to evade sandboxes and traditional signature-based antivirus in many ways, one of which is described above. However, the attacker cannot disguise the actual malicious activity he is trying to deploy. Traps is anti-evasive and not signature-based; it stops the malicious activity itself, which cannot be hidden or replaced.

Source: https://researchcenter.paloaltonetworks.com/2017/07/how-traps-protects-against-astrum/



Tips for Gamifying Your Cybersecurity Education and Awareness Programs

Category : Palo Alto

Employees are fast becoming the weakest link in the defence against cybercriminals. Common sense only goes so far; you need to make sure that security best practices don’t go in one ear and out the other. Whether through innocent mistakes or because they were targeted for their access to sensitive information, employee error can easily open the door to malware or information theft.

Successful attacks often involve poor processes and exploit human tendencies. To reduce an organisation’s threat surface, the focus of regular employee training needs to shift from reaction to prevention. Pure compliance-driven approaches have proven ineffective for employee security training, usually because they aren’t interesting or personal enough to capture employees’ imaginations. Businesses should focus on educating employees about how to protect their personal data, thereby encouraging employees to adopt further security-orientated practices in the workplace.

Employee training may take different forms, including the increasing practice of “gamifying” cybersecurity education programs. Gamification is the process of using gaming mechanics in a non-gaming context, leveraging what is exciting about games and applying it to other types of activities that may not be so fun. Designed with elements of competition and reward, gamification programs are becoming popular because they can be used within a variety of industries.

Many businesses currently use gamification in such areas as customer engagement, and employee education and training to drive performance and motivation. Gaming elements include one-on-one competitions, rewards programs, and more.

There are two key ways business owners can use gamification as a way of addressing cybersecurity in their organisation:

1. Make training more exciting and engaging for employees

Using gamification can help businesses improve their cybersecurity in numerous ways, including showing employees how to avoid cyberattacks and learning about vulnerabilities in software.

Global consulting firm PwC teaches cybersecurity through its Game of Threats. [1] Executives compete against each other in real-world cybersecurity situations, playing as either attackers or defenders. Attackers choose the tactics, methods, and skills of attack, while defenders develop defence strategies and invest in the right technologies and talent to respond to the attack. The game gives executives an understanding of how to prepare for and react to threats, how well-prepared the company is, and what their cybersecurity teams face each day.

Gamifying will help make the training process more exciting and engaging for employees, increasing employee awareness of cybersecurity practices, including how to deal with attacks correctly.

2. Offer incentives and rewards to encourage desired behaviours

Human error is responsible for most security breaches: employees feel pressured to complete work as quickly as possible to meet deadlines, which can lead them to overlook important company security policy.

For example, running so-called PhishMe campaigns can be a great way to train employees on better email security. These include regular phishing emails sent across the organisation, testing the staff’s response and action.

Gamification lets businesses reward those employees who follow security procedures and adhere to the correct security guidelines, which will further promote good behaviour. This may take the form of employees receiving a badge or recording points, which are then displayed on a scoreboard for the office to follow. In some organisations, after employees reach specific milestones, they are presented with a material reward, such as a gift voucher.

This system also makes it possible to identify employees who display poor behaviour during the gamified exercises; those employees may then need to complete further cybersecurity training. Recognising and rewarding employees when they do the correct thing leads to continued positive behaviour, motivating employees to undertake safe practices and resulting in a more cyber-secure working environment.

At the heart of any security awareness training is education to teach employees a shared sense of responsibility for the data they work with, and the data they create and use at home. All security awareness campaigns should become part of an ongoing process, not a one-time initiative. Leaders of any business, big or small, can sometimes feel they lack the resources needed to drive an effective cybersecurity education campaign, but this can be done without breaking the bank.

  • Visual aids work well. Start with some small videos, posters and/or contests as a reminder to drive the message home for all to understand that security is everyone’s responsibility.
  • ‘Fear of God’ tactics do not work. The business goal should be to build a culture of cyber awareness, so treat this like a marketing campaign with the intent to persuade and change the behaviour of an employee.
  • Short and concise work best. Long emails always get ignored. Keep them short and fun, and ALWAYS ensure it is a top-down approach. Employees look up to their leaders. If the leaders do not embody a cyber-secure culture, why should the employees? The aim is to educate employees about best practices, not force them to be cybersecurity experts. Make it fun and have a laugh, so everyone can learn at the same time.
  • Reinforcement and follow-up are key. Training is a constant; learn from what works and re-educate as needed. Re-test your newly onboarded, as well as existing, staff members on whether they fall for a phishing email, and check to see how many employees still fail to recognise a fake email. Encourage communication to report a fake and call out departmental groups that may be lagging. The aim is not to single people out, but rather create some healthy rivalry within the organisation.

Eliminating cyber risks in any business is an ongoing process, but it can be managed. We need to foster an environment where employees call out anything they question, and re-educate as needed. If employees walk away from the security awareness program pausing to question before they click on something malicious, you have moved the needle towards being more secure.

[1] https://www.pwc.com/us/en/financial-services/cybersecurity-privacy/game-of-threats.html

Source: https://researchcenter.paloaltonetworks.com/2017/07/cso-winning-game-cybercriminals/
Author: Sean Duca


Best Practice Security Policy Concepts and Methods for Your Data Center

Category : Palo Alto

A data center houses an enterprise’s most critical data, such as source code, financial and personal information, or designs for pharmaceutical drugs – the enterprise’s digital crown jewels.


Designing and deploying a best practice security policy to protect your valuable data means protecting not only the perimeter of your enterprise network; it means protecting the connections into and out of the data center perimeter, as well as the connections between servers and VMs inside the data center.

But how do you transition to a data center best practice security policy?

In “Data Center Best Practice Security Policy Part 1: Concepts,” you’ll be presented with ways to think about a best practice security policy strategy and how to design it for your particular business, with the goal of achieving positive security enforcement that allows only the users, applications and content that you explicitly permit on the network, and denies all other traffic. It addresses questions such as:

  • How do you create a transition strategy that aligns with your business goals?
  • How do you decide which assets to protect first?
  • What methods should you use to make the transition?
  • How will you protect your data center during the transition?

Coming Soon!

If you enjoyed part 1, look for “Data Center Best Practice Security Policy Part 2: Implementation” to learn the specific best practices to apply to traffic at the perimeter and inside the data center.

 Source: https://researchcenter.paloaltonetworks.com/2017/06/tech-docs-best-practice-security-policy-concepts-methods-data-center/


Decline in Rig Exploit Kit

Category : Palo Alto

Starting in April 2017, we saw a significant decrease in Rig exploit kit (EK) activity after two major campaigns, EITest and pseudo-Darkleech, stopped using EKs. Figure 1 shows the hits for the Rig EK from December 2016 through May 2017, highlighting this trend.

This blog reviews recent developments in the EITest and pseudo-Darkleech campaigns that have contributed to the current drop in Rig EK. We also explore other causes for the overall decline of EK activity as others have noted in recent reports. Finally, due to the anemic nature of today’s EK scene, we review some methods criminals are focusing on for malware distribution.


Figure 1: Hits for Rig EK from December 2016 through May 2017.

Two Major Campaigns Stop Using Rig EK

At the very end of March 2017, researchers stopped seeing indicators of the pseudo-Darkleech campaign. Pseudo-Darkleech was a long-running campaign that switched to Rig EK in September 2016. Since September 2016, pseudo-Darkleech accounted for a significant amount of Rig EK seen on a daily basis. When pseudo-Darkleech disappeared, Rig EK activity dropped approximately 50 percent from previous months.

Three to four weeks later, another long-running campaign cut back on its use of Rig EK. Near the end of April 2017, the EITest campaign began pushing tech support scams. Previously, EITest had also generated a great deal of Rig EK traffic, but as the criminals behind this activity began focusing on other techniques, Rig EK levels dropped another 50 percent in May 2017. As we enter June, EITest is primarily pushing tech support scams, and it does not appear to be utilizing EKs at this time.

Figure 2 shows the hits for Rig EK from March 1, 2017 through May 31, 2017 in more detail. Note on the chart when pseudo-Darkleech disappears and when EITest shifts focus, and the impact of each on Rig EK traffic.

Although researchers still find Rig being used by other campaigns, like RoughTed or Seamless, recent levels are at their lowest since we began tracking this EK.


Figure 2: Rig EK hits from March 1st through May 31st, 2017.

Not the Threat They Once Were

Rig is not the only EK suffering in today’s threat landscape. All EKs have been affected. So why aren’t EKs as active as they once were?

One contributing factor is that the target surface for EKs is getting smaller.

EKs typically use browser-based exploits targeting Microsoft Windows systems. They are primarily focused on Internet Explorer, Microsoft Edge, and Adobe Flash Player. EKs are largely ineffective against more popular browsers like Chrome, a product that has gone through four major version updates this year alone.

Users (potential victims) are moving to other browsers, and this has greatly reduced the number of possible targets for current EKs. As shown below in Figure 3, as of May 2017, only 19 percent of the desktop browser market was taken by Microsoft Edge and Internet Explorer 11 combined.


Figure 3: Desktop browser market share in May 2017 from NetMarketShare.com.

With a declining target base, EKs are not aging gracefully. In previous years, we saw a variety of EKs used by various campaigns. But by the end of 2015, notable EKs like Sweet Orange and Fiesta had disappeared. As 2016 progressed, other prominent EKs like Nuclear and Angler also shut down. The graveyard of expired EKs has several dozen names by now.

This lack of diversity has impacted EK development. According to Proofpoint, more than a year has passed since any major EK has featured a zero-day exploit, making EKs far less effective compared to previous years.

Furthermore, the security community has been much more active against EKs. Recent efforts by Cisco Talos in 2016 and RSA Research in 2017 have seen researchers coordinating with hosting providers to take down servers used in domain shadowing schemes favored by EKs. The resulting setbacks have not been permanent, but they have significantly impacted operations for criminals using EKs.

Ultimately, a declining browser target base, lack of new exploits, and recent efforts by the community to fight domain shadowing have all contributed to an overall decline in EK activity.

What Are Criminals Turning To?

As EKs become less effective, criminals are focusing on other methods, such as malicious spam attacks or social engineering schemes like fake HoeflerText notifications, as shown in Figure 4. Whether through spam or a browser popup, criminals trick potential victims into double-clicking a file that infects their computers.


Figure 4: A fake HoeflerText notification in Google Chrome that leads to malware.

In some cases, URLs will redirect to an EK one day and then, on following days, redirect to a fake installer for something like Adobe Flash Player, as shown in Figure 5. These social engineering schemes are becoming more common, and researchers often run across them as they search for EKs.


Figure 5: Fake Flash installer distributing the same malware Rig EK did the day prior.

In some cases, criminals have turned away from malware entirely, and are focusing on apparently more lucrative activity. For example, the EITest campaign has switched to pushing tech support scams. At first, this seemed to be location-based, targeting the US and UK. However, as we go into June 2017, this type of activity is all we have found from the EITest campaign in recent days.

Figure 6 below shows a page viewed on June 7, 2017, from a website compromised by the EITest campaign. The highlighted portion is a URL that redirects to the tech support scam website shown in Figure 7, which claims your computer has been infected.


Figure 6: Injected script in page from a site compromised by the EITest campaign.


Figure 7: The tech support scam site.

This particular campaign also plays audio that continually reinforces the same message. You cannot simply click OK or close the browser; the windows will immediately reappear. To close the browser and stop the audio, you must use the task manager to kill the browser process.

These tech support scams have been so successful that they are now a constant feature of our threat landscape. The EITest campaign has been pushing them for more than a month now.

Conclusion

Although EK activity levels are down, we still see indicators of Rig and Magnitude on a near-daily basis. But EKs are a relatively minor factor in today’s threat landscape compared to social engineering schemes and malspam. Users who follow best security practices are much less likely to be affected by the EK threat.

However, this situation could change as new exploits appear and updated techniques are used in malware distribution. It always pays to be prepared. Threat detection, prevention, and protection solutions like the Palo Alto Networks Next-Generation Security Platform are a key part of any prevention strategy.

 Source: https://researchcenter.paloaltonetworks.com/2017/06/unit42-decline-rig-exploit-kit/
