Category Archives: Gigamon


Key Message from Palo Alto Networks Ignite 2017: We Need to Work Better Together

Category : Gigamon

Last week, I had a security choice to make: Go to the Gartner Security and Risk Management Summit in National Harbor, MD, or Palo Alto Networks' Ignite 2017 in Vancouver, BC. Of course, Gigamon had a presence at both and I was lucky enough to head north. I wasn't alone—around 3,500 security professionals were signed up for the conference, including many of Gigamon's technical alliance and channel partners. I guess this shouldn't be surprising given the natural affinity of our technologies.

For me, one of the major show themes was that the industry needs to work better together to ensure our customers get the protection they are looking for from our technologies. While Palo Alto Networks execs set out an ambitious vision for the future of the security market, they were quick to point out that no one company could deliver everything. Thus, we have to find a way to let customers quickly try out new security technologies without the need for huge, disruptive deployment projects. Albeit with a slightly different focus than Palo Alto Networks' newly laid-out vision, this is a message Gigamon has been promoting for some time with our GigaSECURE Security Delivery Platform.

Automation and Audience Validation

During our presentation on how Gigamon automated our AWS development environment (including securing each developer's VPC via an automated VPN extension of our corporate network), the audience further validated the need to work better together. They showed considerable interest in how we automated the set-up of the various tools to remove the need for developers to be experts in each environment they are asked to use, give greater control to IT to ensure corporate assets are protected, and reduce the time to developer productivity on a new platform.
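The specifics of Gigamon's automation pipeline weren't covered in detail here, but the first step of any such workflow is carving out non-overlapping, per-developer address space before VPCs and VPN attachments are created. A minimal sketch of that step (the function name and the 10.128.0.0/12 base block are illustrative assumptions, not Gigamon's actual scheme):

```python
import ipaddress

def allocate_developer_vpcs(developers, base_network="10.128.0.0/12", prefix=16):
    """Deterministically carve a non-overlapping VPC CIDR per developer
    out of a reserved corporate base block, so automated VPN extensions
    back to the corporate network never collide."""
    supernet = ipaddress.ip_network(base_network)
    subnets = supernet.subnets(new_prefix=prefix)
    # Sort developers for a stable developer-to-CIDR mapping across runs.
    return {dev: str(subnet) for dev, subnet in zip(sorted(developers), subnets)}
```

An automation tool would then feed each allocated CIDR into the VPC-creation and VPN-attachment steps, keeping developers out of the networking details entirely.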

Our talk about automation was far from the only mention at the conference. Our partner Splunk was actively promoting their Adaptive Response framework and solutions from the likes of Phantom Cyber were also present. It’s essential that, as an industry, we develop solutions that automate the run-of-the-mill activities that a security analyst currently does by hand. The scarcity of good security talent means we need to ensure our customers get the most possible from every resource they have available. Gigamon is working heavily in this area in conjunction with a number of customers, so look out for announcements later this summer.

The other manifestation of "better together" was through my conversations with our channel partners from all over the world. It was great to spend some time with folks from our new EMEA distributor Exclusive Networks and I also got to talk with channel partners from Canada, Poland, Brazil, Mexico, Japan, and, of course, the U.S.A. All of them were full of stories about the value of deploying Gigamon along with Palo Alto Networks devices and eager to discuss how we can add to Palo Alto Networks' new strategy. We had a lot of ideas, as you can imagine, but we'll save those for another blog.


Author: Phil Griston


Gartner Security and Risk Management Summit, All Roads Lead to…Visibility!

Category : Gigamon

This week in National Harbor, MD, Gartner held its annual Security and Risk Management Summit—an event that has become a meeting ground for security thought leaders.

Gartner kicked off the event with a keynote that introduced its new strategic approach for cybersecurity defenders: CARTA (Continuous Adaptive Risk and Trust Assessment). An evolution of the Gartner Adaptive Security Architecture, CARTA recognizes the need for cybersecurity teams to adapt to the significant risks facing defenders today. Risk and, by extension, trust, can no longer be binary decisions and, depending on the context, responses must be more adaptive. A CARTA approach relies on the use of APIs for automation, moves away from simple rule-based systems, and puts a greater emphasis on detection/response vs. mere prevention.

Contextual and continuous visibility is at the heart of CARTA—and precisely why the GigaSECURE Security Delivery Platform, introduced two years back, has become so popular among forward-looking security operations centers. Rather than merely relying on prevention techniques, security operations teams can leverage continuous and pervasive visibility to enable multiple tools in their security arsenal—from detection to containment or even predictive analytics tools—to see more and secure more of their infrastructure.

The Call for a New Cybersecurity Model

Indeed, a standout message at this year’s event was the strong recognition that cybersecurity defenders need a new model from which they can re-architect their enterprise security framework.

Our own CTO Shehzad Merchant spoke to an attentive audience about the need for a new model for cybersecurity defenders, one we call the “Defender Lifecycle Model.” Rather than a patchwork of antiquated Band-Aids that can’t combat the invasion of infectious diseases, the Defender Lifecycle Model acts much like the human immune system and can more effectively help organizations understand, characterize, and defeat ever-evolving polymorphic threats. (More on this soon!)

Other Hot Topics: TLS Decryption and the Cloud

Security practitioners and analysts had lots to discuss and ask about TLS decryption and the cloud:

  • Is decryption best done inside a security appliance, e.g., a firewall or a Web proxy? Or is it best done on a Visibility Platform? While the first impulse might be to decrypt inside the security appliance, closer analysis reveals that such an approach severely penalizes performance of that security appliance and does not allow efficient offloading to other security tools that need to inspect decrypted traffic (remember, a modern cybersecurity model is more than just prevention). In contrast, a Visibility Platform allows for the computationally expensive decryption process to be done once, with decrypted traffic then sent to multiple security tools.
  • What is the impact of TLS 1.3 on TLS decryption methods and specifically on security operations?
  • What kind of application workloads are organizations moving to the public cloud? New workloads? Or is it a lift and shift of existing workloads? Turns out this depends on a variety of factors and there isn't a common pattern yet. However, one aspect of cloud adoption that does have security operations nervous today is the lack of the right visibility into the cloud.
  • How can organizations automate their Security Operations Center (SOC)? How could they use approaches like Software-Defined Visibility to better automate tasks in the SOC?
  • As security analytics move from today’s descriptive and diagnostic methods to future predictive and prescriptive techniques, how should data acquisition methods change?
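The decrypt-once trade-off in the first question above can be put in rough numbers: decrypting inline in every tool multiplies the CPU spent on decryption by the number of tools, while a visibility platform pays that cost a single time. A toy cost model (the cycles-per-byte figure is an illustrative assumption, not a benchmark):

```python
def decryption_cost(gbps, tools, decrypt_once=True, cycles_per_byte=40):
    """Rough relative CPU cost (cycles/sec) of TLS decryption for a
    given traffic load. With decrypt-once (visibility platform), the
    traffic is decrypted a single time and fanned out to every tool;
    otherwise each inline tool pays the full decryption cost itself."""
    bytes_per_sec = gbps * 1e9 / 8
    passes = 1 if decrypt_once else tools
    return bytes_per_sec * cycles_per_byte * passes
```

With five inspection tools, the inline approach costs five times the decryption CPU of the decrypt-once approach, independent of the actual cycles-per-byte constant chosen.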

Indeed, for these reasons, it felt like all roads led to visibility at the Gartner Summit . . . and, by extension, to the exhibit booth of visibility market leader Gigamon!



Pervasive Visibility Extended to AWS GovCloud (US)

Category : Gigamon

In April, Gigamon announced our quick progression in achieving Amazon Web Services (AWS) APN Advanced Technology and Public Sector partner status, along with our availability in AWS Marketplace. Since then, we've continued to move rapidly to expand our cloud capabilities and I'm pleased to announce that the Gigamon Visibility Platform for AWS is now available in the GovCloud (US) region.

AWS GovCloud is a specific region of the AWS Cloud designed for U.S. government organizations hosting proprietary data such as sensitive patient records, financial data, personally identifiable information (PII) and other controlled unclassified information (CUI). Organizations aiming to benefit from cloud adoption require an independent public cloud environment that is not only scalable and elastic, but also complies with stringent regulatory and compliance requirements such as Federal Risk and Authorization Management Program (FedRAMP), International Traffic in Arms Regulations (ITAR) and the Health Insurance Portability and Accountability Act (HIPAA).

Now, the more than 50 federal organizations in the Department of Defense, intelligence community, and civilian agencies, as well as dozens of state and local government and educational entities, who have adopted the Gigamon Visibility Platform can extend it into AWS GovCloud (US). This gives them deeper, more pervasive visibility to manage, secure, and understand sensitive data and regulated workloads, and to ensure compliance with stringent federal requirements.

It’s important to note that the other AWS regions offer the same high level of security as AWS GovCloud (US) and support existing security controls and certifications. What is unique to the GovCloud (US) region is that it is maintained by U.S. citizens only and can be accessed through FIPS 140-2 service endpoints.

Adhering to the Presidential Executive Order for Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, Gigamon stands at the forefront of today's cloud transformation as we enable organizations to safely migrate their data and applications to AWS GovCloud (US) and extend visibility and control of data-in-motion in their on-premises data centers.

Gigamon will be exhibiting at the AWS Public Sector Summit on Tuesday, June 13, to Wednesday, June 14, at Booth #456 in the Walter E. Washington Convention Center, Washington, D.C. We invite you to come meet with a visibility expert on the Gigamon Public Sector team and learn more about availability of the Visibility Platform for AWS and GovCloud (US).



The Visibility Platform: See What Matters

Category : Gigamon

Webinar Title: The Visibility Platform: See What Matters™
Date: Tuesday, June 06, 2017
Time: 11:00 AM Pacific Daylight Time
Duration: 30 Min

Mid-level managers and practitioners in security operations and network operations teams should attend this webinar to discover how visibility is more than just basic “TAP Aggregation” and can be used as a strategic weapon in your infrastructure against malware outbreaks such as WannaCry. Gigamon Vice President of Product Line Management Ananda Rajagopal will introduce you to the paradigm of ‘Software-Defined Visibility’ to leverage the power of automation in visibility operations. You will learn how the Gigamon Visibility Platform can help you:

  • Improve the performance of overloaded management tools
  • Maximize infrastructure performance with automated visibility
  • Extract metadata from network traffic for advanced insight

RSVP and attend the live 30-minute webinar on June 6 at 11 AM PDT to receive your $10 Starbucks gift card. Can't attend? RSVP and receive the on-demand link. You can take a coffee break at any time and see what you missed.

* To be eligible for the $10 Starbucks gift card, webinar attendees must attend the webinar for 15 minutes or longer. Attendees must provide a valid email address and phone number. Gift cards will be delivered within 10 business days of the webinar's conclusion. Starbucks is in no way affiliated with this promotion. STARBUCKS and the Starbucks Logo are trademarks of Starbucks Corporation. All other trademarks are the property of their respective owners.


Who Owns Cybersecurity Risk Management?

Category : Gigamon

In light of the countless cyber incidents reported daily—including the high-profile Yahoo database breaches that impacted hundreds of millions of customers—the question of risk responsibility is more front and center than ever before. To date, there’s remained a troubling tendency to view cybersecurity as fundamentally different and separate from other organizational risks. Or, it’s simply viewed as an “IT problem” best left handled by those with the requisite experience and operational subject matter expertise.

And there's the rub. Just because something is complex and highly technical doesn't absolve senior leadership of their responsibility for it. That includes Yahoo's CEO Marissa Mayer as well as, say, hospital board members and executives who have long been responsible for protecting their organizations from the complex risks associated with quality, patient safety, and evolving medical innovations.

Cybersecurity can no longer be ignored or treated separately by senior leadership. Because if it is, who then owns cybersecurity risk management?  

The Role and Responsibility of the Board

Many boards delegate cybersecurity governance and oversight to an audit or risk committee. Others approach it as a separate strategic priority or within an existing enterprise strategic risk management governance structure. Some don’t address it at all.

The size, industry, and business complexity of an organization often dictate the approach. For example, the board of a bank would likely take a different approach to cybersecurity governance than, perhaps, a mining company with extensive IP-enabled machinery and control systems.

Regardless of the approach, just as boards are ultimately responsible and legally accountable for overseeing an organization’s financial health, systems and controls, so, too, are they responsible for providing strategic risk management direction to senior leadership as well as oversight of systems, policies, processes and controls in regards to cybersecurity.

While board members may not actually need to be able to write firewall rules, they certainly need to attain and maintain an acceptable level of “cybersecurity literacy.” And they need to ensure the fulfillment of their governance, oversight and fiduciary responsibilities by making cybersecurity a strategic priority and holding management accountable for managing and reporting results.

The National Association of Corporate Directors has nicely distilled these responsibilities down to five principles:

PRINCIPLE 1: Directors need to understand and approach cybersecurity as an enterprise-wide risk management issue, not just an IT issue.

PRINCIPLE 2: Directors should understand the legal implications of cyber risks as they relate to their company’s specific circumstances. 

PRINCIPLE 3: Boards should have adequate access to cybersecurity expertise, and discussions about cyber-risk management should be given regular and adequate time on the board meeting agenda.

PRINCIPLE 4: Directors should set the expectation that management will establish an enterprise-wide, cyber-risk management framework with adequate staffing and budget.

PRINCIPLE 5: Board discussion of cyber risk management should include identification of which risks to avoid, accept, mitigate, or transfer through insurance, as well as specific plans associated with each approach.

More complete details on these principles are available in the NACD Director’s Handbook on Cyber-Risk Oversight.

The Role and Responsibility of the CEO 

While the board is responsible for providing strategic direction and oversight, the CEO is ultimately accountable to the board for the operational management of cybersecurity risk and the implementation of policies, procedures and controls to ensure these objectives are being met. This responsibility includes reporting to the board in a timely, transparent and detailed manner.

Often, the CEO will defer to the chief information officer (CIO) or, if the organization is larger and more complex, possibly the chief information security officer (CISO) to present quarterly or annually to the board. These presentations can sometimes take the form of assurances that “everything is being done” and may also include metrics and key performance indicators as data points for review.

Where this approach falls short of proper governance is when key performance indicators go unmet or an actual breach occurs. The CEO cannot shift responsibility onto the shoulders of the CIO or CISO and lay blame with the IT department. This would be the equivalent of the CEO deferring to the CFO to present a dismal financial report to the board and blaming the accounting department for a drastic decline in revenue.

The inability of a CISO to meet key performance indicators might be due to insufficient budget priority given to cybersecurity in general or, alternatively, a drastic decline in revenue might have resulted from a loss of consumer confidence due to a security or privacy breach. Today, there is no way to separate cybersecurity from all other strategic objectives and operations of any organization, regardless of its complexity.

Moreover, each business unit or department must also embrace cybersecurity as a business imperative and priority. The extent to which they do so will be a direct reflection of the level of strategic priority given to it by both the board and CEO.

Along with setting the proper "tone from the top," the CEO must provide direction and resolve conflicting departmental priorities. For example, marketing and sales may want to ensure that a product is easy to use and insist on removing sources of adoption friction, such as second-factor authentication or other security enhancements demanded by product engineering, that might deter a potential consumer from choosing and purchasing the product or service.

Balancing the need to drive adoption and, consequently, revenue versus the need to protect both customers and the organization and therefore the brand is not a decision that can be made by front line management. Nor should they shoulder the responsibility.

Ultimately, there is no escaping the reality that the board is responsible for oversight and strategic direction of cybersecurity while the CEO owns operational management responsibility. However, these responsibilities need to be aligned and integrated into all other strategic and operational business decisions.

Accordingly, the IT department or the CISO is responsible for the day-to-day activities required to implement, manage, and report on cybersecurity risk, and should report directly to the CEO or to a member of the senior leadership team who can oversee the enterprise's cybersecurity program decision-making and whom the board can hold accountable for cybersecurity.

So Who Owns Management of Cybersecurity Risk?

The question is best answered by asking: Who owns financial risk within the organization? Who owns patient safety risk? Who owns risk associated with shareholder value? Each organization may take a different approach to answering these questions, but elevating cybersecurity risk to the strategic level of these other risk categories, recognizing that it intersects significantly with all of them, and dealing with it as a strategic priority at all levels of the organization is no longer optional.



WannaCry: Whodunit?

Category : Gigamon

That’s the, er, $61,614.02 question!

The worldwide WannaCry ransomware attack has been making headlines since Friday afternoon, when it began running rampant at hospitals in the UK, causing manufacturing plant shutdowns across Europe, and propagating and encrypting everything it could get its hands on, from ATMs to marketing display panels.

WannaCry infects unpatched Windows-based computers and immediately encrypts 176 different file types, appending .WCRY to the end of the file name. Once encryption is complete, the user receives a taunting message demanding that a roughly $300 USD bitcoin ransom be paid to decrypt and release the files. If not paid within three days, the demand doubles; if not paid within seven days, WannaCry threatens to delete the files permanently. These tick-tock deadlines are used, of course, to create a sense of urgency for victims to pay.
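The reported payment schedule is simple enough to state precisely. As a small illustrative sketch, the demand as a function of time since infection (figures as reported above; the function itself is not part of the malware):

```python
def wannacry_demand(hours_elapsed, base_usd=300):
    """Ransom demanded at a given time after infection, per the reported
    WannaCry schedule: the demand doubles after three days, and after
    seven days the files are threatened with permanent deletion."""
    if hours_elapsed >= 7 * 24:
        return None  # past the final deadline: no demand, files threatened
    if hours_elapsed >= 3 * 24:
        return base_usd * 2
    return base_usd
```

The escalating schedule is exactly the urgency mechanism described: every hour of delay moves the victim closer to a doubled price, then to total loss.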

Uh, Don’t Show Me the Money?

It sounds like a real criminal moneymaker, doesn’t it? With current reports suggesting outbreaks in more than 150 countries and possibly 300,000+ computers infected, at roughly $300 a ransom, you’d think the perpetrators of this global heist would be making off like bandits. But apparently, they’re not.

Analysis of the three bitcoin addresses hard-coded into the ransomware indicates that, at the time of writing, a total of 35.47151311 bitcoin ($61,614.02 USD) had been paid in 235 separate transactions.

That’s the great thing about bitcoin. Anyone can view all transactions and check out how many people have actually paid the ransom so far. Have a look for yourself:

Or you can just check out this handy, real-time graph prepared by Elliptic:

But Still, Whodunit?

Okay, so we’ve established that whoever did this isn’t getting rich. However, due to the sheer amount of damage done and resulting and continuing chaos from the attack, it’s still important to figure out who is behind it all.

Law enforcement and security researchers have been on the case since Friday and it appears they are making some progress. On Monday, Google Security Researcher Neel Mehta (@neelmehta) posted this tweet along with the hashtag #WannaCryptAttribution.

The security research community immediately jumped on the clues he provided and determined that an early WannaCry sample shared some code with a 2015 sample of a backdoor program known as Contopee. Contopee has been used extensively by a hacker group known as the Lazarus Group. And the Lazarus Group is largely believed to be operating under the control of the North Korean government.

Kaspersky provided the screenshot below that demonstrates the similarity between the two ransomware samples. Shared code has been highlighted.

Source: Securelist, “WannaCry and Lazarus Group—the Missing Link?”

Symantec conducted their own analysis and agrees:

  • Co-occurrence of known Lazarus tools and WannaCry ransomware: Symantec identified the presence of tools exclusively used by Lazarus on machines also infected with earlier versions of WannaCry. These earlier variants of WannaCry did not have the ability to spread via SMB. The Lazarus tools could potentially have been used as method of propagating WannaCry, but this is unconfirmed.
  • Shared code: As tweeted by Google’s Neel Mehta, there is some shared code between known Lazarus tools and the WannaCry ransomware. Symantec has determined that this shared code is a form of SSL. This SSL implementation uses a specific sequence of 75 ciphers that, to date, have only been seen across Lazarus tools (including Contopee and Brambul) and WannaCry variants.

While these findings do not indicate a definite link between Lazarus and WannaCry, we believe there are sufficient connections to warrant further investigation and will continue to share further details of our research as the case unfolds. (Source: Symantec, “What You Need to Know about WannaCry Ransomware.”)

Matt Suiche, who has provided outstanding analysis and commentary on all things WannaCry, has also independently confirmed the similarities in the source code. His full post on attribution is available here.


This isn’t the first time the Lazarus Group has been found to reuse code. BAE Systems earlier linked the hack of Sony Pictures in 2014 with a Bangladesh bank heist of $81 million in 2016, concluding in this report that the malicious code used across the attacks was so similar that both attacks were most likely the work of the same hacking group.

So Did Lazarus or Didn’t Lazarus?

Though too early to say definitively that it was the Lazarus Group and North Korea, it would make a lot of sense. Lazarus Group has a documented history of committing cybercrimes against financial institutions with the primary goal of stealing money, rather than for purposes of espionage or gaining strategic military advantage like other nation state actors. And circumstantial evidence, as mentioned above, is beginning to emerge.

Frankly, with the rather convenient hard-coded kill switch included, the sloppy execution of launching an attack on a Friday afternoon, and the rushed patches to subsequent variants emerging over the past few days, the attack overall just seems to lack the style and sophistication we’ve come to expect from other hacking groups such as Fancy Bear (APT 28).

If attribution to the Lazarus Group is eventually validated, it would be the first nation state-developed ransomware attack that I’m aware of and, likely, the first time that a hostile nation has also leveraged offensive capabilities from the Equation Group release. Whoever did this, they’ve certainly proven themselves to be ingenious and insidious cybercriminals in terms of the development of their attack vector—if rather incompetent at making money. At least, this time.



What is normal? Organizations use machine learning to ferret out data anomalies

Category : Gigamon

Over time, the technology can automatically raise a red flag about suspicious activity.

Machine learning has been a staple of our consumer-driven economy for some time now.

When you buy something on Amazon or watch something on Netflix or even pick up groceries at your local supermarket, the data generated by that transaction is invariably collected, stored, analyzed and acted upon.

Machines, no surprise, are perfectly suited to digesting mountains of data, observing our patterns of consumption, and creating profiles of our behaviors that help companies better market their goods and services to us.

Yet it’s only been in the past few years that machine learning, aka data mining, aka artificial intelligence, has been brought to bear on helping companies defend their business networks.

This is an interview with Shehzad Merchant, chief technology officer at Gigamon, at the RSA 2017 cybersecurity conference. Gigamon is a Silicon Valley-based supplier of network visibility and traffic monitoring technology. A few takeaways:

Machines vs. humans. There is so much data flowing into business networks that figuring out what's legit vs. malicious is a daunting task. This trend is unfolding even as the volume of breach attempts remains on a steadily rising curve. It turns out that cyber criminals, too, are using machine learning to boost their attacks. Think about everything arriving in the inboxes of an organization with 500 or 5,000 employees, add in all the data repositories and business application repositories, plus all support services; that's where attackers are probing and stealing.

Understanding legitimate behaviors. To catch up on the defensive side, companies can turn to machine learning, as well. Machines are suited to assembling detailed profiles of how employees, partners and third-party vendors normally access and use data on a daily basis. It’s not much different than how Amazon, Google and Facebook profile consumers’ online behaviors for commercial purposes. “You have to apply machine learning technologies because there is so much data to assimilate,” Merchant says.

Identifying suspicious behaviors. The flip side is that machines can be assigned to do the first-level triaging—seeking out abnormal behaviors. Given the volume of data handling that goes on in a normal workday, no team of humans, much less an individual security analyst, is physically capable of keeping pace. But machines can learn over time how to automatically flag events like a massive file transfer taking place at an unusual time of day and being executed by a party that normally has nothing to do with such transfers. The machine can raise a red flag—and the security analyst can be dispatched to follow up.
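First-level triage of the kind described, flagging transfers that are either statistical outliers in size or that happen at unusual hours, can be sketched in a few lines. This is a toy baseline, not a product: real systems use far richer behavioral models, and the thresholds here are illustrative assumptions.

```python
from statistics import mean, stdev

def flag_anomalies(transfers, z_threshold=3.0, work_hours=range(7, 20)):
    """Flag file transfers whose size is a statistical outlier relative to
    the observed baseline, or which occur outside normal working hours.

    `transfers` is a list of (hour_of_day, size_bytes) tuples.
    Returns the subset worth escalating to a human analyst.
    """
    sizes = [size for _, size in transfers]
    mu, sigma = mean(sizes), stdev(sizes)
    flagged = []
    for hour, size in transfers:
        z = (size - mu) / sigma if sigma else 0.0
        # Escalate if the transfer is far larger than the baseline,
        # or if it happens at an hour this population never works.
        if z > z_threshold or hour not in work_hours:
            flagged.append((hour, size))
    return flagged
```

The machine does the tireless part, scoring every event against the learned baseline; only the flagged residue reaches the analyst, which is exactly the division of labor the interview argues for.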

“We’ve got to level the playing field … today, it’s machine versus humans,” Merchant says. “Organizations have to throw technologies, like machine learning into the mix, to be able to surface these threats and anomalies, so that we take out the bottlenecks.”


WannaCry: What We Know So Far . . .

Category : Gigamon

An unprecedented cyber-attack by a ransomware variant known as WannaCry—which encrypts a computer’s files and demands payment to unlock them—has propagated at a speed never before seen by cybersecurity researchers. To date, more than 75,000 systems across 100+ countries have been reported infected, with a major toll taken on operational services at targets such as Telefonica in Spain, the National Health Service (NHS) in the UK, and FedEx in the US (with European countries, including Russia, among the worst hit).

Though the spread seems to be slowing, by no means has it stopped. Based on data provided by MalwareTech, The New York Times has compiled an animated map showing just how fast WannaCry has disseminated, and it's certainly an eye opener for those who regard cybersecurity as a nuisance problem rather than a potential enterprise-level risk.

One of the first organizations to report the attack was the NHS in the UK, where security teams continue to work around the clock to restore the systems of some 45 affected hospitals in England and Scotland. Incredibly dangerous, the attack put patient safety at risk as it left some hospitals and doctors unable to access patient data and led to the cancellation of operations and medical appointments.

The Twitter account for NHS's East Kent Hospitals sent a message to all staff indicating that the ransomware may have been attached to an email with "Clinical Results" in the subject. If this report is true, it appears that hospitals were specifically targeted.

While it’s likely that the NHS received the majority of the early press on Friday due to the time of day the attack took hold (early morning in the UK), WannaCry spread fast and not only to other organizations around the world, but also to devices and systems other than standard employee workstations. German rail operator Deutsche Bahn, for example, said that while the attack did not disrupt train services, it did infect its systems, including station display monitors.

A display at Chemnitz station in eastern Germany shows a ransom demand on Friday night. Photograph: P. Goetzelt/AFP/Getty Images

How does WannaCry work and why is this attack unique?

While this ransomware variant is actually rather run-of-the-mill, the way it is infecting systems and spreading so quickly is unique. Most ransomware relies on a user to click a malicious link or file attachment in a phishing email to infect a computer. While cybercriminals can spam out thousands or even millions of phishing emails a day, a successful infection still relies on an unwitting end user to become an unwilling accomplice and trigger the attack. While this is still an incredibly effective technique, it limits the overall ability for most ransomware to spread as each individual target user needs to fall for the trick. Not so with WannaCry.

It appears that once a single instance of WannaCry infects a PC behind the firewall, it can move laterally within networks and self-propagate to other systems. Initial analysis by security researchers indicates that it does this by scanning for systems with ports 139 and 445 open and listening for inbound connections, then heavily scanning over TCP port 445 (Server Message Block/SMB), which allows the malware to spread on its own in a manner similar to a worm. The worm then loops through every RDP session on a system to execute the ransomware as that user, targeting admin accounts. It also installs the DOUBLEPULSAR backdoor and corrupts shadow volumes to make recovery more difficult.
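From the defender's side, the immediate question is which hosts expose the ports this worm abuses. A minimal, read-only exposure audit might look like the following (it only attempts TCP connections and reads nothing; run it only against hosts you are authorized to test):

```python
import socket

def smb_ports_open(host, ports=(139, 445), timeout=0.5):
    """Return the subset of SMB-related TCP ports that accept connections
    on `host` -- a quick audit of the propagation vector WannaCry abuses."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful TCP handshake.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Hosts that report 139 or 445 open to untrusted networks are candidates for firewalling or patching before a worm finds them first.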

WannaCry is able to do this only where the PC accepts such inbound connections and has not been updated with the critical MS17-010 security patch from Microsoft, issued on March 14, which addresses vulnerabilities in SMBv1 (Microsoft doesn't mention SMBv2). Windows 10 machines were not subject to the vulnerability addressed by this patch and are, therefore, not at risk of the malware propagating via this vector.

Additionally, Talos has observed WannaCry exploiting DOUBLEPULSAR, a persistent backdoor generally used to access and execute code on previously compromised systems, which was documented in the offensive exploitation framework released as part of the Shadow Brokers cache.

What happens to infected systems?

Once the ransomware infects a system, it starts encrypting everything it can find. The file tasksche.exe searches both internal drives and external or network drives mapped to a letter such as "C:\" or "D:\", so mapped network shares can also be affected. When it finds files of interest, it encrypts them, protecting the keys with 2048-bit RSA encryption. How strong is that? Well . . . a DigiCert post calculated that it would take 1.5 million years with a current, standard desktop machine to crack it.
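To put the DigiCert figure in perspective, some raw key-space arithmetic helps. (Note that RSA is actually attacked by factoring the modulus, not by enumerating keys; the guess rate below is a hypothetical figure chosen purely for illustration.)

```python
def brute_force_years(key_bits, keys_per_second=1e9):
    """Years to exhaust a key space of 2**key_bits at a given guess rate."""
    seconds = (2 ** key_bits) / keys_per_second
    return seconds / (60 * 60 * 24 * 365)

# Even a modest 64-bit key space takes centuries at a billion guesses per
# second, and every additional bit doubles the work.
```

At 128 bits the result already dwarfs the age of the universe, which is why attackers extort victims for keys rather than attack the cryptography.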

The user then receives an on-screen notification demanding $300 in Bitcoin to release the files and restore the system; if the ransom is not paid, the files will be rendered permanently inaccessible or deleted outright. Some reports indicate that if the user doesn't pay within six hours the ransom rises to $600, while others indicate the user has three days to pay. This is incredibly insidious social engineering, as it creates a sense of urgency to just pay the ransom or else face a rising cost or the loss of everything. The malware also instills a sense of hope by decrypting a small selection of files, attempting to demonstrate that if the user complies with the extortion and pays the ransom, they will regain access to their remaining files.

It should be noted that the criminals behind the attacks are under no obligation whatsoever to provide decryption keys, so paying the ransom may not actually restore access to the system and files. What's more, paying a ransom not only marks the user as a potential target for future extortion attempts, but also funds the very criminals who perpetrated the crime as they develop new and more sophisticated attacks.

Why the spread is slowing down: the kill switch

Talos noted early in the investigation of the attack that WannaCry was sending requests to a single unregistered domain. The domain name is likely human-generated, as its characters largely consist of keys from the top row of the keyboard. Such patterns are generally the result of someone "mashing" the keyboard and are easily recognizable by security researchers.
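That "top row of the keyboard" observation can be roughed out in code. The heuristic below is my own illustration, not a tool researchers necessarily use: it measures what fraction of a domain label's letters sit on the keyboard's top row.

```python
TOP_ROW = set("qwertyuiop")

def top_row_fraction(label):
    """Fraction of a domain label's letters that sit on the keyboard's top row."""
    letters = [c for c in label.lower() if c.isalpha()]
    if not letters:
        return 0.0
    return sum(c in TOP_ROW for c in letters) / len(letters)

# A mashed label like "uqwpoeiruty" scores near 1.0, while ordinary English
# words land much lower, so a high score is one weak signal of keyboard mashing.
```

On its own this would produce false positives ("typewriter" is all top row), which is why it is only one signal among many.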

It appears that if WannaCry can communicate with this domain, it stops execution and does not infect the system. Because the domain was unregistered, each infection's communication attempt failed, so the malware continued to execute and infect the PC. Once the domain was registered and the ransomware could reach it, the infections stopped: the domain acts as a "kill switch." This is highly unusual and appears to have been hardcoded into the malware by its creator in case he, she, or they wanted to halt the spread of the attack.
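The kill-switch decision itself reduces to a few lines of logic. A minimal sketch, with the lookup made injectable for testing (the real malware's implementation of course differs):

```python
import socket

def should_execute(domain, resolve=socket.gethostbyname):
    """Kill-switch logic: run only if the beacon domain is unreachable."""
    try:
        resolve(domain)   # domain registered and resolving
        return False      # kill switch tripped: stop execution
    except OSError:       # lookup failed (e.g., NXDOMAIN)
        return True       # continue infecting
```

This also explains the fragility of the mechanism: any environment where the lookup succeeds, including sandboxes that answer all DNS queries, makes the sample go quiet.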

A wily security researcher, @malwaretechblog, with the help of Darien Huss from security firm Proofpoint, found and activated the kill switch by simply registering the domain.

“I saw it wasn’t registered and thought, ‘I think I’ll have that’,” said MalwareTech. The purchase cost him $10.69. Immediately, the domain name was registering thousands of connections every second.

While it seems almost anti-climactic, the kill switch appears to have worked and is slowing the spread of infections. Unfortunately, we can likely expect that copycats are already working on variations of the attack, and with bad guys everywhere learning a great deal from this incident, new variants and modifications will launch soon.

When the Conficker worm was running rampant and building a huge botnet back in 2008, a security researcher similarly found that it was calling home to randomly generated domain names for command-and-control instructions. Defenders were able to limit Conficker's ability to execute commands by registering the domain names themselves. Variants A and B of Conficker downloaded daily from any of 250 pseudorandom domains; while registering 250 domains a day was somewhat expensive and time consuming, it was still possible for the defenders, collectively known as the Conficker Cabal, to keep ahead of the attacker. That strategy fell apart when the attacker released Variant C, which downloaded daily from 500 of 50,000 pseudorandom domains. We can likely expect future variants of WannaCry, and copycats, to employ a similar approach so that discovering and activating a simple kill switch is never effective again.
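A toy version of such a domain generation algorithm (DGA) shows why the defenders' registration strategy stops scaling. This is an illustrative sketch, not Conficker's actual algorithm: both sides can regenerate the same daily list from the date, but defenders must pre-register every candidate while the attacker needs only one.

```python
import hashlib
import random

def daily_domains(date, pick=500, pool_size=50000, tld=".example"):
    """Toy DGA: deterministically choose `pick` of `pool_size` candidate
    domains for a given date. Infected hosts and their operator derive the
    same list; defenders would have to register all of them in advance."""
    rng = random.Random(date.toordinal())          # seeded by the date
    chosen = rng.sample(range(pool_size), pick)    # today's candidate indices
    names = []
    for i in chosen:
        digest = hashlib.sha256(f"{date.isoformat()}:{i}".encode()).hexdigest()
        names.append(digest[:12] + tld)
    return names
```

At 500 domains a day from a 50,000-name pool, pre-registration becomes economically hopeless, which is exactly the Variant C escalation described above.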

So now what?

Mitigation Recommendations

  1. Ensure all pre-Windows 10 PCs are fully patched. Patch the Windows 10 machines too, just to be safe!
  2. Ensure Microsoft bulletin MS17-010 has been applied.
  3. Immediately block inbound traffic from the public Internet to SMB ports 139 and 445.
  4. Block all known TOR exit node IP addresses at the firewall. These are generally available from security intelligence feeds.
  5. If for some reason you can't patch a device (a medical device or other closed-architecture system), make sure to disable SMBv1.
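As a quick audit of the SMB exposure described above, a short connection check can confirm whether those ports answer from a given vantage point. A minimal sketch; run it only against hosts you own or are authorized to test.

```python
import socket

def reachable_ports(host, ports=(139, 445), timeout=1.0):
    """Return the subset of `ports` that accept TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports
```

Running this from outside the perimeter against your public address ranges should return an empty list if the blocking rules are in place.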

Prevention Recommendations

  1. Ensure you are running the most up-to-date operating system on all your devices, not just PCs.
  2. Have a formal patch management system in place to ensure that all vendor patches are applied to all endpoints in a timely manner.
  3. Install some form of endpoint protection for anti-malware on all your systems and ensure you apply regular updates.
  4. Simply having updated firewalls and endpoint protection is no longer enough; this attack moved laterally behind the firewall. End-to-end network visibility and security tools that can detect, prevent, and mitigate threats throughout your physical, virtual, and cloud networks are now mandatory.
  5. Ensure not only that you have business continuity and disaster recovery plans in place, but that they are updated and tested regularly.
  6. Backup everything! And ensure that you have offline backups as attackers also frequently target backup systems to increase the odds that you will pay the ransom.
  7. Train your users! Employees should receive both security awareness training, to help them identify and report threats and protect themselves, and traditional security training, to help them know what is expected of them and how to comply with organizational security policies and procedures.



The role of AI in cyber security

Category : Gigamon

Hyper-connected workplaces and the growth of cloud and mobile technologies have sparked a chain reaction when it comes to security risks. The vast volume of connected devices feeding into networks provides a dream scenario for cyber criminals: new and plentiful access points to target. Further, security on these access points is often deficient.

For businesses, the desire to leverage IoT is tempered by the latest mega breach or DDoS attack creating splashy headlines and causing concern.

However, the convenience and automation IoT affords means it isn’t an ephemeral trend. Businesses need to look to new technologies, like AI, to effectively protect their customers as they broaden their perimeter.

The question becomes, how can enterprises work with, and not against, artificial intelligence?

The emergence of AI in cyber security

Machine learning and artificial intelligence (AI) are being applied more broadly across industries and applications than ever before as computing power, data collection and storage capabilities increase. This vast trove of data is valuable fodder for AI, which can process and analyse everything captured to understand new trends and details.

For cyber security, this means new exploits and weaknesses can quickly be identified and analysed to help mitigate further attacks. It can also take some of the pressure off human security "colleagues," who are alerted when action is needed but can otherwise spend their time on more creative, fruitful endeavours.

A useful analogy is to think about the best security professional in your organisation. If you use this star employee to train your machine learning and artificial intelligence programs, the AI will be as smart as your star employee.

Now, if you take the time to train your machine learning and artificial intelligence programs with your 10 best employees, the outcome will be a solution that is as smart as your 10 best employees put together. And AI never takes a sick day.

It becomes a game of scale and leveraging these new tools can give enterprises the upper hand.

AI under attack

AI is by no means a cyber security panacea. When pitted directly against a human opponent, with clear circumvention goals, AI can be defeated. This doesn’t mean we shouldn’t use AI, it means we should understand its limitations.

AI cannot be left to its own devices. It needs human interaction ("training," in AI-speak) to continue to learn and improve, correcting for false positives and cyber-criminal innovations.

This hybrid approach already has proven itself to be a valuable asset in IT departments because it works efficiently alongside threat researchers.

Instead of highly talented personnel spending time on repetitive and mundane tasks, the machine takes away this burden and allows them to get on with the more challenging task of finding new and complex threats.

Predictive analytics will build on this by giving security teams the predictive insight needed to stop threats before they become an issue as opposed to reacting to a problem. This approach is not only more cost effective in terms of resources, but also is favourable for the business due to the huge reputational and financial damage a breach can cause in the long term.

Benefits of machine learning

Alongside AI, machine learning is becoming a vital tool in a threat hunter’s tool box. There is no doubt machine learning has become more sophisticated in the past couple of years and will continue to do so as its learnings are compounded and computing power increases.

Organisations face millions of threats each day, so it would be impossible for threat researchers to analyse and categorise them all. As each threat is analysed by the machine, it learns and improves. This not only helps protect organisations now, but compiles this valuable data for use in predictive analytics.
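The categorisation step can be pictured with a toy supervised model. Below is a minimal nearest-centroid sketch; the two-dimensional feature vectors (think connection rate and payload entropy) and the labels are hypothetical, chosen only to illustrate how analysed threats become training data.

```python
import math

def centroid(vectors):
    """Mean point of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Label of the nearest centroid by Euclidean distance."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))
```

Each newly analysed sample refines the centroids, which is the "learns and improves" loop in miniature; real systems use far richer features and models, but the shape of the process is the same.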

However, merely staying ahead of hackers and the threats they pose is not enough to protect organisations; new vulnerabilities and the new devices coming online will make this more and more difficult.

The continued and enhanced standardisation on data formats and communication standards is crucial to this effort. Once data flows and formats are clearly defined, not just technically but also semantically, machine learning systems will be far better placed to effectively police the operations of such systems.

The industry needs to work towards finding the sweet-spot between unsupervised and supervised machine learning so that we can fully benefit from our knowledge of current threat types and vectors and combine that with the ability to detect new attacks and uncover new vulnerabilities.

Much like AI, machine learning in threat hunting must be guided by humans. Human researchers are able to look beyond the anomalies that the machine may pick up and put context around the security situation to decide if a suspected attack is truly taking place.
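The kind of anomaly a machine "picks up" is often just a statistical outlier that still needs human context. A minimal z-score detector illustrates the idea, with the threshold an illustrative convention:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Indices of points more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []   # no spread, nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]
```

A spike flagged this way might be an attack, or a backup job, or a marketing campaign going live; deciding which is exactly the contextual judgment the human researcher supplies.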

The future

For the security industry to get the most out of AI, it needs to recognise what machines do best and what people do best. Advances in AI can provide new tools for threat hunters, helping them protect new devices and networks even before a threat is classified by a human researcher.

Machine learning techniques such as unsupervised learning and continuous retraining can keep us ahead of the cyber criminals. However, hackers aren’t resting on their laurels. Let’s give our threat researchers the time to creatively think about the next attack vector while enhancing their abilities with machines.



Visibility as Foundation for a New Security Model

Category : Gigamon

Defining a “Security 1.0” model is not difficult: It includes the familiar safeguards found in every modern enterprise including firewalls, anti-virus software, perimeter networks, data leakage protection, and so on. The goals of this 1.0 approach have been ambitious, focusing on prevention of attacks. The corresponding implementation, however, has not worked: The offense is far ahead of the defense.

Speaking this week at the Gigamon Cybersecurity Forum in Washington, Shehzad Merchant, CTO of Gigamon, argued for an improved model – one called “Security 2.0.” Motivation for this model includes addressing the acceleration of technology innovation and dealing with the fact that advanced persistent attacks have become democratized. Anyone today can rent or buy advanced attacks.

Merchant’s model includes four pillars: The first involves Prevention using the familiar set of safeguards found in Security 1.0. The second involves Detection, which allows for the building of context using data collection and machine-learning-based analysis. The third involves Prediction, which allows for triangulation of intent using artificial intelligence and cognitive solutions. Finally, the fourth pillar involves Containment, where proper remediation and security action are taken to reduce risk.

For these pillars to work, Merchant recommends use of an underlying security delivery platform to support visibility into real-time enterprise activity. This might seem obvious, but the velocity of change in technology can make visibility a formidable goal. “Processing an Ethernet frame on a 10 Gbps network,” Merchant explained, “requires that multiple security decisions be made in the time it takes – several nanoseconds – for light to travel just a few feet.”
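The arithmetic behind Merchant's comparison is easy to sketch: at 10 Gbps a single bit occupies the wire for just 0.1 ns, and light in vacuum covers roughly a foot per nanosecond. A back-of-the-envelope calculation (figures illustrative):

```python
def frame_time_ns(frame_bytes, link_bps):
    """Time one frame occupies the wire, in nanoseconds."""
    return frame_bytes * 8 / link_bps * 1e9

LIGHT_FT_PER_NS = 0.98  # light in vacuum covers roughly one foot per nanosecond

# A minimum-size 64-byte Ethernet frame at 10 Gbps lasts about 51 ns on the
# wire, so once that window is shared among multiple security checks, each
# decision gets only a handful of nanoseconds.
```

At 100 Gbps the window shrinks by another factor of ten, which is why inline security decisions at line rate are such a formidable engineering problem.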

The Gigamon CTO explained that the GigaSECURE Security Delivery Platform was created to feed metadata, rich intelligence, and full information capture into the pillars of the Security 2.0 model. The platform was designed to breathe context into the Prevention-Prediction-Containment cycle, because without situational awareness, the wrong security decision might be made – which can bode poorly for that Ethernet frame mentioned above.

Certainly, there are significant technical and operational challenges associated with the practical deployment of real-time security protections along the lines of the Gigamon offer. Enterprise networks are typically a tangled mess of legacy, existing, and new components, so any clean, rational 2.0 model based on a set of logical pillars must still be carefully tailored to fit the specifics of a local environment. (If only every enterprise could be a greenfield.)

Nevertheless, the shift from 1.0 prevention to 2.0 situationally aware protection using an underlying data analytic framework seems sensible. Even in truly complex environments where this might seem quite challenging, the effort to adjust your local cyber security methodology toward this proposed Security 2.0 model seems to have great upside potential with little downside risk.
