Category Archives: Gigamon


Protecting Critical Infrastructure Is … Well … Critical

Category : Gigamon

In our day-to-day lives, we rely on a well-running infrastructure. Whether that infrastructure is transportation, power plants or water utilities, we expect seamless functionality – there’s a reason we call them “critical.”

Today, however, we no longer live in an analog world. Everything, including infrastructure, is increasingly being connected digitally, and with digitization comes the risk of greater vulnerability and the potential for online attacks to result in real, physical tragedy. Dams, communications infrastructure, nuclear reactors … these critical infrastructure sectors consist of assets, systems and networks that, if impacted, could cripple the economy and put public health, safety and national security at risk.

Thankfully, the Department of Homeland Security (DHS) has been thinking about these vulnerabilities and has identified 16 critical infrastructure sectors as vital to the United States’ economy. In fact, last week on October 23, based on joint analytic efforts between the DHS and FBI, the US-CERT issued a technical advisory that warned of advanced persistent threat (APT) activity targeting energy and other critical infrastructure sectors.

It should be a no-brainer that every country needs to take special steps to safeguard its critical infrastructure, but if you still need convincing, I suggest watching the absorbing documentary “Zero Days” about the Stuxnet malware that was famously used to destroy centrifuges in Iranian nuclear facilities.

A Whole Other Ballgame

Protecting critical infrastructure is a different ballgame compared to protecting data center assets. Several characteristics stand out:

  • Remote locations. Unlike with data centers, many elements of critical infrastructure are typically distributed across a large geographical region. Many of these locations are unmanned or at best, have very few personnel.
  • Long equipment life span. Most active infrastructure elements in data centers have a useful life of about five years. By contrast, the lifetime of critical infrastructure equipment is extremely long, often spanning 10 to 20 years or more. The immediate implication is that cybersecurity defense postures must consider the impact of legacy equipment running several vendors’ outdated software.
  • Government regulation. Critical infrastructure is typically regulated by a government body to ensure compliance, failing which, drastic fines are levied on the critical infrastructure operator or owner.
  • Legacy technologies. Many critical infrastructure elements communicate over legacy technologies such as Supervisory Control and Data Acquisition (SCADA) – a method developed to standardize universal access to a variety of local control modules in industrial control systems (ICS), which are at the heart of critical infrastructure.
  • Unencrypted communications. Much to an attacker’s delight, most communications over a SCADA infrastructure are unencrypted. Moreover, the nature of SCADA communications also requires timely response and interaction between the communicating entities, making such equipment soft targets for denial-of-service (DoS) attacks.
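The openness of these protocols is easy to demonstrate. In the illustrative sketch below, the port number and header layout come from the Modbus/TCP specification, but the function and sample frame are invented for this example; it shows how a passive observer could recognize unencrypted SCADA traffic from nothing more than its 7-byte MBAP header:

```python
# Minimal sketch: recognizing unencrypted Modbus/TCP frames by their
# MBAP header. Port 502 and the protocol-ID field come from the Modbus
# spec; the function name and sample frame are illustrative only.

MODBUS_TCP_PORT = 502

def looks_like_modbus(dst_port: int, payload: bytes) -> bool:
    """Heuristic check for a plaintext Modbus/TCP frame.

    The MBAP header is 7 bytes: transaction id (2), protocol id (2,
    always 0x0000 for Modbus), length (2) and unit id (1). Because
    nothing is encrypted, these fields are directly readable on the wire.
    """
    if dst_port != MODBUS_TCP_PORT or len(payload) < 8:
        return False
    protocol_id = int.from_bytes(payload[2:4], "big")
    length = int.from_bytes(payload[4:6], "big")
    # The length field counts unit id + PDU, so it must match what remains.
    return protocol_id == 0 and length == len(payload) - 6

# Example: a "read holding registers" request (function code 0x03)
frame = bytes([0x00, 0x01,   # transaction id
               0x00, 0x00,   # protocol id (always 0)
               0x00, 0x06,   # length
               0x11,         # unit id
               0x03,         # function code
               0x00, 0x6B, 0x00, 0x03])  # start address, register count
print(looks_like_modbus(502, frame))  # True
```

A real monitoring deployment would of course use deep packet inspection rather than a port heuristic, but the point stands: with no encryption, every field of the control protocol is visible to anyone on the path.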

These characteristics combined with the criticality of the sector have made such infrastructure elements high-value targets for threat actors. Unlike a data center breach that leads to valuable data loss, a similar critical infrastructure breach could have a devastating impact on lives, health or economies. Indeed, research over the last few years in both academia and industry has shown potential risks to critical infrastructure from malware and ransomware attacks, malicious payloads and other threat vectors.

What Can Be Done to Protect Critical Infrastructure?

Fortunately, awareness on this topic has been on the rise. Earlier this year, President Trump signed an Executive Order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure and the National Institute of Standards and Technology (NIST) has also developed a framework for improving critical infrastructure cybersecurity.

If you or your organization is responsible for some part of critical infrastructure, there are three steps that you can take as part of developing your risk management strategy:

  • Close the visibility gaps: Put simply, it is essential to have continuous network visibility across both information technology (IT) and operational technology (OT) operations.
  • Close the budget gaps: With the right visibility platform feeding multiple security and monitoring tools at once, you should see a significant boost in ROI.
  • Close the protection gaps: If your current operational processes are getting in the way of upgrades and new cybersecurity initiatives, consider using innovations like inline bypass to speed deployment of new security tools or software.

For a more detailed explanation of the above steps, please read the Gigamon Point of View “Aligning Agency Cybersecurity Practices with the Cybersecurity Framework.”

Already, several critical infrastructure sectors have deployed Gigamon visibility solutions to achieve these protections. For example, many leading public power utilities have used the GigaSECURE Security Delivery Platform to develop a visibility strategy to detect grid tampering, obtain insight right down to substations and gateway nodes, and extract both network traffic and vital metadata to feed their central Security Operations Centers (SOCs) and achieve compliance with NERC CIP (the North American Electric Reliability Corporation’s Critical Infrastructure Protection standards).


Author: Ananda Rajagopal


What Is Network Visibility?

Category : Gigamon

At a time when network security and monitoring are of vital importance due to increasing volumes of data and growing cybersecurity concerns, comprehensive network visibility is a must for most businesses. Despite this growing need, many companies fall short of their network visibility goals. For example, network blind spots have become a major problem for organizations. According to a recent survey from Vanson Bourne, roughly two-thirds – 67 percent – of organizations say that network blind spots are one of the biggest challenges they face when trying to protect their data.

A full half of organizations have inadequate information to identify potential threats. Many companies – almost 40 percent of them – lack a fully deployed comprehensive program or process to pinpoint, notify and respond to a security breach.

In short, about 75 percent of organizations agree that they need to improve their network visibility to better enable network security, and that starts with understanding what it is and how it can help. 

The Basic Definition

Network visibility covers a lot of ground, but its definition is actually rather simple. The term refers to being aware of everything within and moving through your network with the help of network visibility tools. In this way, network visibility tools are used to keep a close and constant eye on network traffic, monitored applications, network performance, managed network resources and big data analytics, which, in turn, requires effective and scalable data collection, aggregation, distribution and delivery.

Network visibility, however, is not a passive function as it allows you to exert greater control over all these aspects. The more in-depth, proactive and extensive your network visibility, the more control you have over your network data, and the better you can make decisions regarding the flow and protection of that data.  
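The collection-and-aggregation step mentioned above can be pictured with a small sketch. This is not any particular product’s pipeline – just a hypothetical illustration of collapsing raw packets into the per-flow metadata records that visibility tools work with:

```python
# Hypothetical sketch of flow aggregation: grouping packets by their
# 5-tuple and summing packet/byte counts, the kind of metadata a
# visibility layer exports to monitoring tools. All data is invented.
from collections import defaultdict

def aggregate_flows(packets):
    """Group (src, dst, proto, sport, dport, size) tuples into flow records."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, proto, sport, dport, size in packets:
        key = (src, dst, proto, sport, dport)
        flows[key]["packets"] += 1
        flows[key]["bytes"] += size
    return dict(flows)

packets = [
    ("10.0.0.5", "10.0.0.9", "tcp", 51812, 443, 1400),
    ("10.0.0.5", "10.0.0.9", "tcp", 51812, 443, 640),
    ("10.0.0.7", "10.0.0.9", "udp", 40000, 53, 75),
]
flows = aggregate_flows(packets)
for key, stats in flows.items():
    print(key, stats)
```

Production visibility platforms do this at line rate in hardware and export richer records (IPFIX, for instance), but the shape of the output – far fewer flow summaries than raw packets – is what makes downstream analysis tractable.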

What You Can Do with Network Visibility

Improving network visibility has many benefits. Let’s take one example mentioned above: application monitoring. Most businesses have a host of applications they use as part of their operations. With better network visibility, application monitoring improves, allowing you to optimize overall application performance. You can filter critical application traffic to the proper tools and better track when and by whom each application is used while not overloading any of your application monitoring tools. Think about it: Why would you send non-video traffic to a video server? Or email traffic to non-email gateways? It’s simply a waste of server processing power and network bandwidth. 
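That traffic-steering idea can be sketched in a few lines. The port-to-tool mapping below is entirely hypothetical – real platforms classify applications far more richly than by destination port – but it shows the principle of sending each flow only to the tools that need it:

```python
# Illustrative sketch of application-aware traffic filtering: steer each
# flow only to the monitoring tools that care about it, so video traffic
# never burdens an email gateway. The port->tool mapping is invented.

TOOL_MAP = {
    25: ["email-gateway"],            # SMTP
    587: ["email-gateway"],           # SMTP submission
    554: ["video-recorder"],          # RTSP
    443: ["waf", "ssl-decryptor"],    # HTTPS
}

def tools_for_flow(dst_port):
    """Return the monitoring tools that should receive this flow."""
    return TOOL_MAP.get(dst_port, ["default-analyzer"])

print(tools_for_flow(25))    # ['email-gateway']
print(tools_for_flow(8080))  # ['default-analyzer']
```

The payoff is exactly the one described above: each tool sees only relevant traffic, so neither server processing power nor network bandwidth is wasted.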

Network Visibility Leads to Better Security

Perhaps the biggest appeal of improving network visibility is the boost it provides to security efforts. Better network visibility allows you to monitor network traffic for malicious behavior and potential threats more closely. For example, you can better detect when someone gains unauthorized access to the network, thereby allowing security measures to respond quickly. The same goes for detecting malware hidden within encrypted network traffic, practically a necessity today as companies increasingly use SSL/TLS for securing their communications.

No security system is perfect, however, and some security breaches may still occur. In the event of a breach, improved network visibility can accelerate the time to identify and contain the threat, reducing the time, effort and cost involved in mitigating the incident.

Network Visibility Contributes to Business Transformation

Network visibility offers benefits that go beyond improving security; in fact, it can help your company grow both now and into the future. With comprehensive network visibility, you can readily identify trends early on and see where and how your network data is increasing. This helps you plan for future growth and not be caught in a period of catch-up, which can hurt business transformation projects, such as data and big data analytics, cloud adoption and the Internet of Things (IoT). In essence, effective network visibility helps you scale your network – and your business.


No More Network Blind Spots, See Um, Secure Um

Category : Gigamon

East Coast summer nights of my childhood were thick with humidity, fireflies and unfortunately, merciless mosquitoes and biting midges. So, when a West Coast friend said she had a summertime no-see-um tale to tell, I was ready to commiserate.

My friend likes to camp – alone. Not in deep, dark, remote backcountry, but, you know, at drive-in campgrounds. Pull in, pitch a tent, camp – that’s her style. While not the most private, she likes the proximity to restrooms and even, people.

Before one adventure, she was gathering provisions at Costco when she saw a “no-see-um” tent for sale. “Well, this is exactly what I need,” she thought. No longer would she have to lower her “shades” or head to the restroom to change. She’d be free to undress in her tent, relax and fall asleep to the hum of an adjacent freeway.

Of course, we can all figure out how this story ended. After having enjoyed her newfound freedom for an evening, she returned the following morning from a visit to the loo only to realize the naked truth.

Like a Good Boy Scout, Are You Prepared?

While my friend’s false sense of security bordered on the ridiculous – okay, it was ridiculous – it speaks to the potential for misjudging cybersecurity readiness. Her problem was that she felt secure when she wasn’t – a blind spot of sorts that could have led to more than just awkward consequences.

In a way, the same holds true for enterprises that have bought innumerable security tools – perimeter firewalls, endpoint antivirus, IPSs – to keep prying eyes out. They, too, often have a false sense of security. Unlike my friend, it’s not that they don’t understand how these tools work; rather, it’s that they don’t understand that these tools cannot provide complete network protection.

There are simply too many bad guys and too little time to detect and prevent all cyberattacks. Not only is malware everywhere – for example, zero-day exploits and command-and-control infrastructures are available for purchase at a moment’s notice by anyone with a computer and the desire to wreak havoc – but with data flying across networks at increasing speeds and volumes, it’s more and more difficult for enterprises to do any intelligent analysis to uncover threats and prevent attacks from propagating across core systems.

Detecting compromises is hard. It requires monitoring a series of activities over time and security tools only have visibility into a certain set of activities – most cannot see and comprehend the entire kill chain. This incomplete view is more than problematic – it’s dangerous.

In fact, according to 67 percent of respondents to a new Vanson Bourne survey, “Hide and Seek: Cybersecurity vs. the Cloud,” network blind spots are a major obstacle to data protection. The survey, which polled IT and security decision-makers on network visibility and cloud security preparedness, also revealed that 43 percent of respondents lack complete visibility into all data traversing their networks and half lack adequate information to identify threats. By all counts, such data blindness could lead to serious security implications – not only within enterprise environments, but also in the cloud, where 56 percent of respondents are moving critical, proprietary corporate information and 47 percent are moving personally identifiable information.

See the Forest and the Trees

Sometimes we apply an available tool because it sounds like it’ll do the job – ahem, my dear friend and her no-see-um tent – but fully understanding the purpose and assessing the efficacy of your security tools isn’t a minor detail to be overlooked. Enterprises that have been buying more tools to address the security problem are beginning to question whether they are getting the right return on their investments, especially when they have no means to measure how secure they are. To further complicate matters, more tools often increase the complexity of security architectures, which can exacerbate the data blindness issue.

So, what can be done? For sure, preventative solutions shouldn’t go away – they play a critical role in basic security hygiene and protecting against known threats – but they must be augmented with solutions for better detection, prediction and response in a way that doesn’t create more blind spots. In other words, enterprises need a new approach, one founded on greater visibility and control of network traffic, that increases the speed and efficacy of existing security tools and allows them to say, “Okay, this is where my investments are going and these are the gaps I need to address to become more secure – or even to identify whether becoming more secure is possible at all.”

If you’re unsure how secure your network is, maybe start with a few simple questions:

  • Can you see into all data across your network? Or does some data remain hidden due to silos between network and security operations teams?
  • Are your security tools able to scale for faster speeds and increased data volume? Without diminishing their performance?
  • What about your cloud deployments – are they being used securely? Is there clear ownership of cloud security?


Author: Erin O’Malley


If Money Can’t Buy Happiness, Can It Buy Security?

Category : Gigamon

Gigamon has just published the results of a recent Vanson Bourne survey that polled IT and security decision-makers from the U.S., U.K., France and Germany on their cloud security preparedness and network visibility issues. Though it covers cybersecurity, cloud and GDPR trends, at its heart, the survey tries to answer the question: What makes our networks insecure?

Even with an abundance of security tools at their disposal, companies remain vulnerable to compromise.

New cyber threats make everyone insecure. To restore our own perception of safety, the tendency is to buy new security tools. In fact, many of those surveyed plan to increase cybersecurity spend by 36 percent over the next three years. However, at the same time, 70 percent of respondents intuitively recognize that more money toward more tools doesn’t necessarily mean better security against new cyber threats.

What Else Can We Do?

What is preventing us from securing our networks? In its research, Vanson Bourne identified three key factors that are putting networks at risk:

  1. Hidden data: A large amount of network data remains hidden due to segmentation of data and tools between NetOps and SecOps.
  2. Too much data: The increasing speed and growth of network traffic is stressing monitoring and security tools.
  3. Lack of cloud security: Organizations are migrating high-value information to the cloud, where security is limited and application data is difficult to access.

Taken together, these factors result in data blindness: the inability to see, understand and secure network traffic. A by-product of this is an inability to adapt our networks, change our strategies and anticipate threats. For instance, the survey reveals that lack of visibility into network traffic is also making GDPR compliance more difficult by preventing enterprises from developing a robust GDPR strategy that maps to a dedicated budget.

Fighting Data Blindness with Network Visibility

While survey respondents aren’t necessarily convinced that buying more and more security tools is the answer, they aren’t sure what is – and they must continue to do something to guard against increasing cybersecurity threats and the risk of data loss. This doesn’t mean that existing security tools aren’t capable, but rather that they may not have visibility into the data they need to do their jobs as quickly and efficiently as possible.

Take a fleet of Ferraris. These are high-performing cars, but only if they are provided the right fuel. No fuel, no go. It’s the same for security tools. There are incredible products on the market today, but without visibility into the data they need, they may underperform.

The Vanson Bourne survey results confirm that it is imperative for enterprises to adopt a platform that provides greater visibility into their network traffic, and one that’s integrated with their security tools for increased speed and effectiveness. One like GigaSECURE, the industry’s first Security Delivery Platform that can help organizations see what matters in their enterprise and beyond for better data protection.

Read the Vanson Bourne survey and our analysis. I hope that it will help you consider how these factors are influencing your own network security and cloud strategy. See What Matters.™


5 Keys to Quick and Effective Identity Verification Service Deployment

Category : Gigamon

ID fraud is a critical issue for MNOs (Mobile Network Operators); there are approximately 200 types of fraud, and 35% of all mobile fraud comes from subscriptions. It’s an issue that cannot be ignored; the cost is too great for many MNOs to bear. Furthermore, in addition to damaging profits, it damages consumers as well, thanks to the inhibitive effect fraud has on innovation. How can we innovate successfully if we are continually forced to divert significant funds and resources towards mitigating fraudulent activity?

As we’ve discussed in a previous post, there are three overarching reasons to care about the problem:

  • Revenue: the total annual cost of identity fraud globally is €40 billion
  • Regulation: financial services on mobile are growing. MNOs must meet KYC regulations or face heavy fines
  • Reputation: identity fraud victims will abandon networks they no longer trust to keep them secure

But how can we counteract all this fraud? The answer lies in the deployment of trusted and tested identity verification services that can perform effective checks in real time. These solutions are available and are flexible enough to meet a wide range of needs – they can provide identity document verification (to check authenticity), customer authentication (to check the holder is the correct owner) through advanced biometric checks, risk assessment (which checks a holder against control lists), ID verification reports (for audits) and automatic form filling (to speed up enrolment and limit manual input errors).

With all of this in mind, MNOs will of course want to know what the keys to success will be. Can they be confident that it’ll all work? See below for the five key factors that will affect the success and effectiveness of a roll out.

  1. A phased and systematic approach

Phasing implementation ensures the effectiveness of the solution is well tested and perfected before it’s fully initiated. With this approach, teams can draw on best practices and lessons learned, rather than migrating all stores at the same time, which can pose problems. These first stages are essential when trying to understand, analyze and document the dynamics of identity fraud on a small scale, before expanding it across all stores.

This phased and systematic approach also requires anticipation of new regulations which might be introduced during deployment; of course, this is easier said than done. It is essential though, if you want to ensure ID checks can be extended to all use cases (including enrolment for specific value-added services) as well as purchase and renewal of prepaid and postpaid SIMs. As a result of all this, MNOs will ensure they meet current legal requirements and will be prepared for the introduction of more.

  2. Strong feedback

Feedback is crucial and shouldn’t be underestimated. Store managers can share best practice techniques whenever possible. With profitability as a collective main objective, any solution that cuts or at least reduces ID fraud and related costs should be welcomed with open arms. As soon as the benefit of the ID Verification solution is realized, it will then be discussed at length internally, encouraging strong adoption across the board.

  3. A user-centric approach

When it comes to acceptance, we must keep things as simple and convenient as possible for all employees and customers. This means in-store staff will be able to focus on customer care rather than on admin.

It can be something as simple as automated form filling that provides convenience for the customer and clerk, as it speeds up enrolment and avoids needless input errors.

And if the company can prove it is handling its customers’ details securely while streamlining interaction, it will be able to build a deeper and more trusted customer relationship.

  4. Integrating with legacy infrastructures

The best identity verification services are designed to have a minimal impact on existing infrastructures. They plug seamlessly into existing IT systems and can be used (with or without scanners) on mobile devices such as smartphones and tablets. This easy and flexible integration into existing infrastructure ensures a quick deployment. In addition, adaptable reporting allows easy integration into existing back-end systems.

  5. Addressing MNOs’ acquisition strategies

On top of regular internet and mobile services, MNOs can also offer more value-added services now, such as transport ticketing and banking and payment services. For example, our own identity verification services from Gemalto offer a unique and consistent way to cover all those services at the same time, helping streamline sales processes both in-store and remotely.

So, there you have it – the five key factors for successful ID verification deployment.


Author: Didier Benkoel-Adechy


Why We Need to Think Differently about IoT Security

Category : Gigamon

Breach fatigue is a real issue today. As individual consumers and IT professionals, we risk getting de-sensitized to breach alerts and notifications given just how widespread they have become. While this is a real issue, we cannot simply let our guard down or accept the current state – especially as I believe the volume and scale of today’s breaches and their associated risks will perhaps pale in comparison to what’s to come in the internet of things (IoT) world.

It is one thing to deal with loss of information, data and privacy, as has been happening in the world of digital data. As serious as that is, the IoT world is the world of connected “things” that we rely on daily – the brakes in your car, the IV pumps alongside each hospital bed, the furnace in your home, the water filtration system that supplies water to your community – but also take for granted simply because they work without us having to worry about them. We rarely stop to think about what would happen if … and yet, with everything coming online, the real question is not if, but when. Therein lies the big challenge ahead of us.

Again, breaches and cyberattacks in the digital world are attacks on data and information. By contrast, cyberattacks in the IoT world are attacks on flesh, blood and steel – attacks that can be life-threatening. For example, ransomware that locks out access to your data takes on a whole different risk and urgency level when it is threatening to pollute your water filtration system. Compounding this is the fact that we live in a world where everything is now becoming connected, perhaps even to the point of getting ludicrous. From connected forks to connected diapers, everything is now coming online. This poses a serious challenge and an extremely difficult problem in terms of containing the cyber risk. The reasons are the following:

  1. The manufacturers of these connected “things” in many cases are not thinking about the security of these connected things and often lack the expertise to do this well. In fact, in many cases, the components and modules used for connectivity are simply leveraged from other industries, thereby propagating the risk carried by those components from one industry to another. Worse still, manufacturers may not be willing to bear the cost of adding in security since the focus of many of these “connected things” is on their functionality, not on the ability to securely connect them.
  2. Consumers of those very products are not asking or willing in many cases to pay for the additional security. Worse still, they do not know how to evaluate the security posture of these connected things or what questions to ask. This is another big problem not just at the individual consumer level, but also at the enterprise level. As an example, in the healthcare space, when making purchasing decisions on drug infusion pumps, hospitals tend to make the decision on functionality, price and certain regulatory requirements. Rarely does the information security (InfoSec) team get involved to evaluate their security posture. It is a completely different buying trajectory. In the past, when these products did not have a communication interface, that may have been fine. However, today with almost all equipment in hospitals – and in many other industries – getting a communications interface, this creates major security challenges.
  3. Software developers for connected devices come from diverse backgrounds and geographies. There is little standardization or consensus on incorporating secure coding practices into the heart of any software development, engineering course or module across the globe. In fact, any coursework on security tends to be a separate module that, in many cases, is optional in many courses and curriculums. Consequently, many developers globally today have no notion of how to build secure applications. The result is a continual proliferation of software that has been written with little to no regard to its exploitability and is seeping into the world of connected things.

These are all significant and vexing challenges with neither simple fixes nor a common understanding or agreement on the problem space itself. I won’t claim to have a solution to all of them either, but in a subsequent blog, I will outline some thoughts on how one could begin to start approaching this. In the meanwhile, I think the risk and rhetoric around cyber breaches associated with the world of connected things could perhaps take on an entirely new dimension.


Author: Shehzad Merchant


Gigamon Introduces New Integrations with Splunk and Phantom, Bringing Its Defender Lifecycle Model to Life

Category : Gigamon

Solutions Enable Security Operation Teams to Accelerate and Automate Threat Detection and Containment

Gigamon Inc., the industry leader in visibility solutions, today announced new integrations with both the Splunk and the Phantom platforms aimed at accelerating incident response, reducing the time to threat detection and automating threat mitigation. The Gigamon® IPFIX Metadata Application for Splunk, the Gigamon® Adaptive Response Application for Splunk and the Gigamon® App for Phantom utilize industry standards and open APIs for seamless integration across product offerings. These solutions empower SecOps and DevOps teams to take immediate action and effectively combat rapidly evolving and persistent cybersecurity threats.

These three Gigamon integrations bring to life the Defender Lifecycle Model, a new approach to security that addresses the increasing speed, volume and polymorphic nature of cyber threats. The model is based on a foundational layer of pervasive visibility across the four key pillars of prevention, detection, prediction and containment that are essential in a modern cybersecurity infrastructure. The model leverages the GigaSECURE® Security Delivery Platform and enables the integration of machine learning, artificial intelligence (AI) and security workflow automation to shift control away from the attacker and back to the defender.

Industry analysts have noted the business demand for cybersecurity solutions. According to Gartner, “By 2020, 60 percent of enterprise information security budgets will be allocated for rapid detection and response approaches, up from less than 20 percent in 2015.” 1

“These new Gigamon solutions for the Splunk and Phantom platforms help customers better manage the threat environment by streamlining the collection, analysis and reaction to suspicious data using the GigaSECURE® Security Delivery Platform across their cybersecurity infrastructure. This will help customers expand their use case options, and accelerate both their deployment timelines and the time-to-value,” said Ananda Rajagopal, vice president of products at Gigamon. “The integrated solutions speed threat identification and mitigation by automating what is often a time-consuming and complex manual process.”

Splunk Integration Capabilities Overview

For SecOps teams who are challenged with managing an overwhelming amount of data, incidents and potential threats, the new Gigamon integrations for Splunk deliver the visibility and control required to quickly and effectively identify critical incidents and threats and automatically take steps to mitigate them.

The Gigamon IPFIX Metadata Application for Splunk allows Splunk customers to ingest network metadata generated by the GigaSECURE Security Delivery Platform. The Gigamon Adaptive Response Application for Splunk enables security operations center (SOC) teams to automate actions on the GigaSECURE platform in response to threats detected in Splunk ES.

The solutions can be used for a variety of use cases including:

  • The isolation of an infected host trying to resolve high-entropy domain names or block rogue DNS servers.
  • The detection and mitigation of malware attacks such as the recent WannaCry ransomware cyberattack.
  • The redirection of traffic to a recorder or a specific security tool chain for advanced analysis when unusual network traffic activity is seen.
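The first use case hints at how such detection often works: algorithmically generated (DGA-style) malware domains tend to have higher Shannon entropy than human-chosen names. A minimal sketch, with an illustrative (untuned) threshold:

```python
# Hedged sketch of high-entropy domain detection. The 3.5-bit threshold
# and sample domains are illustrative; real detectors combine entropy
# with length, n-gram frequency and reputation data.
import math
from collections import Counter

def shannon_entropy(name: str) -> float:
    """Shannon entropy, in bits per character, of a domain label."""
    counts = Counter(name)
    total = len(name)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_suspicious(domain: str, threshold: float = 3.5) -> bool:
    """Flag a domain whose first label looks machine-generated."""
    label = domain.split(".")[0]
    return shannon_entropy(label) > threshold

for d in ("gigamon.com", "xj9q2kvb7w3hz1m8.net"):
    print(d, is_suspicious(d))  # gigamon.com False, the random label True
```

A flagged host can then be handed off, as described above, for isolation or traffic redirection.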

“Increasingly sophisticated threats cannot be eliminated with any single technology. There is no silver bullet for security,” said Haiyan Song, senior vice president and general manager of Security Markets at Splunk. “We created the Adaptive Response Initiative to help organizations more efficiently and flexibly combat advanced attacks with their existing security architectures. Members like Gigamon are key to the success of Adaptive Response. We look forward to working with them as the world embraces an analytics-driven approach to security.”

Phantom Integration Capabilities Overview

The Gigamon App for Phantom provides SecOps teams with automated and orchestrated security operations and case management. The application utilizes REST APIs provided by the GigaSECURE Security Delivery Platform, enabling Phantom users to trigger workflows or remediation actions on the platform in response to specific events.

Key benefits of the Gigamon App for Phantom include automating common security operations tasks through predefined playbooks, and orchestrating network threat detection and mitigation to reduce mean time to resolution.
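To illustrate the integration pattern, here is a minimal sketch of the kind of REST call a playbook action might issue. The base URL, endpoint path, payload fields and bearer-token auth are all invented for this example; the actual GigaSECURE API surface is defined in Gigamon's own documentation and will differ.

```python
import json
import urllib.request

# Hypothetical base URL; everything below it (paths, fields, auth scheme)
# is illustrative only, not the real GigaSECURE REST API.
BASE_URL = "https://gigasecure.example.internal/api/v1"

def build_redirect_request(ip: str, token: str) -> urllib.request.Request:
    """Build a POST asking the platform to steer one host's traffic to an
    out-of-band analysis tool chain (illustrative payload shape only)."""
    body = json.dumps({
        "action": "redirect",
        "source_ip": ip,
        "target": "forensic-toolchain",
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/traffic-policies",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

# A playbook step would then send the request and check the response, e.g.:
#   with urllib.request.urlopen(build_redirect_request(ip, token)) as resp:
#       succeeded = resp.status == 200
```

Separating request construction from sending keeps the sketch testable without a live endpoint; a production playbook would also handle errors, retries and certificate validation.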

“Gigamon provides an innovative, extensible, and pervasive platform for visibility that integrates with Phantom to orchestrate and automate critical security operations tasks,” said Oliver Friedrichs, CEO and founder, Phantom. “Our integrated solutions allow SecOps teams to work smarter, respond faster and strengthen their network defense postures.”


The Gigamon IPFIX Metadata Application for Splunk and the Gigamon Adaptive Response Application for Splunk are available for free download from Splunkbase. The Gigamon App for Phantom is available for free download from the Phantom Apps online community.

1Gartner, Inc., Shift Cybersecurity Investment to Detection and Response, Ayal Tirosh, Paul E. Proctor, May 3, 2017.


7 Cybersecurity Tactics to Watch in Government

Category : Gigamon

Are you up to speed on the latest government cyber developments?

Download Your Guide Now

It’s not a matter of if but when agencies will fall victim to a cyber breach. The reality is no agency is immune to attacks, whether internal or external, which is why governments must take a proactive approach to security.

According to the Government Accountability Office, the number of reported cyber incidents rose from 5,503 in fiscal year 2006 to 77,183 in fiscal year 2015. Some agencies have gone beyond the routine cyber awareness training for staff and are using innovative tactics, such as internal phishing exercises, to keep security top of mind for employees at all levels. A growing number of agencies are also using analytics and anomaly detection tools to identify and prioritize cyber risks.

Check out this new guide to explore the creative ways agencies are trying to get ahead of cybersecurity attacks.

Download now


The Case for Network Visibility

Category : Gigamon

As a security professional and a consumer, my ears perk up when I hear about security breaches in the daily news. My first thought is, “Has my personal data been compromised?” (most of us initially react with emotion and self-interest), and then I ponder how the solutions from my company, Gigamon, could be applied to prevent such breaches in the future. I also look at what security experts, analysts, reporters and other influencers around the industry are saying.

Companies that have suffered serious breaches have often invested heavily in security. Reports I’ve seen state that, in many of these instances, significant investments had been made in firewalls, intrusion prevention systems, malware protection and a host of other security solutions. These companies are doing their best – as most organizations do – to secure business-critical data and the personally identifiable information of their customers. So why is it so hard to stop these attacks? What are cybersecurity operations teams missing? How could they rethink cybersecurity to address the modern-day threat landscape?

From my perspective, a totally new and different security approach is required, one that goes beyond the traditional “buy more tools” approach, which is not only becoming cost prohibitive but also creates inefficiencies and hinders performance. All signs point to the fact that consistent and concerted attention to visibility, rather than prevention alone, is the key to robust network security.

The exponential growth of data traveling through enterprise networks means that instead of investing in more tools, organizations must invest in and implement technology that detects and analyzes data-in-motion and sends only the necessary data to the nearest available security tool, such as a firewall or intrusion prevention system, for processing. This type of approach levels the playing field and changes the equation from “man fighting against machine” (since the attacks are likely coming from well-resourced systems in use by hackers and nation states) to “machine versus machine.” This approach is eloquently explained in the Defender Lifecycle Model proposed by my friend and colleague, Shehzad Merchant, and is echoed, at least in theory, by a recent Gartner research report entitled “Use a CARTA (continuous adaptive risk and trust assessment) Strategic Approach to Embrace Digital Business Opportunities in an Era of Advanced Threats.”

The harsh new reality is that cyberattacks and data breaches are inevitable. And while there is not yet a perfect approach, it’s essential that enterprises add pervasive visibility to their traditional prevention measures – alongside detection, prediction and containment – to improve the security of their applications and of the business-critical and personal data traversing their networks.

With detection and response integrated into security operations, today’s businesses gain a strategic advantage in the fight against the massive volume of network cyber threats that exist in this brave new world. And that is a major step forward in shifting control and advantage away from malicious attackers and back to defenders.


Author: Graham Melville


Visibility is Essential

Category : Gigamon

Vanson Bourne Report: Lack of Visibility is a Leading Obstacle to Securing Enterprise and Cloud Networks

Lack of visibility is leaving organizations struggling to identify network data and investigate suspicious network activity tied to malicious attacks. Sixty-seven percent of respondents cited network blind spots as a major obstacle to security:

  • Monitoring and security tools are stressed by the increasing speed and growth of network traffic.
  • High-value information is being migrated to the cloud, where visibility is limited and application data is not easily accessible.
  • A large amount of network data remains hidden because data and tools are still segmented.

Executive Summary

Learn the root causes of lack of visibility and their impact on your network security.

Vanson Bourne Report

Get insights from surveyed global IT leaders about network security on premises and in the cloud.