Category Archives: Gigamon


Why We Need to Think Differently about IoT Security

Category : Gigamon

Breach fatigue is a real issue today. As individual consumers and IT professionals, we risk getting de-sensitized to breach alerts and notifications given just how widespread they have become. While this is a real issue, we cannot simply let our guard down or accept the current state – especially as I believe the volume and scale of today’s breaches and their associated risks will perhaps pale in comparison to what’s to come in the internet of things (IoT) world.

It is one thing to deal with loss of information, data and privacy, as has been happening in the world of digital data. As serious as that is, the IoT world is the world of connected “things” that we rely on daily – the brakes in your car, the IV pumps alongside each hospital bed, the furnace in your home, the water filtration system that supplies water to your community – but also take for granted simply because they work without us having to worry about them. We rarely stop to think about what would happen if … and yet, with everything coming online, the real question is not if, but when. Therein lies the big challenge ahead of us.

Again, breaches and cyberattacks in the digital world are attacks on data and information. By contrast, cyberattacks in the IoT world are attacks on flesh, blood and steel – attacks that can be life-threatening. For example, ransomware that locks out access to your data takes on a whole different risk and urgency level when it is threatening to pollute your water filtration system. Compounding this is the fact that we live in a world where everything is now becoming connected, perhaps even to the point of getting ludicrous. From connected forks to connected diapers, everything is now coming online. This poses a serious challenge and an extremely difficult problem in terms of containing the cyber risk, for the following reasons:

  1. The manufacturers of these connected “things” in many cases are not thinking about the security of these connected things and often lack the expertise to do this well. In fact, in many cases, the components and modules used for connectivity are simply leveraged from other industries, thereby propagating the risk carried by those components from one industry to another. Worse still, manufacturers may not be willing to bear the cost of adding in security since the focus of many of these “connected things” is on their functionality, not on the ability to securely connect them.
  2. Consumers of those very products are, in many cases, not asking for or willing to pay for the additional security. Worse still, they do not know how to evaluate the security posture of these connected things or what questions to ask. This is another big problem not just at the individual consumer level, but also at the enterprise level. As an example, in the healthcare space, when making purchasing decisions on drug infusion pumps, hospitals tend to base the decision on functionality, price and certain regulatory requirements. Rarely does the information security (InfoSec) team get involved to evaluate their security posture. It is a completely different buying trajectory. In the past, when these products did not have a communication interface, that may have been fine. However, today, with almost all equipment in hospitals – and in many other industries – getting a communications interface, this creates major security challenges.
  3. Software developers for connected devices come from diverse backgrounds and geographies. There is little standardization or consensus on incorporating secure coding practices into the heart of software development and engineering coursework across the globe. In fact, security tends to be taught as a separate module that is optional in many curricula. Consequently, many developers today have no notion of how to build secure applications. The result is a continual proliferation of software written with little to no regard for its exploitability that is seeping into the world of connected things.

These are all significant and vexing challenges with neither simple fixes nor a common understanding or agreement on the problem space itself. I won’t claim to have a solution to all of them either, but in a subsequent blog, I will outline some thoughts on how one could begin approaching this. In the meantime, I think the risk and rhetoric around cyber breaches associated with the world of connected things could perhaps take on an entirely new dimension.

To learn more now about how a Security Delivery Platform can optimize your security posture, download the complete Security Inside Out e-book. Stay safe.

Source: https://blog.gigamon.com/2017/10/15/need-think-differently-iot-security/?utm_content=bufferd8099&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer

Author: Shehzad Merchant



Network Virtualization: What Is It and How to Optimize It?

Category : Gigamon

As on-premises environments become more expensive and complex, organizations are virtualizing more and more of their traditional infrastructure. In fact, a 2016 Gartner report found that, on average, enterprises have virtualized 75 percent or more of their data centers.

By virtualizing the network, the network administrator can automate many of the tasks previously performed manually, making the network much easier to scale. Additionally, network virtualization allows a single hardware platform to support multiple virtual devices that can be used as needed to cut costs and increase flexibility.

What Is Network Virtualization?

As defined by Wikipedia, network virtualization combines hardware and software network resources and network functionality into a single, software-based administrative entity called a virtual network, decoupling network services from the underlying hardware. This virtual network simulates the functionality of traditional hardware. Once a software-based view of the network has been created, the hardware is responsible only for forwarding packets, while the virtual network is used to deploy and manage network services.

So Why Virtualize Your Network?

Here are a few key benefits to consider:

  1. Boost IT Productivity: Network virtualization can reduce the cost of purchasing and maintaining hardware, which is especially useful for organizations with bursty workloads that would require over-provisioning to keep up with demand. Also, as data volume and speed increase, the ability to scale efficiently allows security teams to maintain better network visibility.
  2. Improved Security and Recovery Times: Network virtualization allows organizations to control which types of traffic go through the physical network. Many attackers rely on the fact that once they’ve breached the security perimeter, there are few, if any, security controls in place. Network virtualization allows organizations to better combat security threats by creating micro-perimeters within the network. With this ability, known as micro-segmentation, they can keep sensitive data within a certain virtual network that only authorized users can access (see the sketch after this list). For example, an organization could secure VoIP data by placing it within its own virtual network with restricted user access. According to Forrester Consulting: “Micro-segmentation provided through network virtualization paves the way for implementing a Zero Trust model. Where previous security models assumed the threat was outside the network, Zero Trust assumes even the network is insecure.” Additionally, network virtualization can reduce or even eliminate outages created by hardware failures and improve disaster recovery times. Disaster recovery with traditional network hardware requires many manual, time-intensive steps, including changing the IP address and updating the firewall. Network virtualization eliminates these steps.
  3. Faster Application Delivery: Without virtualization, network provisioning is a time-intensive, manual process. As a result, any time an application requires fundamental network changes, the application deployment time is extended. Moreover, the risk of a deployment failure increases significantly when organizations perform manual deployments. Since network virtualization automates network configuration, they can instead cut application deployment time from weeks to minutes. Reducing deployment time can have a significant impact on a company’s bottom line, allowing for faster new-product rollouts or major application updates.
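
To make the micro-segmentation idea above concrete, here is a minimal sketch, in Python, of how a segment-aware policy might admit or deny flows. The segment names, workloads and allow-list are hypothetical illustrations, not a Gigamon or VMware NSX API.

```python
# Minimal sketch of micro-segmentation policy evaluation (illustrative only;
# segment names, workloads and rules are hypothetical).

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str        # source workload name
    dst: str        # destination workload name
    dst_port: int   # destination TCP/UDP port

# Each workload is pinned to exactly one virtual network (micro-segment).
SEGMENT_OF = {
    "voip-gw-01": "voip",
    "voip-gw-02": "voip",
    "hr-db-01":   "hr",
    "web-01":     "dmz",
}

# Traffic is allowed only inside a segment, plus explicit cross-segment exceptions.
CROSS_SEGMENT_ALLOW = {
    ("dmz", "hr", 443),   # e.g. the web tier may reach the HR app over HTTPS
}

def is_allowed(flow: Flow) -> bool:
    """Permit intra-segment traffic; deny cross-segment traffic unless whitelisted."""
    src_seg = SEGMENT_OF.get(flow.src)
    dst_seg = SEGMENT_OF.get(flow.dst)
    if src_seg is None or dst_seg is None:
        return False                      # unknown workloads default to deny (Zero Trust)
    if src_seg == dst_seg:
        return True
    return (src_seg, dst_seg, flow.dst_port) in CROSS_SEGMENT_ALLOW

if __name__ == "__main__":
    print(is_allowed(Flow("voip-gw-01", "voip-gw-02", 5060)))  # True: same segment
    print(is_allowed(Flow("web-01", "hr-db-01", 443)))         # True: whitelisted exception
    print(is_allowed(Flow("web-01", "voip-gw-01", 22)))        # False: cross-segment
```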

Why Gigamon as a Network Virtualization Solution

To monitor and secure virtual workloads, it is critical to have immediate and deep visibility into network activity across the entire infrastructure. Application and security monitoring tools need to be able to analyze security threats, congestion points and application behavior. To accomplish this, data from both the physical and virtual network must be readily accessible.

 Gigamon offers an integrated solution using the GigaSECURE® Security Delivery Platform for both VMware NSX and ESX network virtualization. With it, security operations and networking teams can automate traffic visibility of both physical and virtual workloads and networks while benefiting from the efficiency of a virtualized network.

To learn more, please read our “Enhanced Monitoring for VMware Infrastructure” solution brief.

Source: https://blog.gigamon.com/2018/01/04/network-virtualization-optimize/?utm_content=buffer487e1&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer

Author: Diana Shtil



Gigamon and Plixer – Extending Visibility into the Public Cloud with NetFlow and IPFIX

Category : Gigamon

With the increasing awareness of security vulnerabilities, many enterprises are realizing that despite deploying the best security layers, no network or cloud deployment is safe from cyberthreats. To respond efficiently to attacks, organizations must now gather security analytics not only from their own network, but also from their public cloud deployments; this is where Plixer comes in. Plixer delivers a network traffic analytics system called Scrutinizer® that supports fast and efficient incident response, spanning from on-premises all the way to the public cloud. The solution allows you to gain visibility into cloud applications, security events and network traffic. It delivers actionable data to guide you from the detection of network and security events all the way to root cause analysis and mitigation.

What kind of challenges are customers who are moving to the public cloud facing?

Public cloud adoption rates continue to rise with no signs of slowing down. Moving to the cloud enables organizations to reduce cost, increase business agility and enjoy resource elasticity. As cloud models mature and comfort levels rise, organizations have begun to move their core business applications to the cloud. In contrast to the benefits of cost and agility, the increasing rate of public cloud adoption has left IT professionals with critical blind spots. When complaints of poor user experience come in, IT must quickly identify root cause. Often, even after determining their own network is not at fault, IT is faced with cloud providers pointing the finger back at them. Organizations need visibility that extends across the LAN, WAN and public cloud to prove their innocence and hold cloud providers accountable.

Scrutinizer’s collection, correlation and reporting of Gigamon NetFlow and IPFIX exports from the network infrastructure and the public cloud are the key to providing the visibility IT teams desperately need. The United States Computer Emergency Readiness Team (US-CERT) recently wrote, “Reviewing network perimeter [NetFlow] will help determine whether a network has experienced suspicious activity.” In addition, Gartner, Inc., says “Network traffic analysis improves the ability of security analysts to spot these attacks with a higher degree of certainty, facilitating a triage of events and prioritization of actions to be taken.”[i]

What is Plixer’s solution for the public cloud and what are the benefits for mutual customers when deploying applications on AWS?

Scrutinizer is the industry’s leading network traffic analytics system. It supports fast and efficient incident response with deep visibility into cloud applications, security events and network traffic. It delivers actionable data to guide you from the detection of network and security events all the way to root cause analysis and mitigation no matter where the problem arises.

Network team benefits:

  • Enriched data context into network traffic.
  • Increased efficiency and reduced cost.
  • Improved network and application performance.
  • Rapid reporting at massive scale.

Security team benefits:

  • Reduced security risks.
  • Improved time-to-resolution.
  • Better contextual forensics.
  • Advanced security analytics.

Joint Gigamon and Plixer customers gain valuable insight into their private, public and hybrid cloud implementations, allowing them to maintain better user experiences, reduce security risks and optimize network and application performance no matter where the application resides.

Sample dashboard of exported NetFlow and IPFIX data

How does Plixer integrate with Gigamon on AWS?

As the foundation of the Plixer incident response and behavior analysis architecture, Scrutinizer performs the collection, threat detection and reporting of all flow technologies on a single platform. It collects rich Gigamon NetFlow, IPFIX and metadata exports from the Gigamon Visibility Platform to gain deep traffic visibility across an enterprise’s wired, wireless and cloud-based infrastructure. Together, Gigamon and Plixer offer insight into the cloud, providing much needed visibility to help organizations hold their cloud providers accountable to service-level agreements and secure key infrastructure.
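
As an illustration of the kind of flow records such a collector ingests, here is a minimal sketch that decodes a NetFlow v5 export packet (the fixed-format predecessor of IPFIX). This is not Scrutinizer or Gigamon code; the field layout follows the standard NetFlow v5 header and record format, and the listening port is simply the conventional default.

```python
# Minimal sketch of a NetFlow v5 collector: receive one export packet over UDP
# and decode its flow records.

import socket
import struct

# NetFlow v5 header: version, count, sys_uptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval (24 bytes).
HEADER_FMT = "!HHIIIIBBH"
HEADER_LEN = struct.calcsize(HEADER_FMT)

# One 48-byte flow record: src/dst/nexthop IPs, in/out interfaces, packet and
# byte counts, first/last timestamps, ports, flags, protocol, ToS, AS numbers, masks.
RECORD_FMT = "!4s4s4sHHIIIIHHBBBBHHBBH"
RECORD_LEN = struct.calcsize(RECORD_FMT)

def parse_netflow_v5(packet: bytes):
    """Yield (src_ip, dst_ip, src_port, dst_port, protocol, bytes) per flow record."""
    version, count, *_ = struct.unpack(HEADER_FMT, packet[:HEADER_LEN])
    if version != 5:
        raise ValueError(f"not a NetFlow v5 packet (version={version})")
    for i in range(count):
        offset = HEADER_LEN + i * RECORD_LEN
        (srcaddr, dstaddr, _nexthop, _in_if, _out_if, _pkts, octets,
         _first, _last, srcport, dstport, _pad1, _tcp_flags, proto,
         _tos, _src_as, _dst_as, _src_mask, _dst_mask, _pad2) = struct.unpack(
            RECORD_FMT, packet[offset:offset + RECORD_LEN])
        yield (socket.inet_ntoa(srcaddr), socket.inet_ntoa(dstaddr),
               srcport, dstport, proto, octets)

if __name__ == "__main__":
    # Listen for exports on the conventional NetFlow port (adjust to your exporter).
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 2055))
    data, exporter = sock.recvfrom(65535)
    for flow in parse_netflow_v5(data):
        print(exporter[0], flow)
```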

Unlike legacy, all-in-one solutions that cannot keep pace in today’s sophisticated and complex threat environments, Scrutinizer excels at delivering real-time context and situational awareness. Now with the help of Gigamon, this can extend all the way into the public cloud.

If you haven’t tried Scrutinizer yet, you can download the free edition to see how well it works with the Gigamon Visibility Platform and how it can help you better manage and secure your own network while keeping both your boss and users delighted.

For more information, please watch the Gigamon and Plixer Network Traffic Analytics video.


[i] Gartner, Inc., “Hype Cycle for Infrastructure Protection,” July 2016.

Source: https://blog.gigamon.com/2017/11/29/gigamon-plixer-extending-visibility-public-cloud-netflow-ipfix/?utm_content=bufferc4d5f&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer

Author: Bob Noel



Gigamon Introduces the First Scalable SSL Decryption Solution for 100Gb Networks

Category : Gigamon

Reduces Costs and Time-to-Threat Detection via Architectural Approach that Enables Traffic to be Decrypted Once and Sent to Multiple Security Tools for Inspection

Gigamon Inc., the leader in traffic visibility solutions for cybersecurity and monitoring applications, today announced the industry’s first visibility solution to support SSL/TLS decryption for high speed 100Gb and 40Gb networks. Part of the GigaSECURE Security Delivery Platform, the solution empowers companies to decrypt and re-encrypt their data once and inspect it with multiple best-of-breed security tools. This helps to expose hidden threats in SSL/TLS sessions, reduce security tool overload, and extend the value and return-on-investment (ROI) of existing security tools.
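
The “decrypt once, inspect many times” idea can be sketched conceptually: a single decryption stage produces plaintext that is fanned out to several inspection tools, so no tool has to terminate TLS itself. The tools and checks below are purely illustrative stand-ins, not the GigaSECURE implementation.

```python
# Conceptual sketch of decrypt-once fan-out: plaintext from one decryption
# stage is handed to multiple inspection callbacks before being re-encrypted
# and forwarded by that same stage.

from typing import Callable, List

InspectionTool = Callable[[bytes], None]

def ids_tool(payload: bytes) -> None:
    # Hypothetical signature check standing in for an IDS/IPS.
    if b"cmd.exe" in payload:
        print("IDS: suspicious payload detected")

def dlp_tool(payload: bytes) -> None:
    # Hypothetical data-loss-prevention check.
    if b"CONFIDENTIAL" in payload:
        print("DLP: sensitive marker seen leaving the network")

def fan_out(decrypted_payload: bytes, tools: List[InspectionTool]) -> bytes:
    """Give every tool the same decrypted copy, then return the payload to the
    decryption stage for re-encryption and forwarding."""
    for tool in tools:
        tool(decrypted_payload)
    return decrypted_payload

if __name__ == "__main__":
    sample = b"GET /report HTTP/1.1\r\nX-Data: CONFIDENTIAL\r\n\r\n"
    fan_out(sample, [ids_tool, dlp_tool])
```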

With the volume of data flowing through corporate networks having increased significantly in recent years, companies have upgraded to higher speed networks running at 40Gb and 100Gb. Meanwhile, there is a dramatic rise in the volume of data running on these high-speed networks that is encrypted, driven by the increased use of SaaS applications such as Microsoft Office 365 and Dropbox. Gartner estimates that, through 2019, more than 80 percent of enterprises’ web traffic will be encrypted.

“Traditional network security architectures are ineffective at supporting the explosive growth in high speed traffic and, more importantly, at identifying and stopping malware and data exfiltration that use encryption,” said Ananda Rajagopal, vice president of products for Gigamon. “Many security and monitoring tools become overloaded in 100Gb network environments, so it’s clear a new approach is needed. Our new solution enables enterprises to stop the sprawl by redeploying security tools from the edge of their network to the core, where it’s easier to spot lateral attacks and more quickly identify threats.”

Malware leverages SSL/TLS encryption to hide and avoid inspection. A Trustwave 2017 report estimates that 36 percent of malware samples analyzed used some form of encryption. In 40Gb and 100Gb networks, decrypting, exposing and identifying hidden threats in encrypted traffic is increasingly challenging since most security and monitoring tools do not support such speeds. In addition, a tool-by-tool approach is very complex, costly and inefficient. Research from NSS Labs indicates a performance degradation of up to 80 percent when security tools decrypt traffic and perform their specific security function.

“By utilizing Check Point’s Infinity architecture, which manages Next-Generation Threat Prevention gateways worldwide, Gigamon provides world-class performance and a resilient security architecture, enabling inline SSL protection for our largest customer deployments,” said Jason Min, head of business and corporate development, Check Point Software. “Our partnership with Gigamon delivers optimal performance and advanced threat prevention which is critical for enterprises in this era of veiled cyber threats.”

“It’s great to see the ‘decrypt once, inspect many times’ architectural approach that Gigamon is taking to inline SSL decryption. It’s an efficient approach that will help our customers and solution provider community take advantage of whichever security solutions best suit their business need,” said Matt Rochford, vice president of the cybersecurity group in Arrow Electronics’ enterprise computing solutions business.

The expansion of the GigaSECURE Security Delivery Platform is a continuation of the Gigamon security strategy which debuted in 2015 and was extended with metadata and public cloud visibility last year. This year the company announced its inline SSL/TLS decryption solution and introduced the Defender Lifecycle Model. When implemented, the Defender Lifecycle Model empowers cybersecurity professionals to use continuous network visibility to control and automate tasks between best-of-breed security tools in the continuum of prevention, detection, prediction and containment. Recently the company announced the extension of its public cloud offerings and new applications for Splunk and Phantom in support of the Defender Lifecycle Model. Gigamon continues to build on its vision with the expansion of its security offerings for both public cloud and on-premises infrastructure.

GigaSECURE, a Security Delivery Platform

This solution includes:

  • GigaVUE® visibility nodes, such as the GigaVUE-HC2 or GigaVUE-HC3.
  • GigaSMART® module corresponding to the selected visibility node.
  • An inline bypass module to provide resiliency in 10, 40 or 100Gb networks (see the sketch after this list).
  • Ability to activate desired security modules including SSL/TLS Decryption, Application Session Filtering, and NetFlow/Metadata Generation.
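
As referenced above, the resiliency an inline bypass module provides can be sketched as a simple heartbeat check: while the inline tool answers heartbeats, traffic flows through it; if it goes silent, traffic is forwarded around it so the link stays up. The timing and fail-open policy below are illustrative assumptions, not the behavior of a specific GigaVUE module.

```python
# Minimal sketch of inline bypass logic: fail open around an unresponsive tool.

import time

HEARTBEAT_TIMEOUT_S = 1.0   # how long we tolerate silence from the inline tool

class InlineBypass:
    def __init__(self) -> None:
        self.last_heartbeat = time.monotonic()

    def heartbeat_received(self) -> None:
        """Called whenever the inline tool echoes a heartbeat packet."""
        self.last_heartbeat = time.monotonic()

    def tool_is_healthy(self) -> bool:
        return (time.monotonic() - self.last_heartbeat) < HEARTBEAT_TIMEOUT_S

    def forward(self, packet: bytes) -> str:
        """Send traffic through the tool when healthy, otherwise bypass it
        (fail open) so production traffic is never dropped."""
        return "via-tool" if self.tool_is_healthy() else "bypass"

if __name__ == "__main__":
    bypass = InlineBypass()
    print(bypass.forward(b"pkt1"))    # via-tool: heartbeat is fresh
    time.sleep(1.2)                   # simulate the tool going silent
    print(bypass.forward(b"pkt2"))    # bypass: fail-open keeps the link up
```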

Resources

  • Blog post: Stop the Sprawl, Security at the Speed of the Network
  • Feature brief: SSL/TLS Decryption
  • Web page: SSL/TLS Decryption

Source: https://www.gigamon.com/company/news-and-events/newsroom/100gb-ssl-decryption.html?utm_content=buffer622e4&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer



Protecting Critical Infrastructure Is … Well … Critical

Category : Gigamon

In our day-to-day lives, we rely on a well-running infrastructure. Whether that infrastructure is transportation, power plants or water utilities, we expect seamless functionality – there’s a reason we call them “critical.”

Today however, we no longer live in an analog world. Everything, including infrastructure, is increasingly being connected digitally, and with digitization comes the risk of greater vulnerability and the potential for online attacks to result in real, physical tragedy. Dams, communications infrastructure, nuclear reactors … these critical infrastructure sectors consist of assets, systems and networks that if impacted, could cripple the economy and put public health, safety and national security at risk.

Thankfully, the Department of Homeland Security (DHS) has been thinking about these vulnerabilities and has identified 16 critical infrastructure sectors as vital to the United States’ economy. In fact, last week on October 23, based on joint analytic efforts between the DHS and FBI, the US-CERT issued a technical advisory that warned of advanced persistent threat (APT) activity targeting energy and other critical infrastructure sectors.

It should be a no-brainer that every country needs to take special steps to safeguard its critical infrastructure, but if you still need convincing, I suggest watching the absorbing documentary “Zero Days” about the Stuxnet malware that was famously used to destroy centrifuges in Iranian nuclear facilities.

A Whole Other Ballgame

Protecting critical infrastructure is a different ballgame compared to protecting data center assets. Several characteristics stand out:

  • Remote locations. Unlike with data centers, many elements of critical infrastructure are typically distributed across a large geographical region. Many of these locations are unmanned or at best, have very few personnel.
  • Long equipment life span. Most active infrastructure elements in data centers have a useful life of about five years. By contrast, the lifetime of critical infrastructure equipment is extremely long, often spanning 10 to 20 years or more. The immediate implication is that cybersecurity defense postures must consider the impact of legacy equipment running several vendors’ outdated software.
  • Government regulation. Critical infrastructure is typically regulated by a government body to ensure compliance, failing which, drastic fines are levied on the critical infrastructure operator or owner.
  • Legacy technologies. Many critical infrastructure elements communicate over legacy technologies such as Supervisory Control and Data Acquisition (SCADA) – a method developed to standardize universal access to a variety of local control modules in industrial control systems (ICS), which are at the heart of critical infrastructure.
  • Unencrypted communications. Much to an attacker’s delight, most communications over a SCADA infrastructure are unencrypted. Moreover, the nature of SCADA communications also requires timely response and interaction between the communicating entities, making such equipment a soft target for denial-of-service (DoS) attacks (see the sketch after this list).
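
As referenced in the last bullet, the absence of protection in typical SCADA/ICS protocols is easy to see on the wire. The sketch below decodes a Modbus/TCP request, one common ICS protocol used here purely as an example: nothing in the frame carries encryption or authentication, so anyone on the path can read or forge it.

```python
# Minimal sketch decoding a Modbus/TCP request frame. Every field is plaintext;
# there is no key, signature or credential anywhere in the protocol.

import struct

def parse_modbus_request(frame: bytes) -> dict:
    """Decode the MBAP header and a 'read holding registers' (0x03) PDU."""
    transaction_id, protocol_id, length, unit_id = struct.unpack("!HHHB", frame[:7])
    function_code = frame[7]
    result = {
        "transaction_id": transaction_id,
        "protocol_id": protocol_id,       # always 0 for Modbus
        "length": length,                 # bytes following the length field
        "unit_id": unit_id,               # addressed slave device
        "function_code": function_code,
    }
    if function_code == 0x03:             # read holding registers
        start, count = struct.unpack("!HH", frame[8:12])
        result.update(start_register=start, register_count=count)
    return result

if __name__ == "__main__":
    # A captured-looking request: read 10 holding registers starting at address 0.
    frame = struct.pack("!HHHBBHH", 0x0001, 0x0000, 6, 0x11, 0x03, 0x0000, 10)
    print(parse_modbus_request(frame))
```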

These characteristics, combined with the criticality of the sector, have made such infrastructure elements high-value targets for threat actors. Unlike a data center breach that leads to valuable data loss, a similar critical infrastructure breach could have a devastating impact on lives, health or economies. Indeed, research over the last few years in both academia and industry has shown potential risks to critical infrastructure from malware and ransomware attacks, malicious payloads and other threat vectors.

What Can Be Done to Protect Critical Infrastructure?

Fortunately, awareness on this topic has been on the rise. Earlier this year, President Trump signed an Executive Order on Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure and the National Institute of Standards and Technology (NIST) has also developed a framework for improving critical infrastructure cybersecurity.

If you or your organization is responsible for some part of critical infrastructure, there are three steps that you can take as part of developing your risk management strategy:

  • Close the visibility gaps: Put simply, it is essential to have continuous network visibility across both information technology (IT) and operational technology (OT) operations.
  • Close the budget gaps: With the right visibility platform, traffic can be aggregated and filtered before it reaches your tools, extending the value of existing investments and delivering a significant boost in ROI.
  • Close the protection gaps: If your current operational processes are getting in the way of upgrades and new cybersecurity initiatives, consider using innovations like inline bypass to speed deployment of new security tools or software.

For a more detailed explanation of the above steps, please read the Gigamon Point of View “Aligning Agency Cybersecurity Practices with the Cybersecurity Framework.”

Already, several critical infrastructure sectors have deployed Gigamon visibility solutions to achieve these protections. For example, many leading public power utilities have used the GigaSECURE Security Delivery Platform to develop a visibility strategy to detect grid tampering, obtain insight right down to substations and gateway nodes, and extract both network traffic and vital metadata to feed their central Security Operations Centers (SOCs) and achieve compliance with NERC CIP (the North American Electric Reliability Corporation’s Critical Infrastructure Protection standards).

Source: https://blog.gigamon.com/2017/10/29/protecting-critical-infrastructure-well-critical/?utm_content=buffer9a02d&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer

Author: Ananda Rajagopal



What Is Network Visibility?

Category : Gigamon

At a time when network security and monitoring is of vital importance due to increasing volumes of data and growing cybersecurity concerns, comprehensive network visibility is a must for most businesses. Despite this growing need, many companies fall short of their network visibility goals. For example, network blind spots have become a major problem for organizations. According to a recent survey from Vanson Bourne, roughly two-thirds – 67 percent – of organizations say that network blind spots are one of the biggest challenges they face when trying to protect their data.

A full half of organizations have inadequate information to identify potential threats. Many companies – almost 40 percent of them – lack a fully deployed comprehensive program or process to pinpoint, notify and respond to a security breach.

In short, about 75 percent of organizations agree that they need to improve their network visibility to better enable network security, and that starts with understanding what it is and how it can help. 

The Basic Definition

Network visibility covers a lot of ground, but its definition is actually rather simple. The term refers to being aware of everything within and moving through your network with the help of network visibility tools. These tools keep a close and constant eye on network traffic, monitored applications, network performance, managed network resources and big data analytics, which in turn requires effective and scalable data collection, aggregation, distribution and delivery.

Network visibility, however, is not a passive function as it allows you to exert greater control over all these aspects. The more in-depth, proactive and extensive your network visibility, the more control you have over your network data, and the better you can make decisions regarding the flow and protection of that data.  

What You Can Do with Network Visibility

Improving network visibility has many benefits. Let’s take one example mentioned above: application monitoring. Most businesses have a host of applications they use as part of their operations. With better network visibility, application monitoring improves, allowing you to optimize overall application performance. You can filter critical application traffic to the proper tools and better track when and by whom each application is used while not overloading any of your application monitoring tools. Think about it: Why would you send non-video traffic to a video server? Or email traffic to non-email gateways? It’s simply a waste of server processing power and network bandwidth. 
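
A minimal sketch of that filtering idea: classify each flow by application and steer it only to the tools that need it. The port-based classification and tool names are simplified illustrations, not a Gigamon traffic-steering configuration.

```python
# Minimal sketch of application-aware traffic steering: each flow is classified
# and sent only to relevant tools, so no tool is flooded with irrelevant data.

from typing import List

APP_BY_PORT = {
    25: "email", 587: "email", 993: "email",
    554: "video", 1935: "video",
    80: "web", 443: "web",
}

TOOLS_BY_APP = {
    "email": ["email-security-gateway"],
    "video": ["video-performance-monitor"],
    "web":   ["waf", "apm-tool"],
}

def steer(dst_port: int) -> List[str]:
    """Return the monitoring tools that should receive this flow."""
    app = APP_BY_PORT.get(dst_port, "other")
    # Unclassified traffic goes only to a general-purpose analytics tool.
    return TOOLS_BY_APP.get(app, ["netflow-collector"])

if __name__ == "__main__":
    print(steer(993))   # ['email-security-gateway']
    print(steer(1935))  # ['video-performance-monitor']
    print(steer(6667))  # ['netflow-collector']
```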

Network Visibility Leads to Better Security

Perhaps the biggest appeal of improving network visibility is the boost it provides to security efforts. Better network visibility allows you to monitor network traffic for malicious behavior and potential threats more closely. For example, you can better detect when someone gains unauthorized access to the network, thereby allowing security measures to respond quickly. The same goes for detecting malware hidden within encrypted network traffic, practically a necessity today as companies increasingly use SSL/TLS for securing their communications.

No security system is perfect, however, and some security breaches may still occur. In the event of a breach, improved network visibility can accelerate the time to identify and contain the threat, reducing the time, effort and cost involved in mitigating the incident.

Network Visibility Contributes to Business Transformation

Network visibility offers benefits that go beyond improving security; in fact, it can help your company grow both now and into the future. With comprehensive network visibility, you can readily identify trends early on and see where and how your network data is increasing. This helps you plan for future growth and not be caught in a period of catch-up, which can hurt business transformation projects, such as data and big data analytics, cloud adoption and the Internet of Things (IoT). In essence, effective network visibility helps you scale your network – and your business.



No More Network Blind Spots, See Um, Secure Um

Category : Gigamon

East Coast summer nights of my childhood were thick with humidity, fireflies and unfortunately, merciless mosquitoes and biting midges. So, when a West Coast friend said she had a summertime no-see-um tale to tell, I was ready to commiserate.

My friend likes to camp – alone. Not in deep, dark, remote backcountry, but, you know, at drive-in campgrounds. Pull in, pitch a tent, camp – that’s her style. While not the most private, she likes the proximity to restrooms and even, people.

Before one adventure, she was gathering provisions at Costco when she saw a “no-see-um” tent for sale. “Well, this is exactly what I need,” she thought. No longer would she have to lower her “shades” or head to the restroom to change. She’d be free to undress in her tent, relax and fall asleep to the hum of an adjacent freeway.

Of course, we can all figure out how this story ended. After having enjoyed her newfound freedom for an evening, she returned the following morning from a visit to the loo only to realize the naked truth.

Like a Good Boy Scout, Are You Prepared?

While my friend’s false sense of security bordered on the ridiculous – okay, it was ridiculous – it speaks to the potential for misjudging cybersecurity readiness. Her problem was that she felt secure when she wasn’t – a blind spot of sorts that could have led to more than just awkward consequences.

In a way, the same holds true with enterprises who have bought innumerable security tools – perimeter firewalls, endpoint antivirus, IPSs – to keep prying eyes out. They, too, often have a false sense of security. Unlike my friend, it’s not that they don’t understand how these tools work; rather it’s that they don’t understand that these tools cannot provide complete network protection.

There are simply too many bad guys and too little time to detect and prevent all cyberattacks. Not only is malware everywhere – for example, zero-day exploits and command-and-control infrastructures are available for purchase at a moment’s notice by anyone with a computer and the desire to wreak havoc – but with data flying across networks at increasing speeds and volumes, it’s more and more difficult for enterprises to do any intelligent analysis to uncover threats and prevent attacks from propagating across core systems.

Detecting compromises is hard. It requires monitoring a series of activities over time and security tools only have visibility into a certain set of activities – most cannot see and comprehend the entire kill chain. This incomplete view is more than problematic – it’s dangerous.

In fact, according to 67 percent of respondents to a new Vanson Bourne survey, “Hide and Seek: Cybersecurity vs. the Cloud,” network blind spots are a major obstacle to data protection. The survey, which polled IT and security decision-makers on network visibility and cloud security preparedness, also revealed that 43 percent of respondents lack complete visibility into all data traversing their networks and half lack adequate information to identify threats. By all counts, such data blindness could lead to serious security implications – not only within enterprise environments, but also in the cloud, where 56 percent of respondents are moving critical, proprietary corporate information and 47 percent are moving personally identifiable information.

See the Forest and the Trees

Sometimes we apply an available tool because it sounds like it’ll do the job – ahem, my dear friend and her no-see-um tent – but fully understanding the purpose and assessing the efficacy of your security tools isn’t a minor detail to be overlooked. Enterprises who’ve been buying more tools to address the security problem are beginning to question if they are getting the right return on their investments, especially when they have no means to measure how secure they are. To further complicate matters, more tools often increase the complexity of security architectures, which can exacerbate the data blindness issue.

So, what can be done? For sure, preventative solutions shouldn’t go away – they play a critical role in basic security hygiene and protecting against known threats – but they must be augmented with solutions for better detection, prediction and response in a way that doesn’t create more blind spots. In other words, enterprises need a new approach founded on greater visibility and control of network traffic, one that helps increase the speed and efficacy of existing security tools and that allows them to say, “Okay, this is where my investments are going and these are the gaps I need to address to become more secure or even, to identify if it’s possible to become more secure or not.”

If you’re unsure how secure your network is, maybe start with a few simple questions:

  • Can you see into all data across your network? Or does some data remain hidden due to silos between network and security operations teams?
  • Are your security tools able to scale for faster speeds and increased data volume? Without diminishing their performance?
  • What about your cloud deployments – are they being used securely? Is there clear ownership of cloud security?

Source: https://blog.gigamon.com/2017/08/16/no-network-blind-spots-see-um-secure-um/?utm_content=bufferfc292&utm_medium=social&utm_source=linkedin.com&utm_campaign=buffer

Author: Erin O’Malley



If Money Can’t Buy Happiness, Can It Buy Security?

Category : Gigamon

Gigamon has just published the results of a recent Vanson Bourne survey that polled IT and security decision-makers from the U.S., U.K., France and Germany on their cloud security preparedness and network visibility issues. Though it covers cybersecurity, cloud and GDPR trends, at its heart, the survey tries to answer the question: What makes our networks insecure?

Even with an abundance of security tools at their disposal, companies remain vulnerable to compromise.

New cyber threats make everyone insecure. To restore our own perception of safety, the tendency is to buy new security tools. In fact, many of those surveyed plan to increase cybersecurity spend by 36 percent over the next three years. However, at the same time, 70 percent of respondents intuitively recognize that more money toward more tools doesn’t necessarily mean better security against new cyber threats.

What Else Can We Do?

What is preventing us from securing our networks? In its research, Vanson Bourne identified three key factors that are putting networks at risk:

  1. Hidden data: A large amount of network data remains hidden due to segmentation of data and tools between NetOps and SecOps.
  2. Too much data: The increasing speed and growth of network traffic is stressing monitoring and security tools.
  3. Lack of cloud security: Organizations are migrating high-value information to the cloud, where security is limited and application data is difficult to access.

Altogether, these factors result in data blindness: the inability to see, understand and secure network traffic. A by-product of this is an inability to adapt our networks, change our strategies and anticipate threats. For instance, the survey reveals that lack of visibility into network traffic is also making GDPR compliance more difficult by preventing enterprises from developing a robust GDPR strategy that maps to a dedicated budget.

Fighting Data Blindness with Network Visibility

While survey respondents aren’t necessarily convinced that buying more and more security tools is the answer, they aren’t sure what is and must continue to do something to help guard against increasing cybersecurity threats and the risk of data loss. At the same time, this doesn’t mean that existing security tools aren’t capable, but rather, that they may not have the visibility into the data they need to do their jobs as quickly and efficiently as possible.

Take a fleet of Ferraris. These are high-performing cars, but only if they are provided the right fuel. No fuel, no go. It’s the same for security tools. There are incredible products on the market today, but without visibility into the data they need, they may underperform.

The Vanson Bourne survey results confirm that it is imperative for enterprises to adopt a platform that provides greater visibility into their network traffic, and one that’s integrated with their security tools for increased speed and effectiveness. One like GigaSECURE, the industry’s first Security Delivery Platform that can help organizations see what matters in their enterprise and beyond for better data protection.

Read the Vanson Bourne survey and our analysis. I hope that it will help you consider how these factors are influencing your own network security and cloud strategy. See What Matters.™



5 Keys to Quick and Effective Identity Verification Service Deployment

Category : Gigamon

ID fraud is a critical issue for MNOs (Mobile Network Operators); there are approximately 200 types of fraud, and 35% of all mobile fraud comes from subscriptions. It’s an issue that cannot be ignored; the cost is too great for many MNOs to bear. Furthermore, in addition to damaging profits, it damages consumers as well, thanks to the inhibitive effect fraud has on innovation. How can we innovate successfully if we are continually forced to divert significant funds and resources towards mitigating fraudulent activity?

As we’ve discussed in a previous post, there are three overarching reasons to care about the problem:

  • Revenue: the total annual cost of identity fraud globally is €40 billion
  • Regulation: financial services on mobile are growing. MNOs must meet KYC regulations or face heavy fines
  • Reputation: identity fraud victims will abandon networks they no longer trust to keep them secure

But how can we counteract all this fraud? The answer lies in the deployment of trusted and tested identity verification services that can perform effective checks in real time. These solutions are available and are flexible enough to meet a wide range of needs – they can provide identity document verification (to check authenticity), customer authentication (to check the holder is the correct owner) through advanced biometric checks, risk assessment (which checks a holder against control lists), ID verification reports (for audits) and automatic form filling (to speed up enrolment and limit manual input errors).
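
To illustrate how those checks fit together, here is a sketch of an enrolment flow that chains them and records an audit-ready result. Every function and field name is a hypothetical stand-in; real services, including Gemalto’s, expose their own APIs and scoring models.

```python
# Illustrative sketch of an ID verification pipeline: document check, biometric
# match, control-list lookup, report and auto form fill, all hypothetical.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class IDCheckResult:
    passed: bool
    reasons: List[str] = field(default_factory=list)
    extracted_fields: Dict[str, str] = field(default_factory=dict)

def verify_document(document_image: bytes) -> Dict[str, str]:
    # Hypothetical OCR + security-feature check; returns fields read from the ID.
    return {"name": "A. Subscriber", "doc_number": "X1234567", "authentic": "true"}

def authenticate_holder(selfie: bytes, document_portrait: bytes) -> bool:
    # Hypothetical biometric face match between the live selfie and the ID photo.
    return True

def assess_risk(doc_number: str, control_lists: List[str]) -> bool:
    # Hypothetical lookup against fraud/watch lists; True means no hit.
    return doc_number not in control_lists

def enrol_subscriber(document_image: bytes, selfie: bytes,
                     control_lists: List[str]) -> IDCheckResult:
    fields = verify_document(document_image)
    result = IDCheckResult(passed=True, extracted_fields=fields)  # auto form fill
    if fields.get("authentic") != "true":
        result.passed = False
        result.reasons.append("document failed authenticity checks")
    if not authenticate_holder(selfie, document_image):
        result.passed = False
        result.reasons.append("biometric match failed")
    if not assess_risk(fields["doc_number"], control_lists):
        result.passed = False
        result.reasons.append("document number found on a control list")
    return result   # stored as the audit-ready verification report

if __name__ == "__main__":
    outcome = enrol_subscriber(b"<front of ID>", b"<selfie>", control_lists=["Z0000000"])
    print(outcome.passed, outcome.reasons, outcome.extracted_fields)
```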

With all of this in mind, MNOs will of course want to know what the keys to success will be. Can they be confident that it’ll all work? See below for the five key factors that will affect the success and effectiveness of a roll out.

  1. A phased and systematic approach

Phasing implementation ensures the effectiveness of the solution is well tested and perfected before it’s fully initiated. With this approach, teams can draw on best practices and lessons learned, rather than migrating all stores at the same time, which can pose problems. These first stages are essential when trying to understand, analyze and document the dynamics of identity fraud on a small scale, before expanding it across all stores.

This phased and systematic approach also requires anticipation of new regulations which might be introduced during deployment; of course, this is easier said than done. It is essential though, if you want to ensure ID checks can be extended to all use cases (including enrolment for specific value-added services) as well as purchase and renewal of prepaid and postpaid SIMs. As a result of all this, MNOs will ensure they meet current legal requirements and will be prepared for the introduction of more.

  2. Strong feedback

Feedback is crucial and shouldn’t be underestimated. Store managers can share best practice techniques whenever possible. With profitability as a collective main objective, any solution that cuts or at least reduces ID fraud and related costs should be welcomed with open arms. As soon as the benefit of the ID Verification solution is realized, it will then be discussed at length internally, encouraging strong adoption across the board.

  3. A user-centric approach

When it comes to acceptance, we must keep things as simple and convenient as possible for all employees and customers. This means in-store staff will be able to focus on customer care rather than on admin.

It can be something as simple as automated form filling that provides convenience for the customer and clerk, as it speeds up enrolment and avoids needless input errors.

And if the company can prove it is handling its customers’ details securely while streamlining interaction, it will be able to build a deeper and more trusted customer relationship.

  4. Integrating with legacy infrastructures

The best identity verification services are designed to have a minimal impact on existing infrastructures. They plug seamlessly into existing IT systems and can be used (with or without scanners) on mobile devices such as smartphones and tablets. This easy and flexible integration into existing infrastructure ensures a quick deployment. In addition, adaptable reporting allows easy integration into existing back-end systems.

  5. Addressing MNOs’ acquisition strategies

On top of regular internet and mobile services, MNOs can also offer more value-added services now, such as transport ticketing and banking and payment services. For example, our own identity verification services from Gemalto offer a unique and consistent way to cover all those services at the same time, helping streamline sales processes both in-store and remotely.

So, there you have it – the five key factors for successful ID verification deployment.

Source: https://blog.gemalto.com/mobile/2017/10/17/5-keys-quick-effective-identity-verification-service-deployment/

Author: Didier Benkoel-Adechy

