Monthly Archives: February 2017

10 Steps for Combating DDoS in Real Time

Category : F5

While the nature of DDoS attacks is constantly evolving, two things are clear: the volume of attacks is increasing and every business is at risk.

What You’ll Learn

  • The importance of having a plan in place before attacks happen.
  • The four basic types of DDoS attacks and what you can do to protect yourself against them.
  • How to build your response plan using the customizable playbook and action plan.

TUESDAY, FEB. 28, 2017 | 11:00AM-12:00PM PST

Register here

Top Healthcare Topics from HIMSS Forum

Category : Gigamon

In healthcare, there can often be a disconnect between IT and executive leadership when it comes to prioritization of cybersecurity risk management. Finding ways to bridge this gap has been a prevailing theme at both this week’s HIMSS 2017 Conference and Exhibition and last quarter’s HIMSS Privacy and Security Forum.

Between Two Worlds

When it comes to cybersecurity governance for healthcare, I’m a bit of a unicorn—in that I work both in the cybersecurity industry at Gigamon Canada and serve as an active member on the board of directors for the Brant Community Healthcare System. Having lived and worked in the two worlds, I have a unique perspective.

On one hand, I’m all too aware of the challenges CISOs and security teams face in getting the attention of senior leadership and the budgets they need (but most often don’t get) to properly protect their organizations. 

At the same time, I also understand how difficult it is for board members to assess and prioritize cyber risk alongside the immense financial challenges of operating a complex multi-site hospital system, including:

  • Maintaining and improving the quality of patient care at a time when obesity rates are skyrocketing
  • Managing the complexity and costs associated with chronic diseases—such as cancer, cardiovascular disease, and diabetes—that are taxing many healthcare systems to their limits
  • Ensuring employee satisfaction and engagement
  • Finding funding and resources to keep up with innovation and improvements in medical equipment and technologies

Let’s just say, our meetings often run long.

Striking a Balance

For many healthcare executives and board members, addressing and managing cybersecurity risk can seem both a distraction and a gratuitous expense at a time when every dollar is needed for patient care.

Striking a balance, by integrating privacy and security needs into an organization’s strategic objectives rather than treating them as simply an “IT thing,” was reiterated throughout the event as “an absolute necessity.”

Having attended the HIMSS Privacy and Security Forum, my top takeaways were:

#1 Healthcare is vulnerable and under attack like no other industry, and we need to work together more often and more effectively to find solutions.

The headlines never stop. Hospital hacked. Hospital network held for ransom. Tens of thousands of medical records breached. Most of us can’t even read the news anymore, let alone the growing list of breaches on the U.S. Department of Health and Human Services Office for Civil Rights Breach Portal (dubbed “the wall of shame” by many in the industry).

Keynote speakers Joel Brenner, former senior counsel for the NSA, and Stephen Nardone, practice director of security and mobility at Connection, drove the “vulnerability” message home in their presentations, respectively: “Cybersecurity: How’d It Get So Bad – and Can We Do Anything about It?” and “Mitigating Cyber Threats in Healthcare.”

Unlike monetary transactions, health data is very detailed, unstructured, and personal. For instance, if someone commits fraud by using your credit card to make unauthorized purchases, it’s relatively easy for your bank to detect, investigate, and get your money back. But once someone steals your health information, it’s out there. There’s no way to get it back and cybercriminals can use it for more nefarious purposes than simply taking a stolen credit card number to buy a gigantic TV on Amazon.

Due to the unique characteristics of healthcare data, fraud can also take weeks, months, or even years to detect, making the data much more valuable than credit card numbers. Losing it isn’t simply costly or annoying for patients; it can be life-threatening, which raises both the stakes and the value to cybercriminals.

Stealing hospital and patient data is highly profitable with a low risk of getting caught. Scott Borg, director and chief economist of the U.S. Cyber Consequences Unit, presented a session, “Economics of Cyber Attacks on Healthcare Providers,” that made me think more about how changing these economic factors will be key to combating cybersecurity threats. He also discussed how sharing threat intelligence and security best practices, plus creating new forums for education and collaboration, can improve security defenses. Denise Anderson, executive director of NH-ISAC, reiterated his position in her “real case scenarios” session, “Threat Intelligence: Head off Attacks before the Damage Is Done.”

In short, the most important thing organizations can do is ensure that cybersecurity risk management is an enterprise-wide strategic focus. An executive-level mandate is critical to success, as all departments and employees—from clinicians to housekeeping—need to be engaged to make a difference.

#2 We’re bad at framing and communicating risk to executives and the board. We need to do better.

Telling the board scary stories isn’t effective anymore, if it ever was. At this point, everyone has CNN-sensationalized breach headline fatigue and another slide deck filled with hacking horrors isn’t going to resonate. In fact, it may even annoy them—which certainly won’t help your business case to get a new firewall approved.

As much as we may complain that executives and boards don’t really understand cybersecurity, we continue to present highly technical presentations that confuse and bore them. This has to change. We in IT need to take the time to understand their priorities and communicate strategically from a business and financial risk perspective—not just tactically or with a myopic viewpoint limited to our own project and department needs.

Most CISOs are coming around to the idea that cybersecurity risk has to be framed in a language that CEOs, CFOs, and boards can understand and act on. Sure, this includes dropping computer industry jargon, but the true value will come from calculating and communicating the value of what’s at risk and aligning your efforts to the overall strategic objectives of the organization.

The Healthcare Security Leadership Panel: State of the Union—including John Donohue, associate CIO of Technology and Infrastructure at Penn Medicine; Anahi Santiago, CISO of Christiana Care Health System; and Darren Lacey, CISO of Johns Hopkins University & Johns Hopkins Medicine—discussed how CISOs are being rewarded for taking a more strategic approach by “getting a seat at the C-level table” where enterprise-wide strategy is developed.

I reiterated similar points during my own session, recommending that CISOs always speak the language of their audience and simplify messaging to the very core of what they’re trying to accomplish. For example, stop calling them “hackers” and start calling them just plain old “criminals.”

A CPA on a board, for instance, can’t relate to an APT that has exploited privileged user credentials to install rootkits on multiple endpoints and has bypassed IPS by encrypting command-and-control messaging. But he can relate to spending $100k on a firewall because criminals just tried to steal personal health data worth $20 million—which could also expose the organization to HIPAA violation fines, potential class-action suits in the tens of millions, and damage to the hospital’s reputation.
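
The back-of-the-envelope math behind that kind of pitch can be made explicit. Here is a minimal sketch of the classic annualized loss expectancy (ALE) calculation often used to express cyber risk in financial terms; the figures are entirely hypothetical and merely echo the example above.

```python
# Classic quantitative risk formulas:
#   SLE (single loss expectancy) = asset value x exposure factor
#   ALE (annualized loss expectancy) = SLE x annualized rate of occurrence
def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate: float) -> float:
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate

# Hypothetical figures: $20M of health data at risk, half exposed per breach,
# with a control (the firewall) cutting breach likelihood from 10% to 2% a year.
ale_without_control = annualized_loss_expectancy(20_000_000, 0.5, 0.10)
ale_with_control = annualized_loss_expectancy(20_000_000, 0.5, 0.02)
firewall_cost = 100_000

# Risk reduction minus control cost: a number a CPA can act on.
net_annual_benefit = (ale_without_control - ale_with_control) - firewall_cost
print(net_annual_benefit)  # 700000.0
```

Boards won’t act on the formula itself, but framing a control as buying down a quantified annual loss keeps the conversation in business terms rather than technical ones.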

By the same token, if a CEO is a former physician, frame the risk of a cybersecurity breach that steals patient information within the context of an imperative to “do no harm.” She’s likely to relate the nature of the risk directly to the patient, which will result in better understanding.

And lastly, it doesn’t hurt to maintain a bit of situational awareness about the strategic needs of the organization. For example, it might not be the best idea to request funds for an anti-phishing application (one that tricks employees into clicking on fake emails and shames them into compliance with security policies and procedures) when the next item on the board’s agenda is addressing low employee morale.

We also need to reciprocate by doing a better job of understanding C-level and board priorities and communicating strategically. CISOs who crack this code and carry it out effectively will have much greater success and be less likely to see their organizations end up on the breach “wall of shame.”

#3 Privacy and security must be baked into not only every system, but also every business decision.

CISOs understand that the Target breach was a result of compromised third-party credentials and, as a result, senior executives lost their positions. Retail has an added complexity in managing non-traditional endpoints such as cash registers and inventory-tracking handheld devices, but that’s nothing compared to what healthcare contends with today.

The Internet of Medical Things is already here and it’s not very secure. Stephanie Jernigan, assistant professor in the Operations Management Department at Boston College, presented a session based on findings from a global research study at MIT Sloan Management Review titled “Ready or Not: Here Comes the Internet of Things.”

While the study confirmed much of what we thought we knew about IoT in healthcare, it also provided some new insights. It found that organizations with strong analytics infrastructures and skillsets were better able to leverage IoT investments. Devices that fall under the Internet of Medical Things category are easier to attack because they are more physically and digitally accessible. This is especially true with wearable devices that leave the hospital with the patient.  

Another disturbing finding showed that, “Despite these issues, 76% of the survey’s respondents felt they didn’t need to improve their sensor data security and 68% felt they didn’t need to improve their overall data security.” What makes this doubly disturbing is that the study also revealed that as analytics capability improves, so does overall success in terms of both patient outcomes and overall security posture.

And yet, there is no perceived need to improve analytical capabilities? And while medical IoT devices are likely to be the most at risk of any devices in this category, little is being done in terms of securing these devices?

#4 Achieve security success by managing a portfolio of innovation.

The highlight of the forum was Aetna CISO Jim Routh’s presentation, “How to Build a Security Technology Portfolio: Take Risks to Manage Risks.” Aetna serves more than 46 million people and, therefore, has a great deal of personal health information to protect.

Routh’s unique approach to not only keeping up with, but also staying ahead of, hackers is to devote 25 percent of his budget to purchasing new and emerging technology from early-stage start-ups. Start-ups are more likely to make a better deal financially than established players, and they may also provide a technology edge, in that most attackers are likely to target and exploit vulnerabilities in more mature and widely deployed solutions.

Routh views this overall approach much like how one would build a balanced and diversified investment portfolio with 75 percent in blue chips and 25 percent in high-growth, high-impact, but also potentially higher-risk investments.

No other speaker personified the need to frame risk and communicate strategically more than Routh. And I particularly loved his phrase to describe his approach: “Taking risks to reduce risks.”

20 Questions for SecOps Platform Providers

Category : FireEye

Bringing security operations capabilities to the masses is long overdue. Here’s how to find a solution that meets your budget and resources.

The security operations platform is quickly emerging as a favorite talking point for 2017, even for organizations that do not find themselves with an expansive budget to improve their security maturity and posture. Of course, doing so is a complex undertaking with a wide variety of moving parts. Or is it?

Historically, advanced SecOps has been beyond the reach and resources of all but the most elite organizations. Today, the cloud has opened up new possibilities for these enhanced capabilities at reduced cost. This, in turn, creates new opportunities for mid-sized and smaller enterprises.

Of course, where there is interest, there are vendors ready to pounce. Lately, there are quite a few vendors talking about their security operations platforms. How can the conscientious security buyer interrogate potential vendors to make the most-informed decision? As you might guess, I would suggest a game of 20 questions.

Image Credit: By DuMont Television/Rosen Studios, New York-photographer.Uploaded by We hope at en.wikipedia (eBay itemphoto frontphoto back) [Public domain], via Wikimedia Commons.

1. How do you make it easy to seamlessly operationalize intelligence? Reliable, high-fidelity intelligence is an important component of a mature security operations capability. Plenty of vendors offer intelligence, and I have already discussed how to differentiate between different intelligence offerings. But there is another important point worth mentioning here. The greatest intelligence in the world won’t help an organization if it can’t operationalize it. In other words, if it isn’t easy for you to leverage intelligence to help defend your organization, it is more or less useless.

2. How do you facilitate risk mitigation? Everyone knows that security is all about risk mitigation. But if knowledge about risks and threats to the organization cannot be operationalized to help manage and mitigate risk, that knowledge is wasted.

3. Do you honestly believe that I want more alerts? I am suffering from a bad case of alert fatigue. What I need is help making order out of the chaos, and turning all of that information into knowledge.

4. Where is my context? Alerts without the appropriate context do not provide a true understanding of what is going on. That makes it difficult for organizations to make educated, informed decisions. Context is king.

5. Can you provide me protection against a variety of attack vectors that compromise organizations? If a security operations platform cannot cover multiple different attack vectors, it isn’t going to cut it.

6. Can you help me see? The importance of proper visibility across the network, endpoints, mobile, cloud, and SaaS is huge. If you can’t see it, you can’t detect it.

7. How do you model attacker behavior? The best way to identify attacker behavior within an organization is to deeply understand different characteristics of that behavior, model them, and subsequently develop algorithms that recognize them. Simply developing algorithms without understanding how attackers attack isn’t going to be very productive.
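
As a simple illustration of what “modeling attacker behavior” can mean in practice, here is a sketch that flags hosts whose outbound connections occur at suspiciously regular intervals, one classic characteristic of command-and-control beaconing. The jitter threshold and the periodicity model are illustrative assumptions, not any vendor’s actual algorithm.

```python
import statistics

def looks_like_beaconing(timestamps: list[float],
                         max_jitter_ratio: float = 0.1,
                         min_events: int = 5) -> bool:
    """Flag near-periodic connection times: low jitter relative to the
    mean interval is one simple model of C2 beaconing behavior."""
    if len(timestamps) < min_events + 1:
        return False  # not enough events to judge periodicity
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_interval = statistics.mean(intervals)
    jitter = statistics.pstdev(intervals)
    return mean_interval > 0 and (jitter / mean_interval) < max_jitter_ratio

# A bot checking in roughly every 60 seconds looks periodic...
beacon_times = [0, 60, 119, 181, 240, 300]
# ...while human-driven traffic does not.
human_times = [0, 5, 47, 300, 310, 900]
```

The point of the question stands: the algorithm only works because it encodes a specific, observed characteristic of attacker behavior, not generic statistics.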

8. How is your performance? Security operations is about both collection and analysis. It isn’t enough to collect vast quantities of data. Any reasonable SecOps platform needs to be able to allow analysts to interrogate that data rapidly.

9. Do you have integrated case management? The “swivel chair” effect and the days of manually cutting and pasting between different systems need to come to an end. If the analysis and investigation I am doing cannot be fed automatically into a case or ticket, that isn’t going to work for me.

10. How do you scale? I want to know that as my needs grow, I can buy additional capacity and functionality as necessary without a long, complex, and disruptive deployment cycle.

11. How do you provide integration between distinct components in a diverse security ecosystem? My security ecosystem is diverse, and you need to be able to help me maximize and optimize my existing investments.

12. How flexible is your query language? Can I ask precise, incisive, targeted questions? If your query language does not support that, it is not helpful.

13. Can you augment my existing talent? Although I want to run security operations 24×7, that’s not a realistic expectation, given my current resources. How can you augment my staff to help us get there?

14. Do you provide seamless pivoting across a wide variety of data sources? I don’t have time to issue multiple queries across multiple different systems to get the relevant data that I need.  If you can’t provide me a single interface to all of the data across my security ecosystem, I’m not interested.

15. Do you have an integrated automation and orchestration capability? Manual processes are inefficient and error-prone. I need to take advantage of automation and orchestration, but it needs to be integrated into the platform for that platform to be realistic.

16. Will you end my cutting and pasting nightmare? In 2017, seamless integration between alerting, analysis, investigation, case management, and documentation should be a given.

17. Can you help me free up resources for higher order work? It is not a good use of time or money to have analysts spending most of their time performing clerical tasks. I need them to focus on higher-order work.

18.  Do you have real analytics based on real knowledge of attacker behavior? Everyone talks about analytics these days. But the only analytics that stand a chance of reliably detecting attacker behavior with low noise are analytics based on intimate knowledge of attacker behavior.

19. Do you support flexible deployment options? Any realistic platform needs to be easily consumable in a variety of different ways.

20. Is your solution affordable? The time to bring security operations to the masses is long overdue. In order to make that a reality, any solution needs to suit my budget.

SteelFusion 5.0 Extends Hybrid Cloud Investments to Edge IT

Category : Riverbed

Riverbed has been preaching for years that remote and branch offices (ROBOs) are the engines that drive the business. And with the cloud revolution firmly upon us, we see even greater importance placed on Edge IT because users expect near real-time service delivery regardless of location. This places an extreme amount of pressure on today’s IT organizations. In fact, our research indicates that over 91% of IT professionals say that incorporating cloud-based applications into their portfolio of corporate applications has increased the complexity associated with managing ROBOs.

Take a breath, don’t panic! SteelFusion helps harness your myriad enterprise IT assets and extend them to the ROBO edge. Be it traditional backend NAS/SAN storage, hyperconverged infrastructure, public cloud, or even network services, SteelFusion maximizes these investments by enabling IT to fully utilize them without having to buy duplicative infrastructure at each and every location. But there is more to the SteelFusion solution than just infrastructure reduction. The most compelling part is that IT operational procedures and full-time staff are also centralized, which enhances efficiencies to remarkable levels! In fact, a recent ESG survey revealed up to 87% gains in business productivity alone with a SteelFusion software-defined edge solution.

These savings are real! In fact, we recently passed the 1,000-customer milestone, with organizations across the globe seeing enormous total cost of ownership (TCO) savings. In our latest release of SteelFusion, we’ve further enhanced the value of extending enterprise IT investments by announcing the following:

1. NAS support in the data center extended to the edge

NFS-based storage (especially in virtual environments) has been gaining momentum over the last few years. With its ease of configuration and performance comparable to block-based storage (iSCSI, Fibre Channel), NFS has become a very viable option for enterprise-class storage needs. With SteelFusion 5.0, we can now support NFS storage in the data center and extend these investments to any number of remote edge locations. With simplicity and rapid provisioning as the guiding light of this feature, we have enhanced our software to include a streamlined deployment wizard that can deploy an application in 5 easy steps. Below is a video that demonstrates how easy it is to instantly provision an application, service, or entire site.

2. General availability of Virtual SteelFusion Edge

We announced back in September the introduction of Virtual SteelFusion Edge (vSFED), an embedded offering that delivers the benefits of SteelFusion software prepackaged on commercial off-the-shelf hardware. vSFED is now generally available via an exclusive system-integrator-driven go-to-market through Avnet in North America. This offering can be customized to the compute and connectivity needs of the remote and branch office to provide additional deployment flexibility under 3 primary scenarios:

  • To meet unique compute/capacity needs
  • To meet size, weight, power and environmental constraints
  • To meet standardized edge computing server vendor requirements

Get more details on the Virtual SteelFusion Edge.

Onward and upward…

SteelFusion now has a completely storage agnostic vision for edge IT. While cloud is on virtually every IT professional’s radar, the reality is very few organizations have migrated all of their applications, infrastructure and data there. With a SteelFusion software-defined approach to edge IT, we extend all infrastructure investments as well as full-time IT staff to distributed ROBO sites. Regardless of where you are in your cloud journey, SteelFusion simplifies edge IT operations and management.

Get more information on how SteelFusion is changing the game for ROBO IT.

Building Trust in a Cloudy Sky

Category : McAfee

The state of cloud adoption and security

This report, based on responses from 1,400 IT security professionals from around the globe, looks at cloud adoption, changes in data center environments, and the challenges with visibility and control over these new architectures.

Cloud services are now a regular component of IT operations, and are utilized by more than 90% of organizations around the world. Many are working under a Cloud First philosophy, only choosing to deploy an internal service if there is no suitable cloud variant available. As a result, IT architectures are rapidly shifting to a hybrid private/public cloud model, with those surveyed expecting 80% of their IT budget to be cloud-based within an average of 15 months.

Intel Security surveyed over 2,000 IT professionals in September 2016 to produce this annual review of the state of cloud adoption, representing a broad set of industries, countries, and organization sizes. In the face of a continuing shortage of skilled security personnel, the impact of this scarcity on cloud adoption was a priority for this year’s report. Other objectives included understanding the adoption of different cloud usage models, identifying the primary concerns with private and public cloud services, and investigating the evolving impact of Shadow IT.

Research participants were senior technical decision makers from small (500-1,000 employees), medium (1,000-5,000 employees), and large (more than 5,000 employees) organizations, located in Australia, Brazil, Canada, France, Gulf Coast (Saudi Arabia & United Arab Emirates), Germany, Japan, Mexico, Singapore, the United Kingdom, and the United States.
Key Findings:

■ Cloud services are widely used in some form, with 93% of organizations utilizing Software-, Infrastructure-, or Platform-as-a-Service offerings.

■ The average number of cloud services in use in an organization dropped from 43 in 2015 to 29 in 2016, indicating potential consolidation of cloud providers or solutions. Cloud architectures also changed significantly, from predominantly private-only in 2015 to increased adoption of public cloud resulting in a predominantly hybrid private/public infrastructure in 2016.

■ Almost half (49%) of the professionals surveyed stated that they had slowed their cloud adoption due to a lack of cybersecurity skills, with the worst shortages in Japan, Mexico, and the Gulf Coast countries.

■ The trust and perception of public cloud services continues to improve year-over-year. Most organizations now view public cloud services as secure as or more secure than private clouds, and much more likely to deliver lower cost of ownership and overall data visibility. Those who trust public clouds now outnumber those who distrust them by more than 2:1.

■ Improved trust and perception, as well as increased understanding of the risks by senior management, is encouraging more organizations to store sensitive data in the public cloud. Personal customer information is the most likely type of data to be stored in public clouds, kept there by 62% of those surveyed.

■ Cloud applications continue to be a vector for cyberattacks, and over half (52%) of the respondents indicate that they have definitively tracked a malware infection to a SaaS application.

■ Shadow IT is a growing concern for the IT department. Driven by slow IT provisioning and the mainstream acceptance of cloud services, almost 40% of cloud services are commissioned without the involvement of IT. As a result, 65% of IT professionals think that this phenomenon is interfering with their ability to keep the cloud safe and secure.

■ Virtualization of private data center architectures is progressing. On average, 52% of an organization’s data center servers are virtualized, and most expect to have the conversion to a fully software-defined data center completed within 2 years.

Conclusions and Recommendations

Businesses are trusting cloud services with a wide range of applications and data, much of it sensitive or business critical. Data goes to where it is needed, most effective, and most efficient, and security needs to be there in advance to quickly detect threats, protect the organization, and correct attempts to compromise the data. Cost and resource savings of cloud services are real, and the wide variety of offerings makes it possible to choose the best fit for the organization. Security vendors are delivering tools to address fundamental security concerns, such as protecting data in transit, managing user access, and setting consistent policies across multiple services.

The movement of sensitive data to the public cloud may attract cybercriminals. Attackers will look for the easiest targets, regardless of where they are located. Integrated or unified security solutions are a strong defense against these threats, giving security operations visibility across all of the services the organization is using and what data sets are permitted to traverse them.

User credentials, especially for administrators, will be the most likely form of attack. Organizations should ensure that they are using authentication best practices, such as distinct passwords, multifactor authentication, and even biometrics where available.
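
In practice, multifactor authentication often means a time-based one-time password (TOTP, RFC 6238) alongside the regular credential. As a rough sketch of how little machinery that requires, the following standard-library-only Python reproduces the RFC’s reference algorithm; the secret shown is the RFC’s published test key, not a real credential.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = struct.pack(">Q", unix_time // step)    # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: key "12345678901234567890" at time 59 yields ...287082
print(totp(b"12345678901234567890", 59))  # 287082
```

Because the code changes every 30 seconds, a credential stolen from a cloud console login is useless on its own, which is exactly the property the recommendation above is after.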

Despite the majority belief that Shadow IT is putting the organization at risk, security technologies such as data loss prevention (DLP), encryption, and cloud access security brokers (CASBs) remain underutilized. Integrating these tools with an existing security system increases visibility, enables discovery of shadow services, and provides options for automatic protection of sensitive data at rest and in motion throughout any type of environment.

While it is possible to outsource work to various third parties, it is not possible to outsource risk. Organizations need to evolve toward a risk management and mitigation approach to information security. Consider adopting a Cloud First strategy to encourage adoption of cloud services, reduce costs, increase flexibility, and put security operations in a proactive position instead of a reactive one. For the full report, please download here.


Check Point Prevention at the Movies, Rogue One: Data Loss on a Galactic Scale

Category : Check Point

The Client: The Galactic Empire

The situation: Security researchers at Check Point have attributed an attack on the client to a hacking group calling itself the “Rebel Alliance.” Researchers have identified the motive driving the attack was to exfiltrate the Empire’s intellectual property, specifically a file named “Stardust” containing the plans for a large weapons station or “Death Star.” This incident was consistent with a complex attack method which included data leakage by an insider, an exploit our researchers named DroidChanger targeting vulnerabilities in Internet of Droids or IoD devices, compromised physical security and insufficient access control over networks, documents and devices. Forensic analysis revealed the attack was executed through the theft of removable backup storage media (a data-at-rest data loss), followed by the attacker’s exfiltration of data across an air-gap network using an Empire RF transmission facility on site.


Logs reveal the cyber-attack started with an insider sending a file containing a hologram that leaked information about a design defect in the client’s weapons station. The attack could have been prevented at the outset if the client had deployed Check Point software blades for Intrusion Prevention and Data Loss Prevention, as well as restricting access to the hologram file using Check Point Capsule. When Check Point’s incident response team activated SandBlast Advanced Threat Prevention, our researchers found the Empire’s network was infected with bots that contained an exploit targeting a vulnerability in the enforcer droid operating system. The exploit gave attackers elevated privileges, which let them reprogram enforcer droids. SandBlast’s Anti-Bot feature blocked the bots from communicating with their C&C server, preventing further infection of enforcer droids. Logs showed that before SandBlast was enabled, one enforcer droid designated K-2SO was affected by the exploit and reprogrammed by the attackers. The reprogrammed enforcer droid enabled the attackers to bypass physical security and enter the client’s storage facility for backup tapes.

Because the client recognized the sensitive nature of the information stored in the facility, they had deployed air-gap security, meaning data had to be manually transferred from storage to an RF transmission system. In addition, the client employed an RF blocking device to further prevent data exfiltration, a data-in-motion data loss. Unfortunately, the client deployed the RF transmission facility and shield without user-access control leaving the facility open to a severe data-loss incident despite air-gap security.


As stated earlier, the client should immediately deploy Check Point Intrusion Prevention, Data Loss Prevention, and SandBlast zero-day threat prevention to protect network assets. The client should also deploy Mobile Threat Prevention to protect droids against exploits. Enforcer droids should be patched to fix the vulnerability; however, since they are running the Android OS, it is uncertain when patches will be available, so virtual patching is recommended. In addition, the client should create firewall rules controlling user access to critical network devices like the RF transmitter and shield. The client should also institute document access control using Check Point Capsule to prevent the exposure of files physically stolen from backup and other storage facilities.

Going beyond this data-loss incident, our researchers found that the client's star destroyer and planet-based IT networks still used default passwords supplied by galactic defense contractors. This would make it easy for droids controlled by threat actors to brute-force passwords for the client's IT and operational technology (OT) networks. We recommend replacing all of the client's default network passwords with passwords that are harder to guess, combining upper-case and lower-case letters, numbers and special characters. In addition, we recommend the client add other authentication factors, such as RFID cards and possibly biometric sensors, to prevent droids from entering networks to steal data and take over the ICS systems controlling everything from security doors to trash-compacting units.
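
The character-class recommendation above can be sketched as a simple policy check. This is an illustration only, not a Check Point feature; the function name and minimum length are our own assumptions:

```python
import re

def meets_policy(password, min_length=12):
    """Check a password against the recommended complexity rules:
    upper case, lower case, numbers, and special characters."""
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password),         # upper case
        re.search(r"[a-z]", password),         # lower case
        re.search(r"[0-9]", password),         # numbers
        re.search(r"[^A-Za-z0-9]", password),  # special characters
    ]
    return all(bool(c) for c in checks)

print(meets_policy("admin"))                 # a default-style password fails
print(meets_policy("Tr4sh-C0mpactor!2017"))  # all character classes present
```

A check like this only raises the cost of brute-force guessing; it does not replace the multi-factor measures recommended above.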

We also recommend the client segment their network: for example, put physical systems like life support and door activators on one segment, weapons control on another, and personnel and security information on a third, protecting each segment with its own security controls under Check Point's R80 unified security management platform.


The Cyber-Intelligence Nexus: Russia’s Use of Proxies

Category : FireEye

What if network defenders knew that a cyber operation occurred during Moscow business hours, that it involved a Russian IP address, and that the cyber actors used a Cyrillic keyboard? Would those indicators by themselves be enough for attribution? Given the Russian cyber environment, the answer is clearly “no.” Those indicators could be shared by any of the cyber actors in Russia, with or without the support of the Russian government, or by other worldwide actors trying to masquerade as Russians.

The Russian government itself is advanced in its cyber capabilities, but it also has access to Russian hackers, hacktivists, and the Russian media. These groups disseminate propaganda on behalf of Moscow, develop cyber tools for Russian intelligence agencies like the FSB and GRU, and hack into networks and databases in support of Russian security objectives. Russia’s use of such proxies complicates attribution after a cyber incident, making it harder to determine whom to respond to and constraining potential cyber deterrence against Russian entities.

Russia cannot be prevented from complicating attribution through proxy use. These proxy relationships are institutionalized and mutually beneficial for both Russia’s government and its proxies. Instead, the key to better attribution is intelligence – both technical and traditional. It is necessary to understand not just the bits and bytes of malware, but also Russian actors’ cyber tactics, techniques, and procedures, as well as proxies’ motivations and relationships.

Russian-language hackers are the main proxy group working with Russian intelligence on cyber operations. The government usually allows cybercriminals to operate from Russia as long as the criminals do not go after Russian targets. This impunity gives the government leverage over hackers for their cooperation in developing malware or pursuing Russian government targets.

For example, a 2014 report found that Russian cyber actors (TEMP.Noble) enlisted a Russian cybercriminal to create malware and exploit frameworks, or relatively automated attack kits, for operations against Eastern European governments and NATO. Another example of Russian intelligence leveraging the Russian hacker community is BlackEnergy malware, which has been used by criminals since 2007 to establish botnets for distributed denial-of-service attacks against Estonian sites. BlackEnergy botnets were redirected to target Georgian and U.S. assets during Russia’s 2008 invasion of Georgia, and a new version of BlackEnergy was used in 2015 to attack Ukrainian power distribution utilities.

Similar to its use of criminal hackers as proxies, Russia also taps into the hacktivist community, benefiting from hacktivists’ expertise and networks as well as the plausible deniability proxies provide. Hacktivists themselves may seek out government sponsorship as top cover to limit liability, and potentially for additional profit. Given how mutually beneficial such a relationship can be, ties between hacktivists and the Russian government are likely to continue.

The allegedly pro-ISIS hacktivist group, CyberCaliphate, is a probable front for Russian government activity. Although most of CyberCaliphate’s operations were of limited sophistication and focused simply on bringing more attention to ISIS, the group’s ties to Russian intelligence surfaced when they compromised the French news channel TV5Monde and used the same infrastructure associated with APT28, the Russian group behind the Democratic National Committee hack.

The Russian media also acts as a government proxy, a relationship that recently has received significant coverage due to claims that Russian media meddled in the U.S. election. In January, the U.S. Intelligence Community released a report detailing close ties between the Russian government and RT, the news site formerly known as Russia Today. In the cyber domain, Russia’s influence extends into social media. In 2014, Moscow passed a law that grants the government greater oversight and influence over bloggers, requiring bloggers with over 3,000 daily readers to register with the government.

The Internet Research Agency (IRA) illustrates an even more direct tie between the government and social media. The IRA employs hundreds of Internet trolls who receive daily instructions from the Kremlin about which topics to promote in social media and what their opinion on topics should be.

Given that Russia is one of the most active sources of cyber threat activity in the world, honing intelligence on Russian actors in particular is crucial to cyber defense. Only by fleshing out each proxy group’s specific tactics, techniques, procedures and cyber infrastructure, the relationships between the groups, and how a cyber operation fits their motivations does it become clearer who is ultimately behind an incident. Once attribution is better established, network defenders can proceed more assertively with measures targeted at that specific actor to undermine ongoing cyber operations and deter future ones. Intelligence is key to attribution – particularly in this tangled web of Russian cyber proxies.


How Incapsula Protects Against Data Leaks

Category : Imperva

The recent incident at Cloudflare involved edge servers running past the end of a buffer and returning memory that contained private information such as HTTP cookies, authentication tokens, HTTP POST bodies, and other sensitive data.

To read more about the incident, see the articles from Ars Technica, The New York Times and others.

In this post, we summarize what a buffer overrun is and how Incapsula mitigates the risk of buffer overruns and data leakage.

What’s a Buffer overrun?

Say you have a sack of 20 apples, and you want to take out 5 apples. Your “pseudo-program” looks like this:

while (there_are_apples_in_the_sack) {
   if (apples_in_hand == 5) break;
   take_apples();
}

This would work well and give you 5 apples from the sack.

However, if the take_apples() function, in some cases, advances the counter from 4 directly to 6, the condition apples_in_hand == 5 is not met and the program will keep pulling apples from the sack.

In the case of a buffer overrun, such as the one at Cloudflare, replace the “sack of apples” with memory; instead of taking apples, the program is parsing HTML content. When the end-of-buffer check is never met, the application keeps reading and sends whatever is next in memory, regardless of whether that data is relevant.

How common was the leak?

According to Cloudflare, the leak occurred in roughly 1 in every 3,300,000 HTTP requests between February 13th and February 18th, potentially affecting about 0.00003% of requests. While statistically small, any data leak is a serious bug, and Cloudflare took the necessary steps to fix it immediately.
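
As a quick sanity check on the quoted numbers, 1 in ~3,300,000 does work out to roughly 0.00003% of requests:

```python
# One leaking request per ~3,300,000, expressed as a percentage of all requests.
rate = 1 / 3_300_000
print(f"{rate:.7%}")  # 0.0000303%
```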

Are Incapsula customers vulnerable to the same leak?

No. Incapsula uses an entirely different technology stack than Cloudflare. According to Cloudflare, the bug that led to buffer overrun was not a flaw in their HTML parser, but rather how they used the parser. Cloudflare reports that it uses a Ragel-based parser and NGINX as its proxy server. Incapsula does not utilize either of these two products for processing customer website traffic.

Does Incapsula parse HTML responses?

No. Incapsula can inject content into HTML responses to enrich pages with additional capabilities, but we do not parse the entire HTML document. As a result, we are not vulnerable to the malformed HTML pages that triggered the data leak.

For example, take the following HTML page:

      <title>Hello World!</title>
      <p>Hello World!</p>

Incapsula can inject HTML content near either the <head> or </body> tags. For example, the following <script> tag can be injected into the page’s <head>:

      <title>Hello World!</title>
      <script type="text/javascript" src="utils.js"></script>
      <p>Hello World!</p>
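
To illustrate the distinction, injection by string search alone might look something like the sketch below. This is not Incapsula’s actual implementation; the function and anchor tags are our own assumptions, shown purely to demonstrate how content can be spliced in without parsing the document:

```python
def inject(html: str, snippet: str) -> str:
    """Splice a snippet into a page using plain string search (no HTML
    parsing), inserting just before </head> if present, otherwise just
    before </body>."""
    for anchor in ("</head>", "</body>"):
        pos = html.find(anchor)
        if pos != -1:
            return html[:pos] + snippet + html[pos:]
    return html  # no anchor found: pass the page through untouched

page = "<head><title>Hello World!</title></head><body><p>Hello World!</p></body>"
print(inject(page, '<script type="text/javascript" src="utils.js"></script>'))
```

Because the search never walks the document’s structure, a malformed tag elsewhere on the page is simply passed through rather than driving a parser past the end of a buffer.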

Can a similar bug occur with Incapsula?

When the Cloudflare news broke, we set up a team to check whether we were vulnerable. One of the things we did was to review our HTML manipulation code for any sign of buffer overruns or related bugs. The Cloudflare blog reported the following code as the root cause of the data leak:

if ( ++p == pe )
   goto _test_eof;

p is a pointer into the memory buffer containing the HTML document, and pe is a pointer to the last byte of that buffer. The bug occurs when p is advanced past pe, so the condition p == pe is never met. The correct code would be:

if ( ++p >= pe )
   goto _test_eof;

Looking at the equivalent (stripped-down) code in Incapsula shows the correct pattern. Note that the Incapsula code checks the opposite condition (whether to continue consuming bytes from the buffer), so p < pe is the correct check in that case:

for ( ; p < pe ; ++p) {
   // do something
}
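
The fragility of the equality check can be simulated with list indices instead of pointers. The sketch below is our own, not Cloudflare’s or Incapsula’s code; in real C there is no safety net, and the overrunning index reads adjacent memory instead of halting:

```python
def consume(step, pe=10):
    """Advance an index by `step` and stop with an equality test,
    the same shape as the buggy `if (++p == pe)` check."""
    p = 0
    while True:
        p += step
        if p == pe:      # never true once p jumps over pe
            break
        if p >= pe + 5:  # safety net so this demo halts; real memory has none
            break
    return p

print(consume(1))  # stepping by one, == fires exactly at the end: 10
print(consume(3))  # p goes 3, 6, 9, 12... == never fires and the index overruns: 15
```

With `p >= pe` (or `p < pe` as the continue condition), the loop stops correctly no matter how far the index jumps.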

How do we protect the Incapsula service from these issues?

First, our approach to software engineering is to minimize the chance that our developers are not fully aware of the code running our services. That’s not to say we don’t use core software libraries like libc or OpenSSL, but we try to keep mission-critical systems like our HTTP/S reverse proxy, DNS and Behemoth scrubbing servers as transparent as possible for our developers.

Interestingly, the first version of our HTTP parser was based on Ragel, but we used it for only a few months. Our developers felt that the additional level of abstraction introduced by writing the parser definition created an opaqueness they were not comfortable with, so Ragel was dropped early on in favor of much simpler code that better fit our needs.

Second, after suffering major production issues, we began using static code analysis, which can spot bugs like buffer overruns quite easily. Including generated code in static analysis, however, is an uncommon practice in the industry, because that code is not under the developer’s control and they cannot fix the potential defects. We suspect that approach will be re-evaluated after this incident.

Finally, about one-third of the Incapsula engineering staff are test automation engineers. Their job is to make sure Incapsula services evolve safely and that any new functionality we add does not break anything our customers rely on. Our HTML manipulation features are tested daily and we continually evaluate the need for additional tests.

How would Incapsula react to a similar issue?

Similar to Cloudflare, we have teams working 24×7 at multiple sites, following a follow-the-sun pattern. We can disable every piece of functionality in our code base, globally or locally, to quickly turn off features if needed.


Tenable Network Security & CyberArk

Category : Cyber-Ark

C. Thomas (Space Rogue), Strategist for Tenable Network Security, talks about the benefits of conducting a credentialed scan. By integrating with CyberArk Application Identity Manager™, Tenable Nessus® can offload the management of privileged accounts to CyberArk, enabling improved scan accuracy and performance. Tenable Network Security is a member of the C³ Alliance, CyberArk’s Global Technology Partner Program.


Data Security Key for IoT

Category : HP Security

In case you haven’t been paying attention, Internet of Things (IoT) devices are everywhere, in our appliances at home, in the cars we drive, and the buildings were we work. Industries that use IoT connected devices are very diverse: manufacturing, energy, telco, healthcare and transportation, to name just a few. And the numbers of devices keep growing. Gartner, Inc. forecasts that 6.4 billion connected things will be in use worldwide in 2016, up 30 percent from 2015, and will reach 20.8 billion by 2020.  And predictably, in 2016, we saw the first IoT breaches, either on the device itself, or a theft of data.

All of this connectivity means more data, gathered from more places, than ever before. The Internet of Things has amazing potential. It can transform how and when decisions are made throughout business and our daily lives—but only if that data can be processed, analyzed, and put to use effectively and securely, states the Database Trends and Applications (DBTA) Internet of Things Market Survey.

To shed light on the current state of IoT adoption and maturity, Unisphere Research and DBTA joined forces with Radiant Advisors to launch an IoT market research study, with the support of sponsorships from MapR and HPE Security—Data Security. The researchers surveyed current and potential users across North America to find out what challenges they face and what success factors are emerging in the market.

Why use IoT?

With so many devices touching so many aspects of our lives, are businesses fully utilizing the power of IoT? The survey reveals that many companies are only in the early phases of IoT adoption. The primary use cases for IoT involve data analytics and data science to invent new business models and capitalize on insights into customers and products, states the study. So while IoT devices can talk to each other and connect to the internet, IoT is really about the data it collects and how businesses can take advantage of that treasure trove. Although the study did not detail the types of data being collected from such a wide variety of sensors and devices, ultimately data that identifies an individual will be collected, whether it is a VIN from a connected car or healthcare information from a medical device. Therefore, the study points out, data privacy and security challenges should be addressed early in IoT program design and development.

The top three technologies that buyer-side respondents plan to add, according to the survey, are related to properly supporting data science with IoT initiatives: data analytics or data science platforms (48%), cloud-based big data platforms or services for data acquisition (40%), and data security, encryption and masking (33%). Data security encompasses secure data capture and transport for safely using IoT data—as well as recognition of the potential for secure analytics, states the study.
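
As one illustration of what the “encryption and masking” category can mean in practice, identifiers such as a VIN can be pseudonymized with a keyed hash before analytics, so records can still be joined without exposing the raw value. The sketch below is generic (the key and function names are hypothetical), not an HPE Security product API:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-this-key"  # hypothetical per-deployment secret

def pseudonymize(identifier: str) -> str:
    """Replace an identifying value (e.g. a VIN) with a keyed hash so
    records can still be correlated for analytics without exposing the
    original identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

reading = {"vin": "1HGCM82633A004352", "speed_kph": 62}
masked = dict(reading, vin=pseudonymize(reading["vin"]))
print(masked)  # same record shape, but the VIN no longer identifies the vehicle
```

Because the hash is keyed and deterministic, the same VIN always maps to the same token within a deployment, preserving analytic joins while keeping the identifier out of the data set.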

The top three factors that most impact IoT technology decisions for buyer-side respondents are total cost of ownership (31%), followed by data privacy and regulatory compliance concerns (25%), and data security and governance capabilities or adherence (15%). This demonstrates an awareness of the importance of data privacy and security for handling IoT data.

Unsurprisingly, the survey shows that companies are focused on leveraging advanced analytics and data science in ways that lead to deeper insights about their processes, customers and products, while establishing and reinforcing methods for data privacy and secure analytics.

Obstacles to IoT initiatives

Still, 33% of companies surveyed have trouble understanding the value of using IoT devices, and 24% cannot justify the return on investment, according to the survey. Data privacy and regulatory compliance is the next most significant challenge, cited by 12% of respondents.

When asked about the role of data security specifically, 78% of buyer-side respondents indicated that data security (or the lack thereof) will impact their progress with IoT. It is unfortunate that more than three-quarters of respondents are so concerned about not having proper data security in place that it inhibits their adoption of this game-changing technology. IoT manufacturers that build security into their devices, and IoT users that follow the best practice of data-centric security, will be the real winners in the rush to adopt IoT.


The IoT Market Survey concludes that it is important to understand the role of data privacy and security and incorporate security into the design and development process. Failure to take into account data privacy and security at the start of an IoT project will likely require retrofitting and/or reassessment of technology decisions, warns the survey. In either case, the associated costs and setbacks will hamper IoT rollout and business planning.

More information: