Monthly Archives: April 2017


The Next Generation Data Center Demands a Next Generation Storage Architecture

Category : NetApp

In today’s digital economy, markets and customer purchasing behaviors are changing. Today’s customers expect everything to be available online, anytime, anywhere, and from any type of device. In order to satisfy these expectations, enterprise IT departments have to react quickly to changing business needs while continuing to manage the mission-critical legacy workloads that “keep the lights on.” In addition to adhering to regulatory requirements and complying with existing change-management processes, enterprise IT is faced with multiple operational challenges that put pressure on the resiliency and reliability of the infrastructure they manage.

To this end, enterprises are having to engage in a process of digital transformation, away from traditional infrastructure and toward a flexible technology stack that has the agility, scalability, predictability, and automation to react to changing business needs without risking normal business operations.

The process of transformation is typically unique for every enterprise — as are the business drivers that prompt it. At one end of the spectrum, organizations based on traditional enterprise IT are looking to achieve drastic cost savings from the consolidation of their virtualized environments, while at the other end, IT organizations are implementing infrastructure to support DevOps cultures that provide self-service resources and enable the refactoring of traditional client/server workloads into agile cloud-based applications. Given the diversity of organizations, drivers, and environments, enterprise IT is looking to the highly flexible architecture of a next generation data center (NGDC) to enable their transformation — an architecture that can meet changing business needs while seamlessly integrating into, and supporting, existing infrastructure.

A next generation data center such as this cannot be, by its very nature, reliant upon traditional storage infrastructure. Instead, its foundations are built upon a new type of storage, a next generation storage architecture (NGSA) – one that is inherently agile, scalable, and predictable.

Enterprise IT Can’t Transform Using Storage that Forces it to Live in a Traditional Infrastructure World

The NGSA is the next generation in storage — one that can scale non-disruptively and incrementally across multiple platforms to support business growth, yet continue to provide guaranteed, controlled performance at reduced, cloud-like operational costs. It has the agility to easily automate, scale, and orchestrate across multiple platforms, in addition to providing predictable workload delivery at scale through self-service capabilities, irrespective of the platform used.

Only NetApp SolidFire has a next generation storage architecture that can meet all these requirements and enable enterprise IT to transition from existing environments to the next generation data center. Organizations are demanding IT transformation without operational risk — irrespective of existing environment. Only a next generation data center powered by a next generation storage architecture can meet this need.

Download your complimentary 2017 Strategic Roadmap for Storage report from Gartner, and learn more about how you can be successful in your storage transformation process.



8 Ways Governments Can Improve Their Cybersecurity

Category : Pulse Secure

It’s hard to find a major cyberattack over the last five years where identity — generally a compromised password — did not provide the vector of attack.

Target, Sony Pictures, the Democratic National Committee (DNC) and the U.S. Office of Personnel Management (OPM) each were breached because they relied on passwords alone for authentication. We are in an era where there is no such thing as a “secure” password; even the most complex password is still a “shared secret” that the application and the user both need to know, and store on servers, for authentication. This makes passwords inherently vulnerable to a myriad of attack methods, including phishing, brute force attacks and malware.

The increasing use of phishing by cybercriminals to trick users into divulging their password credentials is the most alarming — a recent report from the Anti-Phishing Working Group (APWG) found that 2016 was the worst year in history for phishing scams, with the number of attacks increasing 65% over 2015. Phishing was behind the DNC hack, as well as a breach of government email accounts in Norway, and was the method that state-sponsored hackers recently used in an attempt to steal the passwords of prominent U.S. journalists. Phishing is on the rise for a simple reason: it is a relatively cheap and effective form of attack, and one that puts the security onus on the end-user. And, given that many users tend to reuse passwords, once these passwords are compromised, they can be used to break into other systems and bypass traditional network security measures.

In response to the increased frequency of such authentication-based cyberattacks, governments around the world are pursuing policies focused on driving the adoption of multi-factor authentication (MFA) solutions that can prevent password-based attacks and better protect critical data and systems. The U.S., UK, EU, Hong Kong, Taiwan, Estonia and Australia are among the countries that have focused on this issue over the last five years.

One challenge countries face: there are hundreds of MFA technologies vying for attention, but not all are created equal. Some have security vulnerabilities that leave them susceptible to phishing, such as one-time passwords (OTPs) — a password that is valid for only one login session or transaction — which, while more secure than single factor authentication, are themselves still shared secrets that can be compromised. Some solutions are unnecessarily difficult to use, or have been designed in a manner that creates new privacy concerns.

As policymakers work to address these authentication issues, they will need to adopt solutions that move away from the shared secret model while also being easy for consumers and employees to use. Per a new white paper that The Chertoff Group published, governments can best ensure the protection of critical assets in cyberspace by following eight key principles for authentication policy:

  1. Have a plan that explicitly addresses authentication. While a sound approach to authentication is just one element of a proper approach to cyber risk management, any cyber initiative that does not include a focus on strong authentication is woefully incomplete.
  2. Recognize the security limitations of shared secrets. Policymakers should understand the limitations of first-generation MFA technologies such as OTPs that rely on shared secrets, and look to incent adoption of more secure alternatives, such as those that utilize public key cryptography where keys are always stored on — and never leave — the user’s device, like the FIDO authentication standards (see the sketch after this list).
  3. Ensure authentication solutions support mobile. As mobile transaction usage grows, any policy that is not geared toward optimizing use of MFA in the mobile environment will fail to adequately protect transactions conducted in that environment.
  4. Don’t prescribe any single technology or solution — focus on standards and outcomes. Authentication is in the midst of a wave of innovation, and new, better technologies will continue to emerge. For this reason, governments should focus on a principles-based approach to authentication policy that does not preclude the use of new technologies.
  5. Encourage widespread adoption by choosing authentication solutions that are easy to use. Poor usability frustrates users and prevents widespread adoption. Next-generation MFA solutions dramatically reduce this “user friction” while offering even greater security gains. Policymakers should look for incentives to encourage use of next-generation MFA that addresses both security and user experience.
  6. Understand that the old barriers to strong authentication no longer apply. One of the greatest obstacles to MFA adoption has been cost — previously, few organizations could afford to implement first-generation MFA technologies. Today, there are dozens of companies delivering next-generation authentication solutions that are stronger than passwords, simpler to use, and less expensive to deploy and manage.
  7. Know that privacy matters. MFA solutions can vary greatly in their approach to privacy — some track users’ every move or create new databases of consumer information. Such solutions raise privacy concerns and create new, valuable caches of information that are subject to attack. Thankfully, today several authentication companies have adopted a “privacy by design” approach that keeps valuable biometrics on a user’s device and minimizes the amount of personal data stored on servers.
  8. Use biometrics appropriately. The near ubiquity of biometric sensors in mobile devices is creating new options for secure authentication, making it easier to use technologies such as fingerprint and face recognition. However, biometrics are best used as just one layer of a multi-factor authentication solution — matching a biometric on the device to unlock a second factor. Ideally, biometrics should be stored and matched only on the device, avoiding the privacy and security risks associated with systems that store biometrics centrally. Any biometric data stored on a server is vulnerable to falling into the wrong hands if that server is compromised, as happened in the June 2015 breach of the U.S. Office of Personnel Management (OPM), which compromised 1.1 million fingerprints.
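
To make the second principle concrete, the core idea behind public-key approaches such as FIDO is a simple challenge-response: the private key stays on the user's device, and the server only ever stores a public key and verifies signatures. The sketch below is a minimal illustration in Python using the third-party cryptography package; it is not the actual FIDO/WebAuthn protocol, which adds attestation, origin binding, and signature counters.

```python
# Minimal challenge-response sketch (illustrative only; not WebAuthn/FIDO2).
# Requires the third-party "cryptography" package.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the device generates a key pair; the private key never leaves it.
device_private_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_private_key.public_key()  # only this goes to the server

# Login: the server issues a fresh random challenge (nothing secret is shared).
challenge = os.urandom(32)

# The device proves possession of the private key by signing the challenge.
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The server verifies with the stored public key.
# verify() raises InvalidSignature on failure, so reaching the print means success.
registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge signed by the registered device; no reusable secret was transmitted")
```

Because the server holds only public keys and random challenges, a server-side breach yields nothing that can be replayed elsewhere, which is the property this principle is driving at.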

Policymakers have resources and industry standards to help guide them as they address these principles. The Fast Identity Online (FIDO) Alliance has developed standards designed to take advantage of the advanced security hardware embedded in modern computing devices, including mobile phones. FIDO’s standards have been embraced by a wide cross-section of the technology community and are already incorporated into solutions from companies such as Microsoft, Google, PayPal, Bank of America, Facebook, Dropbox, and Samsung.

No technology or standard can eliminate the risk of a cyberattack, but the adoption of modern standards that incorporate MFA can be an important step that meaningfully reduces cyber risk. By following these eight principles, governments can create a policy foundation for MFA that not only enhances our collective cyber security, but also helps to ensure greater privacy and increased trust online.



Mole Ransomware: How One Malicious Spam Campaign Quickly Increased Complexity and Changed Tactics

Category : Palo Alto

On April 11, 2017, we saw a new malicious spam campaign using United States Postal Service (USPS)-themed emails with links that redirected to fake Microsoft Word Online sites. These fake Word sites asked victims to install malware disguised as a Microsoft Office plugin.

This campaign introduced a new ransomware called Mole, so named because the names of files it encrypts end with .MOLE. Mole appears to be part of the CryptoMix family of ransomware, since it shares many characteristics with the Revenge and CryptoShield variants of CryptoMix.

The campaign quickly changed tactics and increased complexity.

Two days later, on April 13, 2017, the attackers behind these fake Office plugins changed the format and began including additional malware. Along with Mole ransomware, victims would be infected with both Kovter and Miuref. Then, on the following day, April 14, 2017, the attackers stopped using a redirect link in the malicious spam and instead linked directly to a fake Word Online site. Figure 1 shows the attackers’ changing tactics from Tuesday April 11, 2017 through Friday April 14, 2017.


Figure 1: Changing tactics April 11 – April 14, 2017

April 11th – Introducing Mole Ransomware

From Tuesday April 11th to the early hours of Wednesday April 12th, the fake Word Online sites used Google Docs links to deliver Mole ransomware disguised as an Office plugin. Criminals behind this campaign abused Google Docs to host a link to an executable file. File names were plug-in.exe or plugin.exe. Figure 2 shows how these fake Microsoft Word Online documents would attempt to lure users into downloading the Mole ransomware.


Figure 2: Fake Microsoft Word Online site with link to a Google Documents URL with the ransomware.

After downloading the executable, the infection chain is straightforward. The victim executes the ransomware and infects his or her Windows computer. The mechanics behind a Mole ransomware infection have already been covered at the Internet Storm Center (ISC) and Bleeping Computer. Figure 3 shows the April 12 Mole ransomware in action.


Figure 3: Desktop of a Windows host infected with Mole ransomware on April 12th

April 13th – Introducing .js Files and Additional Malware

By Thursday April 13, 2017, this campaign changed tactics. The fake Microsoft Word Online sites no longer used a Google Docs URL to provide their malware. Instead, the malware was sent as a zip archive directly from the compromised site being used as a fake Microsoft Word Online page. The zip archives contained JavaScript (.js) files designed to infect Windows computers with Mole ransomware and additional malware.

Figures 4 and 5 below illustrate the newer format this campaign used for malware infections, where the new file is a zip archive named plugin.zip that contains a .js-based downloader named plugin.js.


Figure 4: Fake Microsoft Word Online site later on April 13th with link to a zip archive instead of an executable


Figure 5: The zip archive contains a .js file

The plugin.js file is a downloader commonly known as Nemucod. It downloads and installs three Windows executable files named exe1.exe, exe2.exe, and exe3.exe, as shown below in Figure 6.


Figure 6: Plugin.js installing 3 items of malware as shown in a reverse.it analysis

Network traffic generated by this infection is similar to the traffic from Nemucod downloaders we have seen in other campaigns. In Figure 7 below, you can see URLs for exe1.exe, exe2.exe, and exe3.exe from forum-turism.org.ro.


Figure 7: Traffic from an infection filtered in Wireshark

The three items of follow-up malware are named exe1.exe, exe2.exe, and exe3.exe. During the early days of this campaign, these have been Mole ransomware, Kovter, and Miuref, respectively.
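
If you have a packet capture from a suspected infection, a quick way to triage it for these follow-up downloads is to look for plain HTTP GET requests carrying the file names noted above. The snippet below is a minimal sketch using the third-party scapy library; the capture file name is hypothetical, and it assumes the downloads occurred over unencrypted HTTP, as in the traffic shown in Figure 7.

```python
# Minimal pcap triage sketch (assumes unencrypted HTTP and scapy installed).
from scapy.all import IP, Raw, TCP, rdpcap

SUSPECT_NAMES = (b"plugin.zip", b"exe1.exe", b"exe2.exe", b"exe3.exe")

for pkt in rdpcap("suspected-infection.pcap"):  # hypothetical capture file
    if not (pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        continue
    payload = bytes(pkt[Raw].load)
    # Only inspect HTTP request lines, e.g. "GET /exe1.exe HTTP/1.1"
    if payload.startswith(b"GET ") and any(name in payload for name in SUSPECT_NAMES):
        dst = pkt[IP].dst if pkt.haslayer(IP) else "?"
        request_line = payload.split(b"\r\n", 1)[0].decode(errors="replace")
        print(dst, request_line)
```

Once indicators are confirmed, the same string matches translate directly into Wireshark display filters or IDS signatures.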

The Emails


Figure 8: An example of the malicious spam from Thursday April 13th

Emails from this campaign follow the same format as originally reported from Tuesday April 11, 2017. Figure 8 above shows an example email. They have a variety of subject lines, spoofed sending email addresses, and message text. Through Thursday April 13, 2017, the URLs were different for each message. By Friday April 14th, these emails were linking directly to the fake Microsoft Word Online pages, so the URLs for that day were the same.

Conclusion

Most large-scale malicious spam campaigns tend to stick with operating patterns that are much easier to identify and track. This particular campaign has evolved more quickly than we usually see. Such changing tactics are likely a way to avoid detection.

And this campaign continues to evolve. By Tuesday April 18, 2017, it stopped distributing Mole ransomware, and it began pushing the KINS banking Trojan with Kovter and Miuref. By Friday April 21, 2017, this campaign moved from USPS-themed emails to messages about speeding tickets, and it began utilizing a fake parking services website.

Why did we stop seeing Mole ransomware? Because families of ransomware are constantly changing. CryptoMix variants like Mole rarely stay around for more than a few weeks before being repackaged and distributed as a new variant. The samples of Mole ransomware we have identified so far are tagged in AutoFocus using the MoleRansomware tag.



Cloud Networks Made Simple with Riverbed on AWS

Category : Riverbed

AWS makes it simple to set up cloud-based resources. But do you have secure, high-capacity, high-performance connectivity to and between AWS cloud instances? That’s where Riverbed comes in. Riverbed’s SD-WAN solution enables cloud migration and performance, all managed via quick and simple workflows.

Register for the upcoming webinar to explore a fundamentally new approach to networking and hear from real users, including a joint customer, OpenEye, that has saved time and money by using this approach to cloud networking.

OpenEye Scientific Software leveraged Riverbed SteelConnect to very quickly and easily establish secure connections to and between AWS VPCs, saving hours of network administration time.

When: May 16, 2017 | 10:00 am PDT/1:00 pm EDT
Join us to Learn:
  • How you can automate secure connectivity to AWS, between AWS regions and between cloud providers
  • How you can improve network performance and efficiency while simplifying management
  • How OpenEye utilized this integrated solution to save time and money when moving to the Cloud
Who Should Attend:

Network Ops/Engineer, Cloud Architect, Cloud Ops/Engineer, DevOps Engineer, Enterprise Architect

AWS Speaker:  Nick Matthews, Partner Solutions Architect

Riverbed Technology Speaker:  Angelo Comazzetto, Technical Director of Cloud

Customer Speaker:  Craig Bruce, Scientific Software Development Manager, OpenEye Scientific


Solving Bigger Problems for Government: Q&A with Carahsoft

Category : Gigamon

Gigamon prides itself on partnering with the best—and this includes Carahsoft, an IT solutions provider known and trusted for delivering best-of-breed hardware, software, and support to federal, state, and local government agencies. Wanting to learn more about trends in the sector, we knew exactly who to call: Carahsoft VP Brian O’Donnell. We met with Brian to pick his brain about key concerns for agencies and how they’re looking to address them.

What are you seeing as a top challenge for the government sector?

Security is top of mind for everybody. Helping the government solve its point solution problem is a challenge—for them and for us. Rather than going to them with one product to fix one small problem, we combine multiple vendor solutions to solve much bigger problems. That’s valuable.

Carahsoft is unique in that we represent so many different vendors and support so many different resellers. For example, we are the sole government distributor for Splunk, Palo Alto Networks, and VMware, and we also support resellers like BAI and ClearShark. While we’re not customer-facing, we have breadth across the public sector given the number of products we sell. As part of our role, we work to train, enable, and help the front-line resellers better understand the challenges agencies are facing, while also hosting webinars and in-person events to educate government customers and drive leads back to resellers.

Of the government verticals—federal, state, local, and SLED—which ones are growing?

All of them are growing, but local may be the fastest. It’s interesting because, a few years back when the housing market crashed, there was considerable turmoil in the state and local verticals. While lots of people were getting out of them, Carahsoft made a conscious effort to double down, believing that, at some point, things would turn around. And thankfully, they did. We’re now seeing the fruits of those efforts.

How are budgets these days? More money to spend? Less?

There’s a degree of uncertainty in Washington. Again, while state and local have been healthy as of late, we’re watching and tracking to see what the new administration may do in terms of budget cuts. It may be too early to call, but we haven’t seen significant impact yet. And because technology is solving so many problems, even if cuts were to be made, we don’t see them affecting technology spend.

Can you share a Gigamon-Carahsoft use case?

For us, a use case centers around figuring out what an ecosystem looks like and how to go to market. When we look at Gigamon, we don’t see a single vendor or a single solution. We see a much bigger story—with Gigamon as a hub from which so many other tools connect. It’s a story that allows Carahsoft to go in many directions.

For instance, do we match vendor to vendor? Do we bring in a BAI or ClearShark reseller? Do we bring together a Gigamon sales rep and a Palo Alto sales rep? We can build a case study around one customer’s experience with various combined product sets that can be used to sell to 10 more customers.

Is that combined solution approach helping Carahsoft grow revenue and brand recognition?

Absolutely. Our company initiative is not to focus on selling point products, but to combine products into a bigger solution. And by solving bigger problems for the government, we’re seeing bigger returns. That’s a win. And by helping resellers position more products to grow the size of their opportunity, we’re, in turn, growing the size of our opportunity. That’s another win.

We’ve also been gaining exposure in vendor sales plays as we do more joint marketing with partners. This includes teaming with VMware, Splunk, Palo Alto Networks, and FireEye at the upcoming Gigamon Cybersecurity Summit in Washington, D.C.

Yes! Carahsoft is managing the Partner Pavilion at this week’s Cybersecurity Summit. Can you tell us more about that?

With the Partner Pavilion, our goal is to bring together the right resellers and vendor ecosystem partners to help tell that bigger, more complete story. Since partnering with Gigamon a few years ago, we’ve seen again and again how the Gigamon Visibility Platform is central to any security architecture. We want customers to walk away with a better understanding of how Gigamon fits into their overall security architecture, as well as how other ecosystem partners and Carahsoft can further help solve their challenges. I think the event will be a home run.

To learn more, register to attend Gigamon’s 2nd annual Government Cybersecurity Summit, taking place this week (April 26) in Washington, D.C., and featuring keynote speaker General James Clapper, former Director of National Intelligence.

See more at: https://www.gigamon.com/blog/2017/04/24/solving-bigger-problems-government-qa-carahsoft/



Protecting Sensitive Data In and Beyond the Data Lake

Category : HP Security

The need to secure sensitive data in Hadoop and IoT ecosystems

Hadoop is a unique architecture designed to enable organizations to gain new analytic insights and operational efficiencies through the use of multiple standard, low-cost, high-speed, parallel processing nodes operating on very large sets of data. The resulting flexibility, performance, and scalability are unprecedented. But data security was not the primary design goal.

When used in an enterprise environment, the importance of security becomes paramount. Organizations must protect sensitive customer, partner, and internal information and adhere to an ever-increasing set of compliance requirements. But by its nature, Hadoop poses many unique challenges to properly securing this environment, not least of which is the automatic and complex replication of data across multiple nodes once it is entered into the HDFS data store.

There are a number of traditional IT security controls that should be put in place as the basis for securing Hadoop, such as standard perimeter protection of the computing environment and monitoring of user and network activity with log management. But infrastructure protection by itself cannot protect an organization from cyber-attacks and data breaches, even in the most tightly controlled computing environments. Hadoop is a much more vulnerable target—too open to be able to fully protect. Further exacerbating the risk, the aggregation of data in Hadoop makes for an even more alluring target for hackers and data thieves. Hadoop presents brand new challenges to data risk management: the potential concentration of vast amounts of sensitive corporate and personal data in a low-trust environment. New methods of data protection at zettabyte scale are thus essential to mitigate these potentially huge Big Data exposures.

Data protection methodologies

There are several traditional data de-identification approaches that can be deployed to improve security in the Hadoop environment, such as storage-level encryption, traditional field-level encryption, and data masking. However, each of these approaches has limitations. For example, with storage-level encryption, the entire volume the data set is stored in is encrypted at the disk volume level while “at rest” on the data store. This prevents unauthorized personnel who may have physically obtained the disk from reading anything from it. It is a useful control in a Hadoop cluster or any large data store, given frequent disk repairs and swap-outs, but it does nothing to protect the data from access while the disk is running—which is all the time.

Data masking is a useful technique for obfuscating sensitive data, most often used for creation of test and development data from live production information. However, masked data is intended to be irreversible, which limits its value for many analytic applications and post-processing requirements. Moreover, there is no guarantee that the specific masking transformation chosen for a specific sensitive data field fully obfuscates it from identification, particularly when correlated with other data in the Hadoop “data lake.” While all of these technologies potentially have a place in helping to secure data in Hadoop, none of them truly solves the problem nor meets the requirements of an end-to-end, data-centric solution.

Data-centric security

The obvious answer for true Hadoop security is to augment infrastructure controls with protecting the data itself. This data-centric security approach calls for de-identifying the data as close to its source as possible, transforming the sensitive data elements with usable, yet de-identified, equivalents that retain their format, behavior, and meaning. This protected form of the data can then be used in subsequent applications, analytic engines, data transfers and data stores, while being readily and securely re-identified for those specific applications and users that require it. For Hadoop, the best practice is to never allow sensitive information to reach the HDFS in its live and vulnerable form. De-identified data in Hadoop is protected data, and even in the event of a data breach, yields nothing of value, avoiding the penalties and costs such an event would otherwise have triggered.

The solution—HPE SecureData for Hadoop and IoT

HPE SecureData for Hadoop and IoT provides maximum data protection with industry-standard, next generation HPE Format-preserving Encryption (FPE) (see NIST SP 800-38G) and HPE Secure Stateless Tokenization (SST) technologies.

With HPE FPE and SST, protection is applied at the data field and sub-field level. It preserves characteristics of the original data, including numbers, symbols, letters, and numeric relationships such as date and salary ranges, and it maintains referential integrity across distributed data sets so joined data tables continue to operate properly. HPE FPE and SST provide high-strength encryption and tokenization of data without altering the original data format. HPE SecureData encryption/tokenization protection can be applied at the source before data gets into Hadoop, or can be invoked during an ETL transfer to a landing zone, or from the Hadoop process transferring the data into HDFS. Once the secure data is in Hadoop, it can be used in its de-identified state for additional processing and analysis without further interaction with HPE SecureData. Or the analytic programs running in Hadoop can access the clear text by utilizing the HPE SecureData high-speed decryption/de-tokenization interfaces with the appropriate level of authentication and authorization.
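
To illustrate the format-preservation idea in isolation: the toy sketch below encrypts a digit string so that the ciphertext is still a digit string of the same length, which is what lets protected values flow through schemas, joins, and analytics that expect the original format. It is a plain Python Feistel construction for illustration only; it is not HPE FPE, not NIST FF1, and not suitable for production use.

```python
# Toy format-preserving encryption over even-length digit strings.
# Illustration only -- not NIST FF1 and not HPE FPE.
import hashlib

ROUNDS = 10

def _round_value(half: str, key: bytes, rnd: int, width: int) -> int:
    """Pseudo-random round function derived from the key, round number, and one half."""
    digest = hashlib.sha256(key + bytes([rnd]) + half.encode()).hexdigest()
    return int(digest, 16) % (10 ** width)

def encrypt_digits(plaintext: str, key: bytes) -> str:
    assert plaintext.isdigit() and len(plaintext) % 2 == 0
    w = len(plaintext) // 2
    left, right = plaintext[:w], plaintext[w:]
    for rnd in range(ROUNDS):
        left, right = right, str((int(left) + _round_value(right, key, rnd, w)) % 10 ** w).zfill(w)
    return left + right

def decrypt_digits(ciphertext: str, key: bytes) -> str:
    w = len(ciphertext) // 2
    left, right = ciphertext[:w], ciphertext[w:]
    for rnd in reversed(range(ROUNDS)):
        left, right = str((int(right) - _round_value(left, key, rnd, w)) % 10 ** w).zfill(w), left
    return left + right

key = b"demo-key"
token = encrypt_digits("4111111111111234", key)
print(token, len(token))                      # still 16 digits, so schemas keep working
assert decrypt_digits(token, key) == "4111111111111234"
```

Production FPE and SST rely on standardized, AES-based constructions plus key management; the point of the toy is only that encryption does not have to break the data’s format.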

If processed data needs to be exported to downstream analytics in the clear—such as into a data warehouse for traditional BI analysis—there are multiple options for re-identifying the data, either as it exits Hadoop using Hadoop tools or as it enters the downstream systems on those platforms.

To implement data-centric security requires installing the HPE SecureData infrastructure components and then interfacing them with the appropriate applications and data flows. SDKs, APIs, and command line tools enable encryption and tokenization to occur natively on a wide variety of platforms, including Linux®, mainframe, and mid-range systems. The solution supports integration with a broad range of infrastructure components, including ETL tools, databases, and programs running in the Hadoop environment, and is available for any Hadoop distribution. HPE Security—Data Security has technology partnerships with Hortonworks, MapR, Cloudera, and IBM, and certifications to run on each of these. HPE SecureData is integrated with the Teradata® Unified Data Architecture™ (UDA) and with the HPE Vertica Big Data Platform.

Rapid evolution requires future-proof investments

Implementing data security can be a daunting process, especially in the rapidly evolving and constantly changing Hadoop space. It’s essential for long-term success and future-proofing investments, to apply technology via a framework that can adapt to the rapid changes ongoing in Hadoop environments. Unfortunately, implementations based on agents frequently face issues when new releases or new technology are introduced into the stack, and require updating the Hadoop instance multiple times. In contrast, HPE SecureData for Hadoop and IoT provides a framework that enables rapid integration into the newest technologies needed by the business. This capability enables rapid expansion and broad utilization for secure analytics.

Securing the Internet of Things

Failure to protect sensitive data in the Hadoop environment carries major risks: data breaches, leakage of sensitive data to adversaries, and non-compliance with increasingly stringent data privacy regulations such as the General Data Protection Regulation (GDPR). Big Data use cases such as real-time analytics, centralized data acquisition, and staging for other systems require that enterprises create a “data lake” — a single location for the data assets.

While IoT and big data analytics are driving new ways for organizations to improve efficiencies, identify new revenue streams, and innovate, they are also creating new attack vectors which make easy targets for attackers. This is where perimeter security is critical, but also increasingly insufficient – it takes, on average, over 200 days before a data breach is detected and fixed.

As the number of IoT-connected devices and sensors in the enterprise multiplies, the amount of sensitive data and personally identifiable information collected at the IoT edge and moving into back-end data centers is growing exponentially. The data generated from IoT is a valued commodity for adversaries, as it can contain sensitive information such as personally identifiable information (PII), payment card information (PCI), or protected health information (PHI). For example, a breach of a connected blood pressure monitor’s readings alone may have no value to an attacker, but when paired with a patient’s name, it could enable identity theft and constitute a violation of HIPAA regulations.

IoT is here to stay. A recent Forbes article predicted that we will see 50 billion interconnected devices within the next 5-10 years. Because a multitude of companies will be deploying and using IoT technologies to a great extent in the near future, security professionals will need to get ahead of the challenge of protecting massive amounts of IoT data. And, with this deluge of sensitive IoT data, Enterprises will need to act quickly to adopt new security methodologies and best practices in order to enable their Big Data projects and IoT initiatives.

New threats call for new solutions – NiFi Integration

A new approach is required, focused on protecting the IoT data as close to the source as possible. As with other data sources, sensitive streaming information from connected devices and sensors can be protected with HPE FPE to secure sensitive data from both insider risk and external attack, while the values in the data maintain usability for analysis. Meanwhile, Apache NiFi, a recent technology innovation, is enabling IoT to deliver on its potential for a more connected world. Apache NiFi is an open source platform that enables security and risk architects, as well as business users, to graphically design and easily manage data flows in their IoT or back-end environments.

HPE SecureData for Hadoop and IoT is designed to easily secure sensitive information that is generated and transmitted across Internet of Things (IoT) environments, with HPE Format-preserving Encryption (FPE). The solution features the industry’s first-to-market Apache™ NiFi™ integration with NIST standardized and FIPS compliant format-preserving encryption technology to protect IoT data at rest, in transit and in use.

The HPE SecureData NiFi integration enables organizations to incorporate data security into their IoT strategies by allowing them to more easily manage sensitive data flows and insert encryption closer to the intelligent edge. This capability is included in the HPE SecureData for Hadoop and IoT product. In addition, it is certified for interoperability with Hortonworks DataFlow (HDF).

With this industry first, the HPE SecureData for Hadoop and IoT solution now extends data-centric protection, enabling organizations to encrypt data closer to the intelligent edge before it moves into the back-end Hadoop Big Data environment, while maintaining the original format for processing and enabling secure Big Data analytics.



Are You Getting Buried by the Endpoint Security Snowball Effect?

Category : McAfee

It starts out innocently enough: there’s a dangerous emerging threat to endpoints that can sneak past current defenses. A new startup has just the solution to stop it. Sure, you’re not thrilled about adding another agent and interface to your already overtaxed security team’s portfolio, but it’s just this one small addition, and it really does provide important protection.

Fast forward to a year later, and there’s another new threat. Now, you’re looking at another new endpoint product, with yet another new agent and interface. Six months later, it happens again. And again. All of a sudden, your security teams are managing a dozen different agents across your environment. They’re struggling just to keep their heads above water. And because there’s so much complexity, it now takes even longer to detect and respond to threats.

You’ve just been hit by the “endpoint security snowball effect.” And you’re not alone.

Proliferating Complexity

According to a recent Forrester survey commissioned by McAfee, the average organization is now monitoring 10 different endpoint security agents. When they need to investigate and remediate a new threat—those times when literally every second matters—they’re swiveling between an average of five different interfaces.

How did we get here? A number of industry and organizational trends converged to create the current predicament, including:

  • Silver bullet startups: The last several years have witnessed an explosion of new endpoint security products hitting the market. Many are very innovative. The problem is that none of them have command over the full security architecture. They’re designed to solve niche problems, making them hard to integrate into an overarching, automated security framework.
  • Conglomerate growth through acquisition: There are a few comprehensive security players in the market—but most have grown through acquisition, not by innovating their own products. Their endpoint tools may all have the same logo, but the products themselves remain distinct in their development and the engineering resources they require.
  • Diverse buying centers within organizations: Many organizations have experienced their own rapid growth, both geographically and through acquisitions. The result is that there may be several different buying centers in an organization for endpoint security, with different people making purchasing decisions to meet specific needs.

It hasn’t helped that, for years, the accepted best practice for endpoint security was to layer multiple “best-of-breed” solutions. As many organizations are now seeing firsthand, that approach quickly snowballs—and eventually becomes an avalanche—creating more complexity than any security team can keep up with.

These days, more organizations—over 50 percent according to the Forrester survey—are turning back to single-vendor solutions. They’re prioritizing endpoint solutions that can do more things, more efficiently, with better accuracy and less complexity.

Envisioning a Better Solution

Fortunately, this is not the first time that CISOs have seen this problem. A decade ago, organizations were similarly buried in disparate tools and processes for their basic IT architecture.

In response, the industry moved toward the concept of the “service-oriented architecture” (SOA), sometimes called the enterprise service bus. The idea was to create a single, common framework that everything could plug into, where disparate solutions could communicate, and IT could move away from constant manual integration.

So the model already exists. Now, we need to apply it to endpoint security. What should that look like?

First, individual endpoint security operations can no longer be built around siloed point products. Each layer of endpoint security should be modular, like a blade snapping into a server chassis. Components should be able to exchange data in real time, so that, for example, when a new threat is detected by one piece of the system, the rest of the defense fabric is instantly aware of it and can automatically inoculate the rest of the environment. Everything should be visible from a single interface, so that the friction between different agents and processes disappears. And the security framework should be highly adaptable, so you can continually add new capabilities without requiring a top-to-bottom rip and replace.
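
To make the “fabric” idea more concrete, the sketch below shows the publish/subscribe pattern the paragraph above describes, reduced to a single Python process: one component publishes a new indicator, and every subscribed component reacts immediately. Component names and the event fields are purely illustrative; a real deployment would use a message bus spanning many hosts rather than in-process callbacks.

```python
# Minimal in-process publish/subscribe sketch of a shared "defense fabric".
from collections import defaultdict
from typing import Callable, Dict, List

class SecurityFabric:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Every subscriber sees the event as soon as it is published.
        for handler in self._subscribers[topic]:
            handler(event)

fabric = SecurityFabric()

# A firewall component blocks the indicator everywhere as soon as it is published.
fabric.subscribe("threat/new", lambda e: print(f"[firewall] blocking {e['sha256']}"))
# An EDR component sweeps all endpoints for the same indicator.
fabric.subscribe("threat/new", lambda e: print(f"[edr] sweeping endpoints for {e['sha256']}"))

# One endpoint agent detects something novel and shares it with the fabric.
fabric.publish("threat/new", {"sha256": "d41d8cd9...", "source": "endpoint-042"})
```

The same pattern is what makes a single interface possible: when every component speaks to one fabric, one console can observe and direct all of them.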

It’s a different approach than most solutions out there today. But the sooner organizations start demanding it from their security vendors as a baseline business requirement, the sooner we’ll see snowballing endpoint complexity melt away.

Find Out More

McAfee is making this vision a reality right now. Our Dynamic Endpoint solution was designed from the ground up to break down barriers between siloed solutions, linking endpoint capabilities across the threat defense lifecycle into a single security fabric. We’re making endpoint defenses more adaptive and automated. And we’re helping security teams in every industry operate more efficiently—and stamp out security snowballs before they start.

To learn more about Dynamic Endpoint, watch our webcast recording “Busting the Silver Bullet Malware Myth.”



MobileIron and Microsoft Strategy

Category : Mobile Iron

This three-part blog series is my perspective on Microsoft’s strategy, the evolution of Microsoft Intune, and the critical role MobileIron plays in a Microsoft shop. My opinions are based on publicly available and third-party data plus my analysis of Microsoft’s actions. Part II of this series provides a high-level comparison between MobileIron and Microsoft Intune, while Part III provides technical details on that comparison.

Like almost every infrastructure software company in the world, MobileIron is both partner and competitor with Microsoft. Most of our customers are also Microsoft customers.

I believe Microsoft’s future depends on the success of three initiatives:

  • Migrate compute workload quickly to Azure
  • Don’t lose the battle for identity
  • Win back the developer

Three product solutions provide the underlying pillars for these three initiatives.

1. All roads lead to Microsoft Azure

For Microsoft to win, enterprise workload must move to Microsoft Azure instead of Amazon Web Services (AWS) or Google Cloud Platform. Azure consumption is a central metric Microsoft can measure to gauge whether its strategy is working. Each month, compute cycles, data storage, and transactions in Azure must increase at a rate higher than the rest of the market.

“Will it increase Azure workload?” is a simple litmus test to predict Microsoft’s actions.

2. All roads start from Microsoft Azure Active Directory

Microsoft cannot afford to lose its position as the system of record for identity. I believe Microsoft Azure Active Directory is the most important product in the Microsoft stack. Microsoft has been very public that “identity is the control plane.” As a result, Azure services are all tightly tied to the identity services that Microsoft provides.

If a Google or an Okta starts taking over identity within a customer, Microsoft loses its most important architectural control point. Office 365 is not only a productivity suite, but also a forcing function to drive identity into the Microsoft Cloud.

3. All roads are built on Microsoft Graph

Before we talk about Microsoft Graph, let’s first turn the clock back 20 years. Microsoft became the largest software company in the world because it won the hearts and minds of developers. Customers go where developers are, and developers were inevitably on Microsoft platforms. Both server-side and client-side developers built on Windows. Microsoft Developer Network (MSDN) was the center of the universe because almost everyone used Microsoft tools.

Then Linux matured and many new developers, like MobileIron, chose it as their server platform. At the same time, client applications on the desktop moved into the browser. In 2010, iOS and Android adoption exploded and, as always, developers followed their customers and started building native apps for those OS platforms. Meanwhile, cloud became the primary infrastructure choice of startups, and AWS quickly established a leadership position.

Now it is 2017. A new startup, funded today, will most likely run in AWS, with Android, iOS, and web apps on the front-end. There is a good chance that the startup will not use any Microsoft development technologies even if the service is consumed on Windows devices. That was infeasible 15 years ago, but practical now.

Microsoft must win back the developer. Winning with Office 365 but losing the developer is not an option.

Microsoft Graph is the centerpiece of the Azure developer strategy. It is the API stack for Azure, and Microsoft needs as many developers to use it as possible.

The Role of MobileIron and Microsoft Intune

At MobileIron, we’ve seen Microsoft’s strategy evolve over the last few years. Microsoft Intune is a perfect example. Because of the strong position Microsoft System Center Configuration Manager (SCCM) has held in the traditional desktop management market, I believe Microsoft assumed Intune could easily achieve a similar position in the enterprise mobility management (EMM) market.

But it didn’t work out that way. Intune struggled with capability breadth, depth, and maturity against the more established EMM players. Intune lacked the fundamental advantage of SCCM – control of the operating system. Apple and Google, not Microsoft, were the primary OS vendors in mobile.

Intune needed a product advantage and it came in the form of Office 365 controls. Microsoft decided not to use the native frameworks for app configuration and security that Apple and Google had built into their operating systems (http://www.appconfig.org/), even though that was the preference of many Microsoft customers. Instead Microsoft built a proprietary set of controls for Office 365 apps and only exposed them to their EMM product, Intune. This meant that other EMM products could not leverage incremental security functions for Office apps, like preventing copy / paste or ensuring that a document was not saved to a consumer storage service.

The Microsoft sales team started pitching that “only Intune secures Office 365.” They tried to convince customers to uproot their entire existing EMM infrastructure and switch to Intune to access a handful of Office configurations. Customers pushed back, and the common outcome was not that they switched to Intune, but rather that they lived without these additional, useful configurations.

In January 2017, Microsoft changed course and exposed these functions through new Microsoft Graph APIs. Access to these APIs still requires the customer to buy Microsoft’s Enterprise Mobility + Security (EMS) suite, which includes Intune, so the Microsoft sales team does not lose a revenue opportunity. However, to me it indicates that Microsoft realized adopting a closed approach to Office security was not in the customer’s or Microsoft’s best interests.
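
As an illustration of what “exposed through Microsoft Graph APIs” looks like from an EMM or automation tool, the hedged sketch below lists Intune managed app (app protection) policies with plain HTTPS calls using the Python requests library. The endpoint path reflects Graph’s Intune documentation at the time and should be verified against current docs; token acquisition through an Azure AD app registration, the required Graph permissions, and the EMS/Intune licensing noted above are all assumptions and are omitted here.

```python
# Hedged sketch: list Intune app protection policies via Microsoft Graph.
# Assumes a valid OAuth2 bearer token obtained separately from Azure AD.
import requests

ACCESS_TOKEN = "<bearer token from an Azure AD app registration>"  # placeholder

response = requests.get(
    "https://graph.microsoft.com/v1.0/deviceAppManagement/managedAppPolicies",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
response.raise_for_status()

for policy in response.json().get("value", []):
    print(policy.get("displayName"), policy.get("id"))
```

This is the practical significance of the January 2017 change: the Office 365 controls are reachable as ordinary Graph resources that other EMM products can read and apply, subject to the EMS licensing requirement described above.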

I believe that, over time, product economics and strategy alignment will naturally shift the focus of Intune from trying to compete head-to-head for EMM business to instead providing Azure policy middleware that other EMM products can leverage. The middleware model better meets customer requirements and, more importantly for Microsoft, drives adoption of Microsoft Graph. Microsoft has a tremendous incentive to secure Azure services but none to secure Android or iOS as OS platforms.

The true battle for Microsoft is not EMM. It’s winning back the developer through Microsoft Graph and moving enterprise workload to Azure with identity at the core.

Please read Part II of this series, “MobileIron and Microsoft Intune,” for more details on these two products.



7 Practices that Make Your Organization Vulnerable to Cyber Attacks

Category : Cyber-Ark

Today I read “How you can be the smartest cybersecurity expert in the room” on CIO.com. The author notes, “many CIOs and senior IT leaders are almost clueless about where to focus and how to start building next-gen security functions.” He references the 20 CIS Critical Security Controls presented by the SANS Institute that organizations can implement to dramatically reduce risk. He acknowledges that the full list is too much for most busy IT teams, so he directs readers to focus on the top five CIS controls, which can still lead to an “85 percent reduction in raw cyber security vulnerabilities.”

If you happen to be one of the smartest security people in the room, you already know that critical security control #5 is “Controlled Use of Administrative Privileges.” Where does this stand on your priority list?

Answer the questions below and consider whether your team has good or bad habits in place. If you answered yes to any of them, your organization is susceptible to an attack. It’s time to implement controls around privileged credentials.

Learn more in The CISO View research report, “Rapid Risk Reduction: A 30-Day Sprint to Protect Privileged Credentials.”



6 steps to prepare your architecture for the cloud

Category : F5

Face it: most IT architectures are complicated. And if you’re considering moving to cloud, you’re right to be concerned about the vast changes that will be required of your architecture—and your organization—as you make your transition.

The good news is that if you’re like most companies, you’ve done this before. Many times. About every three to five years you overhaul your core architectures. You adjust how you deliver applications. You strive to increase performance, enhance security, and reduce costs.

The bad news is that with cloud, things will be even more complicated. You might not have control over services. You may not be able to hard code connections or do things the old way.

There will be some pain. But, like they say, “No pain, no gain,” right?

Here are six steps to get started.

1. Assess what you have

What is the state of your applications? How many do you have? How important are they to your business? What sorts of data do they hold, and—most importantly—what are the dependencies between them?

Start thinking about the categories your apps will fit into. You will have three options.

  1. Adopt SaaS
  2. Migrate to the cloud
  3. Keep them where they are

2. Decide which apps are ripe for outsourcing to SaaS

Do the easy part first. Identify your apps that are virtual commodities. You’re likely to find a lot of them. Do you really need to support your own Exchange server, your out-of-date HR system, or your homegrown sales automation tools? Are they worth the effort of your people or the OpEx you incur? If not, save yourself a lot of trouble by subscribing to a sales, HR, productivity, or other appropriate solution. Let third parties do your heavy lifting. You’ll get obvious, quick wins with SaaS.

3. Analyze and decide on the rest

Next you’ll need to assess your remaining apps and decide which to migrate to cloud and which to keep where they are.

Ask yourself the following questions: If we move app X, how many things will break? Where are the data stores? What are the dependencies? What network services are they using? Which apps require workarounds to normal procedures and protocols to make them work?

You’ll have answers to those questions for many of your apps. For others, you may not know the answers until you actually try to move them. The greater the risk of breakage and the more complicated and less known the dependencies are, the more likely you are to keep an app where it is.

As you map out these dependencies, document them. This will be useful even if only a few of your apps end up in the cloud.
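
One lightweight way to follow this advice is to keep the dependency map as data rather than as a diagram, so it can be queried as decisions change. The sketch below is a hypothetical Python example; the application names, fields, and “disposition” values are illustrative, not a prescribed schema.

```python
# Hypothetical app inventory: dependencies recorded as data during assessment.
apps = {
    "crm": {
        "data_stores": ["oracle-crm-db"],
        "depends_on": ["auth-service", "smtp-relay"],
        "network_services": ["ldap", "dns-internal"],
        "disposition": "migrate",        # saas | migrate | keep
    },
    "payroll": {
        "data_stores": ["hr-sql-cluster"],
        "depends_on": ["crm"],
        "network_services": ["ldap"],
        "disposition": "saas",
    },
}

# Quick sanity check: flag apps whose dependencies are headed somewhere else,
# since those are the moves most likely to break something.
for name, app in apps.items():
    for dep in app["depends_on"]:
        if dep in apps and apps[dep]["disposition"] != app["disposition"]:
            print(f"review: {name} ({app['disposition']}) depends on {dep} ({apps[dep]['disposition']})")
```

Keeping this inventory in version control alongside the templates from step 4 makes it easy to revisit dispositions as dependencies change.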

4. Standardize

Next, examine your app delivery policies and look for opportunities to standardize. You should have a limited number of standard load balancing policies—say 10—rather than hand-tuned configurations for every app. Determine standardized storage tiers. Define standardized network services. Talk to your developers about the benefits of standardization and gain their commitment. Make templates to help them deploy things quickly and easily.

5. Simplify and secure access

Ask yourself who is going to be accessing each app and from where. You have to plan for user behavior, connectivity, and appropriate bandwidth. Many of the applications that you seek to move to the cloud—whether private or public—may need to be more readily accessible from anywhere. Moving them to the cloud will place less stress on the infrastructure.

There are also authentication and security issues; most businesses have traditionally used network rather than app controls to determine access. In a public cloud, you may need new access technologies—gateways that determine access in ways that simply didn’t exist before.

6. Plan your architecture

When you go to the cloud, the architecture will be different because the constructs aren’t static. For monolithic applications like databases, the mechanisms that were formerly tied to specific IP addresses or other constant constructs won’t work in the cloud. You may need additional load balancers or proxies that will help provide consistency in an environment that is always changing. Make additional points of control so you can ensure that everyone can access your apps consistently and without disruption.

“Lift and shift” isn’t easy

This is hard stuff. As we said at the beginning, IT architectures are complicated.

While it may not be easy, it’s worthwhile—for the cost savings (OpEx and CapEx) and scalability alone. And some enterprises have achieved massive savings just by preparing for cloud. By assessing your existing app inventories, analyzing dependencies, documenting everything, and standardizing and simplifying as much as possible, you’ll be in the perfect position to decide what to move and what not to move.

