Category Archives: HP Security


Quantum Computing? Really?

Category : HP Security

We just had an interesting discussion here about quantum computing, quantum cryptography and post-quantum cryptographic algorithms. I’m afraid that I might have wasted a few people’s time with a rant about why I’m not impressed by the possibilities for quantum computing.

I started by noting my thoughts on how hard it is going to be to build a large-scale quantum computer. Keeping quantum coherence long enough to do significant calculations with quantum computers may turn out to be really hard. As in so hard that we’ll need to create a new level above NP-hard to describe how hard it is.

This thinking might be a bit out of date. I haven’t had a lab where I’ve had equipment that let me play with quantum effects for over 20 years. Things might have become much easier since then, but I still think that it’s going to turn out to be extremely hard to build large-scale quantum computers. Perhaps even impossible. If I had to bet, I’d bet on impossible.

But I also rambled on about why I think that quantum computers are incredibly sloppy because they might be able to accomplish so little with so much.

If you have a register comprising n classical bits, that register can hold any one of 2^n possible values. If you replace those classical bits with quantum bits (qubits), that register can hold all 2^n possible values at once. An eight-bit register can hold any one of 256 possible values, while a register of eight qubits can hold all 256 of them at once. A 64-bit register can hold any one of 2^64 possible values, while a register of 64 qubits can hold all 2^64 of those values at once.

If you’re going to use Shor’s algorithm to factor an n-bit RSA modulus, you need a register comprising roughly 2n qubits. Today, most RSA moduli are of the 2,048-bit variety, so to use Shor’s algorithm to factor one you’d need about 4,096 qubits. That’s a lot. Those 4,096 qubits hold 2^4,096 values at once, and 2^4,096 is about 10^1,233. In either form, that’s a very big number.
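The arithmetic here is easy to check. A minimal Python sketch of the numbers (just the counting, not the quantum mechanics):

```python
# Check the arithmetic: a register of n qubits spans 2**n states, and
# Shor's algorithm for an n-bit RSA modulus needs roughly 2n qubits.
import math

modulus_bits = 2048
qubits_needed = 2 * modulus_bits          # ~4,096 qubits

# Express 2**4096 as a power of ten: log10(2**4096) = 4096 * log10(2)
decimal_exponent = qubits_needed * math.log10(2)
print(f"{qubits_needed} qubits hold 2**{qubits_needed} "
      f"≈ 10**{decimal_exponent:.0f} states")
```

Running it confirms that 2^4,096 is about 10^1,233.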

How big?

There are about 10^80 atoms in the visible universe. You can get this number two ways. One involves a SWAG (Scientific-Sounding Guess) of the number of galaxies in the visible universe, the number of stars in the typical galaxy, the mass of a typical star, etc. Or you can derive the number from precise observations made by astronomers. The results are about the same.
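The SWAG version of the estimate can be reproduced in a few lines of Python. Every input below is a rough order-of-magnitude assumption, not a measurement:

```python
# A back-of-the-envelope (SWAG) estimate of atoms in the visible universe.
# All inputs are rough orders of magnitude, not precise observations.
import math

galaxies = 1e11              # ~100 billion galaxies
stars_per_galaxy = 1e11      # ~100 billion stars per typical galaxy
solar_mass_kg = 2e30         # mass of a typical star, roughly the Sun's
hydrogen_mass_kg = 1.7e-27   # most atoms are hydrogen

atoms = galaxies * stars_per_galaxy * solar_mass_kg / hydrogen_mass_kg
print(f"~10**{round(math.log10(atoms))} atoms")
```

Depending on how you round the inputs, you land on the order of 10^79 to 10^80, which is the point: any quantum register of a few thousand qubits dwarfs it.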

I personally find this to be more than slightly annoying. It reminds me more than a little of when I used to work in finance, where we had roughly two types of analysts: quants and cowboys. The quants (like me) were generally introverts who favored their computers over people and who would spend weeks in dimly lit rooms carefully building mathematical models to use to value deals. The cowboys were generally extroverts who drank a lot and just used their intuition to value deals. Annoyingly, there seemed to be absolutely no difference in how well either type of analyst did. So in addition to learning a lot about finance in this particular job, I also learned that life isn’t even close to being fair.

In any event, if you had a quantum computer that holds way more states than the number of atoms in the visible universe, you could use it to crack a 2,048-bit RSA encryption key. You’d think that with that many states you could end hunger, cure cancer, reverse global warming and bring back the TV show Firefly. But you can’t. That’s why I’m not impressed.

End of rant.

Source: https://www.voltage.com/crypto/quantum-computing-really/

Author: Luther Martin



Still Using Stateful Tokens? There is a Better Way!

Category : HP Security

Lately, with regards to tokenization, we’ve been seeing . . .

Amid the many discussions going on about tokenization solutions – stateless, vaultless, vaulted, etc. – let’s step back and segment these conversations around the market each solution aims to serve.  Generally speaking, the tokenization market divides into two main categories – organizations that want to control their own destiny, and those that feel better served letting others provide and manage this capability for them.  Keep this in mind and some key criteria that weigh on the technology decision come up pretty quickly – cost, manageability, transaction throughput and type, scalability and, last but not least, security standards.

This argument has been around for many years . . . to tokenize or not to tokenize

According to Markets and Markets (Tokenization Market, Global Forecast to 2022), “the rise in adoption of payment security trends and increase in the number of security breaches targeting payment transaction processes are among the major factors for the growth of the tokenization market around the globe.”  As organizations delve into the age of digital commerce, collecting cardholder data is unavoidable – and so is the obligation to protect it.

The Payment Card Industry Data Security Standard (PCI DSS) provides guidelines mandating protection of cardholder and ACH numbers in systems.  It recommends tokenization as a way to actively remove live data from the infrastructure by replacing it with a “token” that is valueless to hackers.  In the event of a breach, the token would be useless and valueless on the market, unless the hacker was also able to get the token table.  Achieving PCI compliance is a key driver for organizations to implement tokenization, with the added benefit of audit scope reduction.  There are times when organizations elect to encrypt rather than tokenize, but for the purpose of this discussion let’s look at some of the reasons customers use tokenization.

  • Compliance management requirements
  • Reduce audit scope costs
  • Strengthen applications and processes (payment security and user authentication) by creating a surrogate value

If the surrogate value can be used more than once, then all bets are off for reducing risk.  Deriving a token for a single use greatly reduces the target for cyber-criminals.

So what works best?

Many talk about stateful / vaulted tokenization vs stateless / vaultless. Let’s look at the different tokenization types.

Stateful tokens are assigned to PANs through the use of an index or token table – usually a two-column table in which each PAN and its corresponding token are listed across from each other.  The table needs a “location” where it is stored and managed, preferably a separate hardware device – a database, hardware security module (HSM) or other secure location.  To maintain integrity and avoid data loss, the state of the system (specifically, the token vault) must be constantly backed up and/or replicated. Since replication is not instantaneous, multiple copies of the vault will be out of sync, resulting in cumbersome situations where multiple tokens are issued for the same PAN (often referred to as “collision”), or multiple PANs are associated with the same token.
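To make the bookkeeping concrete, here is a toy Python sketch of a stateful vault. It is illustrative only – a real vault lives in a hardened database or HSM, and this toy ignores replication entirely, which is exactly where the synchronization problems come from:

```python
# Toy stateful token vault: a two-column mapping of PAN -> token.
# Illustrative only -- real vaults live in hardened databases or HSMs.
import secrets

vault = {}           # PAN -> token
reverse_vault = {}   # token -> PAN, for de-tokenization

def tokenize(pan: str) -> str:
    if pan in vault:                      # already tokenized: reuse
        return vault[pan]
    while True:
        token = "".join(secrets.choice("0123456789") for _ in range(len(pan)))
        if token not in reverse_vault:    # avoid token collisions
            break
    vault[pan] = token
    reverse_vault[token] = pan
    return token

def detokenize(token: str) -> str:
    return reverse_vault[token]

t = tokenize("4111111111111111")
assert detokenize(t) == "4111111111111111"
assert tokenize("4111111111111111") == t   # stable mapping needs the vault state
```

Notice that the mapping is only consistent as long as every caller sees the same `vault` state – which is why replicas that fall out of sync can hand out different tokens for the same PAN.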

Stateful tokenization solutions

  • Present a centralized target for attacks, including attacks on the random number generators
  • Require maintenance and continuous synchronization of token tables
  • Face scalability challenges in large deployments – including possible token collisions
  • Carry cost and sustainability implications (i.e., hardware-centric vs. non-hardware-centric organizations; organization size and maturity, etc.)

With stateless tokenization, multiple token tables are randomly generated one time, covering all possible PANs. This generation uses random numbers and a provably secure method. Each and every PAN in the numeric range has a token assigned to it for the life of the table(s). Since every PAN is pre-associated with a token, the tables are stateless; they do not change. This eliminates the need to synchronize a database across data centers, or constantly back it up. (See more in the Coalfire Systems HPE Secure Stateless Tokenization (SST) PCI DSS Technical Assessment white paper.)

The better way:  Secure Stateless Tokenization

What if there was a better way to tackle payment security breaches while enabling a streamlined experience for the customer and reducing many of the scalability challenges faced in large deployments – especially if you have Hadoop or Big Data implementation?  We have taken this into account as we built our stateless tokenization solution (SST) into SecureData.

Secure Stateless Tokenization uses a set of static, pre-generated tables containing random numbers created using a FIPS random number generator and based on published and proven academic research. These static tables reside on the SecureData appliances, and are used to consistently produce a unique, random token for each clear text PAN input, resulting in a token that has no relationship to the original PAN. No “token vault” database is required, thus improving the speed, scalability, security, and manageability of the tokenization process.
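The shape of the idea can be sketched in a few lines of Python. The per-position tables and lookup below are illustrative stand-ins under a toy seed – not the vetted, FIPS-backed construction SST actually uses:

```python
# Toy stateless tokenization: static, pre-generated random tables derive the
# token from the PAN itself, so no PAN -> token vault is ever stored.
# Illustrative only -- real SST uses vetted constructions, not this sketch.
import random

rng = random.Random(2024)   # stands in for a one-time, secure generation step
DIGITS = "0123456789"
# One random substitution table per digit position (generated once, then static).
TABLES = [rng.sample(DIGITS, k=10) for _ in range(16)]

def tokenize(pan: str) -> str:
    # The token is derived from the PAN and the static tables; nothing is stored.
    return "".join(TABLES[i][int(d)] for i, d in enumerate(pan))

# Deterministic: the same PAN always yields the same token, with no lookup state,
# so there is nothing to back up, synchronize, or collide.
assert tokenize("4111111111111111") == tokenize("4111111111111111")
```

Because the tables never change, any appliance holding a copy produces the same token for the same PAN – no database synchronization, no collisions.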

Secure Stateless Tokenization (SST):

  • Eliminates disk reads/writes for tokenization and de-tokenization by operating in memory (RAM)
  • Enables horizontal and vertical scalability for large deployments and data collections (i.e., Cloud, Hadoop, Big Data, IoT)
  • Creates random tokens with no databases, no data synchronization, no collisions, and high performance
  • Provides an enhanced approach that maximizes speed, scalability, security and management of the tokenization process
  • Offers a unified back-end platform enabling growth not just in tokenization but also expansion to protect PII data
  • Forgoes the challenges of incorrect token mapping
  • Avoids the cost (hardware, licenses, etc.) and complexity of managing token tables, which can become overwhelming if you aren’t a hardware-centric organization

Is encryption a potential better fit?

The tokenization vs. encryption argument has raged for many years, and vendors often bring it up when they aren’t succeeding with their tokenization solution.  Encryption has its place in data protection, but if you are required to meet PCI DSS mandates, tokenization is the best solution. A thought for future discussion.

Source: https://www.voltage.com/tokenization/still-using-stateful-tokens-better-way/

Author:  TRISH REILLY



Has storage and server encryption kept pace with modern IT to adequately reduce risk?

Category : HP Security

Storage and server vendors seem stuck in the historical mindset of traditional data-at-rest encryption. Data from applications is exposed while in use, sits blissfully protected at rest, and is exposed to potential breach again as soon as applications need to access it. This is a recipe for disaster, leaving gaps in protection; but is there a better approach? Yes, there is: format-preserving encryption, a game-changer for storage and server security.

An evolution up the stack and beyond: From system-level to data-centric encryption

Format-Preserving Encryption (FPE), which persists with the data, is a more trustworthy and comprehensive data-centric approach to addressing the risk of data exposure. FPE is able to protect data across platforms that had previously relied on a “system-centric” approach, which can’t scale outside of the storage or server environment. FPE affords all the benefits of traditional AES encryption, while going further to maintain the same general “look and feel” of the original data. The approach is similar to tokenization methods in substituting original data with a safer replacement, and, in the case of FPE, doesn’t break applications or schemas. Data looks the same and can be managed like the original data. FPE retains enough context from the original content for operating on the information, while making it useless outside of the business application environment. So, why does this approach matter?
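To illustrate what “format-preserving” means in practice, here is a toy Python transform that keeps the length and character class of the input. This is a hypothetical sketch for intuition only – real FPE products implement the NIST FF1 mode, not a per-digit substitution like this:

```python
# Toy "format-preserving" transform: the output has the same length and
# character class as the input (16 digits in, 16 digits out), so schemas
# and applications expecting a card-number shape keep working.
# Illustrative only -- real FPE uses the NIST-approved FF1 mode of AES.
import hashlib

def toy_fpe(digits: str, key: bytes) -> str:
    out = []
    for i, d in enumerate(digits):
        # Derive a per-position shift from the key (a keyed PRF stand-in).
        shift = hashlib.sha256(key + i.to_bytes(4, "big")).digest()[0] % 10
        out.append(str((int(d) + shift) % 10))
    return "".join(out)

protected = toy_fpe("4111111111111111", b"secret-key")
assert len(protected) == 16 and protected.isdigit()   # format preserved
```

The point of the sketch is the invariant in the final assertion: protected data still “looks like” a PAN, so downstream systems and schemas don’t break, while the cleartext value is no longer present.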

Case in point: many attacks happen while data is in use or in transit. Consider malware in the application tier, e.g., a point-of-sale app – well ahead of where the data may be stored or eventually archived. With more and more analytic processes using sensitive data from IoT and mobile applications, data-in-use risks are increasingly problematic as the focal point of today’s critical attack vector. If you consider the increasing practice of creating data lakes to achieve rapid insight from the various enterprise data sources, data-at-rest encryption is simply not enough – it’s not where the real risk has migrated. New times require new approaches.

Comprehensive: Data-centric at-rest, in motion and in use!

Modern data security must protect data persistently in use, in motion and at rest – not as three separate states that allow for gaps to be exploited. Many large enterprises compelled to reduce sensitive live data exposure from breach risks, or to comply with privacy regulations, can now use NIST-recommended standard FPE in a platform-agnostic approach: everything from mainframes such as IBM z Series, across major big data and mission-critical platforms such as Teradata and HPE NonStop, to open systems such as Windows, Linux and Unix, and across applications and data stores.

FPE can avoid the need to unnecessarily decrypt for the vast majority of the data’s lifecycle after capture and protection at source. Sensitive data classes can then remain protected, reducing risk across all platforms—that is, not just one specific IT ecosystem, but across all wherever data may flow. So, we might encrypt on capture on z/OS apps (e.g., a CICS transaction engine) and process locally as needed in protected form, pass secured data on to downstream systems for analysis without decryption over ETL (e.g., into Teradata, or Vertica, into AWS, to Azure, into Hadoop, and so on). Exposure can be limited to a small number of processes or people that need the actual cleartext data, which can be controlled to very specific qualified use cases.

Keeping data with format, meaning, context and value retained without the ongoing performance impact of decrypt/encrypt operational cycles offers a more reliable approach, applying across all platforms where improper exposure is a possibility. For data and line of business owners, this reduces liability and streamlines compliance approaches to data security, pseudonymization and data de-identification required by complex regulations like GDPR, PCI, HIPAA, NYDFS. The technique can be used for sophisticated data workflows in contemporary agile enterprises, building on micro-service based apps and serverless computing methods that reflect today’s advanced business environments of hybrid IT.

The best of all worlds

Data consumers can now run more applications and analytics processes on FPE-encrypted data, without the traditional burden of limited data-at-rest controls and with minimal impact on application performance. FPE is a game-changer with its data-centric, IT platform-agnostic approach, allowing protection to persist as data is managed across modern IT. Businesses can now do more with their ever-increasing data volumes, versus locking down data at rest and restricting it to a few trusted data scientists.

Source: https://www.voltage.com/encryption/storage-server-encryption-kept-pace-modern-adequately-reduce-risk/

Author: MARK BOWER



Encryption for Data-at-Rest Leveraging OASIS KMIP

Category : HP Security

As a universally-accepted best practice, there is no substitute for encryption of data-at-rest as the last line of defense.  For many companies, it’s like having an “easy” button.  All data, known or unknown sitting at-rest is encrypted, protected and secure.

Encryption’s value is no longer in question in large enterprises. Rather, the broader challenge they face as they look to manage petabytes of data in complex backup environments is, “How do I overcome the substantial costs and time required to manage my encryption keys?”  Other concerns might include key storage, key rollover, on-demand key generation, key databases, data access policies, key replication, symmetric-key management and so on.

Enterprises that are serious about protecting the integrity of their data and their clients’ data, and about complying with government regulations, no longer seriously dispute the value of encryption. Rather, they recognize that data encryption is crucial to preserving their company’s value, its reputation, and even its long-term viability.

The Key Management Interoperability Protocol (KMIP) represents a breakthrough from an encryption deployment standpoint.  Enterprises can deploy a standards-based encryption solution that has no dependencies upon existing proprietary key management approaches. Using KMIP, any provider’s encryption methodology that supports the protocol may communicate with a KMIP server to obtain the keys it needs to encrypt data.

OASIS KMIP

The KMIP standard effort is governed by the OASIS standards body. OASIS (Organization for the Advancement of Structured Information Standards) is a not-for-profit, international consortium that drives the development, convergence, and adoption of open standards for the global information society. The OASIS KMIP Technical Committee works to define a single, comprehensive protocol for communication between encryption systems and a broad range of new and legacy enterprise applications, including email, databases, and storage devices. By removing redundant, incompatible key management processes, KMIP will provide better data security while at the same time reducing expenditures on multiple products.

The big advantage over other encryption key management techniques is that rather than each encryption vendor needing to provide and manage a proprietary key management solution, KMIP provides the keys that each encryption methodology needs. In this way, enterprises have the flexibility to deploy encryption at whatever layer they need without seeing their costs or complexity rise significantly.  HPE Security recommends customers leverage our HPE Enterprise Secure Key Manager (ESKM) with KMIP standardized interoperability for data-at-rest, and our data-centric security solution – HPE SecureData (stateless key management) – for data-in-motion and data-in-use.
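The request/response pattern can be sketched conceptually in Python. The `KmipClient` class below is a hypothetical stand-in that models the flow – it is not a real KMIP library API:

```python
# Conceptual sketch of the KMIP flow: an application asks the key manager to
# create or fetch a key by identifier, then encrypts locally. The client
# class is a hypothetical stand-in, not a real KMIP library API.
class KmipClient:
    def __init__(self, server: str):
        self.server = server
        self._keys = {}          # stand-in for the key manager's secure store

    def create_symmetric_key(self, algorithm: str, length: int) -> str:
        key_id = f"key-{len(self._keys) + 1}"
        self._keys[key_id] = (algorithm, length)
        return key_id            # KMIP hands back a unique identifier

    def get(self, key_id: str):
        return self._keys[key_id]

# 5696 is the IANA-registered KMIP port.
client = KmipClient("eskm.example.com:5696")
key_id = client.create_symmetric_key("AES", 256)
assert client.get(key_id) == ("AES", 256)
```

The design point the sketch captures is that every encryption layer talks to one key manager through the same protocol, referencing keys by identifier, instead of each vendor shipping its own proprietary key store.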

Advantages to using KMIP

Here are three important advantages to using KMIP with encryption methodologies.

#1 – An enterprise only needs one key manager. The key manager works across all encryption key offerings, reducing complexity and cost.  Users only have to learn one graphical user interface (GUI).

  • Our HPE ESKM key management solution is perfect for addressing this single user GUI desire and benefit.

#2 – Enterprises can deploy encryption at whatever layer they need.  By decoupling and centralizing key management, the cost and complexity are far lower than if each layer or application had its own encryption key management.

  • Unlike stateless key management, the HPE ESKM appliance vaults keys, which can offer a different set of advantages and provides the last line of defense – encryption for data-at-rest.

#3 – Business processes are unaffected.  By obtaining the encryption key from the same source, processes like backup and replication that leverage deduplication can occur uninterrupted.  Data optimization and movement processes can request the keys they need to safely and securely decrypt and then re-encrypt data.

  • Our HPE ESKM solution for symmetric key management automates key replication during key generation, and can accelerate enterprise key management time to value.

Leveraging KMIP interoperability between multi-vendor products reinforces the reality of choice of vendor solutions for CIOs, CSOs and CTOs, enabling products from multiple companies to be deployed as a single enterprise security solution that addresses both their current and future requirements.

Industry standards are important to HPE

For HPE, customer adoption of key management integration using OASIS KMIP further enables encryption to be embedded into more mainstream applications and systems to simplify interoperability. Industry standards are of the utmost importance to HPE, and today we have one of the most robust partner integration portfolios available in the market. Our HPE Enterprise Secure Key Manager (ESKM) solution supports FIPS 140-2 and Common Criteria Evaluation Assurance Level (EAL2+) standards.  HPE ESKM offers KMIP standardized interoperability and HPE Secure Encryption to enable you to protect, and ensure continuous access to, business-critical, sensitive, data-at-rest encryption keys, both locally and remotely.

Source: https://www.voltage.com/eskm/encryption-data-rest-leveraging-oasis-kmip/

Author:  SHERYL WHARFF



Format-Preserving Encryption Summer Reading

Category : HP Security

Hello again, format-preserving encryption enthusiasts and data security fans around the globe!

This week we saw an insightful article published on the merits of Format-Preserving Encryption (FPE) backed by well-vetted methods in Connect Converge, the magazine for the HPE NonStop community. Here is another good case for adopting proven security, without compromising performance.

Read the full article, “Format-Preserving Encryption – And then there was one” by Karen Martin in the Summer issue of Connect Converge.

Let’s recap recent events in the encryption world. The National Institute of Standards and Technology (NIST) originally considered three FPE modes – FF1, FF2, and FF3 – as modes of operation of the Advanced Encryption Standard (AES). FF2 did not survive to publication after an attack demonstrated that the security strength of FF2 is less than 128 bits. Recently, FF3 was broken by researchers Betül Durak (Rutgers University) and Serge Vaudenay (École Polytechnique Fédérale de Lausanne). Note: these attacks do not affect NIST’s continued endorsement of FF1 format-preserving encryption.

For further background, see our blog post titled, “Can I Trust My Vendor’s Security Claims? Peer-reviewed vs. self-certification methods.”

Moving the discussion forward, author Karen Martin continues the conversation in her compelling article, arguing:

“The three FFX modes were very similar, but not identical. FF1 was designed to handle longer messages and longer tweaks than the other two algorithms and used a 10-round Feistel network; FF2 was designed for shorter messages and tweaks than FF1 and used a 10-round Feistel network; FF3 fixed the length of the tweak at 64-bits and only used an 8-round Feistel network, which made it slightly faster. The differences in the three modes were slight, but crucial. As of today (May 2017), only FF1 is approved by NIST.”

What do you think? When does it make sense to accept more security risk to improve performance? Or can you achieve the best of both worlds without unnecessary compromise? Read the full article and join the discussion. As always, we’re happy to join the debate with you and help answer difficult questions that separate the proven methods from the empty claims. Happy encrypting!

Source: https://www.voltage.com/encryption/format-preserving-encryption-summer-reading/

Author: NATHAN TURAJSKI



New GDPR-Focused Media Hub Launched By IDG/CIO and Hewlett Packard Enterprise

Category : HP Security

Do you have questions regarding the pending enforcement of the European Union’s General Data Protection Regulation (GDPR) and its impact on your business?  If so, look no further – GDPR & Beyond launched this week. GDPR & Beyond is a new online media hub developed for information governance and security professionals looking to understand more about GDPR and how it is going to impact a company’s collection, maintenance and protection of its customers’ data.

GDPR’s reach is extensive: it applies not only to EU companies, but also to multi-national organizations that collect the personal data of EU citizens. GDPR’s mandates tighten and deepen governance, data security and data privacy to ensure adequate protection of the fundamental rights and freedoms of EU citizens with regard to their personal data.

The website, sponsored by Hewlett Packard Enterprise (HPE), features insightful articles, interviews and videos from an experienced and knowledgeable editorial team at IDG/CIO Magazine, with key inputs for selected content from HPE subject matter experts including David Kemp – specialist business consultant, Tim Grieveson – chief cybersecurity strategist, and Sudeep Venkatesh – global head of pre-sales for HPE Data Security.

Below is a sample of the questions addressed by the interactive content on the website:

  • How can I find the information and personal data that will fall under these regulations?
  • How can I cost effectively respond to legal matters requiring information under my management?
  • How can I protect, store and securely back up personal data?
  • What types of data protection technologies can help to secure data without breaking business processes?
  • How can I identify information for disposition in accordance with the “right to be forgotten?”
  • Can I report a breach within the timeline required by the EU data protection regulations?
  • How can I reduce my overall risk profile?

GDPR & Beyond aims to foster discussion and idea exchange around the topics of how IT and the lines of business must collaborate to drive GDPR compliance by the May 25, 2018 effective date. Included in the content will be an assortment of educational, thought-leading and opinion-based articles that discuss how organizations’ efforts to comply enable them to become more efficient in their use of data and their ability to mitigate risk.

More content will continue to be posted to the GDPR & Beyond site, adding to the highly valuable articles already there.

Visit GDPR & Beyond today to learn more about how to prepare for GDPR.

Source: https://www.voltage.com/gdpr/new-gdpr-focused-media-hub-launched-idgcio-hewlett-packard-enterprise/

Author: Lori Hall



Streamlining Your Data-Security Program to Meet Regulatory Change

Category : HP Security

Mark Bower, Global Director of Product Management, HPE Security – Data Security

Attend

Data security and the challenge of data protection is increasing in scope and difficulty. The massive volume of data that businesses are collecting is growing exponentially, and managing compliance delivery is a daunting task with huge negative consequences for getting it wrong. While organizations have long needed to safeguard intellectual property and confidential information, changes in information technology and business models introduce new threats, and new regulations. Governments and industry bodies are imposing new regulations to motivate organizations to protect the privacy and confidentiality of information. Responsibilities can vary widely by region and by industry, and staying on top of an ever-shifting regulatory landscape is complex and challenging, but it isn’t impossible.

Successful organizations coordinate enterprise-wide regulatory compliance activities with tools to identify and address new and changing regulations, and are able to map the impact of these regulations across the entire infrastructure, and prioritize compliance activities according to business impact. By deploying a consistent, sustainable, scalable and measurable process for managing regulatory change, they are able to eliminate manual, non-scalable and non-strategic activities to reduce the cost and improve the speed of regulatory compliance programs.



Neutralizing Data Breach and Insider Threat

Category : HP Security

Governments and enterprises are more challenged than ever to protect their most valuable data, from a citizen’s social security number to highly classified data. But endpoint and network security can’t stop attackers, much less a malevolent insider. The solution lies in protecting the data itself.

Recent NIST and FIPS validations make groundbreaking Format-Preserving Encryption (FPE) technology available to governments and enterprises. FPE “de-identifies” sensitive data, rendering it useless to attackers, while maintaining its usability and referential integrity for data processes and applications, and easily layering protection into decades-old legacy systems. Join HPE and (ISC)2 for an exploration of this topic in the first part of a three-part series.

Presented by:
Terence Spies, CTO, HPE; Brandon Dunlap, Moderator

Watch for free



Four pillars of payments security, one solution: Welcome to the age of AKB

Category : HP Security

The retail market relies on payments security, yet encryption hasn’t treated four distinct security fundamentals as a whole—until now.

In this ever-growing, evolving world of payments security, encryption and cryptography play important roles by protecting users from the bad guys. While attacks on poorly designed applications are more common, a more sophisticated attack is designed to exploit the weakest link in the chain, or in the algorithm that protects it. To keep pace with the threat of data breaches, newer and stronger algorithms are needed – algorithms that also strengthen the chain as a whole. To that end, security methodology rests on four key pillars:

  • Identification (who)
  • Authentication (integrity)
  • Authorization (privilege)
  • Confidentiality (encryption)

Multiple advancements have taken place within each pillar. Yet methodologies and designs treated them as unique, separate entities – and continued to advance each one as a standalone. Organizations focused on one pillar without treating the four as parts of a complete solution; in reality, these key pillars are interrelated and should be treated as such.

The non-cash retail payment market relies on security. The algorithms’ journey from the Data Encryption Standard (DES) to the Triple Data Encryption Algorithm (TDEA or 3DES) in the early 2000s paralleled the National Institute of Standards and Technology’s approval of, and recommendation that organizations adopt, the stronger algorithm. The ease of modern CPU processing – and the prospect of quantum computing – now brings 3DES encryption into question; NIST currently recommends migration to the Advanced Encryption Standard (AES), an even stronger algorithm.

Along with the encryption algorithms, security was further strengthened by the introduction of the initialization vector (IV), which ensures no repetition in the encrypted data (ciphertext). An IV greatly reduces the ability to detect a pattern and thus the possibility of deciphering the ciphertext. So the race began to solve the current algorithm’s problems – while introducing new weaknesses and new problems to solve. Yet the race neglected to address the four key pillars as a whole rather than part by part. Thus arose the requirement for additional foolproof digital fencing: logical and physical controls.
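The effect of an IV is easy to demonstrate with a toy keyed construction, using SHA-256 as a stand-in for a block cipher (this is a sketch for intuition, not real AES):

```python
# Why an IV matters: a deterministic cipher maps equal plaintext blocks to
# equal ciphertext blocks, leaking patterns; mixing in an IV breaks the
# repetition. Toy keyed construction using SHA-256 as a PRF -- not real AES.
import hashlib
import secrets

def encrypt_block(key: bytes, block: bytes, iv: bytes = b"") -> bytes:
    return hashlib.sha256(key + iv + block).digest()[:16]

key = b"k" * 16
b1 = encrypt_block(key, b"same plaintext!!")
b2 = encrypt_block(key, b"same plaintext!!")
assert b1 == b2        # no IV: identical plaintext blocks leak a pattern

iv1, iv2 = secrets.token_bytes(16), secrets.token_bytes(16)
c1 = encrypt_block(key, b"same plaintext!!", iv1)
c2 = encrypt_block(key, b"same plaintext!!", iv2)
assert c1 != c2        # fresh IVs: the repetition disappears
```

Without an IV, an eavesdropper who sees the same ciphertext block twice learns that the underlying plaintext repeated; with fresh IVs, that signal is gone.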

The middleman cuts in, but AKB holds the key

As the industry looked to address the four key pillars, man-in-the-middle (MiTM) attacks remained a potential problem in cryptography and encryption. MiTM attacks exploit the weakest point in the chain. Without a strong binding between an encryption key and its designated attributes (encryption, decryption, exportability, etc.), an interceptor could change the behavior of the outcome.

The Payment Card Industry (PCI) Security Standards Council released a bulletin in March 2017 for PCI PIN Security Requirement 18-3. It provides a revised plan to implement managed structures (called key blocks) to address the individuality of the four pillars. This requires organizations to consider the pillars as a whole—and not individual items. A specification, published in ANSI X9 TR-31, defines the AES key-wrap process, also commonly known as ANSI Key Block (AKB).

AKB was the first market-specified published key block that resolved this by hard binding the key with the intended attributes along with the integrity to ensure that the cipher text hasn’t been modified.

The AKB brings two important features. First, the key is protected using the approved key-bundling standard requirements, greatly reducing exposure to MiTM attacks. Second, key-usage attributes are securely bound to the key itself, preventing misuse of the key type or its intended use: a key identified as an encryption key, for example, can't be used to decrypt data or be exported.
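To make the binding concrete, here is a minimal Python sketch loosely modeled on the key-block idea: a short header carries the key's attributes, and a MAC computed over the header and key together means a MiTM who alters any attribute invalidates the whole block. The field codes and layout below are illustrative only; a real TR-31 key block also encrypts the key itself under a key-block protection key.

```python
import hashlib
import hmac

def build_key_block(kbpk: bytes, key: bytes, usage: str, mode: str, export: str) -> bytes:
    # Illustrative header loosely inspired by the TR-31 layout:
    # key usage (e.g. "P0"), mode of use ("E" = encrypt-only),
    # exportability ("N" = non-exportable). Codes here are examples only.
    header = ("D" + usage + "A" + mode + export).encode()
    # Bind header and key together: the MAC covers both, so changing any
    # attribute (say, flipping "E" to "D") invalidates the block.
    mac = hmac.new(kbpk, header + key, hashlib.sha256).digest()
    return header + key + mac  # a real key block also encrypts `key`

def verify_key_block(kbpk: bytes, block: bytes, header_len: int = 6, key_len: int = 16) -> bool:
    header = block[:header_len]
    key = block[header_len:header_len + key_len]
    mac = block[header_len + key_len:]
    return hmac.compare_digest(mac, hmac.new(kbpk, header + key, hashlib.sha256).digest())

kbpk = b"key-block protec"          # key-block protection key (toy value)
block = build_key_block(kbpk, b"0123456789abcdef", "P0", "E", "N")
assert verify_key_block(kbpk, block)

# A MiTM flipping the mode-of-use byte from "E" (encrypt) to "D" (decrypt)
# is detected, because the MAC no longer verifies.
tampered = block[:4] + b"D" + block[5:]
assert not verify_key_block(kbpk, tampered)
```

This is exactly the "hard binding" described above: the attributes travel with the key, and neither can be changed independently without detection.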


With payments disruption, an emerging landscape questioning the status quo, and increasing non-bank competition brought by commercialization (the Internet of Things, mobile wallets, gift cards and fleet cards), there is a greater need than ever to ensure the payment market is well protected while fostering growth and innovation. Adoption of the AKB by regulatory bodies such as PCI will unite the four key pillars into a cogent whole.

Source: https://www.voltage.com/payments/four-pillars-payments-security-one-solution-welcome-age-akb/

Author: PRIYANK KUMAR


  • 0

Cryptography for Mere Mortals #15

Category : HP Security

An occasional feature, Cryptography for Mere Mortals attempts to provide clear, accessible answers to questions about cryptography for those who are not cryptographers or mathematicians.

Phil Smith III, Senior Architect & Product Manager, Mainframe & Enterprise Distinguished Technologist and Dave Mulligan, Chief Services Strategist, HPE Security – Data Security

Q: I heard that National Institute of Standards and Technology (NIST) just repudiated the format-preserving encryption (FPE) standard—should we be concerned about that?

A: Maybe. Let’s talk some more about standards. In installment 14, we talked about why standards are important.

Since that post, NIST released Special Publication 800-38G, "Recommendation for Block Cipher Modes of Operation: Methods for Format-Preserving Encryption". It defines two new AES modes, FF1 and FF3; FF1 is the Format-Preserving Encryption included in HPE SecureData, proven through almost a decade of real-world use. (For those who are wondering: FF2 was another approach, discarded partway through the standards process due to weaknesses found by the standards body's analysis.)
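"Format-preserving" means the cipher text keeps the plaintext's shape: digits in, digits out, same length. The toy Feistel network below sketches the idea behind such modes; it is a simplified illustration for intuition only, not FF1 itself (which uses AES as the round function and many additional safeguards).

```python
import hashlib
import hmac

def toy_fpe_encrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    # Toy Feistel network over decimal strings: split the digits in half,
    # then repeatedly mix one half into the other modulo a power of ten.
    # Output has the same length and character set as the input.
    n = len(digits)
    left, right = digits[: n // 2], digits[n // 2:]
    for r in range(rounds):
        h = hmac.new(key, f"{r}:{right}".encode(), hashlib.sha256).hexdigest()
        f = int(h, 16) % 10 ** len(left)
        new_right = str((int(left) + f) % 10 ** len(left)).zfill(len(left))
        left, right = right, new_right
    return left + right

def toy_fpe_decrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    # Run the rounds in reverse, subtracting where encryption added.
    n = len(digits)
    left, right = digits[: n // 2], digits[n // 2:]
    for r in reversed(range(rounds)):
        h = hmac.new(key, f"{r}:{left}".encode(), hashlib.sha256).hexdigest()
        f = int(h, 16) % 10 ** len(right)
        prev_left = str((int(right) - f) % 10 ** len(right)).zfill(len(right))
        left, right = prev_left, left
    return left + right

pan = "4111111111111111"
ct = toy_fpe_encrypt(b"secret", pan)
assert len(ct) == len(pan) and ct.isdigit()   # same format as the input
assert toy_fpe_decrypt(b"secret", ct) == pan  # round-trips correctly
```

Because the cipher text is still a 16-digit number, it fits unchanged into databases and applications that expect a card number, which is the whole appeal of FPE.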

Great! A new standard, with two choices that achieve similar results! Vendors leapt on the FPE bandwagon and started implementing these new modes in their products. Many of them chose to implement the FF3 mode, and have products available now.

Now comes the bad news: as discussed in April, a problem was found with FF3 that makes it vulnerable to attack. O noes! Standards fail! Maybe standards aren’t so wonderful after all?!

Not so fast. Yes, FF3 has a weakness, and yes, vendors and customers who chose that route have a problem. But it falls in the category of “an honest mistake”, and is one that can be rectified without embarrassment or arguing. Contrast that with having chosen an encryption algorithm not blessed by any standards body: if a weakness is discovered, there’s no good excuse for having chosen it. Worse, without a neutral third party saying “Hey, there’s a problem”, a sleazy vendor could just say “We don’t think this matters, move along, nothing to see here.”

Besides, this weakness was discovered because it was a standard: the cryptographic community tends to focus its analysis efforts on standard-based algorithms. There is a positive feedback loop here: the focus is on standards-blessed algorithms, which encourages customers to use those, which encourages more analysis… The alternative is security by obscurity: a non-standard, untested algorithm might be secure, but nobody knows. Which is hardly a solid basis for a security posture.

Bottom line is, the exception does not invalidate the value of standards, and enterprises examining their choices for data protection would be foolish to select approaches that are not at least on a standards track.

HPE SecureData, of course, has offered FF1 for almost a decade, on a variety of platforms, and is not subject to the weakness that FF3 suffers from. We take a conservative approach in designing our solutions, and FF1 includes extra internal “rounds” (iterations) that increase its security, helping to guard against new attacks such as the one that makes FF3 vulnerable. This is just one reason enterprises that have done the analysis consistently choose HPE SecureData to protect their information.

Meanwhile, companies using an FF3-based approach must act, as discussed in the April post here. If data protected using FF3 is breached, the data will of course still be less vulnerable than if it were not protected at all, but the organization will not be able to claim exemption from data breach disclosure rules. This means they must take the same steps as if the data were not protected at all: suffer disclosure, fines, etc. Considering the full costs of this remediation, it is clear that taking security shortcuts carries significant risk: the 2016 Ponemon Cost of Cyber Crime Study reported that the total average cost of a breach is now $7 million!

Source: https://www.voltage.com/crypto/cryptography-mere-mortals-15/?platform=hootsuite

Author: PHIL SMITH III

 

