
Addressing Data Across Borders for the GDPR

Category : Imperva

Most enterprises today do business across the globe, have databases in multiple countries and DBAs or users in different regions who have access to those databases. With GDPR mandating privacy requirements for personal data of European Union (EU) residents and visitors, it is important for an organization to know and control who accesses that data and what those with access authority can do with it.

Chapter 5 of the GDPR addresses transfers of personal data to third countries or international organizations, and Article 44 of Chapter 5 specifically sets out the “general principle for transfers,” which outlines the requirement for preventing unauthorized data transfers outside of EU member states.

Compliance with GDPR Article 44 requires either:

  • Blocking transfer of personal data outside the EU; or
  • Ensuring adequate data protection

In both cases, the starting point for compliance with the GDPR is data discovery and data classification followed by implementation of strong security policies, audit policies and reporting.

Imperva SecureSphere can help organizations comply with the GDPR by blocking the transfer of personal data outside the EU and ensuring adequate data protection. In this post, I’ll review how the SecureSphere database security solution can not only classify sensitive data and prevent it from crossing a specific geographic location to meet the Article 44 requirement, but also generate audit logs and reports that can assist with investigations, reporting mandates and data forensics (Figure 1).


Figure 1: Imperva SecureSphere helps enforce cross-border data transfers by mapping to GDPR requirements

Database Discovery

Many organizations are not aware of all the databases that exist in their network. Oftentimes a DBA may create a database, for example to test an upgrade, then forget to take it down, leaving a database containing potentially sensitive data unsecured and unmonitored. SecureSphere Database Discovery scans and reports on all the databases that exist in the network, providing you with detailed information on each, including IP address, port number, OS type and version (Figure 2).


Figure 2: Database Discovery scan results

Data Classification

After database discovery, it is important to understand what kind of data exists in your databases. The goal here is to look for any sensitive or privileged information. SecureSphere can identify sensitive data using column names or a content-based search using regular expressions, making it highly accurate (Figure 3).
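SecureSphere's matching rules are configured inside the product, but the idea of content-based classification can be sketched in a few lines of Python. The category names and regular expressions below are illustrative only, not Imperva's actual rules:

```python
import re

# Hypothetical patterns illustrating content-based classification;
# a real product ships far more precise, validated rules.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def classify(value: str) -> list[str]:
    """Return the sensitive-data categories a column value matches."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(value)]

for sample in ["mark@example.com", "4111 1111 1111 1111", "hello world"]:
    print(sample, "->", classify(sample))
```

Combining column-name checks with content matches like these is what keeps false positives low: a column named `cc_number` whose values also match a card pattern is a much stronger signal than either clue alone.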


Figure 3: Data classification scan results

Security Policy

Security policies play a key role in protecting against known/unknown attacks and threats and complying with regulations and organization guidelines. Let’s say for example you have two DBAs in different countries trying to access a database in Germany. You would need to define and enforce security policies that ensure the DBAs are accessing only the data they are authorized to access based on their location (Figure 4).

You can set up a security policy in SecureSphere that allows Mark, a DBA in Germany, to access the database in Germany, but blocks access by Franc, a DBA in Singapore, who should not be allowed access due to his geolocation (Figure 5).


Figure 4: User role and location mapping

In our example, SecureSphere’s security policy tracks and blocks based on:

  • User first name, last name and role
  • The country from which they are accessing the data
  • The query they are trying to run
  • The database they are trying to access, and whether that database contains any sensitive information
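The decision logic those attributes feed can be sketched roughly as follows. The user names come from the example above; the policy table and function are hypothetical, not SecureSphere's implementation:

```python
# Hypothetical allow-list for illustration; real SecureSphere policies
# are configured in the product, not hand-written in code.
POLICY = {
    # (user, source country) pairs permitted to reach the German database
    ("Mark", "Germany"): "allow",
}

def evaluate(user: str, country: str, db_has_sensitive_data: bool) -> str:
    """Block any access to sensitive data that the policy does not allow."""
    if not db_has_sensitive_data:
        return "allow"
    return POLICY.get((user, country), "block")

print(evaluate("Mark", "Germany", True))     # the DBA in Germany
print(evaluate("Franc", "Singapore", True))  # the DBA in Singapore
```

Defaulting to "block" for any (user, country) pair not explicitly allowed is the posture Article 44 calls for: data stays inside the EU unless a transfer has been deliberately authorized.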


Figure 5: SecureSphere security policy blocks a DBA in Singapore from accessing a German database

Audit Policy

Auditing is necessary as it records all user activities, provides visibility into transactions, and creates an audit trail that can assist in analyzing data theft and sensitive data exposure.

In the snapshot below, you see a response size of “0” for the DBA in Singapore, confirming he was not able to access or query the database in Germany, whereas the DBA from Germany has a response size of “178”, indicating he was able to execute the query and access the database (Figure 6).
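The same check is easy to automate when audit logs are exported for analysis. The record fields below are modeled loosely on the audit view in Figure 6 and are illustrative only:

```python
# Simplified audit records; field names are invented for this sketch.
audit_log = [
    {"user": "Mark",  "country": "Germany",   "query": "SELECT * FROM customers", "response_size": 178},
    {"user": "Franc", "country": "Singapore", "query": "SELECT * FROM customers", "response_size": 0},
]

# A response size of 0 indicates the query was blocked before returning data.
blocked = [rec for rec in audit_log if rec["response_size"] == 0]
for rec in blocked:
    print(f'{rec["user"]} ({rec["country"]}) was blocked: {rec["query"]}')
```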


Figure 6: SecureSphere audit logs showing database activity

Measurement and Reporting

SecureSphere can also create detailed reports with charts using multiple parameters such as user, database, schema, query, operation, response size, sensitive data access, affected rows and more (Figure 7).  This information can be used to report on activity that assists in maintaining compliance with various regulations.


Figure 7: Create and manage reports on database activity

Watch our demo to learn more about how SecureSphere can address the GDPR requirement of preventing data from crossing a specific geographic location.

Source: https://www.imperva.com/blog/2017/08/data-across-borders-gdpr/?utm_source=linkedIn&utm_medium=organic&utm_campaign=2017_Q3_bordersgdpr

Author: Sumit Bahl


Mastering the Endpoint, A Forrester Report

Category : McAfee

Organizations now monitor 10 different security agents on average, and swivel between at least five different interfaces to investigate and remediate incidents. Learn how to master endpoint security with these recommendations from Forrester.

Please download the Forrester report.


Mobile Document Management with Docs@Work

Category : Mobile Iron

Content is the lifeblood of the enterprise. Today companies face a proliferation of content that is viewed on mobile devices.

Users with mobile devices need easy and secure access to the documents that are essential for their work, where they can easily view content, make edits and share that content securely with their colleagues, without compromising corporate data. The challenge lies in the Mobile IT team being able to provide a native mobile content management experience for users without sacrificing document security.

MobileIron Docs@Work provides an intuitive way to access, annotate, share, and view documents across a variety of email, on-premise and cloud content management systems, enabling complete Mobile Content Management for the enterprise. Data loss prevention (DLP) controls are set by IT to protect documents from unauthorized distribution and end users can be more productive with integrated editing capabilities. Docs@Work controls whether third-party apps can access stored documents and utilizes policies and permissions set in MobileIron Core.

Docs@Work Features Include:

Secure Content Hub

Allows the user to securely view and store documents in specific apps on their device. The secure content hub can selectively wipe documents when a user or device falls out of compliance and blocks clipboard actions for enterprise content.

Email Attachment Control

Emails are scanned for attachments and then filtered. If necessary, “open in” access to the attachments is blocked so that only Docs@Work can open them.

Content Repository Access

Mobile users have secure access to an array of content across on-premise and cloud repositories such as SharePoint, OneDrive Pro, Office 365, Box and Dropbox. Users can be more productive and easily access a range of content from one mobile app and IT can establish key policies to ensure enterprise content is secured.

Automatic and Secure Tunneling to Enterprise Content Repositories

Accessing content is secure and easy with Single Sign On (SSO) and per-app VPN for Docs@Work. An end user can easily access enterprise content repositories behind the corporate firewall without needing a separate VPN.

Integrated Editing and Upload to Source Repositories

With Docs@Work, users can mark up documents downloaded from content repositories or saved from email attachments. Edited documents on the mobile device can be securely stored or shared with colleagues and re-uploaded to available repositories. Docs@Work supports annotation for PDF and non-PDF document types.

Published Sites

With Docs@Work Published Sites, content administrators can proactively push important content to a user’s device. All content is securely stored, synchronized and available for offline viewing. Administrators can choose which content or repository locations should be distributed, based on a variety of device and user attributes, such as enterprise directory group membership.

Docs@Work Security Manager

Docs@Work Security Manager provides an additional level of security and visibility. It allows organizations to set document upload, download, edit and expiration policies, and to selectively wipe specific documents off a device. For example, if a price list must be updated every 30 days, Docs@Work Security Manager can ensure that the expired document is wiped from the device and replaced with the new one. The Docs@Work Security Manager activity trail provides visibility into which work documents were accessed, when and by whom they were accessed, and on what device. Enforcement actions taken are also tracked, providing granular reporting that supports the organization’s compliance strategy.

Source: https://www.mobileiron.com/en/products/product-overview/docswork


How to Perform Continuous ONTAP Upgrades Without Sacrificing IT Stability

Category : NetApp

Don’t be surprised if you see the NetApp IT storage team busy doing other tasks during ONTAP® upgrades these days. Thanks to the power of the First Application System Test (FAST) program, which supports early adoption of ONTAP, the Customer-1 program is upgrading to the latest version of ONTAP with absolutely no disruption. In fact, the team is doing multiple upgrades on a weekly basis. This blog explores how we integrate ONTAP upgrades into a production environment without sacrificing IT stability.

Good Old Days?

Remember years back when application data was deployed on a filer? We would rarely see downtime unless there was a hardware failure or power outage. Configuration changes, such as export rules, network interfaces or routes, were sometimes made on the fly in local memory. We’d forget about those in-memory changes on the filer.

When a hardware failure or power outage occurred, restoring the affected storage resource could quickly turn into a fire drill. Some of the non-persistent changes were not documented, resulting in a mad scramble to discover the missing configuration. No wonder application owners resisted storage upgrades; they translated to downtime. We often delayed ONTAP upgrades to ensure we had stable operations. The irony of this situation was not lost on our storage team. We expected NetApp customers to be using the latest version of ONTAP, but we weren’t always using it ourselves.

Customer-1 Adopts FAST

The Customer-1 program is the first adopter of NetApp products and services in our IT production environment.  It is also responsible for the operation of our global data centers. Recognizing that we were missing out on the many features of new ONTAP releases, Customer-1 joined NetApp Engineering’s FAST Program several years ago.

Under FAST, we agreed to deploy release candidate versions of ONTAP storage management software in exchange for providing feedback on bugs and other performance issues prior to general release. We would exercise the code as well as reap early access to ONTAP’s latest features. Our goal was to improve our ONTAP lifecycle management so we were no longer afraid of storage upgrades.

Now Customer-1 installs pre-release ONTAP code into our lab and backup when Customer-0 (the Engineering IT group that also runs release candidate versions in its production environment) says the code is stable. Once we are comfortable with the stability of the code running in our lab (a non-customer facing and low-risk environment), we deploy ONTAP into sub-production and then into production.

We have some instances serving more than 100 applications. At first, trying to install even one ONTAP upgrade/week was challenging. With so much data to process, it was easy to miss potential risks. FAST helped us whittle our upgrade preparation process down to four hours using manual checklists and cross-checks.

To further improve efficiency, we added python scripts to compile a summary report with a pass/fail matrix that flags areas of concern. Now the Command Center can complete the precheck list in two hours and focus on the flagged areas.
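Our actual scripts are not public, but a pass/fail precheck matrix of this kind might look roughly like the following sketch; the check names and thresholds are invented for illustration:

```python
# Hypothetical precheck rules; real upgrade prechecks cover many more
# cluster health signals than the three shown here.
CHECKS = {
    "aggregate_free_space": lambda c: c["aggr_free_pct"] >= 20,
    "failed_disks": lambda c: c["failed_disks"] == 0,
    "lif_home_ports": lambda c: c["lifs_not_home"] == 0,
}

def precheck(cluster: dict) -> dict:
    """Return a pass/fail matrix so reviewers can focus on the failures."""
    return {name: ("PASS" if test(cluster) else "FAIL") for name, test in CHECKS.items()}

cluster = {"aggr_free_pct": 25, "failed_disks": 0, "lifs_not_home": 2}
for check, result in precheck(cluster).items():
    print(f"{check:22} {result}")
```

Collapsing raw cluster data into a one-line verdict per check is what turns a four-hour manual review into a two-hour scan of the flagged rows.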

Although painful at first, the process has been liberating in many ways, especially with ONTAP’s non-disruptive feature. We can upgrade one to two ONTAP clusters/week in addition to launching major releases twice a year and patches in between. Our lifecycle management process follows a regular cadence with absolutely no impact on the stability of business applications. Over time, we have identified 30 software bugs for Product Engineering to fix.


Our ability to repeatedly deliver ONTAP upgrades without any disruption to IT operations has also built the confidence of our customers, the business application owners. We regularly meet with them to proactively review the release schedule to avoid conflicts with application releases and ensure there are no surprises.

Shrinking Lifecycle

Over time, we have experienced numerous benefits. Our software lifecycle has shrunk; we are now running the latest ONTAP version in our production environment in 45 days or less. We have expanded the process to include NetApp OnCommand® Insight, AltaVault®, StorageGRID®, E-Series, and CI switch upgrades.

We have also increased our storage efficiency by taking advantage of ONTAP’s features well in advance of their general availability. For example, we were able to leverage the ONTAP 8.3 cluster image update wizard, which updates by cluster instead of node. We are currently running ONTAP 9.2, whose cross-volume (aggregate-level) deduplication has helped improve our flash storage efficiency.

Thanks to the rigor of FAST, we have a constant flow of upgrades, but we no longer have to fear downtime or search frantically for configuration scripts. Instead, ONTAP upgrades are just another task in our daily routine. And that leaves us more time to work on the fun stuff in our jobs.

Source: https://newsroom.netapp.com/blogs/how-to-perform-continuous-ontap-upgrades-without-sacrificing-it-stability/

Author:  Ram Kodialbail


Kevin Mandia, CEO of FireEye, Speaks at DoDIIS17 About Cybersecurity

Category : FireEye

Kevin Mandia, CEO of FireEye, talks about Russia, China, Iran, North Korea and cyber security at DoDIIS17.



Notes From DODIIS 2017: Talking Cyber Espionage and Insider Threat

Category : Forcepoint

Read on for a sneak peek into some of the Insider Threats insights I will be sharing at DoDIIS today as part of the “Industry Perspective on Cyber Espionage and Insider Threat” panel.

Insider threat is both a very old concept and a new one. The cyclical nature of technology concepts is constant, with only the players and methods changing. However, the instruments of data movement are getting smaller. In the past a person had to literally carry reams of paper out of the building to do the same kind of damage a person with a cell phone camera, cloud storage account, or a USB drive can today. Additionally, interconnections within the growing technology-enabled physical world and the infinitely connected web have allowed for more esoteric ways of information movement and access through the average smart home thermostat or Wi-Fi-enabled light bulb.

This newfound ability to deal damage in small packages has created a secondary issue: the accident. When data was big, taking the form of paper, floppy disks, or CD-ROMs, it took physical media or a lot of upload time to cause widespread harm. Again, this isn’t a concept any reasonable security practitioner is unaware of. In fact, I’m counting on it. The issue is not that there is growing risk and the world is a harsh place, or that people will forever try to gain an unfair edge, but the reality that the line between maliciousness and accidents is growing ever greyer.

The Grey Area between Accidents and Maliciousness

When exfiltration and infiltration methods were complex and incredibly risky (think Cold War spy tactics) an accident would be defined as taking a folder of documents home, leaving a laptop on a train or having your Blackberry stolen. Now it is as simple as an unnoticed incorrect autocomplete address in Outlook with a sensitive attachment, or a misunderstanding about sensitivity and upload to a cloud drive. A mistakenly clicked email about a fake password reset can risk a whole company, just ask a few retailers or Hollywood producers.

This creates several avenues of discussion, mainly around training and awareness (do it), thoughtful and effective controls (get some), and security analysis and response (make it tougher). The challenge with insider threats is that mindset is everything. The motivation and goal of the actor is what determines the real difference between a stern lecture, employment termination or law enforcement arrest. Did the person really mis-click that link in the email? Did they really not notice the other address? Actually, they probably didn’t notice and just thought they had to provide their password. Realistically, there are only a few real-life Jason Bourne or Ethan Hunt types in the world — and if those people were targeting you, odds are you’d have little chance of stopping it.

We need to realize that people are people and not computers. If we approach insider threat analysis as a black and white issue like malware then we risk more than wasted time. If an analyst suspects a computer to be infected with malware, they can patch or re-image without a second thought. The computer won’t get offended or quit. But we all live in a world of greys, not black and white. The sooner we start to recognize that different tactics and analysis are needed to better assess activities to determine that mindset the better.

This isn’t about ignoring or discounting troubling events, it is about understanding context, asking questions and realizing that while we have machines learning how to identify malware patterns we just aren’t that good at people yet. A computer really can’t have good days and bad days, but people have every kind of day imaginable. Some end one day feeling like they need to take their traffic and coffee-fueled frustrations out on others and “get their due,” but go back home, have a Coke and a smile and then the next day is a bit brighter. Let’s look at insider threat as managing both the light and dark side of the human condition, and ensure that people are aware of the rules, we have good controls to help contain when they forget or break them, and analysis that isn’t based on “guilty before proven innocent.”

If you are in St. Louis attending DoDIIS today be sure to stop by Room 103 at 1:30 p.m. CT to hear more during the “Industry Perspective on Cyber Espionage and Insider Threat” panel.

Or, if you aren’t attending DoDIIS but would like to learn how you can “Operationalize a Practical Insider Threat Program” in your organization, view my webcast here.

Source: https://blogs.forcepoint.com/insights/notes-dodiis-2017-talking-cyber-espionage-and-insider-threat?utm_source=LinkedIn&utm_medium=Organic__Social_&utm_content=DoDIIS_Insider&utm_campaign=worldwide_organic_social_corporate_linkedin&sf_src_cmpid=70137000000QGcV&Agency=none&Region=GLOBAL&adbsc=forcepoint73652657&adbid=6302864142923558912&adbpl=li&adbpr=7584467

Author: Brandon Swafford


Gigamon IT Survey Highlights Lack of Visibility as a Leading Obstacle to Securing Enterprise and Hybrid Cloud Networks

Category : Gigamon

Over two thirds of IT decision-makers cite blind spots as a major obstacle to data protection

Gigamon, the industry leader in traffic visibility solutions, today announced the results of a commissioned survey, “Hide and Seek: Cybersecurity and the Cloud,” conducted by Vanson Bourne, an independent market research company. The survey polled information technology (IT) and security decision-makers in the U.S., the U.K., Germany and France about their cloud security preparedness and network visibility issues.

The results of this survey demonstrate that lack of visibility is leaving organizations struggling to identify network data and investigate suspicious network activity tied to malicious attacks. Sixty-seven percent of respondents cited network “blind spots” as a major obstacle to effective data protection, while 50 percent of those who do not have complete visibility of their network reported that they lacked sufficient information to identify threats.

Survey findings pinpoint three root causes of data blindness that are posing network security risks:

  • The increasing speed and growth of network traffic stresses monitoring and security tools, which are not adept at handling large amounts of traffic. Seventy-two percent of respondents report that they have not scaled their monitoring and security infrastructure to meet the needs of increased data volume.
  • High value information is being migrated to the cloud, where visibility is limited and application data is not easily accessible. Eighty-four percent of respondents believe that cloud security is a concern holding their organization back from adopting the latest technologies. When asked what types of information they are moving to the cloud, 69 percent of respondents reported day-to-day work information and 56 percent cited critical and proprietary corporate information.
  • A large amount of network data remains hidden due to data and tools still being segmented by organizational boundaries. IT and security decision-makers are not able to quickly identify and address threats and security events. Seventy-eight percent of respondents report that because different network data is being utilized between NetOps and SecOps teams, there is no consistent way of accessing or understanding it. Forty-eight percent of respondents who do not have complete visibility over their network report that they did not possess information on what is being encrypted in the network.

“Today’s attackers have the advantage as cybercrime is a thriving economy and attacks are focused on infiltrating the network and stealing important company information,” said Ananda Rajagopal, vice president of products at Gigamon. “It is imperative for enterprises to adopt a visibility platform that provides visibility and control of their network traffic, and one that’s integrated with their security tools to accelerate threat detection and improve efficiencies.”

The Gigamon Visibility Platform directly addresses network “blind spots” by offering:

  • The most scalable visibility platform with up to 800Gbps of processing capability per node and up to 25.6Tbps when clustered, to meet the latest demands of the network.
  • Cross-architecture deployments on premises, in remote offices and in the cloud to securely migrate high-value information to public clouds.
  • An end to siloed data and segmented tools. Monitoring and security tools access the same data, encrypted or not, so that network and security operators can consistently access and understand what matters.

Gigamon solves data blindness by providing security and network operations teams with the pervasive visibility and control to automate and accelerate threat detection for securing enterprises and hybrid clouds. Learn more about our Gigamon Visibility Platform and Gigamon Visibility Platform for AWS.

The independent survey was commissioned by Gigamon and administered by Vanson Bourne in May 2017. Respondents consisted of 500 IT and security decision-makers of organizations with over 1,000 employees. The regional representation of respondents includes 200 respondents in the U.S. and 100 respondents each in the U.K., France and Germany.

Additional Resources

  • Vanson Bourne survey overview page
  • “Hide and Seek: Cybersecurity and the Cloud” report presentation
  • “Hide and Seek: Cybersecurity and the Cloud” executive summary (U.S. results)
  • “Hide and Seek: Cybersecurity and the Cloud” executive summary (U.K. results)
  • “Leading Obstacle to Securing Enterprise and Hybrid Cloud Networks” instagraphic
  • Highlights of the Vanson Bourne survey blog

Source: https://www.gigamon.com/company/news-and-events/newsroom/gigamon-it-survey-highlights-lack-of-visibility-leading-obstacle-security-enterprise-hybrid-cloud-networks.html


Quantum Computing? Really?

Category : HP Security

We just had an interesting discussion here about quantum computing, quantum cryptography and post-quantum cryptographic algorithms. I’m afraid that I might have wasted a few people’s time with a rant about why I’m not impressed by the possibilities for quantum computing.

I started by noting my thoughts on how hard it is going to be to build a large-scale quantum computer. Keeping quantum coherence long enough to do significant calculations with quantum computers may turn out to be really hard. As in so hard that we’ll need to create a new level above NP-hard to describe how hard it is.

This thinking might be a bit out of date. I haven’t had a lab where I’ve had equipment that let me play with quantum effects for over 20 years. Things might have become much easier since then, but I still think that it’s going to turn out to be extremely hard to build large-scale quantum computers. Perhaps even impossible. If I had to bet, I’d bet on impossible.

But I also rambled on about why I think that quantum computers are incredibly sloppy because they might be able to accomplish so little with so much.

If you have a register comprising n classical bits, that register can hold any one of 2^n possible values. If you replace those classical bits with quantum bits (qubits), that register can hold all 2^n possible values at once. An eight-bit register can hold any one of 256 possible values, while a register of eight qubits can hold all 256 of them at once. A 64-bit register can hold any one of 2^64 possible values, while a register of 64 qubits can hold all 2^64 of those values at once.

If you’re going to use Shor’s algorithm to factor an n-bit RSA modulus, you roughly need a register comprising 2n qubits. Today, most RSA moduli are of the 2,048-bit variety, so to use Shor’s algorithm to factor one you’d need 4,096 qubits. That’s a lot. Those 4,096 qubits are holding 2^4,096 values at once. We have that 2^4,096 is about 10^1,233. In either form, that’s a very big number.
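You can check these magnitudes directly in Python, since its integers have arbitrary precision:

```python
# The number of basis states of a 4,096-qubit register is 2**4096.
states = 2 ** 4096

# Count the decimal digits: 2**4096 is roughly 10**1233.
print(len(str(states)))

# Far more states than the ~10**80 atoms in the visible universe.
print(states > 10 ** 80)
```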

How big?

There are about 10^80 atoms in the visible universe. You can get this number two ways. One involves a SWAG (Scientific-Sounding Guess) of the number of galaxies in the visible universe, the number of stars in the typical galaxy, the mass of a typical star, etc. Or you can derive the number from precise observations made by astronomers. The results are about the same.

I personally find this to be more than slightly annoying. It reminds me more than a little of when I used to work in finance, where we had roughly two types of analysts: quants and cowboys. The quants (like me) were generally introverts who favored their computers over people and who would spend weeks in dimly lit rooms carefully building mathematical models to use to value deals. The cowboys were generally extroverts who drank a lot and just used their intuition to value deals. Annoyingly, there seemed to be absolutely no difference in how well either type of analyst did. So in addition to learning a lot about finance in this particular job, I also learned that life isn’t even close to being fair.

In any event, if you had a quantum computer that holds way more states than the number of atoms in the visible universe, you could use it to crack a 2,048-bit RSA encryption key. You’d think that with that many states you could end hunger, cure cancer, reverse global warming and bring back the TV show Firefly. But you can’t. That’s why I’m not impressed.

End of rant.

Source: https://www.voltage.com/crypto/quantum-computing-really/

Author: Luther Martin


Cisco and IBM collaborate to increase security effectiveness

Category : Cisco

On May 30, 2017, Cisco and IBM Security announced a key relationship to address the rising tide of security threats and the need to respond rapidly. Cisco and IBM Security will work together to offer specific product integrations, a managed security service provider (MSSP) roadmap, and threat intelligence collaboration programs.

The relationship focuses on making security simpler and more effective and is a reflection of each company’s commitment to openness and interoperability. Together, Cisco and IBM are focused on reducing the time to detect and mitigate threats, giving you integrated tools to automate threat response with greater speed and accuracy.

What are the offerings?

Here’s a closer look at the three pillars of the relationship:

1. Product integrations

Both organizations are building integrations among their product portfolios. Cisco is building new apps for the IBM QRadar SIEM platform, which helps security teams understand and respond to advanced threats. A variety of Cisco® security solutions will increase the effectiveness of IBM QRadar® over time, with data from networks, endpoints and the cloud.

The first three apps focus on integrations with Cisco Firepower® technology, Cisco Threat Grid and Cisco Identity Services Engine (ISE), and will be available on the IBM Security App Exchange.

Meanwhile, IBM is building extensions into Resilient and X-Force Exchange to include Cisco products. Resilient and X-Force Exchange will be able to ingest Cisco Threat Grid content.

2. Services

The IBM End to End Outsourcing and Managed Security Services team is working with Cisco to deliver new services aimed at further reducing complexity. As enterprise customers manage their equipment on premise and in a datacenter, they are also looking to migrate their security infrastructure to public and private cloud providers. IBM Security will provide outsourcing and managed security services to support Cisco security platforms in leading public cloud services as well as legacy on premise and datacenter environments.

Cisco and IBM Security customers will be able to consume these solutions in a way that complements their existing architecture. Customers will be able to build and manage their own integration, working with a trusted channel partner common to both IBM and Cisco, as well as deploy a full turnkey managed solution supported by IBM Security Services.

3. X-Force and Talos research collaboration

We have also established a new relationship between the IBM X-Force and Cisco Talos security research teams, who now share threat intelligence research and coordinate around major cybersecurity incidents. Shared intelligence also means enhanced performance of security products, and richer outcomes such as reduced time to detect.

For example, Cisco and IBM threat research teams collaborated on defending against the WannaCry ransomware attack. IBM and Cisco researchers coordinated their actions and exchanged insights into how the malware was spreading. Afterward, they continued the joint investigation to provide clients and the industry with the most relevant information.

What’s new and what’s next?

Product integrations will become available in the coming weeks, starting with the Cisco Firepower NGIPS, NGFW and Threat Grid apps. The Cisco ISE app will follow in the late fall and additional apps will become available later in 2017 and beyond. We are excited that the IBM Security team is working closely with Cisco product teams, and we hope to highlight this collaboration in future promotions from both companies including blogs and webinars.

Another important announcement

Today, IBM announced its intention to stop selling its Intrusion Prevention System (IPS) solution, the IBM QRadar Network Security (XGS) product line. This decision will take effect on December 31, 2017. However, current customers will be supported for a full five years through December 31, 2022.

IBM’s decision was based on an analysis of market conditions, competitiveness and strategic fit, and it also reflects IBM’s belief in the strength and value of our partnership. When IBM XGS customers look to refresh their network security defenses, IBM’s sales organizations will introduce Cisco’s Firepower NGIPS and Firepower NGFW solutions.

More information on the Cisco and IBM security alliance is coming soon.

Visit our Cisco Firepower page to learn more about Cisco’s industry leading NGIPS.

Source: https://blogs.cisco.com/security/cisco-and-ibm-collaborate-to-increase-security-effectiveness?CAMPAIGN=Security&Country_Site=us&POSITION=Social+Media&REFERRING_SITE=LinkedIn&CREATIVE=Cisco%20Security

Author: Dov Yoran 


The growing U.S. IT productivity gap

Category : Citrix

Productivity growth has slowed down despite our rising investment in IT. Learn the causes of the slowdown and how to close the gap and increase productivity.

Demo series

See how Citrix Workspace delivers an integrated digital workspace that’s streamlined for IT control and easily accessible for users.

Source: https://www.citrix.com/products/citrix-workspace/resources/productivity-gap-infographic.html?utm_content=bufferd124f&utm_medium=social%2Bmedia&utm_source=linkedin.com&utm_campaign=corp%2Bsocial%2Bmarketing%2B(organic%2Bposts%2Band%2Bfeeds)