Category Archives: NetApp


Introducing Elio

Category : NetApp

Meet Elio, NetApp’s new virtual support assistant powered by IBM Watson cognitive computing and part of Digital Support.



Software Defined Storage: The Future of Storage Coming into Focus

Category : NetApp

As data becomes more diverse, more distributed, and more dynamic, data management needs to adapt. For many companies, data is no longer centralized in the data center — it’s also at remote sites, in the cloud, and in temporary instances. Data is diverse, spanning archival to high performance, file and block, secure and open, critical to temporary. This diversity requires specially adapted services. And finally, data changes characteristics: it grows, moves, shrinks, and changes in importance.

In addition to adaptability, agility is required. Software-defined storage promises this cloud-like agility for on-premises deployment. NetApp® ONTAP® Select was released just a year ago, and we are already seeing proof of its impact.

How do you create powerful momentum for a breakthrough data management service? By delivering unprecedented flexibility and innovation that meet real customer needs. Case in point: NetApp ONTAP Select.

Did You Know?

ONTAP Select is an easily deployed data management solution that runs on your existing servers, transforming your current hardware into a software-defined storage (SDS) infrastructure. You can have it both ways: the best of the cloud (agility and superfast granular scaling) along with the resiliency and peace of mind of closely managed, on-premises resources.

ONTAP Select is used across a vast array of industries and use cases, including financial services, high-tech manufacturing, pharmaceuticals, transportation, and public sector, and in all geographical regions of the globe. Here are some examples:

  • A European consumer goods software company needed a robust, cost-effective data management solution that enabled centralized backup of a diverse dataset, pulled from consumer services, loyalty programs, social media, and IoT sensors. ONTAP Select was chosen over the competition because it offered the right combination of replication and backup capabilities to support their business objectives.
  • A top-10 enterprise software company worldwide is using Select for group shares and remote locations, taking advantage of its ease of deployment, cost efficiency, and management consistency. Their requirements did not call for dedicated appliances, and they were able to leverage existing hardware to create a unified storage environment across remote sites and their data centers. The ONTAP framework had already served them well, and Select provided a way to tie remote sites into the existing infrastructure, with strong results in both operational efficiency and financial return.
  • A global legal services company based in the Midwest found that ONTAP Select was the right fit for its needs, specifically for remote sites where hardware already existed but where it wanted to tie data collection and replication back to its data center environment with the same interface and tools, leveraging NetApp’s Data Fabric approach. The result was superfast replication at a very cost-effective price point that can be easily expanded.
  • A worldwide pharmaceutical company is deploying ONTAP Select vNAS in tens of countries to provide NAS services on its HCI platform. The addition of ONTAP Select gives users home directories and file shares in ROBO locations. Not only does the vNAS solution provide an extremely cost-effective answer for CIFS and NFS, but it also adds a set of capabilities otherwise not available in HCI solutions today.

In fact, successful companies are building new business solutions based on ONTAP Select, such as Sugon, a leading Chinese IT infrastructure company that is delivering Select with its systems to grow its data storage business. And Vector Data is providing ruggedized systems with Select for tactical applications, so that land, sea, and mobile forces can gather and share data to help their missions succeed, which benefits all of us.

Why Is ONTAP Select Growing So Quickly?

The answer is focused innovation. NetApp has been a leader in data storage and data fabric solutions for 25 years by pushing the innovation envelope based on evolving customer needs. Select is a perfect example of how we deliver flexibility through targeted innovation:

  • ONTAP Select vNAS provides NFS and CIFS services on any ESXi-based HCI system.
  • ONTAP Select can be quickly installed on any commodity hardware and be up and running in minutes.
  • You can take advantage of NVMe drives (an industry first) with ONTAP, giving you a broad choice for deployment with groundbreaking performance.

It’s no surprise that Select is gaining traction so quickly (growing at more than 50% month over month for the past year) and has 10PB+ under management for more than 200 customers around the globe and across a wide range of industries. And we’re just getting started.

Source: https://blog.netapp.com/software-defined-storage-the-future-of-storage-coming-into-focus/

Author: Jay Subramanian



NetApp AutoSupport is Evolving

Category : NetApp

Those eagle-eyed NetApp users among you may have noticed that there have been a few changes to NetApp’s Support site recently. Namely, the introduction of Active IQ. This blog post focuses on Active IQ, the new services and features that come with it, and the benefits they’ll bring to NetApp customers.

What is Active IQ?

Active IQ is the evolution of AutoSupport. It’s a cloud service that combines AutoSupport telemetry and My AutoSupport features with new predictive analytic capabilities across the NetApp Data Fabric. Using the AutoSupport information gathered from your Data Fabric assets, Active IQ provides proactive insights to improve availability, efficiency, and performance of your storage systems and help you optimise operations across your hybrid cloud.

Active IQ is available from the Tools menu on the NetApp Support site and via a mobile app.

New features of Active IQ include:

  • Customizable, responsive dashboard
  • Capacity trending and forecasting that lets you know when you need more storage
  • One-click lookup for your systems, sites, groups, and clusters
  • Workload tagging
  • Improved visibility for case tracking and trending

What’s the Big Deal About Active IQ?

Active IQ offers a set of advanced data services that deliver analytics, insights, and advisories based on community wisdom from the massive user base NetApp has built up over the last 20 years.

AutoSupport telemetry feeds data into Active IQ from all these Data Fabric endpoints: AltaVault, FlexPod, SolidFire, NetApp HCI, FAS, AFF, ONTAP Cloud, ONTAP Select, E-Series, and StorageGRID — so pretty much all NetApp products are covered. It also feeds information from other Data Fabric assets such as OnCommand.

Active IQ leverages machine learning to teach the telemetry system new patterns, so it’s continually learning and adapting to your evolving environments.

As Active IQ is a cloud-based service, you’ll be able to keep on top of the health and efficiency of your systems anywhere, anytime, from any device.

Using information from your systems, Active IQ can:

  • Recommend upgrades for ONTAP systems and provide upgrade plans
  • Proactively identify system risks related to configuration issues or known bugs
  • Provide configuration, capacity, efficiency, and performance views and reports for better management of your NetApp systems
  • Predict storage growth to identify capacity addition needs (a rough sketch of the idea follows this list)
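
None of this is magic, even if the models are. Active IQ’s actual forecasting algorithms aren’t public, so the following is only a rough sketch of the underlying idea: fit a trend to recent capacity samples and project when the system runs out of space.

```python
# Illustrative only: a linear-trend capacity forecast, NOT Active IQ's
# actual (unpublished) model. Input: one used-capacity sample per day, in TB.
import numpy as np

def days_until_full(used_tb, total_tb):
    """Fit a straight line to daily usage and project when it hits capacity."""
    days = np.arange(len(used_tb))
    slope, _ = np.polyfit(days, used_tb, 1)  # TB of growth per day
    if slope <= 0:
        return None  # usage flat or shrinking; no exhaustion forecast
    return (total_tb - used_tb[-1]) / slope

# Hypothetical system: ~0.5 TB/day growth against 100 TB of capacity
samples = [70.0, 70.6, 71.1, 71.5, 72.0, 72.4, 73.1]
print(f"Estimated days until full: {days_until_full(samples, 100.0):.0f}")
```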

Note: You’ll need to enable AutoSupport on your storage systems to receive the benefits of Active IQ.
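
If you’re enabling it by hand, the clustered ONTAP CLI command is “system node autosupport modify -node * -state enable”. For automation, here is a minimal sketch against the ONTAP REST API; it assumes the /api/support/autosupport endpoint introduced with the ONTAP 9.6 REST API and uses placeholder credentials, so check the API reference for your release before relying on it.

```python
# Minimal sketch: enable AutoSupport via the ONTAP REST API.
# Assumes ONTAP 9.6+ exposes PATCH /api/support/autosupport with an
# "enabled" field; verify against your cluster's API documentation.
import requests

CLUSTER = "https://cluster-mgmt.example.com"  # hypothetical management LIF
AUTH = ("admin", "password")                  # placeholders; use real creds

resp = requests.patch(
    f"{CLUSTER}/api/support/autosupport",
    json={"enabled": True},
    auth=AUTH,
    verify=False,  # demo only; validate certificates in production
)
resp.raise_for_status()
print("AutoSupport enabled, HTTP", resp.status_code)
```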

New and Improved Discovery Dashboard

The Active IQ Discovery Dashboard has been improved to provide a customisable and more responsive experience.

Updates include:

  • Product portfolio inventory
  • One-click capacity and contract renewal
  • New storage efficiency recommendations
  • Performance hot spots
  • Health summary and trends
  • Storage efficiency and risk advisory services
  • Upgrade recommendations
  • Link to guided problem solving and chat (integrated with Elio with IBM Watson™)
  • Summary of cases
  • Performance hot spots (coming soon)

Even the Mobile App Got a Facelift!

The new Active IQ mobile app replaces the My AutoSupport app and is available for both iOS and Android platforms. If you’ve already installed the app, it’s likely that it’s updated itself, so you won’t have to download it again.

Some new features in the app include:

  • One tap for:
    • Capacity additions
    • Renewals
    • Mobile chat (powered by IBM Watson)
    • Email for mobile app support
  • View installed base and system details
    • Sites, clusters, and systems
    • AutoSupport data and top sections
  • View proactive recommendations
    • Storage efficiency
    • Performance
    • ONTAP upgrades
    • System risks
  • Digital support content and capabilities
    • View and update case notes
    • Guided problem solving

You can also quickly locate systems through recent searches and add favourites. Be sure to regularly update, because this app is sure to get better and better as time goes on.

So, Why Do I Think You Need Active IQ?

SolidFire customers have benefited from Active IQ for some time now, and it’s great to see it extended to all NetApp systems. Being able to track your entire NetApp ecosystem in real time is a real value add. The continuous proactive monitoring and diagnosis of your systems, and the improved tools, will enable support to be delivered much more rapidly than ever before, and should ensure clusters are maintained and operated at the highest possible level of availability and performance. Furthermore, Active IQ is available at no extra charge for all NetApp customers under support.

Learn more about how Active IQ is leading the charge for NetApp’s data-driven evolution by watching this new overview video.

Source: https://blog.netapp.com/netapp-autosupport-is-evolving/

Author: Dave Brown



How to Uncover New Savings with Infrastructure Analytics

Category : NetApp

Business-level metadata is absolutely critical for controlling costs and deriving greater value from your IT infrastructure. For example, knowing that a certain storage system can provide the equivalent of $1 million worth of capacity isn’t particularly useful to anyone. However, if you can allocate and attribute that capacity down to the level of business units, users, or even individual developers, it creates awareness of where and how resources are being used. If you see that a business unit that isn’t contributing much to the bottom line is consuming more than its fair share of IT resources, someone in your company is likely to care about that. Ideally, you need to be able to tie every piece of physical infrastructure back to the activities of those who are using it, and at a granular level.

In a previous post, I discussed how OnCommand Insight (OCI) delivers business insights and control to help facilitate process-based automation. This time I want to show how OCI delivers business-level metadata that is helping customers enable new operating models and control costs.

Creative Cost Reporting Can Change Bad Behavior

Showback reporting has been a bit of a failure in many organizations. When a showback report arrives at the end of the month, it might show that a user or business unit asked for a certain number of virtual machines and that each of them cost $1,000. However, this type of reporting is often viewed as “funny money” that is used only for internal bookkeeping. When the people consuming the resources have no incentive to change their behavior, it remains business as usual.

This is where the idea of “shameback” reporting comes in. NetApp customers have used OCI to implement these types of cost awareness reports after they discovered that showback alone was not enough to change behavior. Instead of just providing a list of resources with a cost allocated to each resource, a shameback approach shows the delta between the resources requested by a user or business unit and the actual level of usage, along with a ranking of the worst offenders. For example, if a business unit requests a platinum VM, but could have satisfied its workload with a bronze VM, a shameback report created using OCI makes this difference clear for everyone to see.

A creative approach to cost reporting can lead individuals to change their behavior. No one wants to be at the top of the report, so they learn to become more intelligent about the resources they request. When applied across a large organization, this type of reporting can lead to better resource utilization and big savings.
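
To make the mechanics concrete, here is a minimal sketch of a shameback ranking. The data is made up, and this is not OCI’s reporting engine, just the underlying arithmetic: sort consumers by the gap between what they requested and what they actually use.

```python
# Illustrative shameback ranking: the delta between requested and actually
# used resources, worst offenders first. The numbers are hypothetical; in
# practice the inputs would come from OCI's utilization data.
requests_vs_usage = {
    # consumer: (vCPUs requested, vCPUs actually used on average)
    "dev-team-a": (64, 12),
    "analytics":  (32, 28),
    "web-portal": (16, 15),
    "dev-team-b": (48, 9),
}

def shameback(data):
    rows = [(name, req, used, req - used) for name, (req, used) in data.items()]
    return sorted(rows, key=lambda r: r[3], reverse=True)  # biggest waste first

print(f"{'consumer':<12}{'requested':>10}{'used':>6}{'wasted':>8}")
for name, req, used, delta in shameback(requests_vs_usage):
    print(f"{name:<12}{req:>10}{used:>6}{delta:>8}")
```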

Cost Awareness Dashboard Example

Smart Metering Increases Utilization Rates

A customer I work with in the UK recently noticed that the usage of AWS cloud services was growing rapidly, while the demand for internal IT resources was falling. It is a development-heavy operation, and a lot of developers were going to AWS, buying VMs to support a project, putting it on their company credit cards, and claiming it as an expense. That approach created a problem for the IT team, because they were losing customers to the cloud. It was also a potentially ticking time bomb for the offending business units with regard to compliance, security, and unanticipated costs. When you have no idea where your data is, you have no way of knowing how much this type of “shadow IT” activity is costing.

In this case, the IT team wasn’t competing with AWS on price or capabilities. It all came down to flexibility and agility. To help address this problem, we created a billing system for the customer that was essentially a phone bill for IT. It showed daily storage charges in GB/hour and hourly VM, CPU, and memory costs. Each report used a similar template, and charges for CPU, memory, and storage resources were tied back to the developers in various development teams across the different business units.

For this customer, the next step in attacking the situation is likely to be smart metering. Here in the UK, a nationwide energy plan called Economy 7 is one example of how smart metering can be used to change behavior. The plan prices electricity cheaper at night, when demand is low, to encourage customers to shift their usage to off hours. Data center operators often find themselves in a similar situation, with excess capacity at night, or whenever the off-peak period falls for their operation.

By offering customers—whether internal or external—price incentives to run workloads during off hours, you not only gain an opportunity to win back or keep customers, but you can also accommodate more data center activity without having to build out new capacity, add infrastructure, buy additional virtualization licenses, and so on. It’s kind of a win-win for IT. Of course, this scenario only works if you have access to analytics that allow you to track usage at fine granularity. OCI provides this granular view into infrastructure utilization.
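
Mechanically, both the “phone bill for IT” and smart metering come down to applying a time-of-day rate to metered consumption. A minimal sketch, with made-up rates in the spirit of Economy 7:

```python
# Minimal smart-metering sketch with hypothetical rates: off-peak hours are
# billed cheaper to encourage shifting workloads, as described above.
OFF_PEAK_HOURS = set(range(0, 7))  # midnight to 7am, like Economy 7
RATE_PER_GB_HOUR = {"peak": 0.0004, "off_peak": 0.0001}  # currency/GB-hour

def storage_charge(hourly_gb_used):
    """hourly_gb_used: 24 samples of GB consumed, indexed by hour of day."""
    total = 0.0
    for hour, gb in enumerate(hourly_gb_used):
        band = "off_peak" if hour in OFF_PEAK_HOURS else "peak"
        total += gb * RATE_PER_GB_HOUR[band]
    return total

# A team holding ~500 GB around the clock: off-peak hours cost a quarter of
# the peak rate, so shifting heavy activity to nighttime pays off.
flat_day = [500.0] * 24
print(f"Daily storage charge: {storage_charge(flat_day):.2f}")
```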

Make Your Business More Competitive

It’s difficult to compete with hyperscale cloud providers on the perceived cost of providing IT infrastructure services, but you can compete by offering flexible consumption and pricing options tailored to meet the needs of the business. OCI provides data insights that allow you to implement new operating models to control costs and increase the perceived value of your IT services, while helping to discourage shadow IT.

More Information

Discover how NetApp customers are benefiting from infrastructure analytics in these blog posts and by attending the NetApp Insight conference.

Source: https://blog.netapp.com/how-to-uncover-new-savings-with-infrastructure-analytics/

Author: Joshua Moore



Be Data Ready with NetApp and SAP to maximize your opportunities today – and tomorrow

Category : NetApp

Fill out the form and we’ll send you “NetApp for SAP top 10 reasons”.




The Wait Is Over – SnapCenter 3.0 Is Here

Category : NetApp

How can you jumpstart protecting your SAP HANA databases with NetApp® SnapCenter® 3.0? What do you need to consider when upgrading your SAP HANA to HANA 2.0 SPS1 or later? How do you migrate from NetApp Snap Creator® to SnapCenter?

In this post, I address these questions and explain how to jumpstart your data protection journey with SnapCenter 3.0 and the new SAP HANA plug-in.

The Wait Is Over

The first customer shipment of SnapCenter 3.0 began on July 28. NetApp customers can download the software and documentation from the NetApp Support site. With the SnapCenter SAP HANA plug-in, you can now protect your SAP HANA 1.0 single-container databases as well as your SAP HANA 2.0 SPS1 or later Multitenant Database Container (MDC) single-tenant databases.

Migrating from Snap Creator to SnapCenter

If your SAP HANA databases are already protected with Snap Creator, and no other changes or updates are planned, you can plan your migration to SnapCenter without any time pressure. The easiest way to start your data protection with SnapCenter is to install and configure it in parallel with Snap Creator and to test your operational processes by moving individual SAP HANA databases over. The purpose is to validate the configuration settings and the integration into your data center operations. A future blog post will focus on migration examples.

Upgrading to SAP HANA 2.0 SPS1 or Later

When you are planning to upgrade SAP HANA to HANA 2.0 SPS1 or later, the migration process looks different. With this HANA upgrade, the internal structure changes in such a way that snapshot-based backups created with SAP HANA single-container versions cannot be used for recovery with the new HANA MDC version.

Therefore, when you plan a migration to SAP HANA 2.0 SPS1, an immediate switch from Snap Creator to SnapCenter for data protection after the upgrade is mandatory. NetApp recommends keeping at least one snapshot-based backup of all HANA-related volumes, as well as a file-based backup, from before the upgrade. As usual for SAP upgrades, you should plan and test the upgrade not only for technical reasons, but also to validate the software and process dependencies.

Installing SnapCenter

The video “SC30-Initial Installation and Configuration” shows a SnapCenter SAP HANA plug-in installation as you would perform it for a typical test setup. For larger deployments and details about architecture, installation, and configuration options, consult the SnapCenter documentation and the NetApp technical report SAP HANA Backup and Recovery with SnapCenter.


Find Out More

NetApp SnapCenter 3.0 is easy to install and configure and offers a comprehensive, integrated data protection solution for SAP HANA single-container databases as well as SAP HANA 2.0 SPS1 or later MDC single-tenant databases.

Read my previous blog posts about SAP HANA and SnapCenter.

Source: https://newsroom.netapp.com/blogs/the-wait-is-over-snapcenter-3-0-is-here/

Author: Bernd Herth



EUC Decoder Ring for the HCI Buyer

Category : NetApp

In this post, I’m going to talk about the evolution of end user computing (EUC), and why for the modern EUC buyer, HCI is a must-explore path. If you’re not familiar with NetApp’s HCI announcement, I encourage you to explore what’s coming up.

And if you’re attending VMworld US 2017 this week, stop by booth 421 and ask all the questions. Without further ado …

Traditional designs – Inflexibility due to pods/silos

Having lived through “the decade of VDI,” I am glad we are where we are today. Back in the day, there was never a way to granularly control one group of users and keep them from affecting another.

Users are the most unpredictable asset of the organization. When the dev team decides to run SETI@home on virtual desktops or hold a departmental LAN party during lunch to get their Doom fix, you know the helpdesk is going to get busy. To get around this chaos, we built “pod”-based infrastructures to give users isolation. That isolation was super coarse, however; you could only protect large groups from affecting other large groups.

So we tried to understand the chaos with assessment tools. While they didn’t catch every crazy behavior, they provided us with empirical data to help size these “pods” for CPU, RAM, IOPS, capacity, concurrency, app usage, graphics, and network. Once you had the inputs from Lakeside or Liquidware (the number of concurrent users, the apps they used, and the resources consumed), you had a formula for calculating how much data center CPU, RAM, network, IOPS, and GPU was required to build a 2,000+ user pod.

Considering these pods are probably a full rack of gear, the unit of scale and isolation of resources is significant.

New design – EUC as a mixed workload

As VDI evolves, end user apps and end user data (files) become more accessible through a virtual workspace, enhancing EUC. An emerging trend is collapsing the previous VDI silos to run as just another workload in a truly software-defined data center. NetApp calls this out as a key use case for next-generation data centers and outlines the value of resource efficiency and much more.

IT is seeking the business value promise of the modern desktop virtualization infrastructure through desktop, end user apps, and end user data (files) accessible via a virtual workspace. VMware leads the charge here with Horizon Suite.

Now we approach a truly software-defined set of resources to rapidly provision and consume data at the pace of business.

For the modern EUC buyer, HCI leads the way

Today’s EUC buyer is looking for infrastructure beyond just convergence of hardware. They want fully programmable and massively simplified infrastructure to build apps and VMs and to offer consumption of IT to all modes of operation. Hyperconverged infrastructure (HCI) has been growing in adoption, especially for EUC, because of its ability to deliver resources as simply as possible.

Traditional SAN and converged infrastructure pale in comparison to HCI because of their complexity. The result is greater adoption and utilization of HCI for delivering EUC.

What falls short with first-gen HCI

As HCI came to market, the first wave was “no SAN”: the disks were direct-attached storage (DAS) managed via the HCI console, a truly unified provisioning experience. As that approach succeeded, the market rapidly grew to an abundance of offerings.

The second-wave debate was:

  • HCI “in kernel” – storage managed in the kernel of the hypervisor
  • HCI “guest VM” – storage managed in a guest VM outside the hypervisor kernel

Much has been debated about the strengths of both, and both work well for first-gen HCI. The reality is that first-gen HCI keeps hitting scaling issues. Its inflexibility also makes it impossible to scale along the original VDI resource requirements: compute, RAM, IOPS, capacity, and so on.

The design criteria call for true software-defined capabilities to pool, abstract, and extend. But first-gen HCI runs into limitations around:

  • Global efficiencies of inline dedupe
  • True multi-tenancy
  • Performance controls to keep bully VMs from creating the noisy neighbor effect (which is why you build silos)
  • Scale – both size of scale and flexibility in how you scale
  • API programmability

Why NetApp HCI for EUC

At its Analyst Day, NetApp announced the next generation of HCI. What NetApp HCI delivers is fully extended capabilities against design criteria that demand simplicity, performance, scale, flexibility, pre-programmed capabilities, and fully programmable APIs.

The platform is fully integrated to deliver a VMware private cloud experience, fully provisioned within minutes out of the box. Built on the foundation of NetApp Element OS, you’ll experience all that has thrilled EUC buyers previously: QoS, flexibility and scale, and overall simplicity.
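
That QoS is the SolidFire inheritance: per-volume minimum, maximum, and burst IOPS, which is exactly the control that keeps a bully VM from creating the noisy neighbor effect described above. Here is a minimal sketch using the Element API’s ModifyVolume method; the cluster address, volume ID, credentials, and API version path are placeholder assumptions to adapt for your environment.

```python
# Minimal sketch: pin a volume's performance envelope with Element QoS so a
# bully VM cannot starve its neighbors. ModifyVolume and its qos parameter
# come from the Element JSON-RPC API; the endpoint details below are
# placeholder assumptions.
import requests

MVIP = "https://cluster.example.com"  # hypothetical management virtual IP
payload = {
    "method": "ModifyVolume",
    "params": {
        "volumeID": 42,          # hypothetical volume
        "qos": {
            "minIOPS": 500,      # guaranteed floor
            "maxIOPS": 15000,    # sustained ceiling
            "burstIOPS": 20000,  # short-term burst allowance
        },
    },
    "id": 1,
}
resp = requests.post(f"{MVIP}/json-rpc/10.0", json=payload,
                     auth=("admin", "password"), verify=False)
resp.raise_for_status()
print(resp.json())
```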


Gone are the silos. Gone are the scale limitations. EUC can run alongside SQL and other workloads with full confidence, and you will find globally efficient resources available with simplicity.

Source: https://newsroom.netapp.com/blogs/euc-decoder-ring-for-the-hci-buyer/

Author: Keith Norbie



NetApp Vision for NVMe over Fabrics and Storage-Class Memory

Category : NetApp

Flash is powering the digital transformation, and new, emerging real-time applications will demand even more. NetApp has a vision for integrating new technologies such as NVMe over Fabrics (NVMe-oF) and storage-class memory (SCM) to accelerate the digital transformation through nondisruptive integration of these “disruptive” innovations, with all of these technologies working together as part of one architecture.

I was privileged to share a bit of the NetApp vision recently at Flash Memory Summit 2017. For the first time, NetApp publicly shared three proof-of-concept demonstrations of how we might use these new technologies. Note the disclaimer at the end of the original blog post explaining that these are not committed products with a committed timeline yet, but rather representative of the overall NetApp vision.

The accompanying video contains both a description of that overall vision and three demos.


To learn more about the NetApp vision for NVMe, SCM, and more, visit the NVMe page on NetApp.com or read Ravi Kavuri’s recent blog series on NVMe.

Source: https://newsroom.netapp.com/blogs/netapp-vision-for-nvme-over-fabrics-and-storage-class-memory/

Author: Jeff Baxter



How to Perform Continuous ONTAP Upgrades Without Sacrificing IT Stability

Category : NetApp

Don’t be surprised if you see the NetApp IT storage team busy doing other tasks during ONTAP® upgrades these days. Thanks to the power of the First Application System Test (FAST) program, which supports early adoption of ONTAP, the Customer-1 program is upgrading to the latest version of ONTAP with absolutely no disruption. In fact, the team is doing multiple upgrades on a weekly basis. This blog explores how we integrate ONTAP upgrades into a production environment without sacrificing IT stability.

Good Old Days?

Remember years back when application data was deployed on a filer? We would rarely see downtime unless there was a hardware failure or power outage. Configuration changes, such as export rules or a network interface or route, were sometimes made on the fly in local memory, and we’d then forget about those in-memory changes on the filer.

When a hardware failure or power outage occurred, restoring the affected storage resource could quickly turn into a fire drill. Some of the non-persistent changes were not documented, resulting in a mad scramble to discover the missing configuration. No wonder application owners resisted storage upgrades; they translated to downtime. We often delayed ONTAP upgrades to ensure we had stable operations. The irony of this situation was not lost on our storage team: we were expecting NetApp customers to use the latest version of ONTAP, but we weren’t always using it ourselves.

Customer-1 Adopts FAST

The Customer-1 program is the first adopter of NetApp products and services in our IT production environment. It is also responsible for the operation of our global data centers. Recognizing that we were missing out on the many features of new ONTAP releases, Customer-1 joined NetApp Engineering’s FAST program several years ago.

Under FAST, we agreed to deploy release candidate versions of ONTAP storage management software in exchange for providing feedback on bugs and other performance issues prior to general release. We would exercise the code and reap early access to ONTAP’s latest features. Our goal was to improve our ONTAP lifecycle management so that we were no longer afraid of storage upgrades.

Now Customer-1 installs pre-release ONTAP code into our lab and backup environments when Customer-0 (the Engineering IT group that also runs release candidate versions in its production environment) says the code is stable. Once we are comfortable with the stability of the code running in our lab (a non-customer-facing and low-risk environment), we deploy ONTAP into sub-production and then into production.

We have some instances serving more than 100 applications. At first, trying to install even one ONTAP upgrade per week was challenging. With so much data to process, it was easy to miss potential risks. FAST helped us whittle our upgrade preparation process down to four hours using manual checklists and cross-checks.

To further improve efficiency, we added Python scripts to compile a summary report with a pass/fail matrix that flags areas of concern. Now the Command Center can complete the precheck list in two hours and focus on the flagged areas.
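
Those internal scripts aren’t published, but the pattern is simple enough to sketch: run each precheck, collect the pass/fail results into a matrix, and surface only the failures for follow-up. A hypothetical example:

```python
# Illustrative precheck summary in the spirit of the scripts described above
# (NetApp IT's actual scripts are not public). Each check returns True/False;
# the report flags only the failures for human follow-up.
def check_failed_disks(cluster): return cluster["failed_disks"] == 0
def check_free_space(cluster):   return cluster["aggr_used_pct"] < 90
def check_lif_home(cluster):     return cluster["lifs_home"]

PRECHECKS = {
    "no failed disks":    check_failed_disks,
    "aggregates < 90%":   check_free_space,
    "LIFs on home ports": check_lif_home,
}

def precheck_report(cluster):
    results = {name: fn(cluster) for name, fn in PRECHECKS.items()}
    for name, ok in results.items():
        print(f"[{'PASS' if ok else 'FAIL'}] {name}")
    return [name for name, ok in results.items() if not ok]

# Hypothetical cluster snapshot, gathered beforehand via AutoSupport or the
# ONTAP management APIs
snapshot = {"failed_disks": 0, "aggr_used_pct": 93, "lifs_home": True}
flagged = precheck_report(snapshot)
print("Areas of concern:", flagged or "none")
```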

Although painful at first, the process has been liberating in many ways, especially with ONTAP’s nondisruptive upgrade capability. We can upgrade one to two ONTAP clusters per week, in addition to launching major releases twice a year and patches in between. Our lifecycle management process follows a regular cadence with absolutely no impact on the stability of business applications. Over time, we have identified 30 software bugs for Product Engineering to fix.


Our ability to repeatedly deliver ONTAP upgrades without any disruption to IT operations has also built the confidence of our customers, the business application owners. We regularly meet with them to proactively review the release schedule to avoid conflicts with application releases and ensure there are no surprises.

Shrinking Lifecycle

Over time, we have experienced numerous benefits. Our software lifecycle has shrunk; we are now running the latest ONTAP version in our production environment in 45 days or less. We have expanded the process to include NetApp OnCommand® Insight, AltaVault®, StorageGRID®, E-Series, and CI switch upgrades.

We have also increased our storage efficiency by taking advantage of ONTAP’s features well in advance of their general availability. For example, we were able to leverage the ONTAP 8.3 cluster image update wizard, which updates by cluster instead of by node. We are currently running ONTAP 9.2, whose cross-volume (aggregate-level) deduplication has helped improve our flash storage efficiency.

Thanks to the rigor of FAST, we have a constant flow of upgrades, but we no longer have to fear downtime or search frantically for configuration scripts. Instead, ONTAP upgrades are just another task in our daily routine. And that leaves us more time to work on the fun stuff in our jobs.

Source: https://newsroom.netapp.com/blogs/how-to-perform-continuous-ontap-upgrades-without-sacrificing-it-stability/

Author:  Ram Kodialbail



Back Up Your SaaS Data with NetApp Cloud Control for Microsoft Office 365

Category : NetApp

Unless something goes really wrong, IT rarely makes headlines. Unfortunately, the threats to modern IT have recently and dramatically been brought into the spotlight by the WannaCry, Petya, and NotPetya ransomware, which affected over 275,000 computers worldwide. Targets included the UK’s National Health Service. The effect on the NHS was catastrophic, with hospitals unable to take X-rays, check prescriptions, or access patients’ medical records. It was also one of the clearest examples of the impact that ransomware can have on a business. Things went wrong here indeed.

However, ransomware is just one of many threats. Year after year, new surveys find that human error remains a leading cause of data loss. Something as simple as a synchronisation error or as malicious as a rogue administrator could threaten the safety of your data.

No Data, No Business

Data is the lifeblood of modern business, and any form of unwanted data loss is a significant threat to business success. At NetApp, we talk a lot about the expectation economy. We live in a world where customers have increasingly high expectations of businesses and can be driven to a competitor by delays of mere seconds in accessing services or information. Now imagine the impact on a customer if you have to reveal that you have lost their vital information. This is a conversation that nobody wants to have, either with their boss or with the customer.

According to the University of Texas, 94% of companies that suffer a catastrophic data loss don’t survive. Although data loss to some may seem like a first-world problem rather than a true catastrophe, in a business context the impact can be profound. Leave aside for one moment the loss of revenue caused by downtime, lost transactions, and unrecorded time. Think about the sheer embarrassment, the damage to your reputation, and the loss of trust, which can take years to rebuild.

This is certainly one of the issues that keep CIOs awake at night. Ultimately, the way to avoid sleepless nights is to invest in a fully managed, enterprise-grade, always-on data management solution with best-in-class security features. However, if the unthinkable happens, what businesses really need is a plan B—a way to reset their data to where it needs to be. And NetApp has launched exactly that.

A Safety Net for Data Loss

In EMEA, 43% of enterprises use Office 365, making it one of the leading software-as-a-service (SaaS) environments available. The platform is increasingly used for business-critical operations, and business-critical Office 365 files that “go missing” or are accidentally deleted can seriously impact businesses. Although Office 365 offers built-in once-a-day automatic backup, it does not protect against accidental deletion, viruses and malware, hackers, or ransomware attacks. NetApp® Cloud Control was launched to address exactly this gap.


Offering enterprise-class data protection for Exchange Online, SharePoint Online, and OneDrive for Business, Cloud Control is a secure, scalable service that can work across the cloud, on-premises storage, or a mix of the two to protect your business’s mission-critical data from accidental deletion, corruption, or malicious intent. The service, offered on a per-seat, per-year licence basis, requires no installation and is easy for any business to use.

Being able to mitigate the impact of data loss, regardless of the source, in a timely, efficient, and effective way should be a strategic imperative for any modern business. The resulting benefits are significant: sparing businesses’ blushes, helping them maintain their reputation in a competitive marketplace, and preventing revenue loss. With such high stakes, and customers focused on what is happening to their data, businesses can’t afford to be complacent.

Data is under constant threat. We will never be able to fully control disaster, human error, and cyberthreats. What we can control is how effective our plan is before a disaster happens. This will be a crucial differentiating factor in how modern enterprises cope with the increasing wave of threats to their data.

Source: https://newsroom.netapp.com/blogs/back-up-your-saas-data-with-netapp-cloud-control-for-microsoft-office-365/

Author: Martin Warren

