Category Archives: NetApp


How to Perform Continuous ONTAP Upgrades Without Sacrificing IT Stability

Category : NetApp

Don’t be surprised if you see the NetApp IT storage team busy doing other tasks during ONTAP® upgrades these days. Thanks to the power of the First Application System Test (FAST) program, which supports early adoption of ONTAP, the Customer-1 program is upgrading to the latest version of ONTAP with absolutely no disruption. In fact, the team is doing multiple upgrades on a weekly basis. This blog explores how we integrate ONTAP upgrades into a production environment without sacrificing IT stability.

Good Old Days?

Remember years back when application data was deployed on a filer? We would rarely see downtime unless there was a hardware failure or power outage. Configuration changes, such as export rules, network interfaces, or routes, were sometimes made on the fly in local memory, and we'd then forget about those in-memory changes on the filer.

When a hardware failure or power outage occurred, restoring the affected storage resource could quickly turn into a fire drill. Some of the non-persistent changes were not documented, resulting in a mad scramble to discover the missing configuration. No wonder application owners resisted storage upgrades; they translated to downtime. We often delayed ONTAP upgrades to ensure we had stable operations. The irony of this situation was not lost on our storage team. We were expecting NetApp customers to be using the latest version of ONTAP, but we weren't always using it ourselves.

Customer-1 Adopts FAST

The Customer-1 program is the first adopter of NetApp products and services in our IT production environment.  It is also responsible for the operation of our global data centers. Recognizing that we were missing out on the many features of new ONTAP releases, Customer-1 joined NetApp Engineering’s FAST Program several years ago.

Under FAST, we agreed to deploy release candidate versions of ONTAP® storage management software in exchange for providing feedback on bugs and other performance issues prior to general release. We would exercise the code as well as gain early access to ONTAP’s latest features. Our goal was to improve our ONTAP lifecycle management so that we were no longer afraid of storage upgrades.

Now Customer-1 installs pre-release ONTAP code into our lab and backup environments when Customer-0 (the Engineering IT group that also runs release candidate versions in its production environment) says the code is stable. Once we are comfortable with the stability of the code running in our lab (a non-customer-facing and low-risk environment), we deploy ONTAP into sub-production and then into production.

We have some instances serving more than 100 applications. At first, trying to install even one ONTAP upgrade/week was challenging. With so much data to process, it was easy to miss potential risks. FAST helped us whittle our upgrade preparation process down to four hours using manual checklists and cross-checks.

To further improve efficiency, we added Python scripts to compile a summary report with a pass/fail matrix that flags areas of concern. Now the Command Center can complete the precheck list in two hours and focus on the flagged areas.
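As a rough illustration of what such a precheck summary could look like, here is a minimal Python sketch. The check names, thresholds, and node fields are invented stand-ins for the purpose of the example, not our actual scripts:

```python
# Hypothetical precheck summary: check names, thresholds, and node fields
# are illustrative stand-ins, not NetApp's actual upgrade tooling.
import json

PRECHECKS = {
    "cluster_health": lambda n: n["health"] == "ok",
    "failed_disks": lambda n: n["failed_disks"] == 0,
    "free_space_pct": lambda n: n["free_space_pct"] >= 20,
}

def precheck_report(nodes):
    """Compile a pass/fail matrix and collect flagged areas of concern."""
    matrix, flagged = {}, []
    for node in nodes:
        results = {name: check(node) for name, check in PRECHECKS.items()}
        matrix[node["name"]] = results
        flagged += [f'{node["name"]}:{name}' for name, ok in results.items() if not ok]
    return matrix, flagged

if __name__ == "__main__":
    sample = [
        {"name": "cluster1-node1", "health": "ok", "failed_disks": 0, "free_space_pct": 34},
        {"name": "cluster1-node2", "health": "ok", "failed_disks": 1, "free_space_pct": 12},
    ]
    matrix, flagged = precheck_report(sample)
    print(json.dumps(matrix, indent=2))
    print("Flagged for review:", flagged)
```

The point of the pass/fail matrix is exactly what the paragraph above describes: operators skip everything that passes and spend their two hours only on the flagged entries.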

Although painful at first, the process has been liberating in many ways, especially with ONTAP’s non-disruptive feature. We can upgrade one to two ONTAP clusters/week in addition to launching major releases twice a year and patches in between. Our lifecycle management process follows a regular cadence with absolutely no impact on the stability of business applications. Over time, we have identified 30 software bugs for Product Engineering to fix.

 

Our ability to repeatedly deliver ONTAP upgrades without any disruption to IT operations has also built the confidence of our customers, the business application owners. We regularly meet with them to proactively review the release schedule to avoid conflicts with application releases and ensure there are no surprises.

Shrinking Lifecycle

Over time, we have experienced numerous benefits. Our software lifecycle has shrunk; we are now running the latest ONTAP version in our production environment in 45 days or less. We have expanded the process to include NetApp OnCommand® Insight, AltaVault®, StorageGRID®, E-Series, and CI switch upgrades.

We have also increased our storage efficiency by taking advantage of ONTAP’s features well in advance of their general availability. For example, we were able to leverage the ONTAP 8.3 cluster image update wizard, which updates by cluster instead of by node. We are currently running ONTAP 9.2, whose cross-volume (aggregate-level) deduplication has helped improve our flash storage efficiency.

Thanks to the rigor of FAST, we have a constant flow of upgrades, but we no longer have to fear downtime or search frantically for configuration scripts. Instead, ONTAP upgrades are just another task in our daily routine. And that leaves us more time to work on the fun stuff in our jobs.

Source: https://newsroom.netapp.com/blogs/how-to-perform-continuous-ontap-upgrades-without-sacrificing-it-stability/

Author:  Ram Kodialbail



Back Up Your SaaS Data with NetApp Cloud Control for Microsoft Office 365

Category : NetApp

Unless something goes really wrong, IT rarely makes headlines. Unfortunately, the threats to modern IT have recently and dramatically been brought into the spotlight by the WannaCry, Petya, and NotPetya ransomware attacks, which affected over 275,000 computers worldwide. Targets included the UK’s National Health Service. The effect on the NHS was catastrophic, with hospitals unable to take X-rays, check prescriptions, or access patients’ medical records. It was also one of the clearest examples of the impact that ransomware can have on a business. Here, things went really wrong indeed.

However, ransomware is just one of many threats. Year after year, new surveys find that human error remains a leading cause of data loss. Something as simple as a synchronisation error or as malicious as a rogue administrator could threaten the safety of your data.

No Data, No Business

Data is the lifeblood of modern business, and any form of unwanted data loss is a significant threat to business success. At NetApp, we talk a lot about the expectation economy. We live in a world where customers have increasingly high expectations of businesses and can be deterred, in favour of a competitor, by delays of mere seconds in accessing services or information. Now imagine the impact on a customer if you have to reveal that you have lost their vital information. This is a conversation that nobody wants to have, either with their boss or with the customer.

According to the University of Texas, 94% of companies that suffer a catastrophic data loss don’t survive. Although data loss to some may seem like a first-world problem rather than a true catastrophe, in a business context the impact can be profound. Leave aside for one moment the loss of revenue caused by downtime, lost transactions, and unrecorded time. Think about the sheer embarrassment, the damage to your reputation, and the loss of trust, which can take years to rebuild.

This is certainly one of the issues that keep CIOs awake at night. Ultimately, the way to avoid sleepless nights is to invest in a fully managed, enterprise-grade, always-on data management solution with best-in-class security features. However, if the unthinkable happens, what businesses really need is a plan B—a way to reset their data to where it needs to be. And NetApp has launched exactly that.

A Safety Net for Data Loss

In EMEA, 43% of enterprises use Office 365, making it one of the leading software-as-a-service (SaaS) environments available. The platform is increasingly used for business-critical operations, and business-critical Office 365 files that “go missing” or are accidentally deleted can seriously impact businesses. Although Office 365 offers built-in once-a-day automatic backup, it does not protect against accidental deletion, viruses and malware, hackers, or ransomware attacks. This is where the newly launched NetApp® Cloud Control comes in.

 

Offering enterprise-class data protection for Exchange Online, SharePoint Online, and OneDrive for Business, Cloud Control is a secure, scalable service that can work across the cloud, in on-premises storage, or in a mix of the two to protect your business’s mission-critical data from accidental deletion, corruption, or malicious intent. The service, offered on a per-seat, per-year licence basis, requires no installation and is easy for any business to use.

Being able to mitigate the impact of data loss, regardless of the source, in a timely, efficient, and effective way should be a strategic imperative of any modern business. The resulting benefits are significant: sparing businesses’ blushes, helping them maintain their reputation in a competitive marketplace, and preventing revenue loss. With such high stakes, and customers focused on what is happening to their data, businesses can’t afford to be complacent.

Data is under constant threat. We will never be able to fully control disaster, human error, and cyberthreats. What we can control is how effective our plan is before a disaster happens. This will be a crucial differentiating factor in how modern enterprises cope with the increasing wave of threats to their data.

Source: https://newsroom.netapp.com/blogs/back-up-your-saas-data-with-netapp-cloud-control-for-microsoft-office-365/

Author: Martin Warren



Three Simple Questions You Should Ask Every Flash Vendor About NVMe

Category : NetApp

In the coming years, every major storage vendor will be rolling out its own solutions for NVMe, NVMe-oF, and SCM. However, as with most things in life, the devil is in the details. Different vendors will implement new solid-state technologies and architectures in very different ways.

As you evaluate your options, don’t be afraid to ask tough questions. Make sure that in all the talk about amazing speeds and feeds, your vendor isn’t pushing a solution that will create more problems than it solves.

Put Flash Vendors to the Test

Most organizations won’t overhaul their entire storage architectures overnight. They’re more likely to bring in new technologies for specific targeted purposes. For example, organizations might initially deploy SCM only for the most performance-sensitive applications, while continuing to use flash solid-state disks for the rest. The best way to do that is to be able to non-disruptively introduce these new solid-state storage technologies as just another tier of storage that functions alongside your existing storage (both flash and hard disk drives).

Is that what your vendor is offering? Here are three key questions to help you find out:

What will I have to give up to use NVMe and SCM capabilities?

You shouldn’t have to sacrifice enterprise-class data management features and resiliency, or create another silo, just to use new storage technologies. You choose storage vendors not just for the latest technology, but also because they offer mature and stable software, resiliency and redundancy, flexible data management, and simple application integrations. If choosing a new storage technology means sacrificing any of these things, tread carefully.

How will this solution evolve?

Underlying hardware technologies can change quickly. Avoid solutions custom-built from the ground up for a specific new technology, or you’ll have a hard time adding new capabilities over time. Run away from any vendor insisting that its custom-built solid-state media in a proprietary form factor can compete in even the medium term with the inherent price, performance, and longevity of commodity-based solutions from giants such as Intel, Samsung, and Toshiba. Don’t get locked in. Instead, look for software-centric solutions that can continually incorporate new underlying hardware and storage media without making wholesale changes.

How will this affect my existing environment?

You don’t want to have to deploy an entirely separate silo for each new storage technology in your data center. Rather, you should be able to use mature clustering technology not only to introduce new technology non-disruptively but also to provision and manage applications that adhere to your service levels regardless of the specific mix of underlying architectures and storage media.

Bottom line, as exciting as new storage innovations are, they don’t change your core requirements for storage: high-performance, reliable access to your data, with simple management and operation. NVMe, NVMe-oF, and SCM technologies really can bring amazing new capabilities, but only if they’re designed to function as part of your real-world data center, not as a science experiment.

NetApp Aces the Test

At NetApp, we offer some of the fastest flash storage platforms in the industry, and we constantly release new features and capabilities for our products. We do these things with a focus on software innovation and scale-out architectures, so our customers can continually extend the value of their NetApp investments.

We’re bringing this same approach to NVMe, NVMe-oF, and SCM. As new technology innovations offer more performance, density, and cost efficiency, our customers will be able to take advantage of them easily, non-disruptively, and as part of their existing enterprise-grade storage platforms.

Visit our booth at Flash Memory Summit, August 8–10, 2017, and attend our keynote, “Creating the Fabric of a New Generation of Enterprise Apps,” on Thursday, August 10, 11:30 a.m. to noon.

More Information

Explore the implications of these new innovations in the other three blog posts in this series:

Source: https://newsroom.netapp.com/blogs/three-simple-questions-you-should-ask-every-flash-vendor-about-nvme/

Author: Ravi Kavuri



10 Good Reasons to Upgrade to ONTAP 9.2

Category : NetApp

NetApp® ONTAP® 9.2 was released in May 2017 and went GA on June 29. It’s a free and easy nondisruptive upgrade.

We have already published a comprehensive overview of all the great customer value that ONTAP 9.2 provides. Now, for the TLDR (too long; didn’t read) audience, here are 10 good reasons to upgrade, in handy infographic format:

 



Up and to the Right, NetApp Moves Up in 2017 Gartner Magic Quadrant for Solid State Arrays

Category : NetApp

Gartner recently released its July 2017 Magic Quadrant for Solid State Arrays (SSA). NetApp has improved its position, moving up and to the right in the Leaders Quadrant. This past year, customers drove NetApp market share gains to new heights. Gartner’s May 2017 Market Share Analysis: SSDs and Solid-State Arrays, Worldwide, 2016 shows NetApp share growth at more than triple that of the market in 2016, moving NetApp to the #2 market share position:

Source: Gartner (May 2017). *Market Share Analysis reports provide qualitative insight: essentially, the “why” behind the figures.

Not a bad year for recognition of the value that NetApp is providing to customers in this arena.  Now let’s look ahead.  What can customers expect from NetApp and how might leadership criteria evolve in the future?

Innovating beyond media and the array

NetApp will continue to lead in providing flash-media-based solutions that drive peak performance and efficiency for legacy and emerging applications and that power the build-out of cloud-like, next-generation data centers. Customers should expect NetApp to leverage strong innovation and deep supplier relationships to deliver timely, nondisruptive upgrades that leverage the next media wave. For example, NVMe-over-Fabrics (NVMe-oF) and storage class memory (SCM) technologies bring order-of-magnitude improvements in throughput, latencies, and efficiency. Learn more about our efforts and innovation here.

While media-based innovation is exciting, we believe customers expect strategic vendors to think beyond media and the array. Digital transformation is high on the C-suite priority stack, with most organizations seeking to use data to optimize operations, enable new customer touch points, and drive new revenue streams. SSA vendors that can help customers achieve these larger goals by better managing their data across premises and public cloud will provide significantly more value as strategic partners. That is where NetApp will continue to stand out. The NetApp SSA portfolio integrates with a full ecosystem of hybrid cloud data services enabled by the NetApp Data Fabric.

The leadership criteria of the future

NetApp’s vision for data management is a data fabric that seamlessly connects a customer’s increasingly distributed, diverse, and dynamic data environment. NetApp Data Fabric is an architecture that provides data management capabilities and services that span a choice of endpoints and ecosystems connected across premises and the cloud to accelerate digital transformation. The Data Fabric delivers consistent and integrated data management services for data visibility and insights, data access and control, and data protection and security. Here are some of the capabilities that are integrated with our SSA portfolio today:

  • Pay-as-you-go backup and disaster recovery with public cloud
  • Automated policy-based tiering of cold data to public cloud
  • Automated data sync with public cloud to leverage cloud analytics
  • Performance, capacity and cost monitoring across premises and public cloud

Solving data management challenges to achieve digital transformation

NetApp will continue to lead when it comes to developing cutting-edge technology and capabilities in our SSA offerings. However, it will take much more to be a strategic vendor in the digital era. Going forward, I predict that integrated hybrid cloud data services, and their broader ability to accelerate a customer’s digital transformation agenda, will increasingly differentiate the leaders in all infrastructure segments. Ask your SSA vendor how it plans to meet the emerging criteria of a strategic infrastructure vendor with integrated hybrid cloud data services.

Gartner Disclaimer:
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Gartner Market Share Analysis: SSDs and Solid-State Arrays, Worldwide, 2016, Joseph Unsworth and John Monroe, 2 May 2017

Source: https://newsroom.netapp.com/blogs/up-and-to-the-right-netapp-moves-up-in-2017-gartner-magic-quadrant-for-solid-state-arrays/

Author: Brett Roscoe



FlexPod Survival Guide, The Start of an IT Revolution

Category : NetApp

Do you know the Beatles song “Revolution”? The lyrics of that song, referencing the world’s need for a change and the ability to deliver it, poignantly encapsulate the Beatles’ impact. The Beatles’ most significant contributions to music occurred over 6 short years (1964–1970). It was a remarkable time, and they were the right group to deliver a revolution. How many technology companies run their course in 6 years? How many in much less? How many, like Cisco and NetApp, celebrate over 25 years of innovation and have the right engineers to deliver on market demands?

FlexPod®, now in its sixth year, recently celebrated one of its best quarters ever, tallying 44% year-over-year revenue growth as reported in IDC’s Quarterly Converged Systems Tracker for Q4 of calendar year 2016. FlexPod excels in a world of converged, hyperconverged, and hybrid cloud. Now is an interesting time to reflect, and this new blog series provides a place to do so. There truly is a place for multiple strategies in today’s world of on-premises, near-the-cloud, or on-cloud workloads. When FlexPod launched, there wasn’t an accurate IDC designation, many sales teams didn’t know what to sell, and many customers thought they could ask for a single FlexPod SKU. The Wikipedia page for “converged infrastructure” even appeared after FlexPod was born. To the engineers working on FlexPod, none of that was relevant; what mattered was solving customer challenges.

I enjoy the history of how bands like the Beatles and impactful products or solutions got their start. There are multiple viewpoints for any event in history, as you know. From my perspective, FlexPod was born at the intersection of the need for engineering best practices, increased customer demand, the emergence of strategic vendor relationships, and channel partner readiness.

Engineering Best Practices

In 2006, many in the market viewed NetApp—then called “Network Appliance”—as a NAS-only vendor. NetApp needed to test large numbers of compute resources against FAS controllers and NetApp® Data ONTAP® to prove enterprise readiness. Back then, every engineer in the company had at least two data center racks’ worth of gear that they “owned” and used for testing. We needed to scale and grow but had to do so while sharing resources and reducing costs.

We developed a best-in-class compute grid, called the Kilo Client, with distinct pods of compute, network, and storage. We started with 3 pods of 224 blade servers each and quickly scaled to over 1,000 during our pilot program.

 

First Gen NetApp Kilo Client Pod Architecture: Circa 2006

This environment, with its rapidly provisioned SAN-booted clients connected to controllers under test, proved that a best-in-class test grid could be deployed using NetApp storage. The environment has been successful for several years and continues in an evolved design. The NetApp Kilo Client went online in March 2006, and customers wanted to hear how we were using our technology to solve our own problems.

Customer Demand

The years 2006–2009 saw several industry trends unfold. The biggest of these trends was virtualization, which grew to a multibillion-dollar market segment. For as many benefits as virtualization provided, customers struggled to design scalable and predictable infrastructures capable of satisfying their application needs. IT shops were being asked to do more with less. Customers needed vendors that understood their challenges and could deliver best practice guidance in the form of reference architectures.  The NetApp Research Triangle Park campus in North Carolina grew as we built out new data centers. These new designs incorporated virtualization, the pod architecture, hot and cold aisle isolation, and ambient air cooling. We also took a hard look at our disaster recovery strategy and determined which applications were cloud-ready. We learned by building out world-class engineering and IT best practices. The first generation of the Kilo Client reached maturity, and NetApp was investigating new computing platforms for our own testing and virtualization needs.

Strategic Vendor Relationships

About this time, Nuova Systems was developing a new data center switch line and a new compute platform. Acquired by Cisco in 2008, the team quickly brought to market the Nexus 5000 series switch and, in 2009, the Unified Computing System (Cisco UCS). To grow from zero in the server market, Cisco needed storage partners and channel partners that knew a customer’s entire stack. VMware, likewise, was controlling the virtualization market but still looking to grow further and innovate. NetApp, Cisco, and VMware joined together as part of the Imagine Virtually Anything alliance. We developed a reference architecture called Secure Multitenancy that aimed to reduce customer risk while providing a proof point for a standardized virtualized infrastructure stack.

Channel Partner Readiness

No group has the pulse of its customers like the channel community. It is the trusted advisor for its customers and helps guide them through an onslaught of product innovation from their vendors. In 2009–2011, Virtual Computing Environment (VCE), a joint venture of Cisco, EMC, and VMware, produced Vblock. Shortly thereafter, NetApp and Cisco formed a strong partnership and developed FlexPod. FlexPod was designed as a flexible architecture centered on compute and network best practices and NetApp’s unified OS, NetApp ONTAP. Partners embraced the FlexPod architecture because it was not a rigid solution and gave them the opportunity to tailor a solution to their customers’ needs.

FlexPod revolutionized the market. Customers, channel partners, and alliances were ready for converged infrastructure solutions backed by the experience the engineers gained from solving their own problems. The team aims to reduce customer risk through extensive validations in the form of NetApp Verified Architectures (NVAs) and Cisco Validated Designs (CVDs). We also give partners the training and tools needed to deliver solutions that alleviate customer challenges. Be on the lookout for the next installment of the FlexPod Survival Guide series, in which we will discuss how FlexPod has continued this revolution and embraced an ever-changing landscape. In the meantime, visit the following pages for a stroll down memory lane and a look at FlexPod today.

Source: https://newsroom.netapp.com/blogs/flexpod-survival-guide-the-start-of-an-it-revolution/

Author: Chris Reno



NetApp Helps Polaris Alpha Modernize DevOps, Speed Delivery of Security Services for U.S. Government

Category : NetApp

NetApp and partner Flair Data Systems deliver all-flash converged infrastructure to accelerate Polaris Alpha’s ability to provision and deploy microservices essential to national security

“Our systems are used to help analysts and operators visualize and share massive amounts of data to support critical decision-making for U.S. national security across air, sea, ground, cyber, and space operations globally,” said David Coker, senior VP of information systems at Polaris Alpha. “With NetApp, we have significantly accelerated the development of new offerings and have changed our relationship with our internal customers; they now see us as part of their team.”

Polaris Alpha provides custom software solutions that allow government agencies such as the Department of Defense and Department of Homeland Security to quickly evaluate potential threats and make smarter decisions. Polaris Alpha’s business and reputation depend on its ability to consistently deliver solutions on time and within budget. Behind the scenes, more than 400 software developers provide continual support and services for customers’ unique deployments.

The company had to modernize every aspect of IT when it decided to adopt a multimodal approach that applied DevOps and agile methodologies to its oldest legacy systems. As it transitioned from monolithic applications to cloud-native containerized technologies, it faced 10% monthly growth in storage requirements, which increased latency and hampered developer productivity.

Polaris Alpha turned to Flair Data Systems, a NetApp (NASDAQ: NTAP) partner, to help develop a responsive, reliable support approach for customers’ mission-critical operations. This included moving Polaris Alpha’s virtualized production and development systems to FlexPod® Datacenter converged infrastructure, which uses NetApp® All Flash FAS, the world’s fastest and most cloud-connected all-flash arrays.

The ability to easily integrate FlexPod with existing VMware solutions simplified provisioning and accelerated container deployments, including Docker and Jenkins, from six weeks to just 15 minutes. With faster provisioning and submillisecond latency, Polaris Alpha can now support its 400 developers with access to additional resources. The system can handle more than 100 concurrent builds with no impact on performance, so developers can test and deliver solutions and microservices to customers more quickly and more reliably. The company also benefits from FlexPod’s compact footprint, which takes up 75% less data center space than the previous solution.

More information: Polaris Alpha case study

Source: https://newsroom.netapp.com/news/netapp-helps-polaris-alpha-modernize-devops-speed-delivery-of-security-services-for-u-s-government/



FlexPod SF: A Scale-Out Converged System for the Next-Generation Data Center

Category : NetApp

Welcome to the age of scale-out converged systems, made possible by FlexPod® SF. Together, Cisco and NetApp are delivering this new FlexPod solution built architecturally for the next-generation data center. Architects and engineers are being asked to design converged systems that deliver new capabilities to match the demands of consolidating silos, expanding to web-native apps, and embracing new modes of operations (for example, DevOps).

New Criteria for Converged Systems

Until now, converged systems have served design criteria of integration, testing, and ease of configuration, within the confines of current IT operations and staples like Fibre Channel. A new approach, however, focuses on the following design requirements:

 

Converged systems need to deliver on performance, agility, and value.

Enter the First Scale-Out FlexPod Solution Built on Cisco UCS and Network

Cisco and NetApp have teamed to deliver FlexPod SF, the world’s first scale-out converged system built on Cisco UCS servers, Cisco Nexus switching, Cisco management, VMware vSphere 6, and the newly announced NetApp® SolidFire® SF9608 nodes running the NetApp SolidFire Element® OS on Cisco’s C220 platform. The solution is designed to bring the next-generation data center to FlexPod.

SF9608 Nodes Powered by Cisco C220 and the NetApp SolidFire Element OS

The critical part of bringing FlexPod SF forward is the new NetApp SF9608 nodes. For the first time, NetApp is producing a new Cisco C220-based node appliance running the Element OS.

 

SF9608 nodes built on Cisco UCS C220 M4 SFF Rack Server have these specifications:

  • CPU: 2 x 2.6GHz CPU (E5-2640v3)
  • Memory: 256GB RAM
  • 8 x 960GB SSD drives (non-SED)
  • 6TB raw capacity (per node)

Each node has these characteristics:

  • Block storage: iSCSI-only solution
  • Per volume IOPS-based quality of service (QoS)
  • 75,000 IOPS
  • One replica of data kept; that is, a primary and a replicated copy

Users can obtain support through 888-4NetApp or Mysupport@netapp.com.

 

The key here is that it’s the same Element OS that’s nine revisions mature, born from service providers, and used by some of the biggest enterprise and telco businesses in the world. The Element OS is preconfigured on the C220 node hardware to deliver a storage node appliance just for FlexPod. Element OS 9 delivers:

  • Scale-out clustering. You can cluster a minimum of four nodes, and then add or subtract nodes as needed. You’ll get maximum flexibility with linear scale for performance and capacity, because every node adds CPU, RAM, 10Gb networking, SSD IOPS, and capacity. (A quick sizing sketch follows below.)
  • QoS. You can control IOPS across the entire cluster, setting minimum, maximum, and burst values per workload to run mixed workloads without performance issues. (See the API sketch after this list.)
  • Automation programmability. The Element OS has a 100% exposed API, which is preferred for programming no-touch operations.
  • Data assurance. The OS enables you to protect data from loss of drives or nodes. Recovery for a drive is 5 minutes, and less than 60 minutes for a full node failure (all without any data loss).
  • Inline efficiency. The solution is always on and inline to the data, reducing the footprint through deduplication, compression, and thin provisioning.
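
As an illustration of the QoS and API bullets above, here is a hedged Python sketch that sets per-volume QoS through the Element OS JSON-RPC API. The cluster address, credentials, and volume ID are placeholders; confirm the exact method and fields against the Element API reference for your release before relying on this:

```python
# Hedged sketch: set per-volume QoS via the Element OS JSON-RPC API.
# Address, credentials, and volume ID below are placeholders.
import requests

MVIP = "https://element-cluster.example.com/json-rpc/9.0"  # placeholder cluster address

def set_volume_qos(volume_id, min_iops, max_iops, burst_iops):
    payload = {
        "method": "ModifyVolume",
        "params": {
            "volumeID": volume_id,
            "qos": {"minIOPS": min_iops, "maxIOPS": max_iops, "burstIOPS": burst_iops},
        },
        "id": 1,
    }
    # verify=False only because lab clusters often use self-signed certificates
    resp = requests.post(MVIP, json=payload, auth=("admin", "password"), verify=False)
    resp.raise_for_status()
    return resp.json()

# Example: guarantee 1,000 IOPS, cap at 5,000, allow bursts to 8,000
print(set_volume_qos(42, 1000, 5000, 8000))
```

Because the API is fully exposed, the same call slots naturally into no-touch provisioning pipelines rather than being driven from a UI.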

The Element OS is also different from existing storage software. It’s important to understand that FlexPod SF is not a dual-controller architecture with SSD shelves; you will not need 93% of the overhead tasks.
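
The linear-scale claim in the bullets above lends itself to quick arithmetic. This back-of-the-envelope sizing sketch uses only the per-node figures quoted in this post (75,000 IOPS and 6TB raw per SF9608 node, four-node minimum); real sizing must also weigh efficiency gains, protection overhead, and workload mix:

```python
# Back-of-the-envelope sizing using the per-node figures quoted in this post.
NODE_IOPS = 75_000   # IOPS per SF9608 node
NODE_RAW_TB = 6      # raw TB per node
MIN_NODES = 4        # FlexPod SF base configuration

def cluster_capability(nodes):
    """Linear scale: every node adds the same IOPS and raw capacity."""
    if nodes < MIN_NODES:
        raise ValueError(f"FlexPod SF starts at a {MIN_NODES}-node base configuration")
    return {"nodes": nodes, "iops": nodes * NODE_IOPS, "raw_tb": nodes * NODE_RAW_TB}

for n in (4, 8, 16):
    print(cluster_capability(n))
```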

Use Cases Delivering the Next-Generation Data Center

As you design for the next-generation data center, you’ll find requirements that are often buzzword-worthy but take technical meaning within FlexPod SF’s delivery:

  • Agility. You’re able to respond by means of the infrastructure stack to a variety of on-demand needs for more resources, offline virtual machine (VM) or app building from infrastructure requests, and autonomous self-healing from failures or performance issues (end-to-end QoS—compute, network, storage).
  • Scalability. You gain scalability not just in size but in how you scale—with granularity, across generations of products—moving, adding, or changing resources such as the new storage nodes. FlexPod SF delivers scale in size (multi-PB, multimillions of IOPS, and so on) and gives you maximum flexibility to redeploy and adjust scale.
  • Predictability. FlexPod SF offers performance, reliability, and capabilities to deliver an SLA from compute, network, and storage via VMware so that VMs, apps, and data can be consumed without periodic delivery issues from existing infrastructure.

With the next-generation data center, IT can simplify and automate, build for “anything as a service” (XaaS), and accelerate the adoption of DevOps. FlexPod SF delivers the next-generation data center for VMware Private Clouds and gives IT and service providers the ability to deliver infrastructure as a service.

  • VMware Private Cloud. Different from server virtualization, where the focus is on virtualization of apps, integration to existing management platforms and tools, and optimization of VM density.
    • Instead of managing through a component UI, manage through the vCenter plug-in or Cisco UCS Director.
    • Move from silos to consolidated and mixed workloads through QoS.
    • Instead of configuring elements of infrastructure, automate through VMware Storage Policy-Based Management, VMware vRealize Automation, or Cisco UCS Director.
  • Infrastructure as a service. Currently, service and cloud providers take the components of FlexPod SF and deliver them as a service. With this new FlexPod solution, you’ll be able to configure multitenancy with much more elasticity of resources, with performance controls to construct an SLA for on-demand consumption.

FlexPod SF Cisco Validated Design

A critical part of the engineering is the Cisco Validated Design (CVD), which encompasses all the details needed from a full validation of a design. With FlexPod SF, the validation was specific to the following configuration:

 

 

As you can see, the base strength of Cisco’s UCS and Nexus platforms now configures into scale-out NetApp SF9608 nodes with a spine-leaf 10Gb top-of-rack configuration. All of this is “new school,” and the future is now. Add CPU and RAM in small and flexible increments along with 10Gb network and storage 1U at a time (from a base four-node configuration).

 

Architecture and Deployment Considerations

FlexPod SF is not your average converged system. To architect and deploy, you’ll need to rethink your work—for example, helping the organization understand workload profiles to set QoS, and creating policy automation for rapid builds and self-service. Here are some considerations:

  • Current mode of operations
    • Analyze the structure of current IT operations. FlexPod SF presents the opportunity for IT or a service provider to move past complex configurations to profiles, policy automation, and self-service so VM builders and developers can operate with agility.
  • Application profiles and consolidation
    • Help organizations align known application and VM profiles to programmable settings in QoS, policies, and tools such as PowerShell.
    • Set QoS for minimum, maximum, and burst separate from capacity settings. This granularity enables architects to apply settings that will consolidate app silos and SLAs without overprovisioning hardware resources.
  • Cisco compute and network: same considerations as previous FlexPod solutions; only B Series supported at this time.
  • Storage
    • Architecting the SF9608 nodes is straightforward. With the Element OS, your design requirements are for volume capacity (GB/TB) and IOPS settings through QoS. The IOPS settings are:
      • Minimum: the key ability to deliver performance SLAs. The Element OS delivers this on a cluster of four or more nodes by governing the cluster’s maximum capabilities and inducing latency in workloads that trespass their QoS settings.
      • Maximum: caps the IOPS a workload can consume.
      • Burst: allows a workload, over a given time, to exceed its maximum if the cluster can supply the IOPS.
    • Capacity does not need to be projected for a three-to-five-year sizing as with existing storage. SF9608 nodes can be added on demand, with 1U-node granularity, to meet needs for capacity and performance. Scale is linear: each node adds CPU, RAM, 10Gb networking, capacity, and IOPS.
    • Encryption is not available at this time.
    • Boot from SAN is supported.
    • You cannot field-update a C220 to become a SF9608 node.
    • There is no DC power at this time (roadmap).
  • VMware
    • In architecting for a FlexPod SF environment, focus on the move from server virtualization, where the emphasis was consolidation ratios, integration with existing stack tools, and modernization to updated resources like all-flash, 10Gb networking, and faster Intel CPUs. For VMware Private Cloud environments, align all of these attributes and capabilities to an on-demand, profile-centric, policy-driven (SPBM) environment in which VM administrators can completely build VMs from vCenter or Cisco UCS Director.
    • FlexPod SF presents a new opportunity for operators. The interface for daily operations is VMware vCenter, Cisco UCS Director, or both. As you build, move, add, and change VMs, you’ll notice policies that go beyond templates. You’ll see granular capabilities to completely build all attributes of VMs. You’ll also be able to present self-service portals for developers and consumers of a VMware Private Cloud to operate with agility and achieve their missions.

Source: https://newsroom.netapp.com/blogs/flexpod-sf-a-scale-out-converged-system-for-the-next-generation-data-center/

Author: Lee Howard



ONTAP 9.2: More Cloud, Efficiency, Control, and Software-Defined Storage Options for a Data Driven World

Category : NetApp

On Monday, June 5, NetApp launched the most far-reaching innovation announcement in our 25-year history. At a time when many storage companies are struggling to remain afloat and running out of innovation steam, NetApp at 25 has never been more innovative. We are pivoting to serve the future generation of “data visionaries” with expanded hybrid cloud options, the new NetApp® hyperconverged infrastructure (HCI), and much more.

Among all that goodness, the NetApp ONTAP® data management platform had its 25th birthday, and it has never looked better. A friend of mine wrote a great treatise about how the term “legacy” gets misused as an insult by those who are still struggling to succeed. In contrast, for NetApp, 25 years of ONTAP is a track record, showing that year after year our customers vote with their dollars for the feature-rich, efficient, and resilient ONTAP architecture that accelerates their business.

Do you need a quick refresher on everything ONTAP can do for the modern enterprise? Watch the video below. It’ll give you the grand tour!

 

 

Innovation Matters

Unlike other architectures that might be content with the status quo, ONTAP is being improved more quickly and more significantly with every release. And it shows.

Figure 1) NetApp Leadership

ONTAP 9 introduced our new development methodology, which allows us to deliver a new major release on a dependable six-month cadence. With the 9.0 and 9.1 releases, we took full advantage of that capability to deliver value-added features to our existing and many new customers. ONTAP became easier to set up and simpler to operate. Our storage efficiencies continued to improve and deliver more capacity out of the same systems at no additional cost. We added granular software-based encryption to protect data at rest with no additional equipment or cost required. We expanded our offerings in data protection, resiliency, and compliance with AltaVault™ cloud backup integration, MetroCluster™ enhancements, and integrated SnapLock® compliance software. All of that in one year.

Our customers, partners, and the industry have taken notice. ONTAP FlexGroup is quickly taking the scale-out NAS world by storm, dethroning Isilon as the “market leader,” according to end users surveyed by IT Brand Pulse. We’ve shipped over 6PB of NVMe, quickly dominating an emerging space, while other companies are still preannouncing nonshipping products. NetApp, yes, that NetApp, is the fastest growing SAN vendor in the market, according to IDC’s Worldwide Quarterly Enterprise Storage Systems Tracker – 2016Q4, March 2, 2017.

Continuing our winning streak, we again hit that six-month cadence as promised on May 11, when ONTAP 9.2 was released to our customers. I joined the Tech ONTAP podcast to discuss all the great new technology baked into 9.2; have a listen here:

 

 

Justin Parisi, one of the masterminds behind the Tech ONTAP podcast, penned a tremendous blog (ONTAP 9.2RC1 is available!) highlighting many of the key features in ONTAP 9.2, including a number of links to associated podcasts and videos. It’s definitely worth the read.

What makes ONTAP 9.2 worthy of being the newest major release in that proud 25-year track record? We really took to heart NetApp’s new #DataDriven focus on enabling data visionaries to take control of their data. That means that ONTAP 9.2 brings more cloud, more efficiency, more control, and more software-defined storage (SDS) options.

More Cloud Options with FabricPool

FabricPool has been a hot topic ever since we showed off the concept at NetApp Insight™ 2016. (Hint: NetApp Insight 2017 is just around the corner, so register now.) In a nutshell, FabricPool is a simple way to leverage the hybrid cloud to place your data where it best belongs. When enabled on an All Flash FAS array powered by ONTAP 9.2, FabricPool automatically moves cold secondary data stored in Snapshot® copies or backup copies out to the cloud using object storage. It’s the best of both worlds: the performance of a resilient, unified, high-performance all-flash array for hot data and the cost-optimized capacity of a private or public cloud object storage repository for cold data.

Figure 2) Tiering Cold Data

In this first release of FabricPool, we support tiering data to Amazon S3 and to NetApp StorageGRID®, our innovative award-winning object storage solution for your private cloud. We also recently announced our intention to integrate FabricPool with Microsoft Azure Blob Storage.
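
To make the tiering idea concrete, here is a purely conceptual Python sketch of the policy FabricPool applies: cold blocks age out to an object store. It uses boto3 against S3 for illustration; FabricPool itself runs inside ONTAP, and the bucket name and two-week “cold” threshold here are invented assumptions:

```python
# Conceptual illustration only: FabricPool's actual tiering runs inside ONTAP.
# Bucket name and age threshold below are invented for this sketch.
import time
import boto3

COLD_AFTER_SECONDS = 14 * 24 * 3600  # assume two weeks of no access means "cold"
s3 = boto3.client("s3")

def tier_cold_blocks(blocks, bucket="fabricpool-demo-bucket"):
    """blocks: list of dicts with 'id', 'last_access' (epoch), and 'data' (bytes)."""
    now = time.time()
    for block in blocks:
        if now - block["last_access"] > COLD_AFTER_SECONDS:
            # Cold: move to the capacity tier (object storage)
            s3.put_object(Bucket=bucket, Key=f"blocks/{block['id']}", Body=block["data"])
            block["tier"] = "cloud"
        else:
            block["tier"] = "performance"  # hot data stays on flash
    return blocks
```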

With just a few clicks, our customers can realize up to 40% savings in storage TCO across their entire All Flash FAS (AFF) SAN and NAS environment. That’s one more major thread added to the NetApp Data Fabric, enabling true hybrid cloud capabilities for modern businesses.

More Efficient Storage with Expanded Inline Deduplication

NetApp has spent decades focusing on using the least storage necessary to store the maximum data, with industry-leading storage efficiency techniques minimizing our customers’ data center footprint and their storage spend. When I joined NetApp in 2008, we were just rolling out the industry’s first deduplication for primary SAN and NAS storage, integrated directly into ONTAP at no additional cost. Since then, almost every major release has integrated new efficiency technologies such as compression or compaction or made the existing technologies more efficient or effective, for example, by shifting to inline efficiencies on our All Flash FAS (AFF) systems.

ONTAP 9.2 takes yet another leap forward by expanding the scope of our inline deduplication on our All Flash FAS systems to run across an entire aggregate. Within ONTAP, an aggregate is a collection of one or more RAID groups of drives, upon which we provision volumes for NAS exports/shares and SAN LUNs. With ONTAP 9.2, the individual aggregates on our All Flash FAS systems can now go up to 800TB. That means that inline dedupe now runs across 800TB of raw flash space; at 5:1, that means dedupe is running against 4PB of logical data. Each aggregate in an All Flash FAS system is larger than or comparable to the entire “global” scale-out system offered by many competitors’ AFAs. With ONTAP scale-out, an All Flash FAS system can host multiple aggregates, extending out to 7.3PB of raw storage space, equating to well over 20PB of logical storage space after basic efficiencies.

Figure 3) Shrink the Storage Footprint

Now, with ONTAP 9.2, inline dedupe can run across hundreds of individual volumes residing on the same aggregate, reclaiming duplicate data, whether that’s database copies, multiple VMware datastores, or any other NAS/SAN mixed workload. Again, it’s a no-additional-cost capability with negligible performance impact that we offer to customers as a nondisruptive software upgrade. This upgrade allows customers to reclaim up to 30% more efficiency on top of and in complement to existing efficiencies, including compression and compaction. Combine this logical efficiency with the extreme density enabled by 15.3TB solid-state disks (SSDs), and you can get upward of 1PB in a single 2RU shelf or 4RU dual-controller All Flash FAS system. That’s industry-leading logical efficiency and physical consolidation, all in one solution.
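
If you want intuition for how cross-volume dedupe multiplies savings, this toy Python sketch stores identical blocks only once across volumes by hashing their content. It is a conceptual illustration of content-addressed deduplication, not ONTAP’s implementation:

```python
# Toy illustration of content-addressed dedupe across volumes that share
# an aggregate: identical blocks are stored once. Not ONTAP's internals.
import hashlib

def dedupe_ratio(volumes):
    """volumes maps volume name -> list of fixed-size blocks (bytes)."""
    logical = 0
    unique = set()
    for blocks in volumes.values():
        for block in blocks:
            logical += 1
            unique.add(hashlib.sha256(block).hexdigest())
    return logical / len(unique)

vols = {
    "vmware_ds1": [b"guest-os-block"] * 3 + [b"app-data-1"],
    "vmware_ds2": [b"guest-os-block"] * 3 + [b"app-data-2"],
}
# The six identical guest-OS blocks collapse to one stored copy
print(f"dedupe ratio: {dedupe_ratio(vols):.1f}:1")
```

The wider the dedupe domain (volume, then aggregate), the more duplicate blocks land in the same pool, which is exactly why aggregate-wide scope matters for VMware datastores and database copies.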

More Control of Performance for Shared Business-Critical Workloads

Unified shared infrastructure has a universe of benefits: consolidation, efficiency, flexibility, and scalability, just to name a few. The old silos where a business-critical application has its own storage array are quickly disappearing, victims to their own inability to adapt. However, a siloed storage array does have one advantage: There are no other workloads on the array to compete for resources. That’s why, when you’re consolidating storage workloads, it’s so important to place business-critical applications in the right place to meet their performance requirements and then make sure they get the performance they need after they are there.

Figure 4) Simplified Performance Controls for Shared Environments

In ONTAP 9.2, we introduced the concept of application-centric balanced placement for SAN workloads on our All Flash FAS arrays. What does that mean? It means you tell ONTAP that your application needs X number of LUNs of Y size with Z performance, and ONTAP itself identifies the right place within the scale-out cluster based on the available performance headroom to put your LUNs. Plus ONTAP provisions them for you and applies quality of service (QoS) maximums based on your selected service level. It’s automagic and available today for several common workloads as well as a “generic NAS workload” and a “generic SAN workload,” as shown in the System Manager onboard GUI screenshot in Figure 5.

Figure 5) System Manager GUI
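
For intuition about what balanced placement is doing, here is a hedged Python sketch of the underlying idea: filter nodes by capacity and performance headroom, then pick the one with the most headroom to spare. ONTAP’s real algorithm is internal and considerably more sophisticated; the node names and fields below are invented:

```python
# Hedged sketch of the balanced-placement idea, not ONTAP's actual algorithm.
def place_workload(nodes, lun_count, lun_size_gb, required_iops):
    need_gb = lun_count * lun_size_gb
    candidates = [
        n for n in nodes
        if n["free_gb"] >= need_gb and n["headroom_iops"] >= required_iops
    ]
    if not candidates:
        raise RuntimeError("no node meets the capacity and performance request")
    # Prefer the node with the most performance headroom left after placement
    return max(candidates, key=lambda n: n["headroom_iops"] - required_iops)

nodes = [
    {"name": "aff-node1", "free_gb": 8_000, "headroom_iops": 40_000},
    {"name": "aff-node2", "free_gb": 20_000, "headroom_iops": 90_000},
]
print(place_workload(nodes, lun_count=4, lun_size_gb=500, required_iops=25_000)["name"])
```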

With ONTAP 9.2, we also added new QoS minimums, now available for All Flash FAS SAN. The following video shows how these QoS policies are applied to protect business-critical workloads:

 

The video demo shows how a combination of ONTAP 9.2 QoS minimums and maximums can be used to optimize the performance utilization of a shared efficient SAN/NAS architecture, even when mixing business-critical workloads with “pesky” noisy neighbors such as the preproduction workload highlighted. ONTAP 9.2 makes it possible to modernize your data center while still protecting the applications that drive your business.
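
The min/max interplay in that demo can be captured in a few lines. This illustrative Python allocator guarantees every workload its minimum first, then shares the remaining headroom up to each maximum; ONTAP’s internal scheduling differs, but the sketch shows why a noisy neighbor cannot starve a protected workload:

```python
# Illustrative min/max allocator; ONTAP's internal QoS scheduling differs.
def allocate_iops(cluster_iops, workloads):
    """workloads: dict name -> {'min': int, 'max': int, 'demand': int}"""
    # Pass 1: satisfy every workload's guaranteed minimum (up to its demand)
    alloc = {name: min(w["min"], w["demand"]) for name, w in workloads.items()}
    remaining = cluster_iops - sum(alloc.values())
    # Pass 2: hand out leftover headroom, never exceeding each maximum
    for name, w in sorted(workloads.items()):
        extra = min(w["demand"], w["max"]) - alloc[name]
        grant = min(extra, remaining)
        alloc[name] += grant
        remaining -= grant
    return alloc

workloads = {
    "oracle-prod": {"min": 50_000, "max": 120_000, "demand": 80_000},
    "preprod":     {"min": 0,      "max": 30_000,  "demand": 200_000},
}
print(allocate_iops(100_000, workloads))
# oracle-prod keeps its guaranteed floor; preprod is capped at its maximum.
```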
More Software-Defined Storage Options with ONTAP Select 

The magic of ONTAP is not just what it enables our customers to do; it’s also how they can do it. ONTAP can run on All Flash FAS systems, hybrid FAS systems, FlexPod® converged infrastructure, and third-party heterogeneous arrays; near the cloud; and directly in the cloud. But one of the most exciting and fastest growing ways to deploy ONTAP is as software-defined storage, directly on commodity servers with DAS, using ONTAP Select.

With ONTAP 9.2, we’ve added new deployment options to spread ONTAP software-defined value to even more use cases, as shown in Figure 6.

Figure 6) New Deployment Options

Prior to ONTAP Select 9.2, we offered a single-node non-HA Select configuration or a four-node highly available configuration. ONTAP Select 9.2 introduces a two-node configuration with a separate HA “mediator” built directly into the same “ONTAP Deploy” virtual appliance used to properly instantiate ONTAP Select instances initially, thus requiring no additional infrastructure or virtual machines.
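
A tiny sketch shows why the mediator matters in a two-node configuration: on a network partition, a node may take over only if it can still reach the mediator, so the two nodes can never both serve the same data independently. This is purely conceptual; ONTAP Select’s actual HA protocol is internal:

```python
# Conceptual tiebreaker logic for a two-node HA pair plus mediator.
# Not ONTAP Select's actual protocol.
def may_take_over(sees_partner: bool, sees_mediator: bool) -> bool:
    if sees_partner:
        return False        # partner is alive: no takeover needed
    return sees_mediator    # partner unreachable: mediator breaks the tie

# Partition scenario: node A still reaches the mediator, node B is isolated.
print(may_take_over(sees_partner=False, sees_mediator=True))   # node A: True
print(may_take_over(sees_partner=False, sees_mediator=False))  # node B: False
```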

The other major innovation is support for non-DAS storage with ONTAP Select vNAS. What does that mean? It means that, in general, if you can supply storage to a VMware vSphere environment, then ONTAP Select runs on top of that storage. One use case is an external array connected to vSphere, most commonly when you want to add industry-leading NAS services provided by ONTAP onto a SAN array, hence the “vNAS” moniker. A related use case is for customers using VMware VSAN to take their existing server DAS and turn it into a shared storage environment. In most cases, ONTAP Select on its own or other options such as ONTAP All Flash FAS/FAS or our new NetApp HCI can provide an overall stronger enterprise data management solution than VMware VSAN. However, if a customer determines that VSAN is the right solution for a given workload, we want to support that. Offering ONTAP Select vNAS for VSAN allows you to take advantage of VSAN and ONTAP.

ONTAP 9.2: And So Much More …

Beyond the major improvements highlighted in this blog post, there’s an extensive list of other improvements that our customers expect of a major ONTAP release, some of which are covered in the blog by Justin Parisi. They are all covered in the ONTAP 9.2 release notes and documentation, available to our customers and partners at the NetApp Support site.

ONTAP 9.2 represents yet another significant update that delivers new capabilities to the data visionaries that we’re proud to have as customers … and no surprise, the team is already hard at work continuing to drive major innovation for the next release. Because innovation matters, and NetApp is Data Driven.

Source: https://newsroom.netapp.com/blogs/ontap-9-2-more-cloud-efficiency-control-and-software-defined-storage-options-for-a-data-driven-world/

Author: Jeff Baxter



Pay-As-You-Go Data Management for Your Data Center

Category : NetApp

The flexibility to pay for only the resources consumed is a key reason many organizations choose the cloud. Rather than requiring a large up-front capital investment for infrastructure, the cloud gives you the option to start small and pay as you grow.

NetApp recently introduced a new on-demand consumption model that offers this same benefit for resources deployed on premises. NetApp® OnDemand gives you all the value of NetApp data management solutions with the flexibility of the cloud.

NetApp OnDemand Consumption Model

NetApp OnDemand brings cloudlike flexibility to on-premises environments, converting traditional capex purchase models to flexible opex purchases. It simplifies the acquisition and management of data storage capacity, marrying NetApp on-premises infrastructure with the flexibility of a usage-based consumption model and the economic agility benefits of public cloud.

You simply pay monthly for capacity consumed. NetApp owns the infrastructure, but you manage it; you have full control over your data.

OnDemand is part of a continuum of solutions from NetApp that allow you to consume data and storage resources in the way that makes the most sense for your business needs — everything from data services in the cloud to traditional on-premises solutions.


On premises or next to the cloud. With NetApp OnDemand, you have the option of using NetApp infrastructure deployed either on premises or colocated next to the cloud. If you choose colocation, you maintain full control of your data while gaining the ability to easily leverage compute and analytics services from leading cloud providers.

Managed services. You can also bundle OnDemand consumption with managed services from NetApp or its partners for complete data management as a service. Experts manage the infrastructure and data based on your requirements and according to NetApp best practices.

How It Works

The OnDemand program begins with a NetApp Service Design Workshop in which you work with data management experts to identify the solutions that meet your service-level objectives. NetApp or partner experts install the equipment, and you:

  • Pay for resources on a monthly basis
  • Have complete responsibility for the data
  • Perform all data management tasks, including backups, disaster recovery, and so on
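
As a hedged illustration of the pay-as-you-grow math, this Python sketch computes a monthly charge from consumed capacity. The rate and the optional committed floor are invented for illustration; actual NetApp OnDemand pricing is contract-specific:

```python
# Illustrative pay-per-use billing; rate and commit terms are invented,
# not actual NetApp OnDemand pricing.
def monthly_charge(consumed_tb, rate_per_tb=30.0, committed_tb=0.0):
    """Bill the greater of actual consumption or a committed floor."""
    billable = max(consumed_tb, committed_tb)
    return billable * rate_per_tb

for month, used in [("Jan", 120), ("Feb", 155), ("Mar", 140)]:
    print(f"{month}: {used}TB consumed -> ${monthly_charge(used):,.2f}")
```

The operational point stands regardless of the numbers: spend tracks consumption month to month instead of a three-to-five-year capacity forecast.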

If you choose to add NetApp Managed Services, NetApp or its partners take over these responsibilities on your behalf.

When Should You Consider an OpEx Model?

When planning your next IT deployment, start by assessing your requirements and then make choices based on what best fits your needs. Traditional capex purchases deliver the best value for long-term, predictable workloads. Leasing also remains a good choice for stable medium- to long-term deployments when you need an opex purchase model.

If you need the flexibility of the cloud for unpredictable workloads but must maintain control of your data—either in your data center or colocated next to the cloud—take a look at NetApp OnDemand. With OnDemand, you can convert capex to opex with a pay-as-you-go model.

For deployments in the cloud, let NetApp help you choose the solution that supports your data management needs, with all the control and efficiency you expect in the data center.

Source: https://newsroom.netapp.com/blogs/pay-as-you-go-data-management-for-your-data-centers/

