Category Archives: NetApp


FlexPod SF: A Scale-Out Converged System for the Next-Generation Data Center

Category : NetApp

Welcome to the age of scale-out converged systems—made possible by FlexPod® SF. Together, Cisco and NetApp are delivering this new FlexPod solution, built architecturally for the next-generation data center. Architects and engineers are being asked to design converged systems that deliver new capabilities to match the demands of consolidating silos, expanding to web-native apps, and embracing new modes of operations (for example, DevOps).

New Criteria for Converged Systems

Until now, converged systems have served the design criteria of integration, testing, and ease of configuration, within the confines of current IT operations and staples like Fibre Channel. A new approach, however, focuses on a different set of design requirements: converged systems need to deliver on performance, agility, and value.

Enter the First Scale-Out FlexPod Solution Built on Cisco UCS and Nexus

Cisco and NetApp have teamed to deliver FlexPod SF, the world’s first scale-out converged system built on Cisco UCS servers, Cisco Nexus switching, Cisco management, VMware vSphere 6, and the newly announced NetApp® SolidFire® SF9608 nodes running the NetApp SolidFire Element® OS on the Cisco C220 platform. The solution is designed to bring the next-generation data center to FlexPod.

SF9608 Nodes Powered by Cisco C220 and the NetApp SolidFire Element OS

The critical part of bringing FlexPod SF forward is the new NetApp SF9608 nodes. For the first time, NetApp is producing a new Cisco C220-based node appliance running the Element OS.

SF9608 nodes, built on the Cisco UCS C220 M4 SFF Rack Server, have these specifications:

  • CPU: 2 x 2.6GHz (Intel Xeon E5-2640 v3)
  • Memory: 256GB RAM
  • Drives: 8 x 960GB SSDs (non-SED)
  • Raw capacity: 7.68TB per node (8 x 960GB)

Each node has these characteristics:

  • Block storage: iSCSI-only solution
  • Per-volume, IOPS-based quality of service (QoS)
  • 75,000 IOPS
  • Data protected with a second copy—that is, a primary plus one replicated copy

Users can obtain support through 888-4NetApp or Mysupport@netapp.com.

The key here is that it’s the same Element OS that’s nine revisions mature, born in the service provider world, and used by some of the biggest enterprise and telco businesses in the world. The Element OS comes preconfigured on the C220 node hardware to deliver a storage node appliance just for FlexPod. Element OS 9 delivers:

  • Scale-out clustering. You can cluster a minimum of four nodes and then add or subtract nodes as needed. You get maximum flexibility with linear scale for performance and capacity, because every node contributes CPU, RAM, 10Gb networking, SSD IOPS, and capacity.
  • QoS. You can control the entire cluster’s IOPS, setting minimum, maximum, and burst values per workload to deliver mixed workloads without performance issues.
  • Automation programmability. The Element OS has a 100% exposed API, which is preferred for programming no-touch operations (see the sketch after this list).
  • Data assurance. The OS enables you to protect data from the loss of drives or nodes. Recovery from a drive failure takes about 5 minutes, and recovery from a full node failure takes less than 60 minutes—all without data loss.
  • Inline efficiency. Efficiency is always on and inline to the data, reducing the footprint through deduplication, compression, and thin provisioning.
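
To make that API bullet concrete, here’s a minimal Python sketch that sets per-volume QoS through the Element OS JSON-RPC API. The management VIP, credentials, volume ID, and IOPS values are hypothetical placeholders, so treat it as an outline of the call shape rather than production code:

    import requests

    # Hypothetical cluster management VIP, API version path, and credentials.
    MVIP = "https://10.0.0.100/json-rpc/9.0"
    AUTH = ("admin", "password")

    def set_volume_qos(volume_id, min_iops, max_iops, burst_iops):
        """Apply per-volume QoS via the Element OS ModifyVolume method."""
        payload = {
            "method": "ModifyVolume",
            "params": {
                "volumeID": volume_id,
                "qos": {
                    "minIOPS": min_iops,      # guaranteed floor
                    "maxIOPS": max_iops,      # sustained cap
                    "burstIOPS": burst_iops,  # short-term ceiling
                },
            },
            "id": 1,
        }
        # verify=False only because many clusters ship self-signed certificates.
        resp = requests.post(MVIP, json=payload, auth=AUTH, verify=False)
        resp.raise_for_status()
        return resp.json()["result"]

    # Guarantee 1,000 IOPS to volume 42, cap it at 5,000, allow bursts to 8,000.
    set_volume_qos(42, 1000, 5000, 8000)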

The Element OS is also different from existing storage software. It’s important to understand that FlexPod SF is not a dual-controller architecture with SSD shelves; you avoid roughly 93% of the overhead tasks that come with that design.

Use Cases Delivering the Next-Generation Data Center

As you design for the next-generation data center, you’ll find requirements that are often buzzword-worthy but take on concrete technical meaning in what FlexPod SF delivers:

  • Agility. You’re able to respond, by means of the infrastructure stack, to a variety of on-demand needs: more resources, offline virtual machine (VM) or app building from infrastructure requests, and autonomous self-healing from failures or performance issues (end-to-end QoS across compute, network, and storage).
  • Scalability. You gain scalability not just in size but in how you scale—with granularity, across generations of products—moving, adding, or changing resources such as the new storage nodes. FlexPod SF delivers scale in size (multi-PB, multimillions of IOPS, and so on) and gives you maximum flexibility to redeploy and adjust scale.
  • Predictability. FlexPod SF offers the performance, reliability, and capabilities to deliver an SLA from compute, network, and storage via VMware, so that VMs, apps, and data can be consumed without the periodic delivery issues of existing infrastructure.

With the next-generation data center, IT can simplify and automate, build for “anything as a service” (XaaS), and accelerate the adoption of DevOps. FlexPod SF delivers the next-generation data center for VMware Private Clouds and gives IT and service providers the ability to deliver infrastructure as a service.

  • VMware Private Cloud. Different from server virtualization, where the focus is on virtualization of apps, integration to existing management platforms and tools, and optimization of VM density.
    • Instead of managing through a component UI, manage through the vCenter plug-in or Cisco UCS Director.
    • Move from silos to consolidated and mixed workloads through QoS.
    • Instead of configuring elements of infrastructure, automate through VMware Storage Policy-Based Management, VMware vRealize Automation, or Cisco UCS Director.
  • Infrastructure as a service. Currently, service and cloud providers take the components of FlexPod SF and deliver them as a service. With this new FlexPod solution, you’ll be able to configure multitenancy with far more elastic resources and the performance controls to construct an SLA for on-demand consumption.

FlexPod SF Cisco Validated Design

A critical part of the engineering is the Cisco Validated Design (CVD), which encompasses all the details needed for a full validation of a design. With FlexPod SF, the validation was specific to the configuration documented in the CVD.

The base strength of Cisco’s UCS and Nexus platforms now extends to scale-out NetApp SF9608 nodes in a spine-leaf, 10Gb top-of-rack configuration. All of this is “new school,” and the future is now. Add CPU, RAM, 10Gb networking, and storage in small, flexible increments, 1U at a time, from a base four-node configuration.

Architecture and Deployment Considerations

FlexPod SF is not your average converged system. To architect and deploy, you’ll need to rethink your work—for example, helping the organization understand workload profiles to set QoS, and creating policy automation for rapid builds and self-service. Here are some considerations:

  • Current mode of operations
    • Analyze the structure of current IT operations. FlexPod SF presents the opportunity for IT or a service provider to move past complex configurations to profiles, policy automation, and self-service so VM builders and developers can operate with agility.
  • Application profiles and consolidation
    • Help organizations align known application and VM profiles to programmable settings in QoS, policies, and tools such as PowerShell.
    • Set QoS for minimum, maximum, and burst separate from capacity settings. This granularity enables architects to apply settings that will consolidate app silos and SLAs without overprovisioning hardware resources.
  • Cisco compute and network: same considerations as previous FlexPod solutions; only B Series supported at this time.
  • Storage
    • Architecting the SF9608 nodes is straightforward. With the Element OS, your design requirements are volume capacity (GB/TB) and IOPS settings through QoS (see the sketch after this list). The IOPS settings are:
      • Minimum: the key ability to deliver performance SLAs. It is delivered through the Element OS on a cluster of four or more nodes, which governs the maximum capabilities of the cluster and induces latency on workloads that trespass their QoS settings.
      • Maximum: caps the maximum IOPS of a workload.
      • Burst: over a given time, allows a workload to exceed its maximum if the cluster can supply the IOPS.
    • Capacity does not need to be projected three to five years out, as with existing storage. SF9608 nodes can be added on demand, with 1U granularity, as capacity and performance needs grow. Scale is linear: each node contributes CPU, RAM, 10Gb networking, capacity, and IOPS.
    • Encryption is not available at this time.
    • Boot from SAN is supported.
    • You cannot field-update a C220 to become a SF9608 node.
    • There is no DC power at this time (roadmap).
  • VMware
    • In architecting for a FlexPod SF environment, focus on the move from server virtualization—where the emphasis is on consolidation ratios, integration with existing stack tools, and modernization to updated resources like all-flash, 10Gb networking, and faster Intel CPUs. For VMware Private Cloud environments, align all of these attributes and capabilities to an on-demand, profile-centric, policy-driven (SPBM) environment in which VM administrators can completely build VMs from vCenter or Cisco UCS Director.
    • FlexPod SF presents a new opportunity for operators. The interface for daily operations is VMware vCenter, Cisco UCS Director, or both. As you build, move, add, and change VMs, you’ll notice policies that go beyond templates. You’ll see granular capabilities to completely build all attributes of VMs. You’ll also be able to present self-service portals for developers and consumers of a VMware Private Cloud to operate with agility and achieve their missions.
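
As promised above, here is a small planning sketch for those QoS settings. The workload names and IOPS figures are invented for illustration; the real anchors are the 75,000 IOPS per node figure from the node specifications and the rule of thumb that committed minimums should not exceed what the cluster can deliver:

    # Hypothetical workload profiles; all IOPS figures are illustrative.
    PER_NODE_IOPS = 75_000
    NODES = 4  # base FlexPod SF cluster size
    cluster_iops = PER_NODE_IOPS * NODES

    workloads = {
        "oracle-prod": {"min": 60_000, "max": 90_000, "burst": 120_000},
        "vdi-pool":    {"min": 40_000, "max": 60_000, "burst": 80_000},
        "dev-test":    {"min": 5_000,  "max": 20_000, "burst": 40_000},
    }

    committed = sum(w["min"] for w in workloads.values())
    print(f"Committed minimums: {committed:,} of {cluster_iops:,} cluster IOPS")

    # Keep committed minimums at or below cluster capability so every
    # workload's floor holds even when all workloads are busy at once.
    assert committed <= cluster_iops, "add nodes before committing more minimums"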

Source: https://newsroom.netapp.com/blogs/flexpod-sf-a-scale-out-converged-system-for-the-next-generation-data-center/

Author: Lee Howard



ONTAP 9.2: More Cloud, Efficiency, Control, and Software-Defined Storage Options for a Data Driven World

Category : NetApp

On Monday, June 5, NetApp launched the most far-reaching innovation announcement in our 25-year history. At a time when many storage companies are struggling to remain afloat and running out of innovation steam, NetApp at 25 has never been more innovative. We are pivoting to serve the future generation of “data visionaries” with expanded hybrid cloud options, the new NetApp® hyper converged infrastructure (HCI), and much more.

Among all that goodness, the NetApp ONTAP® data management platform had its 25th birthday, and it has never looked better. A friend of mine wrote a great treatise about how the term “legacy” gets misused as an insult by those who are still struggling to succeed. In contrast, for NetApp, 25 years of ONTAP is a track record, showing that year after year our customers vote with their dollars for the feature-rich, efficient, and resilient ONTAP architecture that accelerates their business.

Do you need a quick refresher on everything ONTAP can do for the modern enterprise? Watch the video below. It’ll give you the grand tour!

Innovation Matters

Unlike other architectures that might be content with the status quo, ONTAP is being improved more quickly and more significantly with every release. And it shows.

Figure 1) NetApp Leadership

ONTAP 9 introduced our new development methodology, which allows us to deliver a new major release on a dependable six-month cadence. With the 9.0 and 9.1 releases, we took full advantage of that capability to deliver value-added features to our existing and many new customers. ONTAP became easier to set up and simpler to operate. Our storage efficiencies continued to improve and deliver more capacity out of the same systems at no additional cost. We added granular software-based encryption to protect data at rest with no additional equipment or cost required. We expanded our offerings in data protection, resiliency, and compliance with AltaVault™ cloud backup integration, MetroCluster™ enhancements, and integrated SnapLock® compliance software. All of that in one year.

Our customers, partners, and the industry have taken notice. ONTAP FlexGroup is quickly taking the scale-out NAS world by storm, dethroning Isilon as the “market leader,” according to end users surveyed by IT Brand Pulse. We’ve shipped over 6PB of NVMe, quickly dominating an emerging space, while other companies are still preannouncing nonshipping products. NetApp, yes, that NetApp, is the fastest growing SAN vendor in the market, according to IDC’s Worldwide Quarterly Enterprise Storage Systems Tracker – 2016Q4, March 2, 2017.

Continuing our winning streak, we again hit that six-month cadence as promised on May 11, when ONTAP 9.2 was released to our customers. I joined the Tech ONTAP podcast to discuss all the great new technology baked into 9.2.

Justin Parisi, one of the masterminds behind the Tech ONTAP podcast, penned a tremendous blog (ONTAP 9.2RC1 is available!) highlighting many of the key features in ONTAP 9.2, including a number of links to associated podcasts and videos. It’s definitely worth the read.

What makes ONTAP 9.2 worthy of being the newest major release in that proud 25-year track record? We really took to heart NetApp’s new #DataDriven focus on enabling data visionaries to take control of their data. That means that ONTAP 9.2 brings more cloud, more efficiency, more control, and more software-defined storage (SDS) options.

More Cloud Options with FabricPool

FabricPool has been a hot topic ever since we showed off the concept at NetApp Insight™ 2016. (Hint: NetApp Insight 2017 is just around the corner, so register now.) In a nutshell, FabricPool is a simple way to leverage the hybrid cloud to place your data where it best belongs. When enabled on an All Flash FAS array powered by ONTAP 9.2, FabricPool automatically moves cold secondary data stored in Snapshot® copies or backup copies out to the cloud using object storage. It’s the best of both worlds: the performance of a resilient, unified, high-performance all-flash array for hot data and the cost-optimized capacity of a private or public cloud object storage repository for cold data.

Figure 2) Tiering Cold Data

In this first release of FabricPool, we support tiering data to Amazon S3 and to NetApp StorageGRID®, our innovative award-winning object storage solution for your private cloud. We also recently announced our intention to integrate FabricPool with Microsoft Azure Blob Storage.

With just a few clicks, our customers can realize up to 40% savings in storage TCO across their entire All Flash FAS (AFF) SAN and NAS environment. That’s one more major thread added to the NetApp Data Fabric, enabling true hybrid cloud capabilities for modern businesses.
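
For intuition about what counts as “cold,” the following Python fragment sketches an age-based hot/cold split. It is purely conceptual; FabricPool operates on blocks inside ONTAP with its own heuristics, and the 14-day cooling threshold here is an invented example value:

    import time

    COOLING_PERIOD_DAYS = 14  # hypothetical threshold, not an ONTAP default

    def tier_for(last_access_epoch, now=None):
        """Pick a tier for a block under a simple age-based policy."""
        now = now if now is not None else time.time()
        age_days = (now - last_access_epoch) / 86_400
        # Cold blocks go to the object store (S3 or StorageGRID);
        # hot blocks stay on the all-flash performance tier.
        return "object store" if age_days > COOLING_PERIOD_DAYS else "all-flash"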

More Efficient Storage with Expanded Inline Deduplication

NetApp has spent decades focusing on using the least storage necessary to store the maximum data, with industry-leading storage efficiency techniques minimizing our customers’ data center footprint and their storage spend. When I joined NetApp in 2008, we were just rolling out the industry’s first deduplication for primary SAN and NAS storage, integrated directly into ONTAP at no additional cost. Since then, almost every major release has integrated new efficiency technologies such as compression or compaction or made the existing technologies more efficient or effective, for example, by shifting to inline efficiencies on our All Flash FAS (AFF) systems.

ONTAP 9.2 takes yet another leap forward by expanding the scope of our inline deduplication on our All Flash FAS systems to run across an entire aggregate. Within ONTAP, an aggregate is a collection of one or more RAID groups of drives, upon which we provision volumes for NAS exports/shares and SAN LUNs. With ONTAP 9.2, the individual aggregates on our All Flash FAS systems can now go up to 800TB. That means that inline dedupe now runs across 800TB of raw flash space; at 5:1 efficiency, dedupe is running against 4PB of logical data. Each aggregate in an All Flash FAS system is larger than or comparable to the entire “global” scale-out system offered by many competitors’ AFAs. With ONTAP scale-out, an All Flash FAS system can host multiple aggregates, extending out to 7.3PB of raw storage space, equating to well over 20PB of logical storage space after basic efficiencies.

Figure 3) Shrink the Storage Footprint

Now, with ONTAP 9.2, inline dedupe can run across hundreds of individual volumes residing on the same aggregate, reclaiming duplicate data, whether that’s database copies, multiple VMware datastores, or any other NAS/SAN mixed workload. Again, it’s a no-additional-cost capability with negligible performance impact that we offer to customers as a nondisruptive software upgrade. This upgrade allows customers to reclaim up to 30% more efficiency on top of and in complement to existing efficiencies, including compression and compaction. Combine this logical efficiency with the extreme density enabled by 15.3TB solid-state disks (SSDs), and you can get upward of 1PB in a single 2RU shelf or 4RU dual-controller All Flash FAS system. That’s industry-leading logical efficiency and physical consolidation, all in one solution.
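
As a toy illustration of the underlying idea (one physical copy per unique block, shared across many volumes in the aggregate), here is a short Python sketch. It is a conceptual model only, not how ONTAP’s inline engine is implemented; the block size, hash choice, and data are invented for the example:

    import hashlib

    def dedupe(blocks):
        """Keep each unique block once, keyed by its content fingerprint."""
        store = {}   # fingerprint -> physical block
        refs = []    # logical view: one fingerprint per logical block
        for block in blocks:
            fp = hashlib.sha256(block).hexdigest()
            store.setdefault(fp, block)  # physical copy stored only once
            refs.append(fp)
        return store, refs

    # Two "volumes" in the same aggregate share an identical block.
    vol_a = [b"A" * 4096, b"B" * 4096]
    vol_b = [b"A" * 4096, b"C" * 4096]
    store, refs = dedupe(vol_a + vol_b)
    print(f"logical blocks: {len(refs)}, physical blocks: {len(store)}")  # 4 vs 3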

More Control of Performance for Shared Business-Critical Workloads

Unified shared infrastructure has a universe of benefits: consolidation, efficiency, flexibility, and scalability, just to name a few. The old silos where a business-critical application has its own storage array are quickly disappearing, victims to their own inability to adapt. However, a siloed storage array does have one advantage: There are no other workloads on the array to compete for resources. That’s why, when you’re consolidating storage workloads, it’s so important to place business-critical applications in the right place to meet their performance requirements and then make sure they get the performance they need after they are there.

Figure 4) Simplified Performance Controls for Shared Environments

In ONTAP 9.2, we introduced the concept of application-centric balanced placement for SAN workloads on our All Flash FAS arrays. What does that mean? It means you tell ONTAP that your application needs X number of LUNs of Y size with Z performance, and ONTAP itself identifies the right place within the scale-out cluster, based on the available performance headroom, to put your LUNs. Plus, ONTAP provisions them for you and applies quality of service (QoS) maximums based on your selected service level. It’s automagic and available today for several common workloads as well as a “generic NAS workload” and a “generic SAN workload,” as shown in the System Manager onboard GUI screenshot in Figure 5. (A conceptual sketch of the placement logic follows the figure.)

Figure 5) System Manager GUI
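
Here is that headroom-based placement idea reduced to a few lines of Python. The node names, capacity numbers, and the pick-the-most-headroom rule are illustrative assumptions, not ONTAP’s actual algorithm:

    # Illustrative cluster state: performance capacity vs. current load.
    nodes = {
        "node1": {"capacity_iops": 100_000, "used_iops": 80_000},
        "node2": {"capacity_iops": 100_000, "used_iops": 35_000},
        "node3": {"capacity_iops": 100_000, "used_iops": 60_000},
    }

    def place(required_iops):
        """Put a new workload on the node with the most unused headroom."""
        headroom = {n: s["capacity_iops"] - s["used_iops"]
                    for n, s in nodes.items()}
        best = max(headroom, key=headroom.get)
        if headroom[best] < required_iops:
            raise RuntimeError("no node has enough performance headroom")
        nodes[best]["used_iops"] += required_iops
        return best

    print(place(30_000))  # -> node2, the node with the most headroom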

With ONTAP 9.2, we also added new QoS minimums, now available for All Flash FAS SAN. The following video shows how these QoS policies are applied to protect business-critical workloads:

The video demo shows how a combination of ONTAP 9.2 QoS minimums and maximums can be used to optimize the performance utilization of a shared efficient SAN/NAS architecture, even when mixing business-critical workloads with “pesky” noisy neighbors such as the preproduction workload highlighted. ONTAP 9.2 makes it possible to modernize your data center while still protecting the applications that drive your business.
More Software-Defined Storage Options with ONTAP Select 

The magic of ONTAP is not just what it enables our customers to do; it’s also how they can do it. ONTAP can run on All Flash FAS systems; hybrid FAS systems; FlexPod® converged infrastructure; third-party heterogeneous arrays; near the cloud; and directly in the cloud. But one of the most exciting and fastest growing ways to deploy ONTAP is as software-defined storage, directly on commodity servers with DAS, using ONTAP Select.

With ONTAP 9.2, we’ve added new deployment options to spread ONTAP software-defined value to even more use cases, as shown in Figure 6.

Figure 6) New Deployment Options

Prior to ONTAP Select 9.2, we offered a single-node non-HA Select configuration or a four-node highly available configuration. ONTAP Select 9.2 introduces a two-node configuration with a separate HA “mediator” built directly into the same “ONTAP Deploy” virtual appliance that is used to instantiate ONTAP Select instances in the first place, so no additional infrastructure or virtual machines are required.
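
Why does a two-node HA pair need a mediator at all? When the two nodes lose sight of each other, something must break the tie so that only one side keeps serving data. The sketch below shows that decision in miniature; it is a generic quorum illustration under assumed semantics, not the actual ONTAP Deploy mediator protocol:

    def may_serve(peer_reachable: bool, mediator_reachable: bool) -> bool:
        """Decide whether this node may keep serving data without
        risking split-brain in a two-node cluster with a mediator."""
        if peer_reachable:
            return True  # normal HA operation, both nodes healthy
        # Peer unreachable: continue only if the mediator agrees, which
        # guarantees both nodes cannot claim ownership simultaneously.
        return mediator_reachable

    # In a network partition, the node that still reaches the mediator wins.
    assert may_serve(peer_reachable=False, mediator_reachable=True)
    assert not may_serve(peer_reachable=False, mediator_reachable=False)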

The other major innovation is support for non-DAS storage with ONTAP Select vNAS. What does that mean? It means that, in general, if you can supply storage to a VMware vSphere environment, then ONTAP Select runs on top of that storage. One use case is an external array connected to vSphere, most commonly when you want to add industry-leading NAS services provided by ONTAP onto a SAN array, hence the “vNAS” moniker. A related use case is for customers using VMware VSAN to take their existing server DAS and turn it into a shared storage environment. In most cases, ONTAP Select on its own or other options such as ONTAP All Flash FAS/FAS or our new NetApp HCI can provide an overall stronger enterprise data management solution than VMware VSAN. However, if a customer determines that VSAN is the right solution for a given workload, we want to support that. Offering ONTAP Select vNAS for VSAN allows you to take advantage of VSAN and ONTAP.

ONTAP 9.2: And So Much More …

Beyond the major improvements highlighted in this blog post, there’s an extensive list of other improvements that our customers expect of a major ONTAP release, some of which are covered in the blog by Justin Parisi. They are all covered in the ONTAP 9.2 release notes and documentation, available to our customers and partners at the NetApp Support site.

ONTAP 9.2 represents yet another significant update that delivers new capabilities to the data visionaries that we’re proud to have as customers … and no surprise, the team is already hard at work continuing to drive major innovation for the next release. Because innovation matters, and NetApp is Data Driven.

Source: https://newsroom.netapp.com/blogs/ontap-9-2-more-cloud-efficiency-control-and-software-defined-storage-options-for-a-data-driven-world/

Author: Jeff Baxter



Pay-As-You-Go Data Management for Your Data Center

Category : NetApp

The flexibility to pay for only the resources consumed is what drives many organizations to choose the cloud. Rather than requiring a large up-front capital investment in infrastructure, the cloud gives you the option to start small and pay as you grow.

NetApp recently introduced a new on-demand consumption model that offers this same benefit for resources deployed on premises. NetApp® OnDemand gives you all the value of NetApp data management solutions with the flexibility of the cloud.

NetApp OnDemand Consumption Model

NetApp OnDemand brings cloudlike flexibility to on-premises environments, converting traditional capex purchase models to flexible opex purchases. It simplifies the acquisition and management of data storage capacity, marrying NetApp on-premises infrastructure with the flexibility of a usage-based consumption model and the economic agility benefits of public cloud.

You simply pay monthly for capacity consumed. NetApp owns the infrastructure, but you manage it; you have full control over your data.

OnDemand is part of a continuum of solutions from NetApp that allow you to consume data and storage resources in the way that makes the most sense for your business needs — everything from data services in the cloud to traditional on-premises solutions.


On premises or next to the cloud. With NetApp OnDemand, you have the option of using NetApp infrastructure deployed either on premises or colocated next to the cloud. If you choose colocation, you maintain full control of your data while gaining the ability to easily leverage compute and analytics services from leading cloud providers.

Managed services. You can also bundle OnDemand consumption with managed services from NetApp or its partners for complete data management as a service. Experts manage the infrastructure and data based on your requirements and according to NetApp best practices.

How It Works

The OnDemand program begins with a NetApp Service Design Workshop in which you work with data management experts to identify the solutions that meet your service-level objectives. NetApp or partner experts install the equipment, and you:

  • Pay for resources on a monthly basis
  • Have complete responsibility for the data
  • Perform all data management tasks, including backups, disaster recovery, and so on

If you choose to add NetApp Managed Services, NetApp or its partners take over these responsibilities on your behalf.

When Should You Consider an OpEx Model?

When planning your next IT deployment, start by assessing your requirements and then make choices based on what best fits your needs. Traditional capex purchases deliver the best value for resources deployed with long-term, predictable workloads. Leasing also remains a good choice for stable medium-term to long-term deployments when you need an opex purchase model.

If you need the flexibility of the cloud for unpredictable workloads but must maintain control of your data—either in your data center or colocated next to the cloud—take a look at NetApp OnDemand. With OnDemand, you can convert capex to opex with a pay-as-you-go model.
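
To see how the trade-off can play out, here is a deliberately simple cost comparison in Python. Every number in it (the up-front price, the monthly per-TB rate, the growth curve) is made up for illustration; actual NetApp OnDemand pricing is set per engagement:

    # Illustrative only: compare an up-front purchase with pay-per-use.
    CAPEX_100TB = 300_000        # hypothetical up-front price for 100TB
    OPEX_PER_TB_MONTH = 150      # hypothetical monthly rate per consumed TB

    def opex_total(monthly_tb_used):
        """Cumulative pay-as-you-go cost over a list of monthly usages."""
        return sum(tb * OPEX_PER_TB_MONTH for tb in monthly_tb_used)

    # Usage starts at 10TB and grows 2TB per month for three years.
    usage = [10 + 2 * month for month in range(36)]
    print(f"capex up front: {CAPEX_100TB:,}")
    print(f"pay-as-you-go:  {opex_total(usage):,}")  # 243,000 in this case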

For deployments in the cloud, let NetApp help you choose the solution that supports your data management needs, with all the control and efficiency you expect in the data center.

Source: https://newsroom.netapp.com/blogs/pay-as-you-go-data-management-for-your-data-centers/



Introducing NetApp Enterprise-Scale HCI: The Next Generation of Hyper Converged Infrastructure

Category : NetApp

Today is our annual NetApp Analyst Day in Boulder, and I’m looking forward to spending time with arguably the most knowledgeable and well-connected group of people in our industry. Understandable, then, that we chose today to launch NetApp HCI, the next generation of hyper converged infrastructure, and the very first HCI platform designed for enterprise-scale applications.

When it comes to infrastructure, IT buyers and consumers have more options than ever before. Organizations are becoming increasingly aware of the need to consume IT resources in the way that makes the most sense for them at any given time — and for any given project. In many cases, purpose-built systems (individual, bespoke storage, network, and server systems) remain the best approach. However, most people agree that other consumption options — “as a service,” converged infrastructure (CI), and software-defined storage (SDS) platforms — will take a rapidly increasing share of the IT infrastructure market over the next few years.

As I wrote about earlier in the year, the CI market has grown rapidly, as organizations continue to value increased operational simplicity and faster time to market. Hyper converged infrastructure (HCI) platforms are a natural evolution, as organizations look to build their next-generation data center strategy. The analyst community tends to agree:

  • “Hyper converged solutions will account for 60% of Server/Storage/Network deployments by 2020.” IDC, Worldwide Datacenter 2017 Predictions, Nov 2016
  • “By 2019, 30% of global storage array capacity installed in enterprise data centers will be deployed with SDS or HCI architectures, up from less than 5% today.” Gartner, Top Five Use Cases and Benefits of SDS, April 2016
  • “23% of respondents to our survey identified HCI as their top storage technology project in 2017.” 451, Voice of Enterprise Storage, Spring 2016

Perhaps more telling is the prediction that, by 2020, 70% of storage management functions will be automated and integrated into the infrastructure platform.

The First Generation of HCI

The first generation of HCI solutions has proved successful for smaller-scale projects. However, customers have found that architectural limitations around performance, automation, mixed workloads, scaling, and configuration flexibility have blocked their path to a next-generation data center strategy, where agility, scale, automation, and predictability are absolute requirements.

If you value the promise of HCI but know the first-generation solutions are limited, despite the marketing hype, please allow me a few more minutes of your valuable time to introduce the centerpiece of our announcements today: NetApp HCI.

Introducing NetApp HCI

NetApp HCI is the first enterprise-scale hyper converged infrastructure solution. It delivers cloud-like infrastructure (consumption of compute, storage, and networking resources) in an agile, scalable, easy-to-manage four-node building block. It is designed around the foundation of SolidFire all-flash storage to deliver guaranteed application performance with mature integrated replication, efficiency, data protection, and high-availability services. You can confidently deploy NetApp HCI from the edge to the core of your data center. In addition, simple centralized management through a VMware vCenter plug-in gives full control of your entire infrastructure through an intuitive user interface. Integration with NetApp ONTAP Select opens a new range of deployment possibilities for both existing NetApp customers and anyone looking to modernize their data center.

NetApp HCI solves the limitations in the current generation of HCI offerings in four key ways:

1) Guaranteed Performance

One of the biggest challenges for anyone managing infrastructure is delivering predictable performance, especially in the face of proliferating applications and workloads. Dedicated platforms and massive over-provisioning are not an economic option anymore. However, any time you have multiple applications sharing the same infrastructure, the potential exists for one application to interfere with the performance of another. NetApp HCI provides the solution with unique Quality of Service (QoS) limits, allowing the granular control of every application, eliminating noisy neighbors, and satisfying all performance SLAs. You can deploy all your applications on a shared platform, predictably and with confidence. Expect to eliminate more than 90% of traditional performance-related problems.

2) Flexibility and Scale

Unlike previous generations of HCI, with fixed resource ratios that limit you to 4-8 node configurations, NetApp HCI scales compute and storage resources independently. Independent scaling avoids costly and inefficient overprovisioning, eliminates the 10 to 30% “HCI tax” from controller VM overhead, and simplifies capacity and performance planning. You might also find that licensing costs are no longer a barrier to adopting HCI. NetApp HCI is available in mix-and-match small, medium, and large storage and compute configurations. The architectural design choices we have made mean you can now confidently scale on your terms, making HCI viable for core data center applications and platforms for the very first time.
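
The “HCI tax” is easy to quantify with back-of-the-envelope arithmetic. In the Python sketch below, the node count, core count, and 20% overhead figure are assumed values chosen to sit inside the 10 to 30% range quoted above:

    def usable_cores(nodes, cores_per_node, controller_overhead):
        """Cores left for applications after the per-node controller VM tax."""
        return nodes * cores_per_node * (1 - controller_overhead)

    # 8 nodes x 24 cores = 192 raw cores.
    print(usable_cores(8, 24, 0.20))  # 153.6 -> first-generation HCI
    print(usable_cores(8, 24, 0.00))  # 192.0 -> storage offloaded, no tax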

3) Automated Infrastructure

The holy grail of IT is to automate all routine tasks, eliminating the risk of user error while freeing up resources to focus on less boring, higher-value projects. NetApp HCI allows IT departments to become more agile and responsive by simplifying deployment and ongoing management. The new NetApp Deployment Engine (NDE) eliminates most of the manual steps it takes to deploy infrastructure, while the VMware vCenter plug-in makes ongoing management simple and intuitive. Finally, a robust suite of APIs enables integration into higher-level management, orchestration, backup, and disaster-recovery tools. Expect to be up and running in less than 30 minutes.

4) The NetApp Data Fabric

If you choose one of the first generation of HCI platforms, you will likely be introducing a new silo of resources into your infrastructure. This is not an efficient longer-term approach. These platforms also have little in common with other infrastructure-consumption choices you may have made already or would like to make in the future. In contrast, NetApp HCI integrates into the NetApp Data Fabric for enhanced data portability, visibility, and protection. The NetApp Data Fabric removes lock-in and provides you with a new level of choice, allowing the full potential of your data to be unleashed across your environment — whether on premises or in your public or hybrid cloud.

In summary, with today’s announcement, NetApp is delivering on all original promises of HCI … and more.

Now you can choose an HCI platform to deliver all your virtualized applications, with guaranteed performance. You can scale on your terms, transform your IT operations through automation, and unleash the power of your data with the NetApp Data Fabric.

Author: John Rollason

Source: https://newsroom.netapp.com/blogs/introducing-netapp-enterprise-scale-hci-the-next-generation-of-hyper-converged-infrastructure/



NetApp acquires two companies to boost cloud storage

Category : NetApp

NetApp has unveiled two acquisitions that it expects will help grow its converged infrastructure and cloud storage business.

The first acquisition is Immersive Partner Solutions, a developer of software to validate multiple converged infrastructures through their lifecycles.

The second is PlexiStor, a provider of software that turns off-the-shelf servers into high-performance converged infrastructure offerings with persistent memory technologies.

The acquisitions were unveiled by NetApp chief executive George Kurian during the company’s fiscal fourth quarter 2017 financial analyst call.

Kurian said his company’s all-flash FlexPod converged infrastructure sales, in conjunction with partner Cisco, combined with the company’s channel momentum, helped strengthen its No. 2 position in the converged infrastructure market, with 44 percent year-over-year growth in FlexPod revenue, as reported by IDC for the fourth calendar quarter.

“We also recently acquired Immersive Partner Solutions, a cloud-based converged infrastructure monitoring, and compliance company,” Kurian said.

“We will integrate this intellectual property into our FlexPod solutions to help customers further simplify and automate lifecycle management and enhance our leadership in the converged infrastructure market.”

NetApp is also leading the industry in the transition to flash with cloud-integrated solutions, Kurian said. The company’s fourth-quarter all-flash array business grew nearly 140 percent year-over-year to an annualized run rate of US$1.7 billion, he said.

“We have entered into an agreement to acquire PlexiStor, a company with technology and expertise in ultra-low latency persistent memory,” he said. “This differentiated intellectual property will help us further accelerate our leadership position and capture new application types and emerging workloads.”

No details were provided about the timing or the terms of either acquisition. Neither acquisition was announced before Kurian’s comments.



What Is Your Hyperconverged Infrastructure Strategy?

Category : NetApp

One of the best parts of my job is the constant conversations with customers around architecting great infrastructure solutions. I have always had a passion for talking with customers to assemble the “jigsaw puzzle pieces” into something unique for every customer. One common theme in recent conversations is: What’s next? If we aren’t careful, today’s buzzword is tomorrow’s Trough of Disillusionment. With our industry evolving at a velocity that seems to increase every year, it is getting harder and harder to keep up. Because of this trend, it is common to lean on trusted advisors in our industry for guidance. With that in mind, what are the top customer concerns we have seen recently when customers evaluate hyperconverged solutions?

Both converged and hyperconverged systems provide the biggest bang for the operational buck. There has always been a sweet spot for these systems in the enterprise. Customers have fallen in love with the shortened time to deploy and provision, the operational simplicity, and the consolidated support. In addition, customers have told us of their need to move away from a “buy up front and grow into it” world to a “buy in small increments as you grow” model. This will be the key to growth in the converged markets going forward. The days of the 3-7 year buy cycle for infrastructure are dwindling. There will always be a place in the market for build-your-own, best-of-breed infrastructure, but in my experience the list of valid use cases keeps getting shorter. As this market has matured, customers have asked for a more integrated experience, and this trend is continuing as the hyperconverged market grows. The question to ultimately ask yourself is what variables you are looking for in your next evaluation. As HCI adoption in the enterprise continues, and as we move into the second generation of hyperconverged infrastructure, I predict customers will expect more from their infrastructure, and the lines between converged infrastructure and hyperconverged infrastructure will continue to blur.

You’re probably saying, “Great, so what? How do I know where to place value and trust in my next purchase?” A shift in infrastructure can be daunting, so Gartner developed five key determinants in a hyperconverged integrated system decision: simplicity, flexibility, economics, prescriptiveness, and selectivity. Here are my thoughts on evaluation criteria:

The foundational layer of any infrastructure is trust. Without a prescriptive environment that offers predictable, guaranteed performance, we have nothing; we have built our house on a foundation of sand. This applies to all components in the stack. A prescriptive stack allows us to maximize system resources while creating virtualized, dynamic pools that can be quickly allocated and deallocated. Another factor here is the maturity of the products in the solution. The hyperconverged space currently has 30+ companies, most of them startups. How many will make it long term? Over time this space will naturally consolidate down to a few mature players that the enterprise will trust. This has happened over and over again in our industry (cloud management platforms, software-defined networking, and all-flash arrays, to name a few) and is the natural evolution from startup to widespread enterprise adoption. Statistically, very few startups make it to an IPO or an acquisition exit, and even that is no guarantee of long-term success. You need to have confidence that your vendor of choice will be there for you.

Another aspect of trust in the stack is the confidence to consolidate workloads, including enterprise tier-one workloads. Customers are asking us for the ability to manage hundreds of applications on thousands of volumes while guaranteeing performance to critical applications, all on a stack that provides simplified operations. The days of islands of infrastructure, silos, and tiers are coming to an end.

The next item to consider is simplicity. It seems so obvious, but simplicity in execution is actually very difficult. This reminds me of the heyday of on-premises Infrastructure as a Service (IaaS). The goal was self-service simplicity, presented to the user and operator by abstracting away the underlying resource layers. By adding orchestration, automation, and scheduling on top of virtualization, the end result was a lot of moving parts and overhead. Simplicity in the user experience was traded for additional operational complexity in tying together all the virtual and physical layers. The virtualization administrator wants something that is both easier to stand up (the Day 0 experience) and easier to operate over time (Day 1 and beyond). The less overhead in the abstraction layers, the better.

A critical attribute of converged systems is flexibility. A prescriptive, simple environment appeals to the virtualization admin (more sleep at night, time to tackle projects that the business needs), but flexibility is the key to a great user experience. We’ve all heard the horror stories of legacy virtualization environments where it would take days to weeks to request a new virtual machine or application from the IT department. I still remember a customer years ago that required a paper form to be filled out and signed by the procurement, network, storage, and server departments before a virtual machine could be cloned from a template. Mind you, this task took a couple of clicks and a few minutes for the virtualization admin. The paperwork and approvals took exponentially longer than the actual deployment. The reason is that the underlying layers were static and fixed: the costs to grow were set on an annual budget cycle and needed to be managed closely. There was no option to scale as the company grew, and it was very difficult to grow (or shrink) the pools of resources to match customer demands. Customers demand a more flexible stack that allows them to match the velocity of their business needs and to stay ahead of their competitors.

No infrastructure is complete without a plan for data protection and portability you can trust. At NetApp, that vision is the Data Fabric. Customers want to know that their data is protected at all times, without being subject to vendor lock-in. This also includes the ability to move your data from on premises to the public cloud as needed, and back again. I recently celebrated my one-year anniversary with NetApp, but I worked with NetApp systems for years before that as an SE for a NetApp partner. An early personal example of this vision was a project a number of years ago (probably 5+) setting up a complex VMware infrastructure for a customer. The challenge was creating Snapshots of VMFS datastores and then shipping the data to another NetApp system across the country, for disaster recovery in another data center in the event of a primary site failure. NetApp’s combination of SnapMirror and SnapCenter (SnapManager at the time, actually) was critical to this success. This is where companies with a core data protection portfolio, such as NetApp, provide many advantages.

Lastly, I would be remiss if I didn’t include selectivity as my final point. To expand on the vendor lock-in point above: what if you want integrations that you create yourself against an open API, or the ability to automate and provision your infrastructure to your exact standards through industry automation tools (VMware, Chef, Puppet, Ansible)? The concept of flexibility extends beyond infrastructure pools into open systems management. I call this moving the control plane. In a SolidFire context, we have been moving the control plane for years. In the early days of SolidFire, we were successful with service provider customers because we fit into their cloud management systems (CloudStack and OpenStack) or their custom tools (API integration). As we moved into the Enterprise, we added VMware vCenter integration with our plug-in. The ability to meet customers where they manage systems today, without the need to access the UI of your storage system, is very important. During your evaluation of hyperconverged infrastructure systems, the ability to seamlessly adapt a new platform into your existing tools and workflows reduces both up-front configuration and long-term integration and operations effort.

If you are looking for further insight, check out this report on Creating an Effective Hyperconvergence Strategy from Gartner. It covers what I covered here today, as well as some other aspects for you to consider. If you are evaluating a change in your infrastructure in the near future, I would love to hear what you think. What is important to you?



NetApp Showcases Cloud-Connected Data Management Solutions at VeeamON 2017

Category : NetApp

NetApp, a VeeamON 2017 premier sponsor, will showcase a variety of data management solutions designed to help customers unleash the full potential of their data, whether on-premises, or in the public or hybrid cloud.
VeeamON 2017 will take place May 16–18, 2017, at the Ernest N. Morial Convention Center in New Orleans. Attendees can visit NetApp in booth #102.

NetApp’s expertise in enterprise data management – combined with Veeam’s proven data backup, replication, and recovery software – helps thousands of customers around the world ensure that their data is available, discoverable, and secure. NetApp will host three breakout sessions at the conference:

  • “Transform Your Data Protection with NetApp ONTAP and Veeam” featuring Keith Aasen, NetApp, 4:10 p.m. Central Time, May 17
  • “Flash to Disk to Cloud – VM Data Protection with NetApp and Veeam” featuring Keith Aasen, NetApp, and Stefan Renner, Veeam, 11:15 a.m. Central Time, May 18
  • “Programmatic Performance – Dynamic Reallocation of Storage IO Limits to Shrink Backup Windows” featuring Jeremiah Dooley, NetApp, and Michael Cade, Veeam, 4:10 p.m. Central Time, May 18

VeeamON 2017 is a data center availability event that offers technical, business and end user tracks, networking opportunities and more. For more information, visit: www.veeam.com/veeamon


Active IQ: Monitoring and Reporting

Category : NetApp

Real-time system visibility

The SolidFire Active IQ SaaS platform, a key element of Active Support, provides real-time health diagnostics, historical performance, and trending from the system level all the way down to each individual volume. This holistic approach to infrastructure monitoring, combined with SolidFire’s unique abilities to upgrade, tune, and scale on-demand without disruption, redefines operational success for storage infrastructure.

  • Anticipate business demands: Real-time usage modeling and granular system data increase agility, enabling you to proactively plan for evolving business demands and simplifying storage resource optimization.
  • Increase productivity: Consolidated monitoring saves time when managing multiple clusters and sites, and comprehensive storage metrics reduce assessment efforts. You have continuous visibility into changing conditions, empowering you to avoid issues rather than react to them.
  • Reduce risk: Customizable alerts notify you instantly of possible issues, and faster response reduces risk for the business. (A do-it-yourself alert in the same spirit is sketched below.)
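
Because the same Element OS API that feeds Active IQ is fully exposed, you can also pull raw cluster metrics yourself. This Python sketch polls GetClusterCapacity and raises a simple fullness warning; the management VIP, credentials, and 80% threshold are assumptions for illustration, and it complements rather than replaces Active IQ:

    import requests

    # Hypothetical management VIP and credentials for the Element cluster.
    MVIP = "https://10.0.0.100/json-rpc/9.0"
    AUTH = ("admin", "password")

    def cluster_fullness():
        """Fraction of usable space consumed, per GetClusterCapacity."""
        payload = {"method": "GetClusterCapacity", "params": {}, "id": 1}
        resp = requests.post(MVIP, json=payload, auth=AUTH, verify=False)
        resp.raise_for_status()
        cap = resp.json()["result"]["clusterCapacity"]
        return cap["usedSpace"] / cap["maxUsedSpace"]

    # A homegrown alert on top of the data Active IQ already trends for you.
    if cluster_fullness() > 0.80:  # 80% is an arbitrary example threshold
        print("WARNING: cluster is over 80% full; plan to add a node")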


Chico State Speeds Student Access to Resources with NetApp

Category : NetApp

IT staff improves operational efficiency so students get real-time data on mobile devices, administrators benefit from paperless systems, and campus becomes greener

“With students demanding faster, real-time access to campus resources from mobile devices, we needed a data management solution that would keep pace, speed university operations, and stay within the state budget,” said Ray Quinto, associate director of Computing and Communications Services (CCSV) at California State University, Chico. “After implementing the NetApp all-flash system, we saw a significant increase in operational efficiency, which resulted in happier students and employees, as well as a greener campus.”

California State University, Chico, has come a long way since its inception in 1887 and opening its doors to 90 students as the Chico Normal School in 1889. Today, the university, commonly called “Chico State,” offers nearly 17,000 students more than 100 undergraduate majors and options. To accelerate operations and keep pace with its students’ burgeoning demand for social media, mobile devices, and campus services — while remaining within budget constraints — Chico State implemented NetApp® (NASDAQ: NTAP) All Flash FAS.

The university switched 80% of its servers to the NetApp all-flash array in less than one week. As a result, Chico State accelerated its admissions processing of 35,000 multi-page application documents by 14 to 15 seconds per page, saving as much as 200 business days every year. The administrators also found that they could easily handle the influx of student housing requests during “Black Friday,” processing all 2,000 applications without downtime. Students, meanwhile, can easily access campus resources and collaborate on assignments from any location by using their mobile devices.

With the modernized infrastructure, Chico State IT team members have significantly reduced the time that they spend in test environments. Performance issues caused by the university’s write-intensive workloads have disappeared. The university has also benefited from NetApp support of its Oracle Real Application Clusters (RAC) environment and has received large electric company rebates for power savings in its virtualized data center.

With its new solution, Chico State can:

  • Save up to 200 business days per year of administrator time with faster document processing
  • Double the speed of application and transcript request processing by admissions staff
  • Patch and reboot servers in seconds instead of minutes


The Next Generation Data Center Demands a Next Generation Storage Architecture

Category : NetApp

In today’s digital economy, markets and customer purchasing behaviors are changing. Today’s customers expect everything to be available online, anytime, anywhere, and from any type of device. In order to satisfy these expectations, enterprise IT departments have to react quickly to changing business needs while continuing to manage the mission-critical legacy workloads that “keep the lights on.” In addition to adhering to regulatory requirements and complying with existing change-management processes, enterprise IT is faced with multiple operational challenges that put pressure on the resiliency and reliability of the infrastructure they manage.

To this end, enterprises are having to engage in a process of digital transformation, away from traditional infrastructure and toward a flexible technology stack that has the agility, scalability, predictability, and automation to react to changing business needs without risking normal business operations.

The process of transformation is typically unique for every enterprise — as are the business drivers that prompt it. At one end of the spectrum, organizations based on traditional enterprise IT are looking to achieve drastic cost savings from the consolidation of their virtualized environments, while at the other end, IT organizations are implementing infrastructure to support DevOps cultures that provide self-service resources and enable the refactoring of traditional client/server workloads into agile cloud-based applications. Given the diversity of organizations, drivers, and environments, enterprise IT is looking to the highly flexible architecture of a next generation data center (NGDC) to enable its transformation — an architecture that can meet changing business needs while seamlessly integrating into, and supporting, existing infrastructure.

A next generation data center such as this cannot, by its very nature, rely on traditional storage infrastructure. Instead, its foundations are built on a new type of storage, a next generation storage architecture (NGSA) – one that is inherently agile, scalable, and predictable.

Enterprise IT Can’t Transform Using Storage That Forces It to Live in a Traditional Infrastructure World

The NGSA is the next generation in storage — one that can scale nondisruptively and incrementally across multiple platforms to support business growth, yet continue to provide guaranteed, controlled performance at reduced, cloud-like operational costs. It has the agility to easily automate, scale, and orchestrate across multiple platforms, in addition to providing predictable workload delivery at scale through self-service capabilities, irrespective of the platform used.

Only NetApp SolidFire has a next generation storage architecture that can meet all of these requirements and enable enterprise IT to transition from existing environments to the next generation data center. Organizations are demanding IT transformation without operational risk, irrespective of their existing environment. Only a next generation data center powered by a next generation storage architecture can meet this need.

Download your complimentary 2017 Strategic Roadmap for Storage report from Gartner, and learn more about how you can be successful in your storage transformation process.

