THE DEFINITIVE GUIDE TO HYPERCONVERGED INFRASTRUCTURE
Learn the fundamentals of hyperconverged infrastructure and expert tips to build a fast, efficient and highly scalable enterprise datacenter.
The Nutanix platform is fault-resistant, with no single point of failure and no bottlenecks. Its shared-nothing architecture – in which all data, metadata, and services are distributed across all nodes in the cluster – is built to detect, isolate, and recover from failures anywhere in the system for always-on operation.
Intelligent Data Placement
Intelligent data placement across different physical domains (e.g., separate racks or power sources) protects against appliance and rack failures.
Availability domains allow Nutanix clusters to survive the failure of multiple servers in a physical enclosure without loss of data or service, providing greater system-level resilience without requiring additional storage capacity.
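The placement idea above can be sketched in a few lines. This is an illustrative model, not Nutanix code: nodes are tagged with a fault domain (a block or rack), and replicas are placed so that no two copies share a domain.

```python
# Illustrative sketch (not the Nutanix implementation): place RF replicas
# on nodes drawn from distinct fault domains (e.g., blocks or racks), so
# losing an entire domain still leaves surviving copies of the data.

def place_replicas(nodes, rf):
    """nodes: list of (node_id, domain_id) pairs.
    Returns rf node_ids in distinct domains, or raises if the
    cluster cannot satisfy the requested redundancy."""
    chosen, used_domains = [], set()
    for node_id, domain in nodes:
        if domain not in used_domains:
            chosen.append(node_id)
            used_domains.add(domain)
        if len(chosen) == rf:
            return chosen
    raise RuntimeError("not enough fault domains for requested RF")

cluster = [("n1", "rack-A"), ("n2", "rack-A"),
           ("n3", "rack-B"), ("n4", "rack-C")]
print(place_replicas(cluster, rf=2))  # ['n1', 'n3']
```

Note that the second node in rack-A is skipped: both copies of an extent never land in the same rack, which is what lets the cluster ride out a whole-enclosure failure.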
100% Software-defined Flexibility
Administrators can configure and manage availability domains at the Block or Rack level.
Administrators can configure data redundancy based on application SLAs and the criticality of the data set, with a replication factor (RF) of two or three.
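The capacity trade-off behind that choice is simple arithmetic: with RF copies of every extent, usable capacity is raw capacity divided by RF. A minimal sketch, with illustrative numbers:

```python
# Hypothetical sketch of the replication-factor capacity trade-off.
# Function name and figures are illustrative, not a Nutanix API.

def usable_capacity_tib(raw_tib: float, rf: int) -> float:
    """With RF full copies of every extent, usable = raw / RF."""
    if rf not in (2, 3):
        raise ValueError("replication factor must be 2 or 3")
    return raw_tib / rf

# A 4-node cluster with 20 TiB of raw capacity per node:
raw = 4 * 20.0
print(usable_capacity_tib(raw, rf=2))            # 40.0 TiB usable
print(round(usable_capacity_tib(raw, rf=3), 2))  # 26.67 TiB usable
```

RF2 tolerates one simultaneous failure, RF3 tolerates two, which is why the more critical data set justifies the lower effective capacity.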
Automatic Data Reconstruction
Data is written to a VM’s local node and synchronously replicated to one or more other nodes in the cluster, ensuring that all data exists in at least two independent locations and remains highly available.
If a node fails, data is automatically read from the remaining replicas on other nodes. If the node does not come back online, all data that resided on the affected node is automatically reconstructed elsewhere in the cluster to restore full redundancy and data protection.
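The synchronous write path can be sketched as follows. This is a hedged toy model, not the Nutanix implementation: the key property is that the write is acknowledged only after every replica node has persisted its copy.

```python
# Toy model of a synchronous, RF-way write path (illustrative only):
# the write is acknowledged to the VM only once the local node and
# rf-1 peer nodes have all durably stored the data.

class Node:
    def __init__(self, name: str):
        self.name, self.store = name, []

    def persist(self, data: bytes) -> bool:
        self.store.append(data)   # stand-in for a durable local write
        return True

def synchronous_write(data: bytes, local: Node, peers: list, rf: int) -> bool:
    """Write locally, replicate to rf-1 peers, ack only when all persist."""
    targets = [local] + peers[: rf - 1]
    return all(n.persist(data) for n in targets)

a, b, c = Node("a"), Node("b"), Node("c")
print(synchronous_write(b"extent-0", a, [b, c], rf=2))   # True
print(len(a.store), len(b.store), len(c.store))          # 1 1 0
```

Because the acknowledgment waits for every copy, a node that dies immediately after the ack can never hold the only copy of acknowledged data.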
High Availability During Controller VM Unavailability
Multiple copies of data ensure continuous data availability in the event that a Nutanix Controller VM is unavailable due to failure or maintenance.
If the Nutanix Controller VM becomes unavailable, Nutanix auto-pathing automatically re-routes requests to a healthy Controller VM running on another node in the cluster.
Every node in a Nutanix cluster has access to all replicas so that I/O requests can be serviced immediately by any node, providing N-way, fully fault-tolerant failover for all VMs in the cluster.
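The auto-pathing behavior described above amounts to a simple failover rule, sketched here with hypothetical names (this is not the actual Nutanix mechanism): prefer the local Controller VM, and fall back to any healthy peer, which works precisely because every node can reach all replicas.

```python
# Illustrative sketch of auto-pathing: if the local Controller VM is
# down, re-route I/O to any healthy CVM in the cluster. Field names
# are hypothetical, not a Nutanix API.

def route_io(local_cvm: dict, cluster_cvms: list) -> str:
    """Prefer the local CVM; fail over to the first healthy peer."""
    if local_cvm["healthy"]:
        return local_cvm["name"]
    for cvm in cluster_cvms:
        if cvm["healthy"]:
            return cvm["name"]
    raise RuntimeError("no healthy Controller VM in the cluster")

local = {"name": "cvm-1", "healthy": False}
peers = [{"name": "cvm-2", "healthy": True},
         {"name": "cvm-3", "healthy": True}]
print(route_io(local, peers))  # cvm-2
```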
Data Integrity Checks
Detection and Repair of Silent Data Corruption
The system scans data in the background and verifies it against checksums in the metadata store. If it detects an error, it overwrites the bad data with a good copy.
Automatic Integrity Checks
On every read, a checksum is computed for the data being read and compared with the stored checksum. In the case of an inconsistency, the error is corrected.
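Both checks follow the same verify-and-repair pattern, sketched below. SHA-256 is used as a stand-in checksum; Nutanix's actual checksum algorithm and metadata layout are not described here, so names and structure are illustrative.

```python
# Sketch of read-time checksum verification with repair from a replica.
# SHA-256 stands in for the platform's checksum; this is illustrative,
# not the Nutanix data path.

import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def read_verified(block: bytes, stored_sum: str, replica: bytes) -> bytes:
    """Return the block if it matches its stored checksum; otherwise
    repair from the replica, which must itself verify."""
    if checksum(block) == stored_sum:
        return block
    if checksum(replica) == stored_sum:
        return replica   # the corrupted copy would be overwritten upstream
    raise IOError("all copies failed checksum verification")

good = b"hello"
summ = checksum(good)
print(read_verified(b"hellX", summ, good))  # b'hello' (repaired silently)
```

The background scrub is the same logic applied proactively, so silent corruption is caught even on data that is rarely read.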
Automatic Isolation and Recovery
If a drive fails, the system automatically runs a scan and re-replicates any data that is no longer fully redundant. Throughout the failure and recovery process, both the data and access to it are preserved.
Nutanix incorporates a wide range of storage optimization technologies that work in concert to make efficient use of available capacity for any workload. Deduplication and compression technologies are intelligent and adaptive to workload characteristics, eliminating the need for manual configuration and fine-tuning. Erasure coding offers deterministic capacity savings regardless of workload characteristics.
Performance Tier Deduplication
Removes duplicate data in the content cache (SSD and memory) to reduce the footprint of an application’s working set, enabling more working data to be managed in the content cache for better performance.
Capacity Tier Deduplication
Global, post-process MapReduce deduplication reduces repetitive data in the capacity tier to increase the effective storage capacity of a cluster.
Easily configured and managed at vdisk granularity for fine-grained control.
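The core of fingerprint-based deduplication can be shown in a few lines. This is a minimal illustration, not the Nutanix engine: identical chunks are stored once, and logical references point at the shared copy by content hash.

```python
# Minimal illustration of post-process, fingerprint-based dedup:
# identical chunks are stored once and referenced by a content hash.
# SHA-1 is used here purely as an example fingerprint.

import hashlib

def dedupe(chunks):
    """Map each chunk to a fingerprint; keep unique chunk data once."""
    store, refs = {}, []
    for chunk in chunks:
        fp = hashlib.sha1(chunk).hexdigest()
        store.setdefault(fp, chunk)   # first writer stores the data
        refs.append(fp)               # everyone else just references it
    return store, refs

chunks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
store, refs = dedupe(chunks)
print(len(chunks), len(store))  # 3 logical chunks, 2 unique on disk
```

Running this as a background MapReduce job over the capacity tier, rather than on the write path, is what keeps write latency unaffected.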
Increase Capacity by up to 4x
Data compression can be enabled inline, as data is written to the system, or post-process, as a series of MapReduce jobs after the data has been written; the post-process option eliminates any impact on write-path performance.
Leverage All Resources
Unlike traditional architectures where compression operations run on one or two CPUs, Nutanix compression runs on each node in the cluster to leverage all system compute and memory resources.
Compress a Variety of Data Types
Nutanix uses the Snappy compression algorithm to compress a variety of data types efficiently, and includes the option to compress data at the sub-block level for greater efficiency.
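The difference between the two compression modes is when the codec runs, not what it does. The sketch below uses zlib from the Python standard library as a stand-in for Snappy (python-snappy is a third-party package); the shape of the two paths, not the codec, is the point.

```python
# Illustrative contrast of inline vs post-process compression.
# zlib stands in for Snappy; store layouts are hypothetical.

import zlib

def write_inline(data: bytes, store: list) -> None:
    store.append(zlib.compress(data))   # compression on the write path

def write_raw(data: bytes, store: list) -> None:
    store.append(data)                  # write path untouched

def post_process(store: list) -> None:
    for i, blob in enumerate(store):    # later, as a background job
        store[i] = zlib.compress(blob)

inline_store, lazy_store = [], []
payload = b"log line " * 1000
write_inline(payload, inline_store)
write_raw(payload, lazy_store)
post_process(lazy_store)
print(len(inline_store[0]) < len(payload))  # True: data shrank either way
```

Both stores end up compressed; the post-process path simply defers the CPU cost off the latency-sensitive write, which is why it is the choice when write performance is paramount.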
Erasure Coding with Nutanix EC-X
Resilience with Capacity Efficiency
A mathematical function is applied across a data set to calculate parity blocks, which can then be used to recover the data in the event of a failure.
Nutanix systems switch between data replication for hot data and erasure coding for cold data based on I/O frequency to optimize performance and storage.
This patent-pending algorithm distributes coding and rebuilds across the entire cluster to reduce vulnerability windows in the event of failures, and maintains data locality.
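The underlying parity idea can be shown with a toy single-parity code. Real EC-X uses a different code and stripe layout; this sketch only demonstrates the math of one parity block, where any single lost block is the XOR of all the surviving blocks plus parity.

```python
# Toy single-parity erasure code, illustrating the idea behind EC-X:
# one parity block, computed across the data blocks, lets any one lost
# block be rebuilt. Not the actual EC-X code or stripe geometry.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_blocks(data)

# Lose block 1; rebuild it from the survivors plus the parity block.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```

Storing three data blocks plus one parity block (1.33x overhead) instead of two full copies (2x) is where the deterministic capacity savings come from; spreading the parity computation and rebuilds across all nodes keeps any one node from becoming a bottleneck.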
SEE FOR YOURSELF
Get hands on with the hyperconverged infrastructure that powers the world’s most advanced datacenters. Sign up for a free test drive to gain immediate access to Nutanix in the cloud.