The Nutanix Virtual Computing Platform is a converged infrastructure solution that consolidates the compute (server) tier and the storage tier into a single, integrated appliance. The platform integrates high-performance server resources with enterprise-class storage in a cost-effective 2U appliance, eliminating the need for a network-based storage architecture such as a storage area network (SAN) or network-attached storage (NAS).

Each Nutanix node runs an industry-standard hypervisor and a Nutanix controller VM, which handles all I/O operations for the local hypervisor. Storage resources are exposed to the hypervisor through traditional interfaces, and are pooled and made available to all VMs. At the core of the platform is the Nutanix Distributed Filesystem (NDFS), which manages all metadata and data and enables all core features. NDFS is the software-driven architecture that ties together storage, compute resources, the controller VM, and the hypervisor.
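
As a rough mental model of that I/O path (not Nutanix's actual code), you can picture each node's controller VM fronting a cluster-wide pool; every name below is illustrative:

```python
# Hypothetical sketch of the I/O path described above; none of these
# names are Nutanix APIs.

class Cluster:
    """Cluster-wide storage pool: extent_id -> (owning node, data)."""
    def __init__(self):
        self.extents = {}

    def write(self, node_id, extent_id, data):
        self.extents[extent_id] = (node_id, data)

    def locate(self, extent_id):
        return self.extents[extent_id][0]

    def read(self, extent_id):
        return self.extents[extent_id][1]

class ControllerVM:
    """One per node; handles all I/O for the local hypervisor."""
    def __init__(self, node_id, cluster):
        self.node_id = node_id
        self.cluster = cluster

    def read(self, extent_id):
        owner = self.cluster.locate(extent_id)
        # Local or remote, the lookup is invisible to the guest VM.
        via = "local disk" if owner == self.node_id else f"node {owner}"
        print(f"node {self.node_id}: read {extent_id} via {via}")
        return self.cluster.read(extent_id)

pool = Cluster()
pool.write(node_id=1, extent_id="vm7-disk0-ext42", data=b"...")
ControllerVM(node_id=2, cluster=pool).read("vm7-disk0-ext42")
```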

Nutanix sells server nodes with local storage built in, but the magic is in the software, which combines the storage of all the nodes into a single giant pool, with data from any node available to any server. The architecture is master-less, with no concurrency locking, and it supports advanced VMware features like vMotion.

The nodes operate as a single seamless cluster; the fact that there are four per 2U appliance is just a form factor. Each node runs VMware ESXi and acts as your VM host, and a controller VM running on each node acts as the iSCSI interface to the storage, effectively turning the whole thing into a distributed SAN. A 10-gigabit Ethernet connection carries the storage traffic, separate from the regular network traffic. The controller VM decides where in the system to place the data: there's always one copy local plus another copy somewhere else in the cluster. Nutanix calls this "Cluster RAID," and it's fully compatible with VMware HA and vMotion. There's a distributed cache built on Fusion-io flash and SSD, as well as a persistent SATA tier.
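
That placement rule is simple enough to sketch. Assuming only what's stated here (one copy on the writing node, the other elsewhere in the cluster), the logic looks roughly like this; the function and its parameters are hypothetical:

```python
import random

def place_replicas(writing_node, cluster_nodes, replication_factor=2):
    """Illustrative placement rule: the first copy stays local to the
    writing node, remaining copies go to other nodes in the cluster."""
    others = [n for n in cluster_nodes if n != writing_node]
    remote = random.sample(others, replication_factor - 1)
    return [writing_node] + remote

# A VM on node 3 writes an extent: one copy local, one somewhere else.
print(place_replicas(writing_node=3, cluster_nodes=[1, 2, 3, 4]))
# e.g. [3, 1]
```

Keeping one copy local is what makes reads fast even though the pool spans the whole cluster, and the second copy elsewhere is what lets VMware HA restart a VM on a surviving node.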

A distributed MapReduce system then handles all of the background maintenance. It's completely transparent: the whole system is lock-free, so everything can run concurrently, and there's no single master and no shared cache. Storage metadata is truly scale-out (it lives on every node), and the system continues to scale as you add more nodes.
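
The post doesn't say how that per-node metadata is partitioned, but a standard way to get master-less, scale-out metadata is a consistent-hash ring, where each node owns a slice of the keyspace. The sketch below is purely illustrative, not Nutanix's implementation:

```python
import bisect
import hashlib

def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class MetadataRing:
    """Illustrative consistent-hash ring: no single master, and adding
    a node relocates only a fraction of the metadata keys."""
    def __init__(self, nodes):
        self.ring = sorted((_hash(f"node-{n}"), n) for n in nodes)

    def owner(self, key):
        point = _hash(key)
        idx = bisect.bisect(self.ring, (point,)) % len(self.ring)
        return self.ring[idx][1]

ring = MetadataRing(nodes=["A", "B", "C", "D"])
print(ring.owner("vm7-disk0-ext42"))  # one node answers for this key
```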