This Industry Viewpoint was authored by Stefan Bernbo, CEO and founder of Compuverde.
The boom in cloud services and the rapidly growing demand for more storage have turned virtualization into a hot commodity. Virtualization, the practice of using software to mimic the functions of physical hardware, offers greater flexibility and reduced hardware costs. These benefits are becoming too good to ignore: according to research firm Forrester, more than six out of ten workloads will be performed by virtual machines in 2014.
For all its benefits, however, virtualization’s widespread adoption has created a need for massive amounts of storage and for efficient management practices.
The Age of Virtualization
Over the years, virtualization has grown in use because of the cost savings and flexibility it delivers. Virtualization can increase the efficiency of a data center’s hardware by reducing the hardware’s idle time. When an organization installs virtual servers on its hardware, it can optimize the use of its CPUs, which encourages it to virtualize even more of its servers to decrease costs further.
Another benefit of virtualization is that it gives the organization much greater flexibility. It is far more convenient for an organization to have virtual machines rather than physical ones. If an organization ever needs to change hardware, an administrator can move the virtual server to a more powerful physical system, increasing performance with minimal expenditure. Before virtualization, administrators had to install the new server and then reinstall and migrate all the data from the old one. It is significantly easier to move a virtual machine than a physical one.
Virtualization’s Core Strengths
It’s important to note that not every data center wants to implement virtual machines, but those that haven’t yet switched are giving the idea serious consideration. Data centers with somewhere between 20 and 50 servers are starting to consider virtualizing their machines. Organizations like these can take advantage of decreased costs and increased flexibility. Furthermore, by virtualizing their servers, organizations can make them far easier to manage. Maintaining a large number of physical servers is a burden for data center staff; virtualization eases management by allowing the same number of servers to run on fewer physical machines.
A Need for New Storage Capacities
Despite all the benefits of virtualization, its growing adoption is putting ever more stress on traditional data center infrastructures and storage devices.
The storage capacity problem can be seen as a direct result of the popularity of virtual machines. The original virtual machines used the local storage inside the physical server. This placement did not allow administrators to move a virtual machine from one physical server to another. By introducing shared storage such as NAS or SAN, administrators could move virtual machines from one server to another with little trouble. Eventually all the physical servers and virtual machines became connected to the same storage. Yet even though this is a convenient setup, it raises the specter of data congestion.
With only one point of entry, a single point of failure is built into scale-out virtual environments. When all the data flow is forced through a single gateway, data becomes bogged down during periods of high demand. Furthermore, as data creation and virtual machine deployments increase, organizations must improve their storage architectures or face serious network issues. The architecture must be updated constantly to keep pace with the growth of data.
Looking to the Pioneers
Early adopters of virtual machines, such as major service providers and telcos, have already encountered this problem and are implementing fixes to reduce its impact. When other organizations start virtualizing their data centers, they will encounter the same problem and can look to those earlier fixes for remedies.
Organizations looking to reap the benefits of virtualization while avoiding data congestion must make sure their architectures keep pace with the growth of their virtual machine usage. One way of doing so is to remove the single point of entry. NAS and SAN storage solutions have only one gateway that controls the flow of data, which leads to data congestion during demand spikes. Organizations should instead implement solutions with multiple data entry points that distribute the load evenly among all servers, as in the sketch below. By doing so, an organization maintains performance and reduces lag among its servers.
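As a purely illustrative sketch in Python (the EntryPointRing class, node names, and object IDs are hypothetical, not drawn from any particular product), the snippet below shows one common way to spread requests across several entry points with consistent hashing rather than funneling everything through one gateway:

```python
import hashlib
from bisect import bisect_right


class EntryPointRing:
    """Toy consistent-hash ring that spreads requests over several
    storage entry points instead of funneling them through one gateway."""

    def __init__(self, entry_points, vnodes=100):
        # Place several virtual nodes per entry point on the ring so the
        # load stays even with only a handful of gateways.
        self._ring = sorted(
            (self._hash(f"{ep}#{i}"), ep)
            for ep in entry_points
            for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def route(self, object_id):
        """Pick the entry point responsible for a given object."""
        idx = bisect_right(self._keys, self._hash(object_id)) % len(self._ring)
        return self._ring[idx][1]


if __name__ == "__main__":
    ring = EntryPointRing(["node-a", "node-b", "node-c", "node-d"])
    # Each object is routed to its own gateway, so no single node
    # has to absorb all the traffic during a demand spike.
    for obj in ("vm-image-001", "vm-image-002", "vm-image-003"):
        print(obj, "->", ring.route(obj))
```

Because each request hashes to a different entry point, no single node sees all of the traffic, which is the property the single-gateway NAS and SAN setups lack.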
While this approach represents the most straightforward fix, the next generation of storage architecture suggests another alternative as well.
Combining Computing with Storage
To deal with congestion in scale-out virtual environments, organizations have begun running virtual machines inside the storage nodes themselves. This turns the storage node into a compute node.
With this approach, the entire architecture is flattened out. When an organization uses shared storage in a SAN, the virtual machine hosts usually sit on top of the storage layer, a layout that effectively creates one huge storage system with a single entry point. To alleviate data congestion, organizations are moving away from this traditional two-layer architecture and running both the virtual machines and the storage on the same layer, as sketched below.
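To make the data-locality idea concrete, here is a minimal sketch (the node names, volume map, and placement policy are hypothetical and purely illustrative): instead of pulling data up through a central storage gateway, each virtual machine is placed on a combined compute/storage node that already holds its volume.

```python
# Illustrative only: node names and the placement policy are hypothetical,
# not taken from any specific product.

# Which storage nodes hold (a replica of) each data volume.
volume_locations = {
    "vm-volume-01": {"storage-1", "storage-3"},
    "vm-volume-02": {"storage-2"},
    "vm-volume-03": {"storage-2", "storage-3"},
}

# How many virtual machines each combined compute/storage node already runs.
running_vms = {"storage-1": 4, "storage-2": 1, "storage-3": 2}


def place_vm(volume):
    """Place a VM on the least-loaded node that already stores its volume,
    so the data never has to cross a central gateway."""
    candidates = volume_locations[volume]
    return min(candidates, key=lambda node: running_vms[node])


for vol in volume_locations:
    node = place_vm(vol)
    running_vms[node] += 1
    print(f"{vol} -> run VM on {node}")
```

Choosing the least-loaded node among those that hold a replica keeps the compute next to its data, which is the essence of the flattened, single-layer architecture described above.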
Looking Ahead
The adoption of virtualization shows no signs of slowing in the near future, and a large number of organizations implementing it will encounter the performance issues described above. By looking at the solutions early adopters have created, an organization can develop an efficient virtual environment that reduces costs.
About the Author:
Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions intended to be cost-effective for storing huge data sets. From 2004 to 2010 Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.