May 3, 2009
By Frazer Bennett
The AdderLink IPEPS is a palm-sized unit that enables secure, remote computer access from anywhere in the world via the Internet or a corporate network. The AdderLink IPEPS uses RealVNC client software specifically designed for secure, high-performance KVM-over-IP applications.
Virtualization has made it to prime time. The early adopters are successful, and the driving factors for further adoption—both economic and technological—are so compelling that soon it will be hard to remember enterprise administration without it.
Green technologies, the economy, disaster recovery, the need to mitigate hardware and software conflicts arising from error-prone operating systems, and the demand for better service and ease of use are just a handful of reasons why virtualization has secured a place as an essential component of IT infrastructure. We know it, and we feel the effects.
But now that we’ve arrived at the age of virtualization, we need to take a look at its chain reaction. Because virtualization separates a computing platform’s physical resources from the software running on it by creating an insulating software layer, it is, in turn, driving other technologies and business issues. Cures have a way of creating new ills.
All the pressure is on that insulating layer and its promise to solve a long list of enterprise problems, including creating a more productive data center that conserves hardware and energy; improving efficiency; cutting operating costs; protecting software investments; deploying infrastructure rapidly; improving scalability; planning disaster recovery; and using multicore CPUs. The insulating software layer cannot do all this without support.
Then there are the jobs that virtualization software specifically needs help with: ensuring hardware stability, coping with legacy systems in an economy where people upgrade rather than overhaul, and accepting that not everything can be centralized. One huge business reality that leaves virtualization frozen in the headlights is an increasingly mobile workforce. The economy promises to accelerate this trend, too.
Key Issues To Plan For
There are some key issues that invariably arise when migrating to virtualization, and you’ll see where the needs beyond virtual machine (VM) software come into play.
When people think of virtualization, they have traditionally considered server-side or data-center virtualization. More recently, though, desktop virtualization has been increasingly deployed. Because desktop computing typically makes even less efficient use of computing resources than data-center computing, adopting virtualization at the desktop makes sense from an energy-saving perspective.
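To put rough numbers on that energy argument, here is a back-of-the-envelope sketch in Python. Every figure in it (fleet size, per-device wattage, host count) is an assumption for illustration, not a measurement:

# Illustrative energy arithmetic (all figures assumed): consolidating desktop
# workloads onto shared servers plus thin clients can cut power draw.
desktops, desktop_w = 200, 120          # assumed fleet and per-PC draw (watts)
thin_client_w, server_w = 15, 400       # assumed thin client and host draw
servers = 4                             # assumed hosts for 200 virtual desktops

before_kw = desktops * desktop_w / 1000
after_kw  = (desktops * thin_client_w + servers * server_w) / 1000
print(f"{before_kw:.1f} kW -> {after_kw:.1f} kW "
      f"({100 * (1 - after_kw / before_kw):.0f}% less while powered on)")

With these assumed numbers, a 200-seat fleet drops from 24 kW to under 5 kW; your own figures will vary, but the shape of the calculation is the point.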
It is also important to reconsider the hardware requirements of the IT center. Although some suggest that hardware can be reused in a move to virtualization, it is more often the case that a move to virtualization coincides with a reprovisioning of hardware. Storage, I/O, and CPU requirements must all be refactored. Security, scalability, and disaster recovery strategies must also be replanned.
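As a planning aid, the following sketch shows how CPU and memory requirements might be refactored when sizing hosts for consolidation. The VM demands, host specification, and headroom factor are all hypothetical assumptions:

# Hypothetical capacity-planning sketch: estimate how many physical hosts
# are needed to consolidate a set of workloads, with headroom left for spikes.
import math

# Assumed per-VM average demands (cores, GB RAM, GB disk) -- illustrative only.
vms = [
    {"cores": 2, "ram_gb": 4,  "disk_gb": 80},
    {"cores": 4, "ram_gb": 8,  "disk_gb": 200},
    {"cores": 1, "ram_gb": 2,  "disk_gb": 40},
] * 20  # 60 VMs total

HOST_CORES, HOST_RAM_GB = 16, 64   # assumed host specification
HEADROOM = 0.75                    # run hosts at no more than 75% capacity

need_cores = sum(v["cores"] for v in vms)
need_ram   = sum(v["ram_gb"] for v in vms)

hosts = max(math.ceil(need_cores / (HOST_CORES * HEADROOM)),
            math.ceil(need_ram   / (HOST_RAM_GB * HEADROOM)))
print(f"{len(vms)} VMs -> at least {hosts} hosts, plus a spare for failover")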
You need to analyze the software requirements of the entire organization, and specifically which applications lend themselves to a virtualized environment. Things like the number of copies of an application, the amount of interconnectivity (switching speed between applications), the amount of I/O, and the typical CPU utilization of any given application must all be taken into account. Licensing requirements must also be considered.
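One way to make that analysis systematic is a simple scoring pass over your application inventory. The weights and thresholds below are illustrative assumptions, not an industry standard:

# Hypothetical scoring heuristic for virtualization suitability.
def virtualization_score(copies, io_mbps, avg_cpu_pct):
    score = 0
    score += 2 if copies >= 5 else 0        # many identical copies consolidate well
    score += 2 if avg_cpu_pct < 30 else 0   # low average CPU leaves room to share
    score -= 2 if io_mbps > 500 else 0      # heavy I/O may suffer under contention
    return score

# Assumed inventory: name -> (copies, sustained I/O in Mbps, average CPU %)
apps = {"mail": (8, 120, 15), "db": (2, 900, 70), "intranet": (6, 60, 10)}
for name, (copies, io, cpu) in apps.items():
    print(name, virtualization_score(copies, io, cpu))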
What is your network’s topology? When your IT infrastructure is highly distributed, with lots of satellite-office installations, it’s particularly important to consider the impact of deploying a virtualized system. Satellite offices can benefit from having local computing infrastructure, which insulates them from certain problems, such as network outages. Moving to a VM architecture might make your infrastructure more dependent on network availability, which carries its own intrinsic set of risks.
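That dependency can be quantified with simple serial-availability arithmetic. All the uptime figures here are assumptions chosen to illustrate the effect:

# Once a satellite office depends on the WAN to reach centralized VMs, its
# effective availability is the product of the link's and the data center's.
wan_availability   = 0.995   # assumed WAN uptime
dc_availability    = 0.999   # assumed data-center uptime
local_availability = 0.998   # assumed uptime of a local server

remote = wan_availability * dc_availability
print(f"centralized: {remote:.4f} ({(1 - remote) * 8760:.1f} h/year down)")
print(f"local:       {local_availability:.4f} "
      f"({(1 - local_availability) * 8760:.1f} h/year down)")

Under these assumptions, the centralized office sees roughly three times the annual downtime of the one with a local server, which is exactly the trade-off to weigh.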
Network security and privacy have never been bigger issues. Security breaches are at record levels, and the regulatory frameworks being imposed on organizations are becoming increasingly stringent.
If you previously had physical separation of servers and are now consolidating them, that consolidation creates a security vulnerability. You need to think about who has administrative access to these servers and how the data stored on them will be managed. An access policy needs to be written, and a backup strategy for all your valuable enterprise data needs to be developed in case that data is compromised.
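An access policy ultimately reduces to a table of who may do what. Here is a minimal sketch of a role-based check for consolidated hosts; the role names and the policy table are hypothetical examples, not a prescribed scheme:

# Minimal role-based access check for consolidated virtualization hosts.
POLICY = {
    "hypervisor-admin": {"host-console", "vm-create", "vm-delete"},
    "app-owner":        {"vm-console"},
    "auditor":          {"read-logs"},
}

def allowed(role: str, action: str) -> bool:
    # Unknown roles get no rights at all (deny by default).
    return action in POLICY.get(role, set())

assert allowed("hypervisor-admin", "host-console")
assert not allowed("app-owner", "vm-delete")  # app owners can't remove VMs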
System mobility, otherwise known as motioning, concerns the ability of live operating systems to migrate between hardware platforms. This is a useful tool for implementing load-balancing, failover, and disaster-recovery strategies, but it can have further implications for security, internal audit (charge-back), and compatibility. You might need to settle on a single VM vendor to make this work effectively, and there are a lot of VM suppliers out there.
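To see what a load-balancing use of motioning looks like in miniature, consider this naive decision rule. The thresholds are assumptions; real schedulers also weigh memory pressure, affinity rules, and licensing:

# Naive load-balancing rule for live migration ("motioning").
def pick_migration(hosts, high=0.85, low=0.50):
    """Return (vm, source, target) if a hot host and a cool host both exist."""
    hot  = [h for h in hosts if h["cpu"] > high]
    cool = sorted((h for h in hosts if h["cpu"] < low), key=lambda h: h["cpu"])
    if hot and cool:
        src = max(hot, key=lambda h: h["cpu"])   # worst offender first
        return src["vms"][0], src["name"], cool[0]["name"]
    return None

hosts = [{"name": "esx1", "cpu": 0.92, "vms": ["crm"]},
         {"name": "esx2", "cpu": 0.31, "vms": ["wiki"]}]
print(pick_migration(hosts))   # ('crm', 'esx1', 'esx2')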
Tackling VM-Created Problems
One central technology—quite literally central to the administration process as a virtual hub—being driven by VM is KVM (for keyboard, video, mouse) switching, or more specifically, CATx KVM switches and KVM-over-IP.
Why KVM? Because you need hardware backup for the software that has so much work on its plate. KVM handles the key issues mentioned above that arise when migrating to a virtual environment.
Also, for the foreseeable future, we’re going to be dealing with a heterogeneous, highly dispersed environment. We’re still going to need to cope with server failure, we’re still going to need hot-swap hardware, and we’re still going to need emergency access. Virtualization doesn’t simply make these issues go away. It might refocus them and it might also localize them, but it doesn’t make them go away.
Here are some of the issues that KVM handles, complementing what VM does. Indeed, virtualization is driving the need for KVM, because KVM steps in where virtualization either falls short or where access must be completely reliable in the way that only hardware can be.
Consolidated servers become even more critical: now that you’re running more of your infrastructure on a single instance of hardware, that hardware matters more to the whole organization. Before virtualization, it might have been just the sales team, or the after-sales care team, that paged IT frantically when their server went down.
After virtualization, IT might get bombarded with calls from sales, accounting, legal, just about every department, if something goes wrong on that server. KVM offers reliability and the ability to quickly switch the infrastructure to a backup server to keep those departments running without interruption.
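The decision to switch can be driven by a simple out-of-band watchdog. This is a hedged sketch: the host names, port, and the hand-off step are placeholders, and a production monitor would alert operators or trigger the KVM switch through whatever interface your hardware provides:

# Out-of-band watchdog: probe a consolidated host and, after repeated
# failures, tell operators to point the KVM console at the backup.
import socket, time

def reachable(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def watch(primary: str, backup: str, failures_allowed: int = 3) -> None:
    misses = 0
    while True:
        misses = 0 if reachable(primary) else misses + 1
        if misses >= failures_allowed:
            print(f"{primary} unreachable; switch KVM console to {backup}")
            break
        time.sleep(30)   # probe every 30 seconds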
Switching to virtualization is not a binary thing. We will never just switch off the old infrastructure and switch on the new VM infrastructure in one fell swoop. It will be a heterogeneous environment for many years to come, and VM is not a substitute for KVM access, particularly KVM-over-IP, whether through IPMI or more traditional aftermarket KVM hardware. Such KVM access gives BIOS-level (below the hypervisor) access to a machine, as if you were sitting in front of it. Sometimes there is just no substitute for this.
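For the IPMI route, below-the-hypervisor access can be as simple as attaching a serial-over-LAN console. The sketch below assumes the open-source ipmitool utility is installed; the BMC address and credentials are placeholders:

# Attach an out-of-band IPMI serial-over-LAN console from Python.
import subprocess

def open_sol_console(bmc_host: str, user: str, password: str) -> None:
    # Reaches the machine below the hypervisor, BIOS screens included.
    subprocess.run([
        "ipmitool", "-I", "lanplus",      # IPMI v2.0 over the LAN
        "-H", bmc_host, "-U", user, "-P", password,
        "sol", "activate",                # attach the serial-over-LAN console
    ], check=True)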
Decent management tools are vital. While VM brings a bunch of benefits (including such things as increased flexibility and cost savings), it can’t be accomplished in isolation. You have to think about how the IT architecture can best meet the needs of the organization it serves, and this will involve a mix of VM and more traditional deployments.
For virtualized systems, the importance of robustness and availability has increased. For both traditional and virtualized systems, you need to be able to manage them with a consistent view. Management tools from companies like Adder are going to be increasingly important.
Yet, as if all the VM factors driving new needs weren’t enough, think about cloud computing. That’s virtualization, plus handing over some reins of control to the ether, for the sake of economics. That means taking precautions like never before, and having the ability to troubleshoot like never before. If you prepare for VM well, it may pave the way to handle the move to a world of computing in the cloud much more smoothly.
More Info
Adder Corporation
Newburyport, MA
Frazer Bennett is an adviser to Adder’s engineering team, with particular responsibilities for IP networking and system architecture. He has more than 20 years of experience in computing systems design and networking. To comment on this article, send e-mail to [email protected].