Fundamentals of Virtualization

There are many different operating system platforms on the market today: Windows, Linux, Mac OS, and Solaris, to name a few. Naturally, people often want more than one of them. For example, they might want Windows for games or work, Linux for secure Web browsing, and Mac OS for music and graphic design. Virtualization takes this a step further: it allows multiple operating systems to run on one machine at the same time. Virtualizing a machine brings many benefits, including reduced hardware costs, easier testing and development, safer and faster backup and restoration, and an end to endless hardware purchases and upgrades. When you consider applying virtualization to your current environment, think about consolidating logical rather than physical resources into a system designed to support server, storage, and network virtualization. In other words, virtualization is the creation of a virtual, rather than actual, version of something. The technology dates back decades, when it let administrators avoid wasting expensive processing power, and many IT professionals see system virtualization as the future of computing, one that will improve IT flexibility and responsiveness.

System virtualization allows many virtual systems to run within a single physical system. There are two ways to approach it: hardware partitioning or hypervisor technology. Hardware partitioning divides one physical server into fractions, each of which can run an independent operating system. This allows hardware consolidation, but it is not the best option for resource sharing and emulation.

A better choice for resource sharing and emulation is the hypervisor, a thin layer of code in software or firmware that achieves a much finer-grained level of resource sharing. Hypervisors provide great flexibility in defining and managing virtual resources and are the first-choice technology for system virtualization. There are two types: Type 1 and Type 2. A Type 1 hypervisor runs directly on the system hardware, while a Type 2 hypervisor runs on a host operating system that provides virtualization services such as I/O device support and memory management. Type 1 hypervisors are usually preferred because they deal directly with the hardware, which yields higher virtualization efficiency along with better performance, availability, and security. Type 2 hypervisors are typically used where efficiency is less critical, such as on client systems, and where support for a wide range of I/O devices matters and can be provided by the operating system already in place.
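The efficiency difference between the two hypervisor types comes down to how many software layers a guest request must cross before it reaches the hardware. The toy Python sketch below is an illustration, not a real hypervisor; the layer names are assumptions made purely for the example:

```python
# Toy model of hypervisor call paths. It only counts the software layers
# a guest request crosses before reaching hardware, to illustrate why
# Type 1 designs tend to be more efficient. Layer names are illustrative.

TYPE1_PATH = ["guest OS", "Type 1 hypervisor", "hardware"]
TYPE2_PATH = ["guest OS", "Type 2 hypervisor", "host OS", "hardware"]

def transitions(path):
    """Number of layer crossings before a request reaches hardware."""
    return len(path) - 1

print("Type 1:", transitions(TYPE1_PATH))  # → 2
print("Type 2:", transitions(TYPE2_PATH))  # → 3
```

The extra hop through the host OS is exactly where a Type 2 hypervisor trades efficiency for the convenience of reusing the host's existing device drivers.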

Generally there are three main areas of virtualization: network, storage, and server virtualization. Network virtualization combines the resources available in a network by splitting the available bandwidth into independent channels, each of which can be assigned (or reassigned) to a particular server or device in real time. Storage virtualization combines physical storage from many network storage devices into what appears to be a single storage device managed from a central console; it is used in storage area networks (SANs). Server virtualization separates the physical server hardware from the guest operating systems (servers), providing additional capabilities and benefits. For example, VMware offers a type of server virtualization that lets you run multiple guest operating systems on a single host computer, gaining benefits such as portability of guest virtual machines, reduced operating costs, reduced administrative overhead, server consolidation, easier testing and training, and better disaster recovery. There are also several other types of virtualization: hardware, memory, application, mobile, data, database, and desktop virtualization. Two of these have subtypes. Hardware virtualization comes in four forms: hardware-assisted virtualization; full virtualization, which allows a guest OS to run unmodified; partial virtualization, where only some of the target environment is simulated and guests may need modification to run; and paravirtualization, which requires modifications to the guest OS in exchange for better efficiency. Memory virtualization likewise takes two forms: virtual memory within a single system, and aggregating the RAM resources of networked systems into a single memory pool.
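The channel-splitting idea behind network virtualization can be sketched in a few lines of Python. This is a minimal model, not a real network stack; the server names and channel sizes are made up for illustration:

```python
class VirtualLink:
    """Toy model of network virtualization: a physical link's bandwidth is
    split into fixed-size channels that can be (re)assigned to servers."""

    def __init__(self, total_mbps, channel_mbps):
        self.channel_mbps = channel_mbps
        # Each channel starts unassigned (owner None).
        self.channels = {i: None for i in range(total_mbps // channel_mbps)}

    def assign(self, channel_id, server):
        """Assign (or reassign) one channel to a server."""
        self.channels[channel_id] = server

    def bandwidth_of(self, server):
        """Aggregate bandwidth currently assigned to one server."""
        return sum(self.channel_mbps
                   for owner in self.channels.values() if owner == server)

link = VirtualLink(total_mbps=1000, channel_mbps=100)  # 10 channels
link.assign(0, "web-server")
link.assign(1, "web-server")
link.assign(2, "db-server")
print(link.bandwidth_of("web-server"))  # → 200
link.assign(1, "db-server")             # reassign a channel "in real time"
print(link.bandwidth_of("web-server"))  # → 100
```

The reassignment at the end is the key property: capacity moves between servers without touching any physical cabling.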

Unfortunately, virtualization also has a few disadvantages. One is that it introduces a single point of failure: if the machine or the host OS on which the virtualization runs crashes, everything on it is lost. The best protection is a regular, up-to-date backup. Virtualization can also be expensive and does not run well on older machines, so an updated computer may be an additional cost for someone looking to adopt the technology; success with virtualization requires powerful machines that can execute many processes per second. Another issue is applications. An application such as a database requires frequent disk operations, and when virtualization adds delay to reading from or writing to the disk, it can render the whole application unusable. Many people still think database virtualization needs improvement, even though some real-time financial applications already run on virtualized machines. To be safe, it is better to avoid virtualizing demanding, critical database applications.
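The disk-latency concern is easy to see with back-of-the-envelope arithmetic. The figures below are purely hypothetical; the point is only that a small per-operation overhead compounds across the many I/O operations a busy database performs:

```python
def total_time_ms(ops, op_ms, overhead_ms):
    """Total time for `ops` disk operations, each taking `op_ms` plus a
    fixed per-operation virtualization overhead (hypothetical figures)."""
    return ops * (op_ms + overhead_ms)

# 10,000 disk operations at 0.5 ms each, with an assumed 0.1 ms
# virtualization tax on every operation:
bare_metal = total_time_ms(10_000, 0.5, 0.0)   # 5000 ms
virtualized = total_time_ms(10_000, 0.5, 0.1)  # roughly 6000 ms, ~20% slower
print(bare_metal, virtualized)
```

A CPU-bound application would barely notice such an overhead, which is why the text singles out disk-heavy workloads like databases.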

Network Instruments, a leading provider of network analysis solutions that helps organizations and enterprises ensure the delivery of business-critical applications on their networks, asked 450 network engineers, IT managers, and executives attending Interop how they feel about virtualization, and found that 55% are unhappy with the technology. “Many of the people we’re speaking with have implemented virtualization, but often lack of visibility (24%) is keeping them from realizing the benefits of the technology,” Charles Thompson, Network Instruments’ product manager, said in a statement. “Not surprisingly, a high number of companies have deployed critical network services on virtual machines. Without proper tools, application performance can unnecessarily degrade and network teams waste hours troubleshooting.” Some of these concerns appear in the pie chart of virtual troubleshooting concerns below: 27% complained of a lack of appropriate tools, 24% of a lack of information on problem causes, 11% had problems tracking virtual machines, and 14% had issues securing the virtual infrastructure.

[Pie chart: Virtual Troubleshooting Concerns]

The next graph exposes a few other common problems with virtualization: it is still difficult to identify a problem’s source (78%), monitor bandwidth consumption (37%), measure latency (36%) and application performance (16%), and handle user complaints (24%).

[Chart: Common virtualization troubleshooting problems]

On the other hand, virtualization has many advantages. By compartmentalizing environments into different virtual machines with different security requirements, you can select the guest operating system and tools most appropriate for each environment. By consolidating numerous virtual machines onto one physical machine, costs go down: there is no need for new hardware, floor space, or additional software licenses. In terms of reliability and availability, a software failure in one virtual machine does not affect the others. Testing and development is another strength: a system crash can cause huge economic losses, but virtualization technology enables a virtual image to be instantly re-imaged on another server if a machine fails. The graph below shows the percentage of applications running on virtual machines as reported in the 2009 State of the Network Global Study. Network Instruments predicts that this will shift quickly over the next two years, with the majority of large companies running over 50% of their applications on virtual machines. According to the responses, three-quarters have already implemented virtualization within their environments.

[Chart: Percent of applications running on virtual machines (2009 State of the Network Global Study)]
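The consolidation savings described above can be made concrete with a simple packing exercise. The sketch below uses a greedy first-fit heuristic and made-up CPU loads; real capacity planners also weigh memory, I/O, and headroom for failover:

```python
def consolidate(vm_loads, host_capacity):
    """First-fit packing of VM CPU loads onto as few hosts as the
    heuristic finds. Returns the number of physical hosts needed."""
    hosts = []  # each entry is the remaining capacity of one host
    for load in vm_loads:
        for i, free in enumerate(hosts):
            if load <= free:         # place the VM on the first host it fits
                hosts[i] -= load
                break
        else:                        # no existing host has room: add one
            hosts.append(host_capacity - load)
    return len(hosts)

# Ten lightly loaded servers (CPU utilization as a percent of one host):
loads = [20, 35, 10, 25, 40, 15, 30, 5, 45, 20]
print(consolidate(loads, host_capacity=100))  # → 3
```

Ten physical servers collapse onto three hosts in this example, which is the kind of reduction in hardware, floor space, and licensing the study's respondents were after.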

There are two kinds of virtualization software on the market: commercial software, which costs money, and freeware, which is free. Some of the best-known paid products are VMware, Parallels Workstation (for Windows & Linux), Parallels Desktop (for Mac), Virtuozzo, and Simics; free options include Microsoft Virtual PC 2007, VirtualBox, Virtual Iron, PearPC (for running Mac OS inside Windows or Linux), Bochs, and QEMU.

In conclusion, I think virtualization has a great future in computing and can be very economical. Many big companies, such as Sun, IBM, and Intel, have already announced or shipped virtualization solutions. It will be difficult to find perfect solutions for all of the problems virtualization may bring, but the advantages may well end up outweighing the negatives. If you need any help virtualizing your PC, don’t hesitate to contact us by clicking on this link.

For more details about how Amvean can help you with your storage strategy, please contact us at info@amvean.com or 212.810.2074.