Many enterprising companies are looking for ways to improve their IT departments, refine their business models, and lower their operating costs. Only a few years ago these seemed like intractable problems; today, virtualization offers a reliable, efficient, and customizable answer to these business needs and more.
Whether used to provide better customer service, to operate more sustainably, or to gain more storage capacity, the virtualization technology of today offers many benefits to enterprising companies all over the world, creating innovative solutions to work-based problems on a daily basis. Virtualization has not only taken the business world by storm with its innovative and creative solutions but also offers proven advantages in several areas of industry, IT, and service. Below is a list of a few of the advantages and solutions that virtualization technology can offer the enterprise around the globe.
Virtual Desktops
Many corporations and enterprises are looking to reduce their footprint and operate more efficiently. One aspect of virtualization, the virtual desktop, makes this possible. Virtual desktops free up space, both within a desktop computer and on the physical desk, by using software that extends a desktop's environment beyond its physical limits. The result is a more eco-friendly environment, with fewer computers consuming energy, lower operating costs, and seamless transitions between multiple operating systems.
Enhanced System Security
A common fear among enterprises considering virtualization is that its advantages come at the cost of the security of sensitive, private, and legally protected company information. That is not the case; in fact, virtualization provides enhanced security, making it more difficult for hackers to find key information. Unlike other security systems, virtualization has the ability to single out and trace requests.
If a request seems suspicious or unsuitable, virtualization security technologies reroute the attacker to another location, keeping enterprise information safe from harm.
Better System Reliability
Non-virtualized networks and systems are more prone to crashes and memory corruption caused by software installations such as device drivers. Through virtualization, I/O resources can be isolated, providing better security (see above), reliability, and even availability across devices for business purposes.
Disaster Recovery
Along the same lines as better system reliability, virtualization also provides enterprising businesses with better, faster, and more secure disaster recovery. This is possible because the technology can take a virtual image of a server and transfer it to another machine if the original server starts to fail. This prevents information loss and provides a constant stream of secure and safe information.
Space and Server Consolidation
In an organization with a physical database, it can take up to ten machines to handle the workload that a single virtualization host can carry.
This means that up to ten applications can run on a solitary virtual machine, consolidating physical space as well as server use, and therefore saving energy, operating costs, and server expenses.
Scalability
One advantage of virtualization technology is its unique scalability. Unlike purchasing a fixed amount of memory or RAM for a company, the possibilities with virtualization are practically endless. The workload and space needed one month may change the next, and virtualization accommodates those changes by flexing to fit the needs of the enterprise at the time of use.
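The ten-to-one consolidation figure above lends itself to a quick back-of-the-envelope estimate. In the sketch below, the 10:1 ratio comes from the text, while the per-server power draw and hardware cost are assumed placeholder figures, not vendor data:

```python
# Back-of-the-envelope consolidation estimate. The 10:1 ratio comes from
# the text; watts_per_server and cost_per_server are assumed placeholders.

def consolidation_savings(workloads, ratio=10, watts_per_server=400,
                          cost_per_server=3000):
    """Estimate host count, power saved, and hardware spend avoided when
    `workloads` physical servers are consolidated at `ratio`:1."""
    hosts = -(-workloads // ratio)  # ceiling division
    return {
        "hosts_after": hosts,
        "power_saved_watts": (workloads - hosts) * watts_per_server,
        "hardware_saved_dollars": (workloads - hosts) * cost_per_server,
    }

# Example: 30 physical servers consolidated 10:1 need only 3 hosts.
print(consolidation_savings(30))
```

Even with conservative placeholder figures, retiring 27 of 30 boxes makes the energy and hardware savings the text describes easy to see.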
This also saves on energy consumption and operating costs, because virtualization service providers often charge only for what is actually used.
Endless Memory and Accessibility
One aspect of scalability is virtualization's seemingly endless memory. Enterprising businesses can take advantage of this near-limitless capacity to house business information, client details, invoices, and financial records in a place that is accessible, crash-protected, and secure. Virtualization is reachable anywhere there is an internet connection, allowing access to important company information from anywhere in the world.
This is great for traveling business owners, work-from-home employees, or anyone who needs access away from the office. It also allows companies to offer better customer service, thanks to the ease of access and the quick retrieval of information stored in that limitless memory location. Many enterprising companies are looking for ways to improve IT departments, refine business models, and lower operating costs, all of which can be accomplished with the modern marvel that is virtualization.
Return on Investment
Schools of thought have argued, on various platforms, about the capital-investment return of virtualization; multiple factors determine what type of monitoring an Information Technology (IT) department should undertake.
In order to properly evaluate a specific IT environment, one must first determine whether or not to virtualize. Virtualization has been hailed as a lifesaver by countless IT departments over the past few years. What first needs to be determined, however, is whether an organization really needs to virtualize. Perhaps the database, application servers, network services, and so on do not truly need to be virtualized; perhaps they do. What we are trying to determine here is the return on investment (ROI) for virtualization.
Technologists, researchers, and students have all helped quantify the value of virtualizing an IT infrastructure. If an organization is in the process of acquiring a new company (or being liquidated) and subsequently moving locations, it must first take a look at its server room. If the room is overcrowded, underpowered, or outdated, then yes, virtualization is probably important. What everyone in the IT department wants to know, then, is: is virtualization right for my organization and, more importantly, my department?
The resources saved on power and hardware alone are staggering. Keeping costs low for the overall organization is obviously crucial, especially when it comes to things such as never having to redeploy application solutions. This saves time and, subsequently, money: no server-hardware refresh costs and limited annual server-related power costs. The greater questions then arise: how much time will it take a department to make the complete switch? Will the ROI be worth the increased labor hours needed to become fluent in the virtualized world?
What types of challenges will come about from the overall business perspective?
Reducing Infrastructure Costs Through Virtualization
Nowadays, we live in uncertain times all around the world.
When it comes to architecture and design, we must think much more about costs. At other times, we used to think first of the solution and then of the costs; only if a project had a streamlined, well-defined return on investment (ROI) did it get the green light. Today, it is necessary to have a precise budget and defined cost before we can even start to think about a project. Upon brief reflection on what IT architecture is, one finds that the model the whole world now favors (because of costs and the evolution of technologies) is the virtualization model.
Ten years ago, it was all about decentralization, both in data centers and servers and in communication and desktops. When we thought of an application, we always tried to keep its layers as close to the client as possible; data centers, databases, and e-mail servers were distributed all over. This situation was the result of being unable to face the huge cost of high-capacity hardware (such as an eight-processor server with many gigabytes of RAM) or of redundant point-to-point communication links with good bandwidth, costs that only large companies could consider including in their architecture.
All of this encouraged a decentralized IT administration model that required specialists in the different platforms at each site.
The Original Model
Although this model worked for years, many things were not considered that today have rendered it not as efficient as was initially thought.
Take, as an example, a distributed application devised 10 years ago, whose architecture called for the data to be near the client. This led to the following scheme: a database in the central office, into which the information from the different sites was consolidated, plus a database at each site, an application server at each site, and a local application installed on each desktop. A replication scheme among the databases handled the distribution of information. This forced us to have database administration at each site, in addition to an infrastructure administrator at each site with thorough knowledge of the platform.

Initially, the variables in the equation were high communication-link costs; large servers that represented a very high cost; and operating systems that were neither very solid nor flexible with regard to changes and offered very few functions or roles. For many of the company's needs, it was necessary to add software to provide the missing functionality, and in order to carry out the tiniest change, it was necessary to take the server offline. IT personnel represented an average cost. The variable that was not really considered was the updating and maintenance of the whole structure, which at the time, because technology did not evolve as quickly as it does today, was not such an important aspect. If we apply basic accounting principles, one should always see IT personnel as an asset of the company, with both an amortization time (the time that it takes to shape a person to the culture and needs of the company) and an updating cost (what must be invested to keep a person trained in the different technologies as they evolve).
Over time, all of this changed; the variables in the equation changed, and the updating and maintenance variable (which in many cases had not been taken into account) started to gain more and more importance. This is the equation we face nowadays: average to low communication-link costs (taking the virtual private network into account); large servers with many gigabytes of RAM at average to low cost; operating systems with hundreds of embedded, flexible options and lots of functionality (many things come solved and embedded in the operating system, so that in general it is no longer necessary to take the server offline to make changes); average to high personnel costs; and average to high updating and maintenance costs. Within the scheme that is encouraged today, many things must be considered; it is necessary to keep the whole scheme in mind, not just a part of it, to avoid repeating the mistakes of the past. Nowadays, when uncertainty (crises, corporate mergers, acquisitions, and constant change) is all around, it is vital to work toward an environment that supports constant, dynamic change. More than ever, it is necessary to think about platform and application updating, growth, and corporate and budget contractions.
This, of course, will strongly influence the model that is chosen, and that model (taking into account the aforementioned equation) should be based mainly on the updating and maintenance variable. When we consider all of the preceding, we will see that the model that best fits is the virtualization model, applied at all possible levels, where all of the equation's variables are considered in order to determine feasibility and total cost of ownership (TCO). There are countless virtualization scenarios, from choosing cloud computing for specific services and virtualizing (or outsourcing) all or part of the IT department to using virtualization for servers, applications, or desktops.
Today, there are many important players and widely tested technologies, such as Microsoft Hyper-V and VMware.
Hardware costs have come down considerably: comparing a four- to eight-processor machine of the past with one of today reveals an important cost margin in our favor, to which we must add the progress made in redundancy technologies within the equipment, such as redundant boards and hot-plug memory. Generally speaking, almost every component of a server can now be changed without taking the server offline.
The same applies to operating systems. This means we can do away with the old theory of splitting the different business applications across different pieces of hardware. In addition, the fault-tolerance features of today's virtualization schemes make it possible to take a physical server offline without affecting the virtual servers running on it.
This means that, from a purely technical point of view, there is already a huge advantage in server virtualization. From an architectural point of view, it allows us to respond to organizational changes quickly. Above all (and this is something to consider nowadays), it enables substantial savings when shaping a data center: savings on the electricity bill, cooling costs, physical space, and hardware. Moreover, deployment and disaster recovery become much simpler. All of this results in lower maintenance costs, whether our own IT department administers 100 percent of the platform or the IT department itself is virtualized (later, we will develop the IT department cost, as well as the options for virtualizing it and the reasons behind doing so). All of this leads to there being almost no scenario in which virtualization does not apply and does not result in a much lower TCO.
Desktop Virtualization
Here, too, it is possible to find well-developed and well-tested technologies, such as Microsoft Terminal Server, Citrix, and so on. This kind of virtualization was previously intended for remote sites or links with relatively small bandwidth.
Nowadays, it is used as a method of reducing desktop-administration expenses, because this technology makes it technically possible to add fault tolerance, fold it into the server-virtualization scheme, and create a pool of servers. There is a substantial reduction in desktop-maintenance costs and in the cost of the desktops themselves, because with more modest hardware it is possible to run any kind of application while retaining centralized control and deployment of applications and security policies.
Virtualization of the IT Department
In the past, the IT department cost was relatively low, or at least not as significant as it is today. Today, the IT department cost is high, and it is necessary to consider the following variables: training in new technologies, training in the company environment, the cost of searching for personnel, and the time that the search takes. All of this leaves the IT department unable to respond with the speed the company needs. In addition, we currently experience high labor turnover, which means that this process often has to start again, driving the cost further upward. If we consider all of these factors, especially the costs and time involved in the search, we will see that a virtualized IT department results in a lower TCO and every possible advantage. With virtualization, all of these IT-department problems are moved to an external company that is exclusively devoted to IT, particularly where specialists in specific technologies are concerned.
This would mean that there is no point in having a specialist as part of the internal IT department.
Currently, an enormous number of services are available on the Web, from e-mail services (as has been the case for a very long time) to CRM, ERP, document managers, and other services.
This solution naturally offers a world of advantages: it is unnecessary to have a specialist in the technology within our IT department, it is equally unnecessary to maintain that technology from either the hardware or the software point of view, and keeping backup copies of the information is no longer our responsibility. Depending on the kind of hired service and its service-level agreement (SLA), it will be possible to have a redundant and always-online service. In some cases, the cost of this kind of service can be high, depending on both the number of users in our organization who require the service and its characteristics, and it is worth evaluating carefully.
As an example, we will use a virtualization architecture based on Microsoft technologies such as Hyper-V and Terminal Server. Number of servers: 15. Typical structure of the IT department: one manager, two IT administrators, one database administrator (DBA), and five Help Desk employees. Based on everything explained previously, we will take the best of each virtualization technology to carry out a cost reduction.
It will be possible to consolidate approximately 15 noncritical servers into 4 physical servers capable of supporting those 15 virtual servers. It will be necessary to carry out a load analysis and distribute the servers and business applications correctly. Given the roles of a typical company server, not many servers have high processing consumption; those that do must be isolated so that no resource-competition conflict is generated.
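The load analysis called for above can be prototyped as a simple first-fit-decreasing placement. This is only a sketch: the per-VM load figures (expressed as percent of one host's capacity) are invented for illustration, and a real plan would use measured CPU and RAM profiles:

```python
# First-fit-decreasing placement of virtual servers onto physical hosts.
# The per-VM loads below (percent of one host's capacity) are illustrative
# stand-ins for measured CPU/RAM profiles.

def place_vms(vm_loads, host_capacity, max_hosts):
    """Assign each VM to the first host with room, largest loads first.
    Returns per-host load lists, or None if max_hosts would be exceeded."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            if len(hosts) == max_hosts:
                return None  # the VMs do not fit on this many hosts
            hosts.append([load])
    return hosts

# 15 noncritical servers; cap each host at 75% to leave failover headroom.
loads = [30, 25, 25, 20, 20, 20, 15, 15, 15, 15, 10, 10, 10, 10, 10]
plan = place_vms(loads, host_capacity=75, max_hosts=4)
```

With these figures the 15 loads fit on four hosts; isolating a high-consumption server, as the text advises, would simply mean giving it a host or capacity reservation of its own.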
It will also be necessary to include (if we do not already have it) external storage on which our virtualization scheme will run, so that it can operate as a cluster and provide fault tolerance for all of the virtual machines. All of this is possible using 64-bit Microsoft Windows Server 2008 with Hyper-V, and System Center Virtual Machine Manager to carry out the P2V conversions.
Within this example, it will be possible to reduce energy consumption by approximately 70 percent, as a result of the lower consumption of the servers. There will also be a reduction of approximately 70 percent in cooling consumption, as a result of the use of shared storage. The licensing cost will also decrease very substantially (when we use Microsoft licensing). The Microsoft licensing scheme is summarized in Table 1, where we can see that by using Windows Server 2008 data-center licensing it is possible to go from licensing 15 servers (which may run different versions of Windows Server 2008, depending on processor and RAM needs) to licensing only 4 hosts (with data-center or enterprise licenses). Depending on the versions of Microsoft Windows used, even in the least favorable scenario we will achieve a cost reduction of 50 percent.

Table 1. Microsoft Licensing Scheme

  Version of Windows Server 2008 host    Covered virtual servers
  Standard                               1
  Enterprise                             4
  Data center                            Unlimited
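Using only the coverage figures from Table 1, a short sketch can count how many host licenses each edition requires for a given number of guests (list prices are deliberately left out, since they vary by agreement):

```python
import math

# Virtual-server coverage per Windows Server 2008 host license, per Table 1.
# None means unlimited (the Data center edition, licensed per physical host).
COVERAGE = {"Standard": 1, "Enterprise": 4, "Data center": None}

def licenses_needed(edition, virtual_servers, physical_hosts):
    """Host licenses needed to cover the given number of virtual servers."""
    per_license = COVERAGE[edition]
    if per_license is None:
        return physical_hosts  # one Data center license per physical host
    return math.ceil(virtual_servers / per_license)

# The example scenario: 15 virtual servers on 4 physical hosts.
for edition in COVERAGE:
    print(edition, licenses_needed(edition, 15, 4))
```

For the example's 15 guests on 4 hosts, this yields 15 Standard, 4 Enterprise, or 4 Data center licenses, which is the reduction the text describes.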
Desktop Virtualization
Depending on the memory consumption of the applications, it will be possible to implement approximately five Terminal Server instances, typically on five physical servers, to cover 500 work positions.
The main advantage of virtualized servers is that, in case of a failure of any physical machine, the workload automatically fails over to another. In this way, we can equip users with desktops that have fewer resources, and it becomes possible to update applications more rapidly, along with deployment, printer management, and any other desktop issue. In turn, this also enables us to make a user's desktop available to remote or external users.
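The five-host figure for 500 seats implies roughly 100 concurrent sessions per server. The sketch below shows the sizing arithmetic, with an assumed per-session memory footprint and OS overhead (both placeholder planning figures, not measurements):

```python
import math

def terminal_servers_needed(seats, ram_per_server_gb=32,
                            session_mb=256, os_overhead_gb=4):
    """Rough Terminal Server host count for `seats` concurrent sessions.
    session_mb and os_overhead_gb are assumed planning figures."""
    usable_mb = (ram_per_server_gb - os_overhead_gb) * 1024
    sessions_per_server = usable_mb // session_mb  # 112 with the defaults
    return math.ceil(seats / sessions_per_server)

# 500 work positions on 32-GB hosts -> 5 servers, matching the example.
print(terminal_servers_needed(500))
```

A real deployment would size from pilot measurements, since session footprint varies widely with the applications in use.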
If we compare, on the one hand, the cost of updating 500 desktops to install some business application and, on the other, the purchase of five servers with 32 GB of RAM and two quad-core processors each, we obtain a cost reduction of approximately 90 percent.
Virtualization of the IT Department
It is first necessary to analyze the critical and noncritical applications; it is also important to analyze the IT labor market of the country in question. Generally speaking, the advice is to virtualize whatever is difficult to find in the market and to virtualize the IT department only partially.
For this example, and considering the current labor market, we will opt to virtualize only one IT administrator and one DBA; the Help Desk, one IT administrator, and the IT manager will remain in-house. By having an SLA with external suppliers and a framework contract, it will be possible to grow the IT department rapidly, or change the scheme swiftly, without a great increase in initial costs. It will also be possible to decrease training costs, hiring costs, and so on.
Cost Reduction
If we consider the TCO, hiring costs, training costs, and salaries, we will obtain a cost reduction of approximately 55 percent.
Cloud Computing
Consider an application that is not worth hosting internally, because of the size of the company. For this example, we will use a CRM, hiring ten CRM licenses online. In this way, no costs are associated with initial licensing, administrator training, CRM server deployment, disaster-recovery policies, or anything else that pertains to administering the CRM.
Cost Reduction
Based on online services, there will be a cost reduction of approximately 80 percent, based on 10 licenses and considering the initial cost of having a server, trained personnel, backup policies, and so on.
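A first-year comparison along these lines might look like the sketch below. Every dollar figure is an assumed placeholder rather than a quoted price, so the resulting percentage is illustrative and will not exactly match the 80 percent in the text:

```python
# First-year CRM cost comparison: hosted subscription vs. on-premises.
# Every dollar figure here is an assumed placeholder, not a quoted price.

def first_year_costs(users, hosted_per_user_month=50, server=6000,
                     onprem_license=8000, admin_training=4000,
                     backup_and_dr=3000):
    """Return (hosted cost, on-premises cost, fractional saving)."""
    hosted = users * hosted_per_user_month * 12
    on_prem = server + onprem_license + admin_training + backup_and_dr
    return hosted, on_prem, round(1 - hosted / on_prem, 2)

hosted, on_prem, saving = first_year_costs(10)
```

The shape of the result is what matters: for a small user count, the subscription avoids nearly all of the up-front server, licensing, training, and backup costs.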
Financial Benefits of Virtualization
“Leveraging virtual computer environments has increased the opportunities for teaching and learning. This particular solution is cost-effective and sustainable in many different ways. Technology-related costs have been reduced by a little over $250,000 a year, a combination of lower software costs, application software costs, and extending the life of the hardware. That in turn reduces the cost of the hardware by about 35-40 percent when it is replaced.
Computers can be refreshed with hardware that is much more cost-efficient, because it does not need to be the latest and greatest machine; the replacement computers cost around $500 or less. IT staff numbers are down, mainly because of a reduction in PC technicians. Everything is moving back toward the data center and, because of the way the environment is implemented, every time a user accesses an app or a desktop, they are actually accessing a copy of a perfect image.
Every time you open Word, it is a brand-new, fresh copy, and when you are done, that image goes away, so you are not really reusing it. SCC does manage profile information, so if users create custom shortcuts, those are applied over the virtual application; in this way, they still get a customized, personalized environment. Pooling resources reduced hardware and software costs while extending the life of current hardware resources. This lowers the school's total cost of ownership and makes a very significant difference.” (From the report Desktop Virtualization for the Real World.)
The virtualization scenario makes it possible to make structural changes in the IT department at the speed the market actually demands. We can achieve a strong cost reduction because, with a physical structure, we often do not use all of our resources (hardware, software, employees, and so on) at 100 percent.
With virtualization, the opposite is true: we push resource utilization as far as possible, and only then add more resources to virtualize. In our example, we can see the individual cost reductions; if we look at them globally, however, the cost reduction is even more significant.