Virtualization And Application Delivery

Application & Desktop Virtualization

App virtualization (application virtualization) is the separation of an installation of an application from the client computer that accesses it. There are two types of application virtualization: remote and streaming.

Remote applications run on a server. End users view and interact with their applications over a network via a remote display protocol. The remote applications can be completely integrated with the user’s desktop so that they appear and behave like local applications, through technology known as seamless windows. The server-based operating system instances that run remote applications can be shared with other users (a terminal services desktop), or the application can run on its own OS instance on the server (a VDI desktop). Either way, a constant network connection must be maintained for a remote application to function.
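
To make that last requirement concrete, here’s a minimal sketch in Python of the round trip a remote display protocol performs. The message format and “frame” contents are invented for illustration (this is not RDP, ICA, or any real protocol); the point is that the application executes on the server, and the client only exchanges input events and screen updates, which is why the connection must stay up.

    import json
    import socket
    import threading

    # Bind first in the main thread so the client can't race the listener.
    srv = socket.create_server(("127.0.0.1", 0))
    host, port = srv.getsockname()

    def serve() -> None:
        """Run the 'application' server-side; answer each input event
        with a screen update."""
        conn, _ = srv.accept()
        with conn:
            for line in conn.makefile("r"):
                event = json.loads(line)
                # The application itself executes here, on the server's
                # CPU and OS instance -- never on the client.
                update = {"frame": f"redraw after key {event['key']!r}"}
                conn.sendall((json.dumps(update) + "\n").encode())
                if event["key"] == "quit":
                    break

    threading.Thread(target=serve, daemon=True).start()

    # Client: forwards input events, renders whatever the server returns.
    with socket.create_connection((host, port)) as c:
        reader = c.makefile("r")
        for key in ("a", "b", "quit"):
            c.sendall((json.dumps({"key": key}) + "\n").encode())
            print("screen update:", json.loads(reader.readline()))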

With streaming applications, the virtualized application is executed on the end user’s local computer. When an application is requested, components are downloaded to the local computer on demand. Only certain parts of an application are required in order to launch; the remainder can be downloaded in the background as needed. Once completely downloaded, a streamed application can function without a network connection. Various models and degrees of isolation ensure that streaming applications will not interfere with other applications, and that they can be cleanly removed when closed.
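
Here’s a small sketch of that on-demand model, with invented block names and a dictionary standing in for the remote package store: only the launch blocks are fetched up front, later blocks are pulled as features are used, and once everything is cached the application runs without a network.

    # Minimal sketch of application streaming (all names invented).
    REMOTE_STORE = {0: b"launcher", 1: b"core", 2: b"spellcheck", 3: b"help"}
    LAUNCH_BLOCKS = [0, 1]          # enough to start the application

    local_cache: dict[int, bytes] = {}

    def fetch(block_id: int, online: bool) -> bytes:
        """Serve a block from the local cache, downloading it on demand."""
        if block_id not in local_cache:
            if not online:
                raise RuntimeError(f"block {block_id} not cached and offline")
            local_cache[block_id] = REMOTE_STORE[block_id]  # 'download'
        return local_cache[block_id]

    # Launch: only the launch blocks are required up front.
    for b in LAUNCH_BLOCKS:
        fetch(b, online=True)
    print("started with", len(local_cache), "of", len(REMOTE_STORE), "blocks")

    # A feature used later triggers an on-demand download...
    fetch(2, online=True)

    # ...and once fully cached, the app works with no network at all.
    fetch(3, online=True)
    print("offline read:", fetch(3, online=False))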

Both forms of application virtualization benefit from centralized management. Applications can be installed, patched, and upgraded once for an entire environment, instead of on each individual computer. Licensing can also be easier to handle when IT provisions the applications instead of allowing users to install them on their own.

Server Virtualization

Server virtualization is the masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The server administrator uses a software application to divide one physical server into multiple isolated virtual environments. The virtual environments are sometimes called virtual private servers, but they are also known as guests, instances, containers or emulations.

There are three popular approaches to server virtualization: the virtual machine model, the paravirtual machine model, and virtualization at the operating system (OS) layer.

Virtual machines are based on the host/guest paradigm. Each guest runs on a virtual imitation of the hardware layer. This approach allows the guest operating system to run without modifications. It also allows the administrator to create guests that use different operating systems. The guest has no knowledge of the host’s operating system because it is not aware that it isn’t running on real hardware. It does, however, require real computing resources from the host, so it relies on a hypervisor to coordinate instructions to the CPU. The hypervisor, called a virtual machine monitor (VMM), validates all the guest-issued CPU instructions and manages any executed code that requires additional privileges. VMware and Microsoft Virtual Server both use the virtual machine model.
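
That trap-and-emulate cycle is easy to miniaturize. Below is a toy Python sketch (the instruction names and guest programs are made up): unprivileged instructions run directly, while privileged ones trap to the VMM, which validates them and applies their effect to the guest’s virtual state rather than the real hardware.

    # Toy trap-and-emulate loop in the spirit of a VMM.
    PRIVILEGED = {"OUT", "HLT", "LOAD_CR3"}  # hypothetical privileged ops

    class Guest:
        def __init__(self, name: str, program: list[str]):
            self.name = name
            self.program = program
            self.vstate = {"cr3": 0, "halted": False}  # virtual CPU state

    def vmm_run(guest: Guest) -> None:
        for insn in guest.program:
            op, _, arg = insn.partition(" ")
            if op not in PRIVILEGED:
                # Direct execution path: no VMM involvement needed.
                print(f"[{guest.name}] exec {insn}")
            else:
                # Trap: the VMM validates the instruction and emulates it
                # against the guest's *virtual* hardware, not the host's.
                print(f"[{guest.name}] TRAP {insn} -> emulated by VMM")
                if op == "LOAD_CR3":
                    guest.vstate["cr3"] = int(arg)
                elif op == "OUT":
                    pass  # would drive a virtual device, not real I/O
                elif op == "HLT":
                    guest.vstate["halted"] = True
                    break

    # Two guests with different 'operating systems' share one host.
    vmm_run(Guest("linux-guest", ["ADD r1 r2", "LOAD_CR3 42", "HLT"]))
    vmm_run(Guest("windows-guest", ["MOV r1 7", "OUT 0x3f8", "HLT"]))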

The paravirtual machine (PVM) model is also based on the host/guest paradigm, and it too uses a virtual machine monitor. In the paravirtual machine model, however, the VMM actually modifies the guest operating system’s code. This modification is called porting. Porting lets the VMM use privileged system calls sparingly. Like virtual machines, paravirtual machines are capable of running multiple operating systems. Xen and UML both use the paravirtual machine model.
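
To contrast with the trap-and-emulate sketch above, here’s the same idea after porting, again with invented names: the modified guest kernel calls the monitor explicitly through a hypercall interface instead of issuing a privileged instruction and paying for a trap.

    def hypercall(guest_state: dict, op: str, arg: int = 0) -> None:
        """The VMM's explicit entry point for ported guest kernels."""
        if op == "set_page_table":
            guest_state["cr3"] = arg
        elif op == "yield_cpu":
            guest_state["halted"] = True
        else:
            raise ValueError(f"unknown hypercall {op!r}")

    def ported_guest_kernel(state: dict) -> None:
        # Where an unmodified kernel would execute LOAD_CR3 (a privileged
        # instruction that must be trapped), the ported kernel asks the
        # VMM directly -- cheaper, because no trap is taken.
        hypercall(state, "set_page_table", 42)
        hypercall(state, "yield_cpu")

    state = {"cr3": 0, "halted": False}
    ported_guest_kernel(state)
    print(state)  # {'cr3': 42, 'halted': True}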

Virtualization at the OS level works a little differently. It isn’t based on the host/guest paradigm. In the OS level model, the host runs a single OS kernel as its core and exports operating system functionality to each of the guests. Guests must use the same operating system as the host, although different distributions of the same system are allowed. This distributed architecture eliminates system calls between layers, which reduces CPU usage overhead. It also requires that each partition remain strictly isolated from its neighbors so that a failure or security breach in one partition isn’t able to affect any of the other partitions. In this model, common binaries and libraries on the same physical machine can be shared, allowing an OS level virtual server to host thousands of guests at the same time. Virtuozzo and Solaris Zones both use OS-level virtualization.
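
The shared-kernel idea can be sketched schematically as well. In the toy Python model below (all names invented), every guest holds a reference to the same kernel object and the same read-only binaries while keeping its own private state; nothing is duplicated per guest, which is what makes such high guest counts possible.

    class SharedKernel:
        def syscall(self, guest: str, name: str) -> str:
            return f"kernel handled {name!r} for {guest}"

    class Guest:
        def __init__(self, name: str, kernel: SharedKernel, shared_bins: dict):
            self.name = name
            self.kernel = kernel          # same object for every guest
            self.bins = shared_bins       # shared read-only binaries
            self.private = {}             # per-guest isolated state

        def run(self, binary: str) -> str:
            if binary not in self.bins:
                raise FileNotFoundError(binary)
            return self.kernel.syscall(self.name, f"exec {binary}")

    kernel = SharedKernel()
    bins = {"/bin/sh": b"...", "/usr/bin/python": b"..."}

    guests = [Guest(f"guest{i}", kernel, bins) for i in range(3)]
    for g in guests:
        g.private["pid_table"] = []       # isolated from the neighbors
        print(g.run("/bin/sh"))

    # All guests share one kernel and one copy of the binaries:
    assert all(g.kernel is kernel and g.bins is bins for g in guests)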

Server virtualization can be viewed as part of an overall virtualization trend in enterprise IT that includes storage virtualization, network virtualization, and workload management. This trend is one component in the development of autonomic computing, in which the server environment will be able to manage itself based on perceived activity. Server virtualization can be used to eliminate server sprawl, to make more efficient use of server resources, to improve server availability, to assist in disaster recovery, testing and development, and to centralize server administration.

Application Acceleration

The performance benefits from application acceleration are real, provided you understand what the technology can and can’t do for your network. What follows are selected best practices for deploying application acceleration devices in enterprise networks.

Application acceleration takes many different forms. There’s no one definition for “making an application go faster.”

For some users, reducing WAN bandwidth consumption and cutting monthly circuit costs may be the key goals. For others, it’s speeding bulk data transfer, such as in backup, replication, or disaster recovery scenarios. For yet others, improving response times for interactive applications is most important, especially if those transaction-based applications carry an organization’s revenue.

Deciding where to deploy application acceleration is also a consideration. Different types of acceleration devices work in the data center; in pairs with devices deployed on either end of a WAN link; and, increasingly, as client software installed on telecommuters’ or road warriors’ machines. Identifying the biggest bottlenecks in your network will help you decide which parts of your network can benefit most from application acceleration.

It’s also worth considering whether application acceleration can complement other enterprise IT initiatives. For example, many organizations already have server consolidation plans under way, moving many remote servers into centralized data centers. Symmetrical WAN-link application acceleration devices can help here by reducing response time and WAN bandwidth usage, and giving remote users LAN-like performance. In a similar vein, application acceleration may help enterprise VoIP or video rollouts by prioritizing key flows and keeping latency and jitter low.

Many acceleration vendors recommend initially deploying their products in “pass-through” mode, meaning devices can see and classify traffic but they don’t accelerate it. This can be an eye-opening experience for network managers.

The adage “you can’t manage what you can’t see” definitely applies here. It’s fairly common for an enterprise to deploy acceleration devices with the goal of improving performance of two or three key protocols – only to discover the network actually carries five or six other types of traffic that would also benefit from acceleration. On the downside, it’s unfortunately also all too common to find applications you didn’t realize existed on your network.

The reporting tools of acceleration devices can help here. Most devices show which applications are most common in the LAN and WAN, and many present the data in pie charts or graphs that easily can be understood by non-technical management. Many devices also report on LAN and WAN bandwidth consumption per application, and in some cases per flow.

Understanding existing traffic patterns is critical before enabling acceleration. Obtaining a baseline is a mandatory first step in measuring performance improvements from application acceleration.

For products that do some form of caching, a corollary to classification is understanding the size of the data set. Many acceleration devices have object or byte caches, or both, often with terabytes of storage capacity. Caching can deliver huge performance benefits, provided data actually gets served from a cache. If you regularly move, say, 3 Tbytes of repetitive data between sites but your acceleration devices have only 1 Tbyte of cache capacity, then obviously caching is of only limited benefit. Here again, measuring traffic before enabling acceleration is key.
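
The arithmetic behind that example is worth writing down. The sketch below computes the best-case fraction of repeat traffic a cache can serve, simply capacity divided by working set; real hit ratios will be lower still, since this ignores eviction policy and object popularity.

    def best_case_hit_ratio(working_set_tb: float, cache_tb: float) -> float:
        """Upper bound on the fraction of repeat traffic a cache can serve."""
        return min(1.0, cache_tb / working_set_tb)

    for cache in (1.0, 2.0, 4.0):
        ratio = best_case_hit_ratio(working_set_tb=3.0, cache_tb=cache)
        print(f"{cache:.0f} TB cache vs 3 TB working set: "
              f"at most {ratio:.0%} served locally")
    # 1 TB -> at most 33%, 2 TB -> at most 67%, 4 TB -> 100%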

Even without acceleration devices deployed, it’s still possible (and highly recommended) to measure application performance. Tools such as Cisco NetFlow or the open sFlow standard are widely implemented on routers, switches, and firewalls; many network management systems also classify application types.
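
The output of such tools boils down to flow records that can be rolled up per application. The sketch below uses hand-written stand-in records and a deliberately coarse port-to-application mapping, not the output of any real collector, to show the kind of baseline report you’re after.

    from collections import defaultdict

    # (src, dst, dst_port, bytes) -- simplified stand-in flow records
    flows = [
        ("10.0.0.5", "10.1.0.9", 445,  8_200_000),   # SMB / Windows file
        ("10.0.0.7", "10.1.0.9", 445,  5_600_000),
        ("10.0.0.5", "10.1.0.3", 443,  1_900_000),   # HTTPS
        ("10.0.0.8", "10.1.0.4", 5060,   300_000),   # SIP signaling
    ]

    PORT_TO_APP = {445: "SMB", 443: "HTTPS", 5060: "SIP"}  # coarse mapping

    bytes_per_app: dict[str, int] = defaultdict(int)
    for _src, _dst, port, nbytes in flows:
        bytes_per_app[PORT_TO_APP.get(port, f"port-{port}")] += nbytes

    total = sum(bytes_per_app.values())
    for app, nbytes in sorted(bytes_per_app.items(), key=lambda kv: -kv[1]):
        print(f"{app:8s} {nbytes / total:6.1%} of WAN bytes")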

If forced to choose between high availability and high performance (even really high performance), network architects inevitably opt for better availability. This is understandable – networks don’t go very fast when they’re down – and it has implications when deciding which acceleration device type to select.

WAN acceleration devices use one of two designs: in-line and off-path. An in-line device forwards traffic between interfaces, same as a switch or router would, optimizing traffic before forwarding it. An off-path device may also forward traffic between interfaces or it may simply receive traffic from some other device like a router, but in either case it sends traffic through a separate module for optimization. Because this module does not sit in the network path, it can be taken in and out of service without disrupting traffic flow.

There’s no one right answer to which design is better. For sites that put a premium on the highest possible uptime, off-path operation is preferable. On the other hand, there may be a higher delay introduced by passing traffic to and from an off-path module. The extra delay may or may not be significant, depending on the application. If minimal delay is a key requirement, in-line operation is preferable.
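
A toy delay model makes the trade-off visible. All of the figures below are illustrative assumptions, not measurements from any product: in-line mode pays the optimization cost on the forwarding path, while off-path mode adds a detour to the module but keeps forwarding when the module is out of service.

    FORWARD_MS = 0.05          # assumed base forwarding delay
    OPTIMIZE_MS = 0.40         # assumed time to optimize one packet
    DETOUR_MS = 0.15           # assumed hop to/from an off-path module

    def inline_delay() -> float:
        return FORWARD_MS + OPTIMIZE_MS

    def off_path_delay(module_in_service: bool) -> float:
        if not module_in_service:
            return FORWARD_MS            # traffic keeps flowing, unoptimized
        return FORWARD_MS + 2 * DETOUR_MS + OPTIMIZE_MS

    print(f"in-line:              {inline_delay():.2f} ms")
    print(f"off-path (active):    {off_path_delay(True):.2f} ms")
    print(f"off-path (servicing): {off_path_delay(False):.2f} ms")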

Some devices combine both modes; for example, Cisco’s WAAS appliances perform off-path optimization of Windows file traffic but use in-line mode to speed up other applications.

Note that “pass-through” operation is different from in-line or off-path mode. In the event of power loss, virtually all acceleration devices will go into pass-through mode and simply bridge traffic between interfaces. Devices in pass-through mode won’t optimize traffic, but then again they won’t cause network downtime either.

One of the most contentious debates in WAN application acceleration is whether to set up encrypted tunnels between pairs of devices or whether traffic should remain visible to all other devices along the WAN path. The answer depends upon what other network devices, if any, need to inspect traffic between pairs of WAN acceleration boxes.

Some vendors claim tunneling as a security benefit because traffic can be authenticated, encrypted, and protected from alteration in flight. That’s true as far as it goes, but encrypted traffic can’t be inspected – and that could be a problem for any firewalls, bandwidth managers, QoS-enabled routers or other devices that sit between pairs of acceleration devices. If traffic transparency is an issue, then acceleration without tunneling is the way to go.

On the other hand, transparency is a requirement only if traffic actually requires inspection between pairs of WAN acceleration devices. If you don’t have firewalls or other content-inspecting devices sitting in the acceleration path, this is a nonissue.

Application acceleration is a worthy addition to the networking arsenal, but it’s not a silver bullet. It’s important to distinguish between problems that acceleration can and can’t solve.

For example, acceleration won’t help WAN circuits already suffering from high packet loss. While the technology certainly can help in keeping congested WAN circuits from becoming even more overloaded, a far better approach here would be to address the root causes of packet loss before rolling out acceleration devices.

Further, not all protocols are good candidates for acceleration. Some devices don’t accelerate UDP-based traffic such as NFS (network file system) or multimedia. And even devices that do optimize UDP may not handle VoIP based on SIP (session initiation protocol) due to that protocol’s use of ephemeral port numbers (this problem isn’t limited to acceleration devices; some firewalls also don’t deal with SIP). SSL is another protocol with limited support; in a recent Network World test only two of four vendors’ products sped up SSL traffic.

Despite these limitations, application acceleration is still a technology very much worth considering. The performance benefits and cost savings can be significant, even taking into account the few caveats given here. Properly implemented, application acceleration can cut big bandwidth bills while simultaneously improving application performance.

WAN Optimization

Your WAN is the foundation of your business. It enables collaboration, communication, user productivity, and risk mitigation. It can also hinder application performance as well as backup and recovery times. Time is money. So don’t let problems with network latency and limited bandwidth slow your business.

F5 WAN optimization solutions deliver fast, predictable LAN-like performance over the WAN. State-of-the-art technologies—including adaptive compression, data deduplication, and TCP optimizations—accelerate everything from data replication and backup to application performance and virtual machine migrations.
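
Two of those technologies, data deduplication and compression, are easy to demonstrate in miniature. The Python sketch below uses naive fixed-size chunks and zlib as stand-ins (real products use far more sophisticated, often content-defined, chunking): repeated data crosses the WAN as short references instead of payload.

    import hashlib
    import zlib

    CHUNK = 4096

    def dedup_and_compress(data: bytes, store: dict[str, bytes]) -> int:
        """Return the byte count that would actually cross the WAN."""
        sent = 0
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest in store:
                sent += len(digest) // 2     # send a reference, not data
            else:
                store[digest] = chunk
                sent += len(zlib.compress(chunk))
        return sent

    store: dict[str, bytes] = {}
    payload = b"nightly backup block " * 50_000   # highly repetitive data

    first = dedup_and_compress(payload, store)
    second = dedup_and_compress(payload, store)   # e.g. next night's run
    print(f"original: {len(payload)} B, first pass: {first} B, "
          f"repeat pass: {second} B")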

Meet disaster recovery objectives

You have to back up and replicate your data, but you don’t have the network resources to spare. Ensure the availability of your business-critical data and reduce backup times with F5 optimization technologies.

  • Meet or exceed your disaster recovery point and recovery time objectives.
  • Enhance VDI performance with TCP optimization.
  • Ensure the availability of your most critical business application—email—with fast Exchange 2010 Database Availability Group (DAG) replication.

Increase bandwidth efficiency

Don’t purchase more bandwidth. Let F5 help you take full advantage of the bandwidth you already have. TCP optimization, adaptive compression, and data deduplication relieve network congestion and reduce bandwidth utilization by up to 50 percent. Eliminate the issues that prevent optimal performance of your WAN.
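
Before taking any percentage on faith, it’s worth measuring how compressible your own traffic is. The sketch below uses zlib as a stand-in for a vendor’s adaptive compression (deduplication and TCP gains come on top of this): text-like payloads shrink dramatically, while already-compressed or random data barely shrinks at all.

    import os
    import zlib

    samples = {
        "http headers": b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 200,
        "random bytes": os.urandom(10_240),   # may even grow slightly
    }

    for name, data in samples.items():
        saved = 1 - len(zlib.compress(data)) / len(data)
        print(f"{name:12s}: {saved:6.1%} saved by compression alone")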

Reduce OpEx

IT budgets aren’t getting any bigger, but applications are. Expand the capacity and efficiency of your WAN without adding more server hardware. F5 WAN optimization offloads CPU-intensive processes such as compression and SSL processing from your servers so you can accommodate more growth with your existing infrastructure and avoid unnecessary network upgrades.

Integrate security

Data security isn’t optional. But that doesn’t mean it has to be complex or resource-intensive. F5 provides a single, integrated solution for both data acceleration and security. F5 WAN optimization solutions:

  • Encrypt business-critical data using industry-standard SSL and IPsec encryption.
  • Free server capacity for application delivery.
  • Relieve you from managing security certificates.