VM Memory Management to Avoid Memory Ballooning & Swapping

You want the best possible performance for your applications, and logic might lead you to believe that allocating more memory to a VM can only improve performance – but issues like memory ballooning and memory swapping defy that logic. Learn more about VM memory management in this video.

Learn More:
Memory Management 101: https://turbonomic.com/blog/on-technology/memory-101-the-challenges-of-managing-memory-in-a-virtual-environment/
Memory Management 201: https://turbonomic.com/blog/on-turbonomic/memory-201-memory-management-that-scales-with-your-environment/
Memory Management Fundamentals eBook: https://turbonomic.com/resources/memory-management-fundamentals-e-book/
3 Misconceptions About Right Sizing Your VMs: https://turbonomic.com/blog/on-turbonomic/3-misconceptions-about-right-sizing-your-virtual-machines/

Video Transcription:
You know those frustrating application issues that pop up and then disappear before you can trace the root cause, like a game of whack-a-mole? The cause may not be your code, but rather VMs that are too big.

You want the best possible performance for your apps, and logic might lead you to believe that allocating more memory to a VM can only improve performance – but issues like memory ballooning and swapping defy that logic. Let’s see how.

A virtualized physical host with 16 cores and 64 GB of memory may share its physical resources with multiple virtual machines – sometimes 10 or more. When those VMs are over-sized – with, say, a 64 GB virtual memory (vRAM) allocation – a single VM can claim the host’s entire memory, whether or not it actually needs those resources.

One of the most powerful aspects of virtualization is that multiple VMs can share the same host machine’s underlying resources. When one VM is over-allocated memory, however, it can lead to problems like memory ballooning and memory swapping.

Memory ballooning happens when the hypervisor comes under memory pressure: a balloon driver inside the virtual guest inflates, claiming memory from that guest so the underlying physical memory can be handed to other VMs that need it. When the pressure eases, the balloon deflates and the memory is surrendered back to the guest. The problem is that while recovering memory from the ballooned capacity is possible, it isn’t always timely. As a result, performance may vary during these reclaim-and-recover cycles, potentially degrading specific applications.

With many VMs demanding memory at once, the hypervisor may also have the host swap memory to disk to fulfill the demand, creating latency on any guest whose memory is unavailable. The I/O required for those swap operations quickly compounds the problem: as more workloads aggressively demand memory, other workloads are suddenly pushed from RAM to swap, increasing latency and decreasing end-user performance.
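A quick way to see whether this is happening in a vSphere environment is to check each VM’s ballooned and swapped memory counters. The sketch below does that with the pyVmomi library; the vCenter address, credentials, and the decision to skip certificate validation are illustrative assumptions, not a recommended production setup.

    # Minimal sketch: list VMs reporting ballooned or swapped memory via pyVmomi.
    # Host, user, and password are placeholders for illustration only.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()          # lab use only: skips cert checks
    si = SmartConnect(host="vcenter.example.com",   # hypothetical vCenter address
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    for vm in view.view:
        stats = vm.summary.quickStats
        # balloonedMemory and swappedMemory are reported by vSphere in MB
        if stats.balloonedMemory or stats.swappedMemory:
            print(f"{vm.summary.config.name}: "
                  f"ballooned={stats.balloonedMemory} MB, "
                  f"swapped={stats.swappedMemory} MB")

    Disconnect(si)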

And think about how much harder it is to move a VM demanding 64 GB of RAM off a host that’s running hot – a VM that’s over-built with too much RAM leaves far fewer hosts with room to take it. As more workloads develop performance issues, they impact the other workloads they share resources with, increasing latency and further degrading end-user performance.

However, if we right-size the VM to, say, 4 GB of vRAM rather than 32 or 64, physical resources are allocated far more efficiently, reducing latency for every VM and improving application performance and the end-user experience.
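As a rough sketch of what that right-sizing action looks like against the vSphere API (again using pyVmomi, and assuming the connection from the previous example plus a hypothetical VM name), the change is just a reconfiguration task – keeping in mind that shrinking memory normally requires the VM to be powered off first:

    # Minimal sketch: shrink an over-sized VM to 4 GB of vRAM.
    # Assumes "content" from the earlier connection; the VM name is hypothetical.
    from pyVmomi import vim

    def find_vm(content, name):
        """Return the first VM whose name matches, or None."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.summary.config.name == name:
                return vm
        return None

    vm = find_vm(content, "oversized-app-vm")        # hypothetical VM name
    spec = vim.vm.ConfigSpec(memoryMB=4096)          # 4 GB instead of 32 or 64
    task = vm.ReconfigVM_Task(spec=spec)             # apply the new size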

The other benefit of avoiding over-sizing is that each blade now has more headroom, meaning each VM has more options if it needs to move away from a noisy or busy neighbor. More options mean better performance for every workload, including your own.

Cutting-edge IT operations teams are making use of standardized VM deployment models with smaller initial VM sizing (sketched below). Right-sizing VMs this way actually improves application performance while managing resources more effectively, and it keeps existing workloads optimized as well.
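One way such a standardized, small-by-default deployment might look (a sketch only – the template, placement handling, and VM names are hypothetical, and it reuses the find_vm helper and connection from the earlier sketches) is to clone from a template while pinning the initial size to something modest:

    # Minimal sketch: deploy a new VM from a template with a small standard size.
    # Template and VM names are hypothetical; placement is left at its defaults.
    from pyVmomi import vim

    small_default = vim.vm.ConfigSpec(numCPUs=2, memoryMB=4096)   # 2 vCPU / 4 GB

    template = find_vm(content, "base-linux-template")   # hypothetical template
    clone_spec = vim.vm.CloneSpec(
        location=vim.vm.RelocateSpec(),                  # keep current placement
        config=small_default,                            # enforce the small sizing
        powerOn=True)

    task = template.CloneVM_Task(folder=template.parent,
                                 name="app-vm-01",       # hypothetical new VM
                                 spec=clone_spec)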

With virtualization, both virtual CPU and memory resources can be added without needing to restart the VM (a process known as “hot add”), provided hot add has been enabled for that VM. Turbonomic is able to automate this hot-add process so that VMs seeing increased demand can get the resources they need to assure performance in real time.
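For context on what the underlying API call looks like (a sketch only, not Turbonomic’s implementation – the VM name and sizes are hypothetical, and it continues the earlier sketches), enabling hot add is a one-time change made while the VM is powered off, after which memory can be grown in place while it runs:

    # Minimal sketch: enable hot add, then grow memory on a running VM.
    # Reuses find_vm and "content" from the earlier sketches.
    from pyVmomi import vim

    vm = find_vm(content, "app-vm-01")               # hypothetical VM name

    # One-time step (VM powered off): allow CPU and memory hot add.
    enable_spec = vim.vm.ConfigSpec(cpuHotAddEnabled=True,
                                    memoryHotAddEnabled=True)
    vm.ReconfigVM_Task(spec=enable_spec)

    # Later, while the VM is running and demand rises: grow memory in place.
    grow_spec = vim.vm.ConfigSpec(memoryMB=8192)     # e.g., 4 GB -> 8 GB
    vm.ReconfigVM_Task(spec=grow_spec)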

Turbonomic looks at all resources holistically, in real time, and understands the tradeoffs between not only CPU and memory, but also network, storage, and indeed every other part of the data center supply chain. Turbonomic then correlates those tradeoffs with application demand at that very moment. This enables every workload to get exactly the resources it needs in real time – no more, but importantly no less – to assure performance for the end user.

via VMTurbo

About The Author
Turbonomic’s Autonomic Platform enables heterogeneous environments to self-manage to assure the performance of any application in any cloud. Turbonomic’s patented decision engine dynamically analyzes application demand and allocates shared resources in real time to maintain a continuous state of application health. Launched in 2010, Turbonomic is one of the fastest-growing technology companies in the virtualization and cloud space. Turbonomic’s Autonomic Platform is trusted by thousands of enterprises to accelerate their adoption of virtual, cloud, and container deployments for all mission-critical applications.