Scott Sayler - responsible for VMware's relationship with Microsoft SQL Server
- The approach is similar to that adopted for Exchange and SharePoint
- Global stats from IDC - 9% of x86 servers run SQL, 4% run Oracle, so databases are a significant volume of workloads
- Like most other VMs, SQL servers are generally over-provisioned and organisations need to think about the licensing implications of this
- SQL 2008 R2 on Windows 2008 can scale to 256 cores (suggesting here that it's enterprise class!)
- Benefits of databases on VMware
- increase hardware utilization
- no application change implications
- consolidate SQL licences
- rapidly respond to changing workload requirements (hot add of resources is possible with SQL 2008; see the sketch after this list)
- ESXi 4 overhead is less than 10%
- overcommit is possible on processor, but not recommended for SQL
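(My sketch, not from the session - a minimal illustration of the hot-add point above using pyVmomi. The vCenter address, credentials and VM name "sql01" are made-up placeholders, and hot add must already be enabled on the VM and supported by the guest OS and SQL edition.)

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (placeholder address/credentials; unverified SSL is lab-only).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the SQL VM by name ("sql01" is a hypothetical name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sql01")

# Hot-add two vCPUs and 4GB of RAM while the guest stays online.
# Requires cpuHotAddEnabled / memoryHotAddEnabled on the VM and a guest /
# SQL edition that can use the new resources without a restart.
spec = vim.vm.ConfigSpec(
    numCPUs=vm.config.hardware.numCPU + 2,
    memoryMB=vm.config.hardware.memoryMB + 4096)
vm.ReconfigVM_Task(spec=spec)   # returns a task; a real script would wait on it

Disconnect(si)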
Comparison with SQL Scale Up Consolidation:
- If the OS has problems you lose all SQL instances - i.e. a SPOF
- If SQL has a problem you lose all SQL instances - i.e. another SPOF
- Load balancing is not possible across nodes (i.e. DRS can only handle the location of the whole stack, not each individual instance)
- Apps need to be remediated, maintained, etc. at the same time and at the same pace, causing peaks in maintenance workload and conflict across users when agreeing outage windows
- (Comment - VMware didn't mention the benefits of licence consolidation in this section!)
- OS can be a bottleneck
Licensing Advice:
- think about using SQL Enterprise or Datacenter edition licensing
Host Best Practice:
- CPU - don't overcommit pCPUs - the total vCPU count should be lower than the pCPU count
- Memory - don't overcommit memory; use SQL's min / max server memory settings; memory allocation to the guest should match peak requirements (see the sketch below)
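(My sketch, not from the session - a minimal example of the min / max server memory advice, assuming Python with pyodbc against a hypothetical instance named sql01; the memory figures are illustrative.)

import pyodbc

# Connect to the (hypothetical) SQL Server instance. autocommit so that
# sp_configure / RECONFIGURE run outside an explicit transaction.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql01;Trusted_Connection=yes;",
    autocommit=True)
cur = conn.cursor()

vm_memory_mb = 16384      # memory allocated to the guest (illustrative)
os_headroom_mb = 2048     # leave some RAM for Windows itself
max_mb = vm_memory_mb - os_headroom_mb

# min/max server memory are advanced options, so expose those first, then
# pin SQL Server's memory to match what the VM has actually been given.
for option, value in [("show advanced options", 1),
                      ("min server memory (MB)", max_mb // 2),
                      ("max server memory (MB)", max_mb)]:
    cur.execute(f"EXEC sp_configure '{option}', {value};")
    cur.execute("RECONFIGURE;")

conn.close()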
Thursday, 14 October 2010
Virtualization Platform Comparisons Thur 09:00
Disclaimer from me - this is a VMware presentation
VMware have a lab to examine competitive products to ensure their products stay ahead of the competition
Recent Microsoft TechEd attendees voted vSphere as the best-in-show product
Hypervisor Comparison
- VMware confirming that ESXi is the strategic platform - it's currently 70MB, however it still needs a 1GB partition to allow for the roll-back version and space for dump files
- Hyper-V on Windows 2008 Server Core is 3.6GB
- VMware supports NIC teaming; on Hyper-V it is unsupported by the vendors
- VMware is built for clustering; Hyper-V is based on Windows clustering, which is complex to set up
- System Center can manage vCenter, but it requires over 10 different tools, each with a different look and feel, so it's quite complex compared to vCenter
- VMware Update Manager - select the update, select the hosts, and set off an automated process to vMotion the guests away, patch the hosts, then vMotion the guests back (see the sketch below). A Hyper-V R2 upgrade is 9 manual steps per host
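(My sketch, not from the session, and not the Update Manager API - just a rough pyVmomi illustration of the evacuate-patch-return cycle that Update Manager automates. apply_patches() is a hypothetical placeholder, and it assumes DRS is in fully automated mode so that entering maintenance mode vMotions the guests off the host.)

def apply_patches(host_name):
    # Hypothetical stand-in for the actual patch/upgrade step.
    print(f"patching {host_name} ...")

def remediate_cluster(cluster):
    # cluster is a pyVmomi ClusterComputeResource from an existing vCenter connection.
    for host in cluster.host:
        # With DRS fully automated, this triggers vMotion of the running guests.
        host.EnterMaintenanceMode_Task(timeout=0)
        # ... a real script would wait for the maintenance-mode task to finish here ...
        apply_patches(host.name)
        host.ExitMaintenanceMode_Task(timeout=0)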
Storage
- VMware supports multiple storage technologies and can mix and match them in the same cluster
- RHEV only allows one type of storage per cluster
- Hyper-V doesn't support NAS
- vSphere VMs encapsulated as files that are portable
- Hyper-V and RHEV use complex files that are more difficult to understand and less portable
- VMFS volumes can be grown from the GUI
- Hyper-V allows additional volumes to be added but not grown; RHEL VMs need a reboot
- Storage vMotion unique to VMware
- Storage I/O QoS unique to VMware
- Thin provisioning - vSphere fully supports it, on Hyper-V it's not advised, XenServer does not support it on FC/iSCSI
- When a LUN fills, vSphere pauses the guest, raises an alert, and can resume it once the LUN has been grown. Hyper-V crashes the guest OS
- Snapshots supported in vSphere, not recommended in Hyper-V without downtime, RHEV also needs downtime
Resource Management
- Fault tolerance available in vSphere (comment - but limited to small VMs). Not available in Hyper-V. RHEV works with Marathon everRun to provide something similar, and only with Microsoft OSes
- Affinity / anti-affinity rules possible in vSphere
- Host affinity only available in vSphere (great for licensing restrictions - e.g. certain database vendors)
- Role-based granular access controls down to guest VM level in vSphere; XenServer has coarse pre-determined roles
- Resource pools divide up the resources in a cluster and you can assign ownership and roles - vSphere can spread resource pools across the cluster but XenServer can only divide things up server by server (see the sketch after this list)
- Memory overcommitment is supported on vSphere and XenServer, but not on Hyper-V. In XenServer the ballooning just prevents the VM from using the memory allocated to it; it doesn't actually allow the VM to use the unused memory allocated to other VMs. VMware forces the quietest guest to go to swap files to free up RAM for the busy guest
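(My sketch, not from the session - an assumption of how the resource pool point might be scripted with pyVmomi; the pool name and reservation figures are illustrative.)

from pyVmomi import vim

def create_pool(cluster, name="sql-pool"):
    # cluster is a vim.ClusterComputeResource; its root resource pool is the parent.
    def alloc(reservation):
        return vim.ResourceAllocationInfo(
            reservation=reservation,          # MHz for CPU, MB for memory
            expandableReservation=True,
            limit=-1,                         # -1 means unlimited
            shares=vim.SharesInfo(level="normal", shares=4000))  # shares value only matters at the "custom" level
    spec = vim.ResourceConfigSpec(cpuAllocation=alloc(8000),     # 8 GHz reserved
                                  memoryAllocation=alloc(16384)) # 16 GB reserved
    return cluster.resourcePool.CreateResourcePool(name=name, spec=spec)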
Benchmarks and Case Studies
- Taneja Group density report using the DVD Store benchmark
- Hyper-V handled 11 guests, RHEL KVM maxed out at 14, XenServer and vSphere handled 32 guests
- Virtual Reality Check analysed terminal services on Intel Nehalem using transparent page sharing on vSphere 4.1 - no performance hit. They found no performance difference for XenApp on XenServer vs vSphere
- Graydon Head & Ritchey would have needed twice as many servers to run Hyper-V compared to vSphere
The conclusion is that vSphere is best - but that would be expected from this presentation really.
Comment - no mention of the limitation of VMware's toolsets to the virtual platform, and no credit given to Microsoft for managing physical and virtual from the same management tools