27 February, 2008
First of all, Christoph outlined the SAP customers' pain points:
· Upgrades are time-consuming
· Expensive and complex business continuity
· High operational costs
· Addressing compliance
With SAP in place you can (must…) have a lot of systems in different environments:
· Quality Assurance (QA)
· … plus surrounding supporting servers
With VMware you can achieve:
· Automated resource assurance (dynamic balance)
· Increased availability
· On demand capacity
Since December 2007, SAP has been fully supported on VMware Virtual Infrastructure for production environments, on both Windows and Linux.
The use of VMware for SAP systems leads to:
· Faster SAP upgrades (no need to wait for new hardware, snapshots to roll back, no need for SAP consultants in the first part of the migration)
· Business continuity (cost-effective HA for all environments: production, test, dev, QA, training)
· Lower operational costs: less hardware, less power consumption, easier migration and consolidation
· Compliance: easy maintenance and migration of legacy systems, faster deployments, easy documentation of servers and asset management with the adoption of VMware Lifecycle Manager.
More information is available on the VMware SAP solution community site for service partners: http://www.vcc-sap.com/
He started with some definitions (vNIC, vSwitch, and port group). The port group overview was especially interesting (for me); in port groups you can specify:
· VLANs configuration
· Teaming policies
· Layer 2 security policies
· Traffic shaping
Moreover, port groups are not VLANs: port groups do not segment the vSwitch into separate broadcast domains unless they have different VLAN IDs.
How can we implement VLANs? Two ways:
· Virtual switch tagging (the easier way)
· External switch tagging (with virtual guest tagging in addition); this implies more work and more cabling
Native VLANs are fully supported by ESX (but please, do not set any VLAN ID on the vSwitch for them).
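For virtual switch tagging, the VLAN ID is set on the port group from the service console; a minimal sketch (the vSwitch and port group names here are just examples):

```shell
# Create a port group on an existing vSwitch
esxcfg-vswitch -A "VLAN100-VMs" vSwitch1

# Tag the port group with VLAN ID 100 (virtual switch tagging);
# the physical switch port must carry VLAN 100 as a trunk
esxcfg-vswitch -v 100 -p "VLAN100-VMs" vSwitch1

# List vSwitches and port groups to verify the VLAN column
esxcfg-vswitch -l
```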
Tommy started by explaining the basics of VDI and introduced VDM 2 (a connection broker); with VDM 2 we get:
· Automatic provisioning of desktops
· SSL encryption and Single Sign On
· USB redirection
· Active Directory and RSA SecurID integration
· High Availability and easy scalability
· DMZ support and Internet deployment
· Windows / Mac / Linux client support
The session continued with the introduction of the NetApp VDI facilitator, which greatly shortens the lengthy mass-deployment timeframe: in the demo they deployed 100 VMs in about 2 minutes (normally you can deploy at most 5-10 VMs per hour).
26 February, 2008
For ESX 3i we have:
- compactness, 32 MB footprint
- architecture only, no installation needed
- wizard for deploying
- syslog stored in memory and tools to analyze it
- Distributed Power Management (consolidates workloads, places unneeded servers in standby and brings them back online as needed, minimizing power consumption with no disruption of the VMs)
For ESX 3.5 we have:
- Update Manager (scans and remediates online and offline VMs and online ESX hosts, snapshots VMs before patching, integrated with DRS)
- Storage vMotion (FC only, zero downtime for VMs, LUN independent): this feature is not integrated with the Virtual Infrastructure Client (...yet)
- HA now supports clusters of up to 32 hosts, with proactive cluster configuration checks. As an experimental feature there is individual VM failure monitoring
- VCB now supports backup to iSCSI, NAS, and local storage; you can use a virtual machine to run VCB and use VMware Converter to restore the VMs.
- New OS support: Vista and Ubuntu
- up to 64 GB of RAM per VM and up to 256 GB of RAM per host
- support for Ethernet up to 10 Gbit and InfiniBand
There are many application performance benefits due to:
- Paravirtualization (for now Linux only, via paravirt-ops with kernels > 2.6.21), which makes the guest OS virtualization-aware: the major benefits are for large databases, multiprocess applications, file servers and web servers, because network and disk I/O and context switching are better managed
- Large memory pages
- Network support for TCP Segmentation Offload (reduces CPU overhead) and jumbo frames (with jumbo-frame-capable hardware), with benefits for backup over LAN, web servers, Citrix servers and iSCSI
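In ESX 3.5, jumbo frames are enabled by raising the vSwitch MTU from the service console; a minimal sketch ("vSwitch1" is an example name, and the physical NICs and switches in the path must also support jumbo frames):

```shell
# Set the MTU of the vSwitch to 9000 bytes to enable jumbo frames
esxcfg-vswitch -m 9000 vSwitch1

# List vSwitches and check the MTU column to verify the change
esxcfg-vswitch -l
```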
With the help of Deepak Narain and Thomas Huber, I now appreciate honeypots more.
The session began with the definition of a honeypot: a system used to attract the bad guys and to record everything they do; moreover, it can be used to distract them from the real production environment.
How can we build a honeypot?
- Decoy system: expose it on the Internet, offering services
- Expose vulnerabilities to the bad guys
- Monitor your box, which must look and behave like a normal, well-designed production system
Honeypots can be classified into two types:
- low interaction (or no interaction), based on the emulation of services
- high interaction, with full access for the bad guys to the OS and full "play around" with the system
Using virtualization to build a honeypot is better because:
- you can consolidate a lot of decoy systems on a single physical machine
- VMs are self-contained
- ease of provisioning
- improved response to attacks (just unplug the network and you're done!)
- quickly reconfigurable and redeployable
Unfortunately, the bad guys like those features too.
Why have a honeypot?
- We can learn from outside attacks and remediate in the real production environment.
- We can lure attacks away from the real production environment.
- We can detect attacks quickly: normally there shouldn't be any traffic towards the honeypot, so all traffic is hostile.
- We can gather evidence: once an attacker is identified, the evidence can be used legally.
Several such projects are springing up around the world: the honeyd project is one of them.
He started with the basics of vSwitches, telling us that they behave like physical layer 2 switches (so no layer 3 routing).
He stressed the importance of NIC teaming to achieve:
- better use of bandwidth
- enhanced availability and performance
Another important feature to use is VLAN tagging (which requires 802.1Q-capable hardware), especially when Virtual Infrastructure is deployed on blade systems with few Ethernet ports.
During the session, Guy explained the NIC teaming load-balancing policies available in ESX 3.x:
- Originating port ID
- Source MAC address
- IP Hash (static etherchannel required)
The tip is to choose "Originating port ID" for simplicity, but this can change based on your environment.
The virtual traffic types were classified:
- VMs Traffic
- vMotion traffic (must be dedicated and isolated)
- Management traffic (should be isolated, especially if HA is enabled)
- iSCSI traffic
He showed us some design examples, mostly explaining the VLAN tagging techniques when physical NICs are scarce.
Last but not least, vSwitches do not participate in Spanning Tree, so ports on physical switches should be configured as portfast or trunkfast to move to the forwarding state quickly.
- defining goals (policies)
- tracking measures (compliance)
- applying updates (remediation)
This process implies the risk of invisible virtual machines (powered-off VMs and templates): VUM takes care of those and patches them too.
Basically, VUM can patch ESX hosts, Microsoft VMs, and Linux RHEL VMs; the infrastructure needed to get VUM working is VirtualCenter plus the Update Manager add-on. VUM lets you create the baseline security standard for your enterprise and apply it interactively or on a scheduled basis.
The supporting technologies are linked clones (so RDM is not currently supported) and network fencing. For more information, have a look at the Stage Manager beta web page.
25 February, 2008
Second session of my first day at VMworld Europe.
Michael Adams and Brian Emerson introduced us to the automation, management, and control of the life of a virtual machine.
VLM tracks VMs and reports on the various states VMs go through during their existence. With Lifecycle Manager you can define policies for the creation, deployment, change, and retirement of VMs.
Brian showed us a demo of the web-based interface, in which a requester can ask for a VM by filling out requirements (OS, type of environment, ACLs, domain, ...) and an approver can trigger the creation of the requested VM.
- Setup of the workflow (DR plan is stored within Virtual Center, in a virtual runbook)
- Cross-site VC management (VMs are correctly organized on the secondary site, VMs have the right CPU and memory allocation after failover, VMs are plugged into the right (v)LAN after failover)
- DR plan change control (role-based access control, audit trails, recovery and test plans can be exported, changes to the DR plan are instantly reflected in the test and failover environments)
- Failover workflow (automated failover with playback of the virtual runbook)
- Network management (VMs' IP addresses change automatically if needed; IP changes can be scripted to reflect the changes in DNS)
- Test workflow (run frequent non-disruptive testing, create a test network, export the results, increase the scope of the DR plan, meet compliance)
In order to use SRM you need two sites, each with one VirtualCenter server. If you have more than two sites you'll have to work with site pairs.
For now you must use the VMFS file system (RDM support is experimental). Replication can be done with supported Fibre Channel and iSCSI storage.
Then we had an overview of the new goals of virtualization with Kartik Rau (Vice President of Marketing). Finally, Carl Eschenbach (Executive Vice President of WW Field Ops) spoke about the future of VMware and its possible competitors in the coming years.
23 February, 2008
Tomorrow I'm leaving Italy and traveling to Cannes to attend VMworld Europe 2008!
I'm very happy 'cause maybe I'll meet some of you! I'll be blogging from there, so STAY TUNED!!
I've come across a very interesting site where you can find a new way to use VMware Converter.
The author subtitles the site with these words:
a simple idea of including virtualization into your virtual disaster
The main idea of this site is:
You are having multiple servers that needs to be backed up and you don't know how to resolve that process? If you are already running virtual servers or just thinking running virtualization, you want to explore how virtual machines facilitate virtual disaster recovery and planning.
Have a look at http://www.p2vbackup.com/ and give your feedback to the author.
22 February, 2008
New fixes have been available on the VMware site since February 20.
For those who haven't installed Update Manager for VirtualCenter 2.5, here's the script to install the latest 19 fixes (related to the last two patch releases).
After downloading the fixes to your /var/updates folder, execute the following:
tar zxvf ESX-1002427.tgz
tar zxvf ESX-1002965.tgz
tar zxvf ESX-1002966.tgz
tar zxvf ESX-1002967.tgz
tar zxvf ESX-1002969.tgz
tar zxvf ESX-1002970.tgz
tar zxvf ESX-1002971.tgz
tar zxvf ESX-1002974.tgz
tar zxvf ESX-1002975.tgz
tar zxvf ESX-1002976.tgz
tar zxvf ESX-1003179.tgz
tar zxvf ESX-1003359.tgz
tar zxvf ESX-1003360.tgz
tar zxvf ESX-1003362.tgz
tar zxvf ESX-1003364.tgz
tar zxvf ESX-1003365.tgz
tar zxvf ESX-1003175.tgz
tar zxvf ESX-1003366.tgz
tar zxvf ESX-1003374.tgz
esxupdate -n -r file:/var/updates/ESX-1002427/ update
esxupdate -n -r file:/var/updates/ESX-1002965/ update
esxupdate -n -r file:/var/updates/ESX-1002966/ update
esxupdate -n -r file:/var/updates/ESX-1002967/ update
esxupdate -n -r file:/var/updates/ESX-1002969/ update
esxupdate -n -r file:/var/updates/ESX-1002970/ update
esxupdate -n -r file:/var/updates/ESX-1002971/ update
esxupdate -n -r file:/var/updates/ESX-1002974/ update
esxupdate -n -r file:/var/updates/ESX-1002975/ update
esxupdate -n -r file:/var/updates/ESX-1002976/ update
esxupdate -n -r file:/var/updates/ESX-1003179/ update
esxupdate -n -r file:/var/updates/ESX-1003359/ update
esxupdate -n -r file:/var/updates/ESX-1003360/ update
esxupdate -n -r file:/var/updates/ESX-1003362/ update
esxupdate -n -r file:/var/updates/ESX-1003364/ update
esxupdate -n -r file:/var/updates/ESX-1003365/ update
esxupdate -n -r file:/var/updates/ESX-1003175/ update
esxupdate -n -r file:/var/updates/ESX-1003366/ update
esxupdate -n -r file:/var/updates/ESX-1003374/ update
Please modify this according to your environment, and do not blindly cut, paste, and execute.
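The same sequence can also be written as a short loop; this is just a sketch with the same patch IDs and the same /var/updates path as above:

```shell
# Extract and apply each patch bundle in order
cd /var/updates
for patch in ESX-1002427 ESX-1002965 ESX-1002966 ESX-1002967 \
             ESX-1002969 ESX-1002970 ESX-1002971 ESX-1002974 \
             ESX-1002975 ESX-1002976 ESX-1003179 ESX-1003359 \
             ESX-1003360 ESX-1003362 ESX-1003364 ESX-1003365 \
             ESX-1003175 ESX-1003366 ESX-1003374; do
    tar zxvf "${patch}.tgz"
    esxupdate -n -r "file:/var/updates/${patch}/" update
done
```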
Hope this helps!