Virtualization Adapted: Adapting Business Processes for Virtual Infrastructure (and vice versa)

2011/05/28

Fakepath, IE 8, and Dell DRAC

Filed under: it,security — iben @ 12:11

If you want to use Dell DRAC 5 Virtual Media with IE 8, you need to change the security setting described below or Virtual Media won’t work.

Microsoft made this change to conform with HTML5.

http://acidmartin.wordpress.com/2009/06/09/the-mystery-of-cfakepath-unveiled/

http://codingforums.com/showthread.php?p=817890

http://blogs.msdn.com/ie/archive/2009/03/20/rtm-platform-changes.aspx

http://forum.maxthon.com/redirect.php?tid=75307&goto=lastpost

http://www.marc-antho-etc.net/blog/post/Two-IE8-behavioral-changes-worth-mentioninge280a6.aspx

So, in order to prevent information disclosure (the path to a file may include the user name if the file resides under the user’s profile), there are actually two changes combined to achieve this:

  • The IE security setting “Include local directory path when uploading files to a server” (already present in IE7) is set to “Disable” for the Internet zone, instead of “Enable” as it was with IE7.
  • When the setting is disabled, IE 8 no longer exposes the real local path to the page; the value of a file upload control is reported as C:\fakepath\<filename> instead, which is what breaks tools, such as the DRAC 5 Virtual Media applet, that expect the full local path.

2011/05/27

Accelerating Your Virtual Cloud Journey

Filed under: cloud,security — iben @ 23:48
SECURE CLOUD & VIRTUALIZATION SUMMIT
http://www.misti.com/includes/conferences/agendadetails.asp?pID=174&ISS=21737&SID=713753
Date: Friday, 22 April 2011
10:00 AM – 11:00 AM
Breaking Through “Virtual Stall” and Accelerating Your Virtual Cloud Journey
Iben Rodriguez, VMware vSphere ESX Benchmark Lead, The Center for Internet Security
• The “perfect storm” of challenges that slow down virtualization
• Why organizations need to look beyond technology when developing a virtualization/cloud strategy
• Conducting a comprehensive analysis of your current environment before implementing a virtual/cloud security solution
• Innovative technical solutions that address the key security roadblocks to virtualization and cloud adoption 

 

Slides here:

http://portal.sliderocket.com/AQXFR/InfoSecCloud2011Preso

 

2011/05/22

Step Right Up and Get Your Wildcard SSL Certificates Here

Filed under: virtualization — iben @ 18:33

Step Right Up and Get Your SSL Certificates Here

Wildcard RapidSSL Certificate

http://www.rapidssl.com/buy-ssl/wildcard-ssl-certificate/

  • Fast issuance and easy install
  • 99% browser support
  • Chained Cert works with most newer handheld devices and mobile browsers
  • Up to 256-bit SSL encryption

Price

  • 1 Year: $131.00
  • 2 Years: $232.00 ($116 per year)
  • 3 Years: $333.00 ($111 per year)
  • 4 Years: $432.00 ($108 per year)

RapidSSL (non-wildcard) Certificate Price

  • 1 Year: $38.00
  • 2 Years: $62.00 ($31 per year)
  • 3 Years: $81.00 ($27 per year)
  • 4 Years: $104.00 ($26 per year)

Trustwave Wildcard 256-Bit SSL Certificate Details

https://ssl.trustwave.com/ssl-premium-wildcard.php

  • Organization Vetted
  • $100,000 Warranty
  • Free Technical Support
  • Free Trusted Commerce Site Seal
  • Free lifetime reissuance
  • Your organization’s name appears in the certificate
  • 100% Trusted Root Guarantee
  • Good for multiple server names
  • Not a low assurance instant issued certificate

  • 1 Year: $340.00
  • 2 Years: $640.00
  • 3 Years: $940.00


https://www.thawte.com/ssl/wildcard-ssl-certificates/index.html

  • Organization Vetted
  • Save time and money with fewer SSL certificates to manage and purchase.
  • Create a secure, private connection between a web browser and web server, including gateways, web forms, mail and FTP servers, and VPNs with up to 256-bit SSL encryption.
  • Secure your competitive advantage with SSL from Thawte, a globally recognized certificate authority with root certificates included in over 99% of browsers.

Price

  • 1 Year: $540.00
  • 2 Years: $1040.00

Comodo PremiumSSL Wildcard Details

http://www.comodo.com/business-security/digital-certificates/wildcard-ssl.php

  • Organization Vetted
  • Domain Vetted
  • Secure multiple sub-domains on a single domain name with one Certificate
  • Full business-validated certificate
  • 2048 bit industry standard SSL Certificate
  • Trusted by all popular browsers
  • 99.3% browser compatibility
  • Unlimited Re-issuance Policy
  • 128/256 bit encryption

Price

  • 1 Year: $390.00
  • 2 Years: $740.00
  • 3 Years: $1090.00
  • 4 Years: $1440.00
  • 5 Years: $1790.00

Request Process

Fill out the form below and email it to me with your CSR at: sslcerts@ibenit.com

I will email you an invoice from PayPal along with your new certificate.

Here’s what you’ll get: http://www.freessl.com/buy-ssl/wildcard-ssl-certificate/index.html

CSR Instructions

https://knowledge.rapidssl.com/support/ssl-certificate-support/index?page=content&actp=CROSSLINK&id=so13985

NOTE: The following characters cannot be accepted: < > ~ ! @ # $ % ^ * / \ ( ) ?.,&

Type the following command to generate a private key that is encrypted with a passphrase. You will be prompted for the passphrase to access the file and also when starting your web server. Warning: If you lose or forget the passphrase, you must purchase another certificate.

openssl genrsa -des3 -out domainname.key 2048

You could also create a private key without file encryption if you do not want to enter the pass phrase when starting your web server:

openssl genrsa -out domainname.key 2048
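
If you already generated an encrypted key and later want your web server to start unattended, openssl can also strip the passphrase from it. A small sketch, reusing the domainname.key file name from above (the output file name is just an example):

openssl rsa -in domainname.key -out domainname-nopass.key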

Type the following command to create a CSR with the RSA private key (output will be PEM format):

openssl req -new -key domainname.key -out domainname.csr

NOTE: You will be prompted for your PEM pass phrase if you included the “-des3” switch. This is optional. Don’t include -des3 if you want your web server to be able to restart without human intervention.
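
You can also supply the subject fields on the command line instead of answering the interactive prompts. This is a sketch only; the country, state, locality, organization, and common name below are placeholders to replace with your own values:

openssl req -new -key domainname.key -out domainname.csr -subj "/C=US/ST=California/L=San Jose/O=Example Inc/CN=*.domain-name.com"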

Fill Out This Form

Wildcard Certificates will be issued for *.domain-name.com

  • Domain Name:
  • Certificate Duration:  1  2  3  4  5  Years
  • Domain Contact Email:
    (Needs to match whois info)
  • First Name:
  • Last Name:
  • Address:
  • City:
  • Country:
  • State:
  • Zip/Postal Code:
  • Phone:

Example Issued Certificate Information

Common name: *.domain-name
SANs: *.domain-name
Organization: *.domain-name
Location: US
Valid from February 29, 2010 to February 29, 2015
Signature Algorithm: sha1WithRSAEncryption
Issuer: RapidSSL CA

Common name: RapidSSL CA
Organization: GeoTrust, Inc.
Location: US
Valid from February 19, 2010 to February 18, 2020
Serial Number: 145105 (0x236d1)
Signature Algorithm: sha1WithRSAEncryption
Issuer: GeoTrust Global CA

How to read a CSR

openssl req -text -noout -in host.csr
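
To confirm that the private key, the CSR, and (later) the issued certificate all belong together, compare their RSA modulus hashes; all three MD5 values should match. The cert.crt file name is a placeholder for whatever the issued certificate is saved as:

openssl rsa -noout -modulus -in domainname.key | openssl md5
openssl req -noout -modulus -in domainname.csr | openssl md5
openssl x509 -noout -modulus -in cert.crt | openssl md5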

 

2011/05/06

Performance Recommendations for Virtualizing AnyThing with VMware vSphere 4

Filed under: virtualization — iben @ 09:10

Performance Recommendations for Virtualizing AnyThing with VMware vSphere 4

( Derived from: Performance Recommendations for Virtualizing Zimbra with VMware vSphere 4 http://wiki.zimbra.com/wiki/Performance_Recommendations_for_Virtualizing_Zimbra_with_VMware_vSphere_4)

Introduction

VMware vSphere’s ability to deliver computing and I/O resources far exceeds the resource requirements of most x86 applications. This is what allows multiple application workloads to be consolidated onto the vSphere platform and benefit from reduced server cost, improved availability, and simplified operations.

However, there are some common misconfiguration and design issues that many people run into when virtualizing applications, especially enterprise workloads with higher resource demands than smaller departmental workloads.

We have compiled a short list of the essential best practices and recommendations to ensure a highly performant deployment on the vSphere platform. We have also provided a list of highly recommended reference material to both build and deploy a vSphere platform with performance in mind, as well as troubleshooting steps to resolve performance related issues.

CPU Resources

  • Confirm hardware assisted virtualization is enabled in the BIOS on your hardware platform (a quick check from the ESX console is sketched below).
  • Confirm CPU/MMU virtualization is configured correctly for your hardware platform.
    • To configure CPU/MMU virtualization: ‘myVM’ -> Summary Tab -> Edit Settings -> Options -> CPU/MMU Virtualization
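
As a quick, hedged check from the ESX service console, esxcfg-info reports an “HV Support” level; per VMware’s KB guidance, a value of 3 means hardware-assisted virtualization is present and enabled in the BIOS (see the KB for the meaning of the other values):

esxcfg-info | grep "HV Support"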

NUMA

Non-Uniform Memory Access (NUMA) is a memory architecture used in multi-processor systems. A NUMA node consists of a processor and the bank of memory local to that processor. In a NUMA architecture, a processor can access its own local memory faster than non-local memory (memory local to another processor). A phenomenon known as NUMA “crosstalk” occurs when a processor accesses memory local to another processor, causing a performance penalty.

VMware ESX™ is NUMA aware and will schedule all of a virtual machine’s (VM) vCPUs on a ‘home’ NUMA node. However, if the VM container size (vCPU and RAM) is larger than the size of a NUMA node on the physical host, NUMA crosstalk will occur. It is recommended, but not required, to configure your maximum VM container size to fit on a single NUMA node.

For example:

  • ESX host with 4 sockets, 4 cores per socket, and 64GB of RAM.
  • NUMA nodes are 4 cores with 16GB of RAM (1 socket and local memory).
  • Recommended maximum VM container is 4 vCPU with 16GB of RAM.

CPU Overcommitment

It is okay to overcommit CPU resources; it is not okay to overutilize them. That is, you can allocate more virtual CPUs (vCPUs) than there are physical cores (pCores) in an ESX host as long as the aggregate workload does not exceed the physical processor capabilities. Overutilizing the physical host can cause excessive wait states for VMs and their applications while the ESX scheduler is busy scheduling processor time for other VMs.

Most apps are not CPU bound when disk and memory resources are sized correctly. It is perfectly fine to overcommit vCPUs to pCores on ESX hosts where the workloads will be running. However, in any overcommitted deployment it is recommended to monitor host CPU utilization and VM Ready Time, and to use the Distributed Resource Scheduler (DRS) to load balance VMs across hosts in a vSphere Cluster.

VM Ready Time, host CPU utilization, and other important resource statistics can be monitored using esxtop or from the Performance tab in the vSphere Client. You can also configure Alarms and Triggers to email administrators and perform other automated actions when performance counters reach critical thresholds that would affect the end user experience.
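
For example, esxtop can record these counters in batch mode for later analysis in a spreadsheet. The sampling interval, iteration count, and output file name below are illustrative only (10-second samples for one hour):

esxtop -b -d 10 -n 360 > esxtop-capture.csv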

See the Performance Troubleshooting for VMware vSphere 4 guide for detailed information on performance troubleshooting.

vCPU Resources

Reduce the number of vCPUs allocated to your VM to the fewest required to sustain your workload. Over-allocating vCPUs causes excessive and unnecessary CPU overhead and idle time on the physical host. When memory and disk resources are sized appropriately, most apps are not CPU bound. If your VM experiences less than 60% sustained utilization during peak workloads, we recommend reducing the allocated vCPUs to half the number currently allocated.

VM Memory Allocation

If you see periods of high, sustained CPU utilization on your VM, this may actually be caused by memory backpressure or a poorly performing disk subsystem. It is recommended to first increase the memory allocated to the VM (make sure you match the VM memory reservation to the total allocated memory, as is best practice for Java workloads). Then monitor VM CPU utilization, VM disk I/O, and in-guest swapping (which can cause excessive disk I/O) for signs of improvement or other issues before increasing the number of vCPUs allocated to your VM.

Memory Resources

  • It is recommended to size the VM memory not to exceed the amount of memory local to a single NUMA node. For example:
    • ESX host with 4 sockets, 4 cores per socket, and 64 GB of RAM.
    • NUMA nodes are 4 cores with 16 GB of RAM (1 socket and local memory).
    • Recommended maximum VM container is 4 vCPU with 16GB of RAM.
  • Set the memory reservation for your VMs to the total amount of memory allocated to the VM. For example:
    • If you allocated 8192MB of memory to the VM, then the memory reservation should be set to 8192MB.

To configure memory reservations: ‘myVM’ -> Summary Tab -> Edit Settings -> Resources -> Memory -> Reservation

Network Resources

  • Use the VMXNET3 paravirtualized network adapter if supported by your guest Operating System. Note: This does not apply to some pre-built appliances, so check with your vendor.
  • Use separate physical NIC ports, NIC teams, and VLANs for VM network traffic, vMotion, and IP based storage traffic (i.e. iSCSI storage or NFS datastores). This will avoid contention between client/server I/O, storage I/O, and vMotion traffic.

Storage Resources

VMFS Datastores

Do not oversubscribe VMFS datastores. Disk I/O and latency are governed by physics, and storage design has the same impact on performance in a virtual environment as it does in a physical one. Design your VM’s storage with the appropriate number of spindles to satisfy I/O requirements for databases, indexes, redo logs, blob stores, etc.

See the Performance Troubleshooting for VMware vSphere 4 guide for detailed information on performance troubleshooting. Remember that insufficient memory allocation can cause excessive memory swapping and disk I/O. See the memory resource section for information on tuning VM memory resources.

PVSCSI Paravirtualized SCSI Adapter

  • Use the PVSCSI paravirtualized SCSI adapter if supported by your guest Operating System. Note: This does not apply to some pre-built appliances, so check with your vendor.

RDM devices versus VMFS Datastores

There is no performance benefit to using RDM devices versus VMFS datastores. It is recommended to use VMFS datastores unless you have specific storage vendor requirements to support hardware snapshots or replication in a virtual environment.

VMDK Disk Devices

Configure your VM’s VMDK disk devices as thick eager-zeroed so that each block is zeroed out when the VMDK is created. By default, new thick VMDK disk devices are created lazy-zeroed. This causes duplicate I/O the first time each block is written to the disk device: the block is first zeroed, then your application data is written. This can cause significant performance overhead for disk I/O intensive applications.

To configure thick-eagerzero VMDK disk devices either:

  • Check the box to ‘Support clustering features such as Fault Tolerance’ when creating the VM. This does not enable FT, but does eagerzero the disks.

Or

  • From the ESX CLI:
vmkfstools -k /vmfs/volumes/path/to/vmdk

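The command above converts an existing lazy-zeroed VMDK in place. When creating a new disk from the command line you can request the eager-zeroed format directly; the size, datastore path, and file name below are placeholders:

vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/myVM/myVM_1.vmdk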

For more information about the ESX CLI, see the vSphere Command-Line Interface Documentation at http://www.vmware.com/support/developer/vcli/

Fibre Channel Storage

If you are using Fibre Channel storage, configure the maximum queue depth on the FC HBA card.
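
This is an illustration only: on an ESX 4 host with a QLogic HBA the queue depth is set through a driver module option. The module name and parameter vary by HBA vendor and driver version, so confirm both against the Fibre Channel SAN Configuration Guide for your release before applying (a host reboot is required for the new value to take effect):

esxcfg-module -s ql2xmaxqdepth=64 qla2xxx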

IP-Based Storage

  • Do not oversubscribe network interfaces or switches when using IP based storage (i.e. iSCSI or NFS). Use EtherChannel with ESX NIC teams and IP storage targets or 10GE if storage I/O requirements exceed a single 1Gb network interface.
  • Use dedicated physical NIC ports, teams, and VLANs for IP based storage traffic (i.e. iSCSI storage or NFS datastores). This will avoid contention between client/server I/O, storage I/O, and vMotion traffic.
  • Use jumbo frames to increase storage I/O throughput and performance when using IP based storage (i.e. iSCSI or NFS); example commands follow this list.
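
A hedged sketch of enabling jumbo frames on an ESX 4 standard vSwitch and creating an IP storage VMkernel interface with a 9000-byte MTU. The vSwitch name, port group name, and IP addressing are placeholders, the port group must already exist, and the physical switches and storage targets must also be configured end to end for jumbo frames:

esxcfg-vswitch -m 9000 vSwitch2
esxcfg-vmknic -a -i 192.168.50.10 -n 255.255.255.0 -m 9000 IPStorage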

vSphere Cluster Recommendations

VMware vMotion

Use dedicated physical NIC ports, teams, and VLANs for vMotion traffic to avoid contention between client/server I/O, storage I/O, and vMotion traffic.

VMware HA

Confirm VMware HA is enabled for the vSphere Cluster to automatically recover your VMs in the vSphere Cluster in case of unplanned hardware downtime.

VMware DRS

  • Confirm DRS is enabled to load balance VMs across ESX hosts in a vSphere Cluster.
  • With DRS, you can configure affinity rules to keep virtual machines together or apart on the ESX hosts in a vSphere Cluster. We recommend using anti-affinity rules to separate multi-server deployments performing the same function onto different ESX hosts in a vSphere Cluster. This will minimize the impact to users caused by a hardware failure affecting a single ESX host. VMware HA (if enabled) will automatically recover the multi-server deployment’s VMs from the failed ESX host onto other ESX hosts in the vSphere Cluster.
  • To create a DRS rule: ‘myvSphereCluster’ -> Edit Settings -> VMware DRS -> Rules -> Add
  • Create rules such as the following (these examples come from the original Zimbra guidance):
    • Name: Mailbox Servers -> Type: Separate Virtual Machines -> Add: ‘myMailboxServers’
    • Name: Proxy Servers -> Type: Separate Virtual Machines -> Add: ‘myProxyServers’
    • Name: MTA Servers -> Type: Separate Virtual Machines -> Add: ‘myMTAServers’

Reference Materials

Zimbra vSphere Best Practices

http://files2.zimbra.com/zca/zca-6.0.7_GA_341/doc/Zimbra_on_vSphere_Performance_Best_Practices.pdf

Performance Best Practices for VMware vSphere 4.0

http://www.vmware.com/pdf/Perf_Best_Practices_vSphere4.0.pdf

VMware vSphere 4 Performance with Extreme I/O Workloads

http://www.vmware.com/pdf/vsp_4_extreme_io.pdf

Performance Troubleshooting for VMware vSphere 4

http://communities.vmware.com/servlet/JiveServlet/download/10352-1-28235/vsphere4-performance-troubleshooting.pdf

Understanding Memory Resource Management in VMware ESX Server

http://www.vmware.com/files/pdf/perf-vsphere-memory_management.pdf

Comparison of Storage Protocol Performance in VMware vSphere 4

http://www.vmware.com/files/pdf/perf_vsphere_storage_protocols.pdf

Best Practices for Running vSphere on NFS Storage

http://vmware.com/files/pdf/VMware_NFS_BestPractices_WP_EN.pdf

Configuration Maximums for VMware vSphere 4.0

http://www.vmware.com/pdf/vsphere4/r40/vsp_40_config_max.pdf

What’s New in VMware vSphere 4: Performance Enhancements

http://www.vmware.com/files/pdf/vsphere_performance_wp.pdf
