Virtualization Adapted: Adapting Business Processes for Virtual Infrastructure (and vice versa)


Use screen to maintain sessions through ssh disconnects

Filed under: virtualization — iben @ 15:16


An SSH terminal session may disconnect for various reasons, inadvertently killing any processes running in it.

Use the screen command to preserve the session: even if your SSH connection drops, screen keeps your remote processes running, and you can reattach to them later.

screen -help
Use: screen [-opts] [cmd [args]]
or: screen -r [host.tty]

-a Force all capabilities into each window’s termcap.
-A -[r|R] Adapt all windows to the new display width & height.
-c file Read configuration file instead of ‘.screenrc’.
-d (-r) Detach the elsewhere running screen (and reattach here).
-dmS name Start as daemon: Screen session in detached mode.
-D (-r) Detach and logout remote (and reattach here).
-D -RR Do whatever is needed to get a screen session.
-e xy Change command characters.
-f Flow control on, -fn = off, -fa = auto.
-h lines Set the size of the scrollback history buffer.
-i Interrupt output sooner when flow control is on.
-list or -ls. Do nothing, just list our SockDir.
-L Turn on output logging.
-m ignore $STY variable, do create a new screen session.
-O Choose optimal output rather than exact vt100 emulation.
-p window Preselect the named window if it exists.
-q Quiet startup. Exits with non-zero return code if unsuccessful.
-r Reattach to a detached screen process.
-R Reattach if possible, otherwise start a new session.
-s shell Shell to execute rather than $SHELL.
-S sockname Name this session <pid>.sockname instead of <pid>.<tty>.<host>.
-t title Set title. (window’s name).
-T term Use term as $TERM for windows, rather than “screen”.
-U Tell screen to use UTF-8 encoding.
-v Print “Screen version 4.00.03 (FAU) 23-Oct-06”.
-wipe Do nothing, just clean up SockDir.
-x Attach to a not detached screen. (Multi display mode).
-X Execute <cmd> as a screen command in the specified session.
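As a minimal workflow sketch (the session name and the command being run are examples, not from the help text above): start a named detached session, list it, and reattach after a reconnect.

```shell
# Start a detached session named "mytask" running a long job
# (replace "sleep 600" with your real command)
screen -dmS mytask sleep 600

# List running sessions to confirm it exists
screen -ls

# Reattach to the session; detach again from inside it with Ctrl-a d
screen -r mytask
```

From inside a live session, Ctrl-a d detaches without killing the processes, which is what makes the SSH-disconnect survival work.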


Updated blog site to WordPress 4.6 “Pepper” @sonic

Filed under: security,virtualization — admin @ 17:01

Due to a compromise of an older version and old plugins, I was not able to log in to my site via the admin portal. Thanks to the great phone support from @sonic dot net technicians, I was able to log in to the SSH shell and FTP server to back up the installation, restore from a snapshot, remove the old plugins, and get the system working again! Very happy to have this all going. I’m excited to poke around with the latest version of WordPress to see all the new features.

WordPress 4.6 “Pepper” 2016/08/16

Version 4.6 of WordPress, named “Pepper” in honor of jazz baritone saxophonist Park Frederick “Pepper” Adams III, is available for download or update in your WordPress dashboard. New features in 4.6 help you to focus on the important things while feeling more at home.


Temperature and Velocity at the Gas Pump

Filed under: virtualization — iben @ 14:53
Seraphin Label

Gas Pump Meter

I saw this guy testing the meters on the gas pumps today and asked him a few questions. You’d be happy to know about this:

Q1) Does it really make any difference if you pump gas slowly or quickly? I’ve heard you get more gas if you are not in a hurry.
A1) Yes. The gas pumps are not tuned for slow delivery levels and will “leak” a little extra gas when run at the minimum delivery rate.

Q2) Does gas actually compress at night when it’s cold?
A2) Yes. See the Seraphin picture for the actual math formula to compute the change in volume based on temperature. For best results, pump your gas at night or early in the morning, when the ground temperature is lowest.


#openstack training and certification #jobs demand exceeds supply

Filed under: virtualization — iben @ 10:33


PistonCloud Training Available
Rackspace Certification Program
Heat RefStack Reference Architecture
We’re in trouble. We’re facing an unprecedented shortage of IT talent in cloud, and in this case, specifically within OpenStack.
Full price of the course is $1,995.
Piston Cloud has launched a unique training program aimed at helping organizations get off the ground with OpenStack. Our next class, “Deploying OpenStack for Cloud Administrators,” is scheduled for May 8th and 9th here in San Francisco.

The course is ideal for IT managers, VMware administrators, and cloud architects, and features a blend of keynotes and hands-on lab work. Attendees will learn directly from the seasoned engineers who built Piston Enterprise OpenStack, and leave with a deep and comprehensive understanding of the technology, as well as many insider tips and tricks to get their own private cloud deployment off the ground. The two-day course includes:

Keynote from Piston Cloud CTO Joshua McKenty: A History of Cloud Computing
Session: Opinionated Architecture and the True Cloud
Lab: Installation and Configuration of an OpenStack Private Cloud
Lab: An Introduction to Account Creation and Administration
Session: Introduction to OpenStack Compute (Nova)
Lab: Hands-on with Instance Management and Migration
Wrap-up and Q&A
Dinner and an evening outing
Session: An Introduction to Image Management (Glance)
Lab: Working with Images
Lab: Hands-on with OpenStack Block Storage (Cinder)
Lab: OpenStack Object Storage (Swift)
Lab: Enterprise Features
Closing Wrap-up and Q&A

Rackspace Certification
Rackspace Certified Technician for OpenStack
IT Professionals earn this certification after demonstrating the skills necessary to utilize and operate an OpenStack cloud.

Exam Objectives

Create instances from images, snapshots and volumes
Inject user data and files into an instance
Connect to instances via SSH and VNC
Inject a keypair into an instance and access the instance with that keypair
Set metadata on an instance
Create an instance snapshot
Pause, suspend, stop, rescue, resize, rebuild, reboot an instance
Manage volumes and volume snapshots
Manage images
Manage security groups
Assign IP Addresses to an instance
View Object Storage account information
Set Object Storage account metadata
Manage Object Storage containers
Set Object Storage metadata
Set Object Storage access control lists
Set Object Storage container sync
Set Object Storage container versioning
Set Object Storage static web
Upload, list, download and delete objects
Create Temporary URLs
Create form posts
List Identity Service catalog
Get auth tokens
Discover keystone endpoints
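Several of these objectives map directly onto the era's nova and swift command-line clients; a hedged sketch (the image, flavor, key, security group, and container names are placeholders, not from the exam):

```shell
# Boot an instance from an image with a keypair and security group (all names are placeholders)
nova boot --image my-image --flavor m1.small --key-name mykey --security-groups default web01

# Snapshot the instance, then pause and resume it
nova image-create web01 web01-snap
nova pause web01
nova unpause web01

# Object Storage: create a container, upload an object, list its contents
swift post mycontainer
swift upload mycontainer notes.txt
swift list mycontainer
```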

Reference Architecture – #refstack
The OpenStack project does an insane amount of automated testing as part of the development cycle, but up until now there has been no corresponding testing that can be performed against running public clouds. While we want to do that, before we can test other people’s clouds for compatibility, we need to be able to express what it is they need to be compatible with.

It turns out that OpenStack is now rich enough to express a reference implementation in terms of itself, using Heat templates. Some people think that’s a great end in itself (deploy your OpenStack using OpenStack), but others are not quite as sure about that yet, and have significant investment in tools like Chef, Puppet, Crowbar, or Cobbler. To express a useful set of testable information without leaving that specification as an academic exercise, or as the recipient of more tool wars, we’ve come up with a plan: the Heat templates describe the state, the “what” if you will, with a clear boundary line across which metadata is passed to the tools on the individual nodes that will turn that metadata into configuration.


Add a Backup Disk to Linux VM ext4 Centos 5

Filed under: virtualization — iben @ 15:17

Add a Backup Disk to Linux VM ext4 Centos 5

Edit the VM settings
Add the disk to the VM
yum install e4fsprogs
fdisk -l
Disk /dev/sdb: 60.1 GB, 60129542144 bytes
Disk /dev/sdb doesn’t contain a valid partition table
fdisk /dev/sdb
n – create a new partition
2 – use partition number 2
w – write the partition table
mkdir /backup
/sbin/mkfs.ext4 -L /backup /dev/sdb2
mount -t ext4 /dev/sdb2 /backup
vi /etc/fstab
Add the following line to /etc/fstab:
/dev/sdb2 /backup ext4 defaults 1 2
Change the Zimbra backups to go to the new directory:
su - zimbra
zmprov gacf zimbraBackupTarget
zimbraBackupTarget: /opt/zimbra/backup
zmprov mcf zimbraBackupTarget /backup
zmprov gacf zimbraBackupTarget
zimbraBackupTarget: /backup
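Before relying on the new fstab entry, it's worth confirming it parses and the filesystem is writable; a quick check, assuming the mount above succeeded:

```shell
# Re-read /etc/fstab; any syntax or device errors surface here
mount -a

# Confirm the new filesystem is mounted at /backup and show its size
df -h /backup

# Confirm it is writable
touch /backup/.write-test && rm /backup/.write-test
```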


windows on devstack on ubuntu nova hyper-v cloudbase openstack

Filed under: virtualization — iben @ 12:11

Here’s the situation…

You have a machine and you want to test out OpenStack. You can provision and run Windows VMs with an Ubuntu server acting as the hypervisor. Here are some of the ingredients you will need, along with some instructions and tips. This is a work in progress, so it’s not yet complete.



Unix Network IP Address Configuration

Filed under: virtualization — iben @ 13:56

To temporarily configure an IP address, you can use the ifconfig command in the following manner. Just modify the IP address and subnet mask to match your network requirements.

sudo ifconfig eth0 <ip-address> netmask <subnet-mask>

To verify the IP address configuration of eth0, you can use the ifconfig command in the following manner.

ifconfig eth0

eth0 Link encap:Ethernet HWaddr 00:15:c5:4a:16:5a

inet addr: Bcast: Mask:

inet6 addr: fe80::215:c5ff:fe4a:165a/64 Scope:Link


RX packets:466475604 errors:0 dropped:0 overruns:0 frame:0

TX packets:403172654 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:2574778386 (2.5 GB) TX bytes:1618367329 (1.6 GB)


To configure a default gateway, you can use the route command in the following manner. Modify the default gateway address to match your network requirements.

sudo route add default gw <gateway-ip> eth0

To verify your default gateway configuration, you can use the route command in the following manner.

route -n

Kernel IP routing table

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
                                                U     1      0        0 eth0
                                                UG    0      0        0 eth0

If you require DNS for your temporary network configuration, you can add DNS server IP addresses in the file /etc/resolv.conf. The example below shows how to enter two DNS servers to /etc/resolv.conf, which should be changed to servers appropriate for your network. A more lengthy description of DNS client configuration is in a following section.
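As a sketch of the format, with placeholder addresses from the 192.0.2.0/24 documentation range (substitute your network's actual DNS servers):

```
# /etc/resolv.conf
nameserver 192.0.2.1
nameserver 192.0.2.2
```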



If you no longer need this configuration and wish to purge all IP configuration from an interface, you can use the ip command with the flush option as shown below.

ip addr flush eth0


Flushing the IP configuration using the ip command does not clear the contents of /etc/resolv.conf. You must remove or modify those entries manually.

via Network Configuration.
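On current distributions the same temporary configuration shown above with ifconfig and route can also be done with the ip command (a sketch; the interface name and addresses are placeholders):

```shell
# Assign an address and prefix length to eth0 (placeholder values)
sudo ip addr add 192.0.2.100/24 dev eth0

# Verify the address
ip addr show eth0

# Add a default gateway and verify the routing table
sudo ip route add default via 192.0.2.1
ip route show
```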


report linkedin spam

Filed under: virtualization — iben @ 06:19


Someone sent me a connection request saying they know me or work with me, but I’m pretty sure they don’t. These people shouldn’t be misrepresenting themselves online. It lowers the trust we have in “the system”. What can I do?


  • Use the Help Center link in the lower left of your screen to contact customer service.
  • More serious messages that might be illegal should be emailed to
  • If it’s an invitation, you can sign in to LinkedIn and click the response “I Don’t Know this person”.
    Once someone has received 5 IDKs, the account is restricted.
  • My favorite method is to click the report spam button to the right of the message in your inbox.
    You do have to sign in to LinkedIn to do this.




arp mac esxi vmkernel storage

Filed under: it,security,virtualization — iben @ 19:44

troubleshooting arp mac esxi vmkernel storage

The guys were getting some new storage set up and had the IP address set incorrectly. Usually a vmkping would be enough to prove the vmkernel interfaces were set up correctly, but the vendor came back with “the firewall is blocking NFS,” so I needed a way to see the ARP table to prove the MAC for the NAS was showing up on the correct VMK interface with no gateway in the data path.

This was tested to work on the latest ESXi version 5, build 469512.

Here are the results:

~ # vmkping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=64 time=0.201 ms
64 bytes from icmp_seq=1 ttl=64 time=0.187 ms

— ping statistics —
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.187/0.194/0.201 ms
~ # esxcfg-vmknic -l
Interface Port Group/DVPort IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type
vmk0 Management Network IPv4 00:25:90:52:91:21 1500 65535 true STATIC
vmk1 VMkernel-152 IPv4 00:50:56:76:23:45 1500 65535 true STATIC
vmk2 VMkernel-150 IPv4 00:50:56:70:34:56 1500 65535 true STATIC
~ # esxcli network ip neighbor list
Neighbor  Mac Address        Vmknic  Expiry   State
--------  -----------------  ------  -------  -----
          00:50:56:2a:12:34  vmk2    993 sec


Displaying the ARP and Neighbor Discovery cache for VMkernel network interfaces

HP Cloud Beta

Filed under: virtualization — iben @ 19:35
I just started testing the cool new HP Cloud, which is in beta right now. While you can’t create or upload your own machine templates, you can choose from some popular open source choices (all 64-bit OS):

  • CentOS 5.6 or 6.2
  • Ubuntu 10 or 11
  • Debian 6

You get full root access and have the ability to run yum or apt-get to install or update as you wish; however, you cannot update the kernel. (See this post:

What virtualization technology does HP Cloud Compute use?

HP Cloud Compute is based on KVM virtualization technology.

What instance types are available for HP Cloud Compute?

Standard instances offer a number of virtual server types. Select the size that’s right for you based on the amount of memory, number of virtual cores, and local storage required. Although you won’t be billed during the beta, I imagine that once they do start billing, the choice you make here could have an impact on your pocketbook.


Standard Instance Types   RAM (GB)   Virtual Cores   Local Disk (GB)
Extra Small                1          1               30
Small                      2          2               60
Medium                     4          2               120
Large                      8          4               240
Extra Large                16         4               480
Double Extra Large         32         8               960


How to SSH to your VM

NOTE: the workflow for this is a little unintuitive. You get to the keypair dialog only when you are creating a VM. Be sure to take the time to follow these steps WHILE you are setting up your VM.

Setup a Keypair



Creating a Keypair is the first necessary step in launching an instance for the first time. Only one keypair is needed for a series of instances launched under that keypair name.

The Keypair creation menu is located on the left side of the “keypair” dropdown of the AZ’s management page:

A separate keypair must be generated for each AZ (Availability Zone)

After entering the keypair management page, type a name for the keypair, then click “Create”.

Once the keypair has been created, a block of key text appears below. Copy the entire text field.

Save it within a text document (Notepad, TextEdit, etc.) and rename the file with a .pem extension. This allows HP Cloud’s compute instances to use the file to identify the authorized user.

Once you have created a keypair, you can then enter your instance. Please see our Creating an instance and connect with Putty guide for further steps to gain access to your instance.

Create PEM file on Mac OS X

Open a terminal window

Run vi testkeypair.pem, press i to insert, paste the key text from above into the terminal, then press Esc and type :wq to save the file and quit vi.

chmod 400 testkeypair.pem
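SSH will refuse a private key whose permissions are too open, which is why the chmod 400 step matters; a quick check on the file created above:

```shell
# Restrict the key to owner read-only, then confirm the mode
chmod 400 testkeypair.pem
ls -l testkeypair.pem | cut -c1-10   # -r-------- means owner read-only
```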

Add a Public IP address to your VM

Be sure to go back and click the check box for this.

Login with SSH

ssh -i testkeypair.pem username@your-servers-public-ip-address

Setup IPv6 on your VM


Powered by WordPress