Tuesday, 25 September 2012

"Failed to uninstall the device. The device may be required to boot up the computer"

After what seemed to be a successful migration of a dozen or so virtual machines from an old VMware ESX 3 host to a shiny new VMware vSphere 5.1 host (via VMware vCenter Converter), I started to see the following error pop up when configuring the new network adapter.
Failed to uninstall the device. The device may be required to boot up the computer
After a little snooping around on the VMware KB site, I found a way to resolve the issue:

Open a command prompt

  1. Type: set devmgr_show_nonpresent_devices=1 and press Enter
  2. Type: devmgmt.msc and press Enter to open Device Manager
Once Device Manager is up, go to View, Show Hidden Devices. From there you can remove (or disable) the old network adapters. Once that is done, reboot the machine and you should be good to go!

Tuesday, 18 September 2012

Backup your log files to Amazon S3

As somewhat of a follow-up to my earlier post on Auto Scaling your AWS fleet with Cloud Watch: if any of your instances log to local storage, then once those instances have been scaled down, getting that log data back will be difficult - or impossible if you trash your EBS volumes too.

One solution is to have a small script create a bucket within S3 as soon as your instance powers up. Next, have another script that copies your local log directory to the S3 bucket every few minutes. That way, once your EC2 instance is destroyed you still have access to all the log data ... just in case. This example refers to AWS instances but could easily be used on any instance, cloud or otherwise.

The tools of choice here are few:
  1. CloudBerry Amazon S3 Explorer Freeware - we will use this for bucket management as well as its built-in PowerShell snap-in.
  2. Windows Server Task Scheduler

Install and configure CloudBerry Amazon S3 Explorer

To get started, download and install CloudBerry Amazon S3 Explorer Freeware. Next, register the PowerShell snap-in by dropping to the command line (run as administrator) and running the following.

C:\Windows\Microsoft.NET\Framework64\v2.0.50727\InstallUtil.exe "C:\Program Files\CloudBerryLab\CloudBerry Explorer for Amazon S3\CloudBerryLab.Explorer.PSSnapIn.dll"

Once successful, you are ready to go with the scripts. Overall, they are very lightweight and easy to read regardless of your PowerShell knowledge. 

First things first, set the PowerShell execution policy to unrestricted by executing the following PowerShell command: set-executionpolicy unrestricted. The Set-ExecutionPolicy cmdlet enables you to determine which Windows PowerShell scripts (if any) will be allowed to run on your computer. If you like you can read more about that cmdlet here.

S3 Folder creation via PowerShell & S3 Explorer

Create the first script that will add a folder to your S3 bucket. If you don't have one yet, use S3 Explorer or the AWS Console to create one. You can copy/paste the following into an empty PowerShell script.

#declare variables
$key = "xxxxx"  #AWSAccessKeyId
$secret = "xxxxx" #AWSSecretKey

#load CloudBerryLab PSSnapin
Add-PSSnapin CloudBerryLab.Explorer.PSSnapIn

$hname = hostname
$s3 = Get-CloudS3Connection -Key $key -Secret $secret
$destination = $s3 | Select-CloudFolder -path "mylogs" | Add-CloudFolder $hname

What this does is set up your AWS authentication, load the CloudBerryLab PowerShell snap-in, and set a variable $hname equal to the hostname of your instance. It then uses that variable to create a new folder in your mylogs S3 bucket.

Set this up as a basic task that runs whether the user is logged on or not, triggered At startup. Configure the task action to Start a program, with powershell.exe as your program of choice here, and add the following argument: -command "D:\tmp\logscopu.ps1"
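If you prefer the command line over the Task Scheduler UI, the equivalent task can be created with schtasks.exe. The task name here is arbitrary, and the script path should match wherever you saved your own copy:

```
schtasks /Create /TN "S3FolderAtStartup" /SC ONSTART /RU SYSTEM /TR "powershell.exe -command D:\tmp\logscopu.ps1"
```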

Reboot the server to confirm - you should see a new folder in your S3 bucket.

S3 File copy via PowerShell & S3 Explorer

The second little script is just about as easy to read. 

#declare variables
$key = "xxxxx"  #AWSAccessKeyId
$secret = "xxxxx" #AWSSecretKey

#load CloudBerryLab PSSnapin
Add-PSSnapin CloudBerryLab.Explorer.PSSnapIn

#determine and set instance hostname
$hname = hostname

$s3 = Get-CloudS3Connection -Key $key -Secret $secret
$destination = $s3 | Select-CloudFolder -path "mylogs/$hname"
$src = Get-CloudFilesystemConnection | Select-CloudFolder -path "D:\Logs"
$src | Copy-CloudItem $destination -filter "*"

The PowerShell snap-in bits authenticate your account, set the destination to a folder in the mylogs bucket whose name matches your instance name, and finally copy the local contents of D:\Logs\* to mylogs/$hname.

Set this up on a basic task as well - but make it fire every 5 minutes. You should now see your logs populating S3 about every 5 minutes. 
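The same schtasks.exe approach works here; this version fires every five minutes (again, the task name and script path are placeholders for your own):

```
schtasks /Create /TN "S3LogCopy" /SC MINUTE /MO 5 /RU SYSTEM /TR "powershell.exe -command D:\tmp\logcopy.ps1"
```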

CloudBerry has many examples here that you can use and tweak to get just what you need.

Monday, 17 September 2012

Auto Scaling your AWS fleet with Cloud Watch and SQS

This blog post shows you how to set up Auto Scaling in AWS, using CloudWatch to dynamically expand and collapse your Amazon EC2 fleet in line with SQS queue depth.

Before we get our hands dirty, I am making a few assumptions here:
  1. You are already familiar with the basics of AWS
  2. You have an AWS account - if not, sign up here.
Also, this was done on a Windows box so your mileage on a Mac or a Linux box may vary.

The first step is to download the Auto Scaling tools from Amazon and configure the command line interface (CLI). If you do not have Java v1.5 or newer, you'll need to update here.

I like to pop the following commands in a batch file that I can reuse as needed; it also makes the CLI tools more portable between workstations.

set AWS_AUTO_SCALING_HOME=C:\aws\AutoScaling-
set AWS_CREDENTIAL_FILE=C:\aws\AutoScaling-\credential-file-path.template
set JAVA_HOME=C:\Program Files\Java\jre6

Once that is done, you can confirm the setup was done correctly by using the as-cmd command.


If successful, you will see all of the available Auto Scaling commands.

Command Name                                Description
------------                                -----------
as-create-auto-scaling-group                Create a new Auto Scaling group.
as-create-launch-config                     Creates a new launch configuration.
as-create-or-update-tags                    Create or update tags.
<snip />
as-update-auto-scaling-group                Updates the specified Auto Scaling group.
version                                     Prints the version of the CLI tool and the API.

    For help on a specific command, type '<commandname> --help'

The next step is to create a Launch Configuration for the SQS processor instances. The launch configuration specifies the type of Amazon EC2 instance that Auto Scaling creates for you. I chose a micro instance using the Amazon Linux AMI; however, you can use any AMI you like, including your own.

You create your launch config via the as-create-launch-config command:

as-create-launch-config DFQPconfig --image-id ami-a0cd60c9 --instance-type t1.micro

If successful, you will see: OK-Created launch config

Now, create an AutoScaling group for the SQS processors:

as-create-auto-scaling-group DFQPgroup --launch-configuration DFQPconfig --availability-zones us-east-1e --min-size 1 --max-size 5

If successful you will see: OK-Created AutoScalingGroup

The Auto Scaling group existence can be verified with the following command:

    as-describe-auto-scaling-groups --headers

The results will look similar to this:

AUTO-SCALING-GROUP  DFQPgroup   DFQPconfig     us-east-1e          1         5
INSTANCE  i-8c93d9f6   us-east-1e         InService  Healthy  DFQPconfig

Now, let's configure Auto Scaling policies that will both add and remove one node to and from the group as demand dictates. They will be invoked when the number of messages on the queue increases to a predefined level or decreases to an acceptable level:

as-put-scaling-policy DFQPscaleup -g DFQPgroup --adjustment=1 --type ChangeInCapacity

Be sure to make note of your ARN; you will need it later to create CloudWatch alarms.


A note on --adjustment: this is the amount by which to scale the capacity of the associated group. Use negative values to decrease capacity. For negative numeric values, specify the option as --adjustment=-1 on Unix, and "--adjustment=-1" (quotes required) on Windows.

as-put-scaling-policy DFQPscaledown -g DFQPgroup "--adjustment=-1" --type ChangeInCapacity

Be sure to make note of your ARN; you will need it later to create CloudWatch alarms.


Next, we need the Amazon CloudWatch tools. Download the command line tools here.

Similar to the Auto Scaling config, this is my setup for CloudWatch - yours might vary a little:

set AWS_CLOUDWATCH_HOME=C:\aws\CloudWatch-
set AWS_CREDENTIAL_FILE=C:\aws\CloudWatch-\credential-file-path.template
set JAVA_HOME=C:\Program Files\Java\jre6

Once that is done, we will build out two alarms: one to trigger when ApproximateNumberOfMessagesVisible is high, and another when it is low.

mon-put-metric-alarm --alarm-name DFQPHighAlarm --alarm-description "Scale up when ApproximateNumberOfMessagesVisible is high" --metric-name ApproximateNumberOfMessagesVisible --namespace AWS/SQS --statistic Average --period 60 --threshold 100 --comparison-operator GreaterThanThreshold --dimensions="QueueName=sqs_populator" --evaluation-periods 5 --alarm-actions arn:aws:autoscaling:us-east-1:512617028781:scalingPolicy:61432c71-0679-4988-928f-4a58a867d71f:autoScalingGroupName/DFQPgroup:policyName/DFQPscaleup

If successful, you will see: OK-Created Alarm

mon-put-metric-alarm --alarm-name DFQPLowAlarm --alarm-description "Scale down when ApproximateNumberOfMessagesVisible is low" --metric-name ApproximateNumberOfMessagesVisible --namespace AWS/SQS --statistic Average --period 60 --threshold 100 --comparison-operator LessThanThreshold --dimensions="QueueName=sqs_populator" --evaluation-periods 5 --alarm-actions arn:aws:autoscaling:us-east-1:512617028781:scalingPolicy:1611a772-cea0-492c-b245

If successful, you will see: OK-Created Alarm

In this example, the alarms cause the EC2 fleet to scale up by adding an instance when the number of visible messages on the queue remains above 100 for 5 minutes, and to scale down when the number of visible messages stays below 100 for 5 minutes.
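To make the alarm behaviour concrete, here is a small Python sketch of the evaluation logic itself (this is only an illustration of the rules above, not an AWS API call; the function name is my own):

```python
# Sketch of the alarm evaluation only -- no AWS calls are made here.
# Each entry in `samples` is the average ApproximateNumberOfMessagesVisible
# over one 60-second period; an alarm fires after 5 consecutive breaches.

def scaling_decision(samples, threshold=100, evaluation_periods=5):
    """Return +1 (scale up), -1 (scale down), or 0 (no change)."""
    if len(samples) < evaluation_periods:
        return 0
    recent = samples[-evaluation_periods:]
    if all(s > threshold for s in recent):
        return 1   # DFQPHighAlarm -> DFQPscaleup policy
    if all(s < threshold for s in recent):
        return -1  # DFQPLowAlarm -> DFQPscaledown policy
    return 0

# Queue depth climbing past 100 for five straight periods triggers scale-up
print(scaling_decision([90, 120, 130, 150, 160, 180]))  # prints 1
```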

Tuesday, 21 August 2012

Active Directory in the Rackspace Cloud

Wow. My first post in ages! There are LOADS of new and exciting features taking place all over the various clouds these days. As such, this is a quick guide I threw together that goes through the few steps required to set up Active Directory within the Rackspace Cloud. I am using their new OpenStack gear, but this same process should work without issue on the First-Gen gear too.

A few assumptions...
  • You are an existing Rackspace Cloud customer. (free sign up)
  • You know enough about AD and Windows Server in general to follow along with this light guide
  • You know what to do once the AD role has been installed
OK - let's roll.
  1. Log into your Rackspace account and create a new slice, and since this is only for testing purposes, use the smallest slice you can get your hands on (1.0 GB RAM, 40 GB Disk, 1 vCPU). Make note of your administrator password.
  2. Once launched, login via RDP or the local console, download and install your favorite AV client.
  3. (optional) Download and install Firefox or Chrome. I find it much easier to navigate the web on a fresh Windows install with one of these.
  4. Set time zone - not required, but good to do so now.
  5. Enable and start the Remote Registry service. Very important - you will not get far without it.
  6. Remove any roles you don't need (i.e. IIS) and all of their dependent features. Reboot.
  7. Enable Active Directory Domain Services. Reboot. 
  8. Run dcpromo.exe - populate the text boxes with your new domains values. Reboot.
Once you get here you are nearly set. You can tweak your DNS if need be and your LAN settings but the out-of-the-box values from RAX work just fine.

The next, and somewhat painful, step is configuring the network rules to allow the various types of AD traffic. This is a great helper if you don't know them all from memory: http://technet.microsoft.com/en-us/library/dd772723(v=WS.10).aspx. One tip is to restrict RPC to a specific port (or ports).
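As a starting point, inbound rules for a few of the core AD ports can be added with netsh - the rule names here are arbitrary, and the TechNet page above has the complete port list:

```
netsh advfirewall firewall add rule name="AD Kerberos" dir=in action=allow protocol=TCP localport=88
netsh advfirewall firewall add rule name="AD LDAP" dir=in action=allow protocol=TCP localport=389
netsh advfirewall firewall add rule name="AD SMB" dir=in action=allow protocol=TCP localport=445
```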

Good luck!

Tuesday, 14 August 2012

Sending email through Amazon SES with PowerShell

Amazon Simple Email Service (Amazon SES) is a highly scalable and cost-effective bulk and transactional email-sending service for businesses and developers. 

Here is a pretty basic PowerShell script that will send email through the Amazon SES SMTP interface. This assumes you already have SES enabled, and you at least have access to send email from the SES sandbox – an environment specifically designed for developers to test and evaluate the service.

$smtpServer = "email-smtp.us-east-1.amazonaws.com" 
$smtpPort = 587  
$username = "your username"
$password = "your password"  
$from = "[email protected]"
$to = "[email protected]"
$subject = "Test e-mail with PowerShell"
$body = "This is a test e-mail sent using PowerShell"

$smtp = new-object Net.Mail.SmtpClient($smtpServer, $smtpPort)
$smtp.EnableSsl = $true 

$smtp.Credentials = new-object Net.NetworkCredential($username, $password)
$msg = new-object Net.Mail.MailMessage
$msg.From = $from
$msg.To.Add($to)
$msg.Subject = $subject
$msg.Body = $body

$smtp.Send($msg)
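For comparison, here is a rough equivalent using Python's standard library (assuming Python 3; the endpoint and credentials are the same placeholders as in the PowerShell version, and the actual send is left commented out so you can inspect the message first):

```python
import smtplib
from email.mime.text import MIMEText

smtp_server = "email-smtp.us-east-1.amazonaws.com"  # SES SMTP endpoint
smtp_port = 587
username = "your username"  # SES SMTP credentials (not your AWS access keys)
password = "your password"

msg = MIMEText("This is a test e-mail sent using Python")
msg["From"] = "[email protected]"
msg["To"] = "[email protected]"
msg["Subject"] = "Test e-mail with Python"

def send(message):
    # STARTTLS on port 587, then authenticate with the SES SMTP credentials
    with smtplib.SMTP(smtp_server, smtp_port) as conn:
        conn.starttls()
        conn.login(username, password)
        conn.send_message(message)

# send(msg)  # uncomment once your SES credentials are in place
```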

Thursday, 19 July 2012

Install Telnet Client via command line

On Windows 7, Windows Server 2008 R2, Windows Server 2008 or Windows Vista you can use the following command line procedure to install Telnet Client.

Open a command prompt window. Click Start, type cmd in the Start Search box, and then press ENTER.
Type the following command:

pkgmgr /iu:"TelnetClient"
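Note that pkgmgr is deprecated on newer releases; on Windows 8 / Server 2012 and later the same feature can be enabled with DISM:

```
dism /online /Enable-Feature /FeatureName:TelnetClient
```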