Thursday, 28 July 2016

Some great technical interview questions.

A laundry list of technical interview questions I have compiled myself or collected from other sites over the years. Feel free to comment with your favorites and I will add them as they come in.
  • What development tools have you used?
  • What languages have you programmed in?
  • What source control tools have you used?
  • What are your technical certifications?
  • What do you do to maintain your technical certifications?
  • How did your education help prepare you for this job?
  • How would you rate your key competencies for this job?
  • What are your IT strengths and weaknesses?
  • Tell me about the most recent project you worked on. What were your responsibilities?
  • From the description of this position, what do you think you will be doing on a day-to-day basis?
  • What challenges would you expect in this job if you were hired?
  • How important is it to work directly with your business users?
  • What elements are necessary for a successful team and why?
  • Tell me about the project you are most proud of, and what your contribution was.
  • Describe your production deployment process.
  • Give an example of where you have applied your technical knowledge in a practical way.
  • How did you manage source control?
  • What did you do to ensure quality in your deliverables?
  • What percentage of your time do you spend unit testing?
  • What do you expect in the solution documents you are provided?
  • Describe a time when you were able to improve upon the design that was originally suggested.
  • How much reuse do you get out of the code that you develop, and how?
  • Which do you prefer: service-oriented or batch-oriented solutions?
  • When is the last time you downloaded a utility from the internet to make your work more productive, and what was it?
  • What have you done to ensure consistency across unit, quality, and production environments?
  • Describe the elements of an n-tier architecture and their appropriate use.
  • Compare and contrast REST and SOAP web services.
  • Define authentication and authorization and the tools that are used to support them in enterprise deployments.
  • What is ETL and when should it be used?
  • You have been asked to research a new business tool. You have come across two solutions. One is an on-premises solution, the other is cloud-based. Assuming they are functionally equivalent, would you recommend one over the other, and why?
  • What do you do to ensure you provide accurate project estimates?
  • What technical websites do you follow?
  • Have you used Visual Studio?
  • Have you used Eclipse?
  • What is a SAN, and how is it used?
  • What is clustering, and describe its use.
  • What is the role of the DMZ in network architecture?
  • How do you enforce relational integrity in database design?
  • When is it appropriate to denormalize database design?
  • What is the difference between OLAP and OLTP? When is each used?
  • You have learned that a business unit is managing a major component of the business using Excel spreadsheets and Access databases. What risks does this present, and what would you recommend be done to mitigate those risks?
  • What automated-build tools or processes have you used?
  • What is the role of continuous integration systems in the automated-build process?
  • Describe the difference between optimistic and pessimistic locking.
  • In databases, what is the difference between a delete statement and a truncate statement?
  • What are transaction logs, and how are they used?
  • What are the most important database performance metrics, and how do you monitor them?
  • What is the role of SNMP?
  • What is a cross site scripting attack, and how do you defend against it?
  • In network security, what is a honey pot, and why is it used?

More generic questions:
  • Tell me about yourself.
  • What interested you in this position?
  • What is your long-range objective?
  • Are you a team player?
  • Have you ever had a conflict with a boss or professor? How was it resolved?
  • What is your greatest weakness?
  • What are your greatest strengths?
  • What qualities do you feel a successful manager should have?
  • If you had to live your life over again, what one thing would you change?
  • What do you do in your spare time?
  • How do you react to criticism?
  • How do you work under pressure?
  • How have you dealt with frustrated customers?
  • What is the worst technical experience you have worked through?
  • What are your technical certifications?
  • What do you do to maintain your technical certifications?
  • Tell me about the most recent project you worked on. What were your responsibilities?
  • From the description of this position, what do you think you will be doing on a day-to-day basis?
  • Tell me about the project you are most proud of, and what your contribution was.
  • Do you have any questions for me?

Friday, 15 July 2016

Android stuck obtaining ip address

Wow, I have not posted in years! Time to start again? We'll see if this takes :)

Anyhow, I have been using a OnePlus X, a fantastic little device, with Cyanogenmod 13 on it - the latest nightly as of today. And for whatever reason the WiFi would not work - it was always stuck obtaining an IP address.

After some searching, I was able to resolve it by going to Developer Options and turning on Legacy DHCP. No reboot needed.

Tuesday, 25 September 2012

"Failed to uninstall the device. The device may be required to boot up the computer"

After what seemed to be a successful migration of a dozen or so virtual machines running on an old VMware ESX 3 host to a shiny new VMware vSphere 5.1 host (via VMware vCenter Converter), I started to see the following error pop up when configuring the new network adapter.
Failed to uninstall the device. The device may be required to boot up the computer
After a little snooping around on the VMware KB site, I found a way to resolve the issue:

Open a command prompt:
  1. Type set devmgr_show_nonpresent_devices=1 and press Enter
  2. Type devmgmt.msc and press Enter to open Device Manager
Once Device Manager is up, go to View, Show Hidden Devices. From there you can remove (or disable) the old network adapters. Once that has been done, reboot the machine and you should be good to go!

Tuesday, 18 September 2012

Backup your log files to Amazon S3

As somewhat of a follow-up to my earlier post on Auto Scaling your AWS fleet with Cloud Watch: if any of your instances log to local storage, getting that log data back after those instances have been scaled down will be difficult - or impossible if you trash your EBS volumes too.

One solution is to have a small script create a per-instance folder within an S3 bucket as soon as your instance powers up. Next, have another script copy your local log directory to that S3 folder every few minutes. That way, once your EC2 instance is destroyed you still have access to all the log data ... just in case. This example refers to AWS instances but could easily be adapted to any instance, cloud or otherwise.
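Before diving into the scripts, here is a hedged, platform-neutral Python sketch of the naming scheme this approach produces - each instance copies its logs to a per-host folder inside one bucket. The function name and the exact layout are my own illustration, not part of the CloudBerry tooling:

```python
# Hypothetical sketch of the key layout: each instance copies its logs
# to <bucket>/mylogs/<hostname>/<filename>. Names here are assumptions.
import socket
from pathlib import PurePosixPath, PureWindowsPath

def s3_key_for(local_path, bucket_prefix="mylogs", hostname=None):
    """Map a local log file to its per-host S3 key."""
    host = hostname or socket.gethostname()
    name = PureWindowsPath(local_path).name  # handles D:\Logs\app.log style paths
    return str(PurePosixPath(bucket_prefix) / host / name)

print(s3_key_for(r"D:\Logs\app.log", hostname="web-01"))  # mylogs/web-01/app.log
```

Because the folder is keyed by hostname, logs from different instances never collide, and you can tell at a glance which (now destroyed) instance produced them.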

The tools of choice here are few:
  1. CloudBerry Amazon S3 Explorer Freeware - we will use this for both bucket management and the built-in PowerShell snap-in.
  2. Windows Server Task Scheduler

Install and configure CloudBerry Amazon S3 Explorer

To get started, download and install CloudBerry Amazon S3 Explorer Freeware. Next, register the PowerShell snap-in by dropping to the command line (run as administrator) and launch the following.

C:\Windows\Microsoft.NET\Framework64\v2.0.50727\InstallUtil.exe "C:\Program Files\CloudBerryLab\CloudBerry Explorer for Amazon S3\CloudBerryLab.Explorer.PSSnapIn.dll"

Once successful, you are ready to go with the scripts. Overall, they are very lightweight and easy to read regardless of your PowerShell knowledge. 

First things first, set the PowerShell execution policy to unrestricted by executing the following PowerShell command: set-executionpolicy unrestricted. The Set-ExecutionPolicy cmdlet enables you to determine which Windows PowerShell scripts (if any) will be allowed to run on your computer. If you like you can read more about that cmdlet here.

S3 Folder creation via PowerShell & S3 Explorer

Create the first script that will add a folder to your S3 bucket. If you don't have one yet, use S3 Explorer or the AWS Console to create one. You can copy/paste the following into an empty PowerShell script.

#declare variables
$key = "xxxxx"  #AWSAccessKeyId
$secret = "xxxxx" #AWSSecretKey

#load CloudBerryLab PSSnapin
Add-PSSnapin CloudBerryLab.Explorer.PSSnapIn

$hname = hostname
$s3 = Get-CloudS3Connection -Key $key -Secret $secret
$destination = $s3 | Select-CloudFolder -path "mylogs" | Add-CloudFolder $hname

This sets up your AWS authentication, loads the CloudBerryLab PowerShell snap-in, and sets a variable $hname equal to the hostname of your instance. It then uses that variable to create a new folder in your mylogs S3 bucket.

Set this up as a basic task, set to Run whether user is logged on or not and triggered At startup. Configure the task action to Start a program, with powershell.exe as the program of choice here. Then add the following argument: -command "D:\tmp\logscopu.ps1"

Reboot the server to confirm - you should see a new folder in your S3 bucket.

S3 File copy via PowerShell & S3 Explorer

The second little script is just about as easy to read. 

#declare variables
$key = "xxxxx"  #AWSAccessKeyId
$secret = "xxxxx" #AWSSecretKey

#load CloudBerryLab PSSnapin
Add-PSSnapin CloudBerryLab.Explorer.PSSnapIn

#determine and set instance hostname
$hname = hostname

$s3 = Get-CloudS3Connection -Key $key -Secret $secret
$destination = $s3 | Select-CloudFolder -path "mylogs/$hname"
$src = Get-CloudFilesystemConnection | Select-CloudFolder -path "D:\Logs"
$src | Copy-CloudItem $destination -filter "*"

The PowerShell snap-in bits authenticate your account, set up the destination bucket with a folder name that matches your instance name, and finally copy the local contents of D:\Logs\* to mylogs/$hname.

Set this up on a basic task as well - but make it fire every 5 minutes. You should now see your logs populating S3 about every 5 minutes. 

CloudBerry has many examples here that you can use and tweak to get just what you need.

Monday, 17 September 2012

Auto Scaling your AWS fleet with Cloud Watch and SQS

This blog post shows you how to set up Auto Scaling in AWS using Cloud Watch to dynamically expand and collapse your Amazon EC2 fleet as your SQS queue grows and shrinks.

Before we get our hands dirty, I am making a few assumptions here:
  1. You are already familiar with the basics of AWS
  2. You have an AWS account - if not, sign up here.
Also, this was done on a Windows box so your mileage on a Mac or a Linux box may vary.

The first step is to download the Auto Scaling tools from Amazon and configure the command line interface (CLI). If you do not have Java v1.5 or newer you'll need to update here.

I like to pop the following commands in a batch file that I can reuse as needed; it also makes the CLI tools more portable between workstations.

set AWS_AUTO_SCALING_HOME=C:\aws\AutoScaling-
set AWS_CREDENTIAL_FILE=C:\aws\AutoScaling-\credential-file-path.template
set JAVA_HOME=C:\Program Files\Java\jre6

Once that is done, you can confirm the setup was done correctly by using the as-cmd command.


If successful, you will see all of the available Auto Scaling commands.

Command Name                                Description
------------                                -----------
as-create-auto-scaling-group                Create a new Auto Scaling group.
as-create-launch-config                     Creates a new launch configuration.
as-create-or-update-tags                    Create or update tags.
<snip />
as-update-auto-scaling-group                Updates the specified Auto Scaling group.
version                                     Prints the version of the CLI tool and the API.

    For help on a specific command, type '<commandname> --help'

The next step is to create a Launch Configuration for the SQS processor instances. The launch configuration specifies the type of Amazon EC2 instance that Auto Scaling creates for you. I chose to use a micro instance with the Amazon Linux AMI; however, you can use any AMI you like, including your own.

You create your launch config via the as-create-launch-config command:

as-create-launch-config DFQPconfig --image-id ami-a0cd60c9 --instance-type t1.micro

If successful, you will see: OK-Created launch config

Now, create an AutoScaling group for the SQS processors:

as-create-auto-scaling-group DFQPgroup --launch-configuration DFQPconfig --availability-zones us-east-1e --min-size 1 --max-size 5

If successful you will see: OK-Created AutoScalingGroup

The Auto Scaling group's existence can be verified with the following command:

    as-describe-auto-scaling-groups --headers

The results will look similar to this:

AUTO-SCALING-GROUP  DFQPgroup   DFQPconfig     us-east-1e          1         5
INSTANCE  i-8c93d9f6   us-east-1e         InService  Healthy  DFQPconfig

Now, let's configure Auto Scaling policies that will add and remove one node at a time as demand dictates. They will be invoked when the number of messages on the queue increases to a predefined level or decreases to an acceptable level:

as-put-scaling-policy DFQPscaleup -g DFQPgroup --adjustment=1 --type ChangeInCapacity

Be sure to make note of your ARN you will need it later to create CloudWatch alarms.


Per the CLI documentation, --adjustment is the amount to scale the capacity of the associated group. Use negative values to decrease capacity. For negative values, specify the option as --adjustment=-1 on Unix, and "--adjustment=-1" (quotes required) on Windows.

as-put-scaling-policy DFQPscaledown -g DFQPgroup "--adjustment=-1" --type ChangeInCapacity

Be sure to make note of your ARN you will need it later to create CloudWatch alarms.


Next, we need the Amazon CloudWatch tools. Download the command line tools here.

Similar to the Auto Scaling config, this is my setup for Cloud Watch - yours might vary a little:

set AWS_CLOUDWATCH_HOME=C:\aws\CloudWatch-
set AWS_CREDENTIAL_FILE=C:\aws\CloudWatch-\credential-file-path.template
set JAVA_HOME=C:\Program Files\Java\jre6

Once that is done, we will build out two alarms, one to trigger when there is a high number of ApproximateNumberOfMessagesVisible and another when there is a low number of ApproximateNumberOfMessagesVisible.

mon-put-metric-alarm --alarm-name DFQPHighAlarm --alarm-description "Scale up when ApproximateNumberOfMessagesVisible is high" --metric-name ApproximateNumberOfMessagesVisible --namespace AWS/SQS --statistic Average --period 60 --threshold 100 --comparison-operator GreaterThanThreshold --dimensions="QueueName=sqs_populator" --evaluation-periods 5 --alarm-actions arn:aws:autoscaling:us-east-1:512617028781:scalingPolicy:61432c71-0679-4988-928f-4a58a867d71f:autoScalingGroupName/DFQPgroup:policyName/DFQPscaleup

If successful, you will see: OK-Created Alarm

mon-put-metric-alarm --alarm-name DFQPLowAlarm --alarm-description "Scale down when ApproximateNumberOfMessagesVisible is low" --metric-name ApproximateNumberOfMessagesVisible --namespace AWS/SQS --statistic Average --period 60 --threshold 100 --comparison-operator LessThanThreshold --dimensions="QueueName=sqs_populator" --evaluation-periods 5 --alarm-actions arn:aws:autoscaling:us-east-1:512617028781:scalingPolicy:1611a772-cea0-492c-b245

If successful, you will see: OK-Created Alarm

In this example, the alarms cause the group to scale up by adding an instance when the average number of visible messages on the queue remains above 100 for 5 minutes, and to scale down when it falls below 100 for 5 minutes.
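To make the trigger logic concrete, here is a deliberately simplified Python sketch of how an alarm like the one above evaluates: with --period 60 and --evaluation-periods 5, it fires only when all of the last five one-minute datapoints breach the threshold. The function name and the simplified model are my own, not the real CloudWatch internals:

```python
# Simplified model of the scale-up alarm above (not the real service logic):
# fire only when all of the last `evaluation_periods` datapoints breach.
def should_scale_up(datapoints, threshold=100, evaluation_periods=5):
    recent = datapoints[-evaluation_periods:]
    return len(recent) == evaluation_periods and all(d > threshold for d in recent)

print(should_scale_up([90, 120, 130, 140, 150, 160]))   # True: last 5 all above 100
print(should_scale_up([120, 130, 99, 150, 160, 170]))   # False: one dip resets it
```

The practical upshot: a brief spike does not launch instances; the queue has to stay backed up for the full five minutes before the policy kicks in.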

Tuesday, 21 August 2012

Active Directory in the Rackspace Cloud

Wow. My first post in ages! There are LOADS of new and exciting features taking place all over the various clouds these days. As such, this is a quick guide I threw together that goes through the few steps required to set up Active Directory within the Rackspace Cloud. I am using their new OpenStack gear, but this same process should work without issue on the First-Gen gear too.

A few assumptions...
  • You are an existing Rackspace Cloud customer. (free sign up)
  • You know enough about AD and Windows Server in general to follow along with this light guide
  • You know what to do once the AD role has been installed
OK - let's roll.
  1. Log into your Rackspace account and create a new slice; since this is only for testing purposes, use the smallest slice you can get your hands on (1.0 GB RAM, 40 GB Disk, 1 vCPU). Make note of your administrator password.
  2. Once launched, log in via RDP or the local console, then download and install your favorite AV client.
  3. (optional) Download and install Firefox or Chrome. I find it much easier to navigate the web on a fresh Windows install with one of these.
  4. Set the time zone - not required, but good to do now.
  5. Enable and start Remote Registry. Very important - you will not get far without it.
  6. Remove any roles you don't need (i.e. IIS) and all of their dependent features. Reboot.
  7. Enable Active Directory Domain Services. Reboot.
  8. Run dcpromo.exe - populate the text boxes with your new domain's values. Reboot.
Once you get here you are nearly set. You can tweak your DNS if need be and your LAN settings but the out-of-the-box values from RAX work just fine.

The next, and somewhat painful, step is configuring the network rules to allow the various types of AD traffic. This is a great helper if you don't have them all from memory. One tip is to restrict RPC to a specific port (or range).
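If the rules live on the server itself (rather than in the provider's firewall), you can script the openings instead of clicking through them. A sketch using netsh follows - the rule names are placeholders of mine, and the port list (DNS 53, Kerberos 88, RPC endpoint mapper 135, LDAP 389, SMB 445) covers only the most common AD services, not everything you may need:

```shell
:: Hedged sketch - rule names are placeholders; port list is NOT exhaustive.
netsh advfirewall firewall add rule name="AD DNS"      dir=in action=allow protocol=TCP localport=53
netsh advfirewall firewall add rule name="AD Kerberos" dir=in action=allow protocol=TCP localport=88
netsh advfirewall firewall add rule name="AD RPC-EPM"  dir=in action=allow protocol=TCP localport=135
netsh advfirewall firewall add rule name="AD LDAP"     dir=in action=allow protocol=TCP localport=389
netsh advfirewall firewall add rule name="AD SMB"      dir=in action=allow protocol=TCP localport=445
```

Remember that several of these services also need their UDP counterparts, and that restricted RPC ports (per the tip above) would need their own rules.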

Good luck!

Tuesday, 14 August 2012

Sending email through Amazon SES with PowerShell

Amazon Simple Email Service (Amazon SES) is a highly scalable and cost-effective bulk and transactional email-sending service for businesses and developers. 

Here is a pretty basic PowerShell script that will send email through the Amazon SES SMTP interface. This assumes you already have SES enabled, and that you at least have access to send email from the SES sandbox - an environment specifically designed for developers to test and evaluate the service.

$smtpServer = ""   # your SES SMTP endpoint
$smtpPort = 587
$username = "your username"
$password = "your password"
$from = "[email protected]"
$to = "[email protected]"
$subject = "Test e-mail with PowerShell"
$body = "This is a test e-mail sent using PowerShell"

$smtp = new-object Net.Mail.SmtpClient($smtpServer, $smtpPort)
$smtp.EnableSsl = $true 

$smtp.Credentials = new-object Net.NetworkCredential($username, $password)
$msg = new-object Net.Mail.MailMessage
$msg.From = $from
$msg.To.Add($to)
$msg.Subject = $subject
$msg.Body = $body
$smtp.Send($msg)