This Barren Land

It has been quite some time since I last blogged. In fact, it’s been since our trip out here to the land of mountains and rain. But it’s not this area that’s barren – it’s my blog.

Yes, it’s been a while, but I have been busy: busy with the new job and getting oriented to a new way of working. I have a team now, and that’s new for me – no transitioning in and out of a project never to see them again, no flying to and fro. It’s been a real change for the better, I feel. I will be starting in a new office this coming Monday. We moved one building over, and now I have a window in my office. I’ve also been busy trying to get oriented to the area and learn where things are – where the good restaurants are, where the stores are.

Fortunately, we just decided to extend our lease when the current one is up. We’ll get to live in the same house for a couple more years while we deal with renting and then hopefully selling our Virginia house.

I’m really glad that we moved our blogs to the cloud: our server was down for two months while we moved from apartment to apartment to house. We’ve also been able to cut down on the number of computers in the house. Previously, I was using 5 computers just to host the home’s “infrastructure” and web sites. Now, we’re down to 3, which should save us a bit of money on power.

Now, my next task for the sites is to clean up what little data remains on my old web server, make sure it’s on the blogs, and then shut it down.

Most are done, but one more site remains.


New Airport, New Country

Last month, I flew to Singapore. I have a new airport and a new country to add to my list.

Why? Because I’m crazy. I want to make it to 1 million miles this year, so I booked that trip last year in order to net over 21K miles. Well, that trip is now over and done with, and I’m that much closer. I’m within 12K miles and should get there this year – which is also the bad news: it means I’ve been travelling again.

Not quite as fun as I remember.

I’d like to stay home for a while now…


Seems we have a lot of it. In fact, I feel like we’re going through hard drives like potato chips. [Okay – not quite that fast, but still!]

We finally filled up our actual data drive, which was 1.5TB of storage. That’s our personal data plus software images and such, not virtual machine storage or anything like that. So, I purchased a new 2TB drive to extend our data partition. I previously had two 1.5TB drives mirrored, and I needed to take that storage and convert it to a RAID 5 array using those 3 disks. I tried to find a 1.5TB disk but couldn’t, so I just took the next size up and turned off the file server.

I put the disk in and found that I couldn’t just create a new Windows Storage Space and then convert it from “simple” to “parity” mode – that requires a reformat. My RAID card can do the conversion, but it’s a pain to manage and I don’t like the way it works, so I decided to destroy the old array and build a new one fresh. Once I copied all the data off [and I barely had enough room!], I destroyed the array and mounted the disks as separate drives to be managed by Windows.

All was well until I decided to reboot.

The RAID software kept locking up on me, so I booted into the BIOS and configured the disks directly, then attempted to boot. Got a flashing cursor. Nice.

After a couple of hours, I was finally able to properly order the disks and fix the boot partitions [even though nothing had changed on those drives!] and it booted normally.

Now, I have created a new storage pool in “parity” mode using those three disks, which equates to almost 3TB of redundant storage. I lose 500GB because the new drive is bigger than the other two, but when I add more disks in the future, I can reclaim that space.
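For the curious, the capacity math is simple enough to sketch (a toy calculation only – Storage Spaces allocates in slabs under the hood, so real numbers will differ slightly):

```python
def parity_usable_tb(disks):
    """Usable space of a parity (RAID 5-style) pool with mixed drive
    sizes: every drive contributes only as much as the smallest one,
    and one drive's worth of that goes to parity."""
    if len(disks) < 3:
        raise ValueError("parity needs at least 3 drives")
    return (len(disks) - 1) * min(disks)

disks = [1.5, 1.5, 2.0]                      # TB: the two old mirror drives plus the new one
usable = parity_usable_tb(disks)             # 3.0 TB of redundant storage
stranded = sum(disks) - usable - min(disks)  # 0.5 TB sitting idle on the bigger drive
print(usable, stranded)
```

Add a fourth drive of 1.5TB or more later and the usable figure jumps to 4.5TB – that’s the reclaim.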

The data is now being copied back – it might take all day.

On another note, one of the other hard drives – the one on which I stored my virtual machine configurations – failed a couple of months ago. I bought a replacement 2TB drive, stuck it in, and the array nicely rebuilt itself. However, I was sitting here with this bad disk thinking, “I wonder if there’s some warranty…”

Sure enough, there was! The bad drive goes back to the manufacturer, and the replacement can be added to the new storage pool to increase the amount of available space.

Okay. Enough geeky stuff. Back to work.

Aliens and the Surface Pro

I’ve been busy for a while with quite a lot of things, so I have not had much time to write [or rant for that matter] lately.

First and foremost, I will tell you that Laura has been painfully making do for the last several years with my “leavings” from my employer. She’s been using my hand-me-down extra laptops when I get a new one.

Lately, though, that wasn’t really working for us. She loves to do photo editing and also loves a good, fast, stable machine. My old work laptop didn’t quite have a powerhouse of a graphics card [neither did my newer one], and it slowly succumbed to entropy and now crashes frequently for no reason. So did my newer one whenever I put any kind of strain on the graphics engine – like, say, playing a game… if I were to do such things…

It was definitely time to get her something brand new, with a warranty. I gave her two options: one, get a super-high-performance laptop that would do everything; or two, get two purpose-built machines (one powerful desktop for home and one ultra-portable for elsewhere). Either option cost about the same amount of money.

What she discovered is that the laptops with the specs she liked weighed 10 pounds, had 17-inch screens, and were usually “gaming” machines. She also didn’t like the fact that if something breaks on a laptop, you usually need a professional to fix it. I can change parts in a desktop rather easily, so no calling the technicians on that one. And she couldn’t get quite the performance from a laptop that she could from a desktop at about half the price. So that’s what we decided. All that was left was picking out the two pieces of the Laura Computing Environment, from now on referred to as “LCE”. 🙂

The easier choice was the desktop for the LCE. We’ve both always admired Alienware computers, and now that they’ve been acquired by Dell, they seem more affordable. We chose a strong performer, but without dual video cards [without a second monitor they’re unnecessary], with 32GB of RAM, a super fast CPU, and an SSD for the system drive. All of our data files are usually on the network, so storage isn’t a problem. We chose speed over space with that disk, but it’s still 256GB.

Then, we had to choose the LCE mobile machine. We looked at several options, playing with them in some stores, but none seemed sufficiently powerful and robust when combined with light weight and a touch screen.

Anyhow, after my employer gave me a Surface RT, I kept finding it in her hands. Hmm… there might be something there, I thought. So, when the Pro became available, we showed up at the store – to a huge crowd! We miraculously obtained the 128GB Surface Pro [which, it seems, everyone else wants, too] that the manager said had a “damaged box”. Turns out it wasn’t damaged as far as we could see. Everyone else was left in line to reserve a back-ordered one.

So: the LCE now consists of the Surface Pro 128GB and my old laptop. The Alienware desktop has not yet arrived, but it will in about a week I think.

I’ve set up the Surface Pro for her and even tested it out in a café with Lightroom on it doing some picture edits and uploads for about 3 hours. I’ve got to say: it’s pretty amazing for such a tiny thing!


You can take that in a couple of ways. First, at least chronologically, we have surfaced from our dives and our vacation in fine diving form. In fact, we both took and passed our Advanced Open Water diving certification!

Yay, us!

It was a very good refresher course for us – so much so that Laura got so excited about diving that she didn’t want to stop!

We loved it in the Keys – it was very relaxing and laid back. We are definitely going back… just not to Miami. Miami wasn’t so great – way too hectic and crowded for a vacation.

Secondly, and most importantly, I’m typing this blog entry on my brand new Surface! Full disclosure dictates that I tell you that Microsoft is my employer, but even so – this thing is way cool. I’m going on a couple of hours with hardly a dent in the battery. It’s a little small, but that’s to be expected of a tablet device.

Any way you slice it, it’s a cool machine.

A Return To Normalcy

Whatever that means…

[WARNING: More Geeky Stuff Ahead!]

I had been running the site and all my servers behind a hardware firewall [not a smart one] as an interim measure since, during the disk repairs, my TMG server didn’t come back. Why didn’t it come back? I’m not certain I will ever know the in-depth truth, but it comes down to this:

When I finally tried to make it work again, nothing I could think of worked, not removing and re-adding network cards, not rebuilding and reinstalling the server from scratch – nothing.

So, last night, while trying to put Laura to sleep with technical talk [really – geeking out totally puts her down!], I verbalized several ideas I had. I had even begun to think that my hardware might have “just failed” at the precise moment I shut down and moved a file on my hard drive. Right.

One of the ideas I had, though, was to delete the virtual switch on my Hyper-V server. Since that was the least destructive method to try, I did that first.

Lo and behold, it worked! I then proceeded to reinstall all the Forefront TMG software and patches on the server and re-import all the application hosting settings – which is how you’re now seeing this page. This is a much more secure firewall for those inside and out.

My best guess is that there was some cached setting in that virtual switch that I could not clear when I “moved” the server from one disk to another. Once I recreated the switch, traffic began to flow just fine.

Now, I can sit back and relax… and wait for Windows Server 2012 to be released!

Then the upgrades start again… 😐

UPDATE: Windows 8 just released! It won’t be long until Server 2012 is available for me to upgrade.

It’s Difficult To Be A System Administrator

[WARNING: Geeky Content Ahead]

Sometimes, things go well – most of the time, usually. But then you have things like a massive power outage hitting while you’re in the middle of doing something to prevent catastrophic data loss when the power goes out. And what you were doing in the middle of the power outage actually CAUSES catastrophic data loss. 😦

Two weeks ago, one of my hard drives on my personal file server failed. It’s a 1.5TB hardware RAID1 (mirror) array. For those who don’t know what that is, I have a special device in my server that allows me to build redundant arrays so that in case of the failure of a single disk, no data is lost. That’s what I have: two 1.5TB drives mirrored to provide full redundancy. One of them failed and the server started to beep… telling me “fix my drive!” So I did. I got a new drive and it spent the next 20 hours or so rebuilding the 1TB of data that is on the drive. Problem solved!
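If you’ve never seen it, the whole idea of RAID 1 fits in a few lines of toy code (purely illustrative – the real work happens in the RAID card’s firmware, not in Python):

```python
class Mirror:
    """Toy RAID 1: every write lands on both disks, so losing any
    single disk loses no data."""
    def __init__(self):
        self.disks = [{}, {}]          # two "drives" as block -> data maps

    def write(self, block, data):
        for disk in self.disks:
            if disk is not None:
                disk[block] = data     # identical bytes on each surviving disk

    def fail(self, index):
        self.disks[index] = None       # simulate a dead drive (cue the beeping)

    def read(self, block):
        for disk in self.disks:
            if disk is not None and block in disk:
                return disk[block]
        raise IOError("both copies gone -- data lost")

array = Mirror()
array.write(0, b"family photos")
array.fail(0)                          # one drive dies
print(array.read(0))                   # the surviving mirror still has everything
```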

Then, I did something dangerous. I started to think.

I thought, “Hmmm… that hard drive failed way too early in its lifecycle. What if my other ones fail too?” You see, the drive I just replaced held only my personal data [which is very important], but it wasn’t part of my infrastructure. I could use that data on any machine, but I’d have to rebuild a ton of stuff to get to it if I had many more failures.

Just so that you have an overview of my “home server farm”: I have [for purposes of this discussion] 2 host servers that serve up virtual machines (VMs). I have somewhere around 30 VMs spread across these hosts. Most are work-related, helping me design and build solutions for my customers and do research, and some are for my personal use, such as a desktop, web server, email, and the file server mentioned above. There are also a few machines to maintain the farm – domain controllers, DNS, certificate authorities, etc.

Each one of these computers is actually a file on one of the two host boxes – and those files are rather large. If something were to happen, say a power outage, there’s no guarantee that my battery power would last long enough for me to shut down the machines cleanly.

With all that in mind, and the idea that in this instance a “server” is actually a big file on the order of 40GB to 200GB in size, I thought that making the disk upon which these files sit a mirrored array would be a smart thing to do. Which it is.

So: on Friday afternoon, I began the process of creating mirrors on two of my servers. One server, with most of my work VMs on it, has no RAID card. On that one, I used the Windows OS to mirror two 1TB drives after shutting down the machines and moving the files off of one. Once I had that started, I moved to the other server, my personal one, and did the same thing – shut down the VMs and moved them to another disk. Then, I repurposed the vacant disk and joined it to the one which now held the VMs and began building a mirrored volume.

Now, with the RAID card, I can do this on the fly while the disk is available. Before I turned the machines on, it said it would take about 2.5 hours. I turned the machines on. Now, it said it would take about 20 hours.

I should have left them off.

Well, about 2.5 hours later, at around 10:30 that night, the power went off. Some 14 hours later, it came back, and I went down to power things on. It all “looked” okay for a while – I was getting email again – but the web server was wonky and slow, and some other things were just kind of weird.

Looking further, it appeared that the rebuild had to be restarted since it had lost power. I restarted it. [Note: the work server using the Windows OS RAID simply came back on automatically and began rebuilding the mirror and it completed with zero errors.] About 2 hours later, there was a loud obnoxious beeping from the server closet. The rebuild had failed and the drive simply dropped offline. Gah!

All my VMs disappeared for a moment. Rescanning the array with the utility brought them back, but now I was very worried. Since the VMs were all off, I copied all the files to a second drive and built the array from scratch [after several attempts to find and fix whatever bad sectors or corrupt tables were on the drives]. I moved the files back after it completed, then turned on the VMs – only to have half of the machines not come back. The half that mattered, of course: one domain controller, my web server, my desktop, and the email server were the biggest losses. I had the old disks, but I had to actually reinstall the OS on all of them and begin the slow, painful process of restoration.

Which is where I am today. I have the web server working [obviously], and we now have email with empty mailboxes. I have a recovery database ready to go, but there are issues with the old database, so I need to finish patching the Exchange server to bring it back to the version it was on before so the recovery tools will work properly. That’s what I’m doing now.

All of this work has taken 5 days or so to get things back up. I now have most of the critical VMs housed on RAID drives. I just need one more to complete the process.

At least I learned more about doing Exchange server mailbox recovery.

Power Outage

Many of you may have noticed the site down for several days. That is directly due to the power outage. Not that we’ve been without power, mind you. Power came back within 12 hours of loss. (Yay! Air Conditioning!) What happened was that I was in the middle of moving critical files from one disk to another when the power failed. The failure damaged several of my servers including the web server and the email server. I have no email for now, but hope to have it up soon – even if I have to do without the old stuff.

But, as you can now see: the web site is up and running.

Thank you for your patience!

New Online Backup Solution

It may not be new, but it’s new to me since I last researched these things. When I first sought a backup solution for our home, I settled on Elephant Drive for two reasons: price (it was $4.95/month for unlimited storage) and software compatibility. The latter meant that I could run the software on our server, where all of our data is stored, instead of on one of our laptops or desktops, which never have anything stored locally. The software had to run on Windows Server.

All was well once I had Elephant Drive running. Then, after two years of great service, the hammer fell.

They wanted to raise prices and eliminate the unlimited package, which meant I would have to pay almost $200/month for the nearly 1TB of data I was backing up. That was quite a price jump, so I ditched them and have been without backup… until now.
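Some quick back-of-the-envelope arithmetic on that jump, using the figures above:

```python
old_monthly = 4.95       # the original unlimited plan, per month
new_monthly = 200.0      # roughly, for my ~1TB under the new pricing

price_multiple = new_monthly / old_monthly          # about a 40x increase
extra_per_year = (new_monthly - old_monthly) * 12   # ~$2,340 more every year
print(round(price_multiple), round(extra_per_year, 2))
```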

I found a new company called CrashPlan which has great plans, even family backup plans, and a client that works on Windows Server. I’m now back to backing up!

I’ll soon be backed up again in the “cloud” – “soon” meaning about a month, due to the vast amount of data that has to go up.
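Why a month? At a typical residential upload speed (I’m assuming something around 3 Mbit/s here – yours may vary), the arithmetic is unforgiving:

```python
data_bytes = 1 * 1000**4           # ~1TB of data to upload
upload_bits_per_sec = 3_000_000    # assumed ~3 Mbit/s residential upstream

seconds = data_bytes * 8 / upload_bits_per_sec
days = seconds / 86400
print(round(days, 1))              # ~30.9 days: "about a month" checks out
```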