Tuesday May 16th, 2017 22:55 Veeam “Failed to create Hyper-V Cluster Wmi utils: Failed to connect to WMI on host” Fixed

Yes, I know I should be beaten with the SEO mace for that post title. It’s intentional because I spent entirely too much time on Google trying to use the actual error code as a guide to find the source of the error. Stupid me.

Nearly everything I could find that was specifically related to Veeam either provided L1-phone-support answers or only contained part of the reported error.¹

That said: If you’re seeing that particular error, the actual problem has a high probability of being a simple fix. And here’s how that goes.

1. Forget about Veeam. That’s most likely not your problem, even if it’s happening on every job. It’s WMI itself.

2.² Open WMI management on your Veeam server. It's under Start if you just type "wmi", or you can load it from MMC.

3. Right-click “WMI Control” – should have “(Local)” next to it at this point.


4. Check the properties to make sure it doesn’t say RPC Server unavailable. If it does, go to another server (that isn’t failing backup – hopefully you’ve got a standalone DC) and start over until you get a result with some basic sys info and a version number.

5. Close that window and right-click control again, but this time connect to another server.

6. Put in the name of the server that is failing backup and see what happens. For me, I got a positive result.

7. Now put in the IP of that same server. For me, RPC fail.

7.5 If those two things don’t happen, sorry. This procedure will probably not help you.

8. Given that scenario, connect to a DC that handles DNS for your domain.

9. Check the reverse lookup entries for the server failing backup. If you’re not seeing the correct name next to the correct IP, put it in there and delete any other reference to either (assuming they’re not accurate to another NIC’s IP, of course).

Be thorough. If someone gave a host a name outside the netbios limit and there’s a shortened entry, get rid of it. Only leave the un-suffixed FQDN entries. Check other subnets too.

RDNS is not something most of us clean up regularly, and conflicting entries can bork things.

10. Redo steps 5-7 from your step 4 server, connecting to whichever one(s) Veeam errored on. Remember to flush the DNS first. If good info now appears, you should be able to hit retry on the backup job(s) and walk away.
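The check in step 9 can also be scripted. Here's a minimal Python sketch (the function name and example hosts are my own placeholders) that compares an IP's PTR record against the hostname you expect, which is exactly the mismatch that makes WMI work by name but fail by IP:

```python
import socket

def rdns_matches(ip, expected_host):
    """Return True if the PTR record for `ip` resolves to `expected_host`.

    A stale or conflicting reverse-lookup entry is exactly the kind of
    thing that lets a connection by name succeed while the same
    connection by IP dies with an RPC error.
    """
    try:
        resolved, _aliases, _addrs = socket.gethostbyaddr(ip)
    except socket.herror:
        return False  # no PTR record at all
    # Compare the short (NetBIOS-ish) names, case-insensitively.
    return resolved.split(".")[0].lower() == expected_host.split(".")[0].lower()

# Example: loopback should resolve to localhost on most systems.
print(rdns_matches("127.0.0.1", "localhost"))
```

Run that against each server Veeam errored on; a False for a box that pings fine by name is your cue to go clean up the reverse zone.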

Hope that helps some other poor schmuck out there. No need to thank me; just remember to write it up when you solve your next annoying problem. Beats going begging to reddit.

1. Almost all of the posts had error messages that ended in some iteration of ‘bad credentials.’ Seriously, how are you employed if you needed to look that up?

2. This can also be done via CLI. This guy has a cut-and-paste-ready command.

In: Computers, How To

Wednesday November 16th, 2016 13:53 Pretty much my life

In: Computers, How To

Thursday May 5th, 2016 14:53 Logitech C920 webcam in a conference room

I’m not the guy you find in a web conference. I’m the guy who makes it happen. So it was a bit of a surprise to me that my office had been using a simple Logitech C920 – which is one of the most popular and highly-rated webcams on the market – and the video looked like utter crap.

Naturally looking to do better, I did all kinds of tests and looked up replacements. First off, the next level of webcam (ignoring the C930) is a full-price conference-room setup. We have one in the big board room, and that puppy was over $15k. You can go way down to $800 or so, but almost none of those come with built-in mics.

While looking, I tried to search out a cam with a large depth of field. The real problem with the C920 is that, from 15 feet away, everything is fuzzy. It claims 'infinity' focus past a couple meters, but not so much.

This is when I stumbled upon some tiny Finnish site that suggested opening it up and manually messing with the lens.

So, yank it apart and find this (pic via the Finns):


Grab yourself a pair of small but strong pliers and, very carefully, twist that lens clockwise a little bit – 1/8 to 1/4 turn. It’s going to look like you’re breaking things, but all you’re actually doing is tightening it against the body of the device.

The result: crystal.

I didn’t even have the thing fully in place when someone walked in for a pending meeting and asked me if I got a new camera. It looks infinitely better.

Score one more for the ‘when in doubt, poke it with a stick’ philosophy.

Note: In the disassembly, after the first two screws are removed and you need to pop off the mic covers, put a flathead in the slot below the two screws and pry up. Wedge the gap with your fingernail and do it again. That will allow it to pop out without risking breaking the other plastic latch-type thing on the other side, to which there is no access.

Note 2: It is noticeably slower to do the first autofocus adjustment after making this alteration. Make sure to wave your hand or something in order to trigger the adjustment more quickly.

In: Computers, How To

Tuesday April 26th, 2016 16:48 Deleting a Windows file whose path/file name is too long (the magic way)

In any shared file system, there will be at least one person who manages to get a file, 13 folders deep, to have a 487 billion-character file name. You will not see this file until it completely screws up a move project.

When you run into such a problem and try to look it up on the internet, there is always the 'use rd/rmdir' trick, the subst drive-creation trick, or robocopying everything over to a new dir.

One of those always works, provided you didn’t walk into the situation I did.

Your average person might assume that anyone who sees the title "Distributed File System" would think it has something to do with distribution. Or that the part of it distinctly labeled "Replication" is in some way related to replicating things.

Those people were not my predecessor.

Two major folders, one for user profiles, one simple shared space. The former is set up in DFS replication, but doesn’t replicate anywhere. The latter has a standalone, top-level namespace that points to…a share with an identical path.

This is just the part I could bring myself to investigate. I have no clue what other awful nonsense was going on there, except that it broke all the usual methods of deleting files that had gone over MAX_PATH. I decided to try manually changing every folder level to a single character to see if that would bring me under the 260-character limit.

So, here’s the magic part

I went up one level from the errant file and renamed its containing folder to a single character. After that, I went back inside and the actual file was suddenly available for renaming.

Why? I have no earthly idea and I don’t care.

It worked on every single one, so there must be some reason. But this is one of those rare instances where I’m just going to take the money and run.
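If you want to see the arithmetic behind why one rename helps so much, here's a quick Python sketch (folder and file names are made up) showing how a single one-character rename shortens every descendant path at once:

```python
# Classic Windows MAX_PATH is 260 characters, counting the drive
# letter and separators.
MAX_PATH = 260
SEP = "\\"

# A made-up path nine folders deep with long-winded folder names.
folders = [f"Quarterly_Reports_Archive_{i:02d}" for i in range(9)]
filename = "final_FINAL_v7.xlsx"
deep = "D:" + SEP + SEP.join(folders) + SEP + filename
print(len(deep))  # over the limit, so the usual tools choke on the file

# Rename just ONE containing folder to a single character: every path
# beneath it shrinks by 27 characters in one shot.
short = deep.replace("Quarterly_Reports_Archive_00", "x", 1)
print(len(short))  # now under MAX_PATH, and the file is reachable
```

Renaming the parent, rather than the file itself, works because the rename operation only has to address the (shorter) folder path, while the file's own full path drops below the limit as a side effect.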

And since it appears the entire intertubes has never even heard that this was possible, I’m going to spend a few minutes walking around like this:

In: Computers, How To

Thursday April 7th, 2016 18:50 Slow networking on Hyper-V guest VM and the VMQ setting

So, I ran into the situation today that many who run Dell servers have: I just built my first VM on a brand new server, and the network performance is complete dogshit.

Many know by now that the problem is related to the Virtual Machine Queues (VMQ) setting. Your usual 2ms ping to Google is suddenly 200ms. RDP is a joke, if it works at all.

So, I went in to the VM settings, to the network adapter, and in to the hardware acceleration settings page. I uncheck the VMQ option. It does not work. I restart the VM. It does not work. I restart the host**. It does not work. I light my hair on fire. It does not work.

It wasn’t until I rephrased my Google search for the 7th time that I came across a Sysprobs post that actually included pictures showing that the stupid setting also lives in the physical NIC config.

That nonsense in the VM setup where you tell it not to use the hardware capability at all? Doesn’t matter. You have to outright disable it.


On a side note, as I was working through this, I happened to mention my stumpedness to a decidedly higher-ranking colleague, who shared possibly my favorite compliment of all time:

(recounting comment about me said elsewhere) “He’s kind of a know-it-all, but he’s also kind of a know-it-all.”



**This puppy has the new 12 Gbps SAS interfaces, which, even with lowly 10k drives in RAID 5, push 500 MB/s read/write, so they might as well be SSDs – full restart, start to finish, is 75 seconds.

In: Computers, How To

Thursday February 18th, 2016 15:38 Can’t add security group/distribution list to calendar permissions

One day when I was out of the office, a user got my colleague to change up permissions on a room calendar. Since it’s not particularly his area, we ended up with the entire organization being bumped back to ‘availability only’ status. When I got in to re-add the main staff group to the permissions list, I got myself an interesting error:


If you can’t read that, the problem is:

The user “EMCF Staff” was found in Active Directory but isn’t valid to use for permissions. Try an SMTP address.
    + CategoryInfo : NotSpecified: (:) [Add-MailboxFolderPermission], InvalidInternalUserIdException
    + FullyQualifiedErrorId : [Server=XXXXXX,RequestId=d501ac25-6ccd-421c-b89e-578de600d709,TimeStamp=2/18/2016 5:34:00 PM] [FailureCategory=Cmdlet-InvalidInternalUserIdException] BEB2A843,Microsoft.Exchange.Management.StoreTas
    + PSComputerName : XXX.domain.com

It took a surprising amount of poking around to figure out exactly why I couldn’t add that – especially considering that the same group had permissions on a different calendar. So, for consolidation’s sake, here’s the fix.

Hop on ADSI Edit and bring up the group in question. Since security groups can also function as distribution lists in Exchange 2013+, some of the ones you have might have been converted from a distribution group at some point in the past. That leaves behind a relic that breaks the ability to add them in the Exchange shell:


You can see from this handy list right here that it still has the distribution list code on the msExchRecipientDisplayType property.

The easy fix?

Simply clear that value, setting it to null – “<not set>” – and rerun the shell command that failed before. Voila.

We all know how whiny Exchange can be about tiny details like this, and if it stopped me from making a simple permissions change, I’m not going to bet against it causing a completely different problem if left as null. So, I changed mine right back afterward. You might be more bold, and if you are, do let me know how it turns out.

In: Computers, How To

Thursday January 21st, 2016 17:42 SMTPSEND.DNS.NonExistentDomain: the magical error of wonders

Sometime around 4 p.m. or so the other day, all of my incoming mail just stopped working. And oh, what a fantastical tale do I have to tell about why.

I’m looking at a 2010/2013 coexistence that’s just one step away from having 2010 removed (I’m just doing all the CUs to guard against the old 4.3.2 random error that came about after SP1). So, mail was passing through 2010 on its way to 2013 where all the databases live.

In setting up 2013, I got a pretty solid course on what exactly all those new receive connectors do and how important they are for the communication and trust between the two editions. That being the most likely cause of a communication failure between the two, it was easy to diagnose as not the problem.

A little more background: we had a DC go down a little while ago, and I had rebuilt it as a new server two days before this problem came into being. It wasn’t promoted, and it didn’t have the ADDS or DNS roles installed. No reason that should be a factor.

So, following the error directly, I manually entered the live, functioning DNS into the 2010 NICs, removing the secondary. Then I went to the host (2010 is a VM) and did the same.

No dice.

I went on looking for other causes, but in the end came back to DNS.

To correct this, I had to manually specify the NIC with the exclusive DNS settings within Exchange. You go to Server Config -> Hub Transport, then the properties of the server and the “Internal DNS Lookups” tab. From the drop-down, select the NIC – instead of all IPv4 – and it displays the available DNS servers on that NIC. So long as the list is right, you’re good.

Immediately, over 500 emails in the queue started flowing out.

How in the world Exchange managed to ignore the server NIC, ignore the host NIC, latch on to a DNS server that no longer existed, then fail to find 2013 even when it could resolve through ping and nslookup, I have no earthly clue.

But if you find yourself in a similar mess, give that a shot.

In: Computers, How To

Wednesday September 30th, 2015 12:34 It’s Teamviewer time [may His Noodly Appendage help me]


Keeping with the tradition of intentionally destroying its own user base, Logmein has decided to screw over my company by jacking up their prices. Again.

I can understand how annoying it was last January for personal users to suddenly get hit with a $99 charge for 2 lousy computers. At the same time, I can’t understand who would have been a Logmein user if they only had 2 computers to which they’d want to remotely connect. Even with no one home, there are 3 in my living room. That, my friends, means a $249 annual to stick with LMI.

To which I say:

So, with the prospect of no longer being able to use that account looming, I started working on alternatives.

The trouble being: ports.

Being old-timey and liking things like VNC, I got really excited when I came across NoMachine. Nice GUI with tiled saved machine connections. Just throw in an IP, open up a custom port on the home router, NAT the traffic to individual machines (everyone assigns static IPs at home, right?), and click go.

The place where I happened to test from, however, wasn’t allowing those ports to talk to anything.

Logmein uses TCP on 443, so it’s allowed traffic for nearly every network on the planet. That = awesome.

It’s not really a remote connection, but a simple internet connection to a central broker that’s swapping the data and sending it along with no more fanfare than a basic secure web request.

Now, connecting to my media server or coffee-table laptop pretty much never qualifies as a need. But this is a computer thing, and my brain doesn’t work that way with computer things.

It was then that I learned TeamViewer uses an initial attempt on 5938. If that’s not available, it’ll go to – you guessed it – 443.

Crap. Really didn’t want that to be my best option.

My longstanding disdain for TeamViewer even made me consider opening up RDP to my house. Then I remembered I’m not insane.

Initial testing seems to indicate that the Big Hate is no longer a thing – it used to force manual update on new versions, which broke all connections – but only time will tell if this thing drives me crazy enough to do something even more drastic.

In: Computers, How To

Monday September 7th, 2015 16:56 An easily-missed pitfall when testing upgraded Exchange

We’ve got one client going from 2010 to 2013 and, believe me, the irony that this is happening right next to the 2016 release has not been lost (though, by all accounts it’s basically 2013 with slightly different colors).

This is the kind of operation that needs to happen with Absolutely No Errors At All.

Naturally, that means going completely overboard on the testing to a point where Outlook is starting to look at me funny and may have snuck out during the night to see a lawyer about a restraining order.

In this particular scenario, we have the luxury of a separate public IP that’s been NATted to 2013’s internal. Don’t have to turn on CAS at all, which saves setup time for the testing.

So, I rewrote the hosts file on an internal computer and one of my laptops so we could do both inside and outside tests. A few errors here and there, but nothing that couldn’t be cleaned up pretty easily.

Everything worked fine except Outlook just wouldn’t get the new server info from autodiscover on an existing internal mailbox. Manual entry: fine. New profile: fine. External: fine. People who were migrated would need manual profile recreation.

So, the only thing that wouldn’t work is the most important thing possible for a project that requires Absolutely No Errors at All.


It took me a few days of wondering how in the hell autodiscover could possibly be affected, only internally, by the presence of an existing Outlook profile. I mean, that’s the kind of thing that sends you into full-on hate-research mode because no matter what you look at, nothing is quite describing the exact same situation and all these semi-related things have absolutely no effect and COME ON this has to be done right or we might as well not do it at all.

Where the idea came from, I cannot recall. But, the oft-overlooked stepchild of the OS was our culprit:

Windows. Credential. Manager.

As the tester, I had multiple test accounts that saved their creds in there, creating a situation in which there was always at least one set pointing to the old server.

One day, I will look up all the tiny details as to why such a thing even matters to Outlook, but it is not this day. On this day, I must warn the people.

Hopefully someone else is spared this pain. Good night, and good luck.

In: How To

Monday April 20th, 2015 12:51 In which we rewrite install options to make Office 2013 do our bidding


There are only so many computers I’ve put Office 2013 on. Makes sense. A lot of people have 2010, which works just fine, and the drastic interface change is pretty uncomfortable for less-technically-inclined users.

So today was the very first time that I ran into a situation where someone needed Office, but not Outlook.

And now to explain why the above picture applies:

Because you can’t do that.

I mean, I can do that. I know how to hand-edit xml files and work with the command line after going through several pages of reference material from TechNet. You (read: nearly everyone else on the planet) cannot.

For the tl;dr crowd: The only way to make this work is to use the Click-To-Run deployment tool: command-line download a separate setup package, hand-edit an XML config, then command-line the install run, all to replace the check-box function they removed with '<ExcludeApp ID="Outlook" />'.
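For the curious, the config ends up looking something like this sketch (the SourcePath and Product ID here are examples only; match them to your own download location and edition), which you then feed to the deployment tool's setup.exe with /download and then /configure:

```xml
<!-- setup.exe /download configuration.xml
     setup.exe /configure configuration.xml -->
<Configuration>
  <Add SourcePath="C:\ODT" OfficeClientEdition="32">
    <Product ID="HomeBusinessRetail">
      <Language ID="en-us" />
      <ExcludeApp ID="Outlook" />
    </Product>
  </Add>
</Configuration>
```

The ExcludeApp line is the whole trick: it's the XML stand-in for the checkbox the installer no longer offers.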

I get that it’s par for the course that a software company wants you to use their software above all others. Doing this at the cost of making your software harder to use: really, really stupid.

Especially if you imbue that in someone who tells a whole lot of people what software they should be buying.

Edit: per the comment below, technically Student will do this natively, but, as you can see down there, it wasn’t so much an option at the time.

In: Computers, How To (2 Comments)


IT guy, dev, designer, writer.

Got a degree in print journalism from UF but history dealt some bad cards to that industry, so I moved back to an earlier love: the computer.

Was recently at ZMOS Networks, but am now the Senior IT Associate at the Edna McConnell Clark Foundation.

My name is moderately common, as are a couple screen names, so always look for the logo to make sure you're reading something with official Km approval.

You can get to me directly with kyle(@)kylemitchell.org