We Want Your Halloween IT Horror Stories!

If they are as entertaining as we expect, you could win an Amazon Echo.

It’s that time of the year again when things go bump in the night. Well, in IT, things go bump all year round. From DDoS to disaster, we bet you’ve got IT horror stories worth sharing. Change the names to protect the innocent, if you wish—and definitely if you must—but please share the most frightening, shock-worthy (hilarious, incredible, dunderheaded) scenarios you have encountered in your IT career.

The payoff? The three best stories will be selected by you and your peers, right here, and each storyteller will win an Amazon Echo smart speaker.

Submit your Halloween IT horror story—ransomware, end-user mischief, or an epic IT fail—by commenting on this blog or tweeting it to @Code42 tagged with #HalloweenIT. The Code42 team will then select the top six stories by 5:00 p.m. (CT) on Monday, October 31, 2016. We will post them to our Data on the Edge blog, where you, your cohorts and the public will have the opportunity to vote for your favorite story.

The top three winners will be notified by direct message on Twitter, or by email if their stories were submitted on the blog. Winners will then be announced on Code42’s Twitter account and the Data on the Edge blog.

Please read the terms and conditions to make sure you are eligible. To enter, you need to be 18 or older and live in the United States or United Kingdom. If you win, you will need to provide us with your contact details so we can send the Amazon Echo to you.

We look forward to your #HalloweenIT tweets and stories below. Happy haunting!

6 responses to “We Want Your Halloween IT Horror Stories!”

  1. We used tape backups; one complete backup took five tapes. We kept three complete backup sets that we ran bi-weekly, and we also backed up our complete database and system to CD-ROMs twice a year. Our server had a RAID level 3 configuration.

     One day, partitions began to crash one by one and we started losing all of our data. We were paperless, with 100% of our business scanned into the paperless environment. We replaced the drives in the server and went to the tapes to restore the backup. On our first backup set, the fourth tape broke, and the entire backup was corrupted. We then went to the second set and, again, a tape broke before the backup was restored. The same thing happened with the third set.

     We then turned to our custom-burned backup CD-ROMs for a full restoration, only to learn that the CDs were corrupted: the technician who ran the backup had burned individual sessions, not knowing that every time he stopped and restarted, it burned over what was previously burned. All of our backup efforts were in vain.

     We ended up sending the crashed drives, the broken tape sets, and the CDs to a company out west, which charged over $500/hour to restore our systems. This was 10 years ago, and $500/hour was shocking. It took about three weeks to get back up and running. This was obviously before the existence of the “cloud!”

  2. Years ago, I had a coworker bring in a desktop from his home office, complaining that it was sluggish and overheating. As I put it on the counter, I noticed what looked like fur sticking out of one of the exhaust vents, so I removed the casing to check it out.

    I have never seen so much random debris — it’s a wonder the machine was still able to run at all.

    There was enough dog fur inside to knit a thick sweater, along with assorted pins, paperclips and what looked like one odd shriveled early pea???

    I had to use a shop vac to siphon everything out, and even so it took over an hour. I bagged all the contents for posterity.

    When the coworker came by the following day to see if I’d figured out what was wrong, I presented him with the bag of trash.

  3. A number of years ago, my company had encryption software in place. It was not something we were going to continue using; we planned to migrate to BitLocker. So the software was not up to date, our licensing support contract had expired, and our testing environment was lacking as well.

     One month, Patch Tuesday came along. We started receiving calls about a couple of machines blue-screening, then more and more. We quickly learned what the issue was: a conflict between the encryption software and a Windows update. We had to notify users not to reboot their machines, and we had to pay the encryption vendor for troubleshooting and an upgrade to the client.

     We did not have a backup solution in place at the time, either. So we spent many weeks manually decrypting drives to recover user data and reimaging machines that we could not patch before they blue-screened. On top of that, we had a new contractor starting whom we could not train at the moment, because we were all in emergency mode. Many lessons learned from this experience.
