Monday, 8 a.m.
This is going to be a great week. The team and I met on Friday and decided that this is the week we'll finally put the finishing touches on the new payroll platform the CEO has been asking for. I've cleared my day, we have budget approval, and everything is ready to go. This is exactly the reason I took this job – to implement new solutions and add more value to the IT function.
Monday, 1 p.m.
Not quite there yet. Someone in the HR department called early this morning to let us know that they clicked on a "suspect" file (how many times have I told them NOT to open those files??) and may have downloaded a virus, so I spent the morning chasing that down and rebooting a bunch of laptops. But I still have the rest of the day to work on getting that payroll system up and running…
Tuesday, 6:15 p.m.
Another hiccup. We started to boot up the software for the new payroll system and encountered a performance issue that we linked back to the new storage array we installed on-site to manage it. However, our IT infrastructure is complex and not very well-documented, so it’s been hard to track down the root of the problem (despite spending all day trying!). Placed a call to the storage vendor. Tomorrow’s another day.
Wednesday, 11:45 a.m.
Came in early this morning for a call with our storage vendor to see if we can sleuth out this performance issue, but got pulled into another fire drill instead. Last night's automated backup failed, and because it's the end of the quarter the accounting department needs all files to be updated. Looks like this will take up most of my day. Tomorrow…
Thursday, 8:15 a.m.
Finally finishing up redoing that failed backup. But bad news: One of our IT staffers gave notice today, and my supervisor told me that we might not have the budget to replace her until next quarter. We'll have to find ways to streamline so we can get by with our remaining staff. Still need to have that call with our storage vendor…
Thursday, 2:30 p.m.
Talked to the storage vendor, and here's the diagnosis: Too many applications mapped to different storage arrays, multiple backup applications, and about 12 purpose-built appliances on our system are causing an I/O problem with the new storage array. And now payroll isn't the only application that's difficult to troubleshoot—we're having performance issues with other systems too. This will take the rest of my day, if not the rest of the week.
Friday, 1 p.m.
Spent half of today dealing with more backup woes. Our new product team went to download customer survey data relating to a new product launch, only to find that it had been deleted—and that the last two backups have failed (and no one noticed), so there’s nothing to fall back on. I had put the need for a disaster recovery plan into this year’s budget, but it got cut.
Friday, 6 p.m.
What a week. The payroll system still isn't working, and most of my week was spent managing backup, storage, and security problems. When does this job become about innovation and not just reaction? How can our IT team become more proactive and service-oriented if we keep expanding our infrastructure and applications but manage them the same way? There might be light at the end of the tunnel, though: I had a conversation with my boss about reallocating some of our budget to cloud providers that could help us simplify how we manage data and infrastructure – freeing up our staff time so that we can focus on improving the way we do business, not just putting out fires. He's up for exploring new solutions, so we have some calls set up for Monday. Keeping my fingers crossed!