Well that went by quickly, didn’t it? As I type this blog post it’s already November 2016, and it’s time to look ahead to our plans for 2017. With that in mind, I’m going to outline the goals I’m setting for my clients in 2017. As with our personal goals, some of them will happen and some won’t, but knowing where we want to go will help us progress regardless.
I have two caveats about this list. First, I’m a Microsoft Data Platform consultant at Pythian, so the examples apply to that platform; however, most of the advice translates easily to other technology stacks. Second, this is my own guidance and opinion; the beauty of working at a place like Pythian is that lively technology debates are encouraged and can be found around literally any corner. If any of the following resonates with you, please reach out to your main Pythian point of contact and start a conversation.
On to the list!
No more unsupported versions
For SQL Server shops, this means we urgently need a plan to get off anything older than SQL Server 2008. Suppose you have some legacy app that was never updated by the vendor and you’re stuck on an older version. Then let’s move to a newer version while keeping the older compatibility level (if possible) or, worst case, virtualize it and keep it running with the smallest footprint affordable.
And if you are running SQL 2008, let’s start planning now to move to a newer version or a cloud PaaS service.
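As a starting point, an inventory check like the following can flag the instances that need a migration plan. This is a minimal sketch: the host names and version strings are made-up examples, and the support floor is just the SQL Server 2008 major build number; check Microsoft’s lifecycle pages for the actual end-of-support dates.

```python
# Sketch: flag SQL Server instances running versions past end of support.
# The inventory entries below are illustrative, not real hosts.
# SERVERPROPERTY('ProductVersion') returns strings like '10.50.6000';
# the leading number identifies the release (9 = 2005, 10 = 2008/2008 R2,
# 13 = 2016).

SUPPORT_FLOOR = 10  # SQL Server 2008 = 10.x; anything older is unsupported

inventory = [
    {"host": "legacy-app-01", "version": "9.0.5000"},    # SQL Server 2005
    {"host": "erp-db-02",     "version": "10.50.6000"},  # SQL Server 2008 R2
    {"host": "new-dw-01",     "version": "13.0.1601"},   # SQL Server 2016
]

def major(version: str) -> int:
    """Extract the major build number, e.g. '10.50.6000' -> 10."""
    return int(version.split(".")[0])

def flag_unsupported(servers):
    """Return the hosts whose major version is below the support floor."""
    return [s["host"] for s in servers if major(s["version"]) < SUPPORT_FLOOR]

print(flag_unsupported(inventory))
```

From there, each flagged host gets a migration target: a newer version with the old compatibility level, a minimal VM, or a cloud PaaS service.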
What’s in it for your business: unsupported versions put you in a difficult position if you hit a major product bug or issue. More importantly, once the software is unsupported, the vendor stops issuing security patches.
New product versions will also bring in new features that you can leverage for faster application response, quicker insights or improving the developer experience.
Are you sick and tired of this version catch-up game? Then let’s talk about leveraging cloud PaaS services where patching and upgrading is done transparently by the provider.
Virtualize Your Workloads

Now I’m going to catch some heat for this one from some of my colleagues (you know who you are). Yes, if you’re running a big SMP low-latency OLTP workload or a large 128-CPU analytical one, this will not apply. However, for many other small and medium workloads, there’s simply no reason to deploy them directly on bare metal. Virtualization allows you to provision capacity more effectively, automate the creation and configuration of environments, prototype and test faster, and get a built-in first layer of HA. You don’t want to deploy virtualization on-premises? Great! Any of the public cloud providers will be more than happy to provide Infrastructure as a Service capabilities.
What’s in it for your business: better configuration management through golden images that are fast to deploy. This velocity decreases your time to develop, test, prototype (potentially fail) and release. Compute resources can be leveraged more efficiently and on top of that, you get a basic layer of HA.
Automate and Alert
You have proper runbooks with well-documented procedures and deviations. That’s awesome; now let’s take it to the next level and drive for automation. If you’re hesitant about automating processes that you know work well manually, that’s OK. I understand the hesitation, and that’s why all the automation catches exceptions and errors and sends them to an operations person who understands what’s going on. If everything goes well, there’s no human intervention. In the odd case where it doesn’t, an alert goes out, and that person can then improve the automation to catch the case next time.
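The automate-and-alert loop described above can be sketched in a few lines. This is a hypothetical wrapper, not a specific product feature: `send_alert` is a placeholder you would wire to your paging, email or ticketing system.

```python
# Sketch of the automate-and-alert loop: run the automated task, stay
# silent on success, and only involve a human when something unexpected
# happens.

import logging

def send_alert(message: str) -> None:
    # Placeholder transport: swap this for your paging/email/ticketing
    # integration.
    logging.error("ALERT to on-call: %s", message)

def run_automated(task_name, task, *args):
    """Run a task; return its result on success, alert a human and
    return None on any exception."""
    try:
        return task(*args)
    except Exception as exc:
        # The human who investigates feeds the fix back into the
        # automation, so this case no longer pages next time.
        send_alert(f"{task_name} failed: {exc!r}")
        return None
```

For example, `run_automated("nightly index maintenance", rebuild_indexes)` would hum along quietly month after month, and only surface the one night the job hits something it has never seen before.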
What’s in it for your business: automation frees people up for more valuable work. It also reduces the tediousness of repetitive work and the potential for human error. The automate-and-alert feedback loop forces people to improve their understanding of the process and the product (SQL Server, Oracle, MySQL, etc.), their error handling, and their detection of edge cases. In other words, it keeps them SHARP.
Effective and Actionable Monitoring
My colleagues on Pythian’s Managed Services team and I have accumulated countless on-call hours for many mission-critical systems. If there’s one thing I really don’t like, it’s a noisy monitoring system that produces alerts that aren’t actionable. When your monitoring system asks a human being (in-house or a service like ours, it doesn’t matter) to spend their precious time looking at an alert, you have to make sure there is actually something valuable that person can do to improve the service or prevent an issue.
We don’t want the human target of the alerts to constantly ignore the noise from the tool or to acknowledge situations they can’t fix. That is a monitoring system in need of optimization. In Pythian’s case, I personally don’t want our teams spending time on noise; I want them actively engaged on real alerts as close to 100% of the time as possible.
What’s in it for your business: there are two benefits to tuning monitoring to the max. First, if the monitoring generates lots of noise, people get desensitized and are more likely to miss a real alert when it happens. Second, you’re wasting valuable time and energy that would be better spent elsewhere.
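One common way to move toward actionable monitoring is to page only on conditions a human can fix, and to deduplicate repeats of the same condition. The sketch below is an assumption-laden illustration: the set of “actionable” alert types and the 30-minute dedup window are examples you would tune for your own environment.

```python
# Sketch of noise reduction for a monitoring pipeline: page only on
# actionable alerts, at most once per dedup window. The ACTIONABLE set
# and window size here are illustrative, not a recommendation.

from datetime import datetime, timedelta

ACTIONABLE = {"disk_full", "replication_broken", "backup_failed"}
DEDUP_WINDOW = timedelta(minutes=30)

_last_paged = {}  # alert type -> last time we paged for it

def should_page(alert_type: str, now: datetime) -> bool:
    """Decide whether this alert is worth a human's attention right now."""
    if alert_type not in ACTIONABLE:
        return False  # log it and trend it, but don't wake anyone up
    last = _last_paged.get(alert_type)
    if last is not None and now - last < DEDUP_WINDOW:
        return False  # already paged recently for this same condition
    _last_paged[alert_type] = now
    return True
```

Everything that fails the filter still lands in logs and dashboards for trend analysis; it just doesn’t cost a human interruption.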
Disaster Recovery in the Cloud

Yes, it’s painful to keep idle resources and rent data center space just in case a major disaster hits your main data center. For years, this has been the bane of many IT managers trying to justify any DR investment. Well, guess what: the cloud has killed all those excuses. Even if you don’t want a warm standby copy in the cloud, I propose the following:
- Pick a cloud region that makes sense for you latency-wise.
- Upload your backups there.
- Once a month, boot up a VM, restore that backup, and alert if something goes wrong.
- Shut down the VM.
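The steps above can be sketched as a small orchestration routine. To keep the sketch self-contained, `boot_vm`, `restore_backup`, `verify_database` and `shutdown_vm` are placeholder stubs (not real APIs); in practice each would call your cloud provider’s SDK and your database’s restore and consistency-check commands.

```python
# --- placeholder stubs; replace with your provider's SDK calls ---
def boot_vm():
    return {"name": "restore-test-vm", "running": True}

def restore_backup(vm, backup_uri):
    # Stand-in for the actual restore; fails if the backup is absent.
    if "missing" in backup_uri:
        raise RuntimeError("backup object not found")

def verify_database(vm):
    pass  # e.g. run DBCC CHECKDB and a few sanity row-count queries

def shutdown_vm(vm):
    vm["running"] = False

# --- the monthly restore test itself ---
def monthly_restore_test(backup_uri: str, alert) -> bool:
    """Boot a throwaway VM, restore the latest backup, verify it,
    alert on any failure, and always shut the VM back down."""
    vm = boot_vm()                      # 1. boot a VM in the cloud region
    try:
        restore_backup(vm, backup_uri)  # 2. restore the uploaded backup
        verify_database(vm)             # 3. confirm the restore is usable
        return True
    except Exception as exc:
        alert(f"Restore test failed for {backup_uri}: {exc!r}")
        return False
    finally:
        shutdown_vm(vm)                 # 4. stop paying for the VM
```

Scheduled once a month, a routine like this turns “we think our backups work” into “we proved our backups restore”, at the cost of an hour or two of VM time.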
What’s in it for your business: not only do you get off-site backup storage at cheap cloud prices, you also get to test your restores and your DR capability. That’s three wins in one! And you end up with a DR strategy that makes sense cost-wise and provides a decent RTO and RPO.
Let’s talk Cloud
I left this one for last because, really, it could be a blog post on its own. Some clients are barely dipping their toes by offloading storage and backups. Others are starting to leverage multi-region redundancy in their IaaS workloads. Others still are knee-deep in PaaS services for IoT streaming, data visualization, managed big data or data warehousing. Wherever you find yourself, we want to sit down with you and draw out your 2017 plans. We are uniquely positioned to be exposed to all the providers and all their offerings, and you can learn from the lessons we have learned.
What’s in it for your business: I see the cloud as a vehicle for freeing people and resources from mundane tasks so they can focus on delivering more value to the business. Some believe the cloud means running your business IT for fewer dollars; while that is possible, often it won’t be the case. Properly implemented, however, a public cloud provider should bring redundancy, faster time to market, freed-up resources and the lowest possible cost of curiosity.
Needless to say, this isn’t an exhaustive list. For other clients, the goals we’ve set up for 2017 involve In-Memory technologies, better analytics, applying machine learning, real-time data ingestion, moving systems to IaaS or trying out cloud PaaS services. Everyone is at a different point on this road, and we love helping people navigate the challenges of the ever-expanding world of data.
Let’s start playing offense, tackle our technical debt and innovate in the New Year. Here’s to reaching all our goals in 2017!