Identifying Data to Accelerate our Data Strategy

Posted in: Business Insights, Technical Track
We previously discussed the successful formula for creating and executing an organization’s data strategy. The process of developing a data strategy should be led by executives focused on transformation, go-to-market or data. While technology teams should be part of planning the execution of the strategy, their voice is limited to the practicality of timelines and to recommending technical capabilities that can accelerate implementation. This structure is backed by defined measures of success and adoption of new capabilities, aligned with revenue, profitability, competitive adoption and shortened sales cycles.

Many data strategy creation projects begin with a brainstorming of use cases across the organization. While this can be a thought-provoking exercise, many organizations lack the foundational capabilities or reliable business processes needed to realize the aspirational goals of their executive teams. We often start by understanding what data is available throughout the organization, then use that inventory to discuss applicable use cases that are more feasible in the short term and that enable the building of foundational capabilities for future innovation. Starting with use cases is fine if the organization desires, but it can leave a lot of scraps on the cutting-room floor due to incomplete data as teams begin to dig into work streams and implementation plans.

When we begin assessing which data assets an organization has available for executing its data strategy, the logical place to start is with existing operational data stores and data warehouses. Much of this data will be imperfect in completeness and other quality dimensions, but it can be mapped to business processes to identify quick wins for automating repetitive tasks, enhancing customer experiences and monetizing the data in adjacent industries. These uses deliver early value and create foundational capabilities that enable more advanced use of AI and personalization techniques.

Once we have a view into our data and the quick wins for automation and enhanced experiences, we can begin to think about the art of what’s possible. We can brainstorm future use cases of interest and quickly identify the gaps blocking their successful execution. This helps us identify missing data to capture, business processes to change, or acquisitions of technology or teams needed for future innovation.

As part of our data inventory we should look for data assets, but also ensure we map how each is obtained, created and transformed. This documentation enables later initiatives to quickly capture new data from customers and consumers, while keeping an eye toward simplifying transformations over time to lower operational maintenance costs and improve the quality of data used for downstream analysis and AI. Often, the documented business processes have exceptions that have become habit within an organization. This initiative should ensure that the processes mapped and the data impacted reflect the real-world situation, including any system- or team-level workarounds the organization has created and applied.
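One lightweight way to capture this kind of inventory, including the workarounds that separate documented process from real-world practice, is a simple catalog record per asset. The sketch below is illustrative only: the field names, helper function and sample entries are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    # One catalog entry per data asset; field names are illustrative.
    name: str
    source_system: str                                    # where the data originates
    owner: str                                            # accountable team
    transformations: list = field(default_factory=list)   # ordered pipeline steps
    workarounds: list = field(default_factory=list)       # undocumented exceptions in practice

def assets_with_workarounds(inventory):
    """Flag assets whose documented process differs from real-world practice."""
    return [asset.name for asset in inventory if asset.workarounds]

inventory = [
    DataAsset("orders", "ERP", "sales-ops",
              transformations=["dedupe", "currency-normalize"]),
    DataAsset("customer_contacts", "CRM", "marketing",
              workarounds=["manual CSV fix-up before weekly load"]),
]

print(assets_with_workarounds(inventory))  # ['customer_contacts']
```

Even a minimal record like this makes workarounds visible and queryable, which is the first step toward simplifying transformations over time.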

Finally, as part of this discovery process we should evaluate who has access to our data assets today and whether that access is broad enough to enable organizational transformation. Many organizations have built data silos by limiting access to very small teams, even when the data is not sensitive or regulated. Organizations should use this opportunity to reevaluate how they enable the edges of the organization to make more informed, measured decisions by granting broader access to data than has been traditional.
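An access review like this can start as a simple audit over the same catalog: flag non-sensitive assets that only a single team can reach, as candidates for broader sharing. This is a minimal sketch under assumed structure; the dictionary layout and team names are hypothetical.

```python
def overly_siloed(catalog):
    """Return non-sensitive assets accessible to at most one team --
    candidates for broader access. Catalog structure is illustrative."""
    return [name for name, meta in catalog.items()
            if not meta["sensitive"] and len(meta["teams_with_access"]) <= 1]

catalog = {
    "orders":   {"sensitive": False, "teams_with_access": ["sales-ops"]},
    "payroll":  {"sensitive": True,  "teams_with_access": ["hr"]},
    "web_logs": {"sensitive": False, "teams_with_access": ["eng", "marketing"]},
}

print(overly_siloed(catalog))  # ['orders']
```

Sensitive or regulated assets stay out of scope by design; the point is to surface silos that exist by habit rather than by policy.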

An inventory of our data is where we start. It enables us to look broadly at our business processes, workarounds and data gaps. From there we can quickly identify immediate use cases that deliver value with minimal investment to deploy – building a foundation for future use cases of advanced capabilities.

In our next post we will explore the strategy storyboard – a use-case-focused approach to identifying how we roll out new capabilities of measurable value with high chances of successful organizational adoption.

About the Author

VP Analytics
Joey Jablonski is VP of Analytics at Pythian, where he leads strategic engagements assisting customers in developing their data strategy, defining and executing data governance programs, and building analytical models to power the modern data-driven organization. Prior to Pythian, Joey was VP of Product at Manifold, where he brought a product mindset to all engagements, allowing for quick delivery of value in any project and building over time to drive adoption of new data-centric capabilities in an organization. Joey has led engagements across industries including high tech, pharmaceuticals and the federal government. Before Manifold, Joey held executive leadership positions at Northwestern Mutual, iHeartMedia and Cloud Technology Partners. He brings 20+ years of experience in software engineering, high-performance computing, cyber security, data governance and data engineering.
