Data quality should be a strategic priority for any business, if it isn’t already. Exploding data volumes and the modern omnichannel customer experience have created a tsunami of information, with Northeastern University research recently finding that 2.5 exabytes of data are produced every day – the equivalent of the storage capacity of 150 million iPhones. Not all of this data is complete, or even useful to marketing, so at these volumes it is imperative to have processes in place to clean data before leveraging it in customer engagement.
Ignoring data quality management carries severe consequences: bad or poor-quality data can cost organizations as much as 10 to 20 percent of their revenue. Increasing the quality of your data is a sound business investment, enabling marketing to reach the connected customer more effectively and with greater confidence. More than that, consistent data quality management helps brands better personalize their customer interactions. Better personalization gives brands the opportunity to capture some of the $800 billion in revenue projected to shift, over the next five years, to the 15 percent of companies that get personalization right.
Achieving high data quality – and keeping it high – is a substantial, ongoing undertaking. To that end, there are three primary ingredients of an ironclad customer data quality management strategy:

1. Gaining organizational alignment on business rules
2. Deploying the right data quality management technology
3. Committing the time and resources to ensure data quality
Each of these ingredients is crucial for brands looking to leverage their customer data as a strategic advantage in the coming months and years.
Gaining Organizational Alignment on Business Rules
The idea of “high-quality data” is a nebulous one, subject to different interpretations depending on the business unit. For example, sales might emphasize a correct email address, billing wants a correct mailing address, and marketing wants a customer’s social media identities. The definition can also vary by campaign – the email address for an email campaign, mobile behavior for a mobile ad, and so on.
Brands may also judge the same data type differently. The best email address, for example, might be the one used most frequently or the one used most recently. The best mailing address might be chosen based on completeness rather than recency, social media preferences might be ranked by frequency, and so on. The point is that which data are and are not high quality varies by use case and business unit, and can change frequently.
The entire organization needs to agree on which business rules to employ in judging data quality. If one business unit wants the most complete records and another wants the most recent, those rules need to be determined up front, before any solution is deployed. One of the worst things you can do is pick a technology to manage data quality and realize halfway through the implementation that you have no rules in place to judge one data source over another.
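As a concrete illustration, survivorship rules like “most recent email, most complete mailing address” can be expressed directly in code once the organization has agreed on them. The sketch below is a minimal, hypothetical example – the record fields and source systems are invented for illustration, not drawn from any particular product:

```python
from datetime import date

# Hypothetical records for one customer, held by two different source systems.
crm_record = {
    "email": "a@example.com", "email_last_used": date(2024, 5, 1),
    "mailing_address": {"street": "1 Main St", "city": "Springfield",
                        "state": None, "zip": "01101"},
}
billing_record = {
    "email": "b@example.com", "email_last_used": date(2023, 11, 12),
    "mailing_address": {"street": "1 Main St", "city": "Springfield",
                        "state": "MA", "zip": "01101"},
}

def completeness(address):
    """Fraction of address fields that are populated."""
    return sum(v is not None for v in address.values()) / len(address)

def survive(records):
    """Apply the agreed business rules: most recent email, most complete address."""
    best_email = max(records, key=lambda r: r["email_last_used"])["email"]
    best_address = max(records, key=lambda r: completeness(r["mailing_address"]))
    return {"email": best_email, "mailing_address": best_address["mailing_address"]}

golden = survive([crm_record, billing_record])
```

Here the CRM’s email wins on recency while billing’s address wins on completeness – exactly the kind of rule-by-rule decision that must be settled before implementation begins.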
Deploy the Right Data Quality Management Technology
A significant number of businesses still manage data quality manually, relying on spreadsheets and labor-intensive data stewardship processes. Data quality management conducted at the scale necessary to provide contextually relevant interactions must be heavily automated, which is why the right technology is so important. Automation can also save a substantial amount of staff time – TDWI recently found that 43 percent of business intelligence and analytics personnel spent more than 60 percent of their time on data preparation and data quality tasks alone.
One technology that can streamline data quality is a data hub: a centralizing solution that unifies data across the enterprise without replacing existing systems. One example of a data hub is a customer data platform (CDP), which unifies customer data into a central point of control and provides an always-on, always-processing unified customer profile at low latency throughout the enterprise. CDP users gain access to customer behavior and preference data, giving them the ability to react in the moment of need with a relevant message – which can power substantial gains in customer retention.
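The core mechanic of that unification can be sketched in a few lines. The toy example below – with invented event streams and field names, not any vendor’s actual API – folds per-channel events into a single profile keyed by a shared customer ID:

```python
from collections import defaultdict

# Hypothetical event streams from separate channels, keyed by a shared customer ID.
events = [
    {"customer_id": "c42", "source": "web",   "page_viewed": "pricing"},
    {"customer_id": "c42", "source": "email", "opened_campaign": "spring-sale"},
    {"customer_id": "c42", "source": "store", "purchase": 59.99},
]

def unify(events):
    """Fold per-channel events into one profile per customer (a toy CDP-style merge)."""
    profiles = defaultdict(lambda: {"sources": set(), "attributes": {}})
    for e in events:
        profile = profiles[e["customer_id"]]
        profile["sources"].add(e["source"])
        # Everything except the ID and source becomes a profile attribute.
        profile["attributes"].update(
            {k: v for k, v in e.items() if k not in ("customer_id", "source")}
        )
    return dict(profiles)

profiles = unify(events)
```

A real CDP adds identity resolution, streaming ingestion, and low-latency serving on top of this merge, but the unified-profile idea is the same.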
Committing the Time and Resources to Ensure Data Quality
The third and final ingredient for strong data quality management is committing the time and staff resources necessary to manage the process. The majority of data quality tasks can be automated, but they’re not automatic. Brands cannot define business rules, plug in the right software, and then expect to always have high quality data. They must commit time and resources to the process to ensure that data quality always remains at a high level.
Customer data constantly shifts in volume and type, and attempting to make data quality management completely automatic is a recipe for disaster. This is especially true because data sources constantly change and new channels arise all the time. If customers suddenly shift from preferring Facebook to preferring Snapchat or Twitter, then brands need to be aware of those changes. If email addresses or physical addresses change, brands must be savvy enough to make those changes in the underlying data and ensure customer data quality remains high.
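In practice, staying on top of those shifts means continuously comparing incoming data against the stored profile and surfacing what changed for steward review. A minimal sketch, with hypothetical field names:

```python
# Hypothetical stored profile and a fresh signal arriving from a new channel.
profile = {"preferred_channel": "facebook", "email": "old@example.com"}
incoming = {"preferred_channel": "snapchat", "email": "old@example.com"}

def apply_updates(profile, incoming):
    """Apply field-level changes and report what shifted, for human review."""
    changes = {}
    for field, new_value in incoming.items():
        if profile.get(field) != new_value:
            changes[field] = (profile.get(field), new_value)
            profile[field] = new_value
    return profile, changes

profile, changes = apply_updates(profile, incoming)
```

The automation applies the update, but the change log is what lets a human confirm the rules still make sense – the “automated but not automatic” balance the text describes.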
Customer data is increasingly a key asset for long-term success. The more money and effort invested in data quality management, the more richly detailed and accurate customer data become. With more richly detailed customer data, brands can more readily engage the right customer at the right time with the right message. In an environment where brands seek out any edge they can to succeed, investing in data quality management can and should be a strategic imperative.