
Jan 20, 2023

The Pros and Cons of Composable Architecture

Aristotle is credited with the adage that “the whole is greater than the sum of its parts.” Apart from its philosophical bent, the concept is generally thought to hold true in almost every discipline, software engineering included. In the charged debate about composable architecture, however, one side contends that the scale may now tilt in favor of the “parts.”

This blog will discuss composable architecture as a concept as it applies to customer data platforms (CDPs), highlight some pros and cons, and finally explain where the Redpoint rg1 platform fits into the discussion.

A Composable Architecture Primer

First, what is composable architecture? Simply put, the concept refers to a tech stack composed of different best-of-breed technology solutions, brought together through APIs to operate as a whole. Unlike a monolithic enterprise suite, the API-based approach encourages the integration of various systems, usually from multiple vendors, with the components forming a flexible, purpose-built, cohesive platform. As a physical manifestation, the concept is similar to Google’s ill-fated Project Ara, a modular smartphone in which owners could replace or upgrade individual components rather than the entire device.

What distinguishes a composable architecture approach for a CDP is that it refers to a cloud-based, open ecosystem of services and platforms. The rising popularity of and attention paid to composable architecture is due to many factors, among them rapid market innovations that demand greater flexibility, as well as a proliferation of SaaS solutions, the rise of microservices and more specialized applications.

Composable Architecture: Pros and Cons

For a CDP, the four main “composable” components are a data warehouse, a data collection framework, and capabilities for data transformation and data activation. The main benefit of using up to four different vendors is that the customer does not have to wait for a single vendor’s upgrade to support new functionality or to connect to a new system. The customer is far more adaptive and responsive to the ever-changing technology landscape.
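The four components above can be pictured as a stack of independently swappable parts. The sketch below is purely illustrative; the vendor names, roles, and API endpoints are invented for the example and do not refer to real products.

```python
# Hypothetical sketch of a composable CDP stack: four roles, four vendors,
# wired together through APIs. All names and endpoints are illustrative.
from dataclasses import dataclass

@dataclass
class Component:
    role: str      # which of the four CDP roles this vendor fills
    vendor: str    # illustrative vendor name
    api_base: str  # API endpoint the other components call

stack = [
    Component("data warehouse",      "WarehouseCo", "https://api.warehouseco.example/v1"),
    Component("data collection",     "CollectCo",   "https://api.collectco.example/v1"),
    Component("data transformation", "TransformCo", "https://api.transformco.example/v1"),
    Component("data activation",     "ActivateCo",  "https://api.activateco.example/v1"),
]

def replace(stack, role, new_component):
    """Swap the vendor filling one role without touching the rest of the stack."""
    return [new_component if c.role == role else c for c in stack]
```

The point of the model is the `replace` operation: upgrading one capability means replacing one entry, not re-platforming the whole stack.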

That responsiveness, however, comes with certain trade-offs. One is that the buying organization is responsible for making sure all the requirements for the finished system are met (i.e., does identity resolution do what is needed; is data quality covered; is performance adequate, etc.?). Second, someone must ensure there is a usable UX for the marketing user (i.e., how do they do segmentation, models for prediction and/or recommendations, workflows for campaign design, monitoring, measuring, etc.?). Third, someone has to maintain and upgrade the composition (i.e., add channels, replace or tune components, etc.).

Another potential hitch is the strength of the API connections and whether connectivity impacts performance. One stumbling block, for example, is that the data collection component may input data into the central database in any format it chooses, with an entity relationship diagram for every one of its connectors, potentially leading to hundreds of tables in the database. Making sense of those inputs then requires manual data transformations.
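To make the clean-up burden concrete, here is a minimal sketch of that manual transformation step, assuming two hypothetical per-connector tables with slightly different shapes (the table and column names are invented for illustration):

```python
# Illustrative downstream clean-up: a collection tool has landed each
# connector's data in its own table, and a manual transformation must
# normalize and dedupe them into one usable customer view.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two hypothetical per-connector tables, each with its own schema.
cur.execute("CREATE TABLE shopify_customers (id TEXT, email TEXT, full_name TEXT)")
cur.execute("CREATE TABLE zendesk_users (user_id TEXT, email_address TEXT, name TEXT)")
cur.execute("INSERT INTO shopify_customers VALUES ('s1', 'ann@example.com', 'Ann Ash')")
cur.execute("INSERT INTO zendesk_users VALUES ('z9', 'ann@example.com', 'Ann Ash')")

# The manual transformation: align column names, then dedupe on email.
cur.execute("""
    CREATE TABLE unified_customers AS
    SELECT email, MIN(full_name) AS name FROM (
        SELECT email, full_name FROM shopify_customers
        UNION ALL
        SELECT email_address AS email, name AS full_name FROM zendesk_users
    ) GROUP BY email
""")
rows = cur.execute("SELECT email, name FROM unified_customers").fetchall()
```

With only two connectors this is manageable; with hundreds of connector tables, each such mapping must be written, maintained, and kept in sync by the buying organization.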

On the data activation side, the activation system has no knowledge of how the data model was generated for a database from a different vendor. It sees only a catalog: a list of potentially hundreds of tables and columns in a particular schema. Ideally, the database will have another tool for performing some form of identity resolution, with the activation system then used to build segments. Yet with the potential for multiple transaction tables and multiple customer tables, robust data collation is needed to make the data actionable for marketers and business users.

Downstream data clean-up, then, is one of the most important factors to weigh when considering whether a composable architecture is right for your business. Where are the reverse ETL and cleansing taking place, and at what point in the activation cycle? How will they impact speed and performance, especially with more redundancy? Is there an added cost for transactions?

A Composable Architecture and rg1

What is a bit off-putting about discussing composable architecture as a “new” concept is that Redpoint rg1 has always, by any definition, fit into a composable architecture framework. Since the platform’s release, customers have used various components of the platform – specifically Redpoint Data Management and Redpoint Interaction – as pieces of a larger martech stack. It has never been, in other words, an all-or-nothing proposition, which is the litmus test for whether a CDP meets the definition of a composable architecture.

From a functionality standpoint, Redpoint is and has been on board with the best-of-breed approach; customers can choose any systems they’d like around the rg1 platform to support their business use cases, from ingesting data through delivering content. The platform is also agnostic to key components within a CDP, such as the data warehouse technology used. The platform offers tightly integrated pillars that support data, insight and action, with the understanding that use cases vary and the entire platform may not align with a business’s current objectives.

Extending the concept further, the argument can be made that the platform as a whole is itself composable in that it is used in concert with other marketing automation tools, i.e., an open-garden construct with hundreds of native connectors.

That said, while rg1 most certainly aligns with a composable ecosystem, Aristotle may have been onto something as far as the whole being greater than the sum of its parts. If one of the drawbacks of a composable framework is the potential for database redundancy and downstream cleansing activities, rg1 as a fully integrated CDP solves for those problems at the point of data collection. That is, cleansing is completed at ingestion; rg1 only retains data that is pertinent to its intended use case, and uses fit-for-purpose data to create or enhance a single, persistent core customer or transaction table.

With one core customer table and one transaction table – not 20, 30 or more – rg1 retains only the data that is pertinent to its intended use case. In addition, with cloud spend rising rapidly, retaining data that is never used sets a costly precedent.

In short, from a data collection standpoint the key difference is that in a composable architecture the database structure must account for any data model; an organization can simply throw any data into a database and leave the configuration downstream. With rg1 as a fully integrated platform, data integrity is ensured upfront because the purpose the data will serve is understood at ingestion.
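The contrast between the two approaches can be sketched as a simple ingestion gate. This is not Redpoint's implementation, just a minimal illustration of the "integrity upfront" idea, with an invented schema and field names:

```python
# Minimal sketch of purpose-driven ingestion: validate each record against
# the schema the use case requires, rejecting what doesn't fit, rather than
# landing raw payloads and reconciling them downstream. Schema is illustrative.
CUSTOMER_SCHEMA = {"email": str, "first_name": str, "lifetime_value": float}

def ingest(record, table):
    """Append record to table only if it matches the schema exactly."""
    if set(record) != set(CUSTOMER_SCHEMA):
        return False  # unknown or missing fields: reject at the door
    if not all(isinstance(record[k], t) for k, t in CUSTOMER_SCHEMA.items()):
        return False  # wrong types: reject rather than clean up later
    table.append(record)
    return True

customers = []
ingest({"email": "a@example.com", "first_name": "Ann", "lifetime_value": 120.0}, customers)
ingest({"email": "b@example.com", "raw_payload": "{...}"}, customers)  # rejected
```

In the dump-anything alternative, both records would land and the second would become someone's downstream clean-up task; here the single core table only ever holds data that fits its intended purpose.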

Redpoint produces a series of on-demand webinars about what to consider before investing in a CDP. Whether to explore the composable architecture route is certainly an important consideration and decision to make. I welcome anyone with questions about whether a composable architecture is right for their business to reach out to us directly, or simply request a demo to see how rg1 can help solve your company’s unique business challenges.

Related Redpoint Orchard Blogs

4 Steps to More Efficient Data Preparation

Cadence, Scope, Flexibility: Understanding a CDP’s Data Architecture Needs

The Telltale Signs Your Business Needs a CDP

Be in-the-know with all the latest customer engagement, data management, and Redpoint Global news by following us on LinkedIn, Twitter, and Facebook.


Ian Clayton

Chief Product Officer
