The ‘Adjacent Possible’ of Big Data: What Evolution Teaches About Insights Generation

Originally published on WIRED


In 2002, Stuart Kauffman introduced the theory of the “adjacent possible,” which proposes that biological systems are able to morph into more complex systems by making incremental, relatively low-energy changes to their makeup. Steven Johnson uses this concept in his book “Where Good Ideas Come From” to describe how new insights can be generated in previously unexplored areas.

The theory of the “adjacent possible” extends to the insights generation process. In fact, it offers a highly predictable and deterministic path for generating business value through insights from data analysis. For enterprises struggling to get started with analysis of their big data, the theory of the “adjacent possible” offers a framework of incremental data analysis that is easy to adopt and implement.

Why Is the Theory of Adjacent Possible Relevant to Insights Generation

Enterprises often embark on their big data journeys with the hope and expectation that business-critical insights will be revealed almost immediately, simply by virtue of being on a big data journey and building out their data infrastructure. The expectation is that insights can be generated within the same quarter in which the infrastructure and data pipelines have been set up. In addition, the insights generation process is typically driven by analysts who report up through the usual management chain. This puts undue pressure on the analysts and managers to show predictable, regular delivery of value, and it forces the insights generation process into the mold of project scope and delivery. However, insights generation is too ambiguous and too experimental to fit reliably within the bounds of a committed project.

Deterministic delivery of insights is not what enterprises find on the other side of their initial big data investment. What they almost always find is that data sources are in disarray, multiple data sets need to be combined even though they are not primed for blending, data quality is low, analytics generation is slow, derived insights are not trustworthy, and the enterprise lacks either the agility to implement the insights or the feedback loop to verify their value. Even when everything goes right, the value of the insights is often minuscule and insignificant to the bottom line.

This is the time when the enterprise has to adjust its expectations and its analytics modus operandi. If pipeline problems exist, they need to be fixed. If quality problems exist, they need to be diagnosed (data source quality vs. data analysis quality). In addition, an adjacent possible approach to insights needs to be considered and adopted.

The Adjacent Possible for Discovering Interesting Data

Looking adjacently from the data set that is the main target of analysis can uncover other, related data sets that offer more context, signals, and potential insights when blended with the main data set. Enterprises can introspect the attributes of the records in their main data sets and look for other data sets whose attributes are adjacent to them. These data sets can be found within the walls of the enterprise or outside, and enterprises looking for adjacent data sets should consider both public and premium sources. These data sets should be imported and harmonized with existing data sets to create new data sets that contain a broader, richer set of observations with a higher probability of yielding quality insights, as illustrated below.
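As a minimal sketch of this blending step (assuming a hypothetical orders data set and an adjacent, external weather feed; the column names and values are illustrative, not a prescribed schema), a join on shared date and region attributes might look like this:

```python
import pandas as pd

# Main data set: enterprise order records (illustrative columns and values).
orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "order_date": ["2024-03-01", "2024-03-01", "2024-03-02"],
    "region": ["west", "east", "west"],
    "revenue": [120.0, 80.0, 45.0],
})

# Adjacent data set: external weather observations sharing date and region attributes.
weather = pd.DataFrame({
    "date": ["2024-03-01", "2024-03-01", "2024-03-02"],
    "region": ["west", "east", "west"],
    "avg_temp_f": [58, 41, 62],
    "precip_in": [0.0, 0.4, 0.1],
})

# Harmonize the join keys, then blend the two sets into a broader observation table.
orders["order_date"] = pd.to_datetime(orders["order_date"])
weather["date"] = pd.to_datetime(weather["date"])

blended = orders.merge(
    weather,
    left_on=["order_date", "region"],
    right_on=["date", "region"],
    how="left",
).drop(columns=["date"])

print(blended)
```

The blended table now carries weather context alongside every order, which downstream segmentation or modeling steps can exploit.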

The Adjacent Possible for Exploratory Data Analysis

In the process of data analysis, one can apply the principle of the adjacent possible to uncovering hidden patterns in data. An iterative approach to segmentation analysis, with a focus on attribution through micro-segmentation, root cause analysis, change and predictive analysis, and anomaly detection through outlier analysis, can lead to a wider set of insights and conclusions to drive business strategy and tactics.

Experimentation with different attributes, such as time, location, and other categorical dimensions, can and should be the initial analytical approach. A good starting point is an iterative, incremental segmentation analysis that identifies the segments to which changes in key KPIs or measures can be attributed. Applying the adjacent possible here means iteratively including additional attributes to fine-tune the segmentation scheme, which can lead to insights into significant segments and cohorts, as sketched below. The theory of the adjacent possible can also help identify systemic problems in a business process workflow: by walking upstream or downstream in the workflow, the point of breakdown or slowdown can be diagnosed through the identification of attributes that correlate highly with it.
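One way to picture this iterative refinement, under the assumption of a small, hypothetical events table with a conversion KPI and a few candidate attributes, is to widen the segmentation scheme one attribute at a time and watch how well each scheme separates the KPI:

```python
import pandas as pd

# Hypothetical usage records: a KPI (converted) plus candidate segmentation attributes.
events = pd.DataFrame({
    "channel":   ["web", "web", "app", "app", "web", "app"],
    "region":    ["us",  "eu",  "us",  "eu",  "eu",  "us"],
    "device":    ["mac", "win", "ios", "android", "win", "ios"],
    "converted": [1, 0, 1, 0, 0, 1],
})

# Iteratively include adjacent attributes to fine-tune the segmentation scheme.
attributes = ["channel", "region", "device"]
scheme = []
for attr in attributes:
    scheme.append(attr)
    segments = events.groupby(scheme)["converted"].agg(["mean", "size"])
    # Spread of conversion rates across segments hints at how much this scheme explains the KPI.
    spread = segments["mean"].max() - segments["mean"].min()
    print(f"scheme={scheme}, conversion-rate spread={spread:.2f}")
    print(segments, "\n")
```

In practice the stopping rule would be statistical (and would guard against vanishingly small segments), but the shape of the loop is the same.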

The Adjacent Possible for Business Context

The process of data analysis is often fraught with siloed context: the analyst often does not have the full business context to understand the data, the motivation for a business-driven question, or the implications of their insights. Applying the theory of the adjacent possible here means introducing collaboration into the insights generation process by inviting team members who each hold a slice of the business context from their own point of view; this can lead to higher-value conclusions and insights. Combining the context from each of these team members to design, verify, authenticate, and validate the insights generation process and its results is the key to generating high quality insights swiftly and deterministically.

Making incremental progress in the enterprise’s insights discovery efforts is a significant and valuable method for uncovering insights with massive business implications. The insights generation process should be treated as an exercise in the adjacent possible, and incremental insights identification should be encouraged and valued. As this theory is put into practice, enterprises will find themselves with a steady stream of incrementally valuable insights of increasingly higher business impact.

Signals and Insights: Value, Reach, Demand

Published Originally on Apigee

The mobile and apps economy means that the interaction between businesses and their customers and partners happens in an ever broader context, meaning that the amount of data that enterprises gather is exploding. Business is being done on multiple devices, and through apps, social networks, and cloud services.

It is important to think about signals when thinking about the value that is hidden in your enterprise’s data. Signals point toward insights. The ability to uncover, identify, and enhance these signals is the only way to make your big data work for you and succeed in the app economy.

Types of Signals

There are three types of signals that an enterprise should track and utilize in its decision making and strategic planning.

Value Signals

When customers use an enterprise’s products or services, they generate value signals. The actions involved in searching for, discovering, deciding on, and purchasing a product or service offer signals about its perceived value. These signals, examined through the lens of user context (such as profile, demographics, interests, past transaction history, and proximity in time and space to interesting events and locations), deliver insights into business-critical customer segments and their preferences, engagement, and perceived value.

Reach Signals

When developers invest in the enterprise’s API platform and choose its APIs to create apps, they generate reach signals. These are signals about the attractiveness and perceived value of the enterprise’s products and services. Developers take dependencies on APIs because they believe those dependencies will help them create value for the end users of their apps and, ultimately, for themselves. Developer adoption and engagement are signals that offer a leading indicator of, and insight into, the value and delivery of an enterprise’s products and services.

Demand Signals

When end users request information and data from the enterprise’s core data store, they generate demand signals on the enterprise’s information. These demand signals, within the user context, deliver insights into the perceived value of the enterprise’s information, along with context around the information (such as its source, type, freshness, quality, comprehensiveness, and cache-ability). These insights offer a deep understanding of the impact of information on end-user completed transactions and engagement.
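A rough illustration of how raw events might be bucketed into these three signal families, using assumed event-type names rather than any particular product’s schema:

```python
# Assumed mapping of raw event types to the three signal families (illustrative only).
VALUE_EVENTS = {"search", "view_product", "add_to_cart", "purchase"}
REACH_EVENTS = {"api_key_created", "api_call", "app_published"}
DEMAND_EVENTS = {"data_query", "report_download", "info_request"}


def classify_signal(event_type: str) -> str:
    """Return which signal family an event contributes to: value, reach, or demand."""
    if event_type in VALUE_EVENTS:
        return "value"
    if event_type in REACH_EVENTS:
        return "reach"
    if event_type in DEMAND_EVENTS:
        return "demand"
    return "other"


events = ["search", "api_call", "purchase", "data_query", "login"]
print([(event, classify_signal(event)) for event in events])
```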

Apigee Insights offers the expertise, mechanisms, and capabilities to extract and understand these signals from the enterprise data that sits within, at the edge, and outside the edge of the enterprise. Apigee Insights is built from the ground up to identify, extract and accentuate the value, reach and demand signals that drive business critical insights for the enterprise.

All (Big Data) Roads Lead To Your Customers

Originally Published on DataFloq

A large number of enterprises report a high level of inertia around getting started with Big Data. Either they are not sure about the problems they need to solve using Big Data, or they get distracted by the question of which Big Data technology to invest in rather than the business value they should be focusing on. This is often due to a lack of understanding of which business problems need to be solved and can be solved through data analysis. As a result, enterprises focus their valuable initial time and resources on evaluating new Big Data technologies without a concrete plan to deliver customer or business value through such investments. For enterprises that find themselves in this trap, here are some trends and ideas to keep in mind.

Commoditization and maturation of Big Data technologies

Big Data technologies are going to get commoditized in the next couple of years. Newer technologies like Hadoop and HBase will mature, with their skills and partner ecosystems becoming more diverse and stable. An increasing number of vendors will offer very similar capabilities, and we will see these vendors compete increasingly on operational efficiency along the pivots of speed and cost. Enterprises that are not competing on “data efficiency” (i.e., their ability to extract exponentially greater value from their data than their competitors, notably AMZN, GOOG, YHOO, MSFT, FB, and Twitter) should be careful not to overinvest in an in-house implementation of Big Data technologies.

Enterprises whose core business runs on data analysis need to continuously invest in data technologies to extract the maximum possible business value from their data. However, for enterprises that are still at the beginning or in the infancy of their Big Data journey, investing in a cutting-edge technological solution is almost always the wrong strategy. Such enterprises should focus on small wins, using as many off-the-shelf components as possible to quickly reach the point of Big Data ROI without customization. When possible, enterprises should offload infrastructure operation and management to third-party vendors while experimenting with applications and solutions that utilize these Big Data technologies. This ensures that critical resources are spent on solving real customer problems while critical feedback is collected to inform future technology investments.

Technology Choices Without Business Impetus Are Not Ideal

The Big Data technology your business needs can vary by the problem you are trying to solve. The needs of your business, and the types of problems you need to solve to offer simple, trustworthy, and efficient products and services to your customers, should determine and lead you to the right Big Data technologies and vendors. Enterprises need to focus on the business questions that need to be answered, not on the technology choice. Enterprises that lack this business focus will spend crucial resources on optimizing their technology investments rather than solving real business problems, and will end up with little ROI. Planning and implementing Big Data technology solutions in a vacuum, without clear problems and intended solutions in mind, not only can lead to incorrect choices but can also lead to wasted effort spent prematurely optimizing for and committing to a specific technology.

Evangelize Analytics Internally To Better Understand Technology Requirements

Appropriate Big Data technology decisions can only be made by ensuring that the needs and requirements of the various parts of the organization are correctly understood and captured. Ensuring that the culture in the enterprise promotes the use of data to answer strategic questions and track progress can only happen if analytical thinking and problem solving are used by all functions in the organization, from support to marketing to operations to products and engineering. Having these constituents represented in the technology stack decision process is critical to ensure that the eventual technology is usable and useful for the entire organization and does not get relegated to use by a very small subset of employees. In addition, the specific needs of certain users, such as data exploration, insights generation, data visualization, analytics and reporting, experimentation, integration, or publishing, often require a combination of one or more technologies. Defining and clarifying the decision-making process in an enterprise is needed to identify the various sets of technologies that must be put together to build a complete data pipeline designed to enable decisions and actions.

All (Big Data) Roads Lead to Your Customers

For enterprises that are struggling to get started with Big Data analysis, or have moved past the initial exploration stage in Big Data technology adoption, deciding which problems to tackle first for the highest ROI can be a daunting task. In addition, there is often pressure from management to showcase the value of the Big Data investment to the business, customers, and users of the products and services. Almost always, focusing on improving customer and user satisfaction, increasing engagement with and use of your products and services, and preventing customer churn is the most important problem an enterprise can focus on; it represents a class of problems that is both universal and perfect for Big Data analysis.

As customers and end users interact with the enterprise’s products and services, they generate data, or records of their usage. Customer actions can almost always be divided into two sets: transactional actions, which represent completed monetary or financially beneficial actions by the user for the enterprise (e.g., purchasing a product or printing directions to a restaurant), and non-transactional, leading-indicator actions, which by themselves are not monetarily beneficial to the enterprise but are leading indicators of upcoming transactions (e.g., searching for a product and adding it to a cart, or reviewing a list of restaurants). Tagging the data generated by your users with this metadata produces an extremely rich data set that is primed for Big Data analysis. Understanding the frequency of actions, time spent, when and where the actions occur, on what channel and in what environment, and the demographic description of the user who carries out the action is critical. At a minimum, enterprises need to understand the user actions that correlate most highly with transactions, the attributes and behavior patterns of engaged and profitable users, and the leading indicators of user dissatisfaction and abandonment; a sketch of this tagging and correlation appears below.

There are other very obvious applications of Big Data in the areas of security, fraud analysis, support operations, performance, and so on; however, each of these applications can be traced directly or indirectly to customer dissatisfaction or disengagement problems. Focusing your Big Data investments on a holistic solution to track and remedy customer dissatisfaction, and to improve engagement and retention, is a surefire way not only to design the best possible Big Data solution for your needs but also to extract maximum value from investments that impact your business’s bottom line.
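The sketch below illustrates that tagging scheme under assumed event names: actions are labeled as transactional or leading-indicator, and a simple per-user correlation surfaces which leading indicators track most closely with completed transactions.

```python
import pandas as pd

# Assumed user action log; event names and the taxonomy are illustrative.
actions = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3],
    "event":   ["search", "add_to_cart", "purchase",
                "search", "search", "review_list",
                "search", "add_to_cart", "add_to_cart", "purchase", "purchase"],
})

# Transactional actions complete a monetary or financially beneficial step.
TRANSACTIONAL = {"purchase", "print_directions"}

# Tag each action, then build per-user counts of every action type.
actions["is_transaction"] = actions["event"].isin(TRANSACTIONAL)
per_user = pd.crosstab(actions["user_id"], actions["event"])
per_user["transactions"] = actions.groupby("user_id")["is_transaction"].sum()

# Which leading-indicator (non-transactional) actions correlate most with transactions?
leading = [c for c in per_user.columns if c not in TRANSACTIONAL and c != "transactions"]
print(per_user[leading].corrwith(per_user["transactions"]).sort_values(ascending=False))
```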

The Three ‘ilities’ of Big Data

Published Originally on Big Data Journal

When talking about Big Data, most people talk about numbers: speed of processing and how many terabytes and petabytes the platform can handle. But deriving deep insights with the potential to change business growth trajectories relies not just on quantities, processing power, and speed, but also on three key ‘ilities’: portability, usability, and quality of the data.

Portability, usability, and quality converge to define how well the processing power of the Big Data platform can be harnessed to deliver consistent, high quality, dependable and predictable enterprise-grade insights.

Portability: Ability to transport data and insights in and out of the system

Usability: Ability to use the system to hypothesize, collaborate, analyze, and ultimately to derive insights from data

Quality: Ability to produce highly reliable and trustworthy insights from the system

Portability
Portability is measured by how easily data sources (or providers) as well as data and analytics consumers (the primary “actors” in a Big Data system) can send data to, and consume data from, the system.

Data Sources can be internal systems or data sets, external data, data providers, or the apps and APIs that generate your data. A measure of high portability is how easily data providers and producers can send data to your Big Data system as well as how effortlessly they can connect to the enterprise data system to deliver context.

Analytics consumers are the business users and developers who examine the data to uncover patterns. Consumers expect to be able to inspect their raw, intermediate or output data to not only define and design analyses but also to visualize and interpret results. A measure of high portability for data consumers is easy access – both manually or programmatically – to raw, intermediate, and processed data. Highly portable systems enable consumers to readily trigger analytical jobs and receive notification when data or insights are available for consumption.

Usability
The usability of a Big Data system is the largest contributor to the perceived and actual value of that system. That’s why enterprises need to consider if their Big Data analytics investment provides functionality that not only generates useful insights but also is easy to use.

Business users need an easy way to:

  • Request analytics insights
  • Explore data and generate hypotheses
  • Self-serve and generate insights
  • Collaborate with data scientists, developers, and business users
  • Track and integrate insights into business critical systems, data apps, and strategic planning processes

Developers and data scientists need an easy way to:

  • Define analytical jobs
  • Collect, prepare, pre-process, and cleanse data for analysis
  • Add context to their data sets
  • Understand how, when, and where the data was created, how to interpret it, and who created it

Quality
The quality of a Big Data system is dependent on the quality of input data streams, data processing jobs, and output delivery systems.

Input Quality: As the number, diversity, frequency, and format of data channel sources explode, it is critical that enterprise-grade Big Data platforms track the quality and consistency of data sources. This also informs downstream alerts to consumers about changes in quality, volume, velocity, or the configuration of their data stream systems.

Analytical Job Quality: A Big Data system should track and notify users about the quality of the jobs (such as MapReduce or event processing jobs) that process incoming data sets to produce intermediate or output data sets.

Output Quality: Quality checks on the outputs from Big Data systems ensure that transactional systems, users, and apps offer dependable, high-quality insights to their end users. The output from Big Data systems needs to be analyzed for delivery predictability, statistical significance, and access according to the constraints of the transactional system.
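As a hedged sketch of the kinds of checks described above (the thresholds, column names, and pandas-based approach are assumptions, not a reference implementation):

```python
import pandas as pd


def input_quality(df: pd.DataFrame, required: list[str]) -> dict:
    """Basic quality of an incoming data set: schema drift and completeness."""
    missing_cols = [c for c in required if c not in df.columns]
    null_rate = float(df.isna().mean().mean()) if len(df) else 1.0
    return {"missing_columns": missing_cols, "null_rate": null_rate, "rows": len(df)}


def job_quality(input_rows: int, output_rows: int) -> dict:
    """Quality of a processing job: did row counts change unexpectedly?"""
    drop_ratio = 1 - (output_rows / input_rows) if input_rows else 1.0
    return {"drop_ratio": drop_ratio, "alert": drop_ratio > 0.2}


def output_quality(df: pd.DataFrame, min_rows: int = 1) -> dict:
    """Quality of the delivered output: is it populated enough to trust?"""
    return {"rows": len(df), "alert": len(df) < min_rows}


# Example run over a toy batch: one processing step that drops incomplete rows.
batch = pd.DataFrame({"user_id": [1, 2, None], "revenue": [10.0, None, 5.0]})
processed = batch.dropna()
print(input_quality(batch, ["user_id", "revenue", "region"]))
print(job_quality(len(batch), len(processed)))
print(output_quality(processed))
```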

Though we’ve explored how portability, usability, and quality separately influence the consistency, quality, dependability, and predictability of your data systems, remember it’s the combination of the ilities that determines if your Big Data system will deliver actionable enterprise-grade insights.

It’s the End of the (Analytics and BI) World as We Know It

Published Originally on Wired

“That’s great, it starts with an earthquake, birds and snakes, an aeroplane, and Lenny Bruce is not afraid.” –REM, “It’s the End of the World as We Know It (and I Feel Fine)”

REM’s famous “It’s the End of the World…” song rode high on the college radio circuit back in the late 1980s. It was a catchy tune, but it also stands out because of its rapid-fire, stream-of-consciousness lyrics and — at least in my mind — it symbolizes a key aspect of the future of data analytics.

The stream-of-consciousness narrative is a tool used by writers to depict their characters’ thought processes. It also represents a change in approach that traditional analytics product builders have to embrace and understand in order to boost the agility and efficiency of the data analysis process.

Traditional analytics products were designed for data scientists and business intelligence specialists; these users were responsible not only for correctly interpreting the requests from the business users, but also for delivering accurate information back to them. In this brave new world, decision makers expect to be empowered themselves, with tools that deliver the information needed to make the decisions required by their roles and day-to-day responsibilities. They need tools that enable agility through directed, specific answers to their questions.

Decision-Making Delays

Gone are the days when the user of analytics tools shouldered the burden of forming a question and framing it according to the parameters and interfaces of the analytical product. This would be followed by a response that needed to be interpreted, and insights gleaned and shared. Users would have to repeat this process for any follow-up questions.

The drive to make these analytics products more powerful also made them difficult for business users to use. This led to a vicious cycle: the tools appealed only to analysts and data scientists, leading to these products becoming even more adapted to their needs. Analytics became the responsibility of a select group of people, and the limited population of these experts caused delays in data-driven decision making. Additionally, these experts were isolated from the business context needed to inform their analysis.

Precision Data Drill-Downs

In this new world, the business decision makers realize that they need access to information they can use to make decisions and course correct if needed. The distance between the analysis and the actor is shrinking, and employees now feel the need to be empowered and armed with data and analytics. This means that analytics products that are one size fits all do not make sense any more.

As decision makers look for analytics that make their day-to-day jobs successful, they will expect these new analytics tools to offer the same capabilities and luxuries that having a separate analytics team provides, including the ability to ask questions repeatedly based on the responses to previous questions.

This is why modern analytics products have to support the user’s “stream of consciousness” and offer the ability to repeatedly ask questions to drill down with precision and comprehensiveness. This enables users to arrive at the analysis that leads to a decision that leads to an action that generates business value.

Stream-of-consciousness support can only be offered through new, lightweight mini analytics apps that are purpose-built for specific user roles and functions and that deliver information and analytics for the specific use cases that users in a particular role care about. Modern analytics products have to become combinations of apps that empower users and make their jobs decision- and action-oriented.

Changes in People, Process, and Product

Closely related to the change in analytics tools is a change in the usage patterns of these tools. There are generally three types of employees involved in the usage of traditional analytics tools:

  • The analyzer, who collects, analyzes, interprets, and shares analyses of collected data
  • The decision maker, who generates and decides on the options for actions
  • The actor, who acts on the results

These employees act separately to lead an enterprise toward becoming data-driven, but it’s a process fraught with inefficiencies, misinterpretations, and biases in data collection, analysis, and interpretation. The human latency and error potential make the process slow and often inconsistent.

In the competitive new world, however, enterprises can’t afford such inefficiencies. Increasingly, we are seeing the need for the analyzer, decision maker, and actor to converge into one person, enabling faster data-driven actions and shorter time to value and growth.

This change will force analytics products to be designed for the decision maker/actor as opposed to the analyzer. They’ll be easy to master, simple to use, and tailored to cater to the needs of a specific use case or task.

Instant Insight

The process of analytics in the current world tends to be after-the-fact analysis of data that drives a product or marketing strategy and action.

However, in the new world, analytics products will need to provide insight into events as they happen, driven by user actions and behavior. Products will need the ability to change or impact the behavior of users, their transactions, and the workings of products and services in real time.

Analytics and BI Products and Platforms

In the traditional analytics world, analytics products tend to be bulky and broad in their flexibility and capabilities. These capabilities range from “data collection” to “analysis” to “visualization.” Traditional analytics products tend to offer different interfaces to the decision makers and the analyzers.

However, in the new world of analytics, products will need to be minimalistic. Analytics products will be tailored to the skills and needs of their particular users. They will directly provide recommendations for specific actions tied directly to a particular use case. They will provide, in real time, the impact of these actions and offer options and recommendations to the user to fine tune, if needed.

The Decision Maker’s Stream of Consciousness

In context of the changing people, process, and product constraints, analytics products will need to adapt to the needs of decision makers and their process of thinking, analyzing, and arriving at decisions. For every enterprise, a study of the decision maker’s job will reveal a certain set of decisions and actions that form the core of their responsibilities.

As we mentioned earlier, yesterday’s successful analytical products will morph into a set of mini analytics apps that deliver the analysis, recommendations, and actions needed for each of these decisions and actions. Such mini apps will be tuned and optimized for each use case and each enterprise individually.

These apps will also empower the decision maker’s stream of consciousness. This will be achieved by emulating the decision maker’s thought process as a series of analytics layered to offer a decision path to the user. In addition, these mini apps will enable the exploration of tangential questions that arise in the user’s decision making process.

Analytics products will evolve to become more predictive, recommendation-based, and action oriented; the focus will be on driving action and reaction. This doesn’t mean that the process of data collection, cleansing, transformation, and preparation is obsolete. However, it does mean that the analysis is pre-determined and pre-defined to deliver information to drive value for specific use cases that form the core of the decision maker’s responsibility in an enterprise.

This way, users can spend more time reacting to their discoveries, tapping into their streams of consciousness, taking action, and reacting again to fine-tune the analysis.

The Importance of Making Your Big Data System Insightful

Originally Published on Wired


With all the emphasis these days that’s placed on combing through the piles of potentially invaluable data that resides within an enterprise, it’s possible for a business to lose sight of the need to turn the discoveries generated by data analysis into valuable actions.

Sure, insights and observations that arise from data analysis are interesting and compelling, but they really aren’t worth much unless they can be converted into some kind of business value, whether it’s, say, fine tuning the experience of customers who are considering abandoning your product or service, or modeling an abuse detection system to block traffic from malicious users.

Digging jewels like these out of piles of enterprise data might be viewed by some as a mysterious art, but it’s not. It’s a process of many steps, considerations, and potential pitfalls, but it’s important for business stakeholders to have a grip on how the process works and the strategy considerations that go into data analysis. You’ve got to know the right questions to ask. Otherwise, there’s a risk that data science stays isolated, instead of evolving into business science.

The strategic considerations include setting up an “insights pipeline,” which charts the path from hypothesis to insight and helps ensure agility in detecting trends, building new products, and adjusting business processes; ensuring that the analytical last mile, which spans the gap from analysis to a tangible business action, is covered quickly; building a “data first” strategy that lays the groundwork for new products to produce valuable data; and understanding how partnerships can help enterprises put insights to work to improve user experiences.

The Insights Pipeline

You can visualize an insights pipeline as a kind of flow chart that encompasses the journey from a broad business goal, question or hypothesis to a business insight.

The questions could look something like this: Why are we losing customers in the European market? Or, how can revenue from iOS users be increased? This kind of query is the first step in open-ended data exploration, which, as the name implies, doesn’t usually include deadlines or specific expectations, because they can suppress the serendipity that is a key part of the open-ended discovery process.

Data scientists engage in this kind of exploration to uncover business-critical insights, but they might not know what shapes these insights will take when they begin their research. These insights are then presented to business stakeholders, who interpret the results and put them to use in making strategic or tactical decisions.

The broad nature of open-ended exploration carries potential negatives. Because of the lack of refinement in the query, the insights generated might be unusable, not new, or even worthless, leading to low or no ROI. Without specific guidance, a data scientist could get lost in the weeds.

Closed-loop data exploration, on the other hand, is much more refined and targeted at a very specific business function or question. For example, a data scientist might pursue this: Are there any customers who do more than $100 of business each day with an online business? If so, flag them as “very important customers” so they can receive special offers. There is very little ambiguity in the query.
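A minimal sketch of that closed-loop rule, assuming a daily transaction feed with illustrative column names and the threshold taken from the example:

```python
import pandas as pd

# Assumed daily transaction feed (illustrative schema and values).
transactions = pd.DataFrame({
    "customer_id": [101, 101, 102, 103, 103, 103],
    "day": ["2024-05-01"] * 6,
    "amount": [60.0, 55.0, 30.0, 40.0, 45.0, 20.0],
})

VIP_DAILY_THRESHOLD = 100.0  # the "$100 of business each day" rule from the example

# Sum spend per customer per day and flag anyone who crosses the threshold.
daily_totals = transactions.groupby(["customer_id", "day"])["amount"].sum()
vips = daily_totals[daily_totals > VIP_DAILY_THRESHOLD].reset_index()["customer_id"].unique()

# Downstream systems would pick up this flag to trigger the special offers.
print("very important customers:", list(vips))
```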

In the insights pipeline, successful open-ended explorations can eventually be promoted to closed loop dashboards, once business stakeholders ratify the results.

Closed-loop analysis implements systems based on models or algorithms that slot into business processes and workflow systems. As the example above suggests, these kinds of questions enable fast, traffic-based decision-making and end-user servicing. They also don’t add development costs once they are put in place.

But the very specificity of the queries that define closed-loop data analysis can produce insights of limited value. And once the query is set up, the possibility of “insights staleness” arises. Revisiting the “very important customer” example, what if inflation makes the $100-per-day customer less valuable? The insight becomes outdated; this highlights the need to consistently renew and verify results.

This illustrates the importance of consistently retuning the model and, sometimes, forming new questions or hypotheses to plug back into an open-ended exploration. For example, a system that filters incoming emails for spam can quickly become outdated as spammers change tactics or use new technologies. A closed-loop system like this often needs to be revamped entirely to reflect changes in spammer behavior.

The Analytical Last Mile

Making decisions is one of the most challenging parts of doing business. In IT, employees are very comfortable delivering reports or assembling dashboards. But deciding on an action plan based on that information isn’t easy, and generating lots of insights with few decisions introduces a lag that in turn erodes business value.

The analytical last mile represents the time and effort required to use analytics insights to actually improve the state of a business. You might have invested heavily in big data technologies and produced all kinds of dashboards and reports, but this adds up to very little if interesting observations aren’t converted into action.

The value of analytics and a data-driven culture is only realized when the analytical last mile is covered quickly and efficiently. The inability to do this often results in lost business efficiency and unrealized business value.

More often than not, human latency is to blame. It’s defined as the time it takes employees to collect the required information, perform analysis, and disseminate the resulting insight to decision makers, and, then, the time it takes decision makers to collaborate and decide on a course of action.

Covering the analytical last mile efficiently requires an investment in and emphasis on setting up streamlined data collection, analysis and decision-making processes.

A “Data First” Strategy

When you define, design, and introduce a new product or service, data generation, collection and analysis, and product optimization might be the last thing you’re thinking of. It should be the first.

A “data first” strategy ensures that the right kind of technology is in place to deliver insights that can improve the end user experience. Thinking through what kinds of user data might be collected ensures that the enterprise isn’t caught off guard when the new product or service begins to gain momentum.

Some of the data you should think about gathering includes:

  • Data generated by user actions and interactions, such as monetary transactions, information requests, and navigation
  • Data that defines the profile attributes of the user, including information available from the user, the enterprise, or enterprise partners
  • Contextual data about the user’s social network activity triggered by the product or service, the user’s location in relation to use of the product or service, or the channels through which the product or service is being used or accessed

Instead of losing critical time scrambling to set up methodologies to gather this data, you’ll be prepared to do some fine-tuning to the product to boost the end user’s experience.

Partnerships

A lot of skills and capabilities are required to take a data-driven effort to optimize the user experience and turn that into an actual, tangible improvement in your customer’s experience and, ultimately, boost the enterprise’s bottom line.

Many of these skills are not traditionally part of a business’s core competencies, so partnerships are a great way to bring in outside expertise to help polish the customer experience. Some areas where enterprises look to partners for help include: the ability to reach customers with content, offers, deals, and ads across multiple channels, devices, or platforms; the ability to access user transaction history across multiple services and products; and the capability to know users’ locations at any point in time.

There’s a reason that big data analysis has become such a catchphrase. It’s an amazingly powerful tool that can improve user experiences and boost the bottom line.

But it’s critical that business stakeholders have an awareness of the process, think about the right strategic considerations, and realize the importance of moving quickly and decisively once insights are delivered. Otherwise, it’s all too easy for a business to get mired in data science, instead of transforming a valuable insight into an even more valuable action.

How Data Analysis Drives the Customer Journey

Originally Published on Wired

Driving down Highway 1 on the Big Sur coastline in Northern California, it’s easy to miss the signs that dot the roadside. After all, the stunning views of the Pacific crashing against the rocks can be a major distraction. The signage along this windy, treacherous stretch of road, however, is pretty important — neglecting to slow down to 15 MPH for that upcoming hairpin turn could spell trouble.

Careful planning and even science goes into figuring out where to place signs, whether they are for safety, navigation, or convenience. It takes a detailed understanding of the conditions and the driving experience to determine this. To help drivers plan, manage, and correct their journey trajectories, interstate highway signs follow a strict pattern in shape, color, size, location, and height, depending on the type of information being displayed.

Like the traffic engineers and transportation departments that navigate this process, enterprises face a similar challenge when mapping, building, and optimizing digital customer journeys. To create innovative and information-rich digital experiences that provide customers with a satisfying journey, a business must understand the stages and channels that consumers travel through to reach their destination. Customer journeys are multi-stage and multi-channel, and users require information at each stage to make the right decisions as they move toward their destination.

Signposts on the Customer Journey

To understand what kind of information must be provided — and when it must be supplied — it’s important to understand the stages users travel through as they form decisions to purchase or consume products or services.

  • Search: The user starts on a path toward a transaction by searching for products or services that can deliver on his or her use case
  • Discover: The user narrows down the search results to a set of products or services that meet the use case requirements
  • Consider: The user evaluates the short-listed set of products and services
  • Decide: The user makes a decision on the product or service
  • Sign up/set up: The user completes the setup or sign up required to begin using the chosen product or service
  • Configure: The user configures and personalizes the product or service, to the extent possible, to best deliver on the user’s requirements
  • Act: The user uses the product or service regularly
  • Engage: The user’s usage peaks, reflecting significant levels of activity, transaction value, time spent on the product, and a willingness to recommend the product or service to their professional or personal networks
  • Abandon: The user displays diminishing usage of the product or service compared to the configuring, active, and engaged levels
  • Exit: The user ceases use of the product or service entirely

Analyzing how a customer uses information as they navigate their journey is key to unlocking more transactions and higher usage, and also to understanding and delivering on the needs of the customer at each stage of their journey.

At the same time, it’s critical to instrument products and services to capture data about usage and behavior, and to build the processes to analyze that data to classify and detect where the user is on their journey. Finally, it’s important to figure out the information required by the user at each stage. This analysis determines the shape, form, channel, and content of the information made available to users at each point of their transactional journey.
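One simplified way to classify where a user sits on this journey from instrumented usage data; the thresholds, inputs, and stage groupings here are assumptions for illustration, not a fixed model:

```python
def classify_stage(events_last_30d: int, events_prev_30d: int, has_signed_up: bool) -> str:
    """Roughly place a user on the journey using recent vs. prior activity (illustrative thresholds)."""
    if not has_signed_up:
        return "search/discover/consider"
    if events_last_30d == 0 and events_prev_30d == 0:
        return "exit"
    if events_last_30d < 0.5 * events_prev_30d:
        return "abandon"
    if events_last_30d >= 20:
        return "engage"
    return "act"


# A signed-up user whose usage dropped sharply, and one whose usage is climbing.
print(classify_stage(events_last_30d=2, events_prev_30d=15, has_signed_up=True))   # abandon
print(classify_stage(events_last_30d=25, events_prev_30d=18, has_signed_up=True))  # engage
```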

The highway system offers inspiration for designing an information architecture that guides the customer on a successful journey. In fact, there are close parallels between the various types of highway signs and the kind of information users need when moving along the transaction path.

  • Regulatory: Information that conveys the correct usage of the product or service, such as terms of use or credit card processing and storage features
  • Warning: Information that offers “guardrails” to customers to ensure that they do not go off track and use the product in an unintended, unexpected way; examples in a digital world include notifications to inform users on how to protect themselves from spammers
  • Guide: Information that enables customers to make decisions and move ahead efficiently; examples include first-run wizards to get the user up and running and productive with the product or service
  • Services: Information that enhances the customer experience, including FAQs, knowledge bases, product training, references, and documentation
  • Construction: Information about missing, incomplete, or work-in-progress experiences in a product that enable the user to adjust their expectations; this includes time-sensitive information designed to proactively notify the user of possible breakdowns or upcoming changes in their experience, including maintenance outages and new releases

Information Analytics

Information analytics is the class of analytics designed to derive insights from data produced by end users during their customer journey. Information analytics provides two key insights into the data and the value it creates.

First, it enables the identification of the subsets of data that drive maximum value to the business. Certain data sets in the enterprise’s data store are more valuable than others and, within a data set, certain records are more valuable than others. Value in this case is defined by how users employ the information to make decisions that eventually and consistently drive value to the business.

For example, Yelp can track the correlation between a certain subset of all restaurant reviews on their site and the likelihood of users reading them and going to the reviewed restaurants. Such reviews can then be automatically promoted and ranked higher to ensure that all users get the information that has a higher probability of driving a transaction—a restaurant visit, in this case.
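In the spirit of that Yelp-style example, and with entirely made-up columns and numbers, the link between a review attribute and subsequent visits might be estimated like this:

```python
import pandas as pd

# Illustrative review-level data: an attribute of each review and reader outcomes.
reviews = pd.DataFrame({
    "review_id": [1, 2, 3, 4, 5, 6],
    "has_photos": [1, 1, 0, 1, 0, 0],
    "readers": [200, 150, 180, 90, 220, 130],
    "visits_after_read": [30, 22, 9, 14, 12, 6],
})

# Share of readers who went on to visit the reviewed restaurant.
reviews["visit_rate"] = reviews["visits_after_read"] / reviews["readers"]

# Does this review attribute correlate with readers actually visiting?
print(reviews["has_photos"].corr(reviews["visit_rate"]))
```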

Secondly, information analytics enables businesses to identify customer segments that use information to make decisions that drive the most business transactions. Understanding and identifying such segments is extremely important, as it enables the enterprise to not only adapt the information delivery for the specific needs of the customer segment but also price and package the information for maximum business value.

For example, information in a weather provider’s database in its raw form is usable by different consumers for different use cases. However, the usage of this information by someone planning a casual trip is very different from its usage by a commodities trader betting on future commodity prices. Understanding the value a user derives from the enterprise’s information is key to appropriate pricing and value generation for the enterprise.

Information Delivery

Mining and analyzing how users access information is critical to identifying, tracking, and improving key performance indicators (KPIs) around user engagement and user retention. If the enterprise does not augment the product experience with accurate, timely, and relevant information (according to the user’s location, channel and time of usage), users will be left dissatisfied, disoriented, and disengaged.

At the same time, a user’s information access should be mined to determine the combination of information, channel, and journey stage that drives value to the enterprise. Enterprises need to identify such combinations and promote them to all users of the product and service and subsequently enable a larger portion of the user base to derive similar value.

Mining the information access patterns of users can enable enterprises to build a map of the various touch points on their customer’s journey, along with a guide to the right information required for each touchpoint (by the user or by the enterprise) in the appropriate form delivered through the optimal channel. Such a map, when built and actively managed, ends up capturing the use of information by customers in their journey and correlates this with their continued engagement with — or eventual abandonment of — the product.

Enabling successful journeys for customers as they find and use products and services is critical to both business success and continued customer satisfaction. Contextual information, provided at the right time through the right channel to enable user decisions, is almost always the difference between an engaged user and an unsatisfied one — and a transaction that drives business value.