3 Destructive Distractions That Every Entrepreneur Should Avoid

Published Originally on Entrepreneur.com 

Image credit: Rennett | Stowe
Distraction comes in several forms for entrepreneurs. It can arise from lacking the ability and resolve to say no, from taking on too much too soon or from not being organized for success. More important than a good idea is the ability to execute on it.

Being able to protect yourself and your team from the following three distractions can go a long way in ensuring your venture prospers:

1. Self-inflicted scope creep.

Scope creep, in any form, is dangerous. But the worst kind is the one that entrepreneurs impose on themselves. Self-inflicted scope creep can happen under two circumstances.

First, entrepreneurs can misinterpret requirements or use cases from their customers in a way that makes their plans overreach and attempt to solve issues beyond what’s absolutely necessary.

Second, entrepreneurs often become distracted by the excitement that comes with building something fresh or adding a cool new feature to a product. This zeal can hide the real problem at hand and prompt the entrepreneur to gloss over better, cheaper or more suitable ways of solving it. In the pursuit of new technology and bigger, better features, valuable time and resources can be lost.

The discipline to avoid self-inflicted scope creep does not come easily and can often take years to develop. Here are some tips for developing this discipline:

Write down use cases in the words of your users.

Perform a root-cause analysis of the problem.

Do a desired-state analysis.

Brainstorm ways of solving the problem for your customers.

Test the solution’s concept with real users.

2. Fragmented mindshare across multiple initiatives.

Another common distraction for startup leaders is attempting to have the same team focus on multiple large initiatives at the same time. Fragmenting the thought processes of staffers across different complex problems reduces their ability to function and deliver results.

This lack of central focus prolongs problem solving on every project and forces team members to make tradeoffs across all initiatives as they try to progress in parallel on every front. Staffers also pay a high context-switching price as they move between these diverse initiatives.

Here are some tips to detect when you might be in this situation:

You are attempting to solve multiple distinct problems.

You’re addressing the needs of distinct segments of users and use cases, all at the same time.

You’re building several complex, multifaceted value propositions simultaneously.

The people who are finding solutions are working on all problems at once.

3. A disorganized operation.

Another frequent distraction crops up in a disorganized enterprise. Given that time is limited, entrepreneurs are tasked not only with dealing with multiple issues at once but also with leading their teams to make progress.

With aggressive growth, organizations can evolve organically, and this can create multiple centers of power or expertise around the same or similar features or technologies. The resulting disorganization can lead to multiple parallel efforts to solve the same or similar problems.

Here are some tips to ensure that you’re organized for success:

Organize around primary customer use cases and tasks.

Ensure a consistent user experience across your product’s surface area.

Overcommunicate and develop shared goals when multiple teams are tasked with building similar components and experiences.

Why CIOs Should Turn To Cloud Based Data Analysis in 2015

Originally Published on DataFloq

CIOs are under tremendous pressure to quickly deliver big data platforms that enable enterprises to unlock the potential of big data and better serve their customers, partners and internal stakeholders. CIOs who are early adopters of big data report clear advantages to seriously considering and choosing the cloud for data analysis. These CIOs make a clear distinction between business-critical and business-enabling systems and processes. They understand the value the cloud brings to data analysis and exploration and how it enables the business arm to innovate, react and grow the business.

Here are the five biggest reported advantages of choosing the cloud for data analysis:

Speed – Faster Time to Market

Be it the speed of getting started with data analysis, the time it takes to stand up a software stack that enables analysis or the time it takes to provision access to data, a cloud-based system offers a faster boot time for the data initiative. This is music to the business’s ears, as it can extract value from data sooner rather than later.

The cloud also offers faster exploration, experimentation, action and reaction based on data analysis. For example, a cloud-based system can be made to auto-scale based on the number of users querying the system, the number of concurrent analyses, the volume of data entering the system and the amount of data being stored or processed in it. Without long hardware procurement times, the cloud can often be the difference between critical data analysis that drives business growth and missed opportunities.
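As a rough illustration of that auto-scaling idea, here is a toy sketch of a scaling decision driven by the signals mentioned above. The per-node capacities, minimum size and function names are assumptions for illustration; real cloud platforms expose this as managed autoscaling policies rather than custom code.

```python
# Toy sketch only: choose a cluster size from the demand signals named above.
from math import ceil

def desired_nodes(concurrent_users: int, running_analyses: int, ingest_gb_per_hour: float,
                  users_per_node: int = 50, analyses_per_node: int = 5,
                  ingest_gb_per_node: float = 100.0, min_nodes: int = 2) -> int:
    """Pick the node count that satisfies the most demanding of the three signals."""
    needed = max(
        ceil(concurrent_users / users_per_node),
        ceil(running_analyses / analyses_per_node),
        ceil(ingest_gb_per_hour / ingest_gb_per_node),
    )
    return max(min_nodes, needed)

# Example: a spike in concurrent analyses, not user count, drives the cluster size.
print(desired_nodes(concurrent_users=120, running_analyses=40, ingest_gb_per_hour=250))  # prints 8
```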

Another consideration mentioned by CIOs is the opportunity cost of building out full scale analytics systems. With limited budgets and time, focusing on generating core business value turns out to be more beneficial than spending those resources on reinventing a software stack that has already been built by a vendor.

Extensibility – Adjusting to Change

A unique advantage of operating in the cloud is the ability to adjust to changes in the business, the industry or the competition. Dynamic enterprises introduce new products, kill underperforming products and invest in mergers and acquisitions. Each such activity creates new systems, processes and data sets. Having a cloud-based stack that not only scales but also offers a consistent interface reduces the problem of combining (and securing and maintaining) this data from a roughly O(n²) pairwise-integration problem to an O(n) problem, making it a much cheaper proposition.
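To make that cost difference concrete, here is a small illustrative calculation. It assumes each data source needs either a point-to-point connector to every other source, or a single connector to the consistent cloud interface.

```python
# Illustrative arithmetic only: integrations needed when every data source must
# be wired to every other (point-to-point) versus one connector per source into
# a consistent cloud interface.

def point_to_point_integrations(n: int) -> int:
    """Pairwise connectors between n data sources: n * (n - 1) / 2, i.e. O(n^2)."""
    return n * (n - 1) // 2

def hub_integrations(n: int) -> int:
    """One connector per source into the consistent interface: O(n)."""
    return n

if __name__ == "__main__":
    for n in (5, 10, 25, 50):
        print(f"{n:>3} sources: point-to-point={point_to_point_integrations(n):>5}, "
              f"consistent interface={hub_integrations(n):>3}")
```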

Cost – Lower, Cheaper

CIOs love the fact that cloud-based data analysis stacks are cheaper to build and operate. With no initial investment, CIOs pay only for what they use, and if the cloud auto-scales, capacity growth plans become simpler and long-term planning is easier to perform without the danger of over-provisioning. Required data analysis capacity is often spiky (it varies sharply over time with planning and competitive activities), is affected by how prevalent the data-driven culture is in the enterprise (and how that culture changes over time) and depends on the volume and variety of data sources (which can change as quickly as the enterprise grows and maneuvers), so it is very hard for a CIO to predict the capacity required. Imperfect estimates lead to wasted resources and/or unused capacity.

Risk Mitigation – Changing Technological Landscape

Data analysis technologies and options are in flux. Especially in the area of big data, technologies are growing and maturing at different rates, with new technologies being introduced regularly. In addition, it is clear from the growth of these modern data processing and analysis tools and the recent activity of analytics and BI vendors that the capabilities currently available to the business are not addressing its pain points. There is a danger in moving in too early: adopting and depending on a particular stack might turn out to be the wrong decision, or leave the CIO with a high cost to upgrade and maintain the stack at the rate it is changing. Investing in a cloud-based data analysis system hedges this risk for the CIO. Among the options available in the cloud are Infrastructure as a Service, Platform as a Service and Analytics as a Service, and the CIO can choose the optimal solution depending on bigger tradeoffs and decisions beyond the data analysis use cases.

IT as the Enabler

Tasked with the security and health of data and processes, CIOs see their role changing to that of an enabler, ensuring that data and processes are protected while still maintaining control in the cloud. For example, identifying and tasking employees as data stewards ensures that a single person or team understands the structure and relevancy of various data sets and can act as the guide and central point of authority enabling other employees to analyze and collaborate. The IT team can then focus on acting as the data management team, ensuring that feedback and business pain points are quickly addressed and that the learnings are incorporated into the data analysis pipeline.

A cloud-based data analysis system also offers the flexibility to let the analysis inform the design of business processes and workflows. A well-designed cloud-based data analysis solution and its insights should be pluggable into the enterprise’s business workflow through well-defined, clean interfaces such as an insight export API. This ensures that any lessons learned by IT can easily be fed back as enhancements to the business.
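As a sketch of what such an insight export API could look like, a minimal Flask service might expose the latest insights for downstream workflow systems to poll. The endpoint, the InsightStore class and the example payload are hypothetical, not part of any specific product.

```python
# Minimal sketch of a hypothetical insight export API.
from dataclasses import dataclass, asdict
from datetime import datetime

from flask import Flask, jsonify

@dataclass
class Insight:
    metric: str
    segment: str
    finding: str
    generated_at: str

class InsightStore:
    """Hypothetical store; in practice this would query the cloud analytics backend."""
    def latest(self) -> list:
        return [Insight("weekly_churn", "trial_users",
                        "churn up 12% week over week",
                        datetime.utcnow().isoformat())]

app = Flask(__name__)
store = InsightStore()

@app.route("/insights")
def export_insights():
    # Downstream business workflow systems can poll this endpoint and plug the
    # returned insights into their own processes.
    return jsonify([asdict(i) for i in store.latest()])

if __name__ == "__main__":
    app.run(port=5000)  # e.g. curl http://localhost:5000/insights
```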

Similarly, a cloud-based data analysis solution is better suited to harmonization with external data sources, both public and premium. The effort required to integrate external data sources and build a refresh pipeline for them in-house is sometimes not worth the initial cost, given that the business needs to iterate through multiple such sources in its quest for critical insights. A cloud-based analytics solution offers a central point where such external data can be collected. This frees IT to focus on providing services to procure such external data sources and make them available for analysis, as opposed to providing procurement and infrastructure services to provision those sources.

A cloud-based solution also enables IT to serve as a deal maker of sorts by enabling data sharing through data evangelism. Rather than managing many-to-many data sharing between the enterprise’s many sub-organizations and arms, IT can serve as a data and insight publisher, focusing on spreading knowledge of data sets and insights across the enterprise and filling a critical gap: the missed data connections and insights that would otherwise go undiscovered.

4 Strategies for Making Your Product ‘Smarter’

Originally Published on Entrepreneur.com

“Smart” is the dominant trend in entrepreneurship and innovation. In recent times, a plethora of new products have arrived that make an existing product “smarter” by incorporating sensors, connecting the product to a backend or adding intelligence to the product itself. Reimagining existing products to be smarter and better for the end user is a gold mine for innovation. Here are four ways to rethink your products and make them smarter.

1. Understand user intent and motivations.

Make your products smarter by making them listen to and understand the intent of your user. What is the user trying to do at a given time, at a given location or on a specific channel? By listening for the signals that motivate the usage of your product, and accounting for how variations in these signals change how your product is used, you can predict and influence how your product should adjust to better serve the end user.

For example, a smart refrigerator can detect its contents, match them against the ingredients required for a planned dinner menu and remind the user to restock any missing ingredients.
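A minimal sketch of that check, assuming the fridge can report its contents as a set of ingredient names; the data shapes and values here are purely illustrative.

```python
# Illustrative sketch only: compare detected fridge contents against a planned
# menu and report what needs restocking.
def missing_ingredients(detected_contents: set[str], menu_ingredients: set[str]) -> set[str]:
    """Return ingredients required by the menu but not detected in the fridge."""
    return menu_ingredients - detected_contents

if __name__ == "__main__":
    fridge = {"milk", "eggs", "butter", "spinach"}
    lasagna = {"pasta", "tomatoes", "cheese", "spinach", "butter"}
    print(missing_ingredients(fridge, lasagna))  # {'pasta', 'tomatoes', 'cheese'}
```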

2. Reach users at the right time.

You can make your products smarter by reaching the user at the right time with the right message, even if the user is not using the product at a given point in time. Making the product aware of the user’s environment offers the opportunity to craft a personalized message to enhance the user experience. You can then motivate and influence the user to use the product at the opportune time in the manner that is most beneficial for both the user and the product.

For example, a smart app can detect the user’s location in a particular grocery aisle and alert them that an item they need to replace is on sale.

3. Enable good decisions.

Smart products help the user make the best decisions. By understanding the user’s context and their current environment, you can suggest alternatives, recommend choices or simply notify them of changes in their environment they might otherwise not have noticed. This capability enables the user to make informed choices and decisions, thus enhancing their experience and satisfaction from the product.

For example, by integrating live traffic data into a navigation system, the user can be notified of alternate routes when there are problems on their usual route.

4. Enhance user experience.

You can make your products smarter by enhancing the user’s experience, regardless of where they are in their journey with your product. If they are a new user, your product should help them onboard. If they are an active user, your product should make them more productive. If they are a dissatisfied user, your product should detect their dissatisfaction and offer the appropriate support and guidance to help them recover. In parallel, the product should learn from their situation and use this feedback in redesigning or refactoring the product.

For example, a product company that performs sentiment analysis on its Twitter stream can swiftly detect user discontent and feed it into its support ticketing system for immediate response and follow-up.
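A minimal sketch of that triage loop, assuming a toy keyword-based sentiment scorer and a stand-in for the ticketing-system call; both are illustrative placeholders, not a real model or API.

```python
# Toy sketch only: route negative tweets into a support queue.
NEGATIVE_WORDS = {"broken", "refund", "terrible", "crash", "worst"}

def score_sentiment(text: str) -> float:
    """Toy keyword scorer; a real deployment would call a trained model or service."""
    return -1.0 if set(text.lower().split()) & NEGATIVE_WORDS else 0.0

def create_support_ticket(author: str, text: str) -> dict:
    """Stand-in for a ticketing-system API call; here it just builds the payload."""
    return {"requester": author, "subject": "Negative tweet detected", "body": text}

def triage_tweets(tweets: list[dict]) -> list[dict]:
    # Each tweet is assumed to look like {"author": ..., "text": ...}.
    return [create_support_ticket(t["author"], t["text"])
            for t in tweets if score_sentiment(t["text"]) < 0]

if __name__ == "__main__":
    stream = [{"author": "@sam", "text": "The app keeps crashing, worst update ever"},
              {"author": "@lee", "text": "Loving the new dashboard"}]
    print(triage_tweets(stream))
```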

The ability to collect telemetry on how your product is being used, to use sensors to detect the environment in which it is being used and to use customer usage history in the backend to understand user intent has the potential to reinvigorate your existing products, making them smarter and more beneficial for their users. Similarly, reimagining or innovating using the above principles offers entrepreneurs the opportunity to disrupt current products and markets and ride the “smart” wave to success.

The ‘Adjacent Possible’ of Big Data: What Evolution Teaches About Insights Generation

Originally published on WIRED

Image credit: brunkfordbraun/Flickr

Stuart Kauffman introduced the “adjacent possible” theory in 2002. The theory proposes that biological systems are able to morph into more complex systems by making incremental, relatively low-energy changes to their makeup. Steven Johnson uses this concept in his book “Where Good Ideas Come From” to describe how new insights can be generated in previously unexplored areas.

The theory of the “adjacent possible” extends to the insights generation process. In fact, it offers a highly predictable and deterministic path to generating business value through insights from data analysis. For enterprises struggling to get started with analyzing their big data, the theory of the “adjacent possible” offers an easy-to-adopt and easy-to-implement framework of incremental data analysis.

Why Is the Theory of the Adjacent Possible Relevant to Insights Generation?

Enterprises often embark on their big data journeys with the hope and expectation that business-critical insights will be revealed almost immediately, simply by virtue of being on a big data journey and building out their data infrastructure. The expectation is that insights can be generated within the same quarter in which the infrastructure and data pipelines are set up. In addition, the insights generation process is typically driven by analysts who report up through the usual management chain. This puts undue pressure on the analysts and managers to show predictable, regular delivery of value, and it forces the process of insights generation to fit into project scope and delivery. However, the insights generation process is too ambiguous and too experimental to fit into the bounds of a committed project.

Deterministic delivery of insights is not what enterprises find on the other side of their initial big data investment. What enterprises almost always find is that data sources are in disarray, multiple data sets need to be combined but are not primed for blending, data quality is low, analytics generation is slow, derived insights are not trustworthy, the enterprise lacks the agility to implement the insights or it lacks the feedback loop to verify their value. Even when everything goes right, the value of the insights is simply minuscule and insignificant to the bottom line.

This is the time when the enterprise has to adjust its expectations and its analytics modus operandi. If pipeline problems exist, they need to be fixed. If quality problems exist, they need to be diagnosed (data source quality vs. data analysis quality). In addition, an adjacent possible approach to insights needs to be considered and adopted.

The Adjacent Possible for Discovering Interesting Data

Looking adjacently from the data set that is the main target of analysis can uncover other related data sets that offer more context, signals and potential insights through their blending with the main data set. Enterprises can introspect the attributes of the records in their main data sets and look for other data sets whose attributes are adjacent to them. These datasets can be found within the walls of the enterprise or outside. Enterprises that are looking for adjacent data sets can look at both public and premium data set sources. These data sets should be imported and harmonized with existing data sets to create new data sets that contain a broader and crisper set of observations with a higher probability of generating higher quality insights.
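A minimal sketch of blending one adjacent data set with the main one, assuming pandas; the data and column names (customer_id, region, order_value) are illustrative.

```python
# Blend an "adjacent" data set with the main target of analysis on a shared attribute.
import pandas as pd

# Main data set: the primary target of analysis.
orders = pd.DataFrame({
    "customer_id": [1, 2, 3, 3],
    "order_value": [120.0, 35.5, 80.0, 42.0],
})

# Adjacent data set: shares the customer_id attribute and adds context.
demographics = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "region": ["west", "east", "west"],
})

# Harmonize on the shared attribute to create a broader set of observations.
blended = orders.merge(demographics, on="customer_id", how="left")
print(blended.groupby("region")["order_value"].mean())
```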

The Adjacent Possible for Exploratory Data Analysis

In the process of data analysis, one can apply the principle of the adjacent possible to uncovering hidden patterns in data. An iterative approach to segmentation analysis, with a focus on attribution through micro-segmentation, root cause analysis, change and predictive analysis, and anomaly detection through outlier analysis, can lead to a wider set of insights and conclusions to drive business strategy and tactics.

Experimentation with different attributes such as time, location and other categorical dimensions can and should be the initial analytical approach. An iterative, incremental segmentation analysis that identifies the segments to which changes in key KPIs or measures can be attributed is a good starting point. Applying the adjacent possible, the iterative inclusion of additional attributes to fine-tune the segmentation scheme can lead to insights into significant segments and cohorts. In addition, the adjacent possible theory can also help in identifying systemic problems in the business process workflow. This can be achieved by walking upstream or downstream in the workflow and diagnosing the point of breakdown or slowdown through the identification of attributes that correlate highly with it.
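A minimal sketch of that incremental segmentation loop, assuming pandas; the metric ("churn") and attribute names are illustrative, and the "spread of segment means" criterion is just one simple way to score a candidate attribute.

```python
# Incremental ("adjacent possible") segmentation: add one attribute at a time,
# keeping the attribute that best separates the KPI across segments.
import pandas as pd

def best_next_attribute(df: pd.DataFrame, kpi: str, chosen: list[str], candidates: list[str]) -> str:
    """Pick the candidate attribute whose addition maximizes KPI spread across segments."""
    def spread(attrs: list[str]) -> float:
        means = df.groupby(attrs)[kpi].mean()
        return float(means.max() - means.min())
    return max(candidates, key=lambda attr: spread(chosen + [attr]))

if __name__ == "__main__":
    data = pd.DataFrame({
        "region":  ["west", "west", "east", "east", "east", "west"],
        "channel": ["web", "store", "web", "store", "web", "web"],
        "churn":   [0.02, 0.03, 0.09, 0.04, 0.10, 0.02],
    })
    chosen: list[str] = []
    candidates = ["region", "channel"]
    while candidates:
        nxt = best_next_attribute(data, "churn", chosen, candidates)
        chosen.append(nxt)
        candidates.remove(nxt)
        print("segmentation scheme so far:", chosen)
```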

The Adjacent Possible for Business Context

The process of data analysis is often fraught with siloed context, i.e. the analyst often does not have the full business context to understand the data, the motivation behind a business-driven question or the implications of their insights. Applying the theory of the adjacent possible here means introducing collaboration into the insights generation process: inviting and including team members who each hold a slice of the business context from their own point of view can lead to higher-value conclusions and insights. Combining the context from each of these team members to design, verify, authenticate and validate the insights generation process and its results is the key to generating high quality insights swiftly and deterministically.

Making incremental progress in the enterprise’s insights discovery efforts is a significant and valuable way to uncover insights with massive business implications. The insights generation process should be treated as an exercise in the adjacent possible, and incremental insights identification should be encouraged and valued. As this theory is put into practice, enterprises will find themselves with a steady stream of incrementally valuable insights with incrementally higher business impact.

The 2+2=5 Principle and the Perils of Analytics in a Vacuum

Published Originally on Wired

Strategic decision making in enterprises playing in a competitive field requires collaborative information seeking (CIS). Complex situations require analysis that spans multiple sessions with multiple participants (that collectively represent the entire context) who spend time jointly exploring, evaluating, and gathering relevant information to drive conclusions and decisions. This is the core of the 2+2=5 principle.

Analytics in a vacuum (i.e., non-collaborative analytics) is highly likely to be of low quality due to missing or partial context: it lacks key and relevant information and is fraught with incorrect assumptions. Another characteristic of non-collaborative analytics is the use of general-purpose tools like IM and email that are not designed for analytics. These tools leave enterprises drowning in a sea of spreadsheets, with context lost across thousands of IMs and emails and an outcome that is guaranteed to be suboptimal.

A common but incorrect approach to collaborative analytics is to think of it as a post-analysis activity. This is the approach to collaboration taken by most analytics and BI products. Post-analysis publishing of results and insights is very important; however, pre-publishing collaboration plays a key role in ensuring that the generated results are accurate, informative and relevant. Analysis that terminates at the publishing point has a very short half-life.

Enterprises need to think of analysis as a living, breathing story that grows over time as more people collaborate. More collaborators bring more data, new data and disparate data, and that broader context negates incorrect assumptions, exposes missing or low-quality data and corrects mistaken semantic understanding of the data.

Here are the most common pitfalls we have observed when analytics is carried out in a vacuum.

Wasted resources. If multiple teams or employees are seeking the same information or attempting to solve the same analytical problem, a non-collaborative approach leads to wasted resources and suboptimal results.

Collaboration can help the enterprise streamline, divide and conquer the problem more efficiently, with less time and manpower. Deconstructing an analytical hypothesis into smaller questions and distributing them across multiple employees leads to faster results.

Siloed analysis and conclusions. If the results of analysis, insights and decisions are not shared systematically across the organization, enterprises face a loss of productivity. This lack of shared context between employees tasked with the same goals causes organizational misalignment and a lack of coherence in strategy.

Enterprises need to ensure that there is common understanding of key data driven insights that are driving organizational strategy. In addition, the process to arrive at these insights should be transparent and repeatable, assumptions made should be clearly documented and a process/mechanism to challenge or question interpretations should be defined and publicized.

Assumptions and biases. Analytics done in a vacuum is hostage to the personal beliefs, assumptions, biases, clarity of purpose and comprehensiveness of context in the analyzer’s mind. Without collaboration, such biases remain uncorrected and lead to flawed foundations for strategic decisions.

A process for, and the freedom to, challenge, inspect and reference the key interpretations and analytical decisions made en route to an insight is critical for enterprises to enable and proliferate high quality insights across the organization.

Drive-by analysis. When left unchecked, with top-down pressure to use analytics to drive strategic decision making, enterprises see an uptick in what we call “drive-by analysis.” In this case, employees jump into their favorite analytical tool, run some analysis to support their argument and publish the results.

This behavior points to another danger of analytics without collaboration: instances where users, without full context and understanding of the data, its semantics and so on, perform analysis to make critical decisions. Without supervision, such analytics can lead the organization down the wrong path. Supervision, fact checking and corroboration are needed to ensure that correct decisions are made.

Arbitration. Collaboration without a process for challenge and arbitration, and without an arbitration authority, is often found (almost always later, when it is too late) to be littered with misinterpretations, factually misaligned or to have deviated from strategic patterns identified in the past.

Subject matter experts or other employees with the bigger picture and an understanding of the various moving parts of the organization need to verify and arbitrate on assumptions and insights at every step of the analysis, before those insights are disseminated across the enterprise and used to effect strategic change.

Collaboration theory has shown that information seeking in complex situations is better accomplished through active collaboration. There is a trend in the analytics industry to treat collaborative analytics as a vanity feature, with simple sharing of results touted as collaborative analytics. However, collaboration in analytics requires a multi-pronged strategy with key processes and a product that delivers those capabilities: an investment in processes that allow arbitration, fact checking, interrogation and corroboration of analytics, and an investment in analytical products that are designed and optimized for collaborative analytics.

Four Common Mistakes That Can Make For A Toxic Data Lake

Foundational Theories in Big Data Strategy, Analytics and Product Management

Published on Forbes

Data lakes are increasingly becoming a popular approach to getting started with big data. Simply put, a data lake is a central location where all applications that generate or consume data go to get raw data in its native form. This enables faster application development, both transactional and analytical, because the application developer has a standard location and interface for writing the data the application generates and a standard location and interface for reading the data the application needs.
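As a rough sketch of that “standard location and interface” idea, here is a toy filesystem-backed lake storing JSON-lines files; the class, paths and dataset names are illustrative, not any specific product’s API.

```python
# Toy data lake: one write interface and one read interface over raw records.
import json
from pathlib import Path
from typing import Iterable, Iterator

class DataLake:
    """Single place where applications write raw records and read them back in native form."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def write(self, dataset: str, records: Iterable[dict]) -> None:
        # Append raw records, one JSON object per line, under a per-dataset path.
        path = self.root / f"{dataset}.jsonl"
        with path.open("a", encoding="utf-8") as f:
            for record in records:
                f.write(json.dumps(record) + "\n")

    def read(self, dataset: str) -> Iterator[dict]:
        # Stream raw records back to any consuming application.
        path = self.root / f"{dataset}.jsonl"
        with path.open(encoding="utf-8") as f:
            for line in f:
                yield json.loads(line)

if __name__ == "__main__":
    lake = DataLake("./lake")
    lake.write("clickstream", [{"user": 1, "page": "/pricing"}])
    print(list(lake.read("clickstream")))
```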

However, left unchecked, data lakes can quickly become toxic: a cost to maintain, while the value delivered from them shrinks or simply never materializes. Here are some common mistakes that can make your data lake toxic.

Your big data strategy ends at the data lake.

A common mistake is to choose a data lake as the implementation of the big data strategy. This…
