The Cultural Hurdles of Innovation via Data Leveraging: Lessons from Failed Flight and Tech Giants
The early history of human flight offers a powerful lens through which to view some contemporary digital transformation efforts. Early aviation pioneers failed not because they lacked ambition or resources, but because they misunderstood the fundamental principles of flight. Their attempts to mimic birds’ wing movements proved futile until scientific breakthroughs like Bernoulli’s principle revealed that lift generation requires complex aerodynamic interactions, not mere mechanical imitation.
This historical parallel illuminates a critical challenge facing some traditional companies today: the superficial adoption of digital transformation methodologies without grasping their underlying principles. Much as those early inventors focused on visible aspects of flight while missing its essential mechanics, many firms chase buzzwords like “data-driven decision making,” “AI transformation,” and “digital-first strategy” without understanding the fundamental organizational and cultural changes these approaches require to succeed.
The pattern is particularly evident in how some traditional firms attempt to replicate the success of tech giants like Google, Apple, and Meta. These companies’ apparent methods and tools are widely copied, but their deeper organizational principles—the equivalent of Bernoulli’s principle in our analogy—remain poorly understood or ignored. The result is a landscape littered with failed transformation initiatives, despite significant investments in technology and processes. Indeed, studies consistently indicate a high failure rate for digital transformations. While estimates vary, research suggests that a significant proportion of these initiatives do not fully achieve their objectives. McKinsey (2023) states that “success remains the exception, not the rule” and that “even successful organizational transformations deliver less than their full potential.” Similarly, BCG (2020) notes that only 30% of digital transformations meet or exceed their target value and result in sustainable change.
This essay examines why such imitation-based approaches to innovation may fall short, exploring:
- The critical differences between surface-level adoption and fundamental transformation
- How market economies shape and constrain innovation possibilities
- The cultural and organizational barriers that prevent traditional firms from achieving meaningful change
- The particular challenges of leveraging data and technology without proper expertise
Through this, perhaps naive, analysis, I aim to tentatively explore the essential principles (assuming such principles exist at all) that separate successful digital transformation from mere technological mimicry, possibly offering insights for organizations seeking authentic rather than superficial change. Importantly, one must bear in mind that the factors leading to failure may differ from those driving success—avoiding the former does not automatically guarantee the latter.
Imitating Success Without Understanding the Mechanics
Some industrial-era firms observe tech giants and attempt to emulate their methods in areas such as innovation, IT, and data analytics. Much like the early flight pioneers, these firms frequently fall short because they do not fully grasp the mechanisms driving that success. Tech giants have built their cultures and systems to foster innovation at a fundamental level, emphasizing agile development, experimental approaches, and failure as an integral part of the process. Traditional firms, on the other hand, may focus solely on superficial elements—such as creating an “innovation lab” or adopting data science tools—without fostering the cultural shifts that make these approaches effective. This superficial adoption frequently manifests in how organizations implement development methodologies.
The Process Trap: Losing Sight of Value
The SCRUM and SAFe Dilemma
In some, perhaps many, traditional firms, agile methodologies may have become rigid, process-heavy approaches that contradict their original intent. SCRUM, for instance, often devolves into what can be described as a “waterfall of waterfalls”—providing the illusion of agility while maintaining old habits. In such environments, the process becomes paramount, prioritizing the accumulation of “user story” tickets over delivering genuine value to customers.
The Scaled Agile Framework (SAFe) and other initiatives aimed at scaling agile, like the Spotify model, have also faced scrutiny: while Spotify is often cited as a success story for scaling agile practices, evidence suggests a more complex reality, and SAFe has drawn particularly sharp criticism. As noted by complexity theorist Dave Snowden, a respected figure in the Agile community (Snowden, 2014):
SAFe seemed to be PRINCE II camouflaged in Agile language. SCRUM as an approach was emasculated in a small box to the bottom right of a hugely overcomplicated linear model. The grandiose name of a dependency map was applied to something which is no different from a PERT chart and in general what we had is an old stale wine forced into shiny new wineskins.
And he added:
SAFe is not only a betrayal of the promise offered by AGILE but is a massive retrograde step giving the managerial class an excuse to avoid any significant change.
What’s the Buzz with Agile Software Development Anyway?
I was fortunate to begin my career at Horiba in Kyoto, a company renowned for manufacturing state-of-the-art scientific instruments. I worked in the R&D scientific division, where, like in many successful R&D teams, Agile practices were naturally embedded—not only in the software and applied mathematics areas, where I was involved, but across the entire scope of product development. Interestingly, the term “Agile” was never explicitly used.
The 17 individuals who gathered in 2001 to write the Agile Manifesto did not invent anything entirely new that day. They were seasoned practitioners who simply formalized and articulated what was already working in their fields. They described practices as they observed them, rather than imagining a brand-new framework for addressing the complexity of software development—an area where traditional project management methods (like those used in construction, car or pen design, and most processes from the industrial era) had consistently fallen short.
In other words, Agile emerged as the result of a Darwinian selection process. It was not imposed on practitioners; it evolved naturally to meet their needs.
However, forcing Agile onto every type of project and making it the only option across an entire organization may not be wise after all. Agile requires a specific mindset and culture—one that emphasizes adaptability, collaboration, and continuous improvement. Some parts of the organization may not possess or be ready to adopt these values. Different projects, teams, and contexts may benefit more from alternative methodologies, and allowing flexibility in choosing the right approach could ultimately lead to better outcomes. Otherwise, a caricature of Agile may emerge, where the label is applied but the underlying principles are misunderstood or misapplied.
The Leadership Gap: Technical Expertise in Management
The people that really create the things that change this industry are both the thinker and doer in one person. — Steve Jobs (Jobs, 1990)
A fundamental issue underlying these failures may stem from the lack of technical expertise within management. Successful tech companies have consistently demonstrated the value of technically proficient leadership. Leaders like Steve Jobs, Eric Schmidt, Larry Page, Sergey Brin, and Mark Zuckerberg have all emphasized the importance of technically skilled managers.
For instance, Jeff Dean, co-creator of technologies like MapReduce and TensorFlow and currently Google’s Chief Scientist, exemplifies expertise-driven leadership. Reporting directly to Alphabet CEO Sundar Pichai, Dean continues to contribute hands-on, bridging management and engineering while driving meaningful innovation.
Another example is Bertrand Serlet, former chief of Apple’s operating systems under Steve Jobs. Despite his executive role, Serlet actively contributed foundational code for OS X, showing how technical expertise at leadership levels shapes product success.
In contrast, the absence of technically proficient leadership may have contributed to notable failures in several traditional firms. Nokia and BlackBerry, both pioneers in mobile technology, failed to adapt to the software-centric smartphone era, as their leaders focused on hardware rather than user experience and app ecosystems. General Electric (GE), in its attempt to become a digital leader, struggled with its Predix platform because its industrial-focused leadership underestimated the complexity of building scalable software solutions.
These examples illustrate the challenges faced by high-level executives lacking technical expertise in a technology-driven environment. However, even in industries such as services, where top management may be experts in their domain, the leaders appointed to oversee digital transformation may lack the necessary technical skills.
Although not related to IT, the Jérôme Kerviel case at Société Générale in 2008 offers a stark example of how systemic reliance on elite networks can contribute to catastrophic failures. Kerviel’s direct supervisor, Eric Cordelle, who led the Delta One trading desk, was an alumnus of the prestigious École Polytechnique, a hallmark of France’s elite educational system. His appointment, like many in similar institutions, reflected a broader tendency to prioritize academic pedigree and alumni networks over deep domain expertise. This structural flaw created oversight gaps, which Kerviel allegedly exploited, culminating in €4.9 billion in losses—one of the largest trading scandals in history. The case highlights the dangers of relying on exclusive networks to fill leadership roles, a practice that fosters systemic vulnerabilities (Baker & Cohanier, 2017; Bennhold, 2008). For those skeptical of this interpretation, it is worth noting that Kerviel did not have the authority to sign off on frequent and substantial margin calls, underscoring deeper systemic failures.
The Pursuit of “THE Innovation” in a Multitude of Innovations
Many traditional firms classify data projects under the innovation rubric. Sometimes, they attempt to encourage innovation through training programs or talks that often resemble motivational speeches rather than fostering real change. These sessions typically present well-known but overly simplistic examples. The emphasis is on stirring enthusiasm rather than addressing fundamental issues.
In these talks, the concept of patents is often presented as an indicator of a company’s innovation prowess. Successful innovative companies are portrayed as those with “successful patents,” and grand theories about how to foster innovation are devised by finding common traits among them, yet this overlooks the deeper structural and cultural factors that may truly drive innovation. I am guilty of similar oversimplification in my “Ideation: First Step Towards Innovation” talk (Merckel, 2020): while I used Fleming and Marx’s (2006) network analysis of patent collaboration to discuss organizational structures, translating such academic research into practical recommendations risks masking underlying complexities. Besides, patents primarily capture product innovation, missing other crucial types such as process, service, and financial innovations.
Furthermore, trying to draw conclusions by finding common traits is doubtful from a scientific standpoint. Among many issues, such an approach assumes that a common underlying mechanism exists in the first place. As I wrote elsewhere (Merckel, 2017, p. 2):
the mechanism leading to successful innovations … is still to be identified. Further, the possibility that such a mechanism does not exist cannot be ruled out, those radical changes might happen by sheer luck or serendipity—a phenomenon falling under the umbrella of the so-called black swans (Taleb, 2008). Whether such a mechanism exists seems to be still an open question that some scholars have been attempting to elucidate; Christensen (The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail, 1997), one of them, expresses strong presumptions in favour of its existence.
The Market Economy Divide: Incremental vs. Disruptive Innovation
A crucial yet often overlooked factor is the fundamental difference between market economies. Coordinated market economies (CMEs), like Germany and Japan, excel at consensus-driven, incremental improvement. Liberal market economies (LMEs), such as the United States, thrive on competition and disruptive innovation. Attempting to transplant practices from a disruptive innovator like Google into a CME environment may lead to misalignment and frustration.
Consider mobile technology: In CMEs, creating an iPhone app for booking tickets represents a typical incremental innovation—improving an existing service through technology. In contrast, LMEs produced the iPhone itself and its ecosystem—a disruptive innovation that fundamentally changed how we interact with mobile devices and created entirely new markets.
Cultural and Sectoral Misalignment
Different sectors require different approaches to innovation. Logistics and service companies, for instance, should avoid the mistake of applying product-centric innovation frameworks to their operations. Instead, these companies would probably benefit more from focusing on operational and process innovations aligned with their core business models.
Innovation Myths and Misunderstandings
The Patent Paradox
Patents, often presented as indicators of innovation success, tell only part of the story. And what is the relevant measure of success for a patent? Does the success of a patent lie in the revenue it generates for the inventors, or in the eventual impact of the idea? Consider Kodak’s cautionary tale: despite inventing and patenting the digital camera, the company failed to capitalize on this innovation. The technology only achieved widespread success after Kodak’s patent expired, demonstrating that innovation success depends more on implementation and adaptation than on mere invention. The pattern mirrors other innovations, like wheels on suitcases, which took decades to become widely adopted despite their obvious utility.
Superficial Innovation Initiatives
Some firms rely on motivational sessions that oversimplify innovation concepts. A common example is the misattributed Henry Ford quote about faster horses—while Ford never actually said, “If I had asked customers what they wanted, they would have said a faster horse,” this apocryphal quote is often used to justify ignoring customer feedback. Such oversimplification obscures the complex reality of successful innovation. Perhaps nowhere is this complexity more evident than in how organizations handle failure in their innovation efforts.
The Culture of Failure and Experimentation
In companies like Google, failure is viewed as an essential part of the innovation process. Failed projects are analyzed, learned from, and shelved without stigma. Traditional firms, however, often maintain a culture where failure is unacceptable, leading to distorted reporting and flawed experimentation. This includes poorly designed A/B tests with inadequate control groups, driven more by the desire to report success than to generate meaningful insights.
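To make the contrast concrete, the sketch below shows the bare minimum a meaningful A/B comparison requires: a randomized control group of comparable size and an explicit check of whether the observed difference could plausibly be noise. It is a minimal illustration on synthetic numbers (Python with SciPy assumed; the helper function and figures are invented for the example), not a recipe for experiment design.

```python
# Minimal sketch: comparing conversion rates between a randomized control
# group and a treatment group with a two-proportion z-test.
# All figures are synthetic and purely illustrative.
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))                           # two-sided p-value
    return p_b - p_a, z, p_value

# Control: 10,000 users, 500 conversions; treatment: 10,000 users, 560 conversions.
lift, z, p = two_proportion_ztest(500, 10_000, 560, 10_000)
print(f"observed lift: {lift:.3%}, z = {z:.2f}, p = {p:.4f}")
# Without a comparable, randomized control group, the same arithmetic produces
# numbers that look rigorous yet support no causal claim.
```

Skipping any of these steps (randomization, a genuine control, a pre-agreed success criterion) tends to produce results driven more by the desire to report success than by any real insight.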
The Planning Ritual
Management theorist Russell Ackoff’s comparison of corporate planning to a “ritual rain dance” remains relevant (“A good deal of the corporate planning I have observed is like a ritual rain dance; it has no effect on the weather that follows, but those who engage in it think it does”, Ackoff, 1981, p. ix). Some traditional firms engage in the motions of innovation—creating committees, developing roadmaps, and setting targets—without addressing the fundamental cultural barriers to change. True cultural transformation requires more than ceremonial activities; it demands systematic, deep-rooted change.
“Innovate or Die” Applies Only to Some Firms
Are You Operating an R&D Facility?
An important, though perhaps overlooked, distinction lies in how different types of firms must approach innovation. Product-driven technology companies face fundamentally different innovation imperatives compared to service-oriented or infrastructure-heavy businesses.
Consider ASML, the semiconductor manufacturing equipment leader: their Q3 2024 financial results landing page prominently displays just four key metrics, with R&D spending of over 1 billion euros featured as a headline figure. From a traditional accounting perspective, R&D represents an expenditure that management has a fiduciary duty to minimize. Yet for firms whose survival depends on innovative products, R&D is not merely a cost center—it is an existential investment. These companies operate on the principle that today’s R&D expenses will be tomorrow’s capitalized innovations.
This stands in stark contrast to service-oriented businesses like consulting firms or infrastructure-heavy operations like airlines, where innovation typically focuses on operational efficiency rather than product development. For these organizations, R&D in the traditional sense may be minimal or non-existent, with innovation taking different forms—process improvements, service delivery optimization, or incremental efficiency gains.
This philosophy is exemplified in Google’s systematic approach to innovation investment. As Eric Schmidt describes in ‘How Google Works’, the company follows a 70/20/10 rule for resource allocation: 70 percent of resources dedicated to core business, 20 percent to emerging opportunities, and 10 percent to completely new, high-risk but potentially high-payoff initiatives. This structured approach to balancing innovation investment—even allocating a specific percentage to high-risk ventures—stands in stark contrast to service-oriented businesses where such systematic R&D allocation would make little sense.
Patents as Metrics: A Misleading Indicator?
While patent counts are often cited as indicators of innovation, they primarily reflect the strategies of product-driven firms like ASML or Google, which use patents as both defensive assets and markers of technological advancement. In contrast, successful service companies may thrive with minimal patent activity, focusing instead on process improvements or customer experience innovations. This highlights a critical point: when benchmarking innovation, it is essential to carefully consider which companies and metrics are most relevant. Relying on patent counts alone risks misidentifying what innovation truly means in different contexts, leading to misaligned expectations and ineffective initiatives when applying Silicon Valley-style practices to traditional businesses.
Even for technology companies, however, patent counts can be an obscure and sometimes misleading metric. Apple’s approach under Steve Jobs serves as a striking example. Following the $100 million settlement with Creative Technology in 2006, Apple dramatically increased its patent filings—not only for groundbreaking technologies but also for seemingly trivial aspects like product packaging. During the 2007 iPhone launch, Jobs famously quipped, “And boy have we patented it,” highlighting the breadth of Apple’s intellectual property strategy.
In my own experience authoring four patents (listed in the Notes), the innovation value was often questionable. These patents were pursued primarily to increase patent counts rather than to protect truly groundbreaking inventions. This illustrates the broader point that patent metrics don’t always reflect genuine innovation and can lead to misleading comparisons between companies in different sectors.
The Data-Driven Fallacy: When Numbers Lead Astray
A sometimes-overlooked yet crucial aspect of digital transformation is the essential role of domain expertise in navigating the complexity of data interpretation. Some organizations proudly declare themselves “data-driven” but lack the fundamental skills to draw meaningful conclusions from their data. This dangerous mix of enthusiasm and inexpertise may lead to flawed decision-making, hidden behind a facade of analytical rigor.
The Feet Size Fallacy: A Cautionary Tale
Consider this revealing example from psychometric testing: imagine an analysis that shows a strong positive correlation between foot size and intelligence test scores. Without proper analytical expertise, an HR department might proudly declare they have found a revolutionary, “data-driven” screening tool—after all, the data clearly shows that larger feet correlate with higher intelligence scores! They might even celebrate the cost efficiency of replacing complex psychometric tests with a simple ruler measurement.
The reality, of course, is far different. The correlation exists because babies and toddlers, with their smaller feet, typically time out of or cannot complete computerized tests, resulting in zero scores. This seemingly obvious confounding factor would be immediately apparent to someone with proper statistical training and domain knowledge. However, in organizations where data analysis is treated as a simple tool rather than a complex discipline requiring expertise, such fundamental errors can go unnoticed and even be celebrated as innovations.
Laughing? Well, a study published in Psychological Bulletin (Coren & Halpern, 1991) claimed left-handed people died on average nine years earlier than right-handed people. This caused significant concern until researchers realized the fatal flaw: the study used contemporary samples of older and younger people. Older left-handed people were historically forced to write with their right hand, so they appeared right-handed in the data, creating an artificial correlation between left-handedness and younger age.
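Both the foot-size scenario and the handedness study fall into the same trap: a confounder (age, in both cases) drives the two measured variables. A minimal simulation on synthetic data (Python with NumPy assumed; every number is invented for illustration) shows how the spurious correlation appears in the pooled sample and largely disappears once the confounder is held fixed:

```python
# Minimal sketch of a confounded correlation on synthetic data:
# age drives both "foot size" and "test score", so the two correlate
# in the pooled sample even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
age = rng.uniform(1, 40, n)                                         # ages 1 to 40

foot_size = 12 + 0.35 * np.minimum(age, 18) + rng.normal(0, 1, n)   # grows until ~18
score = np.where(age < 6, 0.0,                                      # toddlers time out: zero score
                 100 + rng.normal(0, 15, n))                        # adults: score unrelated to feet

print("corr(foot size, score), all ages:",
      round(np.corrcoef(foot_size, score)[0, 1], 2))

adults = age >= 18
print("corr(foot size, score), adults  :",
      round(np.corrcoef(foot_size[adults], score[adults])[0, 1], 2))
```

The pooled correlation looks like a finding; conditioning on age reveals it as an artifact. Spotting which variable to condition on is precisely where domain expertise, not tooling, makes the difference.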
In the fall of 1973, the University of California, Berkeley faced accusations of gender bias when it was found that 44% of male applicants were admitted to graduate programs, compared to only 35% of female applicants (Tuna, 2009). This raised serious concerns about discriminatory practices. However, a deeper analysis by statistician Peter Bickel revealed that the apparent bias was a statistical illusion. When admissions were broken down by department, no individual department significantly favored male applicants. Instead, female applicants tended to apply to more competitive departments with lower overall acceptance rates, while male applicants were more represented in departments with higher acceptance rates. This created an artificial disparity in the aggregated data, making it a classic real-world example of Simpson’s Paradox.
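Simpson’s Paradox is easy to reproduce with a toy example. The figures below are synthetic (not the actual 1973 Berkeley numbers) and are chosen only to show how per-department rates and the aggregated rate can point in opposite directions:

```python
# Minimal sketch of Simpson's paradox with two synthetic departments.
# Numbers are illustrative, not the actual 1973 Berkeley figures.
applicants = {
    # dept: (men applied, men admitted, women applied, women admitted)
    "A (less selective)": (800, 480, 100, 70),
    "B (more selective)": (200, 40, 900, 225),
}

def rate(admitted, applied):
    return admitted / applied

for dept, (ma, mad, fa, fad) in applicants.items():
    print(f"{dept}: men {rate(mad, ma):.0%}, women {rate(fad, fa):.0%}")

m_app = sum(v[0] for v in applicants.values())
m_adm = sum(v[1] for v in applicants.values())
f_app = sum(v[2] for v in applicants.values())
f_adm = sum(v[3] for v in applicants.values())

# Aggregation flips the direction because women applied mostly to the
# more selective department.
print(f"Overall: men {rate(m_adm, m_app):.0%}, women {rate(f_adm, f_app):.0%}")
```

Each department admits women at a higher rate, yet the aggregate favors men, exactly the kind of reversal that appears or disappears depending on how the data are sliced.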
What about what Malcolm Gladwell documented in “Blink”? He noted a strong correlation between CEO height and company revenue, with taller individuals disproportionately represented among Fortune 500 CEOs. This bias, however, reflects societal perceptions that favor taller people throughout their careers, not any inherent link between height and leadership ability.
The Broader Implications
This “feet size” example illustrates several critical points about the current state of some data-driven transformation efforts:
False Confidence: The appearance of analytical rigor (correlations, graphs, numbers) can create unwarranted confidence in conclusions, particularly among those without statistical expertise.
Hidden Complexities: What seems like straightforward data analysis often involves numerous subtle considerations that only become apparent with proper training and experience.
Cost of Inexpertise: Organizations attempting to become data-driven without investing in proper expertise risk making decisions that are worse than those based on experience and intuition.
The Democratization Myth: While tools for data analysis have become more accessible, the skills needed to draw valid conclusions from data cannot be similarly democratized without substantial investment in training and expertise.
This situation is particularly dangerous because it creates what might be called “data-driven theater”—organizations believe they are making objective, data-based decisions while actually engaging in potentially misleading analysis that provides a false sense of scientific validity to flawed conclusions.
The Reproducibility Crisis: A Real-World Example
A parallel can be drawn to the ongoing reproducibility crisis in academic research, where a large portion of published findings cannot be replicated. The causes span both deliberate misconduct, as highlighted in the 2023 Harvard data fraud scandal (Scheiber, 2023), and genuine statistical incompetence. Misapplied statistical methods, p-hacking (manipulating data to find significant results), and misunderstanding of statistical principles have led to numerous unreliable published results, whether through intentional manipulation or honest mistakes.
This crisis is a clear demonstration of how the absence of strong analytical skills—combined with pressure to produce results—can lead to systemic issues that undermine trust in data-driven conclusions. It serves as a cautionary tale for organizations aiming to transform into “data-driven” entities without first building a solid foundation of statistical competence and analytical rigor.
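For readers who prefer to see the mechanics rather than take the claim on faith, the toy simulation below (Python with SciPy assumed; the data are pure noise by construction) shows how running many comparisons and reporting only the winners manufactures “significant” findings without any fraud at all:

```python
# Minimal sketch of the multiple-comparisons problem behind much p-hacking:
# test 100 noise-only hypotheses at alpha = 0.05 and count the false positives.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
alpha, n_tests, n_per_group = 0.05, 100, 50

false_positives = 0
for _ in range(n_tests):
    a = rng.normal(0, 1, n_per_group)   # both groups drawn from the same distribution,
    b = rng.normal(0, 1, n_per_group)   # so the null hypothesis is true by construction
    if ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons were 'significant' at p < {alpha}")
# Around five "discoveries" are expected from pure noise; report only those
# and the portfolio of results looks impressive while meaning nothing.
```

The same dynamic plays out in corporate analytics whenever dozens of dashboards and ad hoc analyses are mined until something, anything, crosses the significance threshold.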
The Expertise Gap in Data Science Teams
The challenge of building effective data science capabilities extends beyond tools and methodologies to the fundamental issue of team composition and expertise. Some organizations, eager to embrace data-driven transformation, struggle to build genuinely qualified teams due to several interconnected factors.
At the core of this challenge lies the complex landscape of data science qualifications and expertise. Traditional firms may struggle to distinguish between deep expertise—built through formal education in statistics, computer science, or applied mathematics combined with practical experience—and surface-level familiarity gained through short-term training programs. Without deep technical understanding at the leadership level, hiring processes may overemphasize certifications and online courses while missing crucial indicators of real capability. This distinction becomes particularly critical when teams are responsible for making significant business decisions based on data analysis.
The situation is further complicated by:
- Competition with tech companies for top talent, leading to compromises in hiring standards
- Lack of clear technical career paths and growth opportunities
- Tendency to prioritize cultural alignment over technical capability, particularly when those making hiring decisions lack the technical background to evaluate candidates effectively
These compromises in team formation ultimately undermine the very data-driven transformation these organizations seek to achieve, resulting in flawed analyses that appear rigorous but lack statistical validity, and difficulty in scaling data science initiatives effectively.
The Ultimate Irony: The Blind Spots in Our Own Digital Transformation
The irony inherent in digital transformation initiatives becomes apparent when we consider how their success (or lack thereof) is evaluated. Senior management, often operating from traditional business metrics and quarterly reports, may lack the technical expertise to accurately assess the quality and impact of their digital investments. This creates a peculiar situation where:
- Technical teams might recognize fundamental issues in implementation but struggle to communicate these effectively up the management chain
- Middle management may focus on reporting metrics that look good on paper (number of AI projects initiated, percentage of “data-driven” decisions) while missing deeper indicators of successful transformation. Negotiating revenue attribution becomes one of the keys to claiming success.
- Senior leadership, viewing results through traditional business frameworks, might see surface-level progress without detecting underlying structural problems
However, there is an inevitable financial reckoning. While superficial yet expensive digital initiatives might temporarily create the appearance of progress, the CFO’s office eventually encounters the hard numbers: projects that do not deliver expected efficiencies, maintenance costs that spiral due to technical debt, or competitive advantages that fail to materialize despite significant investment.
The Change Agent’s Dilemma
The awareness of these issues raises a complex question: Do those who understand these dynamics have sufficient incentive to drive change? Individual efforts, such as this essay’s attempt to spark discussion, do challenge the status quo. However, such initiatives often come at a personal cost. As my boss observed during my tenure in the company’s first data-leveraging program, being a nonconformist—while earning respect from some—can generate resistance from many others.
This organizational inertia is perhaps best captured by a piece of wisdom passed down from Jason Zweig’s father, later shared widely in financial circles (Zweig, 2018):
- Lie to people who want to be lied to, and you’ll get rich
- Tell the truth to those who want the truth, and you’ll make a living
- Tell the truth to those who want to be lied to, and you’ll go broke
This insight, while originating in the context of financial journalism, may apply equally well to organizational change and digital transformation. Some stakeholders in traditional organizations have vested interests in maintaining current structures and processes, even when they recognize their limitations. The institutional pressure to conform to existing paradigms—even flawed ones—can be overwhelming.
Successful Data Transformations: Learning from the Exceptions
While some traditional companies may struggle with becoming genuinely data-driven, examining successful turnaround cases provides valuable insights. Particularly instructive are Microsoft, which had degraded into traditional industrial-era organizational patterns before successfully reinventing itself, and ASML, which evolved from a traditional manufacturing company into a data-sophisticated organization while maintaining its engineering excellence.
Microsoft’s Journey: From Industrial-Era Bureaucracy to Data-Driven Renaissance
By 2014, Microsoft exhibited classic traits of a traditional industrial corporation rather than a tech giant. Despite its origins in software innovation, the company had developed:
- Rigid hierarchical structures reminiscent of industrial-era organizations
- Risk-averse decision-making processes that prioritized existing revenue streams
- Bureaucratic approaches to data that emphasized reporting over insight
- Siloed departments that hoarded rather than leveraged data
- A “know-it-all” culture resistant to data-driven learning
This drift toward traditional-company patterns was evident in Microsoft’s missed opportunities in mobile and cloud computing—failures that stemmed not from a lack of data, but from an inability to effectively leverage it for decision-making.
Under Satya Nadella’s leadership, Microsoft’s revival demonstrated how a company can shed industrial-era patterns and rebuild genuine data capabilities:
Technical Data Leadership: Beyond general technical expertise, Nadella’s background in cloud computing and data systems enabled him to understand the real possibilities and limitations of data-driven decision making. This prevented the superficial “data-driven theater” criticized earlier in this essay.
Integrated Data Culture: Rather than creating isolated “data science teams,” Microsoft integrated data capabilities throughout its organization. This approach ensured that data expertise was paired with domain knowledge—avoiding the “feet size fallacy” type errors described earlier.
Data Infrastructure Investment: Microsoft built genuine data capabilities through substantial investment in both technical infrastructure and human expertise, avoiding the common trap of purchasing tools without building understanding.
ASML’s Evolution: Engineering Meets Data
ASML provides a particularly instructive example of how manufacturing companies can effectively leverage data while maintaining their engineering focus. Their approach demonstrates several crucial elements:
Domain-Driven Data Integration: Rather than attempting to become “data-driven” in the abstract, ASML integrated data science into specific engineering challenges. For example:
- Using sensor data from their machines to predict maintenance needs
- Leveraging manufacturing data to optimize chip production processes
- Applying machine learning to improve lithography accuracy
Data Expertise in Context: ASML maintained strong technical leadership while building data science capabilities, ensuring that data analysts worked closely with domain experts. This prevented the common problem of data scientists working in isolation from business reality.
Data Feedback Loops: By creating robust data collection and analysis systems across their customer base, ASML built genuine learning capabilities rather than simply accumulating data. Their machines not only generate data but feed it back into improvement cycles.
Common Elements in Successful Data Transformations
These examples are particularly relevant because they demonstrate how organizations can overcome the very challenges faced by today’s traditional firms. Their success stemmed from:
Authentic Data Culture: Each company built genuine data capabilities rather than adopting superficial “data-driven” practices, explicitly breaking away from their previous traditional corporate patterns.
Domain-Data Integration: Both successfully integrated data science with domain expertise, avoiding the common trap of isolated data teams making decisions without context.
Clear Data Purpose: Rather than collecting data for its own sake, each company maintained clear links between data collection and business value.
Technical-Business Balance: Each maintained strong technical leadership while ensuring data initiatives served clear business purposes.
Learning Focus: Both companies used data to build learning capabilities rather than just making isolated decisions.
These transformations are particularly instructive because they started from positions similar to today’s traditional companies—bureaucratic, hierarchical, and resistant to change. Their success demonstrates that while becoming genuinely data-driven is challenging, organizations can succeed when they first acknowledge and then systematically address their industrial-era organizational patterns.
Conclusion: Reflections on the Illusion of Imitable Success
Given the arguments presented, it would be both ironic and disingenuous to prescribe a definitive checklist of “dos and don’ts” based solely on observed traits. While some of these traits may well lead to spectacular failures, the characteristics identified in successful cases do not provide a guaranteed roadmap to success. They simply do not.
Furthermore, it is crucial to recognize that many Silicon Valley firms, often cited as models for digital transformation, never underwent such transitions themselves. With few exceptions, these companies were born in the digital era, not the industrial one. They did not transform—they spearheaded the digital revolution. It is embedded in their DNA. Extending this biological metaphor, even with significant advancements in genetic editing, fundamentally altering an organism’s DNA to achieve a precisely desired expression remains a formidable challenge.
Microsoft and Apple, often referenced as rare success stories of transformation, present a duality worth examining. While both offer valuable lessons, they are also deeply entrenched as tech giants, making them atypical examples for traditional firms. Microsoft’s reinvention—from a stagnating industrial-era structure to a cloud-first leader—is remarkable but was enabled by its deep technical roots and vast resources. Similarly, Apple’s evolution from “Apple Computer Inc.” to “Apple Inc.” underscores the scale and complexity of such shifts, yet its success was driven by a unique combination of design innovation and ecosystem control. For traditional companies, whose foundational DNA lacks such inherent technical or design strengths, replicating these successes may prove exponentially more challenging.
I will leave you with this insight from Jeff Bezos (Stone, 2013, p. 14), which captures my hesitance to prescribe simplistic strategies based on surface observations:
When a company comes up with an idea, it’s a messy process. There’s no aha moment.
In essence, innovation rarely stems from a neatly defined plan. Instead, it arises from iterative learning, experimentation, and the ability to navigate uncertainty—and, of course, a pinch of luck.
Sources
References
- Ackoff, R. L. (1981). Creating the Corporate Future: Plan or Be Planned For. Wiley. ISBN-13: 978-0471090090.
- Anthony, S. D., & Christensen, C. M. (2004). Seeing What’s Next: Using Theories of Innovation to Predict Industry Change. Harvard Business Review Press. ISBN-13: 978-1591391852.
- Austin, R. D., & Pelow, G. (2019). Digital Transformation at GE: What Went Wrong? Harvard Business School Case 9-194-099. Retrieved from https://hbsp.harvard.edu/product/W19499-PDF-ENG.
- Baker, C. R., & Cohanier, B. (2017). Breakdowns in internal controls in bank trading information systems: The case of the fraud at Société Générale. International Journal of Accounting Information Systems, 26, 20-31. https://doi.org/10.1016/j.accinf.2017.06.002
- Bessen, J. E. (2014). Employers aren’t just whining – the ‘skills gap’ is real. Harvard Business Review. Retrieved from https://hbr.org/2014/08/employers-arent-just-whining-the-skills-gap-is-real.
- Capelli, P. (2019). Your approach to hiring is all wrong. Harvard Business Review. Retrieved from https://hbr.org/2019/05/recruiting.
- Chamorro-Premuzic, T. (2017). The Talent Delusion: Why Data, Not Intuition, Is the Key to Unlocking Human Potential. Piatkus. ISBN-13: 978-0349412481.
- Cord, D. J. (2014). The Decline and Fall of Nokia. Schildts & Söderströms. ISBN-13: 978-9515233202.
- Coren, S., & Halpern, D. F. (1991). Left-handedness: A marker for decreased survival fitness. Psychological Bulletin, 109(1), 90–106. https://doi.org/10.1037/0033-2909.109.1.90.
- Davenport, T. H., & Patil, D. J. (2012). Data scientist: The sexiest job of the 21st century. Harvard Business Review. Retrieved from https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century.
- Dubois, R. W., Rogers, W. H., Moxley III, J. H., Draper, D., & Brook, R. H. (1987). Hospital inpatient mortality. New England Journal of Medicine, 317(26). https://doi.org/10.1056/NEJM198712243172626.
- Fleming, L., & Marx, M. (2006). Managing creativity in small worlds. California Management Review, 48(4), pp. 6-27. https://funginstitute.berkeley.edu/wp-content/uploads/2012/10/Managing-Creativity-in-Small-Worlds.pdf.
- Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. Columbia University. http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf.
- Gladwell, M. (2005). Blink: The Power of Thinking Without Thinking. Back Bay Books. ISBN-13: 978-0316010665.
- Hall, P. A., & Soskice, D. (2001). Varieties of Capitalism: The Institutional Foundations of Comparative Advantage. Oxford University Press. https://doi.org/10.1093/0199247757.001.0001.
- Harris, L. J. (1993). Do left-handers die sooner than right-handers? Commentary on Coren and Halpern’s (1991) “Left-handedness: A marker for decreased survival fitness.” Psychological Bulletin, 114(2), 203–234. https://doi.org/10.1037/0033-2909.114.2.203.
- Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124.
- Isaacson, W. (2011). Steve Jobs. Little, Brown. ISBN-13: 978-1408703748.
- Keller, S. (2017). Attracting and retaining the right talent in data and analytics. McKinsey Insights. Retrieved from https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/attracting-and-retaining-the-right-talent.
- McNish, J., & Silcoff, S. (2015). Losing the Signal: The Untold Story Behind the Extraordinary Rise and Spectacular Fall of BlackBerry. Flatiron Books. Kindle Edition. ASIN: B00Q20ASVS.
- Merckel, L. (2017). An Analysis of the Competitive Dynamics Behind the Disruptor’s Dilemma. https://doi.org/10.17863/CAM.9207.
- Nadella, S., Shaw, G., & Nichols, J. T. (2017). Hit Refresh: The Quest to Rediscover Microsoft’s Soul and Imagine a Better Future for Everyone. Harper Business. Kindle Edition. ASIN: B01HOT5SQA.
- Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251). https://doi.org/10.1126/science.aac4716.
- Schmidt, E., & Rosenberg, J. (2014). How Google Works. John Murray. Kindle Edition. ASIN: B00J379F3O.
- Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11). https://doi.org/10.1177/0956797611417632.
- Stone, B. (2013). The Everything Store: Jeff Bezos and the Age of Amazon. Transworld. Kindle Edition. ASIN: B00FOQRNN2.
Notes
- ASML. (2024, October 16). Q3 2024 Financial Results Landing Page. Retrieved from https://www.asml.com/en/investors/financial-results/q3-2024-572fba47762609d4.
- BCG (2020). Flipping the Odds of Digital Transformation Success. Retrieved from https://www.bcg.com/publications/2020/increasing-odds-of-success-in-digital-transformation.
- Jackson, S. (2024, September 21). Mark Zuckerberg says leaders should have technical skills if they want to call themselves a tech company. Yahoo News. Retrieved from https://www.yahoo.com/tech/mark-zuckerberg-says-leaders-technical-092702401.html.
- Jobs, S. (1990). Interview with Computerworld Information Technology Awards Foundation. all about Steve Jobs.com. Retrieved from https://allaboutstevejobs.com/videos/misc/future_of_pc_1990.
- McKinsey (2023). Losing from day one: Why even successful transformations fall short. Retrieved from https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/successful-transformations#/.
- Merckel, L. (2020). Ideation: First Step Towards Innovation. Retrieved from https://www.619.io/blog/2020/02/26/ideation-first-step-towards-innovation.
- Scheiber, N. (2023, September 30). The Harvard Professor and the Bloggers. The New York Times. Retrieved from https://www.nytimes.com/2023/09/30/business/the-harvard-professor-and-the-bloggers.html.
- Pièces à conviction: Affaire Kerviel Société Générale la justice sous influence [Video]. (2017, April 21). YouTube. Retrieved from https://youtu.be/TNWYyRMfzKA.
- Sadow, B. D. (1972). Rolling luggage (US Patent No. 3,653,474). United States Luggage Corp. Retrieved from https://patents.google.com/patent/US3653474A/en.
- Snowden, D. (2010). Shooting the sacred cows of OD. Cognitive Edge. Retrieved from https://thecynefin.co/shooting-the-sacred-cows-of-od/.
- Snowden, D. (2014). SAFe: the infantilism of management. Cognitive Edge. Retrieved from https://thecynefin.co/safe-the-infantilism-of-management/.
- Tuna, C. (2009, December 2). When Combined Data Reveal the Flaw of Averages. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/SB125970744553071829
- Zweig, J. (2018). Three Ways to Get Paid. Jason Zweig. Retrieved from https://jasonzweig.com/three-ways-to-get-paid/.
- Patent numbers: US2006055697A1, JP2008309595A, JP2009008577A, JP2010026882A.