Five Essential Steps for Strategic (Adaptive) Enterprise Computing


Given the spin surrounding big data, duopoly deflection campaigns by incumbents, and a culture of entitlement across the enterprise software ecosystem, the following five briefs are offered to provide clarity on improving strategic computing outcomes.

1)  Close the Data Competency Gap

Much has been written in recent months about the expanding need for data scientists, which is true at this early stage of automation, yet very little is said in public about the prerequisite learning curve for senior executives, boards, and policy makers.

Data increasingly represents all of the assets of the organization, including intellectual capital, intellectual property, physical property, financials, supply chain, inventory, distribution network, customers, communications, legal, creative, and all relationships between entities. It is therefore imperative to understand how data is structured, created, consumed, analyzed, interpreted, stored, and secured. Data management will substantially impact the organization’s ability to achieve and manage the strategic mission.

Fortunately, many options exist for rapid advancement in understanding data management ranging from off-the-shelf published reports to tailored consulting and strategic advisory from individuals, regional firms, and global institutions. A word of caution, however—technology in this area is changing rapidly, and very few analysts have proven able to predict what to expect within 24-48 months.

Understanding data competency:

    • Data scientists are just as human as computer scientists or any other type of scientist
    • A need exists to avoid exchanging software-enabled silos for ontology-enabled silos
    • Data structure requires linguistics, analytics requires mathematics, human performance requires psychology, and prediction requires modeling—success requires a mega-disciplinary perspective

2)  Adopt Adaptive Enterprise Computing

A networked computing workplace environment that continually adapts to changing conditions based on the specific needs of each entity – MM 6.7.12

While computing has achieved a great deal for the world during the previous half-century, the short-term gain became a long-term challenge as ubiquitous computing was largely a one-time, must-have competitive advantage that everyone needed to adopt or be left behind.  It turns out that creating and maintaining a competitive advantage through ubiquitous computing within a global network economy is a much greater challenge than initial adoption.

A deep misalignment of interests now exists between customer entities, which need differentiation in the marketplace to survive, and much of the IT industry, which needs to maintain scale by replicating precisely the same hardware and software worldwide.

When competitors all over the world are using the same computing tools for communications, operations, transactions, and learning, yet have a dramatically different cost basis for everything else, the region or organization with a higher cost basis will indeed be flattened with economic consequences that can be catastrophic.

This places an especially high burden on companies located in developed countries like the U.S. that are engaged in hyper-competitive industries globally while paying the highest prices for talent, education and healthcare—highlighting the critical need to achieve a sustainable competitive advantage.

Understanding adaptive enterprise computing:

    • Adaptive computing for strategic advantage must encompass the entire enterprise architecture, which requires a holistic perspective
    • Adaptive computing is strategic; commoditized computing isn’t—it should instead be viewed as entry-level infrastructure
    • The goal should be to optimize intellectual and creative capital while tailoring product differentiation for a durable and sustainable competitive advantage
    • Agile computing is largely a software development methodology while adaptive computing is largely a business strategy that employs technology for managing the entire digital work environment
    • The transition to adaptive enterprise computing must be step-by-step to avoid operational disruption, yet bold to escape incumbent lock-in

3)  Extend Analytics to Entire Workforce

Humans represent the largest expense and risk to most organizations, so technologists have had a mandate for decades to automate processes and systems that either reduce or replace humans. This economic theory is greatly misunderstood, however. The idea is to free up resources for re-investment in more important endeavors, which is where the majority of people have historically found employment, but in practice the theory depends on long-term, disciplined monetary and fiscal policy that favors investment in new technologies, products, companies, and industries. When global automation is combined with an environment that doesn’t favor re-investment in new areas, as we’ve seen in recent decades, capital sits on the sidelines or is employed in speculation that creates destructive bubbles; the combination results in uncertainty and high levels of chronic unemployment.

However, while strategic computing must consider all areas of cost competitiveness, it’s also true that most organizations have become more skilled at cost containment than at human systems and innovation. As we’ve observed consistently in recent years, the result is that many organizations have failed to prevent serious or fatal crises, failed to seize opportunities, and failed to remain innovative at competitive levels.

While macroeconomic conditions will hopefully improve with time, the important message for decision makers is that the untapped potential in human performance analytics that can be captured with state-of-the-art systems today is several orders of magnitude greater than what traditional supply chain or marketing analytics alone can deliver.

Understanding Human Performance Systems:

    • Improved human performance systems improve everything else
    • The highest potential ROI to organizations today hasn’t changed in a millennium: engaging humans in a more competitive manner than the competition
    • The most valuable humans tend to be fiercely protective of their most valuable intellectual capital, which is precisely what organizations need, requiring deep knowledge and experience for system design
    • Loyalty and morale are low in many organizations due to poor compensation incentives, frequent job changes, and motivation misaligned with employer products, cultures, and business models
    • Motivation can be fickle and fluid, varying a great deal between individuals, groups, places, and times
    • For those who may have been otherwise engaged—the world went mobile

4)  Employ Predictive Analytics

In our increasingly data-rich world, an organization need not grow much beyond its founders before it requires effective data management designed to achieve a strategic advantage with enterprise computing. Indeed, success or failure has often depended on converting an early agile advantage into a more mature adaptive environment and culture. Among organizations that survive beyond the average life expectancy, many cultures finally change only after a near-death experience triggered by becoming complacent, rigid, or simply entitled to something the customer disagreed with—reasons enough for almost any company to adopt analytics.

While the need for more accurate predictive abilities is obvious for marketers, it is no less important for risk management, investment, science, medicine, government, and most other areas of society.

Key elements that impact predictive outcomes (see the sketch after this list):

    • Quality of data, including integrity, scale, timeliness, access, and interoperability
    • Quality of algorithms, including design, efficiency, and execution
    • Ease of use and interpretation, including visuals, delivery, and devices
    • How predictions are managed, including verification, feedback loops, accountability, and the decision chain
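
To make these elements concrete, here is a minimal, hypothetical Python sketch of a prediction pipeline that gates scoring on a data-quality check and closes the feedback loop by recording outcomes for later verification. The names (PredictivePipeline, record_outcome) and the quality rules are illustrative assumptions, not a description of any particular product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Dict, List, Optional

@dataclass
class Prediction:
    """A single prediction plus the metadata needed to verify it later."""
    inputs: Dict[str, float]
    value: float
    made_at: datetime
    actual: Optional[float] = None          # filled in by the feedback loop

@dataclass
class PredictivePipeline:
    model: Callable[[Dict[str, float]], float]   # any scoring function
    required_fields: List[str]
    history: List[Prediction] = field(default_factory=list)

    def check_quality(self, record: Dict[str, float]) -> bool:
        """Data quality gate: integrity (no missing fields) and basic sanity."""
        return all(f in record and record[f] is not None for f in self.required_fields)

    def predict(self, record: Dict[str, float]) -> Optional[Prediction]:
        if not self.check_quality(record):
            return None                          # refuse to score bad data
        p = Prediction(record, self.model(record), datetime.now(timezone.utc))
        self.history.append(p)                   # accountability: keep every call
        return p

    def record_outcome(self, prediction: Prediction, actual: float) -> None:
        """Feedback loop: store the observed outcome next to the prediction."""
        prediction.actual = actual

    def mean_absolute_error(self) -> float:
        verified = [p for p in self.history if p.actual is not None]
        return sum(abs(p.value - p.actual) for p in verified) / max(len(verified), 1)
```

Even a skeleton like this makes accountability concrete: every prediction is retained with its inputs, and accuracy can be audited once outcomes arrive.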

5)  Embrace Independent Standards

Among the most important decisions impacting an organization’s future ability to adapt its enterprise computing to fast-changing external forces—which increasingly determine whether the organization succeeds or fails—is whether to embrace independent standards for software development, communications, and data structure.

Key issues to understand about independent standards:

    • Organizational sovereignty—it has proven extremely difficult and often impossible to maintain control of one’s destiny in an economically sustainable manner over the long-term with proprietary computing standards dominating enterprise architecture
    • Trade secrets, IP, IC, and differentiation are very difficult to secure when relying on consultants who represent competitors in large proprietary ecosystems
    • Lock-in and high maintenance fees are enabled primarily by proprietary standards and lack of interoperability
    • Open source is not at all the same as independent standards, nor does it necessarily improve adaptive computing or TCO
    • Independent standards bodies are voluntary in most of the world, slow to mature, and influenced by ideology and interests within governments, academia, industry, and IT incumbents
    • The commoditization challenge and the need for adaptive computing are similar wherever computing is ubiquitous, regardless of standards type

Video short on data silos + extending BI to entire workplace


Brief video on adaptive computing, avoiding data silos and extending BI across the enterprise and entire information workplace:

Legacy of the Tōhoku Earthquake: Moral Imperative to Prevent a Future Fukushima Crisis


An article in the New York Times reminds us once again that without a carefully crafted and highly disciplined governance architecture in place, perceived misalignment of personal interests between individuals and organizations across cultural ecosystems can lead to catastrophic decisions and outcomes. The article was written by Martin Fackler and is titled: Nuclear Disaster in Japan Was Avoidable, Critics Contend.

While not unexpected by those who study crises, this being yet another case where brave individuals raised red flags only to be shouted down by the crowd, the article does provide instructive granularity that should guide senior executives, directors, and policy makers in planning organizational models and enterprise systems. In a rare statement by a leading publication, Martin Fackler reports that insiders within “Japan’s tightly knit nuclear industry” attributed the Fukushima plant meltdown to a “culture of collusion” among “powerful regulators and compliant academic experts”. This is very similar to the dynamic found in other preventable crises, from the broad systemic financial crisis to narrow product defect cases.

One of the individuals who warned regulators of just such an event was professor Kunihiko Shimizaki, a seismologist on the committee created specifically to manage risk associated with Japan’s offshore earthquakes. Shimizaki’s conservative warnings were not only ignored, but his comments were removed from the final report “pending further research”. Shimizaki is reported to believe that “fault lay not in outright corruption, but rather complicity among like-minded insiders who prospered for decades by scratching one another’s backs.” This echoes almost verbatim events in the U.S., where multi-organizational cultures evolved slowly over time to become among the highest systemic risks to life, property, and the economy.

In another commonly found result, the plant operator Tepco failed to act on multiple internal warnings from its own engineers, who calculated that a tsunami could reach up to 50 feet in height. This critical information was withheld from regulators for three years and finally reported just four days before the 9.0 quake, which caused a 45-foot tsunami and the meltdown of three reactors at Fukushima.

Three questions for consideration

1) Given that the root cause of the Fukushima meltdown was not the accurately predicted earthquake or tsunami, but rather dysfunctional organizational governance, are leaders not then compelled by moral imperative to seek out and implement organizational systems specifically designed to prevent crises in the future?

2) Given that peer pressure and social dynamics within the academic culture and its relationship with regulators and industry are cited as the cause by the most credible witness, a member of their own community who predicted the event, would not prudence demand that responsible decision makers consider solutions from outside the afflicted cultures?

3) With the not-invented-here syndrome near the core of every major crisis in recent history, each of which has seriously degraded economic capacity, can anyone afford not to?

Steps that must be taken to prevent the next Fukushima

1) Do not return to the same poisoned well for solutions that caused or enabled the crisis

  • The not-invented-here syndrome, combined with a bias for institutional solutions, perpetuates the myth that humans are incapable of anything but repeating the same errors over and over.

  • This phenomenon is evident in the ongoing financial crisis which suffers from similar cultural dynamics between academics, regulators and industry.

  • Researchers have only recently begun to understand the problems associated with deep expertise in isolated disciplines and cultural dynamics. ‘Expertisis’ is a serious problem within disciplines, tending to blind researchers to transdisciplinary patterns and discovery and severely limiting consideration of possible solutions.

  • Systemic crises overlap too many disciplines for the academic model to execute functional solutions, as evidenced by the committee in this case that sidelined its own seismologist’s warnings for further study, a classic enabler of systemic crises.

2) Understand that in the current digital era and through the foreseeable future, organizational governance challenges are also data governance challenges, which therefore require the execution of data governance solutions

    • Traditional organizational governance is rapidly breaking down with the rise of the neural network economy, yet governance solutions are comparatively slow to be adopted.

    • Many organizational leaders, policy makers, risk managers, and public safety engineers are not functionally literate with state-of-the-art technology, such as semantic, predictive, and human alignment methodologies.

    • Functional enterprise architecture that has the capacity to prevent the next Fukushima-like event, regardless of location, industry, or sector, will require a holistic design encapsulating a philosophy that proactively considers all variables that have enabled previous events.

      • Any functional architecture for this task cannot be constrained by the not-invented-here-syndrome, defense of guilds, proprietary standards, protection of business models, national pride, institutional pride, branding, culture, or any other factor.

3) Adopt a Finely Woven Decision Tapestry with Carefully Crafted Strands of Human, Sensory, and Business Intelligence

Data provenance is foundational to any functioning critical system in the modern organization; as sketched after the list below, it provides:

      • Increased accountability

      • Increased security

      • Carefully managed transparency

      • Far more functional automation

      • The possibility of accurate real-time auditing
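
As an illustration only, and not a description of Kyield’s design, a minimal Python sketch of a provenance record might chain content hashes so that any later tampering with the record trail is detectable:

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class ProvenanceRecord:
    """Who produced a piece of data, when, and a hash linking it to the prior record."""
    source_id: str            # authenticated producer (person, sensor, or system)
    payload: str              # the data itself, or a reference to it
    created_at: str
    prev_hash: Optional[str]  # link to the previous record in the chain
    record_hash: str = ""

    def seal(self) -> "ProvenanceRecord":
        body = json.dumps(
            {"source_id": self.source_id, "payload": self.payload,
             "created_at": self.created_at, "prev_hash": self.prev_hash},
            sort_keys=True,
        )
        self.record_hash = hashlib.sha256(body.encode()).hexdigest()
        return self

def append_record(chain: List[ProvenanceRecord], source_id: str, payload: str) -> ProvenanceRecord:
    """Add a new sealed record whose hash depends on the previous record."""
    prev = chain[-1].record_hash if chain else None
    rec = ProvenanceRecord(source_id, payload,
                           datetime.now(timezone.utc).isoformat(), prev).seal()
    chain.append(rec)
    return rec
```

Because each record carries the hash of its predecessor, an auditor can recompute the chain at any time and detect an altered or missing entry, which is what makes the accountability and real-time auditing listed above practical.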

4) Extend advanced analytics to the entire human workforce

      • Incentives for pre-emptive problem solving and innovation

      • Automate information delivery:

        • Record notification

        • Track and verify resolution

        • Extend network to unbiased regulators of regulators

      • Plug-in multiple predictive models:

        • Establish resolution of conflicts with unbiased review

        • Automatically include results in reporting to prevent the obstacles to essential targeted transparency that occurred in the Fukushima incident (a sketch of such a workflow follows this list)
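
A hypothetical sketch of the “record notification, track and verify resolution” workflow follows; the class and field names are assumptions made for illustration rather than an actual system design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class RedFlag:
    """A warning raised by an employee, a sensor, or a predictive model."""
    raised_by: str
    description: str
    raised_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    notified: List[str] = field(default_factory=list)   # everyone who was notified
    resolution: Optional[str] = None
    verified_by: Optional[str] = None                   # independent reviewer

class RedFlagRegistry:
    def __init__(self, reviewers: List[str]):
        self.reviewers = reviewers          # includes unbiased external parties
        self.flags: List[RedFlag] = []

    def raise_flag(self, raised_by: str, description: str) -> RedFlag:
        flag = RedFlag(raised_by, description)
        flag.notified = list(self.reviewers)  # notification is recorded, not optional
        self.flags.append(flag)
        return flag

    def resolve(self, flag: RedFlag, resolution: str, verified_by: str) -> None:
        """A resolution is accepted only together with an independent verifier."""
        flag.resolution = resolution
        flag.verified_by = verified_by

    def unresolved(self) -> List[RedFlag]:
        """Anything still open is surfaced automatically in reporting."""
        return [f for f in self.flags if f.resolution is None]
```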

5) Include sensory, financial, and supply chain data in real-time enterprise architecture and reporting

      • Until this year, extending advanced analytics to the entire human workforce was considered futuristic (see the 1/10/2012 Forrester Research report Future of BI), in part due to scaling limitations in high performance computing. While always evolving, the design has existed for a decade.

      • Automated data generated by sensors should be carefully crafted and combined in modeling with human and financial data for predictive applications for use in risk management, planning, regulatory oversight and operations.

        • Near real-time reporting is now possible, so governance structures and enterprise architectural design should reflect that functionality (a minimal sketch follows).
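
As a rough illustration of pulling sensor, human, and financial events into one near-real-time view (the field names and the 15-minute window are invented for the example):

```python
from datetime import datetime, timedelta, timezone
from typing import Dict, List

def combined_report(events: List[Dict], window_minutes: int = 15) -> Dict[str, List[Dict]]:
    """Group recent events from all sources into one near-real-time view.

    Each event is a dict such as:
      {"source": "sensor" | "human" | "financial", "ts": datetime, "detail": "..."}
    """
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=window_minutes)
    recent = [e for e in events if e["ts"] >= cutoff]
    report: Dict[str, List[Dict]] = {"sensor": [], "human": [], "financial": []}
    for e in recent:
        report.setdefault(e["source"], []).append(e)
    return report
```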


Conclusion

While obviously not informed by a first-person audit and review, if the reports and quotes from witnesses surrounding the Fukushima crisis are accurate, and they are generally consistent with dozens of other human-caused crises, we can conclude the following:

The dysfunctional socio-economic relationships in this case resulted in an extremely toxic cultural dynamic across academia, regulators and industry that shared tacit intent to protect the nuclear industry. Their collective actions, however, resulted in an outcome that idled the entire industry in Japan with potentially very serious long-term implications for their national economy.

Whether psychological, social, technical, economic, or some combination thereof, it would seem that no justification for not deploying the most advanced crisis prevention systems can be left standing. Indeed, we all have a moral imperative to rise above our biases, personal and institutional conflicts, and defensive nature, and to explore and embrace the most appropriate solutions, regardless of origin, institutional labeling, media branding, or any other factor. Some crises are simply too severe not to prevent.

Mark Montgomery
Founder & CEO
Kyield
http://www.kyield.com

Press release on our enterprise pilot program


We decided to expand the reach of our enterprise pilot program through the electronic PR system, so I issued the following BusinessWire release today:

Kyield Announces Pilot Program for Advanced Analytics and Big Data with New Revolutionary BI Platform 

“We are inviting well-matched organizations to collaborate with us in piloting our break-through system to bring a higher level of performance to the information workplace,” stated Mark Montgomery, Founder and CEO of Kyield. “In addition to the significant competitive advantage exclusive to our pilot program, we are offering attractive long-term incentives free from lock-in, maintenance fees, and high service costs traditionally associated with the enterprise software industry.”

Regards, MM

Best of Kyield Blog Index


I created an index page containing links to the best articles in our blog with personal ratings:

Best of Kyield Blog Index.

Key patent issued


My key patent for Kyield was issued today by the USPTO as scheduled earlier this month.

Title: Modular system for optimizing knowledge yield in the digital workplace

Abstract: A networked computer system, architecture, and method are provided for optimizing human and intellectual capital in the digital workplace environment.

To view our press release go here

To view the actual patent  go here

I will post an article when time allows on the importance of this technology and IP, and perhaps one on the experience with the patent system. Thanks, MM

Data Integrity: The Cornerstone for BI in the Decision Process


When studying methods of decision making in organizations, mature professionals with an objective posture often walk away wondering how individuals, organizations, and even our species have survived this long. When studying large systemic crises, it can truly be a game changer in the sport of life, providing motivation that extends well beyond immediate personal gratification.

Structural integrity in organizations, increasingly reflected by data in computer networking, has never been more important. The decision dimension is expanding exponentially due to data volume, global interconnectedness, and increased complexity, thus requiring much richer context, well-engineered structure, far more automation, and increasingly sophisticated techniques.

At the intersection of the consumer realm, powerful new social tools are available worldwide that have proven valuable in effecting change, but blind passion is ever-present, as is self-serving activism from all manner of guild. Ideology surrounding the medium plays a disproportionate role in phase 3 of the Internet era, including crowdsourcing, social networking, and mainstream journalism. Sentiment can be measured more precisely today, but alignment is elusive, durability questionable, and integrity rare.

Within the enterprise, managers are dealing with unprecedented change, stealthy risk, and compounding complexity driven in no small part by technology. Multi-billion dollar lapses sourced from multiple directions have become common, including a combination of dysfunctional decision processes, group/herding error, self-destructive compensation models, conflicting interests, and poorly designed enterprise architecture relative to actual need.

Specific to enterprise software, lack of flexibility, commoditization, high maintenance costs, and difficulty in tailoring have created serious challenges for crisis prevention, innovation, differentiation, and global competitiveness. It is not surprising, then, given the exponential growth of data, which often manifests in poor decisions in complex environments, that Business Intelligence (BI) is a top priority in organizations of all types. BI is still very much in its infancy, however, often locked in the nursery, leaving business analysts dependent on varying degrees of IT functionality to unlock the gate to the data store.

Given the importance of meaningful, accurate data to the mission of the analyst and the future of the organization, recent track records in decision making, and challenges within the organization and the IT industry, it is not surprising that analysts turn to consultants and cloud applications in search of alternative methods, even when aware that doing so extends the vicious cycle of data silos.

Unfortunately, while treating the fragmented symptoms of chronic enterprise maladies may provide brief episodic relief, only a holistic approach specifically designed to address the underlying root causes is capable of satisfying the future needs of high performance organizations.

The dirty dozen fault lines to look for in structural integrity of data

  1. Does your EA automatically validate the identity of the source in a credible manner? (Y/N)

  2. Is your IT security redundant, encrypted, bio protected, networked, and physical?  (Y/N)

  3. Are your data languages interoperable internally and externally? (Y/N)

  4. Is the enterprise fully integrated with customers, partners, social networking, and communications? (Y/N)

  5. Do you have a clear path for preventing future lock-in from causing unnecessary cost, complexity, and risk?  (Y/N)

  6. Are data rating systems tailored to specific needs of the individual, business unit, and organization? (Y/N)

  7. Are original work products of k-workers protected with pragmatic, automated permission settings? (Y/N)

  8. Does each knowledge worker have access to organizational data essential to their mission? (Y/N)

  9. Are compensation models driving mid to long-term goals, and well aligned with lowering systemic risk? (Y/N)

  10. Is counter party risk integrated with internal systems as well as best available external data sources? (Y/N)

  11. Does your organization have enterprise-wide, business unit, and individual data optimization tools? (Y/N)

  12. Are advanced analytics and predictive technologies plug and play? (Y/N)

If you answered yes (Y) to all of these questions, then your organization is well ahead of the pack, even if perhaps a bit lonely. If you answered no (N) to any of these questions, then your organization likely has existing fault lines in its structural integrity that will need to be addressed in the near future. The fault lines may or may not be visible even to the trained professional, until the next crisis of course, at which time it becomes challenging for management to focus on anything else.
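
On the first fault line, validating the identity of a data source, the following is a minimal sketch using only the Python standard library; the shared-key HMAC scheme is illustrative, and a production system would typically rely on asymmetric signatures and proper key management.

```python
import hashlib
import hmac

def sign_message(shared_key: bytes, message: bytes) -> str:
    """Producer side: attach an HMAC so the receiver can verify who sent the data."""
    return hmac.new(shared_key, message, hashlib.sha256).hexdigest()

def verify_source(shared_key: bytes, message: bytes, signature: str) -> bool:
    """Consumer side: accept the record only if the signature checks out."""
    expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Example: a supply-chain feed signs each record before it enters the warehouse.
key = b"rotate-me-regularly"
record = b'{"sku": "A-1001", "qty": 40}'
sig = sign_message(key, record)
assert verify_source(key, record, sig)
```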

Clever is Cute as Sustainable is Wise


If the financial crisis confirmed anything, it is that the majority of humans are followers, not leaders, and that leaders throughout our society have yet to capture the significance of technology to their members and organizations.

One of the primary causal factors cited by thought leaders in studying crises is poor leadership, including leaders who accept misaligned or conflicted interests. When we see “skimming off the top” in others we label it corruption, yet few see it in themselves at all, or they choose to ignore it, which results in the same outcome. While balance is obviously needed for survival (indeed, managing that balance well is key for modern leaders), when we over-emphasize short-term profits we elevate the influence and power of those who are skilled at winning very short-term battles rather than long-term wars. I have personally experienced that strategy in organizations and observed it in many others; it doesn’t end well.

One problem with the short-term leadership model is that the skills for software programming, instant trading, manipulating markets, or otherwise amassing great wealth quickly do not necessarily translate to good leadership in a private company or government, or to stewardship in philanthropy. Indeed, in my own observations and those of many others, quite the opposite is often true, yet our information institutions instruct society to emulate the clever rather than the wise. Should we be surprised, then, at the trend line of manipulation, polarization, and ever deeper crises?

Unlike the early days of the industrial revolution, in the second inning of the information revolution we now understand that most of the challenges facing the human species are long-term in nature, so we must realign our thinking and structures accordingly, including financial incentives and leadership development. Alas, since the long-term has been greatly compressed by consistent failure of short-term behavior, our entire species must now learn to act in the short-term on behalf of our mutual long-term interests. Easier said than done in our culture. The good news is that it’s quite possible…tick-tock, tick-tock, tick-tock.

The process of identifying, mentoring, and recruiting strong leaders often consists of conflicting advice that tends towards self-propelling cultures, regardless of organizational type. In addition to skill sets and track records sketched from misleading data, leaders are often selected based on ideology, dysfunctional networks, and susceptibility to peer pressure, instead of independent thought, good decision making, and wisdom.

Given the evidence, a rational and intelligent path would be to reconstruct our thinking and behavior surrounding the entire topic of leadership and organizational structures, and then tailor that thinking specifically for the environment we actually face, with tools specifically designed for the actual task. For many cultures, such a path begins by emerging from deep denial and embracing evidence-based decision making. Once emerged from the pit of denial, they soon discover among other truths that resources are not infinite after all, that personal accountability is not limited to the inefficiencies of organizations, and that both the problems and solutions we face are inextricable from computing, organizational management, and personal accountability. Only then will the path to sustainability begin to take shape in the vision field in sufficient form to differentiate the forest from the trees.

Yet another of the many disciplines related to this topic defines psychosis as a “mental disorder characterized by symptoms, such as delusions that indicate impaired contact with reality”.  An appropriate translation of insanity might be “refusal to adopt tools and models designed to achieve sustainability”, aka survival.

If this sounds familiar in your organization, it could well be traced to your leadership development model and process, for leaders are the decision makers who have budget authority. Perhaps it’s time for your organization to redefine strategic from clever to wise, and synchronize the organizational clock with present-day reality?

Hidden costs of complexity in the enterprise


Often, when consuming fresh evidence, a truism is reconfirmed: what is not visible is far greater than what is, in terms of opportunity, risk, and cost. Few concepts are as profoundly transformative once this string of characters begins to take shape in the mind’s eye.

Unlike biological evolution where logical adaptation occurs over many generations, the dominant complex systems on earth today are driven by technology, evolving very rapidly, and of course subject to human manipulation.

From competitive advantage to utility tax

Similar to healthcare, education, and finance where complexity creep has been manipulated for long periods until surpassing sustainability, enterprise software has become primarily a game of ‘heads I win; tails you lose’ for those paying the bills. While the first generation of enterprise software provided a strong competitive advantage to early adopters, the bulk of systems have long since resembled a tax on admission.

Today most leading enterprise systems offer a competitive advantage to very few organizations, because innovation is focused on optimizing commoditization, scale, and capitalization, not products, service, or improvement. The result is a breakdown in product design, tragically including a negative influence on innovation for customers worldwide.

The tipping point for market dysfunction in the enterprise occurred more than a decade ago when most of the innovation began to be outsourced to consumer ventures in Silicon Valley that serve a very different purpose for a different customer. Even if the majority of SV VC firms and young social networking entrepreneurs understood the needs of organizations, globally adopted consumer technology cannot provide a competitive advantage to organizations. It should be obvious that if everyone is using the same rigid programs with little if any adaptation or customization, then the advantage falls only to those with scale.

Another sign of market dysfunction in IT more generally is visible by comparing cash generation of industry leaders with their competitors. A recent article in the WSJ revealed that nine tech companies generated $68.5 billion in new cash during the Great Recession while the other 65 competitors in the S&P 500 generated $13.5 billion combined.

The war chests at the IT leaders tell a different story for each company, including recent product success at Apple, the power of the network effect with Google on the Web, effectively managed monopolies, and an economy dominated for many years by investment banks, weak regulation and judicial enforcement, and, last but not least, apathy in enterprise customers. I argue that this level of consolidation of market power isn’t good for anyone, including the stockholders of those with the largest war chests, as the dividends are either very small or non-existent, and the majority perform poorly in stock appreciation.

Need for ‘market farming’

It’s essential to understand the difference between proprietary applications and proprietary standards, particularly relating to network effect, market power and systemic risk. IT architecture in the network era is of course global and largely unregulated outside of minimal anti-trust enforcement, with voluntary standards bodies, so it should not be surprising then that proprietary standards still dominate enterprise computing, or that proprietary systems are manipulated for self gain. That is after all the fiduciary duty of public companies with bonus incentives, stock options, and careers dependent upon same.

Of course it’s also true that, given government’s terrible track record in regulating technology, combined with nationalist conflicts that too often suffocate innovation with protectionism, very few of the vastly divergent stakeholders in the ecosystem favor increased regulation. That leaves us with the heavy lifting of systems design and market education to attract intelligent decisions by early adopters in an environment that is predatory toward new entrants. Markets do eventually depend upon well-informed customers acting in their best interest, even in expert professional roles, particularly in markets dominated by lock-in.

From the perspective of a consultant and venture capitalist who has coached hundreds of businesses of all sizes on defending against predation, it is an understatement to say that enterprise customers need to take a far more proactive role as experts in market farming. The financial incentive to do so is certainly sufficient, as is the risk in not doing so. Until such time that more effective global governance of standards is enforced, however, national governments, non-profits, consumers, and especially CIOs have a special responsibility to be proactive as ‘market farmers’ within the IT ecosystem.

Ownership of data

In the modern world, data represents knowledge, security, identity, health, and wealth. Those who control proprietary standards within the network effect in an unregulated environment not only can extort a high price of entry to the utility, but can also greatly influence if not control ownership of data. We see that often today on the public web (see LinkedIn’s license agreement for an example).

Our position with Kyield is that while we have every right and indeed a responsibility to protect our proprietary intellectual capital within our platform, we do not have the right to extend the power enabled by the platform over others. I would even go so far as to say that we have a responsibility to defend customers from others doing so to the best of our ability. Our power should only extend to the borders of our internal systems, beyond which we should embrace universal standards, so we do, even though they are far from perfect and extremely slow to evolve relative to need.

Networks are not new, they are just much larger, contain data representing far more than ever before, and are of course global. Nations created standards for electric grids presumably to prevent the electric companies from owning every appliance maker, if not their economy. The ‘extend and conquer’ strategy is predatory, obviously very dangerous to the global economy, and should therefore be regulated and aggressively enforced. Nationalist conflicts and protectionism have prevented proper regulation.

It’s my personal position that no one should have power over the languages used for public transactions, nor should any company or group of companies (or any entity, including governments) have influence, power, or ownership over data belonging to anyone other than the original owner of that data or those who have been granted rights of use. Ownership and control of data is, I believe, a fundamental right for individuals and legal entities. It would surprise me if the U.S. Supreme Court failed to better clarify data standards and ownership at some point in the near future.

Of course standards bodies have a higher calling not only to manage the standards process properly within meaningful time constraints, but also consider the influence of conflicting interests on their financial model, as well as sustainable economics relating to adoption. I recently suggested to the W3C for example that their Wiki was insufficient for communicating the benefits of the languages the body created and approved. The response was that they didn’t have the bandwidth and that communications was up to vendors. The response fell well short of what leaders in science have been teaching for many years, suggesting to me that significant change was needed in the model and culture.

With very few exceptions, during the 15 years I have been observing the evolution of computing standards, it has been confirmed often that the IT industry is no more able to self-regulate than the finance or healthcare industries. I wish they could, but wishing doesn’t make it so.

Beauty of simplicity

Those who doubt the need for reducing complexity in our society probably haven’t spent much time working pro bono for a worthy cause that cannot be saved due to entrenched interests owing their existence to manipulated complexity. I have seen the dilemma manifest many times during my career in legislation where the individual interests of the few manipulate the outcome for the many, thereby threatening the whole, eventually including themselves of course, with little or no apparent understanding of the impact of their self-centered activism. The current political situation in the U.S. is obviously not what the Founding Fathers (and Mothers) intended; indeed, it is what they feared most.

A very similar dynamic evolved in computing over the past few decades, which I began to recognize in the mid 1990s when we converted our consulting firm to one of the first commercial Internet labs and incubators. It was only when I trained to become a network engineer and object oriented programmer that the degree of manipulation and systemic risk became apparent.

“Complicated systems and their combinations should be considered only if there exist physical-empirical reasons to do so.” –Albert Einstein

There have been reasons for the increasing complexity in enterprise software, but they have been primarily limited to protecting cash cows and extending market share in the global network ecosystem. While economies of scale and Moore’s law pushed hardware costs lower, software costs escalated while arguably delivering less value, despite a massive reaction by open source. The parallel evolution in healthcare, education, and government in the U.S. was very similar: we were all paying more for less.

The problem today, of course, is that our collective resources can no longer afford even the status quo, much less the trajectory, of the costs driven by manipulated complexity. In enterprise software, manipulated complexity also creates very high maintenance costs, now exceeding 20% of product cost per year in some cases, and of course a much higher level of security risk. This is a dangerous game and largely unnecessary.

One of the elements I liked best when reviewing the papers of the Google founders and later their early beta was the elegance of simplicity. Google was among the first substantial efforts in the Internet era to reduce a highly complex environment to a very simple interface designed for humans in their own natural language, and it worked very well indeed.

Ironically, I found Google while searching for raw IP in an effort to achieve something similar for the enterprise, but enterprise data had not been converted to simple HTML like the public web; rather, it was subject to incompatible languages and data silos, and of course HTML lacked the ability to provide the embedded intelligence we were seeking, so we had a long wait for standards to catch up to the vision often associated with the semantic web. I like to think we used the time wisely.

Simplicity for the enterprise has not yet arrived at the station, but the train is full and fast approaching.

Toyota’s failure to act on the dots


It seems that every few weeks we hear of another major series of tragedies leading to a crisis that could have been substantially mitigated, if not entirely prevented. Among the most disturbing to discover, and most costly, are those involving safety issues that either threaten or cause loss of life.

Often has been the case during the past decade when the organization involved has been a government agency, but this quarter Toyota wins the prize for failure to act on the digital dots in the enterprise (in fairness Toyota should share the prize with NHTSA—also the front runners for the annual prize—it’s still early in the year).

In the case of Toyota, we may never know how many digital red flags existed within the enterprise network, but the clear pattern in other cases suggests that the probabilities are high that warnings were common within communications, documents, and web servers indexed by enterprise search engines. As in previous events, these problems were known for years by the corporation and the agencies charged with regulation, yet they collectively failed to prevent loss of life, resulting in personal tragedy for a few and far-reaching economic destruction in one of the world’s premier corporations.

While growing too fast may have contributed to this problem, it was organizational system design that allowed it to become one of the worst crises in the company’s history, and a serious threat to the future. Toyota has done a good job with innovation in product design compared to most of its competitors, but failed in adopting innovation in organizational management that would sustain their success.

When will leaders of the largest organizations finally understand that failing to invest in how organizations function is often the most expensive mistake executives will ever make? Given the size and impact of industry leaders, combined with the systemic nature of the global economy, these types of failures are unacceptable to growing numbers of communities, companies, and consumers.

An article in the WSJ sums it up well with this quote:

Liu Jiaxiang, a Toyota dealer in Shenzhen, said sales at his stores in southern China have been “steady” since late January. “What keeps me awake at night is a possible long-term erosion of customer confidence in the Toyota brand” because of the recall problem, he said.

Denial is a powerful force in humans, particularly when combined with peer pressure, social herding, politics, bureaucracy, contradictory information, and sheer inertia that seems unstoppable at times. It’s also true that cultures of invincibility have a consistent track record of experiencing crisis and decline soon thereafter.

Mr. Toyoda, if this post finds you, I invite your team to revisit our publications and jointly explore how our holistic semantic enterprise platform could have reduced or prevented this series of recalls. Even if  Toyota had deployed Kyield when your domain first appeared in our Web logs, we almost certainly could have contained this event to a multi-million dollar product defect issue, rather than the multi-billion dollar corporate crisis that it has become.

Mark Montgomery
Founder & CEO – Kyield
Web: http://www.kyield.com
Blog: https://kyield.wordpress.com
email: markm@kyield.com
Twitter: @kyield