Revolution in IT-Enabled Competitiveness


Four Stages of Enterprise Network Competence

Beyond basic competencies and resources, most current industry leaders owe their existence to the strong competitive advantage gained from early adoption of the systems engineering and statistical methods for industrial production that powered much of the post-WWII economy. These initially manual systems and methods accelerated global trade, extraction, logistics, manufacturing, and scaling efficiencies, and became computerized over the last half-century.

The early computer systems were highly complex and very expensive, yet they produced historic business successes such as American Airlines’ SABRE in 1959 [1] and Walmart’s logistics system starting in 1975 [2], which helped Walmart become, in 1980, the fastest company ever to reach a billion USD in sales.

As those functions, previously available to only a few, became productized and widely adopted globally, the competitive advantage began to decline. The argument for adoption then shifted from competitive advantage to an essential, high cost of entry. [3] When functionality in databases, logistics, and desktops became globally ubiquitous, the competitive advantage was substantially lost, yet software costs continued to rise even as hardware costs fell dramatically, causing problems for customers as well as for national and global macroeconomics. To achieve a competitive advantage in IT, companies had to invest heavily in commoditized computing as a high cost of initial entry, and then invest significantly more in customization on top of the digital replicas most competitors already enjoyed.

The network era began in the 1990s with the commercialization of the Internet and the Web. Built on universal standards, they introduced a very different dynamic to the IT industry that has since impacted most sectors and the global economy. Initially under-engineered and overhyped for short-term gains during the inflation of the dotcom bubble, their long-term impact was underestimated, as evidenced by the ongoing disruption and displacement across many industries today. We are now entering a new phase Michael Porter refers to as ‘the third wave of IT-driven competition’, which he claims “has the potential to be the biggest yet, triggering even more innovation, productivity gains, and economic growth than the previous two.” [4]

While I see the potential of smart, connected devices much as Porter does, the potential of AI-enhanced human work for increased productivity, accelerated discovery, automation, prevention, and economic growth is enormous. As in the 1990s, machine intelligence is overhyped in the short term, yet the longer-term impact could indeed be “the biggest yet” of the three waves. This phase of IT-enabled competitiveness is the logical extension of the network economy, benefiting from thousands of interoperable components, long under development from a vast number of sources, to execute the ‘plug and play’ architecture many of us envisioned in the 1990s. This still-emerging Internet of Entities, combined with advanced algorithmics, brings massive opportunity and risk to all organizations in all sectors, requiring operational systems and governance specifically designed for this rapidly changing environment.

This is a clip from an e-book nearing completion, titled The Kyield OS: A Unified AI System; Rapid Ascension to a Higher Level of Performance. Existing or prospective customers are invited to send me an email for a copy upon completion within the next month: markm at kyield dot com.

[1] https://www.aa.com/i18n/amrcorp/corporateInformation/facts/history.jsp

[2] http://www.scdigest.com/ASSETS/FIRSTTHOUGHTS/12-07-26.php?cid=6047

[3] Lunch discussion on topic with Les Vadasz in 2009 in Silicon Valley.

[4] https://hbr.org/2014/11/how-smart-connected-products-are-transforming-competition


Yield Management of Knowledge for Industry + FAQs


[Diagram: Industrial Yield Management of Knowledge, from www.kyield.com]

I decided to share this slightly edited version of a diagram that was part of a presentation we recently completed for an industry-leading organization. Based on feedback, this may be the most easily understandable graphic we’ve produced to date in communicating the Kyield enterprise system. As part of the same project, we published a new FAQ page on our website that may be of interest. Most of my writing over the past several months has been in private, tailored papers and presentations related to our pilot and partner programs.

I may include a version of this diagram in a public white paper soon if I can carve out some writing time. If you don’t hear from me before then, I wish you and your family a happy holiday season.

Kind regards, MM

Kyield Partner Program: An Ecosystem for the Present Generation


We’ve been in various discussions with potential partners for the past couple of years, finding many interests in the industry to be in direct conflict with the missions of customer organizations. Most of our discussions reflect a generational change in technology that is being met with sharp internal divisions at incumbent market leaders, resulting in attempts to protect the past from the present and future. The situation is not new: it is very difficult for incumbent market leaders to voluntarily cannibalize highly profitable business models, particularly when doing so would threaten jobs, so historically, disruptive technology has required new ecosystems (MS and Intel in the early 1980s, Google in the late 1990s, RIM and the Apple app store in the 2000s, etc.).

So, due to Kyield’s platform approach, the disruptive nature of our technology to incumbent business models, and the resistance to change among industry leaders, despite pressure from even their largest customers, we determined quite some time ago that it may be necessary to build a new ecosystem based on the Kyield platform, data standards, and interoperability. The driving need is not just the enormous sums charged for unproductive functionality such as lock-in, maintenance fees, and unnecessary software integration (an ever-increasing problem for customers), or even commoditization and lack of competitive advantage (also a very serious problem); rather, as is often the case, it comes down to a combination of complexity, math, and physics.

Not only is it economically unviable to optimize network computing in the neural network economy on legacy business models, but without data standards and seamless interoperability we cannot physically prevent systemic crises, dramatically improve productivity, or expedite discovery in a manner that doesn’t bankrupt a good portion of the economy. In addition, we do need deep intelligence on each entity, as envisioned in the semantic web, in order to overcome many of our greatest challenges, execute advanced analytics, and manage ‘big data’ in a rational manner, but we also need to protect privacy, data assets, knowledge capital, and property rights while improving security. Standards may be voluntary, but overcoming these challenges isn’t.

So we’ve been working on a new partner program for Kyield, in conjunction with our pilot phase, in an attempt to reach prospective partners who may not be on our radar, who would make great partners, and who would work with us to build a new ecosystem based not on protecting the past or current management at incumbent firms, but on the future, by optimizing the present capabilities and opportunities surrounding our system. In the process, we hope to collectively create far more new jobs than we could on our own, not just in our ecosystem, but, importantly, in customer ecosystems.

We’ve decided for now on five different types of partner relationships that are intended to best meet the needs of customers, partners, and Kyield:

  • Consulting
  • Technology
  • Platform
  • Strategic
  • Master

For more information on our partner program, please visit:

http://www.kyield.com/partners.html

Thank you,

Mark Montgomery
Founder, Kyield

New Video on Kyield Enterprise – Data Tailored to Each Entity


New Video – Kyield Enterprise: Human Economics in Adaptive NN


Whiteboard video presentation on Kyield Enterprise


Kyield founder Mark Montgomery provides a 14-minute whiteboard presentation on Kyield Enterprise and the Kyield Enterprise Pilot

Five Essential Steps For Strategic (adaptive) Enterprise Computing


Given the spin surrounding big data, duopoly deflection campaigns by incumbents, and a culture of entitlement across the enterprise software ecosystem, the following five briefs are offered to provide clarity for improving strategic computing outcomes.

1)  Close the Data Competency Gap

Much has been written in recent months about the expanding need for data scientists, which is real at this early stage of automation, yet very little is said in public about the prerequisite learning curve for senior executives, boards, and policy makers.

Data increasingly represents all of the assets of the organization, including intellectual capital, intellectual property, physical property, financials, supply chain, inventory, distribution network, customers, communications, legal, creative, and all relationships between entities. It is therefore imperative to understand how data is structured, created, consumed, analyzed, interpreted, stored, and secured. Data management will substantially impact the organization’s ability to achieve and manage the strategic mission.
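
As a concrete illustration of treating assets and relationships as data, here is a minimal sketch in Python; the entity types, field names, and example records are illustrative assumptions only, not a description of Kyield or any particular product.

    # Minimal sketch: organizational assets and their relationships
    # represented as typed, linked entities. All names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        entity_id: str
        entity_type: str                      # e.g., "patent", "supplier", "employee"
        attributes: dict = field(default_factory=dict)
        links: list = field(default_factory=list)  # (relation, entity_id) pairs

        def link(self, relation: str, other: "Entity") -> None:
            """Record a typed relationship; the relationship itself is data."""
            self.links.append((relation, other.entity_id))

    patent = Entity("IP-0042", "patent", {"title": "Adaptive data system"})
    engineer = Entity("EMP-7", "employee", {"role": "inventor"})
    engineer.link("invented", patent)
    print(engineer.links)                     # [('invented', 'IP-0042')]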

Fortunately, many options exist for rapid advancement in understanding data management, ranging from off-the-shelf published reports to tailored consulting and strategic advisory from individuals, regional firms, and global institutions. A word of caution, however: technology in this area is changing rapidly, and very few analysts have proven able to predict what to expect within 24 to 48 months.

Understanding data competency:

    • Data scientists are just as human as computer scientists or any other type of scientist
    • A need exists to avoid exchanging software-enabled silos for ontology-enabled silos
    • Data structure requires linguistics, analytics requires mathematics, human performance requires psychology, and prediction requires modeling; success requires a mega-disciplinary perspective

2)  Adopt Adaptive Enterprise Computing

A networked computing workplace environment that continually adapts to changing conditions based on the specific needs of each entity – MM 6.7.12

While computing has achieved a great deal for the world during the previous half-century, the short-term gain became a long-term challenge as ubiquitous computing was largely a one-time, must-have competitive advantage that everyone needed to adopt or be left behind.  It turns out that creating and maintaining a competitive advantage through ubiquitous computing within a global network economy is a much greater challenge than initial adoption.

A deep misalignment of interests now exists between customer entities, which need differentiation in the marketplace to survive, and much of the IT industry, which needs to maintain scale by replicating precisely the same hardware and software worldwide.

When competitors all over the world are using the same computing tools for communications, operations, transactions, and learning, yet have a dramatically different cost basis for everything else, the region or organization with a higher cost basis will indeed be flattened with economic consequences that can be catastrophic.

This places an especially high burden on companies located in developed countries like the U.S. that are engaged in hyper-competitive industries globally while paying the highest prices for talent, education and healthcare—highlighting the critical need to achieve a sustainable competitive advantage.

Understanding adaptive enterprise computing:

    • Adaptive computing for strategic advantage must encompass the entire enterprise architecture, which requires a holistic perspective
    • Adaptive computing is strategic; commoditized computing isn’t and should instead be viewed as entry-level infrastructure
    • The goal should be to optimize intellectual and creative capital while tailoring product differentiation for a durable and sustainable competitive advantage
    • Agile computing is largely a software development methodology, while adaptive computing is largely a business strategy that employs technology to manage the entire digital work environment
    • The transition to adaptive enterprise computing must be step-by-step to avoid operational disruption, yet bold to escape incumbent lock-in

3)  Extend Analytics to Entire Workforce

Humans represent the largest expense and risk to most organizations, so technologists have had a mandate for decades to automate processes and systems in order to reduce or replace humans. The underlying economic theory, however, is greatly misunderstood. The idea is to free up resources for re-investment in more important endeavors, which has historically employed the majority of people; in practice, though, the theory depends on long-term, disciplined monetary and fiscal policy that favors investment in new technologies, products, companies, and industries. When global automation is combined with an environment that doesn’t favor re-investment in new areas, as we’ve seen in recent decades, capital sits on the sidelines or is employed in speculation that creates destructive bubbles, a combination that results in uncertainty and high levels of chronic unemployment.

However, while strategic computing must consider all areas of cost competitiveness, it’s also true that most organizations have become more skilled at cost containment than at human systems and innovation. As we’ve observed consistently in recent years, the result is that many organizations have failed to prevent serious or fatal crises, failed to seize opportunities, and failed to remain innovative at competitive levels.

While macroeconomic conditions will hopefully improve broadly with time, the important message for decision makers is that the untapped potential in human performance analytics that can be captured with state-of-the-art systems today is several orders of magnitude greater than what traditional supply chain or marketing analytics alone can offer.

Understanding Human Performance Systems:

    • Improving human performance systems improves everything else
    • The highest potential ROI to organizations today hasn’t changed in a millennium: engaging humans in a more competitive manner than the competition
    • The most valuable humans tend to be fiercely protective of their most valuable intellectual capital, which is precisely what organizations need, requiring deep knowledge and experience for system design
    • Loyalty and morale are low in many organizations due to poor compensation incentives, frequent job change, and misaligned motivation with employer products, cultures and business models
    • Motivation can be fickle and fluid, varying a great deal between individuals, groups, places, and times
    • For those who may have been otherwise engaged: the world went mobile

4)  Employ Predictive Analytics

In the current environment, an organization need not grow much beyond its founders before our increasingly data-rich world demands effective data management designed to achieve a strategic advantage with enterprise computing. Indeed, success or failure has often depended on converting an early agile advantage into a more mature adaptive environment and culture. Among organizations that survive beyond the average life expectancy, many cultures change only after a near-death experience triggered by becoming complacent, rigid, or simply feeling entitled to something their customers no longer agreed with; reasons enough for almost any company to adopt analytics.

While the need for more accurate predictive abilities is obvious for marketers, it is no less important for risk management, investment, science, medicine, government, and most other areas of society.

Key elements that impact predictive outcomes:

    • Quality of data, including integrity, scale, timeliness, access, and interoperability
    • Quality of algorithms, including design, efficiency, and execution
    • Ease of use and interpretation, including visuals, delivery, and devices
    • How predictions are managed, including verification, feedback loops, accountability, and the decision chain (a brief sketch follows this list)
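
To ground that last element, here is a minimal sketch, in Python, of managing predictions with verification, a feedback loop, and a simple accountability metric; the class and field names are hypothetical, not part of any specific product.

    # Minimal sketch: record predictions, verify them against outcomes,
    # and track per-model error for accountability. Names are hypothetical.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Prediction:
        model: str
        subject: str
        predicted: float
        actual: Optional[float] = None        # filled in when the outcome is known

    class PredictionLog:
        def __init__(self) -> None:
            self.records: List[Prediction] = []

        def record(self, p: Prediction) -> None:
            self.records.append(p)

        def verify(self, subject: str, actual: float) -> None:
            """Close the feedback loop by attaching the observed outcome."""
            for p in self.records:
                if p.subject == subject and p.actual is None:
                    p.actual = actual

        def mean_abs_error(self, model: str) -> float:
            """Simple accountability metric for the decision chain."""
            errors = [abs(p.predicted - p.actual) for p in self.records
                      if p.model == model and p.actual is not None]
            return sum(errors) / len(errors) if errors else float("nan")

    log = PredictionLog()
    log.record(Prediction("demand-v1", "Q3-unit-sales", 1200.0))
    log.verify("Q3-unit-sales", 1175.0)       # outcome arrives later
    print(log.mean_abs_error("demand-v1"))    # 25.0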

5)  Embrace Independent Standards

Among the most important decisions impacting the future ability of organizations to adapt their enterprise computing to fast-changing external forces, which increasingly determine whether an organization succeeds or fails, is whether to embrace independent standards for software development, communications, and data structure.

Key issues to understand about independent standards:

    • Organizational sovereignty: it has proven extremely difficult, and often impossible, to maintain control of one’s destiny in an economically sustainable manner over the long term when proprietary computing standards dominate the enterprise architecture
    • Trade secrets, IP, IC, and differentiation are very difficult to secure when relying on consultants who represent competitors in large proprietary ecosystems
    • Lock-in and high maintenance fees are enabled primarily by proprietary standards and lack of interoperability
    • Open source is not at all the same as independent standards, nor does it necessarily improve adaptive computing or TCO
    • Independent standards bodies are voluntary in most of the world, slow to mature, and influenced by ideology and interests within governments, academia, industry, and IT incumbents
    • The commoditization challenge and the need for adaptive computing apply to ubiquitous computing regardless of the type of standards adopted

Legacy of the Tōhoku Earthquake: Moral Imperative to Prevent a Future Fukushima Crisis


An article in the New York Times reminds us once again that, without a carefully crafted and highly disciplined governance architecture in place, perceived misalignment of personal interests between individuals and organizations across cultural ecosystems can lead to catastrophic decisions and outcomes. The article, written by Martin Fackler, is titled: Nuclear Disaster in Japan Was Avoidable, Critics Contend.

While not unexpected by those who study crises, being yet another case where brave individuals raised red flags only to be shouted down by the crowd, the article does provide instructive granularity that should guide senior executives, directors, and policy makers in planning organizational models and enterprise systems. In a rare statement by a leading publication, Martin Fackler reports that insiders within “Japan’s tightly knit nuclear industry” attributed the Fukushima plant meltdown to a “culture of collusion” among “powerful regulators and compliant academic experts”. A very similar dynamic is found in other preventable crises, from the broad systemic financial crisis to narrow product-defect cases.

One of the individuals who warned regulators of just such an event was professor Kunihiko Shimizaki, a seismologist on the committee created specifically to manage risk associated with Japan’s offshore earthquakes. Shimizaki’s conservative warnings were not only ignored, but his comments were removed from the final report “pending further research”. Shimizaki is reported to believe that “fault lay not in outright corruption, but rather complicity among like-minded insiders who prospered for decades by scratching one another’s backs.” This echoes almost verbatim events in the U.S., where multi-organizational cultures evolved slowly over time to become among the highest systemic risks to life, property, and the economy.

In another common pattern, the plant operator Tepco failed to act on multiple internal warnings from its own engineers, who calculated that a tsunami could reach up to 50 feet in height. This critical information was withheld from regulators for three years, finally being reported just four days before the 9.0 quake that produced a 45-foot tsunami and the meltdown of three reactors at Fukushima.

Three questions for consideration

1) Given that the root cause of the Fukushima meltdown was not the accurately predicted earthquake or tsunami, but rather dysfunctional organizational governance, are leaders not then compelled by moral imperative to seek out and implement organizational systems specifically designed to prevent crises in the future?

2) Given that peer pressure and social dynamics within the academic culture, and its relationships with regulators and industry, are cited as the cause by the most credible witness (one from their own community, who predicted the event), would not prudence demand that responsible decision makers consider solutions from outside the afflicted cultures?

3) With the not-invented-here syndrome near the core of every major crisis in recent history, each of which has seriously degraded economic capacity, can anyone afford not to?

Steps that must be taken to prevent the next Fukushima

1) Do not return to the same poisoned well for solutions that caused or enabled the crisis

  • The not-invented-here syndrome, combined with a bias for institutional solutions, perpetuates the myth that humans are incapable of anything but repeating the same errors over and over.

  • This phenomenon is evident in the ongoing financial crisis, which suffers from similar cultural dynamics between academics, regulators, and industry.

  • Researchers have only recently begun to understand the problems associated with deep expertise in isolated disciplines and cultural dynamics. ‘Expertisis’ is a serious problem that tends to blind researchers to transdisciplinary patterns and discovery, severely limiting consideration of possible solutions.

  • Systemic crises overlap too many disciplines for the academic model to execute functional solutions, as evidenced in this case by the committee that sidelined its own seismologist’s warnings for further study, a classic enabler of systemic crises.

2) Understand that in the current digital era and for the foreseeable future, organizational governance challenges are also data governance challenges, which require the execution of data governance solutions

    • Traditional organizational governance is rapidly breaking down with the rise of the neural network economy, yet governance solutions are comparatively slow to be adopted.

    • Many organizational leaders, policy makers, risk managers, and public safety engineers are not functionally literate with state-of-the-art technology, such as semantic, predictive, and human alignment methodologies.

    • Functional enterprise architecture that has the capacity to prevent the next Fukushima-like event, regardless of location, industry, or sector, will require a holistic design encapsulating a philosophy that proactively considers all variables that have enabled previous events.

      • Any functional architecture for this task cannot be constrained by the not-invented-here-syndrome, defense of guilds, proprietary standards, protection of business models, national pride, institutional pride, branding, culture, or any other factor.

3) Adopt a Finely Woven Decision Tapestry with Carefully Crafted Strands of Human, Sensory, and Business Intelligence

Data provenance is foundational to any functioning critical system in the modern organization (a brief sketch follows this list), providing:

      • Increased accountability

      • Increased security

      • Carefully managed transparency

      • Far more functional automation

      • The possibility of accurate real-time auditing
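
As one way to picture how provenance enables accountability and real-time auditing, here is a minimal sketch of a hash-chained provenance log in Python; a production system would add signatures and access control, and all names here are illustrative assumptions.

    # Minimal sketch: each record commits to its predecessor, so any
    # after-the-fact tampering is detectable on audit.
    import hashlib, json, time

    class ProvenanceLog:
        def __init__(self) -> None:
            self.records = []

        def append(self, actor: str, action: str, data_ref: str) -> dict:
            prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
            record = {"actor": actor, "action": action, "data_ref": data_ref,
                      "ts": time.time(), "prev": prev_hash}
            record["hash"] = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            self.records.append(record)
            return record

        def audit(self) -> bool:
            """Re-derive every hash; any edited record breaks the chain."""
            prev = "0" * 64
            for r in self.records:
                body = {k: v for k, v in r.items() if k != "hash"}
                expected = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if r["prev"] != prev or r["hash"] != expected:
                    return False
                prev = r["hash"]
            return True

    log = ProvenanceLog()
    log.append("sensor-12", "created", "readings/2012-03-11.csv")
    log.append("analyst-3", "revised", "report/tsunami-risk-v2")
    print(log.audit())   # True; becomes False if any record is altered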

4) Extend advanced analytics to the entire human workforce

      • Incentives for pre-emptive problem solving and innovation

      • Automate information delivery (a brief sketch follows this list):

        • Record notification

        • Track and verify resolution

        • Extend the network to unbiased regulators of regulators

      • Plug in multiple predictive models:

        • Establish resolution of conflicts with unbiased review

        • Automatically include results in reporting to prevent obstacles to essential targeted transparency, as occurred in the Fukushima incident
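
The sketch below, in Python, illustrates that notification-and-resolution loop: a warning, once recorded, cannot be silently dropped, and unresolved warnings are escalated to an independent reviewer. All names, fields, and the escalation contact are hypothetical.

    # Minimal sketch: record warnings, track resolution, and escalate
    # anything unresolved past its deadline to an independent reviewer.
    import time

    class WarningTracker:
        def __init__(self, escalation_contact: str) -> None:
            self.escalation_contact = escalation_contact
            self.warnings = []

        def record(self, source: str, description: str, deadline: float) -> int:
            """Record a warning; it can no longer be silently dropped."""
            self.warnings.append({"id": len(self.warnings), "source": source,
                                  "description": description,
                                  "deadline": deadline, "resolved": False})
            return self.warnings[-1]["id"]

        def resolve(self, warning_id: int, resolution: str) -> None:
            self.warnings[warning_id]["resolved"] = True
            self.warnings[warning_id]["resolution"] = resolution

        def escalate_overdue(self, now: float) -> list:
            """Return unresolved, overdue warnings and notify the reviewer."""
            overdue = [w for w in self.warnings
                       if not w["resolved"] and w["deadline"] < now]
            for w in overdue:
                print(f"escalating warning {w['id']} to {self.escalation_contact}")
            return overdue

    tracker = WarningTracker("independent-reviewer@example.org")
    tracker.record("engineer-team-4", "wave height may exceed design basis",
                   deadline=time.time() - 1)   # already overdue
    tracker.escalate_overdue(now=time.time())  # forces visibility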

5) Include sensory, financial, and supply chain data in real-time enterprise architecture and reporting

      • Until this year, extending advanced analytics to the entire human workforce was considered futuristic (see the 1/10/2012 Forrester Research report Future of BI), in part due to scaling limitations in high-performance computing. While always evolving, the design has existed for a decade.

      • Automated data generated by sensors should be carefully crafted and combined in modeling with human and financial data for predictive applications in risk management, planning, regulatory oversight, and operations (a brief sketch follows this list).

        • Near real-time reporting is now possible, so governance structures and enterprise architectural design should reflect that functionality.
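
Here is a minimal sketch of the kind of fusion described above: sensor, financial, and human-reported data merged into one near real-time risk report. The field names and thresholds are illustrative assumptions, not a specification.

    # Minimal sketch: merge three data streams and flag any reading
    # outside its limit; thresholds and field names are illustrative.
    def risk_report(sensor: dict, financial: dict, human: dict) -> dict:
        flags = []
        if sensor["seawall_margin_m"] < 0:            # design basis exceeded
            flags.append("sensor: seawall margin negative")
        if financial["maintenance_budget_pct"] < 80:
            flags.append("financial: maintenance underfunded")
        if human["open_safety_warnings"] > 0:
            flags.append("human: unresolved safety warnings")
        return {"flags": flags, "status": "ALERT" if flags else "OK"}

    print(risk_report(
        sensor={"seawall_margin_m": -4.0},            # wave height minus design margin
        financial={"maintenance_budget_pct": 72},
        human={"open_safety_warnings": 2},
    ))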

 

Conclusion

While obviously not informed by a first-person audit and review, if the reports and quotes from witnesses surrounding the Fukushima crisis are accurate, and they are generally consistent with dozens of other human-caused crises, we can conclude the following:

The dysfunctional socio-economic relationships in this case resulted in an extremely toxic cultural dynamic across academia, regulators and industry that shared tacit intent to protect the nuclear industry. Their collective actions, however, resulted in an outcome that idled the entire industry in Japan with potentially very serious long-term implications for their national economy.

Whether psychological, social, technical, economic, or some combination thereof, it would seem that no justification for failing to deploy the most advanced crisis prevention systems can be left standing. Indeed, we all share a moral imperative that demands we rise above our biases, personal and institutional conflicts, and defensive nature to explore and embrace the most appropriate solutions, regardless of origin, institutional labeling, media branding, or any other factor. Some crises are indeed too severe not to prevent.

Mark Montgomery
Founder & CEO
Kyield
http://www.kyield.com

Best of Kyield Blog Index


I created an index page containing links to the best articles in our blog with personal ratings:

Best of Kyield Blog Index.

Kyield is seeking select organizations to partner on enterprise prototype


Greetings,

We are now seeking clients to work in a collaborative manner to develop and test a fully functional prototype of our patented enterprise platform during 2012.

For a small group of well-matched organizations, we are prepared to offer exceptional benefits:

  • Very attractive license terms of extended duration
  • Extraordinary consulting in tailoring and optimizing the system
  • Priority terms for future innovation, providing on-going competitive advantage

General target criteria for prototype / client partnership

Must be interested in improving:

  • Crisis prevention
  • Innovation
  • Productivity
  • Decision making

Preferred entity size of 500 to 10,000 knowledge workers

  • Flexible depending on work type & intensity
  • Can go much higher but not much lower
  • No direct competitors
  • Strategic partner organizations possible

Above average technical environment

  • Create original digital work products
  • Distributed, remote workers
  • Place high value on invention & innovation
  • High value on prevention of crises & litigation
  • High priority on competitive advantage & differentiation
  • Thought leader more important than market leader

English language work environment

    • To interact efficiently with Kyield, not for internal files

The project will require CEO/COO buy-in for the unit or organization

    • The nature of the organizational platform will likely require CEO leadership prior to engagement

If your organization or a client matches these general criteria, or you are aware of one that might, please contact Kyield’s CEO Mark Montgomery (markm@kyield.com) to explore in more detail.

Kyield will not disclose identities of either the individuals or organizations until both parties agree.

Thank you!