An open letter to Fortune 500 CEOs on AI systems from Kyield’s founder



Since we offer an organizational and network operating system, technically defined as a modular artificial intelligence system, we usually deal with the most important strategic and operational issues facing organizations. This is most obvious in our new HumCat offering, which provides advanced technology for preventing human-caused catastrophes. Short of preventing an asteroid or comet collision with Earth, this is among the most important work that is executable today. Please keep that in mind while reading.

In our efforts to assist organizations we perform an informal review of our process, with the goal of improving the experience for all concerned. In cases where we invest a considerable amount of time, energy, and money, the review is more formal and extensive, including SWOT analysis, security checks and reviews, and deep scenario plans that can become extremely detailed, down to the molecular level.

We are still a small company and I am the founder who performed the bulk of the R&D, so by necessity I'm still involved in each case. Our current process has been refined over the last decade in many dozens of engagements with senior teams on strategic issues. In doing so, we see patterns develop over time; I learn from them and share them with senior executives when their behavior is causing them problems. This is still relatively new territory as we carefully craft the AI-assisted economy.

I sent another such lessons-learned note to the CEO of a public company just this morning. Usually this is done confidentially, shared with very few and never revealed otherwise, but I wanted to describe a general pattern that is negatively impacting organizations, in part due to the compounding effect it has on the broader economy. Essentially, it can be reduced to misapplying the company's playbook when dealing with advanced technology.

The playbook in this instance, for lack of a better word, can be described as 'industry tech', as in fintech or insurtech. While new to some in banking and insurance, this basic model has been applied for decades with limited, mixed results. The current version has swapped the name 'incubators' for 'accelerators', innovation shops now take the place of R&D and/or product development, and corporate VC is still good old corporate VC. Generally speaking, this playbook can be a good thing in industries like banking and insurance, where the bulk of R&D came from a small group of companies that delivered very little innovation over decades, or worse, as we saw in the financial crisis, delivered toxic innovation. Eventually the lack of innovation can cause macroeconomic problems and place an entire industry at risk.

A highly refined AI system like ours is considered by many today to be among the most important and valuable of any type, hence the fear, however unjustified in our case. Anyone expecting to receive our ideas through open innovation on these issues is being irresponsible and dangerous to themselves and others, including your company. That is the same as bank employees handing out money for free, or insurance claims adjusters knowingly paying fraudulent claims at scale. Don't treat the issue of intellectual capital and property lightly, including trade secrets, or it will damage your organization badly.

The most common mistake I see in practice is relying solely on ‘the innovation playbook’. CEOs especially should always be open to opportunity and on the lookout for threats, particularly any that have the capacity to make or save the company. Most of the critical issues companies face will not come from or fit within the innovation playbook. Accelerators, internal innovation shops and corporate VC arms are good tools when used appropriately, but if you rely solely on them you will almost certainly fail. None of the most successful CEOs that come to mind rely only on fixed playbooks.

These are a few of the more common specific suggestions I've made to CEOs of leading companies dealing with AI systems, based in part on working with several of the most successful tech companies at a similar stage, and in part on engaging with hundreds of other organizations from our perspective. As you can see, I've learned to be a bit more blunt.

  1. Don’t attempt to force a system like Kyield into your innovation playbook. Attempting to do so will only increase risk for your company and do nothing at all for mine but waste time and money. Google succeeded in doing so with DeepMind, but it came at a high price and they needed to be flexible. Very few will be able to engage similarly, which is one reason why the Kyield OS is still available to customer organizations.

  2. With very few exceptions, primarily industry-specific R&D, we are far more experienced in AI systems than your team or consultants. With regard to the specific issues, functionality, and use cases within Kyield that we offer, no one I am aware of has more experience and intel. We simply haven't shared it. Many are experts in specific algorithms, but not in distributed AI systems, which is what is needed.

  3. A few companies have now wasted billions of USD attempting to reinvent the wheel that we will provide at a small fraction of that cost. A small number of CEOs have lost their jobs due to events costing tens of billions that I have good reason to believe our HumCat could have prevented. It therefore makes no sense at all not to adopt it.

  4. The process must be treated with the respect and priority it deserves. Take it seriously. Lead it personally. Any such effort requires a team but can’t be entirely delegated.

  5. Our HumCat program requires training and certification for senior officers, for good reasons. If it weren't necessary, it wouldn't be required.

  6. Don’t fear us, but don’t attempt to steal our work. Some of the best in the world have tried and failed. It’s not a good idea to try with us or any of our peers.

  7. Resist the temptation to customize universal issues that have already been refined. We call this the open checkbook to AI-system hell. It has ended several CEO careers already, and it's still early.

  8. Since these are the most important issues facing any organization, it’s really wise to let us help. Despite the broad characterization, not all of us are trying to kill your organization or put your people out of work. Quite the contrary in our case or we would have gone down a different path long ago.

  9. We are among the best allies to have in this war.

  10. It is war.

Mark Montgomery is the founder and CEO of Kyield, originator of the theorem ‘yield management of knowledge’, and inventor of the patented AI system that serves as the foundation for Kyield: ‘Modular System for Optimizing Knowledge Yield in the Digital Workplace’. He can be reached at markm@kyield.com.


A Million in Prevention can be Worth Billions of Cure with Distributed AI Systems


Deepwater Horizon rig, April 2010 (NOAA News)

Every year, natural catastrophes (nat cat) are highly visible events that cause major damage across the world. In 2016 the cost of nat cats was estimated at $175 billion, $50 billion of which was covered by insurance, reflecting severe financial losses for the impacted areas.[i]  The total cost of natural catastrophes since 2000 was approximately $2.3 trillion.[ii]

Much less understood is that human-caused catastrophes (hum cat) have resulted in much greater economic damage during the same period and have become increasingly preventable. Since 2000 the world has experienced two preventable hum cat events of $10 trillion or more: the 9/11 terrorist attacks and the global financial crisis. In addition, although the Tōhoku earthquake in Japan was unavoidable, the Fukushima Daiichi nuclear disaster was also preventable, now estimated at $188 billion excluding widespread human suffering and environmental damage.[iii]

One commonality in these and other disasters is that experts issued advance, accurate, evidence-based warnings, only to be ignored. The most famous and costly example is the Phoenix memo, issued on July 10, 2001 by Special Agent Kenneth J. Williams.[iv]  The FBI memo was described as "chilling" by the first journalist who reviewed it, due to its specificity in describing terrorist-linked individuals who were training to fly commercial aircraft.

Williams was a seasoned terrorism expert who followed the prescribed use of the FBI's rules-based system, yet during the two-month period prior to the 9/11 attacks no relevant action was taken. Had the lead been pursued, the terrorist attacks and ensuing events would very likely have been avoided, including two wars with massive casualties and continuing hostilities.

Government agencies have invested heavily in prevention since that fateful day of September 11, 2001 so hopefully similar events will be prevented.

In corporate catastrophes, however, prevention scenarios are usually more complex than the Phoenix memo case. They are also occurring with increasing frequency and expanding in scale and cost.

The remainder of this article can be viewed at Cognitive World.

View a brief video by Mark related to this article and Kyield's new HumCat product.
