An open letter to Fortune 500 CEOs on AI systems from Kyield’s founder



Since we offer an organizational and network operating system, technically defined as a modular artificial intelligence system, we usually deal with the most important strategic and operational issues facing organizations. This is most obvious in our new HumCat offering, which provides advanced technology for the prevention of human-caused catastrophes. Short of preventing an asteroid or comet collision with Earth, this is among the most important work that is executable today. Please keep that in mind while reading.

In our efforts to assist organizations, we perform an informal review of our process with the goal of improving the experience for all concerned. In cases where we invest a considerable amount of time, energy, and money, the review is more formal and extensive, including SWOT analysis, security checks and reviews, and deep scenario plans that can become extremely detailed, down to the molecular level.

We are still a small company and I am the founder who performed the bulk of the R&D, so by necessity I'm still involved in each case. Our current process has been refined over the last decade in many dozens of engagements with senior teams on strategic issues. In doing so, we see patterns develop over time; we learn from them, and I share them with senior executives when their behavior causes them problems. This is still relatively new territory while we carefully craft the AI-assisted economy.

I sent another such lessons-learned note to the CEO of a public company just this morning. Usually this is done confidentially, shared with very few and never revealed otherwise, but I wanted to share a general pattern that is negatively impacting organizations, in part due to the compounding effect it has on the broader economy. Essentially, it can be reduced to misapplying the company's playbook when dealing with advanced technology.

The playbook in this instance, for lack of a better word, can be described as 'industry tech', as in fintech or insurtech. While new to some in banking and insurance, this basic model has been applied for decades with limited, mixed results. The current version has swapped the name 'incubators' for 'accelerators', innovation shops now take the place of R&D and/or product development, and corporate VC is still good old corporate VC. Generally speaking, this playbook can be a good thing for companies in industries like banking and insurance, where the bulk of R&D came from a small group of companies that delivered very little innovation over decades, or worse, as we saw in the financial crisis, toxic innovation. Eventually the lack of innovation can cause macroeconomic problems and place an entire industry at risk.

A highly refined AI system like ours is considered by many today to be among the most important and valuable of any type, hence the fear, however unjustified in our case. Anyone expecting to receive our ideas through open innovation on these issues is being irresponsible and dangerous to themselves and others, including your company. That is the same as bank employees handing out money for free or insurance claims adjusters knowingly paying fraudulent claims at scale. Don't treat the issue of intellectual capital and property lightly, including trade secrets, or it will damage your organization badly.

The most common mistake I see in practice is relying solely on ‘the innovation playbook’. CEOs especially should always be open to opportunity and on the lookout for threats, particularly any that have the capacity to make or save the company. Most of the critical issues companies face will not come from or fit within the innovation playbook. Accelerators, internal innovation shops and corporate VC arms are good tools when used appropriately, but if you rely solely on them you will almost certainly fail. None of the most successful CEOs that come to mind rely only on fixed playbooks.

These are a few of the more common specific suggestions I've made to CEOs of leading companies in dealing with AI systems, based in part on working with several of the most successful tech companies at a similar stage, and in part on engaging with hundreds of other organizations from our perspective. As you can see, I've learned to be a bit more blunt.

  1. Don’t attempt to force a system like Kyield into your innovation playbook. Attempting to do so will only increase risk for your company and do nothing at all for mine but waste time and money. Google succeeded in doing so with DeepMind, but it came at a high price and they needed to be flexible. Very few will be able to engage similarly, which is one reason why the Kyield OS is still available to customer organizations.

  2. With very few exceptions, primarily industry-specific R&D, we are far more experienced in AI systems than your team or consultants. With regard to the specific issues, functionality, and use cases within Kyield that we offer, no one I am aware of has more experience and intel. We simply haven't shared it. Many are expert in specific algorithmics, but not in distributed AI systems, which is what is needed.

  3. A few companies have now wasted billions of USD attempting to replicate the wheel that we will provide at a small fraction of that cost. A small number of CEOs have lost their jobs due to events costing tens of billions that I have good reason to believe our HumCat could have prevented. It therefore makes no sense at all not to adopt it.

  4. The process must be treated with the respect and priority it deserves. Take it seriously. Lead it personally. Any such effort requires a team but can’t be entirely delegated.

  5. Our HumCat program requires training and certification for senior officers, for good reasons. If it weren't necessary, it wouldn't be required.

  6. Don’t fear us, but don’t attempt to steal our work. Some of the best in the world have tried and failed. It’s not a good idea to try with us or any of our peers.

  7. Resist the temptation to customize universal issues that have already been refined. We call this the open checkbook to AI system hell. It has ended several CEO careers already, and it's still early.

  8. Since these are the most important issues facing any organization, it’s really wise to let us help. Despite the broad characterization, not all of us are trying to kill your organization or put your people out of work. Quite the contrary in our case or we would have gone down a different path long ago.

  9. We are among the best allies to have in this war.

  10. It is war.

Mark Montgomery is the founder and CEO of Kyield, originator of the theorem ‘yield management of knowledge’, and inventor of the patented AI system that serves as the foundation for Kyield: ‘Modular System for Optimizing Knowledge Yield in the Digital Workplace’. He can be reached at markm@kyield.com.


Priority Considerations When Investing in Artificial Intelligence



After several decades of severe volatility in the climate across the fields involved with artificial intelligence (AI), we've finally passed the tipping point towards sustainability, which may also represent the true beginnings of a sustainable planet and humanity.

Recent investment in AI is primarily due to the formation of viable components in applied R&D that came together through a combination of purposeful engineering and serendipity, resulting in a wide variety of revolutionary functionality.  However, since investment spikes also typically reflect reactionary herding, asset allocation mandates, monetary policy, and opaque strategic interests among other factors, caution is warranted.

The following considerations are offered as observations from my perch as an architect and founder who has been dealing with many dozens of management teams over the last few years. The order of priority will not be the same for each organization, though in practice it is usually similar within industries.

Risk Management and Crisis Prevention

The nature of AI when combined with computer networking and interconnected emerging technology such as cryptography, 3D printing, biotech and nanotech represents perhaps the most significant risk and opportunity in history.

While the global warnings on AI are premature, often inaccurate, and appear to be a battle for control, catastrophic risk for individual companies is considerable.  For most organizations the risk should be manageable, though not with traditional strategies and tactics.  That is to say that AI within the overall environment requires aggressive behavioral change outside comfort zones.

Recent examples of multi-billion dollar investments in AI include Google, IBM, and Toyota, though multi-million USD investments now number in the thousands if we include internal investments and venturing. To be sure, much of this investment is reactionary and wasteful, but given the nature of the technology, only a small fraction of the functionality needs to prove successful, which can be decisive in some markets.

To appreciate the sea change: functions commonly employed today were deemed futuristic and decades away just three or four years ago. So it's not surprising that a majority of the senior management teams we've engaged in the last two years confirm that AI is among their highest priorities, though I must say some are still moving too slowly. We've observed a wide range of actions, from window dressing for Wall Street, to the confusing, to the brilliant.

The highest return on investment possible is prevention of catastrophic events, whether an industrial accident, lone-wolf bad actors, systemic fraud, or disruption leading to displacement or irrelevance. Preventable losses in the tens of billions in single organizations have become common. Smaller events that require a similar core design to prevent or mitigate are the norm rather than the exception, but are often nonetheless career ending in hindsight, and can be fatal to all but the most capitalized companies. We've experienced several multi-billion dollar events with former management teams that likely could have been prevented had they moved more quickly, including, unfortunately, loss of lives, which is what gets me up at 3 a.m.

Talent War

An exponential surge in training is underway in machine learning (ML) along with substantial funding in tools, so we can expect the cost of more common technical skills will begin to subside, while other challenges will escalate.

“In their struggle against the powers of the world around them their first weapon was magic, the earliest fore-runner of the technology of to-day. Their reliance on magic was, as we suppose, derived from their overvaluation of their own intellectual operations, from their belief in the ‘omnipotence of thoughts’, which, incidentally, we come upon again in our obsessional neurotic patients.” — Sigmund Freud, 1932.

The challenge is that the magic of the previous century has evolved and matured by necessity through the efforts of many well-meaning scientists, but some of the magicians on stage still suffer from neurosis. Technology is evolving much faster than humans or organizations.

Examples of talent issues commonly found in our communications:

  • Due to long AI winters followed by the recent tipping point in viability, the number of individuals with extensive experience is very small, and most are at a few tech companies attempting to displace other industries.

  • Industry-specific expertise beyond search and robotics is rare and very specialized, with little understanding of enterprise-wide potential.

  • An exceptional level of caution is warranted on conflicts in AI counsel due to competency and pre-existing alliances.

  • Despite efforts to exploit emerging opportunity, the ability to think strategically about AI systems appears to be almost non-existent.

  • CTOs may win some key battles in tactical applications, but CEOs must win the wars with organizational AI systems.

The talent war for the top tier in AI is so severe, with such serious implications, that hundreds of millions of USD have been invested to secure key individuals. Of course very few organizations can compete in talent auctions, which is one reason why the Kyield OS is so important. We automate many AI functions that will be common in organizations and their networks for the foreseeable future, while also making deep-dive custom algorithmics simpler and more relevant.

Historic Opportunity

Not only is AI a classic case of 'offense is the best defense'; when designed and executed well to enhance knowledge workers and customers, the embedded intelligence with prescriptive analytics can accelerate discovery, uncover previously unknown opportunities, and provide historically rare potential for new businesses, spin-outs, joint ventures, and other types of partnering. Managed well, this is precisely what many companies and national economies need.

Architectural Design

Impacting every part of distributed organizations, the importance of architecture cannot be overstated, as it will influence and in many instances determine outcomes in the distributed network environment. AI is a continuous process, not a one-off project, so it requires a pivot in thinking from the two decades of fast-fail, lean innovation that our lab helped pioneer. Key considerations in architecture we incorporated in the Kyield OS include, but are not limited to, the following (a brief illustrative sketch follows the list):

  • Optimizing the Internet of Entities

  • Governance, compliance, and data quality

  • Accelerated discovery and innovation

  • Continuous improvement

  • Real-time adaptability

  • Interoperability

  • Business modeling

  • Relationship management

  • Smart contracts

  • Security and privacy

  • Transactions

  • Digital currency

  • Ownership and control of data

  • Audits and reporting
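To make these considerations slightly more concrete, here is a minimal, hypothetical sketch in Python of how governance, security, and provenance parameters might travel with the data for each entity in a distributed network. It is not the Kyield OS schema, which is not public; every name and field below is an illustrative assumption.

```python
# Hypothetical illustration only -- not the Kyield OS design.
# Shows one way governance, security, and provenance parameters
# can travel with the data for each entity in a distributed network.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List

@dataclass
class GovernancePolicy:
    data_owner: str                      # entity that retains ownership and control
    allowed_uses: List[str]              # e.g. ["analytics", "audit"]
    retention_days: int                  # compliance-driven retention window
    audit_required: bool = True          # feeds audits and reporting

@dataclass
class EntityRecord:
    entity_id: str                       # person, team, org, or device
    payload: Dict[str, float]            # the actual business or sensor data
    policy: GovernancePolicy             # governance travels with the data
    security_label: str = "internal"     # e.g. "public", "internal", "restricted"
    provenance: List[str] = field(default_factory=list)  # prior processing steps
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def authorize(record: EntityRecord, requested_use: str) -> bool:
    """Gate every downstream use against the record's embedded policy."""
    return requested_use in record.policy.allowed_uses

# Example: a partner analytics service must pass the policy gate first.
record = EntityRecord(
    entity_id="plant-7/sensor-42",
    payload={"vibration_rms": 0.91},
    policy=GovernancePolicy(data_owner="AcmeCo", allowed_uses=["analytics"], retention_days=365),
)
assert authorize(record, "analytics") and not authorize(record, "resale")
```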

Productivity Improvement

A priority outcome for most organizations in competitive environments, productivity improvement is increasingly derived from optimizing embedded intelligence, which is also desperately needed to improve the global macroeconomic situation. A large gap remains in most AI strategies with respect to enterprise-wide productivity, which represents the foundation of recurring value to organizations and society, regardless of the specific task of each knowledge worker and organization.

While cultural challenges and defensive efforts are common obstacles to any productivity improvement, strong leadership has proven able to triumph. Internal and external consultants and advisors can help, particularly given the steep learning curve in AI; just be cautious about unhealthy relationships that may have interests directly opposed to the client organization, as conflicts are pervasive and tactics are sophisticated.

Trust

Just when we thought trust couldn't become more important, it seems to dominate life on Earth. We've come across quite a few trust-related issues in our AI voyage. A few examples that come to mind:

Intellectual property:  Trust is a two-way street, particularly when it comes to intellectual assets, so upfront mutual protection is a necessary evil and serves as the first formal step in establishing a trustworthy relationship, without which the other party must presume the worst of intentions.  Once the Kyield OS is installed with partners this problem is effectively eliminated with smart contracts and digital currency based on internal dynamics and verified intelligence (aka evidence).

Fear of displacement:  Since AI is new for most, suffice to say that fear is omnipresent and must be dealt with in a transparent and intelligent manner. At the knowledge worker level we overcome the problem with transparency, which makes it obvious that the Kyield OS is likely their strongest ally.

Modeling:  While motivation to change is often needed from external sources such as regulatory or competition, it’s probably not a good idea to trust a company that has the capability, desire, culture and incentive to displace customers.  Another problem to avoid at the confluence of networked computing and AI is lock-in from technology or talent, including service models.  Beware the overfunded offering that attempts to buy adoption and/or over-reliance on marketing hype.

Authenticity:  Apart from the serious structural economic problems caused by copying or theft of intellectual work, consider the trustworthiness of those who would do so and how much know-how is withheld because of this problem.  Authenticity is especially important in this field due to the length of time required to understand the breadth and depth of implications across the organization and network economy.

Conclusion

Given the strategic implications to organizations, AI should be a top priority led by senior management.  However, since supply chains face similar challenges with AI, traditional methods and channels to technology adoption may not necessarily serve organizations well, and in some cases may be high risk.  Whether for strategic intent, financial return, operational necessity or any combination thereof, investing well in AI is not a trivial undertaking. Integrity, experience, knowledge and freedom from conflicts are therefore critical in choosing partners and investments.

About the author

Mark Montgomery is the founder and CEO of Kyield, which is based on two decades of self-funded R&D.  The Kyield OS is designed around his patented AI system, which tailors data to each entity in the digital network environment.

Plausible Scenarios For Artificial Intelligence in Preventing Catastrophes




 

Given consistent empirical evidence that raising red flags is more likely to cause unintended consequences than to result in essential actors heeding warnings, those endowed with a public platform carry a special responsibility, as do system architects. Nowhere is this truer than at the confluence of advanced technology and existential risk to humanity.

As one who has invested a significant portion of my life studying crises and designing methods of prevention, including for many years the use of artificial intelligence (AI), I feel compelled to offer the following definitions of AI along with a few of what I consider to be plausible scenarios for how AI could prevent or mitigate catastrophes, as well as brief thoughts on how we can reduce the downside risk to levels similar to other technology systems.

An inherent temptation, with considerable incentives, exists in the transdisciplinary field of AI for the intellectually curious to expand complexity infinitely, generally leaving the task of pragmatic simplicity to applied R&D. That is the perspective from which the following definitions and scenarios are offered, primarily for consideration and use in this article.

A Definition of Artificial Intelligence: Any combination of technologies and methodologies that result in learning and problem solving ability independent of naturally occurring intelligence.

A Definition of Beneficial AI: AI that has been programmed to prevent intentional harm and to mitigate unintended consequences within a strictly controlled governance system, which includes redundant safety functions and continuous human oversight (to include a kill switch in high-risk programs). A small illustrative sketch follows the definitions. [1]

A Definition of Augmentation: For the purposes of this article, augmentation is defined not as implants or direct connection to biological neural systems, which is an important, rapidly emerging sub-field of AI in biotech, but rather simply as enhancing the quality of human work products and economic productivity with the assistance of AI. [2]
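As a purely illustrative sketch of the governance elements named in the Beneficial AI definition above (redundant safety checks plus continuous human oversight with a kill switch), the short Python example below gates an agent action behind those controls. It is a toy under stated assumptions, not a real safety implementation, and every name in it is hypothetical.

```python
# Illustrative sketch only: a governance gate with redundant checks and a kill switch.
from typing import Callable, List

class KillSwitch:
    """Human-controlled stop for high-risk programs."""
    def __init__(self) -> None:
        self.engaged = False
    def engage(self) -> None:
        self.engaged = True

def run_governed_action(
    action: Callable[[], str],
    checks: List[Callable[[], bool]],
    kill_switch: KillSwitch,
) -> str:
    # Continuous human oversight: the kill switch is consulted before any action.
    if kill_switch.engaged:
        return "halted: kill switch engaged"
    # Redundant safety functions: every independent check must pass.
    if not all(check() for check in checks):
        return "blocked: governance check failed"
    return action()

# Example usage with two independent (hypothetical) checks.
switch = KillSwitch()
print(run_governed_action(
    action=lambda: "action executed",
    checks=[lambda: True, lambda: True],   # e.g. policy check + anomaly check
    kill_switch=switch,
))                                          # "action executed"
switch.engage()
print(run_governed_action(lambda: "action executed", [lambda: True], switch))
# "halted: kill switch engaged"
```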

To aid in this exercise, I have selected quotes from the highly regarded book Global Catastrophic Risks (GCR), which consists of 22 chapters by 25 authors in a worthy attempt to provide a comprehensive view. [3]  I have also adopted the GCR definition of ‘global catastrophic’: 10 million fatalities or 10 trillion dollars. The following reminders from GCR seem appropriate in a period of intense interest in AI and existential risk:

“Even for an honest, truth-seeking, and well-intentioned investigator it is difficult to think and act rationally in regard to global catastrophic risks and existential risks.”

“Some experts might be tempted by the media attention they could get by playing up the risks. The issue of how much and in which circumstances to trust risk estimates by experts is an important one.”

Super Volcanoes

A very high probability event that should consume greater attention, super volcano eruptions occur about every 50,000 years. GCR highlights the Toba eruption in Indonesia approximately 75,000 years ago, which may be the closest humanity has come to extinction. The primary risk from super eruptions is airborne particulates that can cause a rapid decline in global temperatures, estimated to have been 5–15 C after the Toba event. [4]

While it appears that a super-eruption contains a low level of existential risk to the human race, a catastrophic event is almost certain and would likely exceed all previous disasters, followed by massive loss of life and economic collapse, reduced only by the level of preventative measures taken in advance. While preparation and alert systems have much improved since my co-workers and I observed the eruption of Mt. St. Helens from Paradise on Mt. Rainier the morning of May 18th, 1980, it is quite clear that the world is simply not prepared for one of the larger super-eruptions that will undoubtedly occur. [5] [6]

MM on Mt. Rainier, 5/18/1980

The economic contagion observed in recent events such as the 9/11 terrorist attacks, the global financial crisis, the Tōhoku earthquake and tsunami, the current Syrian Civil War, and the ongoing Ebola epidemic in West Africa serves as a reasonable real-world benchmark from which to make projections for much larger events, such as a super-eruption or asteroid. It is not alarmist to state that we are woefully unprepared and need to employ AI to increase the probability of preserving our species and others. The creation and colonization of highly adaptive outposts in space, designed with the intent of sustainability, would therefore seem prudent policy, best served with the assistance of AI.

The role of AI in mitigating the risks associated with super volcanoes includes accelerated research, more accurate forecasting, increased modeling for preparation and mitigation of damage, accelerated discovery of systems for surviving super-eruptions, and assistance in recovery. These tasks require ultra-high-scale data analytics to deal with complexities far beyond the ability of humans to process within circumstantially dictated time windows, and they are too critical to be dependent upon a few individuals, thus requiring highly adaptive AI across distributed networks involving large numbers of interconnected deep-earth and deep-sea sensors, linking individuals, teams, and organizations. [7]  Machine learning has the ability to accelerate all critical functions, including identifying and analyzing preventative measures with algorithms designed to discover and model scenario consequences; a minimal illustrative sketch follows.
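The sketch below is one hedged illustration of the kind of machine-assisted monitoring referred to here: a simple rolling-statistics anomaly detector over simulated sensor readings that flags candidate precursor signals for human volcanologists. The data, window, and threshold are invented for the example; real forecasting pipelines are far more sophisticated.

```python
# Toy illustration: flag anomalous readings from a (simulated) deep-sea pressure sensor.
# Real precursor forecasting is far more complex; this only sketches the pipeline shape.
import random
from statistics import mean, stdev

def rolling_zscore_alerts(readings, window=50, threshold=4.0):
    """Return indices where a reading deviates strongly from its recent baseline."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Simulated quiet background with an injected pressure excursion near the end.
random.seed(7)
stream = [100.0 + random.gauss(0, 0.5) for _ in range(500)]
stream[480:] = [v + 8.0 for v in stream[480:]]   # hypothetical precursor signal
print(rolling_zscore_alerts(stream)[:5])         # indices around 480 are flagged
```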

Asteroids are similar to super volcanoes from an AI perspective, both requiring effective discovery and preparation, which, while ongoing and improving, could benefit from AI augmentation. Any future preventive endeavors may require cognitive robotics working in environments extremely hostile to humans, where real-time communications could be restricted. Warnings may be followed swiftly by catastrophic events, hence preparation is critical, and if viewed as optional, we do so at our own peril.

Pandemics and Healthcare Economics

Among the greatest impending known risks to humans are biological contagions, due to large populations in urban environments that have become reliant on direct public interaction, regional travel by public transportation, and continuous international air travel. Antibiotic-resistant bacteria and drug-resistant viruses, in combination with large mobile populations, pose a serious short-term threat to humanity and the global economy, particularly from an emergent mutation resistant to pre-existing immunity or drugs.

“For example, if a new strain of avian flu were to spread globally through the air travel network that connects the world’s major cities, 3 billion people could potentially be exposed to the virus within a short span of time.” – Global Risk Economic Report 2014, World Economic Forum (WEF). [8]

Either general AI or knowledge work augmented with AI could be a decisive factor in preventing and mitigating a future global pandemic, which has the capacity to dwarf even a nuclear war in terms of short-term casualties. Estimates vary depending on the strain, but a severe pandemic could claim 20-30% of the global population, compared to 8-14% in a nuclear war, though the aftermath of either could be equally horrific.

Productivity in life science research is poor due to inherent physical risks for patients as well as non-physical influences, such as cultural, regulatory, and financial factors, that can increase systemic, corporate, and patient risk. Healthcare cost is among the leading global risks due to cost transference to government; the WEF, for example, cites 'Fiscal Crisis in Key Economies' as the leading risk in 2014. Other than poor governance that allows otherwise preventable crises to occur, healthcare costs are arguably the world's most severe curable ongoing crisis, impacting all else, including pandemics.

Priority uses of AI for pandemics and healthcare (a minimal modeling sketch follows the list): [9]

  • Accelerated R&D throughout life science ecosystem, closing patient/lab gap.

  • Advanced modeling scenarios for optimizing prevention, care, and costs.

  • Regulatory process including machine learning and predictive modeling.

  • Precision healthcare tailored to each person; more direct and automated. [10]

  • Predictive analytics for patient care, disease prevention, and economics.

  • Cognitive computing for diagnostics, patient care, and economics.

  • Augmentation for patients and knowledge workers worldwide.
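As referenced above, the following is a minimal, textbook-style SIR (susceptible-infected-recovered) scenario model with invented parameters. It is offered only as an illustration of the modeling and predictive-analytics items in the list, not as an epidemiological forecast and not as anything specific to Kyield.

```python
# Minimal SIR scenario model with invented parameters; illustrative only, not a forecast.
def sir_simulation(population, initial_infected, beta, gamma, days):
    """Discrete-time SIR model; beta = transmission rate, gamma = recovery rate."""
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    peak_infected = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak_infected = max(peak_infected, i)
    return round(peak_infected), round(r)

# Hypothetical scenario comparison: an intervention that lowers the transmission rate.
peak_base, total_base = sir_simulation(1_000_000, 10, beta=0.5, gamma=0.2, days=365)
peak_mit, total_mit = sir_simulation(1_000_000, 10, beta=0.3, gamma=0.2, days=365)
print(f"no intervention:   peak {peak_base:,} infected, {total_base:,} total recovered")
print(f"with intervention: peak {peak_mit:,} infected, {total_mit:,} total recovered")
```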

“By contrast to pandemics, artificial intelligence is not an ongoing or imminent global catastrophic risk. However, from a long-term perspective, the development of general artificial intelligence exceeding that of the human brain can be seen as one of the main challenges to the future of humanity (arguably, even the main challenge). At the same time, the successful deployment of superintelligence could obviate many of the other risks facing humanity.” — Eliezer Yudkowsky in GCR chapter ‘Artificial Intelligence as a Positive and Negative Factor in Global Risk’.

Climate Change

Crises often manifest over time as a series of compounding events that either could not be predicted, were predicted inaccurately, or were predicted but not acted upon. I've selected an intentionally contrarian case with climate change to provide a glimpse of potentially catastrophic unintended consequences. [11]

“Anthropogenic climate change has become the poster child of global threats. Global warming commandeers a disproportionate fraction of the attention given to global risks.” – GCR

If earth cooling from a human response driven by populism, conflicted science, or any other reason resulted in an over-correction, or if a response combined with volcanoes, asteroids, or a slight variation in the cyclic changes of Earth's orbit to alter ocean and atmospheric patterns, it becomes plausible to forecast a scenario of an accelerated ice age in which ice sheets rapidly reach the equator, similar to what may have occurred during the Cryogenian period. Given our limited understanding of triggering mechanisms, including those involved in sudden temperature changes, which are believed to have occurred 24 times over the past 100,000 years, it is also plausible that a natural and/or human response could result in rapid warming. While either scenario of climate extreme may seem ludicrous given current sentiment, the same was true prior to most events. Whether natural, human caused, or some combination, as observed with the Fukushima Daiichi nuclear disaster, the pattern of seemingly small events compounding into larger events is more the norm than the exception, with outcomes opposed to intent not unusual. While reduction of pollutants is prudent for health, climate, and economics, chemically engineered solutions should be resisted until our ability to manage climate change is proven with rigorous real-world models.

9/11

The tragedy of 9/11 provides a critical lesson in how a seemingly low-risk series of events can spiral to catastrophic scale if not planned for and managed with precision. Those who decided not to share the Phoenix memo at the FBI with appropriate agencies undoubtedly thought it was a safe decision given the information available at the time. Who among us would have acted differently if we had to make the decision based on the following questions in the summer of 2001? (A small illustrative calculation follows the list.)

  • What are the probabilities of an airline being used as a weapon?
  • Even if such an attempt were made, what is the probability of it succeeding?
  • In the unlikely event such an attempt did occur, what is the probability that one of the towers at the World Trade Center would be targeted and struck?
  • In the very unlikely event one WTC tower were struck, what is the probability that the other tower would be struck similarly before evacuation could take place?
  • In the extremely unlikely scenario that terrorists attacked the second tower with a hijacked commercial airliner within 20 minutes of the first attack, what would be the probability of one of the towers collapsing within an hour?
  • If then this horrific impossible scenario were actually to have occurred, what then would be the possibility of the second tower collapsing a half hour later?
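To illustrate why this chain of questions felt safe to dismiss at the time, the small calculation below multiplies purely hypothetical conditional probabilities (invented for the example, not estimates of the actual events) to show how individually unlikely steps compound into a joint probability that appears negligible, even when the chain is the one that actually occurs.

```python
# Hypothetical, illustrative probabilities only -- not estimates of the actual events.
from math import prod

conditional_steps = {
    "airliner used as a weapon": 0.02,
    "attempt succeeds": 0.3,
    "a WTC tower is the target struck": 0.1,
    "second tower struck before evacuation": 0.2,
    "first tower collapses within an hour": 0.1,
    "second tower also collapses": 0.3,
}

joint = prod(conditional_steps.values())
print(f"joint probability of the full chain: {joint:.2e}")   # ~3.6e-06
# Each step can look dismissible in isolation, yet the chain that occurred
# is exactly this one, which is why precise prevention planning matters.
```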

Prior to 9/11, such a scenario would have seemed an extremely low probability to anyone but an expert able to verify and process all of the information, yet of course the questions describe the actual event, tragically providing one of the most painful and costly lessons in human life and treasure in the history of the U.S., with wide-ranging global impact.

  • Direct costs from the attacks: 2,999 lives lost, with an additional 1,400 rescue workers having died since, some portion of which were directly caused by the attacks. Financial costs from 9/11 are estimated to have been approximately $2 trillion to the U.S. alone. [12]

  • Iraq and Afghanistan wars: Approximately 36,000 people from the US and allied nations died in Iraq and Afghanistan, with nearly 900,000 disability claims approved by the VA alone. [13]  The human cost in the two nations was of course much higher, including large numbers of civilians. According to one report, the Iraq War has cost $1.7 trillion, with an additional $490 billion in benefits owed to veterans, which could grow to more than $6 trillion over the next four decades counting interest. [14]  The Afghanistan war is estimated to have cost $4-6 trillion.

  • Global Financial Crisis: Many experts believe monetary easing post 9/11 combined with lax regulatory oversight expanded the unsustainable housing bubble and triggered the financial collapse in 2008, which revealed systemic risk accumulating over decades from poor regulatory oversight, governance, and policy. [15]  The total cost to date from the financial crisis is well over $10 trillion and continues to rise from a series of events reverberating globally, caused by realized moral hazard, broad global reactions, and significant continuing aftershocks from attempts to recover.

The reporting process for preventing terrorist attacks has been reformed and improved, though it is undoubtedly still challenged by the inherently conflicted dynamics between interests, the exponential growth of data, and the limited ability of individuals to process it. If the FBI decision makers had been empowered with the knowledge of the author of the Phoenix memo (special agent Kenneth Williams), and been free of concerns over interagency relations, politics, and other influences, 9/11 would presumably have been prevented or mitigated.

An Artificially Intelligent Special Agent

What if the process weren't dependent upon conflicted humans? An artificial agent can be programmed to be free of such influences and can access all relevant information on a topic from available sources, far beyond the ability of humans to process. When empowered with machine learning and predictive analytics, redundant precautions, and integration with the augmented workflow of human experts, the probability of preventing or mitigating human-caused events is high, whether in preventing terrorist plots, managing high-cost projects with deep-sea oil rigs, trading exotic financial products, designing nuclear power plants, responding to climate change, or any other such high-risk endeavor. A heavily simplified sketch of the routing idea follows.
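Below is a heavily simplified, hypothetical sketch of that routing idea using standard scikit-learn components: incoming reports are scored by a text classifier trained on past labeled cases, and anything above a threshold is escalated to human analysts under a single consistent rule. The training data, report text, and threshold are all invented; this is not the Kyield system or an operational design.

```python
# Hypothetical illustration of an "artificial special agent" routing step.
# Uses standard scikit-learn; training data here is tiny and invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_reports = [
    "routine equipment inspection completed without findings",
    "minor schedule slip on facility maintenance",
    "multiple individuals seeking flight training with no interest in landing procedures",
    "pattern of structured cash transfers just below reporting thresholds",
]
labels = [0, 0, 1, 1]   # 1 = previously confirmed high-risk pattern (invented labels)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(past_reports, labels)

new_report = "field office notes unusual interest in aircraft simulators among visa overstays"
risk_score = classifier.predict_proba([new_report])[0][1]

# The agent applies the same escalation rule to every report, independent of
# interagency politics; humans still make the final call on escalated items.
ESCALATION_THRESHOLD = 0.5   # invented threshold for the illustration
if risk_score >= ESCALATION_THRESHOLD:
    print(f"escalate to human analysts (score={risk_score:.2f})")
else:
    print(f"file for periodic review (score={risk_score:.2f})")
```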

Governance of Human Engineered Systems

Long considered one of the primary technical risks, as well as key to overcoming many of our most serious challenges, the ability to program matter at the atomic scale is beginning to emerge. [16]  Synthetic biology is another area of modern science that poses great opportunity for improving life while introducing new risks that must be governed wisely with appropriate tools. [17]

While important private research cannot be revealed, published work is underway beyond the typical reporting linked to funding announcements and ad budgets. One example is the Synthetic Evolving Life Form (SELF) project at Raytheon led by Dr. Jim Crowder, which mimics the human nervous system within an Artificial Cognitive Neural Framework (ACNF). [18]  In their book Artificial Cognitive Architectures, Dr. Crowder and his colleagues describe an unplanned Internet discussion between an artificial agent and human researcher that called for embedded governance built into the design.

A fascinating project was recently presented by Joshua M. Epstein, Ph.D. at the Santa Fe Institute on research covered in his book Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science. [19]  In this novel approach to agent-based social behavior, Epstein has intentionally removed memory and pre-conditioned bias from Agent Zero to simulate what often appears to be irrational behavior resulting from social influences, reasoning, and emotions. Agent Zero provides a good foundation for considering additional governance methods for artificial agents interacting in groups, with potential application in AI systems designed to regulate systemic risk in networked environments.

Closing Thoughts

Given the rapidly developing AI-assisted world combined with recent governance failures in financial regulation, national security, and healthcare, the challenge for senior decision makers and AI systems engineers is to perform at much higher levels than in the past, for if we continue on this trajectory it is conceivable that the human experience could confirm the Fermi Paradox within decades. I therefore view AI in part as a race against time.

While the time-scale calculus for many organizations supports accelerated transformation with the assistance of AI, it is wise to proceed cautiously with awareness that AI is a transdisciplinary field which favors patience and maturity in the architects and engineers, typically requiring several tens of thousands of hours of deep immersion in each of the applicable focus areas. It is also important to realize that the fundamental physics involved with AI favors if not requires the governance structure to be embedded within the engineered data, including business process, rules, operational needs, and security parameters. When conducted in a prudential manner, AI systems engineering is a carefully planned and executed process with multiple built-in redundancies.

Mark Montgomery is founder and CEO of Kyield (http://www.kyield.com), which offers technology and services centered on Montgomery's AI systems invention.

(Robert Neilson, Ph.D., and Garrett Lindemann, Ph.D., contributed to this article)

[1] In the GCR chapter ‘Artificial Intelligence as a Positive and Negative Factor in Global Risk’, Eliezer Yudkowsky prefers ‘Friendly AI’. His complete chapter is available on the web here: https://intelligence.org/files/AIPosNegFactor.pdf

[2] During the course of writing this paper Stanford University announced a 100-year study of AI. The white paper by Eric Horvitz can be viewed here: https://stanford.app.box.com/s/266hrhww2l3gjoy9euar

[3] Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Cirkovic, (Oxford University Press, Sep 29, 2011): http://ukcatalogue.oup.com/product/9780199606504.do

[4] For more common events, see Ultra distant deposits from super eruptions: Examples from Toba, Indonesia and Taupo Volcanic Zone, New Zealand. N.E. Matthews, V.C. Smith, A Costa, A.J. Durant, D. M. Pyle and N.G.J. Pearce.

[5] USGS Fact Sheet for Alert-Notification System for Volcano Activity: http://pubs.usgs.gov/fs/2006/3139/fs2006-3139.pdf

[6] Paradise is about 45 miles to the north/blast side of St. Helens, in full view until the ash cloud enveloped Mt. Rainier with large fiery ash and friction lightning. A PBS video on the St. Helens eruption: http://youtu.be/-H_HZVY1tT4

[7] Researchers at Oregon State University claimed the first successful forecast of an undersea volcano in 2011 with the aid of deep sea pressure sensors: http://www.sciencedaily.com/releases/2011/08/110809132234.htm

[8] Global Risk Economic Report 2014, World Economic Forum: http://www3.weforum.org/docs/WEF_GlobalRisks_Report_2014.pdf

[9] While pandemics and healthcare obviously deserve and receive much independent scrutiny, understanding the relativity of economics is poor. Unlike physics, functional metric tensors do not yet exist in economics.

[10] New paper in journal Science: ‘The human splicing code reveals new insights into the genetic determinants of disease’ http://www.sciencemag.org/content/early/2014/12/17/science.1254806

[11] Skeptical views are helpful in reducing the number and scope of unintended consequences. For a rigorous contrarian view on climate change research see ‘Fat-Tailed Uncertainty in the Economics of Catastrophic Climate Change’, by Martin L. Weitzman: http://scholar.harvard.edu/files/weitzman/files/fattaileduncertaintyeconomics.pdf

[12] Institute for the Analysis of Global Security http://www.iags.org/costof911.html

[13] US and Coalition Casualties in Iraq and Afghanistan by Catherine Lutz, Watson Institute for International Studies: http://costsofwar.org/sites/default/files/articles/8/attachments/USandCoalition.pdf

[14] Daniel Trotta at Reuters, March 14th, 2013 http://www.reuters.com/article/2013/03/14/us-iraq-war-anniversary-idUSBRE92D0PG20130314

[15] In his book The Age of Turbulence: Adventures in a New World, Alan Greenspan states that monetary policy in the years following 9/11 “saved the economy”, while others point to Fed policy during this period as contributing to the global financial crisis beginning in 2008, which continues to reverberate today. Government guarantees, high-risk products in banking, companies ‘too big to fail’, corrupted political process, increased public debt, lack of effective governance, and many other factors also contributed to systemic risk and the magnitude of the crisis.

[16] A team of researchers published research in Nature Chemistry in May of 2013 on using a Monte Carlo simulation to demonstrate engineering at the nanoscale: http://www.nature.com/nchem/journal/v5/n6/full/nchem.1651.html

[17] A recent paper from MIT researchers ‘Tunable Amplifying Buffer Circuit in E. coli’ is an example of emerging transdisciplinary science with mechanical engineering applied to bacteria: http://web.mit.edu/ddv/www/papers/KayzadACS14.pdf

[18] My review of Dr. Crowder’s recent book Artificial Cognitive Architectures can be viewed here: https://kyield.wordpress.com/2013/12/17/book-review-artificial-cognitive-architectures/

[19] Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science, by Joshua M. Epstein: http://press.princeton.edu/titles/10169.html  The seminar slides (require Silverlight plugin): http://media.cph.ohio-state.edu/mediasite/Play/665c495b7515413693f52e7ef9eb4c661d