Why go to the Moon when what your company really needs is in the Rockies? (AI, Watson, Kyield)



Mark, Betsy & Austin on top of NM – 10-2015


This post is in response to an excellent article Tom Davenport wrote for the WSJ (now on LinkedIn) ‘Lessons from the Cognitive Front Lines: Early Adopters of IBM’s Watson’.

Tom is a long-time advocate for increasing jobs related to analytics, particularly in the service sector, and an advisor to Deloitte, which is a strong alliance partner with IBM and a sponsor of WSJ CIO. Like most in our industry we are in constant discussions, but as of now Kyield has no formal alliances or conflicts with any of the people or organizations mentioned in this article.

I too am an advocate for jobs, though not necessarily for IT incumbents, but rather for customers and the broader economy. Smaller companies create most jobs, and most job losses come from incumbent consolidation, which is a credible place from which to start this discussion. I think Tom's article and most of the related strategy are about protecting a few very specific jobs in Armonk, NY, and perhaps at a few alliance partners, not about creating them for customers or the broader economy. Modern job creation is no mystery; it is very well documented.

This article triggered a great many thoughts, so I felt compelled to blog about it very early in the morning from my perch in Santa Fe, NM. You see, I am the founder of a company called Kyield, with an authentic invention based on a theory I developed in our small lab 20 years ago (yield management of knowledge), which looks and sounds increasingly like what Watson has been attempting to become over the last few years. Our company is self-funded almost entirely by me and my wife (well into seven figures), and frankly I'm feeling just a bit over-exploited at the moment, so hang in there as I poke some fun at the expense of our esteemed colleagues on the East Coast and hopefully share something valuable in the process.

Several very important clues and quotes were revealed in this article. I highly recommend reading it carefully, as Tom is one of the most experienced and knowledgeable observers in related overlapping domains. First, let's dissect the lessons the article shares with us, starting with the relationships: only one of the organizations interviewed was an actual customer of Watson, the University of Texas MD Anderson Cancer Center (MDACC); three were described as a "partner/co-developer" or "Watson ecosystem partners"; and the relationship type of the unnamed health insurance company was not disclosed. Those of us enlightened on the complex relationships in enterprise IT will of course immediately wonder what the terms of these relationships are, who is paying for what, and most importantly why.

Some things we can confirm. IBM has disclosed a billion-dollar investment in Watson, often claims to be betting the farm on Watson and/or the cloud, and is obviously spending enormous sums on marketing, partnerships and sales. I too have a lot riding on my company Kyield, so IBM's CEO Ginni and I share that in common: our jobs and future wealth are riding on our respective systems. IBM is constantly reminding us that Watson, and healthcare in particular, are "moon shots" for IBM, but I'm seriously beginning to wonder whether this moon shot is a prudent business decision for IBM and its customers, or a science project that needs another two decades of R&D, as we performed with Kyield, before being unleashed on customers, particularly business customers (more on basic vs. applied research in a minute).

In order to get our arms around this topic we need to understand a few of the business and technical issues, which for me date back to the early 1980s, including discussions with IBM and most other industry leaders off and on the entire time. As you may be aware, IBM has been substantially dependent upon the high-end service model since the previous major transformation led by Lou Gerstner in the early 1990s.

What is less known is that IBM grew to over 400,000 employees in that model, with more in India than in the U.S. While a brilliant turnaround model in Lou's time that probably saved the company, 20-plus years later the service model has in my view grown far beyond the means to pay for it, and has become a big part of the problem in IT for customers and the macro economy. I think IBM understands this well, but it's a slow and difficult transformation. Ginni herself often states in interviews that the challenge is whether IBM "can make the transformation in time". I suspect, with some private confirmation, that time may be growing short.

IBM is not alone. All system integrators and many IT consultants share this misalignment-of-interest challenge, often discussed today regarding both internal and external IT investments. The IT services sector represents something like a third of the now almost $4 trillion global IT industry, but drives spending in the majority of it, which is one reason the enterprise cloud market is exploding. We then need to understand that IBM's quarterly revenue has been falling like a rock for several years, and so too has the company's value, during which time competitors like AWS are experiencing record growth. That places a great deal of pressure on the company, partners and loyal customers, not to mention investors and employees. While I am empathetic with IBM's challenge, and especially with employees, rest assured that whatever pressure IBM is under, it cannot compare to that on a self-funded entrepreneur.

We have our challenges as well, including the fallout from IBM's problem in the marketplace, not least of which is a massive ad spend and sales force: millions of shareholders, employees and partners all over the world clicking on articles about Watson, which incentivizes publishers to write still more articles with the keyword Watson. Unfortunately, all that attention and spending isn't necessarily good for customers, the economy, or even IBM. In this regard I may have as much or more relevant experience as our friends at IBM, since the challenge of overcoming such an advantage from a small company is substantially greater than defending it from within an incumbent.

The namesake Watson, by the way, was not a scientist but a famous salesman who built the early IBM by going door to door selling machines. Trust me, I respect that, as well as IBM and the many people I know and have known at the company. This may provide a clue, however, regarding the dual branding in the name Watson. Prior to becoming CEO of IBM, Ginni was SVP of Sales, Marketing, and Strategy (I too am guilty, as I was a CSO of smaller turnaround companies long before changing paths, before Lou's time).

Let’s talk technology

With business strategy, misalignment of interests, and potential business model conflicts out of the way, let's now take a brief look at the science and technology involved with Watson, which, like my company Kyield and our OS, is based on AI. In fact, my core patent was labeled an AI system by the USPTO. Ginni is stepping back from using the term AI now ("a small part of it"), presumably due to the fear of job displacement out there and other nonsense perpetuated by famous brands with all manner of agendas, which is certainly understandable. I've contributed to that learning curve myself over at Wired, but make no mistake: Watson is AI, even if most of the revenue may come from some other stream like system integration, solutions, and consulting.

While defining AI is not a perfect science, the consensus among scientists is that AI can be divided into two forms, which is extremely important to understand and directly relevant to almost everything discussed here and elsewhere on AI:

  1. General AI (AGI), which is also referred to by some leading AI scientists and authors who cover the field as ‘super intelligence’, or strong AI.
  2. Narrow AI, which is also called weak or applied AI.

Augmentation or enhancement is by extension applied AI as it is very narrow and highly specific, particularly in our case for each individual entity down to the molecular level when necessary (as in personalized healthcare). Kyield is without question one of the world’s competency leaders at the confluence of human and artificial intelligence, if not the leader.

Now let’s delve into the carefully structured quotes in Tom’s article to glean some additional intelligence.

“It’s an apprenticeship form of training that takes years—there are lots of subtleties that Watson has to learn,” said Dr. Kris at MSKCC.

This is inherent in any deep learning (DL) application, including the source code researchers share with the public and the code in our system, but the challenge frankly has always been with system design, algorithmics, and hardware, not marketing. DL is now widely available to anyone with the talent, providing a long-term benefit across all sectors, and is certainly not dependent on Watson, Kyield or any other system; rather, it's a function within a system. The difference with Kyield is that while DL is tapped for continuous learning over time, other critically important functionality in the basic core provides immediate value to customers, like increased productivity and crisis prevention.

It was not a trivial undertaking to design the Kyield OS in a simple to use fashion. I doubt that it would have been possible if funded by a conflicted organization of any kind, including the super majority of corporations, foundations or government R&D programs. We sacrificed to remain independent throughout the long voyage across the valley of death in large part to avoid such conflicts and be free to focus only on the needs of customers and the specific tasks at hand, not least of which is to prevent crises sourced within large organizations.

“But the problem comes when the needed knowledge isn’t in the corpus. Dr. Kris at MSKCC comments: We had three drugs approved in lung cancer this year. None of them are in the literature yet. And definitions of cancer and its variations are being redefined all the time as we understand the biological characteristics of each one. The science is changing more rapidly than the published literature.”

This is a good example of the many specific intellectual obstacles we were forced to overcome early in our R&D, and the solution is frankly partially represented in our patented design. The complete solution also includes tradecraft and secrets, including expected future patents, and like IBM we are dependent on intellectual property for survival, so I can't disclose further except to say that it was a very difficult and expensive problem to overcome; one we deemed necessary to solve prior to offering to customers.

“MDACC (UT MD Anderson Cancer Center) actually referred to its project as a moon shot.”….. “An application like OEA cannot deliver on its intended impact of improving patient outcomes worldwide without addressing the necessary network infrastructure, security and regulatory controls, data sharing/access/use contracts, and reimbursement, not to mention the culture of medicine and clinical adoption. Only through addressing these non-technical challenges, we will be able to translate a piece of technology, like OEA, into impact. That is what separates an innovation from a transformation…that is what makes it a moon shot.” — Dr. Lynda Chin, who led the Watson-based project at MDACC.

I see this as the most mission-oriented statement in Tom's article, coming from the only disclosed customer. MDACC has a clear mission in scientific research to eventually eliminate cancer, so related ‘moon shots’ fall well within the organization's responsibility. Most organizations, including most businesses, do not share a mission similar to MDACC's, which is why we took our R&D much further in applied form and removed as much risk as possible for customers prior to offering, even if admittedly lacking billions of USD in development, marketing or sales.

Definition of Moonshot:

“A moonshot, in a technology context, is an ambitious, exploratory and ground-breaking project undertaken without any expectation of near-term profitability or benefit and also, perhaps, without a full investigation of potential risks and benefits.”

If you concluded from reading this article that Kyield doesn’t claim to be a moonshot, that would be correct. Kyield does not provide artificial general intelligence, but rather offers a highly evolved system with as much complexity driven out of it as possible, so it is very much an applied system with a laser focus. While Kyield was a moon shot in the mid 1990s when developing the theorem in our lab, it is now a viable product and system at a very attractive price with a reasonably good probability of achieving an outstanding ROI.

Bringing discussion back to earth

Back down here on earth at the southern tip of the Rocky Mountains, in the Land of Enchantment, is a City Different called Santa Fe, which is over 400 years old. This area is known for history, art, culture, climate and science, the combination of which is why we brought Kyield here from the Bay Area seven years ago to mature our R&D. While NM has vast open spaces made famous by Georgia O’Keeffe among others, we also have one of the highest concentrations of intellectual capital in the known universe, including of course the Moon!

I have a suggestion that is entirely compatible with moon shots of the Watson kind, which local theorists understand better than most, and that is to adopt a very pragmatic AI system that follows the rules of law, physics and economics. We engaged in this process by the book, took massive risk, played strictly by the rules of engagement, invented an authentic system from scratch, and are now offering the world's most advanced system at the confluence of human and artificial intelligence, which can be adopted at a tiny fraction of the cost of those described in Tom's article.

In addition, while almost every one of the Fortune 100 has benefited greatly from the science here in NM, most of which was produced with taxpayers' money (Kyield is a rare exception in that regard), that value is almost always exported in the form of spinouts, flips and M&A, usually to the coasts and occasionally offshore. This commercialization (aka tech transfer) model, which generates considerable wealth for a very few, has not translated into benefits for NM from the beneficiaries of the R&D in the private sector; otherwise the numbers would be very different.

Seven decades after the Manhattan Project and hundreds of billions of dollars later, NM has yet to experience a significant business success, will soon surpass WV to rank dead last in unemployment, and has among the highest rates of poverty and crime in the U.S. And it isn’t just about education as many with advanced degrees are unemployed or underemployed here. Part-time wait staff at local hospitality establishments or gift shops holding doctorates is not uncommon.

So my suggestion is to come on out and visit Santa Fe just as hundreds of the leading minds in the world do each year, and we can then discuss in greater detail how our applied science in the form of the Kyield OS can help your organization ascend to a higher level of performance, and do the right thing for your career, organization and the economy in the process. In so doing you will empower us to empower NM and perhaps the rest of the global economy to ascend to the next level, which would be a good thing for everyone.

Oh, about that Watson we keep reading about? No worries, it’s likely compatible with the Kyield OS—that’s what all those APIs are for. And who knows, one of these days that moon shot may just pay off. In the interim we can help your organization ascend almost immediately following adoption of the Kyield OS!

Mark Montgomery


New E-Book: Ascension to a Higher Level of Performance



Power of Transdisciplinary Convergence

Ascension to a Higher Level of Performance 

The Kyield OS: A Unified AI System

By Mark Montgomery
Founder & CEO
Kyield

I just completed an extensive e-book for customers and prospective customers, which should be of interest to all senior management teams in all sectors as the content impacts every aspect of individual and corporate performance.

Our goals in this e-book are fivefold:

  1. Provide a condensed story on Kyield and the voyage required to reach this stage.
  2. Demonstrate how the Kyield OS assimilates disparate disciplines in a unified manner to rapidly improve organizations and then achieve continuous improvement.
  3. Discuss how advances in software, hardware and algorithmics are incorporated in our patented AI system design to accelerate strategic performance and remain competitive.
  4. Detail how a carefully choreographed multi-phase pilot of the Kyield OS can provide the opportunity for an enduring competitive advantage by establishing a continuously adaptive learning organization (CALO).
  5. Educate existing and prospective customers on the Kyield OS as much as possible without disclosing unrecoverable intellectual capital, future patents and trade secrets.
TABLE OF CONTENTS
INTRODUCTION  1
REVOLUTION IN IT-ENABLED COMPETITIVENESS  2
POWER OF TRANSDISCIPLINARY CONVERGENCE  3
MANAGEMENT CONSULTING  4
COMPUTER SCIENCE AND PHYSICS  5
ECONOMICS AND PSYCHOLOGY  9
LIFE SCIENCE AND HEALTHCARE 10
PRODUCTS AND INDUSTRY PLATFORMS 11
THE KYIELD OS 11
THE KYIELD PERSONALIZED HEALTHCARE PLATFORM 12
ACCELERATED R&D 13
SPECIFIC LIFE SCIENCE AND HEALTHCARE USE CASES 13
BANKING AND FINANCIAL SERVICES 14
THE PILOT PROCESS 15
EXAMPLE: BANKING, PHASE 1 17
PHASE 2 18
PHASE 3 18
PHASE 4 18
CONCLUSION: IN THIS CASE THE END JUSTIFIES THE MEANS  21

To request a copy of this e-book please email me at markm@kyield.com from your corporate email account with job title and affiliation.

Revolution in IT-Enabled Competitiveness


Four Stages of Enterprise Network Competence

Most current industry leaders owe their existence, beyond basic competencies and resources, to a strong competitive advantage from early adoption of systems engineering and statistical methods for industrial production, which powered much of the post-WW2 economy. These manual systems and methods accelerated global trade, extraction, logistics, manufacturing and scaling efficiencies, and became computerized over the last half-century.

The computer systems were initially highly complex and very expensive, though they resulted in historic business successes such as American Airlines' SABRE in 1959 [1] and Walmart's logistics system starting in 1975 [2], which helped Walmart reach a billion USD in sales in a shorter period than any other company, in 1980.

As those functions previously available to only a few became productized and widely adopted globally, the competitive advantage began to decline. The adoption argument then changed from a competitive advantage to an essential high cost of entry.[3] When functionality in databases, logistics and desktops became ubiquitous globally, the competitive advantage was substantially lost, yet costs continued to rise in software while falling dramatically in hardware, causing problems for customers as well as for national and global macroeconomics. In order to achieve a competitive advantage in IT, it became necessary for companies to invest heavily in commoditized computing as a high cost of initial entry, and then invest significantly more in customization on top of the digital replicas most competitors enjoyed.

The network era, which began in the 1990s with the commercialization of the Internet and Web (both based on universal standards), introduced a very different dynamic to the IT industry that has now impacted most sectors and the global economy. Initially under-engineered and overhyped for short-term gains during the inflation of the dotcom bubble, its long-term impacts were underestimated, as evidenced by the ongoing disruption causing displacement in many industries today. We are now entering a new phase Michael Porter refers to as ‘the third wave of IT-driven competition’, which he claims “has the potential to be the biggest yet, triggering even more innovation, productivity gains, and economic growth than the previous two.” [4]

While I see the potential of smart devices much as Porter does, the potential of AI-enhanced human work for increased productivity, accelerated discovery, automation, prevention and economic growth is enormous; and, similar to the 1990s, while machine intelligence is overhyped in the short term, the longer-term impact could indeed be "the biggest yet" of the three waves. This phase of IT-enabled competitiveness is the logical extension of the network economy, benefiting from thousands of interoperable components long under development from vast numbers of sources to execute the ‘plug and play’ architecture many of us envisioned in the 1990s. This still-emerging Internet of Entities, when combined with advanced algorithmics, brings massive opportunity and risk for all organizations in all sectors, requiring operational systems and governance specifically designed for this rapidly changing environment.

This is a clip from an E-book nearing completion titled: The Kyield OS: A Unified AI System; Rapid Ascension to a Higher Level of Performance. Existing or prospective customers are invited to send me an email for a copy upon completion within the next month – markm at kyield dot com.

[1] https://www.aa.com/i18n/amrcorp/corporateInformation/facts/history.jsp

[2] http://www.scdigest.com/ASSETS/FIRSTTHOUGHTS/12-07-26.php?cid=6047

[3] Lunch discussion on topic with Les Vadasz in 2009 in Silicon Valley.

[4] https://hbr.org/2014/11/how-smart-connected-products-are-transforming-competition

Recent trends in artificial intelligence algorithms



Promise of spring reveals interesting hybrid variants

Those of us who have been through a few tech cycles have learned to be cautious, so for the second article in this series I thought it might be helpful to examine the state of AI algorithms to answer the question: what’s different this time?

I reached out to leading AI labs for their perspective, including Jürgen Schmidhuber at the Swiss AI Lab IDSIA. Jürgen's former students include team members at DeepMind who co-authored a paper recently published by Nature on deep reinforcement learning.

Our Recurrent Neural Networks (RNNs) have revolutionized speech recognition and many other fields, broke all kinds of benchmark records, and are now widely used in industry by Google (Sak et al.), Baidu (Hannun et al.), Microsoft (Fan et al.), IBM (Fernandez et al.), and many others. — Jürgen Schmidhuber

Jürgen recently published an overview on deep learning in neural networks with input from many others, including Yann LeCun, Director of AI Research at Facebook. Lee Gomes recently interviewed LeCun, who provided one of the best definitions of applied AI I've seen:

It’s very much interplay between intuitive insights, theoretical modeling, practical implementations, empirical studies, and scientific analyses. The insight is creative thinking, the modeling is mathematics, the implementation is engineering and sheer hacking, the empirical study and the analysis are actual science. What I am most fond of are beautiful and simple theoretical ideas that can be translated into something that works. — Yann LeCun at IEEE Spectrum

Convolutional neural networks (CNNs)

Much of the recent advancement in AI has been due to convolutional neural networks, which can be trained to mimic partial functionality of the human visual cortex. The inability of machines to accurately identify objects is a common problem, one that slows productivity, increases risk, and causes accidents worldwide.

CNNs make use of local filtering with various max-pooling techniques and far fewer parameters, making them easier to train than standard fully connected multilayer networks. The invention and evolution of the backpropagation (BP) algorithm through multiple nonlinear layers, combined with other supervised learning methods, have enabled nascent artificial intelligence systems with the ability to continuously learn.
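To make "local filtering" and "max-pooling" concrete, here is a deliberately tiny pure-Python sketch of my own (illustrative only: production CNNs use optimized libraries, many learned filters, and nonlinearities between layers):

```python
def conv2d(image, kernel):
    """'Valid' 2D convolution (cross-correlation, as in most DL libraries):
    the same small set of weights is slid across every local patch."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(iw - kw + 1)]
            for i in range(ih - kh + 1)]

def max_pool2d(fmap, size=2):
    """Non-overlapping max-pooling: keep only the strongest activation
    in each window, shrinking the feature map."""
    return [[max(fmap[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# A 4x4 "image" and a 2x2 filter: four shared weights cover the whole input,
# which is why CNNs need far fewer parameters than fully connected layers.
image = [[1, 2, 0, 1],
         [0, 1, 3, 1],
         [2, 1, 0, 2],
         [1, 0, 1, 1]]
kernel = [[1, 0],
          [0, -1]]
fmap = conv2d(image, kernel)    # 3x3 feature map
pooled = max_pool2d(fmap)       # only the strongest local responses survive
```

The weight sharing in `conv2d` and the down-sampling in `max_pool2d` are the two properties behind the reduced parameter count described above.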

CNNs are valuable for a wide range of applications such as diagnostics in healthcare, agriculture, supply chain quality control and automated disaster prevention in all sectors. CNNs are also applied in high performance large-vocabulary continuous speech recognition (LVCSR).

FitNets

Yoshua Bengio is Professor at Université de Montréal and head of the Machine Learning Laboratory (LISA). He is making good progress on a new deep learning book for MIT Press with co-authors Ian Goodfellow and Aaron Courville.

What’s different? – More compute power (the most important element) – More labeled data – Better algorithms for supervised learning (the algorithms of 20 years ago, as is, don't work that well, but a few small changes discovered in recent years make a huge difference) — Yoshua Bengio

Yoshua and several colleagues recently proposed a novel approach to train thin and deep networks, called FitNets, which introduces ‘hints’ with improved ability to generalize while significantly reducing the computational burden. In an email exchange, he shared insights on thin nets:

The thin deep net idea is a procedure for helping to train thinner and deeper networks. You can see deep nets as a rectangle: what we call depth corresponds to its height (number of layers) and what we called thickness (or its opposite, being thin) is the width of the rectangle (number of neurons per layer). Deeper networks are harder to train but can potentially generalize better, i.e., make better predictions on new examples. Thinner networks are even harder to train, but if you can train them they generalize even better (if not too thin!). — Yoshua Bengio
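Bengio's rectangle analogy can be made concrete with a quick parameter count. A toy sketch of my own (the layer sizes are arbitrary illustrations, not taken from FitNets or any cited model):

```python
def mlp_param_count(layer_sizes):
    """Total weights plus biases in a fully connected net with these widths."""
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

# Same input (784) and output (10) in both nets. Depth is the number of
# layers (the rectangle's height); thickness is neurons per layer (its width).
wide_shallow = [784, 512, 10]            # shallow but thick
thin_deep = [784, 64, 64, 64, 64, 10]    # deeper yet far thinner

wide_params = mlp_param_count(wide_shallow)  # 407050
thin_params = mlp_param_count(thin_deep)     # 63370
```

The thin deep net here carries roughly one-sixth the parameters, which hints at why such nets are harder to train yet, when training succeeds, can generalize better.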

Long Short-Term Memory (LSTM)

Sepp Hochreiter is head of the Institute of Bioinformatics at the JKU of Linz (photo above), and was Schmidhuber’s first student in 1991. Schmidhuber credits Sepp’s work for “formally showing that deep neural networks are hard to train, because they suffer from the now famous problem of vanishing or exploding gradients”.

Exponentially decaying signals, or signals exploding out of bounds, were, as scientists are fond of saying, a ‘non-trivial’ challenge, requiring a series of complex solutions to achieve recent progress.
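The decay-or-explode behavior can be shown in a few lines: in backpropagation through time, the error signal is repeatedly scaled by the recurrent weight, so over many steps it either shrinks toward zero or grows without bound. A deliberately simplified scalar sketch (real RNN gradients involve weight matrices and activation derivatives, not a single number):

```python
def backprop_signal(weight, steps):
    """Track an error signal scaled by the same recurrent weight each step."""
    g, history = 1.0, []
    for _ in range(steps):
        g *= weight
        history.append(g)
    return history

vanish = backprop_signal(0.5, 50)    # |w| < 1: the gradient fades toward zero
explode = backprop_signal(1.5, 50)   # |w| > 1: the gradient blows up
```

The LSTM's gated memory cell was designed precisely so the error signal can flow across many time steps without this repeated, uncontrolled scaling.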

The advent of Big Data together with advanced and parallel hardware architectures gave these old nets a boost such that they currently revolutionize speech and vision under the brand Deep Learning. In particular the “long short-term memory” (LSTM) network, developed by us 25 years ago, is now one of the most successful speech recognition and language generation methods. — Sepp Hochreiter

Expectations for the near future

We will go beyond mere pattern recognition towards the grand goal of AI, which is more or less: efficient reinforcement learning (RL) in complex, realistic, partially observable environments… I believe it will be possible to greatly scale up such approaches, and build RL robots that really deserve the name. — Jürgen Schmidhuber at INNS.

Unsupervised learning and reinforcement learning remain prizes in the future (and necessary for real progress towards AI, among other things), in spite of intense research activity and promising advances. — Yoshua Bengio via email.

Deep Learning techniques have the potential to advance unsupervised methods like biclustering to improve drug design or detect genetic relationships among population groups. Another trend will be algorithms that store the current context like a working memory. —Sepp Hochreiter via email.

My perspective

Pioneers in ML and AI deserve a great deal of credit, as do the sponsors who funded R&D through long winters. One difference I see today versus previous cycles is that the components of network computing have created a more sustainable environment for AI, with a greater variety of profitable business models that depend on continuous improvement.

In addition, awareness is growing that learning algorithms are a continuous process that rapidly creates more value over time, so organizations have a strong economic incentive to commit resources early or risk disruption.

In the applied world we are faced with many challenges, including security, compliance, markets, talent, and customers. Fortunately, although creating new challenges, emerging AI provides the opportunity to overcome serious problems that cannot be solved otherwise.

Mark Montgomery is founder and CEO of http://www.kyield.com, which offers technology and services centered on Montgomery’s AI systems invention.

This article was originally published at Computerworld

Discover Kyield on the voyage to CALO (continuously adaptive learning organization)



A tidal pool ecosystem in Half-Moon Bay, CA.

For those involved with the art, science, and mechanics of organizational management, the 1990s was an exciting if humbling decade. The decade was ushered in with new thinking from Peter Senge in The Fifth Discipline, which introduced The Learning Organization to many. The concept sounded absolutely refreshing to those in the trenches—who could disagree with such logic?

I was deeply engaged at the time in a series of turn-arounds in mid-market companies. The particular business case offered for consideration was a union shop that had been bankrupt from the day it opened nearly a decade earlier, and had never made a payment on many millions of dollars of debt.

The new owner had acquired the operating company out of bankruptcy with assumptions that were based on experiences that did not apply to the subject or market. Due to a low discounted acquisition value, the buyers had little to lose if the company continued to underperform, but a great deal to gain if the company could be brought up to a competitive market position.

Long story short, it was a very challenging yearlong engagement, during which we surpassed everyone's short-term expectations by a significant degree, including my own. Before I share what went wrong, let's review a few things we did right:

  • The subject called for, and we received, an unusual level of authority from the owners, which required exceptional levels of mutual trust and a great deal of credibility.

  • We put together a strong team with a mix of experience relevant to the specific case. Each could recognize the opportunity, and even though under-resourced, we were able to create one of the industry’s top performances that year.

  • We were able to gain the trust and support of the union—which enjoyed a double-digit increase in membership, with most members experiencing significant increases in compensation.

  • Though never union members, senior management, including the owners, had personal experience with labor, which was demonstrated by working alongside workers at critical times and helped gain their support (we walked the talk).

  • The few remaining core customers were so desperate for improved product and service that a modest upgrade and competency improved sales as well as the reputation, from which new and larger customers were gained.

  • The communities involved were relieved to see a well-managed company emerge from years of bankruptcy, churn of management, and under-investment, so we gained support of regional governments (important in this case), which also enjoyed increased tax receipts.

  • The pricing of products and services was well below market, so after modest investment and with competent management, I was able to raise prices significantly while increasing sales volume, resulting in a substantial and growing profit.

  • We were able to double the value of the company in one year based on cash flow and future orders, which dramatically improved the position of the parent company, allowing refinancing at more attractive rates.

Now allow me to share why I consider this engagement (with others in this era) to be among my greatest mistakes.

Failure to learn, adapt, and seize the more important opportunity

Many of us wrongly assumed that the parent company would learn from the experience and use the success as a platform to seize other opportunities, which would have required a change in the structure, type, and talent of the company.

Despite considerable coaching combined with all that accompanies short-term financial success, the parent company did not learn the most valuable lessons from this intense experience, so it was not able to adapt to rapidly changing markets. Motivation and desire were also clearly contributing factors, so after a final attempt to convince the parent company’s owners to transform into a competitive model, we moved on at the end of the contract.

My final report to owners and lenders concluded that the company had probably hit a ceiling, recommending that they either embrace the transformation strategy or sell the company. We could take them no further in current form. Just one example of why—a colleague who was a key team member had received a far superior offer from a great company in a location he and his family preferred.

The client’s counteroffer was far below market, a tiny fraction of the value the operations specialist had created for them. The decision was surprising, as he was one of the pillars of the turnaround, and it confirmed the ceiling for me. The owners were left with a rising star of a subsidiary that kept shining for a short period, during which it likely generated profits far exceeding the investment, but then began to fade. Unlike the parent company’s other holdings, this subsidiary required intensive, sophisticated, and experienced management.

A decade later I read that the subject company had fallen back to roughly the performance level at which the parent company had acquired it. It gives me no pleasure to share that hindsight has shown the period of our engagement to be the peak of the parent company’s 50-year history. We worked very hard to provide the opportunity for an enduring success.

While we enjoyed deep mutual respect with the parent company, which appeared adaptive in this acquisition and others we brought to them, they weren’t willing to transform their organization. They were opportunistic on a one-off basis, which is best suited for the flipping model, not for an enduring business. Even then technology played a big role through IBM mainframes, inventory management, and transactions over networks. Today of course most companies are facing more dramatic change in an environment that is far more talent and technology dependent.

The learning organization struggles in the 1990s

A few years and dozens of assignments later, we had established a tech lab and incubator to explore and test opportunities arising from Internet commercialization, which catapulted the economy with significant g-force into the network era. By the mid-1990s a few operational consultants had become critical of the apparent naiveté of learning organization theory, which, like knowledge management, had a philosophy many could embrace but in practice found difficult to achieve given real-world constraints, including legal and physical ones, not just cultural or soft issues.

A few researchers pointed out that it was difficult to find actual cases of learning organizations (Kerka 1995). In papers, textbooks, and Ph.D. theses I was invited to review, the same few cases and papers were cited relentlessly, often comparing apples to oranges. One example was a study published by Finger and Brand in 1999, which found that systems needed to be non-threatening. That may have been the case for their subject at the Swiss Postal Service, but it would be unrealistic in environments where threats are among the few constants, increasingly including government and academia. Peter Senge apparently took notice of the criticism, reflected in the subtitle of his book The Dance of Change (1999): “The Challenges of Sustaining Momentum in Learning Organizations”.

By the end of the 1990s I had become vocal on the primitive state of systems and tools that could achieve the goals of the learning organization, and more importantly adapt in a sufficient time frame. My perspective was one from operating a live knowledge systems lab that was building, operating, and testing learning networks, which included daily forums that discussed hundreds of real-world cases in real-time over several years. Many were complaining about poor technology and lack of much needed innovation, yet few were focused on improvement from within organizations that would allow innovation to make it to market.

Some researchers have since suggested that in order to achieve the learning organization, it was first necessary for knowledge workers to abandon self-interest, while others claimed that ‘collective accountability’ is the key. In updating myself on this research recently, it sometimes seemed as if organizational management consultants and researchers were attempting to project a vision over actual evidence, denying the historical importance of technology for the survival of our species, as well as mathematics, physics, and economics. Consider the message to an AI programmer or pharma scientist today coming from a tenured professor or government employee with life-long security on ‘the need to abandon self-interest’. This has been tested continuously in competitive markets and simply isn’t credible; the evidence runs overwhelmingly contrary to such advice.

Current state of the CALO

We may have been experiencing devolution and evolution concurrently, yet people representing all major disciplines kept working to overcome these problems. My company Kyield is among them.

Significant progress has been made across and between all disciplines, including with understanding and engineering the dynamical components of modern organizations.

The network economy

There is no question that the structure of an organization, including its legal, tax, reporting, incentive, physical, and virtual dimensions, greatly influences its ability to adapt even when lessons have been learned well. The network economy has altered the very foundation many organizations operate on.

While each structure needs to be very carefully crafted, the required structural changes vary from the rare ‘no change is needed’ to the increasingly common ‘bold change required to survive’. Though it need not be so, it is often more efficient to disrupt and displace than to change from within.

Globalization

While we are experiencing a repatriation and regionalization trend, globalization radically changed the way organizations learn and how they must adapt. Several billion more people are driving the network economy than in 1990, with a significant portion moving from extreme poverty to the middle class.

The global financial crisis (GFC)

Suffice it to say for this purpose that the GFC has altered the global operating and regulatory landscape for many businesses and governments, in some cases radically, and is still quite fluid. Currency swings in response to unprecedented monetary policies are the most recent example of this chaotic process, though only one of many that organizations must navigate. If the regulatory agencies were CALOs, much of the pain would be mitigated.

Machine learning (ML)

Although commercialization is still quite early and most cases remain confidential, ML combined with cognitive computing and AI-assisted augmentation is rapidly improving. Deep learning is an effective means of achieving a CALO in a pure network environment that interacts with customers, such as search and social networking, though it is now being adopted more widely.

One of the largest continuous learning projects is Orion at UPS, which was kind enough to share some detail in public. Orion provides an excellent case for many to consider, as it overlaps the physical world with advanced networks and large numbers of employees worldwide. Unlike Google or Facebook, which began as virtual companies running on computer networks, UPS represents a large transformation of the type most organizations need to consider. UPS, of course, has massive logistical operations with a very high volume of semi-automated and automated data management.
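Orion’s actual algorithms are proprietary, but the class of problem is well understood: continuous route optimization over live delivery data. The toy sketch below uses a greedy nearest-neighbor heuristic, purely as an illustration of the kind of combinatorial decision such a system automates (the coordinates are hypothetical, and this is nothing like Orion’s real sophistication):

```python
import math

def nearest_neighbor_route(depot, stops):
    """Greedy nearest-neighbor heuristic: from the depot, repeatedly
    visit the closest remaining stop. A toy stand-in for the far more
    sophisticated continuous optimization a system like Orion performs."""
    route = [depot]
    remaining = list(stops)
    while remaining:
        last = route[-1]
        nxt = min(remaining, key=lambda p: math.dist(last, p))
        route.append(nxt)
        remaining.remove(nxt)
    return route

# Four delivery stops around a depot at the origin (hypothetical data)
route = nearest_neighbor_route((0, 0), [(2, 3), (1, 1), (5, 0), (0, 4)])
print(route)  # depot first, then stops in greedy nearest order
```

Real fleet optimization adds time windows, traffic, and learning from historical outcomes; the point is only that the core is an algorithmic decision loop running continuously over operational data.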

Having developed deeply tailored use case scenarios for each sector in Kyield’s customer pipeline, numbering in the dozens, I can offer a few words of advice in public.

  1. While all public cases should be considered, remember that few are shared in public for good reason. Few if any companies have the business model, need, resources, or capacity of Google or UPS, which is why they can share the information.

  2. Rare is the case when enormous custom projects should be copied. The craftwork of planning and design is substantially about taking available lessons, combining with technology, systems, and talent in a carefully tailored manner for the client.

  3. Regardless of whether a large custom project, completely outsourced, or anything in-between, board level supervision is necessary to avoid switching dependencies from one vendor or internal department to another. I see this occurring with new silos popping up in open source models, data science, statistics, and algorithms. The goal for most should be to reduce dependencies, which is nontrivial when dealing with high-level intellectual capital embedded in advanced technology, particularly given talent wars, level of contracting in these functional roles, and churn.

  4. Since few will be able to develop and maintain competitive custom systems, the goal should be to seek an optimal, adaptive balance between the benefits of custom-tailored software systems (Orion or Google) and the efficiency of write-once, adopt-at-scale software development. This is one of several areas where Kyield can really make a difference in the level of ROI realized. Our continuously adaptive data management system (patented) is automatically tailored to each entity, with semi-automated functions restricted to regulatory and security parameters at the corporate and unit level, and with the option for individual knowledge workers to plan projects and goals.

  5. Plan from inception to expand the system to the entire organization, ecosystem, and Internet of Entities. Otherwise, the organization will be physically restricted from achieving much of the value any such system can offer, particularly in risk management. One of the biggest errors I see being made, including in some of the most sophisticated tech companies, is approaching advanced analytics as ‘only’ a departmental project. Of course it is wise to take the low-hanging fruit through use of pre-existing department budgets where authority and ROI are simple, but it is a classic mistake for CIOs and IT to treat such departmental projects as strategic, with very rare exceptions.

  6. Optimize relationship management. Just one example is when our adaptive data management is combined with advanced ML algorithms that include predictive capabilities, which among other functions weighs counter party risk. A similar algorithm can be run for identifying business opportunities.
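To make item 6 concrete, here is a minimal sketch of a weighted counterparty risk score. This is not Kyield’s patented system or any production algorithm; the indicator names and weights are invented for illustration, with a logistic function standing in for a model that would in practice be fit by ML:

```python
import math

def counterparty_risk(features, weights, bias=0.0):
    """Hypothetical counterparty risk score: a logistic model over
    normalized risk indicators (names and weights are illustrative)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # probability-like score in (0, 1)

# Indicators: leverage ratio, payment delay (scaled), dispute rate
weights = [2.0, 1.5, 3.0]  # would be learned from data in practice
low  = counterparty_risk([0.1, 0.2, 0.0], weights, bias=-2.0)
high = counterparty_risk([0.9, 0.8, 0.5], weights, bias=-2.0)
print(round(low, 3), round(high, 3))  # low-risk vs. high-risk profile
```

The same scoring loop, with opportunity indicators substituted for risk indicators, illustrates the parallel algorithm mentioned for identifying business opportunities.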

Concluding thoughts

While it is more challenging to achieve buy-in for organization-wide systems, it is physically impossible to achieve critical use cases otherwise, some of which have already proven to be very serious, and can be fatal. Moreover, very few distributed organizations can become a CALO without a holistic system design across the organization’s networks, particularly in the age of distributed network computing.

If this isn’t sufficient motivation to engage, consider that learning algorithms are very likely (or soon will be) improving the intelligence quotient and operational efficiency of your chief competitors at an extremely rapid rate. Lastly, if the subject organization or entire industry is apathetic and slow to change, it is increasingly likely that highly sophisticated, well-financed disrupters are maturing plans to deploy this type of technology to displace the subject, if not the entire industry.

Bottom line: Move towards CALO rapidly or deal with the consequences. 

Mark Montgomery is founder and CEO of http://www.kyield.com, which offers an advanced distributed operating system and related services centered around Montgomery’s AI systems invention.

Applied artificial intelligence: Mythological to neuroanatomical


First of a three-part series exploring the depths of artificial intelligence in 2015.


Neuromorphic chip, University of Heidelberg, Germany Credit: University of Heidelberg

While artificial intelligence (AI) has roots as far back as Greek mythology, and Aristotle is credited with inventing the first deductive reasoning system, it wasn’t until the post-WWII era of computing that we humans began to execute machine intelligence with early supercomputers. The science progressed nicely until the onset of the first AI winter in the late 1960s and early 1970s, representing the beginning of severe cycles in technology. A great deal of R&D needed to evolve and mature over the next five decades prior to wide adoption of applied AI — here is a brief history and analysis of the rapidly evolving field of artificial intelligence and machine learning.

Hardware innovation

One of the primary obstacles to applied AI has been with scaling neural networks, particularly containing rich descriptive data and complex autonomous algorithms. Even in today’s relatively low cost computing environment scale is problematic, resulting in graduate students at Stanford’s AI lab complaining to their lab director “I don’t have 1,000 computers lying around, can I even research this?” In a classic case of accelerated innovation, Stanford and other labs then switched to GPUs: “For about US$100,000 in hardware, we can build an 11-billion-connection network,” reports Andrew Ng, who recently confirmed Baidu now “often trains neural networks with tens of billions of connections.”
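Ng’s figures can be sanity-checked with simple arithmetic; the numbers below come directly from the quote above:

```python
hardware_cost = 100_000        # USD, per Ng's estimate
connections = 11_000_000_000   # the 11-billion-connection network

# Hardware cost amortized per million connections
cost_per_million = hardware_cost / (connections / 1_000_000)
print(round(cost_per_million, 2))  # ~9.09 USD per million connections
```

A few dollars per million connections is what made academic-scale deep learning feasible on GPUs where CPU clusters had priced it out of reach.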

Although ever-larger scale at greater speed improves deep learning efficiency, CPUs and GPUs do not function like a mammalian brain, which is why for many years researchers have investigated non–von Neumann architectures that would more closely mimic biological intelligence. One high profile investment is the current DARPA SyNAPSE program (Systems of Neuromorphic Adaptive Plastic Scalable Electronics).

In addition to neuromorphic chips, researchers are increasingly focused on the use of organic materials, which is part of a broader trend towards convergence of hardware, software, and wetware described by IARPA as cortical computing, aka neuroanatomical. The ability to manipulate mass at the atomic level in computational physics is now on the horizon, raising new governance issues as well as unlimited possibilities. The research goals of the Department of Energy in the fiscal 2016 budget proposal provide a good example of the future direction of R&D:

The Basic Energy Sciences (BES) program supports fundamental research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels …. Key to exploiting such discoveries is the ability to create new materials using sophisticated synthesis and processing techniques, precisely define the atomic arrangements in matter, and control physical and chemical transformations.

Software and data improvements

Similar to hardware, software is an enabler that determines the scalability, affordability, and therefore economic feasibility of AI systems. Enterprise software, including database systems, also suffered a period of low innovation and business creation until very recently. As of early 2015, several companies in the billion-dollar startup club are providing infrastructure support to AI, offering overlapping services such as predictive analytics, and/or beginning to employ narrow AI and machine learning (ML) internally. Many of us are now concerned about the lack of experience and excessive hype that often accompany rapidly increased investment.

Database systems and applications

Database systems, storage and retrieval are of course of critical importance to AI. A few short years ago only one leading vendor supported semantic standards, which was followed by a second market leader two years ago. The emergence of multiple choices in the market is resulting in the first significant new competition in database systems in over a decade, several of which are increasingly competing for critical systems at comparable performance levels at significantly lower cost.

Data standards and interoperability

Poor interoperability vastly increases costs and systemic risk while dramatically slowing adaptability, and makes effective governance all but impossible. While early adopters of semantic standards include intelligence agencies and scientific computing, even banks are embracing data standards in 2015, in part due to regulatory agencies collaborating internationally. High quality data built on deeply descriptive standards combined with advanced analytics and ML can provide much improved risk management, strong governance, efficient operations, accelerated R&D and improved financial performance. In private discussions across industries, it is clear that a consensus has finally formed that standards are a necessity in the network era.
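The practical payoff of descriptive standards is that data becomes self-describing and queryable across systems. The toy subject-predicate-object store below, sketched in the spirit of RDF (all identifiers are invented), shows the basic mechanics; real deployments use standards-compliant triple stores and SPARQL:

```python
# Toy triple store: each fact is a (subject, predicate, object) tuple,
# the shape underlying semantic standards such as RDF.
triples = {
    ("trade:123", "hasCounterparty", "bank:ACME"),
    ("trade:123", "hasNotional", "10000000"),
    ("bank:ACME", "hasJurisdiction", "US"),
}

def query(subject=None, predicate=None, obj=None):
    """Match triples against an optional pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Everything known about trade:123, regardless of which system supplied it
print(sorted(query(subject="trade:123")))
```

Because every fact carries its own description, a regulator, a counterparty, and an internal risk system can all run the same pattern query without bespoke integration, which is the interoperability argument in miniature.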

Automation, agility, and adaptive computing

An effective method of learning the agility quotient of an organization is post-mortem analysis after a catastrophic event, such as patent cliffs in big pharma or failure to respond to digital disruption. Unfortunately for executives, by the time clarity is gained from hindsight, the wisdom is only useful if the executive is placed in a position of authority again. Similarly, failed governance is quite evident in the wake of industrial accidents, missed opportunities to prevent terrorist attacks, and ‘whale’ trades, among many others.

Given that the majority of economic productivity is planned and executed over enterprise networks, it is fortunate that automation is beginning to replace manual system integration. Functional silos are causal to crises, as well as obstacles to solving many of our greatest challenges. Too often the lack of interoperability has been leveraged for the benefit of the IT ecosystem rather than the customer’s mission. In drug development, for example, until recently scientists were reporting lag times of up to 12 months for important queries to be run. By automating data discovery, scientists can run their own queries, AI applications can recommend unknown queries, and deep learning can continuously seek solutions to narrowly defined problems tailored to the needs of each entity.

In the next article in this series we’ll take a look at recent improvements in algorithms and what types of AI applications are now possible.

(This article was originally published at Computerworld)

Mark Montgomery is the founder and CEO of Kyield, originator of the theorem “yield management of knowledge,” and inventor of the patented AI system that serves as the foundation for Kyield.

Plausible Scenarios For Artificial Intelligence in Preventing Catastrophes


(The PDF version of this article is now available here)


Given consistent empirical evidence that raising red flags is more likely to cause unintended consequences than to result in essential actors heeding the warnings, those endowed with a public platform carry a special responsibility, as do system architects. Nowhere is this truer than at the confluence of advanced technology and existential risk to humanity.

As one who has invested a significant portion of my life studying crises and designing methods of prevention, including for many years the use of artificial intelligence (AI), I feel compelled to offer the following definitions of AI along with a few of what I consider to be plausible scenarios of how AI could prevent or mitigate catastrophes, as well as brief thoughts on how we can reduce the downside risk to levels similar to other technology systems.

An inherent temptation, with considerable incentives, exists in the transdisciplinary field of AI for the intellectually curious to expand complexity infinitely, generally leaving the task of pragmatic simplicity to applied R&D. It is from that applied perspective that the following definitions and scenarios are offered, primarily for consideration and use in this article.

A Definition of Artificial Intelligence: Any combination of technologies and methodologies that result in learning and problem solving ability independent of naturally occurring intelligence.

A Definition of Beneficial AI: AI that has been programmed to prevent intentional harm and to mitigate unintended consequences within a strictly controlled governance system, which include redundant safety functions, and continuous human oversight (to include a kill switch in high-risk programs). [1]

A Definition of Augmentation: For the purposes of this article, augmentation is defined not as implants or direct connection to biological neural systems, which is an important, rapidly emerging sub-field of AI in biotech, but rather simply as enhancing the quality of human work products and economic productivity with the assistance of AI. [2]

To aid in this exercise, I have selected quotes from the highly regarded book Global Catastrophic Risks (GCR), which consists of 22 chapters by 25 authors in a worthy attempt to provide a comprehensive view. [3]  I have also adopted the GCR definition of ‘global catastrophic’: 10 million fatalities or 10 trillion dollars. The following reminders from GCR seem appropriate in a period of intense interest in AI and existential risk:

“Even for an honest, truth-seeking, and well-intentioned investigator it is difficult to think and act rationally in regard to global catastrophic risks and existential risks.”

“Some experts might be tempted by the media attention they could get by playing up the risks. The issue of how much and in which circumstances to trust risk estimates by experts is an important one.”

Super Volcanoes

A very high probability event that should command greater attention: super volcano eruptions occur about every 50,000 years. GCR highlights the Toba eruption in Indonesia approximately 75,000 years ago, which may be the closest humanity has come to extinction. The primary risk from super-eruptions is airborne particulates that can cause a rapid decline in global temperatures, estimated to have been 5–15 °C after the Toba event. [4]

While it appears that a super-eruption contains a low level of existential risk to the human race, a catastrophic event is almost certain and would likely exceed all previous disasters, followed by massive loss of life and economic collapse, reduced only by the level of preventative measures taken in advance. While preparation and alert systems have much improved since my co-workers and I observed the eruption of Mt. St. Helens from Paradise on Mt. Rainier the morning of May 18th, 1980, it is quite clear that the world is simply not prepared for one of the larger super-eruptions that will undoubtedly occur. [5] [6]

MM on Mt. Rainier, 5/18/1980


The economic contagion observed in recent events such as the 9/11 terrorist attacks, the global financial crisis, the Tōhoku earthquake and tsunami, the current Syrian Civil War, and the ongoing Ebola epidemic in West Africa serves as a reasonable real-world benchmark from which to make projections for much larger events, such as a super-eruption or asteroid strike. It is not alarmist to state that we are woefully unprepared and need to employ AI to increase the probability of preserving our species and others. The creation and colonization of highly adaptive outposts in space, designed with the intent of sustainability, would therefore seem prudent policy, best served with the assistance of AI.

The roles of AI in mitigating the risks associated with super volcanoes include accelerated research, more accurate forecasting, improved modeling for preparation and mitigation of damage, accelerated discovery of systems for surviving super-eruptions, and assistance in recovery. These tasks require ultra-high-scale data analytics to deal with complexities far beyond the ability of humans to process within circumstantially dictated time windows, and are too critical to be dependent upon a few individuals, thus requiring highly adaptive AI across distributed networks involving large numbers of interconnected deep-earth and deep-sea sensors, linking individuals, teams, and organizations. [7] Machine learning has the ability to accelerate all critical functions, including identifying and analyzing preventative measures with algorithms designed to discover and model scenario consequences.
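One small, concrete piece of such a sensor network is automated anomaly detection on incoming readings. The sketch below flags values that deviate sharply from a trailing window, a deliberately simple stand-in (invented data, basic z-score rule) for the far richer learned models a real deep-earth monitoring system would run:

```python
import statistics

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate from the trailing window's mean by
    more than `threshold` standard deviations of that window."""
    anomalies = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu = statistics.mean(base)
        sigma = statistics.stdev(base)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Simulated seismic amplitudes with one abrupt spike at index 8
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.1, 1.0, 9.0, 1.0]
print(flag_anomalies(signal))  # the spike is flagged
```

The value of AI here is not the rule itself but running such detection continuously across thousands of sensors and escalating correlated anomalies to humans in time to act.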

Asteroids are similar to super volcanoes from an AI perspective: both require effective discovery and preparation, which, while ongoing and improving, could benefit from AI augmentation. Any future preventive endeavors may require cognitive robotics working in environments extremely hostile to humans, where real-time communications could be restricted. Warnings may be followed swiftly by catastrophic events; hence preparation is critical, and if we view it as optional, we do so at our own peril.

Pandemics and Healthcare Economics

Among the greatest impending known risks to humans are biological contagions, due to large populations in urban environments that have become reliant on direct public interaction, regional travel by public transportation, and continuous international air travel. Antibiotic-resistant bacteria and viruses in combination with large mobile populations pose a serious short-term threat to humanity and the global economy, particularly from an emergent mutation resistant to pre-existing immunity or drugs.

“For example, if a new strain of avian flu were to spread globally through the air travel network that connects the world’s major cities, 3 billion people could potentially be exposed to the virus within a short span of time.” – Global Risk Economic Report 2014, World Economic Forum (WEF). [8]

Either general AI or knowledge work augmented with AI could be a decisive factor in preventing and mitigating a future global pandemic, which has the potential capacity to dwarf even a nuclear war in terms of short-term casualties. Estimates vary depending on strain, but a severe pandemic could claim 20-30% of the global population compared to 8-14% in a nuclear war, though aftermath from either could be as horrific.

Productivity in life science research is poor due to inherent physical risks for patients as well as non-physical influences, such as cultural, regulatory, and financial factors, that can increase systemic, corporate, and patient risk. Healthcare cost is among the leading global risks due to cost transference to governments; the WEF, for example, cites ‘Fiscal Crises in Key Economies’ as the leading risk in 2014. Other than poor governance that allows otherwise preventable crises to occur, healthcare costs are arguably the world’s most severe curable ongoing crisis, impacting all else, including pandemics.

Priority uses of AI for pandemics and healthcare: [9]

  • Accelerated R&D throughout life science ecosystem, closing patient/lab gap.

  • Advanced modeling scenarios for optimizing prevention, care, and costs.

  • Regulatory process including machine learning and predictive modeling.

  • Precision healthcare tailored to each person; more direct and automated. [10]

  • Predictive analytics for patient care, disease prevention, and economics.

  • Cognitive computing for diagnostics, patient care, and economics.

  • Augmentation for patients and knowledge workers worldwide.
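The ‘advanced modeling scenarios’ bullet above can be grounded with the simplest textbook example: a discrete-time SIR (susceptible-infected-recovered) compartment model. The parameters below are arbitrary, chosen only to show the mechanics that real pandemic models elaborate enormously:

```python
def sir_step(s, i, r, beta, gamma):
    """One discrete step of the classic SIR epidemic model:
    beta governs transmission, gamma the recovery rate."""
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

# Population fractions: 99.9% susceptible, 0.1% infected (arbitrary)
s, i, r = 0.999, 0.001, 0.0
for _ in range(100):
    s, i, r = sir_step(s, i, r, beta=0.4, gamma=0.1)
print(round(r, 2))  # cumulative share removed after 100 steps
```

Operational models layer air-travel networks, interventions, and learned parameters on top of this skeleton; ML’s contribution is fitting and re-fitting those parameters continuously as surveillance data arrives.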

“By contrast to pandemics, artificial intelligence is not an ongoing or imminent global catastrophic risk. However, from a long-term perspective, the development of general artificial intelligence exceeding that of the human brain can be seen as one of the main challenges to the future of humanity (arguably, even the main challenge). At the same time, the successful deployment of superintelligence could obviate many of the other risks facing humanity.” — Eliezer Yudkowsky in GCR chapter ‘Artificial Intelligence as a Positive and Negative Factor in Global Risk’.

Climate Change

Crises often manifest over time as a series of compounding events that either could not be predicted, predictions were inaccurate, or were not acted upon. I’ve selected an intentionally contrarian case with climate change to provide a glimpse of potentially catastrophic unintended consequences. [11]

“Anthropogenic climate change has become the poster child of global threats. Global warming commandeers a disproportionate fraction of the attention given to global risks.” – GCR

If Earth cooling, driven by a human response rooted in populism, conflicted science, or any other cause, resulted in an over-correction, or if that response combined with volcanoes, asteroids, or a slight variation in the cyclic changes of Earth’s orbit to alter ocean and atmospheric patterns, it becomes plausible to forecast an accelerated ice age in which ice sheets rapidly reach the equator, similar to what may have occurred during the Cryogenian period. Given our limited understanding of triggering mechanisms, including those behind the sudden temperature changes believed to have occurred 24 times over the past 100,000 years, it is also plausible that a natural and/or human response could result in rapid warming. While either climate extreme may seem ludicrous given current sentiment, the same was true prior to most such events. Whether natural, human-caused, or some combination, as observed with the Fukushima Daiichi nuclear disaster, the pattern of seemingly small events compounding into larger ones is the norm rather than the exception, with outcomes opposed to intent not unusual. While reduction of pollutants is prudent for health, climate, and economics, chemically engineered solutions should be resisted until our ability to manage climate change is proven with rigorous real-world models.

9/11

The tragedy of 9/11 provides a critical lesson in how a seemingly low-risk series of events can spiral to catastrophic scale if not planned for and managed with precision. Those at the FBI who decided not to share the Phoenix memo with the appropriate agencies undoubtedly thought it was a safe decision given the information at the time. Who among us would have acted differently, having to make the decision based on the following questions in the summer of 2001?

  • What are the probabilities of an airline being used as a weapon?
  • Even if such an attempt were made, what are the probabilities of it succeeding?
  • In the unlikely event such an attempt did occur, what is the probability that one of the towers at the World Trade Center would be targeted and struck?
  • In the very unlikely event one WTC tower were struck, what is the probability that the other tower would be struck similarly before evacuation could take place?
  • In the extremely unlikely scenario that terrorists attacked the second tower with a hijacked commercial airliner within 20 minutes of the first attack, what would be the probability of one of the towers collapsing within an hour?
  • If then this horrific impossible scenario were actually to have occurred, what then would be the possibility of the second tower collapsing a half hour later?

Prior to 9/11, such a scenario would have seemed an extremely low probability to anyone but an expert able to verify and process all of the information. Yet the questions describe the actual event, which tragically provided one of the most painful and costly lessons in lives and treasure in the history of the U.S., with wide-ranging global impact.
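The multiplicative trap in the questions above is easy to make explicit. If each stage were judged, say, 10% likely given the prior one (numbers invented purely for illustration), the joint probability collapses to one in a million, which is exactly how compound scenarios come to be dismissed:

```python
# Six hypothetical conditional estimates, one per question above
stage_probabilities = [0.1] * 6

# Joint probability of the full sequence is the product of the stages
joint = 1.0
for p in stage_probabilities:
    joint *= p
print(f"{joint:.2e}")  # 1.00e-06 under these assumed numbers
```

The fallacy is that the conditional estimates were not independent or static: each confirmed stage should have sharply revised the remaining ones upward, which is precisely the kind of continuous re-estimation machines do better than committees.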

  • Direct costs from the attacks: 2,999 lives lost, with an additional 1,400 rescue workers having died since, some portion of those deaths directly attributable to the attacks. Financial costs from 9/11 are estimated at approximately $2 trillion to the U.S. alone. [12]

  • Iraq and Afghanistan wars: Approximately 36,000 people from the US and allied nations died in Iraq and Afghanistan, with nearly 900,000 disability claims approved by the VA alone. [13]  The human cost in the two nations was of course much higher, including large numbers of civilians. According to one report, the Iraq War has cost $1.7 trillion, with an additional $490 billion in benefits owed to veterans, which could grow to more than $6 trillion over the next four decades counting interest. [14]  The Afghanistan war is estimated to have cost $4-6 trillion.

  • Global financial crisis: Many experts believe that monetary easing after 9/11, combined with lax regulatory oversight, expanded the unsustainable housing bubble and triggered the financial collapse of 2008, which revealed systemic risk that had accumulated over decades of poor regulatory oversight, governance, and policy. [15]  The total cost to date from the financial crisis is well over $10 trillion and continues to rise through a series of events reverberating globally, driven by realized moral hazard, broad global reactions, and significant continuing aftershocks from attempts to recover.

The reporting process for preventing terrorist attacks has since been reformed and improved, though it is undoubtedly still challenged by the inherently conflicted dynamics between interests, the exponential growth of data, and the limited ability of individuals to process it. If the FBI decision makers had been empowered with the knowledge of the author of the Phoenix memo (special agent Kenneth Williams), and free of concerns over interagency relations, politics, and other influences, 9/11 would presumably have been prevented or mitigated.

An Artificially Intelligent Special Agent

What if the process weren’t dependent upon conflicted humans? An artificial agent can be programmed to be free of such influences, and can access all relevant information on a topic from available sources far beyond the ability of humans to process. When empowered with machine learning and predictive analytics, built with redundant precautions, and integrated into the augmented workflow of human experts, the probability of preventing or mitigating human-caused events is high, whether the endeavor is countering terrorist plots, operating deep-sea oil rigs, trading exotic financial products, designing nuclear power plants, responding to climate change, or any other high-risk activity.
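As a thought experiment only, and not a description of any deployed system, one simple way such an agent could weigh evidence is log-odds fusion of independent indicators, escalating to human analysts once the combined posterior crosses a review threshold. Every signal name, likelihood ratio, and threshold below is invented for illustration.

```python
import math

# Hypothetical sketch: fuse independent risk indicators via log-odds
# (naive-Bayes style) and escalate to human review above a threshold.
# All signal names and numbers are invented placeholders.

PRIOR = 1e-6  # assumed base rate of a plot of this kind

def log_odds(p):
    return math.log(p / (1.0 - p))

def fuse(prior, likelihood_ratios):
    """Combine a prior with independent likelihood ratios;
    return the posterior probability."""
    score = log_odds(prior) + sum(math.log(lr) for lr in likelihood_ratios)
    return 1.0 / (1.0 + math.exp(-score))

# Each value is an invented likelihood ratio: how much more likely the
# observation is under the threat hypothesis than under the benign one.
signals = {
    "flight-school interest without landing training": 50.0,
    "pattern match to prior field reports": 20.0,
    "travel and funding anomalies": 10.0,
}

ESCALATE_AT = 1e-3  # low threshold: tolerate many false positives
posterior = fuse(PRIOR, signals.values())
if posterior > ESCALATE_AT:
    print(f"Escalate to human analysts (posterior ~ {posterior:.4f})")
```

The design choice worth noting is the asymmetric threshold: because the cost of a missed escalation vastly exceeds the cost of a human review, the agent hands off at a posterior far below certainty rather than deciding on its own.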

Governance of Human Engineered Systems

Long considered one of the primary technical risks, as well as key to overcoming many of our most serious challenges, the ability to program matter at the atomic scale is beginning to emerge. [16]  Synthetic biology is another area of modern science that poses great opportunity for improving life while introducing new risks that must be governed wisely with appropriate tools. [17]

While important private research cannot be revealed, published work is underway beyond the typical reporting tied to funding announcements and ad budgets. One example is the Synthetic Evolving Life Form (SELF) project at Raytheon led by Dr. Jim Crowder, which mimics the human nervous system within an Artificial Cognitive Neural Framework (ACNF). [18]  In their book Artificial Cognitive Architectures, Dr. Crowder and his colleagues describe an unplanned Internet discussion between an artificial agent and a human researcher that underscored the need for governance embedded in the design.

A fascinating project was recently presented by Joshua M. Epstein, Ph.D. at the Santa Fe Institute on research covered in his book Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science. [19]  In this novel approach to agent-based social behavior, Epstein intentionally removed memory and pre-conditioned bias from Agent_Zero in order to simulate the often seemingly irrational behavior that results from social influences, reasoning, and emotions. Agent_Zero provides a good foundation for considering additional governance methods for artificial agents interacting in groups, with potential application in AI systems designed to regulate systemic risk in networked environments.
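A highly simplified sketch of Agent_Zero's core mechanism may help: in Epstein's published model, an agent's disposition to act sums its own affect and local probability estimate with the weighted "solo" dispositions of its peers, and the agent acts when the total crosses its threshold. The class below compresses that idea to a few lines; all numbers and weights are invented, and the learning dynamics (e.g. conditioned fear) are omitted.

```python
# Simplified sketch of Agent_Zero's disposition mechanism: act when
# (own affect + own probability estimate + weighted peer dispositions)
# exceeds a threshold. All parameter values are invented.

class Agent:
    def __init__(self, name, affect, prob_estimate, threshold):
        self.name = name
        self.affect = affect        # emotional component (V)
        self.prob = prob_estimate   # locally sampled probability estimate (P)
        self.threshold = threshold  # action threshold (tau)

    def solo_disposition(self):
        """Disposition absent any social influence."""
        return self.affect + self.prob

    def total_disposition(self, peers, weights):
        """Solo disposition plus weighted peer solo dispositions."""
        social = sum(w * p.solo_disposition() for p, w in zip(peers, weights))
        return self.solo_disposition() + social

    def acts(self, peers, weights):
        return self.total_disposition(peers, weights) > self.threshold

a = Agent("A", affect=0.1, prob_estimate=0.1, threshold=0.5)
b = Agent("B", affect=0.4, prob_estimate=0.3, threshold=0.5)
c = Agent("C", affect=0.3, prob_estimate=0.2, threshold=0.5)

# Agent A would not act alone (0.2 < 0.5), but peer influence tips it over:
print(a.acts(peers=[b, c], weights=[0.3, 0.3]))
```

The striking property, which Epstein emphasizes, is that an agent can be tipped into acting by its peers even when its own evidence and emotion fall well short of its threshold, which is precisely the group dynamic a governance layer for networked artificial agents would need to monitor.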

Closing Thoughts

Given the rapidly developing AI-assisted world, combined with recent governance failures in financial regulation, national security, and healthcare, the challenge for senior decision makers and AI systems engineers is to perform at much higher levels than in the past; if we continue on the current trajectory, it is conceivable that the human experience could confirm the Fermi Paradox within decades. I therefore view AI in part as a race against time.

While the time-scale calculus for many organizations supports accelerated transformation with the assistance of AI, it is wise to proceed cautiously, aware that AI is a transdisciplinary field that favors patience and maturity in its architects and engineers, typically requiring tens of thousands of hours of deep immersion in each applicable focus area. It is also important to realize that the fundamental physics of AI favors, if not requires, that the governance structure be embedded within the engineered data, including business processes, rules, operational needs, and security parameters. When conducted in a prudent manner, AI systems engineering is a carefully planned and executed process with multiple built-in redundancies.
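One hypothetical way to picture governance embedded within the engineered data, rather than bolted on by an external application, is a record that carries its own policy metadata and enforces it at every access. The schema, rule names, and clearance levels below are invented for illustration; real designs would be far richer.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: a data record carrying its own governance
# metadata (security parameters, permitted purposes, audit trail),
# enforced at the point of access. All fields and rules are invented.

CLEARANCE_RANK = {"public": 0, "internal": 1, "secret": 2}

@dataclass
class GovernedRecord:
    payload: dict
    owner: str
    clearance_required: str = "internal"      # security parameter
    allowed_purposes: tuple = ("analytics",)  # business rule
    provenance: list = field(default_factory=list)  # audit trail

def access(record, requester_clearance, purpose):
    """Enforce the record's embedded rules before releasing the payload."""
    if CLEARANCE_RANK[requester_clearance] < CLEARANCE_RANK[record.clearance_required]:
        raise PermissionError("insufficient clearance")
    if purpose not in record.allowed_purposes:
        raise PermissionError(f"purpose '{purpose}' not permitted")
    record.provenance.append((requester_clearance, purpose))
    return record.payload

rec = GovernedRecord(payload={"reading": 42}, owner="ops")
print(access(rec, "internal", "analytics"))  # permitted; logged in provenance
```

Because the rules travel with the data itself, every consumer, human or artificial agent, faces the same checks, and the provenance list provides the built-in redundancy of an audit trail alongside the access control.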

Mark Montgomery is founder and CEO of Kyield (http://www.kyield.com), which offers technology and services centered on Montgomery’s AI systems invention.

(Robert Neilson, Ph.D., and Garrett Lindemann, Ph.D., contributed to this article)

[1] In the GCR chapter Artificial Intelligence as a ‘Positive and Negative Factor in Global Risk’, Eliezer Yudkowsky prefers ‘Friendly AI’. His complete chapter is on the web here: https://intelligence.org/files/AIPosNegFactor.pdf

[2] During the course of writing this paper Stanford University announced a 100-year study of AI. The white paper by Eric Horvitz can be viewed here: https://stanford.app.box.com/s/266hrhww2l3gjoy9euar

[3] Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Cirkovic, (Oxford University Press, Sep 29, 2011): http://ukcatalogue.oup.com/product/9780199606504.do

[4] For more common events, see Ultra distant deposits from super eruptions: Examples from Toba, Indonesia and Taupo Volcanic Zone, New Zealand. N.E. Matthews, V.C. Smith, A Costa, A.J. Durant, D. M. Pyle and N.G.J. Pearce.

[5] USGS Fact Sheet for Alert-Notification System for Volcano Activity: http://pubs.usgs.gov/fs/2006/3139/fs2006-3139.pdf

[6] Paradise is about 45 miles to the north (blast side) of Mt. St. Helens, in full view until the ash cloud, with large fiery ash and friction lightning, enveloped Mt. Rainier. A PBS video on the St. Helens eruption: http://youtu.be/-H_HZVY1tT4

[7] Researchers at Oregon State University claimed the first successful forecast of an undersea volcano in 2011 with the aid of deep sea pressure sensors: http://www.sciencedaily.com/releases/2011/08/110809132234.htm

[8] Global Risks 2014 report, World Economic Forum: http://www3.weforum.org/docs/WEF_GlobalRisks_Report_2014.pdf

[9] While pandemics and healthcare obviously deserve and receive much independent scrutiny, understanding the relativity of economics is poor. Unlike physics, functional metric tensors do not yet exist in economics.

[10] New paper in journal Science: ‘The human splicing code reveals new insights into the genetic determinants of disease’ http://www.sciencemag.org/content/early/2014/12/17/science.1254806

[11] Skeptical views are helpful in reducing the number and scope of unintended consequences. For a rigorous contrarian view on climate change research see ‘Fat-Tailed Uncertainty in the Economics of Catastrophic Climate Change’, by Martin L. Weitzman: http://scholar.harvard.edu/files/weitzman/files/fattaileduncertaintyeconomics.pdf

[12] Institute for the Analysis of Global Security http://www.iags.org/costof911.html

[13] US and Coalition Casualties in Iraq and Afghanistan by Catherine Lutz, Watson Institute for International Studies: http://costsofwar.org/sites/default/files/articles/8/attachments/USandCoalition.pdf

[14] Daniel Trotta at Reuters, March 14th, 2013 http://www.reuters.com/article/2013/03/14/us-iraq-war-anniversary-idUSBRE92D0PG20130314

[15] In his book The Age of Turbulence: Adventures in a New World, Alan Greenspan states that monetary policy in the years following 9/11 “saved the economy”, while others point to Fed policy during this period as contributing to the global financial crisis beginning in 2008, which continues to reverberate today. Government guarantees, high-risk products in banking, companies ‘too big to fail’, corrupted political process, increased public debt, lack of effective governance, and many other factors also contributed to systemic risk and the magnitude of the crisis.

[16] A team of researchers published research in Nature Chemistry in May of 2013 on using a Monte Carlo simulation to demonstrate engineering at the nanoscale: http://www.nature.com/nchem/journal/v5/n6/full/nchem.1651.html

[17] A recent paper from MIT researchers ‘Tunable Amplifying Buffer Circuit in E. coli’ is an example of emerging transdisciplinary science with mechanical engineering applied to bacteria: http://web.mit.edu/ddv/www/papers/KayzadACS14.pdf

[18] My review of Dr. Crowder’s recent book Artificial Cognitive Architectures can be viewed here: https://kyield.wordpress.com/2013/12/17/book-review-artificial-cognitive-architectures/

[19] Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science, by Joshua M. Epstein: http://press.princeton.edu/titles/10169.html  The seminar slides (require Silverlight plugin): http://media.cph.ohio-state.edu/mediasite/Play/665c495b7515413693f52e7ef9eb4c661d