New E-Book: Ascension to a Higher Level of Performance


Power of Transdisciplinary Convergence (Copyright 2015 Kyield All Rights Reserved)

Power of Transdisciplinary Convergence

Ascension to a Higher Level of Performance 

The Kyield OS: A Unified AI System

By Mark Montgomery
Founder & CEO
Kyield

I just completed an extensive e-book for customers and prospective customers, which should be of interest to senior management teams in all sectors, as the content impacts every aspect of individual and corporate performance.

Our goals in this e-book are fivefold:

  1. Provide a condensed story on Kyield and the voyage required to reach this stage.
  2. Demonstrate how the Kyield OS assimilates disparate disciplines in a unified manner to rapidly improve organizations and then achieve continuous improvement.
  3. Discuss how advances in software, hardware and algorithmics are incorporated in our patented AI system design to accelerate strategic performance and remain competitive.
  4. Detail how a carefully choreographed multi-phase pilot of the Kyield OS can provide the opportunity for an enduring competitive advantage by establishing a continuously adaptive learning organization (CALO).
  5. Educate existing and prospective customers on the Kyield OS as much as possible without disclosing unrecoverable intellectual capital, future patents and trade secrets.
TABLE OF CONTENTS
INTRODUCTION  1
REVOLUTION IN IT-ENABLED COMPETITIVENESS  2
POWER OF TRANSDISCIPLINARY CONVERGENCE  3
MANAGEMENT CONSULTING  4
COMPUTER SCIENCE AND PHYSICS  5
ECONOMICS AND PSYCHOLOGY  9
LIFE SCIENCE AND HEALTHCARE 10
PRODUCTS AND INDUSTRY PLATFORMS 11
THE KYIELD OS 11
THE KYIELD PERSONALIZED HEALTHCARE PLATFORM 12
ACCELERATED R&D 13
SPECIFIC LIFE SCIENCE AND HEALTHCARE USE CASES 13
BANKING AND FINANCIAL SERVICES 14
THE PILOT PROCESS 15
EXAMPLE: BANKING, PHASE 1 17
PHASE 2 18
PHASE 3 18
PHASE 4 18
CONCLUSION: IN THIS CASE THE END JUSTIFIES THE MEANS  21

To request a copy of this e-book please email me at markm@kyield.com from your corporate email account with job title and affiliation.


Revolution in IT-Enabled Competitiveness


Four Stages of Enterprise Network Competence

Most current industry leaders owe their existence, beyond basic competencies and resources, to the strong competitive advantage gained from early adoption of systems engineering and statistical methods for industrial production, which powered much of the post-WWII economy. These manual systems and methods accelerated global trade, extraction, logistics, manufacturing, and scaling efficiencies, and became computerized over the last half-century.

The computer systems were initially highly complex and very expensive, yet produced historic business successes such as American Airlines’ SABRE in 1959 [1] and Walmart’s logistics system starting in 1975 [2], which by 1980 had helped Walmart reach a billion USD in sales in a shorter period than any other company.

As those functions, previously available to only a few, became productized and widely adopted globally, the competitive advantage began to decline. The argument for adoption then changed from competitive advantage to an essential, high cost of entry. [3] When functionality in databases, logistics, and desktops became globally ubiquitous, the competitive advantage was substantially lost, yet costs continued to rise in software while falling dramatically in hardware, causing problems for customers as well as for national and global economies. To achieve a competitive advantage in IT, companies had to invest heavily in commoditized computing as a high cost of initial entry, and then invest significantly more in customization on top of the digital replicas most competitors already enjoyed.

The network era began in the 1990s with the commercialization of the Internet and Web. Because both are based on universal standards, they introduced a very different dynamic to the IT industry, one that has now impacted most sectors and the global economy. Initially under-engineered and overhyped for short-term gains during the inflation of the dotcom bubble, their long-term impacts were underestimated, as evidenced by the ongoing disruption causing displacement in many industries today. We are now entering a new phase Michael Porter refers to as ‘the third wave of IT-driven competition’, which he claims “has the potential to be the biggest yet, triggering even more innovation, productivity gains, and economic growth than the previous two.” [4]

Like Porter, I see the potential of smart devices, and the potential of AI-enhanced human work for increased productivity, accelerated discovery, automation, prevention, and economic growth is enormous. As in the 1990s, machine intelligence is overhyped in the short term, but the longer-term impact could indeed be “the biggest yet” of the three waves. This phase of IT-enabled competitiveness is the logical extension of the network economy, benefiting from thousands of interoperable components, long under development from vast numbers of sources, to execute the ‘plug and play’ architecture many of us envisioned in the 1990s. This still-emerging Internet of Entities, combined with advanced algorithmics, brings massive opportunity and risk for all organizations in all sectors, requiring operational systems and governance specifically designed for this rapidly changing environment.

This is a clip from an E-book nearing completion titled: The Kyield OS: A Unified AI System; Rapid Ascension to a Higher Level of Performance. Existing or prospective customers are invited to send me an email for a copy upon completion within the next month – markm at kyield dot com.

[1] https://www.aa.com/i18n/amrcorp/corporateInformation/facts/history.jsp

[2] http://www.scdigest.com/ASSETS/FIRSTTHOUGHTS/12-07-26.php?cid=6047

[3] Lunch discussion on topic with Les Vadasz in 2009 in Silicon Valley.

[4] https://hbr.org/2014/11/how-smart-connected-products-are-transforming-competition

Recent trends in artificial intelligence algorithms



Promise of spring reveals interesting hybrid variants

Those of us who have been through a few tech cycles have learned to be cautious, so for the second article in this series I thought it might be helpful to examine the state of AI algorithms to answer the question: what’s different this time?

I reached out to leading AI labs for their perspective, including Jürgen Schmidhuber at the Swiss AI Lab IDSIA. Jürgen’s former students include team members at DeepMind who co-authored a paper recently published in Nature on deep reinforcement learning.

Our Recurrent Neural Networks (RNNs) have revolutionized speech recognition and many other fields, broke all kinds of benchmark records, and are now widely used in industry by Google (Sak et al.), Baidu (Hannun et al.), Microsoft (Fan et al.), IBM (Fernandez et al.), and many others. — Jürgen Schmidhuber

Jürgen recently published an overview of deep learning in neural networks with input from many others, including Yann LeCun, Director of AI Research at Facebook. Lee Gomes recently interviewed LeCun, who provided one of the best definitions of applied AI I’ve seen:

It’s very much interplay between intuitive insights, theoretical modeling, practical implementations, empirical studies, and scientific analyses. The insight is creative thinking, the modeling is mathematics, the implementation is engineering and sheer hacking, the empirical study and the analysis are actual science. What I am most fond of are beautiful and simple theoretical ideas that can be translated into something that works. — Yann LeCun at IEEE Spectrum

Convolutional neural networks (CNNs)

Much of the recent advancement in AI has been due to convolutional neural networks, which can be trained to mimic partial functionality of the human visual cortex. The inability of machines to accurately identify objects has long been a common problem, one that slows productivity, increases risk, and causes accidents worldwide.

CNNs make use of local filtering and various max-pooling techniques, which share weights and therefore require fewer parameters, making them easier to train than standard multilayer networks. The invention and evolution of the nonlinear backpropagation (BP) algorithm through multiple layers, combined with other supervised learning methods, has enabled nascent artificial intelligence systems with the ability to continuously learn.

CNNs are valuable for a wide range of applications such as diagnostics in healthcare, agriculture, supply chain quality control and automated disaster prevention in all sectors. CNNs are also applied in high performance large-vocabulary continuous speech recognition (LVCSR).
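The local filtering and max-pooling described above can be sketched in a few lines of NumPy. This is an illustrative toy only (the image, kernel, and function names are my own, not from any production system): one small edge-detecting kernel slides over the whole image, so its two weights are shared across every location, and pooling then keeps just the strongest response in each neighborhood.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # The same small kernel is applied at every location:
            # this weight sharing is what keeps parameter counts low.
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max-pooling: keep the strongest response per window."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    pooled = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return pooled.max(axis=(1, 3))

image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # right half bright: a vertical edge
edge_kernel = np.array([[-1.0, 1.0]])    # responds where brightness jumps
response = conv2d(image, edge_kernel)    # fires only along the edge
pooled = max_pool(response)              # detection survives downsampling
```

On this toy image the filter fires only along the vertical edge, and pooling halves each dimension while preserving that detection; the combination of weight sharing and translation tolerance is what makes CNNs easier to train than fully connected networks of similar depth.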

FitNets

Yoshua Bengio is Professor at Université de Montréal and head of the Machine Learning Laboratory (LISA). He is making good progress on a new deep learning book for MIT Press with co-authors Ian Goodfellow and Aaron Courville.

What’s different?
  • More compute power (the most important element)
  • More labeled data
  • Better algorithms for supervised learning (the algorithms of 20 years ago as-is don’t work that well, but a few small changes discovered in recent years make a huge difference)
— Yoshua Bengio

Yoshua and several colleagues recently proposed a novel approach to train thin and deep networks, called FitNets, which introduces ‘hints’ with improved ability to generalize while significantly reducing the computational burden. In an email exchange, he shared insights on thin nets:

The thin deep net idea is a procedure for helping to train thinner and deeper networks. You can see deep nets as a rectangle: what we call depth corresponds to its height (number of layers) and what we called thickness (or its opposite, being thin) is the width of the rectangle (number of neurons per layer). Deeper networks are harder to train but can potentially generalize better, i.e., make better predictions on new examples. Thinner networks are even harder to train, but if you can train them they generalize even better (if not too thin!). — Yoshua Bengio
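Bengio’s rectangle analogy can be made concrete with a quick parameter count for fully connected networks. The dimensions below are my own arbitrary choices for illustration, not figures from the FitNets paper: a thin, deep network can carry far fewer parameters than a wide, shallow one over the same input and output, which is part of what makes thin nets attractive, provided they can be trained at all.

```python
def mlp_params(layer_widths):
    """Total weights + biases for a fully connected net.

    layer_widths lists the size of every layer, input through output;
    each consecutive pair (a, b) contributes a*b weights and b biases.
    """
    return sum(a * b + b for a, b in zip(layer_widths[:-1], layer_widths[1:]))

# Wide-shallow vs. thin-deep on the same 784-dim input, 10-class output:
shallow = [784] + [1000] * 2 + [10]   # 2 hidden layers, 1000 units each
deep    = [784] + [130] * 12 + [10]   # 12 hidden layers, 130 units each

shallow_count = mlp_params(shallow)   # 1,796,010 parameters
deep_count = mlp_params(deep)         #   290,690 parameters
```

Despite having six times as many layers, the thin network here holds roughly one sixth the parameters, which is why techniques such as FitNets’ hints, that make thin-deep nets trainable, can reduce the computational burden so significantly.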

Long Short-Term Memory (LSTM)

Sepp Hochreiter is head of the Institute of Bioinformatics at the JKU of Linz (photo above), and was Schmidhuber’s first student in 1991. Schmidhuber credits Sepp’s work for “formally showing that deep neural networks are hard to train, because they suffer from the now famous problem of vanishing or exploding gradients”.

Exponentially decaying signals (or signals exploding out of bounds) were, as scientists are fond of saying, a ‘non-trivial’ challenge, requiring a series of complex solutions to achieve recent progress.
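The problem is easy to demonstrate numerically. In the sketch below (my own illustration, not drawn from Hochreiter’s analysis), an error signal is repeatedly multiplied through random weight matrices, as happens during backpropagation through many layers or time steps; weights scaled slightly too small drive the signal toward zero, slightly too large drive it out of bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

def backprop_signal_norm(depth, scale, width=32):
    """Norm of an error signal after passing through `depth` random
    weight matrices, as in backpropagation through a deep network."""
    signal = np.ones(width)
    for _ in range(depth):
        # Entries scaled so the expected per-layer growth factor is ~`scale`.
        W = rng.normal(0.0, scale / np.sqrt(width), size=(width, width))
        signal = W.T @ signal
    return np.linalg.norm(signal)

vanish = backprop_signal_norm(depth=50, scale=0.5)   # shrinks ~0.5x per layer
explode = backprop_signal_norm(depth=50, scale=2.0)  # grows ~2x per layer
```

After only 50 layers the two runs differ by dozens of orders of magnitude: in the first, the learning signal has effectively vanished before reaching the early layers, while in the second it has blown up. Both failure modes make naive deep training impractical, which is what the solutions below were designed to address.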

The advent of Big Data together with advanced and parallel hardware architectures gave these old nets a boost such that they currently revolutionize speech and vision under the brand Deep Learning. In particular the “long short-term memory” (LSTM) network, developed by us 25 years ago, is now one of the most successful speech recognition and language generation methods. — Sepp Hochreiter
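A minimal LSTM forward step can be written directly from the published equations; the weight layout, dimensions, and random values below are my own illustrative choices. The key line is the additive cell update: because the forget gate can stay near one, the cell state carries information (and gradients) across many steps instead of being squashed at every layer, which is how LSTM sidesteps the vanishing-gradient problem.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.

    W has shape (4*H, D+H): the input, forget, and output gate blocks
    plus the candidate ('cell input') block, stacked in that order.
    """
    H = h_prev.size
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0*H:1*H])        # input gate: what to write
    f = sigmoid(z[1*H:2*H])        # forget gate: what to keep
    o = sigmoid(z[2*H:3*H])        # output gate: what to reveal
    g = np.tanh(z[3*H:4*H])        # candidate values
    c = f * c_prev + i * g         # additive update: the error carousel
    h = o * np.tanh(c)             # exposed hidden state
    return h, c

rng = np.random.default_rng(1)
D, H = 3, 4                        # toy input and hidden sizes
W = rng.normal(0.0, 0.1, size=(4 * H, D + H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):                 # run a short random sequence
    h, c = lstm_step(rng.normal(size=D), h, c, W, b)
```

Note the asymmetry: the hidden state `h` is always bounded by the output gate and tanh, while the cell state `c` is updated additively and only gated, which is precisely what lets error signals survive long sequences.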

Expectations for the near future

We will go beyond mere pattern recognition towards the grand goal of AI, which is more or less: efficient reinforcement learning (RL) in complex, realistic, partially observable environments… I believe it will be possible to greatly scale up such approaches, and build RL robots that really deserve the name. — Jürgen Schmidhuber at INNS.

Unsupervised learning and reinforcement learning remain prizes in the future (and necessary for real progress towards AI, among other things), in spite of intense research activity and promising advances. — Yoshua Bengio via email.

Deep Learning techniques have the potential to advance unsupervised methods like biclustering to improve drug design or detect genetic relationships among population groups. Another trend will be algorithms that store the current context like a working memory. —Sepp Hochreiter via email.

My perspective

Pioneers in ML and AI deserve a great deal of credit, as do sponsors who funded R&D through long winters. One difference I see today versus previous cycles is that the components in network computing have now created a more sustainable environment for AI, with greater variety of profitable business models that are dependent upon improvement.

In addition, awareness is growing that learning algorithms are a continuous process that rapidly creates more value over time, so organizations have a strong economic incentive to commit resources early or risk disruption.

In the applied world we are faced with many challenges, including security, compliance, markets, talent, and customers. Fortunately, although creating new challenges, emerging AI provides the opportunity to overcome serious problems that cannot be solved otherwise.

Mark Montgomery is founder and CEO of http://www.kyield.com, which offers technology and services centered on Montgomery’s AI systems invention.

This article was originally published at Computerworld

Discover Kyield on the voyage to CALO (continuously adaptive learning organization)



A tidal pool ecosystem in Half-Moon Bay, CA.

For those involved with the art, science, and mechanics of organizational management, the 1990s was an exciting if humbling decade. The decade was ushered in with new thinking from Peter Senge in The Fifth Discipline, which introduced The Learning Organization to many. The concept sounded absolutely refreshing to those in the trenches—who could disagree with such logic?

I was deeply engaged at the time in a series of turn-arounds in mid-market companies. The particular business case offered for consideration was a union shop that was bankrupt from the day it opened nearly a decade earlier, and had never made a payment on many millions of dollars of debt.

The new owner had acquired the operating company out of bankruptcy with assumptions that were based on experiences that did not apply to the subject or market. Due to a low discounted acquisition value, the buyers had little to lose if the company continued to underperform, but a great deal to gain if the company could be brought up to a competitive market position.

Long story short, it was a very challenging yearlong engagement, during which we surpassed everyone’s short-term expectations by a significant degree, including my own. Before I share what went wrong, let’s review a few things we did right:

  • The subject called for and we received an unusual level of authority from owners, which required exceptional levels of mutual trust, and a great deal of credibility.

  • We put together a strong team with a mix of experience relevant to the specific case. Each could recognize the opportunity, and even though under-resourced, we were able to create one of the industry’s top performances that year.

  • We were able to gain the trust and support of the union—which enjoyed a double-digit increase in membership, with most members experiencing significant increases in compensation.

  • Though never union members themselves, senior management, including the owners, had personal experience with labor, which they demonstrated by working alongside workers at critical times, helping to gain their support (we walked the talk).

  • The few remaining core customers were so desperate for improved product and service that a modest upgrade in quality and competency improved both sales and the company’s reputation, through which new and larger customers were gained.

  • The communities involved were relieved to see a well-managed company emerge from years of bankruptcy, churn of management, and under-investment, so we gained support of regional governments (important in this case), which also enjoyed increased tax receipts.

  • The pricing of products and services was well below market, so after modest investment combined with competent management, I was able to raise prices significantly while increasing sales volume, resulting in a substantial and growing profit.

  • We were able to double the value of the company in one year based on cash flow and future orders, which dramatically improved the position of the parent company, allowing refinancing at more attractive rates.

Now allow me to share why I consider this engagement (with others in this era) to be among my greatest mistakes.

Failure to learn, adapt, and seize the more important opportunity

Many of us wrongly assumed that the parent company would learn from the experience and use the success as a platform to seize other opportunities, which would have required a change in the structure, type, and talent of the company.

Despite considerable coaching, combined with all that accompanies short-term financial success, the parent company did not learn the most valuable lessons from this intense experience, so it was not able to adapt to rapidly changing markets. Motivation and desire also clearly contributed, so after a final attempt to convince the parent company’s owners to transform into a competitive model, we moved on when the contract ended.

My final report to owners and lenders concluded that the company had probably hit a ceiling, recommending that they either embrace the transformation strategy or sell the company. We could take them no further in their current form. Just one example of why: a colleague who was a key team member had received a far superior offer from a great company in a location he and his family preferred.

The client’s counter-offer was far below market, a tiny fraction of the value the operations specialist had created for the client. The decision on my colleague was surprising, as he was one of the pillars of the turnaround, and it confirmed the ceiling for me. The owners were left with a rising star in a subsidiary that kept shining for a short period, during which time it likely created profits far exceeding their investment, but then began to fade. Unlike the other holdings of the parent company, this subsidiary required intensive, sophisticated, and experienced management.

A decade later I read that the subject company had fallen back to a performance level similar to when the parent company acquired it. It gives me no pleasure to share that hindsight has demonstrated the period of our engagement to be the peak of the parent company’s 50-year history. We worked very hard to provide the opportunity for an enduring success.

While we enjoyed deep mutual respect with the parent company, which appeared adaptive in this acquisition and others we brought to them, they weren’t willing to transform their organization. They were opportunistic on a one-off basis, which is best suited for the flipping model, not for an enduring business. Even then technology played a big role through IBM mainframes, inventory management, and transactions over networks. Today of course most companies are facing more dramatic change in an environment that is far more talent and technology dependent.

The learning organization struggles in the 1990s

A few years and dozens of assignments later, we had established a tech lab and incubator to explore and test opportunities arising from Internet commercialization, which catapulted the economy with significant g-force into the network era. By the mid-1990s a few operational consultants had become critical of the apparent naiveté of learning organization theory, which, like knowledge management, offered a philosophy many could embrace but which in practice proved difficult to achieve given real-world constraints, including legal and physical ones, not just cultural or soft issues.

A few researchers pointed out that it was difficult to find actual cases of learning organizations (Kerka 1995). In papers, textbooks, and Ph.D. theses I was invited to review, the same few cases and papers were cited relentlessly, often comparing apples to oranges. One example was a study published by Finger and Brand in 1999, which found that systems needed to be non-threatening. That may have been the case for their subject at the Swiss Postal Service, but would be unrealistic where threats are among few constants, increasingly to include government and academia. Peter Senge apparently took notice of the criticism, reflected in the subtitle of his book The Dance of Change (1999); “The Challenges of Sustaining Momentum in Learning Organizations”.

By the end of the 1990s I had become vocal on the primitive state of systems and tools that could achieve the goals of the learning organization, and more importantly adapt in a sufficient time frame. My perspective was one from operating a live knowledge systems lab that was building, operating, and testing learning networks, which included daily forums that discussed hundreds of real-world cases in real-time over several years. Many were complaining about poor technology and lack of much needed innovation, yet few were focused on improvement from within organizations that would allow innovation to make it to market.

Some researchers have since suggested that in order to achieve the learning organization, it was first necessary for knowledge workers to abandon self-interest, while others claimed that ‘collective accountability’ is the key. In updating myself on this research recently, it sometimes seemed as if organizational management consultants and researchers were attempting to project a vision over actual evidence, denying the historical importance of technology for survival of our species, as well as mathematics, physics, and economics. Consider the message to an AI programmer or pharma scientist today coming from a tenured professor or government employee with life-long security on ‘the need to abandon self-interest’. This has been tested continuously in competitive markets and simply isn’t credible. The evidence is overwhelmingly contrary to such advice.

Current state of the CALO

We may have been experiencing devolution and evolution concurrently, yet humans representing all major disciplines kept working to overcome these problems. My company Kyield is among them.

Significant progress has been made across and between all disciplines, including with understanding and engineering the dynamical components of modern organizations.

The network economy

No question that an organization’s structure (legal, tax, reporting, incentives, physical, and virtual) greatly influences its ability to adapt, even when lessons have been learned well. The network economy has altered the very foundational structure many organizations operate on top of.

While each structure needs to be very carefully crafted, the required structural changes vary from the rare ‘no change is needed’ to the increasingly common ‘bold change required to survive’. Though it need not be so, it is increasingly more efficient to disrupt and displace than to change from within.

Globalization

While we are experiencing a repatriation and regionalization trend, globalization radically changed the way organizations learn and how they must adapt. Several billion more people are driving the network economy than in 1990, with a significant portion moving from extreme poverty to the middle class.

The global financial crisis (GFC)

Suffice to say for this purpose that the GFC has altered the global operating and regulatory landscape for many businesses and governments, in some cases radically, and that landscape is still quite fluid. Currency swings in response to unprecedented monetary policies are the most recent example of this chaotic process, though only one of many that organizations must navigate. If the regulatory agencies were CALOs, much of the pain would be mitigated.

Machine learning (ML)

Although commercialization is quite early and most cases are still confidential, ML combined with cognitive computing and AI-assisted augmentation is rapidly improving. Deep learning has proven an effective means of achieving a CALO in a pure network environment that interacts directly with customers, such as search and social networking, and it is now being adopted more widely.

One of the largest continuous learning projects is Orion by UPS, which was kind enough to share some detail in public. Orion provides an excellent case for many to consider, as it overlaps the physical world with advanced networks and large numbers of employees worldwide. Unlike Google or Facebook, which began as virtual companies running on computer networks, UPS represents the type of large transformation most organizations need to consider. In the case of UPS, of course, they have massive logistical operations with a very high volume of semi-automated and automated data management.

Having developed deeply tailored use case scenarios for each sector in Kyield’s customer pipeline, numbering in the dozens, I can offer a few words of advice in public.

  1. While all public cases should be considered, remember that few are shared in public for good reason. Few if any companies have the business model, need, resources, or capacity of Google or UPS, which is why they can share the information.

  2. Rare is the case when enormous custom projects should be copied. The craftwork of planning and design is substantially about taking available lessons, combining with technology, systems, and talent in a carefully tailored manner for the client.

  3. Regardless of whether a large custom project, completely outsourced, or anything in-between, board level supervision is necessary to avoid switching dependencies from one vendor or internal department to another. I see this occurring with new silos popping up in open source models, data science, statistics, and algorithms. The goal for most should be to reduce dependencies, which is nontrivial when dealing with high-level intellectual capital embedded in advanced technology, particularly given talent wars, level of contracting in these functional roles, and churn.

  4. Since few will be able to develop and maintain competitive custom systems, the goal should be to seek an optimal, adaptive balance between the benefits of custom-tailored software systems (Orion or Google) and the efficiency of write-once, adopt-at-scale software development. This is one of several areas where Kyield can make a real difference in the level of ROI realized. Our continuously adaptive data management system (patented) is automatically tailored to each entity, with semi-automated functions restricted to regulatory and security parameters at the corporate and unit level, and with the option for individual knowledge workers to plan projects and goals.

  5. Plan from inception to expand the system to the entire organization, ecosystem, and Internet of Entities. Otherwise, the organization will be physically restricted from achieving much of the value any such system can offer, particularly in risk management. One of the biggest errors I see being made, including in some of the most sophisticated tech companies, is approaching advanced analytics as ‘only’ a departmental project. Of course it is wise to take the low-hanging fruit through pre-existing department budgets where authority and ROI are simple, but it is a classic mistake for CIOs and IT to treat such departmental projects as strategic, with very rare exceptions.

  6. Optimize relationship management. Just one example is when our adaptive data management is combined with advanced ML algorithms with predictive capabilities, which among other functions weigh counterparty risk. A similar algorithm can be run to identify business opportunities.

Concluding thoughts

While it is more challenging to achieve buy-in for organization-wide systems, it is physically impossible to achieve critical use cases otherwise, some of which have already proven to be very serious, even fatal. Moreover, very few distributed organizations can become a CALO without a holistic system design across the organization’s networks, particularly in the age of distributed network computing.

If this isn’t sufficient motivation to engage, consider that learning algorithms are very likely (or soon will be) improving the intelligence quotient and operational efficiency of your chief competitors at an extremely rapid rate. Lastly, if the subject organization or entire industry is apathetic and slow to change, it is increasingly likely that highly sophisticated, well-financed disrupters are maturing plans to deploy this type of technology to displace the subject, if not the entire industry.

Bottom line: Move towards CALO rapidly or deal with the consequences. 

Mark Montgomery is founder and CEO of http://www.kyield.com, which offers an advanced distributed operating system and related services centered around Montgomery’s AI systems invention.

Applied artificial intelligence: Mythological to neuroanatomical


First of a three-part series exploring the depths of artificial intelligence in 2015.


Neuromorphic chip, University of Heidelberg, Germany Credit: University of Heidelberg

While artificial intelligence (AI) has roots as far back as Greek mythology, and Aristotle is credited with inventing the first deductive reasoning system, it wasn’t until the post-WWII era of computing that humans began to execute machine intelligence on early supercomputers. The science progressed nicely until the onset of the AI Winter in the 1960s, which represented the beginning of severe cycles in technology. A great deal of R&D needed to evolve and mature over the next five decades prior to wide adoption of applied AI. Here is a brief history and analysis of the rapidly evolving field of artificial intelligence and machine learning.

Hardware innovation

One of the primary obstacles to applied AI has been scaling neural networks, particularly those containing rich descriptive data and complex autonomous algorithms. Even in today’s relatively low-cost computing environment scale is problematic, with graduate students at Stanford’s AI lab complaining to their lab director, “I don’t have 1,000 computers lying around, can I even research this?” In a classic case of accelerated innovation, Stanford and other labs then switched to GPUs: “For about US$100,000 in hardware, we can build an 11-billion-connection network,” reports Andrew Ng, who recently confirmed that Baidu now “often trains neural networks with tens of billions of connections.”

Although ever-larger scale at greater speed improves deep learning efficiency, CPUs and GPUs do not function like a mammalian brain, which is why for many years researchers have investigated non–von Neumann architectures that would more closely mimic biological intelligence. One high profile investment is the current DARPA SyNAPSE program (Systems of Neuromorphic Adaptive Plastic Scalable Electronics).

In addition to neuromorphic chips, researchers are increasingly focused on the use of organic materials, part of a broader trend towards convergence of hardware, software, and wetware described by IARPA as cortical computing, aka neuroanatomical. The ability to manipulate mass at the atomic level in computational physics is now on the horizon, raising new governance issues as well as unlimited possibilities. The research goals of the Department of Energy in its fiscal 2016 budget proposal provide a good example of the future direction of R&D:

The Basic Energy Sciences (BES) program supports fundamental research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels …. Key to exploiting such discoveries is the ability to create new materials using sophisticated synthesis and processing techniques, precisely define the atomic arrangements in matter, and control physical and chemical transformations.

Software and data improvements

Similar to hardware, software is an enabler that determines the scalability, affordability, and therefore economic feasibility of AI systems. Enterprise software, including database systems, also suffered a period of low innovation and business creation until very recently. In early 2015, several companies in the billion-dollar startup club are providing infrastructure support to AI, offering overlapping services such as predictive analytics, and/or beginning to employ narrow AI and machine learning (ML) internally. Many of us are now concerned about the lack of experience and excessive hype that often accompany rapidly increased investment.

Database systems and applications

Database systems, storage, and retrieval are of course critically important to AI. A few short years ago only one leading vendor supported semantic standards; a second market leader followed two years ago. The emergence of multiple choices in the market is producing the first significant new competition in database systems in over a decade, with several vendors increasingly competing for critical systems at comparable performance and significantly lower cost.

Data standards and interoperability

Poor interoperability vastly increases costs and systemic risk, dramatically slows adaptability, and makes effective governance all but impossible. While the early adopters of semantic standards were intelligence agencies and scientific computing groups, even banks are embracing data standards in 2015, in part due to regulatory agencies collaborating internationally. High-quality data built on deeply descriptive standards, combined with advanced analytics and ML, can provide much-improved risk management, strong governance, efficient operations, accelerated R&D, and improved financial performance. In private discussions across industries, it is clear that a consensus has finally formed: standards are a necessity in the network era.

Automation, agility, and adaptive computing

An effective method of learning the agility quotient of an organization is post-mortem analysis after a catastrophic event, such as patent cliffs in big pharma or failure to respond to digital disruption. Unfortunately for executives, by the time clarity is gained from hindsight, the wisdom is useful only if those who earned it are again placed in positions of authority. Similarly, failed governance is quite evident in the wake of industrial accidents, missed opportunities to prevent terrorist attacks, and whale trades, among many others.

Given that the majority of economic productivity is planned and executed over enterprise networks, it is fortunate that automation is beginning to replace manual system integration. Functional silos are causal to crises, as well as obstacles to solving many of our greatest challenges. Too often, the lack of interoperability has been leveraged for the benefit of the IT ecosystem rather than the customer's mission. In drug development, for example, until recently scientists were reporting lag times of up to 12 months for important queries to be run. By automating data discovery, scientists can run their own queries, AI applications can recommend previously unknown queries, and deep learning can continuously seek solutions to narrowly defined problems tailored to the needs of each entity.

In the next article in this series we’ll take a look at recent improvements in algorithms and what types of AI applications are now possible.

(This article was originally published at Computerworld)

Mark Montgomery is the founder and CEO of Kyield, originator of the theorem “yield management of knowledge,” and inventor of the patented AI system that serves as the foundation for Kyield.

Plausible Scenarios For Artificial Intelligence in Preventing Catastrophes


(The PDF version of this article is now available here)


Empirical evidence consistently demonstrates that raising red flags is more likely to cause unintended consequences than to persuade essential actors to heed the warnings. Those endowed with a public platform therefore carry a special responsibility, as do system architects. Nowhere is this truer than at the confluence of advanced technology and existential risk to humanity.

As one who has invested a significant portion of my life studying crises and designing methods of prevention, including for many years the use of artificial intelligence (AI), I feel compelled to offer the following definitions of AI, along with a few scenarios I consider plausible for how AI could prevent or mitigate catastrophes, and brief thoughts on how we can reduce the downside risk to levels similar to those of other technology systems.

The transdisciplinary field of AI offers the intellectually curious an inherent temptation, with considerable incentives, to expand complexity infinitely, generally leaving the task of pragmatic simplicity to applied R&D. It is from that applied perspective that the following definitions and scenarios are offered, primarily for consideration and use in this article.

A Definition of Artificial Intelligence: Any combination of technologies and methodologies that result in learning and problem solving ability independent of naturally occurring intelligence.

A Definition of Beneficial AI: AI that has been programmed to prevent intentional harm and to mitigate unintended consequences within a strictly controlled governance system, which includes redundant safety functions and continuous human oversight (including a kill switch in high-risk programs). [1]
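To make the governance language in this definition concrete, here is a minimal, purely illustrative sketch of a layered control wrapper in Python. Every name in it (GovernedAgent, kill_switch, and so on) is hypothetical and not drawn from any real system; it simply shows redundant pre-action checks, an action whitelist, a human-controlled shutdown, and an audit trail for continuous oversight.

```python
# Illustrative sketch only: a layered governance wrapper for an
# artificial agent, loosely following the definition above.
# All names here are hypothetical.

class KillSwitchError(Exception):
    """Raised when the human-controlled kill switch is engaged."""

class GovernedAgent:
    def __init__(self, policy, allowed_actions):
        self.policy = policy                  # the agent's decision function
        self.allowed = set(allowed_actions)   # whitelist enforced by governance
        self.kill_switch = False              # set True by human oversight
        self.audit_log = []                   # continuous-oversight trail

    def act(self, observation):
        # Redundant safety checks run before every action.
        if self.kill_switch:
            raise KillSwitchError("human override engaged")
        action = self.policy(observation)
        if action not in self.allowed:
            self.audit_log.append(("blocked", action))
            return None                       # fail safe, never fail open
        self.audit_log.append(("executed", action))
        return action

agent = GovernedAgent(policy=lambda obs: "report",
                      allowed_actions={"report", "query"})
print(agent.act("sensor reading"))  # → report
```

The essential design point is that the governance layer, not the policy, has the last word: a disallowed action is logged and dropped rather than executed.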

A Definition of Augmentation: For the purposes of this article, augmentation is defined not as implants or direct connection to biological neural systems, which is an important and rapidly emerging sub-field of AI in biotech, but rather simply as enhancing the quality of human work products and economic productivity with the assistance of AI. [2]

To aid in this exercise, I have selected quotes from the highly regarded book Global Catastrophic Risks (GCR), which consists of 22 chapters by 25 authors in a worthy attempt to provide a comprehensive view. [3]  I have also adopted the GCR definition of ‘global catastrophic’: 10 million fatalities or 10 trillion dollars. The following reminders from GCR seem appropriate in a period of intense interest in AI and existential risk:

“Even for an honest, truth-seeking, and well-intentioned investigator it is difficult to think and act rationally in regard to global catastrophic risks and existential risks.”

“Some experts might be tempted by the media attention they could get by playing up the risks. The issue of how much and in which circumstances to trust risk estimates by experts is an important one.”

Super Volcanoes

Super-volcano eruptions occur about every 50,000 years and constitute a very high-probability event that should command greater attention. GCR highlights the Toba eruption in Indonesia approximately 75,000 years ago, which may be the closest humanity has come to extinction. The primary risk from super-eruptions is airborne particulates that can cause a rapid decline in global temperatures, estimated at 5–15 °C after the Toba event. [4]

While a super-eruption appears to pose a low level of existential risk to the human race, a catastrophic event is almost certain and would likely exceed all previous disasters, followed by massive loss of life and economic collapse, reduced only by the level of preventative measures taken in advance. While preparation and alert systems have much improved since my co-workers and I observed the eruption of Mt. St. Helens from Paradise on Mt. Rainier on the morning of May 18th, 1980, it is quite clear that the world is simply not prepared for one of the larger super-eruptions that will undoubtedly occur. [5] [6]

MM on Mt. Rainier, 5/18/1980


The economic contagion observed in recent events such as the 9/11 terrorist attacks, the global financial crisis, the Tōhoku earthquake and tsunami, the ongoing Syrian Civil War, and the Ebola epidemic in West Africa serves as a reasonable real-world benchmark from which to project much larger events, such as a super-eruption or asteroid strike. It is not alarmist to state that we are woefully unprepared and need to employ AI to increase the probability of preserving our species and others. The creation and colonization of highly adaptive, sustainable outposts in space would therefore seem prudent policy, best served with the assistance of AI.

The roles of AI in mitigating the risks associated with super volcanoes include accelerated research, more accurate forecasting, increased modeling for preparation and mitigation of damage, accelerated discovery of systems for surviving super-eruptions, and assistance in recovery. These tasks require ultra-high-scale data analytics to deal with complexities far beyond the ability of humans to process within circumstantially dictated time windows, and are too critical to depend on a few individuals, thus requiring highly adaptive AI across distributed networks of large numbers of interconnected deep-earth and deep-sea sensors, linking individuals, teams, and organizations. [7]  Machine learning can accelerate all critical functions, including identifying and analyzing preventative measures with algorithms designed to discover and model scenario consequences.
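As one toy-scale illustration of the sensor-network machine learning described above, the sketch below flags anomalous readings from a single hypothetical deep-sea pressure sensor using a rolling z-score. Real forecasting systems are vastly more sophisticated; the data, units, window, and threshold here are invented purely for illustration.

```python
# Hedged sketch: flag readings that deviate sharply from the recent
# trailing window, one tiny building block of sensor-based forecasting.

from statistics import mean, stdev

def anomalies(readings, window=5, threshold=3.0):
    """Return indices whose reading deviates more than `threshold`
    standard deviations from the trailing window of prior readings."""
    flagged = []
    for i in range(window, len(readings)):
        prior = readings[i - window:i]
        mu, sigma = mean(prior), stdev(prior)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Mostly stable pressure with one sudden spike (hypothetical units).
pressure = [10.0, 10.1, 9.9, 10.0, 10.2, 10.1, 10.0, 14.5, 10.1, 10.0]
print(anomalies(pressure))  # → [7]
```

Running it flags only the sudden spike at index 7; in a distributed system, such flags from many sensors would feed higher-level forecasting models rather than trigger alerts directly.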

Asteroids are similar to super volcanoes from an AI perspective: both require effective discovery and preparation, efforts that are ongoing and improving but could benefit from AI augmentation. Any future preventive endeavors may require cognitive robotics working in environments extremely hostile to humans, where real-time communications could be restricted. Warnings may be followed swiftly by catastrophic events; hence, preparation is critical, and if we view it as optional, we do so at our own peril.

Pandemics and Healthcare Economics

Among the greatest impending known risks to humans are biological contagions, due to large populations in urban environments that have become reliant on direct public interaction, regional travel by public transportation, and continuous international air travel. Antibiotic-resistant bacteria and viruses in combination with large mobile populations pose a serious short-term threat to humanity and the global economy, particularly from an emergent mutation resistant to pre-existing immunity or drugs.

“For example, if a new strain of avian flu were to spread globally through the air travel network that connects the world’s major cities, 3 billion people could potentially be exposed to the virus within a short span of time.” – Global Risk Economic Report 2014, World Economic Forum (WEF). [8]

Either general AI or knowledge work augmented with AI could be a decisive factor in preventing and mitigating a future global pandemic, which has the potential to dwarf even a nuclear war in terms of short-term casualties. Estimates vary depending on the strain, but a severe pandemic could claim 20–30% of the global population, compared to 8–14% in a nuclear war, though the aftermath of either could be equally horrific.

Productivity in life science research is poor due to inherent physical risks for patients as well as non-physical influences, such as cultural, regulatory, and financial factors, that can increase systemic, corporate, and patient risk. Healthcare cost is among the leading global risks due to cost transference to government; the WEF, for example, cites 'Fiscal Crisis in Key Economies' as the leading risk in 2014. Other than poor governance that allows otherwise preventable crises to occur, healthcare costs are arguably the world's most severe curable ongoing crisis, impacting all else, including pandemics.

Priority uses of AI for pandemics and healthcare: [9]

  • Accelerated R&D throughout the life science ecosystem, closing the patient/lab gap.

  • Advanced modeling scenarios for optimizing prevention, care, and costs.

  • Regulatory process including machine learning and predictive modeling.

  • Precision healthcare tailored to each person; more direct and automated. [10]

  • Predictive analytics for patient care, disease prevention, and economics.

  • Cognitive computing for diagnostics, patient care, and economics.

  • Augmentation for patients and knowledge workers worldwide.

“By contrast to pandemics, artificial intelligence is not an ongoing or imminent global catastrophic risk. However, from a long-term perspective, the development of general artificial intelligence exceeding that of the human brain can be seen as one of the main challenges to the future of humanity (arguably, even the main challenge). At the same time, the successful deployment of superintelligence could obviate many of the other risks facing humanity.” — Eliezer Yudkowsky in GCR chapter ‘Artificial Intelligence as a Positive and Negative Factor in Global Risk’.

Climate Change

Crises often manifest over time as a series of compounding events that could not be predicted, were predicted inaccurately, or were predicted but not acted upon. I've selected an intentionally contrarian case with climate change to provide a glimpse of potentially catastrophic unintended consequences. [11]

“Anthropogenic climate change has become the poster child of global threats. Global warming commandeers a disproportionate fraction of the attention given to global risks.” – GCR

If earth cooling from a human response driven by populism, conflicted science, or any other reason resulted in an over-correction, or if such a response combined with volcanoes, asteroids, or a slight variation in the cyclic changes of Earth's orbit to alter ocean and atmospheric patterns, it becomes plausible to forecast an accelerated ice age in which ice sheets rapidly reach the equator, similar to what may have occurred during the Cryogenian period. Given our limited understanding of triggering mechanisms, including those behind the sudden temperature changes believed to have occurred 24 times over the past 100,000 years, it is also plausible that a natural and/or human response could result in rapid warming. While either climate extreme may seem ludicrous given current sentiment, the same was true prior to most events. Whether natural, human-caused, or some combination, as observed with the Fukushima Daiichi nuclear disaster, the pattern of seemingly small events compounding into larger ones is the norm rather than the exception, and outcomes opposed to intent are not unusual. While reduction of pollutants is prudent for health, climate, and economics, chemically engineered solutions should be resisted until our ability to manage climate change is proven with rigorous real-world models.

9/11

The tragedy of 9/11 provides a critical lesson in how a seemingly low-risk series of events can spiral to catastrophic scale if not planned for and managed with precision. Those at the FBI who decided not to share the Phoenix memo with the appropriate agencies undoubtedly thought it was a safe decision given the information available at the time. Who among us would have acted differently if we had to decide based on the following questions in the summer of 2001?

  • What are the probabilities of an airline being used as a weapon?
  • Even if such an attempt were made, what is the probability of it succeeding?
  • In the unlikely event such an attempt did occur, what is the probability that one of the towers at the World Trade Center would be targeted and struck?
  • In the very unlikely event one WTC tower were struck, what is the probability that the other tower would be struck similarly before evacuation could take place?
  • In the extremely unlikely scenario that terrorists attacked the second tower with a hijacked commercial airliner within 20 minutes of the first attack, what would be the probability of one of the towers collapsing within an hour?
  • If this horrific, seemingly impossible scenario actually occurred, what then would be the probability of the second tower collapsing a half hour later?

Prior to 9/11, such a scenario would have seemed extremely improbable to anyone but an expert able to verify and process all of the information; yet the questions describe the actual event, tragically providing one of the most painful and costly lessons in human life and treasure in the history of the U.S., with wide-ranging global impact.
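The compounding effect behind those questions can be made concrete with a toy calculation. The conditional probabilities below are invented purely for illustration and reflect no real pre-9/11 estimate; the point is only how quickly a chain of individually unlikely steps drives the joint probability toward zero, and why each decision maker in the chain can feel safe dismissing it.

```python
# Purely illustrative: compounding a chain of invented conditional
# probabilities. None of these numbers is a real estimate.

conditional = {
    "airliner used as a weapon":             0.01,
    "attempt succeeds":                      0.5,
    "a WTC tower is the target struck":      0.1,
    "second tower struck before evacuation": 0.2,
    "first tower collapses within an hour":  0.1,
    "second tower also collapses":           0.5,
}

joint = 1.0
for event, p in conditional.items():
    joint *= p  # P(A and B) = P(A) * P(B | A), step by step
    print(f"P(... and {event}) = {joint:.2e}")
```

With these made-up numbers the full chain comes out to roughly five in a million, exactly the kind of figure that invites dismissal of the scenario before the fact.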

  • Direct costs from the attacks: 2,999 lives lost, with an additional 1,400 rescue workers having died since, some portion directly attributable to the attacks. Financial costs from 9/11 are estimated at approximately $2 trillion to the U.S. alone. [12]

  • Iraq and Afghanistan wars: Approximately 36,000 people from the US and allied nations died in Iraq and Afghanistan, with nearly 900,000 disability claims approved by the VA alone. [13]  The human cost in the two nations was of course much higher, including large numbers of civilians. According to one report, the Iraq War has cost $1.7 trillion, with an additional $490 billion in benefits owed to veterans, which could grow to more than $6 trillion over the next four decades counting interest. [14]  The Afghanistan war is estimated to have cost $4–6 trillion.

  • Global Financial Crisis: Many experts believe monetary easing post 9/11 combined with lax regulatory oversight expanded the unsustainable housing bubble and triggered the financial collapse in 2008, which revealed systemic risk accumulating over decades from poor regulatory oversight, governance, and policy. [15]  The total cost to date from the financial crisis is well over $10 trillion and continues to rise from a series of events reverberating globally, caused by realized moral hazard, broad global reactions, and significant continuing aftershocks from attempts to recover.

The reporting process for preventing terrorist attacks has been reformed and improved, though it is undoubtedly still challenged by the inherently conflicted dynamics between interests, the exponential growth of data, and the limited ability of individuals to process it. Had FBI decision makers been empowered with the knowledge of the author of the Phoenix memo (special agent Kenneth Williams), and been free of concerns over interagency relations, politics, and other influences, 9/11 would presumably have been prevented or mitigated.

An Artificially Intelligent Special Agent

What if the process weren't dependent upon conflicted humans? An artificial agent can be programmed to be free of such influences, and can access all relevant information on the topic from available sources far beyond the ability of humans to process. When empowered with machine learning and predictive analytics, redundant precautions, and integration with the augmented workflow of human experts, the probability of preventing or mitigating human-caused events is high, whether in preventing terrorist plots, managing high-cost projects such as deep-sea oil rigs, trading exotic financial products, designing nuclear power plants, responding to climate change, or any other high-risk endeavor.

Governance of Human Engineered Systems

Long considered one of the primary technical risks, as well as key to overcoming many of our most serious challenges, the ability to program matter at the atomic scale is beginning to emerge. [16]  Synthetic biology is another area of modern science that poses great opportunity for improving life while introducing new risks that must be governed wisely with appropriate tools. [17]

While important private research cannot be revealed, published work is underway beyond the typical reporting linked to funding announcements and ad budgets. One example is the Synthetic Evolving Life Form (SELF) project at Raytheon led by Dr. Jim Crowder, which mimics the human nervous system within an Artificial Cognitive Neural Framework (ACNF). [18]  In their book Artificial Cognitive Architectures, Dr. Crowder and his colleagues describe an unplanned Internet discussion between an artificial agent and a human researcher that underscored the need for governance embedded in the design.

A fascinating project was recently presented by Joshua M. Epstein, Ph.D. at the Santa Fe Institute, covering research from his book Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science. [19]  In this novel approach to agent-based social behavior, Epstein intentionally removed memory and pre-conditioned bias from Agent Zero to simulate what often appears to be irrational behavior resulting from social influences, reasoning, and emotions. Agent Zero provides a good foundation for considering additional governance methods for artificial agents interacting in groups, with potential application in AI systems designed to regulate systemic risk in networked environments.
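As a drastically simplified sketch, inspired by but in no way reproducing Epstein's model, the code below sums an affective term, an evidence-based term, and a socially weighted term into a disposition for each agent, which "acts" when the total crosses a threshold. Every function name, weight, and number here is my own illustrative invention.

```python
# Illustrative reduction of the three-component disposition idea:
# disposition = affect + evidence + weighted dispositions of peers.

def dispositions(affect, evidence, weights, rounds=3):
    """affect, evidence: per-agent floats;
    weights[i][j]: social influence of agent j on agent i."""
    n = len(affect)
    solo = [affect[i] + evidence[i] for i in range(n)]
    total = solo[:]
    for _ in range(rounds):  # let social influence propagate
        total = [solo[i] + sum(weights[i][j] * total[j]
                               for j in range(n) if j != i)
                 for i in range(n)]
    return total

# Three agents: only agent 0 has direct evidence, but social
# coupling can push the others past an action threshold.
affect   = [0.1, 0.0, 0.0]
evidence = [0.5, 0.0, 0.0]
weights  = [[0.0, 0.2, 0.2], [0.6, 0.0, 0.2], [0.6, 0.2, 0.0]]
final = dispositions(affect, evidence, weights)
threshold = 0.3
print([d > threshold for d in final])  # → [True, True, True]
```

With these weights, agents 1 and 2 hold no direct evidence yet still cross the action threshold through social coupling alone, the kind of seemingly irrational group behavior the paragraph above describes; set the off-diagonal weights to zero and only agent 0 acts.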

Closing Thoughts

Given the rapidly developing AI-assisted world and recent governance failures in financial regulation, national security, and healthcare, the challenge for senior decision makers and AI systems engineers is to perform at much higher levels than in the past; if we continue on the current trajectory, it is conceivable that the human experience could confirm the Fermi Paradox within decades. I therefore view AI in part as a race against time.

While the time-scale calculus for many organizations supports accelerated transformation with the assistance of AI, it is wise to proceed cautiously, with awareness that AI is a transdisciplinary field that favors patience and maturity in its architects and engineers, typically requiring several tens of thousands of hours of deep immersion in each applicable focus area. It is also important to realize that the fundamental physics involved with AI favors, if not requires, that the governance structure be embedded within the engineered data, including business process, rules, operational needs, and security parameters. When conducted prudently, AI systems engineering is a carefully planned and executed process with multiple built-in redundancies.

Mark Montgomery is founder and CEO of http://www.kyield.com, which offers technology and services centered on Montgomery’s AI systems invention.

(Robert Neilson, Ph.D., and Garrett Lindemann, Ph.D., contributed to this article)

[1] In the GCR chapter 'Artificial Intelligence as a Positive and Negative Factor in Global Risk', Eliezer Yudkowsky prefers 'Friendly AI'. His complete chapter is on the web here: https://intelligence.org/files/AIPosNegFactor.pdf

[2] During the course of writing this paper Stanford University announced a 100-year study of AI. The white paper by Eric Horvitz can be viewed here: https://stanford.app.box.com/s/266hrhww2l3gjoy9euar

[3] Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Cirkovic, (Oxford University Press, Sep 29, 2011): http://ukcatalogue.oup.com/product/9780199606504.do

[4] For more common events, see Ultra distant deposits from super eruptions: Examples from Toba, Indonesia and Taupo Volcanic Zone, New Zealand. N.E. Matthews, V.C. Smith, A Costa, A.J. Durant, D. M. Pyle and N.G.J. Pearce.

[5] USGS Fact Sheet for Alert-Notification System for Volcano Activity: http://pubs.usgs.gov/fs/2006/3139/fs2006-3139.pdf

[6] Paradise is about 45 miles to the north/blast side of St. Helens, in full view until the ash cloud enveloped Mt. Rainier with large fiery ash and friction lightning. A PBS video on the St. Helens eruption: http://youtu.be/-H_HZVY1tT4

[7] Researchers at Oregon State University claimed the first successful forecast of an undersea volcano in 2011 with the aid of deep sea pressure sensors: http://www.sciencedaily.com/releases/2011/08/110809132234.htm

[8] Global Risk Economic Report 2014, World Economic Forum: http://www3.weforum.org/docs/WEF_GlobalRisks_Report_2014.pdf

[9] While pandemics and healthcare obviously deserve and receive much independent scrutiny, understanding the relativity of economics is poor. Unlike physics, functional metric tensors do not yet exist in economics.

[10] New paper in journal Science: ‘The human splicing code reveals new insights into the genetic determinants of disease’ http://www.sciencemag.org/content/early/2014/12/17/science.1254806

[11] Skeptical views are helpful in reducing the number and scope of unintended consequences. For a rigorous contrarian view on climate change research see ‘Fat-Tailed Uncertainty in the Economics of Catastrophic Climate Change’, by Martin L. Weitzman: http://scholar.harvard.edu/files/weitzman/files/fattaileduncertaintyeconomics.pdf

[12] Institute for the Analysis of Global Security http://www.iags.org/costof911.html

[13] US and Coalition Casualties in Iraq and Afghanistan by Catherine Lutz, Watson Institute for International Studies: http://costsofwar.org/sites/default/files/articles/8/attachments/USandCoalition.pdf

[14] Daniel Trotta at Reuters, March 14th, 2013 http://www.reuters.com/article/2013/03/14/us-iraq-war-anniversary-idUSBRE92D0PG20130314

[15] In his book The Age of Turbulence: Adventures in a New World, Alan Greenspan states that monetary policy in the years following 9/11 “saved the economy”, while others point to Fed policy during this period as contributing to the global financial crisis beginning in 2008, which continues to reverberate today. Government guarantees, high-risk products in banking, companies ‘too big to fail’, corrupted political process, increased public debt, lack of effective governance, and many other factors also contributed to systemic risk and the magnitude of the crisis.

[16] A team of researchers published research in Nature Chemistry in May of 2013 on using a Monte Carlo simulation to demonstrate engineering at the nanoscale: http://www.nature.com/nchem/journal/v5/n6/full/nchem.1651.html

[17] A recent paper from MIT researchers ‘Tunable Amplifying Buffer Circuit in E. coli’ is an example of emerging transdisciplinary science with mechanical engineering applied to bacteria: http://web.mit.edu/ddv/www/papers/KayzadACS14.pdf

[18] My review of Dr. Crowder’s recent book Artificial Cognitive Architectures can be viewed here: https://kyield.wordpress.com/2013/12/17/book-review-artificial-cognitive-architectures/

[19] Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science, by Joshua M. Epstein: http://press.princeton.edu/titles/10169.html The seminar slides (require Silverlight plugin): http://media.cph.ohio-state.edu/mediasite/Play/665c495b7515413693f52e7ef9eb4c661d

Transforming Healthcare With Data Physics


I just completed an in-depth paper on how our work and system can help life science and healthcare companies overcome the great challenges they face, so I wanted to share some thoughts while they are still fresh. The paper is part of our long-term commitment to healthcare and life sciences, requiring a deep dive over the past several weeks to update myself on the latest research in behavioral psychology, machine learning, deep learning, genetics, chemistry, diagnostics, economics, and particle physics, among other fields. The review included several hundred papers as well as a few dozen reports.

Kyield Distributed OS - Life Science and Healthcare

The good news is that the science is improving rapidly. An important catalyst to accelerated learning over the past 20 years has been embracing the multi-disciplinary approach, which academia resisted for many years despite the obvious benefits, but is now finally mainstream with positive impact everywhere one looks.

The bad news is that the economics of U.S. healthcare has not noticeably improved. For a considerable portion of the population it has deteriorated. The economic trajectory for the country is frankly grim unless we transform the entire healthcare ecosystem.

A common obstacle to vast improvement in healthcare outcomes, one that transcends all disciplines and carries enormous economic consequences, is data management and analytics, or perhaps more accurately, the lack thereof. There is no doubt that unified networks must play a lead role in the transformation of healthcare. A few clips from the paper:

“By structural we mean the physics of data, including latency, entropy, compression, and security methodology. The Kyield system is intended to define structural integrity in NNs, continually exploring and working to improve upon state-of-the-art techniques.”

“While significant progress has been made with independent standards towards a more sustainable network economy, functionality varies considerably by technology, industry, and geography, with variety of data types and models remaining among the greatest obstacles to discovery, cost efficiency, performance, security, and personalization.”

Life science and healthcare are particularly impacted by heterogeneous data, which is one reason why networked healthcare is primitive, expensive, slow, and alarmingly prone to error.

“Biodiversity presents a unique challenge for data analytics due to its ambiguity, diversity, and specialized language, which then must be integrated with healthcare and data standards as well as a variety of proprietary vendor technology in database management systems, logistics, networking, productivity, and analytics programs.”

“Due to the complexity across LS and healthcare in data types, standards, scale, and regulatory requirements, a functional unified network OS requires specific combinations of the most advanced technology and methods available.”
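As an aside, the "entropy" in the data-physics clip above can be made concrete with Shannon entropy, a standard measure of how information-dense (and hence how compressible) a byte stream is. The sample strings below are arbitrary and have no connection to the paper itself.

```python
# Shannon entropy: average bits of information per byte of a stream.
# Uniform data is maximally compressible; varied data is not.

from collections import Counter
from math import log2

def shannon_entropy(data: bytes) -> float:
    """Average bits of information per byte of `data`."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

repetitive = b"AAAAAAAAAA"         # one symbol: zero information per byte
mixed      = b"patient-id:8c4f19"  # varied symbols: higher entropy
print(shannon_entropy(repetitive) == 0.0)                 # → True
print(shannon_entropy(mixed) > shannon_entropy(repetitive))  # → True
```

Uniform data scores zero bits per byte while varied data scores higher, which is one small, measurable facet of the structural data properties the quoted passages refer to.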

Among the most difficult challenges facing management in mature life science companies are cultures that have been substantially insulated from economic reality for decades and are only recently feeling the brunt of unsustainable economic modeling throughout the ecosystem, typically in the form of restructurings, layoffs, and in some cases closure. This uncertainty particularly impacts individuals accustomed to career security and relatively high levels of compensation, something I observed often during a decade of consulting. The pain caused by a dysfunctional economic system is similar to the diseases these professionals spend their careers fighting: often unjustly targeting individuals in a seemingly random manner, which of course has consequences.

“Among many changes for knowledge workers associated with the digital revolution and macro economics are less security, more free agency, more frequent job changes, much higher levels of global venture funding, less loyalty to corporate brands and mature industry models, and considerably increased motivation and activism towards personal passionate causes.”

Healthcare is a topic where I have personal passion as it cuts to the core of the most important issues to me, including family, friends, colleagues, and economics, which unfortunately in U.S. healthcare represents a highly self-destructive model. My brother was diagnosed with Lou Gehrig’s disease (amyotrophic lateral sclerosis/ALS) in 1997 not long after his only child was born. I’ll never forget that phone call with him or what he and his family endured over the next three years even though his case was a fine example of dedicated people and community. My father passed a decade later after a brutal battle with type 2 diabetes; we had an old friend pass from MS recently, and multiple cancers as well as epilepsy are ongoing within our small group of family and friends. So it would be foolhardy to deny the personal impact and interest. Healthcare affects us all whether we realize it or not, and increasingly, future generations are paying for the current generation’s unwillingness to achieve a sustainable trajectory. Unacceptable doesn’t quite capture the severity of this systemic failure we all own a part of.

The challenge as I see it is to channel our energy in a positive manner to transform the healthcare system with a laser focus on improved health and economic outcomes. This of course requires a focus on prevention, reduced complexity throughout the ecosystem, accelerated science, much improved technology, and last but not least, rational economic modeling, to include increased competition. The latter will obviously require entirely new distribution systems and business models better aligned with the current science and economic environment. Any significant progress must include highly evolved legislation reflecting far more empowerment of patients and dramatic improvement in fiscal discipline for the ultimate payer we call America, while there is still time to manage the disease. If we continue to treat only the symptoms of healthcare in America, it may well destroy the quality of life of the patient, if indeed the patient as we know it survives at all. This, essentially, is my diagnosis.

A few of the 80 references I cited in the paper linked below are good sources to learn more:

Beyond borders: unlocking value. Biotechnology Industry Report 2014, EY
http://www.ey.com/GL/en/Industries/Life-Sciences/EY-beyond-borders-unlocking-value

Dixon-Fyle, S., Ghandi, S., Pellathy, T., Spatharou, A., Changing patient behavior: the new frontier in healthcare value (2012). Health International, McKinsey & Company.
http://healthcare.mckinsey.com/sites/default/files/791750_Changing_Patient_Behavior_the_Next_Frontier_in_Healthcare_Value.pdf

Thessen, A., Cui, H., Mozzherin, D. Applications of Natural Language Processing in Biodiversity Science. Adv Bioinformatics.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3364545/

Top 10 Clinical Trial Failures of 2013. Genetic Engineering & Biotechnology News.
http://www.genengnews.com/insight-and-intelligence/top-10-clinical-trial-failures-of-2013/77900029/

Begley, C.G., Ellis, L.M. (2012) Drug development: raise standards for preclinical cancer research. Nature 483 http://www.nature.com/nature/journal/v483/n7391/pdf/483531a.pdf

Cambria, E., and White, B. Jumping NLP curves: A review of natural language processing research. IEEE Computational Intelligence Magazine, 9:1–28, 2014.
http://www.krchowdhary.com/ai/lects/nlp-research-com-intlg-ieee.pdf

Montgomery, M. Diabetes and the American Healthcare System. Kyield, Published online May 2010
http://www.kyield.com/images/Kyield_Diabetes_Use_Case_Scenario.pdf

All quotes above are mine from Kyield’s paper of 8-15-2014:

Unified Network Operating System
With Adaptive Data Management Tailored to Each Entity
Biotech, Pharmaceuticals, Healthcare, and Life Sciences

Complex Dynamics at the Confluence of Human and Artificial Intelligence


(This article was featured at Wired)

Fear of AI vs. the Ethic and Art of Creative Destruction

While it may be an interesting question whether the seasons are changing in artificial intelligence (AI), or to what extent the entertainment industry is herding pop culture, it may not have much to do with future reality. Given recent attention AI has received and the unique potential for misunderstanding, I thought a brief story from the trenches in the Land of Enchantment might shed some light.

The topic of AI recently came up at the Santa Fe Institute (SFI) during a seminar by Hamid Benbrahim on research in financial markets. Several senior scientists chimed in during Hamid’s talk, representing computer science (CS), physics (two), neuroscience, biology, and philosophy, along with several practitioners with relevant experience. SFI is celebrating its 30th anniversary this year as a pioneer in complexity research, where these very types of topics are explored, attracting leading thinkers worldwide.

Following the talk I continued to discuss financial reforms and technology with Daniel C. Dennett, who is an external professor at SFI. While known as an author and philosopher, Professor Dennett is also Co-Director of the Center for Cognitive Studies at Tufts University with extensive published works in CS and AI. Professor Dennett shared a personal case that provides historical and perhaps futuristic context involving a well-known computer scientist at a leading lab during the commercialization era of the World Wide Web. The scientist was apparently concerned with the potential negative impact on authors given the exponentially increasing mass of content, and I suspect also feared the network effect in certain types of consumer services that quickly result in winner-takes-all dominance.

Professor Dennett apparently attempted to reassure his colleague by pointing out that his concerns, while understandable, were likely unjustified for the mid-term as humans have a consistent history of adapting to technological change, as well as adapting technology to fill needs. In this case, Dennett envisioned the rise of specialty services that would find, filter, and presumably broker in some fashion the needs of reader and author. Traditional publishing may change even more radically than we’ve since observed, but services would rise, people and models would adapt.

One reason complexity attracts leading thinkers in science and business is the potential benefit across all areas of life and economy. The patterns and methods discovered in one field are increasingly applied to others, in no small part due to collaboration, data sharing, and analytics. David Wolpert, for example, said his reason for joining SFI part-time from LANL was a desire to work in more than one discipline simultaneously. Many others have reported similar motivations, both for the potential impact of sharing knowledge between disciplines and for the inherent challenge. I can certainly relate from my own work in applied complex adaptive systems, which at times seems as if God or Nature were teasing the ego of human intellect. Working with highly complex systems tends to be a humbling experience.

That is not to say, however, that humans are primitive or without power to alter our destiny. Our species did not come to dominate Earth due to ignorance or lack of skills, for better or worse. We are blessed with the ability to intentionally craft tools and systems not just for attention-getting nefariousness, but solving problems, and yes being compensated for doing so. Achieving improvement increasingly requires designs that reduce the undesirable impacts of complexity, which tend to accumulate as increased risk, cost, and difficulty.

Few informed observers claim that technological change is pain-free: disruptions and displacements occur, organizations do fail, and individuals do lose jobs, particularly in cultures that resist macro change rather than proactively adapt to changing conditions. That is, after all, the nature of creative destruction. Physics, regulations, and markets may allow us to control some aspects of technology, manage processes in others, and hopefully introduce simplicity, ease of use, and efficiency, but there is no escaping the tyranny of complexity; even if society attempted to ban complexity, nature would not comply, nor would humans if history is any guide. The risk of catastrophic events from biological and human-engineered threats would remain regardless. The challenge is to optimize the messy process to the best of our ability with elegant and effective solutions, preventing extreme volatility and catastrophic events while, as some of us intend, leading to a more sustainable, healthy planet.

2012 Kyield Enterprise UML Diagram - Human Skull

The dynamics involved with tech-led disruption are well understood to be generally beneficial to greater society, macroeconomics, and employment. Continual improvements with small disruptions are much less destructive and more beneficial than the violent events that have occurred throughout history in reaction to extreme chronic imbalances. Diversification, competition, and churn are not only healthy, but essential to progress and ultimately survival. However, the messy task is made far more costly and painful than necessary, including to those most impacted, as entrenched cultures resist that which they should be embracing. Over time, all manner of protectionist methods are employed to defend against change, essential disruption, or power erosion, eventually including manipulation of the political process, which often has toxic and corrosive impacts. As I write this, the description beneath a headline in The Wall Street Journal reads as follows:

 “Initiatives intended to help restrain soaring college costs are facing resistance from schools and from a bipartisan bloc of lawmakers looking to protect institutions in their districts.”

Reading this article reminded me of an interview with Ángel Cabrera, whom I had the pleasure of getting to know when he was President of the Thunderbird School of Global Management, and who is now in the same role at George Mason University. His view as I recall was that the reforms necessary in education were unlikely to come from within, and would require external disruptive competition. My experience, in various roles over the years, has been similar. A majority of cultures fiercely resist change, typically agreeing only to reforms that benefit the interests of narrow groups with little concern for collective impact or macro needs. Yet society often looks to entrenched institutions for expertise, leadership, and decision power, despite obvious conflicts of interest, thus creating quite a dilemma for serious thinkers and doers. As structural barriers grow over time it becomes almost impossible to introduce new technology and systems regardless of need or merit. Any such scenario is directly opposed to proper governance policy, or what is understood to result in positive outcomes.

Consider, then, recent research demonstrating that resistance to change and patterns of human habit are caused in part by chemicals in the brain. We are left with an uncomfortable awareness that some cultures are almost certainly, and increasingly knowingly, exploiting fear and addiction to protect personal power and financial benefits. Those benefits are often unsustainable, and eventually more harmful than tech-enabled adaptation would be, both to the very special interests these cultures are charged with serving and to the rest of society, who would clearly benefit. This would seem to elevate the motivation for change from preference to civic duty: to support those who appear to be offering the best emerging solutions to our greatest problems.

This situation of entrenched interests conflicting with the greater good provides the motivation for many involved with basic and applied R&D, innovation, and business building. Though most commonly associated with the culture of Silicon Valley, the force for rational reforms and innovation has in fact become quite global in recent years, although resistance to even the most obvious essential changes is still at times shockingly stubborn and effective.

Given these observations combined with awareness that survival of any organization or species requires adaptation to constantly changing conditions, one can perhaps see why I asked the following questions during various phases of our R&D:

Why not intentionally embrace continuous improvement and adaptation?

Why not tailor data consumption and analytics to the specific needs of each entity?

Why not prevent readily preventable crises?

Why not accelerate discoveries and attribute human capital more accurately and justly?

Why not rate, incentivize, and monetize mission-oriented knowledge?

The story I shared in conversation with Dan Dennett at SFI was timely and appropriate to this topic as philosophy not only deserves a seat at the table with AI, but also has contributed to many of the building blocks that make the technology possible, such as mathematics and data structures, among others.

The primary message I want to convey is that we all have a choice and responsibility as agents for positive change, and our actions impact the future, especially with AI systems. For example, given that AI has the capacity to significantly accelerate scientific discovery, improve health outcomes, and reduce crises, I have long believed ethics requires that we deploy the technology. However, given that we are also well aware that high unemployment levels are inhumane and carry considerable moral hazard and risk of civil unrest, AI should be deployed surgically and with great care. I do not support wide deployment of AI for the primary purpose of replacing human workers. Rather, I have focused my R&D efforts on optimizing human capital and learning in the near-term. To the best of my awareness this is not only the most ethical path forward for AI systems, but also good business strategy, as I think the majority of decision makers in organizations are of similar mind on the issue.

In closing, from the perspective of an early advisor to very successful tech companies rather than inventor and founder of an AI system, I’d like to support the concerns of others. While we need to be cautious about spreading undue fear, it has become clear to me that some of the more informed warnings are not unjustified. Some highly competitive cultures, particularly in IT engineering, have demonstrated strong anti-human behavior, including companies I am close to that, I think, would quite probably not restrain their actions based on ethics or macro social needs, regardless of the evidence presented to them. In this regard they are no different than the protectionist cultures they would replace, and at least as dangerous. I strongly disagree with such extreme philosophies. I believe technology should be tapped to serve humans and other species, with exceptions reserved for contained areas such as defense and space research, where humans are at risk, or areas such as surgery, where machine precision is in some cases superior to humans and therefore of service.

Many AI applications and systems are now sufficiently mature for adoption, the potential value and functionality are clearly unprecedented, and competitive pressures are such in most sectors that to not engage in emerging AI could well determine organizational fate in the not-too-distant future. The question then is not whether to deploy AI, or increasingly even when, but rather how, which, and with whom. About fifteen years ago during an intense learning curve I published a note in our network for global thought leaders that the philosophy of the architect is embedded in the code—it just often requires a qualified eye to see it. This is where problems in adoption of emerging technology often arise as those few who are qualified include a fair percentage of biased and conflicted individuals who don’t necessarily share a high priority for the best interest of the customer.

My advice to decision makers and chief influencers is to engage in AI, but choose your consultants, vendors, and partners very carefully.


Book Review: “Artificial Cognitive Architectures”


“Artificial Cognitive Architectures”

James A. Crowder, John N. Carbone, Shelli A. Friess

Aficionados of artificial intelligence often fantasize, speculate, and debate the holy grail that is a fully autonomous artificial life form, yet rarely do we find a proposed architecture approaching a credible probability of success. With “Artificial Cognitive Architectures”, Drs. Crowder, Carbone, and Friess have painstakingly pulled together many disparate pieces of the robot puzzle in sufficient form to convince this skeptic that a human-like robot is finally within the realm of achievement, even if still at the extreme outer bounds of applied systems.

The authors propose an architecture for a Synthetic, Evolving Life Form (SELF):

A prerequisite for SELF consciousness includes methodologies for perceiving its environment, taking in available information, making sense of it, filtering it, adding it to internal consciousness, learning from it, and then acting on it.

SELF mimics the human central nervous system through a highly specific set of integrated components within the proposed Artificial Cognitive Neural Framework (ACNF), which includes an Artificial Prefrontal Cortex (APC) that serves as the ‘mediator’. SELF achieves its intelligence through the use of Cognitrons, which are software programs that serve in this capacity as ‘subject matter experts’. An artificial Occam abduction process is then tapped to help manage the ‘overall cognitive framework’ called ISAAC (Intelligent information Software Agents to facilitate Artificial Consciousness).
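The mediator/specialist relationship between the APC and the Cognitrons can be loosely illustrated with a toy sketch in Python. To be clear, this is not the authors' implementation, and every name below (PrefrontalMediator, Cognitron, perceive, and so on) is hypothetical; the sketch only shows the general pattern of a central component routing perceptions to subject-matter-expert agents:

```python
# Illustrative sketch only: a central "mediator" dispatches perceptions to
# specialist agents, loosely mirroring the APC/Cognitron relationship
# described in the book. All class and method names are hypothetical.

class Cognitron:
    """A specialist agent responsible for one topic (a 'subject matter expert')."""
    def __init__(self, topic):
        self.topic = topic

    def can_handle(self, perception):
        # Trivial matching rule for illustration; a real system would be
        # far more sophisticated (learning, filtering, abduction, etc.).
        return self.topic in perception

    def process(self, perception):
        return f"{self.topic}: interpreted '{perception}'"


class PrefrontalMediator:
    """Routes each incoming perception to the first specialist able to handle it."""
    def __init__(self, specialists):
        self.specialists = specialists

    def perceive(self, perception):
        for specialist in self.specialists:
            if specialist.can_handle(perception):
                return specialist.process(perception)
        return None  # no specialist matched; a learning system would adapt here


mediator = PrefrontalMediator([Cognitron("weather"), Cognitron("traffic")])
print(mediator.perceive("weather report: rain"))
```

The design choice worth noting is the indirection: specialists never talk to each other directly, so new Cognitrons can be added without changing the mediator, which is part of what makes this kind of architecture extensible.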

The system employs much of the spectrum of advanced computer science and engineering to achieve the desired results for SELF, reflecting extensive experience. Dr. Jim Crowder is Chief Engineer, Advanced Programs at Raytheon Intelligence and Information Systems. He was formerly Chief Ontologist at Raytheon, which is where I first came across his work. Dr. John Carbone is also at Raytheon; a quick search will reveal many of his articles and patents in related areas. Dr. Shelli Friess is a cognitive psychologist, a discipline that until recently was rarely found associated with advanced computing architecture, even though mimicry of the human nervous system clearly calls for a deep transdisciplinary approach. For example, “Artificial Cognitive Architectures” introduces ‘acupressure’, ‘deep breathing’, ‘positive psychology’ and other techniques to SELF, which is proposed to become ‘a real-time, fully functioning, autonomous, self-actuating, self-analyzing, self-healing, fully reasoning and adapting system.’

While even the impassioned AI post-doc may experience acronym fatigue while consuming “Artificial Cognitive Architectures”, the 18 years of research behind the book with careful attention to descriptive terminology helps to minimize the confusion surrounding a topic that by necessity begins to take on the complexity of our species.

Serious students and practitioners of AI will find “Artificial Cognitive Architectures” particularly interesting for its broad systems approach, while most others with curiosity about the topic will find the book technical but fascinating. Those searching for HAL 9000 will be delighted to see similar reasoning and emotions on display, while simultaneously disappointed to discover designed-in governance and security features that will hopefully prevent such Hollywood scenarios from occurring. The security design was apparently influenced by an entertaining real case in which an earlier version of an intelligent agent developed for the U.S. government was inadvertently left plugged in by Dr. Crowder, resulting in a late-night Instant Messaging exchange between a human colleague and a Cognitron slumber party of sorts.

Readers will find a more mature posture regarding policy and security than commonly found in popular AI culture, apparently reflecting the serious work of applying AI to missile and other systems at Raytheon.

I personally found the book refreshing as it overlaps much of my own work at the confluence of human-driven AI systems. I also share a concern for internal security as it appears inevitable that machines with even the most basic cognitive ability will immediately observe how irresponsible their organic brethren have conducted themselves as stewards of earth’s resources.

J.A. Crowder et al., Artificial Cognition Architectures
DOI 10.1007/978-1-4614-8072-3_5, © Springer Science+Business Media New York 2014

http://www.springer.com/engineering/computational+intelligence+and+complexity/book/978-1-4614-8071-6