Applied artificial intelligence: Mythological to neuroanatomical

First of a three-part series exploring the depths of artificial intelligence in 2015.


Neuromorphic chip, University of Heidelberg, Germany Credit: University of Heidelberg

While artificial intelligence (AI) has roots as far back as Greek mythology, and Aristotle is credited with inventing the first deductive reasoning system, it wasn’t until the post-WWII era of computing that we humans began to implement machine intelligence on early supercomputers. The science progressed nicely until the onset of the AI winter in the 1960s, the first of several severe boom-and-bust cycles in the technology. A great deal of R&D needed to evolve and mature over the next five decades before wide adoption of applied AI became possible. What follows is a brief history and analysis of the rapidly evolving field of artificial intelligence and machine learning.

Hardware innovation

One of the primary obstacles to applied AI has been scaling neural networks, particularly those containing rich descriptive data and complex autonomous algorithms. Even in today’s relatively low-cost computing environment, scale is problematic, with graduate students at Stanford’s AI lab complaining to their lab director, “I don’t have 1,000 computers lying around, can I even research this?” In a classic case of accelerated innovation, Stanford and other labs then switched to GPUs: “For about US$100,000 in hardware, we can build an 11-billion-connection network,” reports Andrew Ng, who recently confirmed that Baidu now “often trains neural networks with tens of billions of connections.”

Although ever-larger scale at greater speed improves deep learning efficiency, CPUs and GPUs do not function like a mammalian brain, which is why researchers have for many years investigated non–von Neumann architectures that more closely mimic biological intelligence. One high-profile investment is the current DARPA SyNAPSE program (Systems of Neuromorphic Adaptive Plastic Scalable Electronics).

In addition to neuromorphic chips, researchers are increasingly focused on the use of organic materials, part of a broader trend toward convergence of hardware, software, and wetware, described by IARPA as cortical computing, aka the neuroanatomical approach. The ability to manipulate mass at the atomic level in computational physics is now on the horizon, raising new governance issues as well as unlimited possibilities. The research goals of the Department of Energy in the fiscal 2016 budget proposal provide a good example of the future direction of R&D:

The Basic Energy Sciences (BES) program supports fundamental research to understand, predict, and ultimately control matter and energy at the electronic, atomic, and molecular levels …. Key to exploiting such discoveries is the ability to create new materials using sophisticated synthesis and processing techniques, precisely define the atomic arrangements in matter, and control physical and chemical transformations.

Software and data improvements

As with hardware, software is an enabler that determines the scalability, affordability, and therefore economic feasibility of AI systems. Enterprise software, including database systems, also suffered a period of low innovation and business creation until very recently. As of early 2015, several companies in the billion-dollar startup club are providing infrastructure support for AI, offering overlapping services such as predictive analytics, and/or beginning to employ narrow AI and machine learning (ML) internally. Many of us are now concerned about the lack of experience and the excessive hype that often accompany rapidly increased investment.

Database systems and applications

Database systems, storage, and retrieval are of course of critical importance to AI. A few short years ago only one leading vendor supported semantic standards; a second market leader followed two years ago. The emergence of multiple choices in the market is producing the first significant new competition in database systems in over a decade, with several newcomers increasingly competing for critical systems at comparable performance levels and significantly lower cost.

Data standards and interoperability

Poor interoperability vastly increases costs and systemic risk while dramatically slowing adaptability, and it makes effective governance all but impossible. While early adopters of semantic standards include intelligence agencies and the scientific computing community, even banks are embracing data standards in 2015, in part because regulatory agencies are collaborating internationally. High-quality data built on deeply descriptive standards, combined with advanced analytics and ML, can provide much-improved risk management, strong governance, efficient operations, accelerated R&D, and improved financial performance. In private discussions across industries, it is clear that a consensus has finally formed: standards are a necessity in the network era.
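To make the notion of deeply descriptive standards concrete, the toy sketch below models the subject–predicate–object "triple" structure that underlies semantic standards such as RDF. The entity names and facts here are hypothetical, and a real deployment would use a standards-compliant triple store queried with SPARQL rather than this minimal Python structure:

```python
# Minimal sketch of the subject-predicate-object triple model behind
# semantic standards such as RDF. All identifiers below are hypothetical.

triples = {
    ("trade:123", "hasCounterparty", "bank:ACME"),
    ("trade:123", "hasNotional", "10000000"),
    ("bank:ACME", "regulatedBy", "regulator:FED"),
}

def match(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Because every fact is stated against shared identifiers, cross-silo
# questions need no bespoke integration code:
facts_about_trade = match(subject="trade:123")   # every fact about the trade
oversight = match(predicate="regulatedBy")       # who regulates whom
```

The point of the sketch is the shared-identifier discipline: once every system describes its data with the same vocabulary, a regulator or risk function can ask questions across silos without a custom integration project for each pair of systems.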

Automation, agility, and adaptive computing

An effective method of learning the agility quotient of an organization is post-mortem analysis after a catastrophic event such as a patent cliff in big pharma or a failure to respond to digital disruption. Unfortunately for executives, by the time clarity is gained from hindsight, the wisdom is useful only if those who earned it are placed in positions of authority again. Similarly, failed governance is quite evident in the wake of industrial accidents, missed opportunities to prevent terrorist attacks, and whale trades, among many others.

Given that the majority of economic productivity is planned and executed over enterprise networks, it is fortunate that automation is beginning to replace manual system integration. Functional silos are causal factors in crises, as well as obstacles to solving many of our greatest challenges. Too often, the lack of interoperability has been leveraged for the benefit of the IT ecosystem rather than the customer's mission. In drug development, for example, until recently scientists reported lag times of up to 12 months for important queries to be run. By automating data discovery, scientists can run their own queries, AI applications can recommend queries not yet considered, and deep learning can continuously seek solutions to narrowly defined problems tailored to the needs of each entity.
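A minimal sketch of the kind of automated data discovery described above, assuming two hypothetical departmental data sets keyed by a shared compound identifier (the silo names, field names, and values are all invented for illustration):

```python
# Sketch of cross-silo data discovery in drug development. With a shared
# identifier, a scientist can unify records directly instead of waiting
# months for a manual integration project. All data here is hypothetical.

chemistry = {
    "CMP-001": {"solubility_mg_ml": 0.8, "logP": 2.1},
    "CMP-002": {"solubility_mg_ml": 4.2, "logP": 1.3},
}

trials = {
    "CMP-001": {"phase": 2, "adverse_events": 3},
    "CMP-002": {"phase": 1, "adverse_events": 0},
}

def discover(compound_id):
    """Gather every known fact about a compound across the data silos."""
    record = {"compound": compound_id}
    for silo in (chemistry, trials):
        record.update(silo.get(compound_id, {}))
    return record

unified = discover("CMP-001")  # chemistry and trial facts in one record
```

Once discovery is automated this way, recommendation and learning layers can sit on top of the unified records, which is the progression the paragraph above describes.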

In the next article in this series we’ll take a look at recent improvements in algorithms and what types of AI applications are now possible.

(This article was originally published at Computerworld)

Mark Montgomery is the founder and CEO of Kyield, originator of the theorem “yield management of knowledge,” and inventor of the patented AI system that serves as the foundation for Kyield.


About Mark Montgomery
I am a technologist, serial entrepreneur, business consultant, recovered VC, and inventor with interests that are both broad and deep across multiple disciplines, including organizational management, computing, communications, economics, sociology, science and nature, among others. For the past several years I have been founder and CEO of Kyield, which offers a distributed operating system for achieving optimal yield of executable knowledge across large data networks. The patented AI system core acts to unify networks with adaptive data tailored to each entity, using continuous predictive analytics designed to significantly reduce ongoing costs while accelerating productivity, and generally to make life more satisfying and productive for knowledge workers and their organizations. We provide popular free white papers, use case scenarios, and other information at .
