Complex Dynamics at the Confluence of Human and Artificial Intelligence


(This article was featured in Wired)

Fear of AI vs. the Ethic and Art of Creative Destruction

While it may be an interesting question whether the seasons are changing in artificial intelligence (AI), or to what extent the entertainment industry is herding pop culture, it may not have much to do with future reality. Given the recent attention AI has received, and its unique potential for misunderstanding, I thought a brief story from the trenches in the Land of Enchantment might shed some light.

The topic of AI recently came up at the Santa Fe Institute (SFI) during a seminar by Hamid Benbrahim on research in financial markets. Several senior scientists chimed in during Hamid’s talk, representing computer science (CS), physics (two of them), neuroscience, biology, and philosophy, as well as several practitioners with relevant experience. SFI is celebrating its 30th anniversary this year as a pioneer in complexity research, where these very types of topics are explored, attracting leading thinkers worldwide.

Following the talk I continued to discuss financial reforms and technology with Daniel C. Dennett, who is an external professor at SFI. While known as an author and philosopher, Professor Dennett is also Co-Director of the Center for Cognitive Studies at Tufts University with extensive published works in CS and AI. Professor Dennett shared a personal case that provides historical and perhaps futuristic context involving a well-known computer scientist at a leading lab during the commercialization era of the World Wide Web. The scientist was apparently concerned with the potential negative impact on authors given the exponentially increasing mass of content, and I suspect also feared the network effect in certain types of consumer services that quickly result in winner-takes-all dominance.

Professor Dennett apparently attempted to reassure his colleague by pointing out that his concerns, while understandable, were likely unjustified for the mid-term as humans have a consistent history of adapting to technological change, as well as adapting technology to fill needs. In this case, Dennett envisioned the rise of specialty services that would find, filter, and presumably broker in some fashion the needs of reader and author. Traditional publishing may change even more radically than we’ve since observed, but services would rise, people and models would adapt.

One reason complexity attracts leading thinkers in science and business is the potential benefit across all areas of life and economy. The patterns and methods discovered in one field are increasingly applied to others, in no small part due to collaboration, data sharing, and analytics. David Wolpert, for example, said his reason for joining SFI part-time from LANL was a desire to work in more than one discipline simultaneously. Many others have reported similarly, both for the potential impact of sharing knowledge between disciplines and for the inherent challenge. I can certainly relate from my own work in applied complex adaptive systems, which at times seems as if God or Nature were teasing the ego of human intellect. Working with highly complex systems tends to be a humbling experience.

That is not to say, however, that humans are primitive or without power to alter our destiny. Our species did not come to dominate Earth due to ignorance or lack of skills, for better or worse. We are blessed with the ability to intentionally craft tools and systems, not just for attention-getting mischief, but for solving problems, and yes, for being compensated for doing so. Achieving improvement increasingly requires designs that reduce the undesirable impacts of complexity, which tend to accumulate as increased risk, cost, and difficulty.

Few informed observers claim that technological change is pain-free: disruptions and displacements occur, organizations do fail, and individuals do lose jobs, particularly in cultures that resist macro change rather than proactively adapt to changing conditions. That is, after all, the nature of creative destruction. Physics, regulations, and markets may allow us to control some aspects of technology, manage processes in others, and hopefully introduce simplicity, ease of use, and efficiency, but there is no escaping the tyranny of complexity; even if society attempted to ban complexity, nature would not comply, nor would humans if history is any guide. The risk of catastrophic events from biological and human-engineered threats would remain regardless. The challenge is to optimize the messy process to the best of our ability with elegant and effective solutions, preventing extreme volatility and catastrophic events while, as some of us intend, leading to a more sustainable, healthier planet.

2012 Kyield Enterprise UML Diagram - Human Skull

The dynamics involved with tech-led disruption are well understood to be generally beneficial to greater society, macroeconomics, and employment. Continual improvements with small disruptions are much less destructive and more beneficial than the violent events that have occurred throughout history in reaction to extreme chronic imbalances. Diversification, competition, and churn are not only healthy, but essential to progress and ultimately survival. However, the messy task is made far more costly and painful than necessary, including to those most impacted, as entrenched cultures resist what they should be embracing. Over time, all manner of protectionist methods are employed to defend against change, essential disruption, or power erosion, eventually including manipulation of the political process, which often has toxic and corrosive impacts. As I write this, the description beneath a headline in The Wall Street Journal reads as follows:

 “Initiatives intended to help restrain soaring college costs are facing resistance from schools and from a bipartisan bloc of lawmakers looking to protect institutions in their districts.”

Reading this article reminded me of an interview with Ángel Cabrera, whom I had the pleasure of getting to know when he was President of the Thunderbird School of Global Management, now in the same role at George Mason University. His view, as I recall, was that the reforms necessary in education were unlikely to come from within and would require external disruptive competition. Whatever my role at the time, my experience has been similar. A majority of cultures fiercely resist change, typically agreeing only to reforms that benefit the interests of narrow groups, with little concern for collective impact or macro needs. Yet society often looks to entrenched institutions for expertise, leadership, and decision power, despite obvious conflicts of interest, creating quite a dilemma for serious thinkers and doers. As structural barriers grow over time, it becomes almost impossible to introduce new technology and systems, regardless of need or merit. Any such scenario is directly opposed to proper governance policy, or what is understood to result in positive outcomes.

Consider, then, recent research demonstrating that resistance to change and patterns of human habit are driven in part by chemicals in the brain. We are left with an uncomfortable awareness: some cultures are almost certainly, and increasingly knowingly, exploiting fear and addiction to protect personal power and financial benefits. Those benefits are often unsustainable, and eventually more harmful than tech-enabled adaptation would be, both to the very special interests such cultures are charged with serving and to the rest of society, who would clearly benefit. This would seem to shift the motivation for change from self-interest to civic duty: supporting those who appear to be offering the best emerging solutions to our greatest problems.

This situation of entrenched interests conflicting with the greater good provides the motivation for many involved with both basic and applied R&D, innovation, and business building. Most commonly associated with the culture of Silicon Valley, the force for rational reforms and innovation has in fact become quite global in recent years, although resistance to even the most obvious essential changes is still at times shockingly stubborn and effective.

Given these observations combined with awareness that survival of any organization or species requires adaptation to constantly changing conditions, one can perhaps see why I asked the following questions during various phases of our R&D:

Why not intentionally embrace continuous improvement and adaptation?

Why not tailor data consumption and analytics to the specific needs of each entity?

Why not prevent readily preventable crises?

Why not accelerate discoveries and attribute human capital more accurately and justly?

Why not rate, incentivize, and monetize mission-oriented knowledge?

The story I shared in conversation with Dan Dennett at SFI was timely and appropriate to this topic, as philosophy not only deserves a seat at the table with AI, but has also contributed to many of the building blocks that make the technology possible, such as formal logic, mathematics, and data structures, among others.

The primary message I want to convey is that we all have a choice and responsibility as agents for positive change, and our actions impact the future, especially with AI systems. For example, given that AI has the capacity to significantly accelerate scientific discovery, improve health outcomes, and reduce crises, I have long believed ethics requires that we deploy the technology. However, since we are also well aware that high unemployment levels are inhumane, carry considerable moral hazard, and risk civil unrest, AI should be deployed surgically and with great care. I do not support wide deployment of AI for the primary purpose of replacing human workers. Rather, I have focused my R&D efforts on optimizing human capital and learning in the near term. To the best of my awareness, this is not only the most ethical path forward for AI systems, but also good business strategy, as I think the majority of decision makers in organizations are of a similar mind on the issue.

In closing, from the perspective of an early advisor to very successful tech companies, rather than as inventor and founder of an AI system, I’d like to support the concerns of others. While we need to be cautious about spreading undue fear, it has become clear to me that some of the more informed warnings are not unjustified. Some highly competitive cultures, particularly in IT engineering, have demonstrated strongly anti-human behavior, including at companies I am close to that, I think, would quite probably not restrain their own actions based on ethics or macro social needs, regardless of the evidence presented to them. In this regard they are no different from the protectionist cultures they would replace, and at least as dangerous. I strongly disagree with such extreme philosophies. I believe technology should be tapped to serve humans and other species, with exceptions reserved for contained areas such as defense and space research, where humans are at risk, or areas such as surgery, where machine precision is in some cases superior to human precision and therefore of service.

Many AI applications and systems are now sufficiently mature for adoption, the potential value and functionality are clearly unprecedented, and competitive pressures in most sectors are such that failing to engage with emerging AI could well determine organizational fate in the not-too-distant future. The question, then, is not whether to deploy AI, or increasingly even when, but rather how, which, and with whom. About fifteen years ago, during an intense learning curve, I published a note in our network for global thought leaders observing that the philosophy of the architect is embedded in the code; it often just requires a qualified eye to see it. This is where problems in the adoption of emerging technology often arise, as the few who are qualified include a fair percentage of biased and conflicted individuals who don’t necessarily place a high priority on the best interest of the customer.

My advice to decision makers and chief influencers is to engage in AI, but choose your consultants, vendors, and partners very carefully.


Why It Has Always Been, and Will Always Be — The Network of Entities


Physics won this debate before anyone had a vision that a computer network might someday exist, but biology played an essential role on the team.

The reason, of course, is that all living things, including humans and our organizations, are unique in the universe, for our purposes anyway, until that identical parallel universe is discovered. Even perfectly cloned robots cannot occupy the same time and place; while quite similar, a machine working directly adjacent to an otherwise identical clone may be electrocuted or run over by a forklift, and will then have much different needs.

More importantly to organic creatures like myself, our DNA, while similar to others’, is not only unique; our health and well-being are also influenced by a myriad of other factors, including nutrition, behavior, environment, and socioeconomics, the total interaction of which we only partially understand. We do know, however, that our universe, our bodies, and our brains are constantly changing, with a set of factors at any one time that strongly favors an adaptive, and in many cases proactive, response, certainly including how we manage data and information.

While networks of things and of people certainly exist, it always has been, and forever will be, the Internet of entities. The individual make-up of each entity at any moment in time, including its dynamic relationships, requires humans and human organizations to manage as best we are able with the most accurate information available, which for the foreseeable future increasingly means this human entity managing organizational entities, machine entities, and yes, even sensory entities. This is why I created Kyield and designed the system that powers it in precisely the manner offered.

Legacy of the Tōhoku Earthquake: Moral Imperative to Prevent a Future Fukushima Crisis


An article in the New York Times reminds us once again that without a carefully crafted and highly disciplined governance architecture in place, perceived misalignment of personal interests between individuals and organizations across cultural ecosystems can lead to catastrophic decisions and outcomes. The article was written by Martin Fackler and is titled: Nuclear Disaster in Japan Was Avoidable, Critics Contend.

While not unexpected by those who study crises, being rather yet another case where brave individuals raised red flags only to be shouted down by the crowd, the article does provide instructive granularity that should guide senior executives, directors, and policy makers in planning organizational models and enterprise systems. In a rare statement by a leading publication, Martin Fackler reports that insiders within “Japan’s tightly knit nuclear industry” attributed the Fukushima plant meltdown to a “culture of collusion” among “powerful regulators and compliant academic experts”. A very similar dynamic is found in other preventable crises, from the broad systemic financial crisis to narrow product-defect cases.

One of the individuals who warned regulators of just such an event was Professor Kunihiko Shimazaki, a seismologist on the committee created specifically to manage risk associated with Japan’s offshore earthquakes. Shimazaki’s conservative warnings were not only ignored; his comments were removed from the final report “pending further research”. Shimazaki is reported to believe that “fault lay not in outright corruption, but rather complicity among like-minded insiders who prospered for decades by scratching one another’s backs.” This is almost identical to events in the U.S., where multi-organizational cultures evolved slowly over time to become among the highest systemic risks to life, property, and economy.

In another commonly found pattern, the plant operator Tepco failed to act on multiple internal warnings from its own engineers, who calculated that a tsunami could reach up to 50 feet in height. This critical information was withheld from regulators for three years, finally reported just four days before the magnitude 9.0 quake, which caused a 45-foot tsunami and the resulting meltdown of three reactors at Fukushima.

Three questions for consideration

1) Given that the root cause of the Fukushima meltdown was not the accurately predicted earthquake or tsunami, but rather dysfunctional organizational governance, are leaders not then compelled by moral imperative to seek out and implement organizational systems specifically designed to prevent crises in the future?

2) Given that peer pressure and social dynamics within the academic culture, and its relationships with regulators and industry, are cited as the cause by the most credible witness, a member of their own community who predicted the event, would not prudence demand that responsible decision makers consider solutions external to the afflicted cultures?

3) With the not-invented-here syndrome near the core of every major crisis in recent history, each of which has seriously degraded economic capacity, can anyone afford not to?

Steps that must be taken to prevent the next Fukushima

1) Do not return to the same poisoned well for solutions that caused or enabled the crisis

  • The not-invented-here syndrome, combined with a bias for institutional solutions, perpetuates the myth that humans are incapable of anything but repeating the same errors again.

  • This phenomenon is evident in the ongoing financial crisis, which suffers from similar cultural dynamics among academics, regulators, and industry.

  • Researchers have only recently begun to understand the problems associated with deep expertise in isolated disciplines and their cultural dynamics. ‘Expertisis’ is a serious problem that tends to blind researchers within disciplines to transdisciplinary patterns and discovery, severely limiting consideration of possible solutions.

  • Systemic crises overlap too many disciplines for the academic model to execute functional solutions, as evidenced by the committee in this case, which sidelined its own seismologist’s warnings for further study, a classic enabler of systemic crises.

2) Understand that in the current digital era, and through the foreseeable future, organizational governance challenges are also data governance challenges, which in turn require the execution of data governance solutions

    • Traditional organizational governance is rapidly breaking down with the rise of the neural network economy, yet governance solutions are comparatively slow to be adopted.

    • Many organizational leaders, policy makers, risk managers, and public safety engineers are not functionally literate with state-of-the-art technology, such as semantic, predictive, and human alignment methodologies.

    • Functional enterprise architecture that has the capacity to prevent the next Fukushima-like event, regardless of location, industry, or sector, will require a holistic design encapsulating a philosophy that proactively considers all variables that have enabled previous events.

      • Any functional architecture for this task cannot be constrained by the not-invented-here-syndrome, defense of guilds, proprietary standards, protection of business models, national pride, institutional pride, branding, culture, or any other factor.

3) Adopt a Finely Woven Decision Tapestry with Carefully Crafted Strands of Human, Sensory, and Business Intelligence

Data provenance is foundational to any functioning critical system in the modern organization, providing:

      • Increased accountability

      • Increased security

      • Carefully managed transparency

      • Far more functional automation

      • The possibility of accurate real-time auditing
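As an illustrative sketch only, and not a description of any particular product, the accountability and auditing benefits above can be seen in a tiny hash-chained provenance log: each record carries the hash of its predecessor, so any later alteration is detectable during an audit. All names and payloads here are hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass, field

def _digest(author: str, payload: str, prev_hash: str) -> str:
    # Hash the record contents together with the previous record's hash,
    # chaining entries so any later alteration is detectable.
    data = json.dumps({"author": author, "payload": payload, "prev": prev_hash})
    return hashlib.sha256(data.encode()).hexdigest()

@dataclass
class ProvenanceLog:
    records: list = field(default_factory=list)

    def append(self, author: str, payload: str) -> None:
        prev = self.records[-1]["hash"] if self.records else "genesis"
        self.records.append({
            "author": author,
            "payload": payload,
            "prev": prev,
            "hash": _digest(author, payload, prev),
        })

    def audit(self) -> bool:
        # Re-derive every hash; a single edited record breaks the chain.
        prev = "genesis"
        for r in self.records:
            if r["prev"] != prev or r["hash"] != _digest(r["author"], r["payload"], prev):
                return False
            prev = r["hash"]
        return True

log = ProvenanceLog()
log.append("sensor-7", "tsunami model: max wave 15m")
log.append("risk-team", "escalated to regulator")
assert log.audit()

log.records[0]["payload"] = "max wave 3m"   # silent tampering
assert not log.audit()                       # the audit detects it
```

The point of the design is that accountability and real-time auditing fall out of the data structure itself, rather than depending on anyone choosing to disclose.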

4) Extend advanced analytics to the entire human workforce

      • Incentives for pre-emptive problem solving and innovation

      • Automate information delivery:

        • Record notification

        • Track and verify resolution

        • Extend network to unbiased regulators of regulators

      • Plug-in multiple predictive models:

        • Establish resolution of conflicts with unbiased review

        • Automatically include results in reporting to prevent obstacles to essential targeted transparency as occurred in the Fukushima incident

5) Include sensory, financial, and supply chain data in real-time enterprise architecture and reporting

      • Until this year, extending advanced analytics to the entire human workforce was considered futuristic (see the 1/10/2012 Forrester Research report, Future of BI), in part due to scaling limitations in high-performance computing. While always evolving, the design has existed for a decade.

      • Automated data generated by sensors should be carefully crafted and combined in modeling with human and financial data for predictive applications in risk management, planning, regulatory oversight, and operations.

        • Near real-time reporting is now possible, so governance structures and enterprise architectural design should reflect that functionality.
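To make the notification and resolution-tracking steps above concrete, here is a minimal, hypothetical sketch: warnings are recorded, resolutions tracked, and unresolved items appear in every report automatically, so a warning cannot be quietly dropped the way Tepco’s internal tsunami estimates were. The class and field names are illustrative assumptions, not an actual system design.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Warning:
    source: str
    detail: str
    raised: datetime
    resolved: bool = False
    resolution: str = ""

class WarningRegistry:
    """Record warnings, track resolution, and force unresolved
    items into every report so none can be quietly dropped."""

    def __init__(self):
        self._items: list[Warning] = []

    def record(self, source: str, detail: str) -> Warning:
        w = Warning(source, detail, datetime.now(timezone.utc))
        self._items.append(w)
        return w

    def resolve(self, w: Warning, resolution: str) -> None:
        w.resolved = True
        w.resolution = resolution

    def report(self) -> dict:
        # Reporting is automatic: open warnings appear whether or not
        # anyone chooses to include them.
        open_items = [w for w in self._items if not w.resolved]
        return {"open": len(open_items),
                "sources": sorted({w.source for w in open_items})}

reg = WarningRegistry()
w1 = reg.record("engineering", "internal model predicts 15m tsunami")
reg.record("seismology", "offshore fault capable of M9 event")
reg.resolve(w1, "seawall review scheduled")
```

In a real deployment the report would also flow to external reviewers (the “regulators of regulators” above), removing the single point of suppression.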


Conclusion

While obviously not informed by a first-person audit and review, if the reports and quotes from witnesses surrounding the Fukushima crisis are accurate, and they are generally consistent with dozens of other human-caused crises, we can conclude the following:

The dysfunctional socio-economic relationships in this case resulted in an extremely toxic cultural dynamic across academia, regulators and industry that shared tacit intent to protect the nuclear industry. Their collective actions, however, resulted in an outcome that idled the entire industry in Japan with potentially very serious long-term implications for their national economy.

Whether psychological, social, technical, economic, or some combination thereof, it would seem that no justification for not deploying the most advanced crisis-prevention systems can be left standing. Indeed, we all share a moral imperative that demands we rise above our biases, personal and institutional conflicts, and defensive nature to explore and embrace the most appropriate solutions, regardless of origin, institutional labeling, media branding, or any other factor. Some crises are simply too severe not to prevent.

Mark Montgomery
Founder & CEO
Kyield
http://www.kyield.com

Best of Kyield Blog Index


I created an index page containing links to the best articles in our blog with personal ratings:

Best of Kyield Blog Index.

Book Review: What’s Holding You Back? — By Robert J. Herbold


While I rarely seem to have time for book reviews, the timing, content, and match to current needs of Bob Herbold’s new book are rarer still, so I wanted to share some thoughts while they are fresh. I read the book while on annual vacation in the San Juan Mountains with my wife at the end of September.

As a former business and organizational consultant with dozens of similar cases in my past, combined with years of pro bono advising for the public sector and non-profits, and recent partner negotiations with multinationals, I can confirm that the operational guidance throughout the book is spot on.

Indeed, many of the current economic challenges facing the U.S. and EU can be directly traced to the consensus cultures in large organizations of the type highlighted consistently in the business cases Bob shares. In case after case, he shows us how lack of accountability, fear of negative impact on careers, refusal to take decisive, bold action meeting actual needs, and poor cultures for innovation have led to failure in our hyper-competitive global economy. We know what works and what doesn’t, but the truth is that what works is quite often very difficult and uncomfortable, not unlike team competition on the football field or climbing mountains.

I suspect that this book will clash sharply with the predominant idealism found in government, academia, unions, and multinationals that have sufficient market power to ignore competitive issues during the tenure of current management, or so many believed until very recently. For decades now, the most common response to increased competition has been to circle the wagons and kick the can down the road for the next generation to worry about, which is of course why Western economies in particular are suffering today. However common and popular, any such culture and practice is failure by any credible definition of leadership. This is why most strong leaders agree that we are suffering from an era of leadership crisis; it’s impossible to conclude otherwise.

Some managers in consensus cultures might even consider Bob’s message brutal, claiming that such a management philosophy is politically unacceptable in their organizational culture. Such a conclusion would be accurate in many organizations I’ve worked with, which is why crisis is so common today. The mathematical truth in a world with finite resources is indeed often perceived as brutal, particularly by those who have enjoyed a life of surplus and subsidy. These are precisely the cultures and managers who need to fully embrace the teachings offered by Bob, if for no other reason than that everything such cultures claim to care about is being economically devastated by uncompetitive philosophies and practices. It is ironic that what’s holding our economy back is so many cultures and leaders refusing to follow the proven practices that work in the real economy, attempting to protect a world that no longer exists rather than deal with the reality we face on planet Earth today.

While I found a few items I could quibble with relating to my own specialty work in Kyield, they are minor compared to the much needed broader message. A few of my favorite conclusions include:

  • Avoid commodity hell

  • Staff for success

  • Move weak performers out quickly

  • Creativity and six sigma don’t mix

  • Demand accountability and decisiveness

  • Value ideas from anywhere

  • Exploit inflection points

In each of these areas and many others, Bob provides some detail on how to execute, whether in large global companies such as P&G and Microsoft, where he served, or in business units dealing with many of the same issues.

Finally, I’d like to share a few personal thoughts. While I’ve never met Bob in person, as is often the case in business we have multiple shared acquaintances, so I pinged him while reading the book and we’ve exchanged a few emails since. I often thought about Bob in his role as COO at Microsoft, when his duties included dealing with the DOJ on one hand and Bill Gates on the other, which frankly must have been among the most difficult positions any modern executive has endured. The outcome for Bill Gates and Microsoft was exceptional by any known metric, which in my view can be largely attributed to Bob Herbold.

In a recent interview and podcast with the Puget Sound Business Journal (my old stomping ground), the editor asked Bob about his experience at Microsoft and why the company has not been more innovative. His answer was almost identical to my own when answering the same question for my late partner, Dr. Russell Borland, who was on the founding team of Word and a key person in many of the early products that still deliver most of the profits for MSFT.

Paraphrasing the answer: “It’s very difficult to voluntarily cannibalize two of the most profitable products in the history of business (Office and Windows)”.

Frankly, this reality was not easy for Russell to digest, nor even for me, an early booster of Microsoft. However, given the fiduciary responsibility that requires quantitative reasoning to guide decision making, a CEO of a public company wouldn’t voluntarily cannibalize such cash cows, and Steve Ballmer hasn’t, so it’s up to customers and innovators. That’s why we need new companies; creative destruction and disruption are essential to our economy.

Unfortunately for the world, when companies that dominate our work environment with commoditized products fail to innovate, and fail to provide easily adaptable tools that enable differentiation and competitiveness, the risk of failure becomes systemic to the entire global economy, acting like a wet blanket on creativity and economic growth. That’s why we spent 15 years creating Kyield: organizations simply can’t afford commoditization of the digital workplace environment, and the world cannot afford not to embrace the rare generational leap Kyield represents.

If you are a senior executive in a mid- to large-size organization, you will hopefully already have this book. If not, buy it for all of your managers, then call me and let’s talk about what it will take to install Kyield in your organization so that you can execute and optimize the lessons learned:

What’s Holding You Back?
10 Bold Steps That Define Gutsy Leaders
By Robert J. Herbold

Key patent issued


My key patent for Kyield was issued today by the USPTO as scheduled earlier this month.

Title: Modular system for optimizing knowledge yield in the digital workplace

Abstract: A networked computer system, architecture, and method are provided for optimizing human and intellectual capital in the digital workplace environment.

To view our press release go here

To view the actual patent go here

I will post an article when time allows on the importance of this technology and IP, and perhaps one on the experience with the patent system. Thanks, MM

Too Big to Fail or Too Primitive to Succeed?


Our economy very nearly experienced a financial version of Armageddon due to the gap between a primitive governance structure and the highly sophisticated tools employed by a few whose interests were deeply misaligned with the needs of sustaining our national and global economy. We have all since unwillingly experienced the negative impacts of untamed technology while enjoying few of the benefits of the tamed, whether for resolution of the current crisis or prevention of the next.

Given the systemic nature and scale of the financial crisis, and in consideration of the poor ongoing economic conditions, it’s clear that the financial industry, political process, and regulators have all fallen short of achieving the individual mission of each, particularly in consideration of current technological capabilities.

For the past several months, financial institutions have been attempting to convince regulators that they should not be labeled Systemically Important Financial Institutions (SIFIs). The process of implementing the 2010 Dodd-Frank law in the U.S. has resulted in spin-offs intended to avoid increased U.S. regulation, while the new global rules for multinational banks on top of Basel III, including surcharges and increased capital ratios, are prompting a comprehensive rethink of the fundamental assumptions underlying the global banking model.

Observing this dynamic invites a mental image of bureaucrats, politicians, and academics in team competition, each applying favored remedies such as duct tape layered over economic journals in a futile attempt to plug giant leaks in the hull of a Nimitz-class aircraft carrier.

When basic human greed clashed with globalization, networking, and technology, the combination introduced a complexity far beyond the organizational structures and tools available to regulators or corporations. Indeed, the reaction we’ve observed suggests that the remedies employed to manage this crisis were designed for a war fought over seven decades ago, during the Great Depression, an era when state-of-the-art technology was represented by the IBM Type 285 Numeric Printing Tabulator, capable of tabulating 150 cards per minute. IBM’s hourly sales today approach its annual sales of 1933, and billions of records are now processed in seconds, yet our archaic regulatory system is employing printing presses in response to the largest financial crisis in 75 years.

A great deal has been learned in recent years beyond traditional economic theory about the systemic nature of networks, social behavior, contagion, and the global economy, with considerable investment in basic and applied research focused on technologies specifically designed to prevent systemic crises.

In the era of high-performance computing on an increasingly interconnected planet with ever-expanding pipes, economic tipping points can be reached very quickly, quickly enough to bankrupt even the wealthiest nation on earth, particularly one in a weakened economic condition suffering from structural problems. Focusing on SIFIs is of course essential, even if tardy by decades, but the emphasis should be on managing real systemic risk, which requires a very specific data structure that ensures data integrity, enhanced security, system-wide automation, modernized organizational structures, and continual, real-time improvement.

Without deep intelligence on constantly changing relationships, captured in a carefully constructed semantic layer and automatically managed by pre-configured data valves, systemic risk is impossible to manage well, or, I would argue, even at a minimally acceptable level.
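To make the idea of a "pre-configured data valve" concrete, here is a minimal sketch of one possible interpretation: a rule that monitors a stream of exposure updates and closes (blocks and flags) once a counterparty's aggregate exposure crosses a preset limit. The class name, method names, and thresholds are my own illustrative assumptions, not the design described in this article.

```python
from collections import defaultdict

class DataValve:
    """Hypothetical sketch: blocks new exposure once a preset ceiling is reached."""

    def __init__(self, limit):
        self.limit = limit                    # pre-configured exposure ceiling
        self.exposure = defaultdict(float)    # running exposure per counterparty

    def record(self, counterparty, amount):
        """Add a new exposure; return False (valve closed) if the limit would be breached."""
        if self.exposure[counterparty] + amount > self.limit:
            return False                      # valve closes: flagged for review
        self.exposure[counterparty] += amount
        return True                           # valve open: within the preset limit

valve = DataValve(limit=100.0)
print(valve.record("Bank A", 60.0))   # within limit
print(valve.record("Bank A", 50.0))   # would exceed 100.0, so the valve closes
```

A real deployment would of course sit atop the semantic layer the article describes, tracking relationships rather than a single number, but the principle is the same: the threshold is configured before the crisis, and enforcement is automatic.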

Sophisticated new multi-disciplinary systems have been designed specifically to address the modern challenges in systemic risk management, but they have yet to be built out and deployed. Policy makers should insist on the new generation of technologies to better protect citizens and the economy; regulators should embrace and promote the technology, for it is impossible to meet their mission otherwise; and financial institutions should adopt the technology for its rare ROI and sharply reduced levels of risk.

Data Integrity: The Cornerstone for BI in the Decision Process


When studying methods of decision making in organizations, mature professionals with an objective posture often walk away wondering how individuals, organizations, and even our species have survived this long. When studying large systemic crises, the exercise can truly be a game changer in the sport of life, providing motivation that extends well beyond immediate personal gratification.

Structural integrity in organizations, increasingly reflected by data in computer networking, has never been more important. The decision dimension is expanding exponentially due to data volume, global interconnectedness, and increased complexity, thus requiring much richer context, well-engineered structure, far more automation, and increasingly sophisticated techniques.

At the intersection with the consumer realm, powerful new social tools are available worldwide that have proven valuable in effecting change, but blind passion is ever-present, as is self-serving activism from all manner of guilds. Ideology surrounding the medium plays a disproportionate role in phase 3 of the Internet era, including crowdsourcing, social networking, and mainstream journalism. Sentiment can be measured more precisely today, but alignment is elusive, durability questionable, and integrity rare.

Within the enterprise, managers are dealing with unprecedented change, stealthy risk, and compounding complexity driven in no small part by technology. Multi-billion dollar lapses sourced from multiple directions have become common, including a combination of dysfunctional decision processes, group/herding error, self-destructive compensation models, conflicting interests, and poorly designed enterprise architecture relative to actual need.

Specific to enterprise software, lack of flexibility, commoditization, high maintenance costs, and difficulty in tailoring have created serious challenges for crisis prevention, innovation, differentiation, and global competitiveness. It is not surprising, then, given the exponential growth of data, which often manifests in poor decisions in complex environments, that Business Intelligence (BI) is a top priority in organizations of all types. BI is still very much in its infancy, however, often locked in the nursery, subjecting business analysts to dependency on varying degrees of IT functionality to unlock the gate to the data store.

Given the importance of meaningful, accurate data to the mission of the analyst and the future of the organization, recent track records in decision making, and challenges within the organization and the IT industry, it is not surprising that analysts would turn to consultants and cloud applications in search of alternative methods, even when aware that doing so extends the vicious cycle of data silos.

Unfortunately, while treating the fragmented symptoms of chronic enterprise maladies may provide brief episodic relief, only a holistic approach specifically designed to address the underlying root causes is capable of satisfying the future needs of high performance organizations.

The dirty dozen fault lines to look for in structural integrity of data

  1. Does your EA automatically validate the identity of the source in a credible manner? (Y/N)

  2. Is your IT security redundant, encrypted, bio protected, networked, and physical?  (Y/N)

  3. Are your data languages interoperable internally and externally? (Y/N)

  4. Is the enterprise fully integrated with customers, partners, social networking, and communications? (Y/N)

  5. Do you have a clear path for preventing future lock-in from causing unnecessary cost, complexity, and risk?  (Y/N)

  6. Are data rating systems tailored to specific needs of the individual, business unit, and organization? (Y/N)

  7. Are original work products of k-workers protected with pragmatic, automated permission settings? (Y/N)

  8. Does each knowledge worker have access to organizational data essential to their mission? (Y/N)

  9. Are compensation models driving mid to long-term goals, and well aligned with lowering systemic risk? (Y/N)

  10. Is counter party risk integrated with internal systems as well as best available external data sources? (Y/N)

  11. Does your organization have enterprise-wide, business unit, and individual data optimization tools? (Y/N)

  12. Are advanced analytics and predictive technologies plug and play? (Y/N)

If you answered yes (Y) to all of these questions, then your organization is well ahead of the pack, even if perhaps a bit lonely. If you answered no (N) to any of these questions, then your organization likely has existing fault lines in structural integrity that will need to be addressed in the near future. The fault lines may or may not be visible even to the trained professional until the next crisis, of course, at which time it becomes challenging for management to focus on anything else.
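The twelve questions above amount to a simple self-assessment, and turning them into one is straightforward. The sketch below is purely illustrative (the abbreviated question labels and function name are my own), showing how an organization might record its answers and list the remaining fault lines:

```python
# Abbreviated labels for the twelve fault-line questions above (illustrative).
QUESTIONS = [
    "Source identity automatically validated",
    "Security redundant, encrypted, bio-protected, networked, physical",
    "Data languages interoperable internally and externally",
    "Integrated with customers, partners, social networking, communications",
    "Clear path for preventing future lock-in",
    "Data rating systems tailored to individual, unit, and organization",
    "K-worker products protected with automated permission settings",
    "Knowledge workers have access to mission-essential data",
    "Compensation aligned with long-term goals and lower systemic risk",
    "Counterparty risk integrated with internal and external sources",
    "Data optimization tools at enterprise, unit, and individual levels",
    "Advanced analytics and predictive technologies plug and play",
]

def fault_lines(answers):
    """Given twelve booleans (True = yes), return the questions answered no."""
    return [q for q, yes in zip(QUESTIONS, answers) if not yes]

answers = [True] * 10 + [False, False]   # example: two "no" answers
for gap in fault_lines(answers):
    print("Fault line:", gap)
```

Any non-empty result corresponds to the "no" case described above: a structural fault line to address before the next crisis makes it impossible to ignore.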

Thoughts on the Santa Fe Institute


A topic of considerable thought, discussion and debate for many of us long before a series of ever-larger crises, the Santa Fe Institute (SFI) chose the theme of complexity in regulation for their annual meeting this week. Prior to sharing my thoughts on the important topic of simplifying regulation in a future post, which will be covered more extensively in my book in progress, I want to focus a bit on SFI.

I was fortunate to attend last year’s 25th anniversary meeting at SFI, but this year I was only able to view the final full day via webcast, which was excellent. The official SFI about page can be found here, although, having written many of these descriptions myself, I’ve yet to write or read one that captures the essence of the organization, people, or contributions, so please allow the liberty of a few additional lines in first person.

I have been following SFI regularly for over 15 years, and since moving to Santa Fe nearly two years ago I have had many interactions there. SFI essentially pioneered complexity as a discipline, but it has also been a leader in what I refer to in my own work as a mega-disciplinary approach to discovery, without which, frankly, many researchers and their cultures can become blinded, discoveries stalled, and R&D left performing substantially below potential.

One of several strengths at SFI is its ability to draw from a very broad universe of scholars, each of whom is a leading expert in a specific discipline but who also share an interest in complexity theory, which affects everything else, as well as in the need to work across disciplines to optimize learning.

The intimate size and environment of SFI is no doubt partially responsible for attracting so many leading scholars to contribute and engage. After living in Santa Fe, visiting the campus, and attending multiple events, and after a great many exchanges with larger institutions over 30 years, I can certainly understand the appeal for permanent faculty, visiting scholars, post docs, and business network members.

This year’s event was organized by Chris Wood and David Krakauer, two individuals at SFI I have had the pleasure to get to know recently (forgive my informality here; it comes naturally). David heads up the faculty, and Chris divides his time between research, administration, and running the SFI business network. The two represent a diverse faculty and also make an interesting combination, with Chris being the calm diplomatic type while David exudes sufficient rebelliousness at times for me to wonder, despite his brilliance, how he prospered at Cambridge (a reaction born of my own rebel instincts and frustration with academia), until I reflect on his current role. It is precisely the challenge of shepherding deep diversity that brings out the best in people, one of several skills David demonstrates when leading groups.

A good way to learn more about SFI from afar is to view a sample of their research online, including videos. Their model, however, like many, is not perfect: the institute is substantially dependent on donations in what has been a very uncertain time and economy of late, so for those who may be seeking a worthy tax deduction this year, I would urge you to consider a donation. For larger corporations and foundations, I recommend exploring the SFI business network, which is similar in many respects to the experimental virtual network I operated in the late 1990s, but which also benefits from physical conference interactions throughout the year, not just with SFI scientists and staff but also with other business network members. Several of our network members have also been business network members of SFI, so I have known quite a few over the past dozen years, including Franz Dill on our Kyield advisory board, who represented P&G for many years at SFI. For corporate and foundation executives in particular, I highly recommend viewing a short video interview with SFI Vice President Nancy Deutsch to explore relationship options.

Unlike universities and federal labs that grew large physical empires with massive overhead, SFI’s small size, fewer conflicts, independence, location, and talent attract exceptional human quality, providing a rare situation certainly worth preserving and improving. My hope for SFI is that the community and entity will continue to adapt, evolve, and forge a strong and diverse financial structure that could become a model for the future.

While it may not always be obvious to some of my colleagues in business and finance, the independent theoretical research produced by SFI is essential for thinking through, and eventually helping to overcome, the world’s most pressing challenges, which is, from my perspective, excellent long-term strategic alignment for any mission statement.

Mark Montgomery
Founder & CEO
Kyield

How can information technology improve health care?


I recall first asking this question in leadership forums in our online network in 1997, hoping that a Nobel laureate or Turing Award winner might have a quick answer. A few weeks earlier I had escorted my brother Brett and his wife from Phoenix Sky Harbor airport to the Mayo Clinic in Scottsdale, seeking a better diagnosis than the three-year death sentence he had just received from a physician in Washington. Unfortunately, Mayo Clinic could only confirm the initial diagnosis of amyotrophic lateral sclerosis (ALS).

In my brother’s case, the health care system functioned much better than did the family; it was the dastardly disease that required a cure, along with perhaps my own remnant hubris, but since his employer covered health care costs we were protected from most of the economic impact. I then immersed myself in life science while continuing the experiential learning curve in our tech incubator. It soon became apparent that solving related challenges in research would take considerably longer than the three years available to my brother, his wife, and their new son. Close observation of health care has since revealed that research was only part of the challenge.

Symptoms of an impending crisis

During my years in early stage venture capital, symptoms of future economic crisis in health care appeared in several forms, including:

  • R&D failed to consider macroeconomics
  • Technologies that increased costs were the most likely to be funded and to succeed
  • Technologies that decreased costs were often unfunded and/or not adopted
  • Cultural silos in scientific disciplines were entrenched as effective guilds
  • Professional compensation packages were growing rapidly
  • Regulatory bureaucracy was devolving
  • The valley of death was expanding rapidly
  • The trajectories of health care costs and customer means formed an opposing X

Over the course of the following decade, while observing my father’s experience with diabetes (including the billing), it became obvious that few stakeholders in the life science and health care ecosystem were given a financial incentive to preserve the overall system, meaning the challenge was classically systemic. Clayton Christensen sums up the situation in health care succinctly: “clearly, systemic problems require systemic solutions.”

12 years to design an answer

When looking at the challenges within information technology and health care first as individual systems, and then combined as a dynamic integrated system, we came to several conclusions that eventually led to the design of our semantic healthcare platform.

Ten essentials:

  1. Patients must manage their own health, including their data
  2. Universal computing standards, likely regulated in health care
  3. A trustworthy organization and architecture
  4. Simple enough to use with entry-level skills
  5. Unbiased, evidence-based information and learning tools
  6. Highly structured data from inception
  7. High-volume data meticulously synthesized from inception
  8. Integrated professional and patient social networking
  9. Mobile, to include automation, analytics, and predictive tools
  10. Anonymized data should be made available to researchers

Probable benefits

While we must build, scale, and evaluate our system to confirm and measure our predictions, we anticipate measurable benefits to multiple stakeholders, including:

  • Higher levels of participation in preventative medicine
  • More timely and accurate use of diagnostics
  • Reduced levels of unnecessary hospital visitation
  • Higher levels of nutrition
  • Lower levels of over-prescription
  • Reduced levels of human error
  • Improved physician productivity
  • Reduced overall cost of health care
  • Much greater use of analytics and predictive algorithms
  • Expedited paths to discovery for life science researchers

In May of 2010, we unveiled our semantic health care platform with a companion diabetes use-case scenario, which is written in a storytelling format and is freely available as a PDF.