Clever is Cute as Sustainable is Wise


If the financial crisis confirmed anything, it is that the majority of humans are followers, not leaders, and that leaders throughout our society have yet to capture the significance of technology to their members and organizations.

One of the primary causal factors cited by thought leaders in studying crises is poor leadership, including leaders who accept misaligned or conflicted interests. When we see “skimming off the top” in others we label it corruption, yet few see it in themselves at all, or they choose to ignore it, with the same outcome. Balance is obviously needed for survival, and managing that balance well is key for modern leaders; but when we over-emphasize short-term profits, we elevate the influence and power of those who are skilled at winning very short-term battles rather than long-term wars. I have personally experienced that strategy in organizations and observed it in many others; it doesn’t end well.

One problem with the short-term leadership model is that the skills for software programming, instant trading, manipulating markets, or otherwise amassing great wealth quickly do not necessarily translate to good leadership in a private company or government, or to good stewardship in philanthropy. Indeed, in my own observations and those of many others, quite the opposite is often true, yet our information institutions instruct society to emulate the clever rather than the wise. Should we be surprised, then, at the trend line of manipulation, polarization, and ever deeper crises?

Unlike the early days of the industrial revolution, in the second inning of the information revolution we now understand that most of the challenges facing the human species are long-term in nature, so we must realign our thinking and structures accordingly, including financial incentives and leadership development. Alas, since the long-term has been greatly compressed by consistent failure of short-term behavior, our entire species must now learn to act in the short-term on behalf of our mutual long-term interests. Easier said than done in our culture. The good news is that it’s quite possible…tick-tock, tick-tock, tick-tock.

The process of identifying, mentoring, and recruiting strong leaders often consists of conflicting advice that tends towards self-propelling cultures, regardless of organizational type. In addition to skill sets and track records sketched from misleading data, leaders are often selected based on ideology, dysfunctional networks, and susceptibility to peer pressure, instead of independent thought, good decision making, and wisdom.

Given the evidence, a rational and intelligent path would be to reconstruct our thinking and behavior surrounding the entire topic of leadership and organizational structures, and then tailor that thinking specifically for the environment we actually face, with tools specifically designed for the actual task. For many cultures, such a path begins by emerging from deep denial and embracing evidence-based decision making. Once they emerge from the pit of denial, they soon discover, among other truths, that resources are not infinite after all, that personal accountability is not limited to the inefficiencies of organizations, and that both the problems and solutions we face are inextricable from computing, organizational management, and personal accountability. Only then will the path to sustainability begin to take shape clearly enough to differentiate the forest from the trees.

Yet another of the many disciplines related to this topic defines psychosis as a “mental disorder characterized by symptoms, such as delusions, that indicate impaired contact with reality”. An appropriate translation of that impairment in organizations might be “refusal to adopt tools and models designed to achieve sustainability”, aka survival.

If this sounds familiar in your organization, it could well be traced to your leadership development model and process, for leaders are the decision makers who have budget authority. Perhaps it’s time for your organization to redefine strategic from clever to wise, and synchronize the organizational clock with present-day reality?

Hidden costs of complexity in the enterprise


It is often the case when consuming fresh evidence that a truism is reconfirmed: what is not visible is far greater than what is, in terms of opportunity, risk, and cost. Few concepts are so profoundly transformative once this one begins to take shape in the mind’s eye.

Unlike biological evolution where logical adaptation occurs over many generations, the dominant complex systems on earth today are driven by technology, evolving very rapidly, and of course subject to human manipulation.

From competitive advantage to utility tax

Similar to healthcare, education, and finance where complexity creep has been manipulated for long periods until surpassing sustainability, enterprise software has become primarily a game of ‘heads I win; tails you lose’ for those paying the bills. While the first generation of enterprise software provided a strong competitive advantage to early adopters, the bulk of systems have long since resembled a tax on admission.

Today most leading enterprise systems offer a competitive advantage to very few organizations; innovation is focused on optimizing commoditization, scale, and capitalization, not products, service, or improvement. The result is a breakdown in product design, which tragically includes a negative influence on innovation for customers worldwide.

The tipping point for market dysfunction in the enterprise occurred more than a decade ago when most of the innovation began to be outsourced to consumer ventures in Silicon Valley that serve a very different purpose for a different customer. Even if the majority of SV VC firms and young social networking entrepreneurs understood the needs of organizations, globally adopted consumer technology cannot provide a competitive advantage to organizations. It should be obvious that if everyone is using the same rigid programs with little if any adaptation or customization, then the advantage falls only to those with scale.

Another sign of market dysfunction in IT more generally is visible by comparing cash generation of industry leaders with their competitors. A recent article in the WSJ revealed that nine tech companies generated $68.5 billion in new cash during the Great Recession while the other 65 competitors in the S&P 500 generated $13.5 billion combined.

The war chests at IT leaders tell a different story for each company, including recent product success at Apple, the power of the network effect with Google on the Web, effectively managed monopolies, and an economy dominated for many years by investment banks, weak regulation and judicial enforcement, and, last but not least, apathy in enterprise customers. I argue that this level of consolidation of market power isn’t good for anyone, including the stockholders of those with the largest war chests: the dividends are either very small or non-existent, and the majority perform poorly in stock appreciation.

Need for ‘market farming’

It’s essential to understand the difference between proprietary applications and proprietary standards, particularly relating to network effect, market power, and systemic risk. IT architecture in the network era is of course global and largely unregulated outside of minimal anti-trust enforcement and voluntary standards bodies, so it should not be surprising that proprietary standards still dominate enterprise computing, or that proprietary systems are manipulated for self-gain. That is, after all, the fiduciary duty of public companies with bonus incentives, stock options, and careers dependent upon the same.

Of course it’s also true that, given government’s terrible track record in regulating technology, combined with nationalist conflicts that too often suffocate innovation with protectionism, very few of the vastly divergent stakeholders in the ecosystem favor increased regulation. That leaves us with the heavy lifting of systems design and market education to attract intelligent decisions by early adopters in an environment that is predatory to new entrants. Markets do eventually depend upon well-informed customers acting in their best interest, even in expert professional roles, particularly in markets dominated by lock-in.

From the perspective of a consultant and venture capitalist who has coached hundreds of businesses of all sizes on defending against predation, it is an understatement to say that enterprise customers need to take a far more proactive role as experts in market farming. The financial incentive to do so is certainly sufficient, as is the risk in not doing so. Until such time as more effective global governance of standards is enforced, however, national governments, non-profits, consumers, and especially CIOs have a special responsibility to be proactive as ‘market farmers’ within the IT ecosystem.

Ownership of data

In the modern world, data represents knowledge, security, identity, health, and wealth. Those who control proprietary standards within the network effect in an unregulated environment not only can extort a high price of entry to the utility, but can also greatly influence if not control ownership of data. We see that often today on the public web (see LinkedIn’s license agreement for an example).

Our position with Kyield is that while we have every right, and indeed a responsibility, to protect our proprietary intellectual capital within our platform, we do not have the right to extend the power enabled by the platform over others. I would even go so far as to say that we have a responsibility to defend customers from others doing so, to the best of our ability. Our power should only extend to the borders of our internal systems, beyond which we should embrace universal standards; and so we do, even though they are far from perfect and extremely slow to evolve relative to need.

Networks are not new; they are just much larger, contain data representing far more than ever before, and are of course global. Nations created standards for electric grids presumably to prevent the electric companies from owning every appliance maker, if not their economy. The ‘extend and conquer’ strategy is predatory and obviously very dangerous to the global economy, and should therefore be regulated and aggressively enforced. Nationalist conflicts and protectionism have prevented proper regulation.

It’s my personal position that no one should have power over the languages used for public transactions, nor should any company or group of companies (or any entity, including governments) have influence, power, or ownership of data other than the original owner of that data or those who have been granted rights of use. Ownership and control of data is, I believe, a fundamental right for individuals and legal entities. It would surprise me if the U.S. Supreme Court failed to better clarify data standards and ownership at some point in the near future.

Of course standards bodies have a higher calling not only to manage the standards process properly within meaningful time constraints, but also to consider the influence of conflicting interests on their financial model, as well as sustainable economics relating to adoption. I recently suggested to the W3C, for example, that their Wiki was insufficient for communicating the benefits of the languages the body created and approved. The response was that they didn’t have the bandwidth and that communications were up to vendors. That response fell well short of what leaders in science have been teaching for many years, suggesting to me that significant change is needed in the model and culture.

With very few exceptions, during the 15 years I have been observing the evolution of computing standards, it has been confirmed often that the IT industry is no more able to self-regulate than the finance or healthcare industries. I wish they could, but wishing doesn’t make it so.

Beauty of simplicity

Those who doubt the need for reducing complexity in our society probably haven’t spent much time working pro bono for a worthy cause that cannot be saved due to entrenched interests owing their existence to manipulated complexity. I have seen the dilemma many times during my career, manifest in legislation where the individual interests of the few manipulate the outcome for the many, thereby threatening the whole, eventually including themselves of course, with little or no apparent understanding of the impact of their self-centered activism. The current political situation in the U.S. is obviously not what the Founding Fathers (and Mothers) intended, and is indeed what they feared most.

A very similar dynamic evolved in computing over the past few decades, which I began to recognize in the mid-1990s when we converted our consulting firm to one of the first commercial Internet labs and incubators. It was only when I trained to become a network engineer and object-oriented programmer that the degree of manipulation and systemic risk became apparent.

“Complicated systems and their combinations should be considered only if there exist physical-empirical reasons to do so.” –Albert Einstein

There have been reasons for the increasing complexity in enterprise software, but they have been primarily limited to protecting cash cows and extending market share in the global network ecosystem. While economies of scale and Moore’s law pushed hardware costs lower, software costs escalated while arguably delivering less value, despite a massive reaction by open source. The parallel evolution in healthcare, education, and government in the U.S. was very similar: we were all paying more for less.

The problem today, of course, is that our collective resources can no longer afford even the status quo, much less the current trajectory, of costs due to manipulated complexity. In enterprise software, manipulated complexity also creates very high maintenance costs, now exceeding 20% of product cost per year in some cases, and of course a much higher level of security risk. This is a dangerous game, and largely unnecessary.

One of the elements I liked best when reviewing the papers of the Google founders and later their early beta was the elegance of simplicity. Google was among the first substantial efforts in the Internet era to reduce a highly complex environment to a very simple interface designed for humans in their own natural language, and it worked very well indeed.

Ironically, I found Google while searching for raw IP in an effort to achieve something similar for the enterprise. But enterprise data had not been converted to simple HTML like the public web; rather, it was locked in incompatible languages and data silos, and of course HTML lacked the ability to provide the embedded intelligence we were seeking. So we had a long wait for standards to catch up to the vision often associated with the semantic web. I like to think we used the time wisely.

Simplicity for the enterprise has not yet arrived at the station, but the train is full and fast approaching.

Toyota’s failure to act on the dots


It seems that every few weeks we hear of another major series of tragedies leading to a crisis that could have been substantially mitigated, if not entirely prevented. Among the most disturbing to discover, and most costly, are those involving safety issues that either threaten or cause loss of life.

Often during the past decade the organization involved has been a government agency, but this quarter Toyota wins the prize for failure to act on the digital dots in the enterprise (in fairness, Toyota should share the prize with NHTSA, also the front runner for the annual prize; it’s still early in the year).

In the case of Toyota, we may never know how many digital red flags existed within the enterprise network, but the clear pattern in other cases suggests a high probability that warnings were common within communications, documents, and web servers indexed by enterprise search engines. As in previous events, these problems were known for years by the corporation and the agencies charged with regulation, yet they collectively failed to prevent loss of life, resulting in personal tragedy for a few and far-reaching economic destruction in one of the world’s premier corporations.

While growing too fast may have contributed to this problem, it was organizational system design that allowed it to become one of the worst crises in the company’s history and a serious threat to its future. Toyota has done a good job with innovation in product design compared to most of its competitors, but failed to adopt the innovation in organizational management that would sustain that success.

When will leaders of the largest organizations finally understand that failing to invest in how organizations function is often the most expensive mistake executives will ever make? Given the size and impact of industry leaders, combined with the systemic nature of the global economy, these types of failures are unacceptable to growing numbers of communities, companies, and consumers.

An article in the WSJ sums it up well with this quote:

Liu Jiaxiang, a Toyota dealer in Shenzhen, said sales at his stores in southern China have been “steady” since late January. “What keeps me awake at night is a possible long-term erosion of customer confidence in the Toyota brand” because of the recall problem, he said.

Denial is a powerful force in humans, particularly when combined with peer pressure, social herding, politics, bureaucracy, contradictory information, and sheer inertia that seems unstoppable at times. It’s also true that cultures of invincibility have a consistent track record of experiencing crisis and decline soon thereafter.

Mr. Toyoda, if this post finds you, I invite your team to revisit our publications and jointly explore how our holistic semantic enterprise platform could have reduced or prevented this series of recalls. Had Toyota deployed Kyield when your domain first appeared in our Web logs, we almost certainly could have contained this event to a multi-million dollar product defect issue, rather than the multi-billion dollar corporate crisis that it has become.

Mark Montgomery
Founder & CEO – Kyield
Web: http://www.kyield.com
Blog: https://kyield.wordpress.com
email: markm@kyield.com
Twitter: @kyield

Automation of the modern day ‘factory floor’


I have written an op-ed on how to overcome the series of systemic failures we’ve experienced over the past dozen years, and submitted it to a major publication.

In that process I shared the piece with a few people in my network, including an old friend and mentor who was on the founding team of one of the most important technology companies in the past century — ever, actually. In that discussion, I found myself using the factory floor automation in the industrial revolution to describe what semantic (AI) technologies can do, and specifically Kyield. My friend suggested that the ‘systemic failure’ in the recent terrorism incident may be as much human as data, to which I replied:

Of course it’s a human problem — just as errors on the factory floor were (in the industrial revolution – so it is with knowledge workers in the information revolution), which is a big reason why so much of (the work) was automated and made transparent to others. Similarly now with robotics entering surgery — we don’t want incompetent workers hiding their mistakes — covering up, protecting turf, their buddies, legacies, or organizations when it can cost large numbers of lives (or the global economy).

This is not to say that we want to replace large numbers of workers with automation, but rather put them to work in a less devastating and more productive manner. We really don’t need dozens of highly paid terrorism experts deciding which visa to yank, when in fact much more complex issues are already fully automated elsewhere in our society. Rather, they should be trained to operate the CKO module in Kyield, for example, so that they can more effectively manage their organizations for continual improvement relative to their mission.

I would add that we cannot afford, nor should we, allow large numbers of mistakes, like a physician’s handwriting on prescriptions, to continue to kill people. Similarly, we should not continue, through our apathy or by allowing lobbyists to manipulate the process, to tolerate now-curable diseases such as misinterpretation of data or lack of interoperability, which determine the fate of wars, economies, and large numbers of lives. Moreover, we certainly cannot afford activism injected into enterprise systems like we observed in the housing bubble and meltdown. We can’t afford the status quo economically or morally. Fortunately, it is no longer necessary with respect to enterprise networks, which determine to a large extent how modern organizations, and thus our society, function.


Web 3.0 Leaders Look to the Year Ahead


Jenny Zaino at SemanticWeb.com asked a group of us to provide predictions for 2010. An interesting mix and worth a close look, particularly for those seeking input from the front lines of Web innovation.

Preventing the next Fort Hood tragedy, by design


The recent tragedy at Fort Hood was only the latest in a series of crises that would likely have been prevented if the U.S. Government had adopted a logical holistic system design when I first began making the argument more than a decade ago. Since that time we’ve witnessed trillions of dollars and tens of thousands of lives lost: 9/11 and two wars, Katrina’s turf battles and incompatible communications, the mortgage bubble and global financial crisis, and now the Fort Hood massacre. The current trajectory of systems design and dysfunction isn’t sustainable.

“The care of human life and happiness, and not their destruction, is the first and only object of good government.” – Thomas Jefferson

While this particular tragedy is still under investigation, patterns are emerging that are very similar to previous crises, including 9/11. So let’s take a closer look at this event relative to what is currently possible with organizational design and state-of-the-art technology in order to better understand how to prevent the next crisis, for it will surely occur unless prevented by a logical holistic system design.

Crisis prevention by organizational design

It is true that some crises cannot be prevented, but it’s also true that most human-caused crises can be, particularly those that are systemic, including all cases cited here. In fact, many tragedies are reported to have been prevented by intelligence agencies without our detailed knowledge, some of which would undoubtedly help inform our democracy if declassified. But we are still obviously missing preventable catastrophic events that we can ill afford to endure as a nation, economically or otherwise.

“In times of change, learners inherit the Earth, while the learned find themselves beautifully equipped to deal with a world that no longer exists.” – Eric Hoffer

In each of the cases mentioned here, including Fort Hood, actionable evidence was available either on the Web or within the content of digital files residing on agency computer networks, but it was not shared with the appropriate individuals or partners in the decision chain, usually due to careerism, turf protection, and justified fear of retribution.

It is difficult for leaders to understand that members in a hierarchical bureaucracy are often punished by micro social cultures for doing the right thing, such as sharing information or taking action to prevent tragedy. A good report from the field on 9/11 is Coleen Rowley’s Memo to FBI Director Robert Mueller in 2002.

Interests are not aligned: Denial does not a better system make

“The really valuable thing in the pageant of human life seems to me not the State but the creative, sentient individual, the personality; it alone creates the noble and the sublime.…” – Albert Einstein

The reality is that interests of the individual and that of the organization are often not well aligned, so system designs need to include intentional realignment. However, in the case of the Fort Hood massacre, red flags were so prevalent that many of us are asking the logical question: How explicit must a threat be before the systems will require action?

Red flags were hidden from those who need to know

In the case of Fort Hood, as with 9/11, the U.S. Government apparently again experienced a data firewall between agency cultures, supported in previous cases by fear-induced interpretation of regulations and defensive micro-cultures within agencies. The Washington Post reported that an FBI-led task force was monitoring emails of the suspect, Army Maj. Nidal M. Hasan, some of which were shared with a Washington field office but not with the military, apparently including Hasan’s supervisors, who were clearly in the camp of ‘need to know’. A properly designed architecture, as described in our recent hypothetical use case scenario for the DHS, would have automatically alerted those in the decision chain who were pre-determined to ‘need to know’ when certain phrases are present; in this case the base commander and security officer, who may have prevented the tragedy in a manner that did not compromise the subject’s rights to privacy or freedom of religion.

“The status quo is the only solution that cannot be vetoed.” – Clark Kerr

One such semantic phrase, for example, that should probably be shared immediately with base commanders and counter-terrorism experts would be: “communicating with known terrorists”. No one in the chain of command, including criminal investigators, should be empowered to prevent that information from reaching those in a position to prevent tragedy, whether the threat is to national security or localized. Indeed, logic suggests that local surveillance might be necessary in order to define the threat, if any.
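The alerting described above can be sketched in a few lines of code. Everything below is a hypothetical illustration rather than Kyield’s actual design: the trigger phrases, the role names, and the naive substring matching are all assumptions (a real system would use semantic matching against embedded metadata, not literal strings).

```python
# Hypothetical routing rules: each trigger phrase maps to the roles
# pre-determined to 'need to know' when that phrase appears.
ALERT_RULES = {
    "communicating with known terrorists": {
        "base commander", "security officer", "counterterrorism analyst"},
    "threat against personnel": {
        "base commander", "security officer"},
}

def route_alerts(document_text: str) -> set:
    """Return the set of roles that must be notified for this document.

    Matching here is a naive case-insensitive substring check, used only
    to illustrate the routing idea.
    """
    text = document_text.lower()
    recipients = set()
    for phrase, roles in ALERT_RULES.items():
        if phrase in text:
            recipients |= roles
    return recipients

# A flagged report reaches every pre-determined role automatically,
# bypassing any single gatekeeper in the chain of command.
report = "Subject observed communicating with known terrorists abroad."
print(sorted(route_alerts(report)))
# → ['base commander', 'counterterrorism analyst', 'security officer']
```

The design point is that the recipient list is fixed in advance by policy, so no individual in the decision chain can veto the notification after the fact.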

Crisis Prevention by Technical Design

Among the many academic disciplines influencing modern enterprise architecture are organizational management, computer science (CS), and predictive theory, which manifests in the modern work place environment as network design, computer languages, and mathematical algorithms. The potential effectiveness of these disciplines depends primarily on three dynamically interrelated factors:

1. Availability and quality of the data

“A popular government without popular information, or the means of acquiring it, is but a prologue to a farce or a tragedy, or perhaps both.”– James Madison

The problem reflected in the decades-old phrase GIGO (garbage-in garbage-out) used in computer science influenced the holistic semantic design of Kyield more than any other factor. Rather than attacking the root of the problem at the source and investing in prevention, CS in general and consumer search in particular have teetered at the edge of chaos by combining clever algorithms and massive computing power to convert unstructured data (GI) to relevance (GO). While search and conversion of unstructured data has improved substantially in the past decade, it cannot compare to a logically designed QIQO (quality-in quality-out) system. Evolving to a QIQO environment from GIGO in organizational computing requires a holistic solution that is focused on prevention, improving work quality, and enhanced innovation.
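As a toy illustration of the QIQO principle (the required fields and the validator below are my illustrative assumptions, not the actual Kyield design), quality can be enforced at the point of data entry rather than recovered afterward by clever algorithms:

```python
# Metadata every record must carry to be admitted at the source.
REQUIRED_METADATA = ("author", "date", "provenance", "classification")

def admit_record(record: dict) -> bool:
    """Admit a record only if its metadata is complete.

    A GIGO system accepts everything and relies on post-hoc algorithms to
    extract relevance; a QIQO system rejects incomplete data at the source.
    """
    return all(record.get(field) for field in REQUIRED_METADATA)

good = {"author": "jdoe", "date": "2010-02-01",
        "provenance": "field office", "classification": "internal",
        "body": "quarterly summary"}
bad = {"author": "jdoe", "body": "untraceable note"}  # missing provenance

print(admit_record(good), admit_record(bad))  # True False
```

Rejecting the incomplete record up front is what makes the downstream analytics in the rest of this section tractable.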

It became apparent during several years of extensive applied R&D shortly after the commercialization of the Internet and WWW that embedding intelligence in files would result in far more functionality and efficiency, particularly within enterprise networks.

Without availability of high quality data that provides essential transparency while protecting privacy, the potential of enterprise computing is severely hampered, and in some cases has already become more of the problem than the solution. Once essential data is collected containing carefully tailored embedded intelligence, the task of preventing crises can be semi-automated.
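To make “embedded intelligence” concrete, here is a hedged sketch in which each file carries its own structured metadata that downstream rules act on directly; the envelope fields and the review rule are hypothetical and for illustration only.

```python
import json

def embed(body: str, *, topics: list, provenance: str, sensitivity: str) -> str:
    """Wrap a document body in an envelope carrying its own metadata.

    The point: meaning travels with the file, so downstream systems need
    not reverse-engineer it from unstructured text.
    """
    envelope = {
        "meta": {"topics": topics, "provenance": provenance,
                 "sensitivity": sensitivity},
        "body": body,
    }
    return json.dumps(envelope)

def needs_review(serialized: str) -> bool:
    # Downstream consumers act on the embedded metadata directly.
    meta = json.loads(serialized)["meta"]
    return meta["sensitivity"] == "high" or "security" in meta["topics"]

doc = embed("Incident summary ...", topics=["security", "facilities"],
            provenance="field office 12", sensitivity="high")
print(needs_review(doc))  # True
```

Once files self-describe in this way, red-flagging and routing become simple rule evaluations rather than search problems, which is what “semi-automated” prevention means here.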

2. Through data barriers

“It doesn’t work to leap a twenty-foot chasm in two ten-foot jumps.” – American proverb

Unlike other industries in previous technical revolutions, the U.S. has generally embraced a laissez-faire approach to technical standards in IT, resulting in proprietary standards that are leveraged for market share. Unfortunately, the result in technology has been much like that in finance, although largely invisible, with the costs of non-interoperability transferred to customers. Unfettered innovation can have tragic consequences. In the network era, non-interoperable systems have increasingly contributed to some of our greatest challenges, including failure in crisis prevention, cost and inefficiencies in healthcare, and reduced innovation and productivity in the workplace. So in our case, even though voluntary standards are less than ideal, we’ve embraced the W3C standards for public transactions.

3. Data constructs and analytics

“Our major obligation is not to mistake slogans for solutions.” — Edward R. Murrow

Once the essential data is collected, many of the great challenges facing organizations come within reach:

  • Red flagging can be automated while protecting jobs and privacy.

  • Interests between the individual and the organization can be realigned.

  • Accountability and meritocracy are far more achievable.

  • Original work by individuals and teams can be protected.

  • Information overflow can finally be managed well.

  • Creativity and innovation can be enhanced.

  • Predictive and ‘what if?’ modeling and algorithms are much easier.

  • Formerly essential unknowns about the organization become known.

  • The organization can become more adaptive to change.

  • Cultural management and continuous learning become manifest.

  • Rich visual metrics of formerly unknown patterns become routine.

Crisis Review

To his credit, Secretary Gates has called for a system-wide review of the Fort Hood tragedy, which will coincide with reviews by the Army, White House, and Congress.

However, it would be irresponsible not to emphasize that the underlying stresses that likely contributed to this tragedy are directly related to failure in preventing previous crises. The result of previous failures to adopt logically functional systems is that our macro-fiscal situation in the U.S. is now so degraded that future prevention requires a much greater effort than would have been the case a decade ago.

Preventing systemic crises and providing related security (economic and military) are the foremost reasons for our government agencies to exist, and they were the primary motivation for creating Kyield, even if the holistic design provides many other benefits. The system problem has now been solved by design; it has yet to be adopted.

“I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths discovered and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times.” – Thomas Jefferson

Mark Montgomery

Drucker on long term values


“Whether a business should be run for short-term results or with a focus on the long term is likewise a question of values. Financial analysts believe that businesses can be run for both simultaneously.

Successful businesspeople know better. To be sure, every company has to produce short-term results. But in any conflict between short-term results and long-term growth, each company will determine its own priority.

This is not primarily a disagreement about economics. It is fundamentally a value conflict regarding the function of a business and the responsibility of management.” — Peter Drucker, HBR 1/2005