Five Essential Steps For Strategic (adaptive) Enterprise Computing


Given the spin surrounding big data, duopoly deflection campaigns by incumbents, and a culture of entitlement across the enterprise software ecosystem, the following five briefs are offered to provide clarity on improving strategic computing outcomes.

1)  Close the Data Competency Gap

Much has been written in recent months about the expanding need for data scientists, which is real at this early stage of automation, yet very little is whispered in public about the prerequisite learning curve for senior executives, boards, and policy makers.

Data increasingly represents all of the assets of the organization, including intellectual capital, intellectual property, physical property, financials, supply chain, inventory, distribution network, customers, communications, legal, creative, and all relationships between entities. It is therefore imperative to understand how data is structured, created, consumed, analyzed, interpreted, stored, and secured. Data management will substantially impact the organization’s ability to achieve and manage the strategic mission.

Fortunately, many options exist for rapid advancement in understanding data management ranging from off-the-shelf published reports to tailored consulting and strategic advisory from individuals, regional firms, and global institutions. A word of caution, however—technology in this area is changing rapidly, and very few analysts have proven able to predict what to expect within 24-48 months.

Understanding Data Competency:

    • Data scientists are just as human as computer scientists or any other type of scientist
    • A need exists to avoid exchanging software-enabled silos for ontology-enabled silos
    • Data structure requires linguistics, analytics requires mathematics, human performance requires psychology, and prediction requires modeling—success requires a mega-disciplinary perspective

2)  Adopt Adaptive Enterprise Computing

A networked computing workplace environment that continually adapts to changing conditions based on the specific needs of each entity – MM 6.7.12

While computing has achieved a great deal for the world during the previous half-century, the short-term gain became a long-term challenge: ubiquitous computing was largely a one-time, must-have competitive advantage that everyone needed to adopt or be left behind. It turns out that creating and maintaining a competitive advantage through ubiquitous computing within a global network economy is a much greater challenge than initial adoption.

A deep misalignment of interests now exists between customer entities that need differentiation in the marketplace to survive and much of the IT industry, which needs to maintain scale by replicating precisely the same hardware and software worldwide.

When competitors all over the world are using the same computing tools for communications, operations, transactions, and learning, yet have a dramatically different cost basis for everything else, the region or organization with a higher cost basis will indeed be flattened with economic consequences that can be catastrophic.

This places an especially high burden on companies located in developed countries like the U.S. that are engaged in hyper-competitive industries globally while paying the highest prices for talent, education and healthcare—highlighting the critical need to achieve a sustainable competitive advantage.

Understanding adaptive enterprise computing:

    • Adaptive computing for strategic advantage must encompass the entire enterprise architecture, which requires a holistic perspective
    • Adaptive computing is strategic; commoditized computing isn’t, and should instead be viewed as entry-level infrastructure
    • The goal should be to optimize intellectual and creative capital while tailoring product differentiation for a durable and sustainable competitive advantage
    • Agile computing is largely a software development methodology while adaptive computing is largely a business strategy that employs technology for managing the entire digital work environment
    • The transition to adaptive enterprise computing must be step-by-step to avoid operational disruption, yet bold to escape incumbent lock-in

3)  Extend Analytics to Entire Workforce

Humans represent the largest expense and risk to most organizations, so technologists have had a mandate for decades to automate processes and systems that either reduce or replace humans. This economic theory is greatly misunderstood, however. The idea is to free up resources for re-investment in more important endeavors, which has historically employed the majority of people; but in practice the theory depends on long-term, disciplined monetary and fiscal policy that favors investment in new technologies, products, companies, and industries. When global automation is combined with an environment that doesn’t favor re-investment in new areas, as we’ve seen in recent decades, capital will sit on the sidelines or be employed in speculation that creates destructive bubbles, a combination that results in uncertainty and chronically high unemployment.

However, while strategic computing must consider all areas of cost competitiveness, it’s also true that most organizations have become more skilled at cost containment than at human systems and innovation. As we’ve observed consistently in recent years, the result has been that many organizations have failed to prevent serious or fatal crises, missed opportunities, and failed to remain innovative at competitive levels.

While macroeconomic conditions will hopefully improve with time, the important message for decision makers is that the untapped potential in human performance analytics that can be captured with state-of-the-art systems today is several orders of magnitude greater than what traditional supply chain or marketing analytics alone can deliver.

Understanding Human Performance Systems:

    • Improved human performance systems improve everything else
    • The highest potential ROI to organizations today hasn’t changed in a millennium: engaging humans in a more competitive manner than the competition
    • The most valuable humans tend to be fiercely protective of their most valuable intellectual capital, which is precisely what organizations need; capturing it requires deep knowledge and experience in system design
    • Loyalty and morale are low in many organizations due to poor compensation incentives, frequent job changes, and motivation that is misaligned with employer products, cultures, and business models
    • Motivation can be fickle and fluid, varying a great deal between individuals, groups, places, and times
    • For those who may have been otherwise engaged—the world went mobile

4)  Employ Predictive Analytics

In today’s increasingly data-rich world, an organization need not grow much beyond its founders before effective data management, designed to achieve a strategic advantage with enterprise computing, becomes a requirement. Indeed, it has often been the case that success or failure depended upon converting an early agile advantage into a more mature adaptive environment and culture. Among organizations that survive beyond the average life expectancy, many cultures finally change only after a near-death experience triggered by becoming complacent, rigid, or simply entitled to something the customer no longer agreed to, which are reasons enough for almost any company to adopt analytics.

While the need for more accurate predictive abilities is obvious for marketers, it is no less important for risk management, investment, science, medicine, government, and most other areas of society.

Key elements that impact predictive outcomes:

    • Quality of data, including integrity, scale, timeliness, access, and interoperability
    • Quality of algorithms, including design, efficiency, and execution
    • Ease of use and interpretation, including visuals, delivery, and devices
    • How predictions are managed, including verification, feedback loops, accountability, and the decision chain (a rough sketch of these elements follows)
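
To make these elements concrete, here is a minimal sketch in Python, offered as an illustration only and not a description of any particular product: a toy pipeline that refuses to predict when a crude data quality check fails and verifies each prediction against the observed outcome in a feedback loop. The names, thresholds, and the naive mean "model" are assumptions chosen for brevity.

    # Hypothetical sketch: a minimal predictive workflow making the four
    # elements above explicit. Names and thresholds are illustrative only.
    from dataclasses import dataclass
    from statistics import mean
    from typing import Callable, Sequence


    @dataclass
    class Prediction:
        value: float
        model_name: str
        verified: bool = False        # set once the outcome is known


    def data_quality_ok(records: Sequence[float], min_count: int = 30) -> bool:
        """Crude integrity/scale check: enough records and no missing values."""
        return len(records) >= min_count and all(r is not None for r in records)


    def predict(records: Sequence[float],
                model: Callable[[Sequence[float]], float],
                model_name: str) -> Prediction:
        if not data_quality_ok(records):
            raise ValueError("insufficient data quality for a defensible prediction")
        return Prediction(value=model(records), model_name=model_name)


    def feedback(prediction: Prediction, actual: float, tolerance: float = 0.1) -> bool:
        """Verification step: compare the prediction to the observed outcome."""
        prediction.verified = True
        return abs(prediction.value - actual) <= tolerance * max(abs(actual), 1.0)


    if __name__ == "__main__":
        history = [10.2, 10.4, 10.1, 10.6] * 10          # toy demand history
        p = predict(history, mean, model_name="naive-mean")
        print(p, "accurate:", feedback(p, actual=10.5))

The point of the sketch is simply that quality gates, model choice, interpretation, and verification are separate concerns that each need an owner in the decision chain.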

5)  Embrace Independent Standards

Among the most important decisions affecting an organization’s future ability to adapt its enterprise computing to fast-changing external forces, which increasingly determine whether the organization succeeds or fails, is whether to embrace independent standards for software development, communications, and data structure.

Key issues to understand about independent standards:

    • Organizational sovereignty—it has proven extremely difficult and often impossible to maintain control of one’s destiny in an economically sustainable manner over the long term when proprietary computing standards dominate the enterprise architecture
    • Trade secrets, IP, IC, and differentiation are very difficult to secure when relying on consultants who represent competitors in large proprietary ecosystems
    • Lock-in and high maintenance fees are enabled primarily by proprietary standards and lack of interoperability
    • Open source is not at all the same as independent standards, nor does it necessarily improve adaptive computing or TCO
    • Independent standards bodies are voluntary in most of the world, slow to mature, and influenced by ideology and interests within governments, academia, industry, and IT incumbents
    • The commoditization challenge and the need for adaptive computing are similar wherever computing is ubiquitous, regardless of standards type

Legacy of the Tōhoku Earthquake: Moral Imperative to Prevent a Future Fukushima Crisis


An article in the New York Times reminds us once again that without a carefully crafted and highly disciplined governance architecture in place, perceived misalignment of personal interests between individuals and organizations across cultural ecosystems can lead to catastrophic decisions and outcomes. The article was written by Martin Fackler and is titled: Nuclear Disaster in Japan Was Avoidable, Critics Contend.

While not unexpected by those who study crises, as yet another case where brave individuals raised red flags only to be shouted down by the crowd, the article does provide instructive granularity that should guide senior executives, directors, and policy makers in planning organizational models and enterprise systems. In a rare statement by a leading publication, Martin Fackler reports that insiders within “Japan’s tightly knit nuclear industry” attributed the Fukushima plant meltdown to a “culture of collusion” among “powerful regulators and compliant academic experts”. A very similar dynamic is found in other preventable crises, from the broad systemic financial crisis to narrow product-defect cases.

One of the individuals who warned regulators of just such an event was professor Kunihiko Shimizaki, a seismologist on the committee created specifically to manage risk associated with Japan’s offshore earthquakes. Shimizaki’s conservative warnings were not only ignored, but his comments were removed from the final report “pending further research”. Shimizaki is reported to believe that “fault lay not in outright corruption, but rather complicity among like-minded insiders who prospered for decades by scratching one another’s backs.” This is almost identical to events in the U.S., where multi-organizational cultures evolved slowly over time to become among the highest systemic risks to life, property, and the economy.

In another commonly found result, the plant operator Tepco failed to act on multiple internal warnings from its own engineers, who calculated that a tsunami could reach up to 50 feet in height. This critical information was withheld from regulators for three years and finally reported just four days before the 9.0 quake, which caused a 45-foot tsunami and the meltdown of three reactors at Fukushima.

Three questions for consideration

1) Given that the root cause of the Fukushima meltdown was not the accurately predicted earthquake or tsunami, but rather dysfunctional organizational governance, are leaders not then compelled by moral imperative to seek out and implement organizational systems specifically designed to prevent crises in the future?

2) Given that peer pressure and social dynamics within the academic culture, and its relationship with regulators and industry, are cited as the cause by the most credible witness, a member of that same community who predicted the event, would not prudence demand that responsible decision makers consider solutions from outside the afflicted cultures?

3) With the not-invented-here syndrome near the core of every major crisis in recent history, each of which has seriously degraded economic capacity, can anyone afford not to?

Steps that must be taken to prevent the next Fukushima

1) Do not return to the same poisoned well for solutions that caused or enabled the crisis

  • The not-invented-here syndrome, combined with a bias for institutional solutions, perpetuates the myth that humans are incapable of anything but repeating the same errors over and over.

  • This phenomenon is evident in the ongoing financial crisis, which suffers from similar cultural dynamics among academics, regulators, and industry.

  • Researchers have only recently begun to understand the problems associated with deep expertise in isolated disciplines and cultural dynamics. ‘Expertisis’ is a serious problem within disciplines, tending to blind researchers to transdisciplinary patterns and discovery and severely limiting consideration of possible solutions.

  • Systemic crises overlap too many disciplines for the academic model to execute functional solutions, as evidenced by the committee in this case, which sidelined its own seismologist’s warnings for further study, a classic enabler of systemic crises.

2) Understand that in the current digital era and through the foreseeable future, organizational governance challenges are also data governance challenges, which require the execution of data governance solutions

    • Traditional organizational governance is rapidly breaking down with the rise of the neural network economy, yet governance solutions are comparatively slow to be adopted.

    • Many organizational leaders, policy makers, risk managers, and public safety engineers are not functionally literate with state-of-the-art technology, such as semantic, predictive, and human alignment methodologies.

    • Functional enterprise architecture that has the capacity to prevent the next Fukushima-like event, regardless of location, industry, or sector, will require a holistic design encapsulating a philosophy that proactively considers all variables that have enabled previous events.

      • Any functional architecture for this task cannot be constrained by the not-invented-here-syndrome, defense of guilds, proprietary standards, protection of business models, national pride, institutional pride, branding, culture, or any other factor.

3) Adopt a Finely Woven Decision Tapestry with Carefully Crafted Strands of Human, Sensory, and Business Intelligence

Data provenance is foundational to any functioning critical system in the modern organization, providing the benefits below (a brief sketch follows the list):

      • Increased accountability

      • Increased security

      • Carefully managed transparency

      • Far more functional automation

      • The possibility of accurate real-time auditing
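
As a rough illustration of how provenance can support accountability and real-time auditing, the following Python sketch hash-chains each provenance record to its predecessor so that any tampering breaks the chain. The field names and chaining scheme are my assumptions for demonstration, not a reference to any specific product or standard.

    # Illustrative sketch only: a minimal, hash-chained provenance record showing
    # how lineage data can support accountability and real-time auditing.
    import hashlib
    import json
    import time
    from dataclasses import dataclass, field, asdict


    @dataclass
    class ProvenanceRecord:
        source: str                 # system or person that produced the data
        action: str                 # e.g. "created", "transformed", "approved"
        payload_digest: str         # hash of the data item itself
        prev_hash: str              # link to the previous record (tamper evidence)
        timestamp: float = field(default_factory=time.time)

        def record_hash(self) -> str:
            canonical = json.dumps(asdict(self), sort_keys=True).encode()
            return hashlib.sha256(canonical).hexdigest()


    def append_record(chain: list, source: str, action: str, payload: bytes) -> None:
        prev = chain[-1].record_hash() if chain else "genesis"
        chain.append(ProvenanceRecord(
            source=source,
            action=action,
            payload_digest=hashlib.sha256(payload).hexdigest(),
            prev_hash=prev,
        ))


    def audit(chain: list) -> bool:
        """Real-time audit: verify every record still links to its predecessor."""
        for i in range(1, len(chain)):
            if chain[i].prev_hash != chain[i - 1].record_hash():
                return False
        return True


    chain: list = []
    append_record(chain, "sensor-7", "created", b"reading=45ft")
    append_record(chain, "risk-engine", "transformed", b"alert=tsunami")
    print("chain intact:", audit(chain))

The design choice being illustrated is simply that lineage, once recorded in a verifiable form, makes accountability and transparency properties of the data itself rather than of after-the-fact investigation.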

4) Extend advanced analytics to the entire human workforce

      • Incentives for pre-emptive problem solving and innovation

      • Automate information delivery:

        • Record notification

        • Track and verify resolution

        • Extend network to unbiased regulators of regulators

      • Plug-in multiple predictive models:

        • Establish resolution of conflicts with unbiased review

        • Automatically include results in reporting to prevent the obstruction of essential targeted transparency, as occurred in the Fukushima incident (a brief sketch of this record-track-escalate loop follows)
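
A simple sketch of that record-track-escalate loop, with hypothetical names and an assumed 30-day escalation window, might look like this in Python; it illustrates the principle, not any deployed system.

    # Hypothetical sketch of the delivery/tracking loop described above: a warning
    # is recorded, tracked, and escalated to an outside reviewer if unresolved.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta
    from typing import List, Optional


    @dataclass
    class RiskWarning:
        author: str
        summary: str
        raised_at: datetime = field(default_factory=datetime.utcnow)
        resolved_at: Optional[datetime] = None
        escalated: bool = False

        def resolve(self) -> None:
            self.resolved_at = datetime.utcnow()


    class WarningRegistry:
        def __init__(self, escalation_window: timedelta = timedelta(days=30)):
            self.warnings: List[RiskWarning] = []
            self.escalation_window = escalation_window

        def record(self, author: str, summary: str) -> RiskWarning:
            w = RiskWarning(author, summary)      # record notification
            self.warnings.append(w)
            return w

        def escalate_stale(self, now: Optional[datetime] = None) -> List[RiskWarning]:
            """Track resolution; unresolved warnings past the window are escalated
            to an unbiased external reviewer and included in reporting."""
            now = now or datetime.utcnow()
            stale = [w for w in self.warnings
                     if w.resolved_at is None
                     and now - w.raised_at > self.escalation_window]
            for w in stale:
                w.escalated = True                # e.g. notify external regulator
            return stale


    registry = WarningRegistry()
    registry.record("seismologist", "tsunami could exceed plant design basis")
    later = datetime.utcnow() + timedelta(days=31)
    print([w.summary for w in registry.escalate_stale(later)])

Had a loop of this kind been in force, an ignored warning could not have quietly disappeared from a final report; it would have surfaced automatically to reviewers outside the originating culture.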

5) Include sensory, financial, and supply chain data in real-time enterprise architecture and reporting

      • Until this year, extending advanced analytics to the entire human workforce was considered futuristic (see the 1/10/2012 Forrester Research report, Future of BI), in part due to scaling limitations in high-performance computing. While always evolving, the design has existed for a decade.

      • Automated data generated by sensors should be carefully crafted and combined in modeling with human and financial data for predictive applications in risk management, planning, regulatory oversight, and operations, as sketched below.

        • Near real-time reporting is now possible, so governance structures and enterprise architectural design should reflect that functionality.
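
As a minimal illustration of folding sensor data into the same near real-time reporting path as other enterprise data, the sketch below flags any reading that exceeds an assumed design limit; the field names and limits are hypothetical.

    # A minimal sketch, under assumed field names, of surfacing sensor readings
    # in near real-time reporting alongside a simple risk threshold.
    from dataclasses import dataclass
    from typing import Iterable, List


    @dataclass
    class SensorReading:
        sensor_id: str
        metric: str        # e.g. "wave_height_ft"
        value: float


    DESIGN_LIMITS = {"wave_height_ft": 20.0}   # assumed plant design basis


    def flag_exceedances(readings: Iterable[SensorReading]) -> List[SensorReading]:
        """Return readings that exceed their design limit so they surface in the
        same real-time report as financial and supply chain data."""
        return [r for r in readings
                if r.metric in DESIGN_LIMITS and r.value > DESIGN_LIMITS[r.metric]]


    readings = [SensorReading("buoy-3", "wave_height_ft", 45.0),
                SensorReading("buoy-4", "wave_height_ft", 12.0)]
    for r in flag_exceedances(readings):
        print(f"ALERT {r.sensor_id}: {r.metric}={r.value} exceeds design limit")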

 

Conclusion

While obviously not informed by a first-person audit and review, if the reports and quotes from witnesses surrounding the Fukushima crisis are accurate, and they are generally consistent with dozens of other human-caused crises, we can conclude the following:

The dysfunctional socio-economic relationships in this case resulted in an extremely toxic cultural dynamic across academia, regulators, and industry, which shared a tacit intent to protect the nuclear industry. Their collective actions, however, resulted in an outcome that idled the entire industry in Japan, with potentially very serious long-term implications for the national economy.

Whether the reasons are psychological, social, technical, economic, or some combination thereof, it would seem that no justification for failing to deploy the most advanced crisis prevention systems can be left standing. Indeed, we all share a moral imperative that demands we rise above our biases, personal and institutional conflicts, and defensive nature to explore and embrace the most appropriate solutions, regardless of origin, institutional labeling, media branding, or any other factor. Some crises are simply too severe not to prevent.

Mark Montgomery
Founder & CEO
Kyield
http://www.kyield.com

Press release on our enterprise pilot program


We decided to expand the reach of our enterprise pilot program through the electronic PR system, so I issued the following BusinessWire release today:

Kyield Announces Pilot Program for Advanced Analytics and Big Data with New Revolutionary BI Platform 

“We are inviting well-matched organizations to collaborate with us in piloting our break-through system to bring a higher level of performance to the information workplace,” stated Mark Montgomery, Founder and CEO of Kyield. “In addition to the significant competitive advantage exclusive to our pilot program, we are offering attractive long-term incentives free from lock-in, maintenance fees, and high service costs traditionally associated with the enterprise software industry.”

Regards, MM

New paper: Optimizing Knowledge Yield in the Digital Workplace


I am pleased to share a new paper that may be of interest:

Optimizing Knowledge Yield in the Digital Workplace
A new system design for thriving in the data-intensive universe

From the abstract:

The purpose of this paper is threefold. First, it briefly describes how the digital workplace evolved in an incremental manner. Second, it discusses related structural, technical, and economic challenges for individuals and organizations in the digital workplace. Lastly, it summarizes how Kyield’s novel approach can serve to provide exponential performance improvement.

Interview with Jenny Zaino at Semanticweb.com


Just wanted to share this interview and article with Jenny Zaino over at Semanticweb.com on my recent patent and related IP.

Manage Structured Data and Reap the Benefits

A detailed paper on this topic is nearing completion; I will post a brief and description in the next few days.

Too Big to Fail or Too Primitive to Succeed?


Our economy very nearly experienced a financial version of Armageddon due to the gap between a primitive governance structure and the highly sophisticated tools employed by a few whose interests were deeply misaligned with the needs of sustaining our national and global economy. We have all since unwillingly experienced the negative impacts of untamed technology while enjoying few of the benefits of the tamed, whether for resolution of the current crisis or prevention of the next.

Given the systemic nature and scale of the financial crisis, and in consideration of the poor ongoing economic conditions, it’s clear that the financial industry, political process, and regulators have all fallen short of achieving the individual mission of each, particularly in consideration of current technological capabilities.

For the past several months, financial institutions have been attempting to convince regulators that they should not be labeled Systemically Important Financial Institutions (SIFIs). The process of implementing the 2010 Dodd-Frank law in the U.S. has resulted in spin-offs intended to avoid increased U.S. regulation, while the new global rules for multi-national banks on top of Basel III, including surcharges and increased capital ratios, are prompting a comprehensive rethink of the fundamental assumptions surrounding the global banking model.

Observing this dynamic invites mental imagery of bureaucrats, politicians, and academics in team competition, each applying favored remedies such as duct tape and economic journals in a futile attempt to plug giant leaks in the hull of a Nimitz-class aircraft carrier.

When basic human greed clashed with globalization, networking, and technology, the combination introduced a complexity far beyond the organizational structures and tools available to regulators or corporations. Indeed, the reaction we’ve observed suggests that the remedies employed to manage this crisis were designed for a war fought over seven decades ago, during the Great Depression, an era when state-of-the-art technology was represented by the IBM Type 285 Numeric Printing Tabulator, capable of tabulating 150 cards per minute. The hourly sales of IBM today are approaching its annual sales of 1933, and billions of records are now processed in seconds, yet our archaic regulatory system is employing printing presses in response to the largest financial crisis in 75 years.

A great deal has been learned in recent years beyond traditional economic theory about the systemic nature of networks, social behavior, contagion, and the global economy, with considerable investment in basic and applied research focused on technologies specifically designed to prevent systemic crises.

In the era of high performance computing on an increasingly interconnected planet with ever-expanding pipes, economic tipping points can be reached very quickly, quickly enough to bankrupt even the previously wealthiest nation on earth, particularly one in a weakened economic condition suffering from structural problems. Focusing on SIFIs is of course essential, even if tardy by decades, but the emphasis should be on managing real systemic risk, which requires a very specific data structure that ensures data integrity, enhanced security, system-wide automation, modernized organizational structures, and continual, real-time improvement.

Without deep intelligence on the constantly changing relationships in a carefully constructed semantic layer, automatically managed by pre-configured data valves, systemic risk is impossible to manage well, or even, I would argue, at a minimally acceptable level.
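Purely as an illustration of the "data valve" idea, the sketch below watches a toy set of interbank exposure relationships and flags any institution whose aggregate exposure breaches a pre-configured limit; the graph model, names, and threshold are assumptions for demonstration, not a description of Kyield's system.

    # Illustrative sketch of a pre-configured "data valve": a rule watching
    # relationships (here, interbank exposures) in a toy semantic layer and
    # closing the valve when concentration exceeds a set limit.
    from collections import defaultdict
    from typing import Dict, Tuple

    # exposures[(lender, borrower)] = amount, a stand-in for relationship data
    exposures: Dict[Tuple[str, str], float] = {
        ("Bank A", "Bank B"): 40.0,
        ("Bank A", "Bank C"): 35.0,
        ("Bank B", "Bank C"): 10.0,
    }

    VALVE_LIMIT = 60.0   # maximum total exposure any single institution may carry


    def valve_check(exposures: Dict[Tuple[str, str], float]) -> Dict[str, float]:
        """Return institutions whose aggregate exposure breaches the valve limit."""
        totals: Dict[str, float] = defaultdict(float)
        for (lender, _borrower), amount in exposures.items():
            totals[lender] += amount
        return {bank: total for bank, total in totals.items() if total > VALVE_LIMIT}


    for bank, total in valve_check(exposures).items():
        print(f"valve closed for {bank}: exposure {total} exceeds {VALVE_LIMIT}")

The point is not the toy arithmetic but the principle: when relationships are represented explicitly, limits can be enforced continuously and automatically rather than reconstructed after the damage is done.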

Sophisticated new multi-disciplinary systems have been designed specifically to address the modern challenges of systemic risk management, but have yet to be built out and deployed. Policy makers should insist on the new generation of technologies to better protect citizens and the economy; regulators should embrace and promote the technology, for it is impossible to meet their mission otherwise; and financial institutions should adopt the technology for its rare ROI and sharply reduced levels of risk.

Preparing for the big data storm in mHealth


Imagine the gasp from the family physician confronted with the first text message from a patient asking whether they are dying due to the patterns displayed by their new mobile app, which is being downloaded by the millions daily. Or consider the cardiologist who suddenly has the opportunity to observe a continuous mobile data stream 24/7/365, sourced from multiple sensors on 95% of her patients, creating more data in a day than in the previous decade.

For some time now I have been thinking about structures, classifications, compression, security, scaling, and synthesis of data for the anticipated big data storm just beginning to form in mobile health. Even with a healthy dose of de-hyped skepticism, the mobile health data storm promises record sustained winds, with much higher gusts. An extension of the Internet and Web, which is often compared to a global electric grid, mobility also contains dynamics more comparable to solar winds or sea plankton, complete, we intend, with personalized recipes that will positively impact human behavior, diagnostics, and therapies.

While specific outcomes can only be fully understood with high-scale testing, several assumptions warrant serious consideration, including:

  • Several billion people worldwide will be introduced to lab-quality, real-time data for the first time
  • To avert chaos, mHealth data must be highly structured from inception (a simple sketch follows this list)
  • Care givers will quickly become far more conversant in computing
  • Care facilities and organizations will be transformed, with multiple new disruptive models emerging
  • Multi-dimensional visualization of human physiology will become ubiquitous from global to nano scale
  • Self-managed care will rapidly scale, as will expectations
  • Sensory implants and implanted sensors will normalize
  • Advanced data management engines will become essential cores of healthcare organizations
  • The gap between life science research and health care will begin to close
  • The velocity of discoveries will rapidly expand
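
As one hedged example of what "highly structured from inception" could mean in practice, the sketch below defines a single mHealth reading with explicit units, a pseudonymous patient identifier, and an ISO 8601 timestamp. The field names are illustrative assumptions, not drawn from any standard such as HL7 FHIR.

    # A minimal sketch, with assumed field names and units, of a structured
    # mHealth reading; an illustration of the idea, not a standard.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    import json


    @dataclass
    class MobileHealthReading:
        patient_id: str          # pseudonymous identifier, never raw PII
        device_id: str
        metric: str              # e.g. "heart_rate_bpm"
        value: float
        unit: str
        recorded_at: str         # ISO 8601 timestamp for interoperability

        def to_json(self) -> str:
            return json.dumps(asdict(self), sort_keys=True)


    reading = MobileHealthReading(
        patient_id="p-001",
        device_id="wearable-17",
        metric="heart_rate_bpm",
        value=72.0,
        unit="bpm",
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )
    print(reading.to_json())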

In looking out to this new galaxy, it’s relatively easy to see far more dramatic change than is commonly discussed. Medicine has avoided much of the revolutionary change that rapid technological evolution has brought to other industries, in large part through regulatory management, leading to a model that is no longer affordable. How might health care change when ‘democratized’ by billions of people? Who knows; a considerable amount is probably a fair guess. I am far less certain of how mobile health will change regulation than I am of the necessity to properly manage the data, and of the need for data structure, in order to optimize decision making and realize the potential of personalized medicine.

The mobile phenomenon is much different than the emergence of enterprise networks, the consumer web, or big data found within intelligence and research, but is rather more like a hybrid that borrows from each, with the extraordinarily complex individual human brain increasingly in the driver’s seat. It should prove to be a journey full of adventure, hidden obstacles, and exciting discovery.