The ‘Sweet Spot of Big Data’ May Contain a Sour Surprise for Some


I’ve been observing a rather distasteful trend in big data for the enterprise market over the past 18 months, and it has reached the point where I want to share some thoughts despite a growing mountain of other priorities.

As the big data hype grew over the past few years, enabled in large part by Hadoop and other free and open source software (FOSS) stacks, internal and external teams serving large companies have perfected a ‘sweet spot’ model tailored to that environment and tech stack. Many vendors have also tailored their offerings for the model, backed by arguably too much VC funding, dozens too many analyst reports, and a half-dozen too many CIO publications attempting to extend reach and increase ad spend.

The ‘sweet spot’ goes something like this:

  • Small teams consisting of data scientists and business analysts.
  • Employing exclusively low-cost commodity IT and FOSS.
  • Targeting $1 to $5 million projects within units of mid- to large-sized enterprises.
  • Expensed in the current budget cycle (opex), with low risk and a high probability of quick, demonstrable ROI.

So what could possibly be wrong with this dessert? It looks perfect, right? Well, not necessarily, at least for those of us who have seen a similar movie many times before, with similar foreshadowing, plot, cast of characters, and story line.

While this service model and the related technology have generally been a good thing, producing new efficiencies, improved manufacturing processes, reduced inventories, and perhaps even a few saved lives, not to mention plenty of demand for data centers, cloud service providers, and consultants, we need to look at the bigger picture in the context of historical trends in tech adoption, and how such trends evolve, to see where this trail will lead. Highly paid data scientists, for example, may find that they have been frantically jumping from one project to the next inside a large, thin-skinned bubble rising high over an ecosystem with no safety net, only to suddenly become targets of flying arrows from the very CFOs who have been their clients, and for good fundamental reasons.

As we’ve seen many times before at the confluence of technology and services, the beginning of an over-hyped trend creates demand for high-end talent that is unsustainable, often even in the mid-term. Everyone from the largest vendors to emerging companies like Kyield, leading consulting firms, and many independents generally agrees that while service talent in big data analytics (and closely related fields) captures up to 90% of the budget in this ‘sweet spot’ model today, the trend is expected to reverse quickly as automated systems and tools mature. The common reaction to such trends is an attempt to create silos of various sorts, but even for those inside global incumbents, or in models protected by unions and laws such as K-12 education in the U.S., it is probably wise to seek a more sustainable model and an ecosystem tailored for the future. Otherwise, I fear a great many talented people working with data will find in hindsight that they were exploited for very short-term gain in a model that no longer has demand, and may well find themselves competing with a global flood of bubble chasers willing to work for less than the cost of living in their own location allows.

What everyone should realize about the big data trend

While there will likely be strong demand for the very best mathematicians, data modelers, and statisticians far beyond the horizon, the supermajority of organizations today are at some point in the journey of developing mid- to long-term strategic plans for optimizing advanced analytics, including investments not just for small projects but for the entire organization. This is not occurring in a vacuum, but rather in conjunction with consultants, vendors, labs, and emerging companies like ours that intentionally provide a platform that automates many of the redundant processes, enables plug and play, and makes advanced analytics available to the workforce in a simple-to-use, substantially automated manner. While it took many years of R&D for all of the pieces to come together, the day has come when the physics allows such systems to be deployed, so this trend is inevitable and indeed already underway.

The current and future environment is not like the past, when earning a PhD in one decade necessarily guaranteed job demand in the next; like everyone else in society, one must continue to grow, evolve, and find ways to add value in a hyper-competitive world. The challenges we face collectively in the future are great, so we cannot afford apathy or widespread cultures that protect the (unsustainable) past rather than attempt to optimize the future.

In our system design we embrace the independent app, data scientist, and algorithm, and we recommend that customers do so as well; there is no substitute for individual creativity, and we simply must have systems that reflect this reality. But it needs to occur in a rationally planned manner that attacks the biggest problems facing organizations and, more broadly, society and the global economy.

The majority seem frankly misguided about the direction we are headed: the combination of data physics, hardware, organizational dynamics, and economics requires us to automate much of this process in order to prevent the most dangerous systemic crises and to optimize discovery. It’s the right thing to do. I highly recommend that everyone in related fields plan accordingly, as these types of changes are occurring in half the time they did a generation ago, and the pace of change is still accelerating. At the end of the day, the job of analytics is to unleash the potential of customers and clients so that they can then create a sustainable economy and ecology.


Yield Management of Knowledge for Industry + FAQs


Industrial Yield Management of Knowledge from www.kyield.com

I decided to share this slightly edited version of a diagram that was part of a presentation we recently completed for an industry-leading organization. Based on feedback, this may be the most easily understandable graphic we’ve produced to date for communicating the Kyield enterprise system. As part of the same project, we published a new FAQ page on our web site that may be of interest. Most of my writing over the past several months has been in private tailored papers and presentations related to our pilot and partner programs.

I may include a version of this diagram in a public white paper soon if I can carve out some writing time. If you don’t hear from me before then I wish you and your family a happy holiday season.

Kind regards, MM

New Video on Kyield Enterprise – Data Tailored to Each Entity


Whiteboard video presentation on Kyield Enterprise


Kyield founder Mark Montgomery provides a 14-minute whiteboard presentation on Kyield Enterprise and the Kyield Enterprise Pilot

Converting the Enterprise to an Adaptive Neural Network


Those tracking business and financial news may have observed that a little bit of knowledge in the corner office about enterprise architecture, software, and data can cause great harm, including to the occupant, often resulting in a moving van parked under the corner suite of corporate headquarters shortly after headlines break on the latest preventable crisis. Some have mastered the exploitation of boardroom ignorance surrounding enterprise computing, which is therefore among the greatest of many challenges for emerging technology that has the capacity to deliver significant improvement.

The issues surrounding neural networks require total immersion for an extended duration. Since many organizations lack the luxury of time, let’s get to it.

Beware the Foreshadow of the Black Swan

A recent article by Reuters confirms what is perhaps the worst-kept secret of the post-printing-press era: Many Wall Street executives say wrongdoing is necessary: survey. A whopping 25% of those surveyed believe that in order to be personally successful, they must conduct themselves in an unethical manner, including breaking important laws, some of which are intended to defend against contagion. That is a powerful red-flag warning even if only partially true.

This reminds me of a situation almost a decade ago when I had the unpleasant task of engaging the president of a leading university about one of their finance professors who may have been addressing a few respondents to this very survey when he lectured: “if you want to survive in finance, forget ethics”. Unfortunately for everyone else, even if that curriculum served the near-term interests of the students, which is doubtful given what has transpired since, it cannot end well for civilization. Fortunately, in this case the university president responded immediately, and well beyond expectation, after I sent an email stating that I would end my relationship with the university if that philosophy was shared by the institution.

For directors, CEOs, CFOs, and CROs in any sector, this latest story should only confirm that if an individual is willing to risk a felony for his/her success, then experience warns that corporate governance rates very low on their list of priorities. Black Swan events should therefore be expected in such an environment, and so everything possible should be planned and executed to prevent them, which requires mastering neural networks.

Functional Governance: As simple as possible; as complex as necessary

Functional governance and crisis prevention in the modern complex organization require a deep understanding of the organizational dynamics embedded within the data architecture found throughout the far more complex environment of enterprise networks and all interconnected networks.

Kyield Enterprise Diagram 2.7 (protected by Copyright and U.S. Patent)

Are you thinking what I think you may be thinking about now? In fact, an adaptive neural network in a large enterprise is quite comparable in complexity to brain surgery or rocket science, and in some environments even more so. The largest enterprise neural networks today far exceed even the most exceptional human neurological system in comparable nodes, information exchanges, and memory. Of course, biological systems are self-contained, with far more embedded intelligence, and adapt to an amazing variety of change, which usually enables sustainability throughout a complete lifecycle (our lives) with little or no external effort required. Unfortunately, even the most advanced enterprise neural networks today are still primitive by comparison to biological systems, are not adaptive by design, and are subject to a menagerie of internal and external influences that directly conflict with the future health of the patient, aka the mission of the organization.

So the next question might be, where do we start?

The simple answer is that most organizations started decades ago with the emergence of computer networking, and currently manage a very primitive, fragmented neural network that wasn’t planned at all, but rather evolved incrementally as proprietary standards became commoditized and lost the ability to provide competitive differentiation, yet remain very expensive to maintain. Those needing a more competitive architecture have come to the right place at the right time, as we are deeply engaged in crafting tailored action plans for several organizations at various stages of our pilot program for Kyield Enterprise, which is among the best examples of a state-of-the-art, adaptive enterprise neural network architecture I am aware of. We’ve recently engaged with large to very large organizations in banking, insurance, biotech, government, manufacturing, telecommunications, engineering, and pharmaceuticals in the early stages of our pilot process.

Tailored Blueprint

Think of the plan as a combination of a technical paper, a deeply tailored use case for each organization, and a detailed timeline spanning several years. In some ways it serves as a sort of redevelopment blueprint for a neighborhood that has been locked in to ancient infrastructure with outdated electrical, plumbing, and transportation systems that are no longer compatible or competitive. Most customers have either suffered a crisis or are wisely intent on prevention, while also seeking a significant competitive advantage.

The step-by-step process we are tailoring for each customer guides collaborative teams through the conversion from the ‘current architecture’ to an ‘adaptable neural enterprise network’, starting with the appropriate business unit and extending throughout subsidiaries over weeks, months, and years, carefully orchestrated according to the prioritized needs of each while preventing operational disruptions. Since we embrace independent standards with no lock-in or maintenance fees, and offer attractive long-term incentives, the risk of not engaging in our pilot program appears much greater than the risk of doing so. In some cases it looks like we may be able to decrease TCO substantially despite a generational improvement in functionality.

Those who are interested and believe they may be a good candidate for our pilot program are welcome to contact me anytime.

Five Essential Steps For Strategic (adaptive) Enterprise Computing


Given the spin surrounding big data, duopoly deflection campaigns by incumbents, and a culture of entitlement across the enterprise software ecosystem, the following five briefs are offered to provide clarity for improving strategic computing outcomes.

1)  Close the Data Competency Gap

Much has been written in recent months about the expanding need for data scientists, which is true at this early stage of automation, yet very little is whispered in public about the prerequisite learning curve for senior executives, boards, and policy makers.

Data increasingly represents all of the assets of the organization, including intellectual capital, intellectual property, physical property, financials, supply chain, inventory, distribution network, customers, communications, legal, creative, and all relationships between entities. It is therefore imperative to understand how data is structured, created, consumed, analyzed, interpreted, stored, and secured. Data management will substantially impact the organization’s ability to achieve and manage the strategic mission.

Fortunately, many options exist for rapid advancement in understanding data management ranging from off-the-shelf published reports to tailored consulting and strategic advisory from individuals, regional firms, and global institutions. A word of caution, however—technology in this area is changing rapidly, and very few analysts have proven able to predict what to expect within 24-48 months.

Understanding Data Competency

    • Data scientists are just as human as computer scientists or any other type of scientist
    • A need exists to avoid exchanging software-enabled silos for ontology-enabled silos
    • Data structure requires linguistics, analytics requires mathematics, human performance requires psychology, predictive requires modeling—success requires a mega-disciplinary perspective

2)  Adopt Adaptive Enterprise Computing

A networked computing workplace environment that continually adapts to changing conditions based on the specific needs of each entity – MM 6.7.12

While computing has achieved a great deal for the world during the previous half-century, the short-term gain became a long-term challenge as ubiquitous computing was largely a one-time, must-have competitive advantage that everyone needed to adopt or be left behind.  It turns out that creating and maintaining a competitive advantage through ubiquitous computing within a global network economy is a much greater challenge than initial adoption.

A deep misalignment of interests now exists between customer entities, which need differentiation in the marketplace to survive, and much of the IT industry, which needs to maintain scale by replicating precisely the same hardware and software worldwide.

When competitors all over the world are using the same computing tools for communications, operations, transactions, and learning, yet have a dramatically different cost basis for everything else, the region or organization with a higher cost basis will indeed be flattened with economic consequences that can be catastrophic.

This places an especially high burden on companies located in developed countries like the U.S. that are engaged in hyper-competitive industries globally while paying the highest prices for talent, education and healthcare—highlighting the critical need to achieve a sustainable competitive advantage.

Understanding adaptive enterprise computing:

    • Adaptive computing for strategic advantage must encompass the entire enterprise architecture, which requires a holistic perspective
    • Adaptive computing is strategic; commoditized computing isn’t; rather, it should be viewed as entry-level infrastructure
    • The goal should be to optimize intellectual and creative capital while tailoring product differentiation for a durable and sustainable competitive advantage
    • Agile computing is largely a software development methodology while adaptive computing is largely a business strategy that employs technology for managing the entire digital work environment
    • The transition to adaptive enterprise computing must be step-by-step to avoid operational disruption, yet bold to escape incumbent lock-in

3)  Extend Analytics to Entire Workforce

Humans represent the largest expense and risk to most organizations, so technologists have had a mandate for decades to automate processes and systems that either reduce or replace humans. This economic theory is greatly misunderstood, however. The idea is to free up resources for reinvestment in more important endeavors, which is where the majority of people have historically found employment; in practice, though, the theory depends on long-term, disciplined monetary and fiscal policy that favors investment in new technologies, products, companies, and industries. When global automation is combined with an environment that doesn’t favor reinvestment in new areas, as we’ve seen in recent decades, capital will sit on the sidelines or be deployed in speculation that creates destructive bubbles, a combination that results in uncertainty and chronically high unemployment.

However, while strategic computing must consider all areas of cost competitiveness, it’s also true that most organizations have become more skilled at cost containment than at human systems and innovation. As we’ve observed consistently in recent years, the result has been that many organizations have failed to prevent serious or fatal crises, failed to seize opportunities, and failed to remain innovative at competitive levels.

While macroeconomic conditions will hopefully improve broadly with time, the important message for decision makers is that the untapped potential in human performance analytics that can be captured with state-of-the-art systems today is several orders of magnitude greater than what traditional supply chain analytics or marketing analytics alone can deliver.

Understanding Human Performance Systems:

    • Improved human performance systems improve everything else
    • The highest potential ROI to organizations today hasn’t changed in a millennium: engaging humans in a more competitive manner than the competition
    • The most valuable humans tend to be fiercely protective of their most valuable intellectual capital, which is precisely what organizations need, requiring deep knowledge and experience for system design
    • Loyalty and morale are low in many organizations due to poor compensation incentives, frequent job change, and misaligned motivation with employer products, cultures and business models
    • Motivation can be fickle and fluid, varying a great deal between individuals, groups, places, and times
    • For those who may have been otherwise engaged—the world went mobile

4)  Employ Predictive Analytics

In our increasingly data-rich world, an organization need not grow much beyond its founders before it requires effective data management designed to achieve a strategic advantage with enterprise computing. Indeed, success or failure has often depended on converting an early agile advantage into a more mature, adaptive environment and culture. Among organizations that survive beyond the average life expectancy, many cultures finally change only after a near-death experience triggered by becoming complacent, rigid, or simply feeling entitled to something the customer disagreed with: reasons enough for almost any company to adopt analytics.

While the need for more accurate predictive abilities is obvious for marketers, it is no less important for risk management, investment, science, medicine, government, and most other areas of society.

Key elements that impact predictive outcomes:

    • Quality of data, including integrity, scale, timeliness, access, and interoperability
    • Quality of algorithms, including design, efficiency, and execution
    • Ease of use and interpretation, including visuals, delivery, and devices
    • How predictions are managed, including verification, feedback loops, accountability, and the decision chain
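
To make these elements a bit more concrete, the following is a minimal, purely illustrative sketch in Python (pandas and scikit-learn), not a description of any particular product: it checks data quality, measures algorithm quality on held-out data, and records the verified score as a simple feedback loop. The DataFrame, column names, and thresholds are hypothetical.

# Purely illustrative sketch touching each element above: data quality checks,
# algorithm quality measured on held-out data, and a recorded verification
# step for the feedback loop. Assumes a hypothetical DataFrame with a binary
# "outcome" column and an "as_of" timestamp column.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def check_data_quality(df: pd.DataFrame) -> pd.DataFrame:
    """Quality of data: drop incomplete rows and records older than 30 days."""
    df = df.dropna()
    return df[df["as_of"] >= df["as_of"].max() - pd.Timedelta(days=30)]

def train_and_verify(df: pd.DataFrame):
    """Quality of algorithm plus managed predictions: verify on held-out data."""
    features = df.drop(columns=["outcome", "as_of"])
    X_train, X_test, y_train, y_test = train_test_split(
        features, df["outcome"], test_size=0.25, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    # Feedback loop: log the verified score so accountability for drift exists.
    print(f"verified holdout AUC: {auc:.3f}")
    return model, auc

In a real deployment each of these steps would be automated, monitored, and tied into the decision chain rather than run by hand.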

5)  Embrace Independent Standards

Among the most important decisions impacting an organization’s future ability to adapt its enterprise computing to fast-changing external forces, which increasingly determine whether the organization succeeds or fails, is whether to embrace independent standards for software development, communications, and data structure.

Key issues to understand about independent standards:

    • Organizational sovereignty—it has proven extremely difficult and often impossible to maintain control of one’s destiny in an economically sustainable manner over the long-term with proprietary computing standards dominating enterprise architecture
    • Trade secrets, IP, IC, and differentiation are very difficult to secure when relying on consultants who represent competitors in large proprietary ecosystems
    • Lock-in and high maintenance fees are enabled primarily by proprietary standards and lack of interoperability
    • Open source is not at all the same as independent standards, nor does it necessarily improve adaptive computing or TCO
    • Independent standards bodies are voluntary in most of the world, slow to mature, and influenced by ideology and interests within governments, academia, industry, and IT incumbents
    • The commoditization challenge and need for adaptive computing is similar with ubiquitous computing regardless of standards type

Video short on data silos + extending BI to entire workplace


Brief video on adaptive computing, avoiding data silos and extending BI across the enterprise and entire information workplace:

Legacy of the Tōhoku Earthquake: Moral Imperative to Prevent a Future Fukushima Crisis


An article in the New York Times reminds us once again that without a carefully crafted and highly disciplined governance architecture in place, perceived misalignment of personal interests between individuals and organizations across cultural ecosystems can lead to catastrophic decisions and outcomes. The article was written by Martin Fackler and is titled: Nuclear Disaster in Japan Was Avoidable, Critics Contend.

While not unexpected by those who study crises (it is yet another case in which brave individuals raised red flags only to be shouted down by the crowd), the article does provide instructive granularity that should guide senior executives, directors, and policy makers in planning organizational models and enterprise systems. In a rare statement by a leading publication, Martin Fackler reports that insiders within “Japan’s tightly knit nuclear industry” attributed the Fukushima plant meltdown to a “culture of collusion” involving “powerful regulators and compliant academic experts”. This is a dynamic very similar to that found in other preventable crises, from the broad systemic financial crisis to narrow product-defect cases.

One of the individuals who warned regulators of just such an event was professor Kunihiko Shimazaki, a seismologist on the committee created specifically to manage risk associated with Japan’s offshore earthquakes. Shimazaki’s conservative warnings were not only ignored, but his comments were removed from the final report “pending further research”. Shimazaki is reported to believe that “fault lay not in outright corruption, but rather complicity among like-minded insiders who prospered for decades by scratching one another’s backs.” This description applies almost verbatim to events in the U.S., where multi-organizational cultures evolved slowly over time to become among the highest systemic risks to life, property, and economy.

In another commonly found result, the plant operator Tepco failed to act on multiple internal warnings from its own engineers, who calculated that a tsunami could reach up to 50 feet in height. This critical information was withheld from regulators for three years and finally reported just four days before the 9.0 quake, which caused a 45-foot tsunami and resulted in the meltdown of three reactors at Fukushima.

Three questions for consideration

1) Given that the root cause of the Fukushima meltdown was not the accurately predicted earthquake or tsunami, but rather dysfunctional organizational governance, are leaders not then compelled by moral imperative to seek out and implement organizational systems specifically designed to prevent crises in the future?

2) Given that peer pressure and social dynamics within the academic culture, and its relationship with regulators and industry, are cited as the cause by the most credible witness (a member of their own community who predicted the event), would not prudence demand that responsible decision makers consider solutions from outside the afflicted cultures?

3) With not-invented-here syndrome near the core of every major crisis in recent history, each of which has seriously degraded economic capacity, can anyone afford not to?

Steps that must be taken to prevent the next Fukushima

1) Do not return to the same poisoned well for solutions that caused or enabled the crisis

  • Not-invented-here syndrome, combined with a bias for institutional solutions, perpetuates the myth that humans are incapable of anything but repeating the same errors over again.

  • This phenomenon is evident in the ongoing financial crisis, which suffers from similar cultural dynamics among academics, regulators, and industry.

  • Researchers have only recently begun to understand the problems associated with deep expertise in isolated disciplines and cultural dynamics. ‘Expertisis’ is a serious problem within disciplines; it tends to blind researchers to transdisciplinary patterns and discovery, severely limiting consideration of possible solutions.

  • Systemic crises overlap too many disciplines for the academic model to execute functional solutions, as evidenced by the committee in this case, which sidelined its own seismologist’s warnings for further study, a classic enabler of systemic crises.

2) Understand that in the current digital era and through the foreseeable future, organizational governance challenges are also data governance challenges, which require the execution of data governance solutions

    • Traditional organizational governance is rapidly breaking down with the rise of the neural network economy, yet governance solutions are comparatively slow to be adopted.

    • Many organizational leaders, policy makers, risk managers, and public safety engineers are not functionally literate with state-of-the-art technology, such as semantic, predictive, and human alignment methodologies.

    • Functional enterprise architecture that has the capacity to prevent the next Fukushima-like event, regardless of location, industry, or sector, will require a holistic design encapsulating a philosophy that proactively considers all variables that have enabled previous events.

      • Any functional architecture for this task cannot be constrained by the not-invented-here-syndrome, defense of guilds, proprietary standards, protection of business models, national pride, institutional pride, branding, culture, or any other factor.

3) Adopt a Finely Woven Decision Tapestry with Carefully Crafted Strands of Human, Sensory, and Business Intelligence

Data provenance is foundational to any functioning critical system in the modern organization, providing:

      • Increased accountability

      • Increased security

      • Carefully managed transparency

      • Far more functional automation

      • The possibility of accurate real-time auditing
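
As a rough illustration of how provenance can be made concrete, the short Python sketch below attaches a who/when/source record and a content hash to a piece of data before it enters reporting; the class and field names are hypothetical and stand in for whatever a given architecture would actually use.

# Illustrative only: one way to represent a provenance record so that every
# data element carries accountability (who/when/source) and can be audited.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProvenanceRecord:
    entity_id: str        # person, sensor, or system that produced the data
    source: str           # originating system or document
    created_at: str       # ISO-8601 timestamp, supporting real-time auditing
    payload_digest: str   # hash of the data itself, supporting integrity checks

    @staticmethod
    def for_payload(entity_id: str, source: str, payload: bytes) -> "ProvenanceRecord":
        return ProvenanceRecord(
            entity_id=entity_id,
            source=source,
            created_at=datetime.now(timezone.utc).isoformat(),
            payload_digest=hashlib.sha256(payload).hexdigest(),
        )

# Example: attach provenance to a sensor reading before it enters reporting.
record = ProvenanceRecord.for_payload("sensor-042", "plant-telemetry", b'{"wave_height_m": 5.7}')
print(json.dumps(asdict(record), indent=2))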

4) Extend advanced analytics to the entire human workforce

      • Incentives for pre-emptive problem solving and innovation

      • Automate information delivery:

        • Record notification

        • Track and verify resolution

        • Extend network to unbiased regulators of regulators

      • Plug-in multiple predictive models:

        • Establish resolution of conflicts with unbiased review

        • Automatically include results in reporting to prevent obstacles to essential targeted transparency as occurred in the Fukushima incident
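
The sketch below is a deliberately simplified, hypothetical illustration of that delivery loop in Python: a warning is recorded, tracked until resolved, and unresolved warnings are always included in reporting so they cannot be quietly dropped the way the tsunami calculations were.

# A toy sketch (hypothetical names, not a product API) of the delivery loop
# described above: record a notification, track it until resolved, and always
# include unresolved items in reporting.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Notification:
    issue: str
    raised_by: str
    resolution: Optional[str] = None

@dataclass
class NotificationLog:
    items: list = field(default_factory=list)

    def record(self, issue: str, raised_by: str) -> Notification:
        note = Notification(issue, raised_by)
        self.items.append(note)
        return note

    def resolve(self, note: Notification, resolution: str) -> None:
        note.resolution = resolution  # verification and review would happen here

    def report(self) -> str:
        # Reporting automatically includes every unresolved warning.
        open_items = [n for n in self.items if n.resolution is None]
        return f"{len(open_items)} unresolved warning(s): " + "; ".join(
            n.issue for n in open_items
        )

log = NotificationLog()
log.record("tsunami model exceeds design basis", "engineering")
print(log.report())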

5) Include sensory, financial, and supply chain data in real-time enterprise architecture and reporting

      • Until this year, extending advanced analytics to the entire human workforce was considered futuristic (see the 1/10/2012 Forrester Research report Future of BI), in part due to scaling limitations in high-performance computing. While always evolving, the design has existed for a decade.

      • Automated data generated by sensors should be carefully crafted and combined in modeling with human and financial data for predictive applications for use in risk management, planning, regulatory oversight and operations.

        • Near real-time reporting is now possible, so governance structures and enterprise architectural design should reflect that functionality.
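
As a minimal sketch of the idea, with invented column names and toy data, the Python snippet below aligns a sensor stream with the most recent financial record on a shared timeline; a production system would of course use streaming infrastructure, but the modeling step is the same.

# Minimal sketch of combining sensor and financial data on time for near
# real-time reporting; column names and values are invented for illustration.
import pandas as pd

sensors = pd.DataFrame({
    "ts": pd.to_datetime(["2012-03-01 00:00", "2012-03-01 00:05"]),
    "vibration": [0.3, 0.9],
})
financials = pd.DataFrame({
    "ts": pd.to_datetime(["2012-03-01 00:01"]),
    "maintenance_spend": [12000.0],
})

# Align each sensor reading with the most recent financial record.
combined = pd.merge_asof(sensors.sort_values("ts"),
                         financials.sort_values("ts"), on="ts")
print(combined)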

 

Conclusion

While obviously not informed by a first-person audit and review, if the reports and quotes from witnesses surrounding the Fukushima crisis are accurate, and they are generally consistent with dozens of other human-caused crises, we can conclude the following:

The dysfunctional socio-economic relationships in this case resulted in an extremely toxic cultural dynamic across academia, regulators and industry that shared tacit intent to protect the nuclear industry. Their collective actions, however, resulted in an outcome that idled the entire industry in Japan with potentially very serious long-term implications for their national economy.

Whether the causes are psychological, social, technical, economic, or some combination thereof, it would seem that no justification for not deploying the most advanced crisis prevention systems can be left standing. Indeed, we all have a moral imperative that demands we rise above our biases, personal and institutional conflicts, and defensive nature to explore and embrace the most appropriate solutions, regardless of origin, institutional labeling, media branding, or any other factor. Some crises are indeed too severe not to prevent.

Mark Montgomery
Founder & CEO
Kyield
http://www.kyield.com

New Video: Extending Analytics to the Information Workplace


Consolidated thoughts and comments 2-2012


I thought it might be of interest to consolidate a few thoughts and comments over the past couple of weeks (typo edits included).

~~~~~~~~~~~~~~

Tweeted thoughts @Kyield

As distasteful as self-promotion can be, value entering the knowledge stream would be quite rare or postmortem otherwise, as king bias rules

~~

Raw data is just a mess of yarn until interlaced by master craftspeople with a quality loom, providing context, utility, and probability

~~

As with any predatory culture sculpted by inherent naivete in nature, IT carnivores feast on Leporidae and become frustrated by Testudines

~~

Super organizational intelligence through networked computing has already surpassed the capacity of the physical organization

~~

Superhuman intelligence with machines won’t surpass the human brain until organic computing matures

~~~~~~~~~~~~~~

At Data Science Central in response to an article and posting by Dan Woods at Forbes: IBM’s Anjul Bhambhri on What Is a Data Scientist?

Until very recently (perhaps still), very few outside of a minority in research seemed to appreciate the changes for data management in networks, and most of those scientists were engaged in physics or trans-disciplinary research. The ability to structure data by employing AI techniques is beginning to manifest in ways intended by the few in CS who have been working the problem for almost 20 years now. The result is a system like ours, designed from inception, over time, in conjunction with a new generation of related technologies specifically for the neural network economy, with what we’ll call dumb data silos (extremely destructive from a macro perspective) in the bulls-eye of replacement strategies. Of course, the need for privacy, security, and commerce is even more pressing in the transparent world outside of database silos, which poses a challenge especially in consumer and public networks, or at the hand-off of data between enterprise and public networks. The new generation of technologies requires a new way of thinking, especially for those who were brought up working with relational databases. Like any progress in mature economies, next-generation technologies will not come without some level of disruption and displacement, so there is a fair amount of defensive tactics out there. Evolution continues anyway.

~~~~~~~~~~~~~~

At Semanticweb.com in response to an article by Paul Miller: Tread Softly

Yes, I suppose this does need a bit more attention. We shouldn’t have a consumer web and an enterprise web in technology, languages, and standards—agreed, although I do think we’ll continue to see different functionality, and to some extent different technology, that provides value relative to the two markets’ very different needs. Cray, for example, announced a new division last week based on semantics, with several partners close to this blog, but it’s very unlikely we’ll see a small business web site developer on the sales target list of Cray’s DOS anytime soon. There is a reason for that.

While I agree that it can be (and often is) a misleading distinction—the reason for commenting—it is misleading precisely because of the confusion over the very different economic drivers, models, and needs of the two markets. In this regard I am in total agreement with Hendler in the interview just before this article: the problem is that the semantic web community doesn’t understand the economics of the technology they are researching, developing, and in many cases selling, and this is in part due to a lot of misleading information in their learning curve—not so much here, but before they leave the university. That is partly due to the maturity of the technology, but also to the nature of R&D and tech transfer, where the people deciding what to invest in often have little experience or knowledge of product development and applied economics, and also have conflicting internal needs that quite often influence or dictate where and how the dollars are invested. (Note: I would add here on my own blog a certain level of ideological bias that doesn’t belong in science but is nonetheless prevalent.)

The free web, supported in part by advertising, in part by e-commerce, but still mainly by user-generated content, should, like any economic model, heavily influence product and technology development for those products targeted at that specific use case. We are beginning to see a bit of divergence, with tools reflective of the vastly different needs (RDFa was a good standards example). For the sake of example, it’s not terribly challenging for a Fortune 50 company to form a team of experts to create a custom ontology and operate the complex set of tools that are the norm in semantics; indeed, they usually prefer the complexity, as would-be competitors often cannot afford the same luxury, so the high bar is a competitive advantage. However, the average small business web designer is already tapped out keeping up with the changing landscape of the existing web (not to mention mobile) and cannot even consider taking on that same level of complexity or investment.

The technical issues are understood, but the economics are not, especially the high cost of talent for semantic web technologies; in some cases, as cited above, even universities and governments understand well that complexity in the enterprise provides strong career security.

So if I am providing a free web site service that is supported by ad revenue, it’s unlikely that I will adopt the same technology and tools developed for intelligence agencies, the military, and the Fortune 500. The former lives in the most hyper-competitive environment in human history, while the latter quite often have very little competition and essentially unlimited resources. Vastly different worlds, needs, markets, tactics, and tools. But we still must use the same languages to communicate between the two; therein lies the challenge in the neural network economy, or for developing, adopting, and maintaining network standards.

The economic needs and abilities are radically different, and that should dominate the models, even though it’s true that both are part of the same neural network economy and must interact with the least integration friction possible. I do believe that, as we’ve seen with other technologies, scale will eventually converge the two—so-called consumerization, although in this case it’s somewhat in reverse. So you can see that I agree with your wise advice to tread softly on hype around the consumer semantic web, but I am quite certain that it’s more misleading not to point out the dramatically different economic drivers and needs of the two markets. Indeed, it’s quite common for policy makers and academics to create large numbers of martyrs (often with good intentions) by promoting and even funding models they have no experience with beyond the subsidized walls of research. What is great for society is not always great for the individual entrepreneur or investor, and quite often isn’t.

I’ve recently decided that, in an attempt to prevent unnecessary martyrdom by very inexperienced and often ideological entrepreneurs of the type common in semantics, we should just take the heat of incoming arrows and point out the sharp misalignment of interests between the two markets and their stakeholders. The supermajority of revenue in the semantic web reflects the jobs posted here recently: defense contractors, supercomputing, universities, governments, and the Fortune 500.

It’s not by accident that in venture capital we see individuals who specialize; very few tend to be good at both enterprise and consumer markets, especially in tech transfer during, and graduating from, the valley of death. While exceptions exist, and the companies they invest in can serve both markets, particularly as they mature, the norm is that the understanding, modeling, tactics, and tools required to be successful in the two markets are usually conflicting and contradictory in the early stages.

So it’s not only possible to experience an inflection point for the same technology in one market and not the other, it’s actually more common than not (although I agree that it is not preferable in this case). A decade ago that may have been surprising to most; today it’s not surprising to me. A web that provides deep meaning and functionality is beyond the economic ability of the advertising model that dominates the majority of the web, and even beyond the majority of end consumers’ ability to pay for it. The gap is closing and I expect it to continue to close over time, but to date I still see the functionality on the free semantic web supported primarily by subsidies of one form or another—whether capital, talent, infrastructure, or some combination thereof. It’s been a long day already so I’m groggy, but I hope this helps clarify the point I was attempting to make. Thanks, MM