New Video – Kyield Enterprise: Human Economics in Adaptive NN



Consolidated thoughts and comments 2-2012


I thought it might be of interest to consolidate a few thoughts and comments from the past couple of weeks (typo edits included).

~~~~~~~~~~~~~~

Tweeted thoughts @Kyield

As distasteful as self-promotion can be, value entering the knowledge stream would be quite rare or postmortem otherwise, as king bias rules

~~

Raw data is just a mess of yarn until interlaced by master craftspeople with a quality loom, providing context, utility, and probability

~~

As with any predatory culture sculpted by inherent naivete in nature, IT carnivores feast on Leporidae and become frustrated by Testudines

~~

Super organizational intelligence through networked computing has already surpassed the capacity of the physical organization

~~

Superhuman intelligence with machines won’t surpass the human brain until organic computing matures

~~~~~~~~~~~~~~

At Data Science Central, in response to an article posted by Dan Woods at Forbes: IBM’s Anjul Bhambhri on What Is a Data Scientist?

Until very recently (perhaps still), very few outside of a minority in research seemed to appreciate the changes networks bring to data management, and most of those scientists were engaged in physics or trans-disciplinary research. The ability to structure data employing AI techniques is beginning to manifest in ways intended by the few in CS who have been working the problem for almost 20 years now. The result is a system like ours, designed from inception, over time, in conjunction with a new generation of related technologies specifically for the neural network economy, with what we’ll call dumb data silos (extremely destructive from a macro perspective) in the bulls-eye of replacement strategies.

Of course the need for privacy, security, and commerce is even more pressing in the transparent world outside of database silos, which presents a challenge especially in consumer and public networks, or at the hand-off of data between enterprise and public networks. The new generation of technologies requires a new way of thinking, of course, especially for those who were brought up working with relational databases. Like any progress in mature economies, next-generation technologies will not come without some level of disruption and displacement, so there are fair amounts of defensive tactics out there. Evolution continues anyway.

~~~~~~~~~~~~~~

At Semanticweb.com, in response to an article by Paul Miller: Tread Softly

Yes, I suppose this does need a bit more attention. We shouldn’t have separate consumer and enterprise webs in technology, languages, and standards, agreed, although I do think we’ll continue to see different functionality, and to some extent different technology, providing value relative to their very different needs. Cray, for example, announced a new semantics-based division last week with several partners close to this blog, but it’s very unlikely we’ll see a small-business web site developer on that division’s sales target list anytime soon. There is a reason for that.

While I agree that it can be (and often is) a misleading distinction, which is my reason for commenting, it is misleading precisely because of the confusion over the very different economic drivers, models, and needs of the two markets. In this regard I am in total agreement with Hendler in the interview just before this article: the problem is that the semantic web community doesn’t understand the economics of the technology they are researching, developing, and in many cases selling, and this is in part due to lots of misleading information in their learning curve, not so much here but before they leave the university. This is partly due to the maturity level of the technology, but also to the nature of R&D and tech transfer when the people deciding what to invest in have little experience or knowledge of product development and applied economics, and also have conflicting internal needs that quite often influence or dictate where and how the dollars are invested. (Note: on my own blog I would add a certain level of ideological bias that doesn’t belong in science but is nonetheless prevalent.)

The free web, supported in part by advertising, in part by e-commerce, but still mainly by user-generated content, should, like all economic models, heavily influence product and technology development for products targeted at that specific use case. We are beginning to see a bit of divergence, with tools reflecting the vastly different needs (RDFa was a good standards example). For the sake of example, it’s not terribly challenging for a Fortune 50 company to form a team of experts to create a custom ontology and operate the complex set of tools that are the norm in semantics; indeed, they usually prefer the complexity, as would-be competitors often cannot afford the same luxury, so the high bar is a competitive advantage. However, the average small-business web designer is already tapped out just keeping up with the changing landscape of the existing web (not to mention mobile) and cannot even consider taking on that same level of complexity or investment.

The technical issues are understood, but the economics are not, especially the high cost of talent for semantic web technologies. In some cases, as cited above, even universities and governments understand well that complexity in the enterprise provides strong career security.

So if I am providing a free web site service supported by ad revenue, it’s unlikely that I will adopt the same technology and tools developed for intelligence agencies, the military, and the Fortune 500. The former lives in the most hyper-competitive environment in human history, while the latter quite often has very little competition and essentially unlimited resources. Vastly different worlds, needs, markets, tactics, and tools. But we still must use the same languages to communicate between the two; therein lies the challenge in the neural network economy, or for developing, adopting, and maintaining network standards.

The economic needs and abilities are radically different, and that should dominate the models, even though it’s true that both are part of the same neural network economy and must interact with as little integration friction as possible. I do believe that, as we’ve seen with other technologies, scale will eventually converge the two (so-called consumerization, although in this case it’s somewhat in reverse). So you can see that I agree with your wise advice to tread softly on hype with the consumer semantic web, but I am quite certain that it’s more misleading not to point out the dramatically different economic drivers and needs of the two markets. Indeed, it’s quite common for policy makers and academics to create large numbers of martyrs (often with good intentions) by promoting and even funding models they have no experience with beyond the subsidized walls of research. What is great for society is not always great for the individual entrepreneur or investor, and quite often isn’t.

In an attempt to prevent unnecessary martyrdom by very inexperienced and often ideological entrepreneurs of the type common in semantics, I’ve recently decided that we should just take the heat of incoming arrows and point out the sharp misalignment of interests between the two markets and their stakeholders. The super majority of revenue in the semantic web reflects the jobs posted here recently: defense contractors, supercomputing, universities, governments, and the Fortune 500.

It’s not by accident that in venture capital we see individuals specialize: very few tend to be good at both enterprise and consumer markets, especially in tech transfer during, and graduating from, the valley of death. While exceptions exist, and the companies they invest in can serve both markets, particularly as they mature, the norm is that the understanding, modeling, tactics, and tools required to succeed in the two markets are usually conflicting and contradictory in the early stages.

So it’s not only possible to experience an inflection point for the same technology in one market and not the other, it’s actually more common than not (although I agree it’s not preferable in this case). A decade ago that may have been surprising to most; today it’s not surprising to me. A web that provides deep meaning and functionality is beyond the economic ability of the advertising model that dominates the majority of the web, and even beyond the ability of most end consumers to pay for it. The gap is closing, and I expect it to continue to close over time, but to date I still see that functionality on the free semantic web supported primarily by subsidies in one form or another, whether capital, talent, infrastructure, or some combination thereof. It’s been a long day already, so I’m groggy, but I hope this helps clarify the point I was attempting to make. Thanks, MM

Interview with Jenny Zaino at Semanticweb.com


Just wanted to share this interview and article with Jenny Zaino over at Semanticweb.com on my recent patent and related IP.

Manage Structured Data and Reap the Benefits

A detailed paper on this topic is nearing completion; I will post a brief description in the next few days.

What do Watson, House, and our Education System have in Common?


As I was reading articles about Watson winning Jeopardy, I was thinking about one of my wife’s favorite TV shows: House. The main character, Dr. Gregory House, played brilliantly by Hugh Laurie, is an emotionally unstable genius who leads a team of physicians in a diagnostic unit at a fictional teaching hospital in Princeton, New Jersey.

In most episodes of House, the diagnosticians torture this viewer, the patient, and the patient’s family with a game much like Jeopardy, as each of the experts draws on memory over days and weeks to match symptoms with disease and therapy. As if this isn’t painful enough for someone who has been focused on applying semantic technologies to improve healthcare efficiency, the team invariably nearly kills the patient multiple times during the diagnostic Q&A before House has an epiphany, his painkiller-addicted mind finally connecting the dots between Jungian philosophy, tropical parasites, chemical toxicity, and/or genetic disorders to miraculously save the patient (usually).

I have actually found myself talking over the TV at times (very rare otherwise), suggesting some version of “put those symptoms in a database,” “structure your data and stop torturing your patients and viewers,” or “save a million dollars per patient in unnecessary healthcare costs.”
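For readers who wonder what “put those symptoms in a database” might look like in the simplest possible form, here is a toy sketch: observed symptoms are matched against structured disease records and the candidates are ranked by overlap. The diseases, symptoms, and function names here are entirely illustrative, not part of any Kyield product and certainly not medical advice.

```python
# Toy illustration of structured symptom-to-diagnosis matching.
# All data below is made up for the example.

DISEASE_SYMPTOMS = {
    "lupus": {"fever", "joint pain", "rash", "fatigue"},
    "malaria": {"fever", "chills", "sweats", "fatigue"},
    "hypothyroidism": {"fatigue", "weight gain", "cold intolerance"},
}

def rank_diagnoses(observed):
    """Return (disease, matched, missing) tuples, best match first."""
    observed = set(observed)
    results = []
    for disease, symptoms in DISEASE_SYMPTOMS.items():
        matched = observed & symptoms   # symptoms the patient exhibits
        missing = symptoms - observed   # symptoms to check for next
        results.append((disease, sorted(matched), sorted(missing)))
    # Rank by the number of matched symptoms, highest first.
    results.sort(key=lambda r: len(r[1]), reverse=True)
    return results

if __name__ == "__main__":
    for disease, matched, missing in rank_diagnoses({"fever", "fatigue", "chills"}):
        print(disease, matched, missing)
```

A real system would of course weight symptoms, incorporate test results and probabilities, and draw on curated ontologies rather than a hard-coded dictionary, but even this sketch shows why a structured lookup beats days of competitive guessing from memory.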

Of course this frustration comes only after a half-century of witnessing our education system, and by extension our society, favor memorization over the more essential understanding of how to improve, innovate, and solve problems. Most of our challenges as individuals, organizations, a planet, and a species depend on data of a quality far beyond the abilities of any single human expert, or even a group (to believe otherwise would be best categorized as hubris), and even more on the integrity of our collective knowledge and, most importantly, what we do with it: the decision process.

While a sharp memory is essential to solving real-world problems such as diagnostics in highly complex environments, memory is a task much better performed by computer networks than by humans, particularly with properly structured data sourced from, and updated by, human experts. Unfortunately, the vast majority of institutions can no longer afford Watson, House, or our education system, which is among the best arguments for the semantic web.

In the real world, Dr. Gregory House and his team would employ Watson for diagnostics, freeing up time for improved therapy, research, and caring for additional patients.

Mark Montgomery
Founder & CEO
Kyield