What’s Wrong With the Neural Network? Lack of Data Structure that Enables Governance
January 31, 2012
A recent article at CACM raised a topic I’ve been thinking about off and on for quite a few years: the relationship between network architecture, security, functionality, and sustainable economics in the neural network economy.
The article in question is titled “BufferBloat: What’s Wrong With the Internet?”, in discussion format with Vint Cerf (an old acquaintance), Van Jacobson, Nick Weaver, and Jim Gettys. Thanks to Franz Dill, who alerted me to the article on his blog.
For this discussion let’s define the current neural network as encompassing any individual entity, device, application, or sensor that is either part of, or connected to, the Internet, World Wide Web, or an extension thereof, such as telecom, cable, or wireless networks. By looking at this problem systemically, in the context of what a neural network requires to perform as a sustainable ecosystem, we can perhaps step back far enough to gain a perspective that may help address this and other specific problems in the neural network economy.
Like most ecosystems, the neural network obeys a first rule taken directly from biology: to maintain itself, the network must contain the essential elements that sustain its existence; otherwise it will simply perish. For the sake of this discussion, let’s place the label of economics on the discipline and define the activity as one that serves the purpose of achieving sustainable resource management. In this context, sustainability of the ecosystem would include any entity providing a subsidy.
In order to achieve sustainability, especially when dealing with economic entities, a critical mass of essential participants must be motivated to behave in ways that favor sustainability. To manage the impacts of that behavior, usually through the incentives and penalties we call regulation, a structure of governance is generally required. And in order for the ecosystem to adapt to constant change, the governance structure found to be most effective is one that embraces markets, aka market economics, or in long form: resource management through governance of markets by a regulatory scheme that incentivizes and enforces adaptable behavior toward a sustainable ecosystem. Of course any sustainable governance system for neural networks must also consider issues such as preventative maintenance, security, and growth and contraction, among others.
When we observe the type of behavior demonstrated in the BufferBloat problem, the underlying cause is quite often a combination of factors spanning network engineering and governance engineering, both of which appear to be critical causal factors in this case. In the neural network economy, regardless of the many types of motivation that influence individual behavior, the network must first contain sufficient intelligence to inform the individual (whether human or machine) about the impact of their behavior; and in order for machines to collect that intelligence, the global computer network must provide an adaptable model that balances security and privacy according to regional regulation.
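To make the network-engineering half of that combination concrete, the arithmetic behind bufferbloat is simple: an oversized buffer sitting ahead of a slow link adds delay equal to the buffer’s size divided by the link’s drain rate. A minimal sketch (the figures below are illustrative, not taken from the CACM article):

```python
# Illustrative sketch of the bufferbloat arithmetic: the worst-case delay
# a full FIFO buffer adds is its size divided by the link's drain rate.

def queueing_delay_ms(buffer_bytes: int, link_mbps: float) -> float:
    """Delay added by a full buffer draining onto a link, in milliseconds."""
    link_bytes_per_sec = link_mbps * 1_000_000 / 8
    return buffer_bytes / link_bytes_per_sec * 1000

# Even a modest 64 KB buffer on a 1 Mbps uplink adds over half a second:
print(round(queueing_delay_ms(64 * 1024, 1.0)))  # → 524 (milliseconds)
```

The numbers make the governance point as well: no single party "sees" this delay being created, which is why the problem persists without coordinated intelligence about behavior.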
Finally, as our patented system provides, that adaptability should in my view include the quality and quantity of data as defined and managed by each entity’s profile within the governance parameters, whether national regulations or business rules in the enterprise. For example, the operating system I am using to write and upload this to the Web has a basic profile with built-in security and tracking features, but unlike Kyield the profile functionality in this OS does not extend to my specific data needs or those of my organizational entity.
One of the intriguing aspects of the BufferBloat challenge is that it highlights the problems that can arise when very little provenance exists; without it, accountability and security remain elusive, especially given regional regulatory schemes. This is why we long ago embraced the concept of universal standards for data structure, aka the ‘semantic web’. That conceptual framework contained the potential to allow a more intelligent, functional, and, one hopes, rationally sustainable medium.
But that gets to this awkward problem of not knowing what the source of the congestion might be unless you’re monitoring source/destination pair traffic or something along those lines. – Vint Cerf
…it can turn into an impossible problem because someone who doesn’t want to be policed will just spread traffic over a zillion addresses, and there’s no way to detect whether those are separate flows or all coming from a single user. – Nick Weaver
As in the physical world, when followed down the maze to its inevitable conclusion, governance reduces to an individual human rights issue that overlaps civil liberties, privacy, security, individual property rights, and, in the U.S., those covered by the Bill of Rights. Many people believe that provenance and privacy are incompatible, but in fact it depends, again, on the combination of technology architecture and the choices made in governance policy. The problem outlined above by Nick Weaver is only possible because the specific architecture of the Internet allows it. That is to say, anonymity and lack of provenance are intentional policy of the Internet, a policy that in some parts of the world, and perhaps at certain times, is more necessary than in others, particularly under threat from heavy-handed governments, which calls for adaptability.
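One way architecture can reconcile provenance with privacy is to tag traffic with a keyed pseudonym rather than a raw identity: flows from one entity become linkable, and therefore accountable, without that entity being publicly identifiable. A hypothetical sketch (the registry key and entity names are assumptions for illustration, not part of any deployed scheme):

```python
# Hypothetical sketch: keyed pseudonyms give provenance without exposing
# identity. Only a trusted registry holding the key can reverse the mapping.
import hashlib
import hmac

REGISTRY_KEY = b"held-by-a-trusted-registry"  # assumption: a governance body holds this

def flow_tag(entity_id: str) -> str:
    """Stable pseudonym: same entity -> same tag; the tag alone reveals nothing."""
    return hmac.new(REGISTRY_KEY, entity_id.encode(), hashlib.sha256).hexdigest()[:16]

# Two flows from one entity carry the same tag and can be policed together...
assert flow_tag("org:example") == flow_tag("org:example")
# ...while distinct entities remain distinguishable without exposing identity.
assert flow_tag("org:example") != flow_tag("org:other")
```

Under such a scheme, Weaver’s "zillion addresses" evasion becomes harder, because accountability attaches to the keyed tag rather than to the address.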
In a general sense the BufferBloat problem is partly about alignment of interests across the supply chain of vendors, or, as is often the case, misalignment of interests. For example, search engines have a business model that depends on advertising, which in turn depends on a ‘free Web’ and is somewhat reliant on unlimited bandwidth. Of course, as the BufferBloat case demonstrates, the Web is certainly not free; rather, it’s a question of who is paying for what, when, and how. The interests of hardware and desktop software vendors have been fairly well aligned with search engines to date, as they all tend to profit from demand for their products subsidized by free content. Telecoms and cable companies also profit from free content on the Web, although not necessarily from unfettered bandwidth; and generally speaking, consumer products companies have also shared an alignment of interests with the ‘free’ advertising model of the Web, provided of course that the ROI on advertising remained beneficial. These dynamics are beginning to change, however, as consumer companies, IT vendors, and governments begin to realize that their customers need jobs for the income to buy their products and pay taxes.
The current neural network economic model conflicts to some degree with the interests of countries like the U.S., given our service-oriented economy, as well as with most organizations in the so-called knowledge economy, many of which are in the process of being disrupted and/or replaced by the business models of the Internet and Web. Essentially, the economic model of the Web is made possible by massive subsidies from content providers of every type worldwide, under an advertising revenue-sharing structure that requires tracking and favors volume. This was well understood before the current search leaders were conceived, but the model was viewed as a necessary step, albeit a reluctant one for Silicon Valley, which did not want to see its future limited to advertising.
Apple is one exception in the IT industry: it has profited greatly from the poor structure and governance of the neural network economy by creating platforms and models that are more economically sustainable and better aligned with the interests of content providers. But is the Apple case not yet another symptom of the lack of network governance?
The BufferBloat problem is a good example of what can occur in a neural network environment where the limitations of a single discipline of limited scope and interest can impact the global economy, where misplaced ideology and good intentions have the potential to wreak havoc, and where the business models of those influencing governance can directly conflict with the broader needs of both the physical and neural network economies.
The scale challenge with structured data
While scale has occasionally served as an excuse not to embrace standards or deploy structured data, scale has also been a legitimate technical obstacle to wider adoption, as has ease of use, representing significant if temporary technical risks. However, the trajectory of innovation in the underlying hardware, software, and tradecraft has been pointing toward resolution all along, on a path parallel to Moore’s law. What only a supercomputer could accomplish a few years ago with large data sets requiring hours to run can now be performed in minutes or even seconds in most data centers. Moreover, the short-term trajectory reflects a very high probability of near-real-time execution at consumer search scale soon.
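As a rough illustration of what structured data buys at scale, consider a toy triple store (the entity names and predicates below are hypothetical): once data carries explicit subject–predicate–object structure, a simple index turns provenance-style queries into lookups whose cost scales with the number of matches rather than the size of the store, exactly the kind of operation that falling hardware costs keep making cheaper.

```python
# Toy sketch of structured (subject, predicate, object) data with an index,
# so that provenance-style queries are lookups rather than full scans.
from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.triples = []
        self.by_predicate = defaultdict(list)  # simple secondary index

    def add(self, subject, predicate, obj):
        triple = (subject, predicate, obj)
        self.triples.append(triple)
        self.by_predicate[predicate].append(triple)

    def query(self, predicate):
        """Indexed lookup: cost scales with matches, not total triples."""
        return self.by_predicate[predicate]

store = TripleStore()
store.add("flow:42", "sourceEntity", "org:example")
store.add("flow:42", "dataQuality", "verified")
store.add("flow:99", "sourceEntity", "org:other")

print(store.query("sourceEntity"))  # two matching triples, no full scan
```

Real semantic web stores index all three positions and far more aggressively, but the principle is the same: structure moves the cost from query time to write time, which is the trade hardware trends have been steadily subsidizing.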
While structured data can only provide a partial solution in the near-term to the specific BufferBloat challenge, the combination of a Semantic Web and Internet with deeper embedded intelligence is foundational in providing the potential to overcome a host of complex technical and governance challenges within the neural network economy, including the potential to assist greatly with security, privacy, data quality, data volume, business model alignment, and more stable global economic equilibrium.