Enabling contextual computing in today’s enterprise information fabrics


Back in the 1970s, Bob Metcalfe, Ethernet pioneer and founder of Internet equipment company 3Com, was working on something called the “Data Reconfiguration Service” for the early days of the Internet. “It was an effort to write a special-purpose programming language to convert data formats,” Metcalfe said during an OriginTrail.io 2021 panel session. The hope was that data on the Internet could be unified into a single, silo-free information network that everyone could access. The project did not come to fruition. What killed the effort, at least early on, was standardization: it proved more efficient for everyone to use the same software than to translate between incompatible formats.

“First you build a platform, then applications emerge. And then you build another platform and the apps emerge,” Metcalfe said. Every platform needs a killer app to make it work. Ethernet’s flagship application, Metcalfe said, was the laser printer. “We all decided that the only way to print on this beautiful high-speed bitmap printer [which was eight feet long and five feet wide during the 1970s] was over Ethernet. So guess what? Everyone had to be on Ethernet. It was this printer that got people plugging this card [in the 1980s and 1990s] into their PCs.”

In 2021, Metcalfe joined the advisory board of OriginTrail, a decentralized knowledge graph service provider that uses P2P data graphs in conjunction with blockchains to enable large-scale sharing of trusted data across supply chains. He and Trace Labs founder and CTO Branimir Rakic (OriginTrail is a Trace Labs product) discussed connectivity trends in a YouTube video, which is my source for this information. Rakic listed the physical layer, the data layer, and the application layer. Metcalfe called the human layer the buyers and sellers.

“Why are neurons better than transistors?” Metcalfe asked. “The answer is connectivity.” There are layers of connectivity that we haven’t begun to reach, he pointed out.

Metcalfe revisited Metcalfe’s law (a.k.a. the network effect) in a 2013 paper he published in one of the IEEE journals.

The original law: “The value of the network grows as the square of the number of nodes attached.”

The complementary law from 2013: value can go to infinity if the number of nodes goes to infinity. The stumbling block, of course, is that nodes can’t multiply forever, but the implication is that as a network keeps growing, usage keeps growing with it, and value scales as the square of that growth. To make his point, Metcalfe fitted an adoption curve to Facebook’s revenue growth.
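The original law is simple enough to sketch in code. The snippet below is a minimal illustration, not anything from Metcalfe's paper; the proportionality constant k is hypothetical.

```python
# Minimal sketch of Metcalfe's law: network value grows as the square of the
# number of attached nodes. The constant k is hypothetical; Metcalfe's 2013
# paper fitted a curve of this shape to Facebook revenue data.
def metcalfe_value(nodes: int, k: float = 1.0) -> float:
    """Relative value of a network with `nodes` attached nodes."""
    return k * nodes ** 2

for n in (10, 100, 1_000):
    print(f"{n:>5} nodes -> relative value {metcalfe_value(n):,.0f}")
# Doubling the number of nodes quadruples the value: the network effect.
```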

He praised OriginTrail for focusing on building an extra layer of connectivity with the decentralized knowledge graph approach. What I’ve learned is that a silo-less network-of-networks approach (which P2P data networks such as IPFS enable) will eventually drive another wave of value on top of what has already been achieved.

Contextual computing and the next level of connectivity

How do you unlock the value of networks of networks without silos? By enabling discovery and reuse at the data layer, something APIs and application-centric programming have not made available.

Instead, use a knowledge graph as the basis for development, declaring reusable logic and predicate rules as an extensible base data model: an ontology. As much as 85 percent of the code becomes redundant when context and machine-readable rules are made available for reuse via ontologies in graphs.
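To make that concrete, here is a minimal sketch, assuming the Python rdflib package and a hypothetical http://example.org/ namespace: the class hierarchy is declared once as graph data, and any application can rediscover the context by querying the graph rather than re-coding it.

```python
# A minimal sketch, assuming the rdflib package and a hypothetical
# http://example.org/ namespace, of declaring reusable context as graph data
# instead of hard-coding it in application logic.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# Ontology: a Supplier is a kind of Organization (declared once, reusable anywhere).
g.add((EX.Supplier, RDFS.subClassOf, EX.Organization))
# Instance data: Acme is asserted only as a Supplier.
g.add((EX.Acme, RDF.type, EX.Supplier))

# Any consumer can rediscover the context by walking the class hierarchy
# in a query, rather than re-implementing the rule in code.
results = g.query("""
    PREFIX ex: <http://example.org/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?org WHERE { ?org a/rdfs:subClassOf* ex:Organization . }
""")
for row in results:
    print(row.org)  # -> http://example.org/Acme
```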

Several years ago, former Defense Advanced Research Projects Agency (DARPA) I2O Director John Launchbury made a video explaining the different approaches to AI throughout history and how those approaches should combine if we want to get closer to artificial general intelligence (AGI).

Launchbury recalled the first phase of AI: rule-based systems, including knowledge representation (KR). (Most KR these days takes the form of knowledge graphs.) These systems, he pointed out, were strong when it came to reasoning in specific contexts. Rule-based and knowledge-based systems continue to be quite important; TurboTax is one example he gave.


Representing knowledge with declarative languages in the form of knowledge graphs continues to be the most effective way to create and weave contexts. One example is Datalog, where a rule places a derivable fact on one side of an expression and the conditions that derive it on the other. Another example is the RDF stack (data triplified in subject/predicate/object form, where each triple is a small, extensible graph).
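As a rough illustration of the Datalog idea (facts plus rules that derive new facts), here is a naive forward-chaining sketch in plain Python; the parent/ancestor predicates are the textbook example, not anything from the article.

```python
# A naive forward-chaining sketch of Datalog-style facts and rules in plain Python.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def derive_ancestors(facts):
    """ancestor(X, Y) :- parent(X, Y).
       ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z)."""
    derived = {("ancestor", x, y) for (p, x, y) in facts if p == "parent"}
    changed = True
    while changed:
        new = {
            ("ancestor", x, z)
            for (p1, x, y1) in facts if p1 == "parent"
            for (p2, y2, z) in derived if p2 == "ancestor" and y1 == y2
        }
        changed = not new <= derived
        derived |= new
    return facts | derived

print(derive_ancestors(facts))
# Derives ancestor(alice, carol) from the declared rule, with no
# application-specific control flow to maintain.
```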

Machines only solve problems within their frame of reference. Contextual computing would let them work within an expanded frame of reference, allowing networks of context, which some would call an information web. In the process, machines come closer to what we call understanding, associating each node with the parameters, situations, and actions it needs to become meaningful. Relationships describe those parameters, situations, and actions through the way the nodes are contextually connected.

The second phase of AI Launchbury described is the phase we are currently in – statistical machine learning. This stage includes deep learning or multi-layer neural networks and really focuses on the probabilistic rather than the deterministic. Machine learning, as we have seen, can be quite good for perception and learning, but even proponents admit that deep learning, for example, is poor for abstraction and reasoning.

The third phase of AI Launchbury described combines the techniques of phases I and II. Other forms of logic, including description logic, are brought to bear in this phase. In a well-designed, standards-based knowledge graph, contexts are modeled, and the models live and evolve with the instance data.

In other words, more logic becomes part of the graph, where it is potentially reusable and can evolve.
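A minimal sketch of what that combination can look like in practice, with entirely hypothetical names and scores: a statistical model proposes candidate facts, and a constraint declared alongside the knowledge graph accepts or rejects them.

```python
# A minimal sketch (all names and scores are hypothetical) of the phase-three
# idea: a statistical model proposes candidate facts, and constraints declared
# with the knowledge graph's ontology accept or reject them.

# Candidate links proposed by a stand-in ML model, with confidence scores.
ml_candidates = [
    ("acme_corp", "supplies", "widget_x", 0.91),
    ("acme_corp", "supplies", "acme_corp", 0.87),  # plausible to the model, not to the ontology
]

# A constraint declared once alongside the graph: nothing supplies itself.
def satisfies_ontology(subject: str, predicate: str, obj: str) -> bool:
    if predicate == "supplies" and subject == obj:
        return False
    return True

# Keep only candidates that pass both the statistical and the symbolic check.
accepted = [
    (s, p, o, score)
    for (s, p, o, score) in ml_candidates
    if score >= 0.8 and satisfies_ontology(s, p, o)
]
print(accepted)  # [('acme_corp', 'supplies', 'widget_x', 0.91)]
```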

Harnessing the power of knowledge graphs and statistical machine learning together

In January 2023, with the ubiquity of the Internet and so many concurrent improvements in networked computing, everyday developers are using ChatGPT to help them reformat code. Developer advocate Shawn Wang (@swyx) tweeted a telling observation to start the new year:

ChatGPT’s current killer application is not search, therapy, math, browser control, virtual machine emulation, or any of those other cherry-picked examples with huge liability disclaimers.

It’s much more everyday:

Reformatting information from any X format to any Y format.

It’s not as if ChatGPT always (or even reliably) gets the reformatting just right, but it can give developers a head start on a reformatting task. How much of one depends on the prompts each user crafts, as well as the scope of the training corpus, the validation methods used, and the user’s own knowledge.
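For illustration, here is what that everyday reformatting workflow might look like in code, assuming the openai Python package and an API key in the environment; the model name, prompt, and sample data are placeholders, not anything prescribed by the article or the tweet.

```python
# A minimal sketch of the "reformat from X to Y" use case, assuming the
# openai Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

csv_snippet = "name,qty\nwidget,3\ngadget,7"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do here
    messages=[
        {"role": "system", "content": "You convert data between formats."},
        {"role": "user", "content": f"Convert this CSV to a JSON array:\n{csv_snippet}"},
    ],
)

# The output still needs validation, per the caveats above.
print(response.choices[0].message.content)
```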

Either way, ChatGPT could be an indicator that building and maintaining a unified, silo-free web is becoming more feasible with better machine support. That means the standardization Metcalfe mentioned could also be achievable soon. Data networks of networks, if properly architected, will be able to interlock and interact with one another at scale, not only through statistical means but also through human feedback and logic in symbolic form.
