Memory Expansion – Mining the Data Gems of a Jeweled Babylon of Information




Memomics, understood as the study of the Meme and its decoding into an ontological mapping, is a valuable tool for improving semantic webs and search engines. Commercial and advertising applications facilitated by Artificial Intelligence agents can benefit from the correlations found, as explained below:

According to Wikipedia, a Meme is a term that identifies ideas or beliefs that are transmitted from one person or group of people to another. The name comes from an analogy: just as genes transmit biological information, Memes can be said to transmit the information of ideas and beliefs. The Memome can be viewed as the complete collection of all Memes. If we delve a little deeper into this concept, it can also be said to encompass all human knowledge.

Genomics and Proteomics are the study of the genome, the totality of the hereditary information of organisms and their complete complement of proteins, respectively. Likewise, Memomics can be considered the study of the Memome, the complete collection of all Memes.

In Genomics and Proteomics, the study involves different types of “mapping” of the functions and structures of genes and proteins. The mapping can be, for example, pathological, that is, the correlation between expression profiles of certain genes and proteins and diseases; or it can be topological: expression with respect to a certain type of tissue, cell type or organ.

Likewise, Memomics studies the ontological mapping of ideas and terms. A company, Alitora Systems, has taken the first steps in the field of Memomics, and guess where they started: with life science data. They have developed useful text and data extraction tools that can speed up meaningful searches and provide links to the most ontologically correlated concepts.

A more ambitious project would be a complete ontological mapping of all human knowledge. That is, for each existing term or concept, the concepts to which it is naturally linked. By this I mean not just a semantic mapping, which decomposes the meaning of a term into features and other terms. I would like to expand the mappings as suggested in my previous article, “Minerva OWLs only fly at dusk – Patently Intelligent Ontologies”: mapping the proximity relationship of each term defined in a semantic web with every other equally defined term, to find the average distance between those terms across all the documents of the entire World Wide Web and the weighted frequency of such occurrences. Such an ontology map could detect terms whose occurrence correlation is well above the “noise”. Many trivial terms occur with high frequency in proximity to virtually any term. They form a noise frequency level, a threshold that significant term correlations must exceed. Such terms include all kinds of syntactic words like conjunctions, adverbs, adjectives, modal verbs, etc.

One disadvantage of setting the threshold too high is that terms that are normally trivial can, in combination with another term, have a very specific meaning, and such combinations would be filtered out.
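A minimal sketch of the proximity mapping described above: count how often two terms occur within a small window of each other, then keep only the pairs whose count clears a noise threshold after trivial (syntactic) terms are filtered out. The corpus, window size, stopword list and threshold here are all illustrative assumptions, not a definitive implementation.

```python
from collections import Counter

# Trivial "noise" terms (a tiny illustrative stopword list).
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in"}

def cooccurrence_counts(documents, window=5):
    """Count how often each term pair occurs within `window` words."""
    pairs = Counter()
    for doc in documents:
        tokens = doc.lower().split()
        for i, t in enumerate(tokens):
            for u in tokens[i + 1 : i + 1 + window]:
                if t != u:
                    pairs[tuple(sorted((t, u)))] += 1
    return pairs

def significant_pairs(pairs, noise_threshold):
    """Drop stopword pairs and pairs at or below the noise floor."""
    return {p: n for p, n in pairs.items()
            if n > noise_threshold and not set(p) & STOPWORDS}

docs = [
    "genes transmit biological information",
    "memes transmit information of ideas",
    "genes and memes transmit information",
]
sig = significant_pairs(cooccurrence_counts(docs), noise_threshold=1)
# ("information", "transmit") survives with a count of 3; pairs
# involving "of" or "and" are treated as noise and discarded.
```

A production version would also record the average token distance per pair and weight counts by it, as the article suggests; the simple count above only captures the frequency side.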

When this ontological mapping is carried out only within specific segmented classes/fields of meaning, important correlations can suddenly emerge that were not visible across most classes and fields.

Therefore, such ontological proximity mapping with weighted frequency of occurrence could be carried out in combination with a “website classification” (i-taxonomy).

Vice versa, the ontological proximity mapping exercise with weighted frequency of occurrence could itself provide classes and subclasses. The process can therefore be implemented iteratively: a meaningful mapping creates classes, which in turn can be data-mined to find new mappings and suggest new subclasses.
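An illustrative sketch of this iterative process: a global mapping pass yields classes (here, connected components of the correlation graph), and re-mining only the documents of one class surfaces correlations that fell below the global noise threshold. The corpus, the fractional threshold and all helper names are assumptions made for the example.

```python
from collections import Counter
from itertools import combinations

def mine_pairs(docs, min_frac):
    """Keep term pairs that co-occur in at least `min_frac` of the docs."""
    counts = Counter()
    for doc in docs:
        counts.update(combinations(sorted(set(doc.lower().split())), 2))
    return {p for p, n in counts.items() if n / len(docs) >= min_frac}

def classes_from_pairs(pairs):
    """Group correlated terms into classes (connected components)."""
    classes = []
    for a, b in pairs:
        hits = [c for c in classes if a in c or b in c]
        merged = set().union({a, b}, *hits)
        classes = [c for c in classes if c not in hits] + [merged]
    return classes

docs = [
    "gene protein pathway",
    "gene protein disease",
    "drug disease pathway",
    "market price seller",
    "market price buyer",
]
global_pairs = mine_pairs(docs, min_frac=0.4)   # only (gene, protein), (market, price)
classes = classes_from_pairs(global_pairs)
# Re-mine just the documents touching the {gene, protein} class:
sub = [d for d in docs if {"gene", "protein"} & set(d.split())]
sub_pairs = mine_pairs(sub, min_frac=0.4)       # (disease, gene) now emerges
```

The point of the toy data: (disease, gene) co-occurs in only 1 of 5 documents globally (below threshold), but in 1 of the 2 documents of its class, so it only becomes visible after segmentation.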

Another ontological mapping would determine whether certain links on the web correlate with certain terms.

The implementation must start with all the information present on the web on a fixed date. This information must somehow be stored as a frozen snapshot for the extensive data-mining exercise of proximity mapping. Once the given Memome is fully decoded, the process can be repeated iteratively with reloads and will eventually catch up with the “present” of that moment.

Artificial intelligence agents will carry out the ontological mapping process and learn from the patterns they recognize, making it easier to map future events and create more classes. In addition, the most frequently used links detected and/or generated in this way can be added to the appropriate hubs in the “Hubbit” system, which I explained in my previous article: “From search engines to hub generators and multi-purpose personal centralized Internet interfaces”. Frequented links will be favored and insignificant links will not reach a permanent stage, according to the evangelical saying “to those who have will be given, from those who do not have will be taken away”, which is also a good metaphor for the way neural links are established in our brain.

Undertaking such a large project would require enormous amounts of memory and computing power and may still be beyond what is technically possible. This is the downside. But the computing power and memory of computers have been increasing exponentially for many decades, and there is no reason to believe that the required technology will not soon be at hand.

The applications and commercial advantages are numerous.

Chatbots and other language systems can be improved by learning from these correlation maps. Search engines can be improved by displaying results ranked by frequency-weighted proximity mapping. At the bottom of a search, you might have suggestions in the form of “people who searched for these terms also searched for…”.
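A hedged sketch of how a search engine might use such a correlation map: results are re-ranked by their summed proximity weight to the query terms, and the map's nearest neighbours double as "people who searched for these terms also searched for…" suggestions. The proximity weights and term names are made up for illustration.

```python
# Precomputed proximity weights; keys are alphabetically sorted pairs.
proximity_map = {
    ("idea", "meme"): 0.9,
    ("gene", "meme"): 0.7,
    ("belief", "meme"): 0.6,
    ("gene", "protein"): 0.8,
}

def weight(a, b):
    """Look up the proximity weight of a term pair (0.0 if uncorrelated)."""
    return proximity_map.get(tuple(sorted((a, b))), 0.0)

def rank_results(query_terms, documents):
    """Order documents by total proximity weight to the query terms."""
    def score(doc):
        return sum(weight(q, t) for q in query_terms for t in doc.split())
    return sorted(documents, key=score, reverse=True)

def also_searched(term, k=2):
    """Suggest the k terms most strongly correlated with `term`."""
    related = [(w, (set(p) - {term}).pop())
               for p, w in proximity_map.items() if term in p]
    return [t for _, t in sorted(related, reverse=True)[:k]]

ranked = rank_results(["meme"], ["gene protein", "idea belief"])
suggestions = also_searched("meme")   # ["idea", "gene"]
```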

Trade ontological mappings can be created in which terms are linked to all companies involved in trading products related to the term. Alitora Systems, for example, has mapped how certain genes linked to diseases are connected to the companies that develop drugs against these diseases through a mechanism involving the associated gene, protein or metabolic pathway.

So you could also create the Commercome as a searchable database: the complete set of all trade relationships, i.e. products linked to sellers, buyers, manufacturers, etc. Commercomics would map these relationships ontologically. Once such an information network has been created, it becomes a very useful and easy way to identify your competitors and newcomers in the field (as long as the system is kept up to date).

Advertising could greatly benefit from such correlation maps. In analogy to suggestions of the form “people who searched for these terms also searched for…”, technology based on ontological maps could be applied to advertising: the same principle behind commercial sites like Amazon.com (“people who bought A also bought B”), but taken a step further in an evolutionary, learning algorithm. For example, advertising costs could be linked both to the frequency of clicks on the ad in question (PPC advertising) and to the frequency with which the ad is displayed, once again obeying the principle of “to those who have will be given, from those who do not have will be taken away”. Other commercial text- and data-mining mappings might chart the frequency of clicks on ads for certain search terms. Again, the AIbot providing these functions would learn from the context, adapt the display of information accordingly, generate classes and extract more specific mappings from the generated subclasses.
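A minimal sketch of the “to those who have will be given” display rule suggested above: each impression/click cycle multiplies an ad's display weight by its click-through rate relative to the mean, so frequently clicked ads are shown more and rarely clicked ads fade out. The rates, ad names and the multiplicative update are illustrative assumptions, not any real ad platform's algorithm.

```python
def update_weights(weights, clicks, impressions):
    """Multiply each ad's display weight by its CTR relative to the
    mean CTR, then renormalize so weights remain a distribution."""
    ctr = {ad: clicks[ad] / impressions[ad] for ad in weights}
    mean = sum(ctr.values()) / len(ctr)
    new = {ad: w * (ctr[ad] / mean) for ad, w in weights.items()}
    total = sum(new.values())
    return {ad: w / total for ad, w in new.items()}

# Two ads start with equal display shares; ad_a gets 3x the clicks.
weights = {"ad_a": 0.5, "ad_b": 0.5}
weights = update_weights(weights,
                         clicks={"ad_a": 30, "ad_b": 10},
                         impressions={"ad_a": 1000, "ad_b": 1000})
# ad_a's share grows to 0.75; ad_b decays to 0.25 and, iterated,
# would eventually "have even what it has taken away".
```

Repeating the update each display cycle gives the evolutionary, self-reinforcing behaviour the article describes; a real system would add exploration so new ads are not starved before they can accumulate clicks.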

Queries to FAQ sheets could be assisted by such AIbots, preferably capable of conversing in natural language like a chatbot. Based on the questions and answers and on user-satisfaction results, such bots could be programmed to learn and evolve into ever more efficient information providers.

Therefore, Memomics can be expanded to become a valuable engine for mining data gems from a jeweled Babylon of information.
