Memomics, understood as the study of the Meme and its decoding into an ontological mapping, is a valuable tool for improving semantic webs and search engines. Commercial and advertising applications facilitated by Artificial Intelligence agents can benefit from the correlations found, as explained below.
According to Wikipedia, a Meme is an idea or belief that is transmitted from one person or group of people to another. The name comes from an analogy: just as genes transmit biological information, Memes can be said to transmit information about ideas and beliefs. The Memome can be seen as the complete collection of all Memes. Delving a little deeper into this concept, it can also be said to encompass all human knowledge.
Genomics and Proteomics are the study of the genome, all of the hereditary information of organisms and all of their protein complement, respectively. Likewise, Memomics can be considered the study of the Memome, the complete collection of all Memes.
In Genomics and Proteomics, the study involves different types of “mapping” of the functions and structures of genes and proteins. The mapping can be, for example, pathological, that is, the correlation of the expression profiles of certain genes and proteins with diseases; or it can be topological: expression with respect to a certain tissue type, cell type or organ.
Likewise, Memomics studies the ontological mapping of ideas and terms. A company, Alitora Systems, has taken the first steps in the field of Memomics and guess where they started: with life sciences data. They have developed useful text and data extraction tools that can speed up meaningful searching and provide links to most ontologically correlated concepts.
A more ambitious project would be a complete ontological mapping of all human knowledge: for each existing term or concept, the concepts to which it is naturally linked. By this I mean not just a semantic mapping, which decomposes the meaning of a term into features and other terms. I would like to expand the mappings as suggested in my previous article, “Minerva OWLs only fly at dusk – Patently Smart Ontologies”: that is, to map the proximity relationship of each term defined in a semantic web with every other term, so as to know the average distance between those terms across all the documents of the entire World Wide Web, weighted by the frequency of such occurrences. Such an ontology map could detect terms whose occurrence correlation is well above “noise”. Many trivial terms will occur with high frequency in proximity to virtually any term; these form a noise level, a threshold that significant term correlations must exceed. Such terms include all kinds of function words: conjunctions, adverbs, adjectives, modal verbs, etc.
One disadvantage of setting the threshold too high is that terms that are normally trivial can, in combination with another term, take on a very specific meaning: the article “the”, noise almost everywhere, is essential in the band name “The Who”.
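As a rough illustration of such a noise threshold, the sketch below scores term pairs by pointwise mutual information (PMI) within a sliding window. The window size, the PMI measure and the threshold value are all illustrative assumptions, not part of the proposal itself; a real system would tune them empirically.

```python
import math
from collections import Counter

def proximity_map(documents, window=5, pmi_threshold=1.5):
    """Score term pairs by pointwise mutual information (PMI) within a
    sliding window; pairs below the threshold are treated as 'noise'
    (trivial co-occurrences, such as those involving function words)."""
    pair_counts = Counter()
    term_counts = Counter()
    for doc in documents:
        tokens = doc.lower().split()
        term_counts.update(tokens)
        for i, term in enumerate(tokens):
            # count co-occurrences with the next `window` tokens
            for other in tokens[i + 1 : i + 1 + window]:
                if term != other:
                    pair_counts[tuple(sorted((term, other)))] += 1
    total_terms = sum(term_counts.values())
    total_pairs = sum(pair_counts.values())
    scores = {}
    for (a, b), n in pair_counts.items():
        p_ab = n / total_pairs
        p_a = term_counts[a] / total_terms
        p_b = term_counts[b] / total_terms
        pmi = math.log(p_ab / (p_a * p_b))
        if pmi >= pmi_threshold:
            scores[(a, b)] = pmi
    return scores
```

Because a ubiquitous word like “the” co-occurs with everything, its PMI with any single term stays low and it falls below the threshold, while genuinely correlated pairs rise above it.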
When this ontological mapping is carried out only within specific segmented classes/fields of meaning, important correlations can suddenly emerge, which were not visible in most classes and fields.
Therefore, such ontological proximity mapping with weighted frequency of occurrence could be carried out in combination with a “website classification” (i-taxonomy).
Vice versa, the weighted-frequency-of-occurrence ontological proximity mapping exercise could yield classes and subclasses. Therefore, this process can be implemented iteratively: a meaningful mapping can create classes, which in turn can be mined to find new mappings and suggest new subclasses.
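A minimal sketch of this class-conditional step, using document-level co-occurrence as a crude stand-in for a full proximity map (the `min_share` cutoff and the helper names are hypothetical), could look like this:

```python
from collections import Counter
from itertools import combinations

def cooccurring_pairs(docs, min_share=0.5):
    """Pairs of terms that appear together in at least `min_share`
    of the documents in `docs`."""
    pair_docs = Counter()
    for doc in docs:
        terms = set(doc.lower().split())
        for pair in combinations(sorted(terms), 2):
            pair_docs[pair] += 1
    n = len(docs)
    return {p for p, c in pair_docs.items() if c / n >= min_share}

def emergent_pairs(docs_by_class, min_share=0.5):
    """Correlations visible inside one class of documents but not in
    the corpus as a whole: per-class pairs minus corpus-wide pairs."""
    all_docs = [d for ds in docs_by_class.values() for d in ds]
    global_pairs = cooccurring_pairs(all_docs, min_share)
    return {label: cooccurring_pairs(ds, min_share) - global_pairs
            for label, ds in docs_by_class.items()}
```

A pair that dominates one class but is diluted across the whole corpus only surfaces in the per-class pass, which is exactly the “suddenly emerging” correlation described above; the emergent pairs could then feed back into the next round of class creation.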
Another ontological mapping is to determine if certain links on the web have a correlation with certain terms.
The implementation must start from all the information present on the web on a fixed date. This information must be stored, frozen in some way, so that the extensive proximity-mapping data mining exercise can run over it. Once that given Memome is fully decoded, the process can be repeated iteratively on fresh snapshots and will eventually catch up with the “present” of that moment.
AI agents will carry out the ontological mapping process and learn from the patterns they recognize, making it easier to map future events and create more classes. In addition, the most frequently used links thus detected and/or generated can be added to the appropriate hubs in the “Hubbit” system, which I discussed in my previous article, “From Search Engines to Generators of Hubs and Centralized Personal Multipurpose Internet Interfaces”. Frequent liaisons will be favored and insignificant liaisons will not reach a permanent stage, according to the evangelical saying “To the one who has, more will be given; from the one who has not, it will be taken away”, which is also a good metaphor for the way neural links are established in our brain.
Undertaking such a huge project would require enormous amounts of memory and computational power and, at this point, may be beyond what is technically possible. This is the disadvantage. But the computing power and memory of computers have been increasing exponentially for many decades, and there is no reason to believe that the required technology will remain out of reach.
The commercial applications and advantages are numerous.
Chatbots and other linguistic systems can be improved by learning from these correlation maps. Search engines can be improved by ranking results according to frequency-weighted proximity mapping. At the bottom of a search results page, suggestions could appear in the form “people who searched for these terms also searched for…”.
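At its simplest, such a suggestion feature could be driven by co-occurrence of terms within search sessions. A minimal sketch, where the session-log format and the `also_searched` helper are assumptions for illustration:

```python
from collections import Counter

def also_searched(session_logs, query, top_n=3):
    """Suggest terms that co-occur with `query` in the same search
    sessions, ranked by how often they appear together.
    `session_logs` is assumed to be one list of query terms per
    user session."""
    co = Counter()
    for session in session_logs:
        if query in session:
            co.update(t for t in session if t != query)
    return [term for term, _ in co.most_common(top_n)]
```

A production system would of course weight by proximity and recency rather than raw counts, but the principle is the same.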
Commercial ontological mappings can be created in which terms are linked to all companies involved in the trade of products related to the term. Alitora Systems, for example, has mapped how certain genes linked to diseases are connected to the companies that develop drugs against those diseases through a mechanism involving the associated gene, protein or metabolic pathway.
Therefore, the Memome of commerce (the “Commercome”) could also be created as a searchable database: the complete set of all commercial relations, that is, products linked to sellers, buyers, manufacturers, etc. Commercomics would map these relationships ontologically. Once such an information network has been created, it will be a very useful and easy way to identify your competitors and newcomers in the field (as long as the system is kept up to date).
Advertising could greatly benefit from such correlation maps. In analogy with suggestions of the form “people who searched for these terms also searched for…”, ontology-mapping technology could be used in advertising, based on the same principle as commercial sites like Amazon.com (“people who bought A also bought B”), but taken a little further as an evolutionary, learning algorithm. For example, advertising costs could be linked to the frequency of clicks on the ad in question (pay-per-click advertising), while the display frequency of the ad is linked to it as well, again obeying the principle of “to the one who has, it will be given; from the one who does not have, it will be taken away”. Other text and data mining could map the frequency of ad clicks for specific search terms, coupled with a system that links the cost of advertising to click rate and/or view rate. The AIbot providing these functions would learn from context and adapt the display of information accordingly, generating classes and extracting more specific mappings from the resulting subclasses.
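The “to the one who has” display principle could be sketched as a simple multiplicative weighting scheme, where each click compounds an ad’s display share and each ignored impression erodes it. The boost and decay factors below are arbitrary illustrative values, not a tested pricing model:

```python
def update_weight(weight, clicked, boost=1.5, decay=0.95):
    """One impression's effect on an ad's display weight: a click
    multiplies the weight, an ignored impression shrinks it."""
    return weight * (boost if clicked else decay)

def display_shares(weights):
    """Each ad's share of future impressions, proportional to its
    accumulated weight."""
    total = sum(weights.values())
    return {ad: w / total for ad, w in weights.items()}
```

Running a short simulated history, an ad clicked on three of five impressions ends up with a much larger display share than one never clicked, so successful ads self-reinforce while insignificant ones fade, just as the saying describes.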
FAQ sheet queries could be assisted by such AIbots, preferably capable of conversing in natural language like a chatbot. From the questions, the answers and the user-satisfaction results, such bots could be programmed to learn and evolve into more efficient information providers.
Therefore, Memomics can be expanded to become a valuable engine for mining the datagems of an information-jeweled Babylon.