If we approach the world without an eye toward anything in particular, we end up making observation after observation, none necessarily incorrect but all of them questionable in their relevance. If we take the next step and pay attention to which patterns in the flow of reality are important and which aren’t, then we’ll be closer to a useful approach. But we soon run into a problem: a glut of information, a flood of disconnected pieces. A general AI with a memory capacity greater than that of every human who has ever lived combined might be in a position to meet such a challenge. But humans, with our limited memory faculty, have no chance. The human race’s strategy, then, lies in a particular kind of compression of the data in each individual’s head, and a division of labor in knowledge operating across the human population.
About a year ago, I wrote a post called The Invisible Shackles of Natural Language. I recommend reading it in its entirety, but the basic point was that humans are highly social animals, and that because of this it’s very difficult for most people to appreciate how primitive natural language is as a communication system. Feeling misunderstood is a painful emotion; upon discovering a region of thought impractical to communicate with current communication systems, most people turn right around and forget what they saw.
I was originally taken aback by the obstacles I encountered when trying to explain memetic analysis to several of my contacts. But of course, feeling surprised is an indication that you failed to model the situation with sufficient clarity. Further contemplation, then, led me to a theory of why memetic analysis is so difficult for most people.