Monday, February 19, 2007

Recent Semantic Memory Models

“Much of the knowledge that we store in memory is encoded using language” (Steyvers, Griffiths, & Dennis, 2006, p. 327). Three recent models proposed to explain human memory in terms of semantic structure are the Retrieving Effectively from Memory (REM) model, probabilistic topic models, and the Syntagmatic Paradigmatic (SP) model. All three treat memory as a problem of probabilistic inference, drawing on research in computer science, information retrieval, data analysis, and computational linguistics. Probabilistic inference starts from the premise that certainty is impossible, so decisions must be based on probabilities.

The REM model is applied primarily to recognition memory. In contrast to earlier models, it addresses how information learned from the environment is represented in memory, describing both the encoding process and the representation of information while continuing to stress the role of probabilistic inference. In REM, words are represented as vectors of feature values (capturing the place and influence of a word in a given semantic space). When a word is presented, it is compared against the traces stored in memory, and the model computes the odds that the word is old (previously stored) rather than new.
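The old/new decision can be illustrated with a toy sketch. This is a simplified stand-in for REM's actual Bayesian machinery: the feature vectors, the parameters `c` (probability a feature was correctly copied at study) and `g` (base rate of each feature value), and the example traces are all invented for illustration.

```python
def recognition_odds(probe, traces, c=0.7, g=0.4):
    # Simplified REM-style decision: compare the probe feature-by-feature
    # with each stored trace and accumulate a likelihood ratio for
    # "old" versus "new". Parameter values here are illustrative only.
    likelihoods = []
    for trace in traces:
        lr = 1.0
        for p, t in zip(probe, trace):
            if t == 0:        # feature was not stored: uninformative
                continue
            if p == t:        # match: more likely if the trace is of the probe
                lr *= (c + (1 - c) * g) / g
            else:             # mismatch: only arises from a copying error
                lr *= (1 - c)
        likelihoods.append(lr)
    # Average evidence over all traces; odds > 1 means respond "old".
    return sum(likelihoods) / len(likelihoods)

studied = [(1, 2, 1, 3), (2, 2, 3, 1)]
print(recognition_odds((1, 2, 1, 3), studied) > 1)  # studied probe -> True
print(recognition_odds((3, 1, 2, 2), studied) > 1)  # unstudied probe -> False
```

A studied probe matches one trace on many features, driving the odds above 1; an unstudied probe mismatches everywhere, driving them below 1.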

Probabilistic topic models treat prediction as a central problem of memory. Context is emphasized: these models are trained on text and represent the meanings of words in terms of topics. The idea is that each document is a mixture of topics, and each topic is represented by the probabilities of the words within it. Because meaning depends on a document's topic mixture, the same word can appear in two separate papers yet carry a different meaning in each. According to the topic models, when reading a paper or holding a conversation, a prediction can be made about the word that will appear next based on the preceding words. For example, if the topic is music, the likely words include sing, singing, song, sang, songs, and so on. Compared with earlier models, probabilistic topic models explain how context should influence memory, provided the representation of context is properly adjusted to the information in the setting.
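The document-as-mixture idea can be sketched in a few lines. The topic names, word lists, and all probabilities below are invented for illustration; real topic models learn these distributions from large text corpora.

```python
# Toy topic model: each topic is a probability distribution over words,
# and each document is a mixture of topics. All values are made up.
topics = {
    "music":   {"song": 0.4, "sing": 0.3, "band": 0.2, "play": 0.1},
    "finance": {"bank": 0.4, "loan": 0.3, "rate": 0.2, "play": 0.1},
}

def word_probability(word, doc_topic_mix):
    # P(word | document) = sum over topics of
    # P(word | topic) * P(topic | document)
    return sum(weight * topics[t].get(word, 0.0)
               for t, weight in doc_topic_mix.items())

music_doc = {"music": 0.9, "finance": 0.1}
finance_doc = {"music": 0.1, "finance": 0.9}

# The same word is far more expected in one context than the other,
# which is how topic mixtures let context drive prediction.
print(word_probability("song", music_doc))    # 0.36
print(word_probability("song", finance_doc))  # 0.04
```

Note that "play" appears in both topics with different neighbors, which is the sense in which one word can mean different things in different documents.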

The Syntagmatic Paradigmatic (SP) model is based on the idea that structural and propositional knowledge can be captured by syntagmatic and paradigmatic associations. Syntagmatic associations occur between words that follow each other, such as walk – briskly. Paradigmatic associations occur between words that fill the same slots across sentences, such as open – closed. The model uses sequential and relational memory to process sentences, and it relies on String Edit Theory (SET) to describe how one string can be transformed into another. For example, "What did Columbus discover?" could be changed to "What did Emerson discover?" by substituting a single word. The model accomplishes this without explicitly encoding grammar, semantic roles, or inferential knowledge.
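A word-level edit distance makes the Columbus/Emerson example concrete. This classic dynamic-programming computation is only a simplified stand-in for the sentence alignments SET describes, not the SP model's actual machinery.

```python
def word_edit_distance(a, b):
    # Minimum number of word insertions, deletions, and substitutions
    # needed to turn sentence a into sentence b.
    a, b = a.split(), b.split()
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i                      # delete all remaining words
    for j in range(len(b) + 1):
        dp[0][j] = j                      # insert all remaining words
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete a word
                           dp[i][j - 1] + 1,        # insert a word
                           dp[i - 1][j - 1] + cost) # keep or substitute
    return dp[-1][-1]

print(word_edit_distance("What did Columbus discover ?",
                         "What did Emerson discover ?"))  # 1
```

The distance of 1 reflects the single substitution (Columbus → Emerson); aligning the two sentences this way is also what puts the two names in the same slot, the basis of a paradigmatic association.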

The authors (Steyvers, Griffiths, & Dennis, 2006) have written an excellent review of rational models of memory and of how probabilistic inference shapes semantic memory. These recent models capture the latent semantic structure of language; their goal is to investigate the power of language in human memory retrieval. All of these models assume that memory should be addressed as a problem of probabilistic inference, and richer representations of the structure of linguistic stimuli will continue to develop as cognition is explored.
To explore semantic representations that encode common relationships between words, see WordNet.

Reference:

Steyvers, M., Griffiths, T. L., & Dennis, S. (2006). Probabilistic inference in human semantic memory. Trends in Cognitive Sciences, 10(7), 327–334.