The theories of semantic memory and working memory intertwine with the visual and auditory perceptual processes. When semantic memory is activated, the knowledge of the world as you have learned it comes into play. As this knowledge is activated, working memory also comes into play, because you are thinking about the topic at the present time; it has your attention. The object you are viewing or the sound you are listening to activates your perceptual processes, and whenever you actively think about a topic you are using working memory. Visual and auditory stimuli in turn activate working memory: when you see or hear something, you are going to think about what you have just seen or heard. The process of identifying what you have seen or heard involves semantic memory, so you perceive objects differently depending on what your semantic memory contains. Multi-modal brain processes are triggered that link the new information to information already stored in memory. New learning is created when the processed perceptual information links with episodic memory and the semantic system. Visual stimuli trigger the visuospatial component of working memory, while the phonological loop of working memory is triggered by auditory stimuli, which produce our auditory perception.
When an object is presented, different parts of the brain process the information. The right hemisphere processes specific visual form information about the object, distinguishing whether a repeated presentation is perceptually identical to or perceptually different from the original. The left hemisphere processes more abstract, semantic information about the object. After an object has been presented repeatedly, neurons in the fusiform and lateral occipital cortices, where visual perception and semantic processing take place, reduce their activity because the object has been seen before. The prefrontal cortex, which retrieves semantic information, also shows reduced activity (Simons et al., 2003).
Recognition is a process that occurs in thinking when something recurs, or happens again. The prefix re- means to do something again, and cognition is the process of knowing; recognition therefore means you know something because you have seen it or done it in the past. For something to be recognized, it must be familiar. When people process information meaningfully, they remember it better; this is known as deep processing.
Yonelinas (2001) studied the effect of semantic versus perceptual encoding on estimates of recollection and familiarity processes. Three measurement methods were used in the study: Jacoby's process dissociation procedure, Tulving's remember-know procedure, and Yonelinas's dual-process signal-detection model. It is not clear whether the three methods rely on the same memory retrieval processes.
Participants listened to a list of 80 words using both deep processing, in which they rated the pleasantness of each word, and shallow processing, in which they counted the syllables in each word. Each method was tested as a separate experiment with different participants. Recollection was measured as the ability to verify whether a word had been presented in an incidentally encoded heard list or in an intentionally encoded seen list. In the remember-know test, anything the participants remembered about the study episode could serve as recollection.
Deep processing resulted in greater recollection and familiarity of the words under all three methods. The results show that semantic, in contrast to perceptual, processing at the study stage led to an increase in both recollection and familiarity. Much research shows that both processes, recollection and familiarity, benefit from semantic processing (Yonelinas, 2001).
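The dual-process signal-detection model mentioned above is quantitative, so its core prediction can be sketched in a few lines of Python. In one common simplification, recollection is a threshold process with probability R (studied items only), and familiarity is an equal-variance signal-detection process with sensitivity d' and response criterion c. The parameter values below are made up purely to illustrate how deeper encoding (higher R and d') raises the predicted hit rate; they are not Yonelinas's estimates.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_old(recollection, d_prime, criterion, studied=True):
    """Predicted probability of an "old" response in a simplified
    dual-process model: a studied item can be recognized either by
    recollection (probability R) or, failing that, by familiarity
    exceeding the response criterion."""
    mean = d_prime / 2 if studied else -d_prime / 2
    familiarity = phi(mean - criterion)
    if studied:
        return recollection + (1 - recollection) * familiarity
    return familiarity  # lures can only be called "old" via familiarity

# Hypothetical parameters: deep (semantic) encoding raises both R and d'
hit_deep = p_old(recollection=0.5, d_prime=1.5, criterion=0.0)
hit_shallow = p_old(recollection=0.2, d_prime=0.8, criterion=0.0)
```

Under these made-up parameters the deep-encoding hit rate comes out higher than the shallow one, mirroring the pattern reported for pleasantness rating versus syllable counting.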
Sometimes you do not want to remember something. People who are distracted by unwanted memories try to refocus their attention to control what they remember. Overriding a memory requires the central executive control system (Levy & Anderson, 2002). If a memory is suppressed several times, it may cause memory failure: when the person later wants to recall the suppressed memory, retrieval may be impaired.
I am currently working in a special education classroom with kindergarten through grade three students. I use many assignments in the class that relate pictures to written words in order to build semantic memory; these tie in the visual processes. Auditory processes are stimulated when I say a word and the student finds the related picture. Attention is easier to keep in small groups, so I usually work with four to five kids at a time. These activities encourage the students to process the information meaningfully so that they will remember it better. Time schedules do not allow for extended activities; I normally have 20-30 minutes at a time. Before the children are allowed to leave my classroom, they are asked to tell me at least one thing they have learned that day. This is a memory strategy that helps them remember the information they see as important. More research needs to be done on practical classroom applications dealing with the complex processes of the brain and the activities that result in true learning.
References:
Levy, B.J., & Anderson, M.C. (2002). Inhibitory processes and the control of memory retrieval. Trends in Cognitive Sciences, 6(7). Retrieved January 27, 2007, from http://www.psych.nwu.edu/~ej/IntroCogSci/LevyAnderson2002.pdf
Simons, J.S., Koutstaal, W., Prince, S., Wagner, A.D., & Schacter, D.L. (2003). Neural mechanisms of visual object priming: Evidence for perceptual and semantic distinction in fusiform cortex. NeuroImage, 19. Retrieved January 27, 2007, from http://www.sciencedirect.com/
Yonelinas, A.P. (2001). Consciousness, control, and confidence: The 3 Cs of recognition memory. Journal of Experimental Psychology, 130(3). Retrieved January 27, 2007, from http://psychology.ucdavis.edu/labs/Yonelinas/pdf/29_01CCC.pdf
Sunday, February 25, 2007
Brain Regions for Semantic Memory
Figure 2. Brain regions showing differences between successful RM activity during encoding (ESA) and during retrieval (RSA). The bar graphs display differences in the effect size of activations for remembered versus forgotten items during encoding (i.e., subsequently remembered vs forgotten) and during retrieval (i.e., hits vs misses). Diff, Difference; Dorsolat, dorsolateral; ENC, encoding; Forg, forgotten; P, perceptual; Post. Parahipp./Hipp., posterior parahippocampal cortex/hippocampus; Rem, remembered; RET, retrieval; S, semantic; Ventrolat, ventrolateral. Enclosed box displays the MTL from a sagittal slice at x = -27.
Brain regions that showed ESA/RSA overlaps that differed between semantic (S) and perceptual (P) RM conditions (teal, semantic; purple, perceptual). The left ventrolateral (L Ventrolat) PFC region (BA 45; x, y, z: -46, 26, 2) was slightly more posterior/dorsal than the left ventrolateral PFC region that showed greater ESA than RSA for both semantic and perceptual RM (red area, BA 47; x, y, z: -49, 33, -8). B Parietal, Bilateral parietal cortex; Diff, difference; ENC, encoding; Forg, forgotten; L Occiptemp, left occipitotemporal cortex; R Post Parahipp G, right posterior parahippocampal cortex/hippocampus; Rem, remembered; RET, retrieval. (Prince et al., 2005)

Reference:
Prince, S.E., Daselaar, S.M., & Cabeza, R. (2005). Neural correlates of relational memory: successful encoding and retrieval of semantic and perceptual associations. The Journal of Neuroscience. 25(5), 1203-1210.
Brain Regions Associated with Semantic and Perceptual Processes
When people have a vivid memory of an exact statement and the topic of conversation, they are making semantic associations within the memory; if they also remember the speaker's voice, they are using perceptual associations. These memories are called relational memories, and they depend on the medial temporal lobes and the prefrontal cortex.
This study used functional magnetic resonance imaging (fMRI) to explore relational memory in the brain. Prince and his coauthors (2005) wanted to answer three questions. First, are relational memory encoding and retrieval activations different in the medial temporal lobes and the prefrontal cortex? Second, does relational memory involve reactivation, during retrieval, of process-specific encoding regions? Last, is there a brain region critical to successful relational memory regardless of memory phase (encoding vs. retrieval) and stimulus content (semantic vs. perceptual)?
Prince also wanted evidence bearing on the transfer-appropriate processing (TAP) principle proposed by Morris. The TAP principle holds that there is an overlap between the cognitive processes engaged at encoding and at retrieval: how well information is retrieved depends on how closely the processing at retrieval matches the processing performed when the information was first encoded.
The participants in the study were scanned under both semantic and perceptual conditions. The semantic condition tested memory for associations between words as they were encoded and retrieved; for example, a pair of words was displayed in plain font. The perceptual condition tested associations between the words and their fonts, with a different font used for each pair.
The results of the fMRI study answered the questions above. First, differences were found between encoding and retrieval activations in both the medial temporal lobes and the prefrontal cortex. Second, several of the same regions were activated during both encoding and retrieval in a content-specific manner. Last, there was one brain region critical to successful relational memory during encoding and retrieval in both semantic and perceptual processing: the left hippocampus. This was the first study to show that a common hippocampal region is activated during encoding and retrieval, and during both semantic and perceptual relational memory.
One brain region where semantic memory is located is the hippocampus, which lies inside the temporal lobe. The hippocampus encodes memories, or makes it possible for memories to form at all, while the temporal lobe stores memories after the initial encoding process is completed. There is a pathway from the temporal-lobe structures where semantic memory is located to the hippocampus (Graham et al., 2000).
Brain regions associated with Semantic Memory - Pictures are posted on blog with this title.
Consistent with the TAP principle, reactivation during retrieval used the same brain regions that had been used in the encoding phase. The study showed overlaps in many regions of the brain during encoding and retrieval that differed between the semantic and perceptual relational memory conditions. Semantic processing and episodic encoding both involved the ventrolateral prefrontal cortex, although they activated different subregions of that area. For perceptual relational memory, encoding and retrieval activated the left occipitotemporal, bilateral parietal, and right parahippocampal regions (Prince et al., 2005).
References:
Prince, S.E., Daselaar, S.M., & Cabeza, R. (2005). Neural correlates of relational memory: Successful encoding and retrieval of semantic and perceptual associations. The Journal of Neuroscience, 25(5), 1203-1210.
Graham, K.S., Simons, J.S., Pratt, K.H., Patterson, K., & Hodges, J.R. (2000). Insights from semantic dementia on the relationship between episodic and semantic memory. Neuropsychologia, 38, 313-324.
Auditory sentence processing
When a person listens to someone talking, the brain works to process all the information. Phonological information (phonemes, the basic units of human language), syntactic (grammar) information, and semantic (prior knowledge) information are all processed within less than a second. There are different views on how auditory language is processed. One view proposes that syntax is processed separately, before semantic information; this is called the serial or syntax-first model. Another view, the constraint-satisfaction model, states that all types of information act together at every stage of language comprehension. Friederici (2002) suggests a neurocognitive model of sentence comprehension: first the grammatical structure is built, then the meanings of the individual words are retrieved and combined with the goal of thematic role assignment, and finally the different types of information are integrated.
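To make the serial view concrete, here is a toy sketch of a syntax-first pipeline in Python. The three phases loosely follow the order in Friederici's model, but the data structures, the tiny lexicon, and the role-assignment rule (the noun before the verb is the agent) are all made up for illustration.

```python
def syntax_first(words, lexicon):
    """Toy syntax-first comprehension pipeline (illustrative only).
    `lexicon` maps each word to a (category, meaning) pair."""
    # Phase 1: build an initial structure from word categories alone
    categories = [lexicon[w][0] for w in words]
    # Phase 2: retrieve word meanings and assign thematic roles
    roles = {}
    for word, cat in zip(words, categories):
        if cat == "V":
            roles["action"] = lexicon[word][1]
        elif "action" not in roles:
            roles["agent"] = lexicon[word][1]    # noun before the verb
        else:
            roles["patient"] = lexicon[word][1]  # noun after the verb
    # Phase 3: integrate the syntactic and semantic information
    return {"categories": categories, "roles": roles}

lexicon = {"dogs": ("N", "dog"), "chase": ("V", "pursue"), "cats": ("N", "cat")}
result = syntax_first(["dogs", "chase", "cats"], lexicon)
```

A constraint-satisfaction model would instead let the meanings of "dogs" and "cats" influence the structure being built in phase 1, rather than waiting for phase 3.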
Most studies of semantic processes use single words rather than sentence processing. Past studies concluded that semantic processes are controlled by the left temporal region, with the frontal cortex activated when strategic or memory demands are added. Several regions are activated during sentence processing: the left inferior frontal gyrus, the right superior temporal gyrus, the left middle temporal gyrus, and the left posterior temporal lobe. The left inferior frontal gyrus is also called Broca's area when it is connected to language comprehension. This area is involved in language production, processing musical sequences, the perception of the rhythm of motion, and the imagery of motion.
Processing a sentence requires deciding whether it makes sense, which draws on memory resources. The left hemisphere predominantly processes syntactic and semantic information, while frontal regions support forming relationships between syntax and semantics. All of the areas above must be engaged to comprehend an auditory sentence.
Reference:
Friederici, A.D. (2002, February). Towards a neural basis of auditory sentence processing. Trends in Cognitive Sciences, 6(2). Retrieved February 11, 2007, from http://psy.ucsd.edu/~dswinney/Psy253_pdfs/friederici%202002%20TICS.pdf
Auditory Working Memory Uses Semantics
Scientists want to know what happens in the brain when a person experiences imagery, the mental re-enactment of an object or event after it is gone. Some think imagery and perception share common neural structures: studies of the brain regions activated during imagery implicate areas associated with sensory processing and with processing or retrieving information. Auditory imagery behaves much like visual imagery when manipulated in the brain; people can hear sounds in their heads. The imagery task in this study involved retrieval from musical semantic memory when a tune was presented. Working memory is also used to rehearse the initial pitch and retrieve the following pitch while comparing the two (Halpern & Zatorre, 1999).
The first goal of the study was to confirm activation of the auditory association areas in the superior temporal gyrus during silent auditory tasks. Second, the authors predicted that removing the lyrics from the music would produce asymmetric activation in the right temporal and frontal lobes. Third, if working memory is engaged by music, does it activate the dorsolateral frontal lobe? Finally, if a person had to imagine a recently played tune, would there be any asymmetry in the dorsolateral frontal cortex?
PET scans were used on subjects with years of musical training to measure brain activity in these four areas. The music consisted of tunes familiar to most people, such as classic tunes from Broadway productions, commercials, television shows, and popular songs. All the participants indicated they were able to hear the music in their heads.
The study confirmed that musical imagery activated the right auditory association cortex along with the supplementary motor area. When the musical imagery involved retrieval from musical semantic memory, the right inferior frontal region and the middle frontal areas, together with right auditory association areas in the superior temporal gyrus, were all activated. The left frontal area and the supplementary motor area are activated when the imagery does not involve semantic retrieval. This leads to the conclusion that auditory association areas do process imagined music that is familiar to people, and that activation in auditory cortex during an imagined tune is related to auditory stimulation. Only associative cortical regions were activated in the imagery tasks, which agrees with the first question in the study. Auditory cortical activation in the right hemisphere, seen in both the perceptual and imagined music tasks, reflects that hemisphere's role in processing tonal patterns; the right hemisphere extends its perceptual analysis to include complex tonal imagery processes. A person uses semantic memory when retrieving and imagining a familiar tune.
Another region activated by music retrieval was the right thalamus. The neural circuitry engaged during semantic memory retrieval may depend on what is being retrieved; mostly, the right hemisphere is involved when retrieving familiar tunes. The right hemisphere has already been shown to process the perception and discrimination of musical information. Episodic memory could have been activated during the tasks, but the tasks should have relied mostly on semantic memory, because only a few notes were played and the rest had to be retrieved from long-term memory. When generating an auditory image, the supplementary motor area is significant because it is correlated with auditory and motor memory systems.
The authors had previously thought the imagery tasks would engage auditory working memory in the left frontal area. This study instead showed that semantic musical memory is controlled by the right frontal lobe. The dorsolateral regions in the frontal lobe work with working memory to process verbal and figural information. The authors found more activity in the right frontal areas; the areas activated on the left are likely a result of the participants using their working memory.
Reference:
Halpern, A.R., & Zatorre, R.J. (1999). When that tune runs through your head: A PET investigation of auditory imagery for familiar melodies. Cerebral Cortex, 9(7), 697-704.
Saturday, February 24, 2007
Semantic Dementia
Semantic dementia results from degeneration of the lateral temporal lobes, causing the person slowly to lose previously acquired knowledge about the world. A semantic dementia patient may not recognize a teapot or be able to explain its function. The patient has word-finding difficulties and has lost the meaning of certain words ("What is a teapot?"). People afflicted with semantic dementia exhibit reduced skills when using semantic memory (Graham et al., 2000). They have difficulty naming items, putting items into a category, matching a picture to the correct word, saying the correct color for a common item (yellow for a banana), categorizing items such as plants and animals, and knowing the names of other people, or even their own. A person with the onset of semantic dementia will normally speak fluently and articulate well, but use only a small number of content words. Research into semantic dementia indicates that recent memories are recovered more easily than remote ones, leading to the belief that new learning is possible during the beginning stages of the disorder.
Our semantic memory is what we use to define objects, meanings of words, facts, concepts, and people. Related to semantic memory is episodic memory, which is the retrieval of personal experiences from the past. A deficit in semantic memory is normally referred to as semantic dementia, while a person with a deficit in episodic memory is referred to as having amnesia.
Tulving (Graham et al., 2000) originally proposed that episodic and semantic memory were separate systems within long-term memory, based on how the brain acquires, processes, and stores information. After tests on patients with amnesia cast doubt on a simple separation between episodic and semantic memory, Tulving revised his model. The current theory, called Serial encoding, Parallel storage, and Independent retrieval (SPI), holds that episodic memory is reliant on semantic knowledge and is a subsystem of semantic memory. In Tulving's SPI model there are four key categories of cognitive memory organization: the perceptual representation, semantic, working, and episodic memory systems. He includes three fundamental building blocks in the model. First, information is encoded into the systems serially, with encoding in one system dependent upon the output of the prior one. Second, information can be stored in different systems at the same time. Third, information in different systems can be retrieved independently, without any effect on retrieval of information from other systems. This analysis clarifies how a person with amnesia can recover semantic information that was learned previously. Tulving said, "…a double dissociation between semantic and episodic memory is not possible and only single dissociations (impaired episodic memory and preserved semantic memory) can occur" (p. 314). According to the SPI model, information in episodic memory cannot be formulated without information from semantic memory, and semantic memory must have input from the perceptual systems.
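The three SPI principles can be illustrated with a toy data structure. Everything here (the class, the method names, the dictionary representation) is hypothetical scaffolding, not anything from Tulving or Graham et al.; the point is only how serial encoding, parallel storage, and independent retrieval fit together.

```python
class SPIMemory:
    """Toy model of Tulving's SPI principles (illustrative only)."""

    def __init__(self):
        self.semantic = {}  # item -> meaning (one parallel store)
        self.episodic = {}  # context -> event (another parallel store)

    def encode(self, item, meaning, context=None):
        # Serial encoding: semantic encoding happens first...
        self.semantic[item] = meaning
        # ...and episodic encoding depends on the semantic output.
        # Parallel storage: the event now lives in both systems.
        if context is not None:
            self.episodic[context] = (item, self.semantic[item])

    def recall_fact(self, item):
        # Independent retrieval: facts come back with no episodic lookup,
        # which is how an amnesic patient can keep old semantic knowledge.
        return self.semantic.get(item)

    def recall_event(self, context):
        # Independent retrieval from the episodic store.
        return self.episodic.get(context)

memory = SPIMemory()
memory.encode("teapot", "vessel for brewing tea", context="Tuesday lesson")
```

In this sketch, an "amnesia" that wiped `memory.episodic` would leave `recall_fact` intact, while a degraded `memory.semantic` would also poison new episodic entries, since episodic encoding is built on semantic output — the dependency the SPI model asserts.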
Graham and his coauthors disagree with the implication of Tulving's theory that people with semantic dementia cannot learn new information, and they completed two experiments to test it. The authors' hypothesis states that new learning is usually a blend of sensory/perceptual information experienced during the learning episode and semantic information about the substance of the event. They predicted that patients with semantic dementia would demonstrate intact recognition memory for perceptually identical (PI) items but a reduced ability to choose a perceptually different (PD) item.
In the first experiment on semantic and episodic memory there were eight patients with semantic dementia, eight patients with Alzheimer's disease, and eighteen people in a control group. In the recognition memory trial, a PI item was the same colored picture shown twice, while a PD item was the same item shown previously but from a different angle or side view. Initially the patients were asked to name the items in the pictures; later they were asked to select the pictures they had seen previously, presented as either PI or PD. The results showed that the patients with semantic dementia performed better on PI items than on PD items, comparable to the control group, and there were large gaps between the three groups on items shown PD. To evaluate their hypothesis about the impact of semantic knowledge on new learning, the authors examined performance on the picture naming task for the PD items in the episodic assessment. The results showed that patients with semantic dementia had impaired semantic knowledge while episodic memory was maintained, opposing Tulving's SPI model. Graham and his coauthors' interpretation is that the manipulation in the PD form had two effects: first, when the item was presented again in PD form after the initial naming task, the perceptual information previously learned was not helpful; second, the episodic decision had to rely on abstract information about the original image.
In the second experiment, one person with semantic dementia who had been tested in the first experiment was reevaluated on recognition memory. The patient was asked whether items shown in the first experiment were known or unknown. The results showed that the PI items were all judged known, but the PD items were judged unknown.
After reviewing the two experiments, the authors showed that in patients with semantic dementia tested with PI items, semantic memory is damaged while episodic memory is not. Graham and his coauthors thereby provided evidence against Tulving's hypothesis: a person with semantic dementia can learn new information. New learning was not impaired in patients with semantic dementia unless the test items were perceptually different from those they had originally studied.
New episodic learning relies on perceptual information and conceptual knowledge in normal brain functions. Perceptual areas in the brain help to identify items that are not familiar. The perceptual information processed goes straight to the episodic memory and the semantic system, to create new learning. When a person has semantic dementia to learn new episodic information the items have to be PI. When the items are PD the patient can not use their perceptual information, so they have to use their semantic knowledge which is impaired and they are not able to identify the object.
Participation from different neocortical (the roof of the cerebral cortex that forms the part of the mammalian brain that has evolved most recently and makes possible higher brain functions such as learning) (Encarta Dictionary, online) areas in the brain, which promote perceptual examination and semantic memory, work in harmony to support new learning. Neuroscientists believe that the hippocampus and the infero-lateral temporal lobes are damaged in patients who have semantic dementia. There must be a link from the visual structure contained by the perceptual system that is devoted to the breakdown of auditory and tactile stimuli.
People with impaired semantic memory maintained performance on tests of recognition memory if the stimulus was perceptually identical between learning and testing. Using the same perceptual stimulus is noted to dramatically increase semantic memory if introduced in the episodic task. The authors proved that sensory or perceptual knowledge and semantic memory work together to assist new learning. This type of semantic knowledge is incorporated in my classroom daily. Children work on tasks such as picture word associations and assembling puzzles with items named. I agree that new learning is improved by using sensory/perceptual processes with semantic memory.
Reference:
Graham, K.S., Simons, J.S., Pratt, K.H., Patterson, K., & Hodges, J.R. (2000). Insights from semantic dementia on the relationship between episodic and semantic memory. Neuropsychologia. 38, 313-324.
Our semantic memory is what we use to define objects, meanings of words, facts, concepts, and people. Related to semantic memory is episodic memory, the retrieval of personal experiences from the past. A deficit in semantic memory is normally referred to as semantic dementia, while a deficit in episodic memory is referred to as amnesia.
Tulving (Graham et al., 2000) originally proposed that episodic and semantic memory were separate systems within long-term memory, based on how the brain acquires, processes, and stores information. After tests completed on patients with amnesia complicated a simple separation between episodic and semantic memory, Tulving revised his model. The current theory, called Serial encoding, Parallel storage, and Independent retrieval (SPI), proposes that episodic memory is reliant on semantic knowledge and is a subsystem of semantic memory. In Tulving's SPI model there are four key categories of cognitive memory organization: perceptual representation, semantic, working, and episodic memory. He includes three fundamental principles in the model. First, information is encoded into systems serially, with encoding in one system dependent upon the output of the prior stage. Second, information can be stored in different systems at the same time. Third, information in different systems can be retrieved independently, without any effect on retrieval of information from other systems. This account explains how a person with amnesia can recover semantic information that was learned previously. Tulving said, "…a double dissociation between semantic and episodic memory is not possible and only single dissociations (impaired episodic memory and preserved semantic memory) can occur" (p. 314). According to the SPI model, information in episodic memory cannot be formed without information from semantic memory, and semantic memory must have input from the perceptual systems.
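To make the three SPI principles concrete, here is a minimal sketch in Python. The class and method names are my own invention for illustration, not part of Tulving's model:

```python
# Toy model of Tulving's SPI principles (all names invented for illustration).
class SPIMemory:
    def __init__(self):
        # Parallel storage: each system keeps its own trace of an item.
        self.perceptual = {}
        self.semantic = {}
        self.episodic = {}

    def encode(self, item, percept, meaning=None, episode=None):
        # Serial encoding: each stage depends on output from the prior stage.
        self.perceptual[item] = percept
        if meaning is not None:
            self.semantic[item] = meaning          # needs a perceptual trace
            if episode is not None:
                self.episodic[item] = episode      # needs a semantic trace
        elif episode is not None:
            raise ValueError("episodic encoding requires semantic encoding first")

    def retrieve(self, system, item):
        # Independent retrieval: each store is queried on its own.
        return getattr(self, system).get(item)

memory = SPIMemory()
memory.encode("dog", "furry shape", meaning="animal", episode="saw one at the park")
memory.episodic.clear()  # simulate amnesia: episodic traces lost
print(memory.retrieve("semantic", "dog"))  # semantic retrieval still succeeds
```

Clearing the episodic store while semantic retrieval still works mirrors the single dissociation (impaired episodic, preserved semantic) that the SPI model allows.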
Graham and his coauthors disagree with Tulving's claim that people with semantic dementia cannot learn new information, and they completed two experiments to test it. Their hypothesis states that new learning is usually a blend of sensory/perceptual information experienced during the learning episode and semantic information about the substance of the event. They predicted that patients with semantic dementia would show preserved recognition memory for items presented in perceptually identical (PI) form, but a reduced ability to recognize an item presented in perceptually different (PD) form.
In the first experiment on semantic and episodic memory there were eight patients with semantic dementia, eight patients with Alzheimer's disease, and eighteen people in the control group. In the recognition memory trial, a PI item was shown by using the same colored picture twice; a PD item was the same object as shown previously, but from a different angle or side view. Initially the patients were asked to name the items in the pictures. Later they were asked to select the pictures they had seen previously, presented in either PI or PD form. The results showed the patients with semantic dementia performed better on PI items than on PD items, and comparably to the control group on PI items. There were large gaps between the three groups on items shown PD. To evaluate their hypothesis about the impact of semantic knowledge on new learning, the authors examined performance on the picture naming task alongside performance on PD items in the episodic assessment. The results showed patients with semantic dementia had impaired semantic knowledge, but episodic memory was maintained. These results oppose Tulving's SPI model. Graham and his coauthors' interpretation of the outcome is that the PD manipulation had two effects. First, when an item was presented in the initial naming task and then again in PD form, the perceptual information previously learned was not helpful. Second, the episodic decision relied on abstract information about the original image.
In the second experiment, one person with semantic dementia previously tested in experiment one was reevaluated on recognition memory. The patient was asked whether items shown in the first experiment were known or unknown. The results showed that the PI items were all known, but the PD items were unknown.
Reviewing the two experiments, the authors showed that in patients with semantic dementia tested on PI items, semantic memory is damaged while episodic memory is not. Graham and his coauthors were thus able to argue against Tulving's hypothesis: a person with semantic dementia can learn new information. New learning was not impaired in patients with semantic dementia unless they were tested in a perceptually different form than the one in which they had learned.
In normal brain function, new episodic learning relies on both perceptual information and conceptual knowledge. Perceptual areas in the brain help to identify items that are not familiar. The perceptual information processed goes directly to episodic memory and the semantic system to create new learning. For a person with semantic dementia to learn new episodic information, the items have to be PI. When the items are PD, the patient cannot use their perceptual information; they have to fall back on their semantic knowledge, which is impaired, so they are not able to identify the object.
Different neocortical areas of the brain (the neocortex is the most recently evolved part of the cerebral cortex and makes possible higher functions such as learning; Encarta Dictionary, online), which support perceptual analysis and semantic memory, work in harmony to support new learning. Neuroscientists believe that the hippocampus and the infero-lateral temporal lobes are damaged in patients who have semantic dementia. There must also be links between the visual component of the perceptual system and the components devoted to the analysis of auditory and tactile stimuli.
People with impaired semantic memory maintained performance on tests of recognition memory if the stimulus was perceptually identical between learning and testing. Reusing the same perceptual stimulus in the episodic task was noted to dramatically improve performance. The authors showed that sensory/perceptual knowledge and semantic memory work together to assist new learning. This kind of semantic knowledge is incorporated in my classroom daily: children work on tasks such as picture-word associations and assembling puzzles with items named. I agree that new learning is improved by combining sensory/perceptual processes with semantic memory.
Reference:
Graham, K.S., Simons, J.S., Pratt, K.H., Patterson, K., & Hodges, J.R. (2000). Insights from semantic dementia on the relationship between episodic and semantic memory. Neuropsychologia, 38, 313-324.
Brain Regions Activated by Episodic, Working, and Semantic Memory Tasks
Functional brain imaging with positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) has shown that selected regions of the prefrontal cortex (PFC) are activated by many diverse, complex cognitive operations (Nyberg, 2003). Semantic memory, episodic memory, verbal and spatial working memory, conceptual priming, written word recognition, smell perception, and sustained attention are all cognitive functions that elicit activation in the prefrontal cortex. Different regions of the PFC are activated depending on the memory task; frontopolar activation is associated with semantic memory, working memory, and episodic memory tasks.
Nyberg conducted two PET experiments on episodic, working, and semantic memory to look for a similarity in particular activations connected with different memory systems.
He tested 29 healthy males with negative medical histories and similar university educations. PET scan images were compared to determine brain region activations after assigned tasks were completed.
Four regions of the prefrontal cortex (P < 0.05) were found to be related to all three areas of memory: the left frontopolar cortex, left mid-ventrolateral PFC, left mid-dorsolateral PFC, and dorsal anterior cingulate cortex. The central executive of working memory is thought to engage these four regions in accomplishing different memory tasks. Active encoding and retrieval of information involved the mid-ventrolateral PFC, while attending to information, which all of the memory tasks required, depended heavily on the dorsolateral PFC.
This study showed that although different memory tasks elicited similar brain activity, distinct areas were also activated for each task. The human brain has multiple processing components that are activated depending on the task, but the findings show that the memory systems are related (Nyberg, 2003). The study is important because complex brain processes common to many different memory tests have not been well studied.
As a teacher, this study causes me to reflect on how complex the brain and memory systems are, how little we understand, and how much we need to learn.
Reference:
Nyberg, L., Marklund, P., Persson, J., Cabeza, R., Forkstam, C., Petersson, K.M., & Ingvar, M. (2003). Common prefrontal activations during working memory, episodic memory, and semantic memory. Neuropsychologia, 41, 371-377.
Monday, February 19, 2007
Recent Semantic Memory Models
“Much of the knowledge that we store in memory is encoded using language” (Steyvers, 2006, p. 327). Three new models have been proposed to explain human memory using semantic properties: the Retrieving Effectively from Memory (REM) model, probabilistic topic models, and the Syntagmatic Paradigmatic (SP) model. All three emphasize the role of probabilistic inference in memory, drawing on research in computer science, information retrieval, statistics, and computational linguistics. Probabilistic inference starts from the idea that certainty is impossible, so decisions must be based on probabilities.
The REM model is primarily applied to recognition memory. In contrast to previous models, the REM model considers how learning from the environment is represented in memory. It addresses the encoding process and the representation of information while continuing to stress the role of probabilistic inference in explaining human memory. In the REM model, words are represented by vectors of feature values (the place and weight of the word in a given semantic space). Each probe word is compared against the traces already stored in memory, and that comparison drives the decision about whether the word is old or new.
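As a rough illustration of that old/new decision, here is a much-simplified sketch assuming binary feature vectors and an invented match criterion; the actual REM model computes a Bayesian likelihood ratio rather than a simple match proportion:

```python
# Crude sketch of feature-vector recognition (criterion and features invented;
# not the actual REM computation, which uses Bayesian likelihood ratios).
def match_proportion(probe, trace):
    # Fraction of feature positions where probe and stored trace agree.
    return sum(p == t for p, t in zip(probe, trace)) / len(probe)

def recognize(probe, memory, criterion=0.75):
    # Compare the probe with every stored trace; answer "old" if any
    # trace matches closely enough, otherwise "new".
    best = max((match_proportion(probe, t) for t in memory), default=0.0)
    return "old" if best >= criterion else "new"

stored_traces = [(1, 0, 1, 1), (0, 0, 1, 0)]
print(recognize((1, 0, 1, 1), stored_traces))  # exact match: "old"
print(recognize((0, 1, 0, 1), stored_traces))  # poor match: "new"
```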
The probabilistic topic models treat prediction as an essential problem of memory. Context is emphasized; these models rely on text and on descriptions of words, organizing words around topics. The idea is that a document is a mixture of topics, and each topic is represented by the probabilities of the words within it. On this view, the same word can appear in two separate papers yet have a different meaning within each topic. According to the topic models, when reading a paper or holding a conversation, a prediction can be made for the word that will emerge next based on the prior words. For example, if the topic is music, likely words would include sing, singing, song, sang, songs, and so on. Compared with previous models, probabilistic topic models provide a rationale for how context should influence memory, as long as the context is properly adjusted by information from the setting.
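A toy version of this prediction idea, with topics and probabilities I invented for illustration. Note how the ambiguous word "note" gets its probability from whichever topic dominates the document:

```python
# Toy topic model: a document is a mixture of topics, and the probability of
# the next word is a mixture of per-topic word distributions (values invented).
topics = {
    "music":   {"song": 0.5, "sing": 0.4, "note": 0.1},
    "finance": {"bank": 0.4, "loan": 0.4, "note": 0.2},
}

def word_probability(word, doc_topic_mix):
    # P(word | document) = sum over topics of P(word | topic) * P(topic | doc).
    return sum(weight * topics[t].get(word, 0.0)
               for t, weight in doc_topic_mix.items())

music_doc = {"music": 0.9, "finance": 0.1}
print(word_probability("song", music_doc))  # high: fits the dominant topic
print(word_probability("bank", music_doc))  # low: wrong topic for this document
```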
The Syntagmatic Paradigmatic (SP) model is based on the idea that structural and propositional knowledge can be captured by syntagmatic and paradigmatic associations. Syntagmatic associations occur between words that follow each other, such as walk – briskly. Paradigmatic associations occur between words that fit in the same slots across sentences, such as open – closed. The model uses sequential and relational memory to process sentences, and it uses String Edit Theory (SET) to describe how one string can be changed into another. For example, "What did Columbus discover?" could be changed to "What did Emerson discover?" by changing only one word in the sentence. The model accomplishes this without specifying grammar, semantic roles, or inferential knowledge.
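The string-edit idea can be illustrated with a standard word-level edit distance; this is my own minimal sketch of SET-style string comparison, not the SP model itself:

```python
# Word-level edit distance: minimum number of word substitutions, insertions,
# and deletions needed to turn one sentence into another (dynamic programming).
def word_edit_distance(s1, s2):
    a, b = s1.split(), s2.split()
    prev = list(range(len(b) + 1))  # distances from the empty prefix of a
    for i, wa in enumerate(a, 1):
        curr = [i]
        for j, wb in enumerate(b, 1):
            cost = 0 if wa == wb else 1
            curr.append(min(prev[j] + 1,          # delete wa
                            curr[j - 1] + 1,      # insert wb
                            prev[j - 1] + cost))  # substitute wa -> wb
        prev = curr
    return prev[-1]

# The example from the text: one substitution turns one question into the other.
print(word_edit_distance("What did Columbus discover?", "What did Emerson discover?"))
```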
The authors (Steyvers, 2006) have written an excellent review of rational models of memory and of how probabilistic inference influences semantic memory. These recent models capture the latent semantic structure of language, and their goal is to investigate the power of language within human memory retrieval. All of these models assume memory should be addressed as a problem of probabilistic inference, and richer representations of the structure of linguistic stimuli will continue to emerge as cognition is explored.
To explore semantic representations that capture common relationships between words, click on WordNet.
Reference:
Steyvers, M., Griffiths, T.L., & Dennis, S. (2006). Probabilistic inference in human semantic memory. Trends in Cognitive Sciences, 10(7), 327-334.
Sunday, February 18, 2007
Semantic Memory
Semantic memory consists of your general knowledge about the world. It is something you know, even though you may not remember where or when you learned it. Language and conceptual knowledge are parts of semantic memory. Semantic memory influences most of our cognitive activities, such as determining locations, reading sentences, solving problems, and making decisions (Graham et al., 2000).
Categories and concepts are essential components of semantic memory. People divide the world into categories to make sense of their knowledge. A category is a class of objects that belong together; a concept is our mental representation of a category. A category can include many objects: the category of miniature dogs, for example, contains many breeds, and each can be called a dog. People organize objects into categories, like files, using their knowledge of the world. We have a concept of what a dog is. Dogs may be different colors, shapes, and sizes, yet we still know that they are dogs. People hold the concept that a dog is an animal, and they then deduce that if you have a dog you have to feed, water, love, and exercise your animal. These deductions, or inferences, greatly expand our knowledge. People combine similar objects into a single concept, so if you are thinking about getting a family pet you could choose any breed you want: German Shepherd, Maltese, Doberman, or Husky. Your mental concept of a dog can be any or all of these breeds.
Theories of semantic memory include the feature comparison model, the prototype approach, and the exemplar approach. Network models of semantic memory include the Collins and Loftus network model, Anderson's ACT theory, and the Parallel Distributed Processing (PDP) approach. For further information please see this link: Outline of Semantic Memory Models
In the feature comparison model, a concept is defined as a set, or list, of features. The word "bird" has features such as has wings, flies, lays eggs, lives in trees, builds a nest. Sentence verification is also part of this model: people compare subject and predicate for feature similarity, as in "A coat is clothing," which is verified as true because the features of a coat overlap with those of clothing.
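A sketch of the verification idea, with feature lists I invented for illustration (real feature comparison theory also has a second, slower stage that checks defining features, which this toy version omits):

```python
# Toy feature comparison: verify "An X is a Y" by feature overlap
# (features and threshold invented for illustration).
features = {
    "robin":    {"has wings", "flies", "lays eggs", "builds a nest", "is small"},
    "bird":     {"has wings", "flies", "lays eggs", "builds a nest"},
    "coat":     {"is worn", "has sleeves", "keeps you warm"},
    "clothing": {"is worn"},
}

def verify(subject, predicate, threshold=0.9):
    # High overlap with the predicate's features yields a fast "true".
    shared = features[subject] & features[predicate]
    return len(shared) / len(features[predicate]) >= threshold

print(verify("coat", "clothing"))  # "A coat is clothing" is verified
print(verify("coat", "bird"))      # "A coat is a bird" is rejected
```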
With the prototype approach, a person decides whether an item belongs to a category by comparing it with a prototype (the item most typical of the category). For example, when people think of transportation they think of the most common forms such as a car, truck, plane, train, or bicycle. Most of the time we do not attach an item such as a wheelchair or elevator to transportation unless it directly affects our lives; for a person with mobility problems, these are major forms of transportation.
The exemplar approach proposes that people first learn specific examples of a concept, then classify each new stimulus by deciding how closely it resembles those stored examples. For example, what is the first thing you think of when someone says "fruit"? Most people will think of a common fruit such as an apple or a banana.
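The contrast between the two approaches can be sketched in a toy two-feature space (all numbers invented): the prototype approach compares a new item with a category average, while the exemplar approach compares it with each stored example.

```python
# Prototype vs. exemplar classification in a toy 2-feature space.
def distance(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def prototype_classify(item, categories):
    # Compare the item with each category's averaged prototype.
    def prototype(examples):
        return [sum(vals) / len(vals) for vals in zip(*examples)]
    return min(categories, key=lambda c: distance(item, prototype(categories[c])))

def exemplar_classify(item, categories):
    # Compare the item with every stored example; nearest example wins.
    return min(categories,
               key=lambda c: min(distance(item, e) for e in categories[c]))

categories = {"fruit": [(1, 1), (1, 2)], "vehicle": [(8, 8), (9, 9)]}
print(prototype_classify((1, 1.5), categories))  # near the fruit average
print(exemplar_classify((8.5, 8.5), categories)) # near a stored vehicle example
```

With such well-separated categories both methods agree; they come apart for atypical items that sit far from the category average but close to one stored example.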
The Collins and Loftus network model is a netlike organization of concepts in memory, with many interconnected nodes and links. These nodes and links show that the meaning of a word is greatly affected by associations between the word and its related concepts. Spreading activation through the network describes how activity travels along the links between nodes.
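A toy spreading-activation sketch over a Collins-and-Loftus-style network; the links, decay value, and number of steps are all invented for illustration:

```python
# Toy spreading activation: activation decays as it travels along links,
# so closely associated concepts end up more active than distant ones.
links = {
    "red":   ["fire", "apple", "rose"],
    "fire":  ["red", "hot"],
    "apple": ["red", "fruit"],
    "fruit": ["apple", "banana"],
}

def activate(start, decay=0.5, steps=2):
    activation = {start: 1.0}
    frontier = {start: 1.0}
    for _ in range(steps):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbor in links.get(node, []):
                spread = act * decay
                if spread > activation.get(neighbor, 0.0):
                    activation[neighbor] = spread
                    next_frontier[neighbor] = spread
        frontier = next_frontier
    return activation

result = activate("red")
print(result)  # "apple" is more active than "fruit"; "banana" is not reached
```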
Anderson's theory, the Adaptive Control of Thought (ACT) model, is a complex model that attempts to account for all of language, learning, decision making, and human cognition. It is composed of three memory structures: declarative knowledge (knowledge about facts and things), procedural knowledge (knowledge about how to perform actions), and working memory. Anderson proposes that a sentence can be broken down into small meaningful units of information. For example, the sentence "Once there was a little red hen who lived in a barnyard with her three chicks and a duck, a pig, and a cat" can be broken into smaller propositions: 1. Once there was a little red hen who lived in a barnyard. 2. Her three chicks lived with her. 3. A duck, a pig, and a cat lived in the barnyard.
The Parallel Distributed Processing (PDP) approach proposes that cognitive processes can be understood in terms of networks that link together neuron-like units. With this approach many operations can proceed simultaneously rather than one at a time. The PDP model can explain how human memory fills in missing information from a general category. People make a spontaneous generalization by drawing a conclusion from general information about a category, which is also called inductive thinking. People can also fill in missing information about a particular person or item in a category by making an educated guess, a default assignment. An example of PDP-style completion is a riddle: 1. It is yellow. 2. It grows on a tree. 3. It has the shape of a quarter moon. 4. Monkeys prefer this item. Did you guess the item was a banana from my description (Matlin, 2005)?
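The banana riddle can be mimicked with a crude sketch in which stored items compete on feature overlap; real PDP models use distributed units and learned connection weights, which this toy version does not:

```python
# Toy pattern completion: the stored item sharing the most clue features
# "wins" the competition (items and features invented for illustration).
items = {
    "banana": {"yellow", "grows on a tree", "quarter-moon shape", "monkeys like it"},
    "apple":  {"red", "grows on a tree", "round"},
    "lemon":  {"yellow", "grows on a tree", "oval"},
}

def complete(clues):
    # Score each stored item by how many clue features it shares.
    return max(items, key=lambda name: len(items[name] & clues))

clues = {"yellow", "quarter-moon shape", "monkeys like it"}
print(complete(clues))  # the riddle's answer: "banana"
```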
Much of semantic memory storage is encoded using language: "Much of the knowledge that we store in memory is encoded using language" (Steyvers, 2006, p. 327). Three new models have been proposed to explain human memory using semantic properties: the Retrieving Effectively from Memory (REM) model, probabilistic topic models, and the Syntagmatic Paradigmatic (SP) model. All three emphasize the role of probabilistic inference in memory, drawing on research in computer science, information retrieval, statistics, and computational linguistics. Probabilistic inference starts from the idea that certainty is impossible, so decisions must be based on probabilities.
Semantic memory is just one part of long-term memory. Episodic memory, the memory of events we have experienced, requires active reprocessing of a prior event; semantic memory does not require recall of the learning event to use the information. Tulving created a model of long-term memory which originally stated that semantic memory and episodic memory had a hierarchical relationship. Studies of patients with semantic dementia forced Tulving to revise his model to include the reliance of episodic and semantic memory upon perceptual input (Graham et al., 2000).
References:
Matlin, M.W. (2005). Cognition (6th ed.). Hoboken, NJ: John Wiley & Sons, Inc.
Graham, K.S., Simons, J.S., Pratt, K.H., Patterson, K., & Hodges, J.R. (2000). Insights from semantic dementia on the relationship between episodic and semantic memory. Neuropsychologia, 38, 313-324.
Steyvers, M., Griffiths, T.L., & Dennis, S. (2006). Probabilistic inference in human semantic memory. Trends in Cognitive Sciences, 10(7), 327-334.
Categories and concepts are essential components of semantic memory. People divide the world into categories to make sense of their knowledge. A category is a class of objects that belong together. A concept refers to our mental representations of our categories. Categories have many objects included in them, for example a category of miniature dogs. Each of the objects can be called a dog. People organize the objects into categories like files using their knowledge of the world. We have a concept of what a dog is. Dogs may be different colors, shapes, and sizes yet we still know that they are dogs. People have a concept that a dog is an animal. They then make a deduction that if you have a dog you have to feed, water, love, and exercise your animal. These deductions or inferences greatly expand our knowledge. People combine similar objects into a single concept. So if you are thinking about getting a family pet you could choose between any of the categories of breed you want, German Shepard, Maltese, Doberman, or Husky. Your mental concept of a dog can be any or all of these breeds.
Theories of semantic memory include: the feature comparison model, the prototype approach, and the exemplar approach. Network models of semantic memory include: the Collins and Loftus network model, Anderson’s ACT theory, the Parallel Distributed Processing Approach (PDP). For further information please see this link: Outline of Semantic Memory Models
The feature comparison model is a concept defined as a set of features – a list. The word “bird” has a set of features such as has wings, flies, lays eggs, lives in trees, builds a nest. Sentence verification is also included in this model. People compare subject and predicate for similarity, such as “A coat is clothing.” There is a similarity between the coat and clothing because a coat is clothing.
With the prototype approach a person will make a decision if an item belongs to a category by comparing it with a prototype (the item most typical of the category). For example, when a person thinks of transportation they think of the most common forms such as a car, truck, plane, train, and bicycle. Most of the time we do not attach an item such as a wheelchair or elevator to transportation, unless it directly affects our lives. For a person who has mobility problems these examples are major forms of transportation.
The exemplar approach explains that people first learn specific examples of a concept. Then they classify each new stimulus by deciding how directly it relates to the specific examples or concepts. For example, what is the first thing you think of when someone says, “fruit?” Most people will think of a common fruit such as an apple or banana.
The Collins and Loftus network model is a netlike organization of concepts in memory with many interconnections of nodes and links. These nodes and links show that the meaning of a word is greatly affected by associations between the word and its associated concepts. The spreading activation network shows how the nodes and links are connected.
Anderson’s theory, called Adaptive Control of Thought Model (ACT), is a complex model which attempts to account for all language, learning, decision making, and human cognition. It is composed of three memory structures: declarative knowledge (knowledge about facts and things), procedural knowledge (knowledge about how to perform actions), and working memory. He proposes that a sentence can be broken down into small meaningful bits of information. For example, the sentence, “Once there was a little red hen who lived in a barnyard with her three chicks and a duck, a pig, and a cat.” can be broken into smaller sections of information. 1. Once there was a little red hen who lived in a barnyard. 2. Her three chicks lived with her. 3. A duck, a pig, and a cat lived in the barnyard.
The Parallel Distributed Processing approach (PDP) proposes that cognitive processes can be understood in terms of networks that link together neuron-like units. With this approach many operations can proceed simultaneously rather than one at a time. The PDP model can explain how the human memory can figure out missing information from a general category. People make a spontaneous generalization or a conclusion by using an assumption based on the general information. This conclusion is also called inductive thinking. Deductive thinking allows people to fill in missing information about a particular person or item in a category by making an educated guess or a default assignment. An example of PDP would be a riddle: 1. It is yellow. 2. It grows on a tree. 3. It has the shape of a quarter moon. 4. Monkeys prefer this item. Did you guess the item was a banana from my description (Matlin, 2005)?
Semantic memory storage is encoded using language. “Much of the knowledge that we store in memory is encoded using language” (Steyvers, 2006, p. 327). Three recent models have been proposed to explain human memory using semantic properties: the Retrieving Effectively from Memory (REM) model, probabilistic topic models, and the Syntagmatic Paradigmatic (SP) model. All three models emphasize probabilistic inference in memory, drawing on research in computer science, information retrieval, and computational linguistics. The probabilistic assumption is the view that certainty is impossible, and that decisions must therefore be based on probabilities.
Semantic memory is just one part of long-term memory. Episodic memory, the memory of events we have experienced, requires active reprocessing of a prior event. Semantic memory does not require recall of the original event in order to use the information. Tulving created a model of long-term memory which originally held that semantic memory and episodic memory had a hierarchical relationship. Studies of patients with semantic dementia forced Tulving to revise his model of long-term memory to include the reliance of both episodic and semantic memory on perceptual input (Graham et al., 2000).
References:
Matlin, M. W. (2005). Cognition (6th ed.). Hoboken, NJ: John Wiley & Sons, Inc.
Graham, K. S., Simons, J. S., Pratt, K. H., Patterson, K., & Hodges, J. R. (2000). Insights from semantic dementia on the relationship between episodic and semantic memory. Neuropsychologia, 38, 313-324.
Steyvers, M., Griffiths, T. L., & Dennis, S. (2006). Probabilistic inference in human semantic memory. Trends in Cognitive Sciences, 10(7), 327-334.
Saturday, February 17, 2007
Working Memory
Working memory is a dynamic process of interpreting, integrating, and manipulating information for use in problem-solving and planning future actions (Matlin, 2005). The phrase “working memory” has largely replaced the older term “short-term memory,” in part because working memory is now believed to be a complex interactive process, whereas short-term memory was viewed as a more rigid store of facts. Working memory is the active process of interpreting a whole sentence or phrase and beginning to relate new information to prior knowledge.

Alan Baddeley is considered the leading theorist of working memory. His model was developed in 1974 and has recently been revised (Baddeley, 2002). “The multicomponent approach to working memory aims to understand the way in which information is temporarily stored and maintained in the performance of complex cognitive processing” (p. 94). The theory, named the multicomponent working memory model, is composed of four parts: the central executive, the phonological loop, the visuo-spatial sketchpad, and the episodic buffer. The central executive integrates information from the other three areas; it is thought to regulate and control actions, plan strategies, and coordinate behavior. The phonological loop handles short-term verbal memory. It has two components: a phonological store, which holds a limited number of sounds for short time periods, and an articulatory rehearsal system. The visuo-spatial sketchpad is the storehouse for visual information; it combines visual information with similar information from motor and tactile sources. The episodic buffer is a “go-between” storage system that holds scenes and interacts with the other systems using a multimodal code. Retrieval from the buffer happens during conscious awareness, which allows multiple informational sources to be manipulated simultaneously. The episodic buffer also helps to solve problems and plan future behavior. Working memory is much more than just short-term.
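The four-part structure described above can be sketched as a data structure. This is purely illustrative: Baddeley's model is a psychological theory, not a program, and the class and method names here are my own.

```python
# Schematic sketch of the multicomponent working memory model:
# the central executive routes incoming information to the
# modality-specific subsystems and binds it in the episodic buffer.

class WorkingMemory:
    def __init__(self):
        self.phonological_loop = []       # short-term verbal/sound store
        self.visuospatial_sketchpad = []  # visual and spatial store
        self.episodic_buffer = []         # multimodal "go-between" store

    def central_executive(self, stimulus, modality):
        """Route a stimulus to the appropriate subsystem, then bind it
        into the episodic buffer using a multimodal (modality, content)
        code."""
        if modality == "auditory":
            self.phonological_loop.append(stimulus)
        else:
            self.visuospatial_sketchpad.append(stimulus)
        self.episodic_buffer.append((modality, stimulus))

wm = WorkingMemory()
wm.central_executive("seven", "auditory")
wm.central_executive("red square", "visual")
```

Note how the episodic buffer ends up holding both items regardless of modality, which is the "go-between" role the text describes.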
Further information on this subject can be found by accessing the following links:
SparkNotes: Memory: Short-Term Memory
Thinker: Memory: Working Memory
References
Baddeley, A.D. (2002). Is working memory still working?. European Psychologist, 7 No.2, Retrieved February 11, 2007, from http://psych.colorado.edu/~tcurran/psyc5665/papers/Baddeley_2002.pdf.
Matlin, Margaret W. (2005). Cognition Sixth Edition. Hoboken, NJ: John Wiley & Sons,Inc..