Edited by: Guenther Palm, University of Ulm, Germany
Reviewed by: Matthew A. Smith, University of Pittsburgh, USA; Petia D. Koprinkova-Hristova, Bulgarian Academy of Sciences, Bulgaria
*Correspondence: Bernd J. Kröger
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Production and comprehension of speech are closely interwoven. For example, the ability to detect an error in one's own speech, halt speech production, and finally correct the error can be explained by assuming an inner speech loop which continuously compares the word representations induced by production to those induced by perception at various cognitive levels (e.g., conceptual, word, or phonological levels). Because spontaneous speech errors are relatively rare, a picture naming and halt paradigm can be used to evoke them. In this paradigm, picture presentation (target word initiation) is followed by an auditory stop signal (distractor word) for halting speech production. The current study seeks to understand the neural mechanisms governing self-detection of speech errors by developing a biologically inspired neural model of the inner speech loop. The neural model is based on the Neural Engineering Framework (NEF) and consists of a network of about 500,000 spiking neurons. In the first experiment we induce simulated speech errors semantically and phonologically. In the second experiment, we simulate a picture naming and halt task. Target-distractor word pairs were balanced with respect to variation of phonological and semantic similarity. The results of the first experiment show that speech errors are successfully detected by a monitoring component in the inner speech loop. The results of the second experiment show that the model correctly reproduces human behavioral data on the picture naming and halt task. In particular, the halting rate in the production of target words was lower for phonologically similar words than for semantically similar or fully dissimilar distractor words. We thus conclude that the neural architecture proposed here to model the inner speech loop reflects important interactions in production and perception at phonological and semantic levels.
Speech production is a hierarchical process: it starts with the activation of an idea that is intended to be communicated, proceeds with the activation of words and with the modification and sequencing of those words according to grammatical and syntactic rules, and ends with the activation of a sequence of motor actions that realize the intended utterance (Dell and Reich, 1981).
Restricting our attention to single-word production (as in a picture naming task), speech production starts with the activation of semantic concepts (e.g., “has wheels,” “can move,” “can transport persons”) and proceeds by retrieving an associated word (e.g., “car”) and its phonological form (/kar/) from the mental lexicon (see, e.g., Dell and Reich, 1981).
Both the cognitive and sensorimotor stages of speech production mainly involve retrieving and activating units or chunks already stored in repositories of cognitive knowledge and sensorimotor skill, respectively. This knowledge and these skills were learned during speech and language acquisition. The cognitive knowledge repository that plays a central role in word production is called the mental lexicon.
It can be assumed that speech monitoring, i.e., the comparison of intended and produced speech, can be realized by linking production and perception outcomes at different levels or stages (e.g., concept, phonological form, or motor plan levels). While the slower outer speech loop includes all stages (from conceptualization to articulation and back), the inner speech loop includes only the stages from conceptualization to retrieval of the phonological form and back. This inner-loop theory of speech monitoring has been successful in explaining the fact that speech errors are often repaired so quickly that the involvement of the (slow) auditory feedback loop (the outer speech loop) can be ruled out (Postma, 2000).
Because it is not trivial to evoke speech errors, a picture naming and halt paradigm is used in this study (Slevc and Ferreira, 2006).
The main goal of this study is to develop a neural architecture for speech production and perception (and comprehension) which, on the one hand, enables fast, effortless, and error-free realization of word production and, on the other hand, allows for the simulation of speech errors and realistic and effective speech monitoring. Thus, the neural architecture of our model consists of speech production, perception, and monitoring components and should, furthermore, be capable of detecting and correcting speech errors. A further major goal of this study is to underline the tight connection between speech production and perception (e.g., Pickering and Garrod, 2013).
The neural model of speech processing developed here uses the principles of the Neural Engineering Framework (NEF; Eliasmith and Anderson, 2003).
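As background, the following minimal Nengo sketch (not part of the released model code) illustrates the basic NEF idiom: a population of spiking neurons is declared to represent a quantity, and a filtered probe reads the decoded value back out. All names and values here are illustrative.

# Minimal NEF/Nengo illustration (not the paper's model): a population
# of spiking LIF neurons represents a scalar stimulus, and a decoded,
# filtered probe lets us read the represented value back out.
import nengo

model = nengo.Network(label="NEF sketch")
with model:
    stim = nengo.Node(lambda t: 0.5)                   # constant scalar input
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)  # spiking population
    nengo.Connection(stim, ens)                        # encode the input
    probe = nengo.Probe(ens, synapse=0.01)             # decode with a filter

with nengo.Simulator(model) as sim:
    sim.run(0.5)  # the probed value should hover near 0.5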
A model of speech production, speech perception, speech monitoring, and speech error detection and repair is complex and must include cortical as well as subcortical components. For building complex models using the NEF it is advantageous to use the Semantic Pointer Architecture (SPA).
Semantic pointers are also the vehicles for representing the concepts, words, and phonological forms processed in our model.
Because it is non-trivial to generate a set of semantic pointers representing the concepts, words (i.e., lemmas and orthographic forms), and phonological forms of a specific natural-language vocabulary, including a representation of the similarities of words at the concept (or semantic) level and at the phonological-phonetic level, a vocabulary of 90 items was constructed by hand for this study (see Appendix).
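The following hedged numpy sketch illustrates the construction principle (dimensionality, component weights, and word choices are illustrative, not the model's actual values): semantically similar words share a semantic component, phonologically similar words share a phonological component, and the dot product then grades the similarity.

# Illustrative sketch: high-dimensional "semantic pointers" whose dot
# products encode graded semantic and phonological similarity.
import numpy as np

rng = np.random.default_rng(1)
D = 128  # pointer dimensionality (an assumption)

def unit(v):
    return v / np.linalg.norm(v)

def rand_sp():
    return unit(rng.standard_normal(D))

fruit, syl_Ep = rand_sp(), rand_sp()  # shared semantic / phonological components

apple   = unit(fruit + syl_Ep + rand_sp())        # target word
peach   = unit(fruit + rand_sp() + rand_sp())     # semantically similar
apathy  = unit(rand_sp() + syl_Ep + rand_sp())    # phonologically similar
apricot = unit(fruit + syl_Ep + rand_sp())        # similar on both levels
couch   = unit(rand_sp() + rand_sp() + rand_sp()) # dissimilar

for name, sp in [('peach', peach), ('apathy', apathy),
                 ('apricot', apricot), ('couch', couch)]:
    print(name, round(float(apple @ sp), 2))
# expected roughly: peach ~0.33, apathy ~0.33, apricot ~0.67, couch ~0.0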
In the following sections we introduce our model for speech production and perception, including speech monitoring and error processing, and we present experimental results (i) modeling halting in production when distortions are evoked at semantic and phonological levels within the model and (ii) simulating a picture naming and halt task (Slevc and Ferreira, 2006).
The architecture of the neural model comprises an inner speech loop, i.e., a word production pathway and a word perception (and comprehension) pathway that interact at the concept, word, and phonological levels.
From a functional viewpoint, the neural model presented here is designed for (i) self-detection of speech errors occurring during word production by self-monitoring, and for (ii) realizing a picture naming and halt task, which requires the self-monitoring component in order to compare self-produced target words to externally produced distractor words. Consequently, the model includes a self-monitoring component that compares production-induced and perception-induced word representations at the concept, word, and phonological levels.
The similarity values representing the concept, word, and phonological levels can be calculated as dot products (see Section Introduction). Thus, dot products are used here for calculating utility values Ui for actions Ai (see below).
The basic actions which can be selected in our model are A1 = ‘NEUTRAL’ (do nothing), A2 = ‘SPEAK’, A3 = ‘CONSIDER_HALT’, and A4 = ‘HALT’. If one of these actions is chosen, the similarity value of its semantic pointer approaches 1 within the time course of the neural activation patterns of the action selection module.
U2(t) = DOT_PROD(perceptual_input_buffer, ‘NEW_VISUAL’)
U3(t) = DOT_PROD(perceptual_input_buffer, ‘NEW_AUDIO’)
U4(t) = DOT_PROD(perceptual_input_buffer, ‘WORD’)
        − DOT_PROD(concept_buff_perc, WORD_i_CONCEPT)
        + DOT_PROD(word_buff_perc, WORD_i_LEMMA)
        + DOT_PROD(phonol_buff_perc, WORD_i_PHONOL)
(i) if all utility values Ui(t) < 0.25 (where Ui(t) ranges between 0 and 1),
    then: select action ‘NEUTRAL’ (i.e., do nothing);
(ii) if U2(t) is currently the highest utility value,
    then: select action A(t) = ‘SPEAK’;
(iii) if U3(t) is currently the highest utility value,
    then: select action A(t) = ‘CONSIDER_HALT’;
(iv) if U4(t) is currently the highest utility value,
    then: select action A(t) = ‘HALT’;
Thus, action selection mainly leads to ‘NEUTRAL’ if no dot product is above 0.25 (if-statement i). If a new word is activated by a visual signal at the concept input buffer, the ‘SPEAK’ action is always chosen (if-statement ii). The semantic pointer ‘NEW_VISUAL’ indicates the beginning of the visual presentation of a new word, while ‘WORD’ marks the time interval during which a word is presented. If an external audio signal is presented quickly after the activation of ‘SPEAK’ (as happens in the picture naming and halt task), then ‘CONSIDER_HALT’ is activated (if-statement iii). In addition, if a word is clearly activated within all buffers of the inner speech loop (which is always the case after the ‘NEW_VISUAL’ input appears), the ‘HALT’ action can be chosen if the difference between the semantic pointers activated in the production pathway (i.e., WORD_i_CONCEPT, WORD_i_LEMMA, and WORD_i_PHONOL) and those in the perception/comprehension pathway (i.e., the current neural activation patterns within the cortical SPA buffers concept_buff_perc, word_buff_perc, and phonol_buff_perc) is small at each of the concept, word, and phonological levels (if-statement iv). If this difference is large, utility value U4 will be low, and the ‘HALT’ action will not be activated.
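To make the rule set concrete, here is a hedged numpy sketch of the same evaluate-and-select step. The actual model realizes this with spiking basal ganglia and thalamus networks in Nengo; the buffer and pointer names follow the text, the 0.25 threshold follows if-statement i, and the operators in U4 are kept exactly as printed above.

# Plain-numpy sketch of the utility computation and winner-take-all
# action selection (an abstraction of the spiking implementation).
import numpy as np

def select_action(perc_input, concept_perc, word_perc, phonol_perc, sp):
    """sp: dict mapping pointer names to unit vectors (illustrative)."""
    u = {
        'SPEAK':         perc_input @ sp['NEW_VISUAL'],   # U2
        'CONSIDER_HALT': perc_input @ sp['NEW_AUDIO'],    # U3
        'HALT': (perc_input @ sp['WORD']                  # U4, operators
                 - concept_perc @ sp['WORD_i_CONCEPT']    # kept as printed
                 + word_perc @ sp['WORD_i_LEMMA']         # in the text
                 + phonol_perc @ sp['WORD_i_PHONOL']),
    }
    best = max(u, key=u.get)
    # if-statement (i): below threshold, do nothing
    return 'NEUTRAL' if u[best] < 0.25 else best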
In our model, action selection indirectly leads to an activation of a go-signal for the speech production module (see the arrow from action selection to speech production in Figure
It has been mentioned above that the neural states activated in the concept, word, and phonological buffers within the inner speech loop are neural activation patterns equivalent to or represented by semantic pointers. These semantic pointers are stored as vectors within the mental lexicon module of our model (not shown in Figure
In the case of the mental lexicon, the semantic pointer network needs to be subdivided into subnetworks for concepts, words (i.e., lemmas and orthographic forms), and phonological forms.
Our neural model also requires subnetworks for visual representations of concepts and for auditory representations of words. In our experimental scenario, visual images are closely related to concepts, and aural signals are closely related to phonological forms. Each of these subnetworks contains 90 items, each of which corresponds directly to one of the words defined in the subnetworks for concepts, words, and phonological forms. The semantic pointers for visuals are labeled with the prefix ‘V_’ (e.g., ‘V_Apple_Apfel’) and the semantic pointers for aural signals are labeled with the prefix ‘A_’ (e.g., ‘A_apple’). In addition, semantic pointers are defined for visual and auditory input representations. The pointers within these subnetworks are labeled with an initial ‘Vin_’ or ‘Ain_’ respectively.
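As an illustration of what such a subnetwork computes, the following hedged sketch reduces the cleanup function of an associative memory to an argmax over dot products. The model itself implements this with populations of spiking neurons, and the threshold value here is an assumption, not taken from the model.

# Illustrative "cleanup" sketch: map a noisy state vector to the
# best-matching stored semantic pointer, or to nothing at all.
import numpy as np

def cleanup(state, vocab, threshold=0.3):
    """vocab: dict mapping item names to unit vectors.

    Returns the name of the best match, or None if no stored pointer
    is similar enough (threshold is an illustrative value).
    """
    sims = {name: float(state @ vec) for name, vec in vocab.items()}
    best = max(sims, key=sims.get)
    return best if sims[best] >= threshold else None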
Eighteen different input or target words are used in both simulation experiments (listed in the first column of Table
Target word | Semantically similar | Phonologically similar | Semantically and phonologically similar | Dissimilar |
Apple | Peach | Apathy | Apricot | Couch |
Basket | Crib | Ban | Bag | Thirst |
Bee | Spider | Beacon | Beetle | Flag |
Bread | Donut | Brick | Bran | Nail |
Camel | Pig | Cash | Calf | Bucket |
Carrot | Spinach | Cast | Cabbage | Evening |
Duck | Raven | Dub | Dove | Brass |
Elephant | Moose | Elm | Elk | Stripe |
Fly | Moth | Flu | Flea | Rake |
Lamp | Candle | Landing | Lantern | Package |
Peanut | Almond | Piano | Pecan | Dress |
Rabbit | Beaver | Raft | Rat | Coffee |
Snake | Eel | Snack | Snail | Fire |
Spoon | Ladle | Sparkle | Spatula | Cable |
Squirrel | Mole | Skate | Skunk | Chain |
Train | Bus | Trophy | Trolley | Fox |
Truck | Jeep | Trap | Tractor | Celery |
Trumpet | Horn | Traffic | Trombone | Corner |
We conducted 35 trials in which productions of each of the 18 target words were simulated (630 simulations in total). No distractors or stop signals were activated. Neural activation levels for different semantic pointers in different cortical SPA buffers are displayed in Figure
In the lower part of Figure
We simulated ten trials in which the model produces each of the 18 target words with some kind of distortion (column 1 in Table
The induction of a conceptual (or semantic) speech error in a picture naming task was simulated by adding a second concept buffer to the production pathway and by connecting the output of that second buffer (“side branch buffer”) together with the output of the original concept buffer (given in Figure
A similar process was used to induce phonological speech errors. Specifically, a second word-level buffer was added to the production pathway of the inner speech loop, and was connected to the phonological buffer of the production pathway. This new word-level buffer is activated by a word which is phonologically similar but not identical to the target word. This leads to strong activation of the distortion word in the phonological buffer of the production pathway, which is then propagated to all levels of the perception pathway. The temporal activation pattern of the second word buffer is displayed in the third row in Figure
Five trials were simulated for each of 72 word combinations in the picture naming and halt task (each of the 18 target words was paired with four distractor words: 4 × 18 = 72 combinations). Target words were activated by picture naming (column 1 of Table
In all four simulations of the picture naming and halt task, visual input starts at about 100 ms and is fully activated at 150 ms. The audio input starts at about 500 ms and is fully activated at 550 ms. The perceptual input buffer signals the presentation of new visual or audio input (row 1 in Figures
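The timing just described can be summarized as a Nengo-SPA-style input function. The sketch below is an assumption-laden simplification: the 50 ms activation ramps are replaced by hard onsets at full activation, and the pointer names follow the text.

# Hedged sketch of the stimulus schedule as a symbolic input function.
def perceptual_input(t):
    """Return the symbolic input pointer active at time t (in seconds)."""
    if 0.10 <= t < 0.15:
        return 'NEW_VISUAL'        # picture onset (ramping simplified away)
    if 0.50 <= t < 0.55:
        return 'WORD + NEW_AUDIO'  # word still present, stop signal starts
    if t >= 0.15:
        return 'WORD'              # a word is being presented
    return '0'                     # null pointer before any stimulus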
Nengo source code for this model can be downloaded at
We simulated normal productions of the 18 target words listed in Table
It can further be seen that in the case of normal production (i.e., no distractor or error signal activation) the activities in all six cortical SPA buffers represent the (same) target word (“duck” in the case of the example given in Figure
In summary, for all 630 simulation trials, at no point in time were any of the conceptual, word or phonological buffers found to represent a semantic pointer for any word different from the target word. The phonological buffer within the production pathway was not activated at all in four trials (0.6% of all trials; see Figure
Category | Percentage (number) of trials | Cumulative percentage (number) |
No production signal | 0.6% (4) | 0.6% (4) |
Weak production signal | 1.4% (9) | 2.0% (13) |
Erroneous self-perception | 3.7% (23) | 5.7% (36) |
The same neural model as in Experiment 1a (i.e., the model given in Figure
Semantic distractor activation (Figure
Phonological distractor activation (Figure
In both the semantic distractor and phonological distractor cases, we consider two conditions with respect to the relative strength of the side-branch connections compared to the standard connections: (i) strong coupling, in which the side branch connections were twice as strong as the regular connections, and (ii) weak coupling, in which the side branch connections had the same strength as the regular connections. In each condition, five trials were simulated for each of the 18 target words (column 1 in Table
Coupling | Semantically induced errors | Phonologically induced errors |
Strong | 100.0% (90) | 31.1% (28) |
Weak | 56.7% (51) | 10.0% (9) |
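In vector terms, the effect of a side branch can be sketched as a weighted superposition of the regular and side-branch inputs to the downstream buffer. The weights below follow the strong/weak conditions described above, while the linear mixing itself is a simplification of the underlying neural connections.

# Hedged sketch of side-branch coupling: strong coupling weights the
# side branch twice as strongly as the regular connection.
import numpy as np

def side_branch_mix(target_sp, distortion_sp, coupling='strong'):
    """Combine regular and side-branch inputs to the downstream buffer."""
    w_side = 2.0 if coupling == 'strong' else 1.0  # per the two conditions
    v = target_sp + w_side * distortion_sp
    return v / np.linalg.norm(v)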
Five trials of each of the 72 word combinations were used in the picture naming and halt task (each of the 18 target words, induced by picture naming, was paired with a semantically similar, a phonologically similar, a semantically and phonologically similar, and a dissimilar distractor word, giving 4 × 18 = 72 combinations; see Section Experiment 2).
For every word combination, ‘SPEAK’ and ‘CONSIDER_HALT’ actions are activated by the action selection module (see row 4 in Figures
It is interesting to see that the phonological input signal to the comprehension pathway always overrides the input signal from the production pathway (i.e., the inner speech signal) in the case of phonologically different word pairs (Figures
In summary, the percentage and absolute number of halted trials are given in Table
Strength | Semantically similar | Phonologically similar | Semantically and phonologically similar | Dissimilar |
Strong | 60.0% (54) | 0% (0) | 0% (0) | 96.7% (87) |
Medium | 21.1% (19) | 0% (0) | 0% (0) | 68.9% (62) |
Weak | 5.6% (5) | 1.1% (1) | 0% (0) | 14.4% (13) |
It can be seen from Table
The main goal of this paper is to develop a spiking neuron model of speech processing at multiple cognitive levels, mainly for word and phonological form selection and monitoring at the level of the mental lexicon. Because temporal aspects can be modeled in the Neural Engineering Framework, and because this model (like all spiking neuron models) generates trial-to-trial variation at the level of neural states and their processing, it was possible to test the quality of the model by (i) checking the “natural” occurrence of speech errors, (ii) checking whether the model is capable of generating speech errors when ambivalent neural states are evoked at different cognitive levels within speech production by including “side branches,” and (iii) comparing the simulation results of a picture naming and halt task with human data. Our simulation results on error production and on picture naming and halt performance reinforce the assumption that an inner speech loop exists; that is, they support the hypothesis of an inner speech monitor that compares production-related and perception-related neural states at different cognitive levels within the inner speech loop. Thus, our spiking neuron model is a comprehensive cognitive approach to modeling and monitoring lexical access.
Because of the trial-to-trial variation of our neural model, “rare events” occurred at the cognitive levels of neural activation, which resulted in a stop of word production in 2.0% of 630 trials of normal word production (Experiment 1a; see Table
In order to generate speech errors in our model, “side branches” were added within the production pathway of the inner speech loop (Experiment 1b; see the descriptions of the side branches in Sections Methods and Results). These side branches increase the activation of competing words which are semantically or phonologically similar to the target word. In both cases (adding neural activations for semantically or phonologically similar words), the model is capable of generating and detecting speech errors at a high rate (between 10 and 100%, see Table
While the introduction of “side branches” is a modification of the production model itself (modeling a modification within the brain of the speaker), in behavioral experiments speech errors are induced by modifying the communication or production scenario (i.e., the experimental paradigm). One kind of “artificial” production scenario, used not directly for evoking speech errors but for investigating the speech monitoring function, is the introduction of an acoustical stop signal in a picture naming task (Slevc and Ferreira, 2006
In the case of phonologically similar distractor words, human data give a halting rate of about 20%, compared to a halting rate of around 40% for semantically similar or completely dissimilar distractor words (Slevc and Ferreira, 2006
In summary, like humans, the model is capable of halting when the distractor is dissimilar or only semantically similar to the target word. The fact that the model does not halt on distractors that are phonologically similar to the target forms in the picture naming and halting paradigm may result from the fact that the evaluation of similarities of neural activations between production and perception pathways in the inner speech loop seems to under-emphasize differences at the phonological level in comparison to differences at the semantic level. But recall that our model includes an “inner loop shortcut” connecting the phonological buffer of the production pathway to that of the perception pathway. This shortcut could be the source of an assimilation of the phonological forms within the production and perception pathways and thus also be the source of the difficulty of detecting differences between phonological forms in the production and perception pathways. But this shortcut is inevitable, because it is the basis for self-monitoring of inner speech. Moreover, we have demonstrated that phonological errors can be induced and detected by our model when side-branches are used (see the results of Experiment 1b, Table
Finally, it should be noted that the degree of similarity for phonologically and semantically similar word pairs is quantitatively comparable from the viewpoint of the implementation of these representations at the level of the mental lexicon. For all 18 word pairs (rows in Table
In this paper we have proposed a comprehensive spiking neuron model of the inner speech loop. This model includes a word production pathway starting from the conceptual level and ending with the phonological level, as well as a word perception (and comprehension) pathway starting from the phonological level and ending with the conceptual level. While this paper has focused on interactions between production and perception during inner speech, the proposed model also has the potential to be a sensible starting point for speech processing in general. In particular, a number of straightforward extensions suggest themselves. (i) A sentence-level module could be added in order to facilitate production and comprehension of whole utterances. This could be a good starting point for investigating face-to-face communication processes from a modeling perspective (Kröger et al.,
All authors contributed to software coding, writing, and correcting the manuscript. BK conducted the experiments.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
i) First two lines: Relation types are listed here. ‘VisR’: visual impression related to a concept; ‘AudiR’: auditory impression of the word, representing the phonological form of the word; ‘OrthoR’: orthographic representation of the word; ‘PhonoR’: phonological representation of the word; ‘IsA’: the concept is a member of a group defined by a superordinate concept.
ii) concepts = {}: 90 concepts are listed as semantic pointers in the Python dictionary concepts = {}. Each semantic pointer, representing one concept, is listed at the beginning of a new line; its relations are listed in the same line.
Concepts are defined by an English-German word pair representing that concept (e.g., ‘Apple_Apfel’). Visual representations are directly related to concepts and include a prefix ‘V_’. Auditory, orthographic, and phonological representations are related to a word, here the English word representing the concept.
visR, audiR, orthoR, phonoR, isA = 'VisR', 'AudiR', 'OrthoR', 'PhonoR', 'IsA'
Relation_types = [visR, audiR, orthoR, phonoR, isA]
concepts = {
    'Almond_Mandel': [(visR, 'V_Almond_Mandel'), (audiR, 'A_Almond'), (orthoR, 'W_Almond'), (phonoR, 'St_El_mend'), (isA, 'Nut_Nuss')],
    'Apathy_Apathie': [(visR, 'V_Apathy_Apathie'), (audiR, 'A_Apathy'), (orthoR, 'W_Apathy'), (phonoR, 'St_E_pe_si')],
    'Apple_Apfel': [(visR, 'V_Apple_Apfel'), (audiR, 'A_Apple'), (orthoR, 'W_Apple'), (phonoR, 'St_E_pel'), (isA, 'Fruits_Obst')],
    # …
    'Trumpet_Trompete': [(visR, 'V_Trumpet_Trompete'), (audiR, 'A_Trumpet'), (orthoR, 'W_Trumpet'), (phonoR, 'St_trOm_pet'), (isA, 'BrassWind_BlechblasInstr')],
}
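One plausible (but here assumed) way to fold such a relation list into a single structured pointer is the binding operation of the Semantic Pointer Architecture, circular convolution. The sketch below is a minimal numpy illustration, not the released model code.

# Hedged sketch: binding role (relation) vectors to filler vectors via
# circular convolution and summing the bindings into one pointer.
import numpy as np

def cconv(a, b):
    """Circular convolution via FFT: the SPA binding operator."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def encode_concept(relations, vectors):
    """relations: list of (relation_name, filler_name) tuples as in the
    dictionary above; vectors: dict mapping names to vectors."""
    sp = sum(cconv(vectors[rel], vectors[filler])
             for rel, filler in relations)
    return sp / np.linalg.norm(sp)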
i) First two lines: Relation type: ‘IsA’: the concept is a member of a group defined by a superordinate concept.
ii) concepts_deep = {}: Deep concepts are listed as semantic pointers in the Python dictionary concepts_deep = {}. Each semantic pointer, representing one deep concept, is listed at the beginning of a new line; its relations are listed in the same line. Deep concepts are used in the concept dictionary (see Appendix A1).
isA = 'IsA'
Relation_types = [isA]
concepts_deep = {
    # deep concepts, describing relations between concepts in concept-network:
    'Bin_Behaelter': [(isA, 'Object_Gegenstand')],
    'Bird_Vogel': [(isA, 'Animal_Tier')],
    'Bluebottle_Brummer': [(isA, 'Insect_Insekt')],
    # …
    'Vegetables_Gemuese': [(isA, 'Food_Nahrung')],
    # deep concepts, describing relations within deep concept network:
    'ClovenHoofed_Paarhufer': [(isA, 'FourLeg_Vierbeiner')],
    'FourLeg_Vierbeiner': [(isA, 'Animal_Tier')],
    'Insect_Insekt': [(isA, 'Animal_Tier')],
    'Vehicle_Fahrzeug': [(isA, 'Object_Gegenstand')],
    # basic level:
    'Animal_Tier': [],
    'Food_Nahrung': [],
    'Object_Gegenstand': [],
}
i) First two lines: Relation type: ‘InclPhon’: phonological representation of a word includes sub-representations (e.g., single phones, groups of phones, syllables)
ii) phonos = {}: Phonological representations of words are listed as semantic pointers in the Python dictionary phonos = {}. Each semantic pointer labels the phonological representation of a word and is listed at the beginning of a new line. Relations are listed in the same line.
Phonological representations of words include an underscore in order to separate syllables. The phonological transcriptions in part use SAMPA notation (SAMPA,
inclPhon = 'InclPhon'
Relation_types = [inclPhon]
phonos = {
    'St_El_mend': [(inclPhon, 'PSt_El'), (inclPhon, 'P_mend')],
    'St_E_pe_si': [(inclPhon, 'PSt_Ep'), (inclPhon, 'PSt_E'), (inclPhon, 'P_pe'), (inclPhon, 'P_si')],
    'St_E_pel': [(inclPhon, 'PSt_Ep'), (inclPhon, 'PSt_E'), (inclPhon, 'P_pel')],
    # …
    'St_trOm_pet': [(inclPhon, 'PSt_tr'), (inclPhon, 'PSt_trOm'), (inclPhon, 'P_pet')],
}
Deep phonological representations are single phones, groups of phones, or syllables. They are listed as semantic pointers in the Python dictionary phonos_deep = {}. These representations always start with the prefix ‘P’ for “part”; if a sub-representation is part of a stressed syllable, ‘St’ is included as well (yielding the prefix ‘PSt_’). Deep phonological representations are related to phonological representations (Appendix A3).
phonos_deep = {
    # syllables or parts of syllables, occurring in more than one word:
    'PSt_Ep': [], 'PSt_bE': [], 'PSt_bi': [], 'PSt_br': [], 'PSt_da': [], 'PSt_kE': [], 'PSt_El': [], 'PSt_fl': [], 'PSt_lE': [], 'PSt_pi': [], 'PSt_rE': [], 'PSt_snE': [],
    'PSt_sp': [], 'PSt_sk': [], 'PSt_tr': [],
    # other words or syllables, not necessarily describing dependencies between phonological representations within phonological network:
    'P_bel': [], 'P_bIdZ': [], 'P_bIt': [], 'P_boUn': [], 'P_del': [], 'P_der': [], 'P_dIN': [], 'P_fent': [], 'P_fi': [], 'P_fIk': [], 'P_je': [], 'P_kel': [], 'P_ken': [], 'P_kIdZ': [], 'P_kIt': [], 'P_kOt': [], 'P_la': [], 'P_le': [], 'P_lI': [], 'P_mel': [], 'P_mend': [], 'P_nat': [], 'P_ne': [], 'P_nIN': [], 'P_nItS': [], 'P_noU': [],
    'P_pe': [], 'P_pel': [], 'P_pet': [], 'P_pi': [], 'P_pri': [], 'P_rel': [], 'P_ret': [], 'P_rI': [], 'P_si': [], 'P_te': [], 'P_tel': [], 'P_ten': [], 'P_tje': [], 'P_we': [], 'P_wen': [], 'P_wer': [], 'PSt_bas': [], 'PSt_bEg': [], 'PSt_bEn': [], 'PSt_bras': [], 'PSt_brEd': [], 'PSt_brEn': [], 'PSt_brIk': [], 'PSt_bU': [], 'PSt_bUs': [],
    'PSt_dab': [], 'PSt_dak': [], 'PSt_daw': [], 'PSt_doU': [], 'PSt_drEs': [], 'PSt_dZip': [], 'PSt_E': [], 'PSt_EI': [], 'PSt_Elk': [], 'PSt_Elm': [], 'PSt_faI': [],
    'PSt_flaI': [], 'PSt_flEg': [], 'PSt_fli': [], 'PSt_flu': [], 'PSt_fOks': [], 'PSt_hOrn': [], 'PSt_i': [], 'PSt_il': [], 'PSt_kaf': [], 'PSt_kast': [], 'PSt_kaUtS': [],
    'PSt_kEI': [], 'PSt_kEn': [], 'PSt_kEs': [], 'PSt_kO': [], 'PSt_kOr': [], 'PSt_krIb': [], 'PSt_lEI': [], 'PSt_lEmp': [], 'PSt_lEn': [], 'PSt_mOs': [], 'PSt_moUl': [],
    'PSt_muz': [], 'PSt_nEIl': [], 'PSt_pE': [], 'PSt_pI': [], 'PSt_pIg': [], 'PSt_pitS': [], 'PSt_raft': [], 'PSt_rEI': [], 'PSt_rEIk': [], 'PSt_rEt': [], 'PSt_s9rst': [],
    'PSt_sE': [], 'PSt_skEIt': [], 'PSt_skUI': [], 'PSt_skUnk': [], 'PSt_snEIk': [], 'PSt_snEIl': [], 'PSt_snEk': [], 'PSt_spaI': [], 'PSt_spar': [], 'PSt_spE': [],
    'PSt_spI': [], 'PSt_spun': [], 'PSt_straIp': [], 'PSt_trE': [], 'PSt_trEIn': [], 'PSt_trEk': [], 'PSt_trEp': [], 'PSt_trO': [], 'PSt_trOm': [], 'PSt_troU': [],
    'PSt_trUk': [], 'PSt_tSEIn': [],
}
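Taken together, the part lists above make the phonological-form pointers of similar-sounding words overlap. The following hedged numpy sketch illustrates that idea; the composition weights and dimensionality are assumptions, not the model's exact construction.

# Illustrative composition: forms sharing parts (e.g., 'St_E_pel' and
# 'St_E_pe_si' share 'PSt_Ep' and 'PSt_E') get overlapping pointers.
import numpy as np

rng = np.random.default_rng(0)
D = 128  # assumed dimensionality

part = {name: rng.standard_normal(D) / np.sqrt(D)
        for name in ('PSt_Ep', 'PSt_E', 'P_pel', 'P_pe', 'P_si')}

def phono_pointer(parts):
    """Superimpose part vectors plus a word-specific random component."""
    v = sum(part[p] for p in parts) + rng.standard_normal(D) / np.sqrt(D)
    return v / np.linalg.norm(v)

apple  = phono_pointer(['PSt_Ep', 'PSt_E', 'P_pel'])         # 'St_E_pel'
apathy = phono_pointer(['PSt_Ep', 'PSt_E', 'P_pe', 'P_si'])  # 'St_E_pe_si'
print(round(float(apple @ apathy), 2))  # clearly above chance (~0)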
Words (i.e., orthographic representations) are listed as semantic pointers in the Python dictionary words = {}. These representations always start with the prefix ‘W_’ followed by the English word.
words = {
    'W_Almond': [], 'W_Apathy': [], 'W_Apple': [], 'W_Apricot': [], 'W_Bag': [], 'W_Ban': [], 'W_Basket': [], 'W_Beacon': [], 'W_Beaver': [], 'W_Bee': [], 'W_Beetle': [], 'W_Bran': [], 'W_Brass': [], 'W_Bread': [], 'W_Brick': [], 'W_Bucket': [], 'W_Bus': [], 'W_Cabbage': [], 'W_Cable': [], 'W_Calf': [], 'W_Camel': [], 'W_Candle': [], 'W_Carrot': [], 'W_Cash': [], 'W_Cast': [], 'W_Celery': [], 'W_Chain': [], 'W_Coffee': [], 'W_Corner': [], 'W_Couch': [],
    'W_Crib': [], 'W_Donut': [], 'W_Dove': [], 'W_Dress': [], 'W_Dub': [], 'W_Duck': [], 'W_Eel': [], 'W_Elephant': [], 'W_Elk': [], 'W_Elm': [], 'W_Evening': [], 'W_Fire': [], 'W_Flag': [], 'W_Flea': [], 'W_Flu': [], 'W_Fly': [], 'W_Fox': [], 'W_Horn': [], 'W_Jeep': [], 'W_Ladle': [], 'W_Lamp': [], 'W_Landing': [], 'W_Lantern': [], 'W_Mole': [], 'W_Moose': [], 'W_Moth': [], 'W_Nail': [], 'W_Package': [], 'W_Peach': [], 'W_Peanut': [], 'W_Pecan': [], 'W_Piano': [], 'W_Pig': [], 'W_Rabbit': [], 'W_Raft': [], 'W_Rake': [], 'W_Rat': [], 'W_Raven': [], 'W_Skate': [], 'W_Skunk': [], 'W_Snack': [], 'W_Snail': [], 'W_Snake': [], 'W_Sparkle': [], 'W_Spatula': [], 'W_Spider': [], 'W_Spinach': [], 'W_Spoon': [], 'W_Squirrel': [], 'W_Stripe': [], 'W_Thirst': [], 'W_Tractor': [], 'W_Traffic': [], 'W_Train': [], 'W_Trap': [], 'W_Trolley': [], 'W_Trombone': [], 'W_Trophy': [], 'W_Truck': [], 'W_Trumpet': [],
}