ORIGINAL RESEARCH article

Front. Psychol., 16 February 2016
Sec. Educational Psychology

Psychological Literacy Weakly Differentiates Students by Discipline and Year of Enrolment

  • 1School of Psychology and Exercise Science, Murdoch University, Perth, WA, Australia
  • 2School of Psychology and Speech Pathology, Curtin University, Perth, WA, Australia

Psychological literacy, a construct developed to reflect the types of skills graduates of a psychology degree should possess and be capable of demonstrating, has recently been scrutinized in terms of its measurement adequacy. The recent development of a multi-item measure encompassing the facets of psychological literacy has provided the potential for improved validity in measuring the construct. We investigated the known-groups validity of this multi-item measure of psychological literacy to examine whether psychological literacy could predict (a) students’ course of enrolment and (b) students’ year of enrolment. Five hundred and fifteen undergraduate psychology students, 87 psychology/human resource management students, and 83 speech pathology students provided data. In the first year cohort, the reflective processes (RPs) factor significantly predicted psychology and psychology/human resource management course enrolment, although no facets significantly differentiated between psychology and speech pathology enrolment. Within the second year cohort, generic graduate attributes (GGAs) and RPs differentiated psychology and speech pathology course enrolment. GGAs differentiated first-year and second-year psychology students, with second-year students more likely to have higher scores on this factor. Due to weak support for known-groups validity, further measurement refinements are recommended to improve the construct’s utility.

Introduction

The construct of psychological literacy has become an integral part of discussions around the skills a graduate from a psychology degree should have (McGovern et al., 2010; Cranney et al., 2011b, 2012; Trapp et al., 2011; Mair et al., 2013; Karantzas, 2014; Baker, 2015). Psychological literacy is most commonly defined as “…the general capacity to adaptively and intentionally apply psychology to meet personal, professional, and societal needs” (Cranney et al., 2012, p. iii). It is theorized to consist of nine facets: psychological knowledge, scientific thinking, critical thinking, application of psychological principles, ethical behavior, information literacy, effective communication competence, respect for diversity, and insight (McGovern et al., 2010). Current research is focusing on how to operationally define and subsequently measure psychological literacy (Cranney et al., 2011a; Karantzas, 2014; Roberts et al., 2015). However, there remain some questions as to whether psychological literacy should be seen as a desirable goal for university graduates from all disciplines, or whether psychological literacy should be seen as the primary goal of a psychology education and as a set of skills that sets psychology graduates apart from other health professionals.

Our previous research (Roberts et al., 2015) examined the factor structure of self-report measures of the nine facets of psychological literacy defined by McGovern et al. (2010) and found three underlying factors and one independent construct (interactional diversity). The three factors were reflective processes (RPs), generic graduate attributes (GGAs), and psychology as a helping profession (PHP) (see Figure 1). The RPs factor comprised self- and other-reflection. The GGAs factor comprised scientific thinking, information literacy, communication competence, ethical behavior, insight, and critical thinking. Critical thinking loaded on RPs in one sample and on GGAs in a second sample; the authors argued that the latter loading was more valid, given the distinction between reflecting on the behavioral or mental processes of the self and others (RPs) and the applied, problem-solving focus of the critical thinking items (a generic university graduate attribute). PHP comprised personal growth and applied helping. The finding of both generic and psychology-specific factors in our previous research (Roberts et al., 2015) suggested that only some aspects of psychological literacy may be specific to psychology graduates, but this has yet to be tested.

FIGURE 1. Indicator measures and latent factors of psychological literacy, adapted from Roberts et al. (2015).

This paper builds on the findings of our previous research (Roberts et al., 2015) by further examining the validity of the self-report measures of psychological literacy. In particular, we focus on known-groups validity. This type of validity is based on the proposition that for a test to be valid it must be able to discriminate between groups that theoretically differ (Hattie and Cooksey, 1984).

Psychological literacy is a skill taught as part of a psychology degree, so valid tests would be expected to detect differences in psychological literacy between students from psychology and non-psychology courses. Differences between disciplines in terms of what students are taught about psychological literacy have been identified. Murdoch et al. (2014) argue that there are three characteristics that set graduates of psychology apart from other health professionals: (1) knowledge and use of the scientific method of enquiry, (2) psychological literacy, and (3) the combined skills and knowledge of case formulation and diagnosis. Murdoch et al. (2014) reviewed the amount and type of mental health training provided to students of psychology, nursing, social work, and medicine across universities in Canada. The data showed that psychology students received more instruction in psychological literacy than nursing, social work, and medical students, with the latter disciplines receiving approximately the same amount of instruction in psychological literacy as each other. If the self-report measures of psychological literacy are valid, they should be able to discriminate between psychology students and students from other disciplines.

It is expected that psychology and non-psychology students would vary in their capabilities in domains relevant to psychological literacy even at the onset of their enrolment, based on selection effects. Schneider’s (1987) Attraction-Selection-Attrition theory contends that individuals are drawn to organizations (and, by extension, university courses) on the basis of perceived similarities. Supporting this, Boone et al. (2004) demonstrated in a tertiary education context that students self-select into courses based on personality similarities. These findings are consistent with Holland’s (1996) extensive work on the similarities of individuals within particular occupations, and with its extension to tertiary education, demonstrating that students tend to perceive fit with their course based on individual-difference factors such as interests and personality. We may therefore anticipate that students self-selecting into psychology degrees, based on future careers in areas where psychological literacy is valuable and congruent, may at the onset of enrolment have higher levels of psychological literacy than peers enrolling in other disciplines. Valid measures of psychological literacy factors should be able to discriminate between students entering psychology degrees and students entering other disciplines.

Furthermore, differences in psychological literacy at the time of enrolment are likely to continue to manifest beyond the first year of study for two reasons. First, as noted above, psychology students receive more education in psychological literacy than students enrolled in non-psychology courses. Second, Schneider’s (1987) Attraction-Selection-Attrition theory suggests that psychology students who perceive discrepancies between themselves and their course (and, consequently, the facets of psychological literacy represented within their coursework) are less likely to continue their education in this field. This homogenizing of individual differences within psychology and non-psychology courses would therefore consolidate expected differences in psychological literacy between courses beyond the onset of enrolment. Psychological literacy measures should therefore discriminate between students enrolled in psychology majors and students enrolled in other majors. One study provides preliminary support for this hypothesis. Morris et al. (2013) examined awareness, importance, and perceived development of psychological literacy in 213 students taking undergraduate psychology units. Students were grouped into non-psychology majors (students taking one or more psychology units as electives), psychology majors (students enrolled in either of two accredited undergraduate psychology degrees), and experienced psychology majors (students enrolled in one of the two accredited psychology degrees who had also completed the course capstone unit). While there were no significant effects of group or year level on the importance placed on psychological literacy, there was a significant group effect for how much students thought their psychological literacy had developed during their studies. Experienced majors reported more development than majors, and majors reported more development than non-majors.

If psychological literacy develops as a function of undergraduate education in psychology, we might also expect psychological literacy to increase with years of psychology education. Morris et al.’s (2013) study also provided preliminary support for this hypothesis. There was a significant increase in the reported development of psychological literacy across the years of the psychology degree. In addition, there were significant correlations between the number of psychology units a student had completed and the importance and development ratings they gave. However, a limitation of this study is that the relative importance and development of psychological literacy were measured using single-item measures of unknown reliability and validity. Valid measures of psychological literacy factors should be able to discriminate between students of differing years of study.

The Current Study

In summary, our previous research (Roberts et al., 2015) began the process of measuring the nine proposed facets of psychological literacy in undergraduate psychology students using single- and multi-item self-report measures. That research focused on determining the factor structure underlying psychological literacy, finding generic and psychology-specific factors. In the current study, we build on this research to examine the known-groups validity of the psychological literacy self-report measures. If students entering university are attracted to particular disciplines based on pre-existing interests and personality traits, we might expect that students entering a psychology degree will already be higher on the psychology-specific factors of psychological literacy than students entering other disciplines (H1). If psychological literacy is something that is taught in undergraduate psychology degrees, we would expect the psychology-specific factors of psychological literacy to discriminate between psychology students and students from other disciplines beyond the first year (H2). Similarly, we would expect that students who have completed more psychology education would score higher on psychology-specific factors of psychological literacy than those who have completed less. That is, psychological literacy variables should significantly differentiate students from each year of study (H3).

Materials and Methods

Participants

The participants for this research were a convenience sample of 886 students at an Australian university. Of this sample, 74 participants either did not report their discipline of study (N = 43) or were enrolled in a course that was uncommon in our sample, thereby increasing the risk of overdispersion in later analyses (N = 31) (Tabachnick and Fidell, 2013). These cases were removed from further analysis. Ninety-nine cases in which participants identified English as not being their first language were also removed, given the potential for language-related bias in the self-report data. Participants enrolled in fourth-year, honors, or postgraduate programs of study were not represented in sufficient numbers within each discipline to meet the frequency assumptions of the logistic regression analyses, and were omitted from further analysis (N = 28). Of the remaining 685 participants, 515 were enrolled in a psychology degree, 87 were enrolled in a psychology and human resource management double degree, and 83 were enrolled in a speech pathology degree¹. Table 1 presents a summary of the demographic variables by course of enrolment. Part of this sample (students enrolled in a psychology course) was previously used in Roberts et al. (2015) to assess the factor structure of the multi-item facet measures of psychological literacy.

TABLE 1. Demographic characteristics of the undergraduate participants (N = 685).

Measures

An online questionnaire was developed comprising single-item and multi-item measures of the nine facets of psychological literacy, and demographic items (age, gender, years of study, number of psychology units completed, full-time or part-time status, and international or domestic student status). Only the multi-item measures of psychological literacy were used in the current study. A summary of the measures is presented in Table 2. Further details of the measures, including their psychometric properties, are provided in Roberts et al. (2015).

TABLE 2. Measures of psychological literacy facets.

Procedure

Recruitment for the study was conducted in two time periods (the first semesters of 2013 and 2014), following Curtin University Human Research Ethics Committee approval. Students were recruited through announcements in psychology lectures and on learning management system sites, and through a school-based research participation pool. Students participating through the pool were awarded research points; all other participants were entered into a prize draw for a $100 Amazon.com voucher.

In line with best practice recommendations (Allen and Roberts, 2010), the online questionnaire was ‘sandwiched’ between a participant information sheet and a debriefing page hosted on the university website. Links were provided to the participant information sheet. Upon reading the participant information sheet and consenting to participate, students were redirected to the questionnaire. Survey data were downloaded from Qualtrics.com into SPSS (v. 20) for analysis. The data were screened for missing values and multiple responding. Due to the possible confounding effect of year of enrolment, the data were split into first-year and second-year students for the subsequent analyses of discipline differences in psychological literacy factors.
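To illustrate the screening and cohort-split steps described above, the following is a minimal sketch in Python/pandas. The file name and column names (course, first_language, year) are hypothetical placeholders; the original screening was performed in SPSS.

```python
# Minimal sketch of the data screening and cohort split described above.
# File and column names are hypothetical; the study used SPSS (v. 20).
import pandas as pd

df = pd.read_csv("survey_export.csv")  # placeholder for the Qualtrics export

# Retain only the three courses of interest, native English speakers,
# and first- to third-year students, mirroring the exclusions reported
# in the Participants section.
keep_courses = ["psychology", "psychology_hrm", "speech_pathology"]
df = df[df["course"].isin(keep_courses)]
df = df[df["first_language"] == "english"]
df = df[df["year"].isin([1, 2, 3])]

# Split by year of enrolment for the discipline comparisons.
first_year = df[df["year"] == 1]
second_year = df[df["year"] == 2]
```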

Results

Regression scores for each factor of psychological literacy, reflecting the model identified in Roberts et al. (2015; see Figure 1), were calculated for each participant via confirmatory factor analytic methods. Correlations and variances of the psychological literacy factor scores are presented in Table 3, and these factor scores were used in the subsequent regression analyses.
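For readers unfamiliar with regression-method factor scores, the sketch below illustrates the general computation from a fitted measurement model. The loading matrix, factor covariances, and residual variances shown are arbitrary placeholders, not the estimates of Roberts et al. (2015), and the sketch is only an approximation of the software-specific procedure used in the study.

```python
# A sketch of regression-method (Thomson) factor scores from a fitted CFA.
# Parameter values below are arbitrary placeholders for illustration only.
import numpy as np

def regression_factor_scores(z, lam, phi, theta):
    """z: n x p standardized indicators; lam: p x k loadings;
    phi: k x k factor covariance; theta: length-p residual variances."""
    sigma = lam @ phi @ lam.T + np.diag(theta)   # model-implied covariance
    weights = np.linalg.solve(sigma, lam @ phi)  # p x k score weights
    return z @ weights                           # n x k factor scores

# Illustrative example: 6 indicators loading on 3 correlated factors.
rng = np.random.default_rng(0)
z = rng.standard_normal((100, 6))
lam = np.zeros((6, 3))
lam[0:2, 0] = lam[2:4, 1] = lam[4:6, 2] = 0.7
phi = np.array([[1.0, 0.3, 0.3], [0.3, 1.0, 0.3], [0.3, 0.3, 1.0]])
theta = 1 - np.sum(lam ** 2, axis=1)             # unit implied variances
scores = regression_factor_scores(z, lam, phi, theta)
```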

TABLE 3. Bivariate correlations, variances, and reliabilities of the psychological literacy factors (N = 685).

Psychological Literacy at the Time of Entering Degree by Discipline (H1)

Our first set of analyses was designed to test the known-groups validity of the psychological literacy measures among students entering the first year of their undergraduate studies (H1).

Multinomial logistic regression was used to predict group membership (psychology majors, psychology-HRM majors, and speech pathology majors) within the first-year cohort. The predictor variables were the psychological literacy factor scores: GGAs, PHP, and RPs were entered in a single block. Psychology majors were chosen as the reference group, as this allowed theoretically meaningful contrasts between psychology majors and the other two groups of majors. All statistical assumptions were met prior to conducting the main analysis.
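A minimal sketch of this type of analysis in Python (statsmodels) is shown below, assuming the factor scores have been merged into the screened first-year data frame from the earlier sketch under the hypothetical column names ggas, php, and rps. In statsmodels’ MNLogit the lowest-coded outcome category serves as the reference, so psychology is coded 0 to mirror the contrasts described above; the original analyses were run in SPSS, so this is an independent illustration rather than the study’s procedure.

```python
# Sketch of the multinomial logistic regression predicting course enrolment
# from the psychological literacy factor scores (hypothetical column names).
import numpy as np
import statsmodels.api as sm

course_codes = {"psychology": 0,            # coded 0 -> reference category
                "psychology_hrm": 1,
                "speech_pathology": 2}
y = first_year["course"].map(course_codes)
X = sm.add_constant(first_year[["ggas", "php", "rps"]])  # single predictor block

model = sm.MNLogit(y, X)
result = model.fit()
print(result.summary())       # parameter estimates for each contrast vs. psychology
print(np.exp(result.params))  # odds ratios
```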

The predictor-inclusive model was significantly different from the baseline model, χ²(6) = 13.12, p = 0.041, indicating that the set of psychological literacy factors was capable of distinguishing between psychology majors and speech pathology/psychology-HRM majors. Non-significant Pearson (p = 0.551) and Deviance (p = 1.000) model fitting statistics indicated good model fit. Cox and Snell R² = 0.048 and Nagelkerke R² = 0.067 indicated small-to-moderate effect sizes for the model by Cohen’s (1988) conventions. Parameter estimates for the model are presented in Table 4. Comparisons between psychology and psychology-HRM students indicated that the RPs factor scores differentiated group membership: within the first-year sample, psychology students were significantly more likely than psychology-HRM students to have higher RPs factor scores. Within the first-year cohort, however, there were no significant predictors of group membership between the psychology and speech pathology students.
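For reference, fit statistics of this kind can be recovered from the fitted model’s log-likelihoods, as in the sketch below (continuing from the hypothetical MNLogit result above); the numbers reported in the text come from the study’s SPSS output, not from this code.

```python
# How likelihood-ratio and pseudo-R-squared statistics relate to the model
# log-likelihoods of the fitted multinomial model.
import numpy as np
from scipy import stats

n = result.nobs
ll_full, ll_null = result.llf, result.llnull      # predictor-inclusive vs. baseline

lr_chi2 = 2 * (ll_full - ll_null)                 # likelihood-ratio chi-square
p_value = stats.chi2.sf(lr_chi2, df=6)            # 3 predictors x 2 contrasts = 6 df

cox_snell = 1 - np.exp((2 / n) * (ll_null - ll_full))
nagelkerke = cox_snell / (1 - np.exp((2 / n) * ll_null))
```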

TABLE 4. Parameter estimates for psychological literacy factors differentiating enrolment groups.

Psychological Literacy in Second-Year Undergraduate Students (H2)

Multinomial logistic regression, using the same set of psychological literacy predictors, was conducted on the second-year data to examine whether group membership for psychology, psychology-HRM, and speech pathology majors could again be predicted. All assumptions were met prior to testing, and all psychological literacy factors were entered in one predictor block. The predictor-inclusive model was significantly different from the baseline model, χ²(6) = 30.88, p < 0.001, indicating that the set of psychological literacy factors predicted discipline membership in the second-year data. Non-significant Pearson (p = 0.289) and Deviance (p = 1.000) statistics indicated good model fit. Cox and Snell R² = 0.079 and Nagelkerke R² = 0.101 indicated small-to-moderate effect sizes for the model by Cohen’s (1988) conventions, and were larger than those of the first-year model. Comparisons between psychology majors and psychology-HRM/speech pathology majors were again conducted, with psychology majors as the reference group. Table 4 presents the parameter estimates for each discipline comparison. Speech pathology group membership was predicted by both GGAs and RPs: students were more likely to be enrolled in speech pathology if they had a higher GGAs factor score, or a lower RPs factor score, than psychology students.

Psychological Literacy by Year of Enrolment (H3)

Our third set of analyses was designed to test whether psychological literacy increased with psychology education. Only data from participants identifying as psychology majors were included in this analysis (N = 515).

All assumptions were met prior to using multinomial logistic regression to predict year-group membership from the factors of psychological literacy. The first-year undergraduate group was set as the reference group for the analysis. The predictor-inclusive model significantly predicted year-group membership from the factors of psychological literacy, χ²(6) = 13.70, p = 0.033. Effect sizes (Cox and Snell R² = 0.026; Nagelkerke R² = 0.032) were small by Cohen’s (1988) conventions. Non-significant Pearson (p = 0.370) and Deviance (p = 1.000) statistics indicated good model fit. Comparisons between first-year and second-year psychology undergraduates indicated a significant difference in GGAs: students with higher GGAs scores were significantly more likely to be second-year psychology students. Conversely, comparisons between first- and third-year psychology undergraduates revealed no significant predictors of group membership. The parameter coefficients are reported in Table 5.

TABLE 5. Parameter estimates for psychological literacy factors differentiating year groups for psychology majors.

Discussion

The purpose of the current study was to investigate whether the previously identified factors of psychological literacy (Roberts et al., 2015) were capable of differentiating group membership, in terms of course of enrolment or year of enrolment, as a test of known-groups validity. We predicted that participant scores on the factors of psychological literacy would significantly contribute to the prediction of group membership between psychology and non-psychology undergraduate students. At the time of entering their degree, only the RPs factor predicted group membership between psychology and psychology-HRM students; the remaining psychological literacy factors did not contribute significantly to prediction. Likewise, there was no notable differentiation between psychology and speech pathology students in this first-year sample. These results provided weak, partial support for H1. In predicting group membership within the second-year cohort, more factors of psychological literacy were capable of distinguishing between psychology and speech pathology students: membership was predicted by GGAs and RPs scores. No psychological literacy factors significantly predicted group membership between psychology and psychology-HRM students. These findings provide partial support for H2. Lastly, we investigated whether year-group membership among psychology students could be predicted by the factors of psychological literacy, which were presumed to improve as psychology undergraduate education progressed. Weak support was provided for H3: first- and second-year psychology students were differentiated by GGAs, with higher GGAs scores being more likely for second-year psychology students. Contrary to expectations for H3, no psychological literacy factors differentiated first- and third-year psychology students. Effect sizes for all analyses were small to moderate, providing limited support for the known-groups validity of these measures of psychological literacy.

These findings differ from what was predicted based on prior research. The limited ability of psychological literacy factor scores to predict group membership for the first-year students departs from predictions derived from Schneider’s (1987) Attraction-Selection-Attrition framework. We proposed that students would be attracted to a course based on existing individual-level similarities, which in turn would allow group membership to be predicted from heterogeneity across courses, but this was not supported. By the second year of enrolment, students in the three majors had all been exposed to some psychology education as part of an interprofessional first-year course requirement, but had also been exposed to discipline-specific education. The second-year sample of students varied on more facets of psychological literacy than the first-year students, perhaps because discipline-specific education encourages more-informed perceptions of match or mismatch with their course and future profession. This may have resulted in greater homogeneity within the student cohort of each course due to attrition where mismatches were perceived, promoting better prediction of group membership within the second-year sample. Our findings are similar to those of Morris et al. (2013), with psychology and non-psychology students demonstrating a degree of heterogeneity in factors of psychological literacy, although in our analyses this heterogeneity was not present between psychology and psychology-HRM students.

Consistent with the findings of Morris et al. (2013), we demonstrated support for psychological literacy differentiating group membership between years of enrolment in a psychology degree. Our findings, while limited, demonstrated that GGAs differentiated between psychology students at the time of course entry and psychology students in their second year. These findings must be considered in the context of the whole model, however: there was no differentiating effect between first- and third-year students, which is unexpected if psychological literacy is theorized to improve as a function of course tenure (Morris et al., 2013). Furthermore, GGAs was the only significant predictor of first- and second-year group membership for psychology students, and this factor of psychological literacy has previously been characterized by the authors as comprising skills that are not psychology-specific, but are likely learned by university students as part of their undergraduate progression (Roberts et al., 2015). Our findings therefore provide only weak support for psychological literacy differentiating psychology students at different stages of their course progression.

From a measurement perspective, the measures selected to capture McGovern et al.’s (2010) conceptualization of psychological literacy may not be optimal, and this may be reflected in the limited support for known-groups validity in the current study. For example, Roberts et al. (2015) noted that the factor structure coefficients and model fit values for the three-factor model underlying psychological literacy could benefit from further improvement. Examining the factor structure of the facets of psychological literacy for speech pathology and psychology-HRM double-degree students was not tenable with the current sample due to small subsample sizes (Kline, 2010). Roberts et al. (2015) also identified issues with low standardized factor loadings, indicating the need to further examine the indicators of the latent factors of psychological literacy. Addressing these issues would reduce the prospect of attenuated relationships between indicators and factors, and thereby the prospect of Type II errors when examining the predictive validity of psychological literacy with respect to other outcomes. Additionally, investigation of the construct validity of Roberts et al.’s (2015) three-factor model of psychological literacy with samples from other universities would provide valuable information on the model’s generalizability. While Roberts et al. (2015) and the current study provide a first step in examining the construct and known-groups validity of psychological literacy, respectively, further work is needed.

Our recommendations for future research therefore fall into two broad categories: further evaluation of the way in which psychological literacy is measured, and examination of the three-factor model of Roberts et al. (2015) with other samples. To address the first recommendation, we propose that future research investigating psychological literacy may benefit from trialing smaller subsets of items that tap into the factors underlying the construct. While Roberts et al. (2015) and the current study used existing measures considered to reflect each of the facets of psychological literacy proposed by McGovern et al. (2010), designing and trialing a parsimonious measure that reflects these facets would be advantageous. A reduction in the number of scale items would also help to reduce respondent fatigue during administration.

Addressing the first recommendation may consequently provide evidence relevant to our second recommendation: the need for future research to further examine the validity of Roberts et al.’s (2015) three-factor model of psychological literacy. While the large psychology student samples in Roberts et al. (2015) provided a sufficiently powered analysis of the factor structure of psychological literacy, testing this model with samples from other universities and other disciplines is a valuable future direction. We examined whether the three factors of psychological literacy could predict group membership between courses that are all based on health-focused interaction with other people. Stronger results may be obtained when comparing courses from less related disciplines, including those that do not provide any psychology units in their undergraduate coursework. Courses such as engineering, which Holland’s (1996) interest-major typology suggests would attract and retain students with pronounced differences in personality and interests compared with health sciences students, may demonstrate greater differences in psychological literacy. By addressing these limitations, the construct of psychological literacy may become a valuable means of representing the skills developed during a psychology degree.

Author Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This research was partially funded by a Small Grant from the School of Psychology and Speech Pathology at Curtin University awarded to Associate Professor Lynne Roberts.

Footnotes

  1. We can be confident that psychology students in second year and beyond were not students who had newly switched into a psychology major, as second- and third-year units have prerequisites requiring the completion of psychology units from the previous year.

References

Allen, P. J., and Roberts, L. D. (2010). The ethics of outsourcing online survey research. Int. J. Technoethics 1, 35–48. doi: 10.4018/jte.2010070104

Baker, S. C. (2015). Greetings from the president. Teach. Psychol. 42:93. doi: 10.1177/0098628314567217

Boone, C., van Olffen, W., and Roijakkers, N. (2004). Selection on the road to a career: evidence of personality sorting in educational choice. J. Career Dev. 31, 61–78. doi: 10.1023/B:JOCD.0000036706.17677.ee

Cacioppo, J. T., Petty, R. E., and Kao, C. F. (1984). The efficient assessment of need for cognition. J. Pers. Assess. 48, 306–307. doi: 10.1207/s15327752jpa4803_13

Chester, A., Burton, L. J., Xenos, S., and Elgar, K. (2013). Peer mentoring: supporting successful transition for first year undergraduate psychology students. Aust. J. Psychol. 65, 30–37. doi: 10.1111/ajpy.12006

Cohen, J. (1988). Statistical Power Analysis for the Behavioural Sciences, 2nd Edn. Hillsdale, NJ: Erlbaum.

Cranney, J., Botwood, L., and Morris, S. (2012). National Standards for Psychological Literacy and Global Citizenship: Outcomes of Undergraduate Psychology Education. Available at: http://www.olt.gov.au/resource-national-standards-psychological-literacy

Cranney, J., Morris, S., Krochmalik, A., and Botwood, L. (2011a). “Assessing psychological literacy,” in Assessing Teaching and Learning in Psychology: Current and Future Perspectives, eds D. S. Dunn, C. M. Baker, C. M. Mehrotra, R. E. Landrum, and M. A. McCarthy (Belmont, CA: Cengage), 95–106.

Cranney, J., Morris, S., Martin, F. H., Provost, S. C., Zinkiewicz, L., Reece, J., et al. (2011b). “Psychological literacy and applied psychology in undergraduate education,” in The Psychologically Literate Citizen: Foundations and Global Perspectives, eds J. Cranney and D. S. Dunn (New York, NY: Oxford University Press), 146–166.

Gervasio, A. H., Wendorf, C. A., and Yoder, N. F. (2010). Validating a psychology as a helping profession scale. Teach. Psychol. 37, 107–113. doi: 10.1080/00986281003609199

Grant, A. M., Franklin, J., and Langford, P. (2002). The self-reflection and insight scale: a new measure of private self-consciousness. Soc. Behav. Personal. 30, 821–836. doi: 10.2224/sbp.2002.30.8.821

Hattie, J., and Cooksey, R. W. (1984). Procedures for assessing the validities of tests using the “known-groups” method. Appl. Psychol. Meas. 8, 295–305. doi: 10.1177/014662168400800306

Holland, J. L. (1996). Exploring careers with a typology: what we have learned and some new directions. Am. Psychol. 51, 397–406. doi: 10.1037/0003-066X.51.4.397

Hu, S., and Kuh, G. D. (2003). Diversity experiences and college student learning and personal development. J. Coll. Student Dev. 44, 320–334.

Hughes, S., Lyddy, F., and Lambe, S. (2013). Misconceptions about psychological science: a review. Psychol. Learn. Teach. 12, 20–31. doi: 10.2304/plat.2013.12.1.20

Karantzas, G. (2014). Shaping the Future of Psychology Through Developing and Assessing Graduate Attributes using Collaborative Learning. Sydney, NSW: Office for Learning and Teaching. Available at: http://www.olt.gov.au/project-shaping-future-psychology-through-developing-and-assessing-graduate-attributes-using-collabo

Kline, R. B. (2010). Principles and Practices of Structural Equation Modeling, 3rd Edn. New York, NY: Guilford.

Kurbanoglu, S. S., Akkoyunlu, B., and Umay, A. (2006). Developing the information literacy self-efficacy scale. J. Doc. 62, 730–743. doi: 10.1108/00220410610714949

Mair, C., Taylor, J., and Hulme, J. (2013). An Introductory Guide to Psychological Literacy and Psychologically Literate Citizenship. New York: Higher Education Academy.

McCroskey, J. C., and McCroskey, L. L. (1988). Self-report as an approach to measuring communication competence. Commun. Res. Rep. 5, 108–113. doi: 10.1080/08824098809359810

McGovern, T. V., Corey, L., Cranney, J., Dixon, W., and Holmes, J. D. (2010). “Psychologically literate citizens,” in Undergraduate Education in Psychology: A Blueprint for the Future of the Discipline, ed. D. F. Halpern (Washington, DC: American Psychological Association), 9–27.

Morris, S., Cranney, J., Jeong, J. M., and Mellish, L. (2013). Developing psychological literacy: student perceptions of graduate attributes. Aust. J. Psychol. 65, 54–62. doi: 10.1111/ajpy.12010

Murdoch, D. D., Gregory, A., and Eggleton, J. M. (2014, online first). Why psychology? An investigation of the training in psychological literacy in nursing, medicine, social work, counselling psychology, and clinical psychology. Can. Psychol. 56, 136–146. doi: 10.1037/a0038191

Roberts, L. D., Heritage, B., and Gasson, N. (2015). The measurement of psychological literacy: a first approximation. Front. Psychol. 6:105. doi: 10.3389/fpsyg.2015.00105

Schlenker, B. R. (2008). Integrity and character: implications of principled and expedient ethical ideologies. J. Pers. 76, 1078–1125. doi: 10.1111/j.1467-6494.2007.00488.x

Schneider, B. (1987). The people make the place. Pers. Psychol. 40, 437–453. doi: 10.1111/j.1744-6570.1987.tb00609.x

Sosu, E. M. (2013). The development and psychometrical validation of a critical thinking disposition scale. Think. Skills Creat. 9, 107–119. doi: 10.1016/j.tsc.2012.09.002

Tabachnick, B. G., and Fidell, L. S. (2013). Using Multivariate Statistics, 6th Edn. Upper Saddle River, NJ: Pearson.

Trapp, A., Banister, P., Ellis, J., Latto, R., Miell, D., and Upton, D. (2011). The Future of Undergraduate Psychology in the United Kingdom. New York: Higher Education Academy Psychology Network.

Keywords: psychological literacy, known-groups validity, measurement, undergraduate psychology, graduate attributes

Citation: Heritage B, Roberts LD and Gasson N (2016) Psychological Literacy Weakly Differentiates Students by Discipline and Year of Enrolment. Front. Psychol. 7:162. doi: 10.3389/fpsyg.2016.00162

Received: 20 July 2015; Accepted: 28 January 2016;
Published: 16 February 2016.

Edited by:

Howard Thomas Everson, City University of New York, USA

Reviewed by:

Min Liu, University of Hawaii at Manoa, USA
Shuyan Sun, University of Maryland Baltimore County, USA

Copyright © 2016 Heritage, Roberts and Gasson. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Brody Heritage, b.heritage@murdoch.edu.au
