
EVALUATION article

Front. Public Health, 16 July 2015
Sec. Public Health Education and Promotion

Quantitative Evaluation of the Community Research Fellows Training Program

  • 1 Division of Public Health Sciences, Department of Surgery, Washington University School of Medicine, St. Louis, MO, USA
  • 2 George Warren Brown School of Social Work, Washington University in St. Louis, St. Louis, MO, USA
  • 3 St. Louis Integrated Health Network, St. Louis, MO, USA

Context: The community research fellows training (CRFT) program is a community-based participatory research (CBPR) initiative for the St. Louis area. This 15-week program, based on a Master of Public Health curriculum, was implemented by the Division of Public Health Sciences at Washington University School of Medicine and the Siteman Cancer Center.

Objectives: We measure the knowledge gained by participants and evaluate participant and faculty satisfaction with the CRFT program, both in terms of meeting learning objectives and actively engaging the community in the research process.

Participants: We conducted analyses on 44 community members who participated in the CRFT program and completed the baseline and follow-up knowledge assessments.

Main outcome measures: Knowledge gain was measured by baseline and follow-up assessments given at the first and final sessions. Additionally, pre- and post-tests were given at each of the first 12 sessions. To measure satisfaction, program evaluations were completed by both participants and faculty after each topic. Midway through the program, a mid-term evaluation was administered to assess the program’s community engagement. We analyzed the results from the assessments, pre- and post-tests, and evaluations.

Results: The CRFT participants’ knowledge increased from baseline to follow-up by an average of 16.5 points (p < 0.0001). Post-test scores were higher than pre-test scores for 11 of the 12 sessions. Both participants and faculty enjoyed the training and rated all sessions well.

Conclusion: The CRFT program was successful in increasing community knowledge, participant satisfaction, and faculty satisfaction. This success has enhanced the infrastructure for CBPR as well as led to CBPR pilot projects that address health disparities in the St. Louis Greater Metropolitan Area.

Introduction

The benefits of community-based participatory research (CBPR) have been well established; however, there is limited information on developing the infrastructure and increasing community capacity to partner in CBPR projects. Participating in public health research training can prepare community members for collaborative work with academic researchers and empower them to act as equal partners in the research process (1, 2). In an effort to alleviate mistrust, foster community-academic relationships, and educate community stakeholders in St. Louis, the Division of Public Health Sciences at Washington University School of Medicine (WUSM) and the Siteman Cancer Center began the community research fellows training (CRFT) program. Aligned with CBPR principles, this program has the potential to be mutually beneficial for the participants, training faculty, and the local health research community.

In 2010, a similar training program, Community Alliances for Research Empowering Social Change (CARES), was initiated on Long Island, New York (3, 4). This program was developed based on feedback from community members of the Suffolk County Minority Health Action Coalition at a mini-summit on minority health focused on CBPR (5). Participants newly introduced to this concept felt these approaches could benefit their communities but requested training in research methods to address concerns about their limited knowledge of research and their ability to act as equal partners in a process unfamiliar to them.

The CARES academic-community collaboration developed a training program for community members, based on the standard Master of Public Health curriculum and designed to increase research literacy in culturally appropriate ways (4). The CRFT program built upon the model created by the CARES training program. The St. Louis-based CRFT program expanded the goals of the CARES program, adding topics to the curriculum based on CARES participant evaluations and input from the community advisory board to make it culturally tailored and region specific. After the CARES program evaluation (3), there was discussion that the success of the CARES program might be specific to its suburban setting and small class size (6). The CRFT program more than doubled the program size and was conducted in urban St. Louis, as opposed to the former suburban Long Island, New York location, in order to assess the generalizability of this CBPR training approach. This paper provides a quantitative evaluation of the CRFT program. To this end, we measure the knowledge gained and evaluate participant and faculty satisfaction with the CRFT program to determine whether learning objectives and goals were met through this CBPR approach.

Background and Rationale

Community-based participatory research emerged from research traditions of the 1980s and 1990s that focused on engaging stakeholders affected by the public health concerns at hand (7). Community engagement is a powerful instrument in bringing about positive social and community health change (8). Active community member engagement in the research process improves health outcomes, health promotion and prevention, and institution-community relationships (3, 4, 9–12). Community engagement requires a long-term process that builds trust, values contributions of all stakeholders, and generates a collaborative framework (13). CBPR is an effective vehicle to speed up the elimination of the mortality and morbidity disparities consistently seen among minority, low-income, and other vulnerable populations (14). The success of CBPR depends on the formation of strong community-researcher relationships; in order to engage communities to collaborate with researchers to address identified health concerns, researchers must build trust and rapport with community members by maintaining a consistent presence (4, 12, 15, 16).

Several studies suggest that medical mistrust and negative encounters with health care personnel are closely linked to racial disparities in health (17, 18). Attempts to reduce racial disparities are complicated by medical mistrust, among other barriers that reduce participants’ willingness to actively engage in medical research (19). CBPR has been shown to be effective in ameliorating some of these issues by engaging underserved communities (9, 20) and has emerged as an evidence-based approach to address the complex issues that affect the health of marginalized populations, using innovative and effective community–academic partnerships to address health disparities (21–23). Over the years, CBPR has become valued as an effective research strategy for improving community health and reducing health disparities (24–32). Community engagement can contribute to a more nuanced understanding of health problems, increasing the relevance of the problems examined (33–35) and improving the quality and outcomes of health promotion activities, disease prevention initiatives, and research studies (10, 31, 36).

Methods

Community Research Fellows Training Program

The CRFT program was a pilot project of the Program to Eliminate Cancer Disparities at the Siteman Cancer Center (SCC), Barnes Jewish Hospital, and WUSM. The goal of CRFT was to train community members to serve as the bidirectional conduit between WUSM/SCC and communities in St. Louis. A community advisory board (CAB) was formed to help guide all aspects of the CRFT program, including recruitment and acceptance decisions, program implementation, selection of pilot projects, and evaluation of the program. The objectives of this training were to

1. Enhance community knowledge and understanding of research

2. Create a pool of trained community members who can serve on Institutional Review Boards and community research advisory boards

3. Develop CBPR pilot projects that address health disparities in the St. Louis Greater Metropolitan Area

4. Enhance the infrastructure for CBPR

5. Provide community members with skills to engage as equal partners in every phase of the research process.

The CRFT program sought to involve community members in research methods training designed to help them become good consumers of research, understand the utility of research in improving health outcomes in their communities, increase their understanding of how to work with academic researchers, and develop skills that increase the capacity of organizations and communities to engage in research. To this end, a 15-week training course, adapted from the CARES program, was designed and implemented (4). It is important to note that a community group was convened to consider the faculty rationale and strategy for implementing the program. Both the faculty and the community members were focused on ways to facilitate academic researchers and community members working as co-equals. The advisory group reviewed the materials from the CARES program and believed that a similar program could benefit the community. The course comprised 25 topics, divided across 12 didactic training sessions and 3 experiential workshops, held weekly (Thursday evenings, 6–9 p.m.) April–August 2013. Each session was a condensed, lay-friendly, 3-hour version of Master of Public Health (MPH) curriculum topics, including health literacy, ethics, cultural competency, epidemiology, quantitative and qualitative research methods, chronic disease prevention, clinical trials, study design, program evaluation, and grant writing. Each session was led by one or two faculty members recruited to teach in the program by the principal investigator; Table 1 lists the session topics and learning objectives from the course syllabus given to participants at orientation.


Table 1. Session topics and learning objectives.

Twelve CAB members, 17 faculty members, and 10 research assistants were involved in the conception, planning, and implementation of the CRFT program (37). While participants were not compensated, they received free training and resources. To further empower the participants and engage them in the academic process, throughout the CRFT program we referred to our community members as “fellows,” similar to those on fellowship at an academic institution (from this point forward, we do the same here). Fellows were recruited through The St. Louis American (a local newspaper; 32%), e-mail (20%), radio (14%), community websites (2%), community newsletters (6%), flyers (8%), personal referrals (6%), and word of mouth (12%) during January–March 2013. Of the 62 applicants, 50 (81%) were accepted into the program, double the program target of 25. The inclusion criteria for the CRFT program required participants to be at least 18 years old and live or work in the St. Louis greater metropolitan area. Applications were reviewed by the CRFT CAB such that the participants selected were purposefully diverse in composition in terms of work, education, and life experiences.

The training uses multiple teaching approaches (large didactic interactive lectures, small group activities, group exercises, and small and large group discussions) to explain topics in ways that reach a variety of learning styles (37). Consistent with CBPR principles and the needs of adult learners, participants provided feedback on the most feasible day and time for the course. Pedagogically, each session is formatted to support active class participation that encourages adults to draw upon their experiences and group activities that require problem solving and direct application of the lecture material, supplemented by homework assignments that foster independence and self-direction through application of the material to their communities (38). Consistent with CBPR approaches, this training program recognizes the contribution participants can make to the learning process, and the diversity of the cohort fosters example-based learning using culturally competent, region-specific scenarios. Further information about the program development, recruitment strategies, training structure, program implementation, pilot projects, and best practices can be found elsewhere (37).

The Human Research Protection Office at WUSM classified this study as program evaluation and non-human subjects research. Analysis was completed using SAS/STAT 9.4 (SAS Institute, Cary, NC, USA). Of the 50 fellows enrolled in the CRFT program, 45 (90%) completed the 15-week training program and 44 (88%) completed both the baseline and follow-up assessments. The majority of the fellows who completed both the baseline and follow-up assessments were female (84%) and African-American/Black (86%). The fellows ranged in age from 28 to 72, with a mean age of 51. The cohort of fellows was comprised of community members (30%), those affiliated with community-based organizations (23%), health care workers (21%), government workers (11%), those affiliated with faith-based organizations (9%), and those in academia (7%). Educational attainment ranged from junior high school to graduate degrees. Over half (57%) of fellows self-reported previously taking a research course (Table 2). The six fellows who did not complete the training and/or final assessment were similar to the 44 who did; all were African-American/Black females, ranging in educational attainment from a high school diploma to a graduate degree. Their affiliations varied across faith-based organizations, community-based organizations, academia, and community members with no organizational affiliation. Their mean age was slightly lower, 48 (range 36–57). For consistency, from this point forward, the sample is the 44 fellows who completed both the baseline and follow-up assessments.


Table 2. Demographic characteristics of CRFT participants in evaluation sample (N = 44).

Assessment of Participant Knowledge

The fellows’ baseline and follow-up assessments were linked using their CRFT ID numbers. Each assessment consisted of 29 identical open-ended questions created to measure fellows’ knowledge of the CRFT training topics (Table 1). The baseline questionnaire also included the Patient Trust in Medical Researchers scale (39), an adaptation of the Computer Engaged Research Index, multiple levels of empowerment indices (40, 41), and two health literacy assessments, the Newest Vital Sign and the Rapid Estimate of Adult Literacy in Medicine (37, 42–45). This analysis focuses only on the initial 29 open-ended assessment questions, administered both at baseline and after the completion of CRFT, in order to assess the participants’ knowledge gain. For the full baseline and final questions with sample answers, see Coats et al. (37). Each question was scored 0 (incorrect), 1 (partially correct), or 2 (essentially correct); a baseline and follow-up sum score was computed for each individual by adding the question scores, for a possible total of 0–58. Baseline and follow-up assessments were graded by two graders using a rubric to ensure consistency (37). To assess the difference between the baseline and follow-up assessments, we performed a paired t-test after finding no evidence against normality using the Shapiro–Wilk test. We then looked at fellows’ responses to see whether each individual question’s score increased, decreased, or remained the same from baseline to follow-up.
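
The paper does not include analysis code, but the authors report using SAS/STAT 9.4, and the analysis described maps onto standard SAS procedures. The following is a minimal sketch, not the study code: the dataset, variable names (baseline, followup), and values are invented for illustration.

/* Illustrative data: one row per fellow with baseline and follow-up
   sum scores (possible range 0-58); values are invented. */
data scores;
  input id baseline followup;
  diff = followup - baseline;  /* per-fellow knowledge change */
  datalines;
1 18 35
2 25 41
3 20 33
4 15 38
5 30 42
;
run;

/* Shapiro-Wilk test of the paired differences (NORMAL option) */
proc univariate data=scores normal;
  var diff;
run;

/* Paired t-test comparing follow-up with baseline sum scores */
proc ttest data=scores;
  paired followup*baseline;
run;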

For each question, we examined the percent scoring “essentially correct” on the baseline assessment as compared to the percent scoring “essentially correct” on the follow-up assessment. We analyzed the data in a two-way contingency table, assessing the change from baseline to follow-up. To assess whether the contingency table is symmetric, we used McNemar’s test, with the null hypothesis that questions are answered correctly (or incorrectly) at the same rate at baseline and follow-up.
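
As a sketch of this question-level comparison, McNemar’s test is available through the AGREE option of PROC FREQ; as above, the dataset and variable names are illustrative, not the study data.

/* Illustrative question-level data: one row per fellow per question,
   with 1 = "essentially correct" and 0 = otherwise; values are invented. */
data qcorrect;
  input id question base_correct fup_correct;
  datalines;
1 1 0 1
2 1 1 1
3 1 0 0
4 1 0 1
5 1 1 1
;
run;

/* For a 2x2 table, the AGREE option produces McNemar's test of
   symmetry in baseline-to-follow-up changes (run per question) */
proc freq data=qcorrect;
  where question = 1;
  tables base_correct*fup_correct / agree;
run;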

Fellows took pre- and post-tests at each of the 12 didactic training sessions. These pre- and post-tests were developed by the CRFT team and approved by the teaching faculty member each week. For the first eight sessions, the pre- and post-tests consisted of five multiple-choice questions; for the final four sessions, they consisted of 10 multiple-choice questions. Questions assessed the learning objectives the faculty member intended to cover during the weekly session; the pre- and post-tests assess the same content but use different items. Pre- and post-tests were scored by a team of CRFT research assistants using a SAS macro developed for this purpose (46). Due to violation of normality assumptions, we used a non-parametric test, the Wilcoxon signed-rank test, to evaluate the score differences between pre-test and post-test for each session.
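
The Wilcoxon signed-rank test on the paired differences is reported by PROC UNIVARIATE by default; a minimal sketch under the same caveats (illustrative names and invented values) follows.

/* Illustrative pre-/post-test percent-correct scores for one session */
data session_scores;
  input id pre post;
  diff = post - pre;
  datalines;
1 40 80
2 60 80
3 40 60
4 20 70
5 60 90
;
run;

/* The signed-rank statistic (S) and its p-value appear in the
   "Tests for Location: Mu0=0" table of the default output */
proc univariate data=session_scores;
  var diff;
run;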

Evaluation of Participant Satisfaction

At the end of each of the 12 didactic training sessions, fellows completed evaluations. Evaluations consisted of six statements with Likert-scale response options and four free response questions; eight of the sessions had two parts and, therefore, two evaluations, making a total of 20 evaluations. We focus on the quantitative Likert-scale response questions for this analysis. Fellows were asked to rate the following statements from 1 to 5, with 1 being “Strongly Disagree” and 5 being “Strongly Agree” for evaluation questions 1–5, and 1 being “Poor” and 5 being “Excellent” for the sixth evaluation question:

1. The exercise learning objectives were met.

2. The information learned in this session was helpful.

3. I understood the concepts presented in this session.

4. The facilitator(s) were well organized.

5. The facilitator(s) seemed knowledgeable about the subject.

6. Overall, how would you rate this session?

We computed the mean for each statement, as well as the range of within-session means.
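
These summaries are simple grouped means; a sketch in SAS, assuming an illustrative long-format dataset with one row per fellow, session, and statement:

/* Illustrative Likert responses in long format; values are invented */
data evals;
  input id session statement rating;
  datalines;
1 1 1 5
1 1 2 4
2 1 1 4
2 1 2 5
1 2 1 4
2 2 1 5
;
run;

/* Mean rating for each statement, pooled across sessions */
proc means data=evals mean;
  class statement;
  var rating;
run;

/* Within-session means for each statement, whose range is reported */
proc means data=evals mean;
  class statement session;
  var rating;
run;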

The CRFT CAB felt it was important to also assess satisfaction with teaching in the program. A main goal of the CRFT program is to create a bridge between the St. Louis community and researchers; to foster this connection, faculty satisfaction is imperative. Fifteen faculty members taught the 12 didactic training sessions and completed evaluations. The CRFT faculty evaluation consisted of seven Likert response items:

1. Community Research Fellows seemed well prepared for today’s training session

2. Community Research Fellows frequently took notes in the training session

3. Community Research Fellows contributed to discussions in the training session

4. Community Research Fellows provided comments that were insightful and constructive

5. Community Research Fellows asked insightful and constructive questions

6. Community Research Fellows listened attentively and seemed interested when I presented materials and information

7. Overall, how would you rate your experience teaching this session?

Items 1–6 have Likert-scale responses 1 to 5, with 1 being “Strongly Disagree” and 5 being “Strongly Agree” and item 7 has Likert-scale responses with 1 being “Poor” and 5 being “Excellent.” In addition, there were three yes/no items:

1. Would you be willing to teach again for the CRFT program in the future?

2. Did you learn anything from the Fellows during the training session?

3. Are you willing to collaborate with Fellows on a CBPR pilot project?

This analysis examines only the 10 close-ended questions on the faculty evaluation, which also included three to four open-ended response questions, depending on answers to the close-ended questions. The mean, SD, minimum, and maximum were computed for each of the seven Likert response items. The three dichotomous response questions were summarized by the percent reporting yes to each question.
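
A sketch of these summaries, assuming an illustrative wide-format dataset with one row per faculty evaluation (item names and values invented):

/* Illustrative faculty evaluations: seven Likert items (q1-q7)
   and three yes/no items */
data faculty;
  input q1-q7 teach_again $ learned $ collaborate $;
  datalines;
5 4 5 5 5 4 5 Yes Yes Yes
4 4 5 4 5 5 5 Yes No Yes
5 5 5 5 5 5 4 Yes Yes Yes
;
run;

/* Mean, SD, minimum, and maximum for each Likert item */
proc means data=faculty mean std min max;
  var q1-q7;
run;

/* Percent responding yes to each dichotomous item */
proc freq data=faculty;
  tables teach_again learned collaborate;
run;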

Results

Assessment of Participant Knowledge

Overall, there is evidence of knowledge gained, with the average score increasing from 20.6 on the baseline assessment to 37.1 on the final assessment (mean change of 16.5; range −7 to 41). Relative to the 58 possible points, this corresponds to an absolute increase of 28.4% (range −12.1 to 70.7%). Only three fellows (out of 44; 6.8%) decreased their scores from baseline to follow-up. The paired t-test for this knowledge change was highly significant (p < 0.001). Of the 29 questions, on average fellows answered about 7 (23.5%) questions essentially correctly at baseline (mean, 6.8; SD, 7.0; median, 4.0) and 16 (54.8%) questions essentially correctly at follow-up (mean, 15.9; SD, 6.7; median, 15.0). We then looked at fellows’ responses to see whether each individual question’s score increased, decreased, or remained the same from baseline to follow-up (Table 3). At least 50% of the fellows increased their score on 12 (41.4%) of the 29 questions. The three questions with the greatest percent increase in score were defining bio-repository or biobank [36 (81.8%) increased their score from baseline to follow-up], defining the Belmont Report [28 (63.6%) increased], and explaining the difference between primary and secondary data [26 (59.1%) increased]. The three smallest differences were for naming two of the most common data sets used for health policy research [5 (11.4%) increased], defining the term ethnography [11 (25.0%) increased], and describing the health promotion planning model best suited to prevent and reduce substance abuse in African-American communities [12 (27.3%) increased].


Table 3. Baseline to follow-up question score changes (N = 44).

For each of the 29 questions, we examined whether fellows scored “essentially correct” on the baseline assessment as compared to the follow-up assessment. Using McNemar’s test, we found statistically significant results for 24 of the 29 questions, suggesting evidence of improvement from baseline to follow-up (Table 3). This suggests that there was not only overall score improvement but also question-level improvement for the majority (82.8%) of the questions.

We then examined the sample stratified by whether or not the fellow had previously taken a research course. Twenty-five of the 44 fellows (56.8%) self-reported previously taking a research course. This group had a mean difference between baseline and follow-up assessment scores of 13.4, a 23.1% absolute increase. The remaining 19 fellows (43.2%) had a mean difference between baseline and final assessment scores of 20.6, a 35.5% absolute increase. The score differences in both groups are normally distributed, with Shapiro–Wilk p-values of 0.778 and 0.783, respectively. A two-sample t-test showed that the difference between groups is statistically significant (p = 0.038): although both groups had a significant overall increase in knowledge from the baseline to the follow-up assessment, fellows who had not previously taken a research course had a significantly larger increase. This suggests that the course is well designed both for those who have some previous knowledge about research and for those who lack it, with the knowledge gained being more substantial for those who had never taken a research course.
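
This stratified comparison is a standard two-sample t-test with a per-group normality check; a sketch under the same caveats (illustrative names and invented values):

/* Illustrative score gains (follow-up minus baseline), grouped by
   whether the fellow had previously taken a research course */
data gains;
  input prior_course $ gain;
  datalines;
Yes 12
Yes 15
Yes 10
No 22
No 18
No 24
;
run;

/* Shapiro-Wilk normality check within each group */
proc univariate data=gains normal;
  class prior_course;
  var gain;
run;

/* Two-sample t-test of mean gain by prior research course */
proc ttest data=gains;
  class prior_course;
  var gain;
run;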

Comparisons of the mean percent of correct scores on pre- and post-tests at each session showed that in 11 of the 12 sessions, post-test scores were higher than pre-test scores; one session, Public Health Research, had an average post-test score lower than the pre-test score. Based on the Wilcoxon signed-rank tests, post-test scores for sessions 6 (Epidemiology), 8 (CBPR), 9 (Research Ethics), 10 (Qualitative Methods), and 11 (Clinical Trials) were significantly higher than pre-test scores (p = 0.001, <0.001, <0.001, <0.001, and 0.028, respectively); the post-test score for session 3, Public Health Research, was significantly lower than the pre-test score (p = 0.003) (Table 4).


Table 4. CRFT pre-/post-tests scores and score difference (post-test minus pre-test) by session.

Evaluation of Participant Satisfaction

Overall, session evaluations were high, with all session mean evaluation scores between 4 and 5 on a five-point Likert scale, with 4 indicating “Good” or “Agree” and 5 indicating “Excellent” or “Strongly Agree,” depending on the question (Figure 1). The average response to the first evaluation statement, assessing whether the learning objectives were met, is 4.4, with individual session averages ranging from 4.2 to 4.6. The session with the highest mean on this first statement was the second session, part two, Data. The average response to the second statement, whether information learned during the session was helpful, is 4.5, with session means ranging from 4.1 to 4.7. The session with the highest mean on the second statement is session three, Public Health Research. The average response to the third statement, whether the fellow understood the concepts presented in the session, is 4.3, with session averages ranging from 3.7 to 4.6. The session with the highest average response to this item is session 8, CBPR. The average response to the fourth statement, whether the facilitator was well organized, is 4.6, with session averages ranging from 4.1 to 4.8. The session with the highest average response to this item is session 7, Quantitative Research. The average response to the fifth statement, whether the facilitator seemed knowledgeable, is 4.7, with session averages ranging from 4.3 to 4.9. The session with the highest average response to this item is the second half of session four, Health Literacy. The sixth item, the overall session evaluation, has a mean of 4.5 with a range from 3.7 to 4.8. The session with the lowest overall evaluation is the second session, part two, Data. This is intriguing, as this session also ranked the highest on the first item, evaluating whether the exercise learning objectives were met, suggesting that fellows rated sessions overall based on more than whether the learning objectives were met. The session with the highest overall mean evaluation is the fourth session, part two, Health Literacy. As noted above, this session also ranked as having the most knowledgeable session facilitator. This information is useful for evaluating the importance of faculty involvement in this program. Averaging all six items for each session, the mean within-session evaluation is 4.5, with a range from 4.1 to 4.7. This indicates that, on average, responses to all items for each session were between “Agree” and “Strongly Agree” or “Good” and “Excellent.”


Figure 1. Fellows’ evaluations. ***Overall mean evaluation score, averaging all six Likert-scale evaluation responses from each fellow.

Faculty evaluations are an integral part of the CRFT program’s evaluation. Notably, 15 (100%) of the faculty members responded that they would be willing to teach again for the CRFT program. Thirteen (87%) responded that they learned something from the fellows during the training session, and 15 (100%) responded that they are willing to collaborate with fellows on a CBPR pilot project. The mean overall faculty rating of the teaching experience is 4.8 (Table 5), indicating that the majority of faculty members rated the experience as “Excellent” on a Likert scale from 1 (“Poor”) to 5 (“Excellent”). This further suggests the success of this CBPR approach.


Table 5. CRFT faculty evaluations. Likert-scale responses ranged from 1 to 5, with 1 being “Strongly Disagree” and 5 being “Strongly Agree,” or 1 being “Poor” and 5 being “Excellent.”

Discussion

We assessed participant knowledge gain and evaluated participant and faculty satisfaction. The CRFT program evaluation results build upon and enhance the positive results demonstrated in the CARES program. The completion rate in the CRFT program was much higher than that in the CARES program; 90% of the fellows completed the 15-week training, as opposed to 68% in the CARES program, and 88% completed the baseline and follow-up assessments, as compared to 58% in the CARES program (3). Additionally, attendance rates were higher in the CRFT program, and the evaluation sample increased from 11 fellows completing both the baseline and follow-up assessments in the CARES program to 44 in CRFT. Our findings suggest that the participants did gain knowledge through this program, as evidenced by the significant increase in score from baseline to follow-up assessment and the average increase on all but one of the pre-/post-tests. We also found that, while both participants who had previously taken a research course and those who had not showed significant increases in assessment scores from baseline to follow-up, those who had never taken a research course showed larger increases on average. This suggests that the CRFT program is well designed both for those who have some previous knowledge about research and for those who lack it, and, as would logically follow, those with no previous research knowledge learn substantially more. We believe it was important for the CRFT program to have a group of fellows diverse in previous knowledge, as some of the learning occurs in small groups with other classmates. Though we have a modest sample size (n = 44), we chose tests appropriate to the data to evaluate all aspects of the CRFT program.

This quantitative evaluation is just one component of the comprehensive (formative and summative) mixed-methods (quantitative/qualitative) evaluation of the CRFT program. The use of multiple items to assess participant knowledge and satisfaction allows for triangulation of results. For example, session 3, Public Health Research, was the only session with a significant decrease in score on the post-test compared to the pre-test. However, in the session evaluation, participants rated the information learned in that session as helpful, more so than in any other session (highest mean). This suggests a need to refine items on the assessment instruments, specifically the pre- and post-tests. Session pre- and post-tests consisted solely of multiple-choice items to help reduce participant burden and administration time. However, we needed to increase the number of questions (from 5 to 10) to increase the variability of question types (easy, medium, hard) and thus the variability in scores across participants; a limitation is that some participants felt as though they were taking a test. Inconsistency between pre-test and post-test scores could be due to differences in the items used on the two assessments. In future work, we will examine the use of the same items (in a different order) on the pre-/post-tests.

The participant and faculty evaluations were similarly positive, with all averages above 4 on a five-point Likert scale, between “Good” and “Excellent” or “Agree” and “Strongly Agree,” depending on the question. Overall satisfaction, from both the participant and the faculty member’s perspective, is crucial in evaluating a CBPR program. Our assessment of the knowledge gained by fellows through the CRFT program confirms that the first, second, and fifth CRFT program goals have been met. Our evaluation of participant and faculty satisfaction suggests the fourth program goal has been met. In terms of the third program goal, the CRFT program has funded two pilot projects, led by participants in the program, and two more projects have been funded by other sources; we believe that CRFT has been instrumental in enhancing the CBPR infrastructure in St. Louis, opening the doors to future collaboration. CRFT pilot projects extend fellows’ learning beyond the classroom to implementing CBPR projects in real community-based settings, demonstrating the ability of the course to create community–academic partnerships that address community-driven research questions (37).

Conclusion

The CRFT program was successful in increasing participant knowledge of public health topics, such as epidemiology, CBPR, research ethics, and clinical trials, as evidenced by the increased scores from the baseline to the final assessment and the positive differences between pre- and post-tests administered at each session. At the initiation of this program, we set out to achieve five goals.

The first goal was to enhance community knowledge and understanding of research. From a quantitative perspective, there is evidence of knowledge gained, with an improvement from 20.6 on the baseline assessment to 37.1 on the final assessment (mean change of 16.5) and a mean increase in post-test scores compared to pre-test scores. This demonstrates that the course content (Table 1) was at an appropriate level for the incoming fellows, despite the diversity in their educational backgrounds. Additionally, the evaluations completed after each session overwhelmingly suggest that the fellows felt the learning objectives were met during each session, again demonstrating that the community’s knowledge and understanding of research was enhanced by this program.

The second goal was to create a pool of trained community members who can serve on Institutional Review Boards and community advisory boards. While the quantitative data here suggest that the fellows, on average, would have the knowledge base to serve on these boards, the actual outcomes more clearly support this objective. CRFT alumni have formed their own Patient Research Advisory Board and are currently serving on at least five advisory boards and community partnerships across the WUSM medical center; five CRFT alumni currently serve on the CRFT community advisory board.

The third goal was to develop CBPR pilot projects that address health disparities in the St. Louis Greater Metropolitan Area. The knowledge gained, as demonstrated here, was a springboard for nine groups, comprising a total of 30 fellows, to submit brief proposals for pilot projects. Four of these were funded and are currently underway in the St. Louis Greater Metropolitan Area (37).

The fourth goal was to enhance the infrastructure for CBPR. The gathering of community members and faculty for bidirectional learning was successful in developing CBPR infrastructure, and the positive participant and faculty evaluations suggest both groups enjoyed this approach to partnership development. Additionally, the four pilot projects demonstrate that the foundation laid by this course created a conducive environment for community members to become involved in the research process.

Finally, our fifth goal was to provide community members with skills to engage as equal partners in every phase of the research process. Our course covers the spectrum of the research process, from research methods, to quantitative and qualitative analysis, to research ethics, synthesis, and evaluation. We demonstrate here, through the increase in scores, that knowledge was gained in these areas, giving participants the information they need, while the pilot projects provide an opportunity for this goal to be realized. Overall, the CRFT program was successful, as evidenced by the knowledge gained, the positive participant and faculty evaluations, and the movement toward initiating fellow-inspired, community-centric pilot projects.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The CRFT program was sponsored by Program to Eliminate Cancer Disparities (National Institutes of Health, National Cancer Institute grant U54CA153460) at the Siteman Cancer Center, Barnes Jewish Hospital, and Washington University School of Medicine. The authors thank the CRFT fellows for their participation and contribution, and the faculty and staff that volunteered their time and talents to the design, instruction, and implementation of the CRFT program.

References

1. Robson C. Real World Research: A Resource for Social Scientists and Practitioner-Researchers. Oxford: Wiley-Blackwell (2002).

2. Tandon SD, Phillips K, Bordeaux BC, Bone L, Brown PB, Cagney KA, et al. A vision for progress in community health partnerships. Prog Community Health Partnersh (2007) 1(1):11–30. doi:10.1353/cpr.0.0007

3. Goodman MS, Si X, Stafford JD, Obasohan A, Mchunguzi C. Quantitative assessment of participant knowledge and evaluation of participant satisfaction in the CARES training program. Prog Community Health Partnersh (2012) 6(3):361–8. doi:10.1353/cpr.2012.0051

4. Goodman MS, Dias JJ, Stafford JD. Increasing research literacy in minority communities: CARES fellows training program. J Empir Res Hum Res Ethics (2010) 5(4):33–41. doi:10.1525/jer.2010.5.4.33

5. Goodman M, Stafford J; Suffolk County Minority Health Action Coalition. Mini-summit health proceedings. Center for Public Health and Health Policy Research Mini-Summits on Minority Health, 2008. Community Engaged Scholarship for Health (2011). Available from: www.ces4health.info

6. Kwon S, Rideout C, Tseng W, Islam N, Cook WK, Ro M, et al. Developing the community empowered research training program: building research capacity for community-initiated and community-driven research. Prog Community Health Partnersh (2012) 6(1):43. doi:10.1353/cpr.2012.0010

7. Freudenberg N, Tsui E. Evidence, power, and policy change in community-based participatory research. Am J Public Health (2014) 104(1):11–4. doi:10.2105/AJPH.2013.301471

8. Fawcett SB, Paine-Andrews A, Francisco VT, Schultz JA, Richter KP, Lewis RK, et al. Using empowerment theory in collaborative partnerships for community health and development. Am J Community Psychol (1995) 23(5):677–97. doi:10.1007/BF02506987

9. Viswanathan M, Ammerman A, Eng E, Garlehner G, Lohr KN, Griffith D, et al. Community-Based Participatory Research: Assessing the Evidence. Summary, Evidence Report/Technology Assessment: 99. Rockville, MD: AHRQ Agency for Healthcare Research and Quality (2004).

10. Minkler M, Wallerstein N, editors. Community-Based Participatory Research for Health: From Process to Outcomes. San Francisco, CA: John Wiley & Sons (2011).

11. Israel BA, Eng E, Schulz AJ, Parker EA, editors. Introduction to methods in community-based participatory research for health. Methods in Community-Based Participatory Research for Health. San Francisco, CA: Jossey-Bass (2005). p. 3–26.

12. D’Alonzo KT. Getting started in CBPR: lessons in building community partnerships for new researchers. Nurs Inq (2010) 17(4):282–8. doi:10.1111/j.1440-1800.2010.00510.x

13. Butterfoss FD, Francisco VT. Evaluating community partnerships and coalitions with practitioners in mind. Health Promot Pract (2004) 5(2):108–14. doi:10.1177/1524839903260844

14. US Department of Health and Human Services. Healthy People 2020 (2011). Available from: http://www.healthypeople.gov/2020/about/disparitiesAbout.aspx

15. Israel BA, Krieger J, Vlahov D, Ciske S, Foley M, Fortin P, et al. Challenges and facilitating factors in sustaining community-based participatory research partnerships: lessons learned from the Detroit, New York City and Seattle Urban Research Centers. J Urban Health (2006) 83(6):1022–40. doi:10.1007/s11524-006-9110-1

16. Baker EL, White LE, Lichtveld MY. Reducing health disparities through community-based research. Public Health Rep (2001) 116(6):517. doi:10.1016/S0033-3549(04)50083-3

17. Boulware LE, Cooper LA, Ratner LE, LaVeist TA, Powe NR. Race and trust in the health care system. Public Health Rep (2003) 118(4):358. doi:10.1016/S0033-3549(04)50262-5

18. Lichtenberg PA, Brown DR, Jackson JS, Washington O. Normative health research experiences among African American elders. J Aging Health (2004) 16(5 Suppl):78S–92S. doi:10.1177/0898264304268150

19. Braun KL, Nguyen TT, Tanjasiri SP, Campbell J, Heiney SP, Brandt HM, et al. Operationalization of community-based participatory research principles: assessment of the National Cancer Institute’s Community Network Programs. Am J Public Health (2012) 102(6):1195–203. doi:10.2105/AJPH.2011.300304

20. Pinto RM. Community perspectives on factors that influence collaboration in public health research. Health Educ Behav (2009) 36(5):930–47. doi:10.1177/1090198108328328

21. Jagosh J, Macaulay AC, Pluye P, Salsberg J, Bush PL, Henderson J, et al. Uncovering the benefits of participatory research: implications of a realist review for health research and practice. Milbank Q (2012) 90(2):311–46. doi:10.1111/j.1468-0009.2012.00665.x

22. Israel BA, Coombe CM, Cheezum RR, Schulz AJ, McGranaghan RJ, Lichtenstein R, et al. Community-based participatory research: a capacity-building approach for policy advocacy aimed at eliminating health disparities. Am J Public Health (2010) 100(11):2094–102. doi:10.2105/AJPH.2009.170506

23. Jagosh J, Pluye P, Macaulay AC, Salsberg J, Henderson J, Sirett E, et al. Assessing the outcomes of participatory research: protocol for identifying, selecting, appraising and synthesizing the literature for realist review. Implement Sci (2011) 6(1):24. doi:10.1186/1748-5908-6-24

24. Salimi Y, Shahandeh K, Malekafzali H, Loori N, Kheiltash A, Jamshidi E, et al. Is community-based participatory research (CBPR) useful? A systematic review on papers in a decade. Int J Prev Med (2012) 3(6):386.

25. Wallerstein N, Duran B. Community-based participatory research contributions to intervention research: the intersection of science and practice to improve health equity. Am J Public Health (2010) 100(S1):S40–6. doi:10.2105/AJPH.2009.184036

26. Nguyen G, Hsu L, Kue KN, Nguyen T, Yuen EJ. Partnering to collect health services and public health data in hard-to-reach communities: a community-based participatory research approach for collecting community health data. Prog Community Health Partnersh (2010) 4(2):115–9. doi:10.1353/cpr.0.0120

27. Trinh-Shevrin C, Islam N, Tandon SD, Abesamis N, Hoe-Asjoe H, Rey M. Using community-based participatory research as a guiding framework for health disparities research centers. Prog Community Health Partnersh (2007) 1(2):195. doi:10.1353/cpr.2007.0007

28. Wallerstein NB, Duran B. Using community-based participatory research to address health disparities. Health Promot Pract (2006) 7(3):312–23. doi:10.1177/1524839906289376

29. Shalowitz MU, Isacco A, Barquin N, Clark-Kauffman E, Delger P, Nelson D, et al. Community-based participatory research: a review of the literature with strategies for community engagement. J Dev Behav Pediatr (2009) 30(4):350–61. doi:10.1097/DBP.0b013e3181b0ef14

30. Bordeaux BC, Wiley C, Tandon SD, Horowitz CR, Brown PB, Bass EB. Guidelines for writing manuscripts about community-based participatory research for peer-reviewed journals. Prog Community Health Partnersh (2007) 1(3):281. doi:10.1353/cpr.2007.0018

31. Balazs CL, Morello-Frosch R. The three Rs: how community-based participatory research strengthens the rigor, relevance, and reach of science. Environ Justice (2013) 6(1):9–16. doi:10.1089/env.2012.0017

32. Eder M, Tobin JN, Proser M, Shin P. Special issue introduction: building a stronger science of community-engaged research. Prog Community Health Partnersh (2012) 6(3):227–30. doi:10.1353/cpr.2012.0040

33. Cargo M, Mercer SL. The value and challenges of participatory research: strengthening its practice. Annu Rev Public Health (2008) 29:325–50. doi:10.1146/annurev.publhealth.29.091307.083824

34. Minkler M. Community-based research partnerships: challenges and opportunities. J Urban Health (2005) 82:ii3–12. doi:10.1093/jurban/jti034

35. Davis SW, Cassel K, Moseley MA, Mesia R, De Herrera PA, Kornfeld J, et al. The cancer information service: using CBPR in building community capacity. J Cancer Educ (2011) 26(1):51–7. doi:10.1007/s13187-010-0159-x

36. Minkler M. Ethical challenges for the “outside” researcher in community-based participatory research. Health Educ Behav (2004) 31(6):684–97. doi:10.1177/1090198104269566

37. Coats JV, Stafford JD, Thompson VS, Javois BJ, Goodman MS. Increasing research literacy: the community research fellows training program. J Empir Res Hum Res Ethics (2015) 10(1):3–12. doi:10.1177/1556264614561959

38. Merriam SB. Andragogy and self-directed learning: pillars of adult learning theory. New Dir Adult Cont Educ (2001) 2001(89):3–14. doi:10.1002/ace.3

39. Mainous AG, Smith DW, Geesey ME, Tilley BC. Development of a measure to assess patient trust in medical researchers. Ann Fam Med (2006) 4(3):247–52. doi:10.1370/afm.541

40. Minkler M, Blackwell AG, Thompson M, Tamir H. Community-based participatory research: implications for public health funding. Am J Public Health (2003) 93(8):1210–3. doi:10.2105/AJPH.93.8.1210

41. Khodyakov D, Stockdale S, Jones A, Mango J, Jones F, Lizaola E. On measuring community participation in research. Health Educ Behav (2013) 40(3):346–54. doi:10.1177/1090198112459050

42. Davis TC, Crouch MA, Long SW, Jackson RH, Bates P, George RB, et al. Rapid assessment of literacy levels of adult primary care patients. Fam Med (1991) 23(6):433–5.

43. Davis TC, Michielutte R, Askov EN, Williams MV, Weiss BD. Practical assessment of adult literacy in health care. Health Educ Behav (1998) 25(5):613–24. doi:10.1177/109019819802500508

44. Murphy PW, Davis TC, Long SW, Jackson RH, Decker BC. Rapid estimate of adult literacy in medicine (REALM): a quick reading test for patients. J Read (1993) 37:124–30.

45. Weiss BD, Mays MZ, Martz W, Castro KM, DeWalt DA, Pignone MP, et al. Quick assessment of literacy in primary care: the newest vital sign. Ann Fam Med (2005) 3(6):514–22. doi:10.1370/afm.405

46. Lyons SA, Goodman MS. Community Research Fellows Training Program Evaluation: Using SAS® to Analyze Pre-and Post-Test Data. Burlington, VT: New England SAS Users Group (2013). p. 1–6.

Keywords: community-based participatory research, health disparities, community education, public health, health education

Citation: D’Agostino McGowan L, Stafford JD, Thompson VL, Johnson-Javois B and Goodman MS (2015) Quantitative evaluation of the community research fellows training program. Front. Public Health 3:179. doi: 10.3389/fpubh.2015.00179

Received: 21 April 2015; Accepted: 29 June 2015;
Published: 16 July 2015

Edited by:

Roger A. Harrison, University of Manchester, UK

Reviewed by:

Ricardo J. Wray, University of Pennsylvania, USA
Mary Odum, Texas A&M University, USA

Copyright: © 2015 D’Agostino McGowan, Stafford, Thompson, Johnson-Javois and Goodman. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Lucy D’Agostino McGowan, Vanderbilt University, 2525 West End Avenue, Suite 1100, Nashville, TN 37235, USA, ld.mcgowan@vanderbilt.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.