INTERPRETABILITY OF A CANADIAN INFORMATICS COMPETENCY SCALE AMONG FOURTH-YEAR NURSING STUDENTS

by

ANDREA ELIZABETH DRESSELHUIS
B.Sc.N., University of British Columbia, 1993
R.N., British Columbia Institute of Technology, 1988

Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of
MASTER OF SCIENCE IN NURSING
in the
FACULTY OF GRADUATE STUDIES
TRINITY WESTERN UNIVERSITY

July, 2019

© Andrea Dresselhuis, 2019

Abstract

Nursing informatics merges nursing practice, its information and knowledge, with information and communication technologies to improve patient care. Uptake of informatics competencies can be measured using self-perceived assessment scales. A scale for measuring Canadian nursing informatics competencies has recently been developed from national competency indicators. In order to examine its wording and interpretability, cognitive interviewing was conducted with eight fourth-year nursing students as they completed the Canadian Nurse Informatics Competency Assessment Scale. Findings revealed issues related to misinterpreted survey items, items seen as "difficult" to answer, and specific words and phrases not recognized or misinterpreted. Furthermore, design flaws such as technology-related jargon, wording ambiguity, and double-barrelled questions were revealed. Correspondingly, specific item and response re-wording revisions have been recommended to improve wording, interpretability, and scale validity. Improving this scale may contribute to nursing informatics assessment and uptake in Canada, which may be timely and strategic given that nursing informatics preparedness in Canada lags.

Dedication

I dedicate this thesis to my father, Fred Harris, who, when I was a little girl, made me a periscope that allowed me to see around corners. While I peered into it, stunned that I could see the living room while standing in the stairwell, Dad explained the science behind it. Dad always wanted his "girls" to be inquisitive and passionate about the world and modelled a lifelong love of learning. You would be proud to know I made it across this finish line, Dad. Thanks for showing me that what awaits around the next corner can be both exciting and full of lessons.

Acknowledgements

I would first like to thank the eight fourth-year students who kindly shared their time, even during Christmas break, to participate in this study. Thank you for conducting yourselves with tremendous professionalism and sincerity. You were such enthusiastic and willing recruits, and I am so grateful to have met each of you.

Second, I would like to acknowledge the support of the nursing administration and faculty at Trinity Western University. At every step of my studies, I have encountered kindness and encouragement from remarkable individuals who are passionate about nursing and invested in my learning. Thank you for modelling integrity and kindness alongside a pursuit of excellence.

I would like to acknowledge my thesis committee for their expertise and commitment to this project. Dr. Rick Sawatzky, your enthusiasm and affirmation, conjoined with your expert feedback and "just a few comments", swayed me to go many extra miles to reconsider things and be a better researcher. I am indebted to your calm guidance and keen intellect. Dr. Maggie Theron, thank you for steering me from the outset.
I still can't believe how you discovered the newly developed scale for me to use! Your own research with nursing students using technology was inspiring and extremely helpful, and your kindness and capacity to know what to say when I was grieving were refreshing and life-giving.

To my children—Lauren and David, Claire, Jonathan, Anneke and Mariechen—I could not have embarked on this journey without your unflagging support. You have cheered me on and happily taken on many additional responsibilities allowing me to focus on school. Contrary to my fears, the fact that five of you are on the cusp of pursuing post-baccalaureate studies makes me realize that my decision to go back to school may have inspired you more than it hindered you. You are unique and precious and inspire me more than you'll ever know.

To my husband Jeff, thank you from the bottom of my heart for hemming me in with your unconditional love. Your humour and tremendous capacity kept my head above water. You believed I would thrive in my studies, listened when I felt overwhelmed, challenged me to persevere, and consistently stepped in to protect my time. To be fully known and cherished is a rare gift.

Finally, to the One who is Love, the only wise God, be honour and glory. You are my source of strength and my bedrock. It is my prayer that our dependence on and interaction with technology will never cause us to lose sight of how precious our humanity is, and how valuable our human interpersonal connections are.

Table of Contents

Abstract .......... 2
Dedication .......... 3
Acknowledgements .......... 4
Table of Contents .......... 6
List of Tables .......... 13
List of Figures .......... 14
Chapter One: Introduction and Background .......... 16
    Personal Motivation .......... 17
    Background .......... 18
    Nurse Informatics Competencies .......... 19
    Informatics Assessment .......... 20
    Rationale for Research .......... 20
    Underlying Theoretical Models .......... 21
        Bandura's theory of self-efficacy .......... 22
        Validity measurement theory .......... 23
    Definition of Terms .......... 25
        Competency .......... 25
        Information and communication technologies (ICTs) .......... 25
        Nursing informatics .......... 25
        Nursing students .......... 25
        Self-perceived competency .......... 25
    Thesis Purpose .......... 26
    Thesis Method .......... 26
    Relevance and Significance .......... 27
    Outline of Thesis .......... 27
    Chapter Summary .......... 28
Chapter Two: Literature Review .......... 29
    Literature Review Methods .......... 29
        Search strategy .......... 29
        Inclusion and exclusion criteria .......... 30
    Literature Review .......... 31
        Nursing informatics benefits health care .......... 31
        Tools and scales for assessing nursing informatics competencies .......... 32
            SANICS .......... 34
            TRI .......... 35
            CARS .......... 36
            KSANI .......... 37
        Discovering a Canadian informatics self-assessment tool .......... 38
        Nursing informatics competencies: A Canadian context .......... 39
    Chapter Summary .......... 43
Chapter Three: Methods .......... 44
    Study Design .......... 44
        Sampling methods .......... 44
        Data collection procedures .......... 48
        Cognitive interviewing methods .......... 50
            Think-aloud .......... 52
            Verbal probing .......... 53
        Data analysis .......... 56
            CASN's entry-to-practice competencies document .......... 59
    Ethics .......... 61
    Chapter Summary .......... 63
Chapter Four: Results .......... 65
    Sample Description .......... 65
    Presentation of Findings .......... 68
        Survey questions misinterpreted by participants .......... 69
            Survey question 4 .......... 70
            Survey question 7 .......... 72
            Survey question 20 .......... 74
            Survey question 14 .......... 75
            Survey question 18 .......... 76
            Survey question 19 .......... 78
            Survey question 21 .......... 79
            Survey question 6 .......... 80
            Summary of survey questions misinterpreted by participants .......... 81
        Ease or difficulty answering survey questions .......... 82
            Questions viewed as "easy" to answer .......... 83
            Questions viewed as "difficult" to answer .......... 87
            Summary of ease or difficulty answering survey questions .......... 88
        Words or phrases not recognized or misinterpreted .......... 89
            "ICTs" not recognized .......... 90
            "Various types of electronic records" misinterpreted .......... 92
            "Interoperable" not recognized .......... 94
            "Organizational policies" misinterpreted .......... 95
            "Information standards" misinterpreted .......... 96
            "Informatics" misinterpreted .......... 98
            "Variety of ICTs" misinterpreted .......... 100
            "Health information systems" misinterpreted .......... 101
            Summary of words and phrases not recognized or understood .......... 102
        Other problems identified by participants .......... 102
            "The question is asking me more than one thing" .......... 105
            "Without experience I don't know how to answer" .......... 107
            "Aspects of question unclear" .......... 108
            Summary of other problems identified by participants .......... 110
    Chapter Summary .......... 111
Chapter Five: Discussion .......... 115
    Limitations .......... 115
        Sample representation .......... 116
        Researcher proficiency .......... 116
        Sex .......... 117
    Discussion of Findings .......... 117
        Principles of survey design .......... 118
            Jargon .......... 118
            Ambiguity .......... 119
            Double-barrelled questions .......... 122
        Satisficing, acquiescence and Dunning-Kruger effect .......... 123
        Benefits of improving the interpretability of the C-NICAS .......... 125
        Educational preparedness and item misinterpretation .......... 127
        Pilot testing and pre-testing on targeted populations .......... 130
        Test validity, response error, and inferences .......... 133
        Applicability of suggested revisions .......... 135
    Chapter Summary .......... 138
Chapter Six: Conclusions and Recommendations .......... 140
    Research Summary .......... 140
    Conclusions .......... 141
    Recommended Revisions to the C-NICAS .......... 144
    Recommendations for Nursing Research on Informatics Competency Assessments .......... 151
    Recommendations for Nursing Education .......... 152
    Recommendations for Nursing Practice .......... 154
    Recommendations for Nursing Leadership .......... 155
    Recommendations for Nursing Policy .......... 156
    Chapter Summary .......... 157
References .......... 159
Appendix A: Search Terms for Literature Review .......... 177
Appendix B: PRISMA Flow Diagram .......... 178
Appendix C: Review Matrix of Selected Articles .......... 179
Appendix D: C-NICAS .......... 184
Appendix E: Oral Recruitment Script .......... 186
Appendix F: Interview Script .......... 187
Appendix G: Human Research Ethics Board Certificate Approval .......... 197
Appendix H: Participant Consent Form .......... 198

List of Tables

Table 1 Sample Description .......... 66
Table 2 Recommended Revisions for C-NICAS Items and Responses .......... 145

List of Figures

Figure 1. Screen shot of Data Matrix depicting responses from P1 and P2 to survey Item #9 .......... 58
Figure 2. Number of survey questions misinterpreted by each participant .......... 69
Figure 3. Percentage of participants who misinterpreted questions .......... 70
Figure 4. Number of questions rated as "easy", "average" or "difficult" to answer by each participant .......... 84
Figure 5. Number of participants rating each question as "easy", "average" or "difficult" .......... 85
Figure 6. Number of words or phrases not recognized or misinterpreted .......... 91
Figure 7. Number of other survey problems per question .......... 104

The questionnaire designer may intend one interpretation yet find that individuals presented with the questions adopt an alternate understanding that, in retrospect, appears quite reasonable. If cognitive interviewing leads us to appropriate findings or insights, we may then modify our materials to enhance clarity. This . . . ultimately increases the likelihood that they will respond in a thoughtful manner and give accurate answers. (Willis, 2005, p. 4)

Interpretability of the Canadian Nurse Informatics Competency Assessment Scale Among Fourth-Year Nursing Students

Chapter One: Introduction and Background

Twenty-first century health care has been marked by an increase in the utilization of information and communication technology (ICT), revealing a pressing need for seamlessly connected health, digital literacy, and carefully honed informatics capacities (Bickford, 2015; Choi & De Martinis, 2012; Chung & Staggers, 2014; Hübner et al., 2016). Nursing informatics (NI) can enhance decision-making in all areas of nursing through the use of standardized data (Canadian Nurses Association [CNA], 2017, p. 1). Informatics education in the workplace and nursing programs is linked with both satisfaction and competency in NI (Hern, Key, Goss, & Owens, 2015). However, nurses are "often placed in the context of ICT in their workplace . . . without having received the necessary training" (Gonçalves, Castro, & Fialek, 2015, p. 1012).
Furthermore, Canadian nursing students lack necessary resources in their formal education for developing competencies in NI (Nagle & Clarke, 2004; Ronquillo, Topaz, Pruinelli, Peltonen, & Nibber, 2017) and feel ineffective at searching electronic scientific databases, using spreadsheets, ensuring data security, and analyzing the quality of health care websites (Jetté, Tribble, Gagnon, & Mathieu, 2010). To respond to this challenge, Canadian nursing schools must prepare graduates competent in informatics (Borycki, Foster, Sahama, Frisch, & Kushniruk, 2013; Frisch & Borycki, 2013).

Recognizing the importance of NI and its integration into Canadian entry-to-practice competencies, the Canadian Association of Schools of Nursing (CASN) developed entry-to-practice NI competencies (CASN, 2012a). These competencies outline the minimum requirements pertaining to nursing and health informatics that all registered nurses emerging from an undergraduate nursing program in Canada should possess (CASN, 2012a). To measure the attainment of these competencies, a survey tool, the Canadian Nurse Informatics Competency Assessment Scale (C-NICAS), has been developed and tested on a large population of Canadian registered nurses (Kleib & Nagle, 2018a, 2018b). A preliminary psychometric analysis of the C-NICAS reveals evidence of validity and reliability.

Survey questionnaire designs can be improved by examining and understanding the cognitive processes utilized by participants as they answer survey questions (McColl, 2006; McColl, Meadows, & Barofsky, 2003; Polit & Beck, 2017). Cognitive interviewing offers an opportunity for survey participants to give their understanding of the survey questions and articulate their decision-making aloud, that is, how they have recalled and retrieved their answers and responses. It does not appear that cognitive interviewing was performed on the C-NICAS; therefore, the wording of its survey questions may still stand to benefit from a closer examination of their meanings and interpretability. Furthermore, testing the C-NICAS on a population of student nurses is warranted given that the C-NICAS is based on entry-level competencies. To establish the interpretability and readability of the C-NICAS' survey questions, cognitive interviewing was conducted in a small sample of fourth-year nursing students using approaches suggested by Willis (2005), Miller, Willson, Chepp, and Padilla (2014), and Collins (2015a).

Personal Motivation

My personal interest in computers and informatics began shortly after starting graduate school as a mature student, decades after my BScN graduation. In graduate school, using a laptop for the first time, I struggled to navigate multiple tabs/windows, conduct online searches, and even master a track pad, despairing at how un-savvy and illiterate I was in computer knowledge and ICTs. When I was last in school, I conducted scholarly research using microfiches and the Dewey decimal system, locating journals and reference books filed on library shelves. In graduate school, research consisted of online searches yielding thousands of results and, much to my horror, slightly tweaking the search added another thousand. To preserve my sanity and avoid being labelled prehistoric, I learned how to ask for help. The irony was not lost on me when, in order to problem-solve, I became reliant on the very technology I had been struggling with.
While today's generation of nursing students has grown up surrounded by technology in a way I have not, keeping abreast of ICTs in the health care arena is a tremendous challenge regardless of one's computer and ICT proficiency. It is fascinating to consider how seemingly dependent health care has become on technology within the span of only a few decades. My curiosity with computers, ICTs, and informatics led me to learn how they were incorporated into nursing curricula across Canada. This, in turn, led me to consider how educational informatics competencies could be measured. I believe nurses want to embrace technology if they know that patient care delivery is enhanced. I also believe nurses will not jeopardize human interpersonal connections in order to achieve this.

Background

It can be said that health care has become both complicated and simplified by ICTs, yet the appropriate use of ICTs in digitally connected health services can have cost-saving results and help in the shift toward a person-centred approach (CNA, 2017). Quality of care can improve in a technology-rich environment, positively affecting aspects of patient care such as patient safety, administration efficiency, and support for evidence-based care (Darvish, Bahramnezhad, Keyhanian, & Navidhamidi, 2014). Despite enormous shifts in the way health care is delivered using ICTs, nurses may not be equipped to meet these challenges (Akman, Erdemir, & Tekindal, 2014; Kinnunen, Rajalahti, Cummings, & Borycki, 2017; Lavin, Harper, & Barr, 2015). Moreover, nurse educators are "under prepared in the requisite skills to use or demonstrate informatics technologies" (Kinnunen et al., 2017, p. 45), typically approaching informatics as a domain separate from knowledge and skill acquisition (Kinnunen et al., 2017). In a systematic review on the integration of informatics in undergraduate nursing programs, Kleib, Zimka, and Olson (2013) found "inconsistent integration of theoretical knowledge and clinical experiences related to informatics education as well as variations in the duration and sequencing of informatics instruction" (p. 150). Specifically, in Canada, NI has not been quickly integrated into nursing undergraduate curricula (Nagle & Clarke, 2004; Ronquillo et al., 2017).

Nurse Informatics Competencies

CASN's NI competencies were developed with the aim of integrating NI in both curricula and professional practice (CASN, 2012a). By adjusting their curricula to include CASN's informatics competencies, Canada's undergraduate nursing programs champion nursing informatics, driving forward an agenda of safeguarding "information synthesis in accordance with professional and regulatory standards in the delivery of patient/client care" (CASN, 2012a, p. 5). Measuring uptake of informatics in the nursing student population is crucial to understanding the effectiveness of NI integration into nursing curricula. Are entry-level registered nurses in Canada meeting these competencies? What are ways this can be determined? Are Canadian nursing schools assessing for the presence of NI competencies in their graduating students? A literature review suggests this is not yet occurring. Kleib and Nagle (2018a) assert that little is known about how informatics competencies are perceived by practicing Canadian nurses. Assessment of NI competencies in Canada is poised, therefore, to be potentially valuable in both educational settings and in the context of nursing practice.
Informatics Assessment

According to Kleib and Nagle (2018a), a survey based on CASN's entry-to-practice competencies offers an opportunity to "evaluate nurses' readiness for informatics practice, understand educational needs, and plan informatics education in the workplace" (p. 351). Student self-assessment of competencies creates an awareness of knowledge and level of expertise by identifying education and experiential needs which, in turn, can guide learning and clinical practice (Hill, McGonigle, Hunter, Sipes, & Hebda, 2014). Furthermore, assessing for evidence of informatics competency is supportive of student-centred learning, as curricula may be adjusted to individual student needs (Choi & Zucker, 2013). A self-assessment tool based on CASN's NI competencies, if used in the context of evaluation, may also inform nursing educators planning undergraduate nursing curricula.

Rationale for Research

NI competencies are increasingly recognized for their essential role in today's multifaceted and technology-saturated health care environment. As schools of nursing incorporate NI into their curricula, reliable evaluation strategies are needed to determine the effectiveness of incorporating these competencies. Surveying students regarding their level of competence is one such strategy (Melrose, Park, & Perry, 2015). Using the C-NICAS with undergraduate nursing students is defensible given that the C-NICAS emerged from entry-to-practice competencies—those developed for newly graduated baccalaureate nurses entering the health care workforce (CASN, 2012a). Validity evidence is needed as the scale has not been validated for use in this population. This study offers a piece of that validity evidence.

Information from survey research must be accurate before its results can be generalized to a broader population. While the aim of a survey is clear communication between participants and researchers, questionnaires are rarely free from error (Willis, 2005). Cognitive interviewing is an important aspect of survey development as it allows researchers to "study the manner in which targeted audiences understand, mentally process, and respond to" (Willis, 2005, p. 3) survey questions. Cognitive interviewing addresses sources of error in survey data, leading to the re-design of questionnaires and, in turn, to improved data quality. Hamme Peterson, Peterson, and Gilmore Powell (2017) claim, with cognitive interviewing, "the descriptions of item intent and the associated construct dimension serve as a basis from which to judge if there is a misalignment between how the respondent interprets the item and what it is intended to measure" (p. 218). Interpretability of a survey's scores affects the generalizability of the survey's results. Additionally, as respondent transcripts are compared to the intent of the survey's items, misalignment can be captured between the intended meaning and the respondent's interpretation (Hamme Peterson et al., 2017). Benner (2017) asserts, "even simple everyday words can be misunderstood in a survey" (p. 544), making the task of ensuring each question is understood similarly by each respondent somewhat daunting. Cognitive interviewing is helpful for discovering, diagnosing, triaging, and treating survey problems (Benner, 2017).
Cognitive interviewing on the C-NICAS tool offers a method for drawing forth recommendations for item re-wording as well as identifying any validity concerns.

Underlying Theoretical Models

Theoretical models offer important context to research studies by providing a relevant backdrop. Two theoretical models related to my research study are outlined next. The first theory, Bandura's (1977) theory of self-efficacy, guides understanding related to self-perceived competencies. The second theory, validity measurement theory, underpins my research aim of further examining the validity of the C-NICAS using cognitive interviewing.

Bandura's theory of self-efficacy. A strongly perceived sense of self-efficacy has a positive influence on efforts individuals make to overcome adverse experiences and gain mastery of situations and experiences (Bandura, 1977). Similarly, self-reflection, seen as emerging from Bandura's (1986) social cognitive theory, acknowledges how humans use self-evaluation to make sense of their experiences and respond by modifying their behaviours and actions. This conceptual framework explains why participants may be motivated to complete a self-assessment questionnaire—self-efficacy is concerned with achieving mastery, and self-reflection with improving the quality of future performances. Theoretical underpinnings of this research project centre around Bandura's (1977, 1986, 1993) work connecting human agency and self-efficacy to cognitive functioning:

Among the mechanisms of agency, none is more central or pervasive than people's beliefs about their capabilities to exercise control over their own level of functioning and over events that affect their lives . . . . Much human behavior, which is purposive, is regulated by forethought embodying cognized goals. Personal goal setting is influenced by self-appraisal of capabilities. The stronger the perceived self-efficacy, the higher the goal challenges people set for themselves and the firmer is their commitment to them. (Bandura, 1993, p. 118)

Self-efficacy beliefs affect how people think, feel, behave, and motivate themselves, and are influenced by cognitive processes (Bandura, 1977, 1993). Self-efficacy not only influences a person's decision to enact a certain behaviour, but also whether they will engage meaningfully in it and whether they will persevere in doing it.

According to Bandura (1986), one's self-efficacy is influenced by: 1) previous experience (authentic mastery experiences); 2) witnessing others similar to one's self perform successfully (vicarious experiences); 3) believing achievement is possible (social/verbal persuasion); and 4) one's physiological state (feeling aversive arousal such as fear, stress, tension or agitation). Inner motivation to act can be influenced by many factors such as disincentives, constraints, and seriousness of 'missteps', as well as choice, emotions, and persistence (Bandura, 1986). To regulate their performance or task efforts, "performers must have some idea of the performances they are seeking to attain and have at least some information about what they are doing" (Bandura, 1986, p. 398). If the aim of the task is clear, and the effort needed to complete it apparent, one's perception of efficacy positively influences realization of the task (Bandura, 1986).
Conversely, when people participate in an activity without knowing what they are aiming for, they may struggle to translate their perceived efficacy into lasting motivation (Bandura, 1986). Hence, participating in a survey questionnaire where the objective of the survey is apparent, the wording clear, and the instructions easy to follow is more likely to trigger accurate and truthful responses than one where this is not the case. Engagement will be genuine, and motivation will be present to complete the task accurately and honestly. In contrast, participation in a survey in which respondents feel confused or puzzled by the wording of its items may, in fact, result in disinterest and/or disengagement.

This discussion of Bandura's (1977, 1993) work related to social cognitive theory has been helpful in situating cognitive interviewing contextually as a backdrop to participant involvement in surveys. Next, we turn our attention to a discussion of measurement theory. Measurement theory, and specifically validity measurement theory, offers an explanation of the role cognitive interviewing plays in establishing validity for a survey tool.

Validity measurement theory. Important conventional approaches to establishing validity in surveys and questionnaires include quantitative psychometric analyses (e.g., reliability studies, factor analysis, and correlational analyses for acquiring validity evidence), several of which were performed on the C-NICAS (Kleib & Nagle, 2018b). As Hawkins, Elsworth, and Osborne (2018) assert, examining validity extends beyond evidential proof of a single type of validity. They suggest, instead, that multiple sources of evidence be examined to substantiate a validity claim (Hawkins et al., 2018). This range of sources includes qualitative approaches such as cognitive interviewing (Hawkins et al., 2018). Cognitive interviewing can enhance the interpretability and meaning of survey score results, as well as identify whether the intended meaning of a survey item has been captured in its wording. Perspectives for establishing validity in self-assessment surveys inform key decision-making processes and, as such, may stand to benefit from new and scrutinizing approaches.

Hawkins et al. (2018) suggest both an interpretative argument and a validity argument are required to establish validity for a new survey. An interpretative argument makes clear how users of the survey in its new context will interpret and use its resultant data; a validity argument, on the other hand, establishes that evaluation of key evidence has occurred (Hawkins et al., 2018). This includes a decision as to whether use of the scores in the survey's new setting is sufficiently supported (Hawkins et al., 2018). Ongoing examination of validity in both new and established survey tools is a crucial step toward valid interpretation of measurement scores. Contemporary validity theorists such as Hawkins et al. (2018) define validity "not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use" (p. 1695). They further contend that when surveys are placed in new contexts such as a new language translation (or, in the case of my research study, a new population of entry-to-practice nurses), a validity argument structure is a sound approach to establishing the validity of its score interpretation.
Performing cognitive interviewing with the C-NICAS offers additional evidence of its validity. The upcoming section defines key concepts related to this research and is followed by a description of the purpose and relevance of the study.

Definition of Terms

Competency. Drawing on the work of Borycki et al. (2013), competency (in the nursing context) can be defined as a nurse's capacity to "combine knowledge, attitudes and skills with external resources and apply these to specific practice situations" (p. 346). Competency can also be described as a "complex know-act based on combining and mobilizing internal resources (knowledge, skills, attitudes) and external resources to apply appropriately to specific types of situations" (CASN, 2012a, p. 13).

Information and communication technologies (ICTs). ICTs, in the health care context, have been defined as encompassing those "digital and analogue technologies that facilitate the capturing, processing, storage, and exchange of information via electronic communication" (CASN, 2012a, p. 13).

Nursing informatics. Nursing informatics may be viewed as a merging of computer science and nursing practice. Specifically, CASN (2012a) defines nursing informatics as the integration of "nursing, its information and knowledge, and their management, with information and communication technologies to promote the health of people, families and communities worldwide" (p. 13).

Nursing students. Nursing students are those enrolled in an undergraduate Canadian nursing program. For the purposes of this project, participating nursing students were in their fourth and final year.

Self-perceived competency. Self-perceived competencies are closely related to Bandura's (1994) theory of perceived self-efficacy, which Bandura defined as "people's belief about their capabilities to produce designated levels of performance that exercise influence over events that affect their lives" (p. 71). Whereas self-efficacy indicates how well an individual can execute actions required to face a variety of situations (Bandura & Schunk, 1981), self-perceived competency indicates how individuals cognitively appraise these levels of performance using their knowledge, skills, values, and attitudes (Desbiens & Fillion, 2011). In this context it denotes self-perception of one's capacity to utilize NI knowledge to deliver care to patients and their families.

Thesis Purpose

The purpose of this study is to address the research question, "How do fourth-year nursing students interpret and respond to survey questions on the C-NICAS?" In this study, I want to ascertain how well each survey question in the C-NICAS scale is understood. Are the questions in the C-NICAS scale interpretable? Is the wording clear? Are the questions too difficult to understand? Are the questions unacceptably vague? Is there an association between unclear wording or phrasing and item misalignment? An analysis of the findings will aid in determining the interpretability of the C-NICAS, strengthening its usefulness, accuracy, and validity.

Thesis Method

My study's design involves conducting eight cognitive interviews with fourth-year nursing students. Using the technique of cognitive interviewing on a newly developed tool such as the C-NICAS allows for a close-up examination of the interpretability and validity of a survey tool still in its preliminary testing stages.
A population of fourth-year nursing students closely resembles the target population of entry-to-practice nurses that CASN's competencies were developed for. Participants were interviewed between November 2018 and January 2019. All interviews were recorded and transcribed. Analysis involved a qualitative and iterative approach of synthesis and reduction across several steps to identify patterns and recurring phenomena.

Relevance and Significance

Assessing self-perceived competencies is a known and effective strategy for identifying students' educational needs, as it can direct learning opportunities and suggest curriculum strategies (Hill et al., 2014). Attaining competency in NI will prepare today's nursing graduates for a future steeped in information and communication technological advances (CNA, 2017). It is evident, however, that educational preparedness for NI lags in Canadian nursing programs (Nagle & Clarke, 2004; Ronquillo et al., 2017). Given the urging of Kleib and Nagle (2018b) to verify the C-NICAS' stability as well as test it with a student nursing population, conducting cognitive interviewing with the C-NICAS scale to determine its understandability and readability seems a sound decision. By evaluating the interpretability of the C-NICAS, a newly developed survey tool designed to assess NI competencies, I hope to add to efforts intent on preparing for a future of digital and interconnected health care.

Outline of Thesis

This thesis is organized into six chapters. This first chapter has outlined the rationale for the study, relevant background, theoretical underpinnings, and key definitions, and presented the research question. Chapter Two presents the literature review together with search and retrieval strategies. Chapter Three describes the research methodology, including study design, qualitative data analysis strategies, and ethical considerations. Chapter Four contains the study results, and Chapter Five identifies several limitations and situates the findings in the existing research literature. The final chapter, Chapter Six, summarizes the study's conclusions about the interpretability of the C-NICAS. Suggestions for re-wording revisions are also presented in Chapter Six, along with recommendations for nursing research, education, practice, leadership, and policy.

Chapter Summary

This first chapter presented the significance of NI in the Canadian context and highlighted the emergence of a new survey tool (the C-NICAS) to test informatics competencies developed for Canadian nurses. Two theories underpinning study constructs have been presented: Bandura's theory of self-efficacy and validity measurement theory. The rationale for the importance of assessing informatics competencies, and specifically for conducting cognitive interviewing on this new survey tool, has been outlined. Key definitions have been discussed, and my study's research question, "How do fourth-year nursing students interpret and respond to survey questions on the C-NICAS?", has been introduced.

Chapter Two: Literature Review

A review of the literature will be discussed next. Details of my search strategy will be outlined, as well as a description of the inclusion and exclusion criteria used in the literature search. A summary of what was identified in the literature also follows.
Specifically, this includes: (a) benefits of NI; (b) survey tools used to test NI competencies; (c) what is measured when assessing NI competencies; and (d) Canada's current NI context. As well, I include how I determined that a Canadian informatics assessment tool had recently been developed. Lastly, an overview of the Canadian context of nursing informatics will be given.

Literature Review Methods

My literature review objective was to describe and summarize current research related to informatics assessment in Canada in the undergraduate population, as well as examine what NI assessment tools are being used internationally to assess informatics competencies. The rationale for examining Canada's context was to determine the current status and progress of informatics assessment. It should be noted that my original aim in conducting this literature review was to identify a tool suitable for assessing NI in the Canadian entry-to-practice nursing population and, if it did not exist, develop one. After finishing my initial literature review, I concluded that such a tool did not yet exist. However, shortly thereafter, I learned that a tool based on CASN's NI competencies had recently been developed and tested. This led me to contact the developers, who supported its use in a population of nursing students.

Search strategy. A review of the literature was conducted using the CINAHL and MEDLINE databases. CINAHL and MEDLINE were determined to be the most relevant databases for the intersecting subjects of NI, nursing, healthcare, and competency measurement tools. To capture the highest number of relevant articles related to nursing informatics, "informatics" and "nurs*" OR "informatics" and "healthcare" were used (the asterisk denotes all related words, e.g., "nurse", "nurses", "nursing") with the proximity operator "n3". Using the connecting word AND, these search strategies were combined with words designed to capture the concept of a measurement tool: "measure*" OR "tool*" OR "survey" OR "checklist*" OR "assess*" OR "competen*" (the asterisk used to denote all related words, e.g., "measurement", "tools", "surveys", "assessment(s)", and "competency(ies)"). Please refer to Appendix A for the search terms used in the literature review.

Inclusion and exclusion criteria. Articles included in these database searches were those published in English between 2013 and 2017 in scholarly, peer-reviewed sources. In total, 424 articles were identified using these search strategies in MEDLINE (199) and CINAHL (225). Other methods, such as forward citation searching, shoulder tapping, and recommendations from thesis committee members, generated an additional 32 articles. The inclusion criteria for selecting relevant articles were the intersecting concepts of nursing and informatics, informatics competencies, entry-to-practice nurses, and self-assessment surveys. Articles deemed not relevant addressed apps, electronic health records (EHRs), mobile technology, non-informatics nursing competencies, patient health literacy, e-health, and tele-health. Editorials were also excluded. After screening and removing duplicate articles, 89 full-text articles were assessed for eligibility, of which 13 were isolated for synthesis and review. Nine of the selected articles described NI in the Canadian context (e.g., a study involving Quebec nursing students' perceptions of resources and their development of NI competencies).
The other four articles used self-administered surveys to test NI competencies on nursing students. Refer to Appendix B for a PRISMA flow chart (Moher, Liberati, Tetzlaff, & Altman, 2009) depicting the identification, screening, and eligibility of the included articles, and Appendix C for a synopsis of the thirteen articles.

Literature Review

The literature review provided important information about: (a) the benefits of NI; (b) survey tools used to test NI competencies; (c) what is measured when assessing NI competencies; and (d) Canada's current NI context. Each of these topics will be discussed, including a description of tools for testing NI competencies, followed by a summary of how I stumbled upon the C-NICAS tool. Finally, a discussion will unfold of the Canadian context of nursing informatics.

Nursing informatics benefits health care. Benefits of NI include decreased health care costs as well as enhanced decision-making through standardized data (CNA, 2017). Integrated health care systems that ensure personal health records are seamlessly and securely available to both patients and their health care providers can also have cost-saving benefits. The authors of Unleashing Innovation: Excellent Healthcare for Canada advocate that Canada adopt such an integrated delivery system of care, reporting that integrated organizations (e.g., Kaiser Permanente) significantly decrease the number of clinic visits, emergency visits, and hospital admissions (Health Canada, 2015). They identify advancing information technology as one element critical to these successes and assert that nurses in primary health care provider roles are well situated to lobby key government stakeholders for health care systems that invest in information technology and reduce taxpayer burden (Health Canada, 2015).

With standardized data, information can be effectively and efficiently collected, extracted, aggregated, analyzed, and interpreted (CNA, 2017). As well, nursing engagement with a patient-centred, digitally connected health care system contributes to the realization of a "person-centred model of health and wellness" (CNA, 2017, p. 1). Furthermore, the use of ICTs in clinical practice contributes to increased efficiency and increased quality (CNA, 2017). NI abilities can also increase time spent at the bedside (Hill et al., 2014). Hill et al. (2014) maintain this additional time can be used "to improve critical thinking and problem solving; thereby, increasing clinical reasoning skills" (p. 106). A further benefit may be linked to the specialized knowledge and responsibilities associated with various levels of NI competency (e.g., the beginner nurse, experienced nurse, NI specialist).

Borycki, Cummings, Kushniruk, and Saranto (2017) suggest that, whilst benefits abound from the utilization of health information technology, technology-induced errors are on the increase. Technology-induced errors, defined as those stemming from interactions between technology and humans in 'real world work activities', arise from a wide range of sources, including legislation, policy, programming/design, and lack of training or support (Borycki et al., 2013). Nevertheless, these authors assert that an increased awareness of appropriate responsibilities at each level may, in fact, mitigate technology-induced errors and improve patient safety (Borycki et al., 2013).
If realized, patient safety concerns related to technology-induced errors can be addressed, from beginner nurses identifying and reporting 'near misses', to nurse informatics researchers extending their study focus to include health information technology safety evidence (Borycki et al., 2013). Benefits of these requisite responsibilities could be inestimable and far-reaching if technology-related errors are caught early or prevented entirely. NI has the potential to benefit nursing and health care in concrete and evidence-based ways. Tools and scales for assessing NI competencies will be discussed next.

Tools and scales for assessing nursing informatics competencies. Many self-assessment tools developed specifically for assessing NI competencies were noted in the literature. Examples selected for the purposes of this review include: 1) the Self-assessment of Nursing Informatics Competencies (SANICS), measuring knowledge, attitude, and skills (Abdrbo, 2015); 2) the Technology Readiness Index (TRI), measuring readiness-to-use and perceptions of technology (Odlum, 2016); 3) the Computer Anxiety Rating Scale (CARS) (Akhu-Zaheya & Khater, 2013); and 4) Bryant, Whitehead, and Kleier's (2016) Knowledge, Skills, and Attitudes towards Nursing Informatics (KSANI), measuring knowledge, skills, and attitudes in nursing students. While it is noted there are other scales in the literature measuring NI competencies (for example, the Staggers Nursing Computer Experience Questionnaire [SNCEQ] [Staggers, 1994]; the Information Technology Attitude Scales for Health [ITASH] [Ward, Pollard, Glogowska, & Moule, 2007]; the TIGER-based Assessment of Nursing Informatics Competencies [TANIC] [Hübner et al., 2016; Hunter, McGonigle, Hill, Hebda, & Sipes, 2014; Sipes, McGonigle, Hunter, Hebda, Hill, & Lamblin, 2016]; the Nursing Informatics Competency Assessment of Level 3 and Level 4 [NICA L3/L4] [Hunter et al., 2014; Sipes et al., 2016]; and the Health Information Technology Competencies Tool [HITCOMP] [Sipes et al., 2017]), the selected tools I reviewed were chosen for their strength of relevance and alignment with my intent to find one to assess NI competencies in a Canadian nursing student population. Specifically, each of these tools assessed NI competencies in nursing students or entry-to-practice nurses and encompassed a range of concepts addressed in the CASN (2012a) competencies, such as knowledge, skills and attitudes, perceptions of technology adoption, patient safety, and computer literacy. With these four tools under scrutiny, I wanted to establish anything of pertinence to NI competency assessment in the Canadian context. It is noted that, for each of the selected tools, the number of items in the scale varies, as does the targeted population and focus of interest.

Abdrbo (2015) asked 154 nursing students in the last two years of their undergraduate program, as well as first-year graduates ('interns'), to complete the 30-item, 5-point Likert Self-assessment of Nursing Informatics Competencies (SANICS) questionnaire, which measures clinical informatics role, basic computer knowledge and skills, applied computer skills, clinical informatics attitudes, and wireless device skills. Odlum (2016) applied the 5-point Likert, 36-item Technology Readiness Index (TRI) tool to forty-three nursing students to measure two technology readiness domains (optimism and innovativeness) and two technology inhibitor domains (discomfort and insecurity).
To measure computer anxiety alongside computer literacy in nursing students, Akhu-Zaheya and Khater (2013) applied the Computer Anxiety Rating Scale (CARS) to undergraduate nursing students in Jordan. This self-assessment rating scale consists of 9 positive statements about computers and 10 negative statements. Bryant et al. (2016) designed the four-point Likert, 24-item Knowledge, Skills, and Attitudes towards Nursing Informatics (KSANI) scale based on competencies developed for undergraduate and masters-entry nursing students by the American-based institute, Quality and Safety Education for Nurses (Cronenwett et al., 2007).

It was noted, across the literature, that developers of tools and scales measuring NI competencies were primarily concerned with assessing the knowledge, skills, and attitudes of the individual. Furthermore, a need to assess education-related opportunities was also stressed. A closer look at how these various tools assessed these constructs follows. I had yet to discover that one had already been developed based on CASN's (2012a) NI competencies.

Self-Assessment of Nursing Informatics Competencies (SANICS) scale. Using the Self-assessment of Nursing Informatics Competencies (SANICS) scale as well as perceived preparedness for patient safety, Abdrbo (2016) assessed knowledge (e.g., patient safety and error-and-cause analysis), skills (e.g., reporting and response to errors, resource utilization, evidence-based practice, and communications during hand-overs), and attitudes (e.g., responsibilities of health care professionals for patient safety culture, and error reporting and disclosing). Key demographics related to opportunities were also collected, such as computer-use experience, undergraduate NI courses, frequency of computer use, years using a computer, and access to a computer at work. NI competencies were correlated with health informatics-related education. Her research indicated, "nurses may have limited [clinical informatics] skills, but with education, they become more likely to use available information systems" (Abdrbo, 2015, p. 513). Students who took a NI course had significantly higher patient safety skills and safety knowledge scores than those who did not (Abdrbo, 2015). Abdrbo (2015) suggests, "education should be provided through several approaches: courses, tutorials, and clinical training" (p. 514), adding, "learning nursing informatics competencies will emphasize safety practices" (p. 513).

Technology Readiness Index (TRI). Odlum (2016) maintains that understanding technology perceptions of entry-level nurses is critical, as this knowledge can "enhance training and success in practice settings" (p. 314). This approach stems from the conviction that attitudes and perceptions are closely related to the adoption of health care-related technology. Measuring contributors toward technology optimism, as well as those factors that induce a sense of discomfort toward technology, can be achieved using the Technology Readiness Index (Odlum, 2016). Contributors to technology adoption are an optimistic attitude which welcomes technology, and a willingness to be a technology pioneer (Odlum, 2016). In contrast, perceptions that inhibit technology adoption include a belief that one lacks control over technology, and a skepticism that technology will work correctly (Odlum, 2016).
Each domain affects technology readiness (positively or negatively), as each describes an individual’s tendency to use and embrace new technologies. These four domains may be summarized as follows: Optimism is the view of technology in a positive way and the belief that its use offers efficacy, flexibility and control. Innovativeness is the propensity for one to be a technological pioneer. Discomfort is the belief there is a lack of control over technology use and insecurity is the disbelief and skepticism in the ability for technology to work correctly [italics added]. (Odlum, 2016, p. 315) Use of this scale with a sample of urban American nursing students indicated that experience with technology in the clinical setting is inversely related to optimism when compared with having no experience with technology in the workplace (Odlum, 2016). This suggests a reality of challenges faced in the workplace (Odlum, 2016). It was also noted that classroom or clinical training appears to improve the technology readiness of entry-to-practice nurses, as evidenced by increased optimism and decreased discomfort, both of which suggest a need for early exposure and ongoing instructional approaches to address obstacles and barriers (Odlum, 2016). Computer Anxiety Rating Scale (CARS). Assessing competency in NI can also include determining anxiety-related computer literacy rates (Akhu-Zaheya & Khater, 2013). A range of factors influences anxiety towards computers—from computer experience, to demographics such as culture, to a fear of losing files or information when using a computer (Akhu-Zaheya & Khater, 2013). Amongst nursing students, computer anxiety was significantly and negatively correlated with program year and computer experience (Akhu-Zaheya & Khater, 2013). Using the Computer Anxiety Rating Scale developed by Heinssen, Glass, and Knight (1987) to assess computer anxiety, a negative correlation (r = -0.5) was found between computer anxiety and computer literacy; in this context, computer literacy was defined as “the ability to use a computer” (Akhu-Zaheya & Khater, 2013, p. 37). Believing that educational opportunities to gain computer experience during nursing education programs increase computer literacy, the researchers asked participants about specific learning opportunities. The authors conclude, “it is important to develop education and training courses that correspond with a student’s individual needs and meets student requirements in clinical practice, with more attention being paid to nursing student computer experience” (Akhu-Zaheya & Khater, 2013, p. 45). Knowledge, Skills and Attitudes towards Nursing Informatics (KSANI) scale. The Knowledge, Skills and Attitudes towards Nursing Informatics (KSANI) scale was developed based on the domains of knowledge, skills, attitudes and opportunities (Bryant et al., 2016). Specifically, the KSANI scale measures “attitudes toward informatics”, “perception of informatics knowledge”, and “informatics skills confidence” (Bryant et al., 2016). Questions related to education opportunities were also asked to gauge whether participants had applied informatics knowledge during their nursing education. To this end, Bryant et al. (2018) assessed nursing students’ perceptions of how informatics was integrated into their studies, and the opportunities they had to use informatics technologies during their education.
Examples of the KSANI’s Likert-style questions include: “I feel confident in my ability to document patient care in an electronic health record” (knowledge); “I feel confident in my ability to describe the benefits of different communication technologies” (skills); “It is important to me that all health professionals seek lifelong learning of information technology skills” (attitude); and “In my program I had the opportunity to see examples of clinical decision-making supports and alerts” (opportunities) (Bryant et al., 2016). It is noted that the KSANI scale competencies were based on nationally established informatics competencies for pre-licensure nursing students from the American-based “Quality and Safety Education for Nurses Initiative.” An overview of these informatics competency scales has indicated some common ground. Across the literature, NI researchers champion a need for assessing knowledge, skills, and attitudes. They also emphasize the importance of assessing education and learning opportunities in both nursing programs and in the workplace. It is also noted that self-assessment of NI is often based on nationally developed competencies. I noted, however, that none of these scales specifically addresses NI in the Canadian context. Shortly after arriving at this conclusion, it was brought to my attention that a tool relevant to the Canadian context had recently been developed. Discovering a Canadian informatics self-assessment tool. As my research was aligned with strategies for evaluating self-perceived competence in CASN’s NI competencies among undergraduate nursing students, I searched the literature for a self-assessment tool that could be used to assess CASN NI competencies. A thorough review of the literature suggested that such a tool did not exist. Shortly after determining this, Dr. Maggie Theron, while at the Western North-western Region Canadian Association of Schools of Nursing conference, attended a presentation by Dr. Manal Kleib introducing the C-NICAS as a newly developed tool based on CASN’s NI competencies. Follow-up emails and conversations with Dr. Kleib confirmed the availability of articles describing the development and testing of the C-NICAS published ahead of print (Kleib & Nagle, 2018d, 2018e). She willingly shared the articles and tool with me, stating further research with the tool was ‘timely’, and using the tool with a different population ‘ideal’ (M. Kleib, personal communication, April 24, 2018). Please refer to Appendix D for the C-NICAS tool. The C-NICAS was developed based on CASN’s (2012a) competency indicator statements, and initial testing on a large sample of practicing nurses in Alberta, Canada, indicated good reliability and construct validity (Kleib & Nagle, 2018b). Kleib and Nagle (2018a) developed the 21-item, 5-point C-NICAS scale to measure the following categories: foundational information and communication technology (ICT) skills, information and knowledge management, professional and regulatory accountability, and use of ICT in the delivery of patient care. Face validity testing in a small pilot with informatics nurses resulted in minor modifications to the scale, after which the C-NICAS was applied to a large population (n = 2844) of already-practicing Canadian registered nurses in the province of Alberta.
Survey data included mean scores for each of the four subscales, and demographic questions such as informatics-related education, informatics readiness, and engagement. Kleib and Nagle (2018a) assert, “given that positive attitudes toward technology are correlated with informatics competencies, the improvement of informatics competencies among nurses is vital to the incorporation of informatics in practice” (p. 358). They maintain that the professional and regulatory accountability subscale of the C-NICAS represents the affective domain, thus giving an indication of attitudes toward competencies, and is evaluated as, “Recognizes the importance of nurses’ involvement in the design, selection, implementation and evaluation of ICTs applications and systems in health care” (Kleib & Nagle, 2018c, p. 1). It is noteworthy that Kleib and Nagle (2018b) suggest the need to gauge further evidence of the C-NICAS’ stability by conducting further research “among practicing nurses in different settings or provinces” (p. 364). They also urge future use of the C-NICAS with entry-level nurses, stating its validation is “warranted among nursing students” (p. 6). In the following section, the emergence of NI competencies in Canada will be described. Nursing informatics competencies: A Canadian context. In Canada, CASN (2012a) addressed NI competencies with initial efforts beginning in 2011 with funding from Canada Health Infoway. Honey et al. (2017) offer an overview of the development and use of NI competencies in nursing students in six different countries, and state that the rationale for addressing NI competencies in Canada included: 1) limited informatics content in existing nursing curricula, 2) the need for entry-to-practice nursing competencies reflecting the skills and knowledge needed to work in Health Information Technology (HIT) enabled practice environments, 3) the lack of shared understanding and consensus among educators on required informatics competencies for entry level practice and 4) the need to better prepare registered nurses to practice in increasingly data, information and technology rich environments. (p. 57) The first phase of this initiative developed entry-to-practice NI competencies for Canadian faculty to incorporate into nursing programs (Nagle et al., 2014). A later stage developed a Faculty Resource and Toolkit (CASN, 2013). To develop the NI competencies, a task force, Generating Momentum to Prepare Nursing Graduates for the Electronic World of Health Care Delivery Project, was formed (CASN, 2012a). This task force consisted of over 50 Canadian educators, RNs, students, key stakeholders and NI experts (Borycki et al., 2013; Nagle et al., 2014). After an extensive review of the literature, 30 competencies were drafted, then narrowed to 20, which ultimately formed 19 indicator statements (Nagle et al., 2014). At this time, a decision was made to compose three broad competency statements, with one over-arching competency (Nagle et al., 2014). A final draft was sent for review to Deans and Directors of Schools of Nursing for feedback (Borycki et al., 2013). The final document, “Nursing Informatics: Entry-to-practice Competencies for Registered Nurses,” was released in 2012.
The aims of this project were: 1) to promote dialogue between educators, informatics experts and nursing students on integrating NI into entry-to-practice competencies; 2) to assist faculty in teaching NI; and 3) to develop NI outcome-based objectives for nursing curricula (CASN, 2012a). In summary, the CASN document: indicates the need to expect a core set of foundational skills and knowledge for all incoming nursing students while schools of nursing focus on the provision of learnings that lead to the achievement of an overarching competency: Uses information and communication technologies to support information synthesis in accordance with professional and regulatory standards in the delivery of patient/client care. (Nagle, 2013, p. 1) NI is supported by the CNA and the Canadian Nursing Informatics Association in a joint position statement (CNA, 2017). Specifically, they advocate the following principles: 1) NI competencies are necessary for today’s health care environment, as are nurses equipped with specialized NI knowledge; 2) standardized clinical terminologies such as the International Classification for Nursing Practice (ICNP) and the Systematized Nomenclature of Medicine (SNOMED) are recommended for nursing documentation in Canada’s electronic health records; 3) standardized assessment methodologies and documentation tools such as the Canadian Health Outcomes for Better Information and Care (C-HOBIC) are needed for nurses to deliver quality, safe patient care; and 4) nurses must be responsive to evolving health care technologies, and flexible toward innovative alternatives for the delivery of care (CNA, 2017). Actualizing these four principles requires substantial commitment on the part of many stakeholders, including educators, researchers, and policy-makers. One such response has been the development of a self-perceived competency scale by two Canadian nurse educators and informatics researchers. Internationally, NI and the development of informatics competencies can be traced in the nursing literature to the 1980s and, since then, many different self-assessment tools have been developed, tested, and used. Frequently, these tools are based on nationally standardized competencies and assess the common constructs of knowledge, skills, attitudes, and education-related opportunities. Comparatively, the C-NICAS, based on nationally developed competencies, also assesses these constructs. Coupled with early psychometric analysis providing evidence from factor analysis, reliability testing, and construct validity, it appears poised as a useful aid for determining NI competencies among entry-level students in the Canadian context. Determining its readability and interpretability may strengthen its likelihood of doing so. In the literature, psychometric testing is commonly reported on self-assessment informatics tools. I noted, with interest, that cognitive interviewing was not mentioned as being used during the development phase of any of the informatics surveys examined in my literature review. Advantages of using cognitive interviewing when developing a questionnaire include clarifying, in detail, the meaning of each survey question as interpreted by each participant.
Analysis of findings from cognitive interviewing transcripts allows researchers to “study the manner in which targeted audiences understand, mentally process, and respond to the materials we present – with a special emphasis on potential breakdowns in this process” (Willis, 2005, p. 3). These analyses may lead to strategic modifications to the survey questions which, in turn, can reduce response error and bias (Willis, 2005). Earlier, in Chapter One, Bandura’s work (1977, 1986, 1993) on human cognitive functioning and self-appraisal of one’s abilities was presented in the context of survey participation. Notably, in the articles reviewed for this study, constructs such as self-assessment or competencies were not linked to theories such as Bandura’s social cognitive theory and self-efficacy theory. Despite this, it is suggested these theories are relevant to understanding concepts in this research study. Bandura’s (1986) theory of self-efficacy suggests why participants may be motivated to complete a competency assessment survey. According to Bandura (1993), human behaviour is influenced by cognized goals and appraisal of one’s abilities. A healthy self-efficacy allows individuals to gain mastery of tasks and overcome negative experiences. Self-efficacy also directs how meaningfully one engages in and perseveres at a task (Bandura, 1993). When encountering obstacles such as being unable to understand what is expected of them, individuals cannot maintain a sense of efficacy and may lose motivation (Bandura, 1986). Completing a survey offers individuals an opportunity to assess whether they have mastered a skillset. However, if a survey is hard to interpret, or if selecting an answer from the options given is difficult, their engagement and capacity to persevere may be lessened. Chapter Summary This chapter has described the literature review I conducted to search for an informatics competency assessment tool suitable for assessing informatics in the Canadian context. The literature revealed an abundance of tools used to assess NI competencies. These tools frequently assess knowledge, skills and attitudes, and educational opportunities, and are often based on nationally established competency standards. After concluding that such a tool did not exist in the Canadian context, it was brought to my attention that a survey based on CASN’s (2012a) NI competency indicators had recently been developed and tested. Early psychometric analysis of the C-NICAS showed promising signs of reliability and validity for use with a general nursing population in Alberta. However, the applicability and appropriateness of this instrument for use with senior nursing students is unknown. The purpose of this thesis is to examine this by investigating how fourth-year nursing students interpret and respond to survey questions on the C-NICAS, and to specifically examine the wording and interpretability of the questions. Chapter Three: Methods My research question, “How do fourth-year nursing students interpret and respond to survey questions on the C-NICAS?”, was addressed using cognitive interviewing. This approach is ideally suited for “identifying issues of validity due to response processes and for providing recommendations for revision” (Hamme Peterson et al., 2017, p. 221).
In this context, response processes refer to the “thought processes and operations involved in responding to an item” (Hamme Peterson et al., 2017, p. 217). This chapter contains a summary of my study design for cognitive interviewing. It commences with a description of how study participants were recruited. This is followed by a presentation of how I used cognitive interviewing as my research method, including a detailed depiction of the use of think-aloud and verbal probes. This chapter also outlines data collection strategies and depicts how I extracted, managed, and analysed my research data. Finally, ethical principles relating to the project are described. Study Design My research design involved recruiting fourth-year nursing students and then conducting cognitive interviews with each participant. These interviews were recorded and transcribed, and the data was analyzed using a qualitative approach following established guidelines offered by cognitive interview experts Willis (2005), Miller et al. (2014), and Collins (2015a). This section reviews my sampling methods, data collection procedures, and cognitive interviewing strategies, as well as my approaches to data analysis. Sampling methods. Study participants were recruited from two fourth-year nursing classes at Trinity Western University’s (TWU) School of Nursing. As the C-NICAS scale was derived from CASN’s (2012a) competency statements written for entry-to-practice nurses, applying the C-NICAS to nursing students in their final year of study seemed highly appropriate. In 2018, the C-NICAS was inaugurally tested on 2844 Canadian nurses of all ages and work experiences (Kleib & Nagle, 2018b). According to Dr. Kleib, further research with the tool was ‘timely’, and using it with a new population ‘ideal’ (M. Kleib, personal communication, April 24, 2018). Given that the C-NICAS is based on entry-level competencies, and that this population had yet to be targeted, testing it with fourth-year nursing students was warranted. My sampling approach, sequential sampling, involved first constructing a convenience sample of potential participants, and then purposively selecting from that sample to achieve diversity of perspectives and experiences. A convenience sample selects “the most readily available persons” (Polit & Beck, 2017, p. 724). Also known as volunteer sampling, it is efficient and well suited to a qualitative interviewing approach (Polit & Beck, 2017). For my study, however, purposive sampling was also used to ensure greater diversity of perspective and experience related to informatics and ICTs. During a fall 2018 class lecture, I invited, in person, all fourth-year nursing students at TWU to participate in my research project. I attended the last 10 minutes of a class and read a scripted recruitment presentation (see Appendix E for the recruitment script). Wanting to recruit a range of perceived abilities and confidence levels related to nursing informatics, and suspecting that those feeling competent on the topic might be more likely to participate than those who do not, I stressed that the perspectives of those who struggle with ICT and informatics were just as valuable as those of students who feel more comfortable with the topic. A coffee gift card ($15) was offered to all students who agreed to participate. After my recruitment presentation, students were invited to sign up at the front of the classroom.
An enthusiastic response was noted as 16 participants immediately volunteered. Having this many volunteers was an unexpected development, as I had hoped to recruit a total of six to eight participants over a period of time. Eight is typically considered an adequate sample size for the purposes of cognitive interviewing (Hamme Peterson et al., 2017; Willis, 2005). While on one hand Willis (2005) readily admits “the more interviews we can do, the better” (p. 226), he also highlights the advantages of ensuring a variety of individuals are used to inform survey design decisions. According to Willis (2005), cognitive interviews should seek to maximize subject variance, and this “intensity and depth of focus” (p. 227) is seen as a trade-off for quantity of observation. Hamme Peterson et al. (2017) concur, arguing that selected participants should represent “a range of experiences in relation to the assessment’s conceptual terrain” (p. 220), and similarly recommend sample sizes of 5 to 15. While noting how Blair and Conrad (2011) could identify additional questionnaire problems with sample sizes of 50 or greater, Hamme Peterson et al. (2017) conclude “that small numbers of cognitive interviews expose proportionally more serious problems than minor issues. As sample size increases, the rate of new problem identification per interview declines, suggesting diminishing returns” (p. 220). Having a goal of six to eight for my sample size, and faced with an over-abundance of volunteer recruits, I decided to engage in purposive sampling. After using a convenience sampling approach to identify a list of candidate participants (those who volunteered immediately following the recruitment presentation), purposive sampling was employed to select a diverse sample of eight from the 16 students who volunteered. To accomplish this, I emailed each of the 16 volunteers to ask them to complete a short diversity questionnaire. The aim of this diversity questionnaire was to select students with an array of computer and informatics experiences in health care for my study. In addition to asking a few demographic questions (such as age, sex, and current professional designation), participants were asked to rate themselves on how competent they felt in their overall ICT readiness. They were also asked about their Employed Student Nurse experience, and to list (and rate) any previous computer/ICT courses. (A detailed summary of participant demographics is presented in Chapter Four.) Participants emailed me their diversity questionnaire replies; I recorded all responses and arranged interviews accordingly. Interviews were conducted over an eight-week period between November 2018 and January 2019. After eight interviews were completed, it was decided that a reasonable range of diversity had been achieved, with one glaring exception—no fourth-year male students had volunteered. To remedy this, I initiated a plan to invite all fourth-year males via email to participate in my study. Two emails were sent, one week apart, via the school administration office, inviting the six fourth-year males to participate in my study. Unfortunately, no males responded to this re-invitation to participate.
In summary, fourth-year nursing students were recruited to participate in my study as a strategy to be as consistent as possible with the population for which the C-NICAS was developed. Recruitment took place as a 10-minute presentation to all fourth-year nursing students at TWU during a fall 2018 lecture class. An enthusiastic (and unexpected) response of 16 volunteers created a minor quandary of having a larger-than-anticipated sample size. Subsequently, a sequential sampling approach was used. This was achieved by gathering a convenience sample of volunteer participants, then purposively selecting from that sample to achieve diversity of perspectives and experiences. After eight interviews were conducted, it was determined that a variety of diverse perspectives had been achieved. A purposeful attempt was made to re-invite and recruit fourth-year male nursing students to participate, but it was unsuccessful. A presentation of how interview data was collected will be covered next. Data collection procedures. Interview scripts were built on the two constructs of cognitive interviewing—think-aloud and verbal probes. Each survey item in the C-NICAS was given a unique think-aloud prompt (e.g., “Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?”), as well as item-specific probes (e.g., “What did you understand by the term interoperable?” or “What time period were you thinking about when answering? From when until when?”). Following d’Ardenne’s (2015) suggestion to use think-aloud when each survey item is first presented, before asking probes, I organized all think-alouds to precede verbal probes. A trial test run is suggested to “revise the probes, write the instructions, and pre-empt what to do about unexpected issues that may arise” (d’Ardenne, 2015, p. 122). A trial cognitive interview was conducted with a recently graduated RN. This experience allowed me to run through interviewing techniques and finesse the wording of some verbal probes. Feedback from this mock interview and discussions with my supervisor confirmed a decision to carry through with the strategy of using think-aloud upfront for each survey item, followed by the immediate use of verbal probes. Eight interviews were conducted over an eight-week period during the fall 2018 semester, Christmas break, and the beginning of the winter 2019 semester. Interviews were conducted on campus and held in meeting rooms arranged with permission from the School of Nursing. The interviews lasted from 50 minutes to 1 hour 36 minutes. All interviews were digitally recorded and professionally transcribed by an experienced transcriptionist. During each interview, notes were made in the margin of the interview script. Shortly afterwards, field notes were written up from these ‘jottings’, comprising details of what was observed and said, including interactions between researcher and participant, and “phrases and key words [to act] as memory aids” (Melnyk & Fineout-Overholt, 2015, p. 150). These notes record information related to conversations and observed events, serve to help the researcher more fully understand the data, and, as such, are “recorded as completely and objectively as possible” (Polit & Beck, 2017, p. 521). Reflexive notes, documenting “personal experiences, reflections, and progress while in the field” (Polit & Beck, 2017, p. 522),
were also taken immediately following each interview to capture impressions and other key details of the interviews. At a later stage, when these details could not be easily recalled, these notes contributed to the project’s success, helping me maintain “analytic distance from the actual data” (Polit & Beck, 2017, p. 522). Immediately after receiving each transcript, I listened to each interview in its entirety to proofread and make margin notes on the transcript. These margin notes arose from jottings made on my interview script during the interview, as well as reflexive comments made right after the interviews. I also ensured sighs and pauses were captured, noting details such as how many times participants read and re-read certain questions. I also marked certain voice inflections to clarify context (e.g., the participant was re-reading the question or expressing frustration). To summarize, interviews were constructed using both think-aloud prompts and verbal probes unique to each survey item, and a trial run was performed to test the interview script. Data was collected using recorded cognitive interviews. Careful notetaking during and after the interviews augmented the process, serving to record relevant details of the interview not captured on the transcript. Soon after each interview transcript was received, it was checked for accuracy, with further notes added in the margins. A detailed description of my research approach, cognitive interviewing, and all related techniques follows. Cognitive interviewing methods. For my research, I examined the cognitive processes of eight fourth-year undergraduate nursing students as they answered each of the 21 items in the C-NICAS scale. Conducting this study with a student nurse population offered a glimpse at the scale’s readability and interpretability in the population for which the CASN (2012a) competencies were written—entry-to-practice nurses. Cognitive interviewing techniques as described by Willis (2005), Miller et al. (2014), and Collins (2015a) were implemented for this study. The aim of this research was not to determine self-perceived informatics competencies in this population, but rather to use cognitive interviewing to examine the C-NICAS and determine whether its survey questions posed any concerning challenges or difficulties. For this reason, the reader should note that a distinction is made between respondents (those who complete surveys or questionnaires) and participants (the student nurses recruited for my cognitive interview research project). Cognitive interviewing focuses on a survey’s questions rather than the administration of a questionnaire, examining both overt, observable cognitive processes and covert, hidden ones (Willis, 1999). Willis (1999) suggests, “if applied properly, cognitive interviewing is likely to be an effective means for identifying potential problems, before the problems are encountered repeatedly in the fielded survey” (p. 34). Specifically, cognitive interviewing examines comprehension of the question, recall of information, decision processes in answering the question, and individual response patterns (Willis, 1999). According to d’Ardenne (2015), each of these four stages has different aims in cognitive interviewing: for comprehension, to “explore comprehension of key terms . . . [and] the question as a whole” (p. 104);
for retrieval, to determine whether participants “can recall the required information” (p. 104); for judgement, to discover participants’ strategies as they answer and “explore the boundaries of what [they] include and exclude within their answers” (p. 104); and for response, to gauge whether the question is personal or embarrassing and whether participants “are able to map their ‘in mind’ answer onto the answer categories available [and] to check whether any answer categories are missing from the list provided” (p. 104). Strategies for achieving this are think-aloud interviewing (the participant is asked to vocalize all their thoughts after reading a survey question) and verbal probing (the interviewer asks a series of probing questions designed to paraphrase, clarify, and elucidate confidence and recall) (Willis, 2005). Viewing these as complementary approaches, Willis (2005) suggests: in practice, think-aloud and verbal probing actually fit together very naturally. I find it helpful to ask subjects to think aloud as much as possible (we do want to get them to be talkative), but do not hesitate to jump in with probing questions whenever . . . appropriate. (p. 58) Adeptly maneuvering between both of these techniques during a cognitive interview session requires skill and expertise on the part of the interviewer. Accordingly, Willis (2005) emphasizes the importance of pre-interview preparation, including scripted verbal probes and comments to encourage think-aloud. An interview script was prepared consisting of verbal probes for each survey item, along with several prepared spontaneous probes. Conducting cognitive interviews was selected as my research approach to allow me to discover whether fourth-year nursing students understood the C-NICAS survey the way it was intended (d’Ardenne, Gray, & Collins, 2015). Both think-aloud and verbal probes (scripted and non-scripted) were used to systematically explore the cognitive processes of each participant. Details concerning how I structured the cognitive interviews for my project will be covered next. Specifically, how I used think-aloud and verbal probes in my research interviews will be outlined. Think-aloud. When compared to retrospective memory recall, advantages of using think-aloud include improved accuracy of data, as participants respond ‘in the moment’ to each question, offering their thoughts without being directly prompted by the interviewer (d’Ardenne, 2015). For my research, I encouraged participants to think aloud as they read each survey item for the first time. This was accomplished by prefacing each survey item with, “Please say, out loud, what you are thinking. What’s going through your mind as you answer, e.g., question one?” At this point, my role took on ‘active observer’ qualities as I nodded and affirmed their think-aloud efforts until they wrote down their answer. Once a reply was noted, I reclaimed an ‘interviewer’ role and asked a series of questions (verbal probes) related to that item. This procedure was repeated for each survey item. Encouraging think-aloud at the beginning of each survey item prompted early reactions to each item and produced comments valuable for understanding each survey item’s interpretability. Although think-aloud was encouraged during the interviews as a ‘participant-initiated’ source of thought processes and reactions, it was acknowledged that think-aloud might be unfamiliar to participants.
Willis (2005) and Collins (2015b) argue that think-aloud training at the outset of cognitive interviews is critical to successful engagement in thinking out loud. To familiarize each participant with expectations related to their participation during the cognitive interview, a short recall exercise using think-aloud is suggested (Collins, 2015b; Willis, 2005). Before the interviews commenced, participants were led through an example of thinking out loud while counting the windows in a house. During this short recall exercise, questions were answered, and positive feedback was used to praise participant engagement in think-aloud (d’Ardenne, 2015). While encouraging participants to think aloud is a highly effective cognitive interviewing strategy, it does have drawbacks. For one, some participants may feel uncomfortable thinking out loud. This may be attributed to the sensitivity or personal nature of the survey’s topics, or to a general discomfort with sharing one’s thoughts in front of a stranger. Furthermore, even those who are comfortable talking aloud do not share every thought process (d’Ardenne, 2015). Given the general nature of the C-NICAS’ questions, participants did not appear to view the topics as sensitive or personal. However, it was noted that thinking out loud was easier for some than for others. As well, answers to my research objectives were not always captured during participant think-alouds. For these reasons, I also used two types of questioning approaches: scripted and spontaneous verbal probes. These are discussed next. Verbal probing. During think-aloud, the interviewer has a relatively unobtrusive role, whereas during verbal probing, “the interviewer ‘probes’ the respondent with direct questions about their thought processes during the question-response process” (Willson & Miller, 2014, p. 21). While think-alouds are more likely to elicit self-selected responses, verbal probing aims to deliberately “guide respondents through their cognitive processes” (Willson & Miller, 2014, p. 21). An added advantage is the way in which verbal probes allow the interviewer to formulate questions specifically related to research objectives. Verbal probing requires an interviewer to remain active and present and, if adept at this tactic, key details of participants’ cognitive processes can be elicited (Willson & Miller, 2014). Verbal probes can be scripted or non-scripted. Scripted probes. I followed Willis’ (1999) suggestion to base a series of scripted probes on the four constructs of comprehension, memory recall, decision-making, and response-making. Specifically, I devised an interview script containing verbal probes for each of the 21 survey items to test understanding, retrieval, judgement and response (refer to Appendix F for the interview script). When writing this script, I was also cognizant of the following: using open-ended questions to encourage participants to do most of the talking, keeping verbal probes simple and easy to understand, and ensuring the original survey aims were not overlooked (d’Ardenne, 2015). Lastly, I also ensured all verbal probes were neutral and unbiased (d’Ardenne, 2015) (e.g., I avoided saying, “You answered that slowly, was that difficult?”).
Examples of scripted probes asked during each survey question included, “What did you understand by the phrase, e.g., system process and functional issues?” or “In your own words, what do you think this question is trying to ask?” These questions assessed comprehension of key terms, and whether the question overall was understood. To test recall, I asked, “What time period were you thinking about when answering?” or “Can you recall a time when you last…?” This allowed me to establish whether participants were able to recall the required information. To measure decision-making, I asked, “How did you work out your answer to this question?” This let me explore what strategies participants were using when responding. These frequently used scripted probes set the stage for consistency throughout the eight different interviews. Additionally, in order to understand why respondents selected “not applicable” (NA), all participants who selected NA for any C-NICAS survey item were asked, “Can you explain why you chose ‘not applicable’?” NA had been added to the original C-NICAS scale to allow respondents to indicate when a competency indicator item was not relevant to their nursing practice (Kleib & Nagle, 2018a). By asking participants to clarify why they chose NA, I was looking to compare their explanations with the original intent of this response option; that is, as the scale developers had intended it. Lastly, to test response-making, I asked, “How easy or difficult was it to select an answer from the options provided?” (d’Ardenne, 2015). This was asked of each participant at every question except for Q13 (an unintentional omission). Once they answered this question, I followed up with, “Can you tell me why?” or “Why was that?” The purpose of soliciting this information was two-fold. I wanted to hear how they viewed each response option in relation to the question, and I also wanted to give them an opportunity to comment on any aspect of the question of their choosing. Asking why a question is easy or difficult is an example of a participant-driven probe. By inviting participants to respond as they want to, participant-driven probes offer new opportunities to connect with participants, allowing interviewers to map responses in relation to each question and unearth perspectives free of interviewer bias (d’Ardenne, 2015). Scripted probes are advisable for novice interviewers as they standardize the specific areas of the survey being probed (Gray & Blake, 2015). Scripted probes are ideal for exploring the same issues from one interview to the next and have the added advantage of offering consistency in the data analysis phase (d’Ardenne, 2015). Irrespective of interviewer experience, the cognitive interviewer is encouraged to address unanticipated issues as they emerge (Gray & Blake, 2015). These unforeseen issues can be spotted through careful observation and addressed using unscripted probes. Un-scripted (spontaneous) probes. Attentive listening allowed me to determine when to use spontaneous probes. If, for example, an interesting new thought or comment emerged during a think-aloud, I would invite the participant to say more. Spontaneous probes such as, “Can you tell me more about that?” or “When you had that teaching experience, was it online?” help capture ‘in the moment’ reactions or responses (d’Ardenne, 2015).
Responding in the moment allows interviewers to clarify, respond to new issues, or explore previously unforeseen problems (d’Ardenne, 2015). To prepare for this eventuality, I drafted a few spontaneous probes in advance, such as, “I noticed you changed your answer on question 3; can you explain why that was?” Spontaneous probes also sprang from jottings made during each interview and led to small but important detours. These unscripted probes were also effective at clarifying vague responses and encouraging less verbose participants to say more on a topic. To sum up, cognitive interviewing gives the researcher a front-row seat from which to witness the thought processes of survey participants as they answer each survey item. During eight cognitive interviews held with fourth-year nursing students, I asked each participant to read each survey item of the C-NICAS aloud and to continue talking out loud while they interpreted the question and decided on an answer. As a potentially unfamiliar technique, think-aloud was demonstrated and outlined at the outset of each interview to increase comfort and familiarity with it. Probes were pre-constructed to examine each participant’s comprehension, retrieval, judgement, and response to the survey items (Tourangeau, 1984). Verbal probes are ideal for understanding cognitive functioning, as not all participants are comfortable processing their thoughts out loud (d’Ardenne, 2015). Once participants selected an answer, a series of scripted probes was deployed to draw out further thought processes. Spontaneous probes were also used to explore unforeseen issues or clarify cognitive processes. Using think-aloud and verbal probes in the interviews produced a wealth of both individual cognitive narratives and patterns of responses relating to each survey item. How I proceeded to organize and analyze the data is presented next. Data analysis. Miller, Willson, Chepp, and Ryan (2014) suggest analysis of cognitive interviews is rooted in qualitative methodology, where “thematic schema are inductively developed ‘from the ground up’” (p. 42). This approach is iterative, requiring the analyst to move back and forth between raw data (interview transcripts), patterns, and emerging conceptual claims (Miller et al., 2014). Adhering to this approach to analyze my research data, I followed an iterative process of synthesis and reduction across several steps. First, interview transcripts were synthesized into individual interview summaries. Second, these summaries were compared across all participants to reveal patterns and identify phenomena (Miller et al., 2014). Third, detailed summaries pertaining to each survey item were made. During this phase, odd or unusual observations were also noted, as were specific patterns pertaining to each survey item. The first analytic step comprised writing detailed summaries from each interview. These detailed notes from each interview served to reduce transcripts into meaningful summaries and offered a systematized way to compare one interview to another. Data from each interview was organized into interview templates as suggested by d’Ardenne and Collins (2015). These templates follow their recommended interpretative sociological Framework approach to cognitive interviewing, “concerned with identifying substantive findings and addressing specific . . . research objectives” (d’Ardenne & Collins, 2015, p. 144).
These templated summaries followed the chronology of each interview and included the following: overall test score, answers to each survey item, think-alouds (verbatim), responses to verbal prompts (verbatim), as well as other findings and comments not originally anticipated. After each interview was distilled to a summary template, I used an Excel spreadsheet to create a data matrix for the second analytic step. To allow for a comparison of participant responses across each survey item, this spreadsheet followed the order of the 21 C-NICAS items. The spreadsheet captured a detailed synopsis of all think-aloud comments as well as responses to every scripted and unscripted probe (see Figure 1). Figure 1. Screen shot of data matrix depicting responses from P1 and P2 to survey item #9. The use of a data matrix in this manner is suggested to bring data collection closer to data management, moving from descriptive toward explanatory analysis (d’Ardenne & Collins, 2015). Following the completion of this spreadsheet, a final analysis was made to compare all responses for each survey item to identify common interpretative patterns (Miller et al., 2014). For this task, I followed the recommendation of d’Ardenne and Collins (2015), who suggest creating new matrices or templates for each survey item. For this third stage, I created 21 separate summaries, one for each survey question. During this phase, data management began to resemble what Miller et al. (2014) refer to as first identifying thematic schema, then advanced schema when certain patterns are more apparent than others. Thematic schema are those themes common across participant narratives, whereas advanced schema involves systematically examining cross-group comparisons to “identify whether any particular theme is more apparent” (Miller et al., 2014, p. 44). These 21 summaries contained patterns, or categories, pertaining to each survey item. For instance, all participant responses to the question, “In your own words, what do you think this survey question is asking you?” were compiled, as well as details concerning misinterpreted words or phrases. During this phase I also categorized when survey questions in general were misinterpreted and noted recurring patterns associated with certain survey items. Specifically, I noticed patterns of comprehension issues surfacing repeatedly. Patterns were beginning to emerge in the data that had yet to be further synthesized. Determining whether items were misinterpreted was aided by familiarity with the CASN (2012a) document used by Kleib and Nagle (2018a) to create the C-NICAS. By comparing all responses to each survey item’s matching competency indicator, I was able to judge whether an item was interpreted accurately. A brief description of this document establishes context for how I determined the interpretability of each survey item. CASN’s entry-to-practice competencies document. CASN’s (2012a) NI competencies for entry-to-practice registered nurses were created to increase uptake of NI across Canada’s undergraduate nursing curricula and encourage the integration of NI beyond curriculum into professional practice (CASN, 2012a).
This document, entitled “Nursing Informatics: Entry-to-practice Competencies for Registered Nurses”, contains one overarching competency: “Uses information and communication technologies to support information synthesis in accordance with professional and regulatory standards in the delivery of patient/client care” (CASN, 2012a, p. 5). Arising from this umbrella competency are 19 accompanying performance indicators that reflect how learning emerges and competency develops. The 21 survey items of the C-NICAS emerge from these 19 indicators. It is important to note that the detail contained within each of these indicators offers a tangible way of knowing whether the C-NICAS survey items have been interpreted accurately. Response options for all C-NICAS survey items were a 4-point Likert scale (1 = not competent, 2 = somewhat competent, 3 = competent, 4 = very competent) plus NA (not applicable). Miller et al. (2014) refer to the final analytic stage as ‘synthesizing and summarizing’. During this stage, synopses are required to categorize different survey item interpretations as well as identify true errors or problems. The emphasis at this stage, according to Collins (2015b), is to “refer to the original aims of the test question and compare these to the range of interpretations, recall strategies, response and decision-making behaviours identified, noting where errors or mistakes occur” (p. 166). It is important during these steps not to dismiss odd or unique findings—instead of forcing the data to fit a typology, it is better to remain curious and carefully consider how these cases should be managed (Collins, 2015b). Taking this advice, I created a new category for each survey item, entitled “One of a Kind/Odd Observations”. It is also advised during this stage to speculate on and notate the implications of key findings (Collins, 2015b). This was realized when I identified misinterpreted survey items and highlighted questions viewed as difficult to answer. Alongside these observations, I added comments to account for why these issues may have been occurring. I also noted when words or phrases were unrecognizable or unfamiliar and when retrieval responses varied. Comparisons were made both within and between participants, and all recurring findings were categorized and flagged, with accompanying notes on possible explanations. During this phase of analysis, I followed recommended approaches to reporting and summarizing cognitive interview findings from questionnaires. According to Willis and Miller (2011), analysis may involve describing how “particular types of responses are assigned to a descriptive category representing a question defect or a particular cognitive process” (p. 335). Bode and Jansen (2013) used cognitive interviewing to identify both misinterpretations and comprehension problems in an aging scale. Boeije and Willis (2013) similarly suggest that cognitive interviews are effective at testing new survey items when researchers identify ‘problem areas’ and recommend resolutions. By analyzing the data from this perspective, I observed problem areas of item misalignment specific to certain survey items in my research data. In summary, my research study method centered on cognitive interviewing and its techniques of think-aloud and verbal probing. Recruitment from the fourth-year class of nursing students yielded more volunteers than expected.
Subsequent purposive sampling resulted in a diverse sample of eight participants. The notable exception was that no male participants volunteered. An attempt to remedy this was made by re-inviting all males from this cohort to participate but was, unfortunately, not successful. An analytic approach relying on iterative synthesis and reduction was employed to analyze the data. As patterns and categories emerged from the final phase of analysis, a decision was made to organize the data into recurring problem areas. Ethics Ethics approval was obtained from the Research Ethics Board at TWU prior to recruitment of participants (refer to Appendix G for the approval certificate). Recruitment was arranged through the Dean of the School of Nursing. During the final 10 minutes of a fall 2018 lecture in which all fourth-year nursing students were in attendance, I outlined the details of my research project, including confidentiality issues such as anonymity and privacy. I invited them to sign up after class with their contact information. After 16 students signed up expressing their interest in participating, I decided to engage in purposive sampling to diversify participant perspectives related to informatics and ICTs. An addendum outlining this request was submitted to the Research Ethics Board and was approved shortly thereafter. Students who completed these diversity questionnaires were contacted to arrange interviews. Prior to meeting, each participant was asked to review and sign a written consent form (refer to Appendix H for the consent form). At the outset of each interview, the consent form was reviewed, and questions were answered. No risks or discomforts were anticipated with this study, and an opportunity to debrief after each interview was given. Participants were reminded that they could withdraw from the project at any time without any negative consequences. They were informed that their confidentiality would be maintained, and responses kept anonymous, as all identifiable information would be removed from the interviews. They were also made aware that data related to this project would be kept password-protected for five years, after which time it would be permanently destroyed. As I was not their instructor, students were unlikely to have felt any coercion or pressure to participate. As well, faculty were not aware of who participated, as ethical provisos required the instructor to leave the room during the recruitment presentation. All participants received a $15 coffee card to thank them for their time. Recordings of interviews were shared with, and received from, my transcriptionist through a secure, password-protected server. My transcriptionist also signed a confidentiality agreement, agreeing to maintain the confidentiality of all data, store all data securely, and not disclose any details of the research data to third parties. All identifiable information was removed from the transcripts, and participants were identified using pseudonyms such as P1 (for Participant 1), and so on. All identifiable data was temporarily stored on a locked, password-protected device and then destroyed. Details of some transcripts were shared with my supervisor through a secure website, ownCloud (2018). All research data has been stored on a password-protected computer to which I alone know the password. All electronic data from the study will be kept for five years if needed for secondary analysis or audit purposes.
All paper materials containing research data were shredded at the conclusion of data analysis. Chapter Summary This chapter outlined my research design, methods, and approaches. Evaluating cognitive interviewing data can reveal whether participant responses to the survey items match those intended by its author(s) (Beatty & Willis, 2007). This understanding can lead to meaningful interpretations of score results, improving the validity of the tool. As Hawkins et al. (2018) assert, a survey tool’s validity may be viewed as less about its statistical properties and more about the degree of empirical evidence supporting the intended interpretation of the tool’s scores. Initial psychometric testing of the C-NICAS on a large population of Albertan nurses provided important validity evidence in the form of both factor analysis and reliability. The purpose of this study was to use cognitive interviewing to explore how participants interpreted and responded to each of the survey questions, and to determine whether these responses matched the intended interpretation of the tool. Testing the C-NICAS on a population of fourth-year nursing students is consistent with its underlying construct of NI competencies written for entry-to-practice nurses (CASN, 2012a). Cognitive interviewing can expose wording problems in items that may benefit from revision, and this can be achieved through a careful study of the test content and response processes (Hamme Peterson et al., 2017). Responses to each of the C-NICAS’ 21 survey questions were examined for clarity and interpretability as evidence relating to the scale’s validity. Using think-aloud and verbal probes, data was collected from eight participants as they interpreted and responded to each of the C-NICAS’ 21 items. An analytical approach, based on iterative synthesis and reduction, was applied to manage and analyze the data. Transcripts of the one- to one-and-a-half-hour interviews were transformed into detailed individual participant summaries containing verbatim responses. Each summary, containing key details distilled from the interviews, was placed in a template for ease of cross-comparison of participants. Next, these templates were used to form 21 detailed accounts, one for each survey item, in an Excel matrix. Third, a new templated document was created for cross-comparison of survey items. This document, organized in the chronological order of the survey, contained verbatim think-aloud responses and all replies to every verbal probe. The final step involved identifying and categorizing patterns in the data. Key and recurrent findings were notated, along with comments offering possible explanations. I also tracked odd observations and less frequently occurring problems and considered how to manage them. Based on a review of the literature, a decision was made to compile and describe the data according to the identification of problem areas. All appropriate ethical considerations were upheld and maintained throughout this study. Chapter Four: Results This chapter outlines findings related to my research study and addresses the question, “How do fourth-year nursing students interpret and respond to survey questions on the C-NICAS?” During analysis, survey item misalignment and patterns of problematic responses to certain questions were observed.
These categories and patterns were distilled into four problem areas: misinterpreted survey items, questions perceived as "difficult" to answer, problematic words and phrases, and three other issues. Prior to outlining these study findings, a description of the study's sample is presented.

Sample Description

All eight recruited participants were female, and all were current students (refer to Table 1 for a sample description). All participants (but one) had recent experience working as Employed Student Nurses, a supervised program designed to support student nurses to consolidate their learning and "earn while they learn" (Vancouver Coastal Health Authority, 2019, para. 1). A range of hours working as Employed Student Nurses was noted (from 200 to 620 hours). In describing their overall informatics and ICT competence, four participants indicated they disagreed (and three stated they agreed) with the following statement, "As it relates to nursing and health care, I feel overall competent in my ICT readiness" (one participant did not indicate an answer to this question). All participants (but one) had experience with electronic charting. The median age of participants was 21.5 years.

Interviews with the students took place around the halfway mark of their fourth year. By this time in their program, nursing students at TWU have had approximately 830 hours of clinical experience under the supervision of nursing instructors (H. Meyerhoff, personal communication, May 1, 2019). At the end of fourth year, all students are assigned a clinical preceptor and work an additional 360 hours of consolidated nursing practice in a preceptorship. This preceptorship was due to start for these students within a few months of when the interviews were held.

Table 1

Sample Description

Age of participants, mean (range): 25.9 (21-52)
Sex of participants: Female, 8; Male, 0
"As it relates to nursing & health care, I feel overall competent in ICT readiness"*: Strongly Disagree (1), 0; Disagree (2), 4; Agree (3), 3; Strongly Agree (4), 0
Current professional designation (e.g., SN, former LPN, etc.): Student nurse, 8; Former care aide, 1
Employed Student Nurse experience (estimated hours): No experience, 1; 200-399, 2; 400-599, 2; 600+, 3
Previous or ongoing computer or informatics training/education: No experience, 5; High school IT course, 1; Online charting orientation, 2
NI self-perceived competence* (based on C-NICAS scores): Not Competent (0-18), 0; Somewhat Competent (19-40), 2; Competent (41-62), 5; Very Competent (63-84), 0
Length of interview (hrs:min), mean (range): 1:08 (0:50-1:36)

Note. C-NICAS legend and score competency descriptors taken from the C-NICAS. Used with permission from the authors. *Information missing for one participant.

Overall informatics and ICT exposure during their baccalaureate nursing education included electronic medication administration and charting during clinical experiences at hospitals whenever available, online/library searches for research purposes, online simulations for clinical practice, a second-year adult nursing care WebQuest/Wiki assignment, and a second-year health website evaluation assignment (Trinity Western University, 2016a, 2016b, 2017, 2018).

All participants stayed for the entirety of each interview, with one exception, P7, who left the interview after answering Q15. A description of the interview experience with P7 is offered to understand, in part, why this interview may have ended early.
Early in the interview, P7 appeared frustrated trying to understand some questions and alternated between picking NA and “not competent”. For instance, during Q6, she commented, “oh boy this is the exact same issue previously . . . . I have had the same feeling for the past four questions so at this point in the assessment . . . my inclination would be to just give up and just bull---- my way through it” (P7). Despite these comments, she continued to engage meaningfully with new questions until Q11, when she remarked, “I feel like it’s, um, when you get a question on an exam and you’re like I have no clue so you just check something off. That’s how I felt” (P7). Several questions later, when asked if the question was easy or difficult, she replied, “easy at this point. I’m halfway through the survey and I’m just getting apathetic. My interest has waned” (P7). After answering Q15, she stated she had to leave as someone was expecting her. Total length of this interview was 1 hour 7 minutes. During a short debrief she mentioned that she wished the survey developer could have sat next to her in order to clarify many questions. INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 68 Two participants scored in the high range of “somewhat competent” of the C-NICAS scale. As per the C-NICAS legend, this is defined as: “Has a beginning level of knowledge and skills in informatics; requires some assistance in understanding and/or performing requisite competencies; identifies a need to learn more” (Kleib & Nagle, 2018c, p. 2). All others scored “competent” which is defined as: “Has a moderate to a very good level of knowledge and skills in informatics; requires minimal assistance in understanding and/or performing requisite competencies; actively seeks to advance his/her competency” (Kleib & Nagle, 2018c, p. 2). P7’s C-NICAS score was not calculated due to the truncated interview. One participant indicated she had previous computer/informatics-related education; specifically, she had taken an information technology course in high school. Two mentioned receiving electronic charting orientation. All others indicated no experience with informatics-related education or courses. When participants are carefully selected for their capacity to make relevant and insightful comments, small sample sizes of cognitive interviews may “help to pinpoint the trouble and elicit suggestions for how to fix it” (Thompson et al., 2011, p. 3). Furthermore, findings from cognitive interviews can point to problem areas whether anticipated or not (Boeije & Willis, 2013). In this next discussion of findings, I will highlight the close connection between these pinpointed areas and the cognitive thought processes elicited from interviews. To this end, participant’s comments and quotes will be embedded in each section alongside quantitative data such as figures. Presentation of Findings Through data analysis, I identified four problem areas: (a) eight of the 21 survey items were misinterpreted (including three misinterpreted by all participants); (b) 10 survey items were repeatedly described as “difficult” to answer; (c) 13 words or phrases were identified as INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 69 problematic to understand; (d) three other issues (i.e., “the question is asking me more than one thing”, “without experience I don’t know how to answer” and “aspects of question unclear”). These problem areas will be discussed in sequence. Survey questions misinterpreted by participants. 
The number of survey questions misinterpreted by participants ranged from two to thirteen (mean = 7.3, SD = 3.5) (see Figure 2).

Figure 2. Number of survey questions misinterpreted by each participant. N = 8. Data are missing for Qs 16-21 for P7.

All in all, eight of the 21 questions (38%) were misinterpreted by three or more of the eight participants (see Figure 3). Three questions (Q4, Q7, and Q20) were misinterpreted by all participants. Q14, Q18, and Q19 were misinterpreted by five participants; Q6 was misinterpreted by three participants and Q21 by four. A wide range of responses was observed when participants interpreted each survey item. Some were confident giving an inaccurate response, while others wondered if they were guessing. Similarly, when accurately interpreting a question, some were confident in their interpretations, while others were unsure. Each of the three questions misinterpreted by all participants contained words or phrases that two or more participants grappled with. Other questions were misinterpreted in different contexts or interpreted accurately only in part. What follows is a summary of how each of these eight survey items was misinterpreted. These eight questions are presented in order of frequency, from those most frequently misinterpreted to those least frequently misinterpreted.

Figure 3. Percentage of participants who misinterpreted questions. N = 8, except for Qs 16-21, which were not discussed with one participant.

Survey question 4. Q4, "Analyses, interprets, and documents pertinent nursing and patient data using standardized language", was misinterpreted by all participants. A common thread was grappling with the phrase "standardized languages". Two participants indicated the question was referring to charting in English. P7 stated, "I would exclude tribal languages or dialects that are not commonly used . . . what I included in my mind as a standardized language would be English, French, Spanish, Mandarin, um Arabic" (P7). P3 commented, "in documentation I've only ever done it in English cause it's the only language I know" (P3). Several participants talked aloud to guess the meaning of the phrase "standardized languages" and the overall question. The following quote describes how P8 viewed Q4 as asking her how she communicates with others:

I don't . . . know if I use what's considered standardized language although I think on my unit . . . I communicate in a way that is understood by the other nurses . . . I feel . . . . confident in my ability to analyze . . . figure out interpretations in how to talk and document my findings for patients. (P8)

To determine an accurate interpretation of Q4, I examined the CASN (2012a) competency indicator from which this question sprung: "Analyses, interprets, and documents pertinent nursing data and patient data using standardized nursing and other clinical terminologies (e.g., ICNP, C-HOBIC, and SNOMED-CT, etc.) to support clinical decision making and nursing practice improvements [italics added for emphasis]" (p. 7). This description offers important context for the term "standardized languages".
Specifically, it suggests that a knowledge of data standards such as the International Classification of Nursing Practice (ICNP), Canadian Health Outcomes for Better Information and Care (C-HOBIC) and Systematic Nomenclature of Medicine Clinical Terms (SNOMED-CT) is important for using clinical terminologies when documenting online. While P1, P2, P4 and P5 saw this question as referring to charting i.e., “proper abbreviations” (P1), “nursing medical language” (P2), “phrases or short terms that we use in health care” (P5), or “lingo . . . and . . . shorthand with charting” (P4), all participants overlooked using standardized languages while documenting. When asked INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 72 to describe the question in their own words, all participants described communicating and documenting what is pertinent but did not refer to universal data standards. Data standards and the use of standardized languages improves communication between health professionals, augments data collection to evaluate nursing practice outcomes and advances the quality of nursing interventions (Rutherford, 2008). Participants understood the question as charting what is pertinent. This interpretation is only partially accurate as none of them referred to the crux of the question, using standardized languages (or as the CASN [2012a] indicator refers to them, “standardized clinical terminologies”), when documenting pertinent nursing data. Survey question 7. Q7, “Articulates the significance of information standards for interoperable electronic health records”, was the second question to be misinterpreted by all participants. Reaction to this question began as soon as participants read the question. A near universal response of pauses, confusion and nervous laughter emerged. For instance, P2 said, “my brain goes WHAT . . . I have no clue what they’re trying to . . . what are information standards?” (P2); P4 remarked, “Bleh. It’s a jumble of words” (P4); and P6 stated, “I’m going to be completely honest, my mind has all of a sudden went blank (laugh) ‘cause I don’t know what that question is really asking” (P6). This survey question appeared to contain two unfamiliar terms. All participants except for two did not interpret “interoperable” accurately and five did not recognize or know how to interpret “information standards”. The term “interoperable” triggered more misinterpretation than “information standards”. Six participants did not accurately guess what it meant. P1 stated, “‘interoperable’ is throwing me for a bit of a loop” (P1), while P2 wondered, “inter does that mean in between operations? . . . if I was doing this in real life, I’d probably skip that question . . . no clue what they’re looking INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 73 for . . . feel really dumb” (P2), and P6 read the question and immediately said, “there’s already a word I don’t know [interoperable]” (P6). P6 read “perable” and tried to associate that with permeable then quickly stated, “which I think is irrelevant…I read it and thought of permeable but then I’m just like no, now I’m just sidetracked” (P6). Similar misinterpretations in Q7 occurred with “information standards”. While three participants interpreted it accurately, five could not. Several participants attempted to explain what it meant but gave up trying. For instance, P6 started her think-aloud by stating, “thinking of it as like a standard . . . but that actually doesn’t even make sense . . . my mind is just like I don’t know” (P6). 
Others offered erroneous explanations, such as P1, “the protocol for charting when it’s . . . late or for charting if you made an error” (P1); P3, “not too sure what they mean by information standards . . . . the minimum standard of what needs to be recorded, probably like minimum information you need” (P3); and P7, “the sort of broader nursing ethics expectations or CRNBC standards of practice of patient confidentiality and privacy” (P7). “Interoperability” in this context refers to the effective exchange of health-related information between systems by permitted users and is essential for meeting the “informationsharing needs across care settings, providers, patients, and population health care environments” (Halley, Sensmeier, & Brokel, 2009, p. 310). The CASN (2012a) indicator offers two examples of “information standards” providing clarification: “Articulates the significance of information standards (i.e. messaging standards and standardized clinical terminologies) [italics added for emphasis]” (p. 7). All participants misinterpreted Q7 and this stemmed from not understanding two key words in the question: “interoperable” and “information standards”. Comments made by participants indicate that both these words are unfamiliar to them; in fact, in several interviews, participants were hesitant to guess their meanings. INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 74 Survey question 20. Seven of the eight participants misinterpreted Q 20, “Describes various types of electronic records used in care”. (One participant did not respond to this question due to a truncated interview). A universal response was observed interpreting this question. All participants interpreted it as referring to hospital-based electronic charting systems such as Pyxis or Meditech. In her own words, P1 thought the question was asking, “what is my ability . . . explaining the different kinds of technologies that we use?” (P1) then commented, “I can feel pretty competent . . . describing what they are for and even how to use them or why we use those systems” (P1). P1 offered Pyxis, Meditech and eHealth as examples but wondered if she was missing something. Similarly, P6 described hospital-based charting as, “again I’m thinking of Meditech and I think that’s what they’re referring to when it comes to types of electronic records use in care and . . . I can describe the types” (P6). P6 listed “various types” as including doctor’s notes, labs, test results, previous hospital visits, and notes from other health care providers. The following quote indicates how P5 misunderstood the question as pertaining to records from patients’ health histories: So electronic records, I immediately think of . . . a health record that’s more comprehensive that a doctor would make like the whole health history . . . electronic records there’s also . . . ones from interdisciplinary members of the team [e.g., physio, speech therapist] those all make records and you have, can also have histories like past records. (P5) The competency indicator linked to this question contains key explanatory details: “Describes the various types of electronic records used across the continuum of care (e.g., EHR, EMR, PHR, etc.) and their clinical and administrative uses [italics added for emphasis]” (CASN, 2012a, p. 11). This description highlights differences between types of electronic patient INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 75 records in the health system. 
Electronic health records (EHR), electronic medical records (EMR) and personal health records (PHR) serve three distinct functions in the access and utilization of electronic health care data. An EHR is a longitudinal health history accessible across more than one health care organization and typically contains less details than an EMR (Health Information Management, 2019). An EMR is comprised of an individual’s health-related files at a health care organization or practitioner’s office (Health Information Management, 2019). A PHR, while not a legal document, contains pertinent confidential medical records and is individually managed and owned by the patient (Health Information Management, 2019). The question is asking respondents how competent they are describing three distinct types of electronic records, EHRs, EMRs and PHRs. Without this detail embedded in the question, participants were unable to accurately guess what was meant by “various types of electronic records”. Furthermore, it appears participants describe what was familiar to them – recent experiences electronic charting during hospital-based practicums. Survey question 14. Q14, “Demonstrates professional judgment in the presence of technologies”, was misinterpreted by five participants. P2 interpreted this as maintaining ethical standards when charting, “as a nurse as I am working with technology, am I being above board, am I being honest in my charting?” (P2). Four other participants viewed the question as asking if they avoid technology for personal use while at work. P5 wondered if the question was asking her, “when we’re working, as a nurse if I am using the technology for my own purposes or whether it is for work related purposes” (P5), and similarly, P3 stated, “like your phones . . . so don’t be on your phone at work . . . . only use the technology . . . . if you want to look something up” (P3). P8 concurred, seeing the misuse of technology at work as a concerning issue, as is shown in the following quote: INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 76 I don’t really have my phone on me in the unit and I just use the unit technologies at hand so I’m going to say I’m competent at that . . . I just put it [my cell phone] away cause I know I’m tempted with things like that . . . I’m not entirely sure. It’s a little bit vague but I would guess that it’s asking whether or not I use technologies in a professional way and in a way that is benefiting the patient rather than harming the patient. (P8) A close look at the competency indicator for this survey question offers important insight and clarifies intent, “Demonstrates that professional judgement must prevail in the presence of technologies designed to support clinical assessments, interventions, and evaluation (e.g., monitoring devices, decision support tools, etc [italics added for emphasis]” (CASN, 2012a, p. 9). In other words, the question is asking, “How competent are you at maintaining professional judgement while using technology designed to support the nursing process?” or “When faced with technology glitches, do you defer to professional judgement?” Five participants misinterpreted Q14. One participant misinterpreted the question as maintaining honesty when charting. The four others who misinterpreted the question all viewed the question identically – avoiding unprofessional use of technology (i.e., using technology for work, and not for personal use). 
With “professional judgement prevailing” missing from the question, participants seem unable to accurately interpret the question. Survey question 18. Q18, “Uses ICTs in a manner that supports the nurse-patient relationship”, was misinterpreted by five participants. The following quote from P6 indicates how she interpreted the question as calling in translators: I immediately think of translator . . . we use the Voceras [a paging system] to call them in and help us interpret or help us communicate with our patients . . . that’s where I INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 77 started to get the idea of how it improves and supports the nurse-patient relationship. (P6) While two participants gave examples showing how ICTs support the nurse-patient relationship (e.g., noting a patient is “Rich” if they are Richard, or not charting at the bedside), four did not understand the question’s intent. These four participants wondered how technology can support an interpersonal human relationship. P2 asked, “how does technology support THAT?” (P2), and P4 wondered, “I don’t know how it improves our relationship with them . . . I don’t get the question . . . I can understand the question but I . . . don’t see its translation into real life” (P4). The following quote from P5 describes how she similarly struggled to see a connection between technology and interpersonal relationships: I’m not sure what particularly about using ICT would support the nurse-patient relationship . . . because you are able to look up information about them and provide that to them [e.g., low hemoglobin] . . . . I don’t know how the computer helps with that like it’s more of an interpersonal thing. (P5) Again, the competency indicator for this question offers clarification about the intended meaning, “Uses ICTs in a manner that supports (i.e., does not interfere with) the nurse-patient relationship [italics added for emphasis]” (CASN, 2012a, p. 11). The phrase “does not interfere with” adds a previously unseen meaning: using technology in a manner that does not invade the nurse-patient relationship. It is noteworthy that four participants were puzzled at how technology could support the nurse-patient relationship. Furthermore, without the phraseology “does not interfere with”, participants focused their efforts recalling ways in which technology supports the nurse-patient relationship instead of ways they have kept technology at an arm’s length to preserve interpersonal connections. INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 78 Survey question 19. Q19, “Describes the various components of health information systems”, was misinterpreted by five participants. Specifically, what was observed was an unfamiliarity with the phrase “health information systems”. As this phrase is central to the question, participants who did not recognize or understand it were not able to indicate their competency. P2 initially interpreted this phrase to mean health and body systems but admitted later, “part of me thinks am I missing something with their full definition of health information systems?” (P2). She also stated several times she did not know what “health information systems” meant. Other participants were also unsure they interpreted Q19 accurately. P3 mused, “what are health information systems? Meditech [an electronic records software] is kinda my understanding” (P3); and P4 stated, “I’m not even sure. I’m thinking ICTs . . . 
are for health professionals to use but health information systems could be for the public as well” (P4). P6 and P8 interpreted the question as accessing websites for information and support. P6 described this as using FH (Fraser Health) Pulse, a health authority website, to access information for patients. Similarly, P8 interpreted the question as asking about accessing various online resources, “makes sense to think of that as the Fraser Health [a local Health authority] internet and all the different resources you have there” (P8) and thought “various components” meant different ways of accessing resources on that website. Q19 arises from the following CASN (2012a) indicator: “Describes the various components of health information systems (e.g., results reporting, computerized provider order entry, clinical documentation, electronic Medication Administration Records, etc.)” (p. 11). These examples clarify what is meant by “various components”. Instead, however, participants gave a wide array of misinterpreted responses—from describing how to select technology, to thinking “health information systems” was symbiotic with online charting, to nurses getting INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 79 health information online, to patients asking for health-related information, and finally to obtaining information and resources online for professional use. It was noted that only two participants found the question “easy” to answer. Without examples embedded in the question, participants appeared to struggle answering the question and misinterpreted it in a wide array of responses. Survey question 21. Q21, “Describes benefits of informatics to improve health systems and quality of care”, was misinterpreted by four participants. The central issue centered on a lack of familiarity with the word “informatics” as well as several erroneous assumptions concerning the overall meaning of the question. P5 thought about using information to inform patient care and interpreted the question as obtaining knowledge: “keeping informed . . . emails on different things that are being rolled out on the unit . . . new research (e.g., infection rates) or… products on the unit” (P5). She values being informed in this way because new ideas or products have been tested and are likely to work or offer benefit. In contrast, P6 described an impasse in trying to understand and answer the question: [W]hat are the benefits of providing information to our patients to improve health systems and quality of care . . . if that was the case . . . that’s what the question is asking it wouldn’t make sense how that would improve health systems . . . like what benefits of informations [sic] would improve health systems . . . unless you’re getting feedback from patients . . . I can see how that can help with quality of care but I don’t know how that correlates with health systems. (P6) Conversely, P4 interpreted the question from a different angle where she saw “informatics” as pertaining to user-friendly information posters (e.g., found on bathroom stall doors): INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 80 I am a visual learner and I can see things . . . so informatics are pretty easy for me to figure out and learn . . . . Even if I don’t understand, even if I have never seen it before I can read it and figure it out because they’re usually very well laid out. 
(P4) To weigh participant comments and judge interpretability of this question, I referred to the following definition of NI as outlined by the International Medical Informatics Association in CASNs entry-to-practice informatics competencies document, “A science and practice [which] integrates nursing, its information and knowledge, and their management, with information and communication technologies to promote the health of people, families and communities worldwide” (as cited in CASN, 2012a, p. 13). NI and the integration of information and communication technologies are viewed as necessary infrastructure for achieving a high level of quality of care and safety in health care (Hwang & Park, 2011). Participant comments indicated an inaccurate interpretation of the question, due in large part because of unfamiliarity with the word “informatics”. Participant interpretations of the word “informatics” widely varied—from patient or educational information to mini informational posters. The same four participants who misinterpreted “informatics” misinterpreted the question. Without a definition of this term or prior mention of it in differing contexts earlier in the survey, participants misinterpreted the overall question. Survey question 6. Q6, “Describes the processes of data gathering, recording and retrieval in paper and electronic records”, was misinterpreted by three participants. While less frequently misinterpreted than other questions, narratives from these think-alouds are nonetheless significant as they reveal how severely this question was misunderstood. P2, in her own words, described the question as, “how can I take the full information about the patient, how do I pull it all together and print it out somewhere” (P2). P5, on the other hand, wondered INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 81 if it meant retrieving records after a discharge or taking a health history, “to retrieve that information in paper and electronic records . . . electronic records . . . I understand that more it would be . . . a doctor’s role to . . . obtain an extensive health history” (P5). It is to be noted, however, that P5 was not convinced of this interpretation stating several times she wasn’t sure what the question was asking. In contrast, P6 interpreted the question as collecting information for the purposes of research, “like someone doing a study. That’s what I’m thinking of when they’re gathering up data and recording and retrieving it in paper and electronical records” (P6). A review of the competency indicator associated with this question allowed me to clarify question intent and critique comments emerging from Q6. The indicator states, “Describes the processes of data gathering, recording and retrieval, in hybrid or homogenous health records (electronic or paper), and identifies informational risks, gaps, and inconsistencies across the healthcare system [italics added for emphasis]” (CASN, 2012a, p. 7). This competency requirement points to a familiarity with both paper and electronic records as well as asks respondents to report their confidence identifying and reporting charting issues. While it is noted only three participants misinterpreted this question, how far these misinterpretations deviated from the question’s intent is concerning—from equating “gathering” with “printing off the records”, to believing physicians, not nurses, should be tasked with data retrieval, to interpreting the question as gathering information for research studies. 
Without an inclusion of key details as found in the CASN indicator, three participants individually and widely misinterpreted the question.

Summary of survey questions misinterpreted by participants. A discussion of the questions most frequently misinterpreted by participants addresses the primary aim of the study. It was determined that eight of the 21 survey questions were misinterpreted by three or more participants, and three of these questions (Q4, Q7, and Q20) were misinterpreted by all participants. Participant confidence wavered when answering questions. For instance, when participants misinterpreted questions, some were convinced they had interpreted them accurately, while others were not sure. Conversely, when questions were interpreted accurately, some participants were confident in their interpretations, while others doubted them. The CASN (2012a) document used to write all 21 survey items on the C-NICAS was used at every step of data analysis to determine whether a question was interpreted correctly or not. This section reviewed the eight most frequently misinterpreted questions. When questions were misunderstood, it was common to see a wide array of interpretations. It appears that when questions lack examples, or when the intent of the competency indicator has not been translated onto the survey item, participants struggle to interpret the question and, instead, individually interpret the question in a context familiar to them.

Ease or difficulty answering survey questions. For each question, an effort was made to ask every participant how easy or difficult it was to select an answer from the options provided, and why. Missing responses to this question are attributed to an omission of the question from my script for Q13, to not all participants directly answering the question, and to the question not being asked toward the end of some interviews owing to time constraints. Despite these exceptions, 112 responses to this question were reviewed. Overall, participants selected "easy" more frequently (54) than "difficult" (39). "Average" responses were the least frequent (19). On several occasions, average-in-difficulty questions seemed "difficult" at first but became easier once participants talked through them.

I collated many varied responses to the question, "How easy or difficult was it to select an answer from the options provided?" Responses such as "easier", "easiest yet", "really easy", "a bit easier", "fairly easy", and "somewhat easy" were all categorized as "easy" to answer. "Really hard", "more difficult", "a bit more difficult", "frustrating", or "really difficult" were categorized as "difficult". Words such as "average", "moderate", "in the middle", "in between easy and hard", "okay" or "pretty comfortable" indicated an answer in between these extremes; these responses were combined into one category, "average". Responses relating to either extreme, "easy" or "difficult", revealed interesting findings and will be summarized next.

Questions viewed as "easy" to answer. The number of questions participants found "easy" to answer ranged from five to 14 (mean = 9, SD = 3.3) (see Figure 4). Those who selected "easy" most frequently were P5 and P4. When P5 explained why questions were "easy" to answer, her most frequent response was that she understood the question.
She also repeatedly noted that she had previous experience. For example, P5 found Q9 "easy because I understood it. It was also . . . more relative to me and . . . something . . . I'm actively doing" (P5). Similarly, when P4 expanded on why she selected "easy", she frequently said it was because she understood the question and had previous experience. For instance, Q16 was "easy" because she had previous experience that she "could analyze the question with" (P4). For both P5 and P4, who most frequently labelled questions as "easy" to answer, it is interesting to note that a strong association between ease of answering and interpreting the question accurately was not observed. In fact, during the interviews, participants misinterpreted a question yet concluded it was "easy" to answer 18 times. In most of these situations, participants appeared confident of their interpretation and did not second-guess themselves. As well, participants stated questions were "easy" when it did not take them long to answer.

Figure 4. Number of questions rated as "easy", "average" or "difficult" to answer by each participant. N = 8, except for data missing from P7 on Qs 16-21. Other data are missing because the question "How easy or difficult was it to select an answer from the options provided?" was not asked during every question.

Eight questions were found "easy" to answer by four or more participants. Furthermore, it is noted that the five questions deemed easiest to answer were also interpreted accurately by a high proportion of participants: Q5 was interpreted accurately by 75% of the participants; Q2, Q8 and Q9 by 88%; and Q16 by 100%. It is perhaps not surprising to also note that these five questions contained zero or very low numbers of unrecognizable words or phrases.

Figure 5. Number of participants rating each question as "easy", "average" or "difficult". N = 8. Data are missing for Qs 16-21 from one participant and when participants were not asked to rate the question (e.g., owing to time constraints). Also, Q13 data are missing owing to an unintended omission on my interviewer script.

This observed trend continued with the other participants. Those who described questions as "easy" to answer mirrored these two responses—they had experience and/or they understood the question. When participants had experience, they often referred to details of these past experiences, or mentioned it was something they performed frequently. Sometimes, however, they simply stated they could picture a past scenario or had experience. When participants understood questions, they commented on knowing what the question was asking them, or stated it was easier to figure out than other questions. For instance, P2 stated one question was "straightforward . . . not trying to rattle my brain to figure out . . . what answer they were looking for" (P2).
Other, less frequent, explanations for why questions were “easy” to answer included: “talking out loud helps explain the question”, “previous question offered context”, “knowing they were on par with other nurses”, “it’s a black or white question”, “having knowledge or skills to draw on”, or “words and language are familiar”. An irregularity was observed in the pattern of equating an “easy” question with having experience and understanding the question. For two questions (Q5 and Q11), several participants indicated they had no experience yet perceived the question as easy to answer and tended to select NA or “not competent”. To illustrate, P2 described Q5 as “pretty easy” because she did not have experience helping patients and their families review online information (and answered NA). Similarly, P1 described Q11 as, “easy because I knew I don’t have confidence in that area” (P1); however, she picked “not competent”. This pattern of response was also observed on other occasions. In these instances, not understanding or lack of experience appeared to allow them to quickly decide an answer, thus labelling the question as “easy” to answer. In other words, some questions were “easy” to answer because participants did not understand the question or lacked experience. To summarize, the five questions viewed as easiest to answer were also highly accurately interpreted. These questions, not surprisingly, contained low numbers of unrecognizable words or phrases. Across the board, participants, when selecting easy, most commonly stated it was because they understood the question and had experience. While other reasons were described, these two explanations recurred the most frequently. A slight anomaly was noted with several questions when participants stated they had no experience (or did not understand the question) yet described the question as “easy” to answer. It was noted that in these situations, participants INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 87 perceived the question as easy to answer because they didn’t have experience or understand the question. For these questions, they commonly answered NA or “not competent”. As interviewer, I also noticed that questions described as “easy” to answer also took less time to answer. Questions viewed as “difficult” to answer. Ten questions were found “difficult” to answer by three or more participants (see Figure 5). Of these 10 questions seen as “difficult” to answer, seven of them were misinterpreted by three or more participants, and seven of them contained words or phrases not recognized or misinterpreted by two or more participants. The mean number of questions participants found “difficult” to answer was 6.5 (SD = 2.4). Both P3 and P2 found a higher than average number of survey questions difficult to answer (see Figure 4). When P3 described questions as “difficult” to answer it was invariably related to not understanding the question. Q7 and Q12 were both labelled as “difficult” because she did not know what the question meant. For example, P3 thought Q20 was “more difficult because of . . . being unsure of . . . what else it might be asking” (P3). Likewise, questions P2 found difficult were often challenging for her to figure out or there were aspects of the question she did not understand. To illustrate, for Q20, P2 remarked, “it took a while to figure out. It would be nice if there was an example” (P2); P2 similarly described Q12 as, “frustrating because I just really am unsure what the question is going for” (P2). 
It was common for other participants who found a question “difficult” to answer to state they did not understand the question. Moreover, comments surfaced relating to wanting a different option to select. In Q7, P8 answered “somewhat competent” and stated, “really hard. I just sort of gave up in a sense. I just picked an answer that wouldn’t make me too committed to one extreme or the other because I didn’t really know what the question was saying” (P8). P2 INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 88 also picked “not competent” for Q7 commenting, “They need to have an option unclear . . . I don’t understand . . . I’m not sure what they’re looking for" (P2). Later in the interview, P2 suggested she needed an “‘I need more information’ box” (P2) and described feeling conflicted. She didn’t want to put NA nor “not competent” because if she understood the question, she might already be competent at it. The same sentiment was expressed by P5 who wanted either an “I’m not understanding the question” or “I need more information” option. On many occasions, participants found it challenging to select a competency indicator when they viewed the question as containing multiple components. This was especially true for P7 who frequently commented that many of the questions contained several components which made it difficult to assess her competency when she believed she was competent in one aspect of the question but not another. Likewise, P1 stated Q10 was difficult because she thought it was asking three separate questions. Selecting a response was challenging for her because it was, “harder to find the average of them” (P1). All in all, ten (48%) questions were viewed as “difficult” to answer by three or more participants. Of the ten questions viewed most frequently as being “difficult” to answer, seven (70%) were misinterpreted. It is noted these seven questions also contained words or phrases not recognized by two or more participants. The most frequent explanation for why a question was “difficult” to answer was because they could not understand the question. Other reasons include “aspects of the question are unclear”, or “the question is asking more than one thing”. Furthermore, questions “difficult” to answer were associated with recurring comments on how challenging it was to choose from the competency indicators provided; instead, they wanted a different option. Summary of ease or difficulty answering survey questions. Asking for participants to INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 89 respond to the response-driven probes of, “How easy or difficult was it to select an answer from the options provided?” and “Please explain why” offered a valuable opportunity to solicit responses that were participant-driven (d’Ardenne, 2015). These probes invited participants to comment freely on their experience understanding and responding to each question and unearthed some important and unanticipated responses. It is noted more questions were viewed as “easy” to answer than “difficult”. Understanding, or not understanding the question, was common to most explanations as to why questions were “easy” or “difficult” to answer. Most commonly, labelling a question as “easy” to answer was because they understood the question and because they had experience. It was interesting to observe, on some occasions, when participants lacked experience (or, sometimes, when questions were not understood), questions could still be labelled as “easy” to answer. 
In these instances, participants selected responses of "not competent" or NA. Overall, questions most commonly labelled "easy" were very likely to be interpreted accurately. Ten questions were labelled as "difficult" by three or more participants, and seven of these contained words or phrases that were unrecognizable or misunderstood by two or more participants. Questions deemed "difficult" also had multiple components or sub-sections within the question, making it challenging for some participants to select a competency indicator. When questions were difficult to answer, participants also commented that they wished there was another option to select, e.g., "unclear" or "I need more information". Seven (70%) of the 10 questions seen most frequently as "difficult" to answer were misinterpreted.

Words and phrases not recognized or misinterpreted. The third problem area relates to words or phrases not recognized or misinterpreted by participants. Before describing this category, a distinction will be made between words and phrases not recognized or misinterpreted, and survey items categorized as misinterpreted or "difficult" to answer. First, it is acknowledged that some, but not all, misinterpreted or difficult-to-answer questions contained unrecognizable words or phrases. Inaccurate interpretations can be triggered by other causes, just as questions seen as difficult can be attributed to an array of reasons. Second, by using the same verbal probes repeatedly, uncommon words or phrases could be examined separately to uncover how they were interpreted by all participants. Participant narratives, when examined individually, can reveal, in detail, if or how words or phrases are problematic. Third, detailing which words and phrases were unfamiliar and if, or how, they were misinterpreted may inform re-wording revisions aimed at improving the interpretability of the C-NICAS.

Words or phrases identified as problematic were "ICTs", "various types of electronic records", "interoperable", "organizational policies", "information standards", "informatics", "variety of ICTs" and "health information systems" (see Figure 6). They were identified as problematic because three or more participants struggled to recognize or accurately interpret them. In some instances, these problematic words were found in more than one question. In total, these problematic words or phrases affected eight survey questions. Some struggle occurred with five other words or phrases ("pertinent", "applications", "ICT application and systems", "standardized languages", and "system process and functional issues"), but these were not labelled as problematic because fewer than three participants failed to recognize or understand them. Each of the problematic words or phrases will be outlined next in order of how frequently they were not recognized or misinterpreted.

"ICTs" not recognized. The acronym "ICTs" is referred to in six questions in the C-NICAS. During the survey interviews, eight participants stumbled over this word, struggling to interpret its meaning.
When first mentioned, it is defined in a grey heading box directly above Q1 as "information and communication technologies". However, when reading Q1 for the first time, six participants did not immediately see this definition and made comments indicating they did not recognize what "ICTs" stood for. P3 stated, "I've never seen that abbreviation before" (P3); P4 asked, "What is ICT? I have no idea" (P4); P5 said, "I don't know what ICT stands for" (P5); P6 said, "I don't know what ICT devices are" (P6); and P8 explained, "I've been trying to figure out what ICT means and I've been looking at the sheet trying to understand" (P8).

Figure 6. Number of words or phrases not recognized or misinterpreted, per survey question. Bolded words indicate most frequently occurring. N = 8. Data missing for Qs 16-21 from one participant.

Eventually, all participants correctly interpreted ICTs in both Q1 and Q2. In Q11, however, when "ICTs" re-appears, P6 and P7 did not appear to understand what it meant in a different context. P6 read the question, commenting, "oh again with the ICTs" (P6), and struggled to recall what ICTs meant. P7 wondered what an "innovative ICT" was, causing her to remark that she had "no clue" (P7) what the question was asking.

In summary, having a hard-to-spot definition of ICTs at the beginning of the survey appeared to affect the confidence of many participants as they answered more than one question containing this word. It appears this was an unfamiliar term for nearly all participants. Considering that "ICTs" appears in six of the C-NICAS questions, the potential for this acronym to influence the C-NICAS' future interpretability should be considered.

"Various types of electronic records" misinterpreted. Q20 contains the phrase "various types of electronic records" and, as discussed earlier, Q20 was misinterpreted by all participants. Earlier in the survey (in Q6), "electronic records" was accurately interpreted in the context of paper or electronic charting. However, in Q20, participants did not accurately interpret the phrase "various types of electronic records" as per the CASN (2012a) competency indicator from which this survey item emerged. Specifically, there was a lack of recognition of the question's intent to describe the differences between electronic health records (EHR), electronic medical records (EMR), and personal health records (PHR). It was clear participants were familiar with the term "electronic records", as evidenced by their previous experience using electronic charting software programs. However, these software programs represent only one type of electronic record, and it appeared that participants did not know of other types. Several participants questioned what other electronic records the question was referring to.
P5 commented, “I could describe different types of electronic records, but I don’t know if they’re defining the types based on . . . who wrote it or whether it’s actually like computer formatting that defines a different type of record” (P5). The following quote shows how P3 corroborated this sentiment: ‘[T]ypes of electronic records’ is confusing . . . as far as I am aware nurses only chart through Meditech (a charting software program) . . . whether we need to know about other types of electronic records I’m not sure because . . . my understanding is you are supposed to be able to access all your information through Meditech. (P3) All participants found Q20 “difficult” or “average” in difficulty to answer. Participant responses suggest that providing examples could be helpful. To illustrate, P2 stated having an example would have helped as a “trigger” and sighed when she said, “it took a while to figure out. It would be nice if there was an example i.e., like CT scan or labs . . . to help trigger the thinking into various types of electric records” (P2). For her, not understanding Q20 was like “weeding through her brain to try and figure out what they are looking for, for electronic records, what records do I use” (P2). Similarly, P5 found Q20 difficult to answer, “because I wasn’t super confident in my definition of ‘types’” (P5). INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 94 These reactions to Q20 highlight how participants wrestled with the term “various types of electronic records”. From their clinical hospital experience, participants are aware of electronic charting programs but can not describe “various types”. Furthermore, the overall intent of the question asking them to describe the differences between EHR, EMR and PHR is missed by all participants. Without examples embedded in the question, participants individually interpreted the question as asking them how familiar they are with individual software charting programs. “Interoperable” not recognized. The term “interoperable” appears in Q7 and was unfamiliar to five participants. After reading the question during the think-aloud, comments emerged indicating confusion and what seemed to be embarrassed laughter. P1 stated, “ok again I’m confused (laugh)” (P1) and P8 said, “ok again I’m confused (laugh)” (P8). P5 re-read “interoperable” and stated, “I don’t know what that means” (P5), and P7 remarked, “it’s unclear to me what is meant by interoperable health records” (P7). While several participants attempted to interpret “interoperable”, others did not try to guess what it meant. P1 attempted to dissect the word, “inter” meaning intermediate period, and “operable” reminded her operating. She interpreted it as, “in the moment of recording” (P1). P3 tried to guess what “interoperable” meant, but quickly concluded she was not sure. Likewise, P8 stated, “I just jumped over [interoperable] . . . . It sounded important but I don’t know what it means” (P8). Another participant read the question and stated she did not know what interoperable meant, guessing “perable” might be associated with “permeable” then quickly dismissed this idea. P2 also hazarded a guess to understand “interoperable”, “it has something to do with medical records . . . . Interoperable is like between [pause] well, in between operations is intra-operable so it’s not in between operations. I, I can’t even guess” (P2). INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 95 Overall, “interoperable” was not a familiar word. 
Above all, it was not discernable and caused participants to guess or skip over the word. Competency indicators participants selected were low, ranging from “not competent” to “somewhat competent” (one participant selecting NA). Not recognizing “interoperable” is associated with misinterpreting Q7. “Organizational policies” misinterpreted. The term “organizational policies” appears in two questions: Q10, “Complies with legal and regulatory requirements, ethical standards and organizational policies”, and Q12, “Identifies and reports system process and functional issues according to organizational policies”. P2 and P3 misunderstood “organizational policies” in both questions and P7 misunderstood it in Q12. Overall “organizational policies” was misinterpreted on five occasions by three different participants. “Organizational policies” was interpreted as relating to regulatory, legal and ethical standards. The following quote illustrates how P2 viewed “organizational policies” in Q10: [C]omplying with legal and regulatory requirement, ethical standards and organizational policies . . . you’re aware of what is required of you within a legal standard and a regulator standard with CRNBC [provincial regulatory body] and ethical standards. Are you an ethical person or . . . do you tend to be unethical about thing . . . in organizational policies? (P2) Similarly, for P3, “organizational” related to regulatory bodies such as the “new CRNBC” (British Columbia College of Nursing Professionals [BCCNP], provincial regulatory body). Examples P3 gave included maintaining standards of practice and keeping licencing up to date, “that’s what I think of as organizational” (P3). “Organizational policies” also appears in Q12 and was misinterpreted by the same two participants (P2 and P3) as well as by P7. P3 expressed concern that she did not know what the INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 96 question was asking, “um, I have no idea what it’s asking, and I’ve never had to report anything like that I don’t think . . . nor have I found anything like that” (P3). Again, P3 thought the question related to her provincial regulatory board and union: [H]ow things will work . . . if the websites are working and functional issues; it’s common to get glitches or the websites being revamped . . . like how [the provincial regulatory body] is being updated if its keeping up to pace, if its working or not . . . . like if you’re having problems . . . trying to use . . . the BCNU [provincial union] website or the CRNBC . . . website . . . say I was applying for . . . my license or . . . with my ESN [Employed Student Nurse] stuff if the websites there to do that aren’t working, if you’re able to identify that and report it to those organizations. (P3) Similarly, as P2 paused and re-read Q12, she admitted she did not know what “organizational policies” meant. P7 also paused several times reading Q12 and wondered what “organizational policies” are: “I don’t even know what the organizational policies are. I’m sure they’re out there, um but, ya (pause)” (P7). In summary, three participants struggled with the phrase “organizational policies” when it was presented in two survey questions. It appears “organizational policies” was either not understood (P7) or viewed as relating to regulatory bodies (P2 and P3). When “organizational policies” is situated in a question devoid of an institutional health care context, participants interpret it as referring to their professional regulatory body. 
In other words, participants equated the word “organizational” with their regulatory body, not health care institutions.

“Information standards” misinterpreted. The phrase “information standards” in Q7 was misinterpreted by five participants and appears linked to how Q7 was misinterpreted by all participants. In this context, “information standards” refers to standardized clinical terminologies. An array of erroneous interpretations was observed. Several participants tried to guess what “information standards” meant before admitting defeat. P1 associated “information standards” with charting, “How or what you chart . . . the protocol for charting when it’s . . . late or for charting if you made an error” (P1). P6 attempted a guess but then gave up. P3 wondered what “information standards” meant: “not too sure what they mean by information standards . . . the minimum standard of what needs to be recorded, probably like minimum information you need” (P3). It is apparent that two unfamiliar words in Q7 led to feelings of bafflement for some. P7 explained answering this question was difficult because it felt vague, in part, because “there’s no operational definitions of these terms [“interoperable” and “information standards”]” (P7). Likewise, P8 described answering this question as: really hard. I just sort of gave up in a sense. I just picked an answer that wouldn’t make me too committed to one extreme or the other because I didn’t really know what the question was saying . . . . the wording was strange . . . didn’t know what [interoperable] meant . . . [and] ‘articulates the significance of information standards’ didn’t really mean anything. (P8) Participants were unfamiliar with the term “information standards” and, to try and understand the question, hazarded guesses. Without examples embedded in the survey item, these guesses did not result in accurate interpretations. Further compounding this issue was a second problematic word, “interoperable,” in the same question. In other words, alongside “interoperable”, and lacking a clear reference, “information standards” was unfamiliar and not recognized by participants.

“Informatics” misinterpreted. In the last question of the survey, Q21, the word “informatics” is mentioned for the first time. Aside from “informatics” appearing in the title of the survey, “Canadian Nurse Informatics Competency Assessment Scale”, this word is not mentioned elsewhere in the survey. Most participants reacted with bewilderment to “informatics” when they read it for the first time in Q21. In total, four participants misinterpreted the term (P3, P4, P5, and P6). Again, a wide array of interpretations was observed. P3 read the question and admitted she couldn’t “break down the word ‘informatics’ with regards to these ICTs” (P3). She mistakenly summarized “informatics” to mean: [F]eedback if it’s to improve health care systems and quality of care . . . which is kind of the point of the technologies to improve it . . . I’m thinking the information . . . the technology can provide on its usage whether its statistics or stuff like that . . . whether it’s being used effectively . . . and its benefits. (P3) This comment reflects how P3 erroneously thought “informatics” was statistical analysis of technology use necessary for improving technology. She admitted that she wasn’t sure she correctly interpreted the question, “that one word made it a little more difficult.
Otherwise I think I understand what it’s asking but . . . what they mean by informatics made it a little confusing” (P3). Not surprisingly, she stated it was “difficult” to decide on her competency. P4 also incorrectly interpreted “informatics”, describing it as pictorial-based educational pamphlets. The following quote indicates how P4 described what informatics meant to her: [T]he pictures and . . . little blurbs . . . (laughter) that are put . . . on the back . . . bathroom stall doors, people like to read and understand. That’s what I’m thinking informatics are unless I’m completely wrong and had the wrong system in my mind . . . INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 99 it’s a very . . . easily understood and engaging way to present . . .health information to health care professionals in general. (P4) In contrast, P5 misinterpreted “informatics” as obtaining information and resources online. During her think-aloud she described the benefits of “informatics” as important for informing her about new ideas or products that have been tested or likely to bring a benefit. She gave examples of how she stays informed and how she uses online resources to inform her nursing care of patients. P5 did not doubt her interpretation, stating she understood what “informatics” meant and how it applied to her. In contrast, P6 was hesitant in her interpretation. She thought “informatics” related to security features of health care systems. She grappled with the question because it did not make sense to her and because she could not see how informatics correlated with quality of care. It is interesting to note two other participants interpreted “informatics” correctly, but both were unsure of their interpretation of the word. At the start of the interview, P1 commented on the title of the survey and then during Q21, commented, “‘Informatics’ . . . that term again” (P1). She speculated “informatics” could mean “information technologies . . . [but] it might be asking something a little bit different” (P1). Similarly, when P8 read question 21, she stated, “so I don’t know what the word ‘informatics’ means but I’m going to assume that it means . . . the different ways to access information and technology” (P8). In summary, the appearance of the word “informatics” in Q21 triggered feelings of confusion for six participants as they did not recognize this word. It noted, however, that two participants eventually navigated their way through to an accurate interpretation of the word. The range of misinterpretations varied widely, and it is noted the four participants who misinterpreted “informatics” in Q21 misinterpreted the overall question. Without defining what INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 100 “informatics” means or providing examples of informatics, participants individually (and widely) interpreted its meaning. “Variety of ICTs” misinterpreted. In Q16, the phrase “variety of ICTs” presented as a problem to three participants. When they read Q16 aloud, they recalled feeling confused earlier in the survey when they first encountered the acronym “ICT” in Q1. As well, they did not understand what a “variety of ICTs” meant in this context. Each of these participants selected “somewhat competent” as their answer. P2 admitted not knowing what “variety of ICTs” meant: “okay we’re back to one of these questions . . . [variety of ICTs] is a very vague term. I’m not quite sure” (P2). She described the question as “somewhat frustrating . . . because I don’t like being not competent . . . 
I don’t want to put not competent, but I don’t know exactly what I need to be competent in” (P2). She explained she chose “not competent” because she didn’t know what “variety of ICTs” means. P3 admitted she too felt stuck understanding the phrase “variety of ICTs” in the context of this question: I can name a few . . . not specified how much is a variety . . . there is probably more that I’m not thinking of . . . I feel like I’m good at doing one thing of them and maybe another thing I’m not as good at. Would have been helpful to know what ICTs are being referred to. (P3) P3 described the question as very broad and general and could only think of a few examples. P6 reacted similarly not recognizing or understanding the term “ICTs”, “again with the ICTs . . . . I need to know what . . . falls in that scope of an ICT” (P6). She then accurately interpreted the phrase as different technologies and communications around the health care setting. INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 101 To sum up, “variety of ICTs” posed as a challenge for three participants in Q16 and served as a reminder that having “ICTs” appear early in the questionnaire without clear definition appears to be distracting for some and misleading for others. It is further noted that adding clarity to this phrase may have spillover effects as “ICTs” is mentioned in the C-NICAS a total of six times. “Health information systems” misinterpreted. Q19, “Describes the various components of health information systems”, contains the phrase “health information systems” which was misinterpreted by three participants. Q19, overall, was misinterpreted by 5 participants including these three. Comments made by these three participants provide insight into how they grappled with the phrase “health information systems”. P2 interpreted it as health/body systems and saw the question as asking if she could describe how to chart pertinent patient data such as cardiac and respiratory assessments. Although she summarized the question in her own words as, “do I have an understanding and am I able to explain the online charting?” (P2), she remained doubtful she had fully captured the meaning of “health information systems”. P4 hesitated over the term “health information systems”, misinterpreting it as, “ways people can get health information” (P4). Similarly, she thought she may not have completely understood it. P6 misinterpreted “health information systems”, thinking it referred to patients asking for health care-related information; however, she felt confident she understood what it meant. Overall, inaccurate interpretations of “health information systems” were linked to the misinterpretation of Q19. All three participants who didn’t understand this phrase misinterpreted the question. Furthermore, without examples of different components of health information systems, participants individually interpreted this phrase. Instead of correctly viewing the INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 102 question as the various ways in which information and communication technology intersects with health care, participants interpreted the question as accessing health-related information online. Summary of words and phrases not recognized or misinterpreted. Eight words or phrases were not recognized or misinterpreted by three or more participants and affected a total of eight questions. 
As the intent of these words and phrases is clarified in the CASN (2012a) competency indicators, I was able to ascertain which words or phrases were inaccurately interpreted. Several of these words or phrases, such as “interoperable”, “information standards” and “informatics” embody the world of information technology and were unfamiliar to most participants. It is interesting to note that of the eight questions containing these problematic phrases, six were still interpreted accurately and two were not. It is important, however, to comment on the degree of bafflement and frustration felt by many participants as they interacted with these words or phrases. It is noted several of these issues may be preventable. For instance, the acronym “ICTs” in six of the survey’s questions is devoid of an easily locatable definition. This could be addressed by clearly defining ICT upfront, instead of placing it in a box above the first question. Without examples or readily available definitions, these eight words or phrases which three or more participants did not recognize created issues of concern as participants wrestled to understand them. While these words and phrases were not strongly associated with misinterpreting the overall question, it is noted that four of these questions were labelled as “difficult” to answer by three or more participants. The extent to which these problematic words or phrases may influence engagement for future respondents is unknown. Other problems identified by participants. An assortment of observations and INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 103 unexpected comments point to three new issues of concern. It is argued here that these issues should not be set aside; instead, they merit standing alone as a category of problems unique from those previously described. A note of explanation is offered upfront to delineate how these unexpected observations became a category of problematic issues. First, many of these problems emerged from the data as participant-driven comments. In other words, while I intentionally weighed questions as misinterpreted (or not) and asked if questions were easy or difficult to answer, these comments surface as unanticipated issues. When unanticipated issues occur, the researcher must not dismiss these findings if they do not fit with the current explanation of the process (Collins, 2015b). Instead, these unforeseen issues must be examined closely, particularly if they recur on more than one occasion. Second, as analysis is undertaken, reflection must occur on how to treat these cases (Collins, 2015b). The researcher should ask her/himself such questions as, “Is there a pattern not yet seen related to these cases?”, “Is there anything in this circumstance to explain this?”, or “How engaged was the interviewer or participant during this part of the interview?” (Collins, 2015b). After engaging in such reflection, I concluded these newly identified problematic areas could be linked as explanations for why questions were misinterpreted or “difficult” to understand. Furthermore, I noted that while unanticipated in nature, they stemmed from regularly occurring participant comments. This led me to determine that these issues were significant to merit a category of their own. These issues have been labelled as “the question is asking me more than one thing”, “without experience I don’t know how to answer”, and “aspects of question unclear” (see Figure 7). 
The first issue, “the question is asking me more than one thing”, was brought up 18 times. The second, “without experience I don’t know how to answer”, was mentioned 17 times, and “aspects of question unclear”, 11 times. Other comments such as “grammar is confusing”, “I think this is a yes or no question”, and “how does being able to describe or articulate relate to competency assessment?” were noted but not labelled as problematic as each of these were one-time observations. The three issues categorized as significant will be outlined next in order of frequency.

Figure 7. Number of other survey problems per question. Bolded items most frequently occurring. N = 8. Data are missing on Qs 16-21 from one participant.

“The question is asking me more than one thing”. The comment “the question is asking me more than one thing” was first noted in Q3 and subsequently mentioned in Q5, Q6, Q10, Q13, and Q15. Questions containing different components caused participants to think they were responding to multiple separate questions within one question. At the heart of this sentiment was a difficulty determining one’s competency for the different components, particularly if their competency varied for one or more of the sub-questions. It is noted this comment, mentioned 18 times, was mentioned most often by two participants (P7 and P3).

This comment was first noted in Q3, “Performs search and critical appraisal of on-line literature and resources”. Four participants noted the question was referring to two steps—searching and appraising—and commented they were more competent at one than the other. Additional comments were made related to searching and appraising in two different places—work and school—which added complexity to the question. Again, comments were raised about knowing how to do one better than the other. Q5, “Assists patients and their families to access, review and evaluate on-line information”, was viewed as containing five different components (for patients and their families, and access, review and evaluate). Similarly, Q6, “Describes the processes of data gathering, recording and retrieval in paper and electronic records”, was viewed as containing multiple aspects and this generated frustration as participants did not know how to rate themselves with each part. This was illustrated when P7 stated, “oh boy this is the exact same issue previously . . . again . . . different components to this question to which I would likely answer differently if they were separated into simple clear questions” (P7). In Q10, “Complies with legal and regulatory requirements, ethical standards and organizational policies”, four participants commented on the challenge of dissecting a question with multiple components. Again, this affected their selection of a competency indicator. One participant described how she found the average of each section.
Another felt competent with most of the parts but not others. P7 described how answering a question containing different components made her feel: I always get (laugh) a little bit annoyed with questions like this because they’re so broad . . . . and sometimes it’s confusing to keep all those [parts] straight . . . . If those were all separate, I would probably be able to answer them a bit better. (P8) Q13 and Q15 also triggered “the question is asking me more than one thing” comments. Q13, “Maintains effective nursing practice and patient safety during system unavailability”, was seen as two questions, and Q15, “Recognizes the importance of nurses’ involvement in the design, selection, implementation and evaluation of ICTs applications and systems in health care”, was viewed as four separate questions (nurses’ involvement in design, nurses’ involvement in selection, etc.). For P7, she felt frustrated answering both these questions. In summary, while it is noted P7 most frequently commented on the problem “the question is asking me more than one thing”, it was raised at least once by every participant (except for P2). In most instances, these questions contained two to four components while one question (Q5) was viewed as containing five separate questions. Significantly, when participants felt more (or less) competent in one aspect of the question than another, deciding on their competency triggered feelings of frustration for some. INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 107 “Without experience I don’t know how to answer”. Another criticism heard during the interviews was not knowing how to rank one’s competency without previous experience. This comment was nearly as frequent as the last issue discussed in the previous section; in total it was mentioned 15 times. A range of competency responses were observed when participants answering these questions. The three questions that triggered this comment are briefly outlined. In Q5, “Assists patients and their families to access, review and evaluate online information”, four participants did not have experience helping patients and their families to review online information. Participants selected a range of competency indicators (from “not competent” to “somewhat competent”, and NA). In Q11, “Advocates for the use of current and innovative ICTs in health care”, participants were confused by the term “current and innovative ICTs” and struggled to interpret the question. Moreover, since they did not fully understand the question, they wondered whether they had any experience, or none at all. For responses, they selected “not competent”, “somewhat competent” or NA. In Q13, “Maintains effective nursing practice and patient safety during system unavailability”, participants also commented on their lack of experience with system unavailability. While some participants had experience with minor technology glitches (e.g., one computer or medication cart was not working temporarily), no one had ever experienced system unavailability. Without experience, two participants anticipated they would be competent, whereas four participants selected NA. To sum up, lack of experience created a range of reactions when answering some of the survey’s questions. Despite this, participants did not avoid selecting competency indicators, most commonly selecting NA or “not competent”. Having limited experience created a challenge for many participants when selecting a competency indicator. 
For some who chose “somewhat competent” or “competent”, they did not know if they would be competent (but INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 108 hoped they would), while others who knew they were not yet competent, picked “not competent”. Still others thought lack of experience was proof that the question did not yet apply to them, and selected NA. Without addressing this issue, future results may not accurately reflect exactly how lack of experience influences participant’s responses. “Aspects of question unclear”. This remark emerged 11 times during the interviews. To clarify how this comment is distinct from the problem described earlier as “words or phrases not recognized or misinterpreted”, the following explanation is offered. When words or phrases were unfamiliar or not recognized, participants either misinterpreted or did not recognize certain specific words or phrases. In this category however, a different aspect of the question was unclear, unrelated to a specific word or phrase. For instance, participants remarked they were not sure if the question meant this or that, or they commented that the question (or aspects of it) appeared vague. This category also encapsulates comments such as, “I don’t understand how this concept relates to the question” or “how can I measure my competency in this area?”. In other words, “aspects of question unclear” captures a variety of comments beyond wording and phrasing issues, and instead refers to specific or general features of survey items. In this section, participants raised conceptual concerns about how one part of question was vague or unclear. If a question was labelled as confusing or vague it was also placed in this category. “Aspects of question unclear” was mentioned four times in Q11, twice in Q14, and once in Qs 9, 12, 19, 20, and 21. This problem sometimes interfered with participant’s ability to accurately interpret these survey questions. In Q11, “Advocates for the use of current and innovative ICTs in health care”, one participant wondered if the question included new and upcoming technologies not currently in use. One participant remarked she did not feel confident interpreting what she should be INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 109 advocating for, and another wondered what the difference between a “current” and an “innovative” ICT was. Similarly, P8 wanted clarification regarding advocacy and technology because she associates advocacy work with people, not technology. In Q14, “Demonstrates professional judgment in the presence of technologies”, two participants found aspects of the question unclear. One participant stated she did not know what the question meant “at all” and another participant described the question as “vague”. Both accurately interpreted the question. Aspects of four other questions were unclear for some participants. In Q9, “Critically evaluates data and information from a variety of credible sources to inform nursing care”, one participant felt a definition of “credible” was missing and did not know what a “variety” of sources entailed. The following quote reveals how, in Q19, “Describes the various components of health information systems”, P8 misinterpreted the question and wondered what to judge her competency on: I’m not clear on whether it’s asking me to just describe all the different components . . . whether it’s just resources to access or things to know about before making either decisions or moving forward with something . . . 
or if it’s asking me to actually be able to know how to access all of them. (P8) Similarly, in Q20, “Describes various types of electronic records used in care”, P8 remarked it was strange to evaluate her competency describing something, “am I competent at describing something?” (P8). This question, overall, was misinterpreted by P8. In contrast, while P8 found aspects of Q21 unclear, describing it as vague, she did interpret the question accurately. Comments concerning aspects of some questions being unclear affected a total of six questions. While it did not highly correlate with misinterpreting questions, it appeared to create hesitation for most participants as they deliberated how and what to answer. Complex thought INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 110 processes were observed as participants wondered about the question’s intent. It is unknown how these cognitive processes could influence future respondent’s responses or overall engagement with the survey items. Summary of other problems identified by participants. This collection of three other issues was amassed from unexpected observations that did not conform to the study’s initially anticipated patterns. “The question is asking me more than one thing” was the most commonly cited, followed by “without experience I don’t know how to answer the question”, and thirdly “aspects of question unclear”. Questions containing more than one question made it difficult for participants to determine their competency particularly if they felt competent in one aspect more than another. It is noted that this comment was raised most often and mentioned by all but one participant. This comment was associated with expressed feelings of frustration and bewilderment. Lack of experience was another frequently occurring trigger for not knowing how to answer certain questions. In this instance, participants fell into one of three categories: (a) without experience, they believed the question did not apply (and thus answered NA); (b) without experience, they imagined they would be competent in the future (and thus rated their competency as “somewhat competent” or “competent”); (c) without experience, still others viewed themselves as not yet competent (and thus rated themselves as “not competent”). Without a response option to indicate lack of experience, future results of the C-NICAS may be affected by this discrepancy in responses. The third issue, “aspects of question unclear” differs from words or phrases not recognized or misinterpreted. This is primarily because these comments transcended specific words or phrases; for instance, either specific or general aspects of questions were unclear or vague, or participants did not know if the question was asking x, y, or z. “Aspects of question INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 111 unclear” was sometimes linked with a question being misinterpreted. When aspects of a question were unclear, participants used their think-aloud time to verbally process what the meaning of the question was. If a survey is administered without the cognitive process of thinking aloud, it is unknown if interpreting the question would be more (or less) difficult to answer. In other words, how does the opportunity to think aloud affect survey responses when compared to completing a survey without cognitive interviewing? Does think-aloud help or interfere with participants interpreting questions accurately? Unfortunately, these interesting questions lay beyond the scope of this study. 
Evidence, however, has shown how several questions were vague or unclear. Furthermore, as it is thought these concerns can be addressed using exemplars and expanded terminology from the corresponding competency indicators, wording revisions are suggested to improve clarity. Chapter Summary This study aimed to address the research question, “How do fourth-year nursing students interpret and respond to survey questions on the C-NICAS?” Primary interview strategies were designed to detect wording and interpretability problems with individual survey items in the CNICAS. Correspondingly, the data revealed eight (38%) of the 21 questions were misinterpreted by three or more participants. Significantly, three questions were misinterpreted by all (100%) participants. Further to the principal aims of the study, eight words and phrases were identified as not recognized or understood by three or more participants. Specifically, these words and phrases affected a total of eight questions. Surprisingly, words or phrases not recognized or misinterpreted did not correspond with all misinterpreted questions. The combined effect of think-aloud, and scripted and spontaneous probes unlocked new and revelatory data. One such INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 112 key was asking participants to describe how easy or difficult it was to select an option from the competency indicators provided in the survey. During each interview, participants were asked about the ease or difficulty of each question and given an opportunity to explain why. This scripted, participant-driven verbal probe invited participants to freely comment on why the question was easy or difficult to answer. As interviewer, I did not ask leading questions or solicit specific responses when participants were explaining why. These comments, along with others stemming from different aspects of the interviews, resulted in an array of explanatory data, from which three new patterns emerged. These categories have been summarized as, “the question is asking me more than one thing”, “without experience I don’t know how to answer” and “aspects of question unclear”. The frequency with which these three problematic issues occur is noteworthy—the first is mentioned 18 times, the second 17 times and the third 11 times. From the cognitive processes of the participants, new ways of perceiving the survey items were observed. These narratives appear as explanations for item misalignment, shedding light on why participants struggled when interpreting certain questions. These unanticipated findings also described how participants interacted with the survey items as well as what they wished was improved about the survey. Just as significant were the expressions of frustration (and, for some, apathy) when faced with choosing a competency indicator when they did not have experience, or when they believed the question was asking them more than one thing. This chapter has presented data results from eight cognitive interviews conducted with fourth-year nursing students. Beatty and Willis (2007) suggest analysis of cognitive interviewing data “be based on whether apparent problems can be logically attributed to question characteristics” (p. 301). 
This has been demonstrated on several fronts, when: (a) problematic words or phrases reflected jargon; (b) participants struggled to select a competency indicator because they lacked experience; (c) participants admitted frustration or apathy because the questions were asking them more than one thing; and (d) “difficult”, “vague” or “misinterpreted” survey items were associated with questions containing complex and unfamiliar informatics-related concepts. Presented within this chapter are categories of problem areas, each supported qualitatively by the narrative processes of participants.

In summary, eight survey items were misinterpreted by three or more participants, and 10 questions were identified as “difficult” to answer by three or more participants. It was presumed that isolating problematic words or phrases would explain item misalignment. As such, the interview script consisted of inquiries about certain words or phrases in nearly every survey item. While eight words or phrases were identified as being problematic for three or more participants, an association between these words or phrases and item misalignment was not strong. Many of the misinterpreted questions contained complex informatics concepts and/or were missing important context from the original competency indicators. Notably, patterns of other unexpected data emerged as a result of think-alouds and verbal probes.

Participants generously shared their reactions to all survey items through their think-aloud responses and answers to verbal probes. This chapter has outlined in detail which questions were misinterpreted and why questions were viewed as easy or difficult to answer. In addition to understanding which items were misinterpreted and which questions were viewed as “difficult” to answer, a list of words and phrases not recognized or understood has been compiled. Additionally, three frequently occurring explanatory descriptions, described in this chapter as “other problems”, have been outlined.

Overall, the C-NICAS contains several issues that may affect its interpretability with future respondents. These issues include the misinterpretation of many survey items, the discovery of several problematic words or phrases (many viewed as avoidable), and a concerning number of questions viewed as “difficult” to answer. Three additional categories of issues offer some explanatory details concerning how participants were stymied in interacting with and interpreting survey items. The following chapter offers a discussion of these results in the context of current literature findings.

Chapter Five: Discussion

The primary aim of this research study was to address the research question, “How do fourth-year nursing students interpret and respond to survey questions on the C-NICAS?” Related questions I investigated were: “Are the questions in the C-NICAS scale interpretable?”, “Is the wording clear?”, “Are the questions too difficult to understand?”, “Are the questions unacceptably vague?”, and “Is there an association between unclear wording or phrasing and item misalignment?” I determined the C-NICAS, as a newly developed scale, might benefit from further testing and evaluation. Applying the qualitative research technique of cognitive interviewing can be a necessary step toward a scale’s refinement and development (Bode & Jansen, 2013).
Cognitive interviewing can help develop a fledgling survey by peering into the functioning of each survey item (Boeije & Willis, 2013). As a result, wording improvement suggestions can emerge as well as other potential sources of measurement error (Padilla, Benítez, & Castillo, 2013). Data gathered from cognitive interviews on the C-NICAS revealed several survey items that were misinterpreted, words or phrases that were not recognized or understood, questions viewed as difficult to answer, and several other explanatory findings. In this chapter, these findings will be linked to current literature findings. Bringing cognitive interviewing data findings to light after analytic scrutiny is likely to have wider implications. As such, practical implications of these research results will also be considered in Chapter Six. Limitations There are several limitations to this study. First, the sample is not representative. The second limitation relates to my lack of experience as a cognitive interviewer, and a third concerns sex disparity. Each limitation will be outlined next. INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 116 Sample representation. While sampling efforts were targeted to a population closely resembling who the C-NICAS’ competencies were intended for, entry-to-practice nurses, this sample is not representative. By drawing on a convenience sample, from which a purposive sample was constructed, those who volunteered may be atypical in some respects from other entry-to-practice nursing students across Canada. For instance, they may be attracted to the study for personal reasons (Dionne, 2014). Beatty and Willis (2007) state when participants are chosen by convenience for cognitive interviewing, such samples are not representative of a larger population and, as a result, the extent of questionnaire problems in the larger population can not be established: [Cognitive interview researchers] only identify question characteristics that are believed to pose problems with some unspecified frequency. Other than that, the specific guidance that is available advocates demographic variety of respondents, and that participants should include people relevant to the topic of the questionnaire being tested. (p. 295) Beatty and Willis (2007), while admitting demographic variety does not ensure representativeness, argue that “casting as wide a net as possible over varying circumstances maximizes the chances that discovery will be effective” (p. 296). To mitigate the effects of homogeneity, participants were purposively selected to achieve a variety of informatics experience and competency. Researcher proficiency. The second limitation relates to my lack of proficiency as a cognitive interviewer. Cognitive interviewers must possess excellent listening skills, understand the study’s design well enough to respond to what participants do or do not say, be able to stick to the script for consistency, and ensure the pace of the interview allows for participants to freely share their thoughts (Gray, 2015). To reduce bias when interviewers conduct face-to-face INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 117 interviews, Polit and Beck (2017) assert it is essential that interviewers create “an atmosphere that encourages candour” (p. 279) and accept all expressed opinions as ‘natural’. Conducting cognitive interviews for the first time required a high level of preparation and critical reflection, both of which I endeavored to undertake. Sex. 
Another limitation was the absence of males who volunteered for my study. Consequently, results may not be transferable to males. When compared with females, evidence suggests that males may possess higher confidence levels when learning about technology (Maag, 2006). Similarly, Wishart and Ward (2002) found males held more positive attitudes towards computers and were more likely to use them than females. Their research also suggests that males possess a stronger internal locus of control over technology than females (Wishart & Ward, 2002). As described earlier, after noting only females volunteered for my study, intentional efforts were made to recruit males from the fourth-year cohort at TWU. Unfortunately, these efforts did not yield any male responses. Discussion of Findings The study’s findings revealed that many survey items were misinterpreted. Eight of the 21 items (38%) were misinterpreted by three or more of the participants, including three which were misinterpreted by all. These misinterpretations were linked to unfamiliar informaticsrelated concepts that lacked context or exemplars. In some instances, participant interpretations deviated very far from the original intent of the question. As well, many words or phrases were misunderstood or misinterpreted. These words or phrases included unfamiliar terms and often encompassed informatics jargon. Also, seemingly ordinary phrases were misinterpreted in an informatics-specific context. Findings additionally revealed how many items were perceived as difficult or vague to answer or were double-barrelled, asking the reader to consider up to five INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 118 separate questions. Difficult questions were equated with those that were hard to understand or contained words or phrases that participants did not understand. In these instances, common narrative responses included feelings of frustration, apathy, and disengagement. Finally, study results revealed that when participants did not have informatics experience, they did not know how to select from the competency indicators provided, and stated they wished other options were available (e.g., “I don’t understand”, “I need more information”, or “question unclear”). In this next section, main findings from my research study will be linked to current literature. Specifically, study findings will be discussed in relation to: (a) established principles of survey design; (b) benefits of improving the interpretability of the C-NICAS; (c) education preparedness; (d) pilot testing and pre-testing on target populations; (e) statistical validity, response error and inferences; and (f) applicability of suggested survey revisions. Principles of survey design. Streiner and Norman (2008) maintain effective questionnaires must contain interpretable survey items and that basic criteria should be adhered to when deciding how to achieve interpretability. This recommended criterion includes keeping the reading level consistent with that of a 12-year-old, reducing ambiguity and value-laden words, and avoiding jargon and double-barrelled questions. Of these survey design principles, jargon, ambiguity and double-barrelled questions appeared as problematic issues in the CNICAS. These design principles will be discussed in the context of my study’s findings. Jargon. 
When writing survey items, jargon, or ‘technical vocabulary’, can easily slip into a questionnaire (Streiner & Norman, 2008): Since we use a technical vocabulary on a daily basis, and these terms are fully understood by our colleagues, it is easy to overlook the fact that these words are not part of the everyday vocabulary of others, or may have very different connotations. (Streiner & Norman, 2008, p. 80)

Jargon such as “ICTs”, “interoperable”, “health information standards”, “information standards”, “informatics” and “standardized languages” was found in different items of the C-NICAS survey. These words or phrases were frequently not recognized or misinterpreted. Notably, interacting with these words caused bafflement, frustration and, on occasion, apathy during participant response-making. In total, eight words or phrases were not recognized or understood by three or more participants.

Rapport between interviewer and participant can be enhanced when familiar language is used and jargon is avoided (DeJonckheere & Vaughn, 2019). Using cognitive interviewing to revise a questionnaire, Jobe and Mingay (1989) replaced complex language with simpler words and observed reduced comprehension problems with survey items. To reduce jargon in the C-NICAS, some technology-laden words could be substituted with more familiar words, and exemplars could be included in the survey items. Segal, June, and Marty (2019) maintain that jargon can encumber successful communication in an interview, arguing that when questions are clear and understandable, pertinent information is easily obtained, rapport and trust established, and communication enhanced. A balance must be struck, however, between ‘talking down’ and considering the participant’s education, and cognitive and intellectual capacities (Segal et al., 2019). While unrecognizable words and phrases should be omitted in the C-NICAS, the use of some information technology terms may be unavoidable. In these instances, providing carefully selected examples in parentheses may circumvent this issue.

Ambiguity. According to Podsakoff, MacKenzie, Lee, and Podsakoff (2003), when items are ambiguous, participants may respond to them randomly or by using their own heuristic approach. In my research, there were many startling deviations from accurate item interpretations. The literature seems to indicate that these personalized ‘idiosyncratic meanings’ stem from participants’ own individual response tendencies (Podsakoff et al., 2003). Item complexity and/or ambiguity, they assert, may result from a variety of issues including: words with multiple meanings, colloquialisms or technical jargon, or unfamiliar or infrequently used words (Podsakoff et al., 2003). Streiner and Norman (2008) concur, suggesting ambiguity in a survey item can stem from wording issues or ineffective response alternatives. Findings in my study indicated that participants, in the absence of understanding certain questions, often gave widely varying responses. Furthermore, for several questions, comments emerged relating to wanting another option to select. These suggestions included: “question unclear”, “I need more information”, “don’t know”, or “I don’t understand”. When a question is perceived differently by different participants, item wording may need to be adjusted, or responses may have to be reconsidered (Streiner & Norman, 2008).
This is also the case when response alternatives appear vague or hard to select from for some participants (Streiner & Norman, 2008). When questionnaires are designed, items that are too broadly defined or ambiguous should be removed or modified in order to improve survey interpretation (Salomon, Gasquet, Mesbah, & Ravaud, 1999; Tong, Sainsbury, & Craig, 2007; van Teijlingen & Hundley, 2001). Chenail (2011) agrees, arguing that pre-testing plays an important role in addressing instrumentation and measurement biases. Among other strategies, he maintains the importance of identifying difficult or ambiguous questions and discarding them when necessary (Chenail, 2011). Furthermore, Chenail (2011) maintains the importance of establishing a range of appropriate responses, and/or re-wording or re-scaling questions that were not answered as intended. While this study’s results do not point to the removal of any one survey item, wording improvement has been recommended.

Ambiguity may also occur when respondents participate in a survey viewing themselves through specific attributes for which they were recruited. Jenkinson, Peto, and Coulter (1996) found that a commonly used health questionnaire, the Short Form 36 (SF-36), known to perform well in the general population, was perceived differently by a sub-population. The researchers were surprised to find some items were perceived as misleading or ambiguous. Specifically, as recruitment had been aimed at those with a medical condition, these respondents tended to answer many questions in the survey as they related to this one aspect of their health. Jenkinson et al. (1996) recommend that pre-testing trials be undertaken on each population the questionnaire will be carried out on.

In my study, participants were recruited as fourth-year nursing students. When they completed the C-NICAS, they were cognizant they completed it as a small cohort of fourth-year nursing students. Did this contribute to the unusually high number of ambiguous items? It is interesting to note how some C-NICAS questions appeared to be interpreted expressly through the lens of being a student. For instance, participant comments mentioned faculty and clinical instructors, and conducting online searches for the purposes of school. Others stated they weren’t sure they were expected to be competent in certain areas while being a student. The extent to which this is an issue (if at all), or how it may have affected item ambiguity, is a speculative but interesting question. If being identified as a cohort of students affected item ambiguity, future survey design decisions should be weighed with this issue in mind. Also, questionnaire data from a large population of students could be misleading or may not be reliable enough to base decisions on.

Double-barrelled questions. A double-barrelled item is one which “asks two or more questions at the same time, each of which can be answered differently” (Streiner & Norman, 2008, p. 79). Results from my study revealed many items containing multiple components. Furthermore, participants felt frustrated or uncertain how to indicate their competency when they were more competent at one aspect than another. Choi and Pak (2005) point out that double-barrelled questions “make it difficult for the respondent to know which part of the question to answer and for the investigator to know which part of the question the respondent actually answered” (p. 2).
A helpful way to recognize double-barrelled questions is to look for the words ‘and’ or ‘or’ in the wording (Williams, 2003). The use of two or more verbs in one question is also a telltale sign of a double-barrelled question (Lietz, 2010). Double-barrelled questions in the C-NICAS were mentioned eighteen times and affected six questions. The word ‘and’ is present in each of these questions, as is the use of more than one verb. Krosnick and Presser (2010) distill conventional wisdom about double-barrelled questions and suggest researchers ask “about one thing at a time” (p. 264). Rewriting double-barrelled questions reduces the difficulty encountered when interpreting participant responses during data analysis (Ng, 2006). Specifically, each question must be written separately (Streiner & Norman, 2008). This is recommended for several of the C-NICAS items.

In summary, study findings revealed how design aspects of the C-NICAS interfered with participant response-making and contributed to the misinterpretation of survey items. Principles of good survey design such as avoiding jargon, ambiguity, and double-barrelled questions are linked with improved validity (Streiner & Norman, 2008). Furthermore, good study design can enhance rapport and trust between researcher and participant (DeJonckheere & Vaughn, 2019). Solutions suggested in the literature, such as re-wording to simplify language or separating compound questions, have been discussed.

Satisficing, acquiescence and the Dunning-Kruger effect. Survey researchers rely on respondents to interpret survey items the way they intended each item to be interpreted. Furthermore, survey researchers seek genuine and truthful responses. When true respondent behaviour is masked or suppressed, accurate responses do not emerge. Respondent behaviour during survey-taking may be influenced by the survey itself. Specifically, phenomena such as satisficing and acquiescence can interfere with participant response-making, and create measurement error (Streiner & Norman, 2008). As well, the Dunning-Kruger effect may also occur, particularly in low-performing participants who overestimate their abilities (Mahmood, 2016).

Satisficing, or the reduction of effort to give optimal answers, is fostered by low participant ability or motivation and high task difficulty (Krosnick, 1999). According to Brenner (2017), bias occurs from satisficing when: [T]he respondent reflects on his or her self-concept—how do I see myself and which identities are important to me?—and uses this information to answer the question rather than systematically and exhaustively scouring his or her memory for instances of the behavior, enumerating them, and reporting the answer. (p. 544) Satisficing is more likely to occur the greater the task difficulty (Krosnick, 1999). Survey findings suggest this occurred when participants admitted “giving up” while trying to interpret a question they saw as “difficult” to answer.

Similarly, acquiescence, “the tendency to endorse any assertion made in a question, regardless of its content” (Krosnick, 1999, p. 552), is influenced by many factors, including limited cognitive ability, fatigue or difficult questions. Wording of each item in the C-NICAS is framed in the affirmative. In other words, each survey item lists an informatics-related skill that respondents may believe they ought to be competent in.
If this is the case, affirming one’s competency without ‘scouring’ their memory could possibly occur. It is unknown to what degree participant behaviour in this study was influenced by these two phenomena. While research from the interviews does not scrutinize score results, but rather narrative processes of each participant, it is acknowledged that the observed tendency to either present oneself favourably or agree with question content may have interfered with participant responses. Acquiescence, like satisficing, is more common when a question is seen as “difficult” to answer, when respondents are fatigued, or when a question is viewed as less personally important (Krosnick, 1999). Study findings revealed issues such as double-barrelled questions, jargon and ambiguity were linked to item “difficulty”. Addressing these issues may reduce the level of difficulty to complete certain items which, may in turn, reduce acquiescence for future respondents. Nursing students, when assessing themselves with specific tasks may overinflate their capabilities, a finding suggestive of the Dunning-Kruger effect (Theron, Redmond, & Borycki, 2017; Tse et al., 2014). This may have been observed when P4 did not want to assess herself as ‘not competent’ because she wanted, “no extremes” (P4) for her answers. To reduce the Dunning-Kruger effect, it is suggested that questionnaires contain a mix of both positive and negatively worded items. However, for competency indicators in the C-NICAS, reframing items to word them negatively may be an unsatisfactory solution. In summary, study findings point to what may be described as a gap between ideal accurate responses and participant behaviour. This gap may be explained, in part, by satisficing, INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 125 acquiescence, or the Dunning-Kruger effect. These phenomena are more likely to occur when survey items are difficult to answer. It may be possible to reduce these less than ideal responses by determining how issues with items perceived as “difficult” can be addressed. Benefits of improving the interpretability of the C-NICAS. The uptake of informatics in nursing curriculum has far-reaching implications for health care (CNA, 2017). Informatics competence is linked with improved patient care, increased patient safety and patient care quality (Darvish et al., 2014). Tele-nursing, e-health education programs and e-education for professional development are available irrespective of place or time through the portal of technology (Darvish et al., 2014). NI preparedness, however, is lacking in undergraduate nursing curricula in Canada (Nagle & Clarke, 2004; Ronquillo et al., 2017) and elsewhere (De Gagne, Bisanar, Makowski, & Neumann, 2012). Establishing an effective assessment tool for evaluating CASNs (2012a) NI competencies would be a critical benchmark to this aim. Wording suggestions stemming from this study’s results are aimed at improving the interpretability of the C-NICAS. By refining the C-NICAS, its usefulness and reliability may be improved. This, in turn, may potentially increase its uptake as a tool to measure entry-to-practice NI competencies. An abundance of NI self-assessment tools can be found in the literature, including the recently developed C-NICAS. In the Canadian context, a recent effort with an American-developed tool, the Staggers Nursing Computer Experience Questionnaire (SNCEQ), was used to assess NI uptake. 
To contrast and predict the C-NICAS’ future utility against this tool, a closer examination of how the SNCEQ was used in a Canadian context will be made. The SNCEQ was recently used in Ontario, Canada, to assess uptake of NI competencies at two schools of nursing over four years. Using a modified version of the SNCEQ, students were asked to evaluate their NI competency online with a 49-item survey (Dionne, 2014). Results from this study revealed that NI competency scores were positively affected by technology-related work experience (from nursing or non-nursing related work). Furthermore, NI competencies showed a progressive increase over the four years of the baccalaureate programs, with highest scores noted prior to fourth year, when concentrated clinical placements occur (Dionne, 2014). Nursing students with technology exposure in external work environments began with and averaged higher scores when compared to those without this work experience. These results suggest that technology exposure in workplaces outside of clinical placement experiences may play an important role in informatics competency development. Using a self-assessment tool was critical to determining these findings.

The recent emergence of the C-NICAS and the utility of this modified SNCEQ provide further evidence that efforts to evaluate NI competencies in nursing students in Canada may be on the increase. As noted in the review conducted for this study, informatics literature abounds with self-assessment strategies to assess NI competencies. When compared to the 49 items in the modified SNCEQ, the 21 items of the C-NICAS may be a less onerous approach for evaluating NI competencies. Survey length is a known deterrent to engagement, as lengthy surveys are less likely to be completed (Burns et al., 2008). According to Choi and Pak (2005), response fatigue and disengagement can occur when surveys take an excessive amount of time: “Respondents are unable to concentrate . . . especially if the topics are not of interest. . . . respondents tend to say all yes or all no or refuse to answer all remaining questions” (p. 7). Answering a survey uniformly and inaccurately negatively affects survey results. If the C-NICAS is of an inviting length for completion and some wording improvements are made to it, its uptake as an effective tool for monitoring NI competencies may occur.

In summary, evidence of misinterpretation of eight of the survey’s items was discovered. Held in the light of cognitive interviewing literature, wording improvements are recommended to improve the interpretability of the C-NICAS. With minor wording adjustments, this Canadian-developed scale may prove advantageous and have a positive impact on NI assessment in Canada. To assess its potential, the current climate of informatics competency assessment in Canada has been discussed. Compared to other lengthier scales, the 21-item C-NICAS, with its known psychometric strength (Kleib & Nagle, 2018b) and background relevance to Canadian informatics, may play a key role in the future of informatics literacy assessment in Canada.

Educational preparedness and item misinterpretation.
Problems identified with the C-NICAS include item misinterpretation and ambiguity, words and phrases not recognized or misinterpreted, difficulty answering questions, the use of jargon and double-barrelled questions, and difficulty with response-making when participants lack informatics-specific experience. Item re-wording and the inclusion of exemplars have been suggested for specific questions. Will item improvement address these issues, or is there another explanatory factor, such as educational preparedness? Are Canadian nursing programs keeping abreast of informatics preparation at the baccalaureate level? The literature suggests that this is not the case. To what extent (if any) does lack of ICT and informatics education factor into the item misinterpretation noted in the findings? If it does, is item re-wording still necessary?

Respondents to a survey questionnaire are expected to "attend to and understand the question, recall whatever facts are relevant, make a judgement if the question calls for one, and select a response" (Tourangeau, 1984, p. 73). Completing a survey accurately and attentively can be a daunting task, and it is sobering to note that each of these cognitive processes represents a potential source of error (Hamme Peterson et al., 2017). Cognitive interviewing designed to understand 'construct-irrelevant variance' is suggested in addition to, and following, psychometric testing (Hamme Peterson et al., 2017). Hamme Peterson et al. (2017) suggest that while psychometric analyses such as factor analysis (to investigate item interrelationships) and Rasch analysis (to identify item difficulty) can highlight poorly performing items, these analyses do not indicate why. Furthermore, for the researcher, "the goal is to identify items where there is a misalignment between participant interpretation and the developer's intentions and to identify ways to modify those items based on participant response" (Hamme Peterson et al., 2017, p. 217). Through think-aloud and the use of verbal probes constructed to test comprehension, recall, retrieval and judgement, the data revealed several issues related to item misalignment. As a result, several explanatory causes have been offered, including the use of jargon and double-barrelled questions, and the situating of complex informatics concepts in questions without context. However, researchers must also identify underlying questionnaire problems beyond comprehension, recall, judgment, and response (Knafl et al., 2007). This is key to understanding, with precision, the basis for these various issues (Knafl et al., 2007).

A review of the literature indicates that Canadian nursing schools have been slow to take up informatics competencies in their curricula. Thompson and Skiba (2008) identified a discrepancy between what NI is and what faculty think it is, stating it was common for faculty to assume, "exposure to a computer constituted education in informatics. This incongruity is analogous to believing you are a musician because you know how to play the radio" (p. 317). In a survey of Canadian nursing schools, Nagle and Clarke (2004) found evidence that faculty question the potential NI has to improve quality of nursing care, and that faculty are unclear how best to incorporate informatics into curricula.
Prensky (2001) suggests that a disparity in technology prowess may exist between faculty and students in the educational setting, a difference he attributes to different socialization experiences. Faculty, whom Prensky (2001) refers to as digital immigrants, have been socialized differently and struggle to learn and speak a new language, a language that students, digital natives, have been fluent in since birth. Understanding and reducing these barriers and accessing relevant educational content may help Canadian schools move toward informatics uptake. Canadian-based resources, such as an informatics teaching toolkit (CASN, 2013) and a digital health-based learning resource (CASN, 2016), are designed to support informatics uptake at the educational level. Other efforts to understand and reduce these barriers are being made (Kleib et al., 2013). In a review of the literature to examine what strategies and outcomes are associated with the uptake of NI at the baccalaureate level, Kleib et al. (2013) suggest the following to improve competency development: (a) institutions commit to integrating informatics competencies; (b) institutions and service sectors facilitate the development of informatics in nursing students; and (c) faculty use an array of innovative educational strategies to develop informatics competencies. Digital health nursing faculty peer leadership opportunities in Canada have also been created to integrate content into curricula and establish supportive peer networks (CASN, 2015). Evidence indicates, however, that nursing programs lag in preparing nursing students for informatics competencies (De Gagne et al., 2012; Nagle & Clarke, 2004; Ronquillo et al., 2017). The C-NICAS stands as a potentially important evaluative tool. Irrespective of the readiness of faculty and education programs, re-wording the C-NICAS items is maintained as a sound strategy to improve item clarity. As overall informatics readiness and preparedness increase, item misalignments such as those occurring in the C-NICAS may ultimately disappear. Until then, it is suggested that items be clarified with exemplars and that jargon and ambiguity be reduced to improve item interpretation.

To summarize, efforts to understand participant narratives were based on the four cognitive processes of comprehension, recall, judgement, and response. These narratives revealed issues related to survey item misalignment in the C-NICAS, specifically the misinterpretation of words, phrases, and overall questions. The extent to which these observed issues are influenced by factors other than question problems was considered. In particular, the extent to which they may reflect a lack of informatics preparedness at the baccalaureate level was presented. This discussion concluded with the realization that while informatics readiness and awareness in Canada are increasing, overall informatics preparedness still lags. As such, wording improvements to the C-NICAS to improve its overall interpretability remain a necessary and timely strategy.

Pilot testing and pre-testing on targeted populations. A review of the literature suggests that cognitive interviewing is a crucial step in item refinement for new questionnaires (Beatty & Willis, 2007; Bode & Jansen, 2013; Brenner, 2017; Padilla et al., 2013; Vis-Visschers & Meertens, 2013).
Furthermore, when using cognitive interviewing to pre-test newly developed surveys, experts recommend that improvements be made in a series of rounds (Presser et al., 2004; Thompson et al., 2011; Willis & Miller, 2011). Thompson et al. (2011) claim that cognitive interviewing is an "iterative process in which one or more revised versions of the questionnaire are subjected to cognitive interviews" (p. 3) and recommend that questionnaire refinement occur with small but purposively selected numbers of participants. Willis and Miller (2011) agree, stating, "when an item is changed, it is desirable to submit the new version to a further round of testing" (p. 336). They further suggest that changes be made to questions following each round before subsequent testing occurs. If re-wording strategies are implemented with the C-NICAS, additional rounds of cognitive interviewing with similar populations of students may offer further refinement. Results from my study indicate the need for several item revisions, each suggested to improve wording and interpretability of the C-NICAS. If these revisions are made, it may be ideal to continue with several more cognitive interviewing rounds, each time implementing the suggested changes from the previous round.

As a new survey tool, the C-NICAS underwent pilot testing (Kleib & Nagle, 2018a). Burns et al. (2008) maintain that pilot testing serves a unique function distinct from pre-testing; moreover, pre-testing should precede pilot testing:

Pre-testing focuses on the clarity and interpretation of individual questions and ensures that questions meet their intended purpose. Pilot testing focuses on the relevance, flow and arrangement of the questionnaire, in addition to the wording of the questionnaire. Although pilot testing can detect overt problems with the questionnaire, it rarely identifies their origins, which are generally unveiled during pre-testing. (p. 249)

Using cognitive interviewing in the pre-testing phase to re-test items continues until no further changes are suggested by participants, and questionnaire format and terms are well understood (Thompson et al., 2011). The purpose of pilot testing is to examine a questionnaire's "flow, salience, acceptability and administrative ease, identifying unusual, redundant, irrelevant or poorly worded question stems and responses" (Burns et al., 2008, p. 248). Developers of the C-NICAS pilot tested the scale, during which several significant changes were made. It is noted, however, that those who pilot tested the C-NICAS were members of the Nursing Informatics Association of Alberta (Kleib & Nagle, 2018a). Pilot testing, according to Bradburn, Sudman, and Wansink (2004), should be conducted on the respondents for whom the questionnaire is intended. Van Teijlingen and Hundley (2001) concur, stating that pilot testing should be done with those "who are as similar as possible to the target population" (p. 2). This population of informatics nurses may differ in some respects from the population for whom this survey is intended. For instance, informatics nurses are likely acquainted with ICT terminology, and, as such, informatics jargon in the C-NICAS may have been easily recognizable and accurately interpreted. The presence of informatics jargon and complex informatics concepts (without exemplars) in the C-NICAS may be explained, in part, by who pilot tested the survey.
Study findings revealed that the presence of misleading technology jargon and the lack of examples were distracting and were linked to the misinterpretation of words, phrases, and overall questions. While the extent to which future respondents may encounter these issues is unknown, these findings suggest that similar populations may encounter the same issues if they are not addressed.

In summary, the importance of pre-testing and pilot testing has been reviewed. Suggestions found in the literature include conducting pre-testing prior to pilot testing, conducting pre-testing in a series of rounds using small sample sizes, and ensuring that pilot tests are done on a population that closely resembles the target population (Bradburn et al., 2004; Thompson et al., 2011; Willis & Miller, 2011). It is noted that pilot testing of the C-NICAS was performed with members of the Nursing Informatics Association of Alberta. It is possible this population differs from entry-level nurses in its familiarity with informatics jargon and ICT concepts. As such, unless specific wording issues are addressed, it is speculated that future respondents of the C-NICAS could react similarly to how those in the study did, misinterpreting items and experiencing difficulty answering some of the questions.

Test validity, response error, and inferences. Determining accurately whether certain human traits exist and, if they do, to what extent, allows researchers to draw valid conclusions from a scale's results (Streiner & Norman, 2008). Testing for validity is a complex endeavour that requires consideration of both "the nature of what is being measured and the relationship of that variable to its purported cause" (Streiner & Norman, 2008, p. 247). Examining what is being measured is trickier still when how it is defined or measured can vary from one person to another (e.g., informatics competency; see Streiner & Norman, 2008). Moreover, when what is examined concerns human behaviour, measuring the relationship between what is observed and what it reflects can be fraught with subjective biases (Streiner & Norman, 2008). How these validity concerns relate to my study's findings will now be examined.

When surveys are self-administered, they must be clearly and carefully worded to avoid misinterpretation of their intended meaning. How a survey's items are interpreted is foundational to all inferences made from its analyses; when survey items are misinterpreted, the validity of the test is affected (Hamme Peterson et al., 2017). As previously discussed, sources of confusion from survey items stem from:

[U]nderstanding (is the item wording, terminology, and structure clear and easy to understand?), retrieval (Has the respondent ever formed an attitude about the topic? Does the respondent have the necessary knowledge to answer the question? Are the mental calculations or long-term memory retrieval requirements too great?), judgment (Is the question too sensitive to yield an honest response? Is the question relevant to the respondent? Is the answer likely to be a constant?) and response (Is the desired response available and/or accurately reflected in the response options? Are the response options clear?). (Hamme Peterson et al., 2017, p. 219)

Study results revealed issues with comprehension and response-making, more so than with judgement and recall.
When considering the survey's items, participants did not appear to have difficulty retrieving relevant experience (or identifying that they had none). Overall, judging relevant responses did not prove difficult. As participants judged their answers, they demonstrated intentionality and careful contemplation. However, many items contained words or phrases participants did not understand. Also, the overall meaning of several items was misunderstood. Participants misinterpreted eight survey items, likely negatively affecting test validity. Participants also struggled with response-making when they encountered double-barrelled questions, questions they did not understand, or when they lacked experience. These concerns also impact a survey's validity.

Response error represents the "discrepancy between a theoretical 'true score' and that which is reported by the respondent" (Willis, 2005, p. 13). Response error is caused by those characteristics of questions that lead participants to respond incorrectly, and substantially alters data quality (Willis, 2005). Survey questions can produce response error if they are overly challenging to comprehend, or if their meaning and intention are vague (Willis, 2005). Completing survey questions without dialogue or opportunities for clarification felt irksome and frustrating for many participants. Questionnaires do not resemble natural exchanges of communication between humans, and response error can occur quite simply because "survey questions usually do not allow for the flexible interactions that establish grounding" (Willis, 2005, p. 19). Comments from the data that reflect this include: (a) wishing the interviewer had been sitting next to them to clarify items; (b) expressing frustration when they did not understand the question or encountered a double-barrelled question; and (c) feeling uncomfortable selecting "not competent" because of not understanding the question. While not all survey problems stemming from this concern can be addressed by re-writing questions, a structured interview designed to elicit cognitive responses can pinpoint problematic wording (d'Ardenne, Collins, & Blake, 2015). For example, the question may be puzzling to interpret, too wordy or ambiguous, or a category may be missing from the answer options (Willis, 2005). Cognitive interviewing offers a platform to identify these sources of error and minimize response bias by identifying wording and interpretability problems. In the C-NICAS, several of the survey items lacked the context readily available in the details of the competency indicators (CASN, 2012a). It is suggested that the inclusion of exemplars may resolve this issue.

To summarize, in the words of Streiner and Norman (2008), validating a scale can be described as "the degree of confidence we can place on the inferences we make about people based on their scores from the scale" (p. 251). Reviewing findings from my study in the light of current literature indicated how item misalignment and struggles with response-making may interfere with scale validity. Inferences are derived from how survey items are interpreted. When wording problems interfere with interpretation, study validity is negatively affected (Hamme Peterson et al., 2017). Several factors contributing to response error have been considered. To mitigate response error and improve scale validity, item improvement with re-wording and the inclusion of exemplars is recommended.
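Willis's (2005) definition of response error parallels the classical test theory decomposition of an observed score; expressed formally (the notation below is illustrative and does not appear in the C-NICAS literature):

X = T + E

where X is the score a respondent reports, T is the theoretical "true score" reflecting actual competency, and E is the response error introduced by problems of comprehension, retrieval, judgement, or response. Reducing E through clearer wording and exemplars brings reported scores closer to true scores, which is the practical aim of the revisions recommended above.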
Applicability of suggested revisions. Revision suggestions for wording refinement in the C-NICAS stem from links made between study findings and the following: (a) principles of survey design; (b) lack of clear reference or exemplars; (c) misinterpreted items; (d) items containing unrecognizable words or phrases; (e) items "difficult" to answer; and/or (f) not knowing how to answer without experience. Details of suggested item re-wording revisions are presented in Chapter Six. It is recognized that item revisions arising from study findings may either apply specifically to nursing students or more broadly to Canadian nurses taking the C-NICAS in the future.

The extent to which wording suggestions are intended for nursing students or apply more generally to a broader population is related to how conclusions were drawn about item misalignment. Specifically, participant narratives that indicated misinterpreted or "difficult" items or words/phrases not recognized are considered subjective responses when compared to those pointing to design flaws or lacking exemplars. As these subjective responses represent cognitive narratives from a specific cohort of fourth-year nursing students at one Canadian university, it is not ideal to extend application of these wording improvements beyond a nursing student population closely resembling this sample. Responses highlighting survey design problems or unclear references are considered less subjective because they reflect broadly known principles of good survey design. It is suggested that item revisions arising from design flaws may be more broadly applicable.

A delineation may be made between those items representing subjective responses and those representing broader survey design flaws. This will highlight which item revisions are likely to be more applicable to nursing student populations closely resembling the study's sample, and which would be more generally applicable. To facilitate this, item revision suggestions have been made for 20 of the 21 C-NICAS survey items, alongside accompanying rationales such as: word or phrase not recognized, participants state question is "difficult" to answer, use of jargon, question misinterpreted, unclear reference, etc. However, upon close examination of the rationales underlying each recommended revision, it is noted how both subjective responses and responses indicative of survey design flaws overlap in most items. In fact, most re-wording suggestions contain two or more 'mixed' rationales. This indicates the complexity of separating which wording revisions are more applicable to nursing students, and which may be more broadly applied. It is further noted that pilot testing of the C-NICAS was conducted on informatics nurses and resulted in several design changes, including the decision to shorten items and remove examples of technologies that had been added for clarification (Kleib & Nagle, 2018a). Informatics nurses are likely to differ in their ICT and informatics knowledge from entry-to-practice nurses and the general nursing population. The decision to shorten the length of some items and remove exemplars was based on feedback from nurses not representative of the targeted audience of practicing nurses in Canada (Kleib & Nagle, 2018a). It is recommended that future re-wording decisions consider and weigh the value of reinstating exemplars in this light.
It is further recommended that future wording changes undergo additional rounds of cognitive interviewing, both in a sample closely resembling this study's sample and in a general nursing population.

In summary, it is proposed that item re-wording revisions may be applicable either to a nursing student population closely resembling this study's sample, or more broadly. It is suggested that an item revision is more applicable to nursing students when its rationale relates to subjective responses in the data, such as words/phrases not recognized or misunderstood, or items misunderstood or "difficult" to answer. Item revisions stemming from survey design flaws or an unclear reference are presented as less subjective and may be considered more broadly applicable. A clear delineation, however, is complicated by the way both types of rationale have influenced re-wording revisions. As pilot testing decisions during the survey's initial development were made in consultation with informatics nurses, recommendations have been made to consider the benefits of adding exemplars and to continue cognitive interview pre-testing.

Chapter Summary

This chapter set out to situate study results in the context of current literature findings. By examining study findings in this manner, their meaning and significance can be evaluated. Study findings revealed how design aspects of the C-NICAS contributed to the misinterpretation of survey items and interfered with participant response-making. These findings were discussed in relation to current recommendations and principles for good survey design. It was noted how each design flaw can be addressed with re-wording or question reformatting. A discussion of the ramifications of improving the interpretability of the C-NICAS was situated in the context of assessing informatics competency in Canada. It was concluded that the C-NICAS (with some wording adjustments) may be poised to become an effective assessment tool, particularly as its length, when compared to other surveys, is not overly burdensome. Given the increasing (albeit gradual) shift towards establishing informatics competency across health care sectors in Canada, this is seen as an encouraging prospect. The external factor of inadequate informatics preparedness was raised as a possible contributing factor to item misalignment in the C-NICAS. To contextualize this, an overview of the climate of Canada's baccalaureate-level informatics readiness was presented. It is acknowledged that the extent to which C-NICAS item problems may be education-related (as opposed to wording-related) is unknown and speculative. Given the current lack of informatics readiness, it is suggested that wording changes proceed as recommended. Issues pertaining to test validity, response error, and inferences were also examined. Reviewing the literature highlighted the importance of establishing validity in a scale. The degree to which the C-NICAS' validity and interpretability may be improved was considered in this light, reinforcing the need to make item wording adjustments. Finally, a recommendation to consider item revisions as either applicable to nursing students closely matching this study's sample or more generally to Canadian nurses was made. Challenges arising from efforts to delineate which items apply more (or less) broadly have been outlined, as well as specific recommendations for those considering future re-wording revisions.
It is significant that nursing programs in Canada tasked with incorporating CASN's (2012a) NI competencies into their educational programs do not possess a tool to effectively assess how well these indicators are met in the student nursing population. Survey results, examined through the lens of current literature, reveal an overarching recommendation of item re-wording to improve scale validity and interpretability. If this is undertaken, the C-NICAS may emerge as a reliable, valid, and effective evaluative tool to assess NI in Canada.

Chapter Six: Conclusions and Recommendations

Cognitive interviewing may be useful simply because it provides information to make . . . design decisions as logically as possible—indeed, it may be the most efficient method available for illuminating such issues. In that light, cognitive interviewing may be less suited to finding the "best" questions than to guiding "best informed" design decisions. (Beatty & Willis, 2007, p. 304)

The purpose of this study was to investigate the wording and interpretability of the C-NICAS, a newly developed informatics scale based on CASN (2012a) competency indicators for entry-to-practice nurses. Preliminary psychometric testing, including factor analysis and internal consistency reliability, has previously been conducted, indicating evidence of factor structure and good internal consistency (Kleib & Nagle, 2018b). However, the scale developers recommended further testing among nursing students (Kleib & Nagle, 2018b). Using cognitive interviewing as my research method, I addressed the question, "How do fourth-year nursing students interpret and respond to survey questions on the C-NICAS?" This chapter contains conclusions from the study, a table outlining recommended wording revisions, as well as implications and recommendations for research, education, practice, leadership and policy.

Research Summary

Using the two cognitive interviewing techniques of think-aloud and verbal probing resulted in a wealth of narrative responses, "textual data that relays how and why [participants] answered the question as they did, revealing the interpretive process used . . . to relate survey questions to their own life experiences and circumstances" (Willis & Miller, 2011, p. 334). Findings indicated survey item misalignment and several problematic areas, namely: misinterpreted survey items, items perceived as difficult to answer, and problematic words/phrases, as well as three other issues. These other issues highlighted how participants responded to items when they lacked experience or found a question unclear or double-barrelled. The significance of these findings was explored in a review of the current literature. This situated the results in new perspectives, allowing for their broader significance to be considered. Six such considerations concerning the study's findings were made and are outlined next as study conclusions.

Conclusions

First, study findings revealed that the presence of jargon and of questions that were double-barrelled or ambiguous interfered with participant response-making and contributed to the misinterpretation of survey items. Therefore, several item revision recommendations related to foundational principles of survey design (DeJonckeere & Vahn, 2019; Streiner & Norman, 2008) have been made. It is advised that survey design principles be kept at the forefront when implementing any future C-NICAS item and response revisions.
Second, literature findings revealed evidence that NI preparedness lags in baccalaureate nursing programs (Maag, 2006; Thompson & Skiba, 2008), including in Canada (Nagle & Clarke, 2004; Ronquillo et al., 2017). The acquisition of NI competencies by entry-level nurses is recognized as useful for the purposes of planning, decision-making and documentation of delivery of care (Choi & Zucker, 2013; Hill et al., 2014). As the C-NICAS shows early signs of reliability and validity when assessing self-perceived NI competencies in the general nursing population (Kleib & Nagle, 2018b), its utility as a tool to assess NI in Canada's entry-to-practice nursing population is an ongoing endeavour to be further explored and refined. To this aim, study findings have indicated item misalignment in a sample of fourth-year nursing students, and, as a result, wording improvements are suggested. It is suggested that the validity of the C-NICAS may be enhanced with item and response revisions. If this occurs, the C-NICAS, as an assessment scale in the entry-to-practice nursing population, may be beneficial in moving informatics educational preparedness forward.

Third, recommendations for revisions to the C-NICAS were derived from participant narrative responses to what were presumed to be item problems. The possibility of an external influencing factor, educational unpreparedness, was also considered. In other words, to what extent (if any) does lack of ICT and informatics education contribute to the item misalignment observed in my study's findings? If the fourth-year students in my study lack informatics preparedness, to what extent is item misalignment related to these educational gaps in learning? These theoretical queries raise further questions related to education and understanding the C-NICAS survey items. How are educational bodies evaluating informatics competencies? Is the lag in preparedness related to the lag in assessing competencies, or something else? As efforts continue to address these and other related questions, wording improvements to the C-NICAS in the form of item and response revisions remain a sound approach to improve its overall interpretability (Beatty & Willis, 2007; Bode & Jansen, 2013; Brenner, 2017; Padilla et al., 2013; Vis-Visschers & Meertens, 2013).

Fourth, current literature findings indicate that pilot testing and pre-testing are recommended for newly developed tools and should take place with the populations for whom the survey is developed. While the C-NICAS underwent pilot testing, it was not conducted on the targeted population of all Canadian nurses, but rather on a group of informatics nurses. Study findings pointed to the presence of technology-laden jargon, and it is speculated that informatics nurses may have perceived these words as familiar rather than problematic, unlike the participants in my study. It is suggested that addressing these specific wording concerns may prevent future respondents of the C-NICAS from misinterpreting or not recognizing certain words or phrases.

Fifth, it was observed how study findings, data analysis inferences, and validity concerns relate to response errors. Response errors can occur when survey questions pose comprehension issues, have an unclear intention, or are misinterpreted (Willis, 2005).
Validating a scale, according to Streiner and Norman (2008), is concerned with determining degrees of confidence on inferences made from a test’s scores from within that population. When survey items are misinterpreted, study validity is negatively affected (Hamme Peterson et al., 2017). Item and response revision recommendations have emerged from study findings. It is suggested these revisions may improve the C-NICAS’ study validity and reduce future response error. In the words of Streiner and Norman (2008), a validation study asks the following questions, “‘Does the hypothesis of this validation study make sense in light of what the scale is designed to measure’ and ‘Do the results of this study allow us to draw inferences about the people that we wish to make?’” (p. 252). Answering these questions in the affirmative allows us to make meaningful, or valid, statements about people based on their scores (Streiner & Norman, 2008). By improving its validity, meaningful conclusions may be derived from future results of the CNICAS. Sixth, and finally, the question of whether suggested revisions should apply specifically to nursing students or more broadly to the survey’s target population was considered. Study findings pointed to misinterpreted questions, words, or phrases that weren’t recognized, and questions that were “difficult” to answer. These represent cognitive responses from a specific cohort of fourth-year nursing students and may be considered subjective when compared to item misalignment stemming from survey design problems such as double-barreled questions, jargon, INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 144 or unclear references. It is suggested C-NICAS item revisions to survey issues stemming from these more subjective responses may be applied more specifically to nursing students, whereas revisions addressing survey design problems could apply more broadly to other populations of nurses. Study findings of item misalignment in the C-NICAS, however, indicated the complexity of separating items caused by design flaws from those that were misinterpreted, seen as “difficult” to answer, or contained unrecognizable words or phrases. This complexity arises from the fact that items requiring revisions contain both subjective and less subjective problems. As such, replacing some of the jargon with simpler terminology, separating questions in two parts, and adding exemplars is suggested as a strategy to reduce item misalignment. Continued cognitive interviewing, as suggested in the literature, is also advised to overcome any further unanticipated wording or interpretability issues. Recommended Revisions to the C-NICAS In addition to general study conclusions, practical implications also emerged, leading to a list of suggested recommendations for item and response revisions. These recommendations are presented next in Table 2. INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 145 Table 2 Recommended Revisions for C-NICAS Items and Responses Original C-NICAS Item 1. Use ICT devices Problem type (bold) and explanation Suggested recommendation (bold) for revision Word or phrase not recognized. (“ICTs”) Grammar. (keep verb tense consistent with other items) Participants state question is ‘difficult’ to answer. Revised item to correct grammar & situate definition of ICT* in question: • “Uses information and communication technologies (ICT) devices.” Word or phrase not recognized. (“ICTs”) Grammar. 
(keep ICT singular as definition “informational & communication technologies” is already plural) (Refer to item revision recommendation to define ICTs within Q1) 3. Performs search and critical appraisal of on-line literature and resources. Double-barrelled question. (search AND appraisal) Participants state question is “difficult” to answer. Revised question to construct two separate questions: • “Performs searches of online literature and resources.” • “Performs critical appraisal of online literature and resources.” 4. Analyses, interprets, and documents pertinent nursing and patient data using standardized languages. Double-barrelled question. (analyze, interpret AND document) Unclear reference. (participants unsure this is referring to charting using standardized languages) Question misinterpreted. (all participants misinterpreted question as charting what is important) Revised question to construct two separate questions, substitute “pertinent” with a simpler word (important), & include exemplars of standardized languages*: • “Analyses and interprets important nursing and patient data using standardized languages (e.g., International Classification of Nursing Practice [ICNP], Canadian Health Outcomes for Better Information of Care [C-HOBIC] or Systematic Nomenclature of 2. Uses ICTs applications Revised item: • “Uses ICT applications.” INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE Participants state question is “difficult” to answer. • 146 Medicine Clinical Terms [SNOMED-CT]).” “Documents important nursing and patient data using standardized language (e.g., International Classification of Nursing Practice [ICNP], Canadian Health Outcomes for Better Information of Care [CHOBIC] or Systematic Nomenclature of Medicine Clinical Terms [SNOMED-CT]).” 5. Assists patients and their families to access, review and evaluate online information. Unclear reference. (participants unsure question is referring to online health-related information) Question misinterpreted. (participants interpreted question as helping patients look up their medical records) Without experience I don’t know how to answer. (participants do not feel confident selecting an answer from the options provided) Revised question to clarify intent*: • “Assists patients and their families to locate and evaluate online healthrelated information that is relevant and credible.” Revised response options to clarify intention of NA: • “NA (not applicable to my nursing practice)” 6. Describes the processes of data gathering, recording and retrieval in paper and electronic records. Unclear reference. (Participants unsure question is referring to health care data) Question misinterpreted. (wide array of misinterpretations) Participants state question is “difficult” to answer. Revised question to clarify intent & construct two separate questions: • “Gathers and records health care data in both paper and electronic records.” • “Retrieves health care data in both paper and electronic records.” 7. Articulates the significance of information standards for interoperable Words or phrase not recognized. (“information standards” & “interoperable electronic health records”) Revised question to simply & clarify meaning*: • “Understands the importance of information standards (i.e., standardized clinical terminologies) INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 147 electronic health records. Unclear reference. 
(participants unsure question is referring to enhanced accessibility and compatibility of health records across the health care system) Question misinterpreted. (wide array of misinterpretations) Participants state question is “difficult” to answer. for enhanced accessibility and compatibility of health records across the health care system.” 8. Articulates the importance of standardized nursing data to reflect nursing practice and advance nursing knowledge. Unclear reference. (participants understand “reflect” as referring to “personal reflection”) Revised question to clarify intent: • “Articulates the importance of standardized nursing data to reflect current nursing practice and advance nursing knowledge.” 9. Critically evaluates data and information from a variety of credible sources to inform nursing care. Unclear reference. (participants unsure question is referring to sources such as experts, clinical applications, databases, practice guidelines, relevant websites) Revised question to clarify intent and add exemplars*: • “Critically evaluates data and information from a variety of credible sources (including experts, clinical applications, databases, practice guidelines, relevant websites, etc.) to inform nursing care.” 10. Complies with legal and regulatory requirements, ethical standards and organizational policies Word or phrase misinterpreted. (“organizational policies”) Revised item to clarify meaning*: • “Complies with legal and regulatory requirements, ethical standards and health care institutional policies.” 11. Advocates for the use of current and innovative ICTs in health care. Without experience I don’t know how to answer. (participants do not feel confident selecting an (Refer to item revision recommendation to define ICTs within Q1) Revised response options to clarify intention of NA: INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 148 answer from the options provided) Word or phrase not recognized. (“ICTs”) Participants state question is “difficult” to answer. “NA (not applicable to my nursing practice)” 12. Identifies and reports system process and functional issues according to organizational policies. Word or phrase misinterpreted. (“organizational policies”, “system process & functional issues”) Unclear reference (participants are unsure what types of issues to identify and report) Revised item to clarify meaning and add exemplars*: • “Identifies and reports system process and functional issues (e.g., error messages, misdirections, device malfunctions, etc.) according to health care institutional policies.” 13. Maintains effective nursing practice and patient safety during system unavailability. Without experience I don’t know how to answer. (participants do not feel confident selecting an answer from the options provided) Revised response options to clarify intention of NA • “NA (not applicable to my nursing practice)” 14. Demonstrates professional judgment in the presence of technologies. Unclear reference. (participants are unsure question refers to professional judgement prevailing in the presence of technologies) Question misinterpreted. (wide array of misinterpretations) Revised item to clarify intent and add exemplars*: • “Demonstrates that professional judgement must prevail in the presence of technologies designed to support clinical assessments, interventions, and evaluation (e.g., monitoring devices, decision support tools, etc.).” 15. 
Recognizes the importance of nurses' involvement in the design, selection, implementation and evaluation of ICTs applications and systems in Word or phrase not recognized. (“ICTs”) Double-barrelled Question. (design, select, implement, AND evaluate) (Refer to item revision recommendation to define ICTs within Q1) No revision recommended to construct four separate questions as this lengthens survey unnecessarily. INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 149 health care. 16. Identifies and demonstrates appropriate use of a variety of ICTs to deliver care. Word or phrase not recognized. (“variety of ICTs”) Unclear reference (participants are unsure what ICTs the question is referring to) (Refer to item revision recommendation to define ICTs within Q1) 17. Uses decision support tools to assist clinical judgment. None. None. 18. Uses ICTs in a manner that supports the nursepatient relationship. Word or phrase not recognized. (“ICTs”) Unclear reference (participants are unsure question refers to using ICTs in manner that does not interfere with the nurse-patient relationship) Question misinterpreted. (wide array of misinterpretations) Participants state question is “difficult” to answer. (Refer to item revision recommendation to define ICTs within Q1) 19. Describes the various components of health information systems. Word or phrase not recognized. (“health information systems”) Unclear reference (participants are unsure what are the various components of health information systems) Revised item to clarify intent and add exemplars*: • “Accesses the various components of health information systems (e.g., results reporting, computerized provider order entry, clinical documentation, electronic Medication Administration Records, etc.).” Revised item to clarify intent*: • “Identifies and demonstrates appropriate use of a variety of ICTs (e.g., point of care systems, EHR, EMR, capillary blood glucose, hemodynamic monitoring, telehomecare, fetal heart monitoring devices, etc.) to deliver care to diverse populations in a variety of settings.” Revised item to clarify intent*: • “Uses ICTs in a manner that supports (i.e., does not interfere with) the nurse-patient relationship.” INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 150 Question misinterpreted. (wide array of misinterpretations) Participants state question is “difficult” to answer. 20. Describes various types of electronic records used in care. Unclear reference (participants are unsure the question is referring to the various typed of electronic records used across the continuum of care) Word or phrase not recognized. (“various types of electronic records”) Question misinterpreted. (wide array of misinterpretations) Participants state question is “difficult” to answer. Revised item to clarify intent and add exemplars*: • “Describes various types of electronic records used across the continuum of care (e.g., e-health records [EHR], e-medical records [EMR] and personal health records [PHR]) including their distinct uses across the patient care continuum.” 21. Describes benefits of informatics to improve health systems and quality of care. Word or phrase not recognized. (“informatics”) Question misinterpreted. (wide array of misinterpretations) Participants state question is “difficult” to answer. Revised item to add definition of “informatics”: • “Describes benefits of informatics (definition below) to improve health systems and quality of care. 
Informatics: “a specialty that integrates nursing science with multiple information management and analytical sciences to identify, define, manage, and communicate data, information, knowledge, and wisdom in nursing practice” (Healthcare Information and Management Systems Society, 2019, para. 1).” Note. (*) indicates revisions and exemplars taken from CASN (2012a) document, Nursing informatics entry-topractice competencies for registered nurses. INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 151 Recommendations for Nursing Research on Informatics Competency Assessments A review of the literature for this study offered a synopsis of current research on informatics competency assessment tools. An analysis of the informatics scene in Canada has revealed that while some strides in ICT and digital health improvement have been made, informatics uptake overall still lags. According to De Gagne et al. (2012), reasons for this include lack of consensus on integrating informatics concepts into BSN curricula and a mixed perception of informatics knowledge by nursing faculty. Similarly, Ronquillo et al. (2017) attribute this lag to a need for specialized NI education and more research related to issues such as patient safety and efficiency of patient care, as well as a need for increased advocacy and leadership concerning NI knowledge. Moreover, only recently a tool based on CASNs (2012a) competency indicators was developed (Kleib & Nagle, 2018a). The C-NICAS shows promising signs of utility if its validity can be substantiated. As a result of my study’s findings, scale refinement is suggested through item and response re-wording suggestions. Further research is recommended with the C-NICAS once these revisions have been undertaken. Specifically, further cognitive pre-testing with small selective samples closely resembling the population for whom the competency indicators were written may more strongly establish the C-NICAS’ interpretability. Future research efforts are also recommended to evaluate the influence of informatics education in both the short and long term (Kleib et al., 2013). To measure the incremental progress of informatics competency development with nursing students, the development of a modified C-NICAS scale for nursing students is recommended. Further pre-testing and pilot testing is recommended with this modified scale. To measure longer term implications of informatics uptake, further research is recommended to test the influence of informatics education on competency development in INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 152 nurses after graduation. Literature findings indicate that understanding informatics uptake in healthcare can also be measured assessing informatics attitudes, basic computer knowledge and skills (Abdrbo, 2015; Bryant et al., 2016), perceptions of technology readiness (Odlum, 2016), and computer literacy and computer anxiety (Akhu-Zaheya & Khater, 2013). Research using a variety of assessment strategies acknowledges the complexities of measuring informatics uptake in healthcare. Recommendations for Nursing Education Striking evidence exists that informatics preparedness lags in baccalaureate education (De Gagne et al., 2012; Maag, 2006; Nagle & Clarke, 2004; Ronquillo et al., 2017; Thompson & Skiba, 2008). Therefore, recommendations are directed at faculty to promote awareness of informatics competencies and understand the difference between computer and information literacy (Thompson & Skiba, 2008). 
Barriers such as IT literacy gaps between faculty and students may be addressed with informatics resources and training directed at faculty (De Gagne et al., 2012; Fetter, 2008). Several Canadian content resources are available to support faculty preparedness (e.g., Nursing Informatics: Entry-to-practice Competencies for Registered Nurses [CASN, 2012a]; Nursing Informatics Inventory: Existing Teaching and Learning Resources [CASN, 2012b]; Nursing Informatics Teaching Toolkit: Supporting the Integration of the CASN Nursing Informatics Competencies into Nursing Curricula [CASN, 2013]; Consumer Health Solutions: A Teaching and Learning Resource for Nursing Education [CASN, 2016]). Specifically, incorporating NI education modules as well as stand-alone courses is also recommended (De Gagne et al., 2012; Fetter, 2008; Jetté et al., 2012).

Education recommendations also include ensuring that students possess basic computer skills prior to starting their nursing education (De Gagne et al., 2012). Basic computer skills and informatics education are associated with informatics competence (Kleib et al., 2013). The incorporation of NI into nursing programs is related to improved competency in both the use and management of ICTs (Hwang & Park, 2011). Innovative approaches to interacting with and learning informatics may also include the use of personal digital assistants (PDAs) (Kleib et al., 2013) and virtual world learning (De Gagne, Oh, Vorderstrasse, & Johnson, 2013). Assessing health websites is also a recommended educational approach (Fiore, 2015; Jetté et al., 2012; Theron, Astle, Dixon, & Redmond, 2019). This assignment strategy has been observed to aid students in progressing beyond examining 'surface criteria', to engage in critical thinking, including the "exploration and analysis of credibility, argument, purpose . . . evidence of information . . . [and] wise judgment" (Theron et al., 2019, p. 11). It is recommended that strategic planning be implemented at an administrative level to facilitate these types of innovative ways of integrating informatics into nursing curricula.

According to Kleib et al. (2013), when competency indicators encompass knowledge, skills, and attitudes, they can be helpful in determining performance markers. Choi and De Martinis (2013) concur that "establishing a baseline of informatics competencies in undergraduate and graduate nursing students is vital to planning informatics curricula and adequately preparing students to promote safe, evidence-based nursing care" (p. 1974). Moreover, best practices, as set out by McClarty and Gaertner (2015) to assess competency-based learning, should include ensuring validity of the assessment scale and using evidence to set competency thresholds. If such incremental progress in informatics learning is needed to measure baccalaureate curricula changes, a modified C-NICAS scale for students may need to be constructed and tested. Consideration of this strategy is strongly recommended.

It is acknowledged that there are other ways of measuring how students attain informatics competencies beyond using self-assessment scales. Ronquillo et al. (2017) suggest that the impact of NI be assessed as it affects nursing care and patient outcomes. Staggers, Gassert, and Curran (2002) propose that NI competencies be further divided into categories or levels of proficiency, ranging from beginner and experienced to informatics specialist and informatics innovator.
These competencies are based on the three categories of computer skills, informatics knowledge, and informatics skills (Staggers et al., 2002). The authors suggest that improving guidelines to understanding core concepts related to informatics will allow nurses of all ranges of informatics competencies to benefit from the knowledge and applied skills of NI. To assess these levels of competencies, Chung and Staggers (2014) developed and used a 112-item Nursing Informatics Competencies Questionnaire. They argue that assessing beginner to experienced informatics competencies among practicing nurses is advantageous as it may help understand how to support nurses using informatics in clinical practice (Chung & Staggers, 2014). If student nurses approaching completion of their program are “beginner” informatics nurses, perhaps the beginner-level NI competencies can be used to assess clinical ICT experiences. Recommendations for Nursing Practice The following recommendations stem from current informatics literature findings. A significant benefit of NI on nursing practice has been the use of standardized data to enhance decision-making (CNA, 2017). Furthermore, interoperability of health records improves access for both patients and health care professionals. Therefore, recommendations for clinical practice include advocacy for the use of standardized languages and interoperability of electronic records across health care systems. Moreover, a person-centered approach to nursing care is promoted by digitally connected health care that encourages patients and their families to manage and track INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 155 their health (CNA, 2017). Allowing patients direct access to their health records promotes autonomy, and patients in remote areas can be reached using innovative digital health care services, reducing inconvenient and costly travel expenses. Furthermore, when health records contain standardized languages, patient information can be collated and interpreted more efficiently (CNA, 2017). As such, a further recommendation includes encouraging nurses to access online and digitally connected health resources for the purposes of patient education and advocacy. When nurses use health and informatics science effectively, they endorse optimum patient outcomes (Baskaran & Baby, 2015). Rutherford (2008) argues that the quality of nursing interventions is improved using data standards and standardized languages. It is recommended that nurses use ICTs and informatics to access evidence-based literature and decision support tools to augment their nursing practice. Recommendations for Nursing Leadership These recommendations arise from current research findings. It is recommended that nurse leaders understand the importance and implications of NI in health care so they can improve their own informatics competency and augment their learning of how ICTs relate to nursing practice and policy (CNA, 2017). As noted, utilization of standardized clinical terminologies has a positive influence on nursing practice. Nursing leaders are encouraged to adopt and promote those standardized languages as endorsed by national bodies such as the Canadian Nursing Informatics Association (i.e., the Systematized Nomenclature of Medicine [SNOMED-CT] and the International Classification for Nursing Practice [ICNP]). A further recommendation pertains to nurse leaders to seek opportunities to engage in advocacy for current and innovative ICTs in health care. 
Nurse leaders are also recommended to participate in the design, selection, implementation and evaluation of ICTs applications in health care (CNA, INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 156 2017). These measures will ensure that advocacy and robust uptake of health care-related informatics trickles to nurses in all areas of practice. Recommendations for Nursing Policy These policy-related recommendations stem from current literature findings and support the advancement of NI in Canada. Nursing policy makers are encouraged to consider the incorporation of informatics in all health policy decisions. The future of Canadian health care stands to benefit greatly from “data-driven innovation” (Naylor et al., 2015). For instance, electronic health records make it possible for patients to access and manage their own personal health information. Therefore, it is recommended that organizations such as the Healthcare Innovation Agency of Canada support the policy development of tools that make such personcentered data management possible (Naylor et al., 2015). Nursing policy groups are further recommended to lobby for electronic health records that are interoperable and linked across health sectors. Understanding the barriers and facilitators to NI competencies in undergraduate nursing education and health care institutions has implications for nursing education, leadership and policy (Fetter, 2009). One such barrier has been identified as a lack of standardization in policies and practices concerning informatics integration in education (Fetter, 2009; Kleib et al., 2013). Nursing policy groups are therefore recommended to advocate key stakeholders to implement standardized NI initiatives in curricula. Actions that encourage this approach to informatics are more likely to achieve desired educational outcomes (Kleib et al., 2013). Creators of education policies should also encourage collaborative approaches to develop informatics competencies (Kleib et al., 2013). Ronquillo et al. (2017) maintain that addressing the needs of nursing students includes focusing on the nursing faculty directly responsible for NI competencies. INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 157 Ronquillo et al. (2017) further suggest that demystifying NI and, instead, advocating for its advantages and relevance are likely to increase awareness and knowledge of what NI is. It is also recommended that nursing accreditation organizations support digital health efforts by mandating informatics education. Chapter Summary “Cognitive interviewing can put useful information into the hands of researchers who need to make difficult choices” (Beatty & Willis, 2007, p. 307). This study has examined the following research question, “How do fourth-year nursing students interpret and respond to survey questions on the C-NICAS?” Rationale for conducting this study was to identify further evidence of the C-NICAS’ validity by examining the interpretability and readability of its survey items among fourth-year nursing students. How respondents interpret an item must match what the researcher intended it to measure before survey results can be generalized, and key decisions made from their findings. Cognitive interviewing was selected as an established method for capturing misalignment between an item’s intended meaning and a respondent’s interpretation of it. Subsequent suggestions for item revisions may improve the scale’s validity among fourthyear nursing students. 
The value of this research relates to developing an effective strategy for evaluating how NI competencies are incorporated into educational curricula and, more broadly, within health care. Evaluating NI competencies is one way to assess uptake of nursing informatics, update nursing curricula, and support student-centred learning. Study findings from cognitive narratives addressed the research question, revealing misinterpreted survey items, questions seen as "difficult" to answer, problematic words and phrases, and challenges faced when participants lacked experience or found a question unclear or double-barrelled. Fourth-year nursing students closely resembled the population for which the competency indicators for the survey were intended. Recommended item and response revisions are aimed at improving the wording and interpretability of the C-NICAS. It is recognized that these revisions require further testing and analysis and, as such, are not offered as blanket solutions to the problems identified in the study. Careful efforts have been made to present the cognitive interviewing study data in a logical, systematic and cohesive manner in order to inform wording refinements. In the words of Beatty and Willis (2007), cognitive interview findings:

[M]ay not always point to a clearly superior version of a question. Rather than attempting to find the "right" way to ask a survey question, cognitive interviewing may be more suited to helping researchers assess tradeoffs—the advantages and disadvantages of asking questions in a certain manner. (p. 304)

By analyzing the study data, an attempt has been made to weigh such trade-offs as adding extra questions to eliminate double-barrelled items and reducing succinctness to add exemplars and improve clarity. Cognitive interviewing strategies gave insight into meanings that were misconstrued or resisted; as a result, it was possible to identify a list of specific problems with certain survey items. It is hoped that the recommended revisions to the C-NICAS will avoid or reduce these problems in the future.

References

Abdrbo, A. A. (2015). Nursing informatics competencies among nursing students and their relationship to patient safety competencies: Knowledge, attitude, and skills. CIN: Computers, Informatics, Nursing, 33(11), 509-514. doi: 10.1097/CIN.0000000000000197
Akhu-Zaheya, L. M., Khater, W., Nasar, M., & Khraisat, O. (2013). Baccalaureate nursing students' anxiety related computer literacy: A sample from Jordan. Journal of Research in Nursing, 18(1), 36-48. https://doi.org/10.1177/1744987111399522
Akman, A., Erdemir, F., & Tekindal, M. A. (2014). Psychometric properties and reliability of the Turkish version of the technology attitudes survey and nursing students' attitudes toward technology. International Journal of Caring Sciences, 7(2), 415-425.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191-215.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall, Inc.
Bandura, A. (1993). Perceived self-efficacy in cognitive development and functioning. Educational Psychologist, 28(2), 117-148.
Bandura, A. (1994). Self-efficacy. In V. S. Ramachaudran (Ed.), Encyclopedia of human behaviour (pp. 71-81). New York, NY: Academic Press.
Retrieved from https://pdfs.semanticscholar.org/63c0/16b24e575bc19f58710a3ed49838878560f8.pdf
Bandura, A., & Schunk, D. H. (1981). Cultivating competence, self-efficacy and intrinsic interest through proximal self-motivation. Journal of Personality and Social Psychology, 41(3), 586-598. Retrieved from https://www.uky.edu/~eushe2/Bandura/Bandura1981JPSP.pdf
Baskaran, P. A., & Baby, P. (2015). A descriptive study to assess the knowledge and attitude of the staff nurses regarding nursing informatics in selected hospital, Bangalore. International Journal of Nursing Education, 8(2), 250-255. doi: 10.5958/0974-9357.2015.00114.2
Beatty, P. C., & Willis, G. B. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71(2), 287-311. https://doi.org/10.1093/poq/nfm006
Bickford, C. J. (2015). The specialty of nursing informatics: New scope and standards guide practice. CIN: Computers, Informatics, Nursing, 33(4), 129-131. doi: 10.1097/CIN.0000000000000150
Blair, J., & Conrad, F. G. (2011). Sample size for cognitive interview pretesting. Public Opinion Quarterly, 75(4), 636-658. https://doi.org/10.1093/poq/nfr035
Bode, C., & Jansen, H. (2013). Examining the personal experience of aging scale with the Three Step Test Interview. Methodology, 9(3), 96-103. https://doi.org/10.1027/1614-2241/a000071
Boeije, H., & Willis, G. (2013). The cognitive interviewing reporting framework (CIRF). Methodology, 9(3), 87-95. https://doi.org/10.1027/1614-2241/a000075
Borycki, E. M., Cummings, E., Kushniruk, A. W., & Saranto, K. (2017). Integrating health information technology safety into nursing informatics competencies. Studies in Health Technology and Informatics, 232, 222-228. doi: 10.3233/978-1-61499-738-2-222
Borycki, E. M., Foster, J., Sahama, T., Frisch, N., & Kushniruk, A. W. (2013). Developing national level informatics competencies for undergraduate nurses: Methodological approaches from Australia and Canada. Studies in Health Technology and Informatics: Enabling Health and Healthcare through ICT, 183, 345-349. doi: 10.3233/978-1-61499-203-5-345
Bradburn, N. M., Sudman, S., & Wansink, B. (2004). Asking questions: The definitive guide to questionnaire design—for market research, political polls, and social and health questionnaires. San Francisco, CA: John Wiley & Sons.
Brenner, P. S. (2017). Narratives of response error from cognitive interviews of survey questions about normative behavior. Sociological Methods & Research, 46(3), 540-564. doi: 10.1177/0049124115605331
Bryant, L., Whitehead, D., & Kleier, J. (2016). Development and testing of an instrument to measure informatics knowledge, skills, and attitudes among entry-level nursing students. Online Journal of Nursing Informatics, 20(2).
Burns, K. E. A., Duffett, M., Kho, M. E., Meade, M. O., Adhikari, N. K. J., Sinuff, T., & Cook, D. J. (2008). A guide for the design and conduct of self-administered surveys of clinicians. CMAJ, 179(3), 245-252. doi: 10.1503/cmaj.080372
Canadian Association of Schools of Nursing. (2012a). Nursing informatics entry-to-practice competencies for registered nurses. Retrieved from http://digitalhealth.casn.ca/wp-content/uploads/2019/03/Infoway-ETP-comp-FINALAPPROVED-fixed-SB-copyright-year-added.pdf
Canadian Association of Schools of Nursing. (2012b). Nursing informatics inventory: Existing teaching and learning resources.
Retrieved from https://www.casn.ca/2014/11/casnnursing-informatics-inventory-report-existing-teaching-learning-resources/
Canadian Association of Schools of Nursing. (2013). Nursing informatics teaching toolkit: Supporting the integration of the CASN nursing informatics competencies into nursing curricula. Retrieved from https://www.casn.ca/2014/12/nursing-informatics-teaching-toolkit/
Canadian Association of Schools of Nursing. (2015). Infoway digital health nursing faculty peer leaders. Retrieved from https://www.casn.ca/infoway-digital-health-nursing-faculty-peer-leaders/
Canadian Association of Schools of Nursing. (2016). Consumer health solutions: A teaching and learning resource for nursing education. Retrieved from https://www.casn.ca/2016/04/consumer-health-solutions-resource/
Canadian Nurses Association. (2017). Joint position statement: Nursing informatics. Retrieved from https://www.cna-aiic.ca/-/media/cna/page-content/pdf-fr/nursinginformatics-joint-position-statement.pdf
Chenail, R. J. (2011). Interviewing the investigator: Strategies for addressing instrumentation and researcher bias concerns in qualitative research. The Qualitative Report, 16(1), 255-262. Retrieved from https://nsuworks.nova.edu/tqr/vol16/iss1/16/
Choi, B. C., & Pak, A. W. (2005). Peer reviewed: A catalog of biases in questionnaires. Preventing Chronic Disease, 2(1), 1-13. Retrieved from http://www.cdc.gov/pcd/issues/2005/jan/04_0050.htm
Choi, J., & De Martinis, J. E. (2013). Nursing informatics competencies: Assessment of undergraduate and graduate nursing students. Journal of Clinical Nursing, 22(13-14), 1970-1976. https://doi.org/10.1111/jocn.12188
Choi, J., & Zucker, D. M. (2013). Self-assessment of nursing informatics competencies for doctor of nursing practice students. Journal of Professional Nursing, 29(6), 381-387. http://dx.doi.org/10.1016/j.profnurs.2012.05.014
Chung, S. Y., & Staggers, N. (2014). Measuring nursing informatics competencies of practicing nurses in Korea: Nursing informatics competencies questionnaire. CIN: Computers, Informatics, Nursing, 32(12), 596-605. doi: 10.1097/CIN.0000000000000114
Collins, D. (Ed.). (2015a). Cognitive interviewing practice. London, UK: Sage Publications, Ltd.
Collins, D. (2015b). Analysis and interpretation. In D. Collins (Ed.), Cognitive interviewing practice (pp. 162-174). London, UK: Sage Publications, Ltd.
Cronenwett, L., Sherwood, G., Barnsteiner, J., Disch, J., Johnson, J., Mitchell, P., ... & Warren, J. (2007). Quality and safety education for nurses. Nursing Outlook, 55(3), 122-131. doi: 10.1016/j.outlook.2007.02.006
d'Ardenne, J. (2015). Developing interview protocols. In D. Collins (Ed.), Cognitive interviewing practice (pp. 101-125). London, UK: Sage Publications Ltd.
d'Ardenne, J., & Collins, D. (2015). Data management. In D. Collins (Ed.), Cognitive interviewing practice (pp. 142-161). London, UK: Sage Publications Ltd.
d'Ardenne, J., Collins, D., & Blake, M. (2015). Application of findings. In D. Collins (Ed.), Cognitive interviewing practice (pp. 175-194). London, UK: Sage Publications Ltd.
d'Ardenne, J., Gray, M., & Collins, D. (2015). Wider applications of cognitive interviewing. In D. Collins (Ed.), Cognitive interviewing practice (pp. 243-263). London, UK: Sage Publications Ltd.
Darvish, A., Bahramnezhad, F., Keyhanian, S., & Navidhamidi, M. (2014).
The role of nursing informatics on promoting quality of health care and the need for appropriate education. Global Journal of Health Science, 6(6), 11-18. doi: 10.5539/gjhs.v6n6p11
De Gagne, J. C., Bisanar, W. A., Makowski, J. T., & Neumann, J. L. (2012). Integrating informatics into the BSN curriculum: A review of the literature. Nurse Education Today, 32(6), 675-682. doi: 10.1016/j.nedt.2011.09.003
De Gagne, J. C., Oh, J., Kang, J., Vorderstrasse, A. A., & Johnson, C. M. (2013). Virtual worlds in nursing education: A synthesis of the literature. Journal of Nursing Education, 52(7), 391-396. doi: 10.3928/01484834-20130610-03
DeJonckheere, M., & Vaughn, L. M. (2019). Semistructured interviewing in primary care research: A balance of relationship and rigour. Family Medicine and Community Health, 7(2), e000057. doi: 10.1136/fmch-2018-000057
Desbiens, J. F., & Fillion, L. (2011). Development of the palliative care nursing self-competence scale. Journal of Hospice & Palliative Nursing, 13(4), 230-241. doi: 10.1097/NJH.0b013e318213d300
Dionne, M. (2014). Does work experience requiring the use of technology for College and University nursing students influence nursing informatics competency scores by the end of the 4th year program for one school in the province of Ontario, Canada? A cross-sectional design (Master's thesis). University of Ottawa, Ottawa, ON, Canada.
Fetter, M. S. (2008). Curriculum strategies to improve baccalaureate nursing information technology outcomes. Journal of Nursing Education, 48(2), 78-85.
Fetter, M. S. (2009). Baccalaureate nursing students' information technology competence––Agency perspectives. Journal of Professional Nursing, 25(1), 42-49. doi: 10.1016/j.profnurs.2007.12.005
Fiore, P. (2015). Teaching health information science for health care instructors. Procedia-Social and Behavioral Sciences, 174, 1415-1419. doi: 10.1016/j.sbspro.2015.01.769
Frisch, N., & Borycki, E. (2013). A framework for leveling informatics content across four years of a bachelor of science in nursing (BSN) curriculum. Studies in Health Technology and Informatics, 183, 356-366. doi: 10.3233/978-1-61499-203-5-356
Gonçalves, L. S., Castro, T. C., & Fialek, S. (2015). Computer experience of nurses. Studies in Health Technology and Informatics, 216, 1012-1012. doi: 10.3233/978-1-61499-564-7-1012
Gray, M. (2015). Conducting cognitive interviews. In D. Collins (Ed.), Cognitive interviewing practice (pp. 126-141). London, UK: Sage Publications Ltd.
Gray, M., & Blake, M. (2015). Cross-national, cross-cultural and multilingual cognitive interviewing. In D. Collins (Ed.), Cognitive interviewing practice (pp. 220-242). London, UK: Sage Publications Ltd.
Halley, E. C., Sensmeier, J., & Brokel, J. M. (2009). Nurses exchanging information: Understanding electronic health record standards and interoperability. Urologic Nursing, 29(5), 305-313.
Hamme Peterson, C., Peterson, N. A., & Gilmore Powell, K. (2017). Cognitive interviewing for item development: Validity evidence based on content and response processes. Measurement and Evaluation in Counseling and Development, 50(4), 217-223. https://doi.org/10.1080/07481756.2017.1339564
Hawkins, M., Elsworth, G. R., & Osborne, R. H. (2018). Application of validity theory and methodology to patient-reported outcome measures (PROMs): Building an argument for validity. Quality of Life Research, 1-16.
https://doi.org/10.1007/s11136-018-1815-6
Healthcare Information and Management Systems Society. (2019). What is nursing informatics? Retrieved from https://www.himss.org/what-nursing-informatics
Health Information Management. (2019). Differences between EMR, EHR and PHR. Retrieved from http://www.himconnect.ca/meet-him/faqs/differences-between-emr-ehr-and-phr
Heinssen, R. K., Glass, C. R., & Knight, L. A. (1987). Assessing computer anxiety: Development and validation of the Computer Anxiety Rating Scale. Computers in Human Behavior, 3(1), 49-59. https://doi.org/10.1016/0747-5632(87)90010-0
Hern, M. J., Key, M., Goss, L. K., & Owens, H. (2015). Facilitating adoption of informatics and meaningful use of electronic health records with nursing faculty. Journal of Nursing Education and Practice, 5(3), 118-126. https://doi.org/10.5430/jnep.v5n3p118
Hill, T., McGonigle, D., Hunter, K. M., Sipes, C., & Hebda, T. L. (2014). An instrument for assessing advanced nursing informatics competencies. Journal of Nursing Education and Practice, 4(7), 104-112. doi: 10.5430/jnep.v4n7p104
Honey, M. L., Skiba, D. J., Procter, P., Foster, J., Kouri, P., & Nagle, L. M. (2017). Nursing informatics competencies for entry to practice: The perspective of six countries. Forecasting Informatics Competencies for Nurses in the Future of Connected Health, 51, 31-40. doi: 10.3233/978-1-61499-738-2-51
Hübner, U., Shaw, T., Thye, J., Egbert, N., Marin, H. F., & Ball, M. (2016). Towards an international framework for recommendations of core competencies in nursing and interprofessional informatics: The TIGER competency synthesis project. MIE, 655-659. doi: 10.3233/978-1-61499-678-1-655
Hunter, K., McGonigle, D., Hill, T., & Hebda, T. (2014). Self-reported assessment of basic and informatics specialist/innovator nursing informatics competencies: TANIC© and NICA L3/L4©. Nursing Informatics Today, 29(2), 4-7.
Hwang, J., & Park, H. (2011). Factors associated with nurses' informatics competency. Computers, Informatics, Nursing, 29(4), 256-262. doi: 10.1097/NCN.0b013e3181fc3d24
Jenkinson, C., Peto, V., & Coulter, A. (1996). Making sense of ambiguity: Evaluation in internal reliability and face validity of the SF 36 questionnaire in women presenting with menorrhagia. BMJ Quality & Safety, 5(1), 9-12.
Jetté, S., St-Cyr Tribble, D., Gagnon, J., & Mathieu, L. (2010). Nursing students' perceptions of their resources toward the development of competencies in nursing informatics. Nurse Education Today, 30(8), 742-746. https://doi.org/10.1016/j.nedt.2010.01.016
Jobe, J. B., & Mingay, D. J. (1989). Cognitive research improves questionnaires. American Journal of Public Health, 79(8), 1053-1055. doi: 10.2105/ajph.79.8.1053
Kinnunen, U., Rajalahti, E., Cummings, E., & Borycki, E. M. (2017). Curricula challenges and informatics competencies for nurse educators. Forecasting Informatics Competencies for Nurses in the Future of Connected Health, 41-48. doi: 10.3233/978-1-61499-738-2-41
Kleib, M., & Nagle, L. (2018a). Development of the Canadian nurse informatics competency assessment scale and evaluation of Alberta's registered nurses' self-perceived informatics competencies. CIN: Computers, Informatics, Nursing, 36(7), 350-358. doi: 10.1097/CIN.0000000000000435
Kleib, M., & Nagle, L. (2018b). Psychometric properties of the Canadian nurse informatics competency assessment scale. CIN: Computers, Informatics, Nursing, 36(7), 359-365.
doi: 10.1097/CIN.0000000000000437
Kleib, M., & Nagle, L. (2018c). C-NICAS: Canadian Nurse Informatics Competency Assessment Scale. Used with permission by authors.
Kleib, M., & Nagle, L. (2018d). Development of the Canadian nurse informatics competency assessment scale and evaluation of Alberta's registered nurses' self-perceived informatics competencies. CIN: Computers, Informatics, Nursing, 00(0). [Ahead of print publication shared with permission from author]. doi: 10.1097/CIN.0000000000000435
Kleib, M., & Nagle, L. (2018e). Psychometric properties of the Canadian nurse informatics competency assessment scale. CIN: Computers, Informatics, Nursing, 00(0). [Ahead of print publication shared with permission from author]. doi: 10.1097/CIN.0000000000000437
Kleib, M., Zimka, O., & Olson, K. (2013). Status of informatics integration in baccalaureate nursing education: A systematic review. CJNR (Canadian Journal of Nursing Research), 45(1), 138-154. doi: 10.1177/084456211304500111
Knafl, K., Deatrick, J., Gallo, A., Holcombe, G., Bakitas, M., Dixon, J., & Grey, M. (2007). The analysis and interpretation of cognitive interviews for instrument development. Research in Nursing & Health, 30(2), 224-234. doi: 10.1002/nur.20195
Krosnick, J. A. (1999). Survey research. Annual Review of Psychology, 50(1), 537-567. https://doi.org/10.1146/annurev.psych.50.1.537
Krosnick, J. A., & Presser, S. (2010). Question and questionnaire design. In P. V. Marsden & J. D. Wright (Eds.), Handbook of survey research (pp. 263-313). Bingley, UK: Emerald Group Publishing.
Lavin, M. A., Harper, E., & Barr, N. (2015). Health information technology, patient safety, and professional nursing care documentation in acute care settings. OJIN: The Online Journal of Issues in Nursing, 20(2). doi: 10.3912/OJIN.Vol20No02PPT04
Lietz, P. (2010). Research into questionnaire design: A summary of the literature. International Journal of Market Research, 52(2), 249-272. http://dx.doi.org/10.2501/S147078530920120X
Maag, M. M. (2006). Nursing students' attitudes toward technology: A national study. Nurse Educator, 31(3), 112-118.
Mahmood, K. (2016). Do people overestimate their information literacy skills? A systematic review of empirical evidence on the Dunning-Kruger effect. Communications in Information Literacy, 10(2), 199-213. https://doi.org/10.15760/comminfolit.2016.10.2.24
McClarty, K. L., & Gaertner, M. N. (2015). Measuring mastery: Best practices for assessment in competency-based education. AEI Series on Competency-Based Higher Education, 1-16. Retrieved from http://hdl.voced.edu.au/10707/367254
McColl, E. (2006). Cognitive interviewing. A tool for improving questionnaire design. Quality of Life Research, 15(3), 571-573. doi: 10.1007/s11136-005-5263-8
McColl, E., Meadows, K., & Barofsky, I. (2003). Cognitive aspects of survey methodology and quality of life assessment. Quality of Life Research, 12(3), 217-218. doi: 10.1023/A:1023233432721
Melrose, S., Park, C., & Perry, B. (2015). Creative clinical teaching in the health professions. Retrieved from http://epub-fhd.athabascau.ca/clinical-teaching/
Melnyk, B., & Fineout-Overholt, E. (2015). Evidence-based practice in nursing and healthcare: A guide to best practice (3rd ed.). Philadelphia, PA: Wolters Kluwer Health.
Miller, K., Willson, S., Chepp, V., & Padilla, J. (Eds.). (2014). Cognitive interviewing methodology. Hoboken, NJ: John Wiley & Sons, Inc.
Miller, K., Willson, S., Chepp, V., & Ryan, J. M. (2014). Analysis. In K. Miller, S. Willson, V. Chepp, & J. Padilla (Eds.), Cognitive interviewing methodology (pp. 35-50). Hoboken, NJ: John Wiley & Sons, Inc.
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097. https://doi.org/10.1371/journal.pmed.1000097
Nagle, L. M., & Clarke, H. F. (2004). Assessing informatics in Canadian schools of nursing. Medinfo, 2004, 912-915. doi: 10.3233/978-1-60750-949-3-912
Nagle, L. M., Crosby, K., Frisch, N., Borycki, E. M., Donelle, L., Hannah, K. J., . . . Shaben, T. (2014). Developing entry-to-practice nursing informatics competencies for registered nurses. Studies in Health Technology and Informatics, 201, 356-363. doi: 10.3233/978-1-61499-415-2-356
Naylor, D., Girard, F., Mintz, J., Fraser, N., Jenkins, T., & Power, C. (2015). Unleashing innovation: Excellent healthcare for Canada. Report of the advisory panel on healthcare innovation (Health Canada catalogue no. H22-4/9-2015E-PDF). Retrieved from http://publications.gc.ca/pub?id=9.807352&sl=0
Ng, C. J. (2006). Designing a questionnaire. Malaysian Family Physician: The Official Journal of the Academy of Family Physicians of Malaysia, 1(1), 32-35.
Odlum, M. (2016). Technology readiness of early career nurse trainees: Utilization of the Technology Readiness Index (TRI). Studies in Health Technology and Informatics, 225, 314-318. doi: 10.3233/978-1-61499-658-3-314
ownCloud. (2018). Features [online website]. Retrieved from https://owncloud.org/features/
Padilla, J. L., Benítez, I., & Castillo, M. (2013). Obtaining validity evidence by cognitive interviewing to interpret psychometric results. Methodology, 9(3), 113-122. https://doi.org/10.1027/1614-2241/a000073
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903. doi: 10.1037/0021-9010.88.5.879
Polit, D. F., & Beck, C. T. (2017). Nursing research: Generating and assessing evidence for nursing practice (10th ed.). Philadelphia, PA: Lippincott Williams & Wilkins.
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1-6. https://doi.org/10.1108/10748120110424816
Presser, S., Couper, M. P., Lessler, J. T., Martin, E., Martin, J., Rothgeb, J. M., & Singer, E. (2004). Methods for testing and evaluating survey questions. Public Opinion Quarterly, 68(1), 109-130. https://doi.org/10.1093/poq/nfh008
Ronquillo, C., Topaz, M., Pruinelli, L., Peltonen, L., & Nibber, R. (2017). Competency recommendations for advancing nursing informatics in the next decade: International survey results. Studies in Health Technology and Informatics, 232, 119-129. doi: 10.3233/978-1-61499-738-2-119
Rutherford, M. (2008). Standardized nursing language: What does it mean for nursing practice? OJIN: The Online Journal of Issues in Nursing, 13(1), 1-12. doi: 10.3912/OJIN.Vol13No01PPT05
Salomon, L., Gasquet, I., Mesbah, M., & Ravaud, P. (1999). Construction of a scale measuring inpatients' opinion on quality of care. International Journal for Quality in Health Care, 11(6), 507-516. https://doi.org/10.1093/intqhc/11.6.507
Segal, D., June, A., & Marty, M. M. (2019).
Basic issues in interviewing and the interview process. In D. Segal (Ed.), Diagnostic Interviewing (pp. 1-21). New York, NY: Springer. https://doi.org/10.1007/978-1-4939-9127-3_19
Sipes, C., McGonigle, D., Hunter, K. M., Hebda, T., Hill, T., & Lamblin, J. (2016). Operationalizing the TANIC and NICA-L3/L4 tools to improve informatics competencies. Studies in Health Technology and Informatics, 225, 292-296. doi: 10.3233/978-1-61499-658-3-292
Sipes, C., Hunter, K., McGonigle, D., West, K., Hill, T., & Hebda, T. (2017). The health information technology competencies tool: Does it translate for nursing informatics in the United States? CIN: Computers, Informatics, Nursing, 35(12), 609-614. doi: 10.1097/CIN.0000000000000408
Staggers, N. (1994). The Staggers nursing computer experience questionnaire. Applied Nursing Research, 7(2), 97-106. https://doi.org/10.1016/0897-1897(94)90040-X
Staggers, N., Gassert, C. A., & Curran, C. (2002). A Delphi study to determine informatics competencies for nurses at four levels of practice. Nursing Research, 51(6), 383-390.
Streiner, D. L., & Norman, G. R. (2008). Health measurement scales: A practical guide to their development and use (4th ed.). Oxford, UK: Oxford University Press.
Theron, M. J., Astle, B., Dixon, D., & Redmond, A. (2019). Beyond checklists: A nursing informatics education strategy for undergraduate nursing students appraising health information on social networking sites (SNS)/Au-delà des listes de vérification: Une stratégie de formation infirmière au numérique pour l'évaluation, par les étudiantes de premier cycle, des informations sur la santé présentes sur les sites des réseaux sociaux (SRS). Quality Advancement in Nursing Education-Avancées en formation Infirmière, 5(1), 1-18. doi: 10.17483/2368-6669.1174
Theron, M., Redmond, A., & Borycki, E. (2017). Nursing students' perceived learning from a digital health assignment as part of the nursing care for the childbearing family course. In F. Lau et al. (Eds.), Building Capacity for Health Informatics in the Future. Open Access, IOS Press. doi: 10.3233/978-1-61499-742-9-328
Thompson, B. W., & Skiba, D. J. (2008). Informatics in the nursing curriculum: A national survey of nursing informatics requirements in nursing curricula. Nursing Education Perspectives, 29(5), 312-317.
Thompson, J. J., Kelly, K. L., Ritenbaugh, C., Hopkins, A. L., Sims, C. M., & Coons, S. J. (2011). Developing a patient-centered outcome measure for complementary and alternative medicine therapies II: Refining content validity through cognitive interviews. BMC Complementary and Alternative Medicine, 11(1), 1-17. https://doi.org/10.1186/1472-6882-11-136
Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19(6), 349-357. https://doi.org/10.1093/intqhc/mzm042
Tourangeau, R. (1984). Cognitive sciences and survey methods. In T. B. Jabine, M. L. Straf, & R. Tourangeau (Eds.), Cognitive aspects of survey methodology: Building a bridge between disciplines: Report of the advanced research seminar on cognitive aspects of survey methodology (pp. 73-100). Washington, DC: National Academy Press. Retrieved from https://www.nap.edu/read/930/chapter/4#73
Trinity Western University. (2016a). Nursing 124: Communication and health teaching course syllabus.
Langley, BC: Author.
Trinity Western University. (2016b). Nursing 245: Nursing care of the adult course syllabus. Langley, BC: Author.
Trinity Western University. (2017). Nursing 252: Nursing care of the childbearing family course syllabus. Langley, BC: Author.
Trinity Western University. (2018). Nursing 332: Nursing research course syllabus. Langley, BC: Author.
Tse, A. M., Niederhauser, V., Steffen, J. J., Magnussen, L., Morrisette, N., Polokoff, R., & Chock, J. (2014). A statewide consortium's adoption of a unified nursing curriculum: Evaluation of the first two years. Nursing Education Perspectives, 35(5), 315-323. doi: 10.5480/14-1387
Vancouver Coastal Health Authority. (2019). Employed student nurse (ESN) program [website]. Retrieved from https://careers.vch.ca/work-here/students-and-residents/employed-student-nurse-program/
van Teijlingen, E. R., & Hundley, V. (2001). The importance of pilot studies. Social Research Update, 35. Retrieved from http://hdl.handle.net/2164/157
Vis-Visschers, R., & Meertens, V. (2013). Evaluating the cognitive interviewing reporting framework (CIRF) by rewriting a Dutch pretesting report of a European health survey questionnaire. Methodology, 9(3), 104-112. doi: 10.1027/1614-2241/a000072
Ward, R., Pollard, K., Glogowska, M., & Moule, P. (2007). Developing Information Technology Attitude Scales for Health (ITASH). Studies in Health Technology and Informatics, 129(1), 177-181. doi: 10.3233/978-1-58603-774-1-177
Williams, A. (2003). How to… Write and analyse a questionnaire. Journal of Orthodontics, 30(3), 245-252. https://doi.org/10.1093/ortho/30.3.245
Willis, G. B. (1999). Cognitive interviewing: A "how to" guide, from the short course "Reducing Survey Error through Research on the Cognitive and Decision Processes in Surveys." In Meeting of the American Statistical Association. Retrieved from https://www.hkr.se/contentassets/9ed7b1b3997e4bf4baa8d4eceed5cd87/gordonwillis.pdf
Willis, G. B. (2005). Cognitive interviewing: A tool for improving questionnaire design. Thousand Oaks, CA: Sage Publications.
Willis, G. B., & Miller, K. (2011). Cross-cultural cognitive interviewing: Seeking comparability and enhancing understanding. Field Methods, 23(4), 331-341. https://doi.org/10.1177/1525822X11416092
Willson, S., & Miller, K. (2014). Data collection. In K. Miller, S. Willson, V. Chepp, & J. Padilla (Eds.), Cognitive interviewing methodology (pp. 15-34). Hoboken, NJ: John Wiley & Sons, Inc.
Wishart, J., & Ward, R. (2002). Individual differences in nurse and teacher training students' attitudes toward and use of information technology. Nurse Education Today, 22(3), 231-240.
doi: 10.1054/nedt.2001.0697 INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE Appendix A Search Terms for Literature Review CINAHL and MEDLINE SEARCHES (conducted January 26, 2018) Nursing Informatics Competencies Informatics Nurs* n3 Informatics Informatics n3 nurs* OR informatics n3 healthcare Nurs* n3 Informatics n3 Competen* AND Measurement Tools Measure* OR tool* OR survey* OR checklist* OR assess* OR competen* Measure* OR tool* OR survey* OR checklist* OR assess* OR competen* Measure* OR tool* OR survey* OR checklist* OR assess* OR competen* Measure* OR tool* OR survey* OR checklist* OR assess* OR competen* 177 INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 178 Appendix B Articles identified through database searching (Medline: 199, CINAHL: 225) (n = 424) Identification Identification PRISMA Flow Diagram Additional articles identified through other sources (citation searching, shoulder tap, recommended by thesis committee members) (n = 32) Articles screened (n = 451) Articles excluded (n = 362) Full-text articles assessed for eligibility (n = 89 ) Full-text articles excluded, with reasons (see below) (n = 76) Eligibility Included Included Eligibility Screening Articles after duplicates removed (n = 451) Articles included in literature review (n = 13) Reasons for exclusion: - Editorials - not relevant to the main intersecting concepts of nursing informatics, nursing informatics competencies, Canadian nursing, nursing students or entry-topractice nurses, or selfassessment surveys. Note: duplicates reviewed at multiple stages INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 179 Appendix C Review Matrix of Selected Articles Author/Title/Journal Year Purpose/Research Focus Assess the relationship between nursing informatics and patient safety competencies using SANICS (5 factor 30-item version), and Pt Safety Competencies SelfEvaluation (PSCAE). Half the participants in both groups had taken a nursing informatics course. Research Method Descriptive, crosssectional, selfadministered questionnaire. Sample Relevant Findings/Results Abdrbo, A. A. (2015). Nursing informatics competencies among nursing students and their relationship to patient safety competencies: Knowledge, attitude, and skills. CIN: Computers, Informatics, Nursing, 33(11), 509-514. 2015 154 convenience sample (99 undergraduate nursing students and 55 interns*) in Saudi Arabia *(interns = graduated nurses in 1st year of practice) Learning NI competencies emphasizes pt safety practices. Significant difference between scores those who had taken NI course (and those who hadn’t) related to patient safety knowledge and skills, NI competencies and patient safety competencies, significantly correlated. Knowledge, skills and attitudes of students vs interns - both had high mean scores. Akhu-Zaheya, L. M., Khater, W., Nasar, M., & Khraisat, O. (2013). Baccalaureate nursing students’ anxiety related computer literacy: a sample from Jordan. Journal of Research in Nursing, 18(1), 36-48. 2013 Assess the anxietyrelated computer literacy rates of nursing students in Jordan using Arabic version of Computer Anxiety Rating Scale and Computer Literacy Scale. What are the factors that predict computer anxiety? What is the relationship between computer anxiety and computer literacy? Selfadministered questionnaire. 
441 convenient sample of undergraduate nursing students (1-4th year) in one university in Jordan (100% initial response rate, d/t missing data – 95% response rate) 60% had taken 1 or more computer course Computer anxiety is related to 1. Computer experience and 2. Student year of education. Significant negative relationship between computer anxieties and computer literacy rates. Need identified for use of computers in education/training to increase comp. literacy and reduce computer anxiety. Lack of knowledge regarding importance to nurse leaders and educators. Borycki, E. M., Foster, J., Sahama, T., Frisch, N., & Kushniruk, A. W. (2013). Developing national level informatics competencies for undergraduate nurses: methodological approaches from Australia and Canada. In K. L. Courtney, O. Shabestari, & A. Kuo 2013 Overview of approaches used in Canada for developing CASN's NIEPCs (including a comparison/ discussion of differences w Australia) Discussion paper, describing methods used by Canada and Australia to develop national level NI competencies n/a Similarities between 2 countries: task forces/project groups; use of literature to initiate and drive competency development; stakeholder engagement Differences: use of grey literature & regulatory information by Canada; draft versions reviewed in Canada by NI specialists INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE (Eds.) Studies in Health Technology and Informatics: Enabling Health and Healthcare through ICT, IOS Press: Victoria, BC, 345-349. Borycki, E. M., Cummings, E., Kushniruk, A. W., & Saranto, K. (2017). Integrating Health Information Technology Safety into Nursing Informatics Competencies. Studies in health technology and informatics, 232, 222228. 180 as well as school Deans, students, RNs in practice. 2017 To address technology-induced errors and safety within health information technology, 5 NI competency levels are defined (building on work of Staggers et al. 2001*) Discussion paper, describing 5 levels of NI competencies n/a Develop and test instrument to measure (in entrylevel nursing students): 1] educational opportunity to apply informatics 2] knowledge of informatics 3] informatics skills confidence 4] attitude toward informatics Does tool have content validity; are items internally consistent and reliable? Selfadministered Survey (24 items) (KSANI Scale*) Convenience sample 300 nursing students from Florida Aim of developing entry-to-practice NI competencies: 1. Integrate nursing informatics into entry-to-practice competencies 2. Increase capacity for educators to teach nursing informatics CASN’s NI competencies for entry-tocnpractice nurses *Staggers N, Gassert CA, Curran C. Informatics competencies for nurses at four levels of practice. J Nurs Educ. 2001; 40(7):303-16. Bryant, L., Whitehead, D., & Kleier, J. (2016). Development and testing of an instrument to measure informatics knowledge, skills, and attitudes among entrylevel nursing students. Online Journal of Nursing Informatics (OJNI), 20(2), Available at http://www.himss.org/oj ni 2016 Canadian Association of Schools of Nursing. (2012). Nursing Informatics: Entry-topractice competencies for registered nurses. Retrieved from http://digitalhealth.casn.c a/content/user_files/2017 /12/Nursing-InformaticsEntry-to-Practice- 2012 *Knowledge, Skills and Attitudes towards Nursing Informatics Scale n/a 5 levels: the beginner nurse, the experienced nurse, the nursing informatics specialist, the nursing innovator, and the nursing informatics researcher. 
Corresponding health information technology safety competencies are suggested with each competency level (see Table 1) Urge beginning level nurses to receive training in order to identify and report technology-induced errors. Instrument based on the QSEN informatics competencies for prelicensure nurses Tool found to be "sound and appropriate for the target population" (p. 2) e.g., I feel confident in my ability to.. It is important to me that …In my nursing program I had opportunities to... 1= 'not confident (or important or no opportunity)' 4= 'extremely confident (or important or frequent opportunity)' Has content validity, and internal consistency/reliability. Overarching competency: ‘Uses information and communication technology to support information synthesis in accordance with professional and regulatory standards in the delivery of patient care” Competency 1: “Uses relevant information and knowledge to support INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE Competencies-forRNs_updated-June-42015.pdf 3. Engage key stakeholders developing related objectives in curricula. Canadian Nurses Association. (2017). Joint Position statement: Nursing Informatics. Retrieved from https://www.cnaaiic.ca/en/~/media/cna/pa ge-content/pdffr/nursing-informaticsjoint-position-statement 2017 Jetté, S., St-Cyr Tribble, D., Gagnon, J., & Mathieu, L. (2010). Nursing students' perceptions of their resources toward the development of competencies in nursing informatics. Nurse education today 30, 742746 2010 4 Positions: 1. NI competencies are essential 2. Adoption needed for SNOMED CT and ICNP to standardize clinical terminologies. 3. assessment and documentation tools must also be standardized. 4. As NI is constantly evolving, responses must be similarly adaptable. What are the internal and external resources student nurses need to develop NI competencies? (Development of questionnaire) Is there a relationship between sociodemographic profiles and internal/external resources? CNA Joint Position Statement with Canadian Nursing Informatics Association (CNIA). n/a (mailed) questionnaire survey 131 collegelevel nursing students in Quebec Survey measures internal resources -knowledge, interest, and personality trait; external resources -material, financial or social support (e.g., access to computers, appropriate software for the profession, databases) (note: low internal consistency 181 the delivery of evidencebased patient care” Competency 2: “Use information and communication technologies in accordance with professional and regulatory standards and workplace policies” Competency 3: “Uses information and communication technologies in the delivery of patient care” Endorses International Medical Informatics Association (IMIA) definition of NI. Primary health care focus can be achieved when patient-centered ICT/ digitally connected health is aligned- resulting in connected health. NI knowledge necessary for assessing data from multiple sources. ICT use associated with increased quality and safety in healthcare. Students reported knowledge to act in NI ‘moderately high’ from having necessary internal and external resources. Students lack knowledge using spreadsheet programs, presentation software, data security, analyzing quality of health web-sites, and searching edatabases. Differences noted between time allotted for age and computer use at work/word-processing training/and training about information systems used at work. 
INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE Dev of C-NICAS and results of use of tool with Albertan nurses. reported) Exploratory, crossdescriptive survey (emailed) Kleib, M., & Nagle, L. (2018). Development of the Canadian Nurse Informatics Competency Assessment Scale and Evaluation of Alberta's Registered Nurses' Selfperceived Informatics Competencies. Computers, Informatics, Nursing. Advance online publication. DOI: 10.1097/CIN.000000000 0000435 2018 Surveyed 2844 Alberta nurses (RNs and RPNs) Kleib, M., & Nagle, L. (2018). Psychometric properties of the Canadian Nurse Informatics competency assessment scales. Computers, Informatics, Nursing. Advance online publication. DOI: 10.1097/CIN.000000000 0000437 2018 What is factor structure of CNICAS? What is internal consistency reliability of CNICAS? Psychometric testing of CNICAS with survey of Albertan nurses n/a Nagle, L., Crosby, K., Frisch, N. Borycki, E., Donelle, L., Hannah, K., . . . Shaben, T. (2014). Developing Entry-topractice nursing informatics competencies for registered nurses. Nursing Informatics, 356-363. 2014 “Describe the process and outcomes of developing [19] informatics entryto-practice competencies [NIEPC’s] for adoption by Canadian Schools of Nursing” (p. 356). n/a Nagle, L., Sermeus, W., & Junger, A. (2017). Evolving role of the nursing informatics specialist. Forecasting Informatics competencies for nurses in the future of connected health, 212221. 2017 Discussion paper of overview NI competencies in Canada. Stage 1 developing NIEPC's Stage 2 faculty resource Toolkit to integrate competencies into educational curricula. Discussion paper regarding evolving role of the nursing informatics specialist. Table 1: 6 new competencies and 15 roles. 98.6% RNs 1.5% RPNs How has the role of the nursing informatics specialist evolved, and, for the future, what opportunities and responsibilities will there be for them? 182 Overall self-perceived NI competencies scores ‘slightly above’ competent. Highest scores in ‘foundational ICT skills’ and lowest scores in ‘Information and knowledge management’ NA= ‘not relevant to practice’ (2 variables associated with NA >50 yrs more likely, community health less likely) Four factors explained 61% of the variance. Bartlett test of sphericity (strength of linear relationship among variable) significant (P<.001). Internal consistency high α .926 (overall) and high for subscales (.89 - .94) Due to high sample size, good generalizability. In 2011, funding from Canada Health Infoway used by CASN to address, the “ICT needs of nursing students & faculty" (p. 357). Two working groups: 1] develop NIEPCs to increase awareness of NI needed by students upon program graduation 2] develop faculty 'Resource Toolkit' to integrate competencies into curriculum. n/a Today’s use of informatics is critical. Historically, healthcare knowledge doubled every century, now ~18 months. “To a large extent the core competencies of the nursing informatics specialist have become INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 183 essential for all nurses” p. 214 Forecasts use of ‘big data’ (p. 217), describes 9 factors informing role of the informatics nurse (p. 214), discussion of informatics considering ‘information continuity’ (p. 217), connected health (emphasis on information use vs technology use). On average takes 17 years for research evidence to reach clinical practice. Odlum, M. (2016). Technology Readiness of Early Career Nurse Trainees: Utilization of the Technology Readiness Index (TRI). In W. 
Sermeus, P.M. Procter, & P. Weber (Eds.) Nursing Informatics 2016 ehealth for All: Every Level Collaboration – From Project to Realization, 225, 314 – 318. https://lirias.kuleuven.be/ bitstream/123456789/58 8256/2/nursing2016_ser meus_NI2016.pdf#page= 352 2016 Assuming optimism and innovativeness drive readiness, and discomfort and insecurity inhibit readiness, what are the technology perceptions of nursing students? To further understand link between perception and technology adoption. Technology Readiness Index (TRI) Survey 36 item tool 5-point Likert strongly disagree strongly agree Convenience, cross-sectional sample 43 urban (New York city) nursing students Significant factors influencing perceptions of technology: - decreased optimism related to clinical practice vs no clinical practice; - increased discomfort of US born students (72%) vs ‘other-born’ TRI Scale Design: Contributors: 1. optimism - technology is ‘positive, and offers efficacy, flexibility and control’ 2. innovativeness ‘propensity to be a technological pioneer’ Inhibitors: 1. discomfort ‘lack of control with technology use’ 2. insecurity - ‘disbelief or skepticism that technology will work correctly’ INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE Appendix D C-NICAS1 1 Kleib, M. & Nagle, L. (018). Used with permission by authors. 184 INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 185 INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 186 Appendix E Oral Recruitment Script Good afternoon. My name is Andrea Dresselhuis. I’m doing my master’s in nursing here at Trinity and I have an interest in nursing education, informatics and questionnaires. I would like to introduce my research focus to you and invite you to participate in my research study to improve the way that surveys and questionnaires are developed and administered. Have you ever completed a survey and felt that you were being asked the wrong question? Or perhaps you didn’t understand some of the questions? Or even the intent of the survey? If this has been your experience, has this ever affected your engagement with a survey or even your interest in completing it accurately? How many of you have heard of CASN? CASN stands for the Canadian Association of Schools of Nursing. CASN is Canada’s think tank for nursing schools and is responsible for mandating what you have learned here so far at Trinity’s school of nursing (including this very lecture that you’ve had today). In recent years CASN has asked all nursing programs across Canada to implement, into their curriculum, knowledge specific to information computer technology. This effort is aimed at preparing Canada’s nursing graduates for a healthcare system steeped in and influenced by informatics and computer technology. One way of assessing how learning is taking place is to administer a survey and ask students to assess themselves. Recently, two Canadian nursing informatics researchers published results from their newly developed survey designed to test Canadian nurses self-perceived competence in informatics and computer technology. This survey has been abbreviated as CNICAS. For my thesis, I reached out these researchers and they gave me permission to use their survey for the first time with a group of nursing students—YOU. Here is where your valuable input and insights are of interest to me. I am looking to invite you, as entry-to-practice nurses, to volunteer to take a quick survey on informatics. I am specifically interested in what you were thinking while you completed the survey. 
In other words, if you volunteer to participate in my research project, while you complete the survey, I will ask you to think out loud while you read and answer the questions. This will allow me to tune in to how you are interpreting each question. As you answer all 21 questions, I will prompt you to talk out loud so that I can see how YOU interpreted each survey question. By being a participant in my research project you will only have to meet with me once. Not every survey developer takes time to engage in this listening process to improve questionnaire wording and interpretability. If surveys are worded carelessly and administered with interpretability problems, results from those scores may result in ineffective decisions by the same powers that be that set out to ask the questions in the first place. While the survey itself is quite short (21 questions), completing the survey and interview may take up to (or just over) an hour. By participating in this very exciting study, you will be involved in the cutting edge of clarifying the interpretability of a newly developed survey questionnaire. Furthermore, you will help champion the importance of developing survey questions that are not only carefully well-worded, but also carefully match the over-arching intent of the survey. You will not need to prepare for this study. If you are interested in participating, please email me (see PowerPoint/whiteboard). To thank you for your time, all participants will receive a $15 [coffee] card. INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 187 Appendix F Interview Script Explanatory Notes: In addition to all scripted prompts, unscripted/spontaneous probes, may be helpful to further encourage participants to do most of the talking. Spontaneous probes may be added to further clarify a respondent’s response during each cognitive interview. Examples of a spontaneous probes I may use during my cognitive interviews: “I noticed you skipped (or changed) question 3, can you explain why that was?” The C-NICAS survey will be conducted using paper and pencil. Comments to encourage think aloud will be made by the interviewer before participants answer each of the survey’s 21 questions. Respondents will read and consider each survey item while thinking aloud, after which they will write down their response. Depending on the amount of information garnered from the think aloud, the interviewer will consider using verbal probes. Verbal probes may be considered if they participant is reluctant to think out loud or if little cognitive response processes are revealed. Some, all or none of the verbal probing questions listed below for each survey question may be asked. In other words, use of verbal probes is optional depending on the respondent’s initial reaction and responses to each survey item. C-NICAS: Canadian Nurse Informatics Competency Assessment Scale C-NICAS Survey Questions (bold) Scripted prompts – think aloud/probing (italicized) 1. Use Information and Communication Technology (ICT) devices. Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria. 1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question? Verbal Probing: Probe 1.1 What did you understand by the term, “information and communication technology (ICT)”? Probe 1.2 What did you understand by the term, “devices”? 
Probe 1.3 (If NA) Can you explain why you chose Not Applicable? Probe 1.4 How easy or difficult was it to select an answer from the options provided? Why? Probe 1.5 What time period where you thinking about when answering? From when until when? INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 188 2. Uses ICT applications. Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria. 1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question? Verbal Probing: Probe 2.1 What did you understand by the phrase, “ICT applications”? Probe 2.2 (If answered NA) Can you explain why you chose Not Applicable? Probe 2.3 How easy or difficult was it select an answer from the options provided? Why? Probe 2.4 What time period where you thinking about when answering? From when until when? 3. Performs search and critical appraisal of on-line literature and resources. Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria. 1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question? Verbal Probing: Probe 3.1 In your own words what do you think this question is trying to ask? Probe 3.2 (If answered NA) Can you explain why you chose Not Applicable? Probe 3.3 How easy or difficult was it select an answer from the options provided? Why? Probe 3.4 What time period where you thinking about when answering? From when until when? 4. Analyses, interprets, and documents pertinent nursing and patient data using standardized languages. Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria. 1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question? INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 189 Verbal Probing: Probe 4.1 What did you understand by the phrase, “pertinent nursing and patient data”? Probe 4.2 What did you understand by the phrase, “standardized languages”? Option: Omit Probe 4.1 and 4.2, and, instead, ask 4.3. Probe 4.3 In your own words what do you think this question is trying to ask? Probe 4.4 (If answered NA) Can you explain why you chose Not Applicable? Probe 4.5 What time period where you thinking about when answering? From when until when? Probe 4.6 How easy or difficult was it select an answer from the options provided? Why? 5. Assists patients and their families to access, review and evaluate online information. Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria. 1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question? Verbal Probing: Probe 5.1 What did you understand by the phrase, “pertinent nursing and patient data?’ Probe 5.2 What did you understand by the phrase, “online information”? Option: Omit Probe 5.1 and 5.2, and, instead, ask 5.3. 
Probe 5.3 In your own words what do you think this question is trying to ask? Probe 5.4 (If answered NA) Can you explain why you chose Not Applicable? Probe 5.5 (If not NA) Can you recall when you last assisted patients or their families to access, review or evaluate online information? Option omit Probe 5.5 and instead, ask 5.6 Probe 5.6 How easy or difficult was it select an answer from the options provided? Why? 6. Describes the processes of data gathering, recording and retrieval in paper and electronic records. Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria. 1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question? Verbal Probing: INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 190 Probe 6.1 In your own words what do you think this question is trying to ask? Probe 6.2 (If answered NA) Can you explain why you chose Not Applicable? Probe 6.3 How did you work out your answer to this question? Probe 6.4 How easy or difficult was it select an answer from the options provided? Why? 7. Articulates the significance of information standards for interoperable electronic health records. Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria. 1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question? Verbal Probing: Probe 7.1 What did you understand by the phrase, “information standards”? Probe 7.2 What did you understand by the term,“interoperable”? Option: omit Probe 7.1 and 7.2, and, instead, ask 7.3. Probe 7.3 In your own words what do you think this question is trying to ask? Probe 7.4 (If answered NA) Can you explain why you chose Not Applicable? Probe 7.5 How easy or difficult was it select an answer from the options provided? Why? 8. Articulates the importance of standardized nursing data to reflect nursing practice and advance nursing knowledge. Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria. 1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question? Verbal Probing: Probe 8.1 What did you understand by the phrase, “standardized nursing data”? Option: omit Probe 8.1. Instead, ask 8.2. Probe 8.2 In your own words what do you think this question is trying to ask? Probe 8.3 (If answered NA) Can you explain why you chose Not Applicable? Probe 8.4 How did you work out your answer to this question? INTERPRETABILITY OF A CANADIAN INFORMATICS SCALE 191 Probe 8.5 How easy or difficult was it select an answer from the options provided? Why? 9. Critically evaluates data and information from a variety of credible sources to inform nursing care. Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria. 1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question? 
Verbal Probing:
Probe 9.1 In your own words, what do you think this question is trying to ask?
Probe 9.2 (If answered NA) Can you explain why you chose Not Applicable?
Probe 9.3 How did you work out your answer to this question?
Probe 9.4 How easy or difficult was it to select an answer from the options provided? Why?

10. Complies with legal and regulatory requirements, ethical standards and organizational policies.
Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria.
1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable
Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?
Verbal Probing:
Probe 10.1 In your own words, what do you think this question is trying to ask?
Probe 10.2 (If answered NA) Can you explain why you chose Not Applicable?
Probe 10.3 How did you work out your answer to this question?
Probe 10.4 How easy or difficult was it to select an answer from the options provided? Why?

11. Advocates for the use of current and innovative ICTs in health care.
Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria.
1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable
Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?
Verbal Probing:
Probe 11.1 In your own words, what do you think this question is trying to ask?
Probe 11.2 (If answered NA) Can you explain why you chose Not Applicable?
Probe 11.3 How did you work out your answer to this question?
Probe 11.4 How easy or difficult was it to select an answer from the options provided? Why?

12. Identifies and reports system process and functional issues according to organizational policies.
Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria.
1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable
Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?
Verbal Probing:
Probe 12.1 What did you understand by the phrase, “system process and functional issues”?
Option: Omit Probe 12.1 and, instead, ask 12.2.
Probe 12.2 In your own words, what do you think this question is trying to ask?
Probe 12.3 (If answered NA) Can you explain why you chose Not Applicable?
Probe 12.4 How did you work out your answer to this question?
Probe 12.5 How easy or difficult was it to select an answer from the options provided? Why?

13. Maintains effective nursing practice and patient safety during system unavailability.
Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria.
1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable
Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?
Verbal Probing:
Probe 13.1 What did you understand by the phrase, “system unavailability”?
Option: Omit Probe 13.1 and, instead, ask 13.2.
Probe 13.2 In your own words, what do you think this question is trying to ask?
Probe 13.3 (If answered NA) Can you explain why you chose Not Applicable?
Probe 13.4 How did you work out your answer to this question?
Probe 13.5 (If not NA) Can you recall when you last maintained effective nursing practice and patient safety during system unavailability?

14. Demonstrates professional judgment in the presence of technologies.
Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria.
1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable
Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?
Verbal Probing:
Probe 14.1 In your own words, what do you think this question is trying to ask?
Probe 14.2 (If answered NA) Can you explain why you chose Not Applicable?
Probe 14.3 How did you work out your answer to this question?
Probe 14.4 How easy or difficult was it to select an answer from the options provided? Why?

15. Recognizes the importance of nurses' involvement in the design, selection, implementation and evaluation of ICTs applications and systems in health care.
Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria.
1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable
Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?
Verbal Probing:
Probe 15.1 What did you understand by the phrase, “ICTs applications and systems in health care”?
Option: Omit Probe 15.1 and, instead, ask 15.2.
Probe 15.2 In your own words, what do you think this question is trying to ask?
Probe 15.3 (If answered NA) Can you explain why you chose Not Applicable?
Probe 15.4 How did you work out your answer to this question?
Probe 15.5 How easy or difficult was it to select an answer from the options provided? Why?

16. Identifies and demonstrates appropriate use of a variety of ICTs to deliver care.
Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria.
1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable
Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?
Verbal Probing:
Probe 16.1 In your own words, what do you think this question is trying to ask?
Probe 16.2 (If answered NA) Can you explain why you chose Not Applicable?
Probe 16.3 How did you work out your answer to this question?
Probe 16.4 How easy or difficult was it to select an answer from the options provided? Why?

17. Uses decision support tools to assist clinical judgment.
Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria.
1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable
Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?
Verbal Probing:
Probe 17.1 What did you understand by the phrase, “decision support tools”?
Option: Omit Probe 17.1 and, instead, ask 17.2.
Probe 17.2 In your own words, what do you think this question is trying to ask?
Probe 17.3 (If answered NA) Can you explain why you chose Not Applicable?
Probe 17.4 How did you work out your answer to this question?
Probe 17.5 How easy or difficult was it to select an answer from the options provided? Why?

18. Uses ICTs in a manner that supports the nurse-patient relationship.
Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria.
1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable
Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?
Verbal Probing:
Probe 18.1 In your own words, what do you think this question is trying to ask?
Probe 18.2 (If answered NA) Can you explain why you chose Not Applicable?
Probe 18.3 How did you work out your answer to this question?
Probe 18.4 How easy or difficult was it to select an answer from the options provided? Why?

19. Describes the various components of health information systems.
Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria.
1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable
Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?
Verbal Probing:
Probe 19.1 What did you understand by the phrase, “health information systems”?
Option: Omit Probe 19.1 and, instead, ask 19.2.
Probe 19.2 In your own words, what do you think this question is trying to ask?
Probe 19.3 (If answered NA) Can you explain why you chose Not Applicable?
Probe 19.4 How did you work out your answer to this question?
Probe 19.5 How easy or difficult was it to select an answer from the options provided? Why?

20. Describes various types of electronic records used in care.
Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria.
1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable
Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?
Verbal Probing:
Probe 20.1 In your own words, what do you think this question is trying to ask?
Probe 20.2 (If answered NA) Can you explain why you chose Not Applicable?
Probe 20.3 How did you work out your answer to this question?
Probe 20.4 How easy or difficult was it to select an answer from the options provided? Why?

21. Describes benefits of informatics to improve health systems and quality of care.
Please rate your self-perceptions of informatics competencies next to each indicator listed below as per the following criteria.
1 = Not competent 2 = Somewhat competent 3 = Competent 4 = Very Competent NA = Not Applicable
Think aloud: Please say, out loud, what you are thinking. What’s going through your mind as you answer this question?
Verbal Probing:
Probe 21.1 In your own words, what do you think this question is trying to ask?
Probe 21.2 (If answered NA) Can you explain why you chose Not Applicable?
Probe 21.3 How did you work out your answer to this question?
Probe 21.4 How easy or difficult was it to select an answer from the options provided? Why?
Appendix G

Appendix H
Participant Consent Form
Interpretability of the Canadian Nurse Informatics Competency Assessment Scale among Fourth-Year Nursing Students
Date of Submission: September 25, 2018
Date of Approval: October 19, 2018
Principal Investigator: Andrea Elizabeth Dresselhuis, RN, BSN, Graduate Student, Master of Science in Nursing, Trinity Western University, Langley, BC. Email: Andrea.Dresselhuis@mytwu.ca
Supervisor: Richard Sawatzky, PhD, RN, Professor, Trinity Western University School of Nursing, 7600 Glover Road, Langley, British Columbia, V2Y 1Y1. Phone: (604) 513-2121 ext. 3274. Email: Rick.Sawatzky@twu.ca

Purpose: The purpose of this thesis project is to investigate how well each survey question in the Canadian Nurse Informatics Competency Assessment Scale (C-NICAS) is understood. By establishing the interpretability of the C-NICAS, I hope to strengthen its future usefulness as a tool for assessing nursing informatics in the entry-to-practice nursing population.

Procedures: If you agree to participate, you will take part in a 30- to 90-minute interview with the Principal Investigator, at a mutually agreed upon time and location, during which you will complete a survey entitled the Canadian Nurse Informatics Competency Assessment Scale (C-NICAS). As you complete the survey, the Principal Investigator will ask you questions related to your interpretation of each of the survey’s 21 questions. By consenting to this study, you are consenting to having the interview audio-recorded and reviewed by the Principal Investigator. After the interview there will be a short debriefing session. You will receive a copy of this consent form to take home. You will also be given the opportunity to receive the results of your survey and to indicate how you would like these to be sent to you.

Potential Risks and Discomforts: There are no anticipated risks or discomforts associated with this study. If at any point you feel you need to withdraw from the study, please know you can do so with no negative consequences.

Potential Benefits to Participants and/or Society: By participating in this research, you will help generate recommendations for improving the wording of the C-NICAS and, potentially, its future usefulness in other nursing populations. Additionally, you may find the results of your survey score helpful for self-reflection.

Confidentiality: Your anonymity and privacy are very important. Any information that is obtained in connection with this study and that can be identified with you will remain confidential and will be disclosed only with your permission or as required by law. Information related to your personal identity will be removed from all documents, and your name will be replaced with an identifier code that can only be interpreted and recognized by the principal researcher and a professional medical transcriptionist. All photocopied transcripts will be kept under lock and key, and all electronic data files will be saved on a password-protected computer belonging to the Principal Investigator. The medical transcriptionist will sign a confidentiality agreement, and all audio-recorded information exchanged with the medical transcriptionist will either be sent in a password-protected file format or hand delivered.
All recorded data and transcripts will be kept for five years, after which time all recorded data will be permanently deleted and all transcripts shredded.

Remuneration/Compensation: To thank you for your participation in this study, you will receive a thank-you card and a $15 coffee gift card. If you withdraw partway through the study, you will still receive the gift card and thank-you card.

Contact for Information about this Study: If you have any further questions or desire further information with respect to this study, you may contact Andrea Dresselhuis (Principal Investigator) at Andrea.Dresselhuis@mytwu.ca or her thesis supervisor, Dr. Rick Sawatzky, at Rick.Sawatzky@twu.ca.

Contact for Concerns about the Rights of Research Participants: If you have any concerns about your treatment or rights as a research participant, you may contact Elizabeth Kreiter in the Office of Research, Trinity Western University, at 604-513-2167 or researchethicsboard@twu.ca.

Consent: Your participation in this study is entirely voluntary, and you may refuse to participate or withdraw from this study at any time without any negative outcomes to you. If you decide to withdraw from the study at any time, please let the Principal Investigator know of your decision not to continue; your answers and information will then be removed from the study and destroyed, and no information that you have given will be included in the study.

Signature: Your signature indicates your consent to participate in this study and your agreement that your responses may be put in anonymous form and kept for further use for up to five years after the completion of the study, at which time transcriptions will be permanently deleted and/or shredded. Your signature below also indicates that your questions about the study have been answered to your satisfaction and that you have received a copy of this consent form for your own records.

____________________________________          _____________________
Signature of Research Participant              Date

____________________________________
Printed Name of Research Participant

____________________________________          _____________________
Signature of the Researcher Obtaining Consent  Date

_____________________________________
Printed Name of the Researcher Obtaining Consent