In a research project on the concerns and achievements of newly qualified teachers we used a qualitative data analysis software package for the Macintosh computer. This package allows the storage of documents such as interview transcripts and the coding and indexing of text-units, and provides a tool for establishing and refining categories within data. However, although a computer-aided analysis dramatically decreases the time conventionally needed for the cutting, sorting and pasting of interview data, it poses several challenges when used in a collaborative context. In this paper we would like to discuss some of the practical and methodological considerations involved in using the package collaboratively and provide some basic recommendations for successful implementation.
In a research project on the concerns and achievements of newly qualified teachers we used a qualitative data analysis software package for the Macintosh computer. The combination of collaboration and qualitative data analysis with a computer seemed particularly challenging. We therefore kept a detailed logbook, allowing us to trace both the progress made in the analysis of the data and the changes in the process of data analysis itself, focusing on practical and methodological issues. In this paper we will first present a brief outline of the full research project and its methodology. We will then discuss some of the practical and methodological issues that emerged from our reflections on the research process itself, aided by the logbook. Finally, we will attempt to place our experiences within a theoretical framework of collaboration.
The research described in this paper forms part of a broader focus of evaluation of the 'New Teacher in School' course, first run at the Department of Education at Newcastle University in 1994/5. This was a course designed for newly qualified teachers to support them during the first two years of their professional practice in primary and secondary compulsory education.
This new course was evaluated through a combination of quantitative and qualitative methodologies, including a survey, questionnaires and semi-structured interviews. The qualitative data concerned the teachers' perceptions of the support they had received from their school and their Local Education Authority and their perceptions of the New Teacher in School course itself. We also analysed the teachers' concerns and achievements after one month and one year of teaching. Results of this study have been published elsewhere (Ford, Higgins, & Oberski, 1996; Oberski, Ford, Higgins, & Fisher, 1999). In this paper we will focus only on the process of analysis of the semi-structured interviews with the teachers who participated in the course, especially on the issues arising from conducting a collaborative computer-aided analysis of these interviews.
This paper is drawn from our reflections on the research process and our re-reading of the logbook we kept during the analysis phase of the research. To allow the reader to put this in context we will first outline how the main data were analysed.
The tapes with the recorded interviews were transcribed by a professional audio-typist and checked by the researchers. The transcripts were coded into categories which were descriptive or interpretative (Miles & Huberman, 1994) by using a combination of manual and computer-aided methods. We used NUD*IST (Non-numerical Unstructured Data: Indexing Searching Theorising, see Richards & Richards, 1995), a software tool that supports the development of hierarchical categories of coding. We used a methodology of grounded theory and progressive focusing (Glaser & Strauss, 1967) to analyse the main data. We kept a detailed electronic logbook to aid the joint development of concepts and categories and to allow us to trace the research process and progress.
We started the analysis with the longest and most complex interview by following the advice to "categorise richly and to code liberally" (Richards & Richards, 1995). As anticipated, a large number of categories resulted, as we literally indexed everything at this stage. After this, we reviewed our procedures and objectives and agreed what aspects of the data should be looked at in more detail. We then used the now existing categories to manually and individually code two further interviews in a much more focused way, only coding what we thought was relevant to our now much more clearly defined research objectives. At our meetings we then compared our individual coding of the same scripts and entered agreed coding into NUD*IST. Informal assessment of reliability suggested that at this stage there was substantial agreement on what needed to be coded, and almost as much consensus about how it needed to be coded. We now reviewed our analysis methods and objectives again and decided it was justified to allow some individual working at this stage of the project.
Two of the researchers (KF & SH) had been involved in the main data collection of the project, while one researcher (IO) was exclusively involved with the data analysis. All three were new to NUD*IST but two (SH & IO) had had extensive previous experience of using computers in data analysis. All three researchers wished to be involved in the actual analysis of the data and initially we established regular meetings to work on the data with NUD*IST as a team. However, it was quickly discovered that it was impractical for three people to work simultaneously with NUD*IST on one machine as only one person could operate the software and rotating did not allow enough time for everyone to become familiar with the software.
To solve this, we decided to work in pairs rather than as a team. Initially we agreed that no one would work on the data on his or her own. This was to ensure that the analysis remained within the collaborative domain and did not develop as one person's individual interpretation. In addition, we scheduled the analysis so that each subsequent session was conducted by a different pair, ensuring that not more than one session was ever missed by any of the researchers. Within the analysis platform of NUD*IST it is easy to create new categories, delete old categories, re-organise existing categories and re-index sections of interview transcripts. Working in pairs in this way made it easier to keep up with the latest developments in the analysis. The agreement was developed into a set of protocols (Appendix 1).
At the beginning of the project, working in pairs was a valuable way of ensuring that our skills in using the software developed steadily, as well as a way of developing a shared understanding of the categories we were developing in our analysis of the data. However, it was very time consuming and difficult to read, discuss and code the transcripts simultaneously and on-screen. Implicit in the following record is the realisation that on-screen working may have to be supplemented by analysis of hard copies of the interview transcripts:
Kate and Steve
Read through Brian's transcript on screen - did not index anything but raised several issues. Decided that we need to examine transcripts individually and bring some first thoughts about categories even at this initial stage. Two reasons 1. We found it hard to generate categories on screen when 'new' to the data 2. It is also helpful not to have ideas influenced by others as they develop or to be overwhelmed by the group dynamic.
Another part of the working agreement was establishing guidelines for where the data were kept, who was responsible for backing up the project and who had the most recent version. Being part of an Apple network allowed us to designate shared folders that we could all access. As the project expanded, it became necessary to set up a shared account on the University server.
As the project developed we found we were increasingly using hard copy printouts rather than working on screen. This was partly due to practical reasons in that we did not all have a computer capable of running the software. Also NUD*IST did not allow us to see or print out the whole "tree" of categories at once. Thus, as we developed and refined categories we found we needed to print out a list of all categories in the project for reference during indexing.
Kate and Steve
[...] Printed and re read log. Tried to print out sections of the tree to familiarise ourselves with the categories [...] Drafted the categories onto the board so that we could print out a copy and review them. Decided that we should agree a (short) section of text to index by hand then discuss at the computer when working in pairs. We suggest that the person working in the next pair is to e-mail relevant section details for discussion and copy memo to third person as part of working protocol [...]
This reliance on paper has had an interesting echo in a mailbase discussion (Carter, 1996) about the advantages of paper, in which the main arguments centred around the attraction of remaining with the more physically appealing and aesthetic "interface" of paper (for details of other electronic sources of information about computer-aided qualitative data analysis please see Appendix 2). Eventually, our working arrangement changed so that we would first prepare transcript sections individually on paper, then discuss our ideas in pairs and finally index the sections on the machine. At the start of the project we had anticipated we would be working almost exclusively on-screen, but this proved to be misconceived and hard copies played an increasingly important role in the analysis process.
The change in method of analysis described above was primarily an attempt to reduce the time necessary for data analysis: rather than spending many hours in discussion about what to index where and why, we would individually make up our minds and then compare notes. Naturally, this was only possible after we had already established a large number of categories and agreed on their meaning. Therefore, these changes should be viewed as an integral part of the analysis process where each phase of the analysis requires a different working method.
Thus, at the start of the analysis we needed to agree on our categories and indexed everything to facilitate saturation of categories. We first worked as a team, then in pairs and some time was inevitably spent updating each other on the latest developments in the analysis. Once the categories had become established, we were able to prepare individually by indexing sections of hard copy scripts in advance, discuss them in pairs, resolve any differences and then index the sections as agreed. We found that there was substantial agreement in assigning sections of the transcripts to categories.
However, the whole analysis process remained very time consuming. This was not only due to the indexing process itself, but also to our working through the agreed procedures for making backups and keeping the log up to date. There can be little doubt that computer-aided qualitative analysis, if carried out by one individual, can drastically reduce the time otherwise spent with pen, paper, scissors and glue, especially when one considers the ease with which changes in indexing can be made. However, in a collaborative analysis, the time saved on this aspect of the research process might be offset by the extra time spent in integrating the technology into the team process and following the protocols necessary to maintain true collaboration. On the other hand, the flexibility in working with the data which the technology provides, and the skills gained by the researchers, may prove beneficial beyond the life of the current project.
Early log entries also show a developing focus on methodological issues in addition to practical or technical problems.
[...] Discussed hierarchy of tree and [...] some methodological issues around interview length and percentage of talk by interviewer and respondent.
The time we spent working in pairs early on in the project gave us an opportunity to assess inter-rater reliability for coding the transcripts. However, the collaborative nature of the working arrangement also meant that, to some extent at least, this reliability was supported and perhaps constructed by our regular discussions. By this we mean that good inter-rater reliability was an unsurprising development: we had spent several months discussing the definition of new and emerging categories, so our accuracy in assigning sections of transcripts to those and subsequent categories was to be expected. However, we would also argue that this process of discussion is evidence of greater inter-observer reliability (following the distinction made by LeCompte & Goetz, 1982), in that our definitions and their assumptions have been tested in discussion. Differences in interpretation had to be negotiated (Crow, Levine, & Nager, 1992) and the paired sessions in front of the computer regularly provided an opportunity for that negotiation.
Iddo and Kate
- discussion re non-verbal behaviour in interviews and the possible impact upon 'translating/understanding' meaning in scripts.
Started to index documents for 'motivation' aspect/reasons given for coming to the NTIS course. Needs to be completed....
Kate and Steve
Read through Brian's transcript on screen - did not index anything but raised several issues....
Iddo and Steve
Continued indexing Brian and generating new categories. "Concerns" at first level with "Conflict" underneath. Discussed the difference between conflict within the role of the teacher e.g., between discipline and rapport and between being a teacher and one's personal life...
Discussed at length new category under "teacher" called "development" where Brian talks about techniques and skills at text unit 917-919...
This perhaps is a reflection of the idea that team working increases the internal reliability of the research (LeCompte & Goetz, 1982, see sections on multiple researchers and peer examination), and therefore its internal validity. The relationship between reliability and validity in qualitative research is a regularly disputed topic (Hammersley, 1993; see Scheurich, 1996) and it is not the intention of this paper to enter this particular arena. It is important to remember, however, that agreement can be reached in groups or pairs for other than intellectual reasons and that there is a difference between a negotiated consensus and an individual opinion.
Working with a computer and having computer-printed extracts and coding can provide a veneer of objectivity. Obviously the quality of the research is still dependent upon the quality of the researchers' efforts in checking the accuracy of transcripts, the definitions of categories for coding and the accuracy of applying those definitions. Some types of categories, described as factual (Richards & Richards, 1995) are relatively easy to define (e.g., female/male, primary/secondary teacher). Referential categories (Richards & Richards, 1995) are those dependent upon textual references in the transcripts. Similarly these can be largely descriptive or more interpretative (Miles & Weitzman, 1994).
For example, a section of an interview describing how a teacher heard about a particular course is assigned to a descriptive category about how teachers received particular information, while a category about skills as an aspect of a teacher's development depends to a larger extent on an interpretation of the text. It is clearly in this latter grouping of categories that a research team particularly needs to ensure that the definition and attribution of coding is clear. These are problems for any similar qualitative research. In a collaborative project they quickly become issues for discussion which could have been missed or glossed over in a group project where members of the team worked individually. Once categories are defined and coded on the computer it is easy to be seduced into accepting them and it is, therefore, important to continue to question categories and definitions.
So far our paper has mainly considered the practical and methodological dimensions of this collaborative research project. However the group's shared experience also raises other important issues which seem to us to lie outside the strict confines of such headings. Much of what is interesting and which provides the basis of our shared learning lies within the sociological and psychological areas of experience. These concerns centre around self-awareness, individual perceptions and interpersonal relationships, in other words the dynamic of individuals and groups. As such, they are less susceptible to easy analysis but are no less important in relation to their impact upon key aspects of the research process and its outcomes.
In attempting to establish some conceptual or theoretical reference points or to set our collaborative experience in a wider context, it is interesting to note that the literature on collaboration within research teams seems to fall into three types. Firstly, there is the literature on social research generally and in relation to action research in particular (Louden, 1991; Altricher, Posh, & Somekh, 1993; Hammersley, 1993). Within this grouping there is also some debate about the nature of relationships between researchers and the researched, to which an exploration of concepts such as power, status and hierarchy is central (e.g., Troyna & Foster, 1988). Secondly, the concept of collaboration is a central thread in feminist research (e.g., Acker & Piper, 1984; Griffiths, 1995). In this context, it is often concerned with the major issues of differing philosophies and perspectives (consider for example the range of authors reviewed in Griffiths). The day-to-day experiences of women within research teams have received less attention, although Porter and Scott (in Acker & Piper, 1984) offer illuminating insights into their experience as research associates, and what they perceived to be the lack of value accorded to their work. Thirdly, as multi-disciplinary development work gains momentum, there is a growing realisation that the personal experience of collaborative, multi-professional research teams can provide a 'mirror' which reflects what exposure to other perspectives means for those increasingly encouraged to work in this way (e.g., Crow, Levine, & Nager, 1992).
Accounts of such collaboration are still relatively few and there is still little evidence in the literature of analysis with regard to individuals' experiences within research teams and the consequences for the research process and outcomes. Our experience does not match neatly any one of the scenarios outlined above, rather it bears comparison with some features of all three. We do not attempt to provide a 'model framework' for collaboration, only to chronicle some of our shared experiences and to consider the implications of what we have learned.
The emergence, composition and rationale for the group were unusual. Researchers tend to know each other before they start to work together. This was not our experience. The suggestion that we might work together was driven largely by pragmatic considerations: the concern that we should quickly become active researchers. The focus for the research and the possible application of the NUD*IST package was suggested to us and centred around the evaluation of a new University course. Whilst not multi-disciplinary in the strict sense of the term, the collaborative research 'team' initially comprised two individuals considerably different in age, expertise, experience, aspiration, gender, research backgrounds and interests. The complexity was further compounded by the arrival of a new departmental Research Associate with a different experiential and educational background. Like us, our new colleague brought a research philosophy and personal experience different from the computer-aided qualitative data analysis to which we were 'committed'.
The literature from several fields of research offers scope for examining group interaction and collaboration. 'Organisational theory' (Pugh, 1985; Pugh, Hickson, & Jennings, 1983) focuses closely on groups, interactions and team roles (Belbin, 1981). There appears to be a strand focusing on the creation of 'types' and the work has extended, amidst some controversy (e.g., Zuckerman and Fletcher's work quoted in Jirasinghe & Lyons, 1996, p. 61) to the development of personality questionnaires which claim to offer sound information on the suitability of individuals for particular jobs or membership of groups. However, to our minds, traditional organisational theory, in its desire to create 'types', does not take sufficient account of the complexity and contradictions of human beings or their relationships with each other.
In the educational context, researchers such as Nias and Hargreaves provide more useful parallels with our own experience, in particular, Nias' (1989) comprehensive analysis of self identity and Hargreaves' thorough exploration of the concept of collaborative cultures (Hargreaves & Fullan, 1992). A consistent and common feature in the work of each, and one which seems relevant to our own experience, is the concern to understand the individual in relation to the group. Hargreaves' arguments concerning teachers are no less true of researchers.
Teachers teach in the way they do, not just because of the skills they have or have not learned. The ways they teach are grounded in their backgrounds, their biographies, in the kinds of people they have become. Their careers - their hopes and dreams, their opportunities and aspirations, or the frustration of these things - are also important as are their relationships with colleagues. (Hargreaves, 1994, p. ix)
Hargreaves' perception of a truly collaborative culture is one in which positive working relationships are characterised by being spontaneous, voluntary, development-oriented and pervasive across time and space. His concern is that much of what passes for collaboration is in fact a contrived relationship bearing little semblance to free and equal association but rather more to power and control.
In contrast to freely entered into collaboration, the hallmarks of such 'contrived collegiality' are regulation, compulsion, implementation and predictability. It may be argued that our experience, as described in this paper, appears to reflect many of the elements of 'contrived collegiality'. The fact that the collaborative framework suggested by others was seen by all three researchers as supportive encouragement, offering opportunities rather than imposing unwanted burdens must of course be set against the political factors also operating, not least the issue of power. Nevertheless within the broad structure suggested, there was considerable freedom to shape, to plan and design the project. Our motivations were various but real, and working relationships were perceived by all of us to be non-hierarchical.
The work of Biott and Easen (1994) derives from the same concerns as those explored by Nias and Hargreaves and seems to synthesise the work of both. They argue that collaborative learning for adults and children is a complex process which defies easy or narrow definitions, that it is "essentially about the development of the self in a social context and this implies both personal and contextual features which are closely interwoven". Analysis of our collective research experience by reference to these personal and contextual dimensions has proved a useful framework and one in which the interactions between the dimensions become apparent.
Each of us came from very different social, cultural, educational and academic backgrounds, with considerable age differences. In other words we were very different people with little in common except our 'newness'. The importance of something as apparently innocuous as the 'newness' factor should not be underestimated. It is closely linked to what Biott and Easen (1994) perceive as "two key personal features of collaboration", that is, perceptions of the goals of the specific collaboration and perceptions of the nature of the specific collaborative activity. They further suggest that the goals of collaboration may be concerned with either 'adequate performance' or with the opportunity to 'wrestle with and understand something new'. Our experience suggests that both of these factors were present for each of us to a greater or lesser degree, depending upon our individual personalities, perceptions of each other and of the working context in which we found ourselves. Furthermore, individual perceptions of 'the nature of the specific collaborative activity', which Biott and Easen argue are likely to be viewed by participants as 'challenging' or 'easy', were different for each of us. Whilst we might all agree that the undertaking was 'challenging', the meaning of 'challenge' for each of us was more than a matter of semantics, depending upon our facility with computers, knowledge of the software, understandings about qualitative research, self-perception, confidence and the degree of uncertainty posed either by the collaborative process or the project itself. For example, one member of the team, initially at least, felt that there was both a gender and generation gap in relation to her confidence and expertise with the NUD*IST package. The matter of technical computer expertise and its potential for hierarchy could be interpreted as a gender issue. There is also a subtle sub-text here in relation to the concept of power.
The extent to which the 'power of the pen' may have been overtaken by the power of the technology becomes an important part of the context. The occasional feeling of not being in control of the data, or of not being close to it, was a side effect of this situation, and one that could only be addressed openly because of both our established working protocols and the positive working relationship that had been established. These seem to us to owe less to management and organisational theory and rather more to humanistic personal values and attitudes which, for all three of us, appeared to depend upon parity in our roles - recognising that each of us brought different strengths to the project and valuing the contribution of each.
The establishment of protocols for paired working, including the regular completion of a 'computer log' not only regulated and gave a sense of purpose to our working patterns, but assisted in the sharing of knowledge and expertise. Furthermore it added a new dimension to the group dynamic and a feeling by all of us that this enhanced rather than detracted from the quality of our discussion. A paired debate is very different from small group discussion, even in a trio such as ours. Even with the most sensitive inter-personal skills, the capacity to overwhelm or unintentionally marginalise ideas is considerable and is perhaps an inevitable part of the process. The line between agreement and acquiescence is finely drawn.
Words like 'community and co-operation' tend to conceal the fact that the individual gets lost. What may feel like a group decision will feel like coercion to others. (Griffiths, 1995)
In order to be truly collaborative then it is perhaps essential to have a well-developed sense of self and confidence in one's own self-esteem as well as an active willingness to understand another's perspective. This takes us back to Hargreaves' insistence that individuality should not be overlooked or dismissed in the search for collaboration. Our chosen way of working developed paired collaboration through our sessions at the computer which then contributed to an effective collaboration as a trio. Without the work in pairs we feel that the overall collaboration as a very small team of three would not have been so easily achievable.
Some obvious conclusions are perhaps indicated by the sections above. As a team we certainly found it necessary to have an agreed set of procedures and document protocols written down. This might seem an overly formal approach which militates against effective collaboration. However, a record of the practical details in such a new and technical enterprise was appreciated by the three of us. In addition, wrestling with a new piece of software was demanding in terms of time and also required a planned logistical approach. This was felt to be necessary both with regard to sharing the workload as well as pragmatically ensuring adequate precautions were made with backups of the project.
Using the computer certainly altered our way of working. We all agreed that working in pairs at the computer provided an important forum for both furthering our knowledge of the data and negotiating our understanding of the interpretation of that data. The computer provided a powerful tool for qualitative analysis and offered a framework for developing and analysing hierarchical categories deriving from the data. The working methods of qualitative research are an important consideration in assessing its reliability and validity, even in these post-modern days. When using a computer to such an extent, it is therefore important to consider any unintended impact this way of working has upon the methodology of the research.
The computer is also a powerful part of the context in social terms and this needs to be considered in deciding how to arrange groups for collaborative working. Confidence or lack of confidence with technology also becomes a part of that social dynamic. Increased inter-rater reliability could be produced by the interpersonal dynamics and is certainly to be expected as a common vocabulary develops. Our chosen method of working in alternating pairs could perhaps have been expected, therefore, to increase and facilitate this process. Any increased inter-observer reliability is therefore dependent upon the robustness of the debate within a team and a willingness to try to understand another researcher's point of view.
In personal terms our experience suggests that it is important to consider individual differences and that a range of understandings are possible in interpreting group dynamics. From the inside of that grouping we felt that no one way of characterising the relationships gave an adequate description of the processes involved.
Acker, S., & Piper, D. W. G. (1984). Is higher education fair to women? SRHE & NFER Nelson.
Altricher, H., Posh, P., & Somekh, B. (1993). Teachers investigate their work. London: Routledge.
Belbin, R. M. (1981). Management teams: Why they succeed or fail. Oxford: Heineman.
Biott, C., & Easen, P. (1994). Collaborative learning in classrooms and staffrooms. London: Fulton.
Carter, P. (1996). Advantages of paper. Qual-software mailbase.
Crow, G., Levine, L., & Nager, N. (1992). Are three heads better than one? Reflections on doing collaborative interdisciplinary research. American Educational Research Journal, 29, 737-753.
Ford, K., Higgins, S., & Oberski, I. (1996). The new teacher in school. University of Newcastle.
Glaser, B., & Strauss, A. (1967). The discovery of grounded theory. Chicago: Aldine.
Griffiths, M. (1995). Making a difference:
feminism, postmodernism and the methodology of educational research. British
Educational Research Journal, 21, 219-235.
Hammersley, M. (1993). Social research: Philosophy, politics and practice. Milton Keynes: Open University.
Hargreaves, A. (1994). Changing teachers, changing times. London: Cassell.
Hargreaves, A., & Fullan, M. G. (1992). Understanding teacher development. London: Cassell.
Jirasinghe, D., & Lyons, G. (1996). The competent head. London: Falmer.
LeCompte, M., & Goetz, J. P. (1982). Problems of reliability and validity in ethnographic research. Review of Educational Research, 52, 31-60.
Louden, W. (1991). Understanding teaching: Continuity and change in teachers' knowledge. London: Cassell.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). London: Sage.
Miles, M. B., & Weitzman, E. (1994). Appendix: Choosing computer programs for qualitative data analysis. In M. Miles & A. Huberman (Eds.), Qualitative data analysis: An expanded sourcebook (2nd ed., pp. 311-317). London: Sage.
Nias, J. (1989). Primary teacher talking: A study of teaching as work. London: Routledge.
Oberski, I., Ford, K., Higgins, S., & Fisher, P. (1999). The importance of relationships in teacher education. Journal of Education for Teaching, 24(2), 135-150.
Pugh, D. S. (1985). Organisational theory. Harmondsworth: Penguin.
Pugh, D. S., Hickson, D. J., & Jennings, C. R. (1983). Writers and organisations. Harmondsworth: Penguin.
Richards, T., & Richards, L. (1995). Using hierarchical categories in qualitative data analysis. In U. Kelle (Ed.), Computer-aided qualitative analysis (pp. 80-218). London: Sage.
Scheurich, J. J. (1996). The masks of validity: A deconstructive investigation. Qualitative Studies in Education, 9(1), 49-60.
Troyna, B., & Foster, P. (1988). Conceptual and ethical dilemmas of collaborative research: Reflections on a case study. Educational Review, 40, 289-300.
Appendix 1: Document Protocols and Working Agreement
NTIS Research Project
Standard header format is followed.
No empty lines in the body of interviews.
Page width is set to 5" (and fixed width font such as Courier to ensure less than 72 characters per line)
Documents made anonymous and checked in Word.
Only saved as Text Only with Line Breaks just prior to introduction into NUD*IST
Text unit is 1 line.
No alterations to documents once introduced.
Raw text files have name of interviewee only as title (e.g., *New Teacher In School Research project, *Kate).
Sample document format:
*University staff interview
*Interview with Mark
*Date of interview - 6.11.95
*Interviewer - Kate Ford
*Observer - Steve Higgins
If we could just start very generally, Mark, with your
involvement as a tutor, and the practicalities of that, whatever
Speaking of me as being one of the NTIS tutors?
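A transcript like the sample above had to meet the formatting rules before introduction into the package: one text unit per line and fewer than 72 characters per line. As a minimal sketch (illustrative only, not part of the original project's tooling), Python's textwrap module can produce conforming text units from plain-text paragraphs:

```python
import textwrap

def to_text_units(paragraphs, width=71):
    """Wrap each paragraph into lines of fewer than 72 characters,
    one text unit per line, with no empty lines between them."""
    units = []
    for para in paragraphs:
        units.extend(textwrap.wrap(para, width=width))
    return units
```

Each returned line then corresponds to a single one-line text unit once the file is saved as text only with line breaks.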
Master copy of named interviews kept on floppy by Steve.
Interview (Word) and raw file (TOLB) back up copies kept on floppy by Steve.
Work in pairs on project (general) (S&I, S&K, K&I).
All qualitative indexing done in pairs or as a group (except for reliability comparison tests).
Master copy of project kept in shared nntis Unix space (added once shared space negotiated with University Computing Centre).
Project folder copied at beginning of working session to ensure work takes place on a back up version.
Nodes which contain coded sections to be lower case, "folder" nodes to be in BLOCK caps e.g., TEACHING is a header title but sub categories Aspirations, Expectations and Surprises contain sections of text.
Log file in Word to be reviewed and added to each session and kept in Project folder.
New list of categories/nodes to be printed out at the end of each session with a copy given to the absent member of the group (added half way through project).
Sections of transcripts for review prior to next session copied and put in pigeonholes (added half way through project).
Project folder dated on completion and copy returned to nntis space.
Any individual exploratory work to be done on copy of project, discussed and explored by group before being incorporated into project.
Always keep copy of previous saved and dated version.
Monthly back up kept in Apple share NTIS folder on Steve's computer.
Compressed Archive back ups kept on floppy disk in Iddo's room.
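Several of the document protocols above can be checked mechanically before a transcript is introduced. The sketch below is hypothetical, written against the conventions listed in this appendix rather than any code from the project; it flags empty lines and over-long lines in a prepared transcript:

```python
def check_transcript_lines(lines):
    """Check prepared transcript lines against the document protocol:
    no empty lines in the body and no line of 72 or more characters
    (each line is one text unit)."""
    problems = []
    for i, line in enumerate(lines, start=1):
        if not line.strip():
            problems.append((i, "empty line in body"))
        elif len(line) >= 72:
            problems.append((i, "line too long (%d characters)" % len(line)))
    return problems
```

Running such a check before the "no alterations once introduced" deadline helps, since the protocol forbids fixing formatting errors after a document has been indexed.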
Appendix 2: Some On-Line Resources
The two following lists are offered for interest and information. The lists are not claimed as either original or exhaustive and are derived from a collection of WWW sites and their links. New WWW sites are appearing almost daily and it is difficult to keep up to date with the rapid developments. The e-mail discussion groups can be a valuable way of keeping informed and provide a forum for sharing and solving practical problems as well as airing broader issues to do with computer-aided qualitative data analysis.
Qualitative Research Web Sites
Q.S.R. - Qualitative Solutions & Research WWW Site. Developers of NUD*IST software
The ATLAS/ti Home Page. This is another software package for the analysis of qualitative data
The Ethnograph v5.0: Qualitative Research and Data Analysis Software
Computer Assisted Qualitative Data Analysis Software (CAQDAS) Networking Project Page
QualPage - Resources for Qualitative Researchers
The Knowledge Base: An online hypertext textbook on applied social research methods by Bill Trochim of Cornell University
Qualitative research text resources
Qualitative research email resources
Kate Ford worked as a lecturer at Newcastle University until 1998. Before that she had a successful career as a primary teacher, headteacher and in LEA advisory and inspection work.
Iddo Oberski, Ph.D., is a lecturer in educational research methods at Glasgow University. He is active in the facilitation of research development in the new Faculty of Education and has an interest in research methods, thinking skills and teacher professional development. He can be contacted at the Faculty of Education, Department of Curriculum Studies, University of Glasgow, St Andrew's Campus, Glasgow G61 4QA; telephone +44 (0)141 3303404; fax +44 (0)141 3303005; e-mail: I.Oberski@educ.gla.ac.uk.
Steve Higgins is a senior lecturer in primary education at Newcastle University. He teaches on a range of PGCE and CPD courses. His research interests are in teacher development, the educational use of ICT and in teaching thinking. He can be reached at the Department of Education, University of Newcastle upon Tyne, St Thomas Street, Newcastle NE1 7RU; telephone +44 (0)191 2225470; fax +44 (0)191 2226553; e-mail: S.E.Higgins@ncl.ac.uk.
Ford, K., Oberski, I., & Higgins, S. (2000, March). Computer-aided qualitative analysis of interview data: Some recommendations for collaborative working. [45 paragraphs]. The Qualitative Report [On-line serial], 4(3/4). Available: http://www.nova.edu/ssss/QR/QR4-3/oberski.html
Kate Ford, Iddo Oberski, & Steve Higgins