Recently, I’ve been working on an NVivo project that has been set up for a research group just beginning the analysis stage. The data come from several hundred respondents in various communities in two countries, some of whom have been surveyed and/or interviewed three or four times and some only once. The two immediate objectives in using NVivo are to draw together all the data from each individual, and to search (query) the data according to certain variables, such as location, gender and marital status.
Jumping into an NVivo project that has already been started for a research group entering the analysis stage has made me reflect on how important it is to understand, right from the beginning, the technical as well as the conceptual elements of the analysis process, and to think through their implications for other methodological and organisational tools. What follows are a few tips and potential pitfalls that have emerged.
1. Think about how you will organise importing your data into NVivo. Will you wait until all the transcripts (if that’s what you’re using) are ready and then import them all at once? With a big project like this, which employs several researchers in different locations to gather and transcribe the data, it’s more likely that transcripts will drip through in semi-organised batches and be imported either as they arrive or in a few separate phases. Where will they be saved before they are imported? How will you know for certain which have been imported and which have yet to be? If you lose track of exactly what is in your NVivo project, it takes a long time to go back and check.
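One way to keep track is a simple import log kept alongside the transcripts. The sketch below assumes a hypothetical layout – an ‘incoming’ folder of .docx transcripts and a one-column ‘import_log.csv’ that you append to after each import session – neither of which is part of NVivo itself.

```python
import csv
from pathlib import Path

# Hypothetical layout: transcripts awaiting import sit in an 'incoming'
# folder, and 'import_log.csv' records one imported filename per row.
def not_yet_imported(incoming_dir, log_path):
    """List transcript files that do not yet appear in the import log."""
    with open(log_path, newline="") as f:
        imported = {row[0] for row in csv.reader(f) if row}
    return sorted(
        p.name for p in Path(incoming_dir).glob("*.docx")
        if p.name not in imported
    )
```

Running this before each work session gives a definite answer to “which transcripts still need importing?” without opening the NVivo project.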
2. When you import your data, organise your sources into folders – usually by type or round of data collection, such as ‘round one’ and ‘round two’, or ‘interview transcripts’, ‘survey notes’ and ‘group discussions’. This can help with queries later on, but more importantly it divides your data into manageable sections rather than a single list of hundreds or even thousands of documents.
3. Keep coding systems consistent. By coding, in this case, I mean the codes you use to refer to the people (or other case units) in your study. If, in your external code book, you have used LONSGF-01 to refer to the first single female in your London sample, don’t then represent the same person in NVivo with a node called LSF01. It might look more streamlined, but it will cause you big problems when you want to import information about your respondents directly from your code book into your NVivo project.
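A mismatch like this is easy to catch early with a quick script. The sketch below is hypothetical: it assumes you can get the respondent IDs from your external code book and your NVivo case node names into two plain lists (for example, via CSV exports), and it simply reports the IDs that appear in one but not the other.

```python
def check_consistency(codebook_ids, nvivo_ids):
    """Report respondent IDs present in one system but not the other.

    codebook_ids and nvivo_ids are sets of ID strings, e.g. taken from
    a column of the external code book and from exported node names.
    """
    return {
        "missing_from_nvivo": sorted(set(codebook_ids) - set(nvivo_ids)),
        "missing_from_codebook": sorted(set(nvivo_ids) - set(codebook_ids)),
    }

# Example: LONSGF-01 exists in the code book, but NVivo has LSF01 instead.
report = check_consistency(
    {"LONSGF-01", "LONSGF-02"},
    {"LONSGF-01", "LSF01"},
)
```

If both lists come back empty, the two systems agree and an attribute import keyed on those IDs will match up cleanly.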
4. Learn about classifications and attributes. If you want to compare segments of your sample, or focus on a specific one, it’s much more useful in the long run to assign attribute values (e.g. ‘male’ and ‘female’) to your cases than to try to do the same job by creating corresponding nodes.
5. Consider which variables (gender, age, occupation etc.) are likely to be useful in your analysis, and make sure you not only collect this information but also record it in a way that’s easy to import into NVivo. The best way is to collate it in a spreadsheet, with the first column containing the names of the NVivo nodes representing each person (or case). Don’t put multiple bits of information in a single cell: for example, instead of one location cell containing ‘R; Neston’ or ‘U; Ealing’ (where R = rural and U = urban), create two cells, one containing ‘rural’ or ‘urban’ and the other the name of the locality.
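If the combined format has already been used, the cells can be split programmatically before import rather than by hand. This is a minimal sketch assuming the exact ‘R; Neston’ style shown above (a one-letter code, a semicolon, then the locality); a real spreadsheet would need checking for variants first.

```python
# Assumed cell format: a settlement code, a semicolon, then the locality,
# e.g. 'R; Neston' (R = rural, U = urban).
SETTLEMENT = {"R": "rural", "U": "urban"}

def split_location(value):
    """Split a combined location cell into (settlement type, locality)."""
    code, locality = (part.strip() for part in value.split(";", 1))
    return SETTLEMENT[code], locality

# 'R; Neston' becomes two separate values ready for two spreadsheet columns.
settlement, locality = split_location("R; Neston")
```

Applied across the location column, this yields the two-cell layout that imports cleanly as two separate attributes.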
This is by no means a comprehensive list, just a few things to think about. Big research projects can be a nightmare if they get out of hand, but if they’re well organised most of the functions of NVivo will work just as well as for small projects. A bit of advance organisation can save a huge amount of time and effort.