How much can a KAP survey tell us about people's knowledge, attitudes and practices? Some observations from medical anthropology research on malaria in pregnancy in Malawi

By Annika Launiala (University of Tampere and University of Kuopio, Finland)

Knowledge, attitude, and practice (KAP) surveys are widely used to gather information for planning public health programmes in countries in the South. However, there is rarely any discussion about the usefulness of KAP surveys in providing appropriate data for project planning, and about the various challenges of conducting surveys in different settings. The aim of this article is two-fold: to discuss the appropriateness of KAP surveys in understanding and exploring health-related knowledge, attitudes, and practices, and to describe some of the major challenges encountered in planning and conducting a KAP survey in a specific setting. Practical examples are drawn from a medical anthropology study on socio-cultural factors affecting treatment and prevention of malaria in pregnancy in rural Malawi, southern Africa. The article presents issues that need to be critically assessed and taken into account when planning a KAP survey.

Background: KAP surveys

There is increasing recognition within the international aid community that improving the health of poor people across the world depends upon adequate understanding of the socio-cultural and economic aspects of the context in which public health programmes are implemented. Such information has typically been gathered through various types of cross-sectional surveys, the most popular and widely used being the knowledge, attitude, and practice (KAP) survey, also called the knowledge, attitude, behaviour and practice (KABP) survey (Green 2001, Hausmann-Muela et al. 2003, Manderson and Aaby 1992, Nichter 2008:6-7).

The KAP survey tradition originated in the field of family planning and population studies in the 1950s. KAP surveys were designed to measure the extent to which an obvious hostility to the idea and organisation of family planning existed among different populations, and to provide information on the knowledge, attitudes, and practices in family planning that could be used for programme purposes around the world (Cleland 1973, Ratcliffe 1976). In the 1960s and 1970s, KAP surveys began to be utilised for understanding family planning perspectives in Africa (Schopper et al. 1993). Around the same time, the number of studies on community perspectives and human behaviour grew rapidly in response to the needs of the primary health care approach adopted by international aid organisations. Hence KAP surveys established their place among the methodologies used to investigate health behaviour, and today they continue to be widely used to gain information on health-seeking practices (Hausmann-Muela et al. 2003, Manderson and Aaby 1992).

The attractiveness of KAP surveys is attributable to characteristics such as an easy design, quantifiable data, ease of interpretation and concise presentation of results, generalisability of small sample results to a wider population, cross-cultural comparability, speed of implementation, and the ease with which one can train enumerators (Bhattacharyya 1997, Stone and Campbell 1984).

Nevertheless, over the years some researchers have criticised KAP surveys for taking for granted that the data provided offers accurate information about knowledge, attitudes, and practices that can be used for programme planning purposes (Cleland 1973, Nichter 1993, Pelto and Pelto 1997, Yoder 1997, see also Green 2001). A number of social scientists have also voiced their concern over the applicability of KAP surveys (Cleland 1973, Caldwell et al. 1994, Green 2001, Manderson and Aaby 1992, Nichter 1993, Ratcliffe 1976, Smith 1993). Yet in the international health community and among health programme planners, there is rarely any discussion about whether KAP surveys are an appropriate methodology for exploring health-seeking practices for programme planning purposes (Foster 1987). Lately there has not been much critical discussion among social scientists regarding this issue either.

My experience with the use of KAP surveys for programme planning comes from Malawi, where I worked as a project officer for UNICEF from 1998 to 2001. During this time I was involved in several KAP surveys conducted by UNICEF in collaboration with their local partners. At that time KAP survey research was common practice in international health (see also Nichter 2008:6-7). Why KAP surveys? In the UNICEF Malawi office, there were several reasons. First of all, there were Malawians who had received training in survey and quantitative research methods (compared to only two medical anthropologists in the entire country, to my knowledge). Secondly, surveys were easy and relatively cost-effective to conduct, even nationwide. Thirdly, there was an assumption that the results could be generalised nationwide; and, moreover, the results, "hard numbers", could be used to show progress to the funding agencies. During my three years in UNICEF I became rather doubtful about the usefulness of KAP survey data in programme planning because we rarely discussed the data quality and thus the usefulness of the results (see also Gill 1993). As a matter of fact, the findings were used only to a limited extent for programme purposes. This problem was recognized by many of us, both national and international staff working in UNICEF as well as Malawian counterparts, but due to time constraints and lack of skills and mechanisms for translating the results into practice, research reports were frequently underutilised.

When I started to develop a PhD research plan in 2002 for a medical anthropology study on malaria in pregnancy, I was interested in adding a KAP survey to the study design to gain first-hand practical experience with the method and to clarify my doubts and concerns about it. Thus, in addition to in-depth interviews, focus group discussions and participant observation, I carried out a KAP survey with the assistance of four local research assistants at the antenatal clinic of the Lungwena Health Centre and in six villages of the health centre catchment area. In the villages, we aimed at 200 interviews altogether, with sample sizes proportional to village size. At the antenatal clinic, every third woman was selected over four days. This sampling procedure resulted in 248 interviews, 200 in the villages and 48 at the Health Centre (for more details, see Launiala and Kulmala 2006).
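The proportional-to-size allocation described above can be sketched in a few lines. The village names and population figures below are invented for illustration (the article does not report them); the largest-remainder rounding is one common way to make the allocations sum exactly to the target:

```python
def allocate_interviews(village_sizes, total=200):
    """Allocate a fixed number of interviews across villages in
    proportion to village size (largest-remainder rounding)."""
    total_pop = sum(village_sizes.values())
    # Ideal (fractional) quota for each village
    quotas = {v: total * n / total_pop for v, n in village_sizes.items()}
    # Start with the integer part of each quota
    alloc = {v: int(q) for v, q in quotas.items()}
    # Hand the leftover interviews to the largest fractional remainders
    remaining = total - sum(alloc.values())
    for v in sorted(quotas, key=lambda v: quotas[v] - alloc[v], reverse=True)[:remaining]:
        alloc[v] += 1
    return alloc

# Hypothetical village populations, not taken from the study
sizes = {"A": 1200, "B": 800, "C": 600, "D": 500, "E": 450, "F": 450}
print(allocate_interviews(sizes))
```

The same logic applies whatever the actual populations were; the point is only that the per-village sample sizes track relative village size while still summing to the fixed total.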

The aim of this article is two-fold: to discuss the appropriateness of KAP surveys in understanding health-related knowledge, attitudes, and practices, and to describe some of the major challenges encountered in planning and conducting a KAP survey in this setting. This article is therefore divided into two main sections. The first section looks more closely at the main aspects of each element in a KAP survey, and the second concentrates on challenges encountered in the field. Throughout I will draw examples from the KAP survey I conducted as part of my PhD research on socio-cultural factors affecting treatment and prevention of malaria in pregnancy among the Yao in rural Malawi.

Main aspects of a KAP survey

Whose knowledge counts

In KAP surveys, the knowledge part is normally used only to assess the extent of community knowledge about public health concepts related to national and international public health programmes. Investigation of other types of knowledge, such as culture-specific knowledge of illness notions and explanatory models, or knowledge related to health systems, e.g. access, referral, and quality, is highly neglected (Hausmann-Muela et al. 2003). The lack of investigation of illness notions and explanatory models is probably due to the fact that community knowledge is embodied knowledge of explanatory illness models and treatment practices. It is contextualised, practice-based, and emergent in times of illness, and, therefore, very difficult to detect using KAP surveys, as pointed out by Nichter (1993). The narrow focus on knowledge can further be explained by the definition of knowledge and the agreement on whose knowledge counts. Pelto and Pelto (1997) have pointed out that public health professionals usually share the view that knowledge and beliefs are contrasting terms. They have an implicit assumption that knowledge is based on scientific facts and universal truths (that is, "knowing" biomedical information). In contrast, beliefs refer to traditional ideas, which are erroneous from the biomedical perspective, and which form obstacles to appropriate behaviour and treatment-seeking practices (see also Good 1994). This narrow definition of knowledge is also shared by international health communities. While they have recognized the role and engagement of communities in the management and prevention of diseases, such as malaria and acute respiratory infections (ARI), they still fail to recognize the value of the knowledge that the communities possess (Nichter 1993). There is, however, no specific reason why knowledge related to health systems is rarely investigated in KAP surveys.

In anthropology, knowledge and beliefs are not contrasting terms (Pelto and Pelto 1997). In my study, I considered Yao women's knowledge to include local indigenous knowledge and beliefs, and biomedical knowledge. For example, during the qualitative phase of my study I investigated the meaning of malungo (a local word used for malaria), and the results revealed that malungo is an ambiguous term with multiple meanings and definitions, which are used interchangeably to refer to many types of feverish illnesses, not just malaria. More than 10 different types of malungo were noted, but I observed that these local categories were vague, ambiguous and not shared by all members of the community. Instead categories were produced and reproduced in encounters with other community members (Launiala and Kulmala 2006). In the KAP survey I also tried to go beyond public health and biomedical knowledge by investigating types of malungo. I asked: "Are there different kinds of malungo?", and 75% (n=248) of respondents said no, meaning that there is only one type of malungo. Of the remaining 25%, who said yes and were further asked to name the different types in an open-ended question, the majority said there were two types: normal malungo and/or severe malungo. This KAP survey result showed the difficulty (and pointlessness) of asking questions related to local notions of illnesses in the format of a KAP survey.

Attitudes - can they be measured?

Measuring attitudes is the second part of a standard KAP survey questionnaire. However, many KAP studies do not present results regarding attitudes, probably because of the substantial risk of falsely generalising the opinions and attitudes of a particular group (Cleland 1973, Hausmann-Muela et al. 2003). In everyday English, the term attitude is usually used to refer to a person's general feelings about an issue, object, or person (Petty and Cacioppo 1981). Furthermore, attitudes are interlinked with the person's knowledge, beliefs, emotions, and values, and they are either positive or negative. Pelto and Pelto (1994) have also described causal attitudes or erroneous attitudes, which are considered derivatives of beliefs and/or knowledge.

The act of measuring attitudes via a survey has been criticised for many reasons. When confronted with a survey question, people tend to give answers which they believe to be correct or generally acceptable and appreciated. Sensitive topics are particularly demanding. The survey interview context may influence the answer: whether the interview is conducted at a clinic or in a village, whether there are other people present, etc. The question formulation can be manipulative towards a favourable answer. Sometimes, the respondents may be uninformed about the issue and thus find it strange, but their attitudes are nonetheless measured. On occasion, the attitude scales (numbers/verbal) may fail to reflect the respondents' answers (Cleland 1973, Hausmann-Muela et al. 2003, Pelto and Pelto 1994).

I also included a section on attitudes in the KAP questionnaire that I used, following the typical statement formulation with three response alternatives ("agree", "not sure", "disagree"). I formulated the statements based on some key findings from the qualitative phase of my study, with the purpose of obtaining a clearer picture regarding whether these findings were shared by a larger proportion of Yao women, or if they were just solitary statements by individuals. The questionnaire contained 11 statements altogether. In addition to three response alternatives, I added an open-ended question to "disagree" responses in order to gain some understanding of why respondents disagreed. Moreover, I instructed the assistants to mark down when a respondent said that she did not know the answer.

Analysis of the results raises some concerns about the possibility of measuring attitudes through a questionnaire. The high proportion of "agree" answers was eye-catching. In nine statements out of 11, less than 10% (24/248) disagreed, between 63% (157/248) and 99% (246/248) agreed, and between 2% (5/248) and 29% (72/248) were not sure. There were slightly more "agree" answers among the women who gave responses at the antenatal clinic than among the village respondents. There was only one statement to which there were more disagreeing than agreeing answers. This statement concerned the role of the maternal uncle: "You ask advice from your maternal uncle when you are severely sick." To this, only 39% (97/248) agreed.
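The distribution behind percentages like these is simple to tabulate. The responses below are invented for illustration, not the study data; the sketch merely shows how per-statement response shares are computed:

```python
from collections import Counter

def tabulate(responses):
    """Percentage distribution of responses to one attitude statement."""
    counts = Counter(responses)
    n = len(responses)
    return {option: round(100 * counts[option] / n)
            for option in ("agree", "not sure", "disagree", "don't know")}

# Made-up responses for a single statement (n=248, like the survey,
# but the proportions are illustrative only)
answers = (["agree"] * 180 + ["not sure"] * 50
           + ["disagree"] * 10 + ["don't know"] * 8)
print(tabulate(answers))
```

Tabulations like this are exactly what a KAP survey delivers easily; the harder question, taken up below, is what such percentages actually mean.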

There may be several explanations for the high proportion of agreeing answers. One explanation could be that there is indeed strong agreement and cultural homogeneity among the Yao women. However, when taking into account the Yao women's socio-cultural background, which often includes little formal (Western-style) schooling and which emphasises the value of being non-confrontational, it is also possible that the question formulation steered respondents towards favourable, "agreeing" answers. For example, 99% (246/248) agreed with the statement: "You can trust the advice and medication given by the nurses, because nurses are educated." This statement was formulated on the basis of the qualitative findings, but the problem is that it would require a lot of courage from the women to disagree with this statement even if they thought otherwise. I concur with other studies (Cleland 1973, Hausmann-Muela et al. 2003) that researchers should be very cautious regarding the interpretation of results related to attitude measurement. It is important to take into consideration the underlying contextual factors that affect the reliability of the data. One way to improve the reliability of measuring attitudes is to transform some of the attitude statements into direct questions in the other sections and to assess whether there is any discrepancy between the results.

What does a KAP survey tell us about practices?

A third and integral part of KAP surveys is the investigation of health-related practices. Questions normally concern the use of different treatment and prevention options and are hypothetical. KAP surveys have been criticised for providing only descriptive data which fails to explain why and when certain treatment and prevention practices are chosen. In other words, the surveys fail to explain the logic behind people's behaviour (Hausmann-Muela et al. 2003, Nichter 1993, Pelto and Pelto 1994, Yoder 1997). Another concern is that KAP survey data is often used to plan activities aimed at changing behaviour, based on the false assumption that there is a direct relationship between knowledge and behaviour. Several studies have, however, shown that knowledge is only one factor influencing treatment-seeking practices, and in order to change behaviour, health programmes need to address multiple factors ranging from socio-cultural to environmental, economic, and structural factors (Balshem 1993, Farmer 1997, Launiala and Honkasalo 2007).

I was aware of the limitations of KAP surveys when it came to explaining the logic behind treatment-seeking practices and the difficulty of formulating a structured question to elicit these practices, and thus I included very few questions about this subject in my KAP questionnaire. I had one question about the time elapsed between onset of symptoms and treatment, thinking that this was a relatively straightforward question. According to the results, 26% (63/240) received treatment within one to three hours, 25% (60/240) within 24 hours, 5% (13/240) within 2 days, but 41% (99/240) fell into the category "other". Those who said "other" were asked to specify their answer. The most typical explanation was either "immediately upon attack", or that they "took pills" (ranging from panado, aspirin, or fansidar to penicillin) when symptoms appeared. It seems that the respondents interpreted the meaning of the question differently. The time categories seemed not to make much sense. Some respondents wanted to emphasise that they take pills immediately when symptoms appear. The problem was, however, that these answers did not explain what pills were taken for what symptoms and why. The answer "taking medication immediately" is also questionable based on findings from the qualitative data and from other social scientific studies explaining treatment-seeking practices (for example Agyepong and Manderson 1994, Hausmann-Muela et al. 1998, Nyamongo 2002). The choice of treatment depends on the severity of the symptoms. It is common that people wait and see how the symptoms develop before deciding on the choice of treatment. Mild fever is commonly treated with panado and aspirin at home; if the fever persists, one may visit a health centre, and if the fever develops into high fever causing convulsions, relatives may seek treatment from a traditional healer.

During the past decades there has also been discussion concerning informant accuracy in reporting past events and how accurately the reporting reflects reality. According to Bernard et al. (1984:503), "on average, about half of what informants report is probably incorrect in some way", causing major concern regarding the validity of the data. All this suggests that analysis of survey research should pay more attention to the interpretation and elaboration of results. Understanding and taking account of the research context should also be a prerequisite for survey research as it is for any kind of qualitative research.

Challenges encountered in the field and some explanations for them

Despite the effort to avoid the weaknesses of survey research through careful planning, I encountered several practical challenges that require discussion (see Cleland 1973, Ratcliffe 1976, Stone and Campbell 1984; see also O'Barr et al. 1973 and Ross and Vaughan 1986). The first challenge was the translation of the questionnaire: each of my research assistants translated part of the questionnaire from English into Chiyao. Then I exchanged the translations among the assistants and asked them to translate the questionnaire back into English. This exercise showed how difficult translation is, because meanings change. For example, the Yao use the word malungo to refer to malaria, but the meaning of malungo is complex as it can be used to refer to any feverish disease, its meaning varying from body pains to fever and malaria (Launiala and Kulmala 2006). So, sometimes the research assistants translated malungo as fever, sometimes as body pains, and sometimes as malaria, complicating the formulation of the questions and the interpretation of the results.

Another experience of how the meanings of the questions can change occurred after I had returned home from the field. During data analysis I needed to double-check the translation of two questions regarding fever in pregnancy. I sent the questions (in Chiyao) back to Malawi and asked my research assistants to translate them back into English. Surprisingly (or perhaps predictably), the meaning of both questions changed from the original, making it questionable to use the results based on these questions, which also caused doubts about the validity of the other results. I came across this problem of changing meanings already during the in-depth interviews that I conducted (with a research assistant who acted as an interpreter), but due to the nature of the interview method, I was able to better confirm the concepts and meanings used during the interviews. In surveys, the control of cultural reinterpretation of questions is more difficult, because of the lack of in-depth conversation and because a number of different research assistants are often used to collect the data.

There are several explanations for these linguistic challenges. Chiyao, the language spoken in the area in which I was conducting research, is an exclusively oral language, containing concepts and words with no direct equivalents in English, and vice versa. And among the Yao, knowledge and information exchange often occurs through various social networks (Soldan 2004). Furthermore, the Yao have a specific explanatory model for malaria that is embedded in local cultural understandings, which affects their perception, knowledge, conceptualisation, treatment-seeking practices, and so on (Helitzer-Allen and Kendall 1992, Launiala and Kulmala 2006), similarly to many other ethnic groups in sub-Saharan countries (Hausmann-Muela et al. 1998, Kengeya-Kayondo et al. 1994, Nyamongo 2002, Winch et al. 1996). Researchers with western scientific training often have little knowledge, understanding and sensitivity concerning the socio-cultural context in which they conduct their studies. This cultural gap between researchers with western scientific training and local respondents may not only cause misinterpretation and cultural reinterpretation of questions, but also creates constraints for data analysis (Ratcliffe 1976, Stone and Campbell 1984). According to Stone and Campbell (1984), many surveys conducted in rural areas in the South can be faulted for failure to meet even the fundamental requirement of formulating questions in meaningful local categories that make sense to the respondents.

The authors found an even bigger problem in concepts which evoke special meanings and associations in respondents, who then give their answer based on the meaning (connotation) of the question rather than the formal content. For example, in their study in Nepal, a great number of "don't know" answers were received, because many of the respondents had interpreted the question "Have you heard of abortion?" as asking about knowledge of abortion technique or about knowledge of who had had an abortion. Thus, even when a questionnaire is designed using local concepts and the questions are formulated on the basis of culture-specific data, it is challenging to control misunderstandings, changing meanings, and cultural reinterpretations. One way to address this challenge is to loosen the time schedule (if possible) and to spend time every day going through the survey responses together with the research assistants (thus providing continuous quality checks and training).

Another problem I encountered was the difficulty of obtaining information concerning sensitive topics, e.g. use of traditional healers and witchcraft. Although I had used cultural information to formulate the questions, it was inadequate to overcome the problem. For example, concerning causes of complications during pregnancy, 72% (178/248) of the respondents knew of no cause. Only 8% (20/248) mentioned witchcraft as the cause of complications, yet during the focus group discussions and in-depth interviews, women told several stories related to e.g. miscarriage, complications, and even maternal death caused by witchcraft. There are several explanations for this. Firstly, many Malawians still feel uncomfortable expressing their negative feelings and opinions openly, and discussing sensitive issues such as traditional healers and witchcraft. According to some of my Malawian colleagues in UNICEF, this was in large part due to the oppressive era of Kamuzu Banda (1964-1994). Under Banda's rule, people lived in constant fear because there were spies everywhere, and people were known to disappear as a consequence of saying and doing the wrong things. During Banda's time, the use of traditional healers was also strictly prohibited and sanctioned. Nevertheless, most Malawians have a strong belief in witchcraft (ufiti) as an active force. Its presence in everyday life becomes explicit in the so-called ufiti discourse, used to maintain or preserve a mystical construct, to stop a certain direction of discourse, and to serve as the ultimate explanation (Englund 1996, Lwanda 2002). My experience is that a questionnaire is a poor instrument for gathering information on sensitive issues, because it does not allow for building rapport between the interviewer and the respondent.

I was also interested in reaching beyond the "yes" and "no" answers, and therefore included open-ended questions for additional explanations in my KAP questionnaire. The results were, however, rather disappointing; they contained little information and many questions were left unanswered. This weakness could be due to the fact that I unintentionally emphasised quantity over quality, as I did not put any limitation on the number of survey interviews conducted per day. So the faster the research assistants managed to complete the surveys, the sooner they received their salary to support their families. At the same time, there is always the pressure of time in the field, too. We were rushing through the survey in anticipation that the rainy season would start any day, making it difficult to reach the respondents in the villages. I too was busy conducting interviews and did not follow up the work of the research assistants every day. The skills and enthusiasm of research assistants also vary, and some are better at probing than others. All these factors may explain why many of the open-ended questions were left unanswered.

I also encountered problems caused by the research assistants' previous training with other research teams; unlearning previous ways of collecting data proved hard. There are a limited number of research assistants available in the study area, yet there have been many research teams over the years. This means that the same research assistants are employed in many studies, all having different aims and methods. The previous research teams conducting surveys had trained the assistants to probe the alternatives, but I wanted the assistants not to probe unnecessarily, and to make a note when alternatives were probed. As a result, some assistants experienced difficulties in learning to avoid unnecessary probing, and others forgot to indicate when they had probed the alternatives. As pointed out by Cleland (1973) already in the 1970s, probing or non-probing makes a difference to the results. My advice to proceed cautiously with probing the alternative answers led to a high number of "don't know" responses. An alternative explanation, however, is that women's knowledge most often concentrates on local, indigenous issues, and they had little or nothing to say when presented with questions emphasising public health and biomedical knowledge. Or the women may have been worried about giving wrong answers, or may have been afraid of answering, especially if their relatives and other community members were present, as was often the case in the villages.

Lastly, there was also the problem of courtesy bias, meaning that respondents produced answers which they believed the research assistants and health centre staff wanted to hear. Malawians are a polite people, and, disliking the idea of conflict, they rarely refuse to participate in a survey. This may cause a problem of courtesy bias, as reported in many studies criticising the use of surveys (Bhattacharyya 1997, Stone and Campbell 1984). The courtesy bias could have been further worsened by the fact that most respondents consistently assumed that this survey had something to do with the Lungwena health centre, which may have made them worry about what type of treatment they would receive if they were critical towards the services and care provided by the health centre. For example, answers to the survey questions related to the use of the antenatal clinic's services seemed to be positive, yet during the in-depth interviews women voiced their criticism of the antenatal clinic's services. The problem of courtesy bias is further compounded by the fact that local people are used to receiving money or goods in exchange for their knowledge.

Interestingly, an unpublished report from a results dissemination meeting in the present study area shows that, given an opportunity, people are willing to express their concerns and even negative opinions about surveys (TUMCHP 2005). The report revealed that the community in question was tired of participating in the many on-going research activities. Many of the community members did not even differentiate between the different studies, and lacked understanding of the aims of these studies. Also, they expected handouts from the researchers after taking part in surveys, especially after answering very long questionnaires. Furthermore, they felt that some researchers exploited them by asking intimate questions about sexual issues and, at the same time, their private lives, as such questions are considered culturally inappropriate and against their moral code.

More critical discussion about the challenges encountered in the field is needed

My experience in using a standard KAP survey questionnaire to collect data on knowledge, attitudes and practices concerning malaria in pregnancy in rural Malawi strengthens my opinion that as a method the KAP survey contains several weaknesses. Some of these weaknesses can be overcome through careful planning, pre-testing and training of research assistants. A bigger concern, however, is the appropriateness of a KAP survey to collect data, particularly on attitudes and practices, and the way the results are interpreted without contextual understanding. Ratcliffe (1976) argued already in the 1970s that uncritical reliance on a KAP survey's ability to produce accurate data combined with limited comprehension of the socio-cultural context is likely to deliver a narrow understanding of the underlying factors, or even worse, a bogus interpretation of data.

Another major problem is that many investigators use KAP surveys to explain health seeking behaviour assuming that there is a direct relationship between knowledge and action, as pointed out by Hausmann-Muela et al. (2003). I agree with other authors that a KAP survey is a poor method for obtaining information about sensitive issues, such as traditional treatment and prevention practices, and sexual behaviour (Schopper et al. 1993, Smith 1993). At the most, it can be used to assess people's knowledge about practices in general, but not about their actual day-to-day practices and the explanations behind them (Hausmann-Muela et al. 2003, Nichter 1993). I also agree with Ratcliffe (1976) and Stone and Campbell (1984), who have argued that any kind of survey questionnaire is a rather unnatural method for collecting information in a rural setting in a non-Western culture.

The name "Knowledge, Attitudes, and Practice" (KAP) survey itself gives a misleading impression that we can easily use a KAP survey to collect data on health seeking practices and that this will be useable for programme planning. Professionals working in international health often lack thorough methodological research training and thus may take the use of certain methods for granted. What is lacking in today's scientific discussion is an analytical discussion about the strengths and weaknesses in survey design and methodology, and the limitations of survey research in general. Often the data collection process is described superficially, following the standard procedures and leaving out the contextual description. Yet an open and transparent discussion is the only way to improve methods and to learn from mistakes. Presumably, despite the weaknesses of survey questionnaires, they will still be used for data collection in settings across the world. Therefore, I would argue that in addition to the open discussion of the limitations of the method, minimum prerequisites are to carefully consider what type of information can be collected with a questionnaire and to take into account the socio-cultural context both in the planning stage and when interpreting the results.

Within public health programme research there has been an increasing trend towards multiple-method designs, composed of a variety of qualitative and quantitative methods, in order to lessen the limitations of single-method designs (Bhattacharyya 1997, Stone and Campbell 1984, see also Lambert and McKevitt 2002). A multiple-method design allows the contextualisation of knowledge and makes it possible to understand the logic behind treatment-seeking practices. One advantage of combining qualitative and quantitative methods is that, if the study is appropriately designed, it increases the validity of the data. One should, however, keep in mind that any successful research outcome depends heavily on the skills of the researcher.

Conclusion: the challenges and value of interdisciplinarity

Today's health problems around the world cannot be solved by any discipline alone. The way forward is to enhance interdisciplinary cooperation between the social sciences and medicine. There are, however, challenges that need to be overcome before true interdisciplinarity can be achieved. One such challenge for anthropologists is to find a way to communicate with medical professionals and to be able to argue convincingly for what anthropology can offer (Pool and Geissler 2007). When working with UNICEF in Malawi I often encountered resistance to my "anthropological" ideas and suggestions, presumably because most of my colleagues failed to understand the relevance of the study, and because I was unable to explain my ideas convincingly using the appropriate "public health language". Napolitano and Jones (2006) have described similar problems and experiences among public health practitioners in the UK and in the Gambia, referring to their limited understanding of anthropology and its contributions, and to the existence of ethnocentric fears. Some anthropologists might wonder why we should make an effort if medical researchers are perceived as not taking any steps towards understanding anthropology. I guess it all depends on what drives us to do research. My motivation for trying to enhance collaboration between medicine and medical anthropology is based on the hope that in the end it will improve the well-being of rural Malawians.

Conducting a KAP survey in a rural African setting, and in other types of settings as well, is problematic for a number of reasons. These problems and challenges should be openly discussed in scientific publications and communicated to programme planners. A KAP survey can be useful when the aim is to obtain general information about public health knowledge of treatment and prevention practices, or about sociological variables such as income, education, occupation, and social status. It is important, however, to know and understand what type of data can be generated by which method, and to choose appropriate methods in relation to the study objectives. If the objective is to study health-seeking knowledge, attitudes, and practices in context, there are suitable ethnographic methods available, including focus group discussions, in-depth interviews, participant observation, and various participatory methods. A combination of qualitative and quantitative methods may also prove effective, but I believe that the best value can be achieved only when the research team consists of experts from both the qualitative and quantitative research traditions. Anthropologists working in international and public health should strive to find ways to enhance true interdisciplinarity.

References

Agyepong, I. and L. Manderson. 1994. The diagnosis and management of fever at household level in the Greater Accra Region, Ghana. Acta Tropica 58, 317-330.

Balshem, M. 1993. Cancer in the community: Class and medical authority. Washington, DC: Smithsonian Institution Press.

Bernard, H. R., P. Killworth, D. Kronenfeld and L. Sailer. 1984. The problem of informant accuracy: The validity of retrospective data. Annual Review of Anthropology 13, 495-517.

Bhattacharyya, K. 1997. Key informants, pile sorts, or surveys? Comparing behavioral research methods for the study of acute respiratory infections in West Bengal. In The anthropology of infectious diseases: Theory and practice on medical anthropology and international health (eds) M. C. Inhorn and P. J. Brown, 211-238. Amsterdam: Routledge Publishers.

Caldwell, J.C., P. Caldwell, and P. Quiggen. 1994. The social context of AIDS in sub-Saharan Africa. New York: Population Council.

Cleland, J. 1973. A critique of KAP studies and some suggestions for their improvement. Studies in Family Planning 4(2), 42-47.

Englund, H. 1996. Witchcraft, modernity and the person: The morality of accumulation in Central Malawi. Critique of Anthropology 16(3), 257-279.

Farmer, P. E. 1997. Social scientists and the new tuberculosis. Social Science & Medicine 44(3), 347-358.

Foster, G. M. 1987. World Health Organization behavioural science research: Problems and prospects. Social Science & Medicine 24(9), 709-717.

Gill, G. J. 1993. O.K., the data's lousy, but it's all we've got (being a critique of conventional methods). Gatekeeper Series no. 38. London: International Institute for Environment and Development (www.iied.org).

Good, B. 1994. Medicine, rationality and experience: An anthropological perspective. Cambridge: Cambridge University Press.

Green, C. E. 2001. Can qualitative research produce reliable quantitative findings? Field Methods 13(3), 3-19.

Hausmann-Muela, S., R. J. Muela and M. Tanner. 1998. Fake malaria and hidden parasites - the ambiguity of malaria. Anthropology and Medicine 5(1), 43-61.

Hausmann-Muela, S., R. J. Muela and I. Nyamongo. 2003. Health-seeking behaviour and the health system's response. DCPP Working Paper no. 14.

Helitzer-Allen, D. L. and C. Kendall. 1992. Explaining differences between qualitative and quantitative data: A study of chemoprophylaxis during pregnancy. Health Education Quarterly 19, 41-54.

Kengeya-Kayondo, J. F., J. A. Seeley, E. Kajura-Bajenja, E. Kabunga, E. Mubiru, F. Sembajja and D. W. Mulder. 1994. Recognition, treatment-seeking behaviour and perceptions of cause of malaria among rural women in Uganda. Acta Tropica 58, 255-266.

Lambert, H. and C. McKevitt. 2002. Anthropology in health research: From qualitative methods to multidisciplinarity. British Medical Journal 325, 210-213.

Launiala, A. and M-L. Honkasalo. 2007. Ethnographic study of factors influencing compliance to intermittent preventive treatment of malaria during pregnancy among Yao women in rural Malawi. Transactions of the Royal Society of Tropical Medicine and Hygiene 101(10), 980-989.

Launiala, A. and T. Kulmala. 2006. The importance of understanding the local context: Women's perceptions and knowledge concerning malaria in pregnancy in rural Malawi. Acta Tropica 98, 111-117.

Lwanda, J. 2002. Tikutha: the political culture of the HIV/AIDS epidemic in Malawi. In A democracy of chameleons: Politics and culture in the new Malawi (ed.) H. Englund, 151-165. Blantyre: Christian Literature Association in Malawi.

Manderson, L. and P. Aaby. 1992. An epidemic in the field? Rapid assessment procedures and health research. Social Science & Medicine 35(7), 839-50.

Napolitano, D. A. and C. O. H. Jones. 2006. Who needs "pukka anthropologists"? A study of the perceptions of the use of anthropology in tropical public health research. Tropical Medicine & International Health 11(8), 1264-1275.

Nichter, M. 1993. Social science lessons from diarrhea research and their application to ARI. Human Organization 52(1), 53-67.

---------. 2008. Global health: Why cultural perceptions, social representations, and biopolitics matter. Tucson: University of Arizona Press.

Nyamongo, I. K. 2002. Health care switching behaviour of patients in a Kenyan rural community. Social Science & Medicine 54(3), 377-386.

O'Barr, W., D. Spain and M. Tessler. 1973. Survey research in Africa: Its applications and limits. Evanston, IL: Northwestern University Press.

Pelto, J. P., and G. H. Pelto. 1997. Studying knowledge, culture, and behavior in applied medical anthropology. Medical Anthropology Quarterly 11(2), 147-163.

Petty, R. E. and J. P. Cacioppo. 1981. Attitudes and persuasion: Classic and contemporary approaches. Dubuque, IA: W. C. Brown Co. Publishers.

Pool, R. and W. Geissler. 2007. Medical anthropology: Understanding public health. Berkshire: Open University Press.

Ratcliffe, J. W. 1976. Analyst biases in KAP surveys: A cross-cultural comparison. Studies in Family Planning 7(11), 322-330.

Ross, D. A., and J. P. Vaughan. 1986. Health interview surveys in developing countries: A methodological review. Studies in Family Planning 17(2), 78-94.

Schopper, D., S. Doussantousse and J. Orav. 1993. Sexual behaviors relevant to HIV transmission in a rural African population: How much can a KAP survey tell us? Social Science & Medicine 37(3), 401-412.

Soldan, V. A. P. 2004. How family planning ideas are spread within social groups in rural Malawi. Studies in Family Planning 35(4), 275-290.

Smith, H. L. 1993. On the limited utility of KAP-style survey data in the practical epidemiology of AIDS, with reference to the AIDS epidemic in Chile. Health Transition Review 3(1), 1-15.

Stone, L. and J. G. Campbell. 1984. The use and misuse of surveys in international development: An experiment from Nepal. Human Organization 43(1), 27-34.

TUMCHP. 2005. Unpublished meeting report, distributed via email 13 July 2005. Department of International Health, University of Tampere Medical School, Finland.

Winch, P. J., A. M. Makemba, S. R. Kamazima, M. Lurie, G. K. Lwihula, Z. Premji, J. N. Minjas and C. J. Shiff. 1996. Local terminology for febrile illnesses in Bagamoyo District, Tanzania and its impact on the design of a community-based malaria control programme. Social Science & Medicine 42(7), 1057-1067.

Yoder, P. S. 1997. Negotiating relevance: Beliefs, knowledge and practice in international health projects. Medical Anthropology Quarterly 11(2), 131-146.

About the author

Annika Launiala holds an MA in Cultural Anthropology and is currently working as a project manager on a multidisciplinary project called "Multicultural aspects of Health" at the School of Public Health and Clinical Nutrition, University of Kuopio, Finland. She is also working on her PhD research concerning socio-cultural factors affecting treatment and prevention of malaria in pregnancy in rural Malawi. She can be contacted at annika.launiala(AT)uta.fi or at annika.launiala(AT)uku.fi.



Anthropology Matters Journal ISSN: 1758-6453 Publisher: Anthropology Matters url: www.anthropologymatters.com