Multi-criteria decision analysis of breast cancer control in low- and middle-income countries: development of a rating tool for policy makers

Abstract

Background

The objective of this study was to develop a rating tool for policy makers to prioritize breast cancer interventions in low- and middle-income countries (LMICs), based on a simple multi-criteria decision analysis (MCDA) approach. The definition and identification of criteria play a key role in MCDA, and our rating tool could be used as part of a broader priority setting exercise in a local setting. This tool may contribute to a more transparent priority-setting process and fairer decision-making in future breast cancer policy development.

Methods

First, an expert panel (n = 5) discussed key considerations for tool development. A literature review followed to inventory all relevant criteria and construct an initial set of criteria. A Delphi study was then performed, using questionnaires to arrive at a final list of criteria with clear definitions and potential scoring scales. For this Delphi study, multiple breast cancer policy and priority-setting experts from different LMICs were selected and invited by the World Health Organization. Fifteen international experts participated in all three Delphi rounds to assess and evaluate each criterion.

Results

This study resulted in a preliminary rating tool for assessing breast cancer interventions in LMICs. The tool consists of 10 carefully crafted criteria (effectiveness, quality of the evidence, magnitude of individual health impact, acceptability, cost-effectiveness, technical complexity, affordability, safety, geographical coverage, and accessibility), with clear definitions and potential scoring scales.

Conclusions

This study describes the development of a rating tool to assess breast cancer interventions in LMICs. Our tool can offer supporting knowledge for the use or development of rating tools as part of a broader (MCDA based) priority setting exercise in local settings. Further steps for improving the tool are proposed and should lead to its useful adoption in LMICs.

Background

As the second most common cancer in the world and the most common cancer in women, breast cancer is an important health problem globally [1]. Although it was originally considered a disease of the developed world, low- and middle-income countries (LMICs) are experiencing large increases in incidence [2]. Mortality-to-incidence ratios remain relatively high in these areas [3], possibly due to relatively poor breast cancer control strategies (e.g. awareness raising, early detection, treatment) and differences in cultural beliefs [2, 4]. Because strong early detection programs are beneficial, the World Health Organization (WHO) seeks to improve appropriate breast cancer control programs in LMICs.

Cost-effectiveness analyses (CEAs), based on the maximization of health benefits, have often been used for the selection of breast cancer control strategies. To provide an evidence base for the cost-effectiveness of breast cancer interventions in LMICs, a consortium of the WHO, Erasmus University Rotterdam, the Radboud University Nijmegen Medical Center, and the Susan G. Komen for the Cure Foundation initiated an international study in 2010 [5]. Such CEAs may help governments decide how to spend scarce health care resources more efficiently. However, decision-makers often deviate from CEA results because other principles, such as equal treatment and priority to the worst-off [6–8], and other factors, like feasibility or acceptability, influence decisions as well [9–11]. Ignoring these criteria may lead to implementation problems or inequality among certain patient groups [12–14].

Multi-criteria decision analysis (MCDA) is a well-accepted framework that can simultaneously assess multiple criteria for priority setting of interventions [15]. Different approaches to MCDA have been proposed, but all contain at least the following elements: 1) selection of relevant interventions; 2) selection of criteria for priority setting; 3) collection of evidence and rating of the performance of interventions on the selected criteria; 4) deliberation on the evidence and the performance of interventions, with the aim of selecting the best interventions for implementation [16–19].

Several studies have shown the potential of MCDA in prioritizing health interventions; however, it has not yet been applied to the selection of breast cancer control interventions [20–23]. Recently, MCDA has been criticized for being technocratic and conceptually challenging for local decision makers [24]. Therefore, the development of a tool to support local policy makers in selecting criteria and in rating the performance of interventions on these criteria is warranted.

The objective of this study is to develop a rating tool to assess breast cancer interventions along the continuum of care, within the context of the overarching breast cancer CEA project [5]. The rating tool will be composed of criteria, criteria definitions, criteria weights and rating scales to measure the overall impact of breast cancer interventions and to support the priority setting process. Such a rating tool can be used in a broader, MCDA-based, priority setting process to develop cancer control strategies in a local setting.

Methods

To develop the rating tool we established an expert panel (n = 5) of breast cancer and priority-setting experts from WHO and the Radboud University Nijmegen Medical Centre. The expert panel consisted of two health economists, a scientific researcher, a public health specialist and a student in health technology assessment. Three of the experts are co-authors of this article (KV, SZ and JL). A detailed overview of the considerations made by the expert panel in the development of the rating tool is provided as additional information (Additional file 1). Below we describe the most important steps that were taken to develop the tool.

A literature search using PubMed and Google Scholar was performed to identify a first set of predefined criteria. Different combinations of the terms ‘criteria’, ‘values’, ‘factors’, ‘priority setting’, ‘decision making’, and ‘policy making’ were used as the query. The expert panel discussed the resulting list in order to avoid overlap among the criteria. For the remaining criteria, clear definitions were formulated with the help of glossaries and documents published by the WHO [25–27].

To develop the scoring scales, another literature study was performed for each criterion on this predefined list. When little or no information was available, scoring scales were based mainly on discussions with the expert panel.

The Delphi study

The list of predefined criteria and scoring scales was further refined based on the opinions of experts from all over the world. A Delphi study was chosen because of the anonymity of participants, the opportunity to include participants globally, and the time and money available to conduct the study [28]. Delphi studies have proven appropriate for establishing a core list of evaluation criteria [29].

Participants

Experts were selected following WHO selection criteria, which include a balanced geographical and gender representation, expertise in the technical area (particularly in LMICs), and the absence of any relevant interest declared in the personal declaration of interest form. Twenty-nine experts with expertise in priority setting or breast cancer policies in LMICs were involved, ensuring methodological as well as substantive quality. Experts were identified by approaching authors of relevant articles and by snowball sampling. Among the experts were epidemiologists, cancer survivors, pathologists, guideline developers, public health specialists, radiotherapists, surgeons, researchers, managers, strategists and ethicists.

First round

In this round, the list of criteria based on the literature study was presented to the participants. The participants were asked to score the criteria on five-point Likert scales, according to whether they agreed that interventions scoring high on a criterion should be given higher priority (1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree). The experts could comment on the list and indicate whether important criteria were missing. In addition, the definitions and scoring scales of the criteria were presented, and participants were asked to provide comments. Likert scales were chosen for this first round because they are regarded as acceptable for a Delphi study and are simple and easy to administer [30].

Second round

The second round presented the scores and comments given in the first round, together with the adaptations to which they had led. Participants were then asked whether they agreed with the adaptations and to clarify their answers.

Third round

Based on the proportion of participants agreeing with the adaptations made after the first round and on the comments provided, some final changes were made to the criteria list. These final criteria, with their definitions and scoring scales, were shown to the participants, who were asked whether they agreed that this final list contained the most relevant criteria for the prioritization of breast cancer interventions. Furthermore, participants were asked to distribute 100 points across the criteria according to their relative importance for the evaluation of breast cancer interventions.

The analysis

The analysis of the answers was both quantitative and qualitative. After the first Delphi round, mean and median Likert scores for the importance of each criterion were calculated. The second round yielded the percentage of participants who agreed with the suggested adaptations. After the third round, the mean and median weights assigned to the criteria were calculated. All participant comments were analyzed qualitatively.
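
To make the quantitative part of this analysis concrete, the short Python sketch below summarises hypothetical round-one Likert ratings (mean and median per criterion) and round-three 100-point allocations (mean weights, rescaled to sum to one). The criterion names and values are invented for illustration; this is not the analysis code used in the study.

# Minimal sketch of the quantitative Delphi analysis (illustrative values only).
import statistics

# Hypothetical round-1 Likert ratings (1-5), one value per expert.
likert = {
    "Effectiveness": [5, 5, 4, 5, 4],
    "Affordability": [4, 3, 5, 4, 4],
    "Safety": [3, 4, 4, 5, 3],
}

# Hypothetical round-3 allocations: each expert distributes 100 points.
allocations = [
    {"Effectiveness": 40, "Affordability": 35, "Safety": 25},
    {"Effectiveness": 50, "Affordability": 20, "Safety": 30},
]

# Round 1: mean and median Likert score per criterion.
for criterion, scores in likert.items():
    print(criterion, statistics.mean(scores), statistics.median(scores))

# Round 3: mean points per criterion, rescaled to relative weights summing to ~1.
mean_points = {c: statistics.mean(a[c] for a in allocations) for c in likert}
total = sum(mean_points.values())
weights = {c: round(p / total, 2) for c, p in mean_points.items()}
print(weights)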

Results

The literature search resulted in a total of 33 criteria (Figure 1). After the expert panel discussed these criteria, nine remained for the Delphi study. Two of them, effectiveness and feasibility, were divided into three and four subcomponents, respectively. For each of these nine criteria and their subcomponents, a definition and a potential scoring scale were formulated.

Figure 1. Overview of the development of the criteria list.

Participants

Out of 72 experts who were invited, 29 were willing to participate: 17 experts on priority setting and 12 experts on breast cancer policies. The first questionnaire was completed by 23 participants, the second by 19, and the third by 15. Reasons given for not completing a questionnaire were private circumstances and disagreement with the aim of this research (n = 1); most participants, however, gave no reason.

First round

Based on the results of the first round, two criteria (‘Severity of breast cancer’ and ‘Age’) and one component (‘The time until the effect emerges’) were suggested for removal; two components were suggested to be combined with two criteria; and all other components were suggested to be separated into distinct criteria. In addition, a new criterion was suggested (‘Political will’), two definitions were refined, and four scoring scales were adapted. These adaptations led to a list of 10 criteria. For all criteria except ‘Effectiveness’, there was divergence in the Likert scale scores. The mean and median Likert values and the most important comments on the criteria are shown in Table 1.

Table 1 Initial criteria including Likert scores and important comments given in the Delphi study

Second round

Based on the results of the second round, the new suggested criterion (‘Political will’) was removed again because participants argued that political will would also depend on the results of interventions on the other criteria; political will changes too often; and MCDA aims at a more fair priority-setting process while political will might even be clearly unfair. Two criteria (‘Equal access’ and ‘Acceptability’) were separated into two different criteria (‘Geographical coverage’ and ‘Accessibility’; ‘Acceptability’ and ‘Safety’); two criteria were combined (‘Catastrophic health expenditures’ and ‘Acceptability’); and some small refinements to most of the definitions and scoring scales were made. An overview of the changes made in the criteria list is shown in Figure 1. The second round resulted in a final list of 10 criteria (Table 2).

Table 2 Final criteria list for the prioritization of breast cancer interventions including weights

Third round

All participants agreed that the list resulting from the second round covered the most relevant criteria for the prioritization of breast cancer interventions. Three participants mentioned, however, that some criteria might still overlap. As one participant noted: “Doing the relative weighting exercise above, I realized that some criteria are overlapping and it was difficult to assess independent relative weights to them; for example, ‘effectiveness’ and ‘quality of the evidence’ are inseparable whereas we would not perhaps say something is effective if the quality of the evidence is weak”. There were also doubts about whether ‘cost-effectiveness’ was already covered by ‘affordability’ and ‘effectiveness’, whether ‘safety’ was covered by ‘effectiveness’, and whether ‘geographical coverage’ was covered by ‘effectiveness’. The criterion ‘geographical coverage’ was rated lowest in importance for the evaluation of breast cancer interventions, followed by ‘safety’ and ‘affordability’. The criterion ‘Effectiveness’ was rated most important (Table 2).

Discussion

This study describes the development of a rating tool to measure the impact of breast cancer interventions in LMICs based on multiple criteria. Ten criteria, including definitions and potential scoring scales, have been identified. The results of this study show that effectiveness, quality of the evidence, magnitude of individual health impact, acceptability, cost-effectiveness, technical complexity, affordability, safety, geographical coverage, and accessibility appear to be important principles in the selection of breast cancer control strategies. Although selecting and defining interventions and criteria for breast cancer control is context specific, we think this rating tool can be a starting point for local policy makers as part of a broader, MCDA-based, priority setting process.

Use of the tool in an LMIC

The tool could be used as part of the integrated MCDA and accountability for reasonableness (A4R) approach for priority setting recently proposed by Baltussen et al. [16] (Figure 2). This approach combines strong components of both frameworks and requires the set-up of a multi-stakeholder consultation panel (step one). In this way, a democratic learning process is started in which stakeholders are involved in all steps of the priority setting. Compared to the stand-alone MCDA framework, this combined approach may increase the acceptance of decisions among stakeholders and the likelihood that prioritized programs are implemented. The rating tool can be part of steps two and three of the approach (Figure 2), which aim to identify criteria for priority setting and to assess (i.e. rate) the performance of interventions on the selected criteria.

Figure 2. Elements of a priority setting process based on MCDA [16].

An important next step in the local use of the rating tool is to investigate, in a pilot study, how the tool and its components are understood in LMICs. Users of the tool could, for example, select relevant stakeholders (e.g. patients, lay people, policy makers, caregivers, public health specialists) and establish a consultation panel (step 1). These stakeholders could discuss the interventions, the criteria, the attitude of decision-makers toward the criteria, and the scoring scales using democratic elements (e.g. the Nominal Group Technique) (step 2). After collecting all relevant (local) evidence, the users could use our tool as an input for a performance matrix (step 3), and then interpret and deliberate on the results of this matrix (steps 4 and 5). Users should be well informed and plan enough time for this process, and should try to ensure that the tool is perceived as a simple and legitimate way to frame policy discussions in a more rapid and balanced manner. The potential of this tool could also be investigated for other cancers.
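
To illustrate what a performance matrix (step 3) could look like in practice, the sketch below rates a few hypothetical interventions on a handful of criteria and combines the ratings with stakeholder weights using simple additive weighting. The interventions, scores and weights are invented, and the weighted sum is only one possible way to summarise the matrix ahead of deliberation; the tool itself stops at rating and does not prescribe an aggregation rule.

# Sketch of a performance matrix with simple additive weighting (SAW).
# All interventions, scores and weights are hypothetical.

# Criterion weights as elicited from stakeholders (sum to 1.0).
weights = {"Effectiveness": 0.35, "Cost-effectiveness": 0.25,
           "Affordability": 0.20, "Acceptability": 0.20}

# Performance matrix: each intervention rated 1-5 on every criterion
# using the tool's scoring scales (higher = better).
performance = {
    "Awareness raising": {"Effectiveness": 2, "Cost-effectiveness": 4,
                          "Affordability": 5, "Acceptability": 5},
    "Clinical breast examination": {"Effectiveness": 3, "Cost-effectiveness": 4,
                                    "Affordability": 4, "Acceptability": 4},
    "Mammography screening": {"Effectiveness": 4, "Cost-effectiveness": 2,
                              "Affordability": 2, "Acceptability": 3},
}

def weighted_score(scores, w):
    """Rescale 1-5 ratings to 0-1 and combine them with the criterion weights."""
    return sum(w[c] * (scores[c] - 1) / 4 for c in w)

for name in sorted(performance, key=lambda i: weighted_score(performance[i], weights),
                   reverse=True):
    print(f"{name}: {weighted_score(performance[name], weights):.2f}")

The resulting scores are only a starting point for deliberation (steps 4 and 5); stakeholders may still overrule them for reasons not captured by the criteria.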

Limitations of the study

Our study has a number of limitations. First, prior to the Delphi study, the expert panel selected 9 criteria out of the initial 33. This selection was based on overlap between criteria and on whether criteria were appropriate for the selection of breast cancer interventions. However, we cannot be certain that personal preferences did not bias this selection.

Second, we used a Delphi study to define a core list of criteria, including definitions and scoring scales. The Delphi method ensures participant anonymity and provides enough time to properly consider one’s own answers and those of others. However, the Delphi questionnaire may not allow adequate elaboration on difficult concepts such as equity and social welfare. Delphi questionnaires can also be relatively time consuming, which may partly explain why 14 participants withdrew from this study. We do not expect these withdrawals to have biased the results, because the withdrawing participants varied in gender and type of expertise and the number of comments remained high in each questionnaire.

Third, the wide variety of comments and views of the participants made us aware of the difficulties of developing a clear, consensus-based, non-overlapping criteria list and scoring scales. There are many possible compositions and definitions of criteria [31–33]. In addition, there are many ways to divide a scoring scale into categories, and the appropriate division also depends on the variability of the interventions under consideration (i.e. the discriminative power of the scoring scale). Further research should focus on more informed, contextualized categories for scoring scales.

The difficulty of avoiding overlap between criteria may be explained by the lack of a broader theory on the relationships between criteria. Some disagreement between participants remained until the end of the process, and some overlap was still suspected in the final criteria list. These potential overlaps will need attention in the further development of this tool, because criteria should preferably be independent of each other [15, 42]. Effectiveness in particular risks overlapping with other criteria, such as cost-effectiveness, safety and geographical coverage. Further overlap between criteria should be identified, and the distinctions should be clearly described in the definitions.

Limitations of the tool

The tool also has some practical limitations that users should be aware of. First, the tool does not provide guidance on converting the performance matrix into a final prioritization of interventions. The tool stops at rating interventions, after which a choice should be made through a democratic learning process (Figure 2). Because the tool itself does not facilitate such a learning process, highly rated interventions are less likely to be implemented. The accountability for reasonableness (A4R) framework has been successful in introducing such a learning process [43]. We therefore recommend making the tool part of the integrated MCDA-A4R approach for priority setting in health proposed by Baltussen et al. [16]; however, local capacity should be present or established to facilitate such a complete process.

Second, the proposed rating tool is based on decision-maker values and preferences, while the views of other stakeholder groups are also important in priority setting exercises. Different stakeholder groups are likely to have different preferences for criteria [22, 44]. This limitation could be addressed when applying the tool in a local setting: other stakeholder groups (such as patients, the public, and health care workers) can be asked to comment on the relevance and relative importance of the criteria included in the tool, and the tool can be adapted accordingly.

Third, there are limitations to the collection of information, and it may sometimes be difficult to assess interventions on certain criteria. This, however, is a problem across the field of health priority setting, and we recommend being transparent about the available evidence and its quality. A sensitivity analysis may help give insight into the uncertainty of the performance scores of intervention options. In this way, quality of evidence is not used as a single criterion but as an uncertainty factor per criterion per intervention [45].
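
A minimal sketch of such a sensitivity analysis is given below, reusing the kind of hypothetical matrix and weights shown earlier: each score is perturbed within an uncertainty range that reflects the quality of its underlying evidence, and the stability of the resulting ranking is reported. The uncertainty ranges and the Monte Carlo approach are illustrative assumptions, not part of the tool.

# Sketch of a Monte Carlo sensitivity analysis on a performance matrix.
# Scores backed by weaker evidence are perturbed more strongly; the share of
# draws in which each intervention ranks first indicates rank stability.
import random

weights = {"Effectiveness": 0.5, "Affordability": 0.3, "Safety": 0.2}
performance = {
    "Intervention A": {"Effectiveness": 4, "Affordability": 2, "Safety": 4},
    "Intervention B": {"Effectiveness": 3, "Affordability": 4, "Safety": 4},
}
# Hypothetical per-criterion uncertainty (in score points): wider = weaker evidence.
uncertainty = {"Effectiveness": 1.0, "Affordability": 0.5, "Safety": 0.25}

def draw(score, spread):
    """Sample a perturbed score, clipped to the 1-5 scoring scale."""
    return min(5.0, max(1.0, random.uniform(score - spread, score + spread)))

wins = {name: 0 for name in performance}
n_draws = 10_000
for _ in range(n_draws):
    totals = {name: sum(weights[c] * draw(scores[c], uncertainty[c]) for c in weights)
              for name, scores in performance.items()}
    wins[max(totals, key=totals.get)] += 1

for name, count in wins.items():
    print(f"{name} ranks first in {count / n_draws:.0%} of simulated draws")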

Conclusion

This study describes the development of a rating tool to assess the impact of breast cancer interventions on multiple criteria. The tool may be a starting point for local decision makers who would like to conduct a multi-criteria decision analysis to set priorities for breast cancer control.

References

1. Ferlay J, Shin HR, Bray F, Forman D, Mathers C, Parkin DM: Estimates of worldwide burden of cancer in 2008: GLOBOCAN 2008. Int J Cancer 2010, 127(12):2893–2917.
2. Porter P: “Westernizing” women’s risks? Breast cancer in lower-income countries. N Engl J Med 2008, 358(3):213–216.
3. World Health Organization: The Global Burden of Disease: 2004 Update. Geneva: World Health Organization; 2008.
4. Kamangar F, Dores GM, Anderson WF: Patterns of cancer incidence, mortality, and prevalence across five continents: defining priorities to reduce cancer disparities in different geographic regions of the world. J Clin Oncol 2006, 24(14):2137–2150.
5. World Health Organization: Breast cancer: prevention and control. [http://www.who.int/cancer/detection/breastcancer/en/]
6. Goddard M, Hauck K, Smith PC: Priority setting in health - a political economy perspective. Health Econ Policy Law 2006, 1:79–90.
7. Muir Gray JA: Evidence based policy making: is about taking decisions based on evidence and the needs and values of the population. BMJ 2004, 329(7473):988–989.
8. Persad G, Wertheimer A, Emanuel EJ: Principles for allocation of scarce medical interventions. Lancet 2009, 373(9661):423–431.
9. Evans DB, Lim SS, Adam T, Edejer TT: Evaluation of current strategies and future priorities for improving health in developing countries. BMJ 2005, 331(7530):1457–1461.
10. Laxminarayan R, Mills AJ, Breman JG, Measham AR, Alleyne G, Claeson M, Jha P, Musgrove P, Chow J, Shahid-Salles S, Jamison DT: Advancement of global health: key messages from the Disease Control Priorities Project. Lancet 2006, 367(9517):1193–1208.
11. Oliver A, Mossialos E, Robinson R: Health technology assessment and its influence on health-care priority setting. Int J Technol Assess Health Care 2004, 20(1):1–10.
12. Anderson BO, Yip CH, Ramsey SD, Bengoa R, Braun S, Fitch M, Groot M, Sancho-Garnier H, Tsu VD: Breast cancer in limited-resource countries: health care systems and public policy. Breast J 2006, 12(Suppl 1):S54–S69.
13. Breen N, Kessler LG, Brown ML: Breast cancer control among the underserved–an overview. Breast Cancer Res Treat 1996, 40(1):105–115.
14. Reichenbach L: The politics of priority setting for reproductive health: breast and cervical cancer in Ghana. Reprod Health Matters 2002, 10(20):47–58.
15. Baltussen R, Niessen L: Priority setting of health interventions: the need for multi-criteria decision analysis. Cost Eff Resour Alloc 2006, 4:14.
16. Baltussen R, Mikkelsen E, Tromp N, Hurtig A, Byskov J, Olsen O, Bærøe K, Hontelez JA, Singh J, Norheim OF: Balancing efficiency, equity and feasibility of HIV treatment in South Africa - development of programmatic guidance. Cost Eff Resour Alloc 2013, 11(1):26.
17. Devlin NJ, Sussex J: Incorporating Multiple Criteria in HTA: Methods and Processes. London: Office of Health Economics; 2011.
18. Dodgson JS, Spackman M, Pearman A, Phillips LD: Multi-Criteria Analysis: A Manual. London: Department for Communities and Local Government; 2009.
19. Dolan JG: Multi-criteria clinical decision support: a primer on the use of multiple criteria decision making methods to promote evidence-based, patient-centered healthcare. Patient 2010, 3(4):229–248.
20. Baltussen R, Stolk E, Chisholm D, Aikins M: Towards a multi-criteria approach for priority setting: an application to Ghana. Health Econ 2006, 15(7):689–696.
21. Baltussen R, Youngkong S, Paolucci F, Niessen L: Multi-criteria decision analysis to prioritize health interventions: capitalizing on first experiences. Health Policy 2010, 96(3):262–264.
22. Youngkong S, Baltussen R, Tantivess S, Koolman X, Teerawattananon Y: Criteria for priority setting of HIV/AIDS interventions in Thailand: a discrete choice experiment. BMC Health Serv Res 2010, 10:197.
23. Youngkong S, Baltussen R, Tantivess S, Mohara A, Teerawattananon Y: Multicriteria decision analysis for including health interventions in the universal health coverage benefit package in Thailand. Value Health 2012, 15(6):961–970.
24. Hipgrave DB, Alderman KB, Anderson I, Soto EJ: Health sector priority setting at meso-level in lower and middle income countries: lessons learned, available options and suggested steps. Soc Sci Med 2014, 102:190–200.
25. World Health Organization: Health systems strengthening glossary. [http://www.who.int/healthsystems/hss_glossary/en/index5.html]
26. World Health Organization: Global Status Report on Noncommunicable Diseases. Geneva: World Health Organization; 2010.
27. World Health Organization: IPCS Risk Assessment Terminology. Geneva: World Health Organization; 2004.
28. Hsu CC, Sandford BA: The Delphi technique: making sense of consensus. Pract Assessment Res Eval 2007, 12(10). [http://pareonline.net/getvn.asp?v=12&n=10]
29. Verhagen AP, de Vet HC, de Bie RA, Kessels AG, Boers M, Bouter LM, Knipschild PG: The Delphi list: a criteria list for quality assessment of randomized clinical trials for conducting systematic reviews developed by Delphi consensus. J Clin Epidemiol 1998, 51(12):1235–1241.
30. Scheibe M, Skutsch M, Schofer J: Experiments in Delphi methodology. In The Delphi Method: Techniques and Applications. Boston: Addison-Wesley Publishing Company; 2002.
31. Goetghebeur MM, Wagner M, Khoury H, Levitt RJ, Erickson LJ, Rindress D: Evidence and value: impact on decision making–the EVIDEM framework and potential applications. BMC Health Serv Res 2008, 8:270.
32. Baltussen R, Norheim OF, Johri M: Fairness in service choice: an important yet underdeveloped path to universal coverage. Trop Med Int Health 2011, 16(7):838–839.
33. NHS Southwark: Policy on Prioritisation for Investment and Disinvestment in Health Services. London: NHS Southwark CCG; 2010.
34. Hanvoravongchai P: Health system and equity perspectives in health technology assessment. J Med Assoc Thai 2008, 91(Suppl 2):S74–S87.
35. The NHS Confederation: Priority Setting: An Overview. London: NHS Confederation; 2007.
36. Zelle SG, Nyarko KM, Bosu WK, Aikins M, Niëns LM, Lauer JA, Sepulveda CR, Hontelez JA, Baltussen R: Costs, effects and cost-effectiveness of breast cancer control in Ghana. Trop Med Int Health 2012, 17(8):1031–1043.
37. National Institute for Health and Clinical Excellence: Social Value Judgements: Principles for the Development of NICE Guidance. London: NICE; 2005.
38. The NHS Confederation: Priority Setting: Strategic Planning. London: NHS Confederation; 2008.
39. Balshem H, Helfand M, Schünemann HJ, Oxman AD, Kunz R, Brozek J, Vist GE, Falck-Ytter Y, Meerpohl J, Norris S, Guyatt GH: GRADE guidelines: 3. Rating the quality of evidence. J Clin Epidemiol 2011, 64(4):401–406.
40. Bowen DJ, Kreuter M, Spring B, Cofta-Woerpel L, Linnan L, Weiner D, Bakken S, Kaplan CP, Squiers L, Fabrizio C, Fernandez M: How we design feasibility studies. Am J Prev Med 2009, 36(5):452–457.
41. World Health Organization: WHO-CHOICE. [http://www.who.int/choice/en/]
42. Peacock S, Mitton C, Bate A, McCoy B, Donaldson C: Overcoming barriers to priority setting using interdisciplinary methods. Health Policy 2009, 92(2–3):124–132.
43. Daniels N: Accountability for reasonableness. BMJ 2000, 321(7272):1300–1301.
44. Kapiriri L, Norheim OF: Criteria for priority-setting in health care in Uganda: exploration of stakeholders’ values. Bull World Health Organ 2004, 82(3):172–179.
45. Tromp N, Baltussen R: Mapping of multiple criteria for priority setting of health interventions: an aid for decision makers. BMC Health Serv Res 2012, 12:454.


Acknowledgements

Funding from Susan G. Komen for the Cure made this research and the collaboration with the World Health Organization, Radboud University Nijmegen, and international experts possible. The authors gratefully acknowledge Benjamin Anderson, Yukiko Asada, Baffour Awuah, Kristine Bærøe, Jaime Verdugo Bosch, Jens Byskov, Francois Dionne, Hellen Gelband, David Hadorn, Jeffrey Hoch, Samia Hurst, Lydia Kapiriri, Sharon Kletchko, Neeta Kumar, Sandra Leggat, Janet Martin, Olga Georgina Martinez, Louis Niessen, Ole Norheim, Anggie Ramires, Rengaswamy Sankaranarayanan, Mario Tristán, Juan Carlos Vazquez, and Cheng Har Yip (international experts on breast cancer or on priority setting) for their valuable participation in the Delphi study.

We also thank Daniel H. Chisholm for his input on key considerations for the development of the tool and our discussions about the initial criteria list, Cecilia Sepulveda, who provided supervision in the context of the WHO internship programme, and Rob Baltussen, who provided overall supervision in the design and coordination of the study.

Author information

Corresponding author

Correspondence to Kristie Venhorst.

Additional information

Competing interests

The authors declare that they have no competing interests. The views expressed in this paper are those of the authors, and the funding organization has had no influence on them.

Authors’ contributions

KV and SZ made substantial contributions to the conception and design of the study, acquisition of data, and analysis and interpretation of data. They also participated in the expert panel and drafted the manuscript. JL contributed to the design and methodology of the study and participated in the expert panel. NT assisted in the design of the study and has been involved in revising the manuscript critically for important intellectual content. All authors have read and approved the final manuscript.

Electronic supplementary material


Additional file 1: Key considerations for the development of the rating tool. In this additional file we elaborate on the key considerations made in the development of the rating tool. (PDF 103 KB)


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Cite this article

Venhorst, K., Zelle, S.G., Tromp, N. et al. Multi-criteria decision analysis of breast cancer control in low- and middle-income countries: development of a rating tool for policy makers. Cost Eff Resour Alloc 12, 13 (2014). https://doi.org/10.1186/1478-7547-12-13
