Ethics Recommendations on Big Data in Insurance

The Consolidation Report outlining the results of the NRP 75 project is available.

Since the inception of the insurance industry, accurate and relevant data have been crucial for risk-based calculations. Insurance companies thus show a keen interest in the fast-growing possibilities of generating, accessing and sharing multidimensional data emerging from all spheres of life. Big Data influences the interrelation between solidarity and risk fairness in various ways. The power of personalisation provided by Big Data applications may entail discrimination and put values such as privacy, fairness or solidarity at risk. However, the same applications may also be used to prevent individual and societal damage, and may thus not only increase the profitability of the industry but also enhance the public good.

A Consolidation Report provides the condensed results of the research project “Between Solidarity and Personalisation – Dealing with Ethical and Legal Big Data Challenges in the Insurance Industry”. Over a period of 30 months, an interdisciplinary team of researchers from the University of Zurich (UZH) and the University of Applied Sciences of the Grisons (FHGR) – in collaboration with experts from Swiss Re – investigated the ethical, legal and societal aspects of using Big Data in private insurance. Based on these findings, the team has formulated recommendations that emerge directly from the research.

Main Findings – media analysis

The study provides a systematic analysis of the frames (interpretive patterns) present in the debate on Big Data, based on a quantitative content analysis of articles published in Swiss (N=251) and U.S. (N=258) newspapers between 2011 and 2018. In total, five dominant frames were identified. One focuses on the critical aspects of Big Data (abuse of data), whereas the other four emphasise its positive aspects (advances in research, medicine and business models; product innovation; process improvement; marketing optimisation). Compared to the U.S., the critical element is somewhat stronger in Switzerland and the debate reached its peak intensity later. This indicates that the overall public discourse on Big Data is rather opportunity-oriented while still acknowledging the risks of the applications.

Main Findings – legal analysis

The analysis of Swiss law demonstrated that insurance law does not limit the personalisation of private insurance contracts. The leeway for personalisation is also hardly limited by anti-discrimination law, at least as long as individual offers are based on a state-of-the-art risk assessment. The comparative law analysis focused on insurance, anti-discrimination and data protection laws in Switzerland and the U.S./California, as they vary greatly in their regulatory approach. Whereas private insurance law in Switzerland is dominated by the principle of freedom of contract, this sector is densely regulated in California, where rates are subject to prior approval by the California Insurance Commissioner. In addition, U.S. law puts a much stronger emphasis on anti-discrimination, whereas data protection law is both more comprehensive and more restrictive in Switzerland. Consequently, the leeway for personalisation is much greater in Switzerland than in the U.S./California – but only if the requirements of data protection law are met. This body of law, however, is designed to protect privacy and to give individuals a fair amount of control over the collection and use of their personal data. This indicates that data protection law is not the suitable body of law for determining whether and to what extent insurance companies should be allowed to personalise their offers.

Instead, a public dialogue is needed in order to determine which types of insurance should be dominated by the principle of solidarity (e.g. mandatory health insurance) and in which sectors the personalisation of insurance contracts should be allowed (e.g. household and car insurance). Such a dialogue could be initiated and moderated by the insurance industry and provide important insights on which the regulator could base its future regulatory decisions.

Main Findings – conceptual ethics research

The debate in ethics is shifting away from the privacy-related aspects of collecting Big Data in insurance (and elsewhere) towards the use of such data in machine learning for predictive analytics, such as quantifying risk, estimating willingness to pay and detecting fraud. There is a growing consensus that the distinction between direct and indirect discrimination is becoming less salient and harder to define. Personalisation shifts the attention of both the client and the regulator away from socially salient group membership, e.g. sex or race. Decisions are increasingly based on predictive traits that are not socially salient, such as measured driving style or lifestyle choices such as joining a gym. This makes risk assessment less problematic from the perspective of discrimination as traditionally understood. At the same time, traits that are not socially salient (e.g. gym use) are often significantly correlated with membership in socially salient groups – and those correlations may reflect past or current discriminatory practices. However, it is often impossible to eliminate indirect discrimination without reducing the accuracy of the predictions made with Big Data. Since accurate risk assessment, fraud detection and willingness-to-pay estimation play an important role in the economic sustainability of insurance companies, there can be ethical reasons to accept Big Data methods despite the indirect discrimination they may involve.

Therefore, implementing decisions based on “fair” predictions involves trade-offs between different fairness intuitions and other relevant ethical values. This indicates that frameworks for “fairness by design” may be required to reduce reputational risks when using machine learning in insurance.
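The trade-off between predictive accuracy and group fairness described above can be made concrete with a small numeric sketch. The example below is purely illustrative and not taken from the study: the synthetic risk scores, the decision thresholds, and the choice of demographic parity (equal positive-decision rates across groups) as the fairness metric are all assumptions made for demonstration.

```python
# Toy sketch: accuracy vs. demographic parity in a risk-based decision rule.
# All data and thresholds are invented for illustration.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(preds_g) / len(preds_g)
    low, high = sorted(rates.values())
    return high - low

def accuracy(predictions, labels):
    """Fraction of predictions that match the true outcomes."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Synthetic portfolio: model risk scores, group membership (0/1), true outcomes.
scores = [0.2, 0.4, 0.6, 0.8, 0.3, 0.5, 0.7, 0.9]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
labels = [0, 0, 1, 1, 0, 1, 1, 1]

# A single accuracy-maximising threshold treats the groups unequally here:
preds = [1 if s >= 0.5 else 0 for s in scores]
print(accuracy(preds, labels))            # 1.0 (perfectly accurate)
print(demographic_parity_gap(preds, groups))  # 0.25 (unequal decision rates)

# Equalising decision rates via group-specific thresholds costs accuracy:
fair_preds = [1 if s >= (0.5 if g == 0 else 0.6) else 0
              for s, g in zip(scores, groups)]
print(accuracy(fair_preds, labels))            # 0.875 (one misclassification)
print(demographic_parity_gap(fair_preds, groups))  # 0.0 (equal rates)
```

In this contrived case, no decision rule achieves both perfect accuracy and a zero parity gap, which mirrors the report's point that removing indirect discrimination can reduce predictive accuracy.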

Main Findings – empirical survey research

The survey included answers from Switzerland (German-speaking sample: N=764; French-speaking sample: N=317) and the U.S. (N=1083). Three insights are noteworthy:

First, the willingness to share information is linked to the type of information and the trust placed in institutions (e.g. insurance companies) and Internet companies. Two main patterns were found: Individuals trusting in the “old economy” (insurance, media, government etc.) are more likely to share factual data (name, age etc.) with others, whereas people who trust in the “new economy” (Internet companies) are more likely to share emotional data (photos, comments, opinions etc.). This indicates that individuals are selective about the information they share and that they are more likely to share sensitive information if they trust the companies providing Big Data applications.

Second, people show resistance to data use in insurance products as soon as the data seems unrelated to the object of insurance; this resistance is higher when the values of fairness, privacy and solidarity are “protected”. This indicates that customers expect a plausible relationship between the data used in a product and the insurance target of that product.

Third, regarding the expert survey, only preliminary findings are possible due to the low number of responses (N=23). Experts are confronted with ethical and legal issues related to Big Data on a monthly basis on average. Ethics expertise is usually available, but guidelines tend to be missing within companies. This indicates a possible gap between the willingness to handle ethical issues and the availability of tools for doing so effectively.

Further reading:

  • Tanner C, Christen M, et al. (in preparation): Clients’ perceptions and responses to threats to ethical values through Big Data. Contact research team for further information.
  • Loi M, Christen M, Tanner C (in preparation): Philosophical implications for trust and value perceptions in insurance. Contact research team for further information.
  • Sharon A, Hauser C, et al. (in preparation): Information sharing behaviour on social networks: Which role does trust play? Contact research team for further information.
  • Dahinden U, Tanner C, et al. (in preparation): The relationship between clients’ values and trust. Contact research team for further information.


Recommendations
  • The question whether, under what conditions and to what extent insurance companies should be allowed to personalise their insurance contracts based on Big Data analytics should not be resolved indirectly by applying the general principles of data protection and anti-discrimination law.
  • The Swiss regulator should continuously monitor and anticipate the use of Big Data for the personalisation of insurance contracts, identify unwanted forms of personalisation, and create specific provisions in insurance law, where needed, to either prohibit personalisation or define the conditions and the extent of permissible personalisation.
  • Insurance companies should avoid using data sources that are not related to the insured risk, as this may undermine the customer’s trust in the products and services of the industry.
  • Insurance companies should demonstrate to their clients how they protect core values such as privacy, fairness or solidarity from risks posed by Big Data analytics.
  • Insurance companies should increase their awareness of the nature and impact of the unwanted discriminatory use of Big Data-based machine learning in prediction, pricing and fraud detection.
  • Insurance companies should adapt their general business ethics principles for achieving accountability to the systematic handling of ethical issues resulting from the digitalisation of the industry.