The Tyranny of Data? The Bright and Dark Sides of Data-Driven Decision-Making for Social Good

May 2017
DOI: 10.1007/978-3-319-54024-5_1
In book: Transparent Data Mining for Big and Small Data (pp. 3-24)

Authors: Bruno Lepri (Fondazione Bruno Kessler), Jacopo Staiano (Università degli Studi di Trento), David Sangokoya, Emmanuel Letouzé (Massachusetts Institute of Technology), Nuria Oliver
Figures: "Requirements summary for positive data-driven disruption"; "Summary table for the literature discussed in Section 2" (uploaded by Nuria Oliver).

The Tyranny of Data? The Bright and Dark Sides of Data-Driven Decision-Making for Social Good

Bruno Lepri, Jacopo Staiano, David Sangokoya, Emmanuel Letouzé and Nuria Oliver

Abstract  The unprecedented availability of large-scale human behavioral data is profoundly changing the world we live in.
Researchers, companies, governments, financial institutions, non-governmental organizations and also citizen groups are actively experimenting, innovating and adapting algorithmic decision-making tools to understand global patterns of human behavior and provide decision support to tackle problems of societal importance. In this chapter, we focus our attention on social good decision-making algorithms, that is, algorithms strongly influencing decision-making and resource optimization of public goods, such as public health, safety, access to finance and fair employment. Through an analysis of specific use cases and approaches, we highlight both the positive opportunities that are created through data-driven algorithmic decision-making, and the potential negative consequences that practitioners should be aware of and address in order to truly realize the potential of this emergent field. We elaborate on the need for these algorithms to provide transparency and accountability, to preserve privacy and to be tested and evaluated in context, by means of living lab approaches involving citizens.
Finally, we turn to the requirements which would make it possible to leverage the predictive power of data-driven human behavior analysis while ensuring transparency, accountability, and civic participation.

Bruno Lepri, Fondazione Bruno Kessler, e-mail: lepri@fbk.eu
Jacopo Staiano, Fortia Financial Solutions, e-mail: jacopo.staiano@fortia.fr
David Sangokoya, Data-Pop Alliance, e-mail: dsangokoya@datapopalliance.org
Emmanuel Letouzé, Data-Pop Alliance and MIT Media Lab, e-mail: eletouze@mit.edu
Nuria Oliver, Data-Pop Alliance, e-mail: nuria@alum.mit.edu

arXiv:1612.00323v2 [cs.CY] 2 Dec 2016

1 Introduction

The world is experiencing an unprecedented transition where human behavioral data has evolved from being a scarce resource to being a massive and real-time stream. This availability of large-scale data is profoundly changing the world we live in and has led to the emergence of a new discipline called computational social science [45]; finance, economics, marketing, public health, medicine, biology, politics, urban science and journalism, to name a few, have all been disrupted to some degree by this trend [41].

Moreover, the automated analysis of anonymized and aggregated large-scale human behavioral data offers new possibilities to understand global patterns of human behavior and to help decision makers tackle problems of societal importance [45], such as monitoring socio-economic deprivation [8, 75, 76, 88] and crime [11, 10, 84, 85, 90], mapping the propagation of diseases [37, 94], or understanding the impact of natural disasters [55, 62, 97].
Thus, researchers, companies, governments, financial institutions, non-governmental organizations and also citizen groups are actively experimenting, innovating and adapting algorithmic decision-making tools, often relying on the analysis of personal information.

However, researchers from different disciplinary backgrounds have identified a range of social, ethical and legal issues surrounding data-driven decision-making, including privacy and security [19, 22, 23, 56], transparency and accountability [18, 61, 99, 100], and bias and discrimination [3, 79]. For example, Barocas and Selbst [3] point out that the use of data-driven decision-making processes can result in disproportionate adverse outcomes for disadvantaged groups, in ways that look like discrimination. Algorithmic decisions can reproduce patterns of discrimination, due to decision makers' prejudices [60], or reflect the biases present in society [60]. In 2014, the White House released a report, titled "Big Data: Seizing opportunities, preserving values" [65], that highlights the discriminatory potential of big data, including how it could undermine longstanding civil rights protections governing the use of personal information for credit, health, safety, employment, etc. For example, data-driven decisions about applicants for jobs, schools or credit may be affected by hidden biases that tend to flag individuals from particular demographic groups as unfavorable for such opportunities.
Such outcomes can be self-reinforcing, since systematically reducing individuals' access to credit, employment and educational opportunities may worsen their situation, which can play against them in future applications.

In this chapter, we focus our attention on social good algorithms, that is, algorithms strongly influencing decision-making and resource optimization of public goods, such as public health, safety, access to finance and fair employment. These algorithms are of particular interest given the magnitude of their impact on quality of life and the risks associated with the information asymmetry surrounding their governance.

In a recent book, William Easterly evaluates how global economic development and poverty alleviation projects have been governed by a "tyranny of experts" – in this case, aid agencies, economists, think tanks and other analysts – who consistently favor top-down, technocratic governance approaches at the expense of the individual rights of citizens [28]. Easterly details how these experts reduce multidimensional social phenomena such as poverty or justice to a set of technical solutions that take into account neither the political systems in which they operate nor the rights of the intended beneficiaries. Take, for example, the displacement of farmers in the Mubende district of Uganda: as a direct result of a World Bank project intended to raise the region's income by converting land to higher-value uses, farmers in this district were forcibly removed from their homes by government soldiers in order to prepare for a British company to plant trees in the area [28].
Easterly underlines the cyclic nature of this tyranny: technocratic justifications for specific interventions are considered objective; the intended beneficiaries are unaware of the opaque, black-box decision-making involved in these resource optimization interventions; and the experts (and the coercive powers which employ them) act with impunity and without redress.

If we turn to the use, governance and deployment of big data approaches in the public sector, we can draw several parallels with what we refer to as the "tyranny of data", that is, the adoption of data-driven decision-making under the technocratic and top-down approaches highlighted by Easterly [28]. We elaborate on the need for social good decision-making algorithms to provide transparency and accountability, to only use personal information – owned and controlled by individuals – with explicit consent, to ensure that privacy is preserved when data is analyzed in aggregated and anonymized form, and to be tested and evaluated in context, that is, by means of living lab approaches involving citizens. In our view, these characteristics are crucial for fair data-driven decision-making as well as for citizen engagement and participation.

In the rest of this chapter, we provide the reader with a compendium of the issues arising from current big data approaches, with a particular focus on specific use cases that have been carried out to date, including urban crime prediction [10], inferring the socioeconomic status of countries and individuals [8, 49, 76], mapping the propagation of diseases [37, 94] and modeling individuals' mental health [9, 20, 47]. Furthermore, we highlight factors of risk (e.g.
privacy violations, lack of transparency and discrimination) that might arise when decisions potentially impacting people's daily lives are heavily rooted in the outcomes of black-box data-driven predictive models. Finally, we turn to the requirements which would make it possible to leverage the predictive power of data-driven human behavior analysis while ensuring transparency, accountability, and civic participation.

2 The rise of data-driven decision-making for social good

The unprecedented stream of large-scale, human behavioral data has been described as a "tidal wave" of opportunities to both predict and act upon the analysis of the petabytes of digital signals and traces of human actions and interactions. With such massive streams of relevant data to mine and to train algorithms with, as well as increased analytical and technical capacities, it is no surprise that companies and public sector actors are turning to machine learning-based algorithms to tackle complex problems at the limits of human decision-making [36, 96]. The history of human decision-making – particularly when it comes to questions of power in resource allocation, fairness, justice, and other public goods – is rife with examples of extreme bias leading to corrupt, inefficient or unjust processes and outcomes [2, 34, 70, 87].
In short, human decision-making has shown significant limitations, and the turn towards data-driven algorithms reflects a search for objectivity, evidence-based decision-making, and a better understanding of our resources and behaviors.

Diakopoulos [27] characterizes the function and power of algorithms in four broad categories: 1) classification, the categorization of information into separate "classes" based on its features; 2) prioritization, the denotation of emphasis and rank on particular information or results, at the expense of others, based on a pre-defined set of criteria; 3) association, the determination of correlated relationships between entities; and 4) filtering, the inclusion or exclusion of information based on pre-determined criteria.

Table 1 provides examples of types of algorithms across these categories.

Table 1  Algorithmic function and examples, adapted from Diakopoulos [27] and Latzer et al.
[44]:

  Prioritization – general and meta search engines, semantic search engines, question & answer services (e.g. Google, Bing, Baidu; image search; social media; Quora; Ask.com)
  Classification – reputation systems, news scoring, credit scoring, social scoring (e.g. Ebay, Uber, Airbnb; Reddit, Digg; CreditKarma; Klout)
  Association – predicting developments and trends (e.g. ScoreAhit, Music Xray, Google Flu Trends)
  Filtering – spam filters, child protection filters, recommender systems, news aggregators (e.g. Norton; Net Nanny; Spotify, Netflix; Facebook Newsfeed)

This chapter places emphasis on what we call social good algorithms – algorithms strongly influencing decision-making and resource optimization for public goods. These algorithms are designed to analyze massive amounts of human behavioral data from various sources and then, based on pre-determined criteria, select the information most relevant to their intended purpose. While resource allocation and decision optimization over limited resources remain common features of the public sector, the use of social good algorithms brings to a new level the amount of human behavioral data that public sector actors can access, the capacities with which they can analyze this information and deliver results, and the communities of experts and common people who hold these results to be objective.
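The four algorithmic functions in this taxonomy can be made concrete with a toy, purely illustrative sketch; the data items and rules below are invented for this example and are not drawn from any of the systems listed in Table 1.

```python
# Toy illustration of the four algorithmic functions -- classification,
# prioritization, association, filtering -- on an invented stream of news items.

items = [
    {"title": "Flu cases rise in region A",  "topic": "health",  "clicks": 120, "spam": False},
    {"title": "WIN A FREE PHONE!!!",         "topic": "ads",     "clicks": 900, "spam": True},
    {"title": "New credit rules announced",  "topic": "finance", "clicks": 450, "spam": False},
    {"title": "Flu vaccine campaign starts", "topic": "health",  "clicks": 300, "spam": False},
]

# 1) classification: assign each item to a "class" based on its features
by_topic = {}
for it in items:
    by_topic.setdefault(it["topic"], []).append(it["title"])

# 2) prioritization: rank items by a pre-defined criterion (here, clicks)
ranked = sorted(items, key=lambda it: it["clicks"], reverse=True)

# 3) association: relate two attributes (which topics co-occur with spam?)
spam_topics = {it["topic"] for it in items if it["spam"]}

# 4) filtering: include or exclude items by a pre-determined rule
visible = [it["title"] for it in items if not it["spam"]]

print(ranked[0]["title"])   # the most-clicked item is surfaced first
print(len(visible))         # items that survive the spam filter
```

Note how, even in this toy form, the pre-defined criteria (click counts, the spam flag) fully determine what is surfaced and what disappears, which is precisely the power the taxonomy describes.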
The ability of these algorithms to identify, select and determine information of relevance beyond the scope of human decision-making creates a new kind of decision optimization, facilitated by both the design of the algorithms and the data on which they are based. However, as discussed later in the chapter, this new process is often opaque and assumes a level of impartiality that is not always accurate. It also creates information asymmetry and a lack of transparency between the actors using these algorithms and the intended beneficiaries whose data is being used.

In the following sub-sections, we assess the nature, function and impact of the use of social good algorithms in three key areas: criminal behavior dynamics and predictive policing; socio-economic deprivation and financial inclusion; and public health.

2.1 Criminal behavior dynamics and predictive policing

Researchers have turned their attention to the automatic analysis of criminal behavior dynamics from both a people-centric and a place-centric perspective. The people-centric perspective has mostly been used for individual or collective criminal profiling [67, 72, 91]. For example, Wang et al. [91] proposed a machine learning approach, called Series Finder, for the problem of detecting specific patterns in crimes committed by the same offender or group of offenders.

In 2008, the criminologist David Weisburd proposed a shift from a people-centric paradigm of police practices to a place-centric one [93], thus focusing on geographical topology and micro-structures rather than on criminal profiling. An example of the place-centric perspective is the detection, analysis, and interpretation of crime hotspots [16, 29, 53].
Along these lines, Toole et al. [84] proposed a novel application of quantitative tools from mathematics, physics and signal processing to analyze spatial and temporal patterns in criminal offense records. Their analyses of crime data recorded from 1991 to 1999 in the American city of Philadelphia indicated the existence of multi-scale complex relationships in space and time. Further, over the last few years, aggregated and anonymized mobile phone data has opened new possibilities to study city dynamics with unprecedented temporal and spatial granularity [7]. Recent work has used this type of data to predict crime hotspots through machine-learning algorithms [10, 11, 85].

More recently, these predictive policing approaches [64] have been moving from the academic realm (universities and research centers) to police departments. In Chicago, police officers are paying particular attention to those individuals flagged, through risk analysis techniques, as most likely to be involved in future violence. In Santa Cruz, California, the police have reported a dramatic reduction in burglaries after adopting algorithms that predict where new burglaries are likely to occur. In Charlotte, North Carolina, the police department has generated a map of high-risk areas that are likely to be hit by crime. The police departments of Los Angeles, Atlanta and more than 50 other US cities are using PredPol, an algorithm that generates 500-by-500-foot predictive boxes on maps, indicating areas where crime is most likely to occur. Similar approaches have also been implemented in Brazil, the UK and the Netherlands.
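In the spirit of the grid-based systems just described, a minimal sketch of place-centric hotspot scoring might look as follows. The exponential decay weighting and all counts are invented for illustration; deployed systems such as PredPol rely on far richer spatio-temporal models.

```python
# Minimal, illustrative sketch of place-centric hotspot scoring: the city is
# divided into grid cells and each cell is scored from its recent weekly
# offense counts, with recent weeks weighted more heavily. All numbers are
# made up; real predictive policing systems use far richer models.

def hotspot_scores(counts_by_cell, decay=0.5):
    """counts_by_cell maps cell -> list of weekly counts, oldest first.
    Returns cell -> score, where older weeks are down-weighted by `decay`."""
    scores = {}
    for cell, counts in counts_by_cell.items():
        n = len(counts)
        scores[cell] = sum(c * decay ** (n - 1 - i) for i, c in enumerate(counts))
    return scores

def top_cells(scores, k):
    """The k cells to prioritize next period, highest score first."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

weekly_counts = {
    (0, 0): [5, 1, 0],   # cooling down
    (0, 1): [0, 2, 6],   # heating up
    (1, 0): [1, 1, 1],   # stable, low
}
scores = hotspot_scores(weekly_counts)
print(top_cells(scores, 1))   # -> [(0, 1)]
```

Even this toy version exhibits the property discussed later in the chapter: patrols sent to high-scoring cells generate more recorded incidents there, which feeds back into the next period's scores.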
Overall, four main predictive policing approaches are currently in use: (i) methods to forecast places and times with an increased risk of crime [32]; (ii) methods to detect offenders and flag individuals at risk of offending in the future [64]; (iii) methods to identify perpetrators [64]; and (iv) methods to identify groups or, in some cases, individuals who are likely to become victims of crime [64].

2.2 Socio-economic deprivation and financial inclusion

Being able to accurately measure and monitor key sociodemographic and economic indicators is critical to designing and implementing public policies [68]. For example, the geographic distribution of poverty and wealth is used by governments to make decisions about how to allocate scarce resources, and it provides a foundation for the study of the determinants of economic growth [33, 43]. The quantity and quality of the economic data available have significantly improved in recent years. However, the scarcity of reliable key measures in developing countries represents a major challenge for researchers and policy-makers (see http://www.undatarevolution.org/report/), hampering efforts to target interventions effectively to the areas of greatest need (e.g. African countries) [26, 40]. Recently, several researchers have started to use mobile phone data [8, 49, 76], social media [88] and satellite imagery [39] to infer the poverty and wealth of individual subscribers, as well as to create high-resolution maps of the geographic distribution of wealth and deprivation.

The use of novel sources of behavioral data and of algorithmic decision-making processes is also playing a growing role in the area of financial services, for example credit scoring.
Credit scoring is a widely used tool in the financial sector to compute the risk of lending to potential credit customers. By providing information about customers' ability to pay back their debts or, conversely, to default, credit scores have become a key variable in building financial models of customers. Thus, as lenders have moved from traditional interview-based decisions to data-driven models for assessing credit risk, consumer lending and credit scoring have become increasingly sophisticated. Automated credit scoring has become a standard input into the pricing of mortgages, auto loans, and unsecured credit. However, this approach is mainly based on the past financial history of customers (people or businesses) [81], and it is thus not adequate for providing credit access to people or businesses with no available financial history. Therefore, researchers and companies are investigating novel sources of data to replace or improve traditional credit scores, potentially opening credit access to individuals or businesses that have traditionally had poor or no access to mainstream financial services – e.g. people who are unbanked or underbanked, new immigrants, graduating students, etc. Researchers have leveraged mobility patterns from credit card transactions [73], as well as mobility and communication patterns from mobile phones, to automatically build user models of spending behavior [74] and of propensity to credit default [71, 73].
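As an illustration of how behavioral features might enter such a model, the sketch below scores two hypothetical applicants with a logistic function. The feature names and weights are invented for this example and are not taken from the cited systems, which learn their parameters from labelled repayment data.

```python
import math

# Purely illustrative sketch of a behavioral credit score: a logistic model
# over features that could be derived from mobile phone metadata. Feature
# names and weights below are invented; real systems learn them from data.

WEIGHTS = {
    "call_regularity":   1.2,   # regularity of the daily call pattern, 0..1
    "mobility_radius":  -0.4,   # normalized radius of gyration, 0..1
    "topups_per_month":  0.08,  # prepaid top-ups per month
}
BIAS = -1.0

def repayment_score(features):
    """Toy probability that the applicant repays, via a logistic model."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

applicant_a = {"call_regularity": 0.9, "mobility_radius": 0.3, "topups_per_month": 10}
applicant_b = {"call_regularity": 0.2, "mobility_radius": 0.8, "topups_per_month": 1}
print(repayment_score(applicant_a) > repayment_score(applicant_b))   # -> True
```

The sketch also makes the later discussion of discrimination concrete: whoever chooses the behavioral features and their weights implicitly decides which kinds of lives look "creditworthy".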
The use of mobile phone, social media, and browsing data for financial risk assessment has also attracted the attention of several entrepreneurial efforts, such as Cignifi (http://cignifi.com/), Lenddo (https://www.lenddo.com/), InVenture (http://tala.co/), and ZestFinance (https://www.zestfinance.com/).

2.3 Public health

Characterizing the mobility of individuals and of entire populations is of paramount importance for public health [57]: for example, it is key to predicting the spatial and temporal risk of diseases [35, 82, 94], quantifying exposure to air pollution [48], understanding human migrations after natural disasters or emergency situations [4, 50], etc. The traditional approach has been based on household surveys and on information provided by census data. These methods suffer from recall bias and from limitations in the size of the population sample, mainly due to the excessive costs of data acquisition. Moreover, survey and census data provide a snapshot of population dynamics at a given moment in time, whereas it is fundamental to monitor mobility patterns in a continuous manner, particularly during emergencies, in order to support decision making and to assess the impact of government measures.

Tizzoni et al. [82] and Wesolowski et al. [95] have compared traditional mobility surveys with the information provided by mobile phone data (Call Detail Records, or CDRs), specifically to model the spread of diseases.
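As a schematic illustration of how CDR-derived mobility can feed a disease model, the sketch below couples two SIR compartmental models through a mixing matrix of the kind that could be estimated from aggregated, anonymized call records. The parameters and the matrix are arbitrary; studies such as Tizzoni et al. [82] fit far more detailed models to real data.

```python
# Schematic metapopulation SIR model: a mobility matrix (as could be estimated
# from aggregated, anonymized CDRs) couples local epidemics across regions.

def sir_step(S, I, R, N, M, beta=0.5, gamma=0.2):
    """One discrete step of a coupled SIR model over n regions.
    M[i][j] weights how much region i's residents are exposed to the
    prevalence in region j (each row of M sums to 1)."""
    n = len(S)
    # force of infection felt in each region, mixing prevalences via M
    lam = [beta * sum(M[i][j] * I[j] / N[j] for j in range(n)) for i in range(n)]
    new_inf = [lam[i] * S[i] for i in range(n)]
    rec = [gamma * I[i] for i in range(n)]
    S2 = [S[i] - new_inf[i] for i in range(n)]
    I2 = [I[i] + new_inf[i] - rec[i] for i in range(n)]
    R2 = [R[i] + rec[i] for i in range(n)]
    return S2, I2, R2

# two regions, outbreak seeded only in region 0, mostly local mixing
N = [1000.0, 1000.0]
M = [[0.9, 0.1], [0.1, 0.9]]
S, I, R = [990.0, 1000.0], [10.0, 0.0], [0.0, 0.0]
for _ in range(30):
    S, I, R = sir_step(S, I, R, N, M)
print(I[1] > 0.0)   # mobility coupling has carried the infection to region 1
```

The off-diagonal entries of M are exactly what CDR analysis contributes: without them, the two regional epidemics would evolve independently.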
The findings of these works recommend the use of mobile phone data, by itself or in combination with traditional sources, particularly in low-income economies where the availability of surveys is very limited.

Another important area of opportunity within public health is mental health. Mental health problems are recognized as a major public health issue (see http://www.who.int/topics/mental_health/en/). However, the traditional model of episodic care is suboptimal for preventing negative mental health outcomes and improving chronic disease outcomes. To assess human behavior in the context of mental wellbeing, standard clinical practice relies on periodic self-reports, which suffer from subjectivity and memory biases and are likely influenced by the respondent's current mood state. Moreover, individuals with mental conditions typically visit doctors once a crisis has already happened, and can thus report only limited information about the precursors that would be useful for preventing the onset of the crisis. Novel sources of behavioral data offer the possibility of monitoring mental health-related behaviors and symptoms outside of clinical settings and without having to depend on self-reported information [52].
For example, several studies have shown that behavioral data collected through mobile phones and social media can be exploited to recognize bipolar disorders [20, 30, 59], mood [47], personality [25, 46] and stress [9].

Table 2 summarizes the main points emerging from the literature reviewed in this section.

Table 2  Summary table for the literature discussed in Section 2:

  Predictive Policing – criminal behavior profiling [67, 72, 91]; crime hotspot prediction [10, 11, 32, 85]; perpetrator(s)/victim(s) identification [64]
  Finance & Economy – wealth & deprivation mapping [8, 49, 39, 76, 88]; spending behavior profiling [74]; credit scoring [71, 73]
  Public Health – epidemiologic studies [35, 82, 94]; environment and emergency mapping [4, 48, 50]; mental health [9, 20, 25, 30, 46, 47, 52, 59]

3 The dark side of data-driven decision-making for social good

The potential positive impact of big data and machine learning-based approaches to decision-making is huge. However, several researchers and experts [3, 19, 61, 79, 86] have underlined what we refer to as the dark side of data-driven decision-making, including violations of privacy, information asymmetry, lack of transparency, discrimination and social exclusion.
In this section, we turn our attention to these elements before outlining three key requirements that would be necessary in order to realize the positive impact of data-driven decision-making in the context of social good while minimizing its potential negative consequences.

3.1 Computational violations of privacy

Reports and studies [66] have focused on the misuse of personal data disclosed by users and on the aggregation of data from different sources by entities acting as data brokers, with direct implications for privacy. An often overlooked element is that computational developments, coupled with the availability of novel sources of behavioral data (e.g. social media data, mobile phone data, etc.), now allow inferences to be drawn about private information that may never have been disclosed. This element is essential to understanding the issues raised by these algorithmic approaches.

A recent study by Kosinski et al. [42] combined data on Facebook "Likes" and limited survey information to accurately predict a male user's sexual orientation, ethnic origin, religious and political preferences, as well as alcohol, drug, and cigarette use. Moreover, Twitter data has recently been used to identify people with a high likelihood of falling into depression before the onset of clinical symptoms [20].

It has also been shown that, despite algorithmic advances in data anonymization, it is feasible to infer identities from anonymized human behavioral data, particularly when combined with information derived from additional sources. For example, Zang et al.
[98] have reported that, if home and work addresses were available for some users, up to 35% of the users of a mobile network could be de-identified using just their two most visited towers, which are likely to correspond to their home and work locations. More recently, de Montjoye et al. [22, 23] have demonstrated how unique each individual's mobility and shopping behaviors are. Specifically, they have shown that four spatio-temporal points are enough to uniquely identify 95% of the people in a mobile phone database of 1.5M people, and 90% of the people in a credit card database of 1M people.

3.2 Information asymmetry and lack of transparency

Both governments and companies use data-driven algorithms for decision making and optimization. Thus, accountability in the government and corporate use of such decision-making tools is fundamental, both for validating their utility toward the public interest and for redressing the harms they may generate.

However, the ability to accumulate and manipulate behavioral data about customers and citizens on an unprecedented scale may give big companies and intrusive or authoritarian governments powerful means to manipulate segments of the population through targeted marketing efforts and social control strategies. In particular, we might witness an information asymmetry situation in which a powerful few have access to, and make use of, knowledge that the majority do not have access to, thus creating a new asymmetry of power – or exacerbating the existing one – between the state or big companies on one side and the people on the other [1].
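The unicity measurements of de Montjoye et al. [22, 23] discussed in Section 3.1 can be sketched concretely: given a pseudonymized set of (cell tower, hour) points per user, estimate how often p randomly drawn points from one user's trace match that user alone. The traces below are synthetic; the cited studies ran this kind of test on real mobility and credit card datasets.

```python
import random

# Sketch of a "unicity" measurement on synthetic traces: how often do p
# randomly drawn (cell tower, hour) points from one user's trace identify
# that user uniquely within the whole dataset?

def unicity(traces, p, trials=200, seed=0):
    """Fraction of sampled p-point subsets that single out their owner."""
    rng = random.Random(seed)
    users = list(traces)
    unique = 0
    for _ in range(trials):
        u = rng.choice(users)
        points = set(rng.sample(sorted(traces[u]), p))
        matching = [v for v in users if points <= traces[v]]
        unique += (matching == [u])
    return unique / trials

traces = {
    "u1": {(1, 8), (2, 9), (3, 18), (1, 20)},
    "u2": {(1, 8), (5, 9), (3, 18), (6, 21)},
    "u3": {(7, 8), (2, 9), (8, 18), (1, 20)},
}
print(unicity(traces, p=3))   # -> 1.0: three points single out anyone here
```

Even on this three-user toy dataset, a handful of spatio-temporal points pins each user down, which is the essence of the 95%/90% results reported above.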
In addition, the nature and use of various data-driven algorithms for social good, as well as the lack of computational and data literacy among citizens, make algorithmic transparency difficult to generalize and accountability difficult to assess [61].

Burrell [12] has provided a useful framework characterizing three different types of opacity in algorithmic decision-making: (1) intentional opacity, whose objective is the protection of the intellectual property of the inventors of the algorithms. This type of opacity could be mitigated by legislation pushing decision-makers towards the use of open source systems; the new General Data Protection Regulation (GDPR) in the EU, with its "right to an explanation" applying from 2018, is an example of such legislation.^7 However, clear corporate and governmental interests in favor of intentional opacity make this type of opacity difficult to eliminate. (2) Illiterate opacity, due to the fact that the vast majority of people lack the technical skills to understand the underpinnings of algorithms and of the machine learning models built from data. This kind of opacity might be attenuated by stronger education programs in computational thinking and by enabling independent experts to advise those affected by algorithmic decision-making. (3) Intrinsic opacity, which arises from the nature of certain machine learning methods that are difficult to interpret (e.g. deep learning models). This opacity is well known in the machine learning community, where it is usually referred to as the interpretability problem.
The main approach to combating this type of opacity is to use alternative machine learning models that are easy for humans to interpret, even though they might yield lower accuracy than black-box, non-interpretable models.

Fortunately, there is increasing awareness of the importance of reducing or eliminating the opacity of data-driven algorithmic decision-making systems. There are a number of research efforts and initiatives in this direction, including the Data Transparency Lab^8, a "community of technologists, researchers, policymakers and industry representatives working to advance online personal data transparency through research and design", and the DARPA Explainable Artificial Intelligence (XAI) project.^9 A tutorial on the subject was held at the 2016 ACM SIGKDD Conference on Knowledge Discovery and Data Mining [38].

^7 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). http://eur-lex.europa.eu/eli/reg/2016/679/oj
^8 http://www.datatransparencylab.org/
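The interpretability contrast discussed above can be made concrete with a minimal sketch. A single-threshold "decision stump" is a model whose entire decision logic can be stated in one sentence, which is exactly what a black-box model cannot offer; the toy dataset below is invented for illustration.

```python
# A decision stump: an interpretable model whose entire decision logic
# is a single human-readable rule. Toy data: (feature value, label) pairs.
data = [(1.0, 0), (2.0, 0), (2.5, 0), (3.5, 1), (4.0, 1), (5.0, 1), (3.0, 0), (3.8, 1)]

def fit_stump(points):
    """Pick the threshold t minimizing training errors for the rule: x > t -> 1."""
    best_t, best_err = None, len(points) + 1
    for t, _ in points:  # candidate thresholds: the observed feature values
        err = sum((x > t) != bool(y) for x, y in points)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

threshold, errors = fit_stump(data)
print(f"learned rule: predict 1 if x > {threshold}  (training errors: {errors})")
```

The learned model is the sentence "predict 1 when x exceeds 3.0": anyone affected by the decision can inspect and contest it, at the cost of expressive power compared with a deep model.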
Researchers from New York University's Information Law Institute, such as Helen Nissenbaum and Solon Barocas, and from Microsoft Research, such as Kate Crawford and Tarleton Gillespie, have held several workshops and conferences over the past few years on the ethical and legal challenges related to algorithmic governance and decision-making.^10 Cathy O'Neil's book "Weapons of Math Destruction", a nominee for the National Book Award, details several case studies on the harms and risks to public accountability associated with big-data-driven algorithmic decision-making, particularly in the areas of criminal justice and education [58]. Recently, in partnership with Microsoft Research and others, the White House Office of Science and Technology Policy has co-hosted several public symposiums on the impacts and challenges of algorithms and artificial intelligence, specifically regarding social inequality, labor, healthcare and ethics.^11

3.3 Social exclusion and discrimination

From a legal perspective, Tobler [83] argued that discrimination derives from "the application of different rules or practices to comparable situations, or of the same rule or practice to different situations". In a recent paper, Barocas and Selbst [3] elaborate that discrimination may be an artifact of the data collection and analysis process itself; more specifically, even with the best intentions, data-driven algorithmic decision-making can lead to discriminatory practices and outcomes. Algorithmic decision procedures can reproduce existing patterns of discrimination, inherit the prejudice of prior decision makers, or simply reflect the widespread biases that persist in society [19].
They can even have the perverse result of exacerbating existing inequalities by suggesting that historically disadvantaged groups actually deserve less favorable treatment [58].

^9 http://www.darpa.mil/program/explainable-artificial-intelligence
^10 http://www.law.nyu.edu/centers/ili/algorithmsconference
^11 https://www.whitehouse.gov/blog/2016/05/03/preparing-future-artificial-intelligence

Discrimination from algorithms can occur for several reasons. First, the input data of algorithmic decisions may be poorly weighted, leading to disparate impact: for example, as a form of indirect discrimination, overemphasizing zip code within predictive policing algorithms can lead to the association of low-income African-American neighborhoods with areas of crime and, as a result, to targeting based on group membership [17]. Second, discrimination can stem from the decision to use an algorithm itself: categorization, through algorithmic classification, prioritization, association and filtering, can be considered a form of direct discrimination, whereby algorithms are used for disparate treatment [27]. Third, algorithms can lead to discrimination as a result of the misuse of certain models in different contexts [14].
Fourth, in a form of feedback loop, biased training data can be used both as evidence for the use of algorithms and as proof of their effectiveness [14].

The use of algorithmic data-driven decision processes may also result in individuals being mistakenly denied opportunities based not on their own actions but on the actions of others with whom they share some characteristics. For example, some credit card companies have lowered a customer's credit limit not based on the customer's payment history, but rather based on an analysis of other customers with a poor repayment history who had shopped at the same establishments where the customer had shopped [66].

Indeed, we find increasing evidence of detrimental impact already taking place in current, non-algorithmic approaches to credit scoring and, more generally, to background checks. The latter have been widely used in recent years in several contexts: it is common to agree to be subjected to a background check when applying for a job, leasing a new apartment, and so on. In fact, hundreds of thousands of people have unknowingly been adversely affected on existential matters such as job opportunities and housing availability, due to simple but common mistakes (for instance, misidentification) in the procedures used by external companies to perform background checks.^12 It is worth noticing that the trivial procedural mistakes causing such adverse outcomes are bound to disappear once fully replaced with data-driven methodologies. Alas, this also means that, should such methodologies not be transparent in their inner workings, the effects are likely to persist, though with different roots.

^12 See, for instance, http://www.chicagotribune.com/business/ct-background-check-penalties-1030-biz-20151029-story.html
Further, the effort required to identify the causes of unfair and discriminatory outcomes can be expected to grow dramatically as the black-box models employed to assist the decision-making process become ever more complex. This scenario highlights particularly well the need for machine learning models featuring transparency and accountability: adopting black-box approaches in scenarios where people's lives would be seriously affected by a machine-driven decision could lead to forms of algorithmic stigma^13, a particularly troubling scenario considering that the stigmatized might never become aware of being so, and that the stigmatizer would be an unaccountable machine. Recent advances in neural network-based (deep learning) models are yielding unprecedented accuracies in a variety of fields. However, such models tend to be difficult, if not impossible, to interpret, as previously explained.

^13 As a social phenomenon, the concept of stigma has received significant attention from sociologists, who, under different frames, have highlighted and categorized the various factors leading individuals or groups to be discriminated against by society and the countermoves often adopted by the stigmatized, and have analyzed the dynamics of reaction to and evolution of stigma. We refer the interested reader to the review by Major and O'Brien [51].
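Notably, group-level disparities of the kind discussed in this section can often be detected without any access to a model's internals. One common audit, used for instance in the disparate-impact work of Feldman et al. [31], simply compares favorable-outcome rates across groups; the decisions below are made-up numbers for illustration.

```python
def disparate_impact(outcomes):
    """outcomes: dict mapping group -> list of 0/1 decisions (1 = favorable).
    Returns (ratio of lowest to highest favorable rate, per-group rates)."""
    rates = {g: sum(d) / len(d) for g, d in outcomes.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit data: favorable decisions observed per protected group.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 80% favorable
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 40% favorable
}

ratio, rates = disparate_impact(decisions)
print(f"rates: {rates}, impact ratio: {ratio:.2f}")
# The common "four-fifths" rule of thumb flags ratios below 0.8.
flagged = ratio < 0.8
```

Such an outcome-level check cannot explain why a disparity arises, but it gives regulators and auditors a model-agnostic starting point even when the decision procedure itself is a black box.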
In this chapter, we highlight the need for data-driven machine learning models that are interpretable by humans whenever such models are used to make decisions that affect individuals or groups of individuals.

4 Requirements for the positive disruption of data-driven policies

As noted in the previous sections, both governments and companies are increasingly using data-driven algorithms for decision support and resource optimization. In the context of social good, accountability in the use of such powerful decision support tools is fundamental, both for validating their utility toward the public interest and for redressing the corrupt or unjust harms generated by these algorithms. Several scholars have emphasized elements of what we refer to as the dark side of data-driven policies for social good, including violations of individual and group privacy, information asymmetry, lack of transparency, and social exclusion and discrimination. Arguments against the use of social good algorithms typically call into question the use of machines in decision support and the need to protect the role of human decision-making. However, therein lies a huge potential, and an imperative, for leveraging large-scale human behavioral data to design and implement policies that would help improve the lives of millions of people.
Recent debates have focused on characterizing data-driven policies as either "good" or "bad" for society. We focus instead on the potential of data-driven policies to lead to positive disruption, such that they reinforce and enable the powerful functions of algorithms as tools generating value while minimizing their dark side.

In this section, we present key human-centric requirements for positive disruption: a fundamental renegotiation of user-centric data ownership and management, the development of tools and participatory infrastructures towards increased algorithmic transparency and accountability, and the creation of living labs for experimenting with and co-creating data-driven policies. We place humans at the center of our discussion, as humans are ultimately both the actors and the subjects of the decisions made via algorithmic means. If we are able to ensure that these requirements are met, we should be able to realize the positive potential of data-driven algorithmic decision-making while minimizing the risks and possible negative unintended consequences.

4.1 User-centric data ownership and management

A big question on the table for policy-makers, researchers, and intellectuals is: how do we unlock the value of human behavioral data while preserving the fundamental right to privacy? This question implicitly recognizes the risks inherent to the current paradigm, in terms not only of possible abuses but also of a "missed chance for innovation": the dominant siloed approach to data collection, management, and exploitation precludes participation by a wide range of actors, most notably the very producers of personal data (i.e.
the users).

On this matter, new user-centric models for personal data management have been proposed in order to empower individuals with more control over their own data's life-cycle [63]. To this end, researchers and companies are developing repositories which implement medium-grained access control to different kinds of personally identifiable information (PII), such as passwords, social security numbers and health data [92], location data [24], and personal data collected by means of smartphones or connected devices [24]. A pillar of these approaches is a Personal Data Eco-system, composed of secure vaults of personal data over which the owners are granted full control.

Along this line, an interesting example is the Enigma platform [101], which leverages the recent technological trend towards decentralization: advances in the fields of cryptography and decentralized computer networks have resulted in the emergence of a novel technology, known as the blockchain, which has the potential to reduce the role of one of the most important actors in our society: the middle man [5, 21]. By allowing people to transfer a unique piece of digital property or data to others in a safe, secure, and immutable way, this technology can support digital currencies (e.g. Bitcoin) that are not backed by any governmental body [54]; self-enforcing digital contracts, called smart contracts, whose execution does not require any human intervention (e.g. Ethereum) [80]; and decentralized marketplaces that aim to operate free from regulations [21]. Hence, Enigma tackles the challenge of providing a secure and trustworthy mechanism for the exchange of goods in a personal data market.
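Setting the cryptographic machinery aside, the core access pattern, in which analysts receive only the result of an approved computation and never the raw records, can be sketched in ordinary code. All names below are illustrative, not Enigma's actual API.

```python
# Sketch of a personal data vault that releases only approved aggregate
# results, never raw records. Illustrative only; not Enigma's real interface.
class DataVault:
    APPROVED = {"mean", "count"}  # computations the data owner has consented to

    def __init__(self, records):
        self._records = records  # raw data stays inside the vault

    def run(self, computation):
        if computation not in self.APPROVED:
            raise PermissionError(f"computation {computation!r} not permitted")
        if computation == "count":
            return len(self._records)
        return sum(self._records) / len(self._records)

vault = DataVault([12.0, 7.5, 9.0, 11.5])  # e.g. daily kilometres travelled
print("mean:", vault.run("mean"))          # the analyst obtains the aggregate only

try:
    vault.run("dump_raw")                  # any attempt to see raw data fails
except PermissionError as err:
    print("denied:", err)
```

The design choice is that consent attaches to computations rather than to datasets: the owner authorizes what may be learned, not who may copy the data.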
To illustrate how the platform works, consider the following example: a group of data analysts at an insurance company wishes to test a model that leverages people's mobile phone data. Instead of sharing their raw data with the analysts, the users can securely store their data in Enigma and only grant the analysts permission to execute their study. The analysts are thus able to execute their code and obtain the results, but nothing else. In the process, the users are compensated for having given access to their data, and the computers in the network are paid for their computing resources [78].

4.2 Algorithmic transparency and accountability

The deployment of a machine learning model entails a degree of trust regarding how satisfactory its performance in the wild will be, from the perspectives of both its builders and its users. Such trust is assessed at several points during an iterative model-building process. Nonetheless, many state-of-the-art machine learning models (e.g. neural networks) act as black boxes once deployed. When such models are used for decision-making, the lack of explanations regarding why and how they have reached their decisions poses several concerns. In order to address this limitation, recent research efforts in the machine learning community have proposed different approaches to make algorithms more amenable to ex ante and ex post inspection. For example, a number of studies have attempted to tackle the issue of discrimination within algorithms by introducing tools to both identify [6] and rectify [13, 6, 31] cases of unwanted bias. Recently, Ribeiro et al.
[69] have proposed a model-agnostic method to derive explanations for the predictions of a given model.

An interesting ongoing initiative is the Open Algorithms (OPAL) project^14, a multi-partner effort led by Orange, the MIT Media Lab, Data-Pop Alliance, Imperial College London, and the World Economic Forum, which aims to open, without exposing, data collected and stored by private companies by "sending the code to the data" rather than the other way around. The goal is to enable the design, implementation and monitoring of development policies and programs, accountability of government action, and citizen engagement, while leveraging the availability of large-scale human behavioral data. OPAL's core will consist of an open platform allowing open algorithms to run on the servers of partner companies, behind their firewalls, to extract key development indicators and operational data of relevance for a wide range of potential users. Requests by third parties for approved, certified and pre-determined indicators (e.g. mobility matrices, poverty maps, population densities) will be sent via the platform; certified algorithms will run on the data in a privacy-preserving manner, and results will be made available via an API. The platform will also be used to foster the civic engagement of a broad range of social constituents: academic institutions, private sector companies, official institutions, and non-governmental and civil society organizations.
Overall, the OPAL initiative has three key objectives: (i) engage with data providers, users, and analysts at all stages of algorithm development; (ii) contribute to building local capacities and help shape the future technological, ethical and legal frameworks that will govern the collection, control and use of human behavioral data to foster social progress; and (iii) build data literacy among users and partners, conceptualized as "the ability to constructively engage in society through and about data". Initiatives such as OPAL have the potential to enable more human-centric, accountable and transparent data-driven decision-making and governance.

^14 http://datapopalliance.org/open-algorithms-a-new-paradigm-for-using-private-data-for-social-good/

4.3 Living labs to experiment with data-driven policies

The use of real-time human behavioral data to design and implement policies has traditionally been outside the scope of policy making. However, the potential of this type of data will only be realized when policy makers are able to analyze the data, study human behavior, and test policies in the real world. A possible way forward is to build living laboratories, communities of volunteers willing to try new ways of doing things in a natural setting, in order to test ideas and hypotheses in real life. An example is the Mobile Territorial Lab (MTL), a living lab launched by Fondazione Bruno Kessler, Telecom Italia, the MIT Media Lab and Telefonica, which has been observing the lives of more than 100 families through multiple channels for more than three years [15].
Data from multiple sources, including smartphones, questionnaires, and experience sampling probes, has been collected and used to create a multi-layered view of the lives of the study participants. In particular, social interactions (e.g. call and SMS communications), mobility routines, and spending patterns have been captured. One of the MTL goals is to devise new ways of sharing personal data by means of Personal Data Store (PDS) technologies, in order to promote greater civic engagement. An example of an application enabled by PDS technologies is the sharing of best practices among families with young children. How do other families spend their money? How much do they get out and socialize? Once the individual gives permission, MyDataStore [89], the PDS system used by MTL participants, allows such personal data to be collected, anonymized, and shared with other young families safely and automatically.

The MTL has also been used to investigate how to deal with the sensitivities of collecting and using deeply personal data in real-world situations. In particular, an MTL study investigated the perceived monetary value of mobile information and its association with behavioral characteristics and demographics; the results support the argument for giving back to the people (users or citizens, according to the scenario) control over the data they constantly produce [77].

Along these lines, in May 2016 Data-Pop Alliance and the MIT Media Lab launched a novel initiative called "Laboratorio Urbano" in Bogotá, Colombia, in partnership with Bogotá's city government and Chamber of Commerce.
The main objective of the Bogotá Urban Laboratory is to contribute to the city's urban vitality, with a focus on mobility and safety, through collaborative research projects and dialogues involving the public and private sectors, academic institutions, and citizens. Similar initiatives are being planned in other major cities of the global south, including Dakar, Senegal, with the goal of strengthening and connecting local ecosystems where data-driven innovations can take place and scale.

Figure 1 provides a visual representation of the factors playing a significant role in positive data-driven disruption.

Fig. 1 Requirements summary for positive data-driven disruption.

5 Conclusion

In this chapter we have provided an overview of both the opportunities and the risks of data-driven algorithmic decision-making for the public good. We are witnessing an unprecedented time in our history, in which vast amounts of fine-grained human behavioral data are available. The analysis of this data has the potential to help inform policies in public health, disaster management, safety, economic development and national statistics, among others.
In fact, the use of data is at the core of the 17 Sustainable Development Goals (SDGs) defined by the United Nations, both to achieve the goals and to measure progress towards their achievement.

While this is an exciting time for researchers and practitioners in the new field of computational social science, we need to be aware of the risks associated with these new approaches to decision-making, including violations of privacy, lack of transparency, information asymmetry, and social exclusion and discrimination. We have proposed three human-centric requirements that we consider of paramount importance to enable the positive disruption of data-driven policy-making: user-centric data ownership and management; algorithmic transparency and accountability; and living labs to experiment with data-driven policies in the wild. Only when we honor these requirements will we be able to move from the feared tyranny of data and algorithms to a data-enabled model of democratic governance running against tyrants and autocrats, and for the people.

References

1. G.A. Akerlof. The market for "lemons": Quality uncertainty and the market mechanism. The Quarterly Journal of Economics, 84(3):488–500, 1970.
2. G.A. Akerlof and R.J. Shiller. Animal Spirits: How Human Psychology Drives the Economy, and Why It Matters for Global Capitalism. Princeton University Press, 2009.
3. S. Barocas and A.D. Selbst. Big data's disparate impact. California Law Review, 104:671–732, 2016.
4. L. Bengtsson, X. Lu, A. Thorson, R. Garfield, and J. von Schreeb.
Improved response to disasters and outbreaks by tracking population movements with mobile phone network data: A post-earthquake geospatial study in Haiti. PLoS Medicine, 8(8), 2011.
5. Y. Benkler. The Wealth of Networks. Yale University Press, New Haven, 2006.
6. B. Berendt and S. Preibusch. Better decision support through exploratory discrimination-aware data mining: Foundations and empirical evidence. Artificial Intelligence and Law, 22(2), 2014.
7. V.D. Blondel, A. Decuyper, and G. Krings. A survey of results on mobile phone datasets analysis. EPJ Data Science, 4(10), 2015.
8. J. Blumenstock, G. Cadamuro, and R. On. Predicting poverty and wealth from mobile phone metadata. Science, 350(6264):1073–1076, 2015.
9. A. Bogomolov, B. Lepri, M. Ferron, F. Pianesi, and A. Pentland. Daily stress recognition from mobile phone data, weather conditions and individual traits. In Proceedings of the 22nd ACM International Conference on Multimedia, pages 477–486, 2014.
10. A. Bogomolov, B. Lepri, J. Staiano, E. Letouzé, N. Oliver, F. Pianesi, and A. Pentland. Moves on the street: Classifying crime hotspots using aggregated anonymized data on people dynamics. Big Data, 3(3):148–158, 2015.
11. A. Bogomolov, B. Lepri, J. Staiano, N. Oliver, F. Pianesi, and A. Pentland. Once upon a crime: Towards crime prediction from demographics and mobile data. In Proceedings of the International Conference on Multimodal Interaction (ICMI), pages 427–434, 2014.
12. J. Burrell. How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2016.
13. T. Calders and S. Verwer. Three naive Bayes approaches for discrimination-free classification.
Data Mining and Knowledge Discovery, 21(2):277–292, 2010.
14. T. Calders and I. Žliobaitė. Why unbiased computational processes can lead to discriminative decision procedures. In B. Custers, T. Calders, B. Schermer, and T. Zarsky, editors, Discrimination and Privacy in the Information Society, pages 43–57, 2013.
15. S. Centellegher, M. De Nadai, M. Caraviello, C. Leonardi, M. Vescovi, Y. Ramadian, N. Oliver, F. Pianesi, A. Pentland, F. Antonelli, and B. Lepri. The Mobile Territorial Lab: A multilayered and dynamic view on parents' daily lives. EPJ Data Science, 5(3), 2016.
16. S.P. Chainey, L. Tompson, and S. Uhlig. The utility of hotspot mapping for predicting spatial patterns of crime. Security Journal, 21:4–28, 2008.
17. A. Christin, A. Rosenblat, and d. boyd. Courts and predictive algorithms. Data & Civil Rights Primer, 2015.
18. D.K. Citron and F. Pasquale. The scored society. Washington Law Review, 89(1):1–33, 2014.
19. K. Crawford and J. Schultz. Big data and due process: Toward a framework to redress predictive privacy harms. Boston College Law Review, 55(1):93–128, 2014.
20. M. De Choudhury, M. Gamon, S. Counts, and E. Horvitz. Predicting depression via social media. In Proceedings of the 7th International AAAI Conference on Weblogs and Social Media, 2013.
21. P. De Filippi. The interplay between decentralization and privacy: The case of blockchain technologies. Journal of Peer Production, 7, 2015.
22. Y.-A. de Montjoye, C. Hidalgo, M. Verleysen, and V. Blondel. Unique in the crowd: The privacy bounds of human mobility. Scientific Reports, 3, 2013.
23. Y.-A. de Montjoye, L. Radaelli, V.K. Singh, and A. Pentland.
Unique in the shopping mall: On the reidentifiability of credit card metadata. Science, 347(6221):536–539, 2015.
24. Y.-A. de Montjoye, E. Shmueli, S. Wang, and A. Pentland. openPDS: Protecting the privacy of metadata through SafeAnswers. PLoS One, 9(7), 2014.
25. R. de Oliveira, A. Karatzoglou, P. Concejero Cerezo, A. Armenta López de Vicuña, and N. Oliver. Towards a psychographic user model from mobile phone usage. In CHI '11 Extended Abstracts on Human Factors in Computing Systems, pages 2191–2196. ACM, 2011.
26. S. Devarajan. Africa's statistical tragedy. Review of Income and Wealth, 59(S1):S9–S15, 2013.
27. N. Diakopoulos. Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 2015.
28. W. Easterly. The Tyranny of Experts. Basic Books, 2014.
29. J. Eck, S. Chainey, J. Cameron, and R. Wilson. Mapping Crime: Understanding Hotspots. National Institute of Justice, Washington DC, 2005.
30. M. Faurholt-Jepsen, M. Frost, M. Vinberg, E.M. Christensen, J.E. Bardram, and L.V. Kessing. Smartphone data as objective measures of bipolar disorder symptoms. Psychiatry Research, 217:124–127, 2014.
31. M. Feldman, S.A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian. Certifying and removing disparate impact. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 259–268, 2015.
32. A.G. Ferguson. Crime mapping and the Fourth Amendment: Redrawing high-crime areas. Hastings Law Journal, 63:179–232, 2012.
33. G. Fields. Changes in poverty and inequality. World Bank Research Observer, 4:167–186, 1989.
34. S.T. Fiske.
Stereotyping, prejudice, and discrimination. In D.T. Gilbert, S.T. Fiske, and G. Lindzey, editors, Handbook of Social Psychology, pages 357\u2013411. Boston: McGraw-Hill, 1998.<\/p>\n<p> 35. E. Frias-Martinez, G. Williamson, and V. Frias-Martinez. An agent-based model of epidemic spread using human mobility and social network information. In Social Computing (SocialCom), 2011 International Conference on, pages 57\u201364. IEEE, 2011.<\/p>\n<p> 36. T. Gillespie. The relevance of algorithms. In T. Gillespie, P. Boczkowski, and K. Foot, editors, Media technologies: Essays on communication, materiality, and society, pages 167\u2013193. MIT Press, 2014.<\/p>\n<p> 37. J. Ginsberg, M.H. Mohebbi, R.S. Patel, L. Brammer, M.S. Smolinski, and L. Brilliant. Detecting influenza epidemics using search engine query data. Nature, 457:1012\u20131014, 2009.<\/p>\n<p> 38. S. Hajian, F. Bonchi, and C. Castillo. Algorithmic bias: From discrimination discovery to fairness-aware data mining. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 2125\u20132126. ACM, 2016.<\/p>\n<p> 39. N. Jean, M. Burke, M. Xie, W.M. Davis, D.B. Lobell, and S. Ermon. Combining satellite imagery and machine learning to predict poverty. Science, 353(6301):790\u2013794, 2016.<\/p>\n<p> 40. M. Jerven. Poor numbers: How we are misled by African development statistics and what to do about it. Cornell University Press, 2013.<\/p>\n<p> 41. G. King. Ensuring the data-rich future of the social sciences. Science, 2011.<\/p>\n<p> 42. M. Kosinski, D. Stillwell, and T. Graepel. Private traits and attributes are predictable from digital records of human behavior. 
Proceedings of the National Academy of Sciences, 110(15):5802\u20135805, 2013.<\/p>\n<p> 43. S. Kuznets. Economic growth and income inequality. American Economic Review, 45:1\u201328, 1955.<\/p>\n<p> 44. M. Latzer, K. Hollnbuchner, N. Just, and F. Saurwein. The economics of algorithmic selection on the internet. In J. Bauer and M. Latzer, editors, Handbook on the Economics of the Internet. Edward Elgar, Cheltenham, Northampton, 2015.<\/p>\n<p> 45. D. Lazer, A. Pentland, L. Adamic, S. Aral, A.-L. Barabasi, D. Brewer, N. Christakis, N. Contractor, J. Fowler, M. Gutmann, T. Jebara, G. King, M. Macy, D. Roy, and M. Van Alstyne. Computational social science. Science, 323(5915):721\u2013723, 2009.<\/p>\n<p> 46. B. Lepri, J. Staiano, E. Shmueli, F. Pianesi, and A. Pentland. The role of personality in shaping social networks and mediating behavioral change. User Modeling and User-Adapted Interaction, 26(2):143\u2013175, 2016.<\/p>\n<p> 47. R. LiKamWa, Y. Liu, N.D. Lane, and L. Zhong. MoodScope: Building a mood sensor from smartphone usage patterns. In Proceedings of the 11th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys), pages 389\u2013402, 2013.<\/p>\n<p> 48. H.Y. Liu, E. Skjetne, and M. Kobernus. Mobile phone tracking: In support of modelling traffic-related air pollution contribution to individual exposure and its implications for public health impact assessment. Environmental Health, 12, 2013.<\/p>\n<p> 49. T. Louail, M. Lenormand, O.G. Cantu Ros, M. Picornell, R. Herranz, E. Frias-Martinez, J.J. Ramasco, and M. Barthelemy. From mobile phone data to the spatial structure of cities. Scientific Reports, 4(5276), 2014.<\/p>\n<p> 50. X. Lu, L. Bengtsson, and P. Holme. 
Predictability of population displacement after the 2010 Haiti earthquake. Proceedings of the National Academy of Sciences, 109:11576\u201311581, 2012.<\/p>\n<p> 51. B. Major and L.T. O\u2019Brien. The social psychology of stigma. Annual Review of Psychology, 56:393\u2013421, 2005.<\/p>\n<p> 52. A. Matic and N. Oliver. The untapped opportunity of mobile network data for mental health. In Future of Pervasive Health Workshop. ACM, 2016.<\/p>\n<p> 53. G.O. Mohler, M.B. Short, P.J. Brantingham, F.P. Schoenberg, and G.E. Tita. Self-exciting point process modeling of crime. Journal of the American Statistical Association, 106:100\u2013108, 2011.<\/p>\n<p> 54. S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system. White paper, 2008.<\/p>\n<p> 55. F. Ofli, P. Meier, M. Imran, C. Castillo, D. Tuia, N. Rey, J. Briant, P. Millet, F. Reinhard, M. Parkan, and S. Joost. Combining human computing and machine learning to make sense of big (aerial) data for disaster response. Big Data, 4:47\u201359, 2016.<\/p>\n<p> 56. P. Ohm. Broken promises of privacy: Responding to the surprising failure of anonymization. UCLA Law Review, 57:1701\u20131777, 2010.<\/p>\n<p> 57. N. Oliver, A. Matic, and E. Frias-Martinez. Mobile network data for public health: Opportunities and challenges. Frontiers in Public Health, 3:189, 2015.<\/p>\n<p> 58. C. O\u2019Neil. Weapons of math destruction: How big data increases inequality and threatens democracy. Crown, 2016.<\/p>\n<p> 59. V. Osmani, A. Gruenerbl, G. Bahle, C. Haring, P. Lukowicz, and O. Mayora. Smartphones in mental health: Detecting depressive and manic episodes. IEEE Pervasive Computing, 14(3):10\u201313, 2015.<\/p>\n<p> 60. D. Pager and H. Shepherd. The sociology of discrimination: Racial discrimination in employment, housing, credit, and consumer markets. 
Annual Review of Sociology, 34:181\u2013209, 2008.<\/p>\n<p> 61. F. Pasquale. The Black Box Society: The secret algorithms that control money and information. Harvard University Press, 2015.<\/p>\n<p> 62. D. Pastor-Escuredo, Y. Torres Fernandez, J.M. Bauer, A. Wadhwa, C. Castro-Correa, L. Romanoff, J.G. Lee, A. Rutherford, V. Frias-Martinez, N. Oliver, E. Frias-Martinez, and M. Luengo-Oroz. Flooding through the lens of mobile phone activity. In IEEE Global Humanitarian Technology Conference, GHTC\u201914. IEEE, 2014.<\/p>\n<p> 63. A. Pentland. Society\u2019s nervous system: Building effective government, energy, and public health systems. IEEE Computer, 45(1):31\u201338, 2012.<\/p>\n<p> 64. W.L. Perry, B. McInnis, C.C. Price, S.C. Smith, and J.S. Hollywood. Predictive policing: The role of crime forecasting in law enforcement operations. RAND Corporation, 2013.<\/p>\n<p> 65. J. Podesta, P. Pritzker, E.J. Moniz, J. Holdren, and J. Zients. Big data: Seizing opportunities, preserving values. Technical report, Executive Office of the President, 2014.<\/p>\n<p> 66. E. Ramirez, J. Brill, M.K. Ohlhausen, and T. McSweeny. Big data: A tool for inclusion or exclusion? Technical report, Federal Trade Commission, January 2016.<\/p>\n<p> 67. J.H. Ratcliffe. A temporal constraint theory to explain opportunity-based spatial offending patterns. Journal of Research in Crime and Delinquency, 43(3):261\u2013291, 2006.<\/p>\n<p> 68. M. Ravallion. The economics of poverty: History, measurement, and policy. Oxford University Press, 2016.<\/p>\n<p> 69. M.T. Ribeiro, S. Singh, and C. Guestrin. \u201cWhy should I trust you?\u201d: Explaining the predictions of any classifier. 
In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135\u20131144, 2016.<\/p>\n<p> 70. W. Samuelson and R. Zeckhauser. Status quo bias in decision making. Journal of Risk and Uncertainty, 1:7\u201359, 1988.<\/p>\n<p> 71. J. San Pedro, D. Proserpio, and N. Oliver. MobiScore: Towards universal credit scoring from mobile phone data. In Proceedings of the International Conference on User Modeling, Adaptation and Personalization (UMAP), pages 195\u2013207, 2015.<\/p>\n<p> 72. M.B. Short, M.R. D\u2019Orsogna, V.B. Pasour, G.E. Tita, P.J. Brantingham, A.L. Bertozzi, and L.B. Chayes. A statistical model of criminal behavior. Mathematical Models and Methods in Applied Sciences, 18(supp01):1249\u20131267, 2008.<\/p>\n<p> 73. V.K. Singh, B. Bozkaya, and A. Pentland. Money walks: Implicit mobility behavior and financial well-being. PLOS ONE, 10(8):e0136628, 2015.<\/p>\n<p> 74. V.K. Singh, L. Freeman, B. Lepri, and A. Pentland. Predicting spending behavior using socio-mobile features. In Social Computing (SocialCom), 2013 International Conference on, pages 174\u2013179. IEEE, 2013.<\/p>\n<p> 75. C. Smith-Clarke, A. Mashhadi, and L. Capra. Poverty on the cheap: Estimating poverty maps using aggregated mobile communication networks. In Proceedings of the 32nd ACM Conference on Human Factors in Computing Systems (CHI 2014), 2014.<\/p>\n<p> 76. V. Soto, V. Frias-Martinez, J. Virseda, and E. Frias-Martinez. Prediction of socioeconomic levels using cell phone records. In Proceedings of the International Conference on UMAP, pages 377\u2013388, 2011.<\/p>\n<p> 77. J. Staiano, N. Oliver, B. Lepri, R. de Oliveira, M. Caraviello, and N. Sebe. 
Money walks: A human-centric study on the economics of personal mobile data. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pages 583\u2013594. ACM, 2014.<\/p>\n<p> 78. J. Staiano, G. Zyskind, B. Lepri, N. Oliver, and A. Pentland. The rise of decentralized personal data markets. In D. Shrier and A. Pentland, editors, Trust::Data: A New Framework for Identity and Data Sharing. CreateSpace Independent Publishing Platform, 2016.<\/p>\n<p> 79. L. Sweeney. Discrimination in online ad delivery. Available at SSRN: http:\/\/ssrn.com\/abstract=2208240, 2013.<\/p>\n<p> 80. N. Szabo. Formalizing and securing relationships on public networks. First Monday, 2(9), 1997.<\/p>\n<p> 81. L. Thomas. Consumer credit models: Pricing, profit, and portfolios. New York: Oxford University Press, 2009.<\/p>\n<p> 82. M. Tizzoni, P. Bajardi, A. Decuyper, G. Kon Kam King, C.M. Schneider, V. Blondel, Z. Smoreda, M.C. Gonzalez, and V. Colizza. On the use of human mobility proxies for modeling epidemics. PLoS Computational Biology, 10(7), 2014.<\/p>\n<p> 83. C. Tobler. Limits and potential of the concept of indirect discrimination. Technical report, European Network of Legal Experts in Anti-Discrimination, 2008.<\/p>\n<p> 84. J.L. Toole, N. Eagle, and J.B. Plotkin. Spatiotemporal correlations in criminal offense records. ACM Transactions on Intelligent Systems and Technology, 2(4):38:1\u201338:18, 2011.<\/p>\n<p> 85. M. Traunmueller, G. Quattrone, and L. Capra. Mining mobile phone data to investigate urban crime theories at scale. In Proceedings of the International Conference on Social Informatics, pages 396\u2013411, 2014.<\/p>\n<p> 86. Z. Tufekci. 
Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Colorado Technology Law Journal, 13:203\u2013218, 2015.<\/p>\n<p> 87. A. Tversky and D. Kahneman. Judgment under uncertainty: Heuristics and biases. Science, 185(4157):1124\u20131131, 1974.<\/p>\n<p> 88. A. Venerandi, G. Quattrone, L. Capra, D. Quercia, and D. Saez-Trumper. Measuring urban deprivation from user generated content. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work &amp; Social Computing (CSCW 2015), 2015.<\/p>\n<p> 89. M. Vescovi, C. Perentis, C. Leonardi, B. Lepri, and C. Moiso. My data store: Toward user awareness and control on personal data. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, pages 179\u2013182, 2014.<\/p>\n<p> 90. H. Wang, Z. Li, D. Kifer, and C. Graif. Crime rate inference with big data. In Proceedings of the International Conference on KDD, 2016.<\/p>\n<p> 91. T. Wang, C. Rudin, D. Wagner, and R. Sevieri. Learning to detect patterns of crime. In Machine Learning and Knowledge Discovery in Databases, pages 515\u2013530. Springer, 2013.<\/p>\n<p> 92. R. Want, T. Pering, G. Danneels, M. Kumar, M. Sundar, and J. Light. The personal server: Changing the way we think about ubiquitous computing. In Proceedings of the 4th International Conference on Ubiquitous Computing, pages 194\u2013209, 2002.<\/p>\n<p> 93. D. Weisburd. Place-based policing. Ideas in American Policing, 9:1\u201316, 2008.<\/p>\n<p> 94. A. Wesolowski, N. Eagle, A. Tatem, D. Smith, R. Noor, and C. Buckee. Quantifying the impact of human mobility on malaria. Science, 338(6104):267\u2013270, 2012.<\/p>\n<p> 95. A. Wesolowski, G. Stresman, N. Eagle, J. Stevenson, C. Owaga, E. Marube, T. Bousema, C. Drakeley, J. Cox, and C.O. Buckee. 
Quantifying travel behavior for infectious disease research: A comparison of data from surveys and mobile phones. Scientific Reports, 4, 2014.<\/p>\n<p> 96. M. Willson. Algorithms (and the) everyday. Information, Communication &amp; Society, 2016.<\/p>\n<p> 97. R. Wilson, E. zu Erbach-Schoenberg, M. Albert, D. Power, S. Tudge, M. Gonzalez, et al. Rapid and near real-time assessments of population displacement using mobile phone data following disasters: The 2015 Nepal earthquake. PLOS Currents Disasters, February 2016.<\/p>\n<p> 98. H. Zang and J. Bolot. Anonymization of location data does not work: A large-scale measurement study. In Proceedings of the 17th ACM Annual International Conference on Mobile Computing and Networking, pages 145\u2013156, 2011.<\/p>\n<p> 99. T. Zarsky. The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology, and Human Values, 41(1):118\u2013132, 2016.<\/p>\n<p> 100. T.Z. Zarsky. Automated prediction: Perception, law, and policy. Communications of the ACM, 55(9):33\u201335, 2012.<\/p>\n<p> 101. G. Zyskind, O. Nathan, and A. Pentland. Decentralizing privacy: Using blockchain to protect personal data. In Proceedings of IEEE Symposium on Security and Privacy Workshops, pages 180\u2013184, 2015.<\/p>\n<p> 
<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Tyranny of Data? 
The Bright and Dark Sides of Data-Driven Decision-Making for Social Good. May 2017. DOI:10.1007\/978-3-319-54024-5_1. In book: Transparent Data Mining for Big and Small Data (pp. 3-24). Authors: Bruno Lepri, Jacopo Staiano, David Sangokoya, Emmanuel Letouz\u00e9, and Nuria Oliver. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[10],"class_list":["post-103685","post","type-post","status-publish","format-standard","hentry","category-research-paper-writing","tag-writing"],"_links":{"self":[{"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/posts\/103685","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/comments?post=103685"}],"version-history":[{"count":0,"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/posts\/103685\/revisions"}],"wp:attachment":[{"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/media?parent=103685"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/categories?post=103685"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/papersspot.com\/blog\/wp-json\/wp\/v2\/tags?post=103685"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}