{"id":78961,"date":"2021-12-02T14:06:23","date_gmt":"2021-12-02T14:06:23","guid":{"rendered":"https:\/\/papersspot.com\/blog\/2021\/12\/02\/2-effects-of-algorithmic-bias-and-reduction-methods-1-running-head-effects\/"},"modified":"2021-12-02T14:06:23","modified_gmt":"2021-12-02T14:06:23","slug":"2-effects-of-algorithmic-bias-and-reduction-methods-1-running-head-effects","status":"publish","type":"post","link":"https:\/\/papersspot.com\/blog\/2021\/12\/02\/2-effects-of-algorithmic-bias-and-reduction-methods-1-running-head-effects\/","title":{"rendered":"2 EFFECTS OF ALGORITHMIC BIAS AND REDUCTION METHODS 1 Running Head: EFFECTS"},"content":{"rendered":"<p>2<\/p>\n<p> EFFECTS OF ALGORITHMIC BIAS AND REDUCTION METHODS<\/p>\n<p> 1<\/p>\n<p> Running Head: EFFECTS OF ALGORITHMIC BIAS AND REDUCTION METHODS<\/p>\n<p> THE EFFECTS OF ALGORITHMIC BIAS ON DAILY USERS <\/p>\n<p> AND METHODS TO MINIMIZE THE SIDE<\/p>\n<p> EFFECTS OF ALGORITHMIC BIAS<\/p>\n<p> Prepared for<\/p>\n<p> The Government of Canada<\/p>\n<p> Prepared by<\/p>\n<p> Firstname Lastname (Anonymized)<\/p>\n<p> November 18, 2019<\/p>\n<p> Executive Summary<\/p>\n<p> Modifications must be made to the Algorithmic Impact Assessment in order to minimize the impacts of algorithmic bias. Algorithmic bias can take on many forms, including but not limited to ageism, genderism, and racism. Sometimes, the presence of algorithmic bias may benefit a small group of people that is positively stereotyped by the algorithm while a vast majority is negatively impacted. Nonetheless, the use of a biased algorithm will result in an unfair decision-making system that returns biased results. Therefore, it is important for algorithms to be screened for biases before they are implemented into a decision-making system. <\/p>\n<p> The Government of Canada currently provides the Algorithmic Impact Assessment for developers to test their algorithms. However, changes should be made in order to improve the assessment process. First, a statistical analysis should be performed on the data instead of having the developer answer questions about how the data were obtained. Second, prior to the widespread use of the algorithm, a select but diverse group of people should test the algorithm for any possible biases. This is because the main source of bias in algorithms comes from data, and screening the physical data will minimize the effects of data bias. Additionally, since most algorithms incorporate machine learning, prior tests with a diverse group will determine whether the algorithm will develop biases while being used. <\/p>\n<p> The Effects of Algorithmic Bias on Daily Users and Methods to<\/p>\n<p> Minimize the Effects of Algorithmic Bias<\/p>\n<p> Introduction<\/p>\n<p> Today, as automated decision-making systems continue to replace existing practices, many are affected by the presence of bias, otherwise known as algorithmic bias. Different individuals can be positively or negatively impacted by the presence of algorithmic bias. Nonetheless, a biased algorithm is an unjust system and measures must be taken to minimize the impacts it makes on its users. However, users are not the only ones who experience the effects of algorithmic bias. Once identified, it will be negatively reflect on the developer and the authorizing association as they allowed the implementation of a biased algorithm. Thus, modifications should be made to Canada\u2019s Algorithmic Impact Assessment to better its abilities of identifying bias in algorithms. 
Background

Automated decision-making systems are algorithms that use machine learning to recognize patterns in data and make logical inferences while carrying out tasks (Monteith & Glenn, 2016). If the data used for pattern recognition are biased, however, the algorithm will make biased inferences. In a more human context, if one learned that 2 + 2 = 5, all future calculations involving 2 + 2 would be incorrect.

To minimize this effect, the Government of Canada provides the Algorithmic Impact Assessment (AIA) for governments and companies to assess their algorithms before implementing them in an automated decision-making system. It is an open-source project consisting of 60 questions about the business process, the design decisions, and the collected data used for the algorithm (Government of Canada, 2019).

Summary of the Problem

The purpose of this study is to identify the major cause of algorithmic bias and its impacts, as well as methods that can help reduce algorithmic bias and improve the AIA.

Study of the Problem

Methods

All scholarly articles cited in the findings were found in the Waterloo library database. The credentials of the authors, and of those who peer-reviewed each article, were evaluated. The news articles used as examples of algorithmic bias are recent and come from well-known newspapers, and other sources were used to cross-reference them to ensure that their reporting was accurate and unbiased.

Summary of Results

Algorithmic bias describes the case in which a software system makes an unjust decision that impacts individuals in different ways depending on the context in which the algorithm is used.

To begin, Monteith and Glenn (2016) addressed how algorithmic bias is driven by data bias, focusing on data collected through a person's digital footprint. The article made two key points: the data may not be representative, and the data may be wrong or misleading. First, digital data may not be representative because they are collected through a person's digital activities. Although the majority of the population has access to digital technology, not everyone can access it regularly; those who lack regular access, including people living in poverty or with cognitive disabilities, will have few entries in the data. The data will be skewed for this group of people, and the algorithm can become biased against them. Second, digital data may simply be wrong. For instance, an estimated 23 million Americans have incorrect information in their credit reports that affects their credit scores (Monteith & Glenn, 2016). This penalizes their access to credit, as they are misrepresented by their own data.
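As a minimal illustration of the mechanism described above, the following sketch (all group names and numbers are invented for illustration) shows how a naive scorer that estimates hiring probability from historical decisions simply reproduces whatever skew those decisions contain, much like the 2 + 2 = 5 analogy:

```python
# Toy illustration (hypothetical data): a naive "learn from history" scorer
# reproduces the bias present in its training data.

from collections import defaultdict

# Historical hiring decisions, skewed in favour of group "A".
# Each record: (group, hired). Qualifications are identical by design,
# so any difference the model learns comes purely from the biased labels.
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 30 + [("B", False)] * 70
)

def train(records):
    """Estimate P(hired | group) from historical decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

model = train(history)
print(model)  # {'A': 0.8, 'B': 0.3} -- the model has "learned" the bias

# An equally qualified applicant from group B is now scored far lower than
# one from group A, even though group was never a valid hiring criterion.
```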
Additionally, Limaye (2018) discussed the impact of algorithmic bias on HR practices. Many companies currently face criticism for lacking diversity in their workforce: data collected from tech companies such as Google, Facebook, and Twitter showed that women held only 30% of leadership roles and 27% of technical roles. In an effort to minimize the effects of human bias, companies have therefore turned to computer systems to make hiring decisions. However, even computer algorithms are not exempt from bias, mainly because the data on which they operate are collected and entered by humans. The article examined an experiment conducted by the National Bureau of Economic Research showing that recruiters often chose White-sounding names over Black-sounding names; if an applicant-screening algorithm were trained on data reflecting the same mentality, it would produce the same results (Limaye, 2018).

A concrete example of algorithmic bias in HR practices is Amazon.com Inc's recruiting algorithm. The algorithm incorporated machine learning by observing patterns in resumes submitted to the company over a 10-year period. Most of these applications came from men, however, and the algorithm taught itself that male candidates were preferable to female candidates. This is the result of data bias: resumes that contained the word "women's" were penalized and downgraded (Dastin, 2018). Amazon.com Inc's recruiting algorithm was thus unethical towards women, devaluing them in a form of genderism.

Similarly, Apple's recent Apple Card incident is another example of genderism in algorithms. It was reported that the Apple Card algorithm was biased against women, giving men higher credit limits regardless of credit scores (Telford, 2019). The situation was brought to the attention of Linda Lacewell, superintendent of New York's State Department of Financial Services, whose office will investigate the algorithm over claims of discrimination. Even if women were statistically greater credit risks than men, it is illegal to disregard the factual data (credit scores) and give men better credit terms (Telford, 2019). Algorithmic bias can therefore have consequences at both the individual and the legal level.

Furthermore, Obermeyer et al. (2019) studied the effects of algorithmic bias in health systems in the United States. Working with the underlying data, the researchers identified racial bias in an algorithm used to predict high-risk patients. A total of 49,618 patients were selected for the experiment, and the algorithm generated a risk score for each based on their medical conditions. It was found that, at the same risk score, Black patients were considerably sicker than White patients. This discrepancy occurred because the algorithm also took health-care costs into consideration: due to unequal access to health care in the United States, less money is spent on Black patients, and the algorithm accordingly lowered their risk scores. Once this discrepancy was corrected, the percentage of Black patients receiving additional help on the basis of their risk scores rose from 17.7% to 46.5% (Obermeyer et al., 2019). Hence, as in all the previous examples, eliminating data bias is vital for algorithms to produce ethical results.
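The kind of audit Obermeyer et al. performed can be approximated in a few lines: group patients into risk-score bands and compare a direct measure of sickness across races within each band. The sketch below uses invented records and field meanings (a hypothetical count of chronic conditions as the sickness measure); it illustrates the shape of the check, not the published analysis.

```python
# Minimal audit sketch (hypothetical records): if two groups at the SAME
# risk score differ in actual sickness, the score is a biased proxy.

from collections import defaultdict
from statistics import mean

# Each record: (group, risk_score in [0, 1], n_chronic_conditions).
patients = [
    ("White", 0.42, 2), ("White", 0.45, 2), ("White", 0.81, 5),
    ("Black", 0.44, 4), ("Black", 0.41, 3), ("Black", 0.79, 7),
    # ... in practice, tens of thousands of records
]

def mean_sickness_by_band(records, n_bands=10):
    """Mean chronic-condition count per (score band, group)."""
    bands = defaultdict(list)  # (band, group) -> [condition counts]
    for group, score, conditions in records:
        band = min(int(score * n_bands), n_bands - 1)
        bands[(band, group)].append(conditions)
    return {key: mean(vals) for key, vals in sorted(bands.items())}

for (band, group), sickness in mean_sickness_by_band(patients).items():
    print(f"score band {band}, {group}: avg {sickness:.1f} conditions")
# If Black patients are consistently sicker than White patients within the
# same score band, the score understates their need -- Obermeyer's finding.
```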
Policy Recommendations

It is evident that the major cause of algorithmic bias is data bias, and two adjustments should be made to the AIA to increase the probability of eliminating it. For the benefit of Canadian citizens and Canadian corporations, the Government of Canada should:

1. Include a statistical analysis in the AIA process to ensure there is no bias in the collected data. Enforcing this will help ensure that the collected data are distributed representatively across groups and do not favor any particular group of individuals. This would address data bias of the kind seen in the health-care example, since unfair factors can be identified and removed from the data pool.

2. Test the algorithm with a select but diverse group of individuals prior to its implementation. This adjustment targets the machine-learning aspect of the algorithm: outcomes would be observed during the testing period to determine whether any bias develops while the algorithm is in use. It would help in cases like Amazon.com Inc's recruiting algorithm, which taught itself to favor male candidates over female candidates.

A rough sketch of what both checks could look like in practice follows at the end of this section. These two policy recommendations should be enforced by the Government of Canada in the interest of Canadian citizens and Canadian corporations. The Canadian Human Rights Act protects citizens' rights to equality and non-discrimination, and the actions above would extend that protection to cases of algorithmic bias.
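The sketch below is illustrative only: the group names, thresholds, and numbers are assumptions, not prescribed AIA procedure. It implements (1) a simple representation check comparing each group's share of the data against its population share, and (2) a selection-rate comparison for a pilot group, in the spirit of the "four-fifths rule" used in U.S. employment testing.

```python
# Hedged sketch of the two proposed AIA checks (illustrative only).

# --- Check 1: is each group represented in the data as in the population?
def representation_gaps(data_shares, population_shares):
    """Return each group's over/under-representation in the data."""
    return {g: round(data_shares.get(g, 0.0) - share, 3)
            for g, share in population_shares.items()}

# --- Check 2: during the pilot, does the algorithm select groups at
# comparable rates? (Four-fifths rule: flag any group whose selection
# rate falls below 80% of the highest group's rate.)
def flag_disparate_impact(outcomes, threshold=0.8):
    """outcomes: {group: (n_selected, n_tested)} -> groups falling short."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Example with invented numbers:
print(representation_gaps({"men": 0.7, "women": 0.3},
                          {"men": 0.5, "women": 0.5}))
# {'men': 0.2, 'women': -0.2} -- women are under-represented in the data

print(flag_disparate_impact({"men": (40, 100), "women": (22, 100)}))
# ['women'] -- 22% is below 80% of 40% (= 32%), so the pilot is flagged
```

Either result would prompt the developer to correct the data or the model before the AIA process allows deployment.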
References

Dastin, J. (2018, October 10). Amazon ditches AI recruiting tool that didn't like women. Global News.

Government of Canada. (2019, March 29). Algorithmic Impact Assessment. Open Government.

Limaye, M. (2018). The impact of algorithm bias on HR practices. HR Strategy and Planning Excellence Essentials; Aurora.

Monteith, S., & Glenn, T. (2016). Automated decision-making and big data: Concerns for people with mental illness. Current Psychiatry Reports, 18(12).

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.

Telford, T. (2019, November 11). Apple Card algorithm sparks gender bias allegations against Goldman Sachs. The Washington Post.