Ethics paper



INSTRUCTIONS

1. The ethics paper must be typed and submitted on Canvas for evaluation. This written assignment should be a four-page paper, double-spaced, in a readable 11-12 point font, excluding the reference section. Please do not exceed four (4) typed pages.

2. The ethics paper should be typed with one (1) inch margins on the sides, top, and bottom. It should contain a heading that includes the title of the assignment, the date, and the instructor’s name (no cover sheet required). It should include in-text citations pursuant to APA formatting style. The paper should include a reference section at the end of the document (endnotes) with a proper citation for each source. For an APA guide and APA citation examples, please visit https://owl.purdue.edu/owl/research_and_citation/apa_style/apa_style_introduction.html. You can also watch https://youtu.be/Q7TBmi8-9G0 on how to write in APA style.

3. This paper requires critical thinking and research. In your writing, avoid informal language.

4. You may use any materials you want. When you use the textbook, articles, reports, supplementary materials, or class handouts, you must provide an ACCURATE citation each time you use such materials in your writing. A link to APA citations is provided in your syllabus as well as under item 2 above.

5. Read the facts in Part A and the questions carefully, and outline your answers prior to writing. Organization is key.

6. There is no single correct answer. If your arguments are persuasive and compelling, and your essay is well outlined, well organized, substantive, and properly formatted, you will be eligible to earn full points no matter what your chosen answer to the question is.


Part A: Facts

While new technologies can break down barriers and empower people by combating discrimination and promoting dignity, there are alarming concerns about the discriminatory outcomes these machines themselves can create.[footnoteRef:1] An increasing body of research suggests that algorithms and artificial intelligence are not necessarily a panacea for ending prejudice, and they can have disproportionate impacts on groups that are already socially disadvantaged.[footnoteRef:2] [1: White Paper, “How to Prevent Discriminatory Outcomes in Machine Learning,” World Economic Forum Global Future Council on Human Rights 2016-2018, March 2018. ] [2: Laura Hudson, “Technology Is Biased Too. How Do We Fix It?,” July 20, 2017, https://fivethirtyeight.com/features/technology-is-biased-too-how-do-we-fix-it/. ]

As artificial intelligence (AI) algorithms learn patterns in the data, they also absorb the biases in it. For example, Google showed more ads for lower-paying jobs to women than to men, Amazon’s same-day delivery bypassed black neighborhoods, and the software on several types of digital cameras struggled to recognize the faces of non-white users. In one of the most striking examples, an algorithm called COMPAS, used by law enforcement agencies across multiple states to assess a defendant’s risk of reoffending, was found to falsely flag black individuals almost twice as often as whites, according to a ProPublica investigation.[footnoteRef:3] There are similar concerns about algorithmic bias in facial-recognition technology, which already has a far broader impact than most people realize: over 117 million American adults have had their images entered into a law-enforcement agency’s face-recognition database, often without their consent or knowledge, and the technology remains largely unregulated.[footnoteRef:4] Another example of data bias emerged in Nikon’s camera software, which misread images of Asian people as blinking, and in Hewlett-Packard’s web camera software, which had difficulty recognizing people with dark skin tones.[footnoteRef:5] These challenges are related either to the data itself or to the way algorithms are designed, developed, and deployed.[footnoteRef:6] Machine learning systems lack transparency and auditability and are almost entirely developed by small, homogeneous teams, most often of men.[footnoteRef:7] [3: Bahar Gholipour, “We Need to Open the AI Black Box Before It’s Too Late. If We Don’t, the Biases of Our Past Could Dictate Our Future,” January 18, 2018, https://futurism.com/ai-bias-black-box/. ] [4: Laura Hudson, supra note 2. ] [5: Id.] [6: “How to Prevent Discriminatory Outcomes in Machine Learning,” supra note 1, at 7. For example, race is not included in U.S. data sets on credit applicants (the data itself is not biased), but the use of proxy indicators such as zip codes can still result in racial minorities’ access to credit being unfairly limited (the way algorithms are designed and deployed). ] [7: Kate Crawford, “Artificial Intelligence’s White Guy Problem,” June 25, 2016, https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html. These challenges have led the public to understand such systems as ‘black boxes.’ The ‘black box’ problem refers to the situation where people do not know how an AI arrives at its decisions, and therefore they will not trust it. See Jason Bloomberg, “Don’t Trust Artificial Intelligence? Time To Open The AI ‘Black Box’,” September 16, 2018, https://www.forbes.com/sites/jasonbloomberg/2018/09/16/dont-trust-artificial-intelligence-time-to-open-the-ai-black-box/#78ecf3473b4a. ]

On August 13, 2018, the Assistant Secretary for Fair Housing and Equal Opportunity (“Assistant Secretary”) filed a complaint with the Department of Housing and Urban Development (“HUD”) alleging that Facebook Inc. violated the Fair Housing Act by discriminating because of race, color, religion, sex, familial status, national origin and disability. Subsequently, HUD issued a Charge of Discrimination (“Charge”) on behalf of aggrieved persons following an investigation and a determination that reasonable cause exists to believe that a discriminatory housing practice has occurred.

Facebook is the second-largest online advertiser in the United States and is responsible for approximately twenty percent of all online advertising nationwide. Facebook collects millions of data points about its users, draws inferences about each user based on this data, and then charges advertisers for the ability to micro-target ads to users based on Facebook’s inferences about them. These ads are then shown to users across the web and in mobile applications. As Facebook explains, its advertising platform enables advertisers to “[r]each people based on zip code, age and gender, specific languages, the interests they’ve shared, their activities, the Pages they’ve liked, their purchase behaviors or intents, device usage and more.” Thus, Facebook “uses location-related information, such as your current location, where you live, the places you like to go, and the businesses and people you’re near, to provide, personalize and improve its Products, including ads, for you and others.”

Facebook holds out its advertising platform as a powerful resource for advertisers in many industries, including housing and housing-related services. Such ads include ads for mortgages from large national lenders, ads for rental housing from large real estate listing services, and ads for specific houses for sale from real estate agents.

Facebook has provided a toggle button that enables advertisers to exclude men or women from seeing an ad, a search-box to exclude people who do not speak a specific language from seeing an ad, and a map tool to exclude people who live in a specified area from seeing an ad by drawing a red line around that area. Facebook also provides drop-down menus and search boxes to exclude or include people who share specified attributes. Facebook has offered advertisers hundreds of thousands of attributes from which to choose, for example to exclude “women in the workforce,” “moms of grade school kids,” “foreigners,” “Puerto Rico Islanders,” or people interested in “parenting,” “accessibility,” “service animal,” “Hijab Fashion,” or “Hispanic Culture.” Facebook also has offered advertisers the ability to limit the audience of an ad by selecting to include only those classified as, for example, “Christian” or “Childfree.”

Facebook alone, not the advertiser, determines which users will constitute the “actual audience” for each ad. Facebook structured its ad delivery system such that it generally will not deliver an ad to users whom the system determines are unlikely to engage with the ad, even if the advertiser explicitly wants to reach those users regardless. Facebook uses machine learning and other prediction techniques to classify and group users to project each user’s likely response to a given ad. In doing so, Facebook inevitably recreates groupings defined by their protected class. For example, the top Facebook pages users “like” vary sharply by their protected class, according to Facebook’s “Audience Insights” tool. Therefore, by grouping users who “like” similar pages (unrelated to housing) and presuming a shared interest or disinterest in housing-related advertisements, Facebook’s mechanisms function just like an advertiser who intentionally targets or excludes users based on their protected class.


Part B – Questions

Based on the facts provided above, write an essay answering the questions below:

1) Should Facebook be held ethically liable for its biased housing advertisements? Why or why not?

2) Identify procedures that would help Facebook prevent data bias, ensure data quality, and improve its ethical responsibility toward its users. What safeguards should Facebook have in place to maintain the ethical integrity of its AI-driven business model?

3) Do you think Facebook should be held strictly liable? That is, must Facebook and all of its agents, employees, successors, and all other persons in active concert or participation with it who have participated in discriminating because of race, color, religion, sex, familial status, national origin, or disability in any aspect of the sale, rental, use, marketing, or advertising of dwellings and related services be held liable? Or should Facebook be held liable only under a theory of negligence?

End of the Ethics Paper Assignment
