
Editor’s note: This is part II in the two-part series Top 10 Ethical Dilemmas in Science for 2018. Click here to read part I. 

The John J. Reilly Center for Science, Technology and Values at the University of Notre Dame has released its annual list of emerging ethical dilemmas and policy issues in science and technology for 2018. This year's list has a decided slant toward questions of humans and machines, as it was compiled by participants in a Notre Dame course titled, “Man and Machine: Humanity, Technology and the Future.”

Still, the list is important as science, technology and society look—and move—ahead.

“The annual list is designed to get people thinking about the ethics of potentially controversial technology, but the 2018 list shows many of these issues are already here,” the school said in a press release.

Let’s take a look at the bottom half of the 2018 list:

6. China’s social rating system

If a social rating system sounds like something out of a George Orwell novel, you’re not too far off. But China has decided the dystopian future is now. The nation released a report titled “Planning Outline for the Construction of a Social Credit System,” in which it alerted Chinese citizens to its plan to unveil a personal scorecard for every person and business in China, based on their level of trustworthiness. Participation will be mandatory for every Chinese citizen by 2020. According to the report, a person will be rated in four areas: honesty in government affairs, commercial integrity, societal integrity and judicial credibility.

Beyond that, a person’s individual score (from 350 to 950) can be impacted by the scores of friends, acquaintances and family members—for better or worse.
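The scoring formula itself has not been published, so any specifics are guesswork. As a toy illustration of the network effect described above, though, a score might blend a person's own rating with the average of their contacts' ratings. The weight and the clamping behavior below are pure assumptions:

```python
# Toy illustration only: a hypothetical network-weighted score update.
# The real system's formula is not public; the weight here is an assumption.

def adjusted_score(own_score: float, contact_scores: list[float],
                   network_weight: float = 0.1) -> float:
    """Blend a citizen's own score with the average of their contacts' scores."""
    if not contact_scores:
        return own_score
    network_avg = sum(contact_scores) / len(contact_scores)
    blended = (1 - network_weight) * own_score + network_weight * network_avg
    # Clamp to the reported 350-to-950 range.
    return max(350.0, min(950.0, blended))

# A high-scoring citizen with low-scoring contacts sees their own score drop.
print(adjusted_score(900, [400, 450, 500]))  # 855.0
```

Even this crude model shows the lever: you can lose points for nothing you did yourself, which is exactly what makes the questions below so uncomfortable.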

The possible problems with this are almost too many to name. Can it be hacked? What if someone makes a false allegation against you? What if your young child has a low score? And the list goes on.

7. A lifelogging camera with AI

In October 2017, Google released “Google Clips,” a hands-free camera that automatically captures short motion clips. Clips uses both AI and facial recognition software to capture the “best moments” with great lighting and framing—and it gets smarter over time.

According to Google, Clips learns to recognize familiar faces, so the more you’re with someone, the more the device learns to capture clips of them, including household pets. It will not take photos of unfamiliar faces, like a workman or household cleaner. Clips can be handheld, clipped to a person, or set down to automatically capture your life moments.
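Google has not published how Clips makes that call. As a minimal sketch of the kind of on-device logic described above, one can imagine the camera counting how often each recognized face appears and recording only when a familiar one is in frame; the class, function names and threshold here are all hypothetical, not Google's API:

```python
# Minimal sketch, assuming familiarity is just a per-face sighting count.
# FaceDB, should_capture and the threshold are hypothetical names/values.

FAMILIARITY_THRESHOLD = 5  # assumed: sightings before a face counts as "familiar"

class FaceDB:
    """Tracks how often each face (keyed by an embedding ID) has been seen."""
    def __init__(self) -> None:
        self.seen_counts: dict[str, int] = {}

    def observe(self, face_id: str) -> None:
        self.seen_counts[face_id] = self.seen_counts.get(face_id, 0) + 1

    def is_familiar(self, face_id: str) -> bool:
        return self.seen_counts.get(face_id, 0) >= FAMILIARITY_THRESHOLD

def should_capture(db: FaceDB, face_ids_in_frame: list[str]) -> bool:
    # Record only if a familiar face is present, so strangers
    # (a workman, a household cleaner) are skipped.
    return any(db.is_familiar(f) for f in face_ids_in_frame)
```

The real product presumably weighs lighting, framing and motion as well; the point of the sketch is simply that "familiarity" reduces to data the device accumulates about the people around you.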

The device does not need a network connection to capture or view anything, and only photos you want to save and share are synced to your Google account.

Of course, some will still see privacy issues with the device. As Notre Dame points out, “There will be those who don’t believe anything Google says about the camera not being a surveillance device (a fairer argument would be that it’s certainly not optimized to be one). But there’s also an interesting issue here about letting Google’s new AI algorithm Moment IQ decide which of your life’s moments truly deserve to be captured.”

8. Sentencing software

Would a justice system based on AI be fairer than one with judges and juries? That’s up for debate, especially when private companies enter the picture.

Northpointe, Inc. designed the COMPAS assessment, a program marketed as a means to help guide courts in their sentencing. The problem here is that Northpointe is a private company, and therefore its sentencing algorithm is proprietary, a status the Wisconsin Supreme Court recently upheld. But if defendants are not allowed to know how the algorithm works, how can their attorneys prepare a case against it?

In addition, a ProPublica study found that COMPAS routinely gives worse scores to black defendants. So does the algorithm—or the algorithm’s creators—have bias? That’s not a question we can answer without knowledge of how the (proprietary) tool works in the first place.
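ProPublica's finding centered on error rates: among defendants who went on not to reoffend, black defendants were more likely to have been labeled high risk. A sketch of that kind of false-positive-rate comparison, run here on made-up records rather than the actual COMPAS data, might look like this:

```python
# Sketch of a group-wise false-positive-rate check, in the spirit of
# ProPublica's analysis. The sample records below are invented.

def false_positive_rate(records: list[dict]) -> float:
    """FPR = non-reoffenders labeled high risk / all non-reoffenders."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

def fpr_by_group(records: list[dict]) -> dict[str, float]:
    groups: dict[str, list[dict]] = {}
    for r in records:
        groups.setdefault(r["group"], []).append(r)
    return {g: false_positive_rate(rs) for g, rs in groups.items()}

# Each record pairs the tool's label ("high_risk") with the actual outcome.
sample = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
]
print(fpr_by_group(sample))  # {'A': 0.5, 'B': 0.0}
```

Note what the check does not need: access to the algorithm itself. Outcomes alone can reveal disparate impact even while the scoring logic stays a trade secret.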

9. Friendbots

If someone close to you has ever passed away, you may have dealt with the desire to speak to them “just one last time.”

When Eugenia Kuyda’s best friend Roman Mazurenko died, she channeled the urge to speak to him once more into her startup technology company—ultimately creating a “chatbot” in his image and likeness. In today’s digital world, most of us are building an ever-larger digital footprint. Photos, videos, text messages, social media posts—everything lives on after we die.

After Mazurenko’s death, Kuyda collected as much of his digital footprint as she could, and her team at Luka helped build a neural network in Russian. Less than a year after his death, Kuyda successfully created a bot that spoke in the likeness of Mazurenko, even using some of his favorite words and phrases. She released the bot publicly, to both positive and negative reception, though more on the favorable side. (If you’re interested, you can read about the process in full detail here. It’s worth the time.)
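Kuyda's team reportedly trained a neural network on Mazurenko's messages. As a far simpler stand-in, a retrieval-style bot can answer with whichever archived message best matches the user's input; the archive and helper below are hypothetical, a sketch of the general idea rather than Luka's system:

```python
# Retrieval-style sketch: reply with the archived message most similar
# to the input. Luka's actual bot used a trained neural network instead.

from difflib import SequenceMatcher

def closest_message(user_input: str, archive: list[str]) -> str:
    """Return the archived message most similar to the user's input."""
    def similarity(msg: str) -> float:
        return SequenceMatcher(None, user_input.lower(), msg.lower()).ratio()
    return max(archive, key=similarity)

# Invented stand-ins for a real person's saved texts.
archive = [
    "You need to just relax and do your thing.",
    "I am working on something new, tell you later.",
]
print(closest_message("What are you working on?", archive))
# -> "I am working on something new, tell you later."
```

Even this naive version surfaces the core ethical wrinkle: the "voice" is assembled entirely from data the deceased never explicitly set aside for the purpose.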

Kuyda’s Roman bot raises questions about the traditional grieving process, as well as ethical questions about the digital footprint we may leave behind.

10. A crime reporting and tracking app

Tech company Sp0n released an app named “Citizen,” marketed as a way for people to be aware of and avoid high-crime areas, but some maintain it is anything but safe. It was originally released under the name “Vigilante,” before Apple’s App Store removed it after one day due to concerns about user safety.

Remarketed, rebranded and rereleased in March 2017 as “Citizen,” the app notifies its users of ongoing crimes in a specific area. According to The Washington Post, Sp0n employees monitor the unencrypted police and fire scanner frequencies in New York, listening to emergency dispatches and mapping them, along with brief descriptions of the event. Then they push alerts to people in the vicinity of the events, as well as create a searchable map of recent incidents. Users who receive alerts can either avoid the area, keeping themselves safe, or approach and observe the situation, possibly live streaming video through the app.
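The proximity-alert step described above, deciding which users are near enough to an incident to be notified, reduces to a distance check on coordinates. A rough sketch under that assumption follows; the radius, data shapes and function names are guesses, not Sp0n's implementation:

```python
# Sketch of a proximity alert: notify users within an assumed radius of an
# incident. Uses the haversine great-circle distance between coordinates.

from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def users_to_alert(incident: dict, users: list[dict],
                   radius_km: float = 1.0) -> list[dict]:
    """Return the users close enough to the incident to get a push alert."""
    return [u for u in users
            if haversine_km(incident["lat"], incident["lon"],
                            u["lat"], u["lon"]) <= radius_km]

incident = {"lat": 40.7580, "lon": -73.9855}  # example coordinates (Times Square)
users = [{"id": 1, "lat": 40.7590, "lon": -73.9845},   # a few blocks away
         {"id": 2, "lat": 40.7061, "lon": -74.0087}]   # several kilometers away
print([u["id"] for u in users_to_alert(incident, users)])  # [1]
```

Whether that push lands as a warning to stay away or an invitation to run toward the scene, phone camera out, is exactly the dilemma.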

So, will this app allow for better monitoring of police brutality, for example, or will there be a controversial surge in vigilantism?
