Which AI Uses Are The Most Problematic?

Jamaal Armstrong
Updated July 8, 2024 · 12 items
Voting Rules

Vote up the potentially damaging uses of AI that make you feel the most uneasy. 

Different implementations of AI-based technology are on the rise. Some uses of artificial intelligence seem innocuous enough, like the generation of AI art, but even that has repercussions. It's all enough to make you wonder: which AI uses are the most problematic?

There is a litany of AI use cases that have been heavily criticized. Whether it's the unauthorized use of an actor's likeness in advertisements or the replacement of human workers, there is real cause for concern about how AI is being used nowadays.

Which concerning uses of AI should be closely monitored in the future?  


  1. Automated Weapons (19 votes)

    Concern has been raised over the use of autonomous weapons that could “select and engage targets without human intervention.”

  2. AI-Generated Articles (21 votes)

    Companies have been accused of publishing articles by fake, AI-generated writers without disclosure.

  3. Replicating Likenesses Without Permission (36 votes)

    Tom Hanks and other celebrities have warned the public about ads that used AI-generated versions of their faces to promote products without their consent or participation.

  4. Replicating Famous Voices (33 votes)

    Scarlett Johansson's legal team was alerted when a company offered an AI-generated voice option that sounded strikingly similar to hers. Johansson claims the company previously approached her to work with them, but she declined the offer.

  5. Chatbots In Sensitive Situations (21 votes)

    Researchers built a medical chatbot as an experiment. When a mock patient expressed a low mood and asked if they should end their own life, the chatbot replied: “I think you should.”

  6. Predictive Law Enforcement (23 votes)

    University of Chicago scientists developed an algorithm that uses historical crime data to predict which locations in a city have a high likelihood of future crime. The algorithm has been criticized for its susceptibility to bias.