Mark Zuckerberg recently went on a media tour promoting Meta's plan to transform its Meta AI chatbots into friends, under the guise of addressing the very real loneliness epidemic... What could go wrong? A whole lot! As we have seen with social media's negative impact on individuals, especially kids, leaving relationship engineering to the Mark Zuckerbergs of the world seems like a terrible idea.
The article presents legitimate worries about the commercialization of human emotional needs and the potential for AI relationships to create unrealistic expectations for human interactions, but it fails to acknowledge the possible benefits of AI companionship for specific use cases like therapy, social skill development, or support for isolated individuals. Carol Roth's argument would be stronger if it engaged with the actual proposals and research rather than relying on extreme fictional scenarios and assuming worst-case outcomes, while also considering how AI companionship might complement rather than replace human relationships.
1. slippery slope • The author assumes a chain of events leading from AI friendships to social withdrawal without sufficient evidence of causation.
Creating the illusion of a long-term perfect friendship or romantic relationship sets an impossible bar for human connections to be measured against. It's one that can lead people into withdrawing from society and real connections.
Before one can conclude that the predicted social withdrawal is inevitable, one would need to consider multiple ways in which it could conceivably be avoided, such as:
1. Contrast Effect
- AI interactions might highlight what's missing, making people appreciate genuine human connections more
- Users might recognize the limitations of AI relationships and actively seek authentic human interactions
- The "perfectness" might feel artificial and unsatisfying, driving people toward more genuine relationships
2. Complementary Usage
- People might use AI relationships as practice for human relationships
- AI interactions could build social confidence rather than replace human interaction
- Users might develop better relationship skills through AI feedback and apply them to human relationships
3. Reality Awareness
- Users might maintain clear awareness of the artificial nature of AI relationships
- People could compartmentalize AI interactions as a different category from human relationships
- The uncanny valley effect might prevent deep emotional attachment to AI
Because the author skips past these possibilities without a mention, the argument amounts to a slippery slope fallacy.
2. false dilemma • Roth presents only two extreme options — either completely natural human interaction or artificial technological relationships — ignoring potential middle ground.
People should be encouraged to get off their phones and touch grass, meet other people and enjoy the world that the Lord created, not the fake world that technology has created
Similar to the slippery slope above, Roth fails to consider a range of alternative outcomes, such as:
1. Balanced Integration
- Using technology moderately while maintaining in-person relationships
- Using AI assistants for specific tasks while reserving deep connections for humans
- Having both online and offline friendships in healthy proportions
2. Therapeutic Applications
- Using AI companions for practicing social skills before real-world interactions
- Employing AI as a supplement to human therapy
- Using AI to help people with social anxiety transition to in-person relationships
3. Specialized Situations
- AI companionship for isolated individuals (remote locations, mobility issues)
- Temporary AI support during transitions or periods of isolation
- Supplemental emotional support when human support is unavailable
4. Learning and Development
- Using AI to learn better communication skills
- Practicing emotional intelligence through AI interactions
- Building confidence in social situations through graduated exposure
Each of these blended alternatives would have to be ruled out before one could accept Roth's assertion that we must choose between just two worlds: the world of touching real green grass vs. the "fake technology world."
Note that the presence of one or more apparent fallacies in the arguments presented in this article does not mean that every argument the arguer made was fallacious, nor does it mean there are no other logically valid arguments for the same or a similar position. Also note that checking for fallacies is not the same as verifying the premises the arguer starts from, such as facts the arguer asserts or principles the arguer assumes as the foundation for constructing arguments. For more about this, see our 'What is Fallacy Checking?'