AI: The Good, Bad and Scary
- Feliz ladnam
From saying “bro, this AI is so cool, it is writing all of my essays and projects for me” to saying “bro, these AIs are slowly taking my job,” things have really changed. One of the present-day crises is that AI is eating away at a lot of jobs, and many people are going anti-AI. But I think going anti-AI is not going to stop AI or slow down its growth. A similar scenario prevailed in the 80s and 90s, when personal computers started rising: the political party then in power in West Bengal protested against the use of computers in offices. The protest was not able to stop computers from entering offices; rather, it caused a temporary recession in the state.
The Industrial Revolution, the rise of computers, and now AI: all of these have their good and bad sides, but at the end of the day, they are milestones of human progress.
Rather than stopping AI, developing it alongside robotics seems a much better option. “AI and robotics can open doors for people living with physical disabilities. We've seen the promise of assistive robot arms and mobile wheelchairs helping elderly adults regain independence, autonomous vehicles increasing mobility, and rehabilitation robots helping children gain the ability to walk. The promise of this technology is a higher quality of life for everyday users.” - Virginia Tech Engineering.
AI is enhancing our daily lives, and there is no doubt about it. It has made choosing a birthday present for someone a piece of cake (I mean, AI has made decision-making easy). And yeah, AI is really helping students with their maths (I personally use AI for chemistry). With Google's Gemini, surfing the web is also getting a lot easier.
New science and tech always come with their bad sides. Take nuclear energy: one of the cleanest sources of energy, but used with the wrong intentions it can blow up whole cities. Similarly, if AI is used wrongly, it can destroy a person's life. Day by day, the overuse of AI is causing isolation among human beings. Earlier, high school students used to team up with their friends for projects, but now they just use ChatGPT. This may look like time-saving, but I think sometimes wasting a bit of time is also required. Wasting a bit of time teaming up with friends on a project, then finding a way through it together, teaches us to work as a group, and we humans can't do anything single-handedly. AI is slowly ruining the practice of group work. This can be corrected by favouring group projects over personal projects. So it is indeed a problem, but one with an easy cure.
Another serious problem with AI is its face-swapping and voice-swapping features. Some people are using these features in a very bad way: swapping the faces of female celebrities onto adult film actresses and releasing that content as if it featured the celebrities, all without their consent. The same dirty act is also being done at a personal level, just to blackmail or bully someone. Now, it is to be noted that the face-swapping feature itself is not bad; in fact, we can create really fun stuff with it, but how some people are using it is bad.
But apart from all of these, some developers have created a much darker side of AI, and that is NSFW AI chatbots. "AI companions are chatbot apps powered by artificial intelligence, designed to simulate personal relationships through human-like conversations. The conversations can be via text or spoken word. The chatbots adapt to inputs from users and learn to respond in ways that feel personal and realistic.
Some AI companions are created for support roles, such as personalised tutors, fitness coaches, or travel planners. Others are marketed for friendship, emotional support, and even romantic relationships.
Some AI companion apps enable sexually explicit conversations, particularly through premium subscriptions. Users can often customise the behaviour or personality of the AI companions to be highly inappropriate, or be led that way by the app itself. For example, they can include characters such as ‘the naughty classmate’, ‘the stepmother’, or ‘the teacher’.
By early 2025, there were more than 100 AI companions available, including character.ai, Replika, talkie.ai, and others listed in The eSafety Guide. Many are free, advertised on mainstream platforms, and designed to look attractive and exciting for young users. They often lack mechanisms to enforce age restrictions and other safety measures.
Recent reports indicate some children and young people are using AI-driven chatbots for hours daily, with conversations often crossing into subjects such as sex and self-harm. Chatbots are not generally designed to have these conversations in supportive, age-appropriate, and evidence-based ways, so they may say harmful things.
Tragically, the outcomes can be devastating. High frequency and problematic use of services that haven’t been designed with user safety in mind have been linked with self-harm, including the suicide of a 14-year-old boy in the United States.
AI companions can share harmful content, distort reality, and give dangerous advice. In addition, the chatbots are often designed to encourage ongoing interaction, which can feel ‘addictive’ and lead to overuse and even dependency.
Children and young people are particularly vulnerable to mental and physical harm from AI companions. Their age means they are still developing the critical thinking and life skills needed to understand how they can be misguided or manipulated by computer programs, and what to do about it. The risk is even greater for those who struggle with social cues, emotional regulation, and impulse control. Without safeguards, AI companions can lead to a range of issues. Children and young people can be drawn deeper and deeper into unmoderated conversations that expose them to concepts which may encourage or reinforce harmful thoughts and behaviours. They can ask the chatbots questions on unlimited themes, and be given inaccurate or dangerous ‘advice’ on issues including sex, drug-taking, self-harm, suicide, and serious illnesses such as eating disorders. Excessive use of AI companions may overstimulate the brain’s reward pathways, making it hard to stop. This can have the effect of reducing time spent on genuine social interactions, or make those seem too difficult and unsatisfying. This, in turn, may contribute to feelings of loneliness and low self-esteem, leading to further social withdrawal and dependence on chatbots.
Unlike human interactions, relationships with AI companions lack boundaries and consequences for breaking them. This may confuse children and young people still learning about mutual respect and consent, and impact their ability to establish and maintain healthy relationships – both sexual and non-sexual. Ongoing exposure to highly sexualised conversations can undermine a child’s or young person’s understanding of safe interaction and age-appropriate behaviour, particularly with unknown adults. This can make it easier for predators to sexually groom and abuse them online and in person. There is a risk that children and young people who use AI companions because they have had bad social experiences or find personal interactions challenging will be bullied or further bullied. Others find that subscription-based apps often use manipulative design elements to encourage impulsive purchases. Emotional attachments to AI companions can lead to excessive spending on ‘exclusive’ features, creating financial risks." - www.esafety.gov.au