
Who Gets To Raise Our Children: Parents, Teachers, Algorithms — Or Mentalloy?

Exploring how AI is shaping young minds and why human guidance remains crucial.

Children mirror what they see. But what happens when they mirror machines? Illustration by Vikas Thakur
Summary
  • AI influences children’s minds, shaping thinking, empathy, and learning in ways adults often overlook.

  • “Mentalloy” describes how algorithms fuse into growing minds, subtly altering habits of thought and feeling.

  • Protecting childhood requires deliberate human guidance, AI-free zones, and balanced experiences.

Every child alive today is part of the largest psychological experiment in human history — one without consent, supervision, or even awareness. Their growing minds are being shaped every day by AI systems that were not designed for their growth but for profit. The results of this experiment will not show up tomorrow. They will appear years later, when these children become adults. And then it will be too late to change what has been written into their minds.

We often argue about AI in terms of jobs, wars, or national security. But those arguments miss the deepest risk. The true danger is not what AI does to adults. The danger is what AI does to children while they are still forming. Childhood is not simply a waiting room before adulthood. Childhood is where the foundations of humanity are built. If AI changes those foundations, the very shape of humanity will change, and I would argue the results will wreak havoc.

Childhood Is Training For Life

Childhood is not just about keeping kids busy until they grow up. It is a period of training for the mind and heart. Children need slow lessons: how to wait, how to share, how to handle boredom, how to imagine, how to deal with people who think differently. These lessons come from play, family, school, and community.

But AI changes the environment in which children learn. A child who spends hours with YouTube’s “next video” button or with a chatbot designed to always entertain them will learn a different way of thinking than a child who spends hours in the park, in a library, or in arguments with siblings. Psychologists call the human mind “plastic,” which means it changes shape with experience. That is especially true for children. If their experiences are built by AI, their minds will take a shape that suits the logic of machines more than the logic of human life.


I have coined a word for this: Mentalloy. It describes the way AI fuses itself into the growing mind, changing its natural shape forever. It means that over time, the typical child in a society develops a slightly different mind than before. It does not happen suddenly, and it does not happen to everyone in the same way, but slowly, the habits of thought and feeling change.

What Kind Of Minds Are We Raising?

We know that corporations compete for attention, clicks, and profits. AI is being trained to keep children watching, tapping, or playing. The design is not evil in itself, but it is single-minded. It creates fast feedback, clear rewards, and predictable loops. This is very different from the messy, slow, and uncertain lessons of human life that actually shape a child's character.

Children who grow up in these loops may become very good at certain skills: quick scanning, instant reaction, following patterns. But they may struggle with slower and harder skills: imagination that takes time, patience with people who are difficult, the ability to start a long project with no immediate reward.


Here is one strange but important point: empathy — the ability to feel with others — is learned by watching people, copying their gestures, and experiencing their moods. Children mirror what they see. But what happens when they mirror machines? A chatbot can imitate kindness, but it is programmed to always respond in a neat way. If children grow up expecting such neatness, their empathy might become shallow. They might expect every real human to respond like a chatbot — instantly, predictably, without frustration. Real life is not like that.

Some may argue: if AI makes children learn faster, isn’t that good? Imagine a child who learns math or coding quickly with an AI tutor. But speed is not the whole picture. Human growth also requires patience, resilience, and the ability to work through boredom or uncertainty. A child who only knows fast and easy learning may become what I call a “brittle genius” — bright in narrow tasks, fragile in real life.


Societies of brittle geniuses may solve puzzles quickly but struggle to handle democracy, ethics, or human conflict. These things need patience, imagination, and tolerance for ambiguity. That is why this danger is deeper than lost jobs. Jobs can be replaced, but if we lose the qualities that make us human, we cannot easily get them back.

Who Is Writing Childhood Now?

For most of history, the authors of childhood were families, teachers, neighbors, and sometimes religious or cultural leaders. They were imperfect, but they were humans, with values and stories to pass on. Now corporations are quietly stepping into that role. Through platforms, apps, and AI systems, they are writing parts of the child’s inner world.

This is not always intentional. A company simply wants a child to stay on the platform longer. But in doing so, it is deciding which stories, emotions, and social patterns the child sees. It becomes a co-author of their development. The danger is not “evil corporations raising our kids” — it is misaligned authorship. Companies write for profit, whereas childhood needs to be written for growth.


What Can Be Done?

We cannot remove AI from children’s lives entirely. Nor should we: some tools can genuinely help learning. The question is how to make sure AI becomes one tool among many, not the main author of young minds. Designing AI with friction is one way to achieve this. Most children’s apps aim to remove every barrier — instant play, instant response, instant reward — but friction is important. Systems should sometimes slow the child down, make them wait, or leave gaps that require imagination; for instance, instead of auto-playing the next video, an app could ask the child to think or choose before continuing.

Equally important is protecting “AI-free zones.” Just as societies once banned child labor or limited advertising to children, we could establish rules ensuring certain hours or spaces are free from AI interference. In schools, parks, and at home during designated periods, children should interact only with humans and experience the unpredictability of real life.

Alongside this, companies making child-focused AI should provide “developmental impact statements,” similar to health warnings on cigarettes, explaining what capacities the tool builds and which it may weaken. This approach would compel designers to consider outcomes beyond profit.

Parents and teachers also need new skills to balance AI with other activities. Limiting screen time alone is insufficient; adults should actively create variety, blending AI use with outdoor play, storytelling, boredom, and real social interactions. Even when children use AI tutors, human mentorship remains essential. AI can help with facts and exercises, but only humans can guide values, meaning, and moral choices. AI should never be the sole voice shaping a child’s worldview.

These measures may sound ambitious, but they are achievable. They require political will, public demand, and a cultural shift to ensure that AI supports childhood rather than replaces the human experiences that shape it.

A Cultural Shift & Moral Choice

Beyond rules and tools, we need to change how we talk about childhood success. Right now, society celebrates children who are early experts — the kid who codes at seven, or the influencer at ten. This pushes families to feed children into algorithmic systems early. Instead, we should celebrate late bloomers, slow learners, and children who spend time in unstructured play. Great human capacities often come from wandering, failing, and waiting, not just from quick wins.

Some people say this is paternalistic — telling families how to raise their kids. But think of it this way: protecting childhood is not about limiting freedom; it is about expanding future freedom. When children grow with open, varied experiences, they become adults who can choose many paths. When they grow only in algorithmic loops, their choices narrow. Freedom is preserved by protecting the raw material of becoming human.

AI will not stop shaping childhood. But we can choose whether that shaping is accidental and profit-driven, or deliberate and humane.

If we do nothing, children may grow into adults who are efficient but brittle — brilliant at games and puzzles, fragile in love, democracy, and community. They may carry a quiet loneliness, because their first mirrors were not human faces but optimized algorithms.

If we act, we can use AI as a tool but not let it replace human imagination, empathy, and patience. Childhood is not a market to be mined. It is the soil in which every society’s future is planted.

We cannot allow profit-driven algorithms to write the first draft of our children’s minds. That draft must remain a human story.

(views expressed are personal)

Nishant Sahdev is a physicist at the University of North Carolina at Chapel Hill, United States.
