Startup Snapshot

How a bunch of high schoolers surprised me with their take on AI

I joined a small group of students to discuss the future of AI. The discussion quickly turned into a philosophical debate on human purpose, echoing the worries of today’s top technologists and thinkers.

Ilya Venger

A dear friend invited me to talk with a small group of youngsters as part of her extracurricular course on Greek mythology. They were learning about Pandora’s Box – how curiosity unleashed all the world’s evils. The analogy to the risks and promises of AI was obvious.

The kids had a very insightful take on the promise of abundance and the potential misuse of AI. They also showed an intuitive grasp of elements of existential risk. But one thing a shy 15-year-old girl in the corner said really got to me. Almost verbatim: “I tried ChatGPT and it can do a lot of things much better than I can – even if I study hard. If the systems become even smarter, and can do anything I can do better… then what am I here for?” This became the focus of the class discussion, one that apparently continued in their chat group for weeks afterward.

It is the task of adolescence to discover one’s own self and purpose, but this philosophical discussion was a bit much for me. As the adult in the room, and as a professional developing AI tools at one of the world’s largest tech companies, I was surprised to find that I didn’t have a good answer. What direction should I offer? “Do things you love”? “Do something uniquely human”? Or “Trust me, you’ll figure it out”?

Adults are often numb to the younger generation’s struggle with self-definition. We tell them their objective is to learn and achieve, sometimes to overshadow our own accomplishments. But we’re not great at explaining WHY. Particularly not when big changes are coming.

 

From High School to Big Tech

Over the last few days, my feed has been full of coverage of a conversation between Bill Gates and Sam Altman. Most AI influencers focused on hints at GPT-5’s capabilities and on discussions of multimodality, reasoning, personalization, and data.

But one particular exchange caught my ear: Bill Gates is evidently having the same existential thoughts as the young students in the mythology class.

In the video, Bill Gates states, “Now, with AI, to me, if you do get to the incredible capability, AGI, AGI+, there are three things I worry about. … The one that sort of befuddles me is human purpose. I get a lot of excitement that, hey, I’m good at working on malaria, and malaria eradication, and getting smart people and applying resources to that. When the machine says to me, ‘Bill, go play pickleball, I’ve got malaria eradication. You’re just a slow thinker,’ then it is a philosophically confusing thing.”

This made me do a double-take. The worries of a sixty-plus-year-old visionary tech billionaire and the anxieties of a bright, yet random, fifteen-year-old are the same.

 

What the future holds

Neither Sam nor Bill offers an answer, apart from the simple hypothesis that our horizons are probably too narrow: a scarcity-oriented mindset cannot process a future in which a benevolent, powerful AI renders our efforts at sustenance unnecessary.

Yuval Noah Harari, historian and author of Sapiens, dubs our time the “age of bewilderment” in his book “21 Lessons for the 21st Century”: “By the middle of the twenty-first century … ‘Who am I?’ will be a more urgent and complicated question than ever before.” He stresses that adapting to new circumstances will require accepting the unknown – that we must endow children first and foremost with emotional resilience, a skill far harder both to teach and to master than historical facts or the laws of physics. Moreover, to preserve our humanity we will need to explore new models for post-work societies, post-work economies, and post-work politics. We must search out new meaningful pursuits.

Harari is generally a prophetic author. Revisiting the book while writing this piece, I found that five years ago he already foresaw my predicament: “So the best advice I could give a fifteen-year-old stuck in an outdated school somewhere in Mexico, India or Alabama is: don’t rely on the adults too much. Most of them mean well, but they just don’t understand the world.”

It’s true, I don’t fully understand the world, and I have my own (additional) worries about an AI-driven future. But this cross-sectional, intergenerational echo of concern about human purpose – voiced by tech leaders, intellectuals, and teenagers alike – makes the problem stand out for me now more than ever. It’s a conversation I owe my 8-year-old daughter. And I suppose I too would like to find some comfort in the process.

 

About the Author: 

Ilya Venger is a Principal Product Lead for Industry AI at Microsoft, where he focuses on defining the foundations that enable customers and partners to create GenAI-driven Copilots. Before joining Microsoft, Ilya led data architecture for the Group CTO at UBS and spent nearly a decade as a strategy consultant, advising Fortune 100 executives on digital transformation and proposition development. He holds a PhD in Systems Biology from the Weizmann Institute of Science.
