Published Friday, January 23, 2026
by Ken Lo

On January 23, the Irvine Book Club hosted a discussion of The Worlds I See by Fei-Fei Li, featuring former president Hsia Kai Ming and members Francoise Mayle and William Chen, who explored the memoir through the lenses of AI, life experience, and human values.

Held at the South Coast Chinese Cultural Center, the event sparked lively discussion and encouraged participants to move beyond a technology-first mindset, rethinking how AI shapes civilization, ethics, and the future.

Machines Aren’t Gods · Humanity Leads

Member Francoise Mayle noted that Fei-Fei Li shifts the focus from how intelligent machines can be to how humans should coexist with AI. 

She described The Worlds I See as not just an AI history, but a reflection on humanity’s place in a technology-driven age.

She highlighted the book’s emphasis on ethics, bias, and social responsibility, noting that biased data only amplifies inequality. 

Li’s call for “AI for Good,” she said, aims to broaden participation—especially among women and underrepresented groups—to ensure fairness in AI development.

Francoise also pointed to the creation of ImageNet as a pivotal breakthrough, where Li bridged linguistics and computer vision to help machines learn to “see” at scale—laying the groundwork for deep learning and generative AI.

She added that Li repeatedly stresses AI has no consciousness or moral judgment, but merely reflects human values. 

Quoting NVIDIA CEO Jensen Huang, she concluded that as AI makes intelligence abundant, taste and judgment will become humanity’s most valuable assets.

Technology Limits · Human Judgment

Member William Chen shared that, based on his experience in computer and engineering systems, what struck him most about The Worlds I See was that Fei-Fei Li does not mythologize AI, but clearly explains its logic and limits.

Chen noted that Li’s major breakthrough came from recognizing the importance of data structure. 

By applying linguistic word-network concepts, Li built a massive image database and mobilized global contributors to achieve in years what once required decades—an example of collective intelligence at work.

Chen emphasized that Li also makes clear that AI remains limited to recognition and description, far from human contextual understanding or imagination. 

While humans grasp meaning early in life, most AI systems still operate within narrow bounds.

He concluded that the book reinforces a key point: the issue is not whether AI will replace humans, but how humans choose to guide it—with judgment, clarity, and responsibility.

Upbringing as Bow · North Star as Guide

Former president Hsia Kai Ming highlighted Fei-Fei Li’s upbringing as key to her human-centered approach to AI. 

Raised by a resilient mother and a curious engineer father who encouraged exploration and questioning, Li learned early to value inquiry over easy answers.

After immigrating to the U.S., she overcame cultural and language barriers with the guidance of math teacher Bob Sabella, who recognized her talent and helped shape her academic path. 

Facing financial hardship, Li ultimately chose academia after her mother urged her to “follow your North Star.”

Now a mother herself, Li views technology as a responsibility to future generations, once comparing parenting to drawing a bow—adults steady the aim, while children are the arrows sent toward their own future.

Double Edge · Human Measure

IBC president Rose voiced concern over potential AI backlash, echoing the author’s call for a human-centered approach. She stressed that without ethical and humanistic guidance, technology risks losing its purpose.

Vice president Jennifer Lin praised Fei-Fei Li’s focus on AI’s impact on medicine, humanity, and society, invoking the “North Star” as a reminder that every life stage needs a guiding core value.

The discussion drew active participation. Presentation coordinator Gloria shared her experience using AI for slides, noting that AI is most effective only when humans first define the content and intent—reinforcing the event’s central message: AI must remain human-centered.



Book Profile:

Written by leading AI scholar Fei-Fei Li, “The Worlds I See” is not just a technology book—it is a powerful life narrative spanning immigration, womanhood, science, and the humanities. 

Blending memoir, science history, and ethical reflection, Li traces her journey from poverty and adversity to the front lines of AI that reshaped the modern world.

Li is Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence (Stanford HAI), and has been elected to the U.S. National Academy of Engineering, the National Academy of Medicine, and the American Academy of Arts and Sciences—widely regarded as one of the rare female pioneers of modern AI. 

She led the creation of ImageNet, a foundation of deep learning and generative AI, and has served as Chief Scientist of AI/ML at Google Cloud and as an independent board director at Twitter (now X). 

She also founded World Labs, placing her at the crossroads of innovation and public responsibility.

What makes this book resonate is not technical triumph, but the deeper question it asks:
As machines grow smarter, can humanity grow wiser?
Li’s answer is clear—AI must be guided not only by power and speed, but by ethics, equity, education, and human dignity. 

Technology may reshape the world, but its direction must always be decided by people.
