In today’s episode of “We can, but should we…,” we use your Google profile image to play “Hot or Not.”
Depending on your level of optimism, today’s blog inspiration presents itself as a TED talk or a Black Mirror episode. You can decide.
The source: A NYT article titled "The Map Rating Restaurants Based on How Hot the Customers Are."
The premise: A coder (Riley Walz) generated a map of New York’s “hottest” restaurant-goers for his site, LooksMapping.com.
How he did it: Walz scraped the Google profile images of restaurant reviewers and prompted an AI model to rate each person "hot or not." The data set spans 2.8 million Google reviews from 1.5 million unique accounts, yielding 587,000 profile images with distinguishable faces. He details his procedure here.

The result: dot-distribution maps with a choropleth flair that show where New York's "hot" diners hang out, broken down further by gender and age. Riley emphasizes that these results reflect the AI's perspective, not his own (at least we hope not).
Riley claims that the project is more of a "cultural commentary than practical resource," and that "its premise speaks to a growing trend of diners prioritizing a restaurant's clientele over its food or atmosphere." He also admits that the data his map uses is biased, flawed, and reductive.
“The model was fairly accurate at detecting apparent age and gender, Mr. Walz said, as the A.I. gave estimates of age and gender on a probability between 0 (younger; male) to 1 (older; female). Enough photos were ambiguous that he opted to round up or down to save time, but a more thorough assessment could have treated values near the midpoint as indeterminate or representative of middle age or other gender identities.
“The way it scored attractiveness was “admittedly a bit janky.” It favored seemingly arbitrary details to gauge hotness, like whether a profile image depicts a person wearing a wedding dress (hot), and if a photo is blurry (not). “The model isn’t just looking at the face,” Mr. Walz said. “It’s picking up on other visual cues, too.””
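The rounding choice the article mentions is easy to picture in code. Below is a minimal, hypothetical sketch (not Walz's actual pipeline): it contrasts hard 0/1 rounding of a model's 0-to-1 score with treating scores near the midpoint as indeterminate. The sample scores, labels, and band width are invented for illustration; the wider the band, the more ambiguous profiles land in "indeterminate" instead of being forced into one bucket.

```python
# Minimal sketch (not Walz's actual code): rounding a 0-to-1 model score to a
# hard binary label erases the ambiguous middle, whereas keeping a band around
# the midpoint preserves it as "indeterminate". All values here are made up.

def round_to_binary(score: float) -> str:
    """Hard rounding: everything below 0.5 gets one label, the rest the other."""
    return "older/female" if score >= 0.5 else "younger/male"

def label_with_band(score: float, band: float = 0.1) -> str:
    """Treat scores within `band` of the 0.5 midpoint as indeterminate."""
    if abs(score - 0.5) <= band:
        return "indeterminate"
    return "older/female" if score > 0.5 else "younger/male"

if __name__ == "__main__":
    sample_scores = [0.05, 0.48, 0.50, 0.53, 0.91]  # hypothetical model outputs
    for s in sample_scores:
        print(f"score={s:.2f}  rounded={round_to_binary(s):13s}  banded={label_with_band(s)}")
```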
It doesn’t take long to see what biased prompts or training data lead viewers to see. As one food writer points out about Walz’s gen-AI map for San Francisco, “The algorithm seems to have a thing for Asians, and a bias against places that are Black-owned and/or in Black neighborhoods, like the Bayview.” Indeed, you can recognize similar patterns for Manhattan.
“New York’s “hot” restaurants (marked by red pins) are mostly concentrated in largely white, affluent neighborhoods downtown, and businesses grow less attractive (marked by blue pins) as you move uptown and toward the Bronx.”
Time to reflect: Teachers, if you are diving into conversations about the use of generative AI with your students, remind them that the outputs are only as good as the inputs (a solid metaphor for life). Generative AI models that scrape our data and bake in biased coding will perpetuate skewed perceptions about race, ethnicity, religion, gender, political leanings….[insert your imagination’s opportunity to discriminate against anyone here].
If the topic ever arises, it is a good opportunity to have students reflect on the usefulness of generating such maps and how they can help or hurt society. It is also a teachable moment about how our personal data can be repurposed in ways that we never imagined or intended.
Let’s remind ourselves that this doesn’t end with image data. LLMs trained on biased, already reductive text sources can produce some pretty garbage outputs. Whether it is AI-generated lesson plans, AI test generators, or AI tutors, the mantra holds: junk in, junk out. We can thank Quizlet, 5-minute video scripts, and Wikipedia for even more condensed and watered-down outputs.
Note: I appreciate Riley’s transparency and his use of cartography to poke fun at AI. We need case studies like these to ground our debates about the ethical use of gen-AI.
Thank you for listening to my TED talk.