At some point in the last year, we’ve all heard, “What does AI think of us?” during a meeting. It is usually followed by someone opening a laptop and typing something into ChatGPT, or Claude, or Gemini. You get it.

  • “Do we rank for this?”
  • “Who does Google say is the best in our category?”
  • “What happens if you just search my name?”
  • “What does it say about the company?”

We are all doing it, whether we want to admit it or not.

Executives search their own names. Founders search their companies. Marketing teams run “best {category} in {city}” queries to see who appears. Someone inevitably pastes the output of an AI summary into Slack and says, “This is interesting.”

The behavior is not rare. It is constant. What is rare is creating meaningful structure around the query. Most of these tests are done informally, inside logged-in browsers, with half-formed queries, and without any awareness of how personalization or conversational context skews the output. We glance at the results and move on, assuming we have learned something meaningful.

That assumption is usually wrong.

The Perception Scanner was built to formalize what people are already testing, and to do it in a way that removes obvious bias, adds structure, and measures narrative dominance rather than just surface-level visibility.

The Fallacy in Manual Testing

If someone genuinely wanted to understand how they are perceived online, they would need to do significantly more than run a single search. They would need to test multiple identity and positioning surfaces intentionally.

For individuals, that includes name-only searches, qualified searches combining name with role or company, authority-based searches that probe for credibility, interviews or publications, shadow searches that look for alternate identities or outdated affiliations, and category-placement searches such as “best {role} in {city}.” When structured correctly, that quickly becomes twenty or more distinct queries.
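The expansion from a person's details into a structured query set can be sketched in a few lines. The template list and field names below are illustrative assumptions, not the Perception Scanner's actual internals:

```python
# Sketch: expanding a subject's details into a structured query set.
# Template strings and field names are made up for illustration.

def build_queries(subject: dict, templates: list[str]) -> list[str]:
    """Fill each template with the subject's fields, skipping any the subject lacks."""
    queries = []
    for t in templates:
        try:
            queries.append(t.format(**subject))
        except KeyError:
            continue  # subject is missing a field this template needs
    return queries

PERSON_TEMPLATES = [
    "{name}",                 # unqualified name-only search
    "{name} {company}",       # qualified search: name plus affiliation
    "{name} interview",       # authority probe for credibility signals
    "best {role} in {city}",  # category-placement search
]

queries = build_queries(
    {"name": "Jane Doe", "role": "CFO", "company": "Acme", "city": "Austin"},
    PERSON_TEMPLATES,
)
print(queries)
```

With a fuller template list (shadow searches, publication probes, and so on), the same expansion quickly produces the twenty-plus distinct queries described above.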

For companies, the framework shifts slightly. Brand queries measure navigational dominance. Category queries measure discovery visibility. Offer and service queries test transactional alignment. Comparative queries reveal competitive clustering. Geographic queries expose regional positioning. To execute that rigorously, you’d again end up running fifteen to twenty structured searches across multiple systems.

Even if someone is disciplined enough to do that, there is the issue of personalization bias. Logged-in Google sessions, prior browsing behavior, frequent visits to one’s own website, and location signals all influence SERP outputs. AI tools with history operate conversational context windows that subtly shape summaries. Without neutralized sessions and clean prompts, what appears to be objective reality is often a personalized reflection.
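The neutralization principle can be expressed as code: every request starts from a stateless context with no cookies, a generic user agent, and a pinned language so locale drift does not creep in. This is only a sketch of the idea; a production scanner would more likely use a SERP API:

```python
# Sketch: a "neutralized" request context -- no cookies, generic client
# identity, fixed locale. Illustrative only, not the scanner's actual code.
from urllib.request import Request

def neutral_request(url: str) -> Request:
    """Build a request carrying no personalization state."""
    return Request(url, headers={
        "User-Agent": "Mozilla/5.0 (compatible; PerceptionScan/0.1)",
        "Accept-Language": "en-US,en;q=0.9",  # pinned language, no locale drift
        # deliberately no Cookie header: no login or browsing history
    })

req = neutral_request("https://example.com/search?q=acme")
print(req.get_header("User-agent"))
```

The same discipline applies to AI tools: each prompt should run in a fresh conversation with no prior context window to lean on.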

In other words, informal testing produces incomplete and biased insight.

Enter the Perception Scanner

Someone had posted on LinkedIn about asking AI how it described their company. It was interesting, but the approach felt random. A single prompt, a single snapshot, and then conclusions drawn from that.

Our reaction was, “That’s not a very good sample.” It could be done more methodically.

Within a couple of hours we had a version running that pulled multiple perspectives and compared the responses in a more structured way. With a couple more hours, we built a second version that worked for individuals instead of companies.

That is usually how these things happen for us. We don’t set out to build products or tools. We just see a process that could be clearer, more structured, or easier to run repeatedly, and we build the thing we wish existed.

And if you have followed our previous experiments with Python automation, Excel wrangling, and the Business Health Assessment, you will probably recognize the pattern.

The Company Framework

The original Perception Scanner focused on companies because the boundaries were slightly clearer. A business generally has a defined brand name, a primary domain, and a narrower category scope than an individual.

The challenge was not retrieving search results; it was designing query architecture that worked across industries without hardcoding vertical assumptions. The scanner extracts structural signals from the website—brand identifiers, offer nouns, category descriptors, problem framing, and geographic indicators—and maps them to universal search-intent classes. These classes correspond to how real buyers search, not how marketing decks describe positioning.
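The mapping from extracted signals to intent classes might look something like the following. The signal fields, class names, and templates here are assumptions made for illustration:

```python
# Sketch: mapping extracted site signals to universal search-intent classes.
# Field names, class names, and templates are illustrative assumptions.
SIGNALS = {
    "brand": "Acme Widgets",
    "offers": ["widget repair", "widget installation"],
    "category": "widget services",
    "geo": "Austin",
}

INTENT_TEMPLATES = {
    "navigational": ["{brand}"],
    "discovery": ["best {category}", "{category} near me"],
    "transactional": ["{offer} cost", "hire {offer}"],
    "geographic": ["{category} in {geo}"],
}

def expand(signals: dict) -> dict[str, list[str]]:
    """Expand signals into queries grouped by intent class."""
    out: dict[str, list[str]] = {}
    for intent, templates in INTENT_TEMPLATES.items():
        queries = []
        for t in templates:
            if "{offer}" in t:
                # fan out one template per offer noun
                queries += [t.format(offer=o) for o in signals["offers"]]
            else:
                queries.append(t.format(**signals))
        out[intent] = queries
    return out
```

Because the intent classes are universal rather than vertical-specific, the same expansion works whether the site sells widgets or wealth management.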

Each query is executed through a neutralized SERP context, and results are clustered by domain frequency and entity type rather than evaluated solely on rank position. The output is not a simplistic ranking report. It is a narrative dominance analysis that reveals how the company is categorized, which competitors dominate category-level queries, and how AI systems summarize the business when provided structured evidence.
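Clustering by domain frequency is conceptually simple: count how often each domain surfaces across all queries, rather than fixating on who ranks first for any single one. A minimal sketch, with made-up result URLs (the real scanner also tags entity types):

```python
# Sketch: clustering raw result URLs by domain frequency rather than rank.
# The input URLs are invented for illustration.
from collections import Counter
from urllib.parse import urlparse

def domain_frequencies(result_urls: list[str]) -> Counter:
    """Count how often each domain appears across all queries' results."""
    return Counter(urlparse(u).netloc.removeprefix("www.") for u in result_urls)

results = [
    "https://www.acme.com/about",
    "https://acme.com/pricing",
    "https://www.competitor.io/best-widgets",
    "https://news.example.org/acme-raises-round",
]
print(domain_frequencies(results).most_common(3))
```

Normalizing `www.` variants matters here: without it, a company's own domain would be undercounted and its apparent dominance diluted.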

The goal is clarity about category placement and competitive context, not vanity metrics.

The Identity Framework for Individuals

After the company scanner was built, we asked, “Can it work on people?” It could, but the structure had to change. Companies compete on products and categories. People compete on identity.

An individual’s digital footprint is inherently more complex. A person may have a legal name, previous names, middle initials, nicknames, or public handles that have accumulated digital history. There may be directory listings, speaking bios, academic publications, government records, or entirely unrelated individuals with the same name. Identity variance is the norm, not the exception.

The people version of the Perception Scanner runs a structured set of twenty-four queries across five platforms to test distinct identity surfaces. These include unqualified name-only searches to measure dominant narrative, qualified searches to assess stabilization through context, authority probes to identify third-party validation, shadow queries to detect alternate or outdated identity clusters, and category-placement searches to determine competitive visibility within a professional role.

Results are classified into universal identity surfaces such as professional profiles, directories, social accounts, media references, academic citations, government records, commerce listings, entertainment entities, or legal records. From there, the system measures narrative dominance and compares it against the identity the individual states they want to be known for. The analysis surfaces identity variance, collision risk, authority gaps, and drift between intended positioning and observed clustering.
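One way to quantify that drift is a simple dominance score: the share of classified results that land on the surface the person wants to be known for. The surface labels below are illustrative, not the scanner's exact taxonomy:

```python
# Sketch: narrative dominance as the share of results matching the
# intended identity surface. Surface labels are illustrative.
from collections import Counter

def narrative_dominance(classified: list[str], intended: str) -> float:
    """Fraction of classified results matching the intended identity surface."""
    if not classified:
        return 0.0
    counts = Counter(classified)
    return counts[intended] / len(classified)

observed = ["professional", "professional", "directory",
            "media", "professional", "shadow"]
score = narrative_dominance(observed, intended="professional")  # 3 of 6 match
print(round(score, 2))
```

A low score with a large "shadow" cluster signals exactly the collision risk and drift described above.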

Productizing Discipline

There is nothing proprietary about typing search queries into Google or asking AI to summarize a name. Anyone can replicate the core mechanics manually. The value lies in the structure and the neutrality.

To perform this and achieve meaningful results, one would need to generate structured query sets, execute them in neutral browsing environments, clear conversational context in AI tools, capture outputs systematically, cluster domains and entities, and compare narrative patterns objectively. Most people will not invest that level of effort, and even those who try may overlook subtle bias.

That is what the Perception Scanner is for. One simple request produces a coherent, consistent, neutral set of results.

If you prefer scientific measurement over assumption, try the Perception Scanner.
