The Scan Does Not Generate. It Finds.
The AI concern is legitimate and has a real answer. The scan does not generate claims about you. It finds what is already in your words. You put it in. It reflects it back.
The concern comes up regularly, and it is worth taking seriously. Someone hears “AI” and thinks of chatbots confidently stating false things. Of systems that fill in gaps with plausible-sounding fabrications. Of a technology that generates output without any anchor to truth.
That concern is legitimate. It applies to a lot of AI-assisted processes. It does not apply here. Here is why.
The problem that makes the concern real
Hallucination in large language models is a real and well-documented failure mode. These systems produce text by predicting what word is most likely to come next, based on patterns in their training data. When asked about something outside their knowledge, they do not say “I do not know.” They generate a plausible-sounding answer anyway. This is what produces confident fabrication: a system optimized to produce coherent output, regardless of whether the output is true.[1]
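The mechanism is easy to caricature in a few lines of code. The sketch below is a toy bigram sampler, not a real language model, and its corpus is invented for the example; but it shares the defining property the survey describes: it optimizes for what plausibly comes next, and truth is never consulted anywhere in the process.

```python
import random
from collections import defaultdict

# Toy bigram "language model": a caricature of generation, not a real LLM.
# It learns which word tends to follow which, then samples accordingly.
# Note what is missing: nothing ever checks whether the output is true.
# Fluency is the only objective.

corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of atlantis is unknown ."
).split()

# Count word -> next-word transitions observed in the corpus.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start: str, max_words: int = 8) -> str:
    """Sample a plausible-looking continuation; truth is not consulted."""
    words = [start]
    while len(words) < max_words and words[-1] in transitions:
        words.append(random.choice(transitions[words[-1]]))
        if words[-1] == ".":
            break
    return " ".join(words)

print(generate("the"))
# May print "the capital of atlantis is paris ." -- fluent, confident, false.
```

Every sentence this sampler produces is grammatical and confident, and some of them are wrong, because "wrong" is simply not a category the generation step knows about.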
If ReLoHu were using AI to generate claims about you from scratch, that would be the right concern to have. The system would be filling in gaps about your psychology with its best guess, and you would have no reliable way to know which parts were found and which parts were invented.
That is not what is happening.
Generative versus reflective
The input to a ReLoHu session is a long, structured conversation in which you describe your own life, in your own words, at whatever depth you choose to go. The AI does not produce a portrait of some average person, or some probable person, or some composite assembled from training data. It works with what you actually said.
The task is not generation. It is pattern recognition within a bounded corpus: the conversation itself. What keeps appearing? What is conspicuously absent? How do you describe the people who shaped you? What framing do you reach for when something went wrong? These are questions about structure in what you provided, not inferences about what you probably believe.
The system cannot find what you did not say. It cannot invent a wound you did not describe, a pattern you did not demonstrate, a theme that is not present in the conversation. The map is bounded by the conversation. What was not discussed is not in the map.
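The difference between generating and finding can be made concrete. The sketch below is deliberately minimal and entirely hypothetical (the transcript and the stopword list are invented; this is an illustration of the property, not ReLoHu's actual pipeline). It surfaces only what recurs in the input, and by construction it cannot output a word the speaker never said.

```python
import re
from collections import Counter

# A minimal sketch of reflective analysis over a bounded corpus.
# The transcript is invented for illustration. The key invariant:
# every term in the output already occurs in the input. Nothing can
# be "found" that the speaker did not say.

transcript = """
I always had to be the responsible one. My brother got to be angry;
I got to be responsible. Even now, when something goes wrong at work,
my first thought is what I should have caught. Responsible, again.
"""

# A toy stopword list, just enough to skip filler for this example.
STOPWORDS = {"i", "to", "the", "be", "my", "at", "is", "got", "when", "what"}

words = re.findall(r"[a-z']+", transcript.lower())
recurring = Counter(w for w in words if w not in STOPWORDS)

# Surface only what repeats -- structure already present in the words.
for word, count in recurring.most_common(3):
    if count > 1:
        print(f"{word!r} appears {count} times")

# Every reported word is guaranteed to come from the transcript itself:
assert all(w in words for w in recurring)
```

Run it and the only thing it can ever report is vocabulary the speaker actually used; delete "responsible" from the transcript and no amount of computation will put it back.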
Your words already carry your psychology
The “garbage in, garbage out” objection carries an assumption worth examining: that because you do not have full conscious access to your own psychology, your input is flawed, and a map built from flawed input must be flawed too. But that is not how language works.
Richard Nisbett and Timothy Wilson documented this precisely in a foundational 1977 paper: people frequently cannot accurately report their own mental processes, even when those processes are clearly influencing their behavior.[2] Conscious self-report is not a reliable window into the mechanisms actually driving you. What you believe about yourself and what is actually operating underneath are often different things.
And yet the language you use, the structures you reach for, what you return to, what you avoid, how you frame agency and causality and other people, all of this encodes psychological reality with surprising precision, regardless of whether you are aware of it as you speak. Decades of computational text analysis research confirm this: the words people choose are not neutral containers for their intended meaning. They are data.[3] They reveal what people cannot easily report directly, and they do so consistently enough to be measured.
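A toy version of the idea makes it tangible. The categories and word lists below are invented for this example; validated instruments like LIWC use dictionaries refined over decades. But the principle is the same: count the words people reach for, and you get a measurable signal they did not consciously put there.

```python
# A toy sketch of dictionary-based text analysis in the spirit of
# LIWC-style methods (Tausczik & Pennebaker, 2010). These categories
# and word lists are invented for illustration; real instruments use
# validated dictionaries. The point: word choice is measurable data,
# independent of what the speaker intends to convey.

CATEGORIES = {
    "agency":    {"i", "chose", "decided", "made"},
    "passivity": {"happened", "ended", "found", "somehow"},
}

def category_rates(text: str) -> dict[str, float]:
    """Fraction of words falling in each (hypothetical) category."""
    words = text.lower().split()
    return {
        name: sum(w.strip(".,") in vocab for w in words) / len(words)
        for name, vocab in CATEGORIES.items()
    }

print(category_rates("It just happened. Somehow I ended up here."))
print(category_rates("I chose this. I decided and I made it happen."))
```

Two speakers can describe the same event, and the framing each one reaches for shows up as a different measurable profile, whether or not either of them meant to reveal anything.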
This is the reframe that matters for the garbage objection. The input to the process is not your theory of yourself. It is your actual words. Those are different things. And the linguistic signal in what you say is often more revealing than the self-interpretation you would offer if asked directly.
The human in the loop
The AI component of a ReLoHu session does not operate unsupervised and does not produce the final map. The conversation is reviewed by a practitioner who reads it, uses AI-assisted analysis, and then drafts, edits, and takes responsibility for the written document. The map does not go out as raw AI output. It goes out as a practitioner-reviewed, practitioner-edited portrait. The AI finds structure. A human decides what that structure means and whether it is worth including.
If something appears in the map that does not ring true, that is a legitimate response worth voicing. The map can be wrong. Accuracy is the goal, and accuracy is checked against your own recognition.
What the concern is usually about
The AI objection is often a proxy for a deeper question: will this map say something about me that is not true, and will I be expected to accept it because a system said so?
That concern is worth naming directly because it is not the same as the hallucination concern. It is a concern about authority, about being told something by an external process and having no recourse. The answer is that a ReLoHu map is not handed down as a verdict. It is offered as a reflection. The standard it is held to is whether it is accurate, and the person best positioned to evaluate that is the one being described.
People who receive maps that are accurate do not usually describe them as feeling like AI output. They describe them as uncomfortably, specifically true. Like something they had been circling for years was named cleanly, from the outside, without agenda.
That is not what hallucination feels like. That is what being seen feels like.
References
- [1] Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y.J., Madotto, A., & Fung, P. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1–38. (A comprehensive survey of the hallucination problem in large language models: the failure mode by which generative systems produce confident, plausible-sounding output that is factually false, because generation is optimized for coherence rather than grounded in truth.)
- [2] Nisbett, R.E., & Wilson, T.D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259. (Foundational paper demonstrating that people frequently cannot accurately report the mental processes actually driving their behavior: conscious self-report and actual psychological mechanism are often misaligned, which means that language can reveal what direct self-description cannot.)
- [3] Tausczik, Y.R., & Pennebaker, J.W. (2010). The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1), 24–54. (Documents how computational analysis of natural language reliably reveals psychological states, personality, and underlying patterns that speakers are often unaware of: words carry psychological information independent of whether the speaker is consciously conveying it.)
Read a map before you decide.
ReLoHu is a one-session psychological mapping service. One conversation, a complete written portrait of your terrain. Read a real session report to see exactly what the process produces.