Aneesh Sathe


The Mind as Semi-Solid Smoke

July 7, 2025

This post continues the series on Socratic Thinking, turning the space-and-place lens inward to examine the mind itself. The human mind can be thought of as an imperfect place with the ability to create its own insta-places to navigate ambiguity.


Exploration in any real or conceptual space needs navigational markers with sufficient meaning. Humans are biologically predisposed to seek out and use such markers. This tendency is rooted in our neural architecture, emerges early in life, and is shared with other animals, reflecting its deep evolutionary origins.1,2 Even the simplest life forms performing chemotaxis use the signal-field of food to navigate.

When you’re microscopic, the territory is the map; at human scale, we externalise those cues as landmarks—then mirror the process inside our heads. Just as cells follow chemical gradients, our thoughts follow self-made landmarks, yet these landmarks are vaporous.

From the outside our mind is a single place: it is our identity. Probe closer, though, and that identity turns nebulous, dissolving the way a city dissolves into smaller and smaller places the closer you look. We use our identity to create the first stable place in the world and then use other places to navigate life. However, these places come from unreliable sources: our internal and external environments. How do we know the places are even real, and do we have the knowledge to trust their reality? We don’t. We can’t judge our mental landmarks false. Callard calls this normative self-blindness: the built-in refusal to saw off the branch we stand on.

Normative self-blindness is a trick to gloss over details and keep moving. Insta-places are conjured from our experience and treated as solid no matter how poorly they are tied down by actual knowledge. We can accept that a place formed in the past was loose, an error, or that a place in the future is not yet well defined, unknown. In the moment, however, the places exist and we use them to see.

Understanding and accepting that our minds work this way is a key tenet of Socratic Thinking. It makes adopting the posture of inquiry much easier. Socratic inquiry begins by admitting that everyone’s guiding landmarks may be made of semi-solid smoke.


1Chan, Edgar, Oliver Baumann, Mark A. Bellgrove, and Jason B. Mattingley. “From Objects to Landmarks: The Function of Visual Location Information in Spatial Navigation.” Frontiers in Psychology 3 (2012). https://doi.org/10.3389/fpsyg.2012.00304.

2Freas, Cody A., and Ken Cheng. “The Basis of Navigation Across Species.” Annual Review of Psychology 73, no. 1 (January 4, 2022): 217–41. https://doi.org/10.1146/annurev-psych-020821-111311.


Thinking with Places

July 6, 2025

“A farmer has to cut down trees to create space for his farmstead and fields. Yet once the farm is established it becomes an ordered world of meaning—a place—and beyond it is the forest and space.” — Yi-Fu Tuan

Thinking itself is place-making: the act of converting undifferentiated possibility into navigable meaning.

A place comes into being the moment we interrupt undifferentiated space; place-making is fundamentally an act of interruption. Space is thought of as possibility, but that possibility is unavailable without the signposts of place. When a place is created we impose a way of looking, being, and acting on the space of choice. The place you pick to navigate your space defines the identity you will inhabit during your quest. Every tool is a micro-place: it frames what can be thought and forecloses alternative moves. It enforces the kinds of thoughts that can be had and the type of exploration that can be done, and it configures the space in an opinionated way.


Picking a tool commits us to a worldview. Consider the space of ‘good TV shows’. Family, friends, and culture have made the choice of what good means. When Netflix suggests shows, it uses your watching history as a probe to create a place, so that every individual is always watching ‘good’ shows. The pure possibility space of the search bar is disrupted by the suggestions provided.
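The probe-and-foreclose idea can be sketched as a toy filter. This is purely illustrative (the catalog, titles, and genre-matching rule are my invented assumptions, not Netflix’s actual algorithm): watching history collapses the full catalog into a small “place” of suggestions.

```python
# A toy catalog mapping titles to genres (invented for illustration).
catalog = {
    "Dark": "sci-fi", "Chef's Table": "food", "Severance": "sci-fi",
    "The Bear": "food", "Drive to Survive": "sport",
}
history = ["Dark"]  # the viewer's watching history acts as the probe

# Genres the viewer has already visited.
seen_genres = {catalog[title] for title in history}

# The suggestion "place": unseen titles from familiar genres.
# Everything outside it is foreclosed.
suggestions = [t for t, g in catalog.items()
               if g in seen_genres and t not in history]
print(suggestions)  # -> ['Severance']
```

A one-line rule is already enough to turn the open possibility space of the search bar into a narrow, opinionated place.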

Like algorithmic curation, Socratic dialogue also interrupts space: it is interrogation as cartography. Socratic thinking is likewise an act of interruption, of making concrete what was nebulous. It asks us to specify which show, if we claim to love TV. Socratic thinking (henceforth referred to as just thinking) starts by probing that which does not seem to need questioning: the answers that are obvious, the ones that everyone knows. This may seem foreign at first glance, but we do it all the time. Say we make a list of our favorite TV shows; someone always says we are missing this or that show and that the list is completely wrong. This kind of disagreement leads to the shared quest of answering the question, ‘What is it to be entertained?’.

Thinking pursues knowledge by stabilizing answers to such questions, creating places in those unexamined areas. Discussion allows us to map. There is usually no well-defined answer for such questions; if there were, they would simply be problems that we could solve with a Google search. The quest stops when the parties involved are satisfied that they have arrived at an answer. Thinking is the act of place-making: taking something that was ungraspable and tying it down with knowledge. Place is, after all, an “ordered world of meaning”, and we can use these places to create home bases from which to explore.

Even without other people, simply engaging with the reality of the universe is sufficient for thought. Places are stable systems that provide a surface on which your thoughts and hypotheses can be tested. Even with no other person around, simply looking at the world can uncover a new truth tied down by knowledge.

Thinking is the process of updating beliefs based on the mini-places that make up the space you are interrogating. Each place is a noisy pointer to the underlying truth, and each update of belief brings you closer to the knowledge you seek.


Chatbots, Bats & Broken Oracles

July 5, 2025

I had the strangest conversation with my son today. There used to be a time when computers never made a mistake; it was always the user who was in error. The computer did exactly what you asked it to do, and if something went wrong it was you, the user, who didn’t know what you wanted. After decades of that being etched in, today I found myself telling him that computers make mistakes, that you have to check whether the computer has done the right thing, and that this is actually ok. A computer that hallucinates also provides a surface for exploration and for seeking answers to questions.


In her book Open Socrates, Agnes Callard draws our attention to the differences between problems and questions. I’ll get to those in a bit, but the fundamental realization I had was that until recently all we could use computers (CPUs, spreadsheets, the internet) for was solving problems. This goes all the way back to Alan Turing, who designed the Turing test: he turned the question of what it is to think into the problem of how to detect thought. As Callard mentions, LLMs smash the Turing test, but we still can’t quite accept the result as proof of thinking. What is thinking, then? What are problems? What are questions? How do we answer questions?

Problems are barriers that stand in your way when you are trying to do something. You want to train a deep learning algorithm to write poetry; how to get training data is a problem. You want something soothing for lunch; getting the recipe for congee is the problem. The critical point is that as soon as you have the solution, the data, the recipe, the problem disappears. This is the role of technology.

When we work with computers to solve problems, we are essentially handing off the task without caring whether the computer wants to, or even can want to, write poetry or have a nice lunch. So we ask the LLM to write code; we command Google to give us a congee recipe. Problems don’t need a shared purpose, only methods to solve them to our satisfaction. Being perpetually dissatisfied with existing answers is the stance of science.

Science and technology are thus tools for moving towards dealing with questions. Unlike problems, which dissolve when you solve them, questions give you a new understanding of the world. The thing with asking questions is that there is no established way, at least in your current state, to answer them. Thus asking a question is the first step of starting a quest. In science, the quest is a better understanding of something, and you use technology along the way to dissolve the problems that stand in your way.

AI lets us explore questions with, rather than merely through, computers. Granted, the most common use of AI is still to solve problems, but LLMs, with their ability to carry on a back-and-forth chat in natural language, provide the affordance to ask questions. Especially the kind that seem to come pre-answered, because we are operating from a posture where not having an answer would dissolve the posture altogether.

The Socratic Co-pilot

As a scientist, the question “what is it to be a good scientist?” comes pre-answered for me. Until I am asked this question I have not really thought about it, yet I rush to provide answers: scientists conduct experiments carefully, they know how to use statistics, they publish papers, and so on. However, none of this answers what it is to be a good scientist. Playing this out with an AI, I assert “rigorous statistics”; the AI counters with an anecdote about John Snow’s cholera map, and I’m forced to pivot. None of these by itself answers the root question, but the exchange generates problems that can be answered or agreed on. This is knowledge.

Knowledge draws boundaries, or, as I have explored earlier, creates places around the space that you wish to explore. In the space of “being a good scientist”, we can agree that use of the scientific method is an important factor. Depending on who you are, this could be the end of the quest.

Even if no methodology exists for a given problem, simply approaching it with an inquisitive posture creates a method, however crude. In his essay “What Is It Like to Be a Bat?”, Thomas Nagel tackles an impossible-to-solve problem, but a great question, through a thought experiment. If I were to undertake this, I might try clicking in a dark room, or hanging upside down. Okay, maybe not the last bit, but only maybe. Even this crude approach has put me in the zone to answer the problem. Importantly, my flapping about has created surface area where others can criticize, as Nagel’s was. Perhaps future brain-computer-interface chips will actually enable us to be a bat. Lacking such technology, however, this is better than nothing, as long as you are interested in inquiring about bat-ness.

This kind of inquiry, this pursuit of answering questions, is thinking. Specifically, as Callard puts it, thinking is “a social quest for better answers to the sorts of questions that show up for us already answered”. Breaking that down: it’s social because it’s done with a partner who disagrees with you, since they have their own views about the question. It’s a quest because both parties are seeking knowledge. The last bit, about questions arriving already answered, is worth exploring.

Why bother answering questions you already have answers to? The need is easiest to see when you know nothing about a subject. Say you knew nothing about gravity, and your answer to why you are stuck to the earth was that we are beings of the soil and to the soil we must go; the soil always calls us. If that is your worldview, then you already have the answer. The only way to arrive at a better answer, gravity, is to have someone question you on the matter, refuting specific points based on their own point of view. This may come in the form of a conversation, a textbook, a speech, etc. I suspect this social role may soon be played by AI.

Obviously hallucinations themselves aren’t great, but the ability to hallucinate is. In the coming years I expect AI will gain significant access to knowledge, not just through training but through reference databases containing data broadly accepted as knowledge. In the process we will probably have to undergo significant social pains to agree on what constitutes Established Knowledge. Such a system would enable LLMs to play the role of Socrates and help users avoid falsehoods by questioning the beliefs they hold.

Until now computers couldn’t play this role because there wasn’t enough “humanness” involved. In the bat example, a bat cannot serve as Socrates or as the interlocutor to a human partner because there is no shared worldview. LLMs, trained on human-generated knowledge, would have enough in common with us to provide a normative mirror. The AI comes with the added benefit of having both infinite patience and no internal urge to be right. This would allow the quest to arrive at an answer that is satisfactory to the user at every level of understanding. LLMs can be useful even before they gain access to established knowledge: simply by providing a surface on which to hang questions, they can make the user adept at the art of inquiry.

So the next time you have a chat with your pet AI, understand that it starts as a session of pure space. Each word we put in ties the AI down to specific vantage points that help us explore. Go ahead: pick a question you think you’ve already answered and let the machine argue with you.


Reflection on the VGR bookclub

June 28, 2025

This bookclub is the most fun thing I’ve done in a decade… this includes starting, expanding, and leaving a startup ;)

The last fun thing was post-PhD rapid exploration of applying AI for bio, which laid the foundation for where I am now.

The book club feels like a philosophical anchoring for understanding the complexities of the world and, as I turn 40, my place in it.

Read more about it: The Modernity Machine