Rules of the Roomba

I have spent the past 10 minutes watching a Roomba zigzag its way across the floors of a house on the Eastern Shore of Maryland. The motion of the machine is simple but mesmerizing. It starts along one direction of motion only to seemingly change its mind, spin, and then head off in a new direction. At times it hits a wall and turns around as if deciding where to go next, and then, with no prompting, the decision is made, and it starts moving again. I found it hard to look away. In its seemingly random steps, I thought of the times I saw squirrels or robins search for food. Yet the Roomba was not alive, at least not by any informally accepted definition of “alive,” and it was not necessarily searching for anything. Rarely would the Roomba find dirt, for the room was quite clean, but it kept at its task. It had wires through which electrons flowed, and circuit boards, and programs that directed its motion. The programs gave the Roomba a directive, perhaps some mechanical analog of a purpose, but all of its parts were engineered. Likely another machine facilitated much of this engineering, and maybe there was a machine before that one that facilitated the creation of the second. But eventually the recursion would find its starting place, and that place would be the hands and the mind of a human.

The Roomba’s human handlers programmed it to move about mostly flat spaces, bristles helicoptering beneath it as it sought to clean some terrain. The Roomba seems to know only a few rules to achieve this objective:

  • i. Start moving in one direction
  • ii. Randomly change direction
  • iii. Turn when an obstacle is encountered
  • iv. Repeat until
    • iv. a. The entire area has been traversed, or
    • iv. b. Some time has elapsed
  • v. Return to the charging port

This is certainly some representation of intelligence, some representation of an entity responding to an environment as it seeks to satisfy an objective. We cannot answer the question of whether the Roomba is thinking, but it is certainly computing things according to conditions around it and doing so continuously. In these acts, there are lessons in intellect, even if they extend only from its limited representation through silicon and code. If we can look beyond the materials of the entity that manifests that intellect and watch only the consequences, we are able to see models for our own biological intelligence, models that help us understand intellect’s power and its limitations. And in the Roomba that sits a few feet away from me, one sees the result of a life governed by a few rules.


Like a metallic raft on a flat sea of wood, the Roomba confidently traversed the dining room floor. It did this as if lured by a beacon, never deviating from a straight line and continuing on for many meters. Soon the Roomba was deep into the kitchen and then it suddenly stopped and rotated, pointing itself in the direction of the open bathroom.

Through this door the Roomba again moved with swagger, zigging and zagging across the blue-tiled floor. The bathroom door hung on hinges that opened inward, and eventually the Roomba found it. Following its rule to turn after hitting an object, the Roomba circled the door, eventually moving around to the other side. On this side, it continued its pattern of motion: hitting a vertical surface, turning, then moving, then hitting another vertical surface. But after it hit the bathroom wall and turned around, the Roomba found the back of the bathroom door. The door was light enough that the Roomba could move it. And so the door was moved and pushed steadily closed until the Roomba was trapped inside the bathroom. I only realized this when the buzz of the motor seemed to drop an octave, the sound shielded by the wood of the now-enclosed room. I went to the bathroom, opened the door, and saw the Roomba continuing as usual, oblivious to the fact that it had locked itself in a space that it could now never escape.

I waited until the Roomba was far from the door and then opened it wide. I held it there and waited, believing that the Roomba would eventually find its way out of the bathroom again. The pivotal moment arrived when the circular machine came up to the bathroom’s entrance, but then it seemed to hit something. It then turned around and went back the way it came. I squatted down to get a better look at the bathroom entrance and saw the problem. The floor of the bathroom was an inch lower than the floor outside. Entering the bathroom, the Roomba would go down the one-inch step, but when its trajectory led it back to this entrance, the machine would hit that one-inch step from the lower side, and that small step might as well have been a mile-high wall. So the Roomba could enter this bathroom, but it could never leave. By holding the door open, I believed I was helping the robot find its way out again, but even if the door had not been there, the Roomba would have been trapped.

The rules that governed the Roomba’s motion, the rules that were effectively its purpose for existing, had taken it far from its origins on the wooden planks of the living room charging port to a distant and exotic place of ceramic and tile. But continuing to follow its rules had now trapped it in this new and foreign place, and through the rules alone it could not be saved. I wonder: if the Roomba had been given just one or two more rules, would it have been able to foresee and then avoid its trapping? Or would those additional rules have merely extended the Roomba’s agency into additional dimensions, ones with their own self-trapping doors and one-way traversable ledges?

The Roomba is simple, and therein lies its benefit to us. It is what the physicist calls a “toy model,” a representation of a situation of interest that strips away all but the most salient parts. The Roomba’s half-a-dozen rules could not save it from an environment that the rules had not prepared it for. And worse still, it seemed incapable of learning new ones. But the ability to learn additional rules when presented with unfamiliar terrain is itself just a new set of rules, rules with their own limitations despite our intuitive sense that the ability to learn is boundlessly transcendent. The Roomba’s rules and the place it found itself make me wonder whether it would have been better to have no rules, no structure by which to decide action. Would it then have been saved? Or was it fundamentally limited, doomed to be trapped because of the constraints in how it was constructed?
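The trap can be made literal in a few lines of code. What follows is a sketch of my own, not anything from the Roomba’s actual firmware: the grid, the headings, and the turn probability are all invented for illustration. A robot on a grid of floor heights follows rules i through iii, and one column sits an inch lower than the rest, enterable but not exitable.

```python
import random

# A toy model of the Roomba (my own sketch, not iRobot's algorithm):
# a robot on a small grid of floor heights. The robot may step down a
# ledge but cannot climb back up it, like the bathroom's one-inch step.
def run_roomba(grid, start, steps, seed=0):
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    headings = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    r, c = start
    d = rng.choice(headings)          # rule i: start moving in one direction
    visited = {start}
    for _ in range(steps):
        if rng.random() < 0.2:        # rule ii: randomly change direction
            d = rng.choice(headings)
        nr, nc = r + d[0], c + d[1]
        off_grid = not (0 <= nr < rows and 0 <= nc < cols)
        if off_grid or grid[nr][nc] > grid[r][c]:  # an upward ledge is a wall
            d = rng.choice(headings)  # rule iii: turn at an obstacle
        else:
            r, c = nr, nc
            visited.add((r, c))
    return (r, c), visited            # rules iv and v (stop, return) omitted

# Left three columns: hallway at height 1. Rightmost column: the sunken
# bathroom at height 0. The step down is allowed; the step back up is not.
grid = [
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
]
end, visited = run_roomba(grid, start=(0, 0), steps=10_000, seed=42)
print("final cell:", end, "cells visited:", len(visited))
```

Run long enough, the wandering robot almost surely blunders into the rightmost column, and from then on every future position lies in that column. No amount of patience with rules i through iii gets it back out; only a genuinely new rule, one that knows what a ledge is, could.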

What in the human’s life is the analog of the door that slowly shuts as one lives according to rules that have taken one far from a place that was home? What in the human’s life is the boundary that cannot be traversed because of limitations in how we are built, either by biology or by our experiences?

In these questions, I am reminded of the standard observation about the course of life and about that crisis that comes in the middle when one wonders whether what one has achieved was worth all the effort. The claim is that mid-life crises are like plateaus. It is possible to get past them but only if one abandons the foundational premises of work that allowed one to climb to the plateau in the first place. Only if one relinquishes the rule can one escape the place that the rule has led one to and trapped one within. Of course, knowing this does not always help one at the moment of a decision, and does not always help one find a way to open the door or step over the boundary. It does not lead to a rule that can ultimately save one. But this second-order knowledge—the knowledge that knowledge will not always save us—certainly has the potential to be a saving grace. It can allow us to see a problem beyond its initial definition and see its connections to the world around us, to how we have been led to this place where the problem has acquired new import even if it cannot be solved. And in this we see that what our intellect has given us—a thing that is currently denied the Roomba—is an intimate knowledge of its own limitations. Not merely a rule that one should learn in new environments, but that despite all one knows or learns, some problems remain intractable for reasons we cannot discern.

To some, this may seem pessimistic but to me it is beautiful. For although we know that limitations and boundaries exist, we do not know where they are and thus there exists for us an entire world of possibility and the potential to better understand our world by hitting upon the walls beyond which intellect cannot take us. Through such obstacles, we may better understand the intellect itself, and may better understand ourselves. In this too we have rules.

  • i. Start from somewhere.
  • ii. See where you can go.
  • iii. What you find may not save you.

Still, what you find will have taken you far from where you started, and within this new place, you may better understand why the journey was necessary in the first place.


