There is a particular quality of light in a small office in Brooklyn at ten in the morning. Not the ambitious light of a corner suite in Midtown, not the fluorescent wash of a hospital ward. Something quieter. A window behind the desk letting in whatever the street offers, which in this part of the borough is mostly the side of another building and a narrow channel of sky. The light falls on a desk covered in papers, on a monitor displaying a Google Sheet with a hundred columns, on the face of a woman who has been reading emails for two hours and will continue reading them for two more.
Her name does not matter for this story, but what she does matters enormously. She works at Big Minds Tiny Hands, an Early Intervention agency in New York City. Every morning, referral emails arrive. Dozens of them, sometimes more. Each one represents a child, usually under three years old, whose pediatrician or parent or social worker has identified a developmental delay and requested services. The emails contain names, addresses, insurance codes, dates of birth, parent contact information, referral sources, diagnostic codes. All of it must be entered into a spreadsheet before the agency can act on it.
So she reads an email. She finds the child's name. She types it into a cell. She finds the address. She types it into the next cell. She finds the insurance code, the date of birth, the parent's phone number. She types each one, cell by cell, field by field, email by email. Four hours. Sometimes six. Every single morning.
I want you to sit with that for a moment. Not the inefficiency of it. Not the obvious waste. I want you to imagine what it feels like at hour three. The emails start to blur. The names stop being names and become strings of characters. The addresses stop being places where families live and become data to be transcribed. You have been doing this for three hours. Your eyes ache. Your fingers know the path from inbox to spreadsheet so well that the motion has become automatic, which means your mind is free to wander, which means you are aware, fully and painfully aware, that you are spending the best hours of your working day performing a task that a moderately well-configured script could do in minutes.
But you are not a script. You are a person with a degree and professional expertise and a genuine vocation for helping children. And every hour you spend typing insurance codes into a spreadsheet is an hour you are not spending on that vocation. The weight of this accumulates. Not dramatically, not as crisis, but as a slow, grinding erosion of purpose. You chose this work because it mattered. The data entry is the tax you pay for the privilege of doing work that matters, and the tax keeps going up.
I walked into this office in the fall of 2025, and I watched her work for an entire morning before I said a word about technology.
To understand why those four hours matter, you have to understand what Early Intervention actually is. Not the policy abstraction. Not the line item in a municipal budget. The thing itself, as it happens in living rooms and kitchens across Brooklyn.
A child is eighteen months old. She is not babbling the way other eighteen-month-olds babble. She does not point at things. She does not respond when her name is called. Her pediatrician refers her for an evaluation, and the evaluation confirms what her parents already suspected: she has a significant speech delay, possibly indicative of something broader. She is referred to an Early Intervention program.
What happens next, if the system works, is that a speech-language pathologist comes to her home twice a week. Not to a clinic, not to a hospital. To her living room. The therapist sits on the floor with her. They play. They sing. They practice sounds. The therapist models language for the parents, shows them how to narrate daily routines, how to create opportunities for the child to communicate. The sessions are forty-five minutes. They look like play. They are among the most consequential forty-five minutes in that child's developmental trajectory.
The neuroscience is unambiguous on this point. The first three years of life represent a period of neural plasticity that does not recur. The brain is building its architecture, laying down the pathways that will support language, social cognition, motor planning, emotional regulation. Intervention during this window does not merely help. It reshapes the developmental landscape. A child who receives consistent speech therapy between eighteen months and three years does not just catch up; she builds neural infrastructure that would not have formed without the intervention. The window closes. What is built during it persists. What is not built during it becomes exponentially harder to build later.
This is what makes an extra week of delay something other than a bureaucratic inconvenience. Every day that a referral sits in an inbox, waiting to be manually transcribed into a spreadsheet so that the intake process can begin so that an evaluation can be scheduled so that services can be authorized, is a day subtracted from a finite window. The child does not experience the delay as paperwork. She experiences it as silence. As one more day without the sounds and structures and interactions that her brain is ready to receive, that her brain is hungry to receive, that her brain will not always be ready to receive.
The woman typing emails into a spreadsheet knew this. She knew it better than I did. That was the particular cruelty of her situation: she understood exactly what was at stake with every hour she spent on data entry, and she did it anyway, because there was no other way to get it done. The system demanded transcription before it would permit action. So she transcribed.
I spent the first two weeks doing nothing that looked like engineering. I sat beside her. I watched her work. I asked questions that must have seemed obvious: Why do you start with this field? What happens when the insurance code is missing? How do you know which sheet to put this in? Why is this column formatted differently from that one?
Ivan Illich wrote in "Tools for Conviviality" that the most dangerous tools are the ones that create dependency, that deskill the user, that replace human judgment with institutional process. The spreadsheet was not the problem. The spreadsheet was fine. The problem was the gap between the emails and the spreadsheet, a gap that the institution had decided to fill with a human being performing mechanical labor. The tool had created a dependency: without the transcription, nothing moved. And the dependency had deskilled the process: the rich, contextual understanding that this woman brought to each referral, her ability to notice that a family's address suggested they might also qualify for other services, her recognition that a particular referring physician tended to understate severity, all of that was being consumed by the act of copying text from one rectangle on a screen to another rectangle on a screen.
Matthew Crawford, in "Shop Class as Soulcraft," describes the difference between work that engages human judgment and work that merely occupies human time. Crawford argues that the modern economy has a particular talent for disguising the second kind as the first, for creating jobs that require a human body in a chair but not a human mind in the work. The woman at Big Minds Tiny Hands was not doing knowledge work when she transcribed emails. She was doing something closer to what Crawford calls "clerking," the performance of a task that has been so thoroughly routinized that it could be done by anyone, or by anything, but that has not yet been automated because no one with the technical capacity to automate it has bothered to look.
I looked. And the more I watched, the more I understood not just what needed to be built, but what needed to be preserved. Her expertise was real. Her contextual understanding was irreplaceable. The system I built could not be a replacement for her judgment. It had to be a removal of everything that was not her judgment, so that her judgment could finally breathe.
The emails were the hard part. Not because email is technically complex, but because referral emails are, to borrow a term from the field, "semi-structured," which is a polite way of saying that each referral source had apparently decided, independently and with great conviction, on its own format for communicating the same information.
Some emails arrived in neat tables. Child's name here, date of birth there, insurance code in the third row. These were the easy ones, the ones that made you believe the problem was simple. Then you would open the next email and find three paragraphs of prose, the child's name buried in the second sentence, the address split across two lines with the zip code somehow in the subject line, the insurance information expressed as a parenthetical aside in a sentence about the referring physician's scheduling preferences.
There were emails where the parent's phone number appeared twice, differently, with no indication of which was correct. Emails where the insurance code was a valid Medicaid number but for the wrong state. Emails where the child's name was spelled one way in the greeting and another way in the body. Emails where critical fields were simply absent, and the absence was not marked by a blank space or a placeholder but by the field's total nonexistence, as though the referral source had never heard of it.
A regex-based approach would have shattered against this reality within hours. Pattern matching works when patterns exist. These emails were not patterned. They were human, in all the messy, inconsistent, context-dependent ways that human communication is human.
What I built instead was a pipeline using Google's Gemini AI, running entirely within Google Apps Script, which meant zero infrastructure costs, zero new tools for the agency to learn, zero additional logins. The system lived inside the Google Workspace they were already using, which was a deliberate choice and, I would argue, the most important technical decision in the entire project.
The pipeline had four stages, but describing them as stages makes the process sound more mechanical than it was. The first stage identified the referral source, because knowing who sent the email determined everything about how to read it. A referral from a hospital used different conventions than a referral from a pediatrician's office, which used different conventions than a referral from a concerned parent who had found the agency's number online. The system learned to recognize these sources, not by matching sender addresses, which changed constantly, but by reading the shape and language of the email itself.
The second stage extracted the data. This is where the AI earned its keep. I wrote system prompts that encoded years of domain knowledge, knowledge I had absorbed by sitting next to the woman who did this work manually. The prompts did not say "find the phone number." They said, in effect, "the phone number may appear in the header, the body, or the signature; it may be formatted with dashes, dots, parentheses, or spaces; it may be preceded by 'phone,' 'tel,' 'cell,' 'mobile,' 'contact,' or nothing at all; if two numbers appear, the one closer to the parent's name is more likely to be correct." Every field had instructions like this. Every instruction reflected something I had learned by watching.
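The prompts themselves stayed in plain language, but the shape of that domain knowledge can be sketched in code. Here is a hypothetical helper illustrating the "if two numbers appear, the one closer to the parent's name is more likely correct" heuristic, written as a deterministic fallback rather than as the actual prompt the system used; the function names and the regex are my illustrations, not the production code.

```javascript
// Illustrative sketch only: the real system expressed this heuristic as
// prompt instructions to Gemini, not as a regex. This shows the same rule
// as deterministic post-processing.

// Find phone-like sequences, tolerant of dashes, dots, parentheses, spaces.
function findPhoneCandidates(body) {
  const pattern = /\(?\d{3}\)?[\s.\-]?\d{3}[\s.\-]?\d{4}/g;
  const candidates = [];
  let match;
  while ((match = pattern.exec(body)) !== null) {
    candidates.push({ number: match[0], index: match.index });
  }
  return candidates;
}

// Normalize to bare digits for storage in the sheet.
function normalizePhone(number) {
  return number.replace(/\D/g, "");
}

// If several numbers appear, prefer the one closest to the parent's name.
function pickParentPhone(body, parentName) {
  const candidates = findPhoneCandidates(body);
  if (candidates.length === 0) return null;
  const nameIndex = body.indexOf(parentName);
  if (nameIndex === -1) return normalizePhone(candidates[0].number);
  candidates.sort(function (a, b) {
    return Math.abs(a.index - nameIndex) - Math.abs(b.index - nameIndex);
  });
  return normalizePhone(candidates[0].number);
}
```

Given an email like `"Referred by Dr. Lee (tel 212-555-0199). Parent: Maria Gomez, cell (718) 555-0142."`, the proximity rule picks the second number for the parent, which is exactly the kind of judgment that a naive "find the phone number" instruction gets wrong.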
The third stage validated. This was where the domain knowledge became most explicit. Insurance codes had to match known formats. Dates had to be plausible: a child referred for Early Intervention should be under three years old, so a date of birth that implied an age of seven was a flag, not an error to be silently accepted. Addresses had to resolve to locations within the agency's service area. The validation rules were not complex individually, but collectively they represented something valuable: the institutional memory of an agency that had been doing this work for years, crystallized into code.
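A minimal sketch of what such validation rules might look like, with illustrative field names, an invented insurance-code pattern, and a made-up zip list standing in for the agency's actual service area. The point is the structure: each rule produces a flag for review, never a silent rejection.

```javascript
// Hypothetical validation sketch. Thresholds, patterns, and zip codes are
// illustrative placeholders, not the agency's real rules.

// Age in years between two ISO dates.
function ageInYears(dobIso, referralIso) {
  const ms = new Date(referralIso) - new Date(dobIso);
  return ms / (365.25 * 24 * 3600 * 1000);
}

function validateReferral(record) {
  const flags = [];

  // A child referred for Early Intervention should be under three.
  // An implausible age is a flag for a human, not a silent correction.
  const age = ageInYears(record.dateOfBirth, record.referralDate);
  if (!(age >= 0 && age < 3)) {
    flags.push("implausible-age");
  }

  // Insurance codes must match a known shape (pattern is illustrative).
  if (!/^[A-Z]{2}\d{5,9}$/.test(record.insuranceCode || "")) {
    flags.push("insurance-format");
  }

  // Addresses must resolve to the service area (zip list is illustrative).
  const serviceZips = new Set(["11201", "11215", "11217"]);
  const zip = (record.address || "").match(/\b\d{5}\b/);
  if (!zip || !serviceZips.has(zip[0])) {
    flags.push("out-of-area");
  }

  return flags; // empty array means the record passed every rule
}
```

A clean record returns an empty flag list; a record with a seven-year-old's date of birth, a malformed code, or an out-of-area address comes back with named flags that land in the review queue.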
The fourth stage wrote to the spreadsheet. Column mapping, proper formatting, the small details that make the difference between data that can be used and data that has to be cleaned before it can be used. The system also flagged anomalies: missing fields, unusual values, potential duplicates. These flags were not errors. They were invitations for human review, moments where the system said, in effect, "I have done what I can; a person should look at this."
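The final stage can be sketched in the same spirit: map the extracted record onto the sheet's column order, and turn missing fields into named review flags rather than silent blanks. The column list here is a five-column illustration standing in for the real sheet's hundred, and the Apps Script write shown in the comments is an assumption about shape, not the production code.

```javascript
// Hypothetical sketch of the write stage. Column names are illustrative;
// the real sheet had roughly a hundred columns.

const COLUMNS = ["childName", "dateOfBirth", "address", "insuranceCode", "parentPhone"];

// A record becomes a row in the sheet's column order, with empty strings
// (never undefined) for missing fields so formatting stays consistent.
function toRow(record) {
  return COLUMNS.map(function (col) {
    return record[col] != null ? String(record[col]) : "";
  });
}

// Missing fields become review flags, i.e. invitations for a human to look.
function anomalyFlags(record) {
  return COLUMNS.filter(function (col) {
    return record[col] == null || String(record[col]).trim() === "";
  }).map(function (col) {
    return "missing-" + col;
  });
}

// Inside Apps Script, the actual write and the yellow highlight would look
// something like this (sketch, assuming a sheet named "Referrals"):
//   const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Referrals");
//   sheet.appendRow(toRow(record));
//   if (anomalyFlags(record).length > 0) {
//     sheet.getRange(sheet.getLastRow(), 1, 1, COLUMNS.length).setBackground("yellow");
//   }
```

Keeping the mapping and flagging pure, with the `SpreadsheetApp` call confined to a thin outer layer, is what makes a pipeline like this testable without touching a live spreadsheet.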
The first morning the system ran, I was there. Not because I needed to be. The system was tested, validated, stable. I was there because I wanted to see.
She arrived at her usual time. She sat down at her desk. She opened her email, which was habit, the way you check your phone when you wake up even when you are not expecting a message. The referral emails were there, the same as every morning. Dozens of them. But next to her email, in the Google Sheet that had consumed so many of her mornings, the data was already populated. Names, addresses, insurance codes, dates of birth. All of it extracted, validated, formatted, and entered. The anomalies were highlighted in yellow, waiting for her review.
She scrolled through the sheet. She checked a few entries against the original emails. She corrected one phone number that was wrong in the sheet, a transposed digit from a typo in the original email that the AI had faithfully reproduced. She cleared the anomaly flags on three entries that were fine, and she followed up on two that genuinely needed attention: one with a missing insurance code, one with an address that did not match the referral source's service area.
Fifteen minutes. The whole thing took fifteen minutes.
She sat there for a moment after she finished, and I do not want to overdramatize what happened next, because it was not dramatic. There were no tears, no speech, no cinematic moment of transformation. She simply looked at the clock, and it was quarter past nine, and she had the entire morning ahead of her.
She picked up the phone and called a family whose intake had been pending. She reviewed a service plan that needed updating. She scheduled an evaluation that had been waiting for an opening in her calendar, an opening that had never existed before because her mornings were consumed by transcription. She talked to a colleague about a child whose progress had plateaued and who might benefit from a different therapeutic approach.
She did, in other words, her actual job. The work she had trained for. The work she had chosen. The work that required her particular intelligence, her particular empathy, her particular knowledge of the families she served. She did the work that no system I could build would ever be able to do.
I sat at a table nearby, pretending to work on something else, and I watched her make phone calls and write notes and consult with colleagues, and I felt something that I have not felt on many engineering projects: the specific, physical satisfaction of watching unnecessary suffering end. Not dramatic suffering. Not the kind that makes the news. The quiet kind. The kind that accumulates in the shoulders and behind the eyes and in the particular weariness of a person who knows they are capable of meaningful work and is instead performing meaningless labor. That suffering was gone. Not reduced. Gone.
The children whose referrals arrived that morning entered the system hours earlier than they would have the day before. The intake process began sooner. The evaluations would be scheduled sooner. The services would start sooner. For a two-year-old with a speech delay, "sooner" is not an administrative convenience. "Sooner" is measured in neural connections formed or not formed, in words acquired or not acquired, in a developmental window that opens and closes regardless of how long it takes an agency to transcribe an email.
I have thought a great deal about what this project taught me, and the lesson is not what I expected it to be when I started.
I expected to learn something about AI, about prompt engineering, about the technical challenges of extracting structured data from unstructured text. And I did learn those things. But the deeper lesson was about engineering itself, about what it is for, about the relationship between the builder and the people who use what is built.
Illich warned that tools, left unchecked, tend to become self-serving. They create new needs to justify their own existence. They grow in complexity until they require specialists to operate, specialists who become gatekeepers, who become a new class of dependency. I felt this temptation. I could have built a custom web application with a database, a dashboard, role-based access control, analytics, a notification system. I could have built something that would have looked impressive in a portfolio, something that would have demonstrated my technical range.
Instead, I built on Google Sheets. On Gmail. On Apps Script. On the tools the agency already used, in the environment they already understood, with zero new logins and zero training required. The woman who had spent four hours a day on data entry did not need to learn a new system. She opened the same spreadsheet she had always opened. It was simply full now, filled by a process that ran in the background, invisible and reliable, the way good infrastructure should be.
Crawford writes that the most meaningful work is work that connects the worker to the consequences of their labor, work where you can see, directly and immediately, the effect of what you have done. The modern economy, he argues, systematically severs this connection, inserting layers of abstraction between the worker and the outcome until the work becomes meaningless, even when the outcome is important. What I did at Big Minds Tiny Hands was, in a sense, the opposite of what Crawford describes: I removed a layer of abstraction. The woman at the desk had always been connected to meaningful outcomes. The data entry was the layer that separated her from them. I removed the layer. I gave her back the connection that the process had taken from her.
This is, I think, what technology is supposed to do. Not to replace human beings, but to remove the obstacles between human beings and the work that only human beings can do. Not to automate judgment, but to automate everything that is not judgment, so that judgment has room to operate. Not to make people unnecessary, but to make the unnecessary parts of people's days disappear, so that the necessary parts can expand to fill the space.
The most sophisticated engineering decision I made on this project was choosing not to engineer. Not to build the custom app. Not to design the database schema. Not to architect the microservices. Not to create the dashboard with the charts that would have looked so compelling in a demo. I chose instead to build the smallest possible thing that would solve the actual problem, to embed it in the tools that already existed, and to make it so invisible that the person it served barely had to think about it. She did not need to think about my system. She needed to think about children and families and service plans and evaluations. My job was to give her back the hours to think about those things. Nothing more.
Sometimes the most sophisticated engineering decision is choosing not to engineer at all. This is not modesty. It is not anti-intellectualism. It is a recognition that engineering is not an end in itself. It is a means, and the end is always human. The end is a woman at a desk in Brooklyn who now spends her mornings doing work that matters, instead of work that a machine can do. The end is a child in a living room, sitting on a carpet with a speech therapist, making sounds she could not make last month. The end is the distance between those two facts, the woman and the child, shortened by a few hours, which in the life of a developing brain is not a small thing at all.
Illich wrote that a convivial tool is one that enlarges the user's capacity to act, that serves the user rather than demanding service from the user, that increases autonomy rather than creating dependency. Crawford wrote that meaningful work is work that allows the worker to see the effect of their labor, to exercise judgment, to be present in the act of making. What I built was, by these definitions, a convivial tool: it enlarged one person's capacity to do the work that mattered, it demanded nothing of her except the fifteen minutes of review that her expertise made irreplaceable, and it made visible the connection between her labor and its consequences, a connection that had been buried under four hours of daily transcription.
I do not think this is a story about artificial intelligence. I think it is a story about attention. About what we ask people to pay attention to, and what we allow them to ignore. About the difference between work that requires a human mind and work that merely requires a human body. About the quiet violence of systems that consume expertise in the service of transcription, and the quiet repair of building something that gives that expertise back.
The woman at Big Minds Tiny Hands still opens her email every morning. The referrals still arrive. The system still runs. And every morning, by quarter past nine, she is doing the work she was meant to do. Somewhere in Brooklyn, a child is making sounds she could not make last month. The distance between the spreadsheet and the living room is a little shorter now.
That is all I built. It was enough.
Sources and influences: Ivan Illich, Tools for Conviviality (1973), on the distinction between convivial and manipulative tools, and the tendency of institutions to subordinate human judgment to procedural demand. Matthew B. Crawford, Shop Class as Soulcraft: An Inquiry into the Value of Work (2009), on the difference between work that engages human intelligence and work that merely occupies human time. On the neuroscience of early intervention and critical periods of neural plasticity: Shonkoff, J.P. and Phillips, D.A., From Neurons to Neighborhoods: The Science of Early Childhood Development (2000). On the effectiveness of Early Intervention services in New York State: New York State Department of Health, Early Intervention Program, Clinical Practice Guidelines.