The work was introducing product thinking to a regulator that had never had a seat for it.
The gap I saw in my first ninety days at TLC was less about data than about discipline. Systems did just enough to operationalize the law and no more. Decisions got made on the strength of whoever's anecdote was loudest. There was no shared way of asking who actually uses this, what they are trying to do, and whether the system is helping them or just processing them. I'd been hired as Director of Analytics to bring data discipline to the agency, which regulates New York City's for-hire vehicle industry: the vehicles, the drivers, and the businesses behind them. What I argued for, built out, and ultimately left behind was something larger: a product practice inside a regulator that had never had one. I wrote the diagnosis into a proposal (a team, a way of working, a theory of where the discipline should sit) and pitched it to a few commissioners. They bought in.
The discipline: who, and what they worked on
I started by upskilling the analysts who were already there. The agency had inherited talented people who hadn't been given new tools or methods, and the institutional knowledge they carried was the asset I had the least appetite to throw away. Familiar faces using new techniques opened doors that an outside team would have spent a year trying to open.
The first wave of external hires came next: a data scientist, a data engineer, a more senior analyst. None had worked in government. They were chosen specifically to do work that would move a visible needle quickly, because credibility in this kind of agency is bought one delivered improvement at a time. Once we had a few of those wins, I brought in a deputy director who'd run the data group as a data product manager, then a designer and a researcher. By that point I'd shifted into a true product role and we could start looking at problems no one had yet thought of as product problems.
The sequence — trust first, capability second, full discipline third — is the part I'd repeat anywhere. The institution rejects transplants. It accepts grafts.
The harder question — and the one I think agencies trying to do this kind of work most often get wrong — is what to work on at all. When I joined, prioritization happened the way it does almost everywhere in resource-strapped government: feature-by-feature, request-by-request, anecdote-by-anecdote. A commissioner heard a complaint and a project got launched. A legacy system's quirk caused enough pain to escalate and a fix went into the backlog. The work got done, but the connection between any given output and an outcome the agency actually cared about was rarely examined.
We changed that. Once the team had enough credibility to insist on it, our posture was that we wouldn't take on a project unless we could forecast a quantitative outcome it would move. Wait times. Application processing time. Resolution rates. Repeat-contact volume. Before scoping anything, we asked: if we do this work and it succeeds, what number changes, and by roughly how much? Projects that couldn't answer that question got pushed back on. Projects that could — even when the forecasted lift was modest — got prioritized over flashier work with vaguer outcomes.
This is the part of product thinking I think is hardest to import into a regulatory environment. Regulators are wired to deliver features specified in advance — by laws, by contracts, by political commitments — and asking what outcome a piece of work will move can read as insubordination. It implies the originating request might not be the right thing to work on, which is a question many agencies aren't structured to entertain. The way we made it stick was by always answering the question first, ourselves, before asking it of anyone else — bringing forecasts to scoping conversations rather than asking stakeholders to justify their requests. That single shift reshaped the team's mandate over time more than any individual project did.
What the team actually shipped
The work I'd point to first is what we did with customer service. TLC had multiple contact channels — a call center, a web portal, an email inbox, and a walk-in center that operated like a DMV — managed separately, instrumented differently, and sharing no view of who was contacting the agency or why. The team pooled the data across all of them.
What surfaced was a pattern that was invisible inside any single channel and obvious once you could see across them: people were getting bounced between units — the same person showing up across the call center and the email queue and the walk-in center, working on what was effectively one issue, never reaching resolution. It wasn't that the channels were handing people off badly. It was that the units behind the channels weren't talking to each other, and the only place that pattern showed up was in the data once you looked at it together.
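As a minimal sketch of the pooling step, assuming hypothetical schemas and a person identifier already resolved to a common key (in practice the channels shared no such key, and resolving identity across them was part of the work), flagging cross-channel repeat contacts might look like:

```python
import pandas as pd

# Hypothetical per-channel contact logs. Real exports had different
# schemas; assume identities are already resolved to a shared
# person_id for the purposes of this sketch.
calls = pd.DataFrame({"person_id": ["A1", "B2"],
                      "date": ["2021-03-01", "2021-03-02"]})
calls["channel"] = "call_center"

emails = pd.DataFrame({"person_id": ["A1"], "date": ["2021-03-03"]})
emails["channel"] = "email"

walkins = pd.DataFrame({"person_id": ["A1", "C3"],
                        "date": ["2021-03-05", "2021-03-04"]})
walkins["channel"] = "walk_in"

# Pool all channels into a single agency-wide contact history.
pooled = pd.concat([calls, emails, walkins], ignore_index=True)

# A bounce candidate: one person appearing in more than one channel,
# invisible inside any single channel's own queue.
channels_per_person = pooled.groupby("person_id")["channel"].nunique()
bounced = channels_per_person[channels_per_person > 1]
print(bounced)
```

The point of the sketch is the shape of the analysis, not the tooling: the pattern only exists in the pooled view, which is why no single channel's own reporting could surface it.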
We could see it because we'd pooled the data. More importantly, because we'd built the team with a designer, a researcher, and a deputy who could think in product terms, we could do something about it. We prototyped fixes: single-owner cases for repeat contacts, shared customer history across units, clearer routing. We tested the changes. We worked with the operating units to roll them out. Wait times improved, and the bounce pattern receded.
An analytics team alone could have surfaced the pattern. A product team alone wouldn't have had the data to find it. A traditional government IT project would have spent two years writing a requirements document for unified case management. The discipline we'd built found the problem, prototyped against it, and operationalized the fix without restructuring the agency.
The customer service work also produced something I didn't plan for. As other operating units saw what was possible, they started asking for help with problems that weren't on the team's original mandate. One of those asks became the distribution methodology for the Medallion Relief Fund — the city's response to driver debt during the medallion market collapse. We weren't chartered for that work. We got pulled into it because the credibility we'd built doing the unglamorous customer service work made us the team people thought of when something big and outside-the-box needed to be figured out.
The resistance, and how it resolved
Not everyone was on board. The pushback I heard most often was a version of "why are you asking so many questions, just build the thing." In an institution wired for execution, discovery work reads as overhead.
It didn't resolve through argument. It resolved through delivery. The closest-in operating units — the ones we worked with first — started seeing improvements they couldn't have specified up front. Their wait times dropped. Their backlogs cleared. Once those teams were vocal advocates, the request pattern flipped: instead of us pitching the discipline, other parts of the agency started asking when we could come help them.
The decision I still think about
The team lived in an operating division. Not in IT. Not in the commissioner's office.
I'd love to tell you this was a deliberate strategic choice. It wasn't. The opening was in operations — that's where the sponsor with budget and appetite happened to sit — and I took the opening because government windows are short and you take what's available. In retrospect, it's the most important thing about how the team turned out, even though I didn't fully appreciate that going in.
Living in operations meant we were close to the work. Down the hall from the people whose jobs we were trying to make easier. We earned trust because we showed up where the work was happening, not where it was being decided about.
Placed in the commissioner's office, the team would have had more authority, but the rest of the agency would have treated it as an inspectorate: a group whose questions implied judgment. Operators wouldn't have shown us the messy parts. The team's home shaped what kind of work it was structurally able to do, and that's the part I think about.
What survived, and what travels
When I left, my role was split. One of the pieces became the agency's first chief product officer role. The seat wasn't planned when I joined; it emerged because the work the team did, over years, demonstrated there was a discipline worth giving an executive home.
The pattern from this engagement (a discipline introduced opportunistically, earning trust through visible delivery, getting pulled into work no one had originally chartered it for, surviving as a permanent function once it had proven itself) is the same pattern I've watched play out in every operationally mature engagement I've worked on since. In startups closing the gap between sold and shipped. In agencies trying to modernize without breaking the trust they run on. In foundations whose grantees need an operating discipline the funder didn't budget for.
If you're standing up something like this elsewhere, three things travel from TLC. Place the team close to operations, and if that's where the opening is, take it, even when a tidier org chart would put you elsewhere. Sequence the team's growth around credibility, not capability. And expect discovery work to read as overhead until you have a portfolio of delivered wins to point to.
The seat at the table doesn't get built by declaring it should exist. It gets built by doing the work that makes it obvious one should.