The Monday Morning Problem

Why the data industry built brilliant tools for the wrong customer — and what it will take to actually help people run a business.

Every Monday at 10am, I sat in the executive leadership team meeting at Opendoor — a company I co-founded, where I was CTO and built the data infrastructure from scratch. And every Monday, the same thing happened. The analyst team had prepared through Friday and into the weekend. The deck looked great. And then a senior leader would ask a follow-up — why is that number down, what's driving the Northeast, what happens if we shift spend — and the answer wouldn't be there. We'd take it offline. By the time we had it, the moment to decide had passed. And often it never came at all — the analysts were already heads-down on next week's meeting.

We didn't lack data. We had petabytes of it. We didn't lack tools. We had a modern data stack, a data science team, dashboards for everything. We just couldn't answer the question that mattered most.

I'd spent my career as a producer of insights — building ML models, data pipelines, statistical frameworks. Finding myself deeply unsatisfied as a consumer of insights was jarring. If this was broken at a company that genuinely invested in data infrastructure, it was broken everywhere.

I've since validated this with dozens of executives across industries. The most revealing detail: they've learned to self-regulate their questions because they don't want to overwhelm their own teams. The people whose job is to ask the right questions about the business have trained themselves to ask fewer.

This is the Monday Morning Problem. And the data industry has spent two decades failing to solve it.

How we got here

Twenty years of data tooling, from Hadoop and Redshift through BigQuery, Snowflake, and Databricks, has produced something genuinely impressive: the ability to run analytical queries reliably and at scale. But "can we query the data?" is the engineer's question. It has never been the operator's question. The operator's question is the follow-up in the room, the one that decides whether spend gets shifted, whether a region gets more headcount, whether the quarter gets re-forecast. Answering it requires a different architecture: not a bigger warehouse or a more elegant semantic model, but something built for how executives actually think.

The most persistent fantasy in all of this has been self-service analytics — the conviction that if you give business users access to the data, they'll do the analyst's job to get the answer. I've watched this idea survive generations of tooling: MicroStrategy cubes, Tableau's drag-and-drop revolution, Looker's semantic layer, and now a chatbot on top of it all. The executives and operators these tools were sold to don't pivot. They never have.

The latest incarnation, chat-with-your-data, finally removes the technical barrier: you can ask in English now, no SQL required. But SQL was never the constraint. The constraint is everything that comes after the first answer — which slice to look at next, which comparison matters, which thread is worth pulling. The space of possible questions is enormous, and the work of driving an investigation to a conclusion is exhausting. People get an answer or two, hit the wall, and give up.
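
To make "enormous" concrete, here is a back-of-envelope count in Python. The schema is invented for illustration, and deliberately modest:

```python
from math import comb

# Hypothetical, deliberately modest schema: 8 dimensions
# (region, segment, channel, ...), 5 metrics, 4 time grains,
# and 6 comparison baselines (WoW, YoY, vs. budget, ...).
dimensions, metrics, time_grains, comparisons = 8, 5, 4, 6

# Questions of the form: one metric, sliced by at most two
# dimensions, at one time grain, against one baseline.
slices = 1 + comb(dimensions, 1) + comb(dimensions, 2)  # 1 + 8 + 28 = 37
questions = metrics * slices * time_grains * comparisons

print(questions)  # 4440 distinct questions before the second follow-up
```

And that is only the first hop; each answer spawns its own follow-ups, so a real investigation branches from there.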

I call this snacking. People paste data into ChatGPT or Claude, get an answer, paste it back into a spreadsheet or slide. ChatGPT is the new Excel: incredibly powerful as a general-purpose utility, but a snack is being sold as the meal. The story never gets finished.

I'm part of this lineage. At Opendoor, we stood up the warehouse, modeled the tables, deployed the BI tool — and I believed self-service would work this time. It didn't.

The absurdity of how we actually work

If you've never worked inside a corporate finance team, let me describe a typical month. An FP&A analyst logs into the planning system, downloads CSVs, and reformats them in Excel, adding layout and formatting just to make the numbers readable. They set up hour-long meetings with business unit stakeholders to walk through budget vs. actuals, line by line. Cost categories are frequently miscoded, so a meaningful portion of each meeting is spent figuring out whether the numbers are even right. Every "what if" takes days. And the analyst might be mapped to three, four, five business units, each with its own meeting cadence, each expecting the same manual prep. The weekly business review is the same story in a different room: a team spends most of the week assembling the deck, leaving no time to act on what it says.

This is not a process waiting to be optimized. It's a process that should not exist in its current form.

In fifty years, we'll look back at this the way we now look at switchboard operators. The difference is that we're inside it, so we can't see it. We've normalized the dysfunction. People build elaborate workarounds — Excel macros, formatting templates, calendar rituals — until the grind becomes muscle memory. Technology should give people grace, especially in repetitive white-collar work. Instead, it has given them more sophisticated ways to do the same broken process.

What solved looks like

This is where I'm supposed to tell you that AI fixes everything. It can — but not the way it's being deployed today.

We've watched what happens when teams use AI-generated weekly reviews verbatim. A single report routinely contains dozens of factual errors against the underlying data. Dozens of errors in the document that's supposed to tell you how the business is doing.

Summation is an AI product, deeply so. The point is that AI has to be the engine, not the experience. The experience is the answer — polished, traceable, verified. Not vibes. Not "pretty close." An output a decision-maker can trust enough to act on.
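
What "verified" can mean mechanically, as a minimal sketch (not Summation's actual pipeline; the claim structure and the `recompute` query layer are invented for illustration): every figure a report cites is re-derived from the source data before anything ships.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Claim:
    text: str      # sentence as it appears in the draft report
    metric: str    # identifier of the metric the sentence cites
    stated: float  # number the draft asserts

def verify(claims: list[Claim], recompute: Callable[[str], float]) -> list[str]:
    """Recompute every cited figure and return the claims that fail.

    `recompute` stands in for a query layer that re-derives a metric
    from the source tables (hypothetical here).
    """
    failures = []
    for c in claims:
        actual = recompute(c.metric)
        # Anything off by more than 0.5% fails -- "pretty close" is a bug.
        if abs(actual - c.stated) > 0.005 * max(abs(actual), 1e-9):
            failures.append(f"{c.text!r}: stated {c.stated}, recomputed {actual}")
    return failures
```

A draft only ships when the failure list comes back empty; otherwise the offending sentences return to the generation step with the recomputed values attached.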

Executives already have the best interfaces in the company. They text an analyst. They put a meeting on the calendar where participants are expected to show up with completed analyses. Natural language in, structured insight out, on a deadline.

The problem isn't the interface. It's what happens between the request and the response: the hours of data pulling, reformatting, manual investigation, deck assembly, QA. That's the drudgery AI should take on — not to replace analysts but to free them, so they can review the work, apply judgment, and execute on what comes next instead of spending the entire week producing the deck.

Start with the recurring reports the organization already demands — weekly business reviews, monthly budget-vs-actuals, board decks, operational scorecards. These have to exist. An analyst has to assemble them. So rather than building another blank box and hoping people fill it, start with the finished product.

When those reports are assembled by a system that deeply understands the data — not just the schema but the business context, the variance patterns, the historical norms — the Monday morning meeting starts differently. Instead of "does anyone know why revenue was down?", it starts with "revenue was down 3.2%, driven primarily by a 12% drop in the Northeast enterprise segment; here are three contributing factors ranked by impact; here are two scenarios for reallocation." The root cause is surfaced, decomposed, and ready for discussion.
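
The "decomposed, ranked by impact" step is ordinary arithmetic once the underlying data can be trusted. A minimal sketch in plain Python, with toy numbers constructed to reproduce the figures above (a real system would also account for mix shifts, seasonality, and deeper segment trees):

```python
# Weekly revenue by segment, in $K. Toy numbers, chosen so the totals
# reproduce the example above (-3.2% overall, NE enterprise -12%).
last = {"NE enterprise": 500, "NE SMB": 200, "West enterprise": 450, "West SMB": 300}
this = {"NE enterprise": 440, "NE SMB": 205, "West enterprise": 455, "West SMB": 304}

total_last, total_this = sum(last.values()), sum(this.values())
delta = total_this - total_last  # -46 on 1450, about -3.2%

# Each segment's contribution to the top-line change, largest impact first.
contributions = sorted(
    ((seg, this[seg] - last[seg]) for seg in last),
    key=lambda kv: abs(kv[1]),
    reverse=True,
)

print(f"revenue {delta / total_last:+.1%}")
for seg, d in contributions:
    print(f"  {seg}: {d:+d}K ({d / delta:+.0%} of the move)")
```

The hard part was never this arithmetic. It's knowing which decomposition to run, against data clean enough that the answer means something.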

Why now

The status quo has become untenable. As companies add business units, channels, and geographies, the manual processes that barely worked at a smaller scale collapse under their own weight. The analyst mapped to five business units isn't just inefficient; they're drowning. And the executives they serve are making decisions on intuition that should be made on evidence, because the evidence takes too long to assemble.

Every technology wave produces a gold rush of general-purpose tools, followed by the special-purpose tools that actually transform how people work. The PC gave us the spreadsheet. The internet gave us the browser. The smartphone gave us the app. We're at exactly that inflection point with AI. The foundation models are here. Most enterprises have their data in queryable systems. The pipes exist. What's missing is the finished product.

Someone has to build that product: the one that starts from what an operator needs on Monday morning and works backward to the technology. Monday morning shouldn't begin with a question. It should begin with an answer.

That's what we built Summation to do.

Ian Wong is the founder and CEO of Summation. Previously, he was the co-founder and CTO of Opendoor, where he led the company from inception to going public. He holds a BS and MS in Electrical Engineering and an MS in Statistics from Stanford, and was the founding data scientist at Square (now Block).
