Sienna Chen · Design Approach
Three lenses I use across every project — from framing the right problem to shipping the right solution.
Before pixels, before prototypes, before anything — there's the question of whether we're solving the right problem.
The most significant shift in my thinking at Parsons was learning when to step back from the problem entirely. Most people enter a business challenge at the subject level — optimizing a product, fixing a process, refining a feature.
What strategic design taught me is to first ask whether the problem is subject-level at all, or whether a system is producing it. That distinction determines everything. Which tools apply, which stakeholders hold leverage, where design intervention actually makes sense. Frameworks like Dark Matter, Three Horizons, Liedtka's design principles for strategy, and the BCG Strategy Palette each operate differently depending on the business goal, time horizon, and nature of uncertainty in play. The skill isn't knowing them. It's knowing which one the situation calls for.
Much of this thinking was shaped by my professor Raz Godelnik, whose work on strategic design continues to influence how I approach complex problems.
Designers are often brought in after the strategy is set. The BCG Strategy Palette is what I use to push back on that dynamic. Before I define a design direction, I need to know which strategic environment the business is actually operating in — classical (predictable), adaptive (unpredictable), shaping (malleable), visionary, or renewal (harsh) — because each one demands a fundamentally different design logic.
The mistake most designers make is applying the same design process regardless of context. When I was analyzing Beryl Consulting's positioning in the investment advisory space, the Palette helped me identify that the firm was operating in an Adaptive environment while designing for a Classical one.
Three Horizons is the framework I use most, and one of the most widely applied in industry because it holds across almost any strategic context. What I've observed is that tech-driven mindsets tend to gravitate toward H2, where innovation feels most tangible. But H2 is the hardest horizon to define without anchoring it first.
My practice is to establish H1 and H3 before engaging the middle. Without those anchors, H2 drifts into either uncritical optimism or strategic paralysis.
When I do reach H2, Roger Martin's question from the Strategic Choice Structuring Process sharpens the work further: "What would have to be true for this transition to be possible?" It moves the conversation from speculation into testable conditions. In my current project designing a sustainability layer for Google Maps, the work sits in H2+. Google's existing eco-certification infrastructure is H1. A world where sustainable choices are effortless and invisible is H3. The design question only becomes precise once both ends are established.
Referenced · Roger Martin's Strategic Choice Structuring Process
This is the one that most directly explains why I sit at the intersection of design and strategy rather than choosing a side. Liedtka's argument is that the cognitive tools designers use — abductive reasoning, prototyping as hypothesis testing, empathy as data — are exactly what strategy formation requires but rarely has. I use this framework less as a checklist and more as a lens for facilitating strategic conversations with non-designers.
Make strategy-making broadly participative and dialogue-based
Hold two-way strategic conversations across the organization
Work iteratively and experimentally
Start with possibilities, then consider constraints
Lead by persuasion — and, at best, by inspiration
Design a 'purposeful space' — form follows intended outcomes
Strategy as Synthesis
Formulation and Implementation as One Process
Abductive and Creative Thinking
Design for Flexibility and Emergent Opportunity
The goal of research is not to generate slides. It's to reduce the cost of being wrong. I use mixed methods, triangulate across sources, and synthesise into decisions — not recommendations.
Semi-structured 1:1 conversations to surface latent needs, mental models, and decision-making logic.
Observe users in their natural environment. What people say they do and what they actually do rarely match.
Scale qual insights and validate assumptions across larger populations. Best for confirming, not discovering.
Task-based sessions to identify friction in existing or prototype flows. Moderated and unmoderated variants.
Understand how users mentally categorise information. Essential for IA and navigation design.
Visualise the full experience across touchpoints, channels, and over time. Reveals systemic breakdowns.
Synthesis method to cluster raw observations into themes. The bridge between data and insight.
Expert review against established UX principles. Fast signal on friction without user recruitment.
Funnels, heatmaps, session recordings. Quantify what the qual surfaced — and find anomalies qual would miss.
What do we actually need to learn? What decisions will this research inform? I refuse to run research without a clear brief — it creates noise, not signal.
Method selection follows the question, not convention. I triangulate across at least two sources — one to surface, one to confirm.
Interviews, observation, surveys — capturing raw data without interpretation. I keep analysis out of the room during research.
Affinity mapping, thematic coding, insight laddering. The goal: compress raw observations into patterns, and patterns into insights that have a "so what."
Insights are worthless without a receiver. I pair every insight with a strategic implication and an owner. Research ends when a decision gets made — not when the deck gets shared.
The line between designer and engineer is dissolving. I think every serious designer needs to cross it — not to replace engineers, but to remove the translation layer between idea and reality.
What AI consistently cannot do is carry accountability. It cannot sit in a stakeholder room and navigate competing priorities. It cannot make the ethical trade-off between what the data suggests and what the community actually needs. It cannot be held responsible for a decision. At Bayer and Beryl, the moments that created real value weren't when I executed well — they were when I pushed back on the brief, reframed the problem, or surfaced something the team didn't want to hear. That kind of judgment doesn't come from training data. It comes from being human in the room.
Vibe coding is the practice of using AI as a real-time development partner — directing, reviewing, and iterating on code through natural language rather than writing every line from scratch. For a designer, the leverage is asymmetric: it collapses the time between idea and testable artifact from weeks to hours.
This isn't about replacing engineers. It's about removing the translation layer between what I'm thinking and what I can put in front of a user. When I can spin up a working prototype in an afternoon, I can test more assumptions, kill bad ideas faster, and arrive at better decisions before anyone writes a line of production code. This entire portfolio is a vibe coding artifact — built without an agency, without a developer, without a handoff.
My Vibe Coding Stack
My workflow: brief in natural language → AI generates a working base → I review, redirect, and refine in real time. The output is always mine. The velocity is AI-enabled.
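To make the "testable artifact in an afternoon" claim concrete, here is a minimal sketch of the kind of prototype logic this workflow produces — a hypothetical eco-score ranker in the spirit of the Maps sustainability project above. The `Place` shape, the certification flags, and the scoring weights are all illustrative assumptions, not production code or any real Google API.

```typescript
// Hypothetical prototype: surface sustainable places first.
// The fields and weights below are illustrative assumptions.
interface Place {
  name: string;
  ecoCertified: boolean;      // stand-in for an existing certification signal (H1)
  transitAccessible: boolean; // stand-in for a low-carbon access signal
}

// A naive composite score: certification outweighs transit access.
function ecoScore(p: Place): number {
  return (p.ecoCertified ? 2 : 0) + (p.transitAccessible ? 1 : 0);
}

// Sort descending by score without mutating the caller's array.
function rankByEcoScore(places: Place[]): Place[] {
  return [...places].sort((a, b) => ecoScore(b) - ecoScore(a));
}
```

A throwaway sketch like this is enough to put a ranked list in front of a user and find out whether the underlying assumption — that people will act on sustainability signals when they're surfaced — holds before anyone writes production code.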
A Note on Being New
The hiring assumption:
Experience = rigour. Seniority = quality. Years of practice = better judgment.
What actually happens:
Experience also means ingrained workflows, risk aversion, and slower iteration. The most dangerous thing in a fast-changing industry is a designer who's too confident in yesterday's playbook.
I've been told — sometimes directly, more often implicitly — that limited experience means limited judgment. I think that framing misunderstands what the current moment in design actually requires. The designers who will matter most in the next decade aren't the ones with the deepest pattern libraries built in a different era. They're the ones who can learn fast, unlearn faster, and use AI as a native collaborator rather than a novelty. I led research and strategy at Bayer within months of starting, redesigned a full platform at Beryl as the sole designer, and took a startup concept from zero to working prototype. What I lack in years, I more than compensate for in range, adaptability, and a refusal to treat convention as a constraint.