<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Product Strategy &amp; Decision Support | Tan Zhou</title><link>https://www.tanzhou.space/tag/product-strategy-decision-support/</link><atom:link href="https://www.tanzhou.space/tag/product-strategy-decision-support/index.xml" rel="self" type="application/rss+xml"/><description>Product Strategy &amp; Decision Support</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><copyright>© 2021 Tan Zhou</copyright><lastBuildDate>Sat, 01 Nov 2025 05:26:35 +0000</lastBuildDate><image><url>https://www.tanzhou.space/media/logo_huf7e62ae9b3d64ce881bc1ae8b1405426_18051_300x300_fit_lanczos_2.png</url><title>Product Strategy &amp; Decision Support</title><link>https://www.tanzhou.space/tag/product-strategy-decision-support/</link></image><item><title>From Messy Dataset to “At-a-Glance” Visualizations of Competitive Landscape</title><link>https://www.tanzhou.space/project/competitive-lanscape-at-a-glance/</link><pubDate>Sat, 01 Nov 2025 05:26:35 +0000</pubDate><guid>https://www.tanzhou.space/project/competitive-lanscape-at-a-glance/</guid><description>&lt;h2 id="overview">&lt;strong>Overview&lt;/strong>&lt;/h2>
&lt;p>As AI and automation accelerated across the industry, my stakeholders needed to understand the competitive space of AI, automation, and technology in title/settlement platforms. The challenge wasn’t collecting information—it was &lt;strong>making complex, uneven competitive data understandable and actionable for decision-makers&lt;/strong>.&lt;/p>
&lt;blockquote>
&lt;p>This case study focuses on &lt;em>how&lt;/em> I translated a large competitive dataset into a clear visualization system. It intentionally avoids sharing competitive “insights” or conclusions about specific companies.&lt;/p>
&lt;/blockquote>
&lt;hr>
&lt;h3 id="the-business-problem">&lt;strong>The Business Problem&lt;/strong>&lt;/h3>
&lt;p>Leadership needed decision support for product strategy questions such as:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Where are competitors investing in automation across the transaction workflow?&lt;/p>
&lt;/li>
&lt;li>
&lt;p>What types of solutions exist (end-to-end platforms vs. narrow tools)?&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Which parts of the ecosystem are truly comparable to our context?&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>To answer these, stakeholders needed a landscape they could trust and interpret quickly—without reading a long report.&lt;/p>
&lt;hr>
&lt;h3 id="the-research-challenge">&lt;strong>The Research Challenge&lt;/strong>&lt;/h3>
&lt;p>This was not a clean comparison set. The competitive space had three structural issues:&lt;/p>
&lt;p>&lt;strong>1. “Apples-to-oranges” offerings&lt;/strong>&lt;/p>
&lt;p>Some products are broad workflow platforms. Others specialize in one slice (e.g., document automation, search, closing coordination, post-close). Comparing them on a single axis would oversimplify and mislead.&lt;/p>
&lt;p>&lt;strong>2. “AI” claims were inconsistent&lt;/strong>&lt;/p>
&lt;p>Many vendors used similar language (“AI-powered,” “automation,” “intelligent workflow”), but the underlying capability varied widely. The dataset needed a way to separate marketing terms from meaningful maturity indicators.&lt;/p>
&lt;p>&lt;strong>3. Too much information to be usable&lt;/strong>&lt;/p>
&lt;p>Raw competitive research often becomes a dense spreadsheet that only the researcher can navigate. Stakeholders needed &lt;strong>clarity at a glance&lt;/strong>, with enough structure to support follow-up questions.&lt;/p>
&lt;hr>
&lt;h3 id="my-role">&lt;strong>My Role&lt;/strong>&lt;/h3>
&lt;p>I led the work end-to-end across:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Research framing (what decisions the landscape needed to support)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Data modeling and taxonomy creation (how we normalized inconsistent inputs)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Classification logic and decision rules&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Information design and visualization system&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Stakeholder alignment through iterative readouts and refinement&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h3 id="what-success-looked-like">&lt;strong>What Success Looked Like&lt;/strong>&lt;/h3>
&lt;p>We defined success as a set of outputs that were:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Strategic&lt;/strong>: tied to product decisions, not just market description&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Trustworthy&lt;/strong>: classification logic visible and repeatable&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Scannable&lt;/strong>: usable in seconds, not minutes&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Multi-dimensional without being messy&lt;/strong>: complexity represented through a system, not a single overloaded chart&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Reusable&lt;/strong>: designed as an artifact we could update as the market changed&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="process-from-research-needs-to-visualization-system">&lt;strong>Process: From Research Needs to Visualization System&lt;/strong>&lt;/h2>
&lt;h3 id="step-1-translate-stakeholder-questions-into-decision-views">&lt;strong>Step 1: Translate stakeholder questions into “decision views”&lt;/strong>&lt;/h3>
&lt;p>Before making any visual, I reframed stakeholder needs into explicit questions the landscape must answer:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Orientation question:&lt;/strong> “Where does each solution fit in the workflow?”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Capability question:&lt;/strong> “How advanced is automation/AI—and how broadly does it apply?”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Context question:&lt;/strong> “Which solutions are actually relevant to our domain focus?”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Ecosystem question:&lt;/strong> “What’s plug-and-play vs. what changes switching costs and integration realities?”&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>This step prevented a common failure mode: building one beautiful chart that answers none of the real decisions.&lt;/p>
&lt;hr>
&lt;h3 id="step-2-build-a-classification-model-to-normalize-messy-data">&lt;strong>Step 2: Build a classification model to normalize messy data&lt;/strong>&lt;/h3>
&lt;p>To compare uneven offerings, I created a shared taxonomy—essentially a “data contract” for the landscape.&lt;/p>
&lt;p>&lt;strong>What we standardized (examples)&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Primary workflow focus&lt;/strong>: where the product anchors its value (even if it touches multiple steps)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Workflow breadth&lt;/strong>: narrow point solution → broad end-to-end platform&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Automation mechanism&lt;/strong>: rules-based automation vs. AI-driven vs. hybrid&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Integration posture&lt;/strong>: standalone tool → integrated suite → ecosystem&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Domain relevance&lt;/strong>: relevance based on transaction complexity and operational needs (rather than vendor labels)&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>The key: explicit decision rules&lt;/strong>&lt;/p>
&lt;p>I documented rules for edge cases, such as:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Platforms spanning multiple workflow stages&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Suites that bundle unrelated modules&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Tools that market “AI” but primarily deliver rules-based automation&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Products that appear comparable but serve fundamentally different transaction contexts&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>This turned subjective categorization into something stakeholders could understand, challenge, and trust.&lt;/p>
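&lt;p>To make the “data contract” idea concrete, here is a minimal sketch of how such a taxonomy and its edge-case rules might be expressed in code. The field names, categories, and thresholds below are illustrative placeholders, not the actual classification model.&lt;/p>
&lt;pre>&lt;code class="language-python">from dataclasses import dataclass
from enum import Enum

class Mechanism(Enum):
    RULES_BASED = "rules-based automation"
    AI_DRIVEN = "AI-driven"
    HYBRID = "hybrid"           # one taxonomy category; rules below only illustrate two

class Posture(Enum):
    STANDALONE = "standalone tool"
    SUITE = "integrated suite"
    ECOSYSTEM = "ecosystem"

@dataclass
class Offering:
    name: str
    primary_stage: str          # where the product anchors its value
    stages_touched: list        # every workflow stage it claims to cover
    claims_ai: bool             # vendor markets the product as "AI-powered"
    has_ai_evidence: bool       # observable capability indicators exist
    posture: Posture = Posture.STANDALONE

def classify_mechanism(o):
    """Edge-case rule: 'AI' claims without observable capability
    indicators are recorded as rules-based automation."""
    if o.claims_ai and o.has_ai_evidence:
        return Mechanism.AI_DRIVEN
    return Mechanism.RULES_BASED

def workflow_breadth(o, end_to_end_threshold=4):
    """Edge-case rule: breadth reflects workflow ownership, from narrow
    point solution to broad end-to-end platform, not feature count."""
    n = len(o.stages_touched)
    if n >= end_to_end_threshold:
        return "end-to-end platform"
    if n == 1:
        return "point solution"
    return "multi-stage"
&lt;/code>&lt;/pre>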
&lt;hr>
&lt;h3 id="step-3-choose-a-backbone-view-for-orientation">&lt;strong>Step 3: Choose a “backbone” view for orientation&lt;/strong>&lt;/h3>
&lt;p>I started with &lt;strong>Workflow Stage&lt;/strong> because it matches how most stakeholders naturally reason about real estate closing: as a lifecycle with handoffs and dependencies.&lt;/p>
&lt;p>&lt;strong>Why this came first&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>It gives immediate context to non-experts: “Where in the process does this help?”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>It avoids premature ranking or “winners/losers”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>It makes later views easier to interpret by grounding them in a shared mental model&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Design principle:&lt;/strong> &lt;em>Always orient before differentiating.&lt;/em>&lt;/p>
&lt;hr>
&lt;h3 id="step-4-avoid-the-single-22-trapuse-complementary-orthogonal-views">&lt;strong>Step 4: Avoid the single 2×2 trap—use complementary, orthogonal views&lt;/strong>&lt;/h3>
&lt;p>A single chart can’t responsibly represent a market where:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>some products are broad platforms,&lt;/p>
&lt;/li>
&lt;li>
&lt;p>some are specialized,&lt;/p>
&lt;/li>
&lt;li>
&lt;p>and “AI” is not consistently defined.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>So I designed a &lt;strong>system of four views&lt;/strong>, each answering a different strategic question with minimal cognitive load.&lt;/p>
&lt;hr>
&lt;h3 id="the-solution-a-four-view-competitive-landscape-system">&lt;strong>The Solution: A Four-View Competitive Landscape System&lt;/strong>&lt;/h3>
&lt;p>&lt;strong>1. Workflow Stage Landscape&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Question answered:&lt;/strong> “Where does each solution primarily contribute within the closing workflow?”&lt;/p>
&lt;p>&lt;strong>Why it works:&lt;/strong> It helps teams understand the ecosystem without needing domain expertise. It also prevents false comparisons by showing that many solutions aren’t trying to solve the same problem.&lt;/p>
&lt;p>&lt;strong>How it’s designed for clarity:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Grouped by workflow stages with short “expectations” per stage (what buyers typically look for there)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A dedicated representation for cross-lifecycle platforms so multi-stage tools don’t distort stage-specific comparisons&lt;/p>
&lt;/li>
&lt;/ul>
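&lt;p>For illustration only, a small sketch of the grouping logic behind this view, assuming hypothetical stage names and a simple “spans most of the lifecycle” threshold:&lt;/p>
&lt;pre>&lt;code class="language-python">from collections import defaultdict

# Hypothetical stage names; the real view uses the project's own workflow model.
STAGES = ["Pre-close", "Title search", "Closing coordination", "Post-close"]

offerings = [
    {"name": "Tool A", "stages": ["Title search"]},
    {"name": "Tool B", "stages": ["Closing coordination", "Post-close"]},
    {"name": "Platform C", "stages": STAGES},   # spans the whole lifecycle
]

def group_by_stage(items, cross_lifecycle_threshold=3):
    """Anchor each tool at its primary stage; route broad platforms to a
    dedicated group so they do not distort stage-specific comparisons."""
    groups = defaultdict(list)
    for item in items:
        if len(item["stages"]) >= cross_lifecycle_threshold:
            groups["Cross-lifecycle platforms"].append(item["name"])
        else:
            groups[item["stages"][0]].append(item["name"])
    return dict(groups)

print(group_by_stage(offerings))
# {'Title search': ['Tool A'], 'Closing coordination': ['Tool B'],
#  'Cross-lifecycle platforms': ['Platform C']}
&lt;/code>&lt;/pre>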
&lt;hr>
&lt;h3 id="2-ai--automation-maturity--workflow-breadth">&lt;strong>2. AI &amp;amp; Automation Maturity × Workflow Breadth&lt;/strong>&lt;/h3>
&lt;p>&lt;strong>Question answered:&lt;/strong> “How mature is automation/AI—and how broadly does it apply across the workflow?”&lt;/p>
&lt;p>&lt;strong>Why it works:&lt;/strong> This separates two things stakeholders often conflate:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>maturity of automation capability&lt;/p>
&lt;/li>
&lt;li>
&lt;p>how much of the workflow the product claims to cover&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>How it’s designed for responsible interpretation:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>“Maturity” is grounded in observable capability indicators rather than marketing terms&lt;/p>
&lt;/li>
&lt;li>
&lt;p>“Breadth” is framed as workflow ownership, not simply feature count&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Design principle:&lt;/strong> &lt;em>Keep axes orthogonal so the chart stays truthful.&lt;/em>&lt;/p>
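&lt;p>As a minimal sketch of the plotting approach (hypothetical labels and placements only, since the actual vendor data is intentionally omitted here), the two dimensions can be drawn as independent ordinal axes:&lt;/p>
&lt;pre>&lt;code class="language-python">import matplotlib.pyplot as plt

# Hypothetical ordinal scales; real definitions come from the taxonomy.
maturity = ["Rules-based", "Hybrid", "AI-driven"]              # x-axis
breadth = ["Point solution", "Multi-stage", "End-to-end"]      # y-axis

# Anonymized, made-up placements for illustration, not real vendor data.
placements = {"Vendor A": (0, 0), "Vendor B": (1, 2), "Vendor C": (2, 1)}

fig, ax = plt.subplots(figsize=(6, 4))
for name, (x, y) in placements.items():
    ax.scatter(x, y)
    ax.annotate(name, (x, y), xytext=(6, 6), textcoords="offset points")

ax.set_xticks(range(len(maturity)))
ax.set_xticklabels(maturity)
ax.set_yticks(range(len(breadth)))
ax.set_yticklabels(breadth)
ax.set_xlabel("Automation / AI maturity (capability indicators)")
ax.set_ylabel("Workflow breadth (workflow ownership)")
ax.set_title("Maturity x Breadth (illustrative)")
plt.tight_layout()
plt.show()
&lt;/code>&lt;/pre>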
&lt;hr>
&lt;h3 id="3-commercial-vs-residential-relevance">&lt;strong>3. Commercial vs. Residential Relevance&lt;/strong>&lt;/h3>
&lt;p>&lt;strong>Question answered:&lt;/strong> “Which solutions are most comparable to our operational context?”&lt;/p>
&lt;p>&lt;strong>Why it works:&lt;/strong> Transaction types differ in complexity, documentation, risk, and workflow variability. Without this lens, stakeholders may draw incorrect strategic conclusions from superficially similar tools.&lt;/p>
&lt;p>&lt;strong>How it’s designed:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>A simple segmentation that scopes interpretation rather than ranking vendors&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Helps stakeholders quickly identify “directly relevant” vs. “adjacent signals” in the market&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h3 id="4-ecosystem-integration-landscape">&lt;strong>4. Ecosystem Integration Landscape&lt;/strong>&lt;/h3>
&lt;p>&lt;strong>Question answered:&lt;/strong> “What’s a tool we can plug in vs. an ecosystem that changes interoperability and switching costs?”&lt;/p>
&lt;p>&lt;strong>Why it works:&lt;/strong> Integration posture shapes adoption dynamics: procurement, implementation effort, dependency risk, and long-term flexibility.&lt;/p>
&lt;p>&lt;strong>How it’s designed:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Clear categories that highlight whether a solution is:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>a standalone product,&lt;/p>
&lt;/li>
&lt;li>
&lt;p>part of an integrated suite,&lt;/p>
&lt;/li>
&lt;li>
&lt;p>or operating as an ecosystem strategy&lt;/p>
&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Design principle:&lt;/strong> &lt;em>Strategy isn’t only about features—it’s about constraints.&lt;/em>&lt;/p>
&lt;hr>
&lt;h2 id="making-it-usable-storytelling-and-stakeholder-alignment">&lt;strong>Making It Usable: Storytelling and Stakeholder Alignment&lt;/strong>&lt;/h2>
&lt;p>&lt;strong>Progressive disclosure (how the readout was structured)&lt;/strong>&lt;/p>
&lt;p>I presented the work like a product narrative:&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Start with &lt;strong>workflow stage&lt;/strong> to establish orientation&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Move to &lt;strong>maturity × breadth&lt;/strong> to discuss capability patterns&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Add &lt;strong>relevance&lt;/strong> to prevent misinterpretation&lt;/p>
&lt;/li>
&lt;li>
&lt;p>End with &lt;strong>ecosystem integration&lt;/strong> to connect to strategic leverage and constraints&lt;/p>
&lt;/li>
&lt;/ol>
&lt;p>This sequence reduced debate and increased clarity: stakeholders could follow the logic rather than getting stuck on definitions.&lt;/p>
&lt;p>&lt;strong>Built-in “how to read” guidance&lt;/strong>&lt;/p>
&lt;p>Each view includes lightweight framing—axis definitions, category labels, and reading cues—so the landscape can stand alone without the researcher in the room.&lt;/p>
&lt;hr>
&lt;h3 id="outcomes">&lt;strong>Outcomes&lt;/strong>&lt;/h3>
&lt;p>This work created an artifact stakeholders could actually use:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>A &lt;strong>shared vocabulary&lt;/strong> for discussing a fragmented market&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A &lt;strong>trustworthy classification model&lt;/strong> that made comparisons feel grounded&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A &lt;strong>decision-ready visualization system&lt;/strong> that supported strategy discussions without requiring deep domain knowledge&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A framework designed to be &lt;strong>maintained and updated&lt;/strong>, not a one-time research dump&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>(Deliberately omitted here: any market-specific conclusions or vendor evaluations.)&lt;/p>
&lt;hr>
&lt;h3 id="what-this-demonstrates">&lt;strong>What This Demonstrates&lt;/strong>&lt;/h3>
&lt;p>This project is a snapshot of the kind of UX work that sits at the intersection of:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>research strategy (defining what must be true to make a decision),&lt;/p>
&lt;/li>
&lt;li>
&lt;p>analytics and synthesis (normalizing messy inputs),&lt;/p>
&lt;/li>
&lt;li>
&lt;p>information design (reducing cognitive load),&lt;/p>
&lt;/li>
&lt;li>
&lt;p>and stakeholder alignment (building shared understanding through clear frameworks).&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h3 id="key-takeaways-id-reuse">&lt;strong>Key Takeaways I’d Reuse&lt;/strong>&lt;/h3>
&lt;ol>
&lt;li>
&lt;p>&lt;strong>Start with the decisions, not the data.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Normalize first; visualize second.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Use multiple simple views instead of one complex chart.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Make classification rules explicit so the work earns trust.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Design for scanning—then support deeper follow-up.&lt;/strong>&lt;/p>
&lt;/li>
&lt;/ol></description></item></channel></rss>