<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Information Architecture | Tan Zhou</title><link>https://www.tanzhou.space/tag/information-architecture/</link><atom:link href="https://www.tanzhou.space/tag/information-architecture/index.xml" rel="self" type="application/rss+xml"/><description>Information Architecture</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><copyright>© 2021 Tan Zhou</copyright><lastBuildDate>Sat, 01 Nov 2025 05:26:35 +0000</lastBuildDate><image><url>https://www.tanzhou.space/media/logo_huf7e62ae9b3d64ce881bc1ae8b1405426_18051_300x300_fit_lanczos_2.png</url><title>Information Architecture</title><link>https://www.tanzhou.space/tag/information-architecture/</link></image><item><title>From Messy Dataset to “At-a-Glance” Visualizations of Competitive Landscape</title><link>https://www.tanzhou.space/project/competitive-lanscape-at-a-glance/</link><pubDate>Sat, 01 Nov 2025 05:26:35 +0000</pubDate><guid>https://www.tanzhou.space/project/competitive-lanscape-at-a-glance/</guid><description>&lt;h2 id="overview">&lt;strong>Overview&lt;/strong>&lt;/h2>
&lt;p>As AI and automation accelerated across the industry, my stakeholders needed to understand the competitive space of AI, automation, and technology in title/settlement platforms. The challenge wasn’t collecting information—it was &lt;strong>making complex, uneven competitive data understandable and actionable for decision-makers&lt;/strong>.&lt;/p>
&lt;blockquote>
&lt;p>This case study focuses on &lt;em>how&lt;/em> I translated a large competitive dataset into a clear visualization system. It intentionally avoids sharing competitive “insights” or conclusions about specific companies.&lt;/p>
&lt;/blockquote>
&lt;hr>
&lt;h3 id="the-business-problem">&lt;strong>The Business Problem&lt;/strong>&lt;/h3>
&lt;p>Leadership needed decision support for product strategy questions, like:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Where are competitors investing in automation across the transaction workflow?&lt;/p>
&lt;/li>
&lt;li>
&lt;p>What types of solutions exist (end-to-end platforms vs. narrow tools)?&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Which parts of the ecosystem are truly comparable to our context?&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>To answer these, stakeholders needed a landscape they could trust and interpret quickly—without reading a long report.&lt;/p>
&lt;hr>
&lt;h3 id="the-research-challenge">&lt;strong>The Research Challenge&lt;/strong>&lt;/h3>
&lt;p>This was not a clean comparison set. The competitive space had three structural issues:&lt;/p>
&lt;p>&lt;strong>1. “Apples-to-oranges” offerings&lt;/strong>&lt;/p>
&lt;p>Some products are broad workflow platforms. Others specialize in one slice (e.g., document automation, search, closing coordination, post-close). Comparing them on a single axis would oversimplify and mislead.&lt;/p>
&lt;p>&lt;strong>2. “AI” claims were inconsistent&lt;/strong>&lt;/p>
&lt;p>Many vendors used similar language (“AI-powered,” “automation,” “intelligent workflow”), but the underlying capability varied widely. The dataset needed a way to separate marketing terms from meaningful maturity indicators.&lt;/p>
&lt;p>&lt;strong>3. Too much information to be usable&lt;/strong>&lt;/p>
&lt;p>Raw competitive research often becomes a dense spreadsheet that only the researcher can navigate. Stakeholders needed &lt;strong>clarity at a glance&lt;/strong>, with enough structure to support follow-up questions.&lt;/p>
&lt;hr>
&lt;h3 id="my-role">&lt;strong>My Role&lt;/strong>&lt;/h3>
&lt;p>I led the work end-to-end across:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Research framing (what decisions the landscape needed to support)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Data modeling and taxonomy creation (how we normalized inconsistent inputs)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Classification logic and decision rules&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Information design and visualization system&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Stakeholder alignment through iterative readouts and refinement&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h3 id="what-success-looked-like">&lt;strong>What Success Looked Like&lt;/strong>&lt;/h3>
&lt;p>We defined success as a set of outputs that were:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Strategic&lt;/strong>: tied to product decisions, not just market description&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Trustworthy&lt;/strong>: classification logic visible and repeatable&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Scannable&lt;/strong>: usable in seconds, not minutes&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Multi-dimensional without being messy&lt;/strong>: complexity represented through a system, not a single overloaded chart&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Reusable&lt;/strong>: designed as an artifact we could update as the market changed&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="process-from-research-needs-to-visualization-system">&lt;strong>Process: From Research Needs to Visualization System&lt;/strong>&lt;/h2>
&lt;h3 id="step-1-translate-stakeholder-questions-into-decision-views">&lt;strong>Step 1: Translate stakeholder questions into “decision views”&lt;/strong>&lt;/h3>
&lt;p>Before making any visual, I reframed stakeholder needs into explicit questions the landscape must answer:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Orientation question:&lt;/strong> “Where does each solution fit in the workflow?”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Capability question:&lt;/strong> “How advanced is automation/AI—and how broadly does it apply?”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Context question:&lt;/strong> “Which solutions are actually relevant to our domain focus?”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Ecosystem question:&lt;/strong> “What’s plug-and-play vs. what changes switching costs and integration realities?”&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>This step prevented a common failure mode: building one beautiful chart that answers none of the real decisions.&lt;/p>
&lt;hr>
&lt;h3 id="step-2-build-a-classification-model-to-normalize-messy-data">&lt;strong>Step 2: Build a classification model to normalize messy data&lt;/strong>&lt;/h3>
&lt;p>To compare uneven offerings, I created a shared taxonomy—essentially a “data contract” for the landscape.&lt;/p>
&lt;p>&lt;strong>What we standardized (examples)&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Primary workflow focus&lt;/strong>: where the product anchors its value (even if it touches multiple steps)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Workflow breadth&lt;/strong>: narrow point solution → broad end-to-end platform&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Automation mechanism&lt;/strong>: rules-based automation vs. AI-driven vs. hybrid&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Integration posture&lt;/strong>: standalone tool → integrated suite → ecosystem&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Domain relevance&lt;/strong>: relevance based on transaction complexity and operational needs (rather than vendor labels)&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>The key: explicit decision rules&lt;/strong>&lt;/p>
&lt;p>I documented rules for edge cases, such as:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Platforms spanning multiple workflow stages&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Suites that bundle unrelated modules&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Tools that market “AI” but primarily deliver rules-based automation&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Products that appear comparable but serve fundamentally different transaction contexts&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>This turned subjective categorization into something stakeholders could understand, challenge, and trust.&lt;/p>
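&lt;p>To make this concrete, here is a minimal sketch of what such a “data contract” can look like in code. Every category, field, and rule below is a hypothetical illustration of the approach, not the actual taxonomy from the project:&lt;/p>
&lt;pre>&lt;code class="language-python"># Hypothetical sketch of a competitive-landscape "data contract".
# Category names and the example rule are illustrative, not the real taxonomy.
from dataclasses import dataclass
from enum import Enum

class WorkflowBreadth(Enum):
    POINT_SOLUTION = "narrow point solution"
    MULTI_STAGE = "multi-stage"
    END_TO_END = "end-to-end platform"

class AutomationMechanism(Enum):
    RULES_BASED = "rules-based"
    AI_DRIVEN = "AI-driven"
    HYBRID = "hybrid"

class IntegrationPosture(Enum):
    STANDALONE = "standalone tool"
    SUITE = "integrated suite"
    ECOSYSTEM = "ecosystem"

@dataclass
class Offering:
    name: str
    primary_workflow_focus: str       # where the product anchors its value
    breadth: WorkflowBreadth
    mechanism: AutomationMechanism
    posture: IntegrationPosture
    claims_ai: bool                   # vendor marketing language
    observed_ai_capability: bool      # evidence beyond the marketing

def classify_mechanism(offering: Offering) -> AutomationMechanism:
    """Example edge-case rule: 'AI' claims without observable AI capability
    are recorded as rules-based automation, so marketing language cannot
    inflate the maturity picture."""
    if offering.claims_ai and not offering.observed_ai_capability:
        return AutomationMechanism.RULES_BASED
    return offering.mechanism
&lt;/code>&lt;/pre>
&lt;p>Writing the rules this way makes them reviewable: a stakeholder can challenge one rule without re-litigating the whole landscape.&lt;/p>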
&lt;hr>
&lt;h3 id="step-3-choose-a-backbone-view-for-orientation">&lt;strong>Step 3: Choose a “backbone” view for orientation&lt;/strong>&lt;/h3>
&lt;p>I started with &lt;strong>Workflow Stage&lt;/strong> because it matches how most stakeholders naturally reason about real estate closing: as a lifecycle with handoffs and dependencies.&lt;/p>
&lt;p>&lt;strong>Why this came first&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>It gives immediate context to non-experts: “Where in the process does this help?”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>It avoids premature ranking or “winners/losers”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>It makes later views easier to interpret by grounding them in a shared mental model&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Design principle:&lt;/strong> &lt;em>Always orient before differentiating.&lt;/em>&lt;/p>
&lt;hr>
&lt;h3 id="step-4-avoid-the-single-22-trapuse-complementary-orthogonal-views">&lt;strong>Step 4: Avoid the single 2×2 trap—use complementary, orthogonal views&lt;/strong>&lt;/h3>
&lt;p>A single chart can’t responsibly represent a market where:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>some products are broad platforms,&lt;/p>
&lt;/li>
&lt;li>
&lt;p>some are specialized,&lt;/p>
&lt;/li>
&lt;li>
&lt;p>and “AI” is not consistently defined.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>So I designed a &lt;strong>system of four views&lt;/strong>, each answering a different strategic question with minimal cognitive load.&lt;/p>
&lt;hr>
&lt;h3 id="the-solution-a-four-view-competitive-landscape-system">&lt;strong>The Solution: A Four-View Competitive Landscape System&lt;/strong>&lt;/h3>
&lt;p>&lt;strong>1. Workflow Stage Landscape&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Question answered:&lt;/strong> “Where does each solution primarily contribute within the closing workflow?”&lt;/p>
&lt;p>&lt;strong>Why it works:&lt;/strong> It helps teams understand the ecosystem without needing domain expertise. It also prevents false comparisons by showing that many solutions aren’t trying to solve the same problem.&lt;/p>
&lt;p>&lt;strong>How it’s designed for clarity:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Grouped by workflow stages with short “expectations” per stage (what buyers typically look for there)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A dedicated representation for cross-lifecycle platforms so multi-stage tools don’t distort stage-specific comparisons&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h3 id="2-ai--automation-maturity--workflow-breadth">&lt;strong>2. AI &amp;amp; Automation Maturity × Workflow Breadth&lt;/strong>&lt;/h3>
&lt;p>&lt;strong>Question answered:&lt;/strong> “How mature is automation/AI—and how broadly does it apply across the workflow?”&lt;/p>
&lt;p>&lt;strong>Why it works:&lt;/strong> This separates two things stakeholders often conflate:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>maturity of automation capability&lt;/p>
&lt;/li>
&lt;li>
&lt;p>how much of the workflow the product claims to cover&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>How it’s designed for responsible interpretation:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>“Maturity” is grounded in observable capability indicators rather than marketing terms&lt;/p>
&lt;/li>
&lt;li>
&lt;p>“Breadth” is framed as workflow ownership, not simply feature count&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Design principle:&lt;/strong> &lt;em>Keep axes orthogonal so the chart stays truthful.&lt;/em>&lt;/p>
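&lt;p>As an illustration of the mechanics of this view, a minimal plotting sketch might look like the following. The scores, scale, and product names are placeholders, not real data:&lt;/p>
&lt;pre>&lt;code class="language-python"># Minimal sketch of a maturity-vs-breadth scatter view (placeholder data).
import matplotlib.pyplot as plt

# Each offering gets two independent scores from the classification model:
# automation/AI maturity (x) and workflow breadth (y), both on a 0-5 scale.
offerings = {
    "Product A": (1.0, 4.0),   # broad platform, mostly rules-based
    "Product B": (4.0, 1.5),   # narrow tool with mature AI
    "Product C": (3.0, 3.5),   # hybrid, multi-stage
}

fig, ax = plt.subplots(figsize=(6, 6))
for name, (maturity, breadth) in offerings.items():
    ax.scatter(maturity, breadth)
    ax.annotate(name, (maturity, breadth),
                textcoords="offset points", xytext=(6, 4))

ax.set_xlabel("Automation / AI maturity (capability indicators)")
ax.set_ylabel("Workflow breadth (workflow ownership)")
ax.set_xlim(0, 5)
ax.set_ylim(0, 5)
ax.set_title("Maturity x Breadth (placeholder data)")
plt.show()
&lt;/code>&lt;/pre>
&lt;p>Because the two scores come from separate parts of the classification model, a product can’t look “mature” simply by claiming broad coverage, and vice versa.&lt;/p>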
&lt;hr>
&lt;h3 id="3-commercial-vs-residential-relevance">&lt;strong>3. Commercial vs. Residential Relevance&lt;/strong>&lt;/h3>
&lt;p>&lt;strong>Question answered:&lt;/strong> “Which solutions are most comparable to our operational context?”&lt;/p>
&lt;p>&lt;strong>Why it works:&lt;/strong> Transaction types differ in complexity, documentation, risk, and workflow variability. Without this lens, stakeholders may draw incorrect strategic conclusions from superficially similar tools.&lt;/p>
&lt;p>&lt;strong>How it’s designed:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>A simple segmentation that scopes interpretation rather than ranking vendors&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Helps stakeholders quickly identify “directly relevant” vs. “adjacent signals” in the market&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h3 id="4-ecosystem-integration-landscape">&lt;strong>4. Ecosystem Integration Landscape&lt;/strong>&lt;/h3>
&lt;p>&lt;strong>Question answered:&lt;/strong> “What’s a tool we can plug in vs. an ecosystem that changes interoperability and switching costs?”&lt;/p>
&lt;p>&lt;strong>Why it works:&lt;/strong> Integration posture shapes adoption dynamics: procurement, implementation effort, dependency risk, and long-term flexibility.&lt;/p>
&lt;p>&lt;strong>How it’s designed:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Clear categories that highlight whether a solution is:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>a standalone product,&lt;/p>
&lt;/li>
&lt;li>
&lt;p>part of an integrated suite,&lt;/p>
&lt;/li>
&lt;li>
&lt;p>or operating as an ecosystem strategy&lt;/p>
&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Design principle:&lt;/strong> &lt;em>Strategy isn’t only about features—it’s about constraints.&lt;/em>&lt;/p>
&lt;hr>
&lt;h2 id="making-it-usable-storytelling-and-stakeholder-alignment">&lt;strong>Making It Usable: Storytelling and Stakeholder Alignment&lt;/strong>&lt;/h2>
&lt;p>&lt;strong>Progressive disclosure (how the readout was structured)&lt;/strong>&lt;/p>
&lt;p>I presented the work like a product narrative:&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Start with &lt;strong>workflow stage&lt;/strong> to establish orientation&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Move to &lt;strong>maturity × breadth&lt;/strong> to discuss capability patterns&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Add &lt;strong>relevance&lt;/strong> to prevent misinterpretation&lt;/p>
&lt;/li>
&lt;li>
&lt;p>End with &lt;strong>ecosystem integration&lt;/strong> to connect to strategic leverage and constraints&lt;/p>
&lt;/li>
&lt;/ol>
&lt;p>This sequence reduced debate and increased clarity: stakeholders could follow the logic rather than getting stuck on definitions.&lt;/p>
&lt;p>&lt;strong>Built-in “how to read” guidance&lt;/strong>&lt;/p>
&lt;p>Each view includes lightweight framing—axis definitions, category labels, and reading cues—so the landscape can stand alone without the researcher in the room.&lt;/p>
&lt;hr>
&lt;h3 id="outcomes">&lt;strong>Outcomes&lt;/strong>&lt;/h3>
&lt;p>This work created an artifact stakeholders could actually use:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>A &lt;strong>shared vocabulary&lt;/strong> for discussing a fragmented market&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A &lt;strong>trustworthy classification model&lt;/strong> that made comparisons feel grounded&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A &lt;strong>decision-ready visualization system&lt;/strong> that supported strategy discussions without requiring deep domain knowledge&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A framework designed to be &lt;strong>maintained and updated&lt;/strong>, not a one-time research dump&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>(Deliberately omitted here: any market-specific conclusions or vendor evaluations.)&lt;/p>
&lt;hr>
&lt;h3 id="what-this-demonstrates">&lt;strong>What This Demonstrates&lt;/strong>&lt;/h3>
&lt;p>This project is a snapshot of the kind of UX work that sits at the intersection of:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>research strategy (defining what must be true to make a decision),&lt;/p>
&lt;/li>
&lt;li>
&lt;p>analytics and synthesis (normalizing messy inputs),&lt;/p>
&lt;/li>
&lt;li>
&lt;p>information design (reducing cognitive load),&lt;/p>
&lt;/li>
&lt;li>
&lt;p>and stakeholder alignment (building shared understanding through clear frameworks).&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h3 id="key-takeaways-id-reuse">&lt;strong>Key Takeaways I’d Reuse&lt;/strong>&lt;/h3>
&lt;ol>
&lt;li>
&lt;p>&lt;strong>Start with the decisions, not the data.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Normalize first; visualize second.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Use multiple simple views instead of one complex chart.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Make classification rules explicit so the work earns trust.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Design for scanning—then support deeper follow-up.&lt;/strong>&lt;/p>
&lt;/li>
&lt;/ol></description></item><item><title>Modernizing document workflows in a complex transaction platform</title><link>https://www.tanzhou.space/project/transforming-document-experince/</link><pubDate>Sun, 01 Jun 2025 05:26:35 +0000</pubDate><guid>https://www.tanzhou.space/project/transforming-document-experince/</guid><description>&lt;h2 id="problem">Problem&lt;/h2>
&lt;h3 id="business--product-challenge">Business &amp;amp; product challenge&lt;/h3>
&lt;figure id="figure-before-document-coordination-lived-in-email-threads-and-attachmentsforcing-manual-tracking-follow-ups-and-low-confidence-in-whats-latest">
&lt;div class="figure-img-wrap" >
&lt;img alt="Before: Document coordination lived in email threads and attachments—forcing manual tracking, follow-ups, and low confidence in &amp;#39;what&amp;#39;s latest&amp;#39;." srcset="
/media/problem-visual_hu480d41c255ee25ae355a403997eceb5b_212899_22aee16a2930d28e364121e08f103e4c.png 400w,
/media/problem-visual_hu480d41c255ee25ae355a403997eceb5b_212899_5fe2b65278fb4ded83625bd1b490b2a2.png 760w,
/media/problem-visual_hu480d41c255ee25ae355a403997eceb5b_212899_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/problem-visual_hu480d41c255ee25ae355a403997eceb5b_212899_22aee16a2930d28e364121e08f103e4c.png"
width="760"
height="151"
loading="lazy" data-zoomable />&lt;/div>&lt;figcaption>
Before: Document coordination lived in email threads and attachments—forcing manual tracking, follow-ups, and low confidence in &amp;lsquo;what&amp;rsquo;s latest&amp;rsquo;.
&lt;/figcaption>&lt;/figure>
&lt;p>In complex, high-stakes transactions, “documents” aren’t a feature—they’re the operating system. Internal teams and external clients must request, collect, verify, and reference dozens of items across multiple parties, deadlines, and handoffs.&lt;/p>
&lt;p>The legacy reality looked like this:&lt;/p>
&lt;ul>
&lt;li>Requirements defined through contracts + back-and-forth Q&amp;amp;A&lt;/li>
&lt;li>Documents arriving in scattered email threads and attachments&lt;/li>
&lt;li>Manual tracking (“what’s missing, who owes what, what changed?”)&lt;/li>
&lt;li>Version confusion and rework (duplicate uploads, wrong file shared, unclear latest)&lt;/li>
&lt;/ul>
&lt;p>This is a &lt;em>product&lt;/em> problem (lack of shared visibility and trusted status), and a &lt;em>business&lt;/em> problem (time and risk). Industry benchmarks show why this matters: “interaction workers” spend &lt;strong>~28% of time on email&lt;/strong> and &lt;strong>~19% searching/gathering information&lt;/strong>, and improving collaboration/searchability can create &lt;strong>~20–25% productivity uplift&lt;/strong> in the right conditions.&lt;/p>
&lt;h3 id="users">Users&lt;/h3>
&lt;ul>
&lt;li>&lt;strong>Internal transaction teams&lt;/strong> (ops/service/processing): need a reliable source of truth to coordinate work, maintain confidentiality, and avoid errors.&lt;/li>
&lt;li>&lt;strong>External clients/partners&lt;/strong>: need clarity on what’s required, what’s outstanding, and confidence that the right version was received.&lt;/li>
&lt;/ul>
&lt;h3 id="why-this-was-important">Why this was important&lt;/h3>
&lt;p>Document handling is repeated constantly. If the workflow is unclear, people default to email and personal workarounds—creating compounding cost (minutes lost per document × many users × many transactions) and compounding risk (wrong versions, missed requirements, delayed approvals).&lt;/p>
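&lt;p>A back-of-envelope model makes the compounding visible. All inputs below are illustrative placeholders, not measured values:&lt;/p>
&lt;pre>&lt;code class="language-python"># Illustrative cost model; every input is a placeholder, not a measurement.
minutes_lost_per_document = 3        # chasing, searching, re-verifying
documents_per_transaction = 30
transactions_per_month = 500

monthly_minutes = (minutes_lost_per_document
                   * documents_per_transaction
                   * transactions_per_month)
monthly_hours = monthly_minutes / 60
print(f"~{monthly_hours:,.0f} hours/month lost to document friction")
# With these placeholders: 3 * 30 * 500 = 45,000 minutes = 750 hours/month.
&lt;/code>&lt;/pre>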
&lt;h2 id="strategy">Strategy&lt;/h2>
&lt;h3 id="research-goal">Research goal&lt;/h3>
&lt;p>To define a modern document workflow that:&lt;/p>
&lt;ol>
&lt;li>makes requirements visible,&lt;/li>
&lt;li>makes progress trackable,&lt;/li>
&lt;li>makes document status trustworthy, and&lt;/li>
&lt;li>scales to high volumes without forcing users back into email or local folders.&lt;/li>
&lt;/ol>
&lt;h3 id="approach-a-program-of-research-not-one-study">Approach: a program of research, not “one study”&lt;/h3>
&lt;p>I ran this as a &lt;strong>multi-phase research arc&lt;/strong> where each phase answered the next logical question:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Discovery (workflow reality)&lt;/strong>: What actually happens today—and where does it break?&lt;/li>
&lt;li>&lt;strong>Concept shaping (new mental model)&lt;/strong>: What structure reduces ambiguity (checklists, status, ownership, visibility)?&lt;/li>
&lt;li>&lt;strong>Validation (does it work for real users?)&lt;/strong>: Can internal and external users understand it quickly, act confidently, and avoid errors?&lt;/li>
&lt;li>&lt;strong>Scale (second-order constraints)&lt;/strong>: Once adoption grows, what breaks next (organization, findability, versioning, automation)?&lt;/li>
&lt;/ol>
&lt;h3 id="methods">Methods&lt;/h3>
&lt;p>Because the work spanned maturity stages, I matched method to decision:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Interviews / workflow mapping&lt;/strong> to surface real breakdowns and system constraints&lt;/li>
&lt;li>&lt;strong>Prototype concept testing&lt;/strong> to de-risk mental models (terminology, status, ownership, visibility)&lt;/li>
&lt;li>&lt;strong>Design validation&lt;/strong> to confirm comprehension and usability before rollout&lt;/li>
&lt;li>&lt;strong>Later-stage discovery&lt;/strong> focused on scale issues (high doc counts, search behavior, version control expectations)&lt;/li>
&lt;/ul>
&lt;h2 id="my-decision-rationale">My decision rationale&lt;/h2>
&lt;h3 id="why-interviews-first">Why interviews first&lt;/h3>
&lt;p>At the start, we didn’t have a UI problem—we had a &lt;strong>coordination system problem&lt;/strong>. Interviews and workflow mapping were the fastest way to:&lt;/p>
&lt;ul>
&lt;li>uncover the real “jobs to be done” (request → chase → receive → verify → organize → reuse)&lt;/li>
&lt;li>expose hidden constraints (privacy boundaries, handoffs, audit needs)&lt;/li>
&lt;li>identify why “email + attachments” persisted (it filled gaps the product didn’t cover)&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Decision logic&lt;/strong>: If we guessed at the workflow, we’d build a beautiful interface around the wrong system.&lt;/p>
&lt;h3 id="why-prototype-testing-next">Why prototype testing next&lt;/h3>
&lt;p>Once I saw the breakdown was “tracking + trust,” I needed to validate whether a checklist/status model could become the shared source of truth. Prototype testing was the right tool because it let us:&lt;/p>
&lt;ul>
&lt;li>test comprehension of status/ownership (without expensive build)&lt;/li>
&lt;li>test terminology and “professional tone” early (a known adoption lever)&lt;/li>
&lt;li>measure whether users could correctly answer “what’s left?” in seconds&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Decision logic&lt;/strong>: We needed behavioral evidence that the model reduced ambiguity—not just opinions about it.&lt;/p>
&lt;h3 id="why-design-validation-internal--external">Why design validation (internal + external)&lt;/h3>
&lt;p>The workflow had two audiences with different risk profiles. Validation ensured:&lt;/p>
&lt;ul>
&lt;li>internal users could move fast without creating errors&lt;/li>
&lt;li>external users could act confidently without needing an explainer&lt;/li>
&lt;li>status changes and version cues didn’t create false confidence or confusion&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Decision logic&lt;/strong>: In document workflows, clarity is safety—validation is risk management.&lt;/p>
&lt;h3 id="why-organization--versioning-later">Why organization + versioning later&lt;/h3>
&lt;p>As the system matured, the next bottleneck wasn’t “can I upload?”—it was “can I find the right thing and trust it?” At scale, long document lists and multiple versions shift the problem from interaction design to &lt;strong>information architecture and reliability&lt;/strong>.&lt;/p>
&lt;p>&lt;strong>Decision logic&lt;/strong>: Once the checklist model reduced “what’s missing,” the system’s limiting factor became “what’s correct and where is it?”—so research pivoted to structure, search behavior, and version control.&lt;/p>
&lt;h2 id="key-decisions">Key decisions&lt;/h2>
&lt;figure id="figure-checklist-became-the-coordination-layer-that-connects-requests-uploads-ownerships-status-notifications-and-version-confidence">
&lt;div class="figure-img-wrap" >
&lt;img alt="Checklist became the coordination layer that connects requests, uploads, ownerships, status, notifications, and version confidence." srcset="
/media/insight-visual_huc057ca841da83c4a25edad75c3369b93_168586_8856eeccd1d2986d955c3552573ae022.png 400w,
/media/insight-visual_huc057ca841da83c4a25edad75c3369b93_168586_65a7ea32f76ad294b0583d2e0cb7c185.png 760w,
/media/insight-visual_huc057ca841da83c4a25edad75c3369b93_168586_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/insight-visual_huc057ca841da83c4a25edad75c3369b93_168586_8856eeccd1d2986d955c3552573ae022.png"
width="666"
height="260"
loading="lazy" data-zoomable />&lt;/div>&lt;figcaption>
Checklist became the coordination layer that connects requests, uploads, ownerships, status, notifications, and version confidence.
&lt;/figcaption>&lt;/figure>
&lt;ol>
&lt;li>&lt;strong>Reframe documents from “file storage” to “workflow tracking”&lt;/strong>&lt;/li>
&lt;/ol>
&lt;p>&lt;strong>Decision&lt;/strong>: Treat document handling as an end-to-end workflow (requirements → request → receipt → verification → history), not a repository.
&lt;strong>Why&lt;/strong>: Email persists because it supports coordination and status tracking—so the product had to do that job better.&lt;/p>
&lt;ol start="2">
&lt;li>&lt;strong>Use a checklist model as the shared source of truth&lt;/strong>&lt;/li>
&lt;/ol>
&lt;p>&lt;strong>Decision&lt;/strong>: Anchor the experience in a checklist/status structure that answers: what’s needed, what’s in progress, what’s done, what changed, who owns it.
&lt;strong>Why&lt;/strong>: This reduces ambiguity for both internal teams and external clients, and creates a consistent foundation for later features (notifications, organization, automation).&lt;/p>
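&lt;p>A minimal sketch of the structure this implies is shown below. Field names and statuses are hypothetical, not the shipped schema:&lt;/p>
&lt;pre>&lt;code class="language-python"># Hypothetical checklist-item model; names illustrate the concept only.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class Status(Enum):
    NEEDED = "needed"
    IN_PROGRESS = "in progress"
    RECEIVED = "received"
    VERIFIED = "verified"

class Visibility(Enum):
    INTERNAL_ONLY = "internal only"
    SHARED = "shared with client"

@dataclass
class ChecklistItem:
    title: str                                    # what's needed
    owner: str                                    # who owes it (explicit, not implied)
    status: Status = Status.NEEDED
    visibility: Visibility = Visibility.SHARED
    history: list = field(default_factory=list)   # what changed, and when

    def set_status(self, new_status: Status, actor: str) -> None:
        """Record every transition so 'what changed?' is always answerable."""
        self.history.append((datetime.now(), actor, self.status, new_status))
        self.status = new_status

def outstanding(items: list) -> list:
    """Answers 'what's left?' in one pass."""
    return [i for i in items if i.status is not Status.VERIFIED]
&lt;/code>&lt;/pre>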
&lt;ol start="3">
&lt;li>&lt;strong>Make ownership, visibility, and status explicit (not implied)&lt;/strong>&lt;/li>
&lt;/ol>
&lt;p>&lt;strong>Decision&lt;/strong>: Design for role-based visibility and unambiguous status transitions (with language that users trust).
&lt;strong>Why&lt;/strong>: In transaction workflows, unclear “who owns this” creates delays; unclear “status” creates rework and risk.&lt;/p>
&lt;ol start="4">
&lt;li>&lt;strong>Standardize organization defaults before adding “more flexibility”&lt;/strong>&lt;/li>
&lt;/ol>
&lt;p>&lt;strong>Decision&lt;/strong>: Provide sensible default structure (folders/tabs/categories, sorting and filtering patterns, and “pin/priority” behaviors) rather than relying on everyone inventing their own system.
&lt;strong>Why&lt;/strong>: Ad-hoc organization scales poorly and increases search time and error rates—especially across teams.&lt;/p>
&lt;ol start="5">
&lt;li>&lt;strong>Invest in version confidence as a first-class requirement&lt;/strong>&lt;/li>
&lt;/ol>
&lt;p>&lt;strong>Decision&lt;/strong>: Prioritize version history and clear draft/final cues (including stacking, timestamps, and traceability).
&lt;strong>Why&lt;/strong>: Version confusion is a trust-breaker; users can’t move fast if they fear sharing the wrong thing.&lt;/p>
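&lt;p>As a sketch, version confidence can be expressed as data the UI then surfaces. The model below is hypothetical and only illustrates the cues involved:&lt;/p>
&lt;pre>&lt;code class="language-python"># Hypothetical version-stack sketch: explicit cues instead of filename guessing.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Version:
    filename: str
    uploaded_by: str                 # traceability
    uploaded_at: datetime            # recency cue
    is_final: bool = False           # explicit draft/final flag

def latest(versions: list) -> Version:
    """The 'current' version is derived from timestamps, never user memory."""
    return max(versions, key=lambda v: v.uploaded_at)
&lt;/code>&lt;/pre>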
&lt;h2 id="what-changed">What changed&lt;/h2>
&lt;h3 id="roadmap--scope-changes">Roadmap &amp;amp; scope changes&lt;/h3>
&lt;ul>
&lt;li>The roadmap shifted from “improve upload” to “support workflow clarity” (tracking, status, ownership, visibility).&lt;/li>
&lt;li>Document organization and versioning were treated as strategic enablers—not nice-to-haves—because they determine whether the system works at scale.&lt;/li>
&lt;/ul>
&lt;h3 id="design--ux-changes">Design &amp;amp; UX changes&lt;/h3>
&lt;ul>
&lt;li>Checklist-based experience became the core navigation layer for document work (what’s outstanding, who owes what, what’s completed).&lt;/li>
&lt;li>Status language and interaction patterns were refined through iterative testing to reduce misinterpretation.&lt;/li>
&lt;li>Organization patterns were elevated: default structures, better sorting/filtering, and pathways to reduce scanning and “where did it go?” confusion.&lt;/li>
&lt;li>Version confidence was explicitly designed (history, recency cues, clearer distinctions between draft/final).&lt;/li>
&lt;/ul>
&lt;h3 id="stakeholder-alignment-outcomes">Stakeholder alignment outcomes&lt;/h3>
&lt;ul>
&lt;li>Research artifacts created a shared mental model across product/design/ops/engineering—so decisions could be made faster and with less debate about what users “really do.”&lt;/li>
&lt;/ul>
&lt;figure id="figure-how-research-translated-into-action-key-inisghts-were-turned-into-concrete-product-decisions-and-measurable-experience-changes">
&lt;div class="figure-img-wrap" >
&lt;img alt="How research translated into action: key inisghts were turned into concrete product decisions and measurable experience changes." srcset="
/media/decision-visual_huefe46a908e2c5a917e93fb1ccf62ba49_165602_5d538d37b73d3d6496dcbc0a88ecee35.png 400w,
/media/decision-visual_huefe46a908e2c5a917e93fb1ccf62ba49_165602_2ef1c05fcee1169734d691be77ee12bc.png 760w,
/media/decision-visual_huefe46a908e2c5a917e93fb1ccf62ba49_165602_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/decision-visual_huefe46a908e2c5a917e93fb1ccf62ba49_165602_5d538d37b73d3d6496dcbc0a88ecee35.png"
width="626"
height="269"
loading="lazy" data-zoomable />&lt;/div>&lt;figcaption>
How research translated into action: key insights were turned into concrete product decisions and measurable experience changes.
&lt;/figcaption>&lt;/figure>
&lt;h2 id="impact">Impact&lt;/h2>
&lt;p>Industry research suggests that a large share of knowledge work is consumed by communication and information retrieval:&lt;/p>
&lt;ul>
&lt;li>~28% of time is spent managing email (reading/writing/responding)&lt;/li>
&lt;li>~19% of time is spent searching and gathering information&lt;/li>
&lt;li>Making information more available and searchable can reduce information searching time by as much as ~35% in some contexts&lt;/li>
&lt;/ul>
&lt;p>A checklist-driven document system directly targets both buckets:&lt;/p>
&lt;ul>
&lt;li>fewer emails needed to ask “what’s missing / did you get it?”&lt;/li>
&lt;li>less time spent searching across inbox threads and attachments&lt;/li>
&lt;li>fewer wrong-version loops and duplicate handling&lt;/li>
&lt;/ul></description></item></channel></rss>