<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Projects | Tan Zhou</title><link>https://www.tanzhou.space/project/</link><atom:link href="https://www.tanzhou.space/project/index.xml" rel="self" type="application/rss+xml"/><description>Projects</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><copyright>© 2021 Tan Zhou</copyright><lastBuildDate>Sat, 01 Nov 2025 05:26:35 +0000</lastBuildDate><image><url>https://www.tanzhou.space/media/logo_huf7e62ae9b3d64ce881bc1ae8b1405426_18051_300x300_fit_lanczos_2.png</url><title>Projects</title><link>https://www.tanzhou.space/project/</link></image><item><title>From Messy Dataset to “At-a-Glance” Visualizations of Competitive Landscape</title><link>https://www.tanzhou.space/project/competitive-lanscape-at-a-glance/</link><pubDate>Sat, 01 Nov 2025 05:26:35 +0000</pubDate><guid>https://www.tanzhou.space/project/competitive-lanscape-at-a-glance/</guid><description>&lt;h2 id="overview">&lt;strong>Overview&lt;/strong>&lt;/h2>
&lt;p>As AI and automation accelerated across the industry, my stakeholders needed to understand the competitive space of AI, automation, and technology in title/settlement platforms. The challenge wasn’t collecting information—it was &lt;strong>making complex, uneven competitive data understandable and actionable for decision-makers&lt;/strong>.&lt;/p>
&lt;blockquote>
&lt;p>This case study focuses on &lt;em>how&lt;/em> I translated a large competitive dataset into a clear visualization system. It intentionally avoids sharing competitive “insights” or conclusions about specific companies.&lt;/p>
&lt;/blockquote>
&lt;hr>
&lt;h3 id="the-business-problem">&lt;strong>The Business Problem&lt;/strong>&lt;/h3>
&lt;p>Leadership needed decision support for product strategy questions such as:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Where are competitors investing in automation across the transaction workflow?&lt;/p>
&lt;/li>
&lt;li>
&lt;p>What types of solutions exist (end-to-end platforms vs. narrow tools)?&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Which parts of the ecosystem are truly comparable to our context?&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>To answer these, stakeholders needed a landscape they could trust and interpret quickly—without reading a long report.&lt;/p>
&lt;hr>
&lt;h3 id="the-research-challenge">&lt;strong>The Research Challenge&lt;/strong>&lt;/h3>
&lt;p>This was not a clean comparison set. The competitive space had three structural issues:&lt;/p>
&lt;p>&lt;strong>1. “Apples-to-oranges” offerings&lt;/strong>&lt;/p>
&lt;p>Some products are broad workflow platforms. Others specialize in one slice (e.g., document automation, search, closing coordination, post-close). Comparing them on a single axis would oversimplify and mislead.&lt;/p>
&lt;p>&lt;strong>2. “AI” claims were inconsistent&lt;/strong>&lt;/p>
&lt;p>Many vendors used similar language (“AI-powered,” “automation,” “intelligent workflow”), but the underlying capability varied widely. The dataset needed a way to separate marketing terms from meaningful maturity indicators.&lt;/p>
&lt;p>&lt;strong>3. Too much information to be usable&lt;/strong>&lt;/p>
&lt;p>Raw competitive research often becomes a dense spreadsheet that only the researcher can navigate. Stakeholders needed &lt;strong>clarity at a glance&lt;/strong>, with enough structure to support follow-up questions.&lt;/p>
&lt;hr>
&lt;h3 id="my-role">&lt;strong>My Role&lt;/strong>&lt;/h3>
&lt;p>I led the work end-to-end across:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Research framing (what decisions the landscape needed to support)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Data modeling and taxonomy creation (how we normalized inconsistent inputs)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Classification logic and decision rules&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Information design and visualization system&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Stakeholder alignment through iterative readouts and refinement&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h3 id="what-success-looked-like">&lt;strong>What Success Looked Like&lt;/strong>&lt;/h3>
&lt;p>We defined success as a set of outputs that were:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Strategic&lt;/strong>: tied to product decisions, not just market description&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Trustworthy&lt;/strong>: classification logic visible and repeatable&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Scannable&lt;/strong>: usable in seconds, not minutes&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Multi-dimensional without being messy&lt;/strong>: complexity represented through a system, not a single overloaded chart&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Reusable&lt;/strong>: designed as an artifact we could update as the market changed&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h2 id="process-from-research-needs-to-visualization-system">&lt;strong>Process: From Research Needs to Visualization System&lt;/strong>&lt;/h2>
&lt;h3 id="step-1-translate-stakeholder-questions-into-decision-views">&lt;strong>Step 1: Translate stakeholder questions into “decision views”&lt;/strong>&lt;/h3>
&lt;p>Before making any visual, I reframed stakeholder needs into explicit questions the landscape must answer:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Orientation question:&lt;/strong> “Where does each solution fit in the workflow?”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Capability question:&lt;/strong> “How advanced is automation/AI—and how broadly does it apply?”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Context question:&lt;/strong> “Which solutions are actually relevant to our domain focus?”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Ecosystem question:&lt;/strong> “What’s plug-and-play vs. what changes switching costs and integration realities?”&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>This step prevented a common failure mode: building one beautiful chart that answers none of the real decisions.&lt;/p>
&lt;hr>
&lt;h3 id="step-2-build-a-classification-model-to-normalize-messy-data">&lt;strong>Step 2: Build a classification model to normalize messy data&lt;/strong>&lt;/h3>
&lt;p>To compare uneven offerings, I created a shared taxonomy—essentially a “data contract” for the landscape.&lt;/p>
&lt;p>&lt;strong>What we standardized (examples)&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Primary workflow focus&lt;/strong>: where the product anchors its value (even if it touches multiple steps)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Workflow breadth&lt;/strong>: narrow point solution → broad end-to-end platform&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Automation mechanism&lt;/strong>: rules-based automation vs. AI-driven vs. hybrid&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Integration posture&lt;/strong>: standalone tool → integrated suite → ecosystem&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Domain relevance&lt;/strong>: relevance based on transaction complexity and operational needs (rather than vendor labels)&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>The key: explicit decision rules&lt;/strong>&lt;/p>
&lt;p>I documented rules for edge cases, such as:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Platforms spanning multiple workflow stages&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Suites that bundle unrelated modules&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Tools that market “AI” but primarily deliver rules-based automation&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Products that appear comparable but serve fundamentally different transaction contexts&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>This turned subjective categorization into something stakeholders could understand, challenge, and trust.&lt;/p>
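&lt;p>A minimal sketch of what such a “data contract” might look like in code, with hypothetical field names, categories, and one example decision rule (the actual taxonomy and rules are not shown here):&lt;/p>

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical categories -- illustrative stand-ins for the real taxonomy.
class Breadth(Enum):
    POINT_SOLUTION = 1   # narrow point solution
    MULTI_STAGE = 2
    END_TO_END = 3       # broad end-to-end platform

class Mechanism(Enum):
    RULES_BASED = "rules-based"
    AI_DRIVEN = "ai-driven"
    HYBRID = "hybrid"

@dataclass
class Vendor:
    name: str
    primary_stage: str      # where the product anchors its value
    breadth: Breadth
    mechanism: Mechanism
    integration: str        # "standalone" / "suite" / "ecosystem"

def classify_mechanism(claims_ai: bool, has_ml_evidence: bool,
                       has_rules_engine: bool) -> Mechanism:
    """Example edge-case rule: an 'AI' claim without observable
    capability indicators is classified as rules-based."""
    if claims_ai and has_ml_evidence and has_rules_engine:
        return Mechanism.HYBRID
    if claims_ai and has_ml_evidence:
        return Mechanism.AI_DRIVEN
    # Marketing language alone does not count as maturity evidence.
    return Mechanism.RULES_BASED
```

&lt;p>Making the rule explicit like this is what lets stakeholders challenge a classification (“why is this vendor rules-based?”) instead of having to trust an opaque judgment call.&lt;/p>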
&lt;hr>
&lt;h3 id="step-3-choose-a-backbone-view-for-orientation">&lt;strong>Step 3: Choose a “backbone” view for orientation&lt;/strong>&lt;/h3>
&lt;p>I started with &lt;strong>Workflow Stage&lt;/strong> because it matches how most stakeholders naturally reason about real estate closing: as a lifecycle with handoffs and dependencies.&lt;/p>
&lt;p>&lt;strong>Why this came first&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>It gives immediate context to non-experts: “Where in the process does this help?”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>It avoids premature ranking or “winners/losers”&lt;/p>
&lt;/li>
&lt;li>
&lt;p>It makes later views easier to interpret by grounding them in a shared mental model&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Design principle:&lt;/strong> &lt;em>Always orient before differentiating.&lt;/em>&lt;/p>
&lt;hr>
&lt;h3 id="step-4-avoid-the-single-22-trapuse-complementary-orthogonal-views">&lt;strong>Step 4: Avoid the single 2×2 trap—use complementary, orthogonal views&lt;/strong>&lt;/h3>
&lt;p>A single chart can’t responsibly represent a market where:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>some products are broad platforms,&lt;/p>
&lt;/li>
&lt;li>
&lt;p>some are specialized,&lt;/p>
&lt;/li>
&lt;li>
&lt;p>and “AI” is not consistently defined.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>So I designed a &lt;strong>system of four views&lt;/strong>, each answering a different strategic question with minimal cognitive load.&lt;/p>
&lt;hr>
&lt;h3 id="the-solution-a-four-view-competitive-landscape-system">&lt;strong>The Solution: A Four-View Competitive Landscape System&lt;/strong>&lt;/h3>
&lt;p>&lt;strong>1. Workflow Stage Landscape&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Question answered:&lt;/strong> “Where does each solution primarily contribute within the closing workflow?”&lt;/p>
&lt;p>&lt;strong>Why it works:&lt;/strong> It helps teams understand the ecosystem without needing domain expertise. It also prevents false comparisons by showing that many solutions aren’t trying to solve the same problem.&lt;/p>
&lt;p>&lt;strong>How it’s designed for clarity:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Grouped by workflow stages with short “expectations” per stage (what buyers typically look for there)&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A dedicated representation for cross-lifecycle platforms so multi-stage tools don’t distort stage-specific comparisons&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h3 id="2-ai--automation-maturity--workflow-breadth">&lt;strong>2. AI &amp;amp; Automation Maturity × Workflow Breadth&lt;/strong>&lt;/h3>
&lt;p>&lt;strong>Question answered:&lt;/strong> “How mature is automation/AI—and how broadly does it apply across the workflow?”&lt;/p>
&lt;p>&lt;strong>Why it works:&lt;/strong> This separates two things stakeholders often conflate:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>maturity of automation capability&lt;/p>
&lt;/li>
&lt;li>
&lt;p>how much of the workflow the product claims to cover&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>How it’s designed for responsible interpretation:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>“Maturity” is grounded in observable capability indicators rather than marketing terms&lt;/p>
&lt;/li>
&lt;li>
&lt;p>“Breadth” is framed as workflow ownership, not simply feature count&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Design principle:&lt;/strong> &lt;em>Keep axes orthogonal so the chart stays truthful.&lt;/em>&lt;/p>
&lt;hr>
&lt;h3 id="3-commercial-vs-residential-relevance">&lt;strong>3. Commercial vs. Residential Relevance&lt;/strong>&lt;/h3>
&lt;p>&lt;strong>Question answered:&lt;/strong> “Which solutions are most comparable to our operational context?”&lt;/p>
&lt;p>&lt;strong>Why it works:&lt;/strong> Transaction types differ in complexity, documentation, risk, and workflow variability. Without this lens, stakeholders may draw incorrect strategic conclusions from superficially similar tools.&lt;/p>
&lt;p>&lt;strong>How it’s designed:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>A simple segmentation that scopes interpretation rather than ranking vendors&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Helps stakeholders quickly identify “directly relevant” vs. “adjacent signals” in the market&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h3 id="4-ecosystem-integration-landscape">&lt;strong>4. Ecosystem Integration Landscape&lt;/strong>&lt;/h3>
&lt;p>&lt;strong>Question answered:&lt;/strong> “What’s a tool we can plug in vs. an ecosystem that changes interoperability and switching costs?”&lt;/p>
&lt;p>&lt;strong>Why it works:&lt;/strong> Integration posture shapes adoption dynamics: procurement, implementation effort, dependency risk, and long-term flexibility.&lt;/p>
&lt;p>&lt;strong>How it’s designed:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Clear categories that highlight whether a solution is:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>a standalone product,&lt;/p>
&lt;/li>
&lt;li>
&lt;p>part of an integrated suite,&lt;/p>
&lt;/li>
&lt;li>
&lt;p>or operating as an ecosystem strategy&lt;/p>
&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Design principle:&lt;/strong> &lt;em>Strategy isn’t only about features—it’s about constraints.&lt;/em>&lt;/p>
&lt;hr>
&lt;h2 id="making-it-usable-storytelling-and-stakeholder-alignment">&lt;strong>Making It Usable: Storytelling and Stakeholder Alignment&lt;/strong>&lt;/h2>
&lt;p>&lt;strong>Progressive disclosure (how the readout was structured)&lt;/strong>&lt;/p>
&lt;p>I presented the work like a product narrative:&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Start with &lt;strong>workflow stage&lt;/strong> to establish orientation&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Move to &lt;strong>maturity × breadth&lt;/strong> to discuss capability patterns&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Add &lt;strong>relevance&lt;/strong> to prevent misinterpretation&lt;/p>
&lt;/li>
&lt;li>
&lt;p>End with &lt;strong>ecosystem integration&lt;/strong> to connect to strategic leverage and constraints&lt;/p>
&lt;/li>
&lt;/ol>
&lt;p>This sequence reduced debate and increased clarity: stakeholders could follow the logic rather than getting stuck on definitions.&lt;/p>
&lt;p>&lt;strong>Built-in “how to read” guidance&lt;/strong>&lt;/p>
&lt;p>Each view includes lightweight framing—axis definitions, category labels, and reading cues—so the landscape can stand alone without the researcher in the room.&lt;/p>
&lt;hr>
&lt;h3 id="outcomes">&lt;strong>Outcomes&lt;/strong>&lt;/h3>
&lt;p>This work created an artifact stakeholders could actually use:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>A &lt;strong>shared vocabulary&lt;/strong> for discussing a fragmented market&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A &lt;strong>trustworthy classification model&lt;/strong> that made comparisons feel grounded&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A &lt;strong>decision-ready visualization system&lt;/strong> that supported strategy discussions without requiring deep domain knowledge&lt;/p>
&lt;/li>
&lt;li>
&lt;p>A framework designed to be &lt;strong>maintained and updated&lt;/strong>, not a one-time research dump&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>(Deliberately omitted here: any market-specific conclusions or vendor evaluations.)&lt;/p>
&lt;hr>
&lt;h3 id="what-this-demonstrates">&lt;strong>What This Demonstrates&lt;/strong>&lt;/h3>
&lt;p>This project is a snapshot of the kind of UX work that sits at the intersection of:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>research strategy (defining what must be true to make a decision),&lt;/p>
&lt;/li>
&lt;li>
&lt;p>analytics and synthesis (normalizing messy inputs),&lt;/p>
&lt;/li>
&lt;li>
&lt;p>information design (reducing cognitive load),&lt;/p>
&lt;/li>
&lt;li>
&lt;p>and stakeholder alignment (building shared understanding through clear frameworks).&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h3 id="key-takeaways-id-reuse">&lt;strong>Key Takeaways I’d Reuse&lt;/strong>&lt;/h3>
&lt;ol>
&lt;li>
&lt;p>&lt;strong>Start with the decisions, not the data.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Normalize first; visualize second.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Use multiple simple views instead of one complex chart.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Make classification rules explicit so the work earns trust.&lt;/strong>&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Design for scanning—then support deeper follow-up.&lt;/strong>&lt;/p>
&lt;/li>
&lt;/ol></description></item><item><title>Modernizing document workflows in a complex transaction platform</title><link>https://www.tanzhou.space/project/transforming-document-experince/</link><pubDate>Sun, 01 Jun 2025 05:26:35 +0000</pubDate><guid>https://www.tanzhou.space/project/transforming-document-experince/</guid><description>&lt;h2 id="problem">Problem&lt;/h2>
&lt;h3 id="business--product-challenge">Business &amp;amp; product challenge&lt;/h3>
&lt;figure id="figure-before-document-coordination-lived-in-email-threads-and-attachmentsforcing-manual-tracking-follow-ups-and-low-confidence-in-whats-latest">
&lt;div class="figure-img-wrap" >
&lt;img alt="Before: Document coordination lived in email threads and attachments—forcing manual tracking, follow-ups, and low confidence in &amp;#39;what&amp;#39;s latest&amp;#39;." srcset="
/media/problem-visual_hu480d41c255ee25ae355a403997eceb5b_212899_22aee16a2930d28e364121e08f103e4c.png 400w,
/media/problem-visual_hu480d41c255ee25ae355a403997eceb5b_212899_5fe2b65278fb4ded83625bd1b490b2a2.png 760w,
/media/problem-visual_hu480d41c255ee25ae355a403997eceb5b_212899_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/problem-visual_hu480d41c255ee25ae355a403997eceb5b_212899_22aee16a2930d28e364121e08f103e4c.png"
width="760"
height="151"
loading="lazy" data-zoomable />&lt;/div>&lt;figcaption>
Before: Document coordination lived in email threads and attachments—forcing manual tracking, follow-ups, and low confidence in &amp;lsquo;what&amp;rsquo;s latest&amp;rsquo;.
&lt;/figcaption>&lt;/figure>
&lt;p>In complex, high-stakes transactions, “documents” aren’t a feature—they’re the operating system. Internal teams and external clients must request, collect, verify, and reference dozens of items across multiple parties, deadlines, and handoffs.&lt;/p>
&lt;p>The legacy reality looked like this:&lt;/p>
&lt;ul>
&lt;li>Requirements defined through contracts + back-and-forth Q&amp;amp;A&lt;/li>
&lt;li>Documents arriving in scattered email threads and attachments&lt;/li>
&lt;li>Manual tracking (“what’s missing, who owes what, what changed?”)&lt;/li>
&lt;li>Version confusion and rework (duplicate uploads, wrong file shared, unclear latest)&lt;/li>
&lt;/ul>
&lt;p>This is a &lt;em>product&lt;/em> problem (lack of shared visibility and trusted status), and a &lt;em>business&lt;/em> problem (time and risk). Industry benchmarks show why this matters: “interaction workers” spend &lt;strong>~28% of time on email&lt;/strong> and &lt;strong>~19% searching/gathering information&lt;/strong>, and improving collaboration/searchability can create &lt;strong>~20–25% productivity uplift&lt;/strong> in the right conditions.&lt;/p>
&lt;h3 id="users">Users&lt;/h3>
&lt;ul>
&lt;li>&lt;strong>Internal transaction teams&lt;/strong> (ops/service/processing): need a reliable source of truth to coordinate work, maintain confidentiality, and avoid errors.&lt;/li>
&lt;li>&lt;strong>External clients/partners&lt;/strong>: need clarity on what’s required, what’s outstanding, and confidence that the right version was received.&lt;/li>
&lt;/ul>
&lt;h3 id="why-this-was-important">Why this was important&lt;/h3>
&lt;p>Document handling is repeated constantly. If the workflow is unclear, people default to email and personal workarounds—creating compounding cost (minutes lost per document × many users × many transactions) and compounding risk (wrong versions, missed requirements, delayed approvals).&lt;/p>
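&lt;p>As a rough back-of-envelope illustration of that compounding cost (all inputs below are hypothetical placeholders, not figures from this project):&lt;/p>

```python
# Hypothetical inputs for illustration only -- not project data.
minutes_lost_per_document = 3    # chasing, re-sending, verifying versions
documents_per_transaction = 40
transactions_per_month = 500

hours_lost_per_month = (
    minutes_lost_per_document
    * documents_per_transaction
    * transactions_per_month
) / 60

print(f"{hours_lost_per_month:.0f} hours/month")  # 1000 hours/month
```

&lt;p>Even at a few minutes per document, the multiplication across users and transactions is what turns a minor workflow annoyance into a material business cost.&lt;/p>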
&lt;h2 id="strategy">Strategy&lt;/h2>
&lt;h3 id="research-goal">Research goal&lt;/h3>
&lt;p>To define a modern document workflow that:&lt;/p>
&lt;ol>
&lt;li>makes requirements visible,&lt;/li>
&lt;li>makes progress trackable,&lt;/li>
&lt;li>makes document status trustworthy, and&lt;/li>
&lt;li>scales to high volumes without forcing users back into email or local folders.&lt;/li>
&lt;/ol>
&lt;h3 id="approach-a-program-of-research-not-one-study">Approach: a program of research, not “one study”&lt;/h3>
&lt;p>I ran this as a &lt;strong>multi-phase research arc&lt;/strong> where each phase answered the next logical question:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Discovery (workflow reality)&lt;/strong>: What actually happens today—and where does it break?&lt;/li>
&lt;li>&lt;strong>Concept shaping (new mental model)&lt;/strong>: What structure reduces ambiguity (checklists, status, ownership, visibility)?&lt;/li>
&lt;li>&lt;strong>Validation (does it work for real users?)&lt;/strong>: Can internal and external users understand it quickly, act confidently, and avoid errors?&lt;/li>
&lt;li>&lt;strong>Scale (second-order constraints)&lt;/strong>: Once adoption grows, what breaks next (organization, findability, versioning, automation)?&lt;/li>
&lt;/ol>
&lt;h3 id="methods">Methods&lt;/h3>
&lt;p>Because the work spanned maturity stages, I matched method to decision:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Interviews / workflow mapping&lt;/strong> to surface real breakdowns and system constraints&lt;/li>
&lt;li>&lt;strong>Prototype concept testing&lt;/strong> to de-risk mental models (terminology, status, ownership, visibility)&lt;/li>
&lt;li>&lt;strong>Design validation&lt;/strong> to confirm comprehension and usability before rollout&lt;/li>
&lt;li>&lt;strong>Later-stage discovery&lt;/strong> focused on scale issues (high doc counts, search behavior, version control expectations)&lt;/li>
&lt;/ul>
&lt;h2 id="my-decision-rationale">My decision rationale&lt;/h2>
&lt;h3 id="why-interviews-first">Why interviews first&lt;/h3>
&lt;p>At the start, this wasn’t a UI problem—it was a &lt;strong>coordination system problem&lt;/strong>. Interviews and workflow mapping were the fastest way to:&lt;/p>
&lt;ul>
&lt;li>uncover the real “jobs to be done” (request → chase → receive → verify → organize → reuse)&lt;/li>
&lt;li>expose hidden constraints (privacy boundaries, handoffs, audit needs)&lt;/li>
&lt;li>identify why “email + attachments” persisted (it filled gaps the product didn’t cover)&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Decision logic&lt;/strong>: If we guessed at the workflow, we’d build a beautiful interface around the wrong system.&lt;/p>
&lt;h3 id="why-prototype-testing-next">Why prototype testing next&lt;/h3>
&lt;p>Once I saw the breakdown was “tracking + trust,” I needed to validate whether a checklist/status model could become the shared source of truth. Prototype testing was the right tool because it let us:&lt;/p>
&lt;ul>
&lt;li>test comprehension of status/ownership (without expensive build)&lt;/li>
&lt;li>test terminology and “professional tone” early (a known adoption lever)&lt;/li>
&lt;li>measure whether users could correctly answer “what’s left?” in seconds&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Decision logic&lt;/strong>: We needed behavioral evidence that the model reduced ambiguity—not just opinions about it.&lt;/p>
&lt;h3 id="why-design-validation-internal--external">Why design validation (internal + external)&lt;/h3>
&lt;p>The workflow had two audiences with different risk profiles. Validation ensured:&lt;/p>
&lt;ul>
&lt;li>internal users could move fast without creating errors&lt;/li>
&lt;li>external users could act confidently without needing an explainer&lt;/li>
&lt;li>status changes and version cues didn’t create false confidence or confusion&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Decision logic&lt;/strong>: In document workflows, clarity is safety—validation is risk management.&lt;/p>
&lt;h3 id="why-organization--versioning-later">Why organization + versioning later&lt;/h3>
&lt;p>As the system matured, the next bottleneck wasn’t “can I upload?”—it was “can I find the right thing and trust it?” At scale, long document lists and multiple versions shift the problem from interaction design to &lt;strong>information architecture and reliability&lt;/strong>.&lt;/p>
&lt;p>&lt;strong>Decision logic&lt;/strong>: Once the checklist model reduced “what’s missing,” the system’s limiting factor became “what’s correct and where is it?”—so research pivoted to structure, search behavior, and version control.&lt;/p>
&lt;h2 id="key-decisions">Key decisions&lt;/h2>
&lt;figure id="figure-checklist-became-the-coordination-layer-that-connects-requests-uploads-ownerships-status-notifications-and-version-confidence">
&lt;div class="figure-img-wrap" >
&lt;img alt="Checklist became the coordination layer that connects requests, uploads, ownerships, status, notifications, and version confidence." srcset="
/media/insight-visual_huc057ca841da83c4a25edad75c3369b93_168586_8856eeccd1d2986d955c3552573ae022.png 400w,
/media/insight-visual_huc057ca841da83c4a25edad75c3369b93_168586_65a7ea32f76ad294b0583d2e0cb7c185.png 760w,
/media/insight-visual_huc057ca841da83c4a25edad75c3369b93_168586_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/insight-visual_huc057ca841da83c4a25edad75c3369b93_168586_8856eeccd1d2986d955c3552573ae022.png"
width="666"
height="260"
loading="lazy" data-zoomable />&lt;/div>&lt;figcaption>
Checklist became the coordination layer that connects requests, uploads, ownerships, status, notifications, and version confidence.
&lt;/figcaption>&lt;/figure>
&lt;ol>
&lt;li>&lt;strong>Reframe documents from “file storage” to “workflow tracking”&lt;/strong>&lt;/li>
&lt;/ol>
&lt;p>&lt;strong>Decision&lt;/strong>: Treat document handling as an end-to-end workflow (requirements → request → receipt → verification → history), not a repository.
&lt;strong>Why&lt;/strong>: Email persists because it supports coordination and status tracking—so the product had to do that job better.&lt;/p>
&lt;ol start="2">
&lt;li>&lt;strong>Use a checklist model as the shared source of truth&lt;/strong>&lt;/li>
&lt;/ol>
&lt;p>&lt;strong>Decision&lt;/strong>: Anchor the experience in a checklist/status structure that answers: what’s needed, what’s in progress, what’s done, what changed, who owns it.
&lt;strong>Why&lt;/strong>: This reduces ambiguity for both internal teams and external clients, and creates a consistent foundation for later features (notifications, organization, automation).&lt;/p>
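&lt;p>A minimal sketch of the checklist/status structure described above, with hypothetical statuses and field names (not the product’s actual data model):&lt;/p>

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

# Hypothetical statuses -- illustrative, not the shipped vocabulary.
class Status(Enum):
    NEEDED = "needed"
    IN_PROGRESS = "in progress"
    RECEIVED = "received"
    VERIFIED = "verified"

@dataclass
class ChecklistItem:
    title: str
    owner: str                  # explicit ownership, not implied
    visible_to: list            # role-based visibility
    status: Status = Status.NEEDED
    history: list = field(default_factory=list)

    def update(self, new_status: Status, actor: str):
        # Every transition is recorded, so "what changed" is answerable.
        self.history.append((datetime.now(), actor, self.status, new_status))
        self.status = new_status

def whats_left(items):
    """Answers 'what's left?' at a glance."""
    return [i.title for i in items if i.status is not Status.VERIFIED]
```

&lt;p>The point of the structure is that every question the old email threads answered badly—what’s needed, who owns it, what changed—maps to an explicit field or transition record.&lt;/p>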
&lt;ol start="3">
&lt;li>&lt;strong>Make ownership, visibility, and status explicit (not implied)&lt;/strong>&lt;/li>
&lt;/ol>
&lt;p>&lt;strong>Decision&lt;/strong>: Design for role-based visibility and unambiguous status transitions (with language that users trust).
&lt;strong>Why&lt;/strong>: In transaction workflows, unclear “who owns this” creates delays; unclear “status” creates rework and risk.&lt;/p>
&lt;ol start="4">
&lt;li>&lt;strong>Standardize organization defaults before adding “more flexibility”&lt;/strong>&lt;/li>
&lt;/ol>
&lt;p>&lt;strong>Decision&lt;/strong>: Provide sensible default structure (folders/tabs/categories, sorting and filtering patterns, and “pin/priority” behaviors) rather than relying on everyone inventing their own system.
&lt;strong>Why&lt;/strong>: Ad-hoc organization scales poorly and increases search time and error rates—especially across teams.&lt;/p>
&lt;ol start="5">
&lt;li>&lt;strong>Invest in version confidence as a first-class requirement&lt;/strong>&lt;/li>
&lt;/ol>
&lt;p>&lt;strong>Decision&lt;/strong>: Prioritize version history and clear draft/final cues (including stacking, timestamps, and traceability).
&lt;strong>Why&lt;/strong>: Version confusion is a trust-breaker; users can’t move fast if they fear sharing the wrong thing.&lt;/p>
&lt;h2 id="what-changed">What changed&lt;/h2>
&lt;h3 id="roadmap--scope-changes">Roadmap &amp;amp; scope changes&lt;/h3>
&lt;ul>
&lt;li>The roadmap shifted from “improve upload” to “support workflow clarity” (tracking, status, ownership, visibility).&lt;/li>
&lt;li>Document organization and versioning were treated as strategic enablers—not nice-to-haves—because they determine whether the system works at scale.&lt;/li>
&lt;/ul>
&lt;h3 id="design--ux-changes">Design &amp;amp; UX changes&lt;/h3>
&lt;ul>
&lt;li>Checklist-based experience became the core navigation layer for document work (what’s outstanding, who owes what, what’s completed).&lt;/li>
&lt;li>Status language and interaction patterns were refined through iterative testing to reduce misinterpretation.&lt;/li>
&lt;li>Organization patterns were elevated: default structures, better sorting/filtering, and pathways to reduce scanning and “where did it go?” confusion.&lt;/li>
&lt;li>Version confidence was explicitly designed (history, recency cues, clearer distinctions between draft/final).&lt;/li>
&lt;/ul>
&lt;h3 id="stakeholder-alignment-outcomes">Stakeholder alignment outcomes&lt;/h3>
&lt;ul>
&lt;li>Research artifacts created a shared mental model across product/design/ops/engineering—so decisions could be made faster and with less debate about what users “really do.”&lt;/li>
&lt;/ul>
&lt;figure id="figure-how-research-translated-into-action-key-inisghts-were-turned-into-concrete-product-decisions-and-measurable-experience-changes">
&lt;div class="figure-img-wrap" >
&lt;img alt="How research translated into action: key inisghts were turned into concrete product decisions and measurable experience changes." srcset="
/media/decision-visual_huefe46a908e2c5a917e93fb1ccf62ba49_165602_5d538d37b73d3d6496dcbc0a88ecee35.png 400w,
/media/decision-visual_huefe46a908e2c5a917e93fb1ccf62ba49_165602_2ef1c05fcee1169734d691be77ee12bc.png 760w,
/media/decision-visual_huefe46a908e2c5a917e93fb1ccf62ba49_165602_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/decision-visual_huefe46a908e2c5a917e93fb1ccf62ba49_165602_5d538d37b73d3d6496dcbc0a88ecee35.png"
width="626"
height="269"
loading="lazy" data-zoomable />&lt;/div>&lt;figcaption>
How research translated into action: key insights were turned into concrete product decisions and measurable experience changes.
&lt;/figcaption>&lt;/figure>
&lt;h2 id="impact">Impact&lt;/h2>
&lt;p>Industry research suggests that a large share of knowledge work is consumed by communication and information retrieval:&lt;/p>
&lt;ul>
&lt;li>~28% of time is spent managing email (reading/writing/responding)&lt;/li>
&lt;li>~19% of time is spent searching and gathering information&lt;/li>
&lt;li>Making information more available and searchable can reduce information searching time by as much as ~35% in some contexts&lt;/li>
&lt;/ul>
&lt;p>A checklist-driven document system directly targets both buckets:&lt;/p>
&lt;ul>
&lt;li>fewer emails needed to ask “what’s missing / did you get it?”&lt;/li>
&lt;li>less time spent searching across inbox threads and attachments&lt;/li>
&lt;li>fewer wrong-version loops and duplicate handling&lt;/li>
&lt;/ul></description></item><item><title>Improving Order Status Communication for Insurance Products</title><link>https://www.tanzhou.space/project/insurance-order-status-update/</link><pubDate>Mon, 09 Jan 2023 05:26:35 +0000</pubDate><guid>https://www.tanzhou.space/project/insurance-order-status-update/</guid><description>&lt;p>&lt;strong>My Role:&lt;/strong> UX Researcher&lt;/p>
&lt;p>&lt;strong>Research Type:&lt;/strong> Discovery, Generative, Primary research&lt;/p>
&lt;p>&lt;strong>Methods:&lt;/strong> User Interview (remote), Secondary Research, Cognitive Interview, Jobs to be Done&lt;/p>
&lt;p>&lt;strong>Deliverables:&lt;/strong> User journey map, Readout deck with research findings and recommendations&lt;/p>
&lt;p>&lt;strong>Tools:&lt;/strong> UserZoom, Microsoft Teams, Miro, Excel, PowerPoint&lt;/p>
&lt;br/>
&lt;h2 id="research-motivation">Research Motivation&lt;/h2>
&lt;p>After receiving anecdotal comments, the product team wanted to design a page to communicate the status of users’ insurance orders, but they were unsure of the benefits it would bring to users. Additionally, they were not certain what specific status information would be helpful.&lt;/p>
&lt;br/>
&lt;h2 id="process">Process&lt;/h2>
&lt;p>I started by interviewing the Product team members to better understand the research request and define the problem statements. Some of the questions I asked include:&lt;/p>
&lt;blockquote>
&lt;ul>
&lt;li>What information do you need to have to feel confident to start the project?&lt;/li>
&lt;li>What do you know about our users’ preferences around this topic, and what are we not yet sure about?&lt;/li>
&lt;li>Are there competitive examples of what we’re building that we should take a look at?&lt;/li>
&lt;li>Do we understand how this information is being communicated to users currently? Through what channels?&lt;/li>
&lt;li>What are the different user segments at play, in your opinion?&lt;/li>
&lt;/ul>
&lt;/blockquote>
&lt;p>Based on these conversations, I drafted research objectives and proposed a research plan to conduct 1:1 interviews with the target user segments. The plan involved recruiting 12 participants with different levels of product expertise and from different states, as the types of information in users' order updates for this insurance product vary greatly by state. I then invited them to participate in remote interview sessions via UserZoom and encouraged key stakeholders to join and observe the sessions.&lt;/p>
&lt;p>During the interviews, I collected data to understand foundational user needs, behaviors, pain points, and motivations by prompting users to tell stories around searching for or requesting the status of their orders, and by following the &amp;ldquo;Jobs to be Done&amp;rdquo; (JTBD) framework to explore the journey and the causes of user behaviors.&lt;/p>
&lt;/br>
&lt;h2 id="deliverable">Deliverable&lt;/h2>
&lt;ul>
&lt;li>&lt;strong>15-minute post-session debrief&lt;/strong>: For interview sessions where stakeholders were present, I hosted a quick debrief call immediately after to discuss key takeaways.&lt;/li>
&lt;li>&lt;strong>30-minute readout&lt;/strong>: After completing my analysis, I presented detailed findings and my recommendations to the product managers, designers, business analysts, and key executives working on this product.&lt;/li>
&lt;/ul>
&lt;p>My deliverables included a journey map of the order status inquiry process (user actions, where the information came from, how it was provided, the tools used, pain points and needs, and potential areas of improvement), as well as a deck that identified the specific information users find valuable, with supporting quotes and recommendations for next-step product strategy.&lt;/p>
&lt;/br>
&lt;h2 id="conclusion-and-impact">Conclusion and Impact&lt;/h2>
&lt;p>The results of my research had a significant impact on the product team&amp;rsquo;s understanding of user needs and provided foundational user knowledge that informed next-step product strategies and design decisions. The research empirically validated the assumed business value of order status updates and surfaced additional value adds for users, such as increasing user trust through greater transparency.&lt;/p>
&lt;p>In conclusion, my research provided critical insights into the user needs and behaviors surrounding order status updates, helping the product team make informed decisions early on to design a page that meets users' specific needs and adds value to their overall experience.&lt;/p>
&lt;p>&lt;strong>My Role:&lt;/strong> UX Researcher and Designer&lt;/p>
&lt;p>&lt;strong>Methods:&lt;/strong> Literature review, Interviews, Competitive analysis, Journey mapping, Heuristic evaluation&lt;/p>
&lt;p>&lt;strong>Data Sources:&lt;/strong> Interview transcripts, Literature&lt;/p>
&lt;p>&lt;strong>Deliverables:&lt;/strong> Research report, Dialogic flow, Sample conversations&lt;/p>
&lt;p>&lt;strong>Tools:&lt;/strong> Google Assistant, Amazon Alexa, Google Sheets&lt;/p>
&lt;p>&lt;strong>Background/Context:&lt;/strong> Studies following diabetic patients and weight watchers found food journaling to be an effective means of managing one’s diet. Although automating the journaling process using smart devices could increase adherence by decreasing the effort and mental burden required, it could also lead to a decrease in users reflecting on collected data. After searching the field, I failed to find a well-designed voice-based interface that supports food tracking.&lt;/p>
&lt;p>&lt;strong>Project Overview:&lt;/strong> The overarching goal was to answer the question &amp;ldquo;When people track foods they eat daily (food journaling) via a voice assistant, how can we design the dialogic flow to facilitate users’ reflections on their eating habits?&amp;rdquo;.&lt;/p>
&lt;/br>
&lt;/br>
&lt;details class="toc-inpage d-print-none " open>
&lt;summary class="font-weight-bold">Table of Contents&lt;/summary>
&lt;nav id="TableOfContents">
&lt;ul>
&lt;li>&lt;a href="#overview">Overview&lt;/a>&lt;/li>
&lt;li>&lt;a href="#objective">Objective&lt;/a>&lt;/li>
&lt;li>&lt;a href="#opportunity-and-process">Opportunity and Process&lt;/a>
&lt;ul>
&lt;li>&lt;a href="#opportunity">Opportunity&lt;/a>&lt;/li>
&lt;li>&lt;a href="#process">Process&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;a href="#strategy">Strategy&lt;/a>
&lt;ul>
&lt;li>&lt;a href="#exploratory-interviews">Exploratory Interviews&lt;/a>&lt;/li>
&lt;li>&lt;a href="#heuristic-evaluation">Heuristic Evaluation&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;a href="#outcomes">Outcomes&lt;/a>
&lt;ul>
&lt;li>&lt;a href="#sample-conversations">Sample Conversations&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;a href="#conclusionreflection">Conclusion/Reflection&lt;/a>&lt;/li>
&lt;/ul>
&lt;/nav>
&lt;/details>
&lt;h2 id="objective">Objective&lt;/h2>
&lt;ul>
&lt;li>Understand users’ needs and pain points in food journaling&lt;/li>
&lt;li>Design the flow of the conversation and its underlying logic to facilitate voice-based food journaling&lt;/li>
&lt;li>Adapt Nielsen’s heuristics to evaluate a voice-based interface&lt;br>
&lt;/br>
&lt;/br>&lt;/li>
&lt;/ul>
&lt;h2 id="opportunity-and-process">Opportunity and Process&lt;/h2>
&lt;h3 id="opportunity">Opportunity&lt;/h3>
&lt;p>Studies of automated food journaling show that such automation can lower the burden of tracking and increase adherence (&lt;a href="https://dl.acm.org/doi/abs/10.1145/2858036.2858554" target="_blank" rel="noopener">Beenish et al., 2016&lt;/a>), but it can also lead to a decrease in users’ reflection on collected data (&lt;a href="https://dl.acm.org/doi/abs/10.1145/2556288.2557372" target="_blank" rel="noopener">Choe et al., 2014&lt;/a>). In addition, the majority of these studies leverage only graphical interfaces, building on the legacy of hand-and-finger input devices. These approaches are limited by the required input, since users might not always be able to log their entries.&lt;/p>
&lt;p>Voice assistants like Amazon Alexa, Apple Siri, and Google Assistant are increasingly ubiquitous. Their support for hands-free interaction makes food journaling an ideal use case for voice assistants.&lt;/p>
&lt;figure id="figure-voice-assistants-are-more-accessible-than-ever-----photo-stratabluecom">
&lt;div class="figure-img-wrap" >
&lt;img alt="Voice Assistants are more accessible than ever. Photo: Stratablue.com" srcset="
/media/voice_assistant_huc7f3b2bb7b00d1069b78925bdcf5c658_613289_80c0dc8524d03041aebf381c008cb185.jpg 400w,
/media/voice_assistant_huc7f3b2bb7b00d1069b78925bdcf5c658_613289_4cd5261bd8c8bb2834ec88c1dacb31ae.jpg 760w,
/media/voice_assistant_huc7f3b2bb7b00d1069b78925bdcf5c658_613289_1200x1200_fit_q75_lanczos.jpg 1200w"
src="https://www.tanzhou.space/media/voice_assistant_huc7f3b2bb7b00d1069b78925bdcf5c658_613289_80c0dc8524d03041aebf381c008cb185.jpg"
width="760"
height="304"
loading="lazy" data-zoomable />&lt;/div>&lt;figcaption>
Voice Assistants are more accessible than ever. Photo: Stratablue.com
&lt;/figcaption>&lt;/figure>
&lt;h3 id="process">Process&lt;/h3>
&lt;p>To define user requirements, I conducted a competitive analysis of existing tools that support voice-based food journaling. I also interviewed two participants who had previously used their mobile phones for food journaling to understand users’ needs, behaviors, and motivations during the journaling process.&lt;/p>
&lt;p>Using the information collected from the field review and interviews, I wrote a series of sample dialogues to capture the “sound-and-feel” of the interaction under different scenarios. These sample dialogues convey the flow that the user will experience and allow me to experiment with different design strategies, such as how to promote the discoverability of new features or how to confirm a user’s request.&lt;/p>
&lt;p>At the usability test stage, a friend who was unfamiliar with the project was asked to role-play the sample dialogues with me. This helped me refine the conversation, defining the flow and the underlying logic that represents the complete food journaling experience. I also conducted system evaluations with a set of adapted heuristics to expose usability issues.
&lt;/br>
&lt;/br>&lt;/p>
&lt;h2 id="strategy">Strategy&lt;/h2>
&lt;h3 id="exploratory-interviews">Exploratory Interviews&lt;/h3>
&lt;p>My initial goal of exploratory interviews was to understand users’ needs in the journaling process. Two interviews were conducted with informants who had experience journaling the food they ate.&lt;/p>
&lt;h3 id="heuristic-evaluation">Heuristic Evaluation&lt;/h3>
&lt;p>&lt;strong>1. Awareness of system status&lt;/strong>&lt;/p>
&lt;p>The Nielsen heuristics emphasize the visibility of system status. Although visibility does not apply to voice-based interfaces, user awareness and feedback are still important. The system needs to inform users about what it is doing in a timely and appropriate fashion.&lt;/p>
&lt;p>&lt;strong>2. Error prevention&lt;/strong> &lt;/p>
&lt;p>As with any system, it is best to prevent errors from occurring or handle them in a way that is less intrusive to user experience. This becomes even more important in a system with no visual interface since people cannot “unsay” what they have previously said. This system addresses this issue by providing users their options for the next step explicitly in the conversation.&lt;/p>
&lt;p>&lt;strong>3. Flexibility and efficiency&lt;/strong>&lt;/p>
&lt;p>Novice users become experts as they grow familiar with a system, and experts benefit from efficiency. As a result, instructions that help novice users may become redundant for experts. The logic flow supports setting up customized shortcuts for frequently used terms to speed up the process.&lt;/p>
&lt;p>&lt;strong>4. Accessibility&lt;/strong>&lt;/p>
&lt;p>Voice-based interaction is great for people who are unable to use a graphical interface. Because it requires no hand input, it offers greater accessibility: users can interact with the system while carrying groceries, cooking a meal, or driving a car.&lt;/p>
&lt;p>&lt;strong>5. Ambiguity&lt;/strong>&lt;/p>
&lt;p>People don’t communicate in strict syntax the way computers do. They sometimes use metaphors or slang; they sometimes forget words or pause when speaking. Voice technology should accommodate users’ communication styles and needs. When the user pauses, the system repeats the previous response with more detailed instructions on what the user can do until they make a selection.&lt;/p>
&lt;p>&lt;strong>6. Discoverability&lt;/strong>&lt;/p>
&lt;p>The invisibility of a voice-based interface makes it difficult for users to explore new ways of interacting with the system. In this logic flow, new actions are introduced in the form of quick tips at the end of each conversation (except those involving specific customized shortcuts). During the reflection stage, when a user views their journals on a display, the system suggests new interactions that are more efficient and would increase the accuracy of the user’s food journaling.&lt;/p>
&lt;p>&lt;strong>7. Multimodal Reflection&lt;/strong>&lt;/p>
&lt;p>In the five-stage personal informatics model proposed by &lt;a href="https://dl.acm.org/doi/abs/10.1145/1753326.1753409" target="_blank" rel="noopener">Li et al. (2010)&lt;/a>, the reflection stage may involve looking at lists of collected personal information or exploring and interacting with information visualizations. For a voice-based system, exploration is inherently difficult and visualization is impossible. To address this, the logic flow allows for short-term reflection by repeating and confirming the list of input items during every conversation. The system also expands to another modality for long-term reflection: the design flow falls back to a visual interface after collecting information, so a user can access their journaling data on their smartphone or on the website, where auxiliary graphical interfaces would be available. However, this part was not implemented during this project.
&lt;/br>
&lt;/br>&lt;/p>
&lt;h2 id="outcomes">Outcomes&lt;/h2>
&lt;ul>
&lt;li>Dialogic flow for food journaling via voice assistant&lt;/li>
&lt;li>Sample dialogue for voice-based interactions&lt;/li>
&lt;/ul>
&lt;p>&lt;img src="https://lh6.googleusercontent.com/mLcCIMme-TSHIrtRCe9GqDrf1WXf4blBtlHC48aIGfIXr9nuJWP5ySFhhZNG4coAAamdGdNDaYW2PNyKTMjw0WVEFZQp80S7A2B__33IVdZsVfUi2dm1uma7jo9DnKXOKIDnQdUz" alt="">&lt;/p>
&lt;p>The logic flow of conversation shows four stages of the voice-based food journaling system. In the beginning, users start with the voice assistant by saying specific trigger words. In the collection phase, novices and experts can take different approaches. The reflection stage suggests a multimodal approach – short-term reflection is embedded in the conversation while long-term reflection involves reports and visualizations viewed on another display. At the final stage, users can take actions after they reflect on their food journal. They can correct something that was incorrectly recorded via speech, set a shortcut for more efficient interaction, or explore new ways to interact with the voice assistant.&lt;/p>
&lt;h3 id="sample-conversations">Sample Conversations&lt;/h3>
&lt;p>A novice user adding food (inputs from the user are highlighted):&lt;/p>
&lt;blockquote>
&lt;mark>- Hi My Food Journal,&lt;/mark>
&lt;/blockquote>
&lt;blockquote>
&lt;p>- Hi! Welcome to My Food Journal. I can help you add new food to the journal or check previous journals. Which would you like?&lt;/p>
&lt;/blockquote>
&lt;blockquote>
&lt;mark>- Add new food.&lt;/mark>
&lt;/blockquote>
&lt;blockquote>
&lt;p>- Okay, go ahead and say the names.&lt;/p>
&lt;/blockquote>
&lt;blockquote>
&lt;mark>- Ummm…&lt;/mark>
&lt;/blockquote>
&lt;blockquote>
&lt;p>- You can add food by saying its name and serving size. Go ahead.&lt;/p>
&lt;/blockquote>
&lt;blockquote>
&lt;mark>- One in-n-out double-double, French Fries, and Chocolate shake.&lt;/mark>
&lt;/blockquote>
&lt;blockquote>
&lt;p>- Got it. Would you like to add more?&lt;/p>
&lt;/blockquote>
&lt;blockquote>
&lt;mark>- No, that’s it.&lt;/mark>
&lt;/blockquote>
&lt;blockquote>
&lt;p>- Sure. I got one in-n-out double-double, a medium French Fries, and a medium chocolate shake. Is that correct?&lt;/p>
&lt;/blockquote>
&lt;blockquote>
&lt;mark>- Yup.&lt;/mark>
&lt;/blockquote>
&lt;blockquote>
&lt;p>- Alright. Quick tip: You can always add more food by saying “Hi My Food Journal, quick add.”&lt;/p>
&lt;/blockquote>
&lt;p>In the sample conversation, note that the system always provides an affirmation when it gets any information from the users. At the end of each conversation, new actions are introduced in the form of quick tips to increase discoverability.&lt;/p>
&lt;/br>
&lt;p>Quick add and customized shortcuts (inputs from the user are highlighted):&lt;/p>
&lt;blockquote>
&lt;mark>- My Food Journal, quick add.&lt;/mark>
&lt;/blockquote>
&lt;blockquote>
&lt;p>- (A response tone plays, indicating the system is listening)&lt;/p>
&lt;/blockquote>
&lt;blockquote>
&lt;mark>- “McDonald Lunch Combo”&lt;/mark>
&lt;/blockquote>
&lt;blockquote>
&lt;p>- (A confirmation tone plays, indicating the system has received the information)&lt;/p>
&lt;/blockquote>
&lt;p>In this example, a user has set up a shortcut named “McDonald Lunch Combo”, which includes a list of foods they usually order from McDonald's. This way, the user can skip the instructional steps and complete the journaling much more efficiently.
&lt;/br>
&lt;/br>&lt;/p>
&lt;h2 id="conclusionreflection">Conclusion/Reflection&lt;/h2>
&lt;p>The focus of this project was on the bottom-up process of conversation design. After gathering insights from competitive analysis, interviews, sample conversations, and adapted heuristic evaluations, I delivered a dialogic flow for food journaling via voice assistant and sample conversations. The next step would be to expand the dialogs based on the flow and implement the system using a real-world voice assistant platform.&lt;/p></description></item><item><title>Usability Study: League of Legends Chatbot</title><link>https://www.tanzhou.space/project/chatbot/</link><pubDate>Wed, 07 Mar 2018 01:00:00 +0000</pubDate><guid>https://www.tanzhou.space/project/chatbot/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>&lt;strong>My Role:&lt;/strong> Lead UX Researcher in a team of 4 researchers and 2 developers&lt;/p>
&lt;p>&lt;strong>Methods:&lt;/strong> Interviews, Focus groups, Surveys, Thematic analysis, Observations, Moderated usability test, and Descriptive statistical analysis&lt;/p>
&lt;p>&lt;strong>Data Sources:&lt;/strong> Interview transcripts, Observation notes, Survey results, and In-game chat logs&lt;/p>
&lt;p>&lt;strong>Deliverables:&lt;/strong> Research report and Chatbot prototype&lt;/p>
&lt;p>&lt;strong>Tools&lt;/strong>: AutoHotKey, League of Legends spectator mode, Google Docs, Google Slides, Google Forms, Excel, OBS Studio, and Slack&lt;/p>
&lt;p>&lt;strong>Background/Context:&lt;/strong> In this project, the client asked our team to study the usefulness of a chatbot for improving group collaboration. We grounded our research in the context of League of Legends games, where team collaboration is the key theme of the gameplay experience. In an LoL game, the temporarily assembled team encounters various challenges in communicating and coordinating, e.g., not understanding other players’ intentions, or requests for help going unanswered. These challenges, if not resolved, can be detrimental to the team in League of Legends’ fast-paced gameplay.&lt;/p>
&lt;p>&lt;strong>Project Overview:&lt;/strong> The overarching question was &amp;ldquo;How can a chatbot help facilitate collaboration among a temporarily assembled team in League of Legends?&amp;rdquo;&lt;/p>
&lt;p>&lt;strong>Client:&lt;/strong> &lt;a href="https://researcher.watson.ibm.com/researcher/view.php?person=ibm-Dakuo.Wang" target="_blank" rel="noopener">Dakuo Wang, IBM Research&lt;/a>
&lt;/br>
&lt;/br>&lt;/p>
&lt;details class="toc-inpage d-print-none " open>
&lt;summary class="font-weight-bold">Table of Contents&lt;/summary>
&lt;nav id="TableOfContents">
&lt;ul>
&lt;li>&lt;a href="#overview">Overview&lt;/a>&lt;/li>
&lt;li>&lt;a href="#objective">Objective&lt;/a>&lt;/li>
&lt;li>&lt;a href="#opportunity-and-process">Opportunity and Process&lt;/a>
&lt;ul>
&lt;li>&lt;a href="#opportunity">Opportunity&lt;/a>&lt;/li>
&lt;li>&lt;a href="#process">Process&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;a href="#strategy">Strategy&lt;/a>
&lt;ul>
&lt;li>&lt;a href="#interview-and-focus-groups">Interview and Focus Groups&lt;/a>&lt;/li>
&lt;li>&lt;a href="#survey">Survey&lt;/a>&lt;/li>
&lt;li>&lt;a href="#prototype">Prototype&lt;/a>&lt;/li>
&lt;li>&lt;a href="#wizard-of-oz-test">Wizard of Oz Test&lt;/a>&lt;/li>
&lt;li>&lt;a href="#user-testing-process">User Testing Process&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;a href="#outcomes">Outcomes&lt;/a>&lt;/li>
&lt;li>&lt;a href="#key-takeaways">Key Takeaways&lt;/a>
&lt;ul>
&lt;li>&lt;a href="#the-positives">The Positives😀:&lt;/a>&lt;/li>
&lt;li>&lt;a href="#the-negatives">The negatives🙁:&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;a href="#conclusion">Conclusion&lt;/a>&lt;/li>
&lt;/ul>
&lt;/nav>
&lt;/details>
&lt;h2 id="objective">Objective&lt;/h2>
&lt;ul>
&lt;li>Understand what obstacles impede collaboration among a temporarily assembled team in League of Legends&lt;/li>
&lt;li>Understand how a chatbot might help players collaborate with strangers on their team&lt;/li>
&lt;li>Evaluate the usability of the chatbot
&lt;/br>
&lt;/br>&lt;/li>
&lt;/ul>
&lt;h2 id="opportunity-and-process">Opportunity and Process&lt;/h2>
&lt;h3 id="opportunity">Opportunity&lt;/h3>
&lt;p>Many League of Legends players said that they had a better chance of winning when they coordinated with their team. However, the game doesn’t have any built-in mechanism that proactively encourages players to communicate with their teammates. A chatbot could provide additional useful information for team members and engage them to communicate better with each other.&lt;/p>
&lt;h3 id="process">Process&lt;/h3>
&lt;p>Several UX research approaches were used to design the chatbot.&lt;/p>
&lt;ul>
&lt;li>We completed interviews and focus groups to understand the problems players experience in team collaboration.&lt;/li>
&lt;li>We conducted surveys to determine the chatbot functions.&lt;/li>
&lt;li>We built a chatbot prototype with learnings from the previous two steps.&lt;/li>
&lt;li>We carried out a Wizard of Oz experiment to verify the usability of the functions.&lt;/li>
&lt;/ul>
&lt;/br>
&lt;/br>
&lt;h2 id="strategy">Strategy&lt;/h2>
&lt;h3 id="interview-and-focus-groups">Interview and Focus Groups&lt;/h3>
&lt;p>We conducted 9 interviews with players to learn more about their attitudes toward and experiences with team collaboration during their most recent gameplay. We encouraged participants to describe frustrating situations where teammates didn’t respond to their calls for help in a timely manner, or where they had difficulties with the built-in chat box, the main channel of in-game communication. We also asked about positive moments when they felt connected to the team and when they successfully executed tactics. These interviews allowed us to better understand the current state of in-game collaboration and its points of breakdown.&lt;/p>
&lt;p>We then held 2 focus groups, in which we specifically asked participants to list the things they asked their teammates to do, the channels through which they made their requests, and the responses they received from their teammates. The focus groups’ back-and-forth conversations allowed us to generate an extensive list of ideas for potential chatbot functions.&lt;/p>
&lt;h3 id="survey">Survey&lt;/h3>
&lt;p>After thematic analysis of the interviews and focus groups, we synthesized a few broad genres of chatbot functions to improve upon. We designed a survey to gather quantitative data on the qualitative findings so we could then prioritize the list of topics.&lt;/p>
&lt;h3 id="prototype">Prototype&lt;/h3>
&lt;p>We then developed prototype functions around the areas deemed most important, producing a list of functions showing what the chatbot would say in response to a variety of scenarios.&lt;/p>
&lt;h3 id="wizard-of-oz-test">Wizard of Oz Test&lt;/h3>
&lt;p>Due to the difficulty of inserting a chatbot into an already mature and complex game system, we decided to use a Wizard of Oz study in which a research team member would play the chatbot in testing sessions.&lt;/p>
&lt;p>A fundamental requirement of this Wizard of Oz study is that the participants cannot know that an actual person is behind the curtain. Instead, they must believe they are interacting with a fully automated chatbot.&lt;/p>
&lt;p>
&lt;figure id="figure-prototype-chatbot-introduction-function">
&lt;div class="figure-img-wrap" >
&lt;img alt="Prototype Chatbot `Introduction` Function" srcset="
/media/chatbot2_hud66b4e5350853cb92894090968fd0e13_194284_e5d0564b95f4714dfa803bd6a1abd2ac.png 400w,
/media/chatbot2_hud66b4e5350853cb92894090968fd0e13_194284_79e9149a8340b696a88cd39fe96b60ce.png 760w,
/media/chatbot2_hud66b4e5350853cb92894090968fd0e13_194284_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/chatbot2_hud66b4e5350853cb92894090968fd0e13_194284_e5d0564b95f4714dfa803bd6a1abd2ac.png"
width="478"
height="251"
loading="lazy" data-zoomable />&lt;/div>&lt;figcaption>
Prototype Chatbot &lt;code>Introduction&lt;/code> Function
&lt;/figcaption>&lt;/figure>
&lt;figure id="figure-prototype-chatbot-encourage--response-function">
&lt;div class="figure-img-wrap" >
&lt;img alt="Prototype Chatbot `Encourage` &amp;amp; `Response` Function" srcset="
/media/chatbot3_hu3dd1c98da89053e54eb1a28eb119677a_109594_a2669424f06714495307b9a752131830.png 400w,
/media/chatbot3_hu3dd1c98da89053e54eb1a28eb119677a_109594_73b0b2e83c02749aa37709a60048f4df.png 760w,
/media/chatbot3_hu3dd1c98da89053e54eb1a28eb119677a_109594_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/chatbot3_hu3dd1c98da89053e54eb1a28eb119677a_109594_a2669424f06714495307b9a752131830.png"
width="403"
height="173"
loading="lazy" data-zoomable />&lt;/div>&lt;figcaption>
Prototype Chatbot &lt;code>Encourage&lt;/code> &amp;amp; &lt;code>Response&lt;/code> Function
&lt;/figcaption>&lt;/figure>
&lt;/p>
&lt;p>Although this was a challenge, we leveraged &lt;a href="https://www.autohotkey.com/" target="_blank" rel="noopener">AutoHotKey&lt;/a> - a scripting software that allows the test operator to send pre-scripted messages with keyboard shortcuts - to simulate the speed, accuracy, and efficiency of a real-world chatbot system.&lt;/p>
&lt;h3 id="user-testing-process">User Testing Process&lt;/h3>
&lt;ol>
&lt;li>Recruit participants via email with details about the study and compensation&lt;/li>
&lt;li>Have participants play two rounds of the game, the first without a chatbot facilitating and the second with our Wizard of Oz chatbot&lt;/li>
&lt;li>Survey participants about their experience after each round&lt;/li>
&lt;li>Administer an additional survey about the chatbot’s performance after the final round&lt;/li>
&lt;/ol>
&lt;/br>
&lt;/br>
&lt;h2 id="outcomes">Outcomes&lt;/h2>
&lt;blockquote>
&lt;p>&lt;strong>Objective 1:&lt;/strong> Understand what obstacles impede in-game collaborations among a temporarily assembled team in a League of Legends game&lt;/p>
&lt;/blockquote>
&lt;p>After analyzing the interview transcripts, the most frequently mentioned codes were:&lt;/p>
&lt;ul>
&lt;li>“Play as a team”: Players didn’t feel like they were playing as a team and wanted to enhance team collaboration.&lt;/li>
&lt;li>“Extra information”: Players weren’t getting enough from the game’s built-in mechanisms and wanted more information.&lt;/li>
&lt;li>“Communication”: Players found it difficult to communicate with teammates.&lt;/li>
&lt;li>“Encouragement”: Players expected the chatbot to send encouragement when they played poorly and praise when they played well.&lt;/li>
&lt;li>“Chatbox”: Players felt the current chat box in the game was difficult to use.&lt;/li>
&lt;/ul>
&lt;p>&lt;img src="https://lh6.googleusercontent.com/wAq2MSR-s6f9Lavky607T-v0HpwmT8hnJAfze_iSycc6NhfahD4QGasNvKMuqBVO5dYb-KFxPGF93rvXmUqBu139FQQvrV3K-hUoDTC3p5DHDxWIyuJrMrhStfW0LKjTStcHBMF5" alt="Frequency visualization for top codes">&lt;/p>
&lt;blockquote>
&lt;p>&lt;strong>Objective 2:&lt;/strong> Understand how a chatbot might help facilitate collaboration among a team of strangers&lt;/p>
&lt;/blockquote>
&lt;p>Medians were calculated for all Likert-scale measurements, shown below.&lt;/p>
&lt;p>&lt;img src="https://lh4.googleusercontent.com/DRfaugOdSjuUdBCDAG7igodhm4F6ezVhTMaGTHB860C5d1h3rSHgqVK43EwS83e_2xN6GKhVbBirDsID6KYwG2ojTpEqyjOrWehAAHptwkd90qkpuiKNrO6VuXceWo9H-GJg6kMn" alt="">&lt;/p>
&lt;p>From the quantitative results, we determined that a chatbot could:&lt;/p>
&lt;ul>
&lt;li>facilitate communication by providing contextual information to “ping” signals&lt;/li>
&lt;li>remind players of their progress&lt;/li>
&lt;li>aid in coordination at the beginning of the game for inexperienced players&lt;/li>
&lt;li>amplify players’ “call for help” when they are under attack&lt;/li>
&lt;li>encourage players to respond to calls for help&lt;/li>
&lt;/ul>
&lt;p>After analyzing the pilot test data, we defined the functions of our chatbot. We planned to increase players’ sense of being a team by adding functions such as “introduction and reminder before game” and mediating “when somebody insults you/team”. We decided to provide extra information such as “Tower/Inhibitor/Base under attack” and “more information beyond ping”. The full function list is shown below.&lt;/p>
&lt;p>&lt;img src="https://lh4.googleusercontent.com/jC09TtJXmxeEk9c22qmt_YY9kpNJKya8TW1S5EuXA0RC9kDdIRoC7sueu1UqH0WkXMLEB8l0Taqjj2IrHpkDce4bNuJLq7MxJZiD6cg2al-h_tEmGFLlZ-8_7XNLXgTwBvqfBMYg" alt="">&lt;/p>
&lt;blockquote>
&lt;p>&lt;strong>Objective 3:&lt;/strong> Evaluate the usability of the chatbot&lt;/p>
&lt;/blockquote>
&lt;p>&lt;img src="https://lh5.googleusercontent.com/N6Sl7_q3K0SielA8X4kOJ3vk6_BEm65na0xri-EXKAuhMYjPAkaFa1dbPQKu9PIZWsH_Up6K8hkR33ydnCadKPIqtkuqHSDgLMnyUNXR04Q4hAXGWZb8lTCA835D1kOgr_3grcmx" alt="">&lt;/p>
&lt;p>The chart above shows that players generally saw improvements in every measured category, with players’ self-measurement and teammate-measurement of teamwork showing the greatest improvements.&lt;/p>
&lt;p>The chatbot also appeared to facilitate a greater sense of coordination among players, make it easier for them to get help from teammates, increase their perceptions of speedy responses from teammates, and improve their awareness of teammates’ expectations.&lt;/p>
&lt;p>&lt;img src="https://lh3.googleusercontent.com/w_0Pg5ij_T3-HYDx1bA12TDDTSMMh4uq0unU7hjRHryWYO2HRnZ5VWABSp-tWiM3r_oSo4NipL1m3g5Czn8K5j9mon33NOw6bwEqVjVQ26wqDysockbML3-nUe78K1M2mZJgRL9h" alt="">&lt;/p>
&lt;p>Players appreciated the chatbot to some extent on every measured scale. Participants strongly indicated that they enjoyed the chatbot’s personality, appreciated the chatbot asking others to help them, felt that the chatbot made them feel like part of the team, and believed the chatbot helped them coordinate. Since these were the primary functions that the chatbot was meant to fulfill, we can say that it was, in these trials at least, a success.&lt;/p>
&lt;/br>
&lt;/br>
&lt;h2 id="key-takeaways">Key Takeaways&lt;/h2>
&lt;h3 id="the-positives">The Positives😀:&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Did the chatbot help players collaborate better?&lt;/strong> &lt;/br>
&lt;strong>Yes.&lt;/strong> The chatbot alleviated the burden of both direct and indirect collaboration.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Was the chatbot intrusive to their gaming experience?&lt;/strong>&lt;/br>
&lt;strong>No.&lt;/strong> The chatbot reminded players to help/coordinate with their teammates, but it was easy for players to ignore messages if they wanted to.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Was the Wizard of Oz Study successful?&lt;/strong>&lt;/br>
&lt;strong>Yes.&lt;/strong> One player asked me how we made the bot work so seamlessly.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Did players like our chatbot?&lt;/strong>&lt;/br>
&lt;strong>Yes.&lt;/strong> Players especially appreciated the chatbot’s personality, functions, and encouragement.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;h3 id="the-negatives">The negatives🙁:&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>One player briefly mistook the chatbot for another player.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>One player was distracted by the chat box and did not pay enough attention to the gameplay.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>One player stated that the chatbot would work really well for entry-level players but wasn’t as helpful for more experienced players.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;/br>
&lt;/br>
&lt;h2 id="conclusion">Conclusion&lt;/h2>
&lt;p>In this project, we explored the use of a chatbot in League of Legends and its influence on in-game team collaboration. At the initial stage of the project, we studied players’ issues and needs using interviews, focus groups, and surveys. We then built a prototype based on our research results and verified it through two rounds of Wizard of Oz user tests. Finally, we evaluated the chatbot’s performance using surveys and interviews following players’ interaction with the chatbot during gameplay.&lt;/p>
&lt;p>Players found the chatbot useful and said it improved collaboration with their teammates while maintaining the fairness of the game. We used players’ self-reported perceptions of teamwork as our measurement. However, similar studies on team collaboration may also want to include more objective measures for cross-validation.&lt;/p></description></item><item><title>Design Patterns: Help small non-profits build trust and engagement</title><link>https://www.tanzhou.space/project/design-patterns-help-small-non-profits-build-trust-and-engagement/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://www.tanzhou.space/project/design-patterns-help-small-non-profits-build-trust-and-engagement/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>&lt;strong>My Role:&lt;/strong> UX Researcher in a team of five (5) researchers&lt;/p>
&lt;p>&lt;strong>Background/Context:&lt;/strong> Donors are often the core providers of a charity’s resources. Without donor support, some charities quickly struggle to function and are likely to enter a path of decline. Thus, it is important for charities to connect with their donors in order to gain their trust and increase their motivation.&lt;/p>
&lt;p>&lt;strong>Project Overview:&lt;/strong> I worked as part of a team to design best practices for facilitating online monetary donations to small charities and non-profits. Through interviews and analyses of established non-profit websites and their users’ behaviors and motivations, we developed design patterns allowing non-profit organizations to increase trust and engagement among donors.&lt;/p>
&lt;p>&lt;strong>Methods:&lt;/strong> Interviews, Competitive analysis, Design patterns, Literature review, and Desk research&lt;/p>
&lt;p>&lt;strong>Data Sources:&lt;/strong> Interview transcripts and Literature&lt;/p>
&lt;p>&lt;strong>Deliverables:&lt;/strong> Report describing research and suggested design patterns&lt;/p>
&lt;p>&lt;strong>Tools:&lt;/strong> Google Docs, Google Slides, and Google Forms&lt;/p>
&lt;/br>
&lt;/br>
&lt;details class="toc-inpage d-print-none " open>
&lt;summary class="font-weight-bold">Table of Contents&lt;/summary>
&lt;nav id="TableOfContents">
&lt;ul>
&lt;li>&lt;a href="#overview">Overview&lt;/a>&lt;/li>
&lt;li>&lt;a href="#objective">Objective&lt;/a>&lt;/li>
&lt;li>&lt;a href="#opportunity-and-process">Opportunity and Process&lt;/a>
&lt;ul>
&lt;li>&lt;a href="#opportunity">Opportunity&lt;/a>&lt;/li>
&lt;li>&lt;a href="#process">Process&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;a href="#strategy">Strategy&lt;/a>
&lt;ul>
&lt;li>&lt;a href="#competitive-analysis">Competitive Analysis&lt;/a>&lt;/li>
&lt;li>&lt;a href="#interviews">Interviews&lt;/a>&lt;/li>
&lt;li>&lt;a href="#persuasive-design">Persuasive Design&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;a href="#outcome">Outcome&lt;/a>&lt;/li>
&lt;li>&lt;a href="#conclusionreflection">Conclusion/Reflection&lt;/a>&lt;/li>
&lt;/ul>
&lt;/nav>
&lt;/details>
&lt;h2 id="objective">Objective&lt;/h2>
&lt;ul>
&lt;li>What stops a motivated user from making donations to small charities online?&lt;/li>
&lt;li>How can we convert a user’s motivation to donate to small charities online?&lt;/li>
&lt;li>How can we help small charities design their online presence to establish trust and engagement?
&lt;/br>
&lt;/br>&lt;/li>
&lt;/ul>
&lt;h2 id="opportunity-and-process">Opportunity and Process&lt;/h2>
&lt;h3 id="opportunity">Opportunity&lt;/h3>
&lt;p>Most HCI research related to non-profits focuses on facilitating transparency in order to build trust among stakeholders (Marshall et al., &lt;a href="https://dl.acm.org/doi/abs/10.1145/2858036.2858301" target="_blank" rel="noopener">2016&lt;/a>, &lt;a href="https://dl.acm.org/doi/abs/10.1145/3173574.3173849" target="_blank" rel="noopener">2018&lt;/a>). However, there is still little understanding of how a non-profit’s website and its features impact donors’ motivation and engagement.&lt;/p>
&lt;h3 id="process">Process&lt;/h3>
&lt;p>We analyzed charity websites to understand the design processes they follow to convert user motivation into calls for action. We interviewed nine (9) donors to understand their perspectives on these websites. We focused our research on establishing online trust and engagement through better design.&lt;/p>
&lt;p>We delved into the Persuasive Design literature and created design patterns that enhance the experience for millennials visiting small charitable websites. Effective design patterns translate research findings into executable solutions and assist web designers and developers. They contain:&lt;/p>
&lt;ul>
&lt;li>A concise and memorable title&lt;/li>
&lt;li>Statement[s] describing the problem or challenge&lt;/li>
&lt;li>Description and discussion on solving the problem&lt;/li>
&lt;li>Solutions that are likely to succeed&lt;/li>
&lt;li>Solutions that are typical yet undesirable&lt;/li>
&lt;/ul>
&lt;p>We identified impediments to charitable donations and recommended four design patterns that can help convert users’ motivation into action.
&lt;/br>
&lt;/br>&lt;/p>
&lt;h2 id="strategy">Strategy&lt;/h2>
&lt;h3 id="competitive-analysis">Competitive Analysis&lt;/h3>
&lt;p>We analyzed the websites of well-established charities, including WWF, UNICEF, Facebook Fundraisers, and GoFundMe, to explore different donation models. We examined their user bases, information architectures, messaging approaches, calls to action, emotional designs, accreditations, and social influences. The competitive analysis provided key insights and informed our development of interview questions.&lt;/p>
&lt;h3 id="interviews">Interviews&lt;/h3>
&lt;p>We conducted interviews with nine (9) individual participants who either were active online donors or had donated through the above websites. The interviews highlighted the aspects that either motivated or discouraged donors as they navigated through the charity website. After our interviews, we inferred a relatively comprehensive list of statements taking the form “a person donates if he or she…”. We synthesized these statements into more general themes relevant to donors’ motivations.&lt;/p>
&lt;h3 id="persuasive-design">Persuasive Design&lt;/h3>
&lt;p>&lt;img src="https://behaviormodel.org/wp-content/uploads/2020/08/Fogg-Behavior-Model.jpg" alt="Fogg Behavior Model ©2007 BJ Fogg">
We followed Fogg&amp;rsquo;s &lt;a href="https://dl.acm.org/doi/abs/10.1145/1541948.1541999" target="_blank" rel="noopener">behavior model for persuasion&lt;/a> to guide our designs. The model highlights three factors required for a target behavior to occur: &lt;strong>motivation, ability to perform, and trigger.&lt;/strong>&lt;/p>
&lt;p>There is some debate about the ethics of designing charity websites with the sole intention of encouraging people to donate their money. However, our goal in creating design patterns was not simply to persuade people to donate:&lt;/p>
&lt;ul>
&lt;li>We wanted to leverage persuasive design for &lt;strong>potential donors&lt;/strong> that already have &lt;strong>medium levels of motivation&lt;/strong> but are &lt;strong>missing ability and triggers&lt;/strong> to act on it.&lt;/li>
&lt;li>We wanted to understand what discourages motivated people from performing actions related to donations in online platforms and how HCI can help address these aspects.&lt;/li>
&lt;/ul>
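To make the model concrete, here is a minimal sketch (my illustration, not part of the project deliverable) of the Fogg model’s core logic: a target behavior fires only when a trigger arrives while motivation and ability together clear the action line. The function name, the multiplicative combination, and the threshold value are simplifying assumptions.

```python
# Illustrative sketch of the Fogg Behavior Model: behavior occurs when
# motivation, ability, and a trigger converge above an "action line".
# The multiplicative form and the 0.5 threshold are hypothetical choices.

def behavior_occurs(motivation: float, ability: float,
                    trigger_present: bool, action_line: float = 0.5) -> bool:
    """motivation and ability are normalized to [0, 1]."""
    if not trigger_present:
        return False  # without a trigger, no behavior regardless of M and A
    return motivation * ability >= action_line

# A donor with medium motivation but low ability (e.g. a confusing
# donation flow) does not act, even when triggered:
print(behavior_occurs(0.6, 0.3, True))   # low ability blocks the action
# Raising ability (a simpler flow) converts the same motivation to action:
print(behavior_occurs(0.6, 0.9, True))
```

This is why our patterns target users with medium motivation: raising ability and supplying triggers moves them across the action line, whereas no amount of trigger design helps a user with no motivation at all.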
&lt;/br>
&lt;/br>
&lt;h2 id="outcome">Outcome&lt;/h2>
&lt;p>&lt;strong>Importance of trust&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>Trust is often critical when converting charitable motivations to actions&lt;/li>
&lt;li>Monotonous, one-way interactions do not help donors feel engaged or fulfilled&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Suggested patterns&lt;/strong>&lt;/p>
&lt;p>Our design patterns were concise and memorable with explicit descriptions of the relevant problems and common pitfalls that should be avoided.&lt;/p>
&lt;p>The patterns that emerged were &lt;mark>&lt;strong>“Trust through reputation”&lt;/strong>, &lt;strong>“Trust through transparency”&lt;/strong>, &lt;strong>“Trust through social circle”,&lt;/strong> and &lt;strong>“Commitment through Engagement”&lt;/strong>.&lt;/mark>&lt;/p>
&lt;figure id="figure-delivered-design-patterns">
&lt;div class="figure-img-wrap" >
&lt;img alt="Delivered Design Patterns" srcset="
/media/poster_hua446799e7b82fd69fb65472697896e49_1567135_7f484ab7760fded113231893d6665444.png 400w,
/media/poster_hua446799e7b82fd69fb65472697896e49_1567135_98c78f896cb0aeefc46240a19d814a4a.png 760w,
/media/poster_hua446799e7b82fd69fb65472697896e49_1567135_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/poster_hua446799e7b82fd69fb65472697896e49_1567135_7f484ab7760fded113231893d6665444.png"
width="612"
height="760"
loading="lazy" data-zoomable />&lt;/div>&lt;figcaption>
Delivered Design Patterns
&lt;/figcaption>&lt;/figure>
&lt;h2 id="conclusionreflection">Conclusion/Reflection&lt;/h2>
&lt;p>In this project, we conducted competitive analysis and interviews and analyzed user behavior/motivations pertaining to charities. We developed design patterns for non-profits seeking to increase trust and engagement among donors.&lt;/p>
&lt;p>In the future, we hope to create additional design patterns incorporating other factors that are useful when designing a successful non-profit website (e.g. social media marketability, attractiveness to volunteers, etc.). These additional design patterns would eventually form a pattern language useful for converting different users’ motivations into actions furthering the non-profits’ overarching goals.&lt;/p></description></item><item><title>Quantitative Analysis: Personal Data Sticker</title><link>https://www.tanzhou.space/project/quantitative-analysis-personal-data-sticker/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://www.tanzhou.space/project/quantitative-analysis-personal-data-sticker/</guid><description>&lt;h2 id="overview">Overview&lt;/h2>
&lt;p>&lt;strong>My Role:&lt;/strong> User Researcher working under a leading researcher&lt;/p>
&lt;p>&lt;strong>Methods:&lt;/strong> Survey, R, Factorial design, Factorial Analysis, and Literature review&lt;/p>
&lt;p>&lt;strong>Data Sources:&lt;/strong> Literature review and resulting survey&lt;/p>
&lt;p>&lt;strong>Deliverables:&lt;/strong> Research report describing users’ perceptions of social media stickers containing personal informatics&lt;/p>
&lt;p>&lt;strong>Tools:&lt;/strong> R Studio, JSON, Windows Powershell, Photoshop, and Google Doc&lt;/p>
&lt;p>&lt;strong>Background/Context:&lt;/strong> When people set health and behavior goals such as training for a half-marathon, going to bed earlier, or losing weight, they often use social media to share their progress with friends and family (&lt;a href="https://ieeexplore.ieee.org/abstract/document/6240359" target="_blank" rel="noopener">Munson, 2012&lt;/a>). &lt;a href="https://dl.acm.org/doi/abs/10.1145/2675133.2675135" target="_blank" rel="noopener">Epstein et al. (2015)&lt;/a> identified that many people share personal informatics data to receive advisory information, emotional support, motivation, or accountability from their audience. However, prior research has shown that sharing this data on social media generally does not result in the users’ desired outcomes (&lt;a href="https://par.nsf.gov/biblio/10158861" target="_blank" rel="noopener">Epstein, 2019&lt;/a>).&lt;/p>
&lt;p>&lt;strong>Project Overview:&lt;/strong> I explored design principles for incorporating users’ step counts, one of the most commonly-tracked and shared pieces of personal informatics, into stickers in preparation for creating a similar feature for use in Snapchat posts or Instagram Stories.&lt;/p>
&lt;/br>
&lt;/br>
&lt;details class="toc-inpage d-print-none " open>
&lt;summary class="font-weight-bold">Table of Contents&lt;/summary>
&lt;nav id="TableOfContents">
&lt;ul>
&lt;li>&lt;a href="#overview">Overview&lt;/a>&lt;/li>
&lt;li>&lt;a href="#objective">Objective&lt;/a>&lt;/li>
&lt;li>&lt;a href="#opportunity-and-process">Opportunity and Process&lt;/a>
&lt;ul>
&lt;li>&lt;a href="#opportunity">Opportunity&lt;/a>&lt;/li>
&lt;li>&lt;a href="#process">Process&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;a href="#strategy">Strategy&lt;/a>
&lt;ul>
&lt;li>&lt;a href="#experiment-design">Experiment Design&lt;/a>&lt;/li>
&lt;li>&lt;a href="#user-perception-measurements">User Perception measurements&lt;/a>&lt;/li>
&lt;li>&lt;a href="#survey-sampling-and-participant-screening">Survey Sampling and Participant Screening&lt;/a>&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;a href="#outcomes">Outcomes&lt;/a>&lt;/li>
&lt;li>&lt;a href="#conclusionreflection">Conclusion/Reflection&lt;/a>&lt;/li>
&lt;/ul>
&lt;/nav>
&lt;/details>
&lt;h2 id="objective">Objective&lt;/h2>
&lt;p>When users send or receive Snapchat stickers containing their personal step count data, &lt;strong>what factors influence the users’ perceptions of those stickers? And how do these factors actually impact the users’ perceptions?&lt;/strong>&lt;/p>
&lt;/br>
&lt;h2 id="opportunity-and-process">Opportunity and Process&lt;/h2>
&lt;h3 id="opportunity">Opportunity&lt;/h3>
&lt;p>In Snapchat or Instagram stories, people are able to use stickers to customize their posts for informative, aesthetic, or entertainment purposes. Available stickers are organized in a variety of categories depending on their content, format, style, etc. Currently, these stickers include methods for sharing some types of data, but the data is rarely about the users themselves and instead typically pertains to the user’s location or surroundings (Habib et al., 2019). This gives me an opportunity to design a series of stickers to support more types of user-generated content focused on users’ reasons for sharing.&lt;/p>
&lt;h3 id="process">Process&lt;/h3>
&lt;p>Through a review of research on multimedia and online advertising, I identified a series of predictors that may influence Snapchat users’ perceptions of stickers. I also determined the evaluation metrics for measuring these user perceptions. My team then designed a series of stickers for Snapchat that contain personal data about step counts. Finally, I conducted an online survey to determine how those stickers’ designs influenced users’ perceptions.&lt;/p>
&lt;p>
&lt;figure id="figure-samples-of-data-driven-stickers-incorporating-step-counts">
&lt;div class="figure-img-wrap" >
&lt;img alt="Samples of data-driven stickers incorporating step counts" srcset="
/media/stickers_hub047090f42bff412fa6c741eff5f366d_168346_708343fb38744bd97ceb4011b09e56e8.png 400w,
/media/stickers_hub047090f42bff412fa6c741eff5f366d_168346_c39481a6260585457a81c7407150800b.png 760w,
/media/stickers_hub047090f42bff412fa6c741eff5f366d_168346_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/stickers_hub047090f42bff412fa6c741eff5f366d_168346_708343fb38744bd97ceb4011b09e56e8.png"
width="760"
height="289"
loading="lazy" data-zoomable />&lt;/div>&lt;figcaption>
Samples of data-driven stickers incorporating step counts
&lt;/figcaption>&lt;/figure>
&lt;/br>&lt;/p>
&lt;h2 id="strategy">Strategy&lt;/h2>
&lt;h3 id="experiment-design">Experiment Design&lt;/h3>
&lt;p>I used a factorial study design to evaluate how varying levels of context, presentation, and style influenced participant perception and preference. After consenting to participate, participants were asked to identify one person they frequently Snap with to imagine as their conversation partner. They then gave feedback on six (6) randomly generated posts featuring data-driven stickers, answering some demographic questions upon completion.&lt;/p>
&lt;p>Each generated post varied on three dimensions:&lt;/p>
&lt;p>&lt;strong>Presentation styles&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>Badge style annotates objects with the specific tracked value, for example a shoe or ribbon with “5,793 steps” written on it.&lt;/li>
&lt;li>Embellished style presents common objects as charts, picking one dimension to be the axis and shading the object partway according to the tracked value.&lt;/li>
&lt;li>Analogy style re-expresses tracked values as better-known quantities through comparisons.&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Relevance levels&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>Domain-relevant designs use objects or comparisons specifically related to steps, such as a sneaker or a track field.&lt;/li>
&lt;li>Domain-irrelevant designs use well-known objects and comparisons that are not commonly associated with steps, such as a star or a speedometer.&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Background styles&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>The background of the post is a photo of a specific scenario&lt;/li>
&lt;li>Background is from &lt;a href="https://leaverou.github.io/css3patterns/" target="_blank" rel="noopener">public CSS patterns&lt;/a> with abstract shapes&lt;/li>
&lt;/ul>
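The three dimensions above form a 3 × 2 × 2 factorial, giving twelve possible conditions. Below is a hypothetical sketch of how the six posts per participant could be sampled from that grid; the factor names follow the study, but the code itself is illustrative rather than the actual survey implementation.

```python
# Hypothetical sketch: enumerate the full 3 x 2 x 2 factorial of sticker
# conditions and sample six distinct conditions for one participant.
import itertools
import random

PRESENTATION = ["badge", "embellished", "analogy"]
RELEVANCE = ["domain-relevant", "domain-irrelevant"]
BACKGROUND = ["photo", "css-pattern"]

# Full factorial: 3 * 2 * 2 = 12 conditions
CONDITIONS = list(itertools.product(PRESENTATION, RELEVANCE, BACKGROUND))

def sample_posts(rng: random.Random, k: int = 6):
    """Draw k distinct conditions for one participant's session."""
    return rng.sample(CONDITIONS, k)

posts = sample_posts(random.Random(0))
print(len(posts))  # 6 randomly generated posts per participant
```

Sampling without replacement ensures no participant rates the same condition twice, while randomization across participants spreads responses over the whole grid.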
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>&lt;/th>
&lt;th>Domain-relevant&lt;/th>
&lt;th>Domain-irrelevant&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>Badge&lt;/td>
&lt;td>
&lt;figure >
&lt;div class="figure-img-wrap" >
&lt;img alt="" srcset="
/media/badge1_hu064a447834afdff884f31c78069a4d7f_45186_44c6ec7678aaa220d07c7e544be07ab2.png 400w,
/media/badge1_hu064a447834afdff884f31c78069a4d7f_45186_b9fe96540718c82cfc187179c26ff055.png 760w,
/media/badge1_hu064a447834afdff884f31c78069a4d7f_45186_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/badge1_hu064a447834afdff884f31c78069a4d7f_45186_44c6ec7678aaa220d07c7e544be07ab2.png"
width="50%"
height="50%"
loading="lazy" data-zoomable />&lt;/div>&lt;/figure>
&lt;/td>
&lt;td>
&lt;figure >
&lt;div class="figure-img-wrap" >
&lt;img alt="" srcset="
/media/badge2_hu4912aa3ffce07dd4bf7fce0d915f086a_39371_47df0ce7ceed85a83eeefac577f04227.png 400w,
/media/badge2_hu4912aa3ffce07dd4bf7fce0d915f086a_39371_542787192dca495c8397efc5ef6835ae.png 760w,
/media/badge2_hu4912aa3ffce07dd4bf7fce0d915f086a_39371_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/badge2_hu4912aa3ffce07dd4bf7fce0d915f086a_39371_47df0ce7ceed85a83eeefac577f04227.png"
width="50%"
height="50%"
loading="lazy" data-zoomable />&lt;/div>&lt;/figure>
&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Embellished&lt;/td>
&lt;td>
&lt;figure >
&lt;div class="figure-img-wrap" >
&lt;img alt="" srcset="
/media/embellished1_hu99dae52ec1ca7f480c2aebff86624ee4_16463_9ca4b664cfeb6dd5b02d75099976d86f.png 400w,
/media/embellished1_hu99dae52ec1ca7f480c2aebff86624ee4_16463_ae13adfb8c3d572d31473858b3475201.png 760w,
/media/embellished1_hu99dae52ec1ca7f480c2aebff86624ee4_16463_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/embellished1_hu99dae52ec1ca7f480c2aebff86624ee4_16463_9ca4b664cfeb6dd5b02d75099976d86f.png"
width="50%"
height="50%"
loading="lazy" data-zoomable />&lt;/div>&lt;/figure>
&lt;/td>
&lt;td>
&lt;figure >
&lt;div class="figure-img-wrap" >
&lt;img alt="" srcset="
/media/embellished2_hu60fd90b8397137eff5effd990053fb28_15634_bcc85dccf3c51b8edcf4066f594ea8f6.png 400w,
/media/embellished2_hu60fd90b8397137eff5effd990053fb28_15634_bff7de11828ec7cecaedadb198c46e8a.png 760w,
/media/embellished2_hu60fd90b8397137eff5effd990053fb28_15634_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/embellished2_hu60fd90b8397137eff5effd990053fb28_15634_bcc85dccf3c51b8edcf4066f594ea8f6.png"
width="50%"
height="50%"
loading="lazy" data-zoomable />&lt;/div>&lt;/figure>
&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Analogy&lt;/td>
&lt;td>
&lt;figure >
&lt;div class="figure-img-wrap" >
&lt;img alt="" srcset="
/media/analogy1_hu5f27e9ff5c4870d1220903a51913a5d8_50827_5d92ef21cd4d5d50b8aca666f71c2c14.png 400w,
/media/analogy1_hu5f27e9ff5c4870d1220903a51913a5d8_50827_7eedfe371dd29417701cea744d67e62e.png 760w,
/media/analogy1_hu5f27e9ff5c4870d1220903a51913a5d8_50827_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/analogy1_hu5f27e9ff5c4870d1220903a51913a5d8_50827_5d92ef21cd4d5d50b8aca666f71c2c14.png"
width="50%"
height="50%"
loading="lazy" data-zoomable />&lt;/div>&lt;/figure>
&lt;/td>
&lt;td>
&lt;figure >
&lt;div class="figure-img-wrap" >
&lt;img alt="" srcset="
/media/analogy2_hu2e32e6b44ff12bd24ab6578fd0d261d5_68108_4b6c311bfbee54adeced5f7074adaba6.png 400w,
/media/analogy2_hu2e32e6b44ff12bd24ab6578fd0d261d5_68108_4bae821d43bef061aaeab2b4de87e4c1.png 760w,
/media/analogy2_hu2e32e6b44ff12bd24ab6578fd0d261d5_68108_1200x1200_fit_lanczos_2.png 1200w"
src="https://www.tanzhou.space/media/analogy2_hu2e32e6b44ff12bd24ab6578fd0d261d5_68108_4b6c311bfbee54adeced5f7074adaba6.png"
width="50%"
height="50%"
loading="lazy" data-zoomable />&lt;/div>&lt;/figure>
&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;h3 id="user-perception-measurements">User Perception measurements&lt;/h3>
&lt;p>Participants answered questions drawn from widely used scales in the online marketing and advertising literature, modified for this context.
The validated scales measure:&lt;/p>
&lt;p>&lt;strong>Entertainment Value:&lt;/strong> how entertaining the shared content is&lt;/p>
&lt;p>&lt;strong>Attitude:&lt;/strong> attitude toward the content&lt;/p>
&lt;p>&lt;strong>Intention to use:&lt;/strong> how inclined a user is to use this feature&lt;/p>
&lt;p>&lt;strong>Information Value:&lt;/strong> how informative a user finds receiving the content&lt;/p>
&lt;p>&lt;strong>Privacy Considerations:&lt;/strong> how invasive a user finds sharing the content&lt;/p>
&lt;p>Participants answered each question on a 7-point Likert scale with endpoints “Strongly Disagree” and “Strongly Agree”.&lt;/p>
&lt;h3 id="survey-sampling-and-participant-screening">Survey Sampling and Participant Screening&lt;/h3>
&lt;p>I used convenience sampling to gather information on adult Snapchat users’ opinions of the stickers. I sent out recruitment emails to a student research subject list, and distributed flyers with a survey link in university classrooms and meetings.&lt;/p>
&lt;p>Only participants who were at least 18 years of age and who, on average, sent or viewed at least one post on Snapchat per week were chosen to participate in the study.&lt;/p>
&lt;/br>
&lt;/br>
&lt;h2 id="outcomes">Outcomes&lt;/h2>
&lt;p>The regression analysis showed that:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>Recipients’ perceptions of the stickers correlated more strongly with our independent variables than sharers’ perceptions did.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>The factors did not influence the two sides, sharer and recipient, equally.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>When a sticker’s presentation changed from domain-irrelevant to domain-relevant, ratings increased for both sharers’ and recipients’ intention to use, as well as for recipients’ entertainment and information value.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Background context significantly affected recipients’ ratings of attitude and entertainment value.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Stickers designed in the “Analogy” style were perceived as more exciting to receive than “Plaintext” stickers.
&lt;/br>
&lt;/br>&lt;/p>
&lt;/li>
&lt;/ul>
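The original analysis was run in R Studio; the sketch below re-expresses its general shape in Python, purely for illustration: dummy-code the three experimental factors and fit an ordinary least-squares regression of one rating on them. The response rows are fabricated placeholders, not study data.

```python
# Illustrative sketch (not the original R analysis): treatment-code the
# factors and regress a rating (e.g. intention to use) on them via OLS.
# All ratings below are fabricated placeholders.
import numpy as np

# Each row: (presentation, relevance, background, rating on a 1-7 Likert)
responses = [
    ("analogy", "domain-relevant", "photo", 6),
    ("badge", "domain-irrelevant", "css-pattern", 3),
    ("embellished", "domain-relevant", "photo", 5),
    ("analogy", "domain-irrelevant", "css-pattern", 4),
    ("badge", "domain-relevant", "photo", 5),
    ("embellished", "domain-irrelevant", "css-pattern", 2),
]

def design_matrix(rows):
    # Treatment coding with "badge", "domain-irrelevant", and
    # "css-pattern" as the reference levels.
    X, y = [], []
    for pres, rel, bg, rating in rows:
        X.append([
            1.0,                                      # intercept
            1.0 if pres == "embellished" else 0.0,
            1.0 if pres == "analogy" else 0.0,
            1.0 if rel == "domain-relevant" else 0.0,
            1.0 if bg == "photo" else 0.0,
        ])
        y.append(float(rating))
    return np.array(X), np.array(y)

X, y = design_matrix(responses)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef[3] estimates the shift in rating when a sticker moves from
# domain-irrelevant to domain-relevant, holding the other factors fixed.
print(coef.shape)  # one coefficient per design-matrix column: (5,)
```

With only 19 valid respondents in the pilot, coefficients estimated this way carry wide uncertainty, which is exactly the sampling-error limitation discussed below.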
&lt;h2 id="conclusionreflection">Conclusion/Reflection&lt;/h2>
&lt;p>In this project, I explored design principles for incorporating self-tracked step counts into data-driven stickers as a first step towards integrating these data into Snapchat posts or Instagram Stories. Through surveys, I examined the effect of a sticker’s presentation style, domain-relevance, and background. We uncovered the importance of domain-relevant backgrounds and stickers, identified the situational value of the analogy, embellished, and badge styles, and demonstrated that data-driven stickers can make content more informative and entertaining.&lt;/p>
&lt;p>Some limitations of the study: I only received valid answers from 19 participants, which means our regression results are highly susceptible to sampling error, and it is difficult to draw statistically valid conclusions from such a small sample. Another limitation is that no interaction terms were included in the model, because our sample was simply not large enough to detect interaction effects.&lt;/p>
&lt;p>However, considering the pilot nature of the project, the process I took to reach these conclusions provided a meaningful learning experience. Indeed, &lt;mark>the lessons from this pilot directly influenced the framing of the research questions and the analysis approach when the pilot was later developed into a larger, more comprehensive study with 506 total participants.&lt;/mark> The findings of the large-scale study were published in &lt;a href="https://dl.acm.org/doi/abs/10.1145/3415166" target="_blank" rel="noopener">Proceedings of the ACM on Human-Computer Interaction&lt;/a> in October 2020.&lt;/p></description></item></channel></rss>