✦ AI-powered · Free to use · Built by students

Card sorting & tree testing,
without the price tag

TreeTest AI is a free, AI-assisted research pipeline for information architecture. Run card sorts, generate site maps, and validate navigation with tree tests — all in one tool.

One pipeline, end to end
From raw content items to a validated navigation structure — AI handles the heavy lifting so you can focus on the research.
Card Sort
Upload your content items and run open, closed, or hybrid card sorts. Share a link — participants sort in their browser, no software needed.
AI Site Map
AI clusters your card sort results into a navigation structure using participant language. Review, edit, and approve before testing.
Tree Test
AI writes realistic tasks with verified paths. Collect unmoderated responses, then get a full dashboard with success rates, SEQ scores, and path analysis.
Rich Dashboards
Similarity matrices, first-click heatmaps, confusion tables, dendrograms, and more — all the visualizations you'd expect from enterprise tools.
AI Insights
Get prioritised recommendations, problem areas, and strengths — AI reads your data so you can walk into stakeholder meetings with a clear story.
Iterate & Improve
Apply AI suggestions, re-run tree tests, and track improvement across rounds. The pipeline loops until your navigation is solid.
Built by students, for researchers who can't afford $300/month tools

We're UX students who ran into the same wall every semester: industry-standard user testing tools are prohibitively expensive. Optimal Workshop, Maze, UserTesting — they're built for enterprise budgets, not student projects or indie teams.

So we built our own. TreeTest AI uses AI to replace the manual overhead that makes these tools expensive — generating tasks, clustering card sort data, writing analysis, validating paths. The result is a tool that does what a $3,000/year subscription does, for free, running entirely in your browser with your own AI API key.

This isn't a watered-down demo. It's the same tool we use for our own research projects — complete with unmoderated testing, real-time response collection, and publication-ready dashboards. We believe access to good research tools shouldn't depend on your budget.

Built by
New Study
Step 1 of 7

What are you organising?

Name your study, choose your card sort type, and add the content items you want participants to sort.

Study name
Study description (helps AI write better tasks)
Card sort type
Choose how participants will categorise your items.
Open
Participants create their own category names
Closed
You define the categories; participants sort into them
Hybrid
You provide seed categories; participants can rename or add
Define categories
Add the categories participants will sort into. Assign hierarchy levels to define primary vs. secondary groupings.
0 categories added
Content items
These are what participants will sort into groups. Type each item and press Enter.
0 items added
Or import items
Paste a list (one per line or comma-separated), upload a file, or drop a screenshot for AI to extract items from.
Add at least 5 items to continue
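The paste-import described above (one item per line, or comma-separated) boils down to a small parser. A minimal sketch — `parse_items` is an illustrative name, not the tool's actual API:

```python
import re

def parse_items(raw: str) -> list[str]:
    """Split pasted text into content items: one per line or comma-separated."""
    parts = re.split(r"[\n,]", raw)
    # Trim whitespace and drop empties (blank lines, trailing commas).
    return [p.strip() for p in parts if p.strip()]

print(parse_items("Mugs, Pans\nComic books\n"))
# ['Mugs', 'Pans', 'Comic books']
```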
Step 2 of 7

Your card sort is ready

AI has built a hybrid card sort activity with your items. Share the link — participants will group items in whatever way makes sense to them.

Items to sort: 6
Seed categories: Closed — fixed categories
Responses so far: 0
Share with participants
We recommend 15–20 responses for reliable clustering. The link stays open until you close it.
Live responses
Collecting
Waiting for first response…
Step 3 of 7

Card sort results

Here's how participants grouped your items. Review the analysis below, then generate your site map.

Participants: 18
Agreement score: 71% (↑ High)
Avg. completion: 6m
✦ AI Insights — Card sort analysis
Participants showed strong agreement on most groupings (71%), particularly for kitchen, home, and clothing items. The primary ambiguity is in media-related items — comics and vinyl records were distributed across 3 different groups.
Problem areas
Strengths
Item confusion table
Items placed in the most different categories — highest ambiguity first.
Item | # Groups | Top group | 2nd group
Top participant category names
Co-sort heatmap
Items frequently sorted together by the same participants.
Co-sort similarity matrix
% of participants who placed each pair in the same group. White = 0%, deep blue = 100%.
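A co-sort similarity matrix like the one described here can be computed directly from the raw sorts. A minimal sketch — the function name and data shape are illustrative, not the tool's internals:

```python
from itertools import combinations

def cosort_matrix(responses, items):
    """% of participants who placed each pair of items in the same group.

    `responses` is one dict per participant mapping item -> group label;
    labels need not match across participants.
    """
    n = len(responses)
    sim = {pair: 0 for pair in combinations(items, 2)}
    for groups in responses:
        for a, b in combinations(items, 2):
            # Count the pair only when both items were sorted together.
            if groups.get(a) is not None and groups.get(a) == groups.get(b):
                sim[(a, b)] += 1
    return {pair: round(100 * count / n) for pair, count in sim.items()}

responses = [
    {"Mugs": "Kitchen", "Pans": "Kitchen", "Comics": "Media"},
    {"Mugs": "Kitchen", "Pans": "Kitchen", "Comics": "Collectibles"},
]
print(cosort_matrix(responses, ["Mugs", "Pans", "Comics"]))
# {('Mugs', 'Pans'): 100, ('Mugs', 'Comics'): 0, ('Pans', 'Comics'): 0}
```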
Overall assessment: Good
Recommendations for site map
Step 4 of 7

Proposed site map

AI has clustered your card sort data into a navigation structure. Review and edit any labels, then approve it to generate your tree test.

✦ AI note — 1 item flagged
Based on 18 participants
"Comic Books" appeared in 3 different participant groups. I've placed them under Books & Media (most common) but you may want to consider a cross-link under Collectibles too.
Site map structure
Step 5 of 7

Tree test is ready

AI has written 10 realistic tasks from your site map, with every path verified against your tree. Review them below, then share the link.
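Verifying that each task's answer path actually exists in the tree is mechanical. A sketch assuming the tree is represented as nested dicts (an illustrative representation, not the tool's data model):

```python
def path_exists(tree, path):
    """Check that a task's answer path (a list of labels) exists in the tree."""
    node = tree
    for label in path:
        if label not in node:
            return False
        node = node[label]
    return True

tree = {
    "Books & Media": {"Comics": {}, "Vinyl": {}},
    "Kitchen": {"Mugs": {}},
}
print(path_exists(tree, ["Books & Media", "Comics"]))  # True
print(path_exists(tree, ["Kitchen", "Comics"]))        # False
```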

Tasks generated: 10 (✓ all paths verified)
Branches covered: 8 / 8
Responses so far: 0
Generated tasks
Share with participants
We recommend at least 15 responses for reliable tree test data.
Step 6 of 7

Tree test results

Here's how well participants found items in your navigation. Tasks below 70% success are worth investigating.

Participants: 22
Overall success: 74% (↑ Moderate — room to improve)
Avg. SEQ score: 5.4 / 7 (Mostly easy)
Task success breakdown
Found it directly · Found it (with backtracking) · Didn't find it
Participant responses (22 participants)
Direct · ~ Indirect · Fail · Skip
PID | Timestamp | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8 | T9 | T10 | Score
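The per-task breakdown above reduces to counting outcomes. A minimal sketch; note that treating skips as unanswered (excluded from the success rate) is a methodological choice here, not necessarily what the tool does:

```python
from collections import Counter

def task_success(outcomes):
    """Summarise one task's outcomes: direct, indirect, fail, skip."""
    counts = Counter(outcomes)
    answered = len(outcomes) - counts["skip"]
    found = counts["direct"] + counts["indirect"]  # direct + backtracked
    rate = round(100 * found / answered) if answered else 0
    return {"counts": dict(counts), "success_pct": rate}

print(task_success(["direct", "direct", "indirect", "fail", "skip"]))
# {'counts': {'direct': 2, 'indirect': 1, 'fail': 1, 'skip': 1}, 'success_pct': 75}
```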
First-click matrix
Heatmap of where participants first clicked per task. Darker = more clicks. ✓ = correct category.
SEQ scores by task
Mean perceived ease per task (1–7). Dashed line = 5.5 benchmark.
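The per-task SEQ means behind a chart like this are just averages of 1–7 ratings, skipping non-responses. A minimal sketch (data shape is illustrative):

```python
def mean_seq(scores_by_task):
    """Mean Single Ease Question rating (1-7) per task; None marks a skip."""
    out = {}
    for task, scores in scores_by_task.items():
        rated = [s for s in scores if s is not None]
        out[task] = round(sum(rated) / len(rated), 1) if rated else None
    return out

print(mean_seq({"T1": [6, 7, 5], "T2": [3, 4, None]}))
# {'T1': 6.0, 'T2': 3.5}
```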
Success × Time matrix
Each dot = one task. Top-left = slow & unsuccessful. Bottom-right = fast & successful.
Time on task
Box = Q1–Q3, centre line = median, whiskers = min/max.
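The five numbers each box encodes can be computed with the standard library; `statistics.quantiles` with its default exclusive method is one reasonable choice, though tools differ on quartile conventions:

```python
import statistics

def box_stats(times):
    """Five-number summary for a time-on-task box plot (times in seconds)."""
    q1, median, q3 = statistics.quantiles(times, n=4)  # default: exclusive
    return {"min": min(times), "q1": q1, "median": median,
            "q3": q3, "max": max(times)}

print(box_stats([10, 20, 30, 40, 50]))
# {'min': 10, 'q1': 15.0, 'median': 30.0, 'q3': 45.0, 'max': 50}
```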
✦ AI Insights — Tree test analysis
Overall assessment: Moderate
The navigation structure performs adequately overall at 74% success, but tasks involving media items (T2, T7) show significantly lower findability. The primary issue is label ambiguity.
Problem areas
Strengths
Prioritised recommendations
Step 7 of 7 · Iteration 1

AI's suggested improvements

Based on the tree test data, here's what AI recommends changing. Apply what you agree with, then run another round to confirm the improvements.

Refinement round 1 · 74% → targeting 85%+
Updated site map
Changes pending
Export study data
Download raw data from each phase for further analysis.
AI Settings
AI Provider
API Key
Enter an API key to test the connection.
Your key is stored only in this browser. It is never sent to any server other than your chosen AI provider.
Working…
This takes about 10–15 seconds