Our client, Neighborbrite, is a startup offering an AI-powered landscaping tool that lets users upload photos of their yard and receive design inspirations. When the project began, the app was only 4 months old, and the team needed help understanding users and improving the UX.
First, we conducted user testing to see how free and pro (paid) users interact with the tool.
Users tend to tap the “Try now” button shortly after arriving on the landing page, but many hesitate on the next screen, the login page, as they don’t fully understand what the tool does. To address this, I recommended making the demo video more noticeable to help users understand the product.
I also recommended that, if feasible from an engineering or business perspective, the login or sign-up step be delayed until just before generating the design, or only when credentials are needed, so users can explore the tool first and feel more confident about signing up.
Users were confused by the free-text input for adding or avoiding elements. Since these are simple entries, users didn’t expect a fully free-text format and instead looked for a more efficient input method. The limited examples in the input field also didn’t clearly convey the range of acceptable inputs.
To address this, I aimed to clarify the input examples and improve input efficiency. I came up with two solutions:
- Checkboxes: combine checkbox-based selection for common elements with a free-text input for other elements.
- Suggestions: show examples and common elements as users type.
In the final presentation, I proposed the checkbox idea because it helps users more easily grasp the range of customizable elements, and 25% of participants explicitly expressed a need for checkboxes (filters).
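The checkbox-plus-free-text combination could be sketched as a small merge step, assuming the two inputs feed one list of elements sent to the generator. This is an illustrative sketch only; the element list and the names `COMMON_ELEMENTS` and `mergeElements` are hypothetical, not from Neighborbrite’s actual codebase.

```typescript
// Hypothetical common elements surfaced as checkboxes (not the real list).
const COMMON_ELEMENTS = ["fire pit", "patio", "raised beds", "native plants"];

// Merge checkbox selections with comma-separated free-text entries into
// one de-duplicated list of elements for the design request.
function mergeElements(checked: string[], freeText: string): string[] {
  // Split the free-text field on commas, trim whitespace, drop empties.
  const extras = freeText
    .split(",")
    .map((s) => s.trim())
    .filter((s) => s.length > 0);

  // De-duplicate case-insensitively while preserving input order.
  const seen = new Set<string>();
  const merged: string[] = [];
  for (const el of [...checked, ...extras]) {
    const key = el.toLowerCase();
    if (!seen.has(key)) {
      seen.add(key);
      merged.push(el);
    }
  }
  return merged;
}
```

With this shape, a user who checks “patio” and also types “fire pit, patio” would end up with a single clean list rather than a duplicate entry.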
At the end of the project, we presented our findings to the client and delivered a detailed report covering user feedback from the testing sessions. The CEO expressed satisfaction with our work, and we contributed to the client’s success in reaching 100,000 users within a year.
As external UX consultants, we didn’t have the opportunity to measure the impact. If I were part of their team, I would use the following metrics to evaluate success.
We assumed that adding or avoiding specific elements was the primary need. However, the participants didn’t always have specific elements in mind; instead, they expressed more abstract or general needs, such as “low maintenance” or “suitable for dry climates.” Since the strength of AI lies in its flexibility, helping users write prompts based on vague needs might have been the better solution.
I looked into other AI tools to see how they assist with prompt writing and found a common pattern: showing example prompt sentences in easily accessible ways.
Inspired by a lightning demo, I designed a prompt-assist feature. It displays common requests that users typically have as starting points. Users can tap a button to add these sample prompts. If there's already a prompt in the field, they can choose to either append or replace it.
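The append-or-replace behavior described above can be sketched as a small helper, assuming the prompt field holds a single string. The function name `applySamplePrompt` and its parameters are hypothetical, for illustration only.

```typescript
// How a tapped sample prompt is combined with the current field value.
type InsertMode = "append" | "replace";

function applySamplePrompt(
  current: string,
  sample: string,
  mode: InsertMode
): string {
  // An empty field always just takes the sample, regardless of mode.
  if (mode === "replace" || current.trim() === "") {
    return sample;
  }
  // Append with a separator so the combined text reads as one prompt.
  return `${current.trim()}, ${sample}`;
}
```

For example, appending “low maintenance” to an existing “suitable for dry climates” prompt yields one combined request, while replace mode simply swaps the field’s contents.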
During the project, the CEO expressed interest in understanding how users feel about the customization experience. However, there’s no built-in way to collect detailed feedback on each generated output or capture user reactions. While user testing helps, it isn’t scalable. To evaluate the AI customization UX, I would also recommend adding a feedback feature like the one below.