Over seven months, I worked with a client to define a use case and prototype a co-adaptive interaction that makes creative software easier to use. See the product website.
user research // survey, structured interview, contextual inquiry, cognitive task analysis, think-alouds, speed-dating, analogous domains, secondary research
synthesis // affinity diagramming, customer journey map, visioning
prototyping // storyboarding, low to high-fidelity prototyping, usability testing, heuristic evaluation
WHAT IS A CO-ADAPTIVE INTERACTION?
Creative software enriches the work we do in fields like photo-editing, drawing, and 3D modeling. However, with increasing complexity and feature-richness comes decreasing usability. Users are forced to learn and adapt to the tools that they use, without proactive assistance from the tool itself. Can the user and the system “co-adapt” so that users can learn faster and achieve their creative goals more efficiently? We set out to understand this new technological landscape—somewhere beyond context-awareness and personalization—to build a smarter creative software tool.
LAYING THE GROUNDWORK
Armed with an unusually academic and open-ended problem, my five-person team, Converge, quickly grounded ourselves in existing research. We conducted a thorough literature review, delving into academic papers and interviewing experts in the field. These initial conversations generated the research questions that steered the following months of user research.
From week 1, I pushed to develop a rigorous, reusable co-adaptive framework within which to structure our prototypes. This early decision greatly influenced our ability to think systematically and holistically throughout the research phase.
an early concept map during a brainstorming session
design & communication // I led the UX and visual design direction of our deliverables. I designed and wrote the content of our digital and print materials.
strategy & vision // I defined the insights that emerged from several months of research. As a generalist, I pushed for a high-level, conceptual understanding of the problem space.
product // I defined the workflow and helped scope the product features of our high-fidelity prototype.
misc // I worked on all aspects of the project and played other vital roles as a researcher, project manager, and content strategist. On any given day, I might be writing, analyzing data, and coordinating meetings—while conducting user tests or ideating in between.
UNCOVERING THE UNDERLYING PROBLEMS OF CREATIVE SOFTWARE
To answer our research questions, we wove through various levels of abstraction in our research methodology. This multi-pronged approach gave us a bird's-eye view of our scope and diversified the responses we received. To understand why, I interviewed users about their motives for using creative software. To understand what, I observed users in their work environment, making note of the way their space affects their workflow. Finally, to understand how, I recorded nuanced interactions during think aloud sessions, in which we asked users to complete tasks in Photoshop and GIMP.
While we employed traditional user research methods, stepping outside those boundaries to explore analogous domains also inspired design ideas. For instance, we played games like Minecraft to experiment with wildly different interfaces. We talked to dancers, yoga instructors, and climbers. We taught a teammate how to drive while acting as a co-adaptive interface.
We tackled our research using a variety of methods, including speed-dating, analogous domains research, and contextual inquiry.
We synthesized our findings throughout the research phase. They span a range from quantitative (e.g., survey data) to qualitative (e.g., affinity diagrams of user interviews). As our design ideas emerged, we found ourselves expanding and exploring new technologies and modalities beyond the interface.
WHAT WE DISCOVERED
Ironically, through our research in human-computer interaction, the resounding theme that emerged comes from human-human interaction. In our human relationships, we co-adapt to one another based on context and history. This nuance does not exist in creative software, which at best segments users into novice and expert categories. We summarized our findings into five insights:
ONE SIZE DOESN'T FIT ALL. Goals, prior knowledge, and context are not isolated factors. Their interplay creates a unique user profile. Any system that attempts to classify users by a single factor (e.g., novice vs. expert) will miss the inherent nuances at play in every interaction.
I WANT HELP, FAST. Users turn to online and in-person interactions for help because they are dynamic, personal, and engaging. They watch tutorials and read forums for personalized help. How can we bring the elements of conversation into the tool when users need help?
LEARNING THE TERMINOLOGY IS INEFFICIENT. Users tend to search for help based on the task rather than the feature. But if users don’t know what feature to use to complete their task, how can they use the correct terminology to search for the feature?
I NEED TO TRUST YOU FIRST. Users only relinquish control and accept changes after they build trust with the software. By adopting more incremental and salient changes, software can set up a pattern of successful interactions.
THE ROAD TO PERFECTION IS LITTERED WITH TRIALS AND ERRORS. The states in which a user learns, gains skills, and fine-tunes are necessary to the creative process. However, creative struggle ≠ struggle with the tool. When users grapple with complex features, there is an opportunity to refocus them on their creative work.
THE BOTTOM LINE:
What users fundamentally seek—and what is missing in their creative software—is a human element of teaching, sharing, and creating.
The inherent value of co-adaptive technology—what it offers that existing software does not—is a proactive partnership between the user and the software to more efficiently accomplish the user's goals. To that end, our framework creates a feedback loop in which the software observes, predicts, and adapts based on user behaviors, interactions, and other context. The user adapts to the changing system, and both work in tandem to drive towards the user's goal. We used our framework as a benchmark to measure the co-adaptivity of our prototypes.
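The feedback loop above can be sketched in code. This is a minimal, illustrative sketch only; the class and method names (`CoAdaptiveLoop`, `observe`, `predict`, `adapt`) and the frequency heuristic are my assumptions for demonstration, not the team's actual implementation.

```python
class CoAdaptiveLoop:
    """Sketch of the observe -> predict -> adapt cycle driven by user behavior."""

    def __init__(self):
        self.history = []   # observed user actions
        self.profile = {}   # evolving user model the interface adapts around

    def observe(self, action):
        """Record a user interaction (e.g., a tool used, a search query, an undo)."""
        self.history.append(action)

    def predict(self):
        """Guess the user's likely next need from recent behavior.

        Naive heuristic: the most frequent recent action signals intent.
        A real system would weigh richer context (goals, skill, environment).
        """
        if not self.history:
            return None
        recent = self.history[-10:]
        return max(set(recent), key=recent.count)

    def adapt(self):
        """Adjust the user model incrementally, keeping changes small and salient."""
        likely_need = self.predict()
        if likely_need is not None:
            self.profile["suggested_feature"] = likely_need
        return self.profile
```

The user then reacts to the adapted interface, generating new observations, which closes the loop the framework describes.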
While intelligent agents (Siri and Alexa, for example) are closely related to our work, our team believes that co-adaptive software can go a step further—particularly in the field of creative software.
Based on our insights and areas of opportunity, we are developing a prototype that declutters the UI and allows natural language input into the creative software tool. After multiple rounds of usability testing with storyboards and low- to high-fidelity prototypes, we are confident that we chose a promising direction of exploration.
Given only two months to build and test a prototype, we scoped our product to be a simple photo editor. However, we envision that more robust creative software, with a larger feature set, would benefit even more from our interface implementation.
Our conceptual photo-editing software includes two modes:
(1) Search mode allows users to search for a task or workflow in their own natural language. The system learns their language and adapts to it by prioritizing the search results (features and workflows) based on the user's mental model. On the backend, we use Mechanical Turk to crowdsource the potential language of the user and populate a thesaurus of search terms. Instead of requiring an exact feature name, like "Curves tool," the user can type "I want a hazy effect" to see results.
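The thesaurus-backed search described above might look something like the sketch below. The phrase-to-feature mappings and the ranking scheme are hypothetical stand-ins: a real thesaurus would be populated from crowdsourced Mechanical Turk data, and per-user weights would be learned from which results the user actually accepts.

```python
# Illustrative mapping from everyday phrases to tool features,
# as might be collected by crowdsourcing. Entries are made up for this sketch.
THESAURUS = {
    "hazy": ["Curves tool", "Gaussian Blur"],
    "blurry": ["Gaussian Blur", "Lens Blur"],
    "brighter": ["Brightness/Contrast", "Curves tool"],
}

def search(query, user_weights=None):
    """Return features ranked by match count, then by this user's past usage.

    `user_weights` lets results drift toward the individual user's mental
    model over time, rather than a one-size-fits-all ranking.
    """
    user_weights = user_weights or {}
    scores = {}
    for word in query.lower().split():
        for feature in THESAURUS.get(word.strip(".,!?"), []):
            scores[feature] = scores.get(feature, 0) + 1
    return sorted(
        scores,
        key=lambda f: (scores[f], user_weights.get(f, 0)),
        reverse=True,
    )
```

For example, `search("I want a hazy effect")` surfaces the relevant features even though the query never names them, and a user who frequently accepts "Gaussian Blur" would see it ranked first.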
(2) Explore mode provides users with the toolbar that they're used to. By selecting a feature, they can view a panel of photos that other people have edited with that feature. Our users repeatedly told us that they look to other creatives for inspiration. By bringing this into the tool, the user can browse inspiration and learn about feature combinations that others have used to capture the effect that they want.
Part of our workflow for a smart, simplified, minimal photo-editing interface.
Our capstone was a perfect mix of practice and pedagogy, combining thorough academic research with rapid iterations of design and testing. By synthesizing data, writing, and designing, we have molded our abstract ideas into a tangible prototype. Beyond the prototype, we have also conceived a co-adaptive framework and developed a process for crowdsourcing natural language. By investing time in these reusable components, we can make a contribution to our client's project that lasts beyond the scope of the prototype. And by thinking critically about the entire system, we envision a product that has space to evolve as technology improves.