TAPESTRY
Technology has dramatically increased our capacity for preserving memories and history. Cameras are compact and ubiquitous, always ready to snap a quick picture or record a special moment, but not without creating a divide between the user and the moment itself. To capture a moment, a participant must step outside of it and become an observer, looking in on that moment rather than taking part in it. This disconnect between participant and observer creates an interesting space. What are the implications of this event-based time phenomenon? Are there ways to circumvent or bridge the gap between participant and observer? Tapestry is a tool, designed using the principles of Slow Design, that allows multiple members of a shared experience to use text-based recollection after the experience to create an AI-generated visual, digital artifact of that experience.
Tools
Photoshop
My Roles
Project Manager, Product Designer
Design Process
Background
With new iPhones and other smartphones launching every year, cameras have become ubiquitous, and people are taking more pictures than ever before. According to an estimate from KeyPoint Intelligence, a digital imaging business intelligence organization, the number of images captured using a cellphone camera increased by a hundred billion from 2016 to 2017. When a person takes out their phone to capture an image, they become an observer and are no longer a participant in the moment. Their participation in and recollection of the event will be different because of this shift in roles.
This fundamental shift formed the motivation for this project. Most of us have been involved in a memory capture activity, either as a participant or an observer, so this topic has wide application and relevance to our lives. People increasingly become observers rather than participants, capturing moments from behind their cellphones. This is the challenge our project attempts to address. By focusing on deferred collaborative narrative building, our project encourages people to participate and live in the moment, and to define their recollection of an event later in collaboration with the friends and family who shared those moments with them.
Tangentially, this project drew upon David Levy’s idea of the “Fast World” and the “Slow World,” as well as the principles of the Slow Design movement. The goal of this project was to move people from the Fast World of content generation and the perceived responsibility of constant or frequent memory capture (and the resulting personal disconnect) to the Slow World, where moments are appreciated, a mode of living “slower, calmer and more contemplative,” as Levy describes. In alignment with this, we wanted to build a tool that was unobtrusive, undemanding, and ultimately focused on “well-being,” per Fuad-Luke’s definition of Slow Design.
Generative Design Thinking
Armed with a general but mutual understanding of the problem, my team convened and discussed various aspects of the memory capture process, detailing the motivations and steps involved in the typical digital process. Among the avenues we thought promising were the participant/observer divide, the building of narratives into memories, the application of Slow Design principles to this space, and the impact of bias in memory capture.
An initial direction we discussed was a solution that absorbed the work of capturing a moment so that participants never had to become observers. This led to talk of a wearable camera built into a pair of glasses that would detect eye movement and capture a moment when the wearer winked, with the sensor able to differentiate between an involuntary blink or twitch and a deliberate wink.
As we discussed that solution, however, numerous privacy concerns arose, and we realized we were essentially revisiting Google Glass, so we moved on. A related idea involved a drone that automatically followed participants and used AI and machine learning to recognize events worth capturing, but we felt the privacy concerns remained; only after a long period of ubiquitous usage would such devices fade into the background and stop influencing behavior, altering both the event and the subsequent memory of it.
Another idea we spent some time exploring was imbuing event capture with more meaning. Capturing a photo or video today isn’t hindered by consideration of materials and medium; digital imagery is always available to gather. But what if we returned to a time when more consideration was placed on the value of a moment relative to the materials required to capture it? We were reminded of the days of the Polaroid camera, large and unwieldy and dependent on a finite resource (blank film), which led to talk of a disposable camera that would allow you to take only one photograph while also capturing audio.
We were also inspired by a Kickstarter campaign for the Memory Camera, which isn’t actually a camera at all. It is instead a device that simulates the feel of taking a photo with a traditional camera, complete with a click and button tension, but it can only be used once; in doing so, theoretically, it imprints the memory of an event on the user, leaving them with an artifact representing a moment.
While we thought these ideas were interesting, we felt we were drifting from our goal of breaking down the barriers between participant and observer, so we moved on to new ideas. We discussed how event capture involves multiple parties, so we focused our energies on solutions that integrated those groups (participants and observers) in sharing the labor and responsibility that comes with memory capture.
An initial idea was a small camera, producing no image visible to the user, that would be on hand for all members of an event. Each member would capture the event using their camera, making whatever choices they wanted, and then all members would deliver the cameras to a third party, an artist, who would have access to the images and the freedom to reinterpret the event as they liked. The result could be an image, a sculpture, a written recollection; the artist would be given the leeway to produce a unique artifact to return to the group of participants. The group’s memory would be “remixed” and strengthened by the influence of a third party.
After much conversation and exploration of this idea, we realized we may have been creating an interesting art project about the reinterpretation of memories, but we weren’t addressing our goal. It was a useful diversion, though, because it led us to settle on the notion of “collaborative memory construction”: breaking down the divide between observer and participant by involving everyone experiencing a memorable moment in the memory capture process.
Based on this premise, we set forth to design a tool that allows multiple members of a shared experience to use text-based recollection after the experience to create an AI-generated visual, digital artifact of the experience.
To support this idea, we reviewed research that touches on the neurological mechanics of memory and recollection, group behavior and theoretical influences on recollection such as collaborative inhibition or socially-shared retrieval-induced forgetting.
Solution Design
Beginning with formative research through a literature review and an application of the AEIOU framework, we developed storyboards describing iconic use cases and conducted interviews with typical users to inform the final design. Finally, we created a video illustrating the speculative vision and the tool as we envision it.
Literature Review
Our literature review and research centered on the mechanics of memory and recollection and the impact of memory capture on social dynamics, along with a look at similar projects tackling the same subjects.
One project, Highlights, was described in a 2010 paper as a system by which “mobile phones can be harnessed to collaboratively sense and record events of interest,” which would address the participant/observer problem (also referenced in the paper). While more forward-looking than practical at the time, given the limited availability of applicable devices, the paper suggested an interesting technical solution to this problem.
Another paper that informed our project centered on understanding the impact of smartphones on the family vacation experience. While the study covered all aspects of cellphone usage, not just memory capture, there was applicable overlap. The authors noted that “the accurate recollection of episodic memories is highly dependent on the degree of the participant’s engagement during the episode in question,” suggesting that disengagement from an activity has a lasting impact on future recollection in the absence of media documenting the episode. They also found that the study participants’ “autobiographical memories about events” were “fragmented and vague” and that participants “had difficulty recalling and detailing their vacation experiences due to their smartphone use during those experiences,” again pointing to the impact of traditional memory capture on memory and recollection.
Finally, related to the idea of collaborative memory reconstruction, the paper “Why two heads apart are better than two heads together: Multiple mechanisms underlie the collaborative inhibition effect in memory” described how groups working together, that is, in the same room at the same time, don’t necessarily produce the best outcome with regard to recollection, a phenomenon the authors refer to as collaborative inhibition. This finding informed a significant aspect of the solution we would later develop: remote, asynchronous collaboration.
AEIOU Framework
To guide data collection during the interview process, we took advantage of the structure provided by the AEIOU ethnographic framework.
Activities
Audio Recording, Video Recording, Meta Data Capture, Writing, Recalling, Editing, Memorizing, Copying, Deleting, Storing, Recreating, Remixing, Messaging
Environment
Concerts, Weddings, Special Events, Funny moments, Vacations, Lectures, Family Photoshoots, Social Gatherings
Interactions
Adjusting Settings, Posing, Ordering Pose Configuration, Setting up the Shot, Asking someone to take your photo, Excessive Photo/memory Taking
Objects
Camera, Mic, Video Recorder, Phone, Journal, Tripod, Printers, Photo Editor, Storage, Cloud
Users
Professional Memory Capture (Photographers, Videographers, Sound Artists, Etc.), Casual Social Media Goers (Instagram, Snapchat, Vimeo, etc.), Bystanders (Involved in the shot, or taking of the shot)
Storyboards
Two storyboards were constructed showcasing two possible use cases. The first storyboard follows a user attending a concert and using the Tapestry mobile application to record their experience. When they arrive at the concert, the user taps a plus icon to create a ‘new memory,’ which records certain environmental details (but no visual data) based on their in-app preferences and privacy settings, such as location and environmental conditions. Once they are ready to begin, they press start, put their phone away, and enjoy the experience. When the concert is over, the user returns to the application and ends the memory by pressing the ‘stop’ button. Once a memory is logged, they are prompted to write down any reflections, give the memory a name, and send it to others who were there so they can add their own reflections. Finally, a static, digital image representation of the event is produced using artificial intelligence and sent to the user after a 24- to 48-hour period.
The second storyboard follows a user going on a hike. The user opens the Tapestry mobile application at the trailhead to create a new memory, then puts their phone away for the remainder of the hike. Once the hike is over, the user returns to the application to ‘stop’ the memory, reflect on the experience, name it, and send it to others they were with. Once they are done reflecting, the user presses a button to send the information to an artificial intelligence algorithm, which produces a visual representation of the memory and sends it to the user after a 24- to 48-hour period.
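To make the storyboard mechanics concrete, the sketch below shows one way such a ‘memory’ record might be structured. It is purely illustrative: the names (Memory, start, stop, add_reflection) and the specific metadata fields are assumptions of ours, not part of a built system.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Memory:
        """One Tapestry memory: environmental metadata only, never visual data."""
        name: str = "Untitled memory"
        started_at: Optional[datetime] = None
        ended_at: Optional[datetime] = None
        location: Optional[tuple] = None                 # (latitude, longitude), if permitted
        conditions: dict = field(default_factory=dict)   # e.g. weather, ambient sound
        reflections: dict = field(default_factory=dict)  # participant -> free-text recollection

        def start(self, location=None, **conditions):
            # Pressing 'start' at the concert or trailhead logs time, place, and
            # whatever environmental details the user's privacy settings allow.
            self.started_at = datetime.now()
            self.location = location
            self.conditions.update(conditions)

        def stop(self):
            # Pressing 'stop' closes the memory and triggers the reflection prompt.
            self.ended_at = datetime.now()

        def add_reflection(self, participant, text):
            # Each invited participant contributes their own text-based recollection.
            self.reflections[participant] = text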
Interviews
Our team conducted four semi-structured interviews with potential users to understand how our proposed solution would be understood and interpreted by a general user.
The interviewees ranged in age from 18 to 25 and included equal representation from those identifying as male and female. All interviewees were university students, with two pursuing degrees in a STEM field and two enrolled in Liberal Arts programs, and all self-identified as comfortable and familiar with traditional memory capture methods.
Each interview started with a brief description of the project and an overview of the interview agenda. We then asked the interviewees to tell us about a memorable, event-based experience from their recent past. This conversation served as a starting point for questions about the methods or tools they used to remember better. Next, we asked several questions about how using a particular method or tool affected their memory. Continuing this thread, we inquired about the interviewee’s attitudes toward that tool and asked them to describe how they had used it in other contexts to aid their memory of important, interesting, or fun events and experiences.
Next, we asked the interviewees to choose one of the two storyboards. Once chosen, we let them explore the storyboard on their own, providing descriptions when requested and taking note of any reactions or comments. This portion of the interview was largely led by the interviewee, to give them freedom in their interpretation of the project.
Finally, once they were done exploring the storyboards, the interviewees were asked to share their remaining thoughts about the project. One interviewee made a telling remark at the end of the interview: “So this is not about taking pictures?” This interviewee initially had a hard time understanding that our application didn’t use pictures as input, but she was quickly amazed by the potential of our project.
Prototype
Having felt that we’d adequately validated our concept, we moved into a more concrete phase of our design process and defined the ultimate flow and processes involved with what we named Tapestry, our collaborative memory capture and reconstruction application.
The flow of use for this cellphone application was as follows:
1. An individual Tapestry user activates the application during an event and metadata is captured, such as time of day, GPS-assisted location, and other environmental conditions.
2. Later, that user is prompted to share that event with other Tapestry users present during the event.
3. Individually, each member of this group describes the event in as much detail as they wish using text input.
4. Tapestry then combines these descriptions using AI to generate a visual representation of the memory which is then distributed to each member.
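To ground this flow, the following minimal sketch shows how steps 2 through 4 might fit together, building on the Memory record sketched in the storyboard section. Everything here is hypothetical: build_prompt, finalize_memory, and generate_image are illustrative stand-ins, since Tapestry remained a speculative design and no actual image model was chosen.

    def build_prompt(memory):
        # Step 4a: merge every participant's recollection with the captured
        # metadata into a single prompt for a text-to-image model.
        details = "; ".join(f"{k}: {v}" for k, v in memory.conditions.items())
        accounts = "\n".join(memory.reflections.values())
        return (f"A scene at {memory.location} on {memory.started_at:%A at %H:%M}. "
                f"Conditions: {details}.\nParticipant accounts:\n{accounts}")

    def generate_image(prompt):
        # Stand-in for an unspecified text-to-image model; a real system
        # would return rendered image data here.
        return f"[image rendered from prompt: {prompt[:60]}...]"

    def finalize_memory(memory, recollections):
        # Steps 2-3: each invited participant describes the event independently.
        # Remote, asynchronous input is deliberate, given the collaborative
        # inhibition findings from our literature review.
        for participant, text in recollections.items():
            memory.add_reflection(participant, text)
        # Step 4b: generate one shared visual artifact for distribution to all.
        return generate_image(build_prompt(memory))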
Consider the following example scenario:
You are at a restaurant with your friends, celebrating a birthday while enjoying good food and live music. This is an event you’d like to capture as a memory, but you don’t want to leave the moment by pulling out your cellphone, opening the camera app, and becoming an observer. Instead, you open Tapestry with a simple tap and return to the celebration. Later, at home, you recall the evening you just had with your friends. You describe the event in Tapestry using text and invite your friends to collaborate by describing their version of the event. Based on your description, those of the friends who participated in the narration, and the environmental data collected when you initiated the process (weather, time of day, location), Tapestry creates a static, digital image and distributes it to everyone who participated in its construction. It draws on available imagery of the venue, includes images of participants and any other significant details recalled by the participants, and adjusts the image to reflect any pertinent environmental conditions. In this way, you were able to remain a participant throughout the event, and the shared memory is informed by the multiple perspectives of everyone involved.
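Continuing the sketch from above, the restaurant scenario might read as follows in code; every name, coordinate, and line of text here is invented for illustration.

    # The birthday dinner, expressed with the illustrative Memory sketch.
    dinner = Memory(name="Maya's birthday dinner")
    dinner.start(location=(47.6062, -122.3321),   # hypothetical venue coordinates
                 weather="clear evening", ambience="live music")
    # ... the celebration happens; the phone stays in your pocket ...
    dinner.stop()

    artifact = finalize_memory(dinner, {
        "you":  "Corner table, candles everywhere, the band played her favorite song.",
        "Maya": "I laughed so hard during the toast I nearly cried.",
    })
    # 'artifact' is the shared image Tapestry would distribute to every contributor.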
Tapestry and Slow Design Principles
Tapestry wasn’t originally conceived as an active exercise in the application of Slow Design principles as defined by Strauss and Fuad-Luke, but as work continued, it became clear that the goals of our project and those of the Slow Design movement ran in parallel. Each tenet of the Slow Design movement could be applied in a meaningful way to our project:
Reveal
The collaborative nature of Tapestry allows multiple participants to “fill in” gaps in a memory.
Expand
Those missing pieces could be inconsequential, or they could be incredibly significant and crucial to the long-term meaning of a memory.
Reflect
Users are asked to recall and thoughtfully consider their experience in a free-form manner.
Engage
At its core, Tapestry is a shared-experience tool that relies completely on the sharing of knowledge and cooperation.
Participate
Tapestry requires users to actively contribute and be a part of the process, providing their perspective in a way that makes their contribution not only important but critical.
Evolve
As users engage with Tapestry over time, the practice of reflection would deepen, and their recollections would grow richer and more detailed.
Post-Project Considerations
While I believe Tapestry would break down the barrier between participant and observer, it’s not possible to say this definitively without extensive testing of a functional prototype. Theoretically, what I’ve described in this project is possible to build, but it would require a scale of effort beyond the scope of this project.
I also accept that while Tapestry requires much less user intervention than using a cellphone camera to capture an image or video, the user must still access the application. It could be argued that this act also removes the participant from the event, though we believe the reduced effort has less of an impact than traditional memory capture does. Testing and iteration might reveal that a dedicated hardware form factor with limited interactivity is the ideal solution: for example, a device consisting of essentially one button and no screen that could be activated by touch while still inside someone’s pocket.