12 | FMP: Visualising the people

Maria S
3 min read · Nov 25, 2021

<<< For my previous post on the project, click here.

Last week we decided that our design would be a scenario-driven retrieval system for people who are learning a language informally. However, I was struggling with how we could make the most of visual context. Given that I began the project with an interest in visually contextualising language learning, I wanted to explore this as much as I could in our remaining time.

My work-in-progress brainstorming on image exploration (Credit: me).

Last week I started by making a scenario for meeting somebody new, but found it very hard to visualise, perhaps because the topic was too abstract. To ground my visualisations, I made a new mind-map on the much more specific topic of coffee. I created categories for coffee preferences, ordering a coffee and talking about coffee, with sub-categories that become more specific, covering climate change, coffee growing, and coffee notes & flavours.

Image exploration (Credit: me)

However, it was still very difficult to bridge the gap between what I had created and an interactive prototype. We decided to start prototyping without this aspect figured out, as we needed something to present at our Thursday feedback session.

Feedback

Prototype ideas (Credit: me).

We introduced this week's ideas as above, presented as an app. This is not the only format we envisioned for the design; we are currently targeting touch screens as they are more tactile. We were immediately critiqued for not including data from real people outside our team. This is something we had not had time to do, but we agreed it was extremely important and made it our next priority.

A couple of our tutors commented that our designs reminded them of a word cloud. They mentioned that they would be really interested to see how much of a language someone knows displayed in this way, visualising the non-linear journey of learning languages.

Finally, they highlighted a problem with the personal aspect of our library: how are people going to understand how to behave or act in certain situations if they only have their personal experience to rely on?

We thought that this could be solved through collaborative use, giving users the chance to correct or add information in the app. However, we noted that this could be weaponised or lead to misinformation. We thought that adding a “Downloadable situation packages” feature might be a good start, as this could provide a foundation of essential learning upon which the user could build. Mor mentioned that we could strengthen this aspect by including automated word or phrase suggestions based on what the user inputs into the library, similar to how Spotify suggests songs based on what you like.
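To make that suggestion idea a little more concrete, here is a minimal sketch of how it could work. This is purely illustrative, not our prototype code: the data shapes, the tag names and the scoring rule are all assumptions. The idea is that phrases from a shared pool (for example, a downloaded situation package) are ranked by how many tags they share with entries already in the user's personal library.

```python
from collections import Counter

# Hypothetical shapes: each entry is a phrase plus the situation tags
# attached to it, either by the user or by a shared situation package.
user_library = [
    {"phrase": "Un café au lait, s'il vous plaît", "tags": {"coffee", "ordering"}},
    {"phrase": "C'est trop amer", "tags": {"coffee", "flavour"}},
]

shared_pool = [
    {"phrase": "Sur place ou à emporter ?", "tags": {"coffee", "ordering"}},
    {"phrase": "Quels arômes sentez-vous ?", "tags": {"coffee", "flavour", "talking"}},
    {"phrase": "Où est la gare ?", "tags": {"travel", "directions"}},
]

def suggest(user_library, shared_pool, top_n=3):
    """Rank unseen phrases by how many tags they share with the user's library."""
    # Weight each tag by how often it appears in the user's own entries.
    tag_weights = Counter(tag for entry in user_library for tag in entry["tags"])
    seen = {entry["phrase"] for entry in user_library}

    def score(candidate):
        # Sum the user's weight for every tag the candidate shares.
        return sum(tag_weights[tag] for tag in candidate["tags"])

    ranked = sorted(
        (c for c in shared_pool if c["phrase"] not in seen),
        key=score,
        reverse=True,
    )
    return [c["phrase"] for c in ranked[:top_n] if score(c) > 0]

print(suggest(user_library, shared_pool))
# ['Sur place ou à emporter ?', 'Quels arômes sentez-vous ?']
```

A real version would need much richer signals than tag overlap, but even this simple rule shows how a personal library could seed suggestions without exposing the user to arbitrary, unvetted contributions.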

Next steps

We decided to prototype this collaborative feature as we felt it would contribute to the functionality of our experience.

Collaborative feature (Credit: all of us).

Now that we had tackled this aspect, we needed to fill the app with real data. Each of us planned to ask one person (who lives in a country whose language they don't know) to take pictures, make audio recordings, and take notes while doing an activity, which could be anything.

>>> For my next post on the project, click here.
