Generative Flag @ MixBrasil Festival
Generative design and front-end dev. 2022
Context
The MixBrasil Festival is a yearly festival dedicated to the Queer Community and its various forms of expression. It takes place in November at prestigious movie theaters across São Paulo, taking full advantage of the city's diversity as the host of a variety of innovative events.
In 2020, an estimated audience of 176k people attended the screenings, theatre and music performances, conferences, and debates promoted in the festival's 28th annual edition.
Specific context and challenge
I was invited by the agency Dentsu Brazil to design and develop a generative system with which festival attendees and the online audience could interact to have a unique queer flag built for them. The motto was to go beyond the diversity flags that already exist and provide users with a unique piece of generative motion art to see, download, and share with their community.
In this project, I acted as a generative designer and developer: I programmed the generative system in p5.js (JavaScript) and used HTML+CSS at some points. While the scope of my role will become clear in the next paragraphs, it is worth saying that the interactive pages, as well as the logos, typefaces, and other auxiliary graphic elements, were designed by the agency Dentsu.
Process
1 - Briefing and first conceptual explorations;
2 - Scoping and definition of roadmap;
3 - Software experimentation and low-fidelity prototyping;
4 - Execution and checkpoints for validation;
5 - Final adjustments;
6 - Release!
1 - Briefing and first conceptual explorations
We began talking about the team's intentions early in the process. It was not the first time the agency Dentsu Brazil had created the visual identity and advertising pieces for an edition of the festival; this time, however, they wanted to use creative technology (in this case, generative art/design) to strengthen the conceptual propositions of a diversity-focused event. These intentions were captured in the short briefing below:
"We want to expand the symbolic reach of the flags that exist today in order to recognize the value of all parts of the community as a whole and as individuals.
It will work as an intersection of the colors of the flags that represent each existence, each person. One that will embrace and unite in a new and unprecedented way all the colors that represent one's sexual orientation, gender identity, or skin color."
After discussing how we would capture user input to feed the generative system, we agreed that users would be invited to go through a flow, answering simple questions. More specifically, they were expected to pick: one color of the traditional rainbow flag; the sexual orientation that best described them; their gender identity; and their ethnicity. Lastly, we also asked for their name to best personalise the experience.
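These answers become the seed of the generative system. A minimal sketch of how they can be collected and validated in plain JavaScript (the field names and the validation rule are my illustration, not the production schema):

```javascript
// Hypothetical shape of the data collected by the form.
const RAINBOW_COLORS = ["red", "orange", "yellow", "green", "blue", "violet"];

function collectAnswers({ rainbowColor, orientation, genderIdentity, ethnicity, name }) {
  // Guard against options that the generative system cannot map to a color.
  if (!RAINBOW_COLORS.includes(rainbowColor)) {
    throw new Error(`Unknown rainbow color: ${rainbowColor}`);
  }
  // The returned object seeds the flag generation.
  return { rainbowColor, orientation, genderIdentity, ethnicity, name: name.trim() };
}

const seed = collectAnswers({
  rainbowColor: "violet",
  orientation: "bisexual",
  genderIdentity: "non-binary",
  ethnicity: "parda",
  name: "  Ana ",
});
console.log(seed.name); // "Ana"
```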
I began playing around with the idea of mixing all these attributes together to build something new, while making it clear that the original aspects were still there.
The picture below exemplifies this thought.
2 - Scoping and definition of roadmap
After initial concept validation, we discussed the technical aspects of the intervention. Since the event took place across multiple venues in São Paulo, we opted for a mixed approach. We planned to have digital totems at the event venues for attendees to create flags and a web app for remote fans to do the same.
To enhance the festival experience, we intended to let attendees download their generated flags to their mobile phones. However, due to technical and time constraints, we decided to handle everything on the client side (browser), which made cross-platform sharing a bit more complex.
I created low-fidelity flows to outline user interaction and data input. Unfortunately, we couldn't implement a flag gallery on the totems due to the absence of a database and backend support.
Flows and other technical discussions took place at an early stage of the project.
Since I had only one month to take the intervention from idea to product, I proposed the general schedule below, comprising specific moments and checkpoints for partial validation.
3 - Software experimentation and low-fidelity prototyping
After initial definitions, I started experimenting with software to suggest an artistic direction for the flag. The agency had some key concepts in mind, like the rainbow flag, gender identity, sexual orientation, ethnicity, and the user's name. We aimed to create a visually appealing flag while clearly showcasing that these user-selected aspects were integral to the art.
I began collecting general references of generative design, especially as applied to visual identities and advertising pieces, and also set up the general programming environment. After some experiments with Processing and openFrameworks, which I'm used to coding in, I decided to use p5.js, a JavaScript library for creative coding. This decision was influenced by the short timeline and the lack of back-end support, which favoured a tool already built for web browsers and without much dependency on external libraries.
During initial software experiments and meetings to define our visual direction, we decided on a fluid, organic feel with dynamic shapes, influenced by the users' chosen colors. I also wanted to experiment with text overlays to give the generated art a "poster-like" quality.
I ran some experiments for shape generation, exploring different themes and systems, e.g. nature-inspired behaviour like wind, or random generation of shapes and textures. As an example of this process, I publish many personal experiments like these on my Instagram account. I selected the concepts I found most visually interesting and took them to an alignment meeting with the agency. In the end, based on their feedback, I decided to rely on glitch behaviour to implement a flag-like movement. My general idea was to program the foundation of the movement and then play with textures and different colors on top of it, depending on what the person selected when filling in the form.
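A toy version of that glitch-plus-wave foundation can be sketched as a pure function: each horizontal band of the flag is shifted by a smooth wave, and a few random bands get an extra hard jump. All parameter names and values here are illustrative, not taken from the production sketch:

```javascript
// Per-row horizontal displacement for a flag-like glitch motion.
// `t` advances each frame; `rng` is injectable so the behaviour can be tested.
function glitchOffsets(rows, t, { amplitude = 20, glitchChance = 0.1, glitchJump = 60, rng = Math.random } = {}) {
  const offsets = [];
  for (let y = 0; y < rows; y++) {
    // Smooth sine wave gives the cloth-like ripple of a flag...
    let dx = amplitude * Math.sin(t + y * 0.15);
    // ...and occasional hard jumps give the glitch aesthetic.
    if (rng() < glitchChance) dx += (rng() < 0.5 ? -1 : 1) * glitchJump;
    offsets.push(dx);
  }
  return offsets;
}

// In a p5.js draw() loop, each band of pixels would be drawn shifted by
// offsets[y], recomputed every frame with something like t = frameCount * 0.05.
const o = glitchOffsets(120, 1.0, { rng: () => 0.9 }); // rng stubbed high: no glitches
console.log(o.length); // 120
```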
The video below shows one of the first experiments with the background texture and with color variation in the foreground.
After aligning with the agency on the general look of the background shape, I also started experimenting with styles for the text overlay. This part of the process went through many iterations in Figma and in code.
The video below shows a glimpse of this process:
4 - Execution and checkpoints for validation
After another partial validation of the general direction, I jumped to refining the synthesis program. I also needed to build some of the functions that would allow the operation and sharing of the generative flag.
I programmed auxiliary libraries in p5.js to do that, and also relied on some external plug-ins to synthesise the video. The system works as follows:
Like all p5.js sketches, the core of the program runs in the file sketch.js. When the user lands on the page, the library interaction_lib.js is called to build the user interface and the form the user will interact with. As the user navigates through the form, their options are saved into an array that feeds color_handling_lib.js, another library I created, which transforms the input into the color logic applied to the flag. After triggering the synthesis of the flag, users are taken to the main screen, where the generative video is displayed. After a few seconds, the overlay is drawn automatically and users are presented with the options to download the generative flag as a video or restart the application. If they choose to download the video, sharing_lib.js is called to draw the "exporting" page and load the necessary codecs and external libraries. Users are then taken back to the main page where the flag is running.
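To illustrate the role color_handling_lib.js plays in that flow, here is a minimal sketch of mapping saved form answers onto an ordered palette for the flag. The hex values and lookup tables are my assumptions for illustration, not the production ones:

```javascript
// Illustrative lookup tables: one base color per rainbow pick, plus the
// colors of a flag matching the chosen orientation (only two shown here).
const BASE = { red: "#e40303", orange: "#ff8c00", yellow: "#ffed00",
               green: "#008026", blue: "#004dff", violet: "#750787" };
const ORIENTATION = { bisexual: ["#d60270", "#9b4f96", "#0038a8"],
                      lesbian:  ["#d52d00", "#ffffff", "#a30262"] };

// Transform the user's answers into an ordered palette, one color per band,
// which the drawing loop would then cycle through.
function buildPalette(answers) {
  const palette = [BASE[answers.rainbowColor] ?? "#ffffff"];
  palette.push(...(ORIENTATION[answers.orientation] ?? []));
  return palette;
}

const palette = buildPalette({ rainbowColor: "blue", orientation: "bisexual" });
console.log(palette.length); // 4
```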
At this stage, I began working on the user interface's visual elements, using designs provided by the agency's graphic designers. We settled on a straightforward step-by-step form to gather user input. Additionally, we incorporated visuals related to the festival, as discussed in the next section (video).
5 - Final adjustments
While refining the mobile app, we also had to address the challenge of providing the same experience on the digital totems at festival venues. This required implementing a flag gallery and a feature for users to send their flags to their phones after creating them on the totem.
However, discussions with the agency and festival organizers revealed that we couldn't depend on an internet connection, which posed a challenge. As a result, we decided to scrap the gallery idea, but I found a workaround for attendees to access their newly created flags on their phones: a QR code.
By relying on an external library for QR code generation, I could build personalised URLs that encoded the selections made by the attendees who created their flags at the event; every URL was unique. I then changed the mobile application to initiate differently when triggered by the QR code: when accessed this way, the application would parse the URL and build the generative flag directly, taking users to the flag screen without asking them to go through the form again.
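One way such personalised URLs can work is by encoding the answers as query parameters, which the app checks on load. The base URL and parameter names below are illustrative; I don't reproduce the production format:

```javascript
// Build a unique URL carrying the attendee's selections (the QR code would
// simply encode this string).
function buildFlagURL(base, answers) {
  const url = new URL(base);
  for (const [key, value] of Object.entries(answers)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

// On load, the app checks for these parameters; if present, it skips the
// form and renders the flag straight away.
function parseFlagURL(href) {
  const params = new URL(href).searchParams;
  if (!params.has("color")) return null; // normal flow: show the form
  return Object.fromEntries(params.entries());
}

const link = buildFlagURL("https://example.com/flag", { color: "green", name: "Ana" });
const parsed = parseFlagURL(link);
console.log(parsed.name); // "Ana"
```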
Below, there's a video showing the complete flow users would go through when using the mobile application.
6 - Release!
During the festival, attendees used the totems to create their flags and share them on their social media profiles. The video below showcases a test session with the totem being used on-site. Additionally, the board at the bottom of the page captures various reactions, advertising content, and flags shared by festival attendees.