hi there. I'm maya. currently studying computer science x media studies at pomona college in claremont, california. fascinated by the intersection of art & technology. continually exploring the world of creative code.
This sketch was made as part of the Diversity with Code and Art series created by Chelly Jin.
The first series focused on artwork created by Asian women and gender non-conforming artists, coders, and designers. Each person's design was displayed for two weeks on the p5.js homepage.
View the live sketch here.
Tools: p5.js, Tone.js
Working on the Experience Centers team at Google, another Engineering Practicum intern and I were tasked with creating an exhibit for the Holodeck space at the Google Partner Plex that used the Kinect sensors on the walls and the YouTube API. Using the design thinking process to brainstorm, we decided to create this immersive music visualization experience, which shows off YouTube and Google's technology.
The exhibit is configurable: users can type in up to six countries of their choosing and see the whole exhibit change based on those regions' data. The color palette of each music visualization is also chosen from the dominant colors in the video's thumbnail, so each visualization's aesthetic matches that of the music video it surrounds.
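For illustration, here is a minimal sketch of one way to pull a palette of dominant colors from a thumbnail: quantize each pixel's channels into coarse buckets, count the buckets, and take the most common ones. The function name and bucket size are my own choices, not details of the exhibit's actual extraction method.

```python
from collections import Counter

def dominant_colors(pixels, n=3, bucket=32):
    """Pick the n most common colors from a list of (r, g, b) pixels.

    Each channel is quantized into buckets of size `bucket` so that
    near-identical shades are counted together.
    """
    quantized = Counter(
        (r // bucket, g // bucket, b // bucket) for r, g, b in pixels
    )
    # Map each winning bucket back to a representative color at its center.
    return [
        (r * bucket + bucket // 2,
         g * bucket + bucket // 2,
         b * bucket + bucket // 2)
        for (r, g, b), _ in quantized.most_common(n)
    ]

# A mostly-red "thumbnail" with a couple of blue pixels:
thumb = [(250, 10, 10)] * 8 + [(10, 10, 250)] * 2
palette = dominant_colors(thumb, n=2)  # red bucket first, then blue
```

A real pipeline would run this over a downscaled thumbnail image; the counting idea is the same.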
A typography assignment for my 2D Design class. We were prompted to represent a four-letter word in the physical realm and create it using materials relating in some way to the word's denotation. Though asked to photograph our work, I requested to produce a video instead, since I wanted to focus on the word "move."
With the help of two dancers wearing all black, I recreated each letter of the word using the shapes of their bodies and filmed a short video clip of them against a white wall. After placing the clips side by side, I filtered the videos using OpenCV's inRange function so that the pixels making up the dancers' bodies became white and the background became black. This piece is meant to bring the word "move" to life and demonstrate that a font can come in all forms.
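OpenCV's inRange builds a binary mask: a pixel passes only if every channel falls between a lower and an upper bound. A tiny pure-Python sketch of that thresholding (the piece itself used the OpenCV library for Processing; the pixel values below are made up for illustration):

```python
def in_range(pixels, lower, upper):
    """Binary mask in the spirit of OpenCV's inRange: a pixel becomes 255
    (white) only if every channel lies within [lower, upper], else 0 (black)."""
    lo_r, lo_g, lo_b = lower
    hi_r, hi_g, hi_b = upper
    return [
        255 if lo_r <= r <= hi_r and lo_g <= g <= hi_g and lo_b <= b <= hi_b
        else 0
        for (r, g, b) in pixels
    ]

# Dancers in black against a white wall: selecting the dark pixels turns
# the bodies white (255) and leaves the bright background black (0).
frame = [(12, 10, 9), (240, 238, 241), (30, 28, 25)]
mask = in_range(frame, (0, 0, 0), (60, 60, 60))
```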
Tools: Processing, OpenCV
For the past couple of years, I have been extremely interested in food politics and its relation to sustainability. After researching environmental, ethical, and health statistics related to eating, I was inspired to go vegan and cut animal products out of my life in hopes of promoting a healthier planet. I have often found it difficult to discuss these issues with others because many people feel uncomfortable or even offended, so I was quite excited to have the opportunity to express the severity of food's impact on our planet through this final project on water.
With this piece, my partner Colin & I intended to convey a fact surprisingly unknown to many: the production of many commonly eaten foods, such as beef, chicken, and milk, requires a ridiculous amount of water. We chose to create a data visualization using plastic water bottles. Not only do the bottles represent a commonly known amount of water, but plastic was also an important theme that we covered in class. After playing with ideas of scrolling, appearing, etc., we eventually decided that showing the bottles falling from the sky would be the most effective graphic. As an additional reference, we added a to-scale image of a person standing next to a water bottle. As the animation plays, the water bottle grows to represent, relative to the human standing beside it, the volume of water necessary to produce a certain food. The juxtaposition of these two visualizations aims to show our audience the great consequence of their choices, especially here in California given the drought.
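The bottle count behind the falling-bottle graphic is just unit conversion. A quick sketch, assuming a standard 16.9 fl oz single-serving bottle and a placeholder water volume (not one of the statistics used in the piece):

```python
import math

OZ_PER_GALLON = 128   # US fluid ounces in a gallon
BOTTLE_OZ = 16.9      # a standard single-serving water bottle

def bottles_needed(gallons):
    """Number of 16.9 oz bottles needed to hold the given gallons, rounded up."""
    return math.ceil(gallons * OZ_PER_GALLON / BOTTLE_OZ)

count = bottles_needed(100)  # placeholder volume, not a real food statistic
```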
The physical aspect of our piece is meant to connect these facts and this concept to one's reality. Four plates of "food" rest on four red Solo cups (again bringing back the plastic motif). Distance sensors powered by an Arduino detect when a plate is picked up and tell the program (written in Processing) which sound file and how many bottles to use. We structured it so that each person must pick up a plate in order to trigger the visualization. We hope that performing this action creates a muscle-to-mind connection that makes people think about our piece the next time they reach for their food.
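The selection logic is simple: each plate sits over its own sensor, and lifting a plate makes that sensor's distance reading jump. A rough Python sketch of the idea (the installation itself ran in Processing, reading the Arduino over Firmata; the threshold, file names, and bottle counts below are placeholders, not the figures we used):

```python
PLATES = {  # hypothetical mapping: plate index -> (sound file, bottle count)
    0: ("beef.wav", 120),
    1: ("chicken.wav", 40),
    2: ("milk.wav", 60),
    3: ("vegetables.wav", 5),
}

LIFT_THRESHOLD_CM = 15  # beyond this, we assume the plate has been picked up

def chosen_plate(readings_cm):
    """Return the index of the first lifted plate, or None if all are resting."""
    for i, distance in enumerate(readings_cm):
        if distance > LIFT_THRESHOLD_CM:
            return i
    return None

# Sensor 2 reads far (42 cm) because its plate was lifted off the cup:
plate = chosen_plate([8, 9, 42, 8])
sound, bottles = PLATES[plate]
```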
Tools: Processing, Arduino, Firmata
Created with my team at the Claremont Colleges' biannual 5C Hackathon, Heart (Hear Art) aims to make music accessible and enjoyable for the deaf/hard of hearing and hearing communities together. We used the Spotify Web API to access Spotify's music library and p5.js to make the visualizations. Anyone can simply log in with their Spotify or Facebook account, search for a song, then watch, listen, and jam. In the future, we'd love to make this an open-source project and build out a library of visualizations collected from artists and technologists all over the world.
I built this machine to simulate the practice of plucking petals off of a flower while reciting "he loves me, he loves me not" (though I chose to use "they" for gender inclusivity). Normally, the couplet is repeated until all of the petals are dropped, and whichever statement is spoken as the last petal is plucked is deemed true. In this work, I chose to oscillate infinitely between the two statements, representing many people's train of thought as they pursue love: going round and round in their own heads, trying to figure out someone else's true feelings. But love is a natural human emotion, and such sentiment cannot be captured or defined by a logical program. This rendition of an old game emphasizes the foolishness of the petal plucking and the daftness of the mind. Yet the machine's performance evokes a serious feeling of familiarity for anyone who has been caught endlessly wondering if their feelings are shared.
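In code, the endless alternation amounts to almost nothing, which is part of the point. A sketch of the loop's core in Python (the machine itself was driven by Processing and an Arduino):

```python
import itertools

# The machine never reaches a "last petal": the two statements
# simply alternate forever.
statements = itertools.cycle(["they love me", "they love me not"])

# Pulling any number of recitations from the cycle never resolves anything:
first_six = [next(statements) for _ in range(6)]
```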
Tools: Processing, Arduino, Firmata
Inspired by recent episodes of police brutality, the Black Lives Matter movement, and, more specifically, the "Hands Up, Don't Shoot" slogan adopted by protesters responding to the shooting of Michael Brown in Ferguson, Missouri.
This game-like interaction forces the user to take on a persona commonly played in video games today: the one with the gun. But here, rather than shooting back or attempting to defend itself, the target simply raises its hands when the weapon is aimed at it. This moment is crucial, for it causes the "player" to consider their next move. Many people will be curious and click the mouse, causing the gun to fire and the game to end.
I hope this piece sparks some reflection on the role that guns, games, and power dynamics play in real-life interactions, both in the recent past and further back in history.
Jack Lally is a surfer/film photographer based in New York City. His site features selected shots captured from his explorations in NYC, Costa Rica, Fire Island, California, and more.
Chella Man is a 16-year-old deaf artist. Her site features images of her work in various mediums and a form for purchasing items from her clothing line, Chella Man Apparel.