Verbal Web Canvas
// INSTRUCTIONS
text <- input;
CHARACTERS <- [' ']; // separator set; can be modified
words <- text.split_by(CHARACTERS);
for_each(word in words) {
    results <- fetch_images(query: word);
    results <- results.filter(has_minimal_text); // drop results dominated by text
    x <- text.count_occurrences_so_far(word); // how many times this word has appeared
    canvas.insert(results.select_index(N: x - 1)); // repeated words get distinct images
}
return canvas;
Instance Input:
"Here's to the crazy ones. The misfits. The rebels. The troublemakers…"
Instance Output:
[image sequence generated from the quote above]
Verbal Web Canvas was inspired by mosaic artworks that explore the "part vs. whole" concept through popular media: fan art depicting public figures built from the things associated with them (e.g. Steve Jobs composed of Apple products), and book covers with novel executions of the idea, such as "A Billion Wicked Thoughts", which, besides its interesting insights into users' lifestyles on the web, forms a provocative whole composed of neutral faces collaged together. While researching interactive and generative algorithms and reflecting on our access to the stream of content on the web, I saw an opportunity to combine the two concepts into a novel procedure whose generation process could reveal something significant about our digital perception. In particular, it is our current digital environment that shapes how we encode and decode meaning, and illustrating this process is an exercise worth exploring to better understand our mentality.

The algorithm above takes a text and replaces every word with a non-trivial image search result, with a few considerations to reduce the redundancy of pictures. While it is ideally implementable as an interactive service, it is also worth highlighting the process that creates such sequences of images: not exactly instruction art, but a cognitive exercise in what it takes to construct a visual equivalent of our verbal content. Cognitive science places emphasis on the prototype model of thinking, and we can ask: if that model were purely visual, and if we relied only on what is publicly available on the web for recall, would we understand the same concept, or would we be distracted by the chaotic impression each out-of-context image gives? A more critical question: aren't we already misreading the verbal input we receive because of the impression each image popping into our head creates? Even for abstract concepts that seem undepictable, it is fair to assume that a corner of our mind stores a representation of them based on previous experiences or observations.
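The procedure can be sketched in Python. This is a minimal illustration, not a production implementation: `fetch_images` stands in for a real image-search API (which would also apply the "minimal text" filter), and the stub used in the usage example is hypothetical. What the sketch does capture is the occurrence-counting step, which gives each repetition of a word a different image.

```python
from collections import Counter
from typing import Callable, List


def build_canvas(text: str, fetch_images: Callable[[str], List[str]]) -> List[str]:
    """Replace each word of `text` with one image search result.

    `fetch_images` is assumed to map a query word to an ordered list of
    candidate results (e.g. URLs) already filtered for minimal embedded
    text; here it is a placeholder for a real image-search backend.
    """
    words = text.split()  # split on whitespace; the separator set is configurable
    seen = Counter()      # occurrences of each word so far
    canvas = []
    for word in words:
        results = fetch_images(word)
        seen[word] += 1
        # the Nth occurrence of a word takes the Nth result, so repeated
        # words get distinct images; wrap around if results run out
        index = (seen[word] - 1) % len(results)
        canvas.append(results[index])
    return canvas
```

With a toy stub that returns labeled placeholders, a repeated word like "the" picks up a different result on its second occurrence:

```python
stub = lambda w: [f"{w}-img-{i}" for i in range(3)]
build_canvas("the crazy the", stub)
# → ["the-img-0", "crazy-img-0", "the-img-1"]
```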

We are fairly bound by the resolution of content when living a digital life; we cannot distinguish perspectives with differing or even opposing directions once they are downscaled enough and we are limited to visual perception. As an example, Martin Luther King's "I Have a Dream", Marc Andreessen's Techno-Optimist Manifesto, and the Conservative Party of Canada's political manifesto would have virtually the same thumbnail (a white box with a few black lines); it is the verbal content that lets us distinguish their perspectives, which signals the challenge of separating what we perceive visually from what we perceive verbally. On the same note, there is a natural confusion in most of the image sequences the algorithm would produce. Not just because the instance input includes the word "misfits" and a series of misfitting images would fit it well (oh, the irony), but because the verbal messages we produce form a coherent chain of ideas, and trying to replicate that with word-by-word queries bypasses the cohesion step of content construction.