2014 - 2018

An ongoing video series exploring the Google Deep Dream neural network through generative and computational methods, using public domain and found video footage. The series examines the sense of vision and how humans and machines process images frame by frame, drawing on Google's neural networks and open-source creative coding platforms such as openFrameworks. It served as an early stage of figurative and material experimentation in the early days of open-source machine learning tools and neural networks, when everything had to be done by hand and processing times spanned days. This gave me a solid understanding of how this emerging medium worked.

Stills taken from old TV commercials and scientific infographics, processed through the Google Deep Dream engine. 2014.

The video on the left is a single image run through the Google Deep Dream engine for multiple cycles, then reassembled frame by frame, showing how the image morphs under repeated machine processing.
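The feedback loop described above, repeatedly feeding an image back through the network and keeping each intermediate result as one video frame, can be sketched as follows. This is a minimal illustration of the loop structure only: `dream_step` is a hypothetical stand-in for a real Deep Dream pass (which would run gradient ascent on a convolutional network's activations); here it merely perturbs pixel values so the sketch is self-contained and runnable.

```python
import random

random.seed(0)  # deterministic for illustration only

def dream_step(image, strength=8):
    """Hypothetical stand-in for one Deep Dream pass.
    A real pass would amplify a CNN's activations via gradient ascent;
    here we just nudge pixel values, clamped to the 0-255 range."""
    return [min(255, max(0, px + random.randint(-strength, strength)))
            for px in image]

def dream_sequence(image, cycles):
    """Feed the image back through the 'network' `cycles` times,
    keeping every intermediate result as one video frame."""
    frames = [image]
    for _ in range(cycles):
        image = dream_step(image)
        frames.append(image)
    return frames

# A tiny grayscale "image" run through 30 cycles yields 31 frames,
# one per stage of the morphing sequence.
frames = dream_sequence([128] * 64, cycles=30)
```

Each frame differs slightly from the last, which is what produces the gradual morphology when the frames are played back as video.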

Due to limited access to my data during the Covid-19 pandemic, additional images and experiments from this series will be uploaded at a later time.