AI is infiltrating our lives, in much the same way mobile did before it. It’s being fueled by the massive amounts of data we humans are generating from our phones, and it’s begun to radically change the way we interact with our machines.
For instance, when you upload a photo to Facebook, it runs through DeepFace, Facebook’s face-recognition technology. It scans your photo for any faces it may recognize, using its knowledge of previously tagged uploads to tell people apart.
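To make the idea concrete, here’s a toy sketch of how embedding-based recognition works in general: a network reduces each face to a vector, and a new face is matched to whichever tagged person’s vector it lands closest to. This is purely illustrative; the data, names, and 128-dimension embeddings below are made up, not Facebook’s actual pipeline.

```python
import numpy as np

def cosine_similarity(a, b):
    # how aligned two embedding vectors are (1.0 = identical direction)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
# pretend embeddings learned from previously tagged photos
tagged = {"me": rng.normal(size=128), "friend": rng.normal(size=128)}

# a new upload whose embedding lands close to the tagged "me" vector
new_face = tagged["me"] + rng.normal(scale=0.1, size=128)

best = max(tagged, key=lambda name: cosine_similarity(tagged[name], new_face))
print(best)  # → me
```

The more tagged photos the system has of you, the better its estimate of your embedding, which is why even a hundred or so tags is plenty.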
In my case, DeepFace knew the black-and-white photo I uploaded was me. I only have a little over a hundred tagged photos of myself on Facebook, but that’s enough for DeepFace to recognize me.
Windows Hello is Microsoft’s latest “security” feature, which lets you log into your computer using an infrared, Kinect-like web camera. It uses a 3D model and images of your face to ensure that it’s really you in front of your computer, not just a photo of you.
Every iPhone now has the “security” of Apple’s TouchID, a capacitive thumbprint reader and recognizer. Apple has said multiple times that the thumbprint is stored securely on your phone, with no remote access, yet all iPhones were recently remotely rebootable via a single text message.
Why do the largest companies in the world think we want to put all of our biometric data onto their platforms?
What happens when we have our first big biometric data breach, and everyone’s thumbprint, retina scan, and face patterns get leaked? How do we replace all newly insecure biometrics overnight when that happens?
A month ago, Google released a piece of software called Deep Dream. It allowed people to see what the machine learning algorithms were looking for when they recognized things like dogs or faces in images.
If you haven’t seen any of the images, I wrote a guide walking through how it works.
The machine learns from lots of input data. It needed thousands of images, each labeled as a dog, squid, bicycle, and so on, in order to learn what these things look like.
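As a toy illustration of that kind of supervised learning, here’s a minimal nearest-centroid classifier that “learns” two labels from synthetic data. The classes, brightness trick, and dimensions are all invented for the example; real image classifiers learn far richer features than a per-pixel average.

```python
import numpy as np

rng = np.random.default_rng(0)
# two fake classes of flattened 8x8 "images": stand-ins for labeled photos,
# where "dogs" happen to be bright and "squids" dark
dogs = rng.normal(0.8, 0.1, size=(50, 64))
squids = rng.normal(0.2, 0.1, size=(50, 64))
X = np.vstack([dogs, squids])
y = np.array([0] * 50 + [1] * 50)  # label 0 = dog, 1 = squid

# "learning" here is just averaging each labeled class: a nearest-centroid classifier
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(img):
    # classify by whichever class average the image is closest to
    return int(np.argmin(np.linalg.norm(centroids - img, axis=1)))

print(predict(rng.normal(0.8, 0.1, size=64)))  # → 0 (a bright image reads as "dog")
```

The key point is the same at any scale: the model is only as good as the labeled data it was fed, which is exactly what makes that data so valuable to the platforms collecting it.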
So platforms like Google, Facebook, and Microsoft are all in unique positions to collect and exploit as much data as possible, looking for novel uses of their massive datasets later.
But more interestingly, the recent release of Deep Dream gives us an opportunity to subvert the machine’s process of discovery: by feeding it images that are exactly what it’s looking for, we can create noise that untrains the machinery from knowing who we are.
Originally, I tried generating raw noise and having it Deep Dreamed by a face-trained neural network (specifically, pool5 of the Age Net from the Caffe Model Zoo). This didn’t work at all. I used OpenCV and a Haar cascade trained on faces to detect when I’d generated a face from the background noise, and got a few images containing multiple faces, but Facebook simply didn’t see the same faces as the Haar cascade.
So I changed tack and just did a simple copy-and-paste job. I used the Haar cascade on a few photos of myself, and copied and pasted multiple versions of my face into the image.
Unfortunately, this didn’t give me the results I was looking for. Instead, most of my faces were being missed by Facebook’s face detection. I’d dream an entire image filled with maybe 30 or 40 copies of my face, and only get one or two faces recognized by Facebook.
Perplexed, I started tiling faces and running multiple levels of dreams. Eventually, I found that the optimal response for tricking Facebook’s DeepFace came from two Deep Dream passes over pool5 of the Age Net, using one non-face background square from the photo as filler. I stumbled onto this when a Haar cascade mistook one of the trees in the background of my photo for a face.
```python
from PIL import Image
import numpy as np
import random
import cv2

cascPath = './haarcascade_frontalface_default.xml'  # our face classifier from OpenCV
classy = cv2.CascadeClassifier(cascPath)

image = cv2.imread('face.jpg', 1)  # supply an image with a face for opencv
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # convert to grey for the haar cascade
faces = classy.detectMultiScale(  # play with these numbers if your face isn't recognized
    gray,
    scaleFactor=1.1,
    minNeighbors=5,
    minSize=(30, 30),
    flags=cv2.cv.CV_HAAR_SCALE_IMAGE  # cv2.CASCADE_SCALE_IMAGE on OpenCV 3+
)
print(len(faces))  # number of faces in the image

img = Image.open('face.jpg')  # the same photo again, as a PIL image for cropping
facesImages = []
for (x, y, w, h) in faces:
    # make an array of all the faces, with a bit of room around them
    facesImages.append(img.crop((x - 10, y - 10, x + w + 10, y + h + 10)))

x = 0
y = 0
i = 0
angles = [0, 45, 90, 180, 270]  # not used now, but you could rotate and cycle through each angle
blankIMG = Image.new("RGB", (1280, 720), "white")  # optimal resolution for me
faceWidth, faceHeight = facesImages[0].size

# tile faces across the blank canvas (you could also place them at random x, y instead)
while y < blankIMG.height:
    if x > blankIMG.width:
        y = y + faceHeight
        x = 0
    if (i % 2) == 0:
        # paste a randomly chosen face
        blankIMG.paste(facesImages[random.randint(0, len(facesImages) - 1)], (x, y))
    else:
        # paste a randomly chosen face, flipped horizontally
        blankIMG.paste(
            facesImages[random.randint(0, len(facesImages) - 1)].transpose(Image.FLIP_LEFT_RIGHT),
            (x, y))
    i += 1
    x = x + faceWidth

blankIMG.save('presuccess.jpg')  # image filled with tessellated faces

# two Deep Dream passes over pool5, using deepdream() and net from Google's notebook
imgnum = np.float32(blankIMG)
frame = deepdream(net, imgnum, end='pool5')
frame = deepdream(net, frame, end='pool5')
Image.fromarray(np.uint8(frame)).save('success.jpg')
```
Finally, I stumbled on the perfect amount of glitch for Facebook to still think a Deep Dreamed version of me was me: the photo you see at the top of this post. When I uploaded it to Facebook, this is what I got:
The idea here is that we can start to steer the AI in a direction of our choosing. Maybe we want the right to be forgotten by Facebook’s machines, or maybe we want to loosen what gets seen as us. Either way, this is the beginning of a tool to steer the conversation of what the machines know about us.
I could see this sort of noise generation being used to throw AI and Big Data off of our personal trails. In the future, we may have AIs covering our tracks for us online, generating noise around our own signal so we can regain a piece of our anonymity.
I’ve posted the code for this article over at GitHub, and I encourage any and all pull requests and ideas. I think using neural networks to trick one another is just beginning, and the AI arms race is about to get very interesting.
Can’t wait to see what you come up with!