The End of Privacy

In a word, Google Glass. Ok, that’s two words, but let’s move on. Right now, today, you can buy what appear to be ordinary glasses (for vision correction) with a little extra hardware on the temple. This device feeds live video to wherever you want it: a mobile device in your pocket, or a laptop, desktop, or embedded machine. Wherever it goes, that video is available if you so allow. You could record nearly all of your waking life in HD, streaming from your glasses to your mobile device to a web service. Once in the cloud, you can share it with anyone you want. You can even publish it for all to see. So that’s you.

What about the other seven billion people on the planet? Today, you can buy this and use it wherever you want (except in public bathrooms, where that is fortunately a felony, at least in America). If one person can buy it, so will thousands. What do we do when YouTube, Second Life, and TourWrist intersect? How do you protect your privacy when you don’t control the video stream that recorded you? All the signs indicate we’re at the foot of a massive mountain of innovation, staring up at Kurzweil’s notion of a social/technological singularity. In a year, your phone, laptop, television, and car will be obsolete. Then the cycle repeats, exponentially faster, until at some point we all experience a shift in collective perspective. We will no longer consider our own privacy as something that can be threatened. We will be open, or at least as open as we want to be.

Mechanisms for authenticating and authorizing access have exploded in the last five years. Using these systems, people are sharing text, photos, and video all over the world, but only with those to whom they wish to offer access. There will always be reasons to keep secrets, but we will soon see people exposing more and more of their information as freely available public data. Despite some protests, Facebook’s standard policy is to prefer proliferation of information over compartmentalization. Facebook has been a lightning rod for social and ethical commentary on privacy. They have weathered the storm, and as much as I hate myself for saying it, Zuck will lead the moral majority in twenty years. And beyond that, even Kurzweil can’t predict…

Texting Without Typing

Two years ago, I was dating a lovely young lady who was working her way toward medical school. She had a part-time job in a lab, assisting a post-doc with research in electroencephalography: measuring and analyzing brain waves. One day, I was invited to tour the lab and experience some of their work first-hand. I say “experience” instead of “see” or “observe” because it was very much a hands-on experience, and a fascinating one at that. She sat me down in a chair in front of a computer displaying a grid of letters and showed me what looked like a lunch-lady hair net designed by a steampunk technophile.

“This is what we’ll use to read your mind,” she said. “Just kidding. It can’t do that quite yet.”

The net had a bunch of electrodes woven into it, each with its own wire. The wires bundled together at the back, like a ponytail, and snaked away to a data acquisition unit with what looked like at least 64 channels. Once the hair net was strapped on, she started squirting electrolytic gel into each electrode. It’s the same gel ultrasound technicians use, so anyone who’s ever had an ultrasound knows a little of what I’m talking about: it’s cold, slippery, and slightly sticky. It also has special electrical properties that help reduce noise in the signals.

“Ok, now stare at the grid of letters. When we start, you’ll see cursors advancing through the grid. Just focus on one letter at a time and be patient,” she said. “Here we go.”

I watched as two cursors stepped quickly through the grid while I thought of the letter D. After about ten seconds, the test completed, and the letter D appeared in the output field.

“Oh, that’s fucking badass,” I said. We continued.

As the tests proceeded, the scientist in me wanted to start running experiments to find the weaknesses in the system. It wasn’t flawless, but I was amazed at its ability to guess correctly most of the time. Guess is the wrong word. The machine was interpreting electrical activity in my brain and using that information to determine, with impressive accuracy, something that seems impossible – my thoughts.

“Now for the fun part,” she said, as she switched from one app to another on the laptop. “This one lets you free-form the letters. We needed to calibrate it for your brain with the other one. Try to spell a sentence.”

A few minutes later, I had spelled “this is cool” on the display, without touching the keyboard or saying anything out loud. With nothing but this strange looking hair net, stuck to my scalp with cold sticky gel, I had written a sentence.
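
For the curious, here’s my best guess at what was going on under the hood, sketched in Python. I’m assuming the setup was something like the classic row/column speller: each row and column of the letter grid flashes in turn, the rig scores how strongly the brain responds to each flash, and the attended letter sits at the intersection of the best-scoring row and column. The grid layout, the pick_letter helper, and the scores below are all invented for illustration; they are not from the lab’s actual software.

```python
# Hypothetical row/column speller selection. The scores would normally be
# derived from the evoked responses on the EEG electrodes; here they are
# made-up numbers chosen so the example selects the letter "D".

GRID = [
    "ABCDEF",
    "GHIJKL",
    "MNOPQR",
    "STUVWX",
    "YZ1234",
    "56789_",
]

def pick_letter(row_scores, col_scores):
    """Return the letter at the intersection of the strongest row and column."""
    best_row = max(range(len(row_scores)), key=row_scores.__getitem__)
    best_col = max(range(len(col_scores)), key=col_scores.__getitem__)
    return GRID[best_row][best_col]

# Pretend the flashes containing "D" (row 0, column 3) evoked the biggest responses.
row_scores = [0.91, 0.22, 0.18, 0.25, 0.20, 0.17]
col_scores = [0.15, 0.21, 0.19, 0.88, 0.23, 0.16]
print(pick_letter(row_scores, col_scores))  # -> D
```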

Fast forward to today. Since that experience, someone has used similar technology to draft a tweet. It won’t be long before researchers find a way to interact with the brain without a sticky hair net wired to a laptop. Eventually, we’ll be able to message each other with a thought. To me, this represents an enormous challenge for the tool makers. Someone will be given the herculean task of designing a tool that lets people share thoughts, message each other, and (most importantly) filter out the garbage. If you think Twitter or Facebook has a low signal-to-noise ratio now, imagine how much worse it gets when the noise is delivered straight into your brain. The privacy filters Facebook has integrated today will not be sufficient to control that level of connectedness.

I don’t believe we can fully imagine a system capable of managing these issues. Mostly, I believe it will be an emergent behavior, not something designed by a master architect. This is the realm of tools catching up to the creativity and innovation of average citizens, some of whom spontaneously start trends like the retweet, the mention, the hashtag, the overheard, and others yet to be discovered. It is also a social construct, governed by rules that transcend technology. Social pressures will undoubtedly weigh heavily in the decisions of the collective minds of the community. Drunk tweeting evolves into drunk electrotelepathy, and both are equally embarrassing. As Hollywood has shown us time and time again, though, your brain is a very intimate place, and it’s easy to see how something like this could cross the boundary from harassment into something that has no name yet but could be described as the virtual extension of rape.

As this technology evolves, we must be vigilant in our protection of individual rights and the philosophy of “just because we can doesn’t mean we should.” After all, what does it say about us when we develop technology that lets a man ask his wife, with his mind, to grab him a beer instead of encouraging him to get off his lazy ass and get his own damn beer? Still, my creative mind races with potentially beneficial uses for this tech, but that is a story for another time.

Location Tracking Is Not Spying

Yesterday’s Twitter stream was punctuated with Big Brother sentiment about the storage of non-personal location data on the mobile device and in the backups stored on the computer used to sync it. Much of the discussion criticized Apple for storing the data and for not being clear about it in their user agreement. There are a few things I find disappointing about this discussion.

1. Access to Raw Data

While it’s true that the information is stored on the device, it is nearly impossible for any malicious party to gain access to it. First, there’s the concern about accessing it directly on the device. An app developer could conceivably read the file, but I’d bet the farm that Apple will reject any app that attempts to open it. That basically means the only way to access the information is to build your own app, install it on your own device, and subsequently fuck yourself over. That potential exists regardless of technology. You could just as easily push yourself in front of a bus, mail yourself some anthrax, etc. As the expression goes, “you can’t fix stupid.”

Another way to gain access is to find the file in the device backup made whenever the device syncs with a computer. But that means the malicious party already has access to your computer, which probably contains substantially more sensitive information (social security number, bank account details, passwords, etc.), so the location data from your mobile is the least of your concerns. The only other way in is to dig into a Time Machine backup, and if someone has that level of access, you’re fucked anyway.

2. Precision

The raw location data is not a precise, accurate, time-stamped latitude/longitude coordinate of your device. It is instead a log of weight values indicating the probability that the device was near a given point on a grid (presumably spaced in either minutes or seconds of latitude/longitude). As a point of reference for those who are not experts in geospatial terminology, a one-second grid has points about every 31 m (roughly 100 ft), and the east-west spacing only shrinks as you move away from the equator. Given the precision information available to legitimate developers and my personal experience with Core Location, you’re lucky if you can resolve a location to within 3 seconds (about 100 m / 330 ft).
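
The arithmetic behind those figures is easy to check. Here’s a quick back-of-the-envelope sketch in Python; grid_spacing_m is just an illustrative helper built on the standard ~111 km-per-degree-of-latitude approximation, not anything pulled from the actual log format.

```python
import math

METERS_PER_DEGREE_LAT = 111320.0  # standard approximation: ~111.32 km per degree

def grid_spacing_m(arc_seconds, latitude_deg=0.0):
    """Approximate ground distance spanned by a grid step of `arc_seconds`.
    Returns (north-south meters, east-west meters); the east-west spacing
    shrinks with the cosine of latitude."""
    degrees = arc_seconds / 3600.0
    north_south = degrees * METERS_PER_DEGREE_LAT
    east_west = north_south * math.cos(math.radians(latitude_deg))
    return north_south, east_west

print(grid_spacing_m(1))      # ~ (30.9, 30.9) m at the equator -- the "31 m" figure
print(grid_spacing_m(3, 28))  # a 3-second cell at 28 degrees N: roughly 93 m x 82 m
```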

Put simply, it is not possible to derive from this information precisely where a device was, is, or will be. It would not be possible, for example, to determine whether your device was at the Gap or Banana Republic. It might be possible to say you were at the mall, but that’s about it.

3. Identity

Remember that the raw location data shows only the possible location of the device, not its owner. That alone is not enough information to be dangerous. While the device does have a unique identifier, it would not be possible to determine with any certainty whether the device was in the owner’s possession. I realize this is the weakest of these four points, as there are a few other things we might use to increase our certainty that the device is indeed being carried by its owner. Twitter posts made from the device, for example, could provide supporting evidence. Even that, though, is somewhat dubious.

4. Self-Reporting

Let’s not forget that in the age of Foursquare, Gowalla, Twitter, Facebook, and all the other social networks that allow (and even encourage) location tagging, we are our own Big Brother. Joe DeSetto wrote about this last year in his post titled Social Location Is Creepy. With the growing trend of publishing not just what we’re doing but where we’re doing it, I’m baffled by the outrage so many people have exhibited over this latest Big Brother meme.

Your mobile device may contain more information about your location than you realized, but it’s not enough to derive anything meaningful, nor is it accessible to anyone but you. More importantly, it doesn’t fucking matter when you’re announcing your location (and personal preference) to the world at large by checking in at Mons Venus. And if you’re the mayor, you’re telling the world so much more than they could ever glean from mining your location log.