Thoughts on the uses of lifelogging, and a categorization of those uses
The past is not dead. In fact, it’s not even past.
The fundamental idea of lifelogging is that computing resources are pervasive and cheap: not just a computer, but I/O peripherals and effectively unlimited storage. We may spend 50% of our waking time with access to a computer (if not actually using it), but there is a profound difference between 50% and 100% - a difference much greater than the mere numerical fact of doubling would suggest.
Instead, we ought to expect order-of-magnitude changes. Cell phones may boost access to 99%, but they miss the crucial 1%. Even so, cell phones have woven their way into the lives of Generations Y and Z: they aren’t just communication tools used occasionally; they have ascended almost to a new sense or a new mental world.
Random thoughts: telecoms know this, and seek to sell our cellphones to us piecemeal; a world parceled up and rented out will bring in quite a bit.
Will cheap social contact mean that ever more time has to be spent on social grooming? If it takes an hour to walk to a friend’s house, they will forgive you for only hanging out once a week. But if it takes 2 seconds to ring them up or text message them…
When we go from 99% to 100%, cell phones can become part of our minds1. Consider: it takes less than a second to move your dominant hand from anywhere to anywhere. (Try it out.) Imagine wearing a heads-up display with perhaps a keyboard on your forearm. Any message from the computer to you will reach your brain in something under a tenth of a second, and you can begin responding in less than a second. This may seem cumbersome, but our very thoughts and memories aren’t much faster! We’ve all experienced moments where it was faster to go to a browser and look up a factoid than to struggle with our memory, working through associations and hoping the factoid will suddenly pop into mind; the fact is, even our short-term or working memory operates on the time scale of fractions of a second to seconds.
Isn’t that amazing? Our programs are as ‘far away’ as our memories! With the right interface, a computer is competitive with our very mind. We’re not even postulating way-out-there Sci-Fi technology like brain-computer interfaces, just normal technology produced commercially right now, which you could go out and buy or assemble yourself.
Hitherto, computers have been best known for replacing work: number crunching, manufacturing, and so on. But as overhead & ‘friction’ shrink, computers can increasingly replace mental processes.
Consider programs like Mnemosyne which implement spaced repetition. We could view this as a crude, high-overhead replacement for an eidetic memory; or we could view it as the computerization of a mental process - imagine keeping the flashcards in a memory palace and ‘walking’ through it every day. Here we have a very simple2 computerized habit replacing a very complex3 mental habit.
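(To make “very simple” concrete: the core of a spaced-repetition scheduler fits in a few lines. The following is a minimal sketch in the spirit of the SM-2 family of algorithms; the constants are illustrative, not Mnemosyne’s actual parameters.)

```python
# Illustrative SM-2-style scheduler; the constants are examples,
# not Mnemosyne's or Anki's actual parameters.
from dataclasses import dataclass

@dataclass
class Card:
    easiness: float = 2.5   # how fast intervals grow
    interval: int = 1       # days until the next review
    repetitions: int = 0    # consecutive successful reviews

def review(card: Card, grade: int) -> Card:
    """Update a card after a review graded 0 (forgot) to 5 (perfect)."""
    if grade < 3:                        # failed: start the card over
        card.repetitions, card.interval = 0, 1
    else:                                # passed: lengthen the interval
        card.repetitions += 1
        if card.repetitions == 1:
            card.interval = 1
        elif card.repetitions == 2:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.easiness)
        # nudge easiness up or down depending on how hard the recall felt
        card.easiness = max(1.3, card.easiness
                            + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    return card

c = Card()
for g in (5, 4, 5):
    c = review(c, g)
    print(c.interval)    # intervals grow: 1, 6, 16 days
```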
Consider this XKCD comic, “I know you’re listening”; it’s funny as a parody of Pascal’s Wager, but it’s also funny because we know we could never really do that (the novelty would wear off too quickly), and the fact that we could never really do it means it’d be all the funnier if someone did that and someone was listening. Or consider the standard advice for people attempting to induce lucid dreams: to periodically check for dream-like events & situations, in the hope that this habit will carry over to when actually in a dream. But who can randomly stop and deliberately ask whether they’re in a dream? When I know I’m awake, I won’t ask; and when I don’t know I’m awake, I won’t think about it…
But a computer can do this! Waking up at random times is trivial for a computer; utilities like cron have been around for decades. We could program all sorts of habits into it: check your shoelaces, see whether you can fly, say aloud one of its canned scripts (like the XKCD one), review a flashcard scheduled for that minute4, get a reminder of upcoming deadlines, etc. We can imagine even more sophisticated ones: the computer could say ‘did you remember to drop your daughter off at the daycare? I noticed my camera picked up an image of her getting into the car but none of her getting out’5. Classic issues like babies sleeping or not sleeping are easier with more data.6 And there are more novel uses like recording everything a child utters to see whether its language skills are above-average for its age or not.7
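As a sketch of how little machinery this takes (the prompts, timing, and use of the common notify-send utility are my own arbitrary choices, not a recommendation): a script like the following could be left running in the background, or launched by cron at login.

```python
#!/usr/bin/env python3
# Random-interval prompter sketch; prompts and timing are arbitrary examples.
import random
import subprocess
import time

PROMPTS = [
    "Are you dreaming? Re-read the nearest piece of text twice.",
    "Check your shoelaces.",
    "Any deadlines in the next 48 hours?",
]

def notify(msg: str) -> None:
    # notify-send is the usual desktop-notification CLI on Linux;
    # fall back to printing if it is unavailable.
    try:
        subprocess.run(["notify-send", "Reminder", msg], check=True)
    except (FileNotFoundError, subprocess.CalledProcessError):
        print(msg)

while True:
    time.sleep(random.uniform(30 * 60, 3 * 60 * 60))  # 30 minutes to 3 hours
    notify(random.choice(PROMPTS))
```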
And this is to say nothing about the possibilities enabled by retrospective review:
“In 2004, Marian Bakermans-Kranenburg, a professor of child and family studies at Leiden University, started carrying a video camera into homes of families whose 1-to-3-year-olds indulged heavily in the oppositional, aggressive, uncooperative, and aggravating behavior that psychologists call ‘externalizing’: whining, screaming, whacking, throwing tantrums and objects, and willfully refusing reasonable requests…
In an intervention her lab had developed, she or another researcher visited each of 120 families 6 times over 8 months; filmed the mother and child in everyday activities, including some requiring obedience or cooperation; and then edited the film into teachable moments to show to the mothers…
The moms, watching the videos, learned to spot cues they’d missed before, or to respond differently to cues they’d seen but had reacted to poorly. Quite a few mothers, for instance, had agreed only reluctantly to read picture books to their fidgety, difficult kids, saying they wouldn’t sit still for it…when these mothers viewed the playback they were “surprised to see how much pleasure it was for the child - and for them.” Most mothers began reading to their children regularly, producing what Bakermans-Kranenburg describes as “a peaceful time that they had dismissed as impossible.”
…A year after the intervention ended, the toddlers who’d received it had reduced their externalizing scores by more than 16%, while a nonintervention control group improved only about 10%…And the mothers’ responses to their children became more positive and constructive.
Few programs change parent-child dynamics so successfully.”8
Unconvinced? Then read the New York Times Magazine’s “The Data-Driven Life” and see the scores of uses people have come up with for even partial, scanty data gathering/lifelogging. There are many fascinating papers in the security literature about what inferences can be drawn from even very noisy or apparently inadequate data - copying keys from ordinary photographs, labeling people in photographs of crowds (obviously very useful in a lifelogging context), scanning fingerprints from 6 feet away or replicating keys from 200 feet away, deanonymizing online data, etc.
The problem with a mental habit is that it either needs to be constantly in mind - and taking up precious mental space - or it is ‘out of sight, out of mind’. Computers could not care less about this.
So we might divide portable computers’ functionality into 2 major categories:
- memory; taken to its extreme, it becomes lifelogging
- supervision
What ‘memory’ means is not always apparent.
A calendar is pretty obviously ‘memory’, but a calendar is relatively simple. There’s a lot of other temporal data one might want: TO-DO lists, for example, augmented with dates and reminders. A passive calendar can’t handle that.
But the reminders can be much more complex than ‘send an email on March 26’. Arbitrarily complex reminders are useful. Consider the phenomenon of iPhone applications aimed at men which track a woman’s menstrual cycle and predict when she is PMSing; this can be very useful for men, as they know when to be extra-thoughtful and forgiving.
Now, what does the iPhone add? Nothing stops men from using spreadsheets to track menstrual cycles, but few did before. Spreadsheets even take most of the work out of it. So why do the men using iPhone applications seem somehow less creepy and more understandable than the men using spreadsheets on their desktops? Because the effort involved is lower, and it is more useful. (One can believe that a low-effort-high-payoff program like an iPhone application is genuine; but a high-effort-low-payoff desktop spreadsheet makes one wonder what else the man is getting out of it, and thoughts turn to undefined sexual perversions.)
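(The computation involved is trivial, which is rather the point; a hypothetical sketch, with invented dates and an invented three-day lead time:)

```python
# Hypothetical sketch of the prediction such an app performs: average the
# gaps between recorded cycle start dates and project the next one forward.
from datetime import date, timedelta

starts = [date(2011, 1, 3), date(2011, 1, 31), date(2011, 3, 2)]  # example data

gaps = [(b - a).days for a, b in zip(starts, starts[1:])]
avg_cycle = sum(gaps) / len(gaps)                 # ~29 days here
next_start = starts[-1] + timedelta(days=round(avg_cycle))
reminder = next_start - timedelta(days=3)         # lead time to be extra thoughtful

print(f"Predicted next start: {next_start}; reminder set for {reminder}")
```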
- Lifelogging resources: https://www.highlightcam.com/ summarizes long video files; intended for surveillance, but excellent for lifeloggers too? Kevin Kelly has some musings & links here: https://kk.org/thetechnium/lifelogging-an/
- What device, specifically? Not commercially available: https://www.lesswrong.com/posts/o6BfKyWQC6yv8G7bc/lifelogging-the-recording-device
TODO
Wearable computers have many possibilities:
- 10-second audio loop for when you miss something (see the sketch after this list)
- directional/boom mikes
- voice-to-text dictation of conversations (even a high error rate would be useful for search)
- random Mnemosyne quizzes
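The audio loop, for instance, is nothing more than a ring buffer over the microphone stream; a minimal sketch, assuming the third-party sounddevice library is installed (buffer length and sample rate are arbitrary choices):

```python
# Ring-buffer sketch for a "what did I just miss?" audio loop, using the
# third-party `sounddevice` library (pip install sounddevice numpy).
import numpy as np
import sounddevice as sd

SECONDS, RATE = 10, 16_000                     # keep the last 10 s of mono audio
buffer = np.zeros(SECONDS * RATE, dtype="float32")

def callback(indata, frames, time_info, status):
    global buffer
    # drop the oldest samples, append the newest
    buffer = np.concatenate([buffer[frames:], indata[:, 0]])

with sd.InputStream(samplerate=RATE, channels=1, callback=callback):
    input("Listening; press Enter to replay the last 10 seconds...")
    sd.play(buffer, RATE, blocking=True)
```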
Lifelogging increases liberty & freedom:
- Bruce Schneier on recording the police: https://www.schneier.com/blog/archives/2010/12/recording_the_p.html
Lifelogging runs afoul of old wiretapping and privacy laws:
- “Digital recording tools are so cheap and simple to use that it’s easy to deploy them without thinking through the consequences. A Nebraska mother and grandfather found this out the hard way last month when they were hit with a combined $120,000 penalty for wiretapping after sticking an audio recorder inside a young girl’s favorite teddy bear.
Though the mother claimed only to be concerned with her child’s welfare, the judge found that the indiscriminate use of the recording device had violated the privacy of numerous people, each of whom were entitled to $10,000.”
“MIT Scientist Captures 90,000 Hours of Video of His Son’s First Words, Graphs It”; https://www.ted.com/talks/deb_roy_the_birth_of_a_word (TODO: grab quotes where he attests to emotional value of lifelogging):
A combination of new software and human transcription called Blitzscribe allowed them to parse 200 terabytes of data to capture the emergence and refinement of specific words in Roy’s son’s vocabulary. (Luckily, the boy was an early talker.) In one 40-second clip, you can hear how “gaga” turned into “water” over the course of six months. In a video clip, below, you can hear and watch the evolution of “ball.”
…Most moving of all was the precise mapping of tight feedback loops between the child and his caregivers-father, mother, nanny. For example, Roy was able to track the length of every sentence spoken to the child in which a particular word–like “water”–was included. Right around the time the child started to say the word, what Roy calls the “word birth,” something remarkable happened.
“Caregiver speech dipped to a minimum and slowly ascended back out in complexity.” In other words, when mom and dad and nanny first hear a child speaking a word, they unconsciously stress it by repeating it back to him all by itself or in very short sentences. Then as he gets the word, the sentences lengthen again. The infant shapes the caregivers’ behavior, the better to learn. [Note the connection with spaced repetition: Start simple, and repeat.]
“Big brother untangles baby babble”:
“Current samples that the field works with - typically an hour of recorded speech a week - are one to two orders of magnitude too small for our scientific purposes,” Professor Steven Pinker of Harvard University told BBC News.
So, Professor Roy, who by then had a child on the way, set about solving the conundrum. His solution: wire up his house with 11 cameras, 14 microphones and terabytes of storage and record every waking moment of his soon-to-arrive son.
…Now, a quarter of a million hours of recordings later, Professor Roy is beginning to tease apart the masses of data and look for answers.
To extract meaningful patterns from the 200GB (gigabytes) of data that flowed daily onto the racks of hard drives in the basement, the team created a series of software tools.
…Automatic systems could have error rates of up to 90%, he said.
At the other extreme, Professor Roy also experimented with human transcribers, but that also came with its own problems.
“It would take an average of 10 hours to find and transcribe one hour of speech,” he told the BBC.
…Instead, the researchers created a piece of software called Blitzscribe, which finds speech in the recordings and breaks it down into easily transcribed sound bites.
“We have automated components assisting human annotators,” he said.
“The net result is that we have reduced 10 hours down to two hours.”
The analysis also takes into account how a word was said - called prosody - and who said it.
To date, the team have already transcribed more than four million words.
“It’s already the most complete transcript of everyday life at home of any recording ever made.”
A similar human-computer system, called TrackMarks, has also been developed to analyse the video and gives information such as where people are in relation to one another and the orientation of their heads.
…In part to address this criticism, he has developed a stand-alone device - called the Speechome recorder - that can be easily put into homes without 1,000m (3,000ft) of wiring in the walls and converting the basement into a data center.
The devices look like floor lamps and contain an overhead microphone and camera, with another lens at eye level for children.
The base of the device holds a touch-screen display and enough storage to hold several months of recordings.
Their first deployment will be in six pilot studies of children with autism where they will be used to monitor and quantify the children’s response to treatment.
TED video lists some statistics:
- 3 years recorded; 8-10 hours per day
- 90,000 hours of video
- 140,000 hours of audio
- 200 terabytes
- 9 (?) cameras
“USENIX 2011 Keynote: Network Security in the Medium Term, 2061–2561 AD”, Charles Stross:
“What can you do with 2 terabits per second per human being on the planet? (Let alone 2tb/sec per wireless device, given that we’re already within a handful of years of having more wireless devices than people?)
One thing you can do trivially with that kind of capacity is full lifelogging for everyone. Lifelogging today is in its infancy, but it’s going to be a major disruptive technology within two decades.
The basic idea behind lifelogging is simple enough: wear a couple of small, unobtrusive camera chips and microphones at all times. Stream their output, along with metadata such as GPS coordinates and a time sequence to a server somewhere. Working offline, the server performs speech-to-text on all the dialogue you utter or hear, face recognition on everyone you see, OCR on everything you read, and indexes it against images and location. Whether it’s performed in the cloud or in your smartphone is irrelevant - the resulting search technology essentially gives you a prosthetic memory.
We’re already used to prosthetic memory to some extent; I used Google multiple times in the preparation of this talk, to retrieve specific dates and times of stuff I vaguely recalled but couldn’t bring directly to memory. But Google and other search engines are a collective prosthetic memory that can only scrutinize the sunlit upper waters of the sea of human experience, the ones that have been committed to writing and indexed. Lifelogging offers the promise of indexing and retrieving the unwritten and undocumented. And this is both a huge promise and an enormous threat.
Initially I see lifelogging having specific niches; as an aid for people with early-stage dementia or other memory impairments, or to allow students to sleep through lectures. Police in the UK are already experimenting with real time video recording of interactions with the public - I suspect that before long we’re going to see cops required to run lifelogging apps constantly when on duty, with the output locked down as evidence. And it’ll eventually become mandatory for other people who work in professions where they are exposed to any risk that might result in a [substantial] insurance claim - surgeons, for example, or truck drivers - not by force of law but as a condition of insurance cover.
Lifelogging into the cloud doesn’t require much bandwidth in absolute terms, although it will probably take a few years to take off if the cellcos succeed in imposing bandwidth caps. A few terabytes per year per person should suffice for a couple of basic video streams and full audio, plus locational metadata - multiply by ten if you want high definition video at a high frame rate. And the additional hardware - beyond that which comes in a 2011 smartphone - is minimal: a couple of small webcams and microphones connected over some sort of short range personal area network, plus software to do the offline indexing.
Lifelogging raises huge privacy concerns, of course. Under what circumstances can your lifelog legally be accessed by third parties? And how do privacy laws apply? It should be clear that anyone currently lifelogging in this way takes their privacy - and that of the people around them - very lightly: as far as governments are concerned they can subpoena any data they want, usually without even needing a court warrant. Projects such as the UK’s Interception Modernization Program - essentially a comprehensive internet communications retention system mandated by government and implemented by ISPs - mean that if you become a person of interest to the security services they’d have access to everything. The prudent move would be to lifelog to encrypted SSDs in your personal possession. Or not to do it at all. The security implications are monstrous: if you rely on lifelogging for your memory or your ability to do your job, then the importance of security is pushed down Maslow’s hierarchy of needs. When only elite computer scientists on ARPANet had accounts so they can telnet into mainframes at another site, security was just a desirable luxury item - part of the apex of the pyramid of needs. But when it’s your memory or your ability to do paid employment, security gets to be something close to food and water and shelter: you can’t live without it.
On the up side, if done right, widespread lifelogging to cloud based storage would have immense advantages for combating crime and preventing identity theft. Coupled with some sort of global identification system and a system of access permissions that would allow limited queries against a private citizen’s lifelog, it’d be very difficult to fake an alibi for a crime, or to impersonate someone else. If Bill the Gangster claims he was in the pub the night of a bank robbery, you can just query the cloud of lifelogs with a hash of his facial features, the GPS location of the pub, and the time he claims he was there. If one or more people’s lifelogs provide a match, Bill has an alibi. Alternatively, if a whole bunch of folks saw him exiting the back of the bank with a sack marked SWAG, that tells a different story. Faking up an alibi in a pervasively lifelogged civilization will be very difficult, requiring the simultaneous corruption of multiple lifelogs in a way that portrays a coherent narrative.
So whether lifelogging becomes a big social issue depends partly on the nature of our pricing model for bandwidth, and how we hammer out the security issues surrounding the idea of our sensory inputs being logged for posterity.
Lifelogging need not merely be something for humans. You can already buy a collar-mounted camera for your pet dog or cat; I think it’s pretty likely that we’re going to end up instrumenting farm animals as well, and possibly individual plants such as tomato vines or apple trees - anything of sufficient value that we don’t kill it after it has fruited. Lifelogging for cars is already here, if you buy a high end model; sooner or later they’re all going to be networked, able to book themselves in for preventative maintenance rather than running until they break down and disrupt your travel. Not to mention snitching on your acceleration and overtaking habits to the insurance company, at least until the self-driving automobile matches and then exceeds human driver safety.
…I’m not talking about our identities in the conventional information security context of our access credentials to information resources, but of our actual identities as physically distinct human beings. We use the term “identity theft” today to talk about theft of access credentials - in this regime, “identity theft” means something radically more drastic. If we take a reductionist view of human nature - as I’m inclined to - our metagenomic context (including not just our own genome and proteome, but the genome of our gut flora and fauna and the organisms we coexist with) and our sensory inputs actually define who we are, at least from the outside. And that’s not a lot of data to capture, if you look at it in the context of two terabits per second of bandwidth per person. Assume a human life expectancy of a century, and a terabit per second of data to log everything about that person, and you can capture a human existence in roughly 3.15 × 10^21 bits … or about 65 milligrams of memory diamond.
With lifelogging and other forms of ubiquitous computing mediated by wireless broadband, securing our personal data will become as important to individuals as securing our physical bodies. Unfortunately we can no more expect the general public to become security professionals than we can expect them to become judo black-belts or expert marksmen. Security is going to be a perpetual, on-going problem.
Moreover, right now we have the luxury of a short history; the world wide web is twenty years old, the internet is younger than I am, and the shifting sands of software obsolescence have for the most part buried our ancient learning mistakes. Who remembers GeoCities today? Nor is there much to be gained by a black hat from brute-force decrypting a bunch of ten year old credit card accounts.
But it’s not going to be like that in the future. We can expect the pace of innovation to slow drastically, once we can no longer count on routinely more powerful computing hardware or faster network connections coming along every eighteen months or so. But some forms of personal data - medical records, for example, or land title deeds - need to remain accessible over periods of decades to centuries. Lifelogs will be similar; if you want at age ninety to recall events from age nine, then a stable platform for storing your memory is essential, and it needs to be one that isn’t trivially crackable in less than eighty-one years and counting.
…Storing and indexing the data from such exhaustive lifelogging is, if not trivial, certainly do-able (the average human being utters around 5000 words per day, and probably reads less than 50,000; these aren’t impossible speech-to-text and OCR targets). And while there are plausible reasons why we might not be able to assert the overriding importance of personal privacy in such data, it’s also clear that a complete transcript of every word you ever utter in your life (or hear uttered), with accompanying visuals and (for all I know) smell and haptic and locational metadata, is of enormous value.
Which leads me to conclude that it’s nearly impossible to underestimate the political significance of information security on the internet of the future. Rather than our credentials and secrets being at risk - our credit card accounts and access to our email - our actual memories and sense of self may be vulnerable to tampering by a sufficiently deft attacker. From being an afterthought or a luxury - relevant only to the tiny fraction of people with accounts on time-sharing systems in the 1970s - security is pushed down the pyramid of needs until it’s important to all of us. Because it’s no longer about our property, physical or intellectual, or about authentication: it’s about our actual identity as physical human beings.
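(Stross’s arithmetic is easy to check; a quick sanity check of his figures, using his own stated assumptions:)

```python
# Sanity-checking Stross's figures: a century of life logged at 1 terabit/second.
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ~3.156e7 seconds
lifetime_bits = 100 * SECONDS_PER_YEAR * 1e12  # 100 years at 1 Tbit/s
print(f"{lifetime_bits:.2e} bits")             # ~3.16e21, matching his 3.15 x 10^21

# And his "few terabytes per year" estimate for basic audio+video lifelogging
# corresponds to a very modest average bitrate:
terabytes_per_year = 3
avg_bits_per_second = terabytes_per_year * 8e12 / SECONDS_PER_YEAR
print(f"{avg_bits_per_second / 1e6:.2f} Mbit/s")  # ~0.76 Mbit/s
```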
“The Truth of Fact, the Truth of Feeling”, by Ted Chiang
- This can be quite literal. Tools can affect our mental networks and whether our brain chooses to remember something; I found the 2011 paper “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips” (Sparrow et al) interesting. Among the experiments, this one stood out:
In Experiment 2, we tested whether people remembered information they expected to have later access to - as they might with information they could look up online (4). Participants were tested in a 2 × 2 between-subject experiment by reading 40 memorable trivia statements of the type that one would look up online (both of the new information variety e.g. “An ostrich’s eye is bigger than its brain” and information that may be remembered generally, but not in specific details, e.g. “The space shuttle Columbia disintegrated during re-entry over Texas in Feb. 2003.”). They then typed them into the computer to assure attention (and also to provide a more generous test of memory). Half the participants believed the computer would save what was typed; half believed the item would be erased. In addition, half of the participants in each of the saved and erased conditions were asked explicitly to try to remember the information. After the reading and typing task, participants wrote down as many of the statements as they could remember. A between-subjects 2 (saved/erased) × 2 (explicit memory instructions vs. none) ANOVA revealed a [statistically-]significant main effect for only the saved/erased manipulation
…The main effect of the instruction to explicitly remember or not was not [statistically-]significant, which is similar to findings in the learning literature on intentional versus incidental studying of material, which generally finds there is no difference of explicit instruction (6,7). Participants were more impacted by the cue that information would or would not be available to them later, regardless of whether they thought they would be tested on it.
- run the program and do whatever it tells you↩︎
- the advanced mnemonic techniques have always been used by very few; even Giordano Bruno may not have used all the techniques he taught.↩︎
- We can see early versions of this. Anki can schedule cards down to the minute, and there is already a website/application called Popling which will pop up a scheduled card every few minutes. It’s advertised for those who ‘lack motivation’; there is probably also a connection to intermittent reinforcement/game mechanics.↩︎
- A functionality that would be appreciated by any parent who has absent-mindedly left their child in the car to bake to death.↩︎
- Before you dismiss this as contrived, consider that the cellphone application Trixie Tracker charges $50 annually to track babies’ sleep and feeding data. It is not a mini-EEG like the Zeo, nor is it a motion-based sleep tracker. If you read the press coverage, all the data is entered manually by the parents. In other words, it is a spreadsheet with a nice UI.↩︎
- LENA is what looks like an ordinary USB audio recorder (~$50) combined with a voice-recognition/transcription program (likely akin to Dragon NaturallySpeaking; ~$90 retail) and some statistics code. It costs $700.↩︎
- “The Science of Success”, David Dobbs, December 2009, The Atlantic↩︎