June 8, 2016
ALA’s OIF has begun publication of a new journal, the Journal of Intellectual Freedom and Privacy (JIFP). It replaces and expands the Newsletter on Intellectual Freedom (NIF).
The table of contents is here.
You will notice an article by yours truly, which is about SRRT’s Alternatives in Print Task Force, the attention to media monopoly issues in the 80s and 90s, and a related 2007 report from the IFC “Subcommittee on the Impact of Media Concentration on Libraries.”
Additionally, in the review section there is a review of the recent Library Juice Press publication, Where are all the Librarians of Color? The Experiences of People of Color in Academia.
We’re proud to be a part of this first issue of JIFP, and we look forward to seeing future issues.
December 10, 2015
Alison Macrina is a librarian, privacy rights activist, and the founder and director of the Library Freedom Project, an initiative which aims to make real the promise of intellectual freedom in libraries by teaching librarians and their local communities about surveillance threats, privacy rights and law, and privacy-protecting technology tools to help safeguard digital freedoms. Alison is teaching a class for Library Juice Academy next month, called Everything to Hide: A Toolkit for Protecting Patrons’ Digital Privacy. She has agreed to do an interview here, to tell people about the class and also to talk about the Library Freedom Project.
Hi Alison, thanks for agreeing to do this interview.
Hi Rory, thanks for having me.
I want to start by asking you to briefly describe the Library Freedom Project and a bit about how it got started.
Library Freedom Project is an initiative to bring practical privacy education and tools into libraries and the communities they serve. We teach librarians about threats to privacy from government, corporate, and criminal actors, privacy law and our responsibility to protect privacy, and privacy-enhancing technology tools that can be installed on library PCs or taught to patrons in computer classes. We work closely with the ACLU — particularly the ACLU of Massachusetts — and with The Tor Project, who are the technologists building a few of the privacy technologies we recommend.
I started Library Freedom Project after Edward Snowden began his revelations about mass surveillance in the summer of 2013. The Snowden revelations showed me that the problem was much more massive than any of us could have imagined — and this includes those of us who opposed the passage of the USA PATRIOT Act back in 2001. I was working as a library technologist at the time, and I saw libraries as the ideal places to fight back against this kind of pervasive surveillance. For one, we have a historic commitment to privacy and recognize the relationship it has to intellectual freedom and censorship. We’re often the only spaces offering free computer instruction classes, and our computer terminals are for many patrons their only computer access. Furthermore, libraries have long prioritized service to marginalized populations — such as immigrants, Muslims, people of color, formerly incarcerated people, and people who are or have been homeless — and we know that surveillance affects these populations much more significantly than the general population. So it seemed to me an obvious way of combining our values and our commitment to our communities with a very real social need, and I began traveling around my home state of Massachusetts with staff of the Massachusetts ACLU, training librarians on surveillance resistance.
Is the training you’re giving them similar to what you’ll be teaching in your class with us?
There are overlapping topics, yes. But the class will cover a lot more ground.
So what will the class cover?
The class will start with some of the issues around surveillance and privacy, as well as threat modeling — understanding the capabilities of our adversaries and determining which particular ways we want to protect ourselves. We will cover many of the ways in which the internet is a hostile and insecure place. Then we will learn how to use the technology, getting into more advanced topics like PGP for email and OTR for chat.
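Threat modeling can be made concrete with even a toy worksheet. The sketch below is illustrative only (the assets, adversaries, and ratings are this editor’s assumptions, not material from the class); it just shows the shape of the exercise: list what you want to protect, from whom, how likely each threat is, and which tool mitigates it.

```python
# A toy threat-modeling worksheet. All entries are illustrative examples,
# not a definitive assessment for any particular person or library.
threats = [
    {"asset": "browsing history", "adversary": "advertising networks",
     "likelihood": "high", "mitigation": "Tor Browser"},
    {"asset": "email contents", "adversary": "mass surveillance",
     "likelihood": "high", "mitigation": "PGP encryption"},
    {"asset": "chat logs", "adversary": "service provider",
     "likelihood": "medium", "mitigation": "OTR messaging"},
]

# Prioritize: address the high-likelihood threats first (stable sort keeps
# the original order within each likelihood tier).
for t in sorted(threats, key=lambda t: t["likelihood"] != "high"):
    print(f'{t["asset"]:>16}  vs  {t["adversary"]:<22} -> {t["mitigation"]}')
```

The point of the exercise is not the table itself but the habit: you cannot pick the right tool until you have named the adversary you are defending against.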
Full disclosure: I’m planning to sit in on your class, because I want to learn about these things. I’m a little embarrassed to tell you, but I think I’m typical of librarians in that I am aware of privacy issues in general but tend not to do much to address the problem in my own work life. I use Google services heavily, along with Dropbox and Evernote, often for important things. I anticipate that your course will help me feel empowered and encouraged to make changes in my own work life, as well as to equip me to help others. Do you find when you do trainings that you have that effect on librarians? What are your thoughts on that?
I don’t think that’s something to be embarrassed about. You’re where most people are. And I do find that our trainings are empowering, because at the very least they give people a framework to understand these issues, and they can start making small, meaningful changes immediately. Privacy is ultimately about control, and the loss of that control can feel very discouraging. Taking back even a little of it certainly helps people combat their feelings of despair.
You’ve been doing the trainings for a little while now. What are some of the common issues that come up, that you expect to address in the class? What are some of the more problematic issues?
There are a great number of challenges — pretty much all of this information is new to the participants, the issues around privacy and surveillance are too big to know, the problems are massive, and the adversaries are powerful. Plus, most people are nontechnical (not an insult) and privacy-enhancing technologies can be more difficult than technologies that trade privacy for convenience. I will try to address those issues in the class the way that I do whenever I teach: people should know that even small changes can be significant, and that security is a process. The internet is a hostile place, and we have a lot of work to do to overcome that, but we can be successful if we take it one step at a time, adopt new strategies and get comfortable with them, and then move on to something new when we’re ready.
I just want to clarify that when you say “the internet is a hostile place,” you’re not talking about people who are assholes in the comment section; you are talking about spyware and things like that, right? In your experience, are we less than fully aware of the extent of the hostility you’re referring to?
Well, in some ways I do mean both. There are hostile individuals who want to dox feminists and marginalized people online, and they use some of the same resources that the intelligence agencies do. But mostly I mean that the internet was never designed to be secure or private, and the adversaries have so much power. People are DEFINITELY unaware of the extent of the hostility, and who can blame them? So much of it is invisible. For example, most people don’t know that Flash is ridiculously hostile, because they go on using it. Most people don’t know that leaving your software updates for days or weeks or longer is putting you in a lot of danger of exploitation. Most people — even those who followed the Snowden leaks — don’t have any idea of the capabilities of the intelligence agencies and how those are used against real people in our communities. I honestly don’t know anyone who knows the full extent of the internet’s hostility, because so much of the internet is essentially secret — proprietary, closed source technology that can’t be examined for security flaws or malicious code, and agencies that operate under incredible secrecy. Fortunately, the technology exists to protect us — but making that mainstream is its own Herculean task. That’s why libraries are the right places to teach this stuff. We have to make it mainstream.
It strikes me that we’re still under the strong influence of an idealistic cyber-utopian vision of the internet, as a technology that links the world together benevolently. What you’re saying is that people need to be made aware that the opposite is true, and that libraries should have a central role in teaching people to defend themselves in an environment that we formerly cherished for its openness. Is that right? If so, what does it mean for the library ideal of information sharing? I mean, I remember Sandy Berman quoted as saying, “I can’t have information I know would be of interest to someone and not share it.” Privacy education is about teaching people how not to share information. Is there a tension here, and do you think it reflects changing times?
The internet does need to be open, but that doesn’t mean that individuals should be exploited by its openness. I believe in transparency for governments and corporations, and privacy for individuals. There doesn’t need to be a tension, because you can define it easily across those lines. Libraries have long recognized this — providing information access has *never* meant “freely handing over patron records to the police with no warrant”; we know that privacy and intellectual freedom depend on one another. And Sandy Berman, bless him, maybe didn’t consider how much advertisers might want information about his lifestyle habits, his intellectual interests, and his associations, and maybe he didn’t consider how they’d use that information to shape public opinion and filter the results we get on the web — thus making it less open and free. He also probably didn’t imagine that those advertisers would use means totally hidden to the average user…not exactly openness or transparency. Furthermore, he probably never thought about how secretive and powerful intelligence agencies would grow in the Global War on Terror-era, to the point where they, too, have access to all that advertising data, plus anything else we share with a third party, plus a whole lot of other stuff too.
Now, simultaneously, my belief in a free and open internet means that I value free and open source software — software where the source code is shared openly and can be scrutinized for security holes or other privacy threats — thus making it the best option for people who want to defend against these adversaries. Using FOSS protects internet freedom, including privacy, and is one way we can make the internet a more democratic place.
Thank you, you’ve drawn the key distinctions that I needed.
So the Library Freedom Project trains librarians to do patron education about privacy. I wonder if you’re also interested in addressing library policies around patron privacy. What are some of the issues there? And is that within the scope of the project?
Yes, but we are a tiny organization, so we haven’t been able to make this a priority. I did help a small amount with the best practices guidelines created by the Intellectual Freedom Committee and the LITA Patron Privacy Interest Group. The guidelines address some of the major issues — that is, we’ve given third-party vendors so much access to patron data, we have not demanded secure transmission and storage, and so on. That’s how we wound up with the Adobe breach, something that we should be deeply ashamed of as information professionals. It seems to me that in our push to get more electronic content for our patrons, we left privacy out of our policies and contracts almost entirely, and now that’s come back to bite us.
Right after you answered that question you did a webinar, which I attended. I noticed that in your presentation you were addressing the librarians in attendance as the users of the tools, rather than explicitly as patron educators, or stewards of patrons’ privacy. It probably isn’t a meaningful difference, because either way the librarians need to know the tools they are going to be teaching. But in teaching to an audience of librarians as direct users of the tools, you assumed a degree of motivation that may not be as high as it is for political activists whom librarians may find themselves helping as patrons. Not that privacy isn’t something everyone should be interested in, but I know that in my case, if I decided to get involved with Deep Green Resistance I would start to get very concerned about privacy and would want to use Tor and PGP a lot, when in the course of my daily work I am not concerned to that degree. How do you navigate that issue in teaching and doing in-depth workshops? Are there any issues that have a different shape depending on whether the librarians are the users of the tools or the stewards and educators?
Well, when I only have 15 minutes to speak, my approach is quite different than when I have an hour or more. Also, I don’t think I was really addressing the librarians only as users of the tools — I referred back to April’s part of the presentation frequently, mentioning how tracking affects our communities, etc. I can’t really get into teaching strategies in a 15-minute presentation, but some of the resources I referred to on our site include a teacher’s guide.
I’m also not really sure what you mean about assuming a degree of motivation — people showed up to a webinar about privacy, which tells you something already about the motivation they have in learning about privacy tools. I don’t think it’s wrong to believe that they are thus motivated to, you know, do what I suggest that they do. Also, it is my experience that librarians are HIGHLY motivated to help their communities protect their privacy — whether those community members are political activists or domestic violence survivors or whatever. Librarians are service-minded people, and they tend to care very much about the ways their patrons are affected by privacy issues. April brought up a lot of those issues in the first half of the presentation — for example, how advertisers use algorithms to target people of color with predatory lending ads. If there are librarians who hear about how these issues affect our communities in serious ways, and they still don’t care to help them…I’m not really sure what to tell those librarians, frankly.
Also, our longer trainings go into much more detail about specific threats, cover a much wider range of tools, and offer teaching strategies as well. In those in-depth trainings, we cover the reasons why all people, not just political activists or people with more serious threats, have a reason to use these tools. For example, you mention PGP encryption. Maybe you’re unmotivated to use it, but if I explained to you how insecure and nonprivate email is, you might change your tune. You surely have had to send tax forms or other sensitive material over email, and that is incredibly unsafe without PGP encryption. Tor Browser also might seem like too much for you, but if you knew how much advertisers, analytics companies, A/B testers, and the like were collecting information about you and using it to filter your web content and create an information profile about you to sell you products, you again might feel differently. Those are only two examples. My assumption in teaching librarians is always that they are both users and teachers of the tools, because in order to be good teachers, they have to use the tools themselves and understand them.
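The point about email insecurity can be made concrete. A quick sketch with Python’s standard-library email module (the addresses and subject below are hypothetical) shows why even PGP is only a partial fix: PGP encrypts the body, but the headers (sender, recipient, subject line) travel in plaintext past every mail server that relays the message.

```python
from email.message import EmailMessage

# Hypothetical message: even with a PGP-encrypted body, the headers
# below remain readable to every relay that handles the message.
msg = EmailMessage()
msg["From"] = "patron@example.org"       # hypothetical address
msg["To"] = "accountant@example.com"     # hypothetical address
msg["Subject"] = "2015 tax forms"        # PGP does NOT encrypt the subject
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n"
    "...only this ciphertext is protected...\n"
    "-----END PGP MESSAGE-----\n"
)

# In the wire format, everything before the first blank line is
# plaintext metadata; only the body can be ciphertext.
wire = msg.as_string()
headers, _, body = wire.partition("\n\n")
print(headers)
```

Running this prints the From, To, and Subject lines (plus MIME headers) in the clear, which is exactly the metadata that third parties can collect even from “encrypted” mail.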
That makes good sense. It will be good to see how you get into issues of patron education in more depth in the class. Beyond patron education, do you also get into issues of ensuring greater privacy for patrons in their use of the internet in the library? I recall you mentioning in the webinar that you have helped a couple of libraries install Tor on public computers. Is that a complicated thing, as far as getting admin to go along with it? Do you find issues with untraceable, anonymous services? I am thinking of this because I remember hearing a story about something that happened at my last place of work. There was a patron who used a public computer to send a serious threat; the IT department tracked the computer using its IP address, then used the surveillance footage to identify him, and the police ultimately made an arrest. I know that the people in IT and in the admin office, at that place anyway, were interested in helping law enforcement, and they didn’t hesitate to violate the patron’s privacy in order to help the police. And in this case, he wasn’t just exercising his First Amendment rights. I am pretty sure that at that library the administration would be reluctant to install a system that got in the way of their cooperative relationship with law enforcement. That’s not very nice to think about, but I bet it is common. Have you ever gotten pushback about things like installing Tor on a public terminal?
Yep, I will talk about teaching strategies. And yes, half the point of teaching these tools is trying to get libraries to install them on public PCs. As for the difficulty in getting admin to agree to things like that, it really depends on the library itself. Some libraries have agreed immediately — like the library in Lebanon, New Hampshire, where we installed our first Tor relay. Their board and director agreed to join the project unanimously. Others are harder to convince, but as more and more libraries start making this a norm, it won’t be as hard.
As for the situation you outline, that sort of activity is exceedingly rare, and most libraries will never have to deal with something like that. But what is incredibly common is that our communities face surveillance threats every time they use the internet, from pervasive advertising to overzealous intelligence agencies, and all the malware and criminal hacking that comes with using insecure tools. A browser that makes it easy for the police to identify the source of criminal activity also makes it easy for a domestic violence survivor to be tracked by her abuser, or for a poor person to be targeted by predatory lending schemes, or for children to be followed by malicious people, or for anyone to have their online activity tracked step by step. That is not a free internet, but an internet ruled by adversaries. That worries me much more than the rare occurrence of criminal activity on library computers. Furthermore, criminals have many options, because they are willing to break the law to achieve their ends — they can use proxies or spoof MAC addresses or find some other way of conducting their activities. Other people who need privacy don’t have those options, and we should prioritize their needs, because there are many more of them than there are criminals. It is of course a risk to give people the freedom of anonymity online, but in a democracy, we are often confronted with such decisions. As the ALA Freedom to Read Statement says: freedom itself is a dangerous way of life, but it is ours.
Thanks for saying all of this so well. I’ve been provoking you a little bit and I’m really glad that you’ve said all of this. I’m excited that you’re going to be teaching this class for us, and I hope you keep inspiring people to take control of their online privacy. Thanks for the interview.
Thanks Rory. I am really excited to teach the class — I’ve never had the chance to teach so many people over such a long course of time — and I’m excited to see what we can all learn from each other.
September 17, 2014
Information Ethics Roundtable 2015
University of Wisconsin–Madison
April 9th & 10th
Theme: Transparency and Secrecy
Information and the CFP here…
May 9, 2014
Some people from Radical Reference have put together a zine with anti-surveillance resources for the discerning library worker-slash-activist. (Full title: We Are All Suspects: A Guide for People Navigating the Expanded Powers of Surveillance in the 21st Century.) As I wrote on that site, the zine includes “know your rights” info; suggestions for applications, browser plug-ins, and other tech tools for online privacy; and, of course, a reading list!
Download it from the Rad Ref page, where there’s also contact information if you want to get involved in similar privacy education projects.
We Are All Suspects: A Guide for People Navigating the Expanded Powers of Surveillance in the 21st Century
April 11, 2014
Chitra Ganesh and Mariam Ghani are artists, archivists, and activists. Both have been involved in immigration rights activism, especially after 9/11, and they created the shifting exhibition Index of the Disappeared, now in its 10th year, to address the insidious surveillance, false narratives, and criminalization of dissent perpetrated by the U.S. government.
I saw the “Secrets Told” version of the archive at New York University last month. During a tour of the exhibit, Ghani spoke about her and Ganesh’s idea of “exploding the archive” and putting the fragments elsewhere. The information they’ve collected is all in the public domain, but what their project does is make the connections of disparate data more visible.
(If you want to read more, a previous incarnation of Ganesh and Ghani’s work was the subject of the essay Warming up Records: Archives, Memory, Power and Index of the Disappeared. As Alice Royer puts it, “Their project makes visible that which has been rendered invisible, re-politicizes that which has been deemed natural, and names the government as the perpetrator.” [Emphasis in original.])
The Q&A at the “Secrets Told” tour brought up the question of the line between the activist and the archivist, which is something Ganesh and Ghani want us all to grapple with. Today is the start of the two-day Radical Archives conference at NYU. The hashtag is #radarcs—follow along!
“Reasonable Articulable Suspicion,” redactions, and Benjamin Franklin.
One of the many binders of articles, government documents, court cases, and other materials collected and organized for researchers’ use.
Files arranged by topic, with connections drawn between them.
The pivotal 1979 Smith v. Maryland decision, which held that callers have no reasonable expectation of privacy in the numbers they dial, laying the legal groundwork for the warrantless collection of telephone metadata.
August 10, 2013
NSA Data Center — Bluffdale, Utah
In a recent post to this blog, I outlined how the debate regarding the National Security Agency’s data gathering activities pitted privacy against national security and sought to “balance” the two competing values. I suggested that framing the debate in these terms misses the more important concern that the NSA’s data gathering activities are a significant threat to democracy. In what follows, I will explain my concerns.
Although most reporters suggested that Edward Snowden was primarily concerned about the invasion of privacy when he revealed the NSA’s data gathering activities, Snowden himself made it clear that his primary concern was for democracy itself. In an interview about the reasons for his actions, Snowden worried that through his work for the NSA, he was “extend[ing] the capabilities of … [an] architecture of oppression” and that the government unilaterally was “grant[ing] itself powers to create greater control over American society and global society.” Snowden was calling on us to see the dangers of the NSA’s surveillance programs more broadly. These programs do not simply pose harms to individuals, they have the potential to transform the character of all political life in the country.
But what is this “architecture of oppression” that Snowden mentions, and how will it “create greater control over American society”? The answers lie in understanding the significance of collecting and accessing Big Data, which is really the core of the NSA’s surveillance activities.
Far from merely poking into the privacy of individuals, Big Data potentially provides its owners with the ability to modify the behavior of individuals and entire demographic groups. The most obvious example of this is the data collected by internet companies like Facebook and Google. By collecting information about a person via their voluntarily constructed online profile or through recording their search behavior, Facebook, Google, and other such companies are able to craft advertising messages that are increasingly able to direct our behavior on- and offline. To be sure, the algorithms used to customize advertising and search results are not perfect, but one need not succeed in every instance to increase the odds that members of a market segment will be persuaded to make a purchase or view a website. Such is Big Data’s role in commerce – a role that is not especially worrying.
We should be concerned more, however, with the political use of Big Data. In the past, political strategists employed data collected by Boards of Elections. One’s voter registration record usually contained one’s name, address, date of birth, political party affiliation (if any), and the elections in which one voted. From this, campaigns tried to identify likely and unlikely voters as well as sympathetic and unsympathetic voters. Door-to-door campaigns could then be run more effectively. Furthermore, the campaign message could be tailored to specific groups to maximize voter turnout in favor of the candidate and suppress turnout for the opposing candidates.
Now, with the availability of Big Data, a campaign can understand the voting population much better. This data often is available freely on government websites, e.g., the US Census Bureau and the Federal Election Commission. These sites can inform a campaign about the socio-economic status of a precinct, the breakdown of renters versus home owners, an individual voter’s history of campaign contributions, and much more. Conceivably, other Big Data repositories could be made available from the private sector. Knowing which voters purchased SUVs, have health insurance, shop at discount stores, take advantage of “back to school” sales, subscribe to specific magazines, purchased home security systems, or visit certain websites can help identify individuals with specific interests that then could be exploited by the campaign. The candidate who has the most extensive access to these data sources and can hire the data analysts capable of mining the data will have an enormous advantage over candidates who do not.
This style of campaigning is not merely a prospect for the future. During the 2012 presidential campaign, the Obama reelection committee employed Big Data (or at the very least a lot of data and very sophisticated data analysis) to contact voters with messages that brought them to the polls in numbers far greater than anyone expected. According to Jonathan Alter, the analyses were sophisticated enough to tell the campaign “why placing ads on reruns of The Andy Griffith Show made sense on some cable systems but not on others.” Furthermore, data was collected to test campaign messages and to measure the persuasiveness of particular door-to-door volunteers. The data analysis used by the Obama campaign, however, mostly focused on creating a nationwide database that linked likely voters, volunteers, and donors in order to make donors of volunteers and volunteers of donors. So in this sense, it was not as sophisticated as it could have been. Still, it was really only the first concerted attempt to run a Big Data campaign. It often has been credited with winning the election, and so it likely will become the model for future political campaigns, which will make greater and greater use of data analysis. (For an illuminating account of the Obama campaign’s use of data in 2012, see Jonathan Alter’s recent book The Center Holds.)
But where is the danger to democracy in this? After all, it is still the voters who are deciding the outcomes. Well, the danger arises long before the voters have anything to say about the election.
As campaign data analysis becomes more sophisticated, voters will only be presented with candidates who have access to the largest data sets about the voting population and who have the resources to analyze those sets. All others will be screened out of the electoral process long before any serious campaigning begins. For a campaign to be successful, it will need supporters who own important data sets and can provide the technical expertise to exploit them. Such friends cannot come from the working or underprivileged classes. Obama’s digital campaign had a budget of over $25 million, and costs for future campaigns surely will be higher. Consequently, the only entities capable of amassing the financial and digital resources will be extremely rich individuals, major corporations, internet companies, and broad industry groups. The ability to affect an election will not be based on the democratic principle of one person – one vote. It will be proportional to the donor’s wealth. Even more so than today, these groups will have effective veto power over who will be a “viable” candidate for state and federal office. If the Supreme Court’s decision in Citizens United advanced the cause of plutocracy, then the private ownership of Big Data and its use in elections will ensure that plutocrats will be unchallenged in perpetuity.
George Orwell’s 1984 warned that video surveillance might ensure that a political party would one day establish unassailable control over a society. He wrote, “If you want a picture of the future, imagine a boot stamping on a human face – forever. There is no way the Party can be overthrown. The rule of the Party is forever. Make that the starting point of your thoughts.” Today’s surveillance technology is not just Orwell’s simple video cameras. It is also the ubiquitous data and metadata harvesting by public and private entities. The NSA is merely one institution that is amassing this data, though it is doing so on an unimaginable scale and with an enormous budget. It currently is constructing a data center in Bluffdale, Utah, containing four 25,000-square-foot halls filled with servers that will be able to handle yottabytes of information. (A yottabyte is equal to approximately 500 quintillion, or 500 × 10^18, pages of text.) Meanwhile, the NSA has only the slightest democratic oversight, and, ominously, it is working in support of a bloated National Security State that defends a plutocratic government. One might be tempted to call it an “architecture of oppression.”
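The page-count figure is a back-of-the-envelope conversion. Assuming roughly 2 kilobytes per page of plain text (an assumption for illustration, not a figure from the post), the arithmetic works out as:

```latex
1~\text{yottabyte} = 10^{24}~\text{bytes}, \qquad
\frac{10^{24}~\text{bytes}}{2 \times 10^{3}~\text{bytes/page}}
  = 5 \times 10^{20}~\text{pages} = 500~\text{quintillion pages}.
```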
I suspect (hope) that Orwell’s image of the future as a boot stamping on a human face is too extreme, at least for US domestic politics. More likely, if you want a picture of the future, it will not be much different from the present, but it will be less corrigible. We will see a wide disparity of wealth with a large, struggling underclass that is alienated from the benefits of economic progress. These conditions will be guaranteed by governments that first of all serve the owners and managers of society. The pretense of democracy will survive only in the carefully manipulated elections contested by competing elements within the ruling class, and one of their most important tools for social control will be Big Data.
August 1, 2013
The recent revelations that the National Security Agency has been collecting metadata for the phone calls of American citizens and that it has been acquiring data from Google, Yahoo!, Facebook, and other internet companies come as no big surprise to many. Sen. Frank Church’s investigation in the 1970s revealed a long history of government surveillance. The Foreign Intelligence Surveillance Act of 1978, its subsequent amendments, and the PATRIOT Act left enough clues to create a disturbing picture of what the government might be doing. Furthermore, there have been plenty of past news reports providing evidence of surveillance; but with the revelations from Edward Snowden, any room for willful ignorance is now gone. The surveillance programs are out in the open and have sparked a media debate. Even Congress took up the issue.
According to news reports, the debate is about “balancing” national security against privacy. Numerous news sources and blogs have published (verbatim) the sentence, “The revelations have reopened the post-Sept. 11 debate about individual privacy concerns versus heightened measures to protect against terrorist attacks.” Obama put the question this way: “How are we striking this balance between the need to keep the American people safe and our concerns about privacy?” House Minority Leader Nancy Pelosi put it most succinctly, “We have to have a balance between security and privacy.” Even critics of the surveillance policies have adopted this framework. American Library Association President Barbara Stripling writes, “We need to restore the balance between individual rights and terrorism prevention.”
The problem with framing the debate in this way is that it tends to overestimate the benefits and underestimate the harms. Worse yet, the most important harms are overlooked entirely. Our attention is directed to benefits that accrue to the whole of society (national security) and to harms posed only to individuals (the invasion of privacy). We are led to think that the NSA surveillance programs protect us from terrorism, while the only downside is that certain individuals’ rights to privacy might be underweighted in the “balance.” Framing the debate this way seems to ask: should the government be prevented from setting up an anti-terrorist database on the grounds that some security analyst might – as a side effect – discover that someone is secretly visiting internet porn sites or dialing 1-900-SEX-CHAT? Framed in this way, personal privacy amounts to a dispensable luxury, particularly when Obama assures us that the surveillance programs pose only “modest encroachments on privacy,” and that “nobody is listening to your phone calls” – they’re just collecting metadata.
Of course, embarrassing publicity can have important political consequences, particularly as it might be used against politicians, but the public is likely to conclude that a sexting politician is too stupid to deserve much sympathy. Beyond damaging particular high-profile political careers, there are more serious concerns. FBI agents might be led to discover who is organizing climate change rallies or Tea Party meetings and then obstruct these movements by causing trouble for perfectly law-abiding citizens. But even in these cases, the public is likely to conclude that targeting peaceful political groups will be limited by the FISA court and that covert interference with fringe political movements will be a criminal aberration made rare by the integrity of intelligence agents and the threat of prosecution. So much for privacy concerns.
In contrast to this, we are asked to consider national security, specifically, “terrorism.” The Director of the National Security Agency, Gen. Keith Alexander, tells us that the controversial surveillance programs “help[ed] prevent” fifty-four “potential terrorist events” – whatever that means. The terrorism threat, however, has been enlarged well out of proportion. The number of Americans killed or harmed by terrorists pales in comparison with the number harmed by the most routine dangers we face every day. Moreover, the harms that might come from “terrorist events” are largely speculative and vague enough that a scenario can be concocted that is so grim as to put any civil libertarian on the defensive. Think of Condoleezza Rice’s remark, “We don’t want the smoking gun to be a mushroom cloud.” Even our ostensibly liberal president assures us that these programs “help us prevent terrorist attacks.” It is no wonder that many Americans are unconcerned about (even welcome) these surveillance programs.
What is seldom mentioned is that these massive surveillance programs do not just pose a threat to individual privacy. They pose a profound threat to democracy. When the threat to democracy is mentioned, it tends to be a rhetorical addendum. For example, Barbara Stripling writes, “the surveillance law erodes our basic First Amendment rights, all while undermining the very fabric of our democracy.” Stripling deserves great praise for her remarks on this issue, but we are left to figure out for ourselves how the fabric of democracy is undermined. I will explore this danger in a future post to this blog.
August 5, 2012
Journalism students at the University of Missouri have published a very important report on book censorship in Missouri. It makes for chilling, but necessary reading. Take a look here.
October 30, 2011
September 30, 2011
We have often pointed out here that privacy on Facebook is not primarily a matter of controlling what you share with your friends, as Facebook likes to say it is, but of controlling what data Facebook has about you that it can sell or otherwise make available to its business partners.
Here is a great link that was just sent my way, to an inventory of all of that data, Facebook’s Data Pool. It is possible to gather this information in Europe, because in the EU they have a wonderful law that requires companies to disclose to citizens what information they have about them.
There is not too much that is surprising in what they have found by doing this, but it is interesting to see the way the data is organized and how it looks from the Facebook side.
Serious case of law envy here.
May 16, 2011
MiT7 was a great conference – intimate, warm, stimulating, interdisciplinary, and cutting-edge. There were some brilliant minds at work. I plan to post a few comments on the conference later. For now, here are links to podcasts from the three topical plenary sessions:
Media in Transition 7: Unstable Platforms
Archives and Cultural Memory
Power and Empowerment
May 11, 2011
Media in Transition 7 (MiT 7), a small conference at MIT, is starting Friday and running through Sunday. I will be there; if you will be there too, please say hello.
Anyone wanting to follow the Twitter hash tag can look for #mit7.
April 12, 2011
Here is a scary if unsurprising bit of news: a report in PC World on a recent study by Christopher Soghoian: “US Police Increasingly Peeping at E-mail, Instant Messages.” Soghoian’s paper is linked in the article, which begins:
Law enforcement organizations are making tens of thousands of requests for private electronic information from companies such as Sprint, Facebook and AOL, but few detailed statistics are available, according to a privacy researcher.
Police and other agencies have “enthusiastically embraced” asking for e-mail, instant messages and mobile-phone location data, but there’s no U.S. federal law that requires the reporting of requests for stored communications data, wrote Christopher Soghoian, a doctoral candidate at the School of Informatics and Computing at Indiana University, in a newly published paper.
“Unfortunately, there are no reporting requirements for the modern surveillance methods that make up the majority of law enforcement requests to service providers and telephone companies,” Soghoian wrote. “As such, this surveillance largely occurs off the books, with no way for Congress or the general public to know the true scale of such activities.”
March 26, 2011
No comment on this case, or predictions about where it may be headed or whether there will be broader implications for privacy down the road, except to say to anyone out there who uses an email account set up by a public university: best to keep as much as possible on your own private email accounts.
Wisc. GOP defends request for professor’s emails…
August 22, 2010
I blog about tech stuff only very rarely, but this is something I really want to share. If you’re at all concerned about online privacy, you will want to know about the Network Advertising Initiative’s “Behavioral Advertising Opt Out Tool.” Go to it, and it will show you which advertising networks have installed tracking cookies on your computer. You can check the boxes and click through at the bottom to instruct all of those networks to opt you out of their spying, which they are legally obligated to do. Now, it is also possible to block specific sites from setting cookies on your computer using complicated settings in your browser, but this tool is easier, and lets you opt out of networks that have not found you yet.
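For the more technically inclined, it is worth knowing that these tracking cookies are just rows in your browser’s local cookie database, and you can inspect them yourself. As a minimal sketch (assuming Firefox’s `cookies.sqlite` layout, where a `moz_cookies` table has `host` and `name` columns; the ad-network domain list here is illustrative, not exhaustive):

```python
import sqlite3

# A few example ad-network domains (illustrative only, not a complete list).
AD_HOSTS = {"doubleclick.net", "advertising.com", "atdmt.com"}

def tracking_cookies(db_path):
    """Return (host, name) pairs for cookies set by known ad networks.

    Assumes Firefox's cookies.sqlite schema, in which the moz_cookies
    table has `host` and `name` columns.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT host, name FROM moz_cookies").fetchall()
    finally:
        conn.close()

    found = []
    for host, name in rows:
        # Cookie hosts often start with a leading dot (".doubleclick.net").
        domain = host.lstrip(".")
        if any(domain == ad or domain.endswith("." + ad) for ad in AD_HOSTS):
            found.append((host, name))
    return found
```

On a typical Firefox install you would point this at something like `~/.mozilla/firefox/<profile>/cookies.sqlite` (with the browser closed, since it locks the file). It only lists the cookies; the opt-out tool described above is still the easier way to actually get rid of them.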
To say something more general: it is a good thing that, so far, we have been able to get national policies set up that allow us to opt out of privacy-compromising systems, and we have to keep doing that. But our right to opt out is meaningless unless we can actually figure out how the opting-out process works, and then go and do it.
Personally, I find it hard to take seriously the claim of some that they “want to see more relevant advertising served up on [their] browsers [or wherever else].” Advertisers never know us as well as they think they do, and when they do hit close to home, it’s just spooky. I am not comfortable when the opaque networked computer that is everywhere with the soft synthetic voice knows what I had for breakfast. There is too much of a potential for power without accountability when we lose our privacy in that way.
There are other useful tech tools for privacy that readers might tell us about in the comments. This is one I like because it is so quick and easy and doesn’t require me to go through ten proxy servers, etc. Some readers may also be able to provide information about the limitations of this tool.
(Be sure to read the comments below if this interests you – Commenters have some important things to add here.)