There are many ways, but one of them is not filing a federal lawsuit against Google alleging that the plaintiff’s Social Security number, turned upside down, spells "Google" and violates the plaintiff’s right to privacy. Publishing your Social Security number and other personal information in a (quite unusual) public filing against one of the most famous brands in the world is probably one of the most effective ways to undermine your own privacy.
Personal information on a few thousand ABN Amro Mortgage Group (a unit of Citigroup) customers has been leaked to anybody in the world through peer-to-peer (P2P) software. The names, Social Security numbers, and mortgage information of some 5,200 people, stored on an employee’s laptop, were shared via the LimeWire P2P software.
While it is unclear how many times the information has been downloaded over the P2P network, the fact that a computer containing a large amount of personal information was allowed to run P2P software is inexcusable. In all likelihood, Citigroup (or ABN Amro Mortgage) has some sort of restrictions (an administrative policy or a set of IT software restrictions) prohibiting P2P sharing in general, and in all likelihood the employee who used the laptop in question did so without authorization. Even so, the fault remains with Citi and its IT security personnel for failing to prevent or detect such software earlier. Placing the blame on the individual user is no excuse, as it is Citi’s reputation (and possibly checkbook, after claims by the 5,200 affected people are filed) on the line.
According to Pike & Fischer, in testimony at a July House subcommittee hearing on P2P risks, LimeWire Chairman Mark Gorton said that a "small fraction" of users override safe default settings that come with the program, despite the company’s warnings and precautions. The company is working on a "new generation of user interfaces and tools designed with neophyte users in mind," making it "even easier for users to see which files they are sharing and to intuitively understand the controls available to them," he said.
Even if LimeWire is successful in preventing users from ‘inadvertently’ sharing their business document folders, a company that takes its intellectual property, information security, and privacy seriously should, in most cases, take proactive steps to weed P2P software out of its networks.
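How might an IT department actually "weed out" P2P software? A real deployment would use managed endpoint-security tooling, but the basic idea can be sketched in a few lines: sweep machines for artifacts of known P2P clients. The client names below are an illustrative denylist, not an exhaustive one.

```python
import os

# Illustrative denylist of well-known P2P clients circa 2007;
# a production scan would use vendor-maintained signatures.
P2P_SIGNATURES = {"limewire", "bearshare", "kazaa", "emule"}

def find_p2p_installs(root):
    """Walk a directory tree and flag any file or folder whose name
    contains a known P2P client name (case-insensitive match)."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            lowered = name.lower()
            if any(sig in lowered for sig in P2P_SIGNATURES):
                hits.append(os.path.join(dirpath, name))
    return hits
```

A scan like this catches only installed clients; blocking the well-known P2P ports and protocols at the network perimeter is the complementary control.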
This happened to me very recently. I applied to join a certain credit union. The credit union has a wonderful website and, as it should, it has an online application which seems secure enough. I filled out the necessary personal information and submitted my application over the SSL connection. Among the standard questions were a few security questions such as mother’s maiden name, favorite teacher, and others. In response to my completed application, I received an email which also seemed to meet adequate financial institution information security and privacy requirements (e.g., no account numbers, login names, passwords, etc. sent in plain text over email).
Everything seemed fine. Until the next day, when I received a phone call from an "unknown name/unknown number" phone. The lady on the other end very politely identified herself as X from the credit union, welcomed me to the union, and asked whether I would be willing to talk with her briefly about my financial needs and how the credit union might be able to help. This was nice customer service, I thought, and I agreed to talk with her for a "couple of minutes." The next thing she asked was whether I could verify the security information on my account, and she proceeded to ask me for my mother’s maiden name. The call ended shortly after this question, and after I calmly tried to explain to X that asking such questions during an outbound phone call is not a good idea, because anybody could, in theory, make this phone call and obtain my security information.
I went to the credit union’s website and was impressed by the thorough explanations they have on Internet security and by the effort they make to "teach" their customers not to respond to phishing emails asking for personal login or financial information. I am sure the credit union has a policy prohibiting outgoing emails from soliciting customers’ security information. But did anyone at the credit union think to put the same security policy in place for outgoing phone calls to customers? Apparently not.
The Sixth Circuit Court of Appeals held on June 18th, in Warshak v. U.S., that people have a reasonable expectation of privacy in the contents of their email so that the government needs to obtain a search warrant before being able to obtain it.
The issue in the case was whether Warshak had a reasonable expectation of privacy in the email stored on his ISP’s servers. The government had obtained an order, authorized by the Stored Communications Act, to compel Warshak’s ISP to disclose Warshak’s email to the government without notifying Warshak. The defendant argued that this was an improper search and seizure under the Fourth Amendment because of his reasonable expectation of privacy in the email.
The opinion by Judge Martin seems to rely on an analogy between email and phone calls. The courts have long established that there is a reasonable expectation of privacy in the content of phone calls notwithstanding the phone company’s ability to listen to calls. Under the established precedent, the government cannot eavesdrop on calls without a warrant. The Sixth Circuit held that email is similar to a phone call, for expectation of privacy purposes, and the phone call expectation of privacy reasoning applies to email.
The court seems to limit the holding, however. If ISP employees regularly look at customer email in the ordinary course of business or if the ISP has a broad authorization (by EULA or something similar) to look at customer email, then the outcome of the case might have been different as customers would have decreased expectation of privacy. It is also interesting to note that the court recognized that inspection of email by computer programs, such as virus or spam checkers, security filters, or other tools that process email based on its contents, does not decrease the expectation of privacy in one’s email - instead, manual (or otherwise human) inspection of email is necessary to erode the privacy expectation.
The pragmatic comment about this outcome is that it may not apply as broadly as one might think. Most ISPs may, if they have not already, bury somewhere in their EULAs "no reasonable expectation of privacy in stored email" language, and this would defeat the privacy expectation the Sixth Circuit has carefully carved out. The ruling leaves many details to be fleshed out, and subsequent cases interpreting it may turn out to be as important as this one.
Many computer users try very hard to find the perfect software to protect their privacy and the security of their information by setting up encrypted drives, biometric authentication, or similar technological measures. What many people forget to do is set up a simple and very effective privacy protection - the monitor screen filter.
It is not uncommon to sit in a cafe or on an airplane and see a busy businessman or lawyer busily staring at a laptop screen. Unfortunately, it is also not uncommon that you can easily read what is on their screen, especially with modern high-contrast laptop screens. I am not aware of statistics, but there must be instances of confidential legal or business information being lost to "shoulder surfing" while it is displayed on somebody’s screen and seen by others.
There is really no software solution to this. Fortunately, there is a very simple and relatively inexpensive tool that all users who display sensitive information on their laptop (or on their desktop, if they share office space with other people) should consider - a screen privacy filter. This may be the best $50-$100 spent on information security and privacy.
A recent interpretation of Section 230 of the Communication Decency Act by a California Court of Appeals held that an employer is immune from liability based on an employee’s use of its communication networks and systems to send threatening messages. The case is Delfino v. Agilent Technologies, Inc., 06 C.D.O.S. 11380 (Cal. App. December 14, 2006)
The facts are as follows. Plaintiff Delfino was subject to a number of threatening messages sent anonymously over email and posted on Yahoo bulletin boards. The plaintiff contacted the FBI which was able to find out that the source was an employee of defendant Agilent. Eventually the employee admitted that he sent the threatening messages and that he used his work computer to do so. Agilent terminated the employee shortly after.
Plaintiff then sued Agilent under tort law for intentional and negligent infliction of emotional distress. He claimed that Agilent was liable under the respondeat superior doctrine and argued that Agilent was aware that the employee was using its computer systems to send the threats and took no action to prevent him from doing so.
Agilent claimed immunity under Section 230 of the CDA. The relevant portion states in part that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” and “No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. § 230, subds. (c)(1) & (e)(3). The trial court agreed with Agilent and dismissed the case.
On appeal, the Court of Appeals affirmed the lower court, holding that Agilent was an interactive computer service provider immune under the CDA from liability. The court’s reasoning was that one of Section 230’s rationales was to encourage Internet service providers to self-regulate and to prevent the chilling of speech that would result from imposing liability on companies for speech that merely “flows” through the company network, whether authorized or not. Accordingly, the court held that Agilent provided Internet access through its computer servers and therefore provides “interactive computer services.” The court also noted that Agilent was not on notice of its employee’s cyberthreats and that applying Section 230 immunity in this case would not be inconsistent with the CDA.
As a result, employers may be able to successfully claim immunity under Section 230 in circumstances where they are vigilant in developing and disseminating acceptable-use policies for electronic resources and are proactive in detecting and acting on reports of misuse of their electronic assets.
steganography (n.) The practice of hiding messages, often by writing them in places where they are unlikely to be found. Often (wrongly) used interchangeably with cryptography, which relates to encoded messages.
Why Use Steganography?
Unlike encryption, steganography (or stego for short) is useful for "hiding" data in such a way that a third party would not know of its existence and hence would not try to break its encryption or force the encryption key from its owner.
There are many uses for steganography, especially in the information security and privacy field. You may want to exchange sensitive information like passwords or shared secrets over an insecure transmission protocol, such as email or FTP. You can embed secret files that should be available only to a selected audience. You can embed copyright information into digital files and control distribution of content. You can store your own sensitive information in an image, upload it to Flickr, and have the information available anywhere in the world (subject to decryption, of course).
There are a variety of tools that allow steganography. Here is a sample of a few.
- Hide in Picture (Win) - allows you to embed a file into a GIF or BMP image and lets you set a password to retrieve the hidden file.
- wbStego (Win) - allows you to embed files into PDF, HTML, or bitmaps.
- mp3Stego (Win) - allows you to embed files into MP3s.
- PictEncrypt (Mac) - adds text to GIF, JPEG, TIFF, PNG, and MacPICT images.
In late June, the Office of Management and Budget (OMB) issued a mandate to federal agencies to take certain measures to protect the privacy and security of personally identifiable information stored on removable devices. A deadline for implementing the OMB’s security mandate was Monday, August 7, 2006. The mandate guidelines were based on National Institute of Standards and Technology (NIST) requirements and inspectors general at several agencies have already begun reviewing compliance with the OMB checklist mandate.
The 45-day deadline imposed requirements that were beyond execution in such a short period of time. Brett Bobley, CIO of the National Endowment for the Humanities, says that he does not think any agency can say it meets every requirement in the OMB memo:
Within the [past] 45 days your goal is to show your IG that you have thoroughly looked through [the] guidelines and determined where you meet it and where you don’t. Once you know the areas where your policies and procedures fall short, you can start to take corrective action.
While Mr. Bobley is correct that full compliance is impossible, the OMB should be pleased if agencies even take a serious, hard look at their information privacy and security policies and chart plans to improve how data is handled.
A 33-year-old Californian admitted illegally obtaining personal data on thousands of individuals and then using the information to obtain credit cards or otherwise conduct identity theft. In a plea agreement filed on July 17, 2006 with the U.S. District Court for the Central District of California, Bryan Dill pleaded guilty to aggravated identity theft and other fraud-related crimes. Sentencing is scheduled for September 25th.
In the plea, Dill admitted he accessed the Merlin database service claiming to be a private investigator. Dill used the database to obtain personal information belonging to other people and used it to obtain credit cards in their names. Records suggest that Dill conducted at least 1,873 queries through the Merlin system to obtain information on approximately 5,875 people. [DoJ press release.]
Merlin Information Services is a database of public and credit-report records which allows [mostly] anybody to open an account by filling out a form, paying a fee, and searching records which may contain SSNs and DOBs, among other interesting pieces of information.
What is troublesome in this case is the apparent lack of control over who can access the database and the potentially unlimited reach of the information that can be obtained. It becomes a sort of Russian roulette - we know that our records are in these databases, and we know that eventually they will be compromised, either technologically or socially; after that, it is just a matter of luck whether our information is extracted or not.
According to Reps. Bachus (R-Ala.) and Pearce (R-N.M.), any proposal in Congress to limit consumers’ ability to unmask the identities of Web sites with whom they transact business would amount to a "radical change" that would interfere with consumers’ ability to adequately protect themselves. In a hearing entitled "ICANN and the Whois Database: Providing Access to Protect Consumers from Phishing," the Representatives strongly opposed measures to limit consumers’ ability to query the WHOIS database maintaining information on every registered domain name.
Many of our readers know that the WHOIS database was originally intended as a tool for efficient communication with domain owners over domain or hosting technical issues. However, as time went on, other parties started using the database for a variety of [illegal] purposes, e.g., as a source of email addresses to be spammed, or of physical addresses to be used as part of a scam. In addition, exposing private information in plain text, unprotected, on the Internet makes many legitimate domain name owners somewhat nervous - having a name, a telephone number, a physical address, and a list of other domain names owned by an individual can prove very useful to cybercriminals.
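For readers unfamiliar with the mechanics: a WHOIS lookup is about as simple as a network protocol gets (RFC 3912) - send the domain name over TCP port 43 and read back free-form text, which is exactly why the data is so easy to harvest in bulk. A minimal sketch (the server name shown is Verisign's registry server for .com/.net; the field-parsing is a rough heuristic, since WHOIS replies have no standard format):

```python
import socket

def whois_query(domain, server="whois.verisign-grs.com"):
    """Send a raw WHOIS query per RFC 3912: one CRLF-terminated line
    over TCP port 43; the server replies with text and closes."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def parse_fields(whois_text):
    """Pull 'Key: Value' lines from a WHOIS reply into a dict,
    keeping the first occurrence of each key."""
    fields = {}
    for line in whois_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields.setdefault(key.strip(), value.strip())
    return fields
```

The same few lines, run in a loop over a dictionary of domains, are all a spammer needs to harvest registrant contact details - which is the crux of the privacy concern discussed below.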
In April, ICANN (Internet Corporation for Assigned Names and Numbers) decided that it should do more to protect the privacy rights of domain name owners, and an ICANN advisory task force recommended to ICANN’s board that it revamp its policy approach to WHOIS by limiting access to the data for technical administration purposes only. Intellectual property owners and government agencies have objected to this proposal, fearing that, if adopted, it could hinder IP or law enforcement efforts. Even though no one at the hearing argued that law enforcement should not have unfettered access to the database, the issue was framed as whether consumer access to WHOIS might be bargained away in an effort to strike a deal that would permit continued access to the data by private entities, such as IP owners and banks, who have come to depend on the data for their own enforcement efforts.
In addition, Rep. Bachus, with FTC and Department of Commerce support, indicated that he was worried that limiting consumer access to WHOIS could deprive consumers of their "first line of defense" in protecting themselves, forcing them to complain to the Federal Trade Commission, which would be swamped with consumer complaints. The problem with this claim, however, is that 1) WHOIS information, especially in cases where potential fraud is involved, is very often inaccurate, and 2) consumers may lack the technical savvy to sift out the true identity of the registrant. Marc Rotenberg, executive director of the Electronic Privacy Information Center, suggested that WHOIS data should be treated similarly to department of motor vehicles records - not widely available to the public, but accessible in appropriate and somewhat narrowly defined circumstances.
See more information on the hearing and witness testimony.