08:30–09:00  Registration and Welcome
09:00–10:30  Session Chair: Prof Andrew A. Adams
             Keynote Speaker: Prof Luciano Floridi
10:30–11:00  Break
11:00–12:30  Session Chair: Prof Nohyoung Park
             Digital vigilantism: A conceptual, ethical and policy challenge
             Dr Toru Nakamura
12:30–13:30  Lunch
13:30–15:00  Session Chair: Prof Ana Maria Lara Palma
             Social materiality of surveillance camera: Case study of Kamagasaki Area of Japan
             Prof Kyung-Sin Park
15:00–15:30  Break
15:30–17:00  Session Chair: Prof Roger Clarke
             Dr Sarah Stevens: Consumer behaviours in an exploratory study between Europe and Asia-Pacific
             Smokescreen or the Real Deal: Website Privacy Notices of Companies in New Zealand
18:00–20:00  Conference Dinner (by Invitation)
09:00–10:30  Session Chair: Prof Kiyoshi Murata
             Keynote Speaker: Naoya Bessho
10:30–11:00  Break
11:00–12:30  Session Chair: Prof Jim Foster
             Notional and actual financial penalties for privacy breaches: Asia-Pacific and European comparisons
12:30–13:30  Lunch
13:30–15:00  Session Chair: Prof Graham Greenleaf
             Social Impacts of Snowden's Revelations in Japan: Exploratory Research
15:00–15:30  Break
15:30–17:00  Session Chair: Dr David Murakami Wood
             Dr Kazuyuki Shimizu
             On Taiwan Personal Information Protection and Administration System (TPIPAS)
             Privacy Reform Process in Japan: Views from the Business Community
17:00–17:15  Closing Remarks
Naoya Bessho, Corporate Officer and Executive Vice President, Yahoo! Japan
The Right to Be Forgotten and Japanese Law
Abstract
I would like to present our thoughts on current topics concerning privacy on the Internet, especially the right to be forgotten. The EU court's decision raises a major question of how we should balance privacy and freedom of speech. In the 1980s, the so-called "right of access" was one of the issues debated in relation to the right of free expression under the Constitution of Japan. That concept was proposed at a time when the power to deliver information lay in the hands of the media, such as newspapers, radio and TV, rather than in the hands of individuals. Freedom of speech is the most important concept supporting democracy; everyone should be guaranteed the right to see, listen to and read other people's opinions in order to form their own political opinions. In addition, "speech" is not a unilateral action: it always has a listener. In this regard, it is clear that without the power to reach people, freedom of speech does not work. From such a viewpoint, how should we evaluate the decision given by the EU court? I would like to revisit this issue and consider the core questions surrounding the right to be forgotten in light of the Japanese Constitution.
Prof Luciano Floridi of the Oxford Internet Institute at the University of Oxford
Paternalism and the Right to be Left Alone
Abstract
In this paper, I analyse a difficult question posed by the tension between fostering and protecting rights in security contexts. On the one hand, liberal and democratic societies aim at ensuring that civil and political rights such as privacy, freedom of thought, of conscience, of speech and of association, as well as the right to assemble, are not just legally protected but also culturally valued and socially enhanced among their citizens. On the other hand, in security contexts (especially, but not only, those linked to terrorist, military, and organised-crime threats), the same societies need to identify satisfactory ways to deal with abuses of such rights and with exceptions, sometimes imposing circumscribed and temporary limitations. The risk is that misunderstandings about, and mismanagement of, rights and their constraints may easily lead to significant problems in terms of public safety and security or democracy and freedom. The ethical difficulty generated by such a risk concerns the identification of a morally acceptable balance, in terms of a point of equilibrium between different requirements, or of a trade-off between pro and contra, or of a compromise among different needs. Paternalism is one of the ethical and political strategies adopted to address that ethical difficulty. Paternalism, broadly understood, is any approach that sees an agent A restrict or shape the liberty or autonomy of another agent B for B's own good. In this paper, I offer an analysis of different forms of paternalism, discuss some major difficulties, argue that one form of paternalism is less controversial, and show how it may be applied to deal with the ethical difficulty of finding a morally acceptable approach to toleration and privacy as the right to be left alone.
Prof Andrew A. Adams of The Centre for Business Information Ethics, Meiji University
Privacy, Security and Surveillance
Abstract
The revelations by Edward Snowden of the activities of the US's NSA and the UK's GCHQ continue, and continue to generate debate. The founder of Lavabit, the secure email service used by Snowden to contact Greenwald and other journalists, recently broke his silence over his decision to shut down the network in August 2013. Other services such as TrueCrypt have also shut down. World-renowned security expert Bruce Schneier (who has been acting as a technical consultant to help Greenwald understand the Snowden material) has written a number of times in the last year about the insecurity generated by actions which weaken security infrastructure, such as the purchase of zero-day exploits and the weakening of cryptography standards. This talk explores the relations between privacy, security and surveillance in this context.
Individual security versus societal security: societal security is not simply a summation of individual security. The right of an individual to be free from the threat of law enforcement entering their residence and searching it for contraband, evidence of crimes committed or evidence of intent to commit crimes, is a classic example of this tension. The US constitution tries to create a balance in this area by limiting the arbitrary power of the state to invade private homes, requiring, in most cases, the prior issue of a warrant both for the search to be lawful and for the resulting information to be acceptable in court. Many other jurisdictions, particularly ones which wish to at least appear democratic, have similar limitations on the legality of entry to and search of premises and the admissibility of material (physical or informational) gained thereby.
Monitoring of Distance Communications: The application of these principles to new distance communications has long been a source of further legal, political and public debate, leading in the US to the famous 1928 Olmstead decision allowing telephone wiretapping without a warrant, overturned in 1967 in the Katz case. In the UK, various statutes have provided a legal basis for accessing communications in the course of investigations, but the information gained from eavesdropping is generally not admissible in court. This is not due to civil liberty concerns but at the behest of the communications monitoring groups, who claim that revealing such evidence in court would give criminals information about the extent and limitations of surveillance operations, curtailing its utility in investigations.
Security Technology: Technology used to secure physical premises or electronic communications can be used equally by legitimate citizens and businesses to maintain the privacy and confidentiality of their affairs, and by criminals to provide cover for their unlawful activities. As is well documented by Levy in "Crypto", the US government (the NSA specifically) worked hard in the 1970s and later to discourage open research and publication on various forms of cryptography. Until the late 1990s the US, amongst other countries, regulated cryptographic algorithms and their implementations as munitions.
In 1997 Director Freeh of the FBI called for a "balanced" solution to the problem that secure communications can be used for good and evil alike. Unfortunately, the idea that communications security can be at once sufficiently strong against cybercriminals when used by law-abiding citizens (for instance to perform online banking or shopping) but at the same time be weak enough for the FBI, NSA or CIA to crack into the communications of criminal gangs, is a hard, perhaps impossible, circle to square. Just as it is impossible to both allow access to digital entertainment information for use but deny access for copying (the DRM fallacy), so too is the idea of law enforcement-only backdoors in security a fallacy. Weak security for the criminals against law enforcement means weak security for the citizen against criminals.
The Kafka-essence of Secret Mass Surveillance: Ladar Levison's description of the operation of the secure communications system Lavabit, including its compliance with individual warrants served regarding customers accused of using the service for illegal activities such as trading in child abuse imagery, and then its decision in 2013 to shut the service down rather than reveal the system's global keys, is worrying evidence of the US surveillance authorities' attitude to the balance between mass and targeted surveillance: even when a system willingly complies with targeted surveillance orders, they will still seek mass surveillance authority and technical access. The description of the court processes with which the Lavabit operators had to cope reveals just how far from the ideals of a free society secret mass surveillance can take us.
Gertjan Boulet of Vrije Universiteit Brussel
Mutual recognition of sanctions as a facilitator for law enforcement cooperation between data protection authorities in the cloud
Abstract
Data Protection Authorities (DPAs) and privacy enforcement authorities (PEAs) face challenges as regards their mutual cooperation in enforcing privacy and data protection laws. The significant variation in sanctioning powers among DPAs and PEAs, and the lack of harmonized criminal sanctions, can be seen as barriers to cooperation amongst them. In the European Union (EU), the proposed General Data Protection Regulation (GDPR) empowers a so-called lead DPA to supervise the processing activities of a data controller or processor in all EU Member States. In April 2014, the German DPAs adopted a Resolution on the "Future Structure of Data Protection Supervision in Europe", providing that the lead DPA should work closely together with other DPAs. In a similar vein, the Council of the EU, representing the EU Member States, has repeatedly underlined the importance of clarifying the competence of local DPAs in the GDPR, including a discussion of the distribution of powers to adopt corrective measures.
Thus, mutual recognition by DPAs and PEAs of their respective sanctioning powers does not seem to be evident. This may partly explain why, in 2013, following a joint investigation by EU DPAs into Google's new privacy policy, DPAs from France, Germany, Italy, Spain, the UK and the Netherlands started their own investigations under their own national data protection laws. Whereas the French and Spanish DPAs in the Google case have already imposed significant sanctions on Google, the Irish DPA seems to see the use of sanctions as a last resort. The Irish DPA and the German DPAs also took fundamentally different positions on the sharing of data with US intelligence services by the Irish subsidiaries of Facebook and Apple.
The widespread use of cloud computing, that is, the storage and maintenance of data via online cloud computing service providers in data centers located all over the world, will increase the number of cross-border cases involving the enforcement of privacy rights and data protection laws. A key challenge will be to address differences between the sanctioning powers of European and Asian DPAs. So far, a cooperative approach between European and Asian DPAs can be observed. First, in the above-mentioned Google case, the Asia Pacific Privacy Authorities (APPA) supported the findings of the EU DPAs. Second, DPAs from the EU and APEC have cooperated in the field of standard setting, by developing a "referential for requirements for Binding Corporate Rules submitted to national Data Protection Authorities in the EU and Cross Border Privacy Rules submitted to APEC CBPR Accountability Agents". However, challenges can be expected when trying to overcome the different approaches of EU and Asian countries as regards the sanctioning of data protection wrongs.
Prof Roger Clarke
of Xamax Consultancy Pty Ltd,
Australian National University and
University of New South Wales
with Prof Andrew A. Adams and Arash Shaghaghi
Easy Privacy for Consumer-Oriented Social Media
Abstract
Most social media services are highly exploitative of consumers' data. Previous research has identified the desirable characteristics of an alternative approach to social media. These include appropriate architecture, avoidance of some technical features and inclusion of others, ease of use, appropriate terms of service and privacy policies, and business models that are less dependent on the exploitation of data about users. The focus of this presentation is on privacy-related aspects of such services.
The notion of Privacy-Enhancing Technologies (PETs) has been pursued since the mid-1990s. Yet PETs have achieved remarkably little penetration. A number of consumer-oriented social media have been conceived, some have been implemented, and a few have been deployed. But, like other PETs, they have achieved very limited adoption. A review of relevant theory identifies the need for drivers for adoption, and for means of overcoming impediments to adoption. Among those factors are several that fall within the 'Easy Privacy' theme of this conference.
The first concern addressed in this paper is the question of usability. The body of literature relating to the usability of security and privacy tools is applied, in order to identify key issues. These include awareness, rapid learnability and intuitiveness, transparency, convenience, consistency, recoverability and easy configurability and feature invocation.
A second aspect is the avoidance of technical functionality that is privacy-abusive and the inclusion of technical features that are privacy-supportive. A paper presented at the Third Asian Privacy Scholars Network conference in 2013 is drawn on to identify key privacy features.
Finally, it is argued that, for many users, consumer-orientation is not sufficiently attractive to cause them to abandon the exploitative services to which they have become accustomed. Rather than trying to make new services attractive to everyone, designers need to identify categories of users who do, or at least rationally should, regard privacy as a significant concern. On the basis of a risk assessment relevant to each such user segment, designs can emerge that are attuned to each segment's particular needs.
Social media designs have not yet delivered 'Easy Privacy'. This presentation applies existing theory to identify key shortfalls, and to suggest approaches that can be used to overcome them.
Prof Jim Foster of Keio University
Privacy Reform Process in Japan: Views from the Business Community
Abstract
Japan's Strategic Headquarters for the Promotion of an Advanced Information and Telecommunication Network Society (IT Strategy HQ) announced the start of a "Policy Review on the Protection and Utilization of Personal Data" on December 20, 2013. A policy outline for the amendment of relevant laws is expected to be published in July 2014, with a 30-day public comment period. New legislation on privacy is targeted for presentation to the Diet in January 2015.
The IT Strategy HQ has tasked the Personal Data Review Working Group with three issues: promoting the use of personal data in the "big data" era, meeting user expectations for privacy, and aligning Japan's approach to privacy with international best practices. The goal is new legislation that will enhance the protection of personal information and privacy, while eliminating ambiguities in current regulations that have led to business uncertainty in utilizing personal data more extensively.
The business community in Japan is following these discussions closely. Uncertainty with regard to standards and implementing guidelines for privacy under the current law has raised customer concerns and discouraged business activity. For these reasons, Government of Japan (GOJ) efforts at reform have been welcomed, particularly the focus on promoting greater coordination and collaboration among the various ministries and agencies in enforcing privacy principles and rules.
Nonetheless, there are numerous problems with the direction that the current review is taking. The American Chamber of Commerce in Japan (ACCJ) Internet Economy Task Force released a Viewpoint in March 2014 on "New Measures to Protect Privacy in Japan" and prepared a policy statement in collaboration with Keidanren the same month on the occasion of the Fifth Meeting of the US-Japan Internet Economy Dialogue. Both documents took issue with a government proposal to introduce a new "third party" body to oversee privacy and pressed the GOJ to develop a process for incorporating the views of all relevant stakeholders, including the business sector.
Specifically, the business community is looking for greater clarity in the following areas: the definition and scope of personal information; conditions for the transfer of de-identified data to third parties; measures to introduce a flexible framework for evaluating legal and practical risks; reporting requirements in the event of data leakage; the need for consumer consent when repurposing data usage; handling of data disclosures and deletion requests; and respect for due process in administering the law.
Japan has a unique opportunity to create a new framework that is robust and protective of privacy, but that is also balanced, flexible and supportive of innovation and of economic and social growth. The Abe administration has signaled its support for meaningful reform, but it remains to be seen whether the bureaucratic inertia and sectionalism that have hampered enforcement of the current law, and postponed needed changes to the regulations for over a decade, can be overcome.
Prof Graham Greenleaf of University of New South Wales
Notional and actual financial penalties for privacy breaches: Asia-Pacific and European comparisons
Abstract
Money talks, and financial penalties (whether fines for criminal offences, administrative fines, compensation orders, or mediated settlements) are one of the simpler ways to measure the consequences of privacy breaches. If appropriately publicised, they also send signals to all relevant parties about the costs of privacy breaches. Other consequences may be more financially serious (e.g. loss of business reputation or share value, or costs of remediation) but are more difficult to measure. There is also usually a considerable gap between the maximum penalties specified in legislation (notional penalties) and those that are imposed. Frequency of imposition is also of course relevant.
This paper will present such data as is available on both notional and actual penalties arising from data privacy laws in Asia, Australasia and North America, and provide some comparisons with available data from Europe (from the European Fundamental Rights Agency and other sources).
This is only a preliminary report, in a field in which there is a surprising paucity of data.
Prof Gehan Gunasekara
of University of Auckland Business School
with Nora Xharra
Smokescreen or the Real Deal: Website Privacy Notices of Companies in New Zealand
Abstract
This paper examines the website privacy notices of listed companies in New Zealand, comparing them with overseas companies listed in New Zealand and with a snapshot of United States companies listed on the New York Stock Exchange (NYSE) for sector comparison. The privacy notices are assessed for legal compliance and best practice against several criteria including accessibility, readability, compliance when providing online services, procedures for privacy breaches, transparency in relation to government requests for personal information and independent privacy assurance.
The paper explores the hypothesis that as personal information is increasingly seen as the new currency, individuals are more likely to share it with corporations when they feel in control as to the manner of sharing and the uses to which the corporation intends to put their information. One of the ways in which business can promote public trust as to the manner in which personal information is safeguarded is to be transparent through their web profile as well as in their reporting and governance documents. The paper focuses on the web profiles of companies alone and argues that privacy by design should be a central feature of the design of the websites themselves. It further argues that websites ought to do more than pay lip-service to privacy by using the notices to substantially comply with legal requirements as well as best practice where privacy is concerned.
Overseas research has pointed to a startling discrepancy between the facade portrayed in corporate practices and the reality behind it. The very real likelihood exists that consumers are being given a false sense of security or, worse, being actively misled through the use of corporate privacy notices. Such research has found evidence of obfuscatory language, unclear or undefined policies and market orientation in website privacy notices, to the detriment of consumer choice and consumers' rights. The paper assesses whether this is the case in New Zealand also, given New Zealand's comprehensive data protection legislation governing the private sector.
The paper finds that the performance of New Zealand companies lags behind that of their overseas counterparts and that this is likely to put them at a competitive disadvantage through customers' reluctance to disclose their personal information. The research finds areas of deficiency (especially in relation to the requirements of the new Australian Privacy Principles for New Zealand companies doing business there) and makes recommendations as to how both legal compliance and best practice might be achieved through appropriately constructed privacy notices. It also finds significant differences between the practices of New Zealand and overseas incorporated companies and tentatively suggests explanations for these differences.
Hiroshi Koga of Kansai University
Social materiality of surveillance camera: Case study of Kamagasaki Area of Japan
Abstract
This paper presents the current status of and analyzes problems with surveillance cameras in Kamagasaki, an area of Osaka in Western Japan whose residents are primarily day laborers who lack permanent addresses in the city. In particular, this paper adopts the perspective of "sociomateriality" from Information Systems Research.
With the development of social media, surveillance theories based on "the panopticon" have been replaced or supplemented by theories of "perioptic surveillance" or "social orchestration", and many studies discuss mutual surveillance through social media. The subject of this paper, however, is the traditional surveillance camera, and the mutual monitoring that surveillance cameras produce.
Kamagasaki is a region in the southwestern part of Osaka that was artificially formed as a district of day laborers. However, Osaka City does not record the name "Kamagasaki" on official maps, and the media have not used the name "Kamagasaki" in their discourse. The reason is that, since 1961, day laborers in the Kamagasaki area have been in regular confrontation with the police, and the day laborers' efforts to secure their human rights with the assistance of volunteers have been labelled "riots" by the media.
Out of consideration for neighboring districts, Osaka City instead calls the area where the riots occurred "Airin Chiku". That is, the place name "Kamagasaki" has been erased from the city's maps.
Claiming that it is to prevent "riots", local authorities have installed surveillance cameras in the area. In a little less than one square kilometre, 15 cameras were installed.
Human rights campaigners supporting the laborers have sought the removal of the surveillance cameras. As a result of litigation, the Osaka District Court ordered the removal of just a single camera which was installed in front of the base of the volunteers. The ruling was on the basis of the right of publicity in Japan.
The focus of this paper is on the relationship between the surveillance cameras and the transformation of Kamagasaki. The environment surrounding the day laborers has changed greatly: with their aging and the slump in the economy, day laborers have become "welfare recipients" or homeless people. These welfare recipients and homeless people were then considered security risks and used to justify an expansion of surveillance, with a plan to increase the number of cameras to 45 in 2014.
Here, surveillance cameras were transformed from devices for monitoring the volunteers' support of human rights into tools for excluding welfare recipients. Ogura has long argued that surveillance cameras in Japan have often been used to target or eliminate vulnerable groups in Japanese society. The social construction of surveillance cameras in Japan is thus discussed through the case study of Kamagasaki.
Dr Sarah Stevens of University of Burgos
with Prof Ana Maria Lara Palma, Prof Michael Schleusener and Prof Kiyoshi Murata
Consumer behaviours in an exploratory study between Europe and Asia-Pacific
Abstract
The current generation of consumers leave their traces and their data in every electronic transaction, even when simply using their smartphones with mobile Internet and GPS. Several studies have described how the new consumer's behaviour compares with past behaviour; Kirk, Chiagouris and Gopalakrishna (2012) pointed out that "while consuming information in print form is a linear process, consuming information online is an interactive, consumer-driven process, offering participants the opportunity to change the view and content of the product they are consuming with a mouseclick or the tap of a finger". While they do this, consumers are feeding various electronic systems with their opinions and personal data without any awareness of the implications for manipulation, privacy and protection. Companies continuously collect this data and generate an individual user profile by combining all of the collected data. Through this, they are able to predict consumers' future behaviour as well as upcoming consumption needs.
In the last century, consumer behaviour research dealt exclusively with consumer insights. Nowadays, a wide range of technological factors alters the behaviour of consumers. According to Williams, Crittenden, Keo and McCarty (2012), "in conjunction with this social media revolution has emerged a consumer who has grown up with brand new perspectives and redefined the interplay of communications, relationships, brands, technology and media". More effectively than ever before, all of these communications, relationships, brands, technologies and media feed a new phenomenon: consumers are vulnerable to the manipulation of their demand. This kind of manipulation is far less transparent to consumers than conventional advertising. The threat is that consumers' minds are manipulated in such a manner that they believe they have made a decision of their own free will. Therefore, in the context of manipulation, the main objective of this paper is to reach a better understanding of the opportunities to protect consumers against manipulation and against the non-transparent use and combination of personal data.
Knowledge held by consumers is essential for their protection. This perspective can be used to obtain a differentiated view of various consumer groups from Europe and Asia, all of whom differ in their vulnerability to invasions of privacy insofar as they differ in socio-demographics, psychographics and behaviour. According to Binding (2013), in China, for example, "consumer protection is understood as essentially the responsibility of the state, (...), nevertheless, the liability of businesspersons is subject to primary and it is required to look ahead to upcoming reforms". Under the concept of protection, this paper provides a basis for identifying the different protection needs of various consumer groups and can also be used to make recommendations for the regulation of personal data; in addition, the research reports an exploratory contribution to the consumer policy discussion and establishes a basis for the development of approaches to consumer protection. The recommendations developed can be applied to the legal, technological and social aspects of privacy and data-use protection.
Different countries provide different rules for dealing with consumers' privacy. In addition, it is assumed that consumers from different countries have different cultural backgrounds; it can thus be expected that they also have different behaviours and motives, and consequently different vulnerabilities to privacy invasion. "Behavioural response when consuming interactive digital information products may also differ depending on the user's goals, motivations or the usage context" (Williams et al., 2012). Yet all of them share the same concerns: sensitivity to privacy, and the relevant legislation and laws. The target population comprises European and Asia-Pacific consumers, and several prior studies guide this exploratory question. For instance, Adams, Murata and Orito (2009, p. 339) argued that "although the exact boundaries of what information may be passed to whom under which circumstances differ from other cultures, the Japanese are not uniquely possessed of a lack of a sense of information privacy. This sense depends, as it does in other cultures, on the conception of self and one's place in society. Social norms exist to provide sufficient privacy, real or perceived, for individual sanity and social cohesion". The degree of susceptibility to influence, and of potential damage, is outlined in this paper through a comparison between European and Asia-Pacific consumers. The third contribution of this study therefore explores the different groups of customers and the different heuristics consumers have developed in dealing with individual security and privacy.
In more depth, the following research steps were executed to achieve the objectives described above:
Exploring secondary studies to determine the short-and medium-term risk potential in Europe and Asia-Pacific. The paper differentiates three types of threat: the substantive and economic threat as well as threat of diffusion of data.
Design and implementation of an expert survey to determine the future developments in the flied of privacy and personal data.
High potential persons were asked about their opinion about these four things: Current status of the current practice in business, current state of the art/technically possible, current opinion of the policy/consumer protection and opinion future development/trend research. From the results of the three types of threat described above and the expert survey five levels of threats have been developed. In these five levels consumer have a decreasing chance to save their privacy and personal data. Thus it is expected that consumers feel increasingly more threatened with every higher level of threat. The confrontation is realized in the next step by focus groups or by face to face interviews.
Inquiry of the consumer perspective by means of qualitative focus groups or by means of face to face interviews. In this step the really perceived threat of consumer is identified. Afterwards the expected/adopted threat can be compared with the perceived threat of consumers who were interviewed.
Identification of different types of consumers and their respective behaviour patterns. This step evaluates the results, identifies different types of consumers via a cluster analysis, and identifies consumers' behaviour patterns (heuristics). In this way, the reasons for consumers' behaviour can be discovered and European and Asia-Pacific consumers can be compared. The reasons why European and Asia-Pacific laws may not be sufficient to protect consumers can also be discussed.
Recommendations for a political and social framework for the protection of different types of consumers.
Comparison between European and Asia-Pacific consumers in the field of heuristics and vulnerability, as well as consumer policies.
After describing the underlying framework of consumer manipulation, privacy and protection, the methodology used in this exploratory study is outlined:
An expert survey was conducted with European and Asian employees of companies with relevance to the topic. The qualitative results of this expert survey are the five escalation levels of adopted threats and the five heuristics (repeatedly observable ways in which users deal with privacy and personal data on the Internet): personalized advertising on the Internet, advertising combined with data from social networks, smartphones with geodata and a lack of transparency, reactive services, and proactive services. The quantitative results are the four types of consumers, developed with a cluster analysis in SPSS, and the comparison between adopted threat and perceived threat at each escalation level.
A second set of data was collected from three focus-group interviews with a presenter and around 24 volunteers (people from 18 to 49 years old who are internet-savvy, some with access to the Internet via mobile phones, with mixed low and high levels of education, and with no experts such as informatics students or IT workers among them). The volunteers were confronted with the five escalation levels, first qualitatively and then quantitatively. In the qualitative part, examples of every level were shown to the volunteers live on the Internet. In the quantitative part, the volunteers were asked whether they felt comfortable or uncomfortable with the content of the level just shown, evaluating their emotional state after every level on a predefined scale.
Taking into account that security and privacy affect not only consumers but also the companies that offer their products and information online, the companies' perspective is also included.
Through cluster analysis of the quantitative results of the focus groups and the face-to-face interviews, four types of consumers were identified as a further result of this research project. The expected (adopted) threat, determined by the expert survey, was compared with the threat perceived by each of the four types of consumers. With regard to the heuristics, a correlation is conceivable between the perceived threat and a kind of illusion of control that consumers have over the disclosure of data and information about themselves. Finally, the four types of consumers were contrasted with the heuristics. In this way, the different interests of consumers can be effectively represented and recommendations can be given for the protection of different types of consumers in different countries or regions. Future research should explore further correlations and causalities between consumer privacy, protection and vulnerability across countries, including parameters such as spending power or cultural background. In addition, the effect of the new consumers, the so-called Digital Natives, requires further study to tease out the influence of consumer behaviour on their security, privacy and protection in this fast-moving world of technological development.
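The consumer-typing step above can be illustrated with a small sketch. The abstract reports using SPSS; purely for illustration, the following is a minimal k-means clustering in plain Python over hypothetical two-dimensional survey scores. All data values and variable names here are invented, not taken from the study.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means for 2-D points with deterministic farthest-point seeding."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    # Farthest-point initialisation: start from the first point, then
    # repeatedly add the point farthest from all chosen centroids.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(d2(p, c) for c in centroids)))

    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: d2(p, centroids[c]))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return labels

# Hypothetical responses: (perceived threat, willingness to disclose), 1-5 scale.
responses = [(1.2, 4.5), (1.4, 4.2), (4.6, 1.3), (4.4, 1.1),
             (2.4, 2.6), (2.6, 2.4), (4.5, 4.4), (4.3, 4.6)]
labels = kmeans(responses, k=4)
```

With k=4, the four resulting clusters play the role of the study's four consumer types; each type's mean perceived threat could then be compared against the expert-derived adopted threat, as described above.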
Adams, A. A., Murata, K., Orito, Y. (2009): The Japanese sense of information privacy. AI & Society, Vol. 24, pp. 327-341. Springer-Verlag London Limited.
Adams, A. A., Murata, K., Orito, Y. (2010): The development of Japanese data protection. Policy & Internet, Vol. 2, Iss. 2, pp. 95-124.
Ariza, J. A., Morales Gutiérrez, A., Morales Fernández, E. (2000): Gestión integrada de personas. Una perspectiva de organización. Desclée De Brouwer, Bilbao.
Binding, J. (2013): The development of Japanese Data Protection. China-EU Law Journal.
Kirk, C. P., Chiagouris, L., Gopalakrishna, P. (2012): Some people just want to read: the roles of age, interactivity and perceived usefulness of print in the consumption of digital information products. Journal of Retailing and Consumer Services, Vol. 19, pp. 168-178.
Murata, K., Orito, Y., Fukuta, Y. (2014): Social Attitudes of Young People in Japan towards Online Privacy. Journal of Law, Information and Science, Vol. 23, Iss. 1, pp. 137-157.
Williams, D. L., Crittenden, V. L., Keo, T., McCarty, P. (2012): The use of social media: an exploratory study of usage among digital natives. Journal of Public Affairs, Vol. 12, Iss. 2, pp. 127-136.
Wan-Ping Li of Institute for Information Industry
On Taiwan Personal Information Protection and Administration System (TPIPAS)
Abstract
Protection of personal information has become a global phenomenon. In addition to the evolution of laws and regulations concerning the protection of personal information, another trend that should not be ignored is the emergence of various certification systems for the management of personal information collected by legal entities. These systems can play a very important role in making it easier for people to decide whether a legal entity can be trusted with the personal information it collects.
In Taiwan, there are several such certification systems. These systems include (but are not limited to) the Taiwan Personal Information Protection and Administration System (TPIPAS), a system established and maintained by the Ministry of Economic Affairs (MOEA) of the Taiwanese government; BS 10012, a British system promoted by a private company; the so-called Privacy Compliance Audit (PCA); and, interestingly, the ISO 29100 privacy framework.
Although they share the same aim, these systems have very different features. This research conducts a case study on TPIPAS. It first explores the differences between TPIPAS and other systems, and then discusses whether the special features of TPIPAS might enhance or weaken the competitiveness of the system, and how those features may influence the quality or level of protection of personal information in the legal entities that adopt TPIPAS.
For example, since TPIPAS was launched, it has required every entity that wishes to apply for the TPIPAS audit and its privacy seal, the data privacy protection mark (dp.mark), to hire at least one TPIPAS Internal Management Professional. This may strengthen the competitiveness of TPIPAS: on the one hand, these TPIPAS professionals have an incentive to promote TPIPAS in the entities they work for; on the other hand, an entity may come to need more TPIPAS professionals to help maintain its TPIPAS-compatible management system (for example, because TPIPAS also requires a PDCA process for an entity's management system of personal information). This may also improve the quality or level of personal information protection in the entity.
Declaration of Interest: The author of this research declares that the author is one of the members who helped to establish and maintain TPIPAS on behalf of the Ministry of Economic Affairs (MOEA) of Taiwan. The author will nevertheless do their best to remain fair and objective while conducting the research.
Prof Kiyoshi Murata
of the Centre for Business Information Ethics, Meiji University
with Prof Andrew A. Adams, Dr Yohko Orito, Dr Yasumori Fukuta and Prof Ana Maria Lara Palma
Social Impacts of Snowden's Revelations in Japan: Exploratory Research
Abstract
Former NSA contractor Edward Snowden revealed the effectively limitless information gathering and indiscriminate mass monitoring carried out by the NSA (National Security Agency), an intelligence agency of the US Department of Defense, and its British counterpart GCHQ (Government Communications Headquarters), through the UK's The Guardian newspaper and the US's The Washington Post newspaper, starting on 5th June 2013. The revelations painted an astonishing picture of the PRISM program, which allowed the NSA to monitor individual users, not only in the USA but throughout the world, indiscriminately and in bulk, by collecting their communication data such as email contents, search history, live chats and transferred files. It did this through direct access to the servers of US IT companies including Microsoft, Google, Yahoo!, Facebook, Apple, YouTube and Skype [1]. The Boundless Informant program, operated by the NSA to collect, analyse and store billions of records of emails and phone calls passing through US communication infrastructure, was also reported [2].
His revelations have attracted heavy doses of both praise and censure: whereas some have positively evaluated his deed as an act of valour to protect democracy against the tyranny of the state, others have criticised him as a traitor to a country that has been preoccupied with responses to the threat of terrorism since the 9/11 attacks. Indeed, on 21st June 2013, the US government filed espionage charges against him.
It is alleged that lively discussions of national security, the safety and security of societies, personal freedom and privacy have been generated around the world by Snowden's revelations, and that the establishment of EU data protection rules was postponed due to the disturbing news report, based on his revelations, that the mobile phone German Chancellor Angela Merkel personally used had been tapped by the US intelligence agency for several years. On the other hand, a Pew Research Center/USA Today survey found that 57% of young (18- to 29-year-old) respondents considered that the revelations had served rather than harmed the public interest, 42% said the US government should not pursue a criminal case against Snowden, and 78% answered that Americans shouldn't have to give up privacy and freedom in order to be safe from terrorism [3].
Given that the contents of Snowden's revelations can provoke controversy over the future of democracy, freedom, privacy, national security and the international community, these allegations and survey results may elicit sympathy from many people living in democratic countries. However, this may not be the case in Japan, despite its having the longest history of democracy in Asia, considering the results of previous work on the social attitudes of young people in the country towards online privacy conducted by three of the authors [4].
This study attempts to investigate the social impacts of Snowden's revelations in Japan, focusing on Japanese young people's awareness of and interest in the revelations and their social meaning. A questionnaire survey and follow-up interviews will be conducted for this investigation. The survey results will also be used for a cross-national analysis of the social impacts of the revelations.
NSA slides explain the PRISM data-collection program, The Washington Post.
NSA Prism program taps in to user data of Apple, Google and others, The Guardian.
Boundless Informant: the NSA's secret tool to track global surveillance data, The Guardian.
Most Young Americans Say Snowden Has Served the Public Interest, Pew Research.
Murata K., Orito Y., Fukuta Y. (2014). Social Attitudes of Young People in Japan towards Online Privacy, Journal of Law, Information and Science, 23(1), pp. 137-157.
Orito Y., Murata K., Fukuta Y. (2013). Do Online Privacy Policies and Seals Affect Corporate Trustworthiness and Reputation? International Review of Information Ethics, 19, pp. 52-65.
Dr Toru Nakamura
of KDDI R&D Labs
with Prof Andrew A. Adams, Prof Kiyoshi Murata, Dr Shisaku Kiyomoto, Dr Haruo Takasaki, Dr Ryu Watanabe and Dr Yutaka Miyake
Introduction to Privacy Policy Manager (PPM)
Abstract
Personalization has been successfully implemented in a variety of online services, such as targeted advertisements, individualized searches, and location-based information provision. Privacy has been a major concern for users of personalized services, not only in online web services but also in offline, real-world services. Online to Offline (O2O) is a new direction for commercial services; however, privacy concerns have risen due to the expansion of service provider collaborations. Users have been particularly concerned when diverted to services with which they were unaware of having any relationship.
We introduce a new mechanism for providing ongoing privacy and data protection control to users, called the Privacy Policy Manager (PPM). The PPM provides an ID management service and a proxy service including an access control mechanism, among other functions. The goal of the PPM is to give users greater and simpler control over the provision of their individual data to online service providers, including default, site-specific and session-specific control, a trace of information previously provided, and a mechanism for informing service providers of requests for deletion of data. It is not simply a P3P user agent: although it could use P3P as a mechanism, it is not limited to the P3P details.
We implemented a prototype of PPM and some demonstration services via PCs and Android smartphones. We show some important features and functions, such as personalization of privacy policies, privacy policy checking, log viewing, and deletion of personal information.
We show the results of a survey of user reactions to the concept and prototype of the PPM, gathered through experiments using explanation and video demonstration. The experiments' objective was to gain data on (i) attitudes to privacy policies and personalized services, (ii) the potential impact of the PPM, particularly in reducing privacy concerns, and (iii) an acceptability assessment of the PPM. The experiments in which we assessed the acceptability of the PPM were separated into three parts. In the pre-questionnaire part, we asked participants about their knowledge of IT services, frequency of service use, and general impressions of privacy policy descriptions and the usage of personal data. In the introduction part, we explained the PPM to participants, including video demonstrations of it in use (at the time of the experiment it was not robust enough for participants to use themselves). In the post-questionnaire part, we asked participants for their impressions of the PPM, the effort they would need to use it, and their level of interest in using it.
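As a purely illustrative sketch, not the authors' implementation, the layered control described above (session-specific settings overriding site-specific ones, which in turn override the user's defaults, plus a trace of released data) could be modelled as follows. All class, method and service names here are invented for the example.

```python
class LayeredPolicyStore:
    """Toy model of PPM-style layered consent: session > site > default."""

    def __init__(self, defaults):
        self.defaults = defaults   # e.g. {"location": False, "email": False}
        self.site = {}             # per-service overrides, keyed (service, item)
        self.session = {}          # per-session overrides, keyed (service, item)
        self.log = []              # trace of data items actually released

    def set_site(self, service, item, allow):
        self.site[(service, item)] = allow

    def set_session(self, service, item, allow):
        self.session[(service, item)] = allow

    def may_release(self, service, item):
        # Session settings win over site settings, which win over defaults.
        for layer in (self.session, self.site):
            if (service, item) in layer:
                return layer[(service, item)]
        return self.defaults.get(item, False)

    def release(self, service, item, value):
        # The proxy hands data to a service only if the layered policy allows it,
        # and records what was released so the user can review the trace later.
        if self.may_release(service, item):
            self.log.append((service, item))
            return value
        return None
```

For instance, a user whose default is to withhold location could allow it for one mapping service only; a later session-level refusal would then override even that site-level permission, while `log` preserves the trace of what was actually provided.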
Prof Kyung-Sin Park of Korea University
Paradox of Trust: Korean Resident Registration Number
Abstract
The 13-digit resident registration number or RRN is automatically given to anyone born in Korea and is nearly impossible to change ever. Many companies and agencies will require production of RRN before providing services or dealing with you for identification purposes. What is wrong with this picture?
The RRN was created in the 1970s for the purpose of identifying spies. The theory was that, if every Korean received a number, one could cull out as spies those who did not have one. In other words, it was supposed to work as a password of sorts for legitimate citizenship.
The problem was that, once RRNs were cherished as unique identifiers, many government agencies and companies began to require RRNs as a condition of providing services or opening accounts. By itself, no problem. However, the vogue of requiring RRNs continued for a few decades, to the point where so many agencies and companies hold the RRNs of so many people that the RRN can no longer work as a password. If many people have your password, it cannot function as one. What exacerbated this problem was that those RRN-holding companies and agencies became targets of hacking, and so many RRNs fell into the wrong hands as well.
At any rate, what should have happened at that point? Since the RRN could no longer function as a password, companies and agencies should have stopped requiring RRNs as a condition of providing services or conducting business. However, companies and agencies in Korea still require them. What next? RRNs became the Holy Grail of financial fraudsters, who used them to assume others' identities to withdraw and borrow money, as the trust-based companies and agencies continued to rely on RRNs.
Wait a moment. The RRN began as a trustworthy identification system. However, because it was so trustworthy, it was so widely sought after that it ended up becoming a tool for financial fraud. I would like to call this the Paradox of Trust.
Why do we see Paradox of Trust in Korea? Because Korea, unlike other countries, does not restrict the uses of national unique identifiers.
True, we prohibited the collection of RRNs through the Internet in 2012, after the 50-million-people country was shocked when SK Communications suffered a data breach affecting 37 million people in 2011. Also, Article 24 of the Personal Data Protection Act bans any collection of RRNs unless expressly allowed or required by statutes or regulations. (The old version also used to allow collection of RRNs "upon the data subject's separate consent" in Article 24(1)2, which was stricken in 2014.)
The problem is that there are so many statutes still allowing or requiring the collection of RRNs. According to the last count by the authorities in January 2014, there were 77 statutes, 404 Presidential decrees, and 385 ministerial rules all independently requiring or allowing the collection of RRNs.
This is not about to change. After three card companies (Lotte, NH, and KB) suffered a breach of 104 million data sets on 20 million people in 2014, the government was not willing to change the one law out of the 866 laws and regulations responsible for the card companies' collection, namely the enforcement decree of the Real Name Financial Transactions and Secret Protection Act, which in Article 3 specifies the name and RRN as the singular method of identity verification. Also, the Information and Communications Network Act that banned collection of RRNs through the Internet still allows telecom companies to collect RRNs when issuing phones; one such company again suffered a breach of 12 million records in February 2014.
The paradox of trust is a paradox because people cannot get out of it: they still feel insecure about identifying themselves with anything other than the RRN, especially when it comes to financial transactions. The truth is that banks do not require just the RRN when you open a bank account with them. They require your name, date of birth, address, birthplace, job, mobile phone number, home phone number, and so on, any appropriate combination of which can become a unique identifier. Otherwise, how could Korean banks open accounts for foreigners who do not have RRNs? American banks can open a bank account as long as you have two photographic identifications and an address. Banks are required only to make "reasonable efforts" to obtain Social Security Numbers or Tax Identification Numbers, not to require them as a condition. Rather, banks requiring SSNs "in violation of federal law" can be punished by up to five years in prison.
What is more, Korean RRNs are combinations of the date of birth, gender, birthplace code, and a number computed from the previous three, all of which banks, agencies, and companies routinely require anyway in addition to RRNs. One does not have to feel insecure about dealing without RRNs.
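The structure just described can be made concrete with a short sketch. The check-digit rule below is the commonly documented RRN scheme (weights 2-9 then 2-5 over the first twelve digits, modulo 11); it is stated here as an assumption, since the abstract does not spell the formula out, and the sample number is synthetic.

```python
# Commonly documented RRN check-digit weights (an assumption, not from the text).
WEIGHTS = (2, 3, 4, 5, 6, 7, 8, 9, 2, 3, 4, 5)

def rrn_check_digit(first12: str) -> int:
    """Check digit for the first 12 digits: (11 - weighted sum mod 11) mod 10."""
    total = sum(w * int(d) for w, d in zip(WEIGHTS, first12))
    return (11 - total % 11) % 10

def rrn_is_consistent(rrn: str) -> bool:
    """True if a 13-digit RRN (hyphen optional) matches its own check digit."""
    digits = rrn.replace("-", "")
    if len(digits) != 13 or not digits.isdigit():
        return False
    return int(digits[12]) == rrn_check_digit(digits[:12])

# Synthetic example only, not a real person's number.
print(rrn_is_consistent("990101-1234563"))  # → True
```

The point of the sketch is the paradox itself: because the first twelve digits encode date of birth, gender and birthplace, and the last digit is derivable from them, anyone who already holds those routinely collected attributes effectively holds the "password" too.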
The real chaos would dawn not on the users but on the hackers and identity thieves. If banks, companies, and agencies required different sets of credentials from users, thieves could no longer use data illegitimately obtained from one data processor to open an account with another. Now? They can get the most out of their limited resources because they can focus them all on the standardized data sets built around the singular identifier: the RRN.
Prof Pauline C. Reich of Waseda University
A Brief Review of Data Privacy, Data Protection and Government Surveillance of the Internet in Selected Asia-Pacific/South Asia/Oceania Jurisdictions
Abstract
These days, in the aftermath of the Snowden/NSA incident, there is a need to examine data privacy, data protection and government surveillance of the Internet both in the US and in countries around the world. This paper will focus primarily on the Asia-Pacific/South Asia/Oceania region, the US and the EU, to assess the rights and possible issues arising under the respective laws in the various countries and regions when Information Security/Cybercrime/Cybersecurity (national security focused) activities collide with data privacy and data protection and involve government surveillance of the Internet.
We will look at how comprehensive existing laws related to data privacy and data protection are; how compatible they are with the OECD guidelines, the EU Data Protection Directive, the Council of Europe Convention 108 and APEC/ASEAN data protection and data privacy initiatives. We will also look at what models they are based on.
There will be discussion of whether countries in this region are applying and enforcing data privacy and data protection laws with respect to the private and public sectors; what exceptions have been applied for national security and law enforcement purposes; and whether constitutional protections have been taken into account when such exceptions are relied on.
Several case studies will be presented about how governments in this region have been applying or will apply surveillance measures based on exceptions to data privacy, data protection laws and in some cases their constitutions. The paper will focus particularly on case studies of India, the Philippines, Thailand, Indonesia and Japan, although laws and regulations may be in flux at present due to recent political changes and incomplete details of laws and regulations which are currently being worked out by legislators.
Next, we will examine the notion of privacy as a human right as applied in the EU. For example, we will rely on the paper presented by Justice Uldis Kinis of the Constitutional Court of Latvia at an international symposium held by the Asia-Pacific Cyberlaw, Cybercrime and Internet Security Research Institute at Waseda University on December 14, 2013, which describes how such an exception can be applied in the EU. We will also look at the original Council of Europe Cybercrime Convention text concerning privacy and human rights in the context of Cybercrime legislation.
Other issues which were prompted by a plenary session at the annual Council of Europe Cybercrime meeting in Strasbourg, France in December 2013 will also be mentioned, e.g. when should law enforcement and when should national security personnel be called into an Information Security matter; what kinds of training do law enforcement and Information Security staff have in civil rights vs. civil liberties vs. constitutional rights to privacy? What rules and directions are adopted or need to be adopted to ensure that constitutional protections are maintained? Some discussion of pending US legislation in response to concerns of other countries after the Snowden/NSA matter will be included.
Dr Kazuyuki Shimizu of Meiji University
Understanding Path Dependency through a Cybernetics Approach: The Foundation of Differences among Privacy Laws
Abstract
`Luck' is also a part of ability. Wiener introduced the cybernetics concept that suggests "... the structure of the machine or of the organism is an index of the performance that may be expected from it." [1] If this is the case, then is `luck', which assumes an unexpected circumstance, part of performance? We applied this idea to understanding path dependency, which explains how the set of decisions available in any given circumstance is limited by the decisions made in the past, even though the past circumstances may no longer be relevant. [2] David proposed three reasons for path dependency: technical inter-relationship, switching cost, and historical accidents. [3] We focused on the third factor, historical accidents. Historical accidents, or unexpected circumstances, are both a precondition for and a component of ability. In general, our decisions do not reflect an optimum resolution of questions in the economic world, such as the superiority of VHS over Beta or Windows over Apple; nevertheless, these are the standards on which we have settled. Wiener expected predictable performance from a machine, which could ensure certain capabilities based on its material properties. Privacy laws can be imposed in the material world, and we abide by them when we adhere to the legal system. Yet we might ask why there are such differences among countries with respect to the particulars of such laws. In Europe, the United States, and Asian countries in particular, different privacy rules have been proposed regarding the Internet. In general, Japanese society has not been very sensitive to privacy protection, probably due to characteristics of the Japanese cultural and social environment. [4] The power of civil/personal law prevails in continental Europe, and the right to be forgotten is encompassed within the right of personal freedom and the right to a private life. In the United States, there is tension regarding the matter of freedom of expression.
In Anglo-American court practice, particularly in the United States, the right of free speech, insofar as statements are true, is relevant to the public interest. [5] Several types of fundamental rights exist in these countries, which are sensitive to the protection of privacy, the right to personal freedom, and the right of free speech. The connection between the legal system and economic path dependency is found in Darwinian evolution, the basis for evolutionary economics, which includes cybernetics. A system of privacy laws for the Internet depends on two factors: Internet technology and information itself. Returning to the cybernetic point of view, which is the basis of the machines that structure Internet technology, an index of their performance is our understanding of the code of privacy. Privacy law will depend on how we use Internet technology as well as on the privacy code. Any given circumstance influences the capabilities of this technology, as well as how we use the technology and apply privacy codes.
N. Wiener, The Human Use of Human Beings: Cybernetics and Society, Free Association Books, London, UK, 1989. http://21stcenturywiener.org/wp-content/uploads/2013/11/The-Human-Use-of-Human-Beings-by-N.-Wiener.pdf (accessed 10 May 2014)
Path dependence, http://en.wikipedia.org/wiki/Path_dependence (accessed 30 May 2014)
P. David, "Clio and the Economics of QWERTY", The American Economic Review, Vol. 75, No. 2, Papers and Proceedings of the Ninety-Seventh Annual Meeting of the American Economic Association (May 1985), pp. 332-337. http://www.econ.ucsb.edu/~tedb/Courses/Ec100C/DavidQwerty.pdf (accessed 30 May 2014)
Y. Orito and K. Murata (2005), "Privacy Protection in Japan: Cultural Influence on the Universal Value", Proceedings of ETHICOMP 2005 (CD-ROM).
R. Weber, "The Right to Be Forgotten: More Than a Pandora's Box?", JIPITEC, Vol. 2, 2011. http://www.jipitec.eu/issues/jipitec-2-2-2011/3084/ (accessed 10 May 2014)
Dr Daniel Trottier of University of Westminster
Digital vigilantism: A conceptual, ethical and policy challenge
Abstract
This paper considers digital vigilantism as a user-led violation of privacy that not only transcends online/offline distinctions, but also complicates relations of visibility and control between police and the public. In 2013, Gary Cleary hanged himself in Leicestershire after being pursued by Letzgo Hunting, an online group that exposes suspected paedophiles. Likewise, in 2010 Mary Bale was subject to death threats when a video of her mistreating a cat in Coventry surfaced online. Both individuals were targeted by a clandestine form of criminal justice: digital vigilantism. This global reaching practice harms the lives of those who are targeted, with no clear legal or policy recourse. This research will develop a theoretically and empirically grounded understanding of digital vigilantism in order to advance ethical and policy guidelines.
Digital Vigilantism (DV) is a process in which citizens are collectively offended by other citizens' activity and coordinate retaliation on mobile devices and social platforms. The offending acts range from mild breaches of social protocol to terrorist acts and participation in riots. The vigilantism includes, but is not limited to, a `naming and shaming' type of visibility, in which the target's home address, work details and other highly sensitive details are published on a public site (`doxing'), followed by online and in-person harassment. The visibility produced through DV is unwanted (the target is typically not soliciting publicity), intense (content such as text, photos and videos can circulate to millions of users within a few days), and enduring (the vigilantism campaign may be the top search item linked to the target, and may even become a cultural reference).
DV is linked to digital media affordances such as the ability to monitor and intervene in the lives of others. Social platforms like Facebook, Twitter and Reddit allow citizens to discuss a target, publish their personal details and call for action. In addition, mobile devices such as smartphones enable real-time recording and transmission of an offending act to other citizens. As a product of digital media culture, DV is as much a communicative and mediated act as it is a collective social act (the coordinated mass persecution of a targeted citizen). Current scholarship considers the crowdsourcing on digital media (Trottier 2013), as well as the changing nature of policing and visibility online (Trottier 2012). These research streams suggest that bottom-up forms of organisation are facilitated by social platforms and that policing is changing as a result of digital media, which in turn shapes how these technologies are used. What remains to be investigated is how these streams intersect.
Vigilantism is framed as a kind of `private violence' (Culberson 1990) whereby citizens seek to legitimate their own violence as a form of criminal justice. Galtung makes a distinction between direct physical violence, structural violence and cultural violence (1990). DV embodies all three forms of violence, and citizen-led structural violence in particular is a novel and troubling concern. This violence appears to be a kind of communicative counter-power (Castells 2007) led by citizens. Yet the connection between state power and DV is unclear, and forces a reconsideration of state-citizen relations. There is a need to critically examine the role and reaction of the state and police in this context. DV highlights the complex nature of privacy and public space. It is a private form of violence that at the outset marks a severe privacy violation for the targeted individual. Yet it takes place on platforms that constitute a potential public sphere (Fuchs 2014), even if these are privately owned and tempered through privacy settings. How does this affect the already complex relation between privacy and publicity on social platforms? While DV is a substantial concern, the way it is represented in the media, and the ideologies that inform that representation, are also troubling. As an emerging phenomenon, there is a risk that the media present a distorted account of DV for ideological reasons (the desire to launch a `moral panic'). For this reason, it is important to recognize that the media and other stakeholders shape how DV is understood, and as such play a vital role in its development.
DV constitutes a severe violation of the targeted individual's privacy and data protection rights, as their personal details are publicly transmitted without their consent. Targets may be selected on the basis of gender and ethnicity. DV also amounts to a kind of criminal justice response that is performed by untrained non-professionals. It is a challenge to police processes, and it can undermine perceptions of authority and statehood, while reproducing the worst abuses of state-sanctioned violence. DV typically manifests as a series of crimes, including harassment, stalking and death threats. There is a need for a greater understanding of participants' motivations to take part in DV, as well as the way this has shaped their lives and the lives of targets. Researchers, but also courts, educators and policymakers, need to consider the full ramifications of this activity. Citizens learn not to upload their own personal information, but to what extent are they taught not to put others in harm's way?
In outlining a research agenda for studying DV, we may consider the following questions: In what ways does digital media culture foster DV? How does DV shape theoretical understandings of structural violence, state power and communication counter-power? How does DV shape theoretical understandings of an online private/public paradox? How does the news media represent DV? What are the roles, challenges and problems of state power in respect to DV? What are the social impacts of DV for targeted victims and participants? How can educators and policy makers minimise harm associated with DV?
Castells, Manuel. 2007. Communication, Power and Counter-power in the Network Society. International Journal of Communication 1: 238-266.
Culberson, William. 1990. Vigilantism: Political History of Private Power in America. Westport, CT: Greenwood Press.
Fuchs, Christian. 2014. Social media and the Public Sphere. tripleC 12 (1): 57-101.
Galtung, Johan. 1990. Cultural violence. Journal of Peace Research 27 (3): 291-305.
Trottier, Daniel. 2012. Policing Social Media. Canadian Review of Sociology 49 (4): 411-425.
Trottier, Daniel. 2013. Crowdsourcing CCTV Surveillance on the Internet. Information, Communication & Society. DOI: 10.1080/1369118X.2013.808359.
Lachlan Urquhart
of the University of Nottingham
with Ewa Luger and Prof Tom Rodden
The intersection of EU Data Protection Law Reform and everyday ambient computing design: challenges and opportunities
Abstract
In this presentation we lay out some of the legal challenges that are emerging due to the shift of ubicomp technologies from the lab `into the wild'. Instantiations of everyday ambient interactive systems in the home range from learning thermostats to smart fridges and intelligent smoke alarms. Beyond such examples of the Internet of Things (i.e. networked objects communicating with each other independently), such technologies are increasingly being scaled into public settings at the city level too, with intelligent transport, energy and logistics infrastructures emerging.
These systems ordinarily sense/collect human data, are largely designed to be invisible in use (psychologically and physically), and are embedded into the everyday routines of users. They become contextually aware of their surroundings through pervasive data collection, with the aim of managing and controlling the space to achieve their given aim (e.g. a learning thermostat autonomously managing temperature in a room to maintain a perceived level of optimum comfort). The systems are becoming increasingly autonomous too, which poses a number of challenges to legal constructs like individual consent, contractual relationships of agency and human autonomy.
Within this paper we focus specifically on data protection and privacy governance. We outline the questions around use of human data in light of the proposed EU General Data Protection Regulation (GDPR). This proposal includes new attempts by regulators to modernise DP governance in Europe and foster new solutions to ensure higher standards of transparency, accountability and data management by public and private bodies. We focus on mapping questions from three particular areas that are relevant for ambient system design:
Consent - the GDPR requires informed, explicit consent to personal data processing and we question the kinds of mechanisms that may exist to achieve this for ambient systems.
Purpose limitation on further data use - how can these systems be designed to ensure they manage data in a manner that is not incompatible with the original purpose of collection? How can the data flows be presented to users in a more accessible, transparent way?
Privacy by design - Framed as a legal requirement within Article 23 of the GDPR, what does it require in practice? How can privacy protection be embedded into the architecture of new ambient technologies?
We are aiming to develop the interface between law and HCI, and to understand how best to integrate DP law considerations into an iterative, user-centred HCI design process. As such, we briefly consider relevant conceptual frameworks from HCI too, like 'value sensitive design'. Mapping the relevant areas of law and the questions that need to be asked is our first step and contribution towards the broader goal of narrowing the gap between these two communities.
Dr David Murakami Wood of Queen's University, Canada
Japan and the US National Security Agency
Abstract
Given the friendly relationship that the Japanese state enjoys with the USA, and the equally cordial relationship the Japanese media has with its government, it comes as no surprise that few questions have been raised about Japan and the Snowden revelations about the surveillance operations of the US National Security Agency (NSA). Yet Japan has long been a base for NSA operations and it is very likely that the NSA is collecting metadata (and much more) from Japanese electronic communications, both official and unofficial, and operating intelligence satellite downlinks in the country. This paper examines US intelligence presence in Japan and particularly the role of the Misawa Air Base in Aomori, known to be the most important NSA centre of operations in East Asia. It also asks why the Japanese government and the USA are facing comparatively little scrutiny over their international surveillance entanglements in Japan compared to that faced in other parts of the world.
Dr Sachiko Yanagihara
of Toyama University
with Prof Hiroshi Koga
Exploratory study on web communities and privacy:
The case of Word-of-Mouth Marketing in the foot care industry in Japan
Abstract
This study attempts to examine the contradictory relations between the two kinds of informational privacy, the right to control the circulation of one's personal information and the right to be forgotten, from the viewpoint of businesses which operate web community sites.
In order to facilitate active and effective communication in a community, whether virtual or real, proper and continual disclosure of personal information by community members is one of the most important factors. Here, "proper" means that all members of the community can autonomously control the types and extent of the personal information they disclose. In this regard, informational privacy as an individual's right to control the circulation of information relating to him/her should be protected to ensure effective communication in a community.
On the other hand, the idea of "the right to be forgotten", considered a variant of informational privacy, has recently been proposed and emphasised, reflecting the reality that, thanks to tremendous advances in information and communication technology (ICT), organisations no longer need to delete any data from their databases. However, this variant paradoxically contains an aspect that denies the right to control the circulation of information relating to oneself: if one exercises the former right over information one disclosed in the course of exercising the latter, the two kinds of privacy right are set against each other. From the viewpoint of businesses which set up web community sites to encourage their existing and potential customers to disclose personal information there by creating an atmosphere of "ante-festum" (Kimura, 1982; 2006), a customer's exercise of the right to be forgotten is evidence of the business's failure to manage its web community, a failure which has led to an atmosphere of "post-festum" (Kimura, 1982; 2006) on the community site. The challenge for businesses which operate web community sites is therefore to ensure that their customers have no reason to exercise the right to be forgotten.
To clarify the discussion, the case-study method is adopted in this study. In particular, the case of LIBERTA Co., Ltd., which sells the hit product "Baby Foot", is examined to demonstrate its effective management of consumer generated media (CGM), while noting the significance of the experience economy and word-of-mouth marketing (WOMM), or marketing buzz. Media strategies are also taken into consideration, such as omni-channel, which unifies all sales and distribution channels across stores in both real space and cyberspace, and O2O (Online to Off-line), which aims at coordinating online activities with off-line purchasing behaviour.
Kimura, Bin. 1982. Time and the Self. Tokyo: Chuokoron-Shinsha. (in Japanese)
Kimura, Bin. 2006. The Self, Relations and Time: Phenomenological Psychopathology. Tokyo: Chikumashobo. (in Japanese)