New Threats

“Big data” is playing a major role in the development of new technologies, not least the Internet of Things and Artificial Intelligence. Their reliance on big data has enabled these technologies to pick up speed, and we are close to seeing their mass adoption. Whether they are safe when it comes to our personal data remains to be seen.

This corner of the internet will do the following: give you an overview of IoT - what it is, what its advantages are, and what sorts of risks it poses to our personal data - and then do the same for Artificial Intelligence. Although we treat the two technologies separately here, in reality they rarely are separate: Artificial Intelligence is a tool that is spreading into all kinds of other technology. Soon we will live in a world where every device we own, IoT or not, is powered by AI.

Imagine a world where a room lights up and the heating turns on as soon as you get back home. Or where the steps you take on your way to the closest metro station produce electricity. Or where your car drives you to work and drops your kids off at school. Where your fridge identifies what is missing and automatically compiles a shopping list for Amazon, and where you can simply tell Alexa (Amazon's voice assistant) to buy the items the fridge has identified.

Sounds great, doesn’t it? Until you realize that this is the same world where you might have to worry not only about your computer being hacked, but about your car and house too. Where Alexa, Cortana, Siri, or whatever your voice assistant is called, may be required to provide evidence in a murder case (McLaughlin, 2017).

These are all examples of the Internet of Things (IoT) in action. IoT is today most visible in our Fitbits and Apple Watches, in the washing machine that connects to your wifi and can be started with a tap on your smartphone, in wearable clothing, and in the driverless cars Tesla, Uber and the like are currently testing. IoT is ultimately about connecting and enhancing devices with the power of the internet, thus creating a more interactive and interconnected world. It is, however, a world full of challenges and threats, especially when it comes to your personal data.

So is this interconnected world worth having? The answer is a resounding yes, if those within the tech industry are to be believed.

A 2015 article by In Lee and Kyoochun Lee highlights some of the benefits of IoT. Among them is the smart home, where devices constantly produce data that is collected by monitoring and control systems that check equipment performance, environmental conditions and energy usage. “The primary value propositions [of a smart home] are family and property protection and energy savings” (Lee & Lee, 2015, p. 433).

Through Verizon's Home Monitoring and Control network, one can do everything from locking and unlocking doors to controlling the room temperature. In May 2017 Ikea announced that its smart light bulbs would respond to voice commands through voice assistant systems like Apple Siri, Amazon Alexa and Google Assistant (Shu, 2017). These interconnected smart devices will make - and to an extent are already making - our lives easier.

It is not only homes that IoT will make smarter, but cities too. The hype is high for the development of Smart Cities around the world. The Chinese government is one of the most fervent supporters of this new type of city - “a digital connected city” where “information technologies [are used] to solve urban problems incorporating intelligent and sustainable urban development” (Parasol, 2017). The city of Yinchuan in northern China is being used by the government as the blueprint for Smart Cities across the country. Measures introduced in Yinchuan include solar-powered public rubbish bins that double as compactors and signal when they are ready to be emptied, and public transport where passengers pay via facial recognition software (their face is linked to their bank account) instead of at the farebox. The goal is to make these cities - and their running - more efficient.

From the smart home to the smart city, the advantages of the widespread use of IoT seem obvious. IoT devices have the potential to make our lives easier and the running of our houses and cities smoother. We stand to gain all the benefits we expect technology to bring, just as the washing machine made laundry easier decades ago, or cars proved a more efficient method of transportation than horses.

Venture capitalists seem to agree with this assessment. Simply looking at how much funding went into IoT over the past few years shows that investors have an appetite for companies working to develop and apply this technology to as many fields as possible: in the US alone, IoT funding tripled between 2012 and 2016, and between 2015 and 2016 it rose from $1.10 billion to $1.27 billion (Myers, 2017).

IoT seems very promising and its applications will undoubtedly change our lives. It is not, however, without its challenges. What we should all ask ourselves is whether the benefits of IoT outweigh the disadvantages and risks it poses to our personal data.

We have all heard stories of people who have had their Facebook and email accounts hacked, or of companies whose records have been breached. The latest high-profile case of the latter was Equifax, a credit monitoring company in the US, whose hacking left the personal data of 143 million Americans exposed (Solon, 2017). In a world where IoT devices are widespread and the norm, the pool of devices that can be hacked - or whose data can be accessed by third parties, be they the companies that make the devices or malicious hackers - becomes far larger.

This presents two main problems: ransomware and privacy.

1. Ransomware: The Internet of Things may become the Internet of Ransomware Things

A car has been hacked. The car's system has been, for all intents and purposes, hijacked, and the driver no longer has any control. It switches to autopilot and drives at full speed to wherever it decides. The car tells the driver to transfer a sum of money to a specific bank account right now, using their smartphone banking app, or it will crash. Your life for your bank account.

An individual is wearing a pacemaker. A hacker manages to gain access to it and control it. They send that individual a threat: you give us money or we will stop your pacemaker and you will die.

This is a particularly gloomy - and perhaps alarmist - scenario, but it is not impossible to envision. Just as computers are vulnerable to attacks, so could IoT devices be. Perhaps a ransom will be demanded to regain access to the device, or to make it work properly again. But the hacking may also target the personal data these devices have collected and analysed. Just as computers hold great amounts of our personal data, so do IoT devices, thanks to their constant monitoring of our every action and movement. So instead of the car being hacked to hold the driver at quasi gunpoint for their money, imagine that the hacker has obtained the data associated with the car and some other IoT device. To avoid the data being leaked to the press - if the individual is, say, a politician whose IoT data compromises them - they have to act a certain way in parliament or pay a sum of money. That becomes a question not only of ransomware but of privacy.

2. Privacy

“IoT devices can provide a vast amount of data on IoT users’ location and movements, health conditions, and purchasing preferences—all of which can spark significant privacy concerns.” (Lee & Lee, 2015, p. 439). Whether that data is vulnerable to hacking and breaches is important, but so is another consideration: if the manufacturers and sellers of those devices were not allowed to collect and analyse that data in the first place, the possibility of it being hacked might not exist.

In a scenario where someone uses a fitness and health tracker like a Fitbit, should Fitbit be allowed to make that data available to a health insurer with which the user is in talks to determine their insurance premium? Would the health insurer be justified in charging a higher or lower premium? Or, in the event of a health claim, should the insurer be able to reject it based on data about the diet and exercise routine of the Fitbit user?

This is only one example of the privacy problems that may arise from the personal data generated by a fitness and health tracker. Countless similar scenarios can be foreseen in the context of a car needing insurance, or of a car's movements and trajectories being constantly monitored.

The underlying concern behind all of these questions and scenarios is one of ownership: who owns the data produced by those devices? Is it the user and owner of the device, or is it the company that manufactures and sells the product?

If the user and owner of a product owns the data it generates and decides that they are not willing to share it, then one may presume that none of the above would be possible. There are, however, two problems with this. First, although in the context of a health tracker it is easy to see that the data generated should be the property of the person whose health it tracks, what about a car, a lightbulb or a home voice assistant? It is not clear to whom we can attribute ownership when data is generated by connected devices rather than by people, in the way that you generate data by interacting with an app (Guinard, 2015). Second, “Users want greater control and transparency with regard to their IoT data” for a variety of reasons, including personal image management, protection from exposure and availability of insurance, but they themselves admit to rarely reading the privacy policies of IoT providers (Richardson et al., 2017). The fact that the language employed in Terms and Conditions and privacy policies is often inaccessible to those lacking a legal background surely does not help. This matters because, without information, users will not be able to demand greater security and privacy protections should they wish to do so (Bannan, 2016).

More than just users stand to lose something: IoT providers too may lose out if the balance between privacy and data protection on the one hand and their use of personal data on the other is not struck right. Users may lose faith in IoT devices if controls are not put in place to allow consumers who want to share their personal data to limit how much of it is shared (Guinard, 2015).

Some - advocates for absolute privacy - will probably tell us to forego IoT devices altogether. That would mean doing away with all the benefits IoT may bring. One day, developments in IoT may make our cities, towns and homes safer, for instance by using computer vision through cameras so that the appearance of an unwanted intruder acts as a trigger for another device to call the police or wake the homeowner (González García, Meana-Llorián, Pelayo G-Bustelo, Cueva Lovelle, & Garcia-Fernandez, 2017). A luddite attitude may best protect our personal data, but it would also keep us away from all the progress these devices may bring to our lives.

A more measured approach is one that balances the concerns for privacy and security with the need for IoT providers to have access to data. One such approach may be to encrypt all or some of the data generated by IoT devices. This would work better in the context of consumer IoT devices than in that of a smart city, where the city government owns the data and different rules may apply.
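
As a rough illustration of what such encryption could look like on the device side, here is a minimal sketch in Python using the Fernet scheme from the `cryptography` library; the device name, the fields and the key handling are all hypothetical simplifications, not a description of any real product.

```python
# Minimal sketch (hypothetical device and fields): a consumer IoT device
# encrypts each reading locally before sending it to the provider's cloud,
# so only holders of the key can read the raw data.
import json
from cryptography.fernet import Fernet

# In practice the key would be provisioned and stored securely on the device
# (and optionally shared with the user), not generated on every run.
key = Fernet.generate_key()
cipher = Fernet(key)

reading = {"device_id": "thermostat-42", "temperature_c": 21.5,
           "ts": "2017-11-01T18:00:00Z"}
token = cipher.encrypt(json.dumps(reading).encode("utf-8"))

# The provider stores or relays only the opaque token...
print(token[:40])

# ...and only a party holding the key can recover the original reading.
original = json.loads(cipher.decrypt(token).decode("utf-8"))
print(original["temperature_c"])
```

Who holds the key is, of course, exactly the governance question raised above: if only the user holds it, the provider cannot analyse the data; if the provider holds it, little has changed.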

Another approach focuses on the introduction of regulation. Richardson et al. (2017) argue for a regulatory response to IoT that protects personal data while avoiding stifling this growing industry, within the Australian context. This regulation could be minimal but effective; it would be a participatory solution rather than a top-down or laissez-faire regulatory regime. This responsive system of privacy and data protection would need three tiers: privacy and data protection by design; more stringent consumer privacy and data protection regulation; and “actions brought by individuals in court to vindicate their claims relying on privacy-type doctrines as applied by judges”.

IoT, moreover, oftentimes does not operate on its own. IoT devices are becoming more and more intelligent thanks to the power of Artificial Intelligence. That technology is just like IoT: it brings many positives but also poses risks and challenges to our personal data.

Google the words “Artificial Intelligence” and the scaremongering will follow. Talk of AI tends to be linked to discussions of automation and jobs being lost, with everyone from truck drivers to lawyers and accountants finding their skills passé. Elon Musk, the CEO of Tesla and of SpaceX, the company trying to commercialise space travel and take humans to Mars, has gone as far as to say that AI is an existential threat to humanity (Dowd, 2017).

However, before we even start imagining the perils of a world ruled by AI overlords, bringing the discussion back to what AI actually is and how it works reveals another challenge.

Let us start by defining AI and why it is relevant to big data. Most AI today is Narrow (also known as Weak) AI - systems designed to perform a specific task. They have specific intelligence rather than general intelligence like human beings: unlike us, who can perform many different kinds of tasks, a narrow AI is incredibly good at only one task. An AI able to beat any human being at chess is useless at identifying cases of fraud.

So how does AI relate to “big data”?

First, machine learning, which is the most common way for AI to be developed, “teach[es] computers to learn from examples (or Data) and perform a task without being explicitly programmed to do it” (DeBrule, 2017). It is the data used to train AI that makes AI reliant on big data. Without data the AI would not be able to learn and then go on to perform its task. Big data is essential to AI.
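
To make “learning from examples” concrete, here is a minimal sketch using scikit-learn; the features, labels and numbers are entirely made up for illustration.

```python
# Minimal sketch: the model is never given an explicit rule; it only sees
# labelled examples (hypothetical data) and infers the pattern from them.
from sklearn.tree import DecisionTreeClassifier

# Each example: [hours_of_exercise_per_week, cigarettes_per_day]
X_train = [[5, 0], [7, 0], [6, 2], [1, 20], [0, 15], [0, 30]]
y_train = ["low_risk", "low_risk", "low_risk",
           "high_risk", "high_risk", "high_risk"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)        # the "learning from examples" step

# The learned model can now label cases it has never seen before.
print(model.predict([[4, 1], [0, 25]]))   # expected: low_risk, high_risk
```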

Second, the applications of AI. There are many, but we will focus on one. Thanks to machine learning and self-learning algorithms, AI makes for a great data analytics tool. Imagine an AI whose purpose is to turn big data into usable information by detecting patterns and making predictions, for instance when it comes to crime. It could mean looking at data about who commits crimes, where, and what kind of crimes are committed. Based on those patterns the AI would be able to predict which locations are more likely to be crime hotspots, who is more likely to commit crimes, and so on. As a matter of fact, this is already happening, as documented by Bello-Orgaz, Jung and Camacho (2016). Crime analysis, they say, allows for “detecting and discovering crime and their relationships with criminals” (ibid., p. 53). For instance, the analytical techniques of hotspot mapping and spatial analysis can predict where crime may happen; data mining can help detect fraud.
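
As a toy illustration of hotspot mapping, the sketch below clusters the coordinates of past incidents and treats the cluster centres as candidate hotspots; the coordinates and the choice of algorithm (k-means) are our own illustrative assumptions, not drawn from Bello-Orgaz et al.

```python
# Illustrative sketch of hotspot mapping: cluster where past incidents
# occurred and treat each cluster centre as a candidate crime hotspot.
# All coordinates are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

incidents = np.array([
    [51.515, -0.072], [51.516, -0.070], [51.514, -0.073],   # area A
    [51.509, -0.133], [51.510, -0.131], [51.508, -0.134],   # area B
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(incidents)
print("Candidate hotspot centres:", kmeans.cluster_centers_)

# A new report can be assigned to its nearest hotspot.
print("Nearest hotspot for a new report:", kmeans.predict([[51.5151, -0.0715]]))
```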

Without big data, this would not be possible.

There are gains to be had from AI. Our lives may become more secure. AI is currently being used to detect credit card fraud, taking over the painstaking job of human analysts and arguably doing it faster and more accurately, by the sheer fact that it is able to process more data and identify patterns we ourselves would not be able to see.
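
One common way to frame this is anomaly detection: flag transactions that deviate from the bulk of the data. The sketch below uses scikit-learn's IsolationForest on made-up transaction features; it illustrates the idea, not how any particular bank actually does it.

```python
# Illustrative sketch of fraud detection as anomaly detection.
# Features are (amount in dollars, hour of day); all values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Mostly ordinary daytime purchases...
normal = np.column_stack([rng.normal(40, 15, 500), rng.integers(8, 22, 500)])
# ...plus a couple of unusual ones: very large amounts in the middle of the night.
suspicious = np.array([[2500, 3], [1800, 4]])
transactions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)       # -1 means "looks anomalous"
print("Flagged transactions:\n", transactions[flags == -1])
```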

The potential of AI is endless and so are its applications. In its scope as a data analysis tool, AI could learn the patterns of rare diseases. Imagine a world where AI is able to identify through facial recognition whether someone is highly likely to suffer from a disease, or is displaying the first signs of one. In a specific context, that could help a doctor identify straight away whether someone is having a stroke before it becomes life-threatening.

Governments, despite their fear of AI as a gateway to further automation and mass unemployment, seem keen to harness big data and AI, especially when it comes to security (Broeders et al., 2017). Other areas of policy could benefit from AI too. In the US, politicians see artificial intelligence as having the potential to make the public sector great again. Recently a senator drafted a bill that would establish a select committee of experts “to advise agencies across government on the economic impact of federal artificial intelligence” (Sharma, 2017). This effort could lead to a jumpstart in AI-focused programs to stimulate the economy and a legitimising of government investment in AI innovation, among other things.

We are moving to an era where AI for government would mean creating “a network of complementary technologies built with the ability to autonomously conduct, support or manage public sector activity across disciplines” (Sharma, 2017). And this is even before considering the endless applications of AI by the private sector, from Netflix predicting, based on our viewing history, what we would like to watch next, to Facebook Messenger suggesting the next action you could take in response to someone's message: starting a poll, creating an event, and so on.

AI presents the same challenges associated with any technology that relies on big data. Just as the amount of data produced by IoT devices and its use by governments and private actors raise privacy concerns, so does AI. The fundamental question is whether companies, governments and the like should be allowed to collect such amounts of data and do with it as they wish without our explicit consent. In the case of AI, however, the focus is more on whether the data collected should be allowed to train the AI, and, when that AI is a data analytics tool - one that can identify patterns, make predictions and thus affect decision making - whether it should be allowed to do so.

To the first of those challenges, the judgement should be left to the legislator. Some guidelines should be provided to make sure that the data has been collected ethically and with the consent of the user who generated it (although that is in itself a problem, because it is unclear whether users really consent). To the second, two examples offer a cautionary note.

The first example is that of the Social Credit System that will officially launch in China in 2020 as a mandatory requirement for all Chinese citizens. The goal of this system is to evaluate how trustworthy a citizen is by looking at the data they have generated and scoring it against a number of criteria. Alibaba is among the companies working with the government to build it, assessing people by looking at an individual's credit history, personal characteristics, fulfilment capacity (i.e. whether they fulfil contract obligations), behaviour and preferences, and interpersonal relationships (i.e. who their friends are and how they interact). By analysing this data, the algorithm Alibaba is using is able to score the trustworthiness of a citizen (in essence predicting, based on all the data it has previously been fed and analysed) and affect whether they get a loan, can check in faster at hotels, or can rent a car without a deposit (Botsman, 2017). For now adherence to this program is voluntary, and millions of people have decided to take part. One could therefore argue that, by joining the program, they are consenting to this kind of government surveillance and are comfortable with the government scoring their worthiness as citizens on the basis of an algorithm's decision. But what happens when this becomes mandatory? Should any government have that kind of hold over its citizens? Citizens in the Western world would probably be horrified if asked, but do China's 1.3 billion residents have a choice at all? That remains unclear; what is evident is that the Chinese government's drive to become the global leader in AI by 2030 will help it gain further control over its citizens (The Economist, 2017).

The second example is that of an algorithm trained to identify sexual orientation. Using thousands of photos from dating sites, social science researchers set out to show the threats AI can pose to privacy and safety by creating a tool that itself threatens privacy and safety (Chen, 2017). And a threat it is indeed. First, it is not clear whether the people whose pictures were used for the study were aware that their photos were being used for that purpose. Second, should such a tool end up in the wrong hands, the consequences would be disastrous. An anti-LGBT+ government intent on persecuting members of the LGBT+ community, or a hate group intent on attacking minorities, could use the AI to predict whether someone is gay and jail or attack them on the basis of the algorithm's decision as evidence. This would be an attack on the privacy as well as the safety of these people. And it does not have to stop there. What will it be next? Whether someone will be a good parent, or whether they are religious?

The author of the article reporting on this study argued for ethical guidelines to prevent social scientists from using AI in ways that may accidentally harm people, perhaps by requiring an ethical review even of studies that rely on databases and involve no interaction with real humans.

So far we have talked about the challenges associated with too much data being collected and processed. That conversation has a major blind spot. Algorithms and AI are increasingly being used to make decisions about our lives, ranging from what we see on our news feeds to whether we can get access to a loan. When it comes to these decisions, AI has the opportunity to become a great equaliser and move beyond the biases human beings bring to decision making. That will happen, however, only if the data collected and used to train AI is itself free of those biases.

What do we mean by that?

An example should help. Imagine an AI whose purpose is to identify, by analysing data, whether someone has a disease or not. This is a disease that affects both men and women and can become life-threatening if not caught early. The data used to train the AI comes almost solely from male patients. The AI becomes very good at predicting whether a male patient has the disease, but not nearly as good at doing the same for female patients. As a result, doctors can identify the disease early in male patients but only much later in female ones. The incidence of death for male patients stands at 20%, whereas for female patients it is 60%.
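
A rough sketch of how such a blind spot can be measured: train a model on data that is overwhelmingly from one group, then evaluate it separately per group. Everything below (the synthetic data, the 95/5 split, the model) is hypothetical and only meant to show the shape of the problem.

```python
# Sketch: a skewed training set produces a model that performs noticeably
# worse on the under-represented group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, sex):
    # Hypothetical assumption: the disease presents slightly differently
    # by sex, so patterns learned on one group transfer poorly to the other.
    shift = 0.0 if sex == "male" else 1.5
    x = rng.normal(loc=shift, scale=1.0, size=(n, 3))
    y = (x[:, 0] + x[:, 1] - shift > 0).astype(int)   # 1 = has the disease
    return x, y

# Training data: 950 male patients, only 50 female patients.
X_m, y_m = make_patients(950, "male")
X_f, y_f = make_patients(50, "female")
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_m, X_f]), np.concatenate([y_m, y_f]))

# Evaluating per group reveals the gap a single overall score would hide.
for sex in ("male", "female"):
    X_test, y_test = make_patients(1000, sex)
    print(sex, "accuracy:", round(model.score(X_test, y_test), 2))
```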

This is just one example of too much data of one kind being collected and used to train an AI, leading to a blind spot that can have terrible consequences. In the example above, it is a higher mortality rate among women for a particular disease. In real life, it is a sentencing and bail scoring system used by multiple US states that ProPublica found last year to be biased against black people (Simonite, 2017).

Tom Simonite reports for Wired that, according to AI researchers, computer code is undermining the right to due process as more and more government agencies in the US leave decision making to software and scoring systems powered by AI. This decision-steering and decision-making software is being used in public policy areas like health, welfare and criminal justice. If that AI is biased in any way (there are five different ways in which AI can become biased, as documented by the Inclusive Data Team at Microsoft), how can we guarantee that the decisions taken are not biased as well? The answer is that we cannot.

So what can be done? Many things.

Possible solutions to the problem of too much data exist. One is simply to stop companies and governments from collecting data - but that ship has probably sailed. Big data is here to stay, and just as advocating against the development of IoT because of its risks seems like a luddite reaction, so does trying to stop all collection of data and its analysis by AI. IoT and AI can be forces for good, as we have illustrated. What can be done, however, is to regulate the analysis and use of big data. Broeders et al. (2017), for instance, argue for regulating government organisations that use big data for security purposes by ensuring that the data is up to date, that data obtained from third parties is obtained legitimately, and that the methodological choices made with regard to that data can be reviewed. Bello-Orgaz et al. address the problem of privacy by proposing, among other things, the use of “k-anonymity”, which requires a system or service to make data practically useful “without identifying” the subjects of that data (2016, p. 56).
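
To show what k-anonymity means in practice, here is a minimal sketch on a hypothetical table of fitness-tracker users: the quasi-identifiers (age and postcode) are generalised, and any group still smaller than k is suppressed, so no combination of those attributes points to fewer than k people. The table, the value of k and the generalisation rules are all made up for illustration.

```python
# A minimal sketch of k-anonymity (k = 3) on a hypothetical table of IoT users.
# Quasi-identifiers (age, postcode) are generalised until every combination
# of them is shared by at least k records, so no individual stands out.
import pandas as pd

k = 3
df = pd.DataFrame({
    "age":      [23, 27, 25, 41, 44, 46, 47, 52],
    "postcode": ["3052", "3053", "3052", "3121", "3122", "3121", "3122", "3121"],
    "steps":    [8200, 9100, 7600, 4300, 5100, 3900, 4800, 6100],  # sensitive value
})

# Generalise: bucket ages into decades, keep only the first 3 postcode digits.
df["age"] = (df["age"] // 10 * 10).astype(str) + "s"
df["postcode"] = df["postcode"].str[:3] + "*"

# Check k-anonymity: every (age, postcode) group must contain >= k records;
# groups that are still too small are suppressed (dropped) here.
sizes = df.groupby(["age", "postcode"])["steps"].transform("size")
anonymised = df[sizes >= k]
print(anonymised)
```

In real deployments, choosing the quasi-identifiers and the value of k is the hard part, and k-anonymity alone does not protect against every re-identification attack; it is one tool among several.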

When it comes to concerns about biased datasets and biased AI, one solution is to put humans at the centre of AI, as stated by Fei-Fei Li, director of Stanford's AI Lab and chief scientist of Google Cloud, in an interview (Knight, 2017). This means making sure that those developing the technology come from all walks of life, so that AI serves all of humanity. It would mean that if both male and female AI experts were working on building an AI able to identify disease patterns, they would ensure that the data the AI is trained on represents both sexes. We should therefore encourage more women and more people from ethnic minorities to pursue careers in technology. Another solution requires acknowledging that biased AI is bad, recognising exclusion, and building the technology with inclusion in mind (Chou, Murillo & Ibars, 2017).

Some people are working to make this happen. The Inclusive Data team at Microsoft, for instance, is doing just that. On an individual level there are also people working to dismantle algorithmic bias. The Algorithmic Justice League (AJL), founded by Joy Buolamwini, is working to take human prejudice out of machines by raising awareness of the biased algorithms that underpin AI systems trained on large, biased datasets (Cohen, 2017).

For their part, governments can require all companies building AI to create tools that identify biases in training data and, when governments themselves use AI to automate decision making, allow the citizens affected to challenge such decisions just as they can appeal decisions made by humans. The EU is ahead of the curve here, having produced draft guidelines on the subject. Should federal AI policy pass in the US, it should include ethical standards for building and deploying AI - “from the diversity of the people building the AI to the data it learns from” - and it should hold both people and the technology accountable for outcomes (Sharma, 2017).

AI and IoT have shown enormous potential to make our lives better. What one must remember, though, is that technology is but a tool. It is up to us - citizens, governments and corporations - to work in tandem to make sure that technology is used for good and that the challenges it presents to our privacy, to our data and to our lives are mitigated in the right way.

Shutting our eyes and burying our heads in the sand is likely to accomplish nothing here. Banning IoT or AI is unlikely to happen, and focusing all our attention solely on the privacy and data protection implications of these new technologies would mean missing part of the conversation. Excessive collection, and thus use, of personal data has been identified by many as a largely negative thing, but data collection and processing by AI, IoT and the other emerging technologies is here to stay. Our focus should be on two things: first, data governance, to ensure that the advance of technology is balanced by the preservation of our rights and our data; second, making sure we do not replicate the prejudices of our societies, by stopping biases from seeping into big data.

IoT and AI are just two out of a myriad of technologies that are coming, whether we like it or not. We must do everything we can to welcome them, while at the same time providing a framework within which they can operate lawfully and without putting at risk our values and our rights as human beings.

Bannan, C. (2016, August 14). The IoT threat to privacy. Retrieved from https://techcrunch.com/2016/08/14/the-iot-threat-to-privacy/

Bello-Orgaz, G., Jung, J. J., & Camacho, D. (2016). Social big data: Recent achievements and new challenges. Information Fusion, 28, 45-59. doi:10.1016/j.inffus.2015.08.005

Botsman, R. (2017, November 14). Big data meets Big Brother as China moves to rate its citizens. Retrieved from http://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion

Broeders, D., Schrijvers, E., Van der Sloot, B., Van Brakel, R., De Hoog, J., & Hirsch Ballin, E. (2017). Big Data and security policies: Towards a framework for regulating the phases of analytics and use of Big Data. Computer Law & Security Review, 33(3), 309-323. doi:10.1016/j.clsr.2017.03.002

Chen, S. (2017, September 18). AI research is in desperate need of an ethical watchdog. Retrieved from https://www.wired.com/story/ai-research-is-in-desperate-need-of-an-ethical-watchdog/

Chou, J., Murillo, O., & Ibars, R. (2017, October 12). What the kids' game "Telephone" taught Microsoft about biased AI. Retrieved from http://www.fastcodesign.com/90146078/what-the-kids-game-telephone-taught-microsoft-about-biased-ai

Cohen, A. (2017, June 26). Meet the digital activist that's taking human prejudice out of our machines. Retrieved from http://www.bloomberg.com/news/articles/2017-06-26/the-digital-activist-taking-human-prejudice-out-of-our-machines

DeBrule, S. (2017, April 5). WTF is Artificial Intelligence? Machine Learnings. Retrieved from https://machinelearnings.co/how-to-prepare-your-career-for-artificial-intelligence-driven-automation-1bb153759b3b?gi=1a9d2b3f4ba0

Dowd, M. (2017, March 26). Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse. Retrieved from https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x

The Economist. (2017, July 27). Why China's AI push is worrying. Retrieved from https://www.economist.com/news/leaders/21725561-state-controlled-corporations-are-developing-powerful-artificial-intelligence-why-chinas-ai-push

McLaughlin, E. C. (2017, April 26). Suspect OKs Amazon to hand over Echo recordings in murder case. CNN. Retrieved from http://edition.cnn.com/2017/03/07/tech/amazon-echo-alexa-bentonville-arkansas-murder-case/index.html

González García, C., Meana-Llorián, D., Pelayo G-Bustelo, B. C., Cueva Lovelle, J. M., & Garcia-Fernandez, N. (2017). Midgar: Detection of people through computer vision in the Internet of Things scenarios to improve the security in Smart Cities, Smart Towns, and Smart Homes. Future Generation Computer Systems, 76, 301-313. doi:10.1016/j.future.2016.12.033

Guinard, D. (2015, June 1). Internet of things: businesses must overcome data and privacy hurdles. Retrieved from https://www.theguardian.com/media-network/2015/jun/01/internet-of-things-businesses-data-privacy

Knight, W. (2017, October 13). Why everyone benefits if we emphasize the human side of the technology. Retrieved from http://www.technologyreview.com/s/609060/put-humans-at-the-center-of-ai/

Lee, I., & Lee, K. (2015). The Internet of Things (IoT): Applications, investments, and challenges for enterprises. Business Horizons, 58(4), 431-440. doi:10.1016/j.bushor.2015.03.008

Myers, S. (2017, January 6). As connected hardware matures, investors pour more funding into fewer companies. Retrieved from https://techcrunch.com/2017/01/06/as-connected-hardware-matures-investors-pour-more-funding-into-fewer-companies/

Parasol, M. (2017). The impact of China's 2016 Cyber Security Law on foreign technology firms, and on China's big data and Smart City dreams. Computer Law & Security Review. doi:10.1016/j.clsr.2017.05.022

Richardson, M., Bosua, R., Clark, K., Webb, J., Ahmad, A., & Maynard, S. (2017). Towards responsive regulation of the Internet of Things: Australian perspectives. Internet Policy Review, 6(1). doi:10.14763/2017.1.455

Schulz, W., & Dankert, K. (2016). ’Governance by Things’ as a challenge to regulation by law. Internet Policy Review, 5(2). doi:10.14763/2016.2.409

Sharma, K. (2017, July 14). Artificial intelligence can make America's public sector great again. Retrieved from http://www.recode.net/2017/7/14/15968746/artificial-intelligence-ai-federal-government-public-sector

Shu, C. (2017, May 23). Ikea’s smart light bulbs will work with Amazon Alexa, Apple Siri and Google Assistant. Retrieved from https://techcrunch.com/2017/05/23/ikeas-smart-light-bulbs-will-work-with-amazon-alexa-apple-siri-and-google-assistant/

Simonite, T. (2017, October 19). AI experts want to end 'Black Box' algorithms in government. Retrieved from http://www.wired.com/story/ai-experts-want-to-end-black-box-algorithms-in-government/

Solon, O. (2017, September 8). Credit firm Equifax says 143m Americans' social security numbers exposed in hack. Retrieved from https://www.theguardian.com/us-news/2017/sep/07/equifax-credit-breach-hack-social-security