Of Hal, Skynet and iRobot

EVOLVING BUSINESS AND LEGAL ISSUES IN ARTIFICIAL INTELLIGENCE, BIG DATA AND THE INTERNET OF THINGS

By Erin Fonte

Discussions regarding artificial intelligence (AI) used to be limited to Comic-Con and other sci-fi gatherings, or lunchtime in the Stanford engineering school's robotics department. But with recent advances in technology and the launch of new products and services, discussions about AI and its precursors, such as Big Data and the Internet of Things, are becoming commonplace at the dinner table and in the boardroom.

Some members of academia and the tech community do worry that developing AI could lead to an intelligent system building something similar to Skynet — that dreaded, impersonal AI network that executes “Judgment Day” against humans, gives rise to “The Terminator” and ruthlessly hunts down John Connor across time and space.

In reality, we’re far from that level of autonomy in AI. One benchmark is the Turing Test, developed by Alan Turing — an English mathematician and code breaker, and the father of modern computing — which determines whether a machine can fool a person into thinking he or she is communicating with another human. As yet, no AI has passed the test, though in 2014 a computer program convinced 10 of 30 judges at a Royal Society event that it was a 13-year-old Ukrainian boy speaking English as a second language.

But how long can academics, policymakers, lawyers and society in general “kick the can down the road” on concerns about supercomputers and the scope and extent of their ultimate authority and power?

There are several steps and building blocks already in existence, with others in development, that are laying key groundwork and accelerating the pace of AI development. One of the buzzwords of the last few years has been the “Internet of Things,” or “IoT.” One working definition is that IoT “refers to uniquely identifiable objects or ‘things’ that have a digital presence.” There are two main categories of these objects: identified objects and connected devices. These objects or devices can be connected to one another to create a digital ecosystem, as well as to the Internet. Hence the name: the Internet of Things. And the growth of IoT, along with the proliferation of better, faster and more refined data and advances in computing capacity and speed, will drive the growth and development of AI.
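To make the definition concrete, here is a minimal sketch (not drawn from the article) of a “uniquely identifiable object with a digital presence”: a device that carries its own identifier and reports its state in a form other connected devices or services can consume. The device type, fields and toy “ecosystem” below are invented purely for illustration.

```python
# Minimal, illustrative sketch of an IoT "thing": a uniquely identified
# device that reports its state so other devices and services can use it.
# The device, readings and "ecosystem" here are invented for illustration.
import json
import uuid
from datetime import datetime, timezone

class Thermostat:
    def __init__(self):
        self.device_id = str(uuid.uuid4())  # the "uniquely identifiable" part

    def reading(self) -> dict:
        # The "digital presence": a machine-readable snapshot of the device
        return {
            "device_id": self.device_id,
            "type": "thermostat",
            "temperature_c": 21.5,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }

# A toy "ecosystem": every connected device drops its readings into one place
ecosystem = [device.reading() for device in (Thermostat(), Thermostat())]
print(json.dumps(ecosystem, indent=2))
```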

“Law of AI” Is Currently Evolving and Will Continue to Evolve

Technology development is a very “forward-looking” field, while laws, rules, regulations and related policy tend to be “backward-looking.” Policymakers and laws are perpetually playing catch-up to technology, yet artificial intelligence raises a series of social, economic, political, technological, legal, ethical and philosophical questions. Addressing the uncertainties, possibilities and potential perils of artificial intelligence requires understanding how these fields interact.

There is currently little legislation, however, that specifically contemplates AI, other than some e-commerce provisions that enable “electronic agents” to enter into contracts. Other “technology-neutral” laws cover copyright and the collection of data. But as machines get smarter and public and commercial interest in this area continues to grow, policymakers will need to catch up. A wide range of issues, from how data is captured and used by firms, to who is liable should an earthbound or airborne drone or self-driving vehicle accidentally hurt someone, will create significant challenges for the legal community in the coming years. And lawyers and lawmakers will be called upon more and more to help shape the regulatory paradigm for AI as a whole.

“Strong” AI May Be Coming, But “Weak” AI Is Already Here

“Strong” AI, also known as AGI or artificial general intelligence, would match or exceed human intelligence. To achieve “strong AI” status, a system must have the ability to reason, share knowledge, plan, learn and communicate, all in the service of a common goal. Think HAL, the deranged computer in 2001: A Space Odyssey, or Lieutenant Commander Data in Star Trek.

But it appears that true “Strong” AI systems are fairly far in the future. “Right now, we are struggling to achieve the level of intelligence we see in an ant. I don’t see that changing substantially under the current paradigm,” said WebSupergoo CEO Joss Vernon. “If you don’t see that level of intelligence on your desktop or phone, it doesn’t exist.”

“Weak” AI, though, is already being implemented and used today. Any time an individual communicates with a device — booking film tickets, paying a gas bill or listening to GPS directions — that is “weak” or “narrow” AI at work. Apple’s Siri and Google’s self-driving cars are probably the most recognizable products currently using “weak” AI. They seem intelligent, but they still have defined functions and no self-awareness.

But of course, “weak” AI is still very powerful. High-speed share-trading algorithms, responsible for half of the volume of trades on Wall Street, helped cause the infamous 2010 “flash crash” that temporarily wiped nearly $1 trillion off the benchmark indexes. The technology that enables the NSA to develop very sophisticated data-mining or “snooping” tools also falls into this category, as do autonomous weapons, which generally use the same technology as self-driving cars.

3 Key Proto-AI Areas to Watch from a Legal/Policy Perspective

Even though true “strong” AI may be years away, there are three areas where legal and policy issues regarding AI are already starting to surface: predictive Big Data, automated agents and augmented reality. As with any new and evolving technology, the legal landscape will evolve through a combination of applying existing laws to new technologies, and then enacting new laws where gaps exist or truly new and unique issues are not addressed under current law.

  1. Continuing Shift in Big Data from “Observational/Historical” to “Predictive”

Right now, most of the activity involves innovations in so-called “deep learning.” Through deep learning, systems acquire knowledge through pattern recognition and by using what they’ve gleaned from previously entered information. The more information available to them, the more skilled they become, hence the need for enormous amounts of data. By combining powerful processors and layers of neural nets, computer programs can even learn to do certain tasks independently and with fewer inputs. The underlying technology makes systems increasingly able to analyze an individual’s behavior, mood and desires — profiling, in effect. And that has some observers worried.
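As a concrete illustration (not drawn from the article), the sketch below uses a small neural network from the scikit-learn library to learn a behavioral pattern purely from example data; the feature names, labels and hidden rule are invented. The point is that no one writes the prediction rule by hand: the system infers it from the data it is fed, and more data generally yields better predictions.

```python
# Minimal, illustrative sketch: a "narrow" AI that learns to predict user
# behavior from past observations. All feature names, data and the hidden
# rule are invented for illustration; this is not any company's system.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic "Big Data": each row is one user session described by
# [hour_of_day, pages_viewed, minutes_on_site]; label = made a purchase?
X = rng.uniform([0, 1, 1], [24, 50, 60], size=(5000, 3))
y = ((X[:, 1] > 20) & (X[:, 2] > 15)).astype(int)  # hidden pattern to learn

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Layers of a small neural net learn the pattern from examples alone;
# no rule about pages viewed or time on site is ever written by hand.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

print("Accuracy on unseen sessions:", model.score(X_test, y_test))
```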

Firms, as well as states, “are getting better and better at predicting people’s behavior,” said Joanna Bryson, associate professor of artificial intelligence at the University of Bath. “That’s why regulation should focus on privacy issues – because people who are predicted can be exploited.”

A real-world effect of evolving “deep learning” is that companies and professionals will rely more and more on “weak” AI decision-making without necessarily understanding the underlying technology that powers those decisions. That raises real issues of trust, and of how much the companies providing “deep learning” algorithms and processes will represent and warrant (or disclaim) about the quality of their data and processes.

The focus on deep learning is also resulting in new products and services that leverage real-time data from smartphones and search engines. Uber and Airbnb are two companies built around such “deep learning” and connected data. Services based on deep learning also trigger obligations to comply with privacy laws and obtain consumer consent when gathering information from consumers and users, and to design products that are “compliant by design,” which includes “privacy by design.”

  2. Automated Agents: Self-Driving Cars

The technology behind self-driving cars is nearly advanced enough for everyday use. But even as the technology approaches prime time, a larger set of legal issues and concerns surrounds the use of self-driving cars. Many of these focus on what happens when self-driving cars cause accidents, ranging from what happens when a driverless car kills someone, to who pays the ticket when a self-driving car does not notice a no-parking sign, to who is responsible when an error in Google Maps sends a self-driving vehicle the wrong way down a one-way street.

Current liability laws already provide some guidance in this area. For parking or traffic tickets, the owner of the car would most likely be held responsible for paying the ticket, even if the car itself, and not the owner, made the decision that led to breaking the law. For instances where an accident injures or kills someone, many parties would be likely to sue one another, but ultimately the car’s manufacturer would probably be held responsible, at least for civil penalties. However, allocating liability between the “hardware” manufacturer (the maker of the physical vehicle) and the “software” manufacturer (Google, Apple, etc.) will require a deep investigation into which system’s failure caused the accident.

And a manufacturer’s responsibility for problems discovered after a product is sold — like a faulty software update for a self-driving car — is perhaps less clear. Insurance companies will likely reexamine coverages and related issues for self-driving vehicles.

Criminal law, however, may be a different story, for the simple reason that robots cannot be charged with a crime. Criminal law looks for a guilty mind and a particular mental state. But if a “person” is not driving the car, it will be difficult to ascertain the “responsible individual’s” state of mind at the time of the accident.

In May 2016, Joshua Brown, 40, of Canton, Ohio, was killed while driving a Tesla Model S in the first known fatality involving a self-driving car, and questions have already arisen about the safety of the car’s crash-avoidance Autopilot system. Tesla told Senate investigators that a “technical failure” of the automatic braking system played a role, but maintains that Autopilot was not at fault. This may be a seminal “test case” in the self-driving car arena.

  3. Augmented Reality: Pokemon Go

Pokemon Go is a certified cultural phenomenon, and a huge moneymaker as well. The popular app grossed $200 million in its first month after launch, beating Candy Crush by a landslide. Pokemon Go is also a seminal test case for widespread use of augmented reality, even if it’s only a game.

Reports associated with Pokemon Go have ranged from the weird to the bizarre. Homeowners have reported a sudden spike in people parking in front of their homes because, unbeknownst to them, their physical house has become a “Pokemon gym.” That raises potential trespass and nuisance issues and claims “IRW” (in the real world).

Some venues have embraced Pokemon Go and become “gyms” or character locations to lure more traffic. Other venues, like the Holocaust Museum in Washington D.C., have asked that players stop catching Pokemon there because they consider it deeply disrespectful of the nature and intent of the museum. The museum is trying to contact Niantic to get its location removed from the game.

Individuals playing the game have reportedly been lured to dark and desolate places so others could rob or assault them, raising issues of personal responsibility, as well as liability disclaimer issues for players of the game. And those liability issues get even more complex when the players are teens or children under the age of 18. The issues that have arisen, and continue to arise, with respect to Pokemon Go illustrate precisely where society and the law may draw the lines of propriety and legality around the use of augmented reality.

Key Takeaways

AI is already affecting automation, national security, psychology, ethics, law, privacy, democracy and other societal issues, and will continue to do so. Companies looking to develop or deploy any “weak” AI products and services need to investigate and understand the existing laws related to the products or services they are seeking to offer or use, and also pay attention to the evolving legal and regulatory landscape around AI technologies. Yes, it is a brave new world, and many companies will boldly go where none have gone before. But companies should still always bring a (legal) towel.

Erin Fonte is head of Dykema’s Financial Services Regulatory and Compliance Group and a Member in the firm’s Austin office, where she assists clients with a broad range of matters related to FinTech, payments/payment systems, digital commerce, banking and financial services, and cybersecurity, privacy and data asset management.
