Bleeding Edge

by Glyn Moody


Filed Under:
ai, artificial intelligence, china, research

Companies:
google



What Are The Ethical Issues Of Google -- Or Anyone Else -- Conducting AI Research In China?

from the don't-be-evil,-but-AI-first? dept

AI is hot, and nowhere more so than in China:

The present global verve about artificial intelligence (AI) and machine learning technologies has resonated in China as much as anywhere on earth. With the State Council’s issuance of the "New Generation Artificial Intelligence Development Plan" on July 20 [2017], China's government set out an ambitious roadmap including targets through 2030. Meanwhile, in China's leading cities, flashy conferences on AI have become commonplace. It seems every mid-sized tech company wants to show off its self-driving car efforts, while numerous financial tech start-ups tout an AI-driven approach. Chatbot startups clog investors' date books, and Shanghai metro ads pitch AI-taught English language learning.

That's from a detailed analysis of China's new AI strategy document, produced by New America, which includes a full translation of the development plan. Part of AI's hotness is driven by all the usual Internet giants piling in with lots of money to attract the best researchers from around the world. One of the companies that is betting on AI in a big way is Google. Here's what Sundar Pichai wrote in his 2016 Founders' Letter:

Looking to the future, the next big step will be for the very concept of the "device" to fade away. Over time, the computer itself -- whatever its form factor -- will be an intelligent assistant helping you through your day. We will move from mobile first to an AI first world.

Given that emphasis, and the rise of China as a hotbed of AI activity, the announcement in December last year that Google was opening an AI lab in China made a lot of sense:

This Center joins other AI research groups we have all over the world, including in New York, Toronto, London and Zurich, all contributing towards the same goal of finding ways to make AI work better for everyone.

Focused on basic AI research, the Center will consist of a team of AI researchers in Beijing, supported by Google China's strong engineering teams.

So far, so obvious. But an interesting article on the Macro Polo site points out that there's a problem with AI research in China. It flows from the continuing roll-out of intrusive surveillance technologies there, as Techdirt has discussed in numerous posts. The issue is this:

Many, though not all, of these new surveillance technologies are powered by AI. Recent advances in AI have given computers superhuman pattern-recognition skills: the ability to spot correlations within oceans of digital data, and make predictions based on those correlations. It's a highly versatile skill that can be put to use diagnosing diseases, driving cars, predicting consumer behavior, or recognizing the face of a dissident captured by a city's omnipresent surveillance cameras. The Chinese government is going for all of the above, making AI core to its mission of upgrading the economy, broadening access to public goods, and maintaining political control.

As the Macro Polo article notes, Google is unlikely to allow any of its AI products or technologies to be sold directly to the authorities for surveillance purposes. But there are plenty of other ways in which advances in AI produced at Google's new lab could end up making life for Chinese dissidents, and for ordinary citizens in Xinjiang and Tibet, much, much worse. For example, the fierce competition for AI experts is likely to see Google's Beijing engineers headhunted by local Chinese companies, where knowledge can and will flow unimpeded to government departments. Although arguably Chinese researchers elsewhere -- in the US or Europe, for example -- might also return home, taking their expertise with them, there's no doubt that the barriers to doing so are higher in that case.

So does that mean that Google is wrong to open up a lab in Beijing, when it could simply have expanded its existing AI teams elsewhere? Is this another step toward re-entering China after it shut down operations there in 2010 over the authorities' insistence that it should censor its search results -- which, to its credit, Google refused to do? "AI first" is all very well, but where does "Don't be evil" fit into that?

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+


Reader Comments



  • Christenson, 14 Feb 2018 @ 8:01pm

    when you get big enough...

    or important enough, no move is non-political, and no move won't have evil drawbacks.

    Nor is all this tech we love to play with: the smartphones, the ubiquitous videocams, the "smart" devices -- cars, electric meters, and so on. We see this with our impact on the environment, too.

    Yes, Google is having an AI lab in China... but by how much would it change the direction of China if they didn't? Wouldn't the Chinese just set one up themselves?


  • Anonymous Coward, 14 Feb 2018 @ 8:22pm

    If Google builds an AI smart enough to be considered a natural person in China, what will be the radius of damage when blue's head explodes?


  • Pixelation, 14 Feb 2018 @ 9:11pm

    "What Are The Ethical Issues Of Google -- Or Anyone Else -- Conducting AI Research In China?"

    My guess is, China has no qualms about making AI clones.


  • Anonymous Coward, 14 Feb 2018 @ 10:51pm

    The problem isn't AI.

    It's the dangers associated with mass private data being collected and shared by governments and corporations without transparency or user consent.

    The issue is no longer just about invasion of privacy - it's the targeted psychological and behavioural (and physical) manipulation that becomes possible when you amass huge quantities of identifiable person-level data on a societal scale.

    In the worst cases these data could be used to identify and track (and get rid of) political enemies of tyrannical governments. It's not like we haven't seen that happen before...

    But there are other serious dangers, such as optimised, targeted political advertising being used to "game" elections (there is plenty of evidence that this has occurred over the last few cycles) and optimally targeted marketing of worthless or harmful products to vulnerable and easily manipulated populations (e.g. what the big video game publishers are currently doing, or what junk food and booze companies do). We need to stop pretending and acknowledge that humans, and communities of humans, can be "hacked" using big data and machine learning.

    China is worse than Google and Google is worse than many others. But it is governments that are failing to protect our personal data from being collected by these actors.

    Personal data needs to legally belong to, and be controlled by, the person it is about. Maybe then we could start seeing the incredible potential of AI being used for more beneficial purposes.


    • Anonymous Coward, 15 Feb 2018 @ 6:02pm

      Re:

      You hit the nail on the head with the manipulation. Remember, Facebook already experimented to see if they could affect people's moods. Next up will be to see if they can change your mind and your vote.


  • Anonymous Coward, 15 Feb 2018 @ 4:55am

    Hmm

    IBM punch cards - Germany
    Google AI - China

    https://en.wikipedia.org/wiki/Historic_recurrence


  • Anonymous Coward, 15 Feb 2018 @ 8:57am

    Dave, I don't understand "ethics"

    Are they a new type of earbud?


