Wednesday, August 6, 2014

Who Is Responsible for an Algorithm's Outcome?

I was reading an article today that stated:
 "A Hong Kong court today ruled that local businessman Albert Yeung Sau-sing can sue the company over its Autocomplete function"(1)

Google states:
"...that it couldn't be held responsible for the suggestions made by Autocomplete. It was, it argued, a “mere passive facilitator”, with its algorithm based on the content of previous searches."(1)

This is very troubling to me. Even though Google originally created the intelligence behind the Autocomplete function, the data it serves up does not come from any Google employee, but rather from the users of its products.

So it raises the question: who is responsible for the outcomes of algorithms, and eventually of the artificial intelligence ("AI") these algorithms grow into?

Take the human example. You have a child whom you raised and taught right from wrong to the best of your ability. One day they say something you never taught them, and it gets them into a lot of trouble. Maybe they heard it from a friend, or TV, or even a song. Regardless, is the parent responsible for them being in trouble? I don't mean responsible in the legal sense of the child being a minor; I mean responsible in the sense that the parent somehow made the child say the inappropriate comment.

So then take Google's Autocomplete. I love it and it saves me a lot of typing. The algorithm behind it is amazing, and I thank Google for this wonderful technology. But is it a stretch to say that when someone types their name and a negative or even libelous phrase is suggested, it is somehow the fault of the algorithm itself? Results are only as good as the data. Plus, let's be honest, sometimes the truth hurts.
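
To make that concrete, here is a minimal sketch in Python of how a frequency-based autocomplete could work. This is my own illustration, not Google's actual algorithm, and the query log and names in it are entirely made up. The point is that the suggestions are computed directly from what users have typed before:

    from collections import Counter

    # Hypothetical query log. In a real system this would be aggregated
    # from millions of user searches; no employee writes these entries.
    query_log = [
        "jane doe",
        "jane doe author",
        "jane doe scandal",   # users searched this, fair or not
        "jane doe scandal",
        "jane doe scandal",
        "jane austen",
    ]

    def autocomplete(prefix, log, n=3):
        """Return the n most frequent past queries starting with prefix."""
        counts = Counter(q for q in log if q.startswith(prefix))
        return [query for query, _ in counts.most_common(n)]

    print(autocomplete("jane doe", query_log))
    # ['jane doe scandal', 'jane doe', 'jane doe author'] -- the top
    # suggestion simply mirrors the data, for better or worse.

Nothing in that code knows or cares whether the suggested phrase is true; it just counts.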

To take the side of the Hong Kong court and force Google to modify its algorithm and maintain an exception list would be ludicrous. Not only is it blatant censorship, but policing the Autocomplete suggestions and dealing with removal requests would be a huge drain on Google's resources.
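
For a sense of what that exception list would mean in practice, here is a sketch building on the autocomplete example above (again, my own hypothetical, not anything Google has published). Every suggestion has to be checked against a list of removed phrases, and that list only ever grows, with a human review behind each entry:

    # Hypothetical exception list: phrases a court ordered removed or
    # that survived a takedown request review. It only ever grows.
    exception_list = {
        "jane doe scandal",
    }

    def filtered_autocomplete(prefix, log, n=3):
        """Autocomplete, minus anything on the exception list."""
        # Over-fetch so filtering can still fill n slots.
        candidates = autocomplete(prefix, log, n + len(exception_list))
        return [q for q in candidates if q not in exception_list][:n]

    print(filtered_autocomplete("jane doe", query_log))
    # ['jane doe', 'jane doe author'] -- the suggestion is gone, but only
    # because someone, somewhere, reviewed a request and edited the list.

The code itself is trivial; the drain is the human process of deciding what goes on the list and defending each of those decisions.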

This does not just have to do with Google. This is a much broader notion, and it will become a larger concern as more intelligence is integrated into our daily lives through algorithms. Ultimately we will (and may already) have AI making decisions all the time that affect us, possibly adversely.

As with a child you have taught right from wrong, they are going to do right sometimes and wrong other times. Your goal as a parent is to give them tools so that they can make the best decisions for themselves.

Is this not what an intelligent algorithm is intended to do? Don't we want our computer creations to be smart enough to gather the data available to them and make the best decision? Can Google or other software companies really force an algorithm to be nice all the time and be perfect?

I say no: humans are not perfect, and therefore our creations cannot be perfect either. But should censorship and the ignorance of a few destroy these amazing pieces of code that make our world, in my opinion, a much better place?


References:
(1) http://www.forbes.com/sites/emmawoollacott/2014/08/06/more-privacy-woes-for-google-this-time-its-autocomplete/
