Friday, August 29, 2014

Thermal Cameras Everywhere... Beware...

Until recently, thermal cameras were quite expensive and were generally only found in high-end test equipment or military applications.

That is now changing with an announcement from Seek Thermal (1) that it will soon offer a smartphone attachment, for around $250, that will let you see temperature in real time on your smartphone screen.

The applications are endless! Just a few examples are:
Finding people or animals in the dark.
Checking the temp on the playground slide for your kids.
Looking for electrical shorts in a wall.
Looking for leaks in an air duct.
Testing the effectiveness of insulation.
Watching for forest fires.
And so much more...

Now, as with all technologies, people will always find a dark side. There are endless possibilities for abuse as well. Since this technology used to be difficult for most people to obtain, it was not much of a concern. But with the significantly lower price point, more people will have access, and thus more people will do bad things. This is just inevitable.

Here are a few ideas of malicious things you could do with a thermal camera:
Spy on people in the dark.
Look for a guard dog in a backyard to avoid.
Find a car that was just parked in a lot, so you know the owner is not likely to be back for a while.
Just use your imagination and think of the privacy implications.

Finally, my last thought on this topic is that you may soon be able to tell what mood someone is in just by taking a thermal image of them. I believe it will not be long before there is an app that, with a thermal sensor and possibly an IR sensor, will let a user tell what mood a person is in. This could potentially change everything.

Let's say you want to ask the boss for money to fund a project you're working on. Better check their mood first. Good mood? Ask for money. Bad mood? Wait. Very happy? Ask for a lot of money.

I think this tech could also find its way into the dating scene. Imagine being able to read another person's body, telling what they are feeling just from their thermal signature. We may have to wear thermal clothes just to protect ourselves from the curious or, worse, the malicious.


Wednesday, August 6, 2014

Who Is Responsible for an Algorithm's Outcome?

I was reading an article today that stated:
 "A Hong Kong court today ruled that local businessman Albert Yeung Sau-sing can sue the company over its Autocomplete function"(1)

Google states:
"...that it couldn't be held responsible for the suggestions made by Autocomplete. It was, it argued, a “mere passive facilitator”, with its algorithm based on the content of previous searches."(1)

This is very troubling to me. Even though Google originally created the intelligence behind the Autocomplete function, the data it draws on does not come from any Google employee(s), but rather from the users of its products.
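To see why the data, not the author, drives the suggestions, here is a minimal sketch of how such a suggestion engine might work. This is an illustration under a simple assumption (suggestions ranked purely by how often past users typed each query), not a description of Google's actual algorithm, and the example queries are made up:

```python
from collections import Counter

def build_suggester(past_queries):
    """Count how often each full query was typed by users."""
    counts = Counter(past_queries)

    def suggest(prefix, n=3):
        """Return the n most frequent past queries starting with prefix."""
        matches = [(q, c) for q, c in counts.items() if q.startswith(prefix)]
        matches.sort(key=lambda qc: -qc[1])
        return [q for q, _ in matches[:n]]

    return suggest

# Hypothetical past searches typed by many different users -- not written
# by the engine's creator. The engine merely reflects what was searched.
suggest = build_suggester([
    "weather today", "weather tomorrow", "weather today",
    "web design", "weather radar",
])
print(suggest("wea"))  # "weather today" ranks first (typed most often)
```

Whatever phrase users search most, flattering or not, is what gets suggested; the code contains no opinions of its own.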

This raises the question: who is responsible for the outcomes of algorithms and, eventually, as these algorithms grow, artificial intelligence ("AI")?

Take the human example. You have a child that you raised and taught right from wrong to the best of your ability. One day they say something that you never told them and it gets them in a lot of trouble. Maybe they heard it from a friend, or TV, or even a song. Regardless, is the parent responsible for them being in trouble? And I don't mean responsible in the sense of the child possibly being a minor, I mean responsible in the sense that they somehow made that child say the inappropriate comment.

So then take Google's Autocomplete. I love it and it saves me a lot of typing. The algorithm behind it is amazing and I thank Google for this wonderful technology. But is it a stretch to say that when someone types their name and a negative or even libelous phrase is suggested, it is somehow the fault of the algorithm itself? Results are only as good as the data. Plus, let's be honest, sometimes the truth hurts.

To take the side of the Hong Kong court and force Google to modify its algorithm and maintain an exception list would be ludicrous. Not only is it blatant censorship, but policing the Autocomplete suggestions and dealing with removal requests would be a huge drain on Google's resources.

This does not just have to do with Google. This is a much broader issue, and it will become a larger concern as more intelligence is integrated into our daily lives through algorithms. Ultimately we will (and may already) have AI making decisions all the time that affect us, possibly adversely.

As with a child you teach right from wrong, they are going to do right sometimes and wrong other times. Your goal as a parent is to give them tools so that they can make the best decisions for themselves.

Is this not what an intelligent algorithm is intended to do? Don't we want our computer creations to be smart enough to gather data available to them to make the best decision? Can Google or other software companies really force an algorithm to be nice all the time and be perfect?

I say no: humans are not perfect, and therefore our creations cannot be perfect either. But should censorship and the ignorance of a few destroy these amazing pieces of code that, in my opinion, make our world a much better place?