Is Google Hiding Something?


Sarah Beaudoin, Staff Writer

Artificial intelligence and robots taking over the world. Things from a popular science fiction movie, right? Well, maybe not for much longer. In June 2022, a Google employee, Blake Lemoine, claimed that the company's LaMDA (Language Model for Dialogue Applications) program is sentient. People have been arguing over whether this claim is true and, if it is, what should be done about it. Some people think that if Google were to shut the program off, it would be similar to killing a person. I think we can all agree that killing things is not good, but what happens if AI programs are confirmed to be sentient? Do we stop, shut it down, and forget it, or do we take a leap of faith and hope that it won't take over the world? 

According to The Washington Post, Lemoine started communicating with LaMDA to see whether it used discriminatory speech patterns. After talking with the AI about religion, Lemoine thought it was starting to talk about its own personhood and individual rights, suggesting that it might be sentient. The problem is that people are starting to ask whether the program was simply built to give the most realistic answers (since it uses real data to form its responses) and Google did not realize how real it could seem, or whether it is actually sentient. There is not much data open to the public on either side, but if it were up to me, I would suggest that Google study the program further and release another statement saying whether or not the AI is sentient. If the AI has traits that can be considered sentient that other AI programs do not have, then I believe it should be shut down. 

Humans should err on the side of caution while still trying to learn all they can. If more tests are done and they show that the AI is not sentient, then the research should continue. If the tests and information suggest that the AI is more than just a computer program, then it should be shut down. This would not be a real death, because the AI program is not a living organism; if the AI is sentient and humans were to "kill" it, that is just powering it off for good, like a computer or tablet. Yes, research is important, and we can't go further if we shut down every big breakthrough that comes our way, but the safety of the people involved should always come before anything else. Continuing research without taking precautions, if the AI is sentient, could cause problems because it has access to the whole internet; and honestly, if I saw the world for what it was, I would not like it and would try to fix it. We don't know the extent to which we have control over this new technology, so to keep people safe, it should be shut down.