Hi Friends,

Even as I launch this today (my 80th birthday), I realize that there is still so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday (27 June 2023), I invite you to visit my Digital Avatar (www.hemenparekh.ai) and continue chatting with me, even when I am no longer here physically.

Saturday 17 June 2017

Will Robots get better than Humans?



There is general agreement among scientists and technologists that, in some respects at least, robots will get better at doing things than human beings.


Some of these experts have even gone so far as to predict which jobs robots will take over, and when.


But some scepticism persists about a robot's ability to copy and display "human emotions and feelings".


E.g., empathy.


This is a "feeling" that I believe a software agent (a chatbot?) could be trained to acquire, if someone were to implement:






The following news report (Mumbai Mirror / 17 June) reinforces my belief:




"New algorithm teaches robots human etiquette"



Scientists have developed a new machine-learning algorithm to help robots display appropriate social behaviour in interactions with humans.


Advances in artificial intelligence (AI) are making virtual and robotic assistants increasingly capable of performing complex tasks, researchers said.


For these “smart” machines to be considered safe and trustworthy collaborators with human partners, however, robots must be able to quickly assess a given situation and apply human social norms, they said.


Now, researchers at Brown University and Tufts University in the US have created a cognitive-computational model of human norms in a representation that can be coded into machines.


They developed a machine-learning algorithm that allows machines to learn norms in unfamiliar situations, drawing on human data.
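The report gives no implementation details, but the idea of learning norms from human data and generalising to unfamiliar situations can be caricatured in a few lines. Purely as an illustrative sketch (the contexts, actions, ratings, and the similarity-weighted averaging scheme below are all my own invention, not the Brown/Tufts model): human raters score how appropriate an action is in a few contexts, and the machine estimates appropriateness in a new context by weighting those ratings by feature overlap.

```python
# Hypothetical sketch: learning norm appropriateness from human ratings
# and generalising to an unseen context via feature-overlap similarity.

def jaccard(a, b):
    """Similarity between two contexts, each a set of toy features."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# "Human data": (context features, action, appropriateness rating in [0, 1])
judgments = [
    ({"indoors", "quiet", "shared_space"}, "speak_loudly", 0.1),
    ({"indoors", "quiet", "shared_space"}, "whisper",      0.9),
    ({"outdoors", "noisy"},                "speak_loudly", 0.8),
    ({"outdoors", "noisy"},                "whisper",      0.3),
]

def predict(context, action):
    """Estimate how appropriate `action` is in an unfamiliar `context`
    by similarity-weighted averaging over the rated examples."""
    scores = [(jaccard(context, ctx), rating)
              for ctx, act, rating in judgments if act == action]
    total = sum(w for w, _ in scores)
    if total == 0:
        return 0.5  # no evidence either way: stay neutral
    return sum(w * r for w, r in scores) / total

# An unfamiliar context: a hospital waiting room (quiet, indoors, shared).
waiting_room = {"indoors", "quiet", "shared_space", "waiting_room"}
assert predict(waiting_room, "whisper") > predict(waiting_room, "speak_loudly")
```

Even this toy version shows the hard part the researchers point to: everything hinges on how contexts are represented and how similarity is measured, which is exactly what a cognitive-computational model of norms has to get right.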


The project, funded by the US Defence Advanced Research Projects Agency (DARPA), represents important progress towards the development of AI systems that can "intuit" how to behave in certain situations in much the way people do.


“The goal of this research effort was to understand and formalise human normative systems and how they guide human behaviour, so that we can set guidelines for how to design next generation AI machines that are able to help and interact effectively with humans,” said Reza Ghanadan, DARPA programme manager.


As an example in which humans intuitively apply social norms of behaviour, consider a situation in which a cell phone rings in a quiet library, researchers said.


A person receiving that call would quickly try to silence the distracting phone, and whisper into the phone before going outside to continue the call in a normal voice.


Today, an AI phone-answering system would not automatically respond with that kind of social sensitivity.
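What today's systems lack can be seen by writing the library example down as the crudest possible norm "system": a hand-built lookup table from (context, event) to a socially acceptable response. This is a hypothetical sketch (all names are invented); it is precisely what the researchers argue is not enough, because a hand-coded table cannot cover unfamiliar situations the way a learned model could.

```python
# Hypothetical hand-coded norm table: (context, event) -> action sequence.
NORMS = {
    ("library", "phone_rings"): ["silence_phone", "whisper", "step_outside"],
    ("street",  "phone_rings"): ["answer_normally"],
}

def respond(context, event):
    """Return the norm-conforming action sequence, or a safe default
    when the situation was never anticipated by the table's author."""
    return NORMS.get((context, event), ["defer_to_user"])

print(respond("library", "phone_rings"))
# → ['silence_phone', 'whisper', 'step_outside']

# Any uncovered situation exposes the limits of hand-coding:
print(respond("airplane", "phone_rings"))
# → ['defer_to_user']
```

The gap between this table and human behaviour is the point of the research: people generalise norms to situations nobody enumerated for them, which is why the article frames norm learning, rather than norm listing, as the problem.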


“We do not currently know how to incorporate meaningful norm processing into effective computational architectures,” Ghanadan said, adding that social and ethical norms have a number of properties that make them uniquely challenging.


Ultimately, for a robot to become social or perhaps even ethical, it will need to have a capacity to learn, represent, activate, and apply a large number of norms that people in a given society expect one another to obey, Ghanadan said.


That task will prove far more complicated than teaching AI systems rules for simpler tasks such as tagging pictures, detecting spam, or guiding people through their tax returns.


However, by providing a framework for developing and testing such complex algorithms, the new research could bring machines that emulate the best of human behaviour closer, researchers said.


18 June 2017


