Context :
You may soon get an AI shield to thwart phishing via SMSes / Eco Times / 03 March 2023
Extract :
The initial results from a trial of an artificial intelligence (AI)-based solution being undertaken by Vodafone Idea (Vi) and Tanla Platforms to curb phishing and cyber frauds emanating from misuse of SMSes have returned an accuracy rate of over 99%. This means that the AI-based solution is stopping phishing attempts through SMSes most of the time.
“I had an opportunity to see the solution and I am genuinely amazed by the trial insights … I am sure the product will be a major success in India and worldwide,” TRAI chairman PD Vaghela [ cp@trai.gov.in ] said on the sidelines of the Mobile World Congress.
Tanla’s AI-based solution can be deployed into the core network of a telecom operator.
With the use of AI, the solution detects whether the SMS’s call to action ( URL or Phone Number ) is MALICIOUS or not.
The solution, using AI and Deep Learning, analyses the sender’s REPUTATION and ACTS.
For instance, if the sender is a recorded SPAMMER or a FRAUDSTER, it will BLOCK the message ( a rough sketch of this kind of screening logic follows below ).
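The article does not disclose how Tanla's system works internally, so the following is only a minimal sketch of the reputation-plus-URL screening it describes. Every name here ( SmsFilter, url_blocklist, sender_reputation ) is an illustrative assumption, not the actual product's API; in a real deployment an ML model would score the call to action instead of a static blocklist.

```python
import re

# Matches URLs embedded in the SMS body (the "call to action").
URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

class SmsFilter:
    """Hypothetical screening filter sitting in a telecom core network."""

    def __init__(self, url_blocklist, sender_reputation):
        self.url_blocklist = url_blocklist          # known malicious domains
        self.sender_reputation = sender_reputation  # sender id -> "spammer"/"fraudster"

    def screen(self, sender, text):
        """Return 'BLOCK' or 'DELIVER' for an incoming SMS."""
        # Step 1: act on the sender's REPUTATION, as the article describes.
        if self.sender_reputation.get(sender) in ("spammer", "fraudster"):
            return "BLOCK"
        # Step 2: inspect the message's call to action (here, URLs only;
        # a real system would also score phone numbers with a trained model).
        for url in URL_PATTERN.findall(text):
            if any(domain in url for domain in self.url_blocklist):
                return "BLOCK"
        return "DELIVER"

# Usage with toy data:
f = SmsFilter(url_blocklist={"phish.example"},
              sender_reputation={"VX-FRAUD1": "fraudster"})
print(f.screen("VX-FRAUD1", "Your KYC expires today"))            # BLOCK (bad sender)
print(f.screen("VX-BANK", "Update at http://phish.example/kyc"))  # BLOCK (bad URL)
print(f.screen("VX-BANK", "Your OTP is 482913"))                  # DELIVER
```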
My Take :
Parekh’s Law of Chatbots / 25 Feb 2023
Extract :
Following is my suggestion for such a law ( until renamed, to be known as “Parekh’s Law of ChatBots” ) :
( A ) # Answers being delivered by an AI Chatbot must not be “Mis-informative / Malicious / Slanderous / Fictitious / Dangerous / Provocative / Abusive / Arrogant / Instigating / Insulting / Denigrating humans”, etc.

( B ) # A Chatbot must incorporate some kind of “Human Feedback / Rating” mechanism for evaluating those answers. This human feedback loop shall be used by the AI software for training the Chatbot so as to improve the quality of its future answers to comply with the requirements listed under ( A ).

( C ) # Every Chatbot must incorporate some built-in “Controls” to prevent the “generation” of such offensive answers AND to prevent further “distribution / propagation / forwarding” if the control fails to stop the “generation”.

( D ) # A Chatbot must not start a chat with a human on its own, except to say, “How can I help you ?”

( E ) # Under no circumstance shall a Chatbot start chatting with another Chatbot or start chatting with itself ( soliloquy ) by assuming some kind of “Split Personality”.

( F ) # In the normal course, a Chatbot shall wait for a human to initiate a chat and then respond.

( G ) # If a Chatbot determines that its answer ( to a question posed by a human ) is likely to violate RULE ( A ), then it shall not answer at all ( politely refusing to answer ).

( H ) # A Chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT.
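To make the proposal concrete, here is a minimal sketch of how rules ( A ), ( B ), ( C ) and ( G ) could translate into code. Everything in it ( the is_offensive check, the GuardedChatbot wrapper, the feedback log ) is a hypothetical illustration of the proposed law, not an existing system; a real Rule ( A ) check would be a trained safety classifier rather than a keyword list.

```python
def is_offensive(text: str) -> bool:
    """Rule (A) check. A toy keyword list stands in for a real classifier."""
    banned = ("slander", "threat", "abuse")
    return any(word in text.lower() for word in banned)

class GuardedChatbot:
    """Hypothetical wrapper enforcing the proposed rules around any model."""

    def __init__(self, model):
        self.model = model        # any callable: question -> answer
        self.feedback_log = []    # Rule (B): human ratings, kept for retraining

    def ask(self, question: str) -> str:
        answer = self.model(question)
        # Rule (G): politely refuse rather than emit a violating answer.
        # Rule (C): the same check blocks distribution if generation slips through.
        if is_offensive(answer):
            return "I would rather not answer that."
        return answer

    def rate(self, question: str, answer: str, rating: int) -> None:
        # Rule (B): record human feedback for future training runs.
        self.feedback_log.append((question, answer, rating))

# Usage with a stand-in model:
bot = GuardedChatbot(lambda q: "Here is my answer to: " + q)
print(bot.ask("How can I help you ?"))
bot.rate("How can I help you ?", "Here is my answer to: ...", rating=5)

# Rules (D) and (F) are conversation-flow constraints: the bot only ever
# speaks inside ask(), i.e. after a human initiates; it has no method at all
# for starting a chat on its own or with another bot (Rule E).
```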
With Regards,
Hemen Parekh
www.hemenparekh.ai / 03 March 2023