Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is still so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically

Sunday 8 January 2023

Code Red : for Humanity ?

 


 

Context :

ChatGPT is a 'Code Red' for Google's search business  /  ET  /  26 Dec 2022

 

Extract :

 

Over the past three decades, a handful of products like Netscape's web browser, Google's search engine and Apple's iPhone have truly upended the tech industry and made what came before them look like lumbering dinosaurs.

Last month, an experimental chatbot called ChatGPT made its case to be the industry's next big disrupter.

It can serve up information in clear, simple sentences, rather than just a list of internet links. It can explain concepts in ways people can easily understand. It can even generate ideas from scratch, including business strategies, Christmas gift suggestions, blog topics and vacation plans.

Although ChatGPT still has plenty of room for improvement, its release led Google's management to declare a "code red."

 

For Google, this was akin to pulling the fire alarm. Some fear the company may be approaching a moment that the biggest Silicon Valley outfits dread - the arrival of an enormous technological change that could upend the business.

For more than 20 years, the Google search engine has served as the world's primary gateway to the internet. But with a new kind of chatbot technology poised to reinvent or even replace traditional search engines, Google could face the first serious threat to its main search business. One Google executive described the efforts as make or break for Google's future.


ChatGPT was released by an aggressive research lab called OpenAI, and Google is among the many other companies, labs and researchers that have helped build this technology. But experts believe the tech giant could struggle to compete with the newer, smaller companies developing these chatbots, because of the many ways the technology could damage its business.


Google has spent several years working on chatbots and, like other big tech companies, has aggressively pursued artificial intelligence technology. Google has already built a chatbot that could rival ChatGPT. In fact, the technology at the heart of OpenAI's chatbot was developed by researchers at Google.

Called LaMDA, or Language Model for Dialogue Applications, Google's chatbot received enormous attention in the summer when a Google engineer, Blake Lemoine, claimed it was sentient. This was not true, but the technology showed how much chatbot technology had improved in recent months.

Google may be reluctant to deploy this new tech as a replacement for online search, however, because it is not suited to delivering digital ads, which accounted for more than 80% of the company's revenue last year.

"No company is invincible; all are vulnerable," said Margaret O'Mara, a professor at the University of Washington who specializes in the history of Silicon Valley. "For companies that have become extraordinarily successful doing one market-defining thing, it is hard to have a second act with something entirely different."

Because these new chatbots learn their skills by analyzing huge amounts of data posted to the internet, they have a way of blending fiction with fact. They deliver information that can be biased against women and people of color.

 

They can generate toxic language, including hate speech.

All of that could turn people against Google and damage the corporate brand it has spent decades building. As OpenAI has shown, newer companies may be more willing to take their chances with complaints in exchange for growth.


Even if Google perfects chatbots, it must tackle another issue: Does this technology cannibalize the company's lucrative search ads? If a chatbot is responding to queries with tight sentences, there is less reason for people to click on advertising links.


"Google has a business model issue," said Amr Awadallah, who worked for Yahoo and Google and now runs Vectara , a startup that is building similar technology. "If Google gives you the perfect answer to each query, you won't click on any ads."


Sundar Pichai, Google's CEO, has been involved in a series of meetings to define Google's AI strategy, and he has upended the work of numerous groups inside the company to respond to the threat that ChatGPT poses, according to a memo and audio recording obtained by The New York Times.

 

Employees have also been tasked with building AI products that can create artwork and other images, such as OpenAI's DALL-E technology, which has been used by more than 3 million people.


From now until a major conference expected to be hosted by Google in May, teams within Google's research, Trust and Safety, and other departments have been reassigned to help develop and release new AI prototypes and products.

As the technology advances, industry experts believe, Google must decide whether it will overhaul its search engine and make a full-fledged chatbot the face of its flagship service.


Google has been reluctant to share its technology broadly because, like ChatGPT and similar systems, it can generate false, toxic and biased information. LaMDA is available to only a limited number of people through an experimental app, AI Test Kitchen.

Google sees this as a struggle to deploy its advanced AI without harming users or society, according to a memo viewed by the Times.

 

In one recent meeting, a manager acknowledged that smaller companies had fewer concerns about releasing these tools but said Google must wade into the fray or the industry could move on without it, according to an audio recording of the meeting obtained by the Times.

Other companies have a similar problem. Seven years ago, Microsoft released a chatbot called Tay that spewed racist, xenophobic and otherwise filthy language and was forced to immediately remove it from the internet - never to return. In recent weeks, Meta took down a newer chatbot for many of the same reasons.

Executives said in the recorded meeting that Google intended to release the technology that drove its chatbot as a cloud computing service for outside businesses and that it might incorporate the technology into simple customer support tasks. It will maintain its trust and safety standards for official products, but it will also release prototypes that do not meet those standards.


It may limit those prototypes to 500,000 users and warn them that the technology could produce false or offensive statements. Since its release on the last day of November, ChatGPT - which can produce similarly toxic material - has been used by more than 1 million people.


"A cool demo of a conversational system that people can interact with over a few rounds, and it feels mind-blowing? That is a good step, but it is not the thing that will really transform society," Zoubin Ghahramani, who oversees the AI lab Google Brain, said in an interview with the Times last month, before ChatGPT was released. "It is not something that people can use reliably on a daily basis."

Google has already been working to enhance its search engine using the same technology that underpins chatbots like LaMDA and ChatGPT. The technology - a "large language model" - is not merely a way for machines to carry on a conversation.

Today, this technology helps the Google search engine highlight results that aim to directly answer a question you have asked. In the past, if you typed "Do aestheticians stand a lot at work?" into Google, it did not understand what you were asking. Now, Google correctly responds with a short blurb describing the physical demands of life in the skin care industry.


Many experts believe Google will continue to take this approach, incrementally improving its search engine rather than overhauling it. "Google Search is fairly conservative," said Margaret Mitchell, who was an AI researcher at Microsoft and Google, where she helped to start its Ethical AI team, and is now at the research lab Hugging Face. "It tries not to mess up a system that works."


Other companies, including Vectara and a search engine called Neeva, are working to enhance search technology in similar ways. But as OpenAI and other companies improve their chatbots - working to solve problems with toxicity and bias - this could become a viable replacement for today's search engines. Whoever gets there first could be the winner.


"Last year, I was despondent that it was so hard to dislodge the iron grip of Google," said Sridhar Ramaswamy, who previously oversaw advertising for Google, including Search ads, and now runs Neeva. "But technological moments like this create an opportunity for more competition."

 

 

Internet sensation ChatGPT attracts the dark side of tech     /  Business Line  /  09  Jan  2023

Extract :

ChatGPT, the artificial intelligence-based chatbot tool created by OpenAI that has taken the world by storm with its human-like ability to respond to queries, is falling prey to cybercriminals. Two months into its launch, hackers have started using the platform to generate malicious content to dupe people.


In the Dark Web, a hacker think tank has been busy posting how ChatGPT can be used to build malicious tools and recreate malware strains for data theft.


Another hacker showed how to use the platform to create a marketplace script on the Dark Web for trading illegal goods, according to Check Point Research (CPR).


“Recently, I have been playing with ChatGPT. And, I have recreated many malware strains and techniques based on some write-ups and analyses of commonly known malware,” a hacker said, taking part in a thread.


According to CPR, it is not just coding savvy hackers, but even people with less technical skills who can use the platform for malicious purposes.


Srinivas Kodali, a privacy and digital infrastructure transparency activist, says it is quite a natural social phenomenon. “Technology can always be used for good and bad things. It is the responsibility of the government to create awareness, educate the public and to regulate and keep tabs on the bad actors,” he said.


ChatGPT seems to be aware of this challenge. When a user posed a question on the platform about the scope for malicious uses, it pointed out that some might try to “use me or other language models to generate spam or phishing messages”.


“As a language model, I do not have the ability to take action or interact with the real world, so I cannot be used for malicious purposes. I am simply a tool that is designed to generate text based on the input that I receive,” it says.


OpenAI, which developed the platform, has warned that ChatGPT could sometimes respond to harmful instructions or exhibit biased behavior, though it has made efforts to make the model refuse inappropriate requests.


“Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes. Although the tools that we analyse in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools,” Sergey Shykevich, Threat Intelligence Group Manager at Check Point, said.

 


MY  TAKE  :


When, this morning, I asked ChatGPT to comment on the above-stated Business Line article, I got the following reply :


It is not uncommon for new technologies to be used for malicious purposes. This is true for artificial intelligence and chatbots like ChatGPT.

It is important for developers to be aware of this possibility and to design their systems with security in mind.

It is also the responsibility of governments to educate the public about the potential risks of new technologies and to regulate and monitor the use of these technologies to prevent abuse.

It is important to note that ChatGPT is simply a tool that generates text based on the input it receives and does not have the ability to take action or interact with the real world.

However, it is possible for malicious actors to use the output of the tool for nefarious purposes.

 

How can we expect future generations of ChatGPT-like AI to evolve ?

 

Here is my guess :


V 1.0 ( 2010 – 2020 ) >  Text-to-Speech

V 2.0 ( 2020 – 2023 ) >  Conversational AI / Text-to-Image / Text-to-Video

                                 >  DigitalAvatarOfHemenParekh.com ( powered by Personal.ai ) chatting ( in my voice ) with 1,000 persons SIMULTANEOUSLY

V 3.0 ( 2023 – 2024 ) >  Text-to-Feel / Smell / Taste

                                 >  3D Volumetric Holograms ( CES 23 )

                                 >  3D Television

V 4.0 ( 2024 – 2025 ) >  Thought-to-Speech / Action ( Brain-Computer Interface )

V 5.0 ( 2025 – 2026 ) >  Thought-to-Holographic Transportation ( ? )

 

 

 

 

With regards,

Hemen Parekh

hcp@RecruitGuru.com  /  09 Jan 2023

 

 

Related Readings :

 

Ø  If Satya is here, can Sundar be far behind ? ……………… [ 04 Jan 2023 ]

Ø  Now it is ChatGPT vs YouChat ………………………………… [ 03 Jan 2023 ]

Ø  My Experiments with ChatGPT ……………………………… [ 26 Dec 2022 ]

Ø  Google , watch out – here comes ChatGPT ! ……………… [ 25 Dec 2022 ]

Ø  Future of Tourism ? ………………………………………………… [ 22 July 2020 ]

Ø  Evolution : JioFibre > JioMeet / Revolution : Jio Glass … [ 18 July 2020 ]

Ø  As Envisaged : Merging – Morphing – Melding of Senses … [ 29 Apr 2022 ]

Ø  2024 ! – V 2.0 of Orwellian 1984 ? ………………………… [ 07 July 2017 ]

Ø  Nostradamus could have said ? …………………………… [ 08 Nov 2018 ]

Ø  Bypassing Electrodes ? …………………………………………… [ 21 July 2020 ]

Ø  Fast Forward to Future ( 3 F ) ………………………………… [ 20 Oct 2016 ]

Ø  Artificial Intelligence : Brahma , Vishnu or Mahesh ? … [ 30 June 2017 ]

Ø  Racing towards ARIHANT ? ……………………………………… [ 04 Aug 2017 ]

Ø  to : Alphabet / from : ARIHANT ……………………………… [ 12 Oct 2017 ]

Ø  ARIHANT : the Destroyer of Enemy …………………………… [ 24 Nov 2017 ]

Ø  ARIHANT : Beyond “ Thought Experiment ” ……………… [ 21 May 2018 ]

Ø  Singularity : an Indian Concept ? ……………………………… [ 29 Mar 2020 ]

Ø  From Tele-phony to Tele-Empathy ? ………………………… [ 27 Mar 2018 ]

 

 

 

 

 

 

 

 

     
