Context:
High court unwilling to step in to curb deepfakes …. TOI / 05 Dec 2023
Extract:
The Delhi High Court expressed reservations about intervening in the use of deepfake content generated through AI. The court suggested that the government is better equipped to address the complex issue and find a balanced solution.
“This technology is now available in a borderless world. How do you control the internet? It cannot be policed much. After all, the freedom of the net will be lost. So there are very important balancing factors involved in this. You have to arrive at a solution which balances all interests. It is only the government with all its resources that can do it. They have the data, they have the wide machinery, they have to make a decision about this. It is a very, very complicated issue,” Justice Manmohan said.
Manohar Lal, counsel for the petitioner, submitted that the court could at least issue guidelines so that private parties which misuse deepfake and AI technology are held accountable. He suggested that websites could be asked to mark content generated by AI as such, and that such websites could be prevented from generating illegal content.
My Take:
When Justice Manmohan spoke of a “borderless world”, he was most probably imagining the following scenario:
-------------------------------------------------------------------------
Suppose a minister has an opulent bungalow surrounded by the slums of poor people living in a zopadpatti. The children of those jobless poor sometimes go to sleep without food. The minister throws a lavish party to celebrate the wedding of his son. The aroma of delicious food fills the hall. The bride and groom are wearing garlands made of sweet-smelling roses. A videographer shoots a video of the guests (which include celebrities from across the country) listening to the minister’s welcome speech and instantly uploads it to YouTube.
-----------------------------------------------------------------------------
If this event were taking place in 2026, expect the following to happen INSTANTLY and AUTOMATICALLY, powered by AI tools and without needing any HUMAN INTERVENTION (a rough code sketch of such a pipeline appears a little further down):
# That VIDEO (posted on YouTube) also gets posted on Instagram as a VIDEO (SIGHT)
# The minister’s speech (SOUND) appears as TEXT on Facebook and on the minister’s own website
# The speech (SOUND) also turns into a PODCAST (SOUND) and gets delivered to subscribers
# IMAGES of those “delicious food items” show up on Food-Sites as recipes (TEXT)
# IMAGES of those food items turn up on the tongues of those poor children as TASTE
# The fragrance (SMELL) of those roses turns into bright red colours (SIGHT) on your favorite screen
# The IMAGE (photo) of that opulent bungalow turns into a bill of the construction materials used (TEXT) on some …
# When you see (SIGHT) the photo of that rose in your WhatsApp message, your fingers (holding the metal body of your mobile) will feel the soft TOUCH of a rose petal
--------------------------------------------------------------------------------------------
Is this MORPHING of SENSES (Sight, Sound, Taste, Touch, Smell), from “any one” to “any other”, just around the corner?
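Here is a rough, minimal sketch of what just two of the transformations in that list (the minister’s speech becoming TEXT, and that TEXT becoming a PODCAST) could look like with tools available today. It assumes the open-source Whisper speech-to-text model and the pyttsx3 text-to-speech library; the file names and the posting step are hypothetical placeholders, not a description of any platform’s actual workflow.
-------------------------------------------------------------------------
# A minimal sketch, not a production pipeline: turn one uploaded video into
# a transcript and a podcast-style audio file, with no human intervention.
# Requires the openai-whisper and pyttsx3 packages; "wedding_speech.mp4"
# is a hypothetical placeholder file name.
import whisper   # open-source speech-to-text model
import pyttsx3   # offline text-to-speech engine


def video_to_text_and_podcast(video_path: str) -> str:
    # 1. SOUND -> TEXT: transcribe the speech in the uploaded video
    model = whisper.load_model("base")
    transcript = model.transcribe(video_path)["text"]

    # 2. TEXT -> SOUND: re-synthesize the transcript as a podcast episode
    engine = pyttsx3.init()
    engine.save_to_file(transcript, "podcast_episode.wav")
    engine.runAndWait()

    # 3. Posting the transcript to Facebook / a website, and the audio file
    #    to a podcast feed, would use each platform's own API (not shown).
    return transcript


if __name__ == "__main__":
    print(video_to_text_and_podcast("wedding_speech.mp4")[:200])
-------------------------------------------------------------------------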
Dear Ashwini Vaishnawji,
While wholeheartedly supporting your efforts to “contain” the emergence of DEEP FAKES, I urge you to:
# Keep in mind the evolution of “SENSORY MERGING” as outlined below
# Consider implementing my suggestion at:
DeepFake News: Here is how to Control ………….. 23 Nov 2023
With regards,
Hemen Parekh
www.hemenparekh.ai / 06 Dec 2023
When I asked BARD this question, here is how it replied:
Your vision of the technological convergence blurring the distinction between original and transformed senses is not only feasible but also within the realm of near-future possibilities.
The rapid advancements in artificial intelligence, particularly in the areas of machine learning, natural language processing, and computer vision, are paving the way for seamless integration of different sensory modalities.
The scenario you described, where a video of a minister's speech is instantly transformed into multiple forms of content (text, podcasts, recipes, and even sensory experiences), is well within the grasp of current technological capabilities.
AI algorithms can already extract text from audio, generate images from text descriptions, and synthesize audio from text. With further development, these capabilities can be combined and integrated to create a truly immersive and multimodal experience.
The timeline for this vision to materialize depends on several factors, including the rate of technological advancement, societal acceptance, and regulatory frameworks. However, given the current trajectory of AI development and the increasing demand for immersive experiences, it is reasonable to expect that this scenario could become a reality within the next decade or two.
The implications of such a technological convergence are profound. It has the potential to revolutionize communication, education, entertainment, and even our understanding of reality. However, it also raises concerns about misinformation, manipulation, and the potential loss of authenticity.
It is crucial to approach these advancements with caution and foresight. We need to develop ethical guidelines and safeguards to ensure that these technologies are used responsibly and for the benefit of society. We also need to educate ourselves and others about the capabilities and limitations of these technologies to become discerning consumers of information and experiences in this increasingly multimodal world.
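To make just one of the capabilities BARD mentions (“generate images from text descriptions”) concrete, here is a minimal sketch using the open-source diffusers library; the model name and the prompt are illustrative assumptions on my part, not part of BARD’s reply.
-------------------------------------------------------------------------
# Minimal sketch of TEXT -> IMAGE generation with Hugging Face's diffusers
# library. The model id and the prompt below are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # needs a GPU; drop float16 and use "cpu" otherwise

image = pipe("a garland of bright red, sweet-smelling roses").images[0]
image.save("rose_garland.png")
-------------------------------------------------------------------------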
And here is what ChatGPT had to say about what I imagined:
Your vision of a seamless convergence of senses through technology isn't far-fetched, given the rapid advancements in AI, augmented reality, and sensory technologies.
The concept you've outlined envisions a world where different sensory experiences can be transmitted or transformed across various mediums almost instantly.
As for the timeline, achieving a complete and flawless convergence of senses might take more time.
While some elements of this vision, like visual and auditory transformations, are already feasible to a certain extent through augmented and virtual reality technologies, the translation of sensory experiences like taste, touch, and smell into digital formats is a bit more complex.
Technologies that mimic or stimulate these senses, like digital scent or taste simulators, are still in their early stages. Progress is being made, but it might take several more years, maybe a decade or more, to achieve a seamless integration of all five senses in a manner as fluid as you've described.
However, the concept you've put forth aligns with ongoing technological research and development. Innovations in haptic feedback, scent-emitting devices, and further advancements in AI-driven sensory simulations could indeed lead us closer to the kind of sensory convergence you've envisioned.
As technology continues to evolve and new breakthroughs emerge, the boundaries between the physical and digital worlds might blur further, making the translation and transmission of sensory experiences a reality.
It's a fascinating prospect that's both exciting and challenging in terms of technological development and ethical considerations.
AI companies can be innovative and responsible: Markham Erickson, Google executive
There have to be rules that ensure their safety, and those will be risk-based in a proportional system.
==============================================
Google to join industry mechanism to discern AI-generated content
Google is in the process of joining a cross-industry collaboration that can identify and watermark online content generated using artificial intelligence (AI) or created synthetically, a top Google executive said on Wednesday.
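The “marking” that both the petitioner’s counsel and this cross-industry effort have in mind can, at its simplest, mean attaching machine-readable provenance metadata to a file. The sketch below is my own simplified illustration, assuming a plain PNG text chunk and a SHA-256 hash; it is not the actual industry standard (such as C2PA), which is far more elaborate and cryptographically signed.
-------------------------------------------------------------------------
# Minimal sketch of labelling an AI-generated image with provenance metadata.
# Illustrative only; real provenance standards (e.g. C2PA) are signed and
# far more detailed than this.
import hashlib
import json
from PIL import Image, PngImagePlugin


def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    img = Image.open(src_path)

    # Embed a machine-readable tag directly in the PNG's text chunks
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_provenance",
                  json.dumps({"ai_generated": True, "generator": generator}))
    img.save(dst_path, pnginfo=info)

    # Record a content hash so later tampering with the file can be detected
    with open(dst_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    print(f"{dst_path} labelled as AI-generated, sha256={digest}")


# Hypothetical usage:
# label_as_ai_generated("rose_garland.png", "rose_labelled.png", "some-image-model")
-------------------------------------------------------------------------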