Context :
UN Security Council to hold first talks on AI risks … ET … 17 July 2023
Extract :
The United Nations Security Council will hold its first formal discussion on artificial intelligence (AI) this week in New York, with Britain to call for an international dialogue about its impact on global peace and security.
Governments around the world are considering how to mitigate the dangers of emerging AI technology, which could reshape the global economy and change the international security landscape.
Britain holds the rotating presidency of the UN Security Council this month and has been seeking a global leadership role in AI regulation. British Foreign Secretary James Cleverly will chair the discussion on Tuesday.
In June, U.N. Secretary-General Antonio Guterres backed a proposal by some artificial intelligence executives for the creation of an international AI watchdog body like the International Atomic Energy Agency (IAEA).
Dear Mr James Cleverly,
Congratulations on your initiative. I hope the discussions in the Security Council will result in a regulatory framework as proposed in my blog below.
To facilitate a meaningful and DECISIVE discussion, I urge you to forward this suggestion to all the members of the Security Council.
With regards,
Hemen Parekh
www.hemenparekh.ai / 17 July 2023
CC :
james.cleverly.mp@parliament.uk
My Suggestion :
United Nations Agency for Regulating Artificial Intelligence (UNARAI) … 30 May 2023
Extract :
AI Regulation Framework – UNARAI
A. Past History of AI
The history of artificial intelligence (AI) can be traced back to the early days of computing, when Alan Turing first proposed the idea of a machine that could think like a human. In the decades that followed, AI research made significant progress, but it was not until the 1990s that AI began to be used in practical applications. The development of deep learning in the early 2010s led to a new wave of AI innovation, and today AI is being used in a wide range of fields, including healthcare, transportation, and finance.
B. Current Status of AI Industry
The AI industry is growing rapidly, and the global market for AI is expected to reach $190 billion by 2025. The United States is the leading market for AI, followed by China and Europe. The AI industry is dominated by a few large companies, including Google, Microsoft, and Amazon. However, there are also a number of smaller companies that are developing innovative AI technologies.
C. Major Players / Companies
The following are some of the major players in the AI industry:
· Google
· Microsoft
· Amazon
· IBM
· Baidu
· Alibaba
· Tencent
· Nvidia
· Intel
· OpenAI
These companies are developing a wide range of AI technologies, including natural language processing, machine learning, and computer vision. They are also using AI to develop new products and services, such as self-driving cars, virtual assistants, and fraud detection systems.
D. Future Outlook
The future of AI is bright. AI is expected to have a major impact on a wide range of industries, including healthcare, transportation, and finance. AI is also expected to create new jobs and opportunities.
E. Industries that will witness major disruption
The following industries are expected to witness major disruption from AI:
· Healthcare
· Transportation
· Finance
· Retail
· Manufacturing
· Education
· Customer service
· Security
AI is expected to automate many tasks in these industries, and it is also expected to create new products and services.
F. Jobs that might disappear
The following jobs are at risk of disappearing due to AI:
· Data entry clerks
· Customer service representatives
· Truck drivers
· Factory workers
· Insurance agents
· Loan officers
· Lawyers
· Doctors
· Teachers
AI is not going to make all of these jobs disappear, but it is likely to automate many of them. This means that people who work in these jobs will need to retrain for new jobs that are not at risk of being automated.
G. Potential for Benefitting Humanity
AI has the potential to benefit humanity in a number of ways, including:
· Improving healthcare
· Reducing traffic congestion
· Preventing fraud
· Personalizing education
· Making customer service more efficient
· Improving security
AI is still in its early stages of development, but it has the potential to make a major positive impact on the world.
H. Potential for Harming Humanity
AI also has the potential to harm humanity in a number of ways, including:
· Creating mass unemployment
· Increasing inequality
· Being used for military purposes
· Being used for surveillance
· Being used to spread misinformation
It is important to be aware of the potential risks of AI, and to develop policies and regulations that will mitigate these risks.
I. Conceptual Framework for UNARAI
UNARAI, or the United Nations Agency for Regulating Artificial Intelligence, is a proposed international organization that would be responsible for regulating AI. UNARAI would be composed of representatives from all member states of the United Nations, and it would have the authority to develop and enforce regulations on the development, use, and sale of AI technologies.
UNARAI would be guided by a set of principles that would be designed to promote the safe and responsible development of AI. These principles would include:
· The principle of human control: AI technologies should be developed and used in a way that respects human autonomy and control.
· The principle of non-discrimination: AI technologies should not be used to discriminate against individuals or groups of people.
· The principle of transparency: AI technologies should be transparent and accountable to the public.
· The principle of safety: AI technologies should be safe and secure, and they should not pose a threat to human safety or security.
UNARAI would have a number of tools at its disposal to regulate AI. These tools would include:
· Regulations: UNARAI would be able to develop and enforce regulations on the development, use, and sale of AI technologies.
· Standards: UNARAI would be able to develop and promote standards for the development and use of AI technologies.
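To make the four proposed principles a little more concrete, here is a minimal, purely illustrative Python sketch of how a UNARAI-style registry might record which principles an AI-system filing has addressed. This is not part of the proposal itself; all names in the sketch (Principle, Declaration, unmet_principles) are hypothetical and are used only for illustration.

```python
# Purely illustrative sketch: one way the four proposed UNARAI principles
# could be recorded as a simple, machine-readable checklist.
# All names below are hypothetical and not part of the proposal.

from dataclasses import dataclass, field
from enum import Enum


class Principle(Enum):
    HUMAN_CONTROL = "Respects human autonomy and control"
    NON_DISCRIMINATION = "Does not discriminate against individuals or groups"
    TRANSPARENCY = "Transparent and accountable to the public"
    SAFETY = "Safe and secure; no threat to human safety or security"


@dataclass
class Declaration:
    """A hypothetical self-declaration filed by an AI-system provider."""
    system_name: str
    principles_addressed: set = field(default_factory=set)


def unmet_principles(declaration: Declaration) -> set:
    """Return the proposed principles the filing does not yet address."""
    return set(Principle) - declaration.principles_addressed


if __name__ == "__main__":
    filing = Declaration(
        system_name="ExampleChatAssistant",  # hypothetical system
        principles_addressed={Principle.TRANSPARENCY, Principle.SAFETY},
    )
    print("Still to address:", [p.name for p in unmet_principles(filing)])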
The table below compares the current status of AI regulation initiatives in the EU, the USA, and India:
Initiative | EU | USA | India
Artificial Intelligence Act | In progress | In progress | In early stages
National Artificial Intelligence Initiative | In progress | In progress | No national initiative
State AI regulations | Some states have passed AI regulations | Some states have passed AI regulations | No state AI regulations
Private sector initiatives | Some companies have developed their own AI ethics guidelines | Some companies have developed their own AI ethics guidelines | No major private sector initiatives
As you can see, all three regions are still in the early stages of developing AI regulation. However, there is a growing recognition of the need for regulation to ensure the ethical and responsible development and use of AI.
Here are some additional details about each initiative:
· The EU's Artificial Intelligence Act is a comprehensive piece of legislation that would regulate the development and use of AI in a variety of sectors, including healthcare, finance, and transportation. The Act would establish a risk-based approach to regulation, with different requirements for high-risk, medium-risk, and low-risk AI systems.
· The US National Artificial Intelligence Initiative is a government-led effort to promote the responsible development and use of AI. It includes a number of measures, such as the creation of a National AI Research and Development Strategic Plan and the establishment of a National AI Advisory Committee.
· India does not have a national AI regulation initiative, but there are a number of state-level initiatives. For example, the state of Maharashtra has passed an AI ethics bill that requires companies to develop and implement AI ethics guidelines.
It is still too early to say what the impact of these initiatives will be. However, they are a sign of the growing recognition of the need for regulation to ensure the ethical and responsible development and use of AI.
Here is a comparative tabulation of the UNARAI framework, prepared by BARD and by ChatGPT:
Parameters | BARD | ChatGPT
A. Mandate | To promote the responsible development and use of AI, and to protect the rights and interests of individuals and society. | To regulate the development and use of AI, and to ensure that AI is used for good and not for harm.
B. Structure | A global governing body with a board of directors, a secretariat, and a technical advisory committee. | A multi-stakeholder organization with representatives from governments, businesses, civil society, and academia.
C. Key principles | Human rights, fairness, accountability, transparency, and safety. | Safety, fairness, transparency, accountability, and non-discrimination.
D. Scope | All AI systems that have the potential to harm individuals or society. | All AI systems that are used in public or commercial settings.
E. Enforcement | The power to investigate and sanction AI systems that violate its regulations. | Enforcement would be handled by national governments.
F. Funding | Contributions from member states. | A combination of government contributions and private donations.
G. Timeline | Expected to be operational within five years. | Expected to be operational within ten years.
H. Public engagement | Engagement with the public to ensure that its regulations are responsive to the needs of society. | Engagement with the public to ensure that its regulations are responsive to the needs of society.
I. Research and development | Support for research and development in AI to ensure that the technology is used for good and not for harm. | Support for research and development in AI to ensure that the technology is used for good and not for harm.
J. Education and training | Support for education and training programs to help people understand AI and its potential impact on society. | Support for education and training programs to help people understand AI and its potential impact on society.
K. International cooperation | Work with other international organizations to ensure that AI is regulated in a consistent and coordinated manner. | Work with other international organizations to ensure that AI is regulated in a consistent and coordinated manner.
Footnotes
· The ChatGPT framework is based on the EU's proposed AI Act, the US initiative for AI regulation, and India's Digital India Bill.
Comments
· The ChatGPT framework is more focused on safety, as it includes specific provisions for preventing AI systems from being used for harm.
· Both frameworks are still in development, and it is likely that they will evolve over time as AI technologies continue to develop.
Sources used for developing UNARAI framework :
( 1 ) World Wide Work Federation
http://www.hemenparekh.in/2015/06/world-wide-work-w3f.html
( 2 ) Parekh’s Law of Chatbots
https://myblogepage.blogspot.com/2023/02/parekhs-law-of-chatbots.html
( 3 ) Fast Forward to Future
https://myblogepage.blogspot.com/2016/10/fast-forward-to-future-3-f.html