Global AI Summit London 2023 – Global Governance of Artificial Intelligence

Context:

  • This article is based on the news article “C Raja Mohan writes: London Summit and how to make AI responsible”, which was published in The Indian Express. The two-day Global AI Summit London on the safe use of Artificial Intelligence (AI) is being hosted by the UK to discuss the risks and opportunities posed by artificial intelligence.

 Global AI Summit London – Key Highlights

  • The Global AI Summit 2023 has been convened by British Prime Minister Rishi Sunak at Bletchley Park outside London. Early research on AI was pioneered at Bletchley Park by Alan Turing, who is widely considered the “father of AI”.
    • Turing and his team of mathematicians helped crack “Enigma”, a German cipher, during World War II, giving the Allies a huge advantage in their military operations.
  • Global AI Summit London will discuss the establishment of:
    • An international register of frontier AI models that will allow governments to assess the risks involved with AI.
    • An AI Safety Research Institute that will examine, evaluate, and test new AI models to understand what each model is capable of and what risks it poses.
  • It marks an important first step towards the global governance of AI, a technology that poses unprecedented risks to human rights.
What is Artificial Intelligence?

It is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. 
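
To make the definition concrete, here is a minimal, purely illustrative Python sketch (all names, data, and thresholds are hypothetical and not drawn from the article): a machine-based system that, for a human-defined objective (flagging risky transactions), derives a simple rule from past data and outputs a recommendation that influences a real environment.

```python
# A minimal, illustrative sketch of the definition above: a machine-based
# system that, for a human-defined objective, makes a recommendation.
# All data and the 0.8 heuristic are hypothetical, for illustration only.
from statistics import mean

# Hypothetical past transactions: (amount, 1 = fraudulent, 0 = legitimate)
history = [(120, 0), (95, 0), (10_500, 1), (80, 0), (9_900, 1)]

# "Training": derive a simple decision threshold from past fraudulent amounts
fraud_amounts = [amount for amount, label in history if label == 1]
threshold = mean(fraud_amounts) * 0.8  # assumed heuristic

def recommend(amount: float) -> str:
    """Return a recommendation for a new transaction (the 'decision
    influencing a real or virtual environment' in the definition)."""
    return "review manually" if amount >= threshold else "approve"

print(recommend(150))     # -> approve
print(recommend(11_000))  # -> review manually
```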

Need for Global AI Governance

  • Lack of Laws to Regulate Data Scraping: Web scrapers for Generative AI gather the data used to train the models, and this data collection needs to be regulated.
    • Currently, there is no uniform global law to regulate this data.
  • Dominance of the AI Big Three: China, the European Union (EU), and the US are shaping the new global order of AI governance, AI development, and the data-driven digital economy in support of their own interests.
    • The AI Big Three aim to control and own the global critical infrastructure, software, and hardware value chains that are prerequisites for national AI deployments.
    • At an industry level, the AI Big Three host the headquarters of the top 200 most influential digital technology companies worldwide, and they shape the current industry-led global AI governance.
  • Widening social disparities: Today, AI development is concentrated in large digital corporations.
    • The concentration of AI expertise in a few companies and nations could exacerbate global inequalities and widen digital divides.
  • Cyber crime: The potential risks of AI include online harassment, hate and abuse, and threats to children’s safety and privacy.
    • In the realm of generative AI, there is the danger of foreign information manipulation, which involves spreading disinformation, undermining democratic values, suppressing freedom of expression, and threatening the enjoyment of human rights.
    • HYAS Labs created a polymorphic malware called BlackMamba using Large Language Models, as a proof of concept, which could collect sensitive information such as usernames, passwords, and credit card numbers.
  • Curbs on individual rights: The risks posed by AI to freedom of expression include, among others, excessive content blocking and restriction, and the opaque dissemination of information.
    • For example, DeepMind, Google’s AI unit, is alleged to have violated UK data protection laws and patient privacy rules during the development and testing of an app for the NHS (National Health Service).
  • Exclusionary AI governance frameworks: Current transnational AI governance frameworks do not adequately consider the perspectives of the Global South.
    • Without active participation in the multidimensional global AI governance discourse, countries in the Global South will likely find it challenging to limit the harm caused by AI-based disruption.
    • There is also a lack of consideration for the fact that Western knowledge, values, and ideas might not function as effectively in other regions.
What is meant by Data scraping?

  • Data scraping refers to a technique in which a computer program extracts data from the output generated by another program.
  • Data scraping is most commonly seen in web scraping, the process of using an application to extract valuable information from a website (a minimal sketch follows below).
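
As an illustration of web scraping in practice, below is a minimal Python sketch. It assumes the third-party requests and beautifulsoup4 packages are installed; the URL and the tags extracted are placeholders, and any real scraper must also respect a site’s robots.txt and terms of service.

```python
# A minimal web-scraping sketch (illustrative only): fetch one page and pull
# out its headings and paragraph text. Assumes `requests` and `beautifulsoup4`.
import requests
from bs4 import BeautifulSoup

url = "https://example.com"  # hypothetical target page
html = requests.get(url, timeout=10).text

soup = BeautifulSoup(html, "html.parser")

# Extract the "valuable information" -- here, every top-level heading and paragraph
headings = [h.get_text(strip=True) for h in soup.find_all(["h1", "h2"])]
paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]

print(headings)
print(paragraphs[:3])
```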

What are Web crawlers or data scrapers?

  • The foundational model, a deep learning model pre-trained on data scraped from the internet, is the core of any Generative AI program.
  • These foundational models constantly need fresh data inputs.
  • To ensure that their models keep working properly, Generative AI companies build web crawlers or data scrapers: computer programs that crawl through websites and extract their data (a toy crawler is sketched below).
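
The toy sketch below shows, under the same assumptions as the earlier example (requests and beautifulsoup4 installed, a placeholder seed URL), how such a crawler might work: start from a seed page, follow same-site links breadth-first, and collect page text that a hypothetical training pipeline could consume.

```python
# A toy breadth-first crawler (illustrative only, not any company's actual
# scraper): collects raw page text from one site as a hypothetical corpus.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(seed: str, max_pages: int = 10) -> dict[str, str]:
    """Return {url: extracted_text} for up to max_pages pages on the seed's host."""
    host = urlparse(seed).netloc
    queue, seen, corpus = deque([seed]), {seed}, {}

    while queue and len(corpus) < max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # skip unreachable pages

        soup = BeautifulSoup(html, "html.parser")
        corpus[url] = soup.get_text(separator=" ", strip=True)

        # Queue links that stay on the same host and have not been visited yet
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if urlparse(link).netloc == host and link not in seen:
                seen.add(link)
                queue.append(link)
    return corpus

if __name__ == "__main__":
    pages = crawl("https://example.com", max_pages=5)
    print(f"Collected {len(pages)} pages of raw text for a hypothetical training corpus.")
```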



What has been India’s progress towards regulating AI?

  • NITI Aayog: It issued the National Strategy for Artificial Intelligence in 2018, which had a chapter dedicated to responsible AI.
    • In 2021, it issued a paper, ‘Principles for Responsible AI’.
  • During the G20 meeting, India emphasised a global framework for the expansion of “ethical” AI.
    • This implies the establishment of a regulatory body to oversee the responsible use of AI, akin to international bodies for nuclear non-proliferation.
  • In the recently concluded G20 meeting, it suggested international collaboration to come out with a framework for responsible, human-centric AI.
  • Sector-specific frameworks: These were issued due to the absence of an overarching regulatory framework for the use of AI systems in India.
    • In June 2023, the Indian Council of Medical Research issued ethical guidelines for AI in biomedical research and healthcare.
    • In January 2019, SEBI issued a circular on creating an inventory of AI systems in the capital market to guide future policies.
    • Under the National Education Policy 2020, AI awareness has been recommended for inclusion in school courses.
    • The Telecom Regulatory Authority of India (TRAI) has recommended setting up a domestic statutory authority to regulate AI in India through the lens of a risk-based framework.
Other Global Governance Initiatives to Regulate AI

  • US: ‘Blueprint for an AI Bill of Rights’
  • EU: European Board for Artificial Intelligence
  • OECD:  Principles on Artificial Intelligence
  • European Union (EU): Negotiated comprehensive AI legislation
    • It gave a recommendation on the Ethics of Artificial Intelligence and the Ethics Guidelines for Trustworthy AI, presented by the EU’s High-Level Expert Group on AI.
  • G7 Hiroshima AI Process: It is an effort to determine a way forward for regulating artificial intelligence (AI).
    • Voluntary AI code of conduct: The G7 published guiding principles and an 11-point code of conduct to “promote safe, secure, and trustworthy AI worldwide”.
  • China: Recently launched the Global AI Governance Initiative

Role of G20 in AI Governance

  • Several initiatives and working groups have been established where discussions typically focus on three main areas: 
    • ethical considerations
    • economic implications
    • regulatory frameworks 
  • G20 AI Principles:  They provide a framework for countries and organisations to develop and deploy AI in a way that is beneficial and addresses concerns related to ethics, privacy, and security.
    • However, there is limited consideration of the distributional aspects and existing multidimensional power dynamics that shape global AI governance.

Way Forward

  • Uniform Global Regulation: All countries need to build a set of harmonised regulations that govern all Generative AI models and their data crawlers/scrapers.
    • Enforcing global governance through a fragmented approach, regulating AI in some nations while leaving it unregulated elsewhere, holds limited effectiveness.
  • Establishment of a Global AI Knowledge Hub: There should be a centralised platform for sharing best practices, research findings, and policy recommendations on AI governance.
    • It would address ethical issues and involve experts and citizens in governance mechanisms, thereby benefiting countries of both the Global North and the Global South.
  • Support for International Technical Standards: Technical standards function as a baseline to gauge a product’s features and performance. 
    • These technical standards enable a common platform for risk assessments and audits, allowing countries with varying regulations to mutually assess and evaluate AI systems or services.
  • Creation of a coordinating committee for the governance of artificial intelligence and data (CCGAID): It will institutionalise linkages between relevant actors within the G20.
    • A decolonial-informed approach (DIA) to responsible AI governance can help address power imbalances.
    • To support a DIA, the CCGAID must establish a dedicated Global South Working Group (GSWG) that includes multistakeholder representatives from the Global South.
    • This working group would ensure the inclusion of diverse perspectives in shaping responsible AI governance frameworks.
  • Blueprint for effective AI governance: 
    • Human-centred: Firmly rooted in the principles of human rights, ethical values, and the rule of law
    • Comprehensive: Leaves no room for gaps or grey interpretations
    • Agile: Provides the necessary flexibility for policymakers to adjust and make necessary corrections as AI continues to develop
    • Anticipatory: Focuses on possible risks before they materialise
    • Inclusive: Welcomes the involvement of all stakeholders.

Conclusion:

The Global AI Summit London 2023 highlights the imperative for unified global governance in regulating AI, addressing ethical concerns, and mitigating potential risks. It emphasises the need for inclusive collaboration and a decolonial-informed approach to ensure responsible and equitable AI governance.

 

Prelims Question (2017)

In India, it is legally mandatory for which of the following to report on cyber security incidents?

1. Service providers

2. Data centres

3. Body corporate

Select the correct answer using the code given below:

(a) 1 only

(b) 1 and 2 only

(c) 3 only

(d) 1, 2 and 3

Ans: (d)

 

Mains Question: Examine the government initiatives aimed at creating jobs in India and suggest some measures to improve the job creation scenario in the context of emerging trends like artificial intelligence and digital transformation. (250 Words, 15 Marks)

 
