Global lawmakers, companies, and experts should join forces to outline and implement tech policy amid a rapidly changing landscape, several key speakers urged at a top event last week.

In a panel discussion on artificial intelligence (AI), extended reality (XR), and cybersecurity, several experts explored some of the most pressing issues in tech safety at the Gatherverse Safety Summit.

Christopher Lafayette, Founder of Gatherverse, hosted 30 speakers at the single-day event, drawn from a wide range of roles across the tech industry.

The event comes just weeks after he launched the Hyper Policy initiative to boost collaboration on best practices, ethics, regulations, and standards for emerging technologies.

Where to Begin: AI Regulation at Cross-Purposes?

Kaylee Brown, Innovation Development Manager, Pearson

When Lafayette asked which steps the global community should take to build AI safety standards in education, Kaylee Brown, Innovation Development Manager at Pearson, stated that her field was “one of the most highly regulated spaces” due to youth and data safety protocols.

She explained that, given the complexities of AI, education was also the first step in outlining regulatory standards for the sector.

Brown told audiences,

“I don’t think we or lawmakers really understand enough about AI. [Regarding] AI experts, they have released papers, but the people that have created the LLMs don’t really know how they work themselves.”

She continued that it was important to understand the key threats posed by artificial intelligence in order to address them in law and regulation.

READ MORE: What to Expect from the Council of Europe’s AI Convention

Citing California’s Senate Bill 1047 (SB 1047), she stated that lawmakers did not understand the key points of the legislation and would have to “go back to the drawing board, talk to more experts, and figure out if the bill was right.”

The bill passed the California State Senate 32-1 but was later vetoed by Governor Gavin Newsom, who cited concerns that its regulations could drive away tech companies and block innovation.

Brown said to the panel: “We just don’t know enough about the AI systems to really enact this and feel good about it.”

According to Brown, the bill drew input from numerous AI leaders and companies; however, consensus split on whether to support or reject it.

Explaining further, Lafayette said,

“One of the biggest downfalls of this safety bill [leading to its rejection] was that you would have to regulate open source [solutions] in such a fashion to where this doesn’t allow [developers], even in an academic sense, to be able to grow and scale these models in a healthy way.”

Christopher Lafayette, Founder, GatherVerse and Hyper Policy

He noted that developers must “respect the tenets in the process and methods for developing technology.” AI development could not be rushed into immediate effect, with systems ready to “deploy in a week from now.”

Rather, tech developers needed to nurture and grow data sets among data scientists and engineers, scaling their findings through pre-training and reinforcement learning, Lafayette continued.

Mechanistic or Deterministic? Unpacking the LLM

When asked about boosting awareness of mechanistic interpretability — an approach that reverse-engineers neural networks to explain how they produce their outputs — Karl Lillrud, AI Strategist, The Knowledge Formula, urged greater global collaboration.

However, he explained that, to date, companies held equal footing and were still advancing strategies on how to outline tech policies.

Karl Lillrud, AI Strategist, The Knowledge Formula

He noted that, with mechanistic interpretability, AI builds assumptions using reinforcement learning, allowing models to score billions of candidate responses to a single question and choose the best ones using a point-scoring system.
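
To make that description concrete, the sketch below illustrates, in deliberately simplified Python, the general idea of scoring several candidate answers and returning the highest-scoring one. It is a toy illustration under stated assumptions, not any production system: the toy_reward function stands in for a learned reward model, and the candidate answers are hard-coded rather than generated by an LLM.

```python
# Toy illustration of "score many candidate responses and pick the best".
# Assumption: toy_reward() is a crude stand-in for a learned reward model,
# and the candidates are hard-coded instead of being generated by an LLM.

def toy_reward(prompt: str, response: str) -> float:
    """Crude scoring heuristic: word overlap with the prompt plus a small length bonus."""
    prompt_words = set(prompt.lower().split())
    response_words = set(response.lower().split())
    overlap = len(prompt_words & response_words)
    return overlap + 0.01 * len(response)

def best_response(prompt: str, candidates: list[str]) -> str:
    """Score every candidate and return the one with the highest reward."""
    return max(candidates, key=lambda c: toy_reward(prompt, c))

if __name__ == "__main__":
    prompt = "How do large language models choose an answer?"
    candidates = [
        "They pick words at random.",
        "They score many candidate answers and return the highest-scoring one.",
        "Large language models rank candidate answers with a reward signal and choose the best.",
    ]
    print(best_response(prompt, candidates))
```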

Lillrud also explained that humans using AI programmes may assume that LLMs fully comprehend data, but that, in reality, the models rely heavily on machine learning and internal structural components to function.

Furthermore, he said, LLMs were configured to reply with positive responses, which can cause hallucinations. According to Google, AI hallucinations are “incorrect or misleading results that AI models generate.”

READ MORE: Thoughts on the Fed, AI, and the Jobs Market

Hallucination triggers include “insufficient training data, incorrect model assumptions, or biases in data used to train AI models,” it notes.

Lafayette concluded,

“Data that these models are built on comes from humans, and if machines lie, know that humans lie first, and it’s only picking up the behavior because of how many people have [done the same]. That’s why it’s relaying that information. But I think if we look deeper, and this goes back into the neural network itself, the capability to know that it can lie and get away with it, due to its training, is a really big deal.”

The Gatherverse Summit on AI, XR, and Cybersecurity took place on 2 October and united global thought leaders, experts, and policymakers to discuss key challenges linked to emerging technologies.

The event is a platform for exploring ethics, innovation, and discourse on the future of technological applications using human-centred approaches to tech innovation and regulation. Speakers included Wan Wei Soh, Founder, AI Visionary Society; Majiuzu Daniel Moses, Founder, Africa Tech for Development Initiative (ARICA4DEV); Tess McKinney, CEO, XRenegades; Luis Bravo Martins, Vice-President, XRSI; and many more.

Like this article? Be sure to like, share, and subscribe for all the latest updates from DxM!

