MYSEC.TV


AI & Law

Tech & Sec Weekly

IN THIS VIDEO

Mr Yeong Zee Kin holds a Master of Laws from Queen Mary University of London and completed his undergraduate law degree at the National University of Singapore. His experience as a Technology, Media and Telecommunications lawyer spans both the private and public sectors. He has spoken and published in areas relating to electronic evidence and intellectual property, as well as legal issues relating to Blockchain and AI deployment.

Zee Kin is an internationally recognized expert on AI ethics. He spearheaded the development of Singapore’s Model AI Governance Framework, which won the UN ITU WSIS Prize in 2019. He is currently a member of the OECD Network of Experts on AI (ONE AI). In 2019, he was a member of the AI Group of Experts at the OECD (AIGO), which developed the OECD Principles on AI; these principles were endorsed by the G20 in 2019. He was also an observer participant at the European Commission’s High-Level Expert Group on AI, which fulfilled its mandate in June 2020.

Zee Kin is also a well-regarded expert on data privacy issues. He has contributed to publications on legal issues relating to data privacy and has spoken at many well-recognised international and domestic platforms on this topic.

In this interview, Zee Kin shares his insights on the legal challenges in the era of advanced AI.

Zee Kin highlighted that with the latest AI innovations, the underlying responsibility and legal issues remain largely the same, but the new tools and technology introduce different challenges.

For instance, he noted that concerns around content, child protection, intermediary behavior, data security, data protection, and cybercrime remain, while challenges such as the detection of fake content have intensified due to increased tool accessibility and the scalability of threats.

Referring to the “Getty vs. Stability AI” case, he observed that the interesting question is the use of copyrighted data to train AI models. The question itself is not new; the key is to establish a proper legal basis for using such data, and data lineage and the provenance of data have always been important in legal contexts.

He also noted that these concerns have surfaced in the recent governmental responses around the world to the latest AI innovations.

Zee Kin also highlighted the challenge of defining terms such as “fairness,” “transparency,” and “repeatability,” whose meanings vary by context: expectations and priorities for AI differ based on its use, such as safety and predictability in medicine, and bias and fairness in applications involving personal data.

Repeatability poses an additional challenge in Generative AI because every iteration of an image or summary will vary, owing to Generative AI’s statistical, predictive nature.
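
As a rough illustration of this point, the sketch below (plain Python with a made-up word distribution, not any real model or API) shows how sampling-based decoding can produce a different completion on each run over the same prompt, which is why exact repeatability is hard to guarantee.

```python
# Toy illustration: generative models sample from a probability distribution over
# possible next words, so repeated runs on the same prompt can legitimately differ.
import random

# Hypothetical next-word distribution for the prompt "The court found the claim ..."
NEXT_WORD_PROBS = {
    "valid": 0.35,
    "unfounded": 0.30,
    "time-barred": 0.20,
    "novel": 0.15,
}

def sample_next_word(probs, temperature=1.0):
    """Sample one word; temperature reshapes the distribution before sampling."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "The court found the claim"
    for run in range(3):
        word = sample_next_word(NEXT_WORD_PROBS, temperature=0.8)
        print(f"Run {run + 1}: {prompt} {word}")
    # The three completions will often differ. Only by removing the randomness
    # (e.g. always picking the single most probable word) does the output become
    # repeatable, at the cost of the variety generative systems are used for.
```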

Zee Kin also shared his views on AI’s impact on job security, noting that there will be emerging opportunities for lawyers to use AI tools for efficiency and error reduction.

Recorded at TechLaw Fest 2023, 21st Sept 2023, 3.30pm, Marina Bay Sands, Singapore.
#mysecuritytv #cybersecurity #ai #law #ailawyer
