MYSEC.TV


AI, ML & Automation | Aligning Safety & Cybersecurity – Episode 6

Tech & Sec Weekly

IN THIS VIDEO

In March 2024, the Australian Senate resolved to establish the Select Committee on Adopting Artificial Intelligence (AI) to inquire into and report on the opportunities and impacts arising from the uptake of AI technologies in Australia. The committee intends to report to the Parliament on or before 19 September 2024.

More than 40 Australian AI experts made a joint submission to the inquiry. The submission, from Australians for AI Safety, calls for the creation of an AI Safety Institute: “Australia has yet to position itself to learn from and contribute to growing global efforts. To achieve the economic and social benefits that AI promises, we need to be active in global action to ensure the safety of AI systems that approach or surpass human-level capabilities.” It warns: “Too often, lessons are learned only after something goes wrong. With AI systems that might approach or surpass human-level capabilities, we cannot afford for that to be the case.”

This session gathers experts and specialists in their fields to discuss best-practice alignment of AI applications with safety and cybersecurity requirements. This includes quantum computing, which is set to revolutionise sustainability, cybersecurity, ML, AI and many optimisation problems beyond the reach of classical computers. In addition, we will be briefed on:

- The OWASP Top 10 for Large Language Model Applications, shedding light on the specific vulnerabilities LLMs face, with real-world examples;
- A detailed exploration of five key threats, illustrated using prompts and responses from LLMs: prompt injection, insecure output handling, model denial of service, sensitive information disclosure, and model theft;
- How traditional cybersecurity methodologies can be applied to defend LLMs effectively; and
- How organisations can stay ahead of potential risks and ensure the security of their LLM-based applications.
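Two of the threats named above, prompt injection and insecure output handling, lend themselves to a minimal illustration. The sketch below is not from the session and is not a real defence: the keyword patterns and function names are illustrative assumptions only. It shows the basic posture the speakers describe, screening user input before it reaches the model and treating model output as untrusted before rendering it.

```python
import html
import re

# Naive phrasings that often appear in prompt-injection attempts.
# Illustrative only; production defences need far more than keyword matching.
INJECTION_PATTERNS = [
    r"ignore (all |any )?previous instructions",
    r"disregard (the )?system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag user input that matches known injection phrasings."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def render_llm_output(llm_output: str) -> str:
    """Treat model output as untrusted: HTML-escape it before it
    reaches a browser, mitigating insecure output handling."""
    return html.escape(llm_output)
```

The same principle generalises: anything an LLM emits should cross a trust boundary before being executed, rendered, or passed to downstream tools.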

Panelists
Dr Mahendra Samarawickrama
Director | Centre for Sustainable AI
Dr Mahendra Samarawickrama (GAICD, MBA, SMIEEE, ACS(CP)) is a leader in driving the convergence of Metaverse, AI, and Blockchain to revolutionise the future of customer experience and brand identity. He is the Australian ICT Professional of the Year 2022 and a director of The Centre for Sustainable AI and Meta61. He is an Advisory Council Member of Harvard Business Review (HBR), a Committee Member of the IEEE AI Standards, an expert in AI ethics and governance at the Global AI Ethics Institute (GAIEI), a member of the European AI Alliance, a senior member of IEEE (SMIEEE), an industry mentor at the UNSW Business School, an honorary visiting scholar at the University of Technology Sydney (UTS), and a graduate member of the Australian Institute of Company Directors (GAICD).

Ser Yoong Goh
Head of Compliance | ADVANCE.AI | ISACA Emerging Trends Working Group
Ser Yoong is a seasoned technology professional who has held various roles with multinational corporations, consultancies and SMEs across a range of industries. He is recognised as a subject matter expert in cybersecurity, audit, risk and compliance, holds various certifications, and was named one of the Top 30 CSOs of 2021 by IDG.

Shannon Davis
Principal Security Strategist | Splunk SURGe
Shannon hails from Melbourne, Australia. Originally from Seattle, Washington, he has worked in a number of roles: a video game tester at Nintendo (Yoshi’s Island broke his spirit), a hardware tester at Microsoft (handhelds have come a long way since then), a Windows NT admin for an early security startup and one of the first Internet broadcast companies, along with security roles for companies including Juniper and Cisco. Shannon enjoys getting outdoors for hikes and traveling.

Greg Sadler
CEO | Good Ancestors Policy
Greg Sadler is CEO of Good Ancestors Policy, a charity that develops and advocates for Australian-specific policies aimed at solving this century’s most challenging problems. Greg coordinates Australians for AI Safety and focuses on how Australia can help make frontier AI systems safe. Greg is on the board of a range of charities, including the Alliance to Feed the Earth in Disasters and Effective Altruism Australia.

Lana Tikhomirov
PhD Candidate, Australian Institute for Machine Learning, University of Adelaide
Lana is a PhD candidate in AI safety for human decision-making, focussed on medical AI. She has a background in cognitive science and draws on bioethics and knowledge of algorithms to understand how to approach AI for high-risk human decisions.

Chris Cubbage
Director – MYSECURITY MEDIA | MODERATOR

For more information and the full series visit https://mysecuritymarketplace.com/sec…

OTHER VIDEOS IN THIS SERIES

December 18, 2024
Group-IB has released a fascinating case investigation on deep fake fraud.
December 13, 2024
Learn what ethical hackers can teach us about the next era of artificial intelligence.
December 11, 2024
We speak with Craig Patterson, Senior Vice President of Global Channels at Aryaka Networks, where he leads the company’s channel strategy worldwide, enabling alignment across partner sales and marketing teams and programs in North America; Europe, Africa and the Middle East (EMEA); and Asia-Pacific (APAC).
December 11, 2024
We speak with Paul Tyrer, Global VP of IT Channel Ecosystem, Schneider Electric, about the impact of AI on data centres in the coming years. Generative AI is expected to grow by US$158.6 billion by 2028, according to Canalys.