CAIS Lecture (AI + pizza) - Professor Lawrence Paulson FRS
6.30pm – 9pm, 8 March 2023
Cambridge
The Bradfield Centre
With the November 2022 release of OpenAI’s ChatGPT, the rate of advance in AI has seemingly gone from meteoric to meteoric on steroids. But many commentators have expressed doubts that Large Language Models such as ChatGPT are truly intelligent.
What do advanced AI systems need in order to be able to think rationally? Not just give the impression that they are doing so (to a lay and impressionable observer), but for real…?
Consider the following: If all cats are crazy, and Jonathan is a cat, then Jonathan is crazy.
This is an example of deduction, one of the three primary modes of reasoning (along with induction and abduction) underlying critical thought and general problem solving. Any advanced AI system currently on the drawing board with ambitions of becoming a general problem solver, or AGI (Artificial General Intelligence), will unavoidably need to perform reliable deduction somehow, either as an explicit high-level function or as an emergent property of its low-level behaviour. Every AI researcher with an interest in AGI – or AI company with ambitions to become an AGI company – needs to know about deduction.
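To make the cat syllogism concrete: here is that same deduction, machine-checked, written in Lean 4 (an interactive prover in the same family as Isabelle). This is an illustration added for this page, not material from the talk; `cat`, `crazy`, and `jonathan` are names chosen for the example.

```lean
-- The cat syllogism as a machine-checked deduction in Lean 4.
-- `cat` and `crazy` are predicates over some type `α`,
-- and `jonathan` is an element of `α`.
example (α : Type) (cat crazy : α → Prop) (jonathan : α)
    (h₁ : ∀ x, cat x → crazy x)   -- all cats are crazy
    (h₂ : cat jonathan)           -- Jonathan is a cat
    : crazy jonathan :=           -- therefore, Jonathan is crazy
  h₁ jonathan h₂                  -- instantiate h₁ at jonathan, apply to h₂
```

The one-line proof term is exactly the deduction step: specialise the universal statement to Jonathan, then apply it to the fact that Jonathan is a cat.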
The art of getting a mindless automaton (a.k.a. computer) to perform deduction is called Automated Theorem Proving (ATP), and Professor Paulson has been a recognised world leader in ATP for over 40 years. Who better than Professor Paulson to deliver the inaugural CAIS talk at the Bradfield Centre, Cambridge, on Wednesday 8 March? No one, that’s who!
About the talk
Title: “Automated Theorem Proving: a Technology Roadmap”
Abstract: The technology of automated deduction has a long pedigree. For ordinary first-order logic, the basic techniques had all been invented by 1965: DPLL (for large Boolean problems) and the tableau and resolution calculi (for quantifiers). The relationship between automated deduction and AI has been complex: does intelligence emerge from deduction, or is it the other way around? Interactive theorem proving further complicates the picture, with a human user working in a formal calculus much stronger than first-order logic on huge, open-ended verification problems and needing maximum automation. Isabelle is an example of a sophisticated interactive prover that also relies heavily on automatic technologies through its Nitpick and Sledgehammer subsystems. The talk will give an architectural overview of Isabelle and its associated tools. The speaker will also speculate on how future developments, especially machine learning, could assist (not replace) the user.
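The DPLL procedure the abstract mentions is compact enough to sketch. The following is a minimal illustration added for this page (not Isabelle's or any production solver's code): clauses are sets of integer literals, with a negative integer denoting a negated variable, and the solver alternates unit propagation with case splitting.

```python
# A minimal sketch of the DPLL procedure for propositional satisfiability.
# A formula is a list of clauses; each clause is a set of integer
# literals, where -v means "variable v is false".

def simplify(clauses, lit):
    """Assume `lit` is true: drop satisfied clauses, strip the negation."""
    return [c - {-lit} for c in clauses if lit not in c]

def dpll(clauses, assignment=None):
    """Return a satisfying assignment {var: bool}, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    clauses = [set(c) for c in clauses]
    # Unit propagation: a one-literal clause forces that literal to be true.
    changed = True
    while changed:
        changed = False
        for c in clauses:
            if len(c) == 1:
                lit = next(iter(c))
                clauses = simplify(clauses, lit)
                assignment[abs(lit)] = lit > 0
                changed = True
                break
    if not clauses:
        return assignment          # every clause satisfied
    if any(len(c) == 0 for c in clauses):
        return None                # empty clause: contradiction
    # Case split on a literal from the first remaining clause.
    lit = next(iter(clauses[0]))
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice),
                      {**assignment, abs(choice): choice > 0})
        if result is not None:
            return result
    return None
```

For example, `dpll([{1}, {-1, 2}])` propagates the unit clause `{1}` and then `{2}`, yielding an assignment making both variables true, while `dpll([{1}, {-1}])` returns `None`. Modern SAT solvers add conflict-driven clause learning and clever heuristics on top of essentially this skeleton.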
Sponsors
CAIS would not be possible without the support of its sponsors: one-man AGI non-profit BigMother.AI, AI-driven financial forecasting startup Dimension Technologies, and (for this event) membership organisation and technology community CW (Cambridge Wireless).
Darendra is part of the Systems Engineering & Test department, where he applies his expertise in Digital Security Testing to various technologies, including Machine Learning and AI. He has a background in Software Test Automation of wireless communications protocols, such as satellite communications and LTE. He enjoys defining and developing testing strategies for cybersecurity challenges and ensuring the quality and reliability of innovative solutions.
Maria is a globally recognised, award-winning AI ethics public policy expert, a member of various Advisory Boards - UK All-Party Parliamentary Group on AI (APPG AI), NATO EDT & SKEMA AI Institute, and Chair of techUK Data and AI leadership committee. In her current role as Head of AI Public Policy and Ethics, she aligns PwC's AI strategies with ethical considerations and regulatory trends, fostering collaboration with external stakeholders and leading PwC's responses to public policy consultations and initiatives. Maria's commitment to responsible AI has made her a recognised thought leader and influencer in the field. Maria is a passionate advocate for children's rights in the age of AI, serving as a member of the Advisory Board for UNICEF #AI4Children and World Economic Forum Generation AI programmes. She also serves as an Intellectual Forum Senior Research Associate at the University of Cambridge researching human-centric AI and the intersection between tech policy and ethics.
Phil Claridge is a ‘virtual CTO’ for hire within Mandrel Systems covering end-to-end systems. He is currently having fun and helping others with large-scale AI systems integration, country-wide large-scale big-data processing, hands-on IoT technology (from sensor hardware design, through LoRa integration, to back-end systems), and advanced city information modelling. He supports companies with M&A ‘exit readiness’ and due diligence, and serves on advisory boards. Past roles include CTO, Chief Architect, Labs Director, and Technical Evangelist for Geneva/Convergys (telco), Arieso/Viavi (geolocation), and Madge (networking). Phil’s early career was in electronics, and he still finds it irresistible to swap from PowerPoint to a soldering iron and a compiler to produce proof-of-concepts when required.
Parminder is a patent attorney based in Appleyard Lees’ Cambridge office, and helps companies to protect their technological innovations. She has built a substantial reputation working with high-growth start-ups, spin-outs and SMEs in Cambridge, and has in-house experience. She specialises in writing and prosecuting patent applications for computer-implemented inventions. Her work includes patenting AI-based technologies, including new machine learning frameworks and applications of machine learning in image classification, human-computer interactions and text-to-speech. Parminder also writes her own AI blog on LinkedIn, and is a member of the Chartered Institute of Patent Attorneys’ Computer Technology Committee.
Simon leads a team that develops AI and ML solutions for large financial institutions. Before joining GFT he was the Principal Investigator for BT’s AI programme. Before that he was the Head of Practice for Big Data and Customer Experience at BT and BT’s lead for collaborations with MIT, and the first industry fellow at the Alan Turing Institute. Simon is interested in the practical application of AI technology and the practice and process of AI and ML projects. His book “Managing Machine Learning Projects” was published by Manning in 2023.
Peter is Founder & CEO of Vision Formers, the specialist consultancy that supports and mentors leaders of visionary technology businesses in getting product to market and turning ideas into reality. Vision Formers works with start-ups and scale-ups, providing significant expertise in accelerating business growth through a focus on developing a robust product strategy, growing and coaching product and development teams, and providing operational excellence. Peter has a long track record of conceiving, developing and marketing successful technology-based solutions, deployed at scale, globally. Innovative products Peter has brought to market in digital, cloud, AI, consumer electronics and telecommunications have been used daily by countless millions of people, badged by the world’s leading digital and technology brands. Peter also works with Digital Catapult as Programme Manager for UKTIN, working with partners and stakeholders to deliver UKTIN’s mission to transform the UK telecoms innovation ecosystem, capitalising on the country’s strengths in technology, academia, and entrepreneurialism, while positioning it for growth as new opportunities emerge in the industry. Peter is a board member of CW (Cambridge Wireless), a Fellow of the IET, a Chartered Engineer, and a member of the Association of Business Mentors.