22nd International Conference on Information Fusion
Ottawa, Canada | July 2-5, 2019



Keynote Presentations Include:

Wednesday, 3 July

A Discussion of the Fusion of Data in an Open Smart City Context and How Do We Govern the Data and the Technology at the Level of the Architecture

Dr. Tracey Lauriault - Carleton University Ottawa

Thursday, 4 July

Probabilistic Programming: Past, Present, and Future

Dr. Avi Pfeffer - Charles River Analytics

Friday, 5 July

Panel: Benefits and Challenges of Using Artificial Intelligence (AI) Technologies Throughout the Phases of the Decision Cycle, the Canadian perspective

Dr. Maria Rey - Space Strategies Consulting Ltd

Stephan King - Canadian Coast Guard

Dr. Kelly Lyons - University of Toronto

Rob Davidson - Information and Communications Technology Council (ICTC)

A Discussion of the Fusion of Data in an Open Smart City Context and How Do We Govern the Data and the Technology at the Level of the Architecture

Dr. Tracey Lauriault’s plenary talk will be on Open Smart Cities: what they are, why openness matters, the difficulties of governing large social and technological spaces, why critical data fusion work is necessary, and why we need experts who think critically.

Dr. Lauriault is an Assistant Professor of Critical Media and Big Data in the School of Journalism and Communication, Carleton University, Ottawa. Her work on open data, big data, and open smart cities is international, interdisciplinary, and multi-sectoral. She is one of the founders of critical data studies and of open data in Canada, and she founded Open Smart Cities with OpenNorth, a data and technology governance approach shaping how Canadian cities roll out their ‘smart’ programs.

She has expertise in data infrastructures, spatial media, and smart cities. She is especially interested in the assemblage of social and technological processes, such as artificial intelligence and machine learning (AI/ML), standards, and technologies such as platforms and the Internet of Things (IoT), that intermediate data and large social and technological systems and infrastructures, and in how these structure, automate, and govern so much of daily life.

She is particularly fascinated by, and applies systems thinking to map out, the processes by which deep technological infrastructures and vast machines operate. Her scholarship is critical and engaged: as a data and technological citizen, she works with the makers, governors, and stakeholders of these data, processes, and infrastructures, not only to understand them better but also to ensure that they do not cause harm and, beyond that, that they are governed in an ethical, accountable, and transparent way that balances economic development, social progress, and environmental responsibility.

Probabilistic Programming: Past, Present, and Future

Probabilistic programming makes building models for prediction, inference, and learning easier and more powerful. The central idea in probabilistic programming is to provide an expressive programming language in which to express probabilistic models, along with inference and learning algorithms that automatically apply to models written in the language. This makes the development of applications involving rich probabilistic models much easier. In this talk, I will explain what probabilistic programming is, present the main ideas behind these languages, and describe some of the approaches to inference. To illustrate the ideas, I will describe our Figaro probabilistic programming language, an expressive and mature language with many algorithms and a wide variety of applications. I will also present my vision for the future of probabilistic programming: a framework for building long-lived AI systems that continually learn and improve over time.
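The central idea in the abstract — write a probabilistic model as an ordinary program, then let a general-purpose inference algorithm condition it on evidence — can be sketched in miniature. The example below is a toy illustration in Python, not Figaro; the model (a rain/sprinkler/wet-grass network) and its probabilities are invented for illustration, and rejection sampling stands in for the more sophisticated inference algorithms real probabilistic programming languages provide.

```python
import random

def model(rng):
    """A tiny generative program: each run samples one possible world."""
    rain = rng.random() < 0.2                         # prior: P(rain) = 0.2
    sprinkler = rng.random() < (0.01 if rain else 0.4)  # sprinkler depends on rain
    grass_wet = rain or sprinkler                     # deterministic consequence
    return rain, grass_wet

def posterior_rain_given_wet(n=100_000, seed=0):
    """Generic inference by rejection sampling: run the program many times,
    keep only runs consistent with the evidence (grass is wet), and estimate
    P(rain | grass_wet) from the surviving runs."""
    rng = random.Random(seed)
    kept = rainy = 0
    for _ in range(n):
        rain, wet = model(rng)
        if wet:              # condition on the observed evidence
            kept += 1
            rainy += rain
    return rainy / kept      # analytically: 0.2 / (0.2 + 0.8 * 0.4) ≈ 0.385
```

The point of the pattern is the separation of concerns the abstract describes: `model` is written in the host language with its full expressive power, while `posterior_rain_given_wet` knows nothing about the model's internals and would work unchanged for any program with the same interface.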

Dr. Avi Pfeffer is Chief Scientist at Charles River Analytics. Dr. Pfeffer is a leading researcher on a variety of computational intelligence techniques including probabilistic reasoning, machine learning, and computational game theory. Dr. Pfeffer has developed numerous innovative probabilistic representation and reasoning frameworks, such as probabilistic programming, which enables the development of probabilistic models using the full power of programming languages, and statistical relational learning, which provides the ability to combine probabilistic and relational reasoning. He is the lead developer of Charles River Analytics’ Figaro probabilistic programming language. As an Associate Professor at Harvard, he developed IBAL, the first general-purpose probabilistic programming language. While at Harvard, he also produced systems for representing, reasoning about, and learning the beliefs, preferences, and decision making strategies of people in strategic situations. Prior to joining Harvard, he invented object-oriented Bayesian networks and probabilistic relational models, which form the foundation of the field of statistical relational learning. Dr. Pfeffer serves as Action Editor of the Journal of Machine Learning Research and served as Associate Editor of Artificial Intelligence Journal and as Program Chair of the Conference on Uncertainty in Artificial Intelligence. He has published many journal and conference articles and is the author of a text on probabilistic programming. Dr. Pfeffer received his Ph.D. in computer science from Stanford University and his B.A. in computer science from the University of California, Berkeley.

Benefits and Challenges of Using Artificial Intelligence (AI) Technologies Throughout the Phases of the Decision Cycle, the Canadian perspective

Panelists from Canadian academia, industry, and government discuss their experience and their perspectives on the benefits and challenges of using AI technologies throughout the phases of the decision cycle.

Maria Rey - Space Strategies Consulting Ltd

Dr. Maria Rey is the Vice President and Chief Science Officer at Space Strategies Consulting Ltd., an Ottawa-based company that provides advice to government and industry on all aspects of space-based satellite sensor and communications systems. She retired after 30 years with the Canadian Department of National Defence as the Director General, Science and Technology for Joint Force Development, in which role she was responsible for advising on and developing the DND research program in the area of C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance), as well as serving as the Director General of the Defence Research and Development Canada Ottawa laboratory. She is currently a member of the DND Defence Advisory Board, which provides advice to DND on a broad range of scientific and technical issues, from the effects of disruptive technologies on defence applications to science policy.

Maria has a strong scientific background in space-based imagery exploitation with emphasis on Synthetic Aperture Radar and is passionate about exploring the Policy, Big Data, Artificial Intelligence and Fusion implications of the current explosion in space-based Earth Observation systems. In particular, she is interested in how to apply leading edge technologies to leverage these new space systems with other information sources to improve remote sensing for monitoring human impact on the environment, resource and land-use management, disaster prevention, community resiliency and safety and security for all.

Stephan King - Canadian Coast Guard

Stephan King spent 34 years in the Royal Canadian Navy, sailing extensively on the east and west coasts of Canada and across the globe. He served as Commanding Officer of HMC Ships EDMONTON and BRANDON, was appointed to NORAD HQ in Colorado, and finished his career in Ottawa in Naval Strategy. During a deployment to the Middle East as part of Combined Task Force 150, a multi-national counter-terrorism task force, the use of space-based, airborne, and other data highlighted the need for timely information and fusion, especially given a 2.5-million-square-mile operating area.

Stephan now manages the Coast Guard’s international Capacity Building initiative which enables improved maritime safety and security in other regions of the world. The Coast Guard has been sending mentors to East and West Africa since 2017.

In his current capacity, Stephan is very interested in the role of AI, machine learning and fusion to produce improved maritime domain awareness. The Royal Canadian Navy and Coast Guard rely heavily on multiple datasets to develop an accurate picture of the maritime domain. The sheer volume of data demands AI, machine learning, and fusion solutions. Accelerating detection, planning and decision-reaction times through fusion and AI can allow Canada to predict or respond to maritime incidents more effectively. Whether the data is derived from space, terrestrial, or other means, the ability to fuse multiple datasets to understand what is or may be occurring in real-time in Canadian or international waters is of major concern.

Kelly Lyons - University of Toronto

Dr. Kelly Lyons is an Associate Professor and Associate Dean, Academic and currently Acting Dean in the Faculty of Information at the University of Toronto. Prior to joining the Faculty of Information, she was the Program Director of the IBM Toronto Lab Centre for Advanced Studies (CAS). Her current research interests include service science, knowledge mobilization, social media, and collaborative work. Currently, she is focusing on ways in which social media can support human-to-human interactions in service systems and data-driven knowledge mobilization. In 2018, she led the development of a Networks of Centres of Excellence (NCE) proposal called the Advanced Data Science Alliance (ADA). Kelly has co-authored several papers, served on program committees for conferences, given many keynote and invited presentations, and co-chaired several workshops. She has been the recipient of an NSERC Strategic Partnership Grant, NSERC Discovery Grants, an NSERC Collaborative Research and Development Grant with SAP, two NSERC Engage Grants (with Sciencescape and Dell), MITACS Accelerate Grants (with CA, IBM, and Cerebri AI), an IBM Smarter Planet Faculty Innovation Grant, and has received funding through the GRAND Networks of Centres of Excellence (NCE). Kelly holds a cross-appointment with the University of Toronto’s Department of Computer Science and is an IBM Faculty Fellow. From 2008 to 2012, she was a Member-at-Large of the ACM Council and a member of the Executive Council of ACM-W. Kelly is very interested in promoting Women in Technology initiatives and has given several presentations to young people and teachers on this topic.

Rob Davidson - Information and Communications Technology Council (ICTC)

Rob is a 25-year veteran of the software industry and has excelled in senior roles ranging from Director of Marketing & Communications and VP of Product Management to Chief Technologist. He is a passionate open data advocate, promoting the use of open data for social good and business creation. In June 2016, Rob founded the Open Data Institute Ottawa Node to help crystallize the open data movement in Ottawa. Rob is co-chair of Canada's Multi-Stakeholder Forum for the Open Government Partnership and is also an organizer of the Data for Good Ottawa meetup group. Rob has spoken at national and international events on open data and emerging technologies. Rob is the Manager, Data Analytics and Research at the Information and Communications Technology Council (ICTC). Rob has a BSc in Data Analysis from the University of New Brunswick and an MBA from the University of Western Ontario.

Rob has a wealth of data and information experience, including financial, operational, security, health, and geo-spatial data. Rob believes that the fusion of data from multiple sources and multiple contexts is necessary to produce actionable information. He has also performed NLP analysis on unstructured social media data, is well-versed in machine learning, and has experience with a number of data analytics and data visualization tools. Rob has a deep interest in the responsible use of data and AI, including concepts like privacy by design and techniques for assessing AI transparency and bias.