
Our AI survey
In October 2024, we at the Centre for Dialogue at Simon Fraser University worked with a public opinion firm to poll more than 1,000 randomly selected B.C. residents about their views on AI. Specifically, we wanted to understand their awareness of AI and its perceived impacts, their attitudes towards it, and their views on what we should be doing about it as a society.

Overall, we found British Columbians to be reasonably knowledgeable about AI. A majority were able to correctly identify everyday technologies that use AI from a standardized list, and 54 per cent reported having personally used an AI system or tool to generate text or media. This suggests that a good portion of the population is engaged enough with the technology to be invited into a conversation about it.

Familiarity did not necessarily breed confidence, however: more than 80 per cent of British Columbians reported feeling “nervous” about the use of AI in society. But that nervousness wasn’t associated with catastrophizing. In fact, 64 per cent said they felt AI was “just another piece of technology among many,” versus only 32 per cent who believed it was going to “fundamentally change society.” Fifty-seven per cent held the view that AI will never truly match what humans can do.

Instead, most respondents’ concerns were practical and grounded. Eighty-six per cent worried about losing a sense of personal agency or control if companies and governments were to use AI to make decisions that affect their lives. Eighty per cent felt that AI would make people feel more disconnected in society, while 70 per cent said that AI would show bias against certain groups of people. The vast majority (90 per cent) were concerned about deepfakes, and 85 per cent about the use of personal information to train AI. Just under 80 per cent expressed concern about companies replacing workers with AI.

When it came to their personal lives, 69 per cent of respondents worried that AI would present challenges to their privacy, and 46 per cent to their ability to find accurate information online.
While many respondents believe AI can help them with daily tasks, many also worry about its impact on their privacy. (Author provided)

Many respondents held positive views about AI’s potential uses in health care; however, many were also concerned by the potential impact on job opportunities and economic inequality. (Author provided)
Low trust in government and institutions
Some scholars have argued that governments and the AI industry find themselves in a “prisoner’s dilemma,” with governments hesitating to introduce regulations for fear of hamstringing the tech sector. But in failing to regulate against the adverse effects of AI on society, they may be costing the industry the support of a cautious and conscientious public, and ultimately its social licence.

Recent reports suggest that uptake of AI technologies in Canadian companies has been excruciatingly slow. Perhaps, as our results hint at, Canadians will hesitate to fully adopt AI unless its risks are managed through regulation. But governments face an issue here: our institutions have a trust problem.

On the one hand, 55 per cent of respondents feel strongly that governments should be responsible for setting rules and limiting the risks of AI, as opposed to leaving it to tech companies to do this on their own (25 per cent) or expecting individuals to develop the literacy to protect themselves (20 per cent). On the other hand, our study shows that trust in government to develop and manage AI carefully, with the public’s interest in mind, is low, at only 30 per cent. Trust is even lower in tech companies (21 per cent) and non-governmental organizations (29 per cent).

Academic institutions do best on the trust question, with 51 per cent of respondents somewhat or strongly trusting them to manage AI responsibly. Just over half is still not exactly a flattering figure, but we might have to take it as a call to action.
While most respondents support regulating AI, many do not trust governments, tech companies or other institutions to develop it carefully. (Author provided)