Generative AI is redefining how we create, analyze, and interact with content. As members of the McMaster University community, we must approach these innovations with an informed, critical perspective. In some areas, members of our community are already actively integrating generative AI into research, teaching, and work.
While McMaster University provides resources to understand and navigate the complexities of generative AI, the university does not endorse the use of any specific generative AI tool. Many generative AI tools are available, and each comes with its own capabilities, considerations, risks, and opportunities.
McMaster’s Privacy Office and Ontario’s Information and Privacy Commissioner are valuable resources for further understanding and guidance on privacy and data security matters. You can find out more by reviewing this Privacy Impact Assessment Guide from the Information and Privacy Commissioner of Ontario or by contacting the Privacy Office at McMaster.
As of December 2023, the Office of the Privacy Commissioner of Canada has also released “Principles for Responsible, Trustworthy and Privacy-Protective Generative AI Tools.”
Privacy Considerations
While specific tools will have unique privacy and security considerations, there are some broad risks to be aware of before you use any of them. The sections below outline some of the privacy risks and considerations associated with using generative AI tools.
Generative AI models create outputs based on their training data. If they access sensitive or personal data during their training phase, there is a potential risk of unintentional leaks or exposures. For instance, a model trained on medical records might inadvertently generate outputs that resemble real patient data. Be aware of what data you submit to a generative AI tool in a prompt, and ensure you are not unintentionally sharing sensitive or personal information.
Even if direct personal data isn’t fed into the AI, there’s a risk that the model can infer or deduce personal information from the patterns it has learned. This inferred data can sometimes be reverse-engineered to reveal details about individuals or the datasets the model was trained on.
Generative AI can produce incredibly realistic content. This ability, when misused, can lead to the creation of ‘deepfakes’ — highly convincing but entirely fabricated videos, images, or audio recordings. Such manipulated content can have serious implications, from spreading misinformation to impersonating individuals.
Like all digital tools, generative AI systems store, process, and sometimes transmit data. If not adequately secured, these channels can become vulnerable to breaches, unauthorized access, or cyberattacks.
Many generative AI tools function as ‘black boxes’, meaning their inner workings and decision-making processes aren’t transparent. This lack of clarity can make it challenging to pinpoint data handling practices, privacy measures, or potential biases embedded in the tool.
Some generative AI tools might integrate with third-party platforms or services, raising concerns about where the data flows, who has access, and the privacy protocols of these external entities.
Better Protecting Privacy
Depending on your audience and your intended use of a generative AI tool, a more detailed privacy review may be needed; this review is called a “privacy impact assessment” or PIA. Not all tools, and not all uses of a tool, require a PIA. Complete this short form to determine whether the tool you want to use, and how you want to use it, requires a PIA.
Once you’ve completed the form, if you find that a PIA is not required for the tool or its use, consider the following guidelines from the Office of the Privacy Commissioner of Canada, which raise awareness of privacy considerations and offer direction on keeping personal information safer.