Understanding how generative AI works, and how it can support work within organizations, represents an ongoing area of exploration and innovation. The opportunities of generative AI are exciting: creating workplace efficiencies that allow for different kinds of work, adding capabilities for individuals and teams, and offering personalized uses.
Alongside these opportunities, these systems come with limitations, including the potential for ethical and legal challenges. Some of these challenges speak to the specifics of the post-secondary context – like data privacy and confidentiality – while others intersect with communities, the environment, and humanity more broadly.
Depending on the task and prompt, generative AI tools can produce incorrect or biased responses. For many tools there are considerations around privacy and the use of personal data. Artificial intelligence can be a tool to help and support employees in the critical work they do, while still requiring valuable employee expertise and decision making to evaluate the appropriateness and accuracy of content produced.
What follows are guidelines for the use and adoption of generative AI tools in the McMaster work environment and for McMaster employees. These guidelines are organized into three sections: (1) Principles, (2) Use of Tools and (3) Adoption and Licence of Tools. The guidelines aim to provide the McMaster community with a sense of how generative AI can be positively used by employees in the regular course of their work, for the purposes of providing services, or for identifying potential tools for integration into McMaster operations.
In this 35-minute presentation, Dr. Erin Aspenlieder, Special Advisor to the Provost on Generative AI, introduces staff at McMaster to generative AI and to the Provisional Guidelines.
Provisional Principles
The provisional guidelines offered here are guided by the following principles:
- Generative artificial intelligence tools are not neutral; their adoption or use invites careful consideration of risk and value.
- Generative artificial intelligence tools may change how the community and individuals work, but the use of artificial intelligence does not change the meaningful impact of work.
- Generative artificial intelligence tools introduce opportunities for work efficiency, alongside opportunities to engage in different kinds of work.
- Individuals will have different reactions to the use of and interaction with artificial intelligence tools. Openness to learning about this technology is encouraged, recognizing individuals learn and respond to change differently.
Provisional Guidelines on the Use of Generative AI in Operational Excellence
Provisional Guidelines: The Use of Generative Artificial Intelligence (AI) in Operational Excellence at McMaster University – April 2024 by McMaster University is licenced under a Creative Commons Attribution-NonCommercial 4.0 International Licence
- Employees may be able to use generative AI in their work. The use of generative AI should involve a conversation between supervisors and employees and the completion of Appendix A: Employee Generative AI Considerations Checklist, regardless of how, by whom, or when its use is initiated.
- Supervisors should ensure employees understand the use of generative AI as required by their role within work hours; supervisors should also ensure employee privacy training and information security training are current.
- Specific McMaster training on generative AI use for employees is in the early stages of development and will be made available as soon as possible.
- Do not upload or share confidential, personal, personal health or proprietary information with a generative AI tool unless a data security and risk assessment and a privacy and algorithmic assessment have been completed for the specific tool.
- Never upload or share personal or personal health information with a generative AI tool where the information is in the custody of a partner health care institution. Any use of this information must remain within the control of that institution's services and policies.
- See Appendix C: Generative AI Tool Risk Assessment Processes
- Microsoft Copilot and Microsoft 365 Copilot are two different tools. As of March 2024, McMaster has an enterprise licence for Microsoft Copilot (but not for Microsoft 365 Copilot), which ensures that, when logged in using McMaster credentials, data used is not shared with either Microsoft or McMaster. Confidential, personal, or proprietary information can therefore be used with Microsoft Copilot when logged in with McMaster credentials. See Start Here with Copilot for currently licenced tools and capabilities.
- Generative AI tools may create false, misleading, or biased outputs. Critically evaluate and personally verify any outputs used or integrated into work products.
- Employees should cite or acknowledge the use of generative AI according to agreements in Appendix A.
- Employees should review the privacy policy and user agreement of generative AI tools and consult with the Privacy Office, Information Security, or the Office of Legal Services to address any questions or concerns about privacy policies or terms and conditions found in user agreements.
- Legal questions of intellectual property (such as copyright and privacy) continue to be evaluated by provincial and federal courts. Until such questions are resolved, employees should not use generative AI-created content for proprietary work, to autonomously make decisions (e.g. hiring), or to create content embedded in other university systems (e.g. Mosaic, Slate, Active Directory). All McMaster policies remain in effect.
- See Appendix D: Decisions on Use for examples.
Widespread access to generative AI tools* offers exciting applications in the university sector and opportunities to innovate. Many existing software products are adding AI features, and many new tools and products are launching or in development. McMaster supports the use of generative artificial intelligence tools when that use enhances the values and the strategic priorities of the institution, and enables individuals, teams and units to experiment with new ways of teaching, learning, researching and working.
While exciting, adopting any new technology at McMaster invites pause and consideration of a range of questions, as well as evaluation of costs and benefits of the tools and to the institution. Recognizing that the pace of change in artificial intelligence is – at the moment – rapid, the desire to move quickly to take advantage of new technological capabilities must be tempered by appropriate caution, due diligence and care for our communities.
The evaluation of the new tool or feature and the support for its implementation involves a (perhaps surprising!) range of service units and individuals on campus. These evaluative steps are designed to ensure thoughtful and safer use of technology and will provide recommendations for your consideration on whether and how to adopt the tool in line with McMaster processes, including IT governance and compliance measures such as cybersecurity and data governance.
The exact steps you’ll need to follow will vary depending on whether you are interested in a new tool, a new feature in an existing tool, building your own custom tool, or something in between.
Your first step will be to meet with the Special Advisor to the Provost on Generative AI to discuss the use case you are interested in exploring. This early conversation will help identify possible next steps and answer any questions you may have. You can contact the Special Advisor at macgenai@mcmaster.ca and should anticipate a reply in three business days.
As these tools and their adoption are new for the central support units on campus, too, and the processes for evaluating and supporting them are being developed as we discover and use them, we invite feedback and appreciate your patience as we learn together.
*In the context of these guidelines, “tool” is used to refer to a generative AI model, system, software, or product that can be used to enhance teaching, learning, and work at McMaster University. This includes existing software products that are adding AI features, as well as new AI-based tools and products that are being launched or developed.
As “provisional,” these Guidelines will need to evolve with changes to technology, application in practice, and feedback from the McMaster community. Some questions remain open and some supports are not yet in place; service units, along with the AI Advisory Committee, will work to address them. Known areas of continued need for clarification or support include:
- Processes to regularly assess the impact of generative AI tools and systems on privacy and data governance and security.
- Processes to regularly review generative AI tools to identify bias in their operation and plan training and resources to mitigate these harms.
- Processes for reporting and addressing incidents or concerns related to AI misuse or unethical behaviour that fall outside the Academic and Research Integrity policies or constitute a breach of regulatory privacy compliance.
- Training and resources specific to using generative AI for employees and professional services at McMaster.
- McMaster specific resources on copyright and intellectual property.
- McMaster specific guidance on referencing, attribution and citation of generative AI use in operational activities.
We welcome questions and feedback.
Appendix A: Employee Generative AI Considerations Checklist
Before using a generative AI tool as part of your work at McMaster University, please discuss with your supervisor the following questions and document your agreement on how you will approach each question. Even if generative AI tool(s) have already been incorporated into your work, please discuss with your supervisor and complete this process if you have not already done so. Communication and transparency are important to building shared understanding of how and when to use generative AI in operations.
- What types of work within my role could benefit from generative AI use?
- What types of work within my job description should not use generative AI?
- How should I document and disclose when I have used generative AI within my work? What level of use (e.g. brainstorming, drafting, copy editing, coding) warrants disclosure of use? What level of transparency is required to satisfy privacy requirements? How do I ensure everyone involved in the work I am doing understands how we will use (or not use) generative AI?
- When and how will these considerations be revisited? How will I share my experiences using generative AI with my supervisor/team?
- Completion of a review of resources on generative AI and an understanding of the capabilities and limitations of these tools.
- Commitment not to share personal, proprietary, or sensitive data with a generative AI tool unless a data security check and a privacy impact assessment have been completed, or unless using Microsoft Copilot with McMaster authentication after reviewing the data classification matrix.
- Commitment to review and evaluate for accuracy any content created by generative AI that I use in my work.
- Commitment to review and evaluate for bias any content created by generative AI that is used in work products or processes.
- Commitment to read the privacy policy and user agreement for the generative AI tool(s) used and to consult with the University Privacy Office to identify and resolve any privacy concerns.
- Commitment to subscribing, where possible, to notification processes for tool updates in the event of significant changes to user agreements.
Sample completed checklist – Name: William Shakespeare
- What types of work within my role could benefit from generative AI use?
- Summary of recent articles on poetry to assist me in staying current on literary developments
- Drafting of reports for submission to the Theatre Guild
- Drafting of evaluations of performance by actors
- Identification of gaps, omissions or areas of improvement in production reports and staffing plans
- Translation of documents for informal use by other members of the Guild and some actors
- Data visualizations of trends in audience attendance and engagement
- What types of work within my job description should not use generative AI?
- Final evaluations of performance by actors
- Complete production of a new play or poem
- How should I document and disclose when I have used generative AI within my work? What level of use warrants disclosure of use? What level of transparency is required to satisfy privacy requirements? How do I ensure everyone involved in the work I am doing understands how we will use (or not use) generative AI?
- I will include a statement: “[Name of generative AI tool] was used for [this purpose – creation, drafting, revision, image generation, etc.] in this document/presentation.”
- I will include this statement whenever I use something created by generative AI in my work. I will not include a disclosure statement if I used generative AI to brainstorm ideas that I do not ultimately use.
- I will use data or information from others only with Microsoft Copilot and only with their agreement.
- When setting up new projects or working with a new team we will discuss how we want to use generative AI in that team or project.
- When and how will these considerations be revisited? How will I share my experiences using generative AI with my supervisor/team?
- Routine informal discussion at the start of new projects or with new teams
- I will share my experience in team meetings and when I identify challenges or questions with use that we could discuss as a team
- Formally once in the first four months of use, and again every year after that
Sample completed checklist – Name: [Your Name]
- What types of work within my role could benefit from generative AI use?
- Draft explanations of area of speciality to others who do not have the equivalent knowledge in content or methods
- Drafting of presentations, reports, project briefs
- Research pertinent literature in a designed project area and suggest the applicability of the concepts
- Draft sections of papers, proposals and abstracts
- What types of work within my job description should not use generative AI?
- Review and approval of papers, proposals and abstracts
- Apply specialized knowledge and principles to review, critically appraise and interpret literature, reports, presentations and other work
- Facilitating meetings, delivery of presentations
- Safeguarding confidentiality of data
- Coordination and management of project activities
- How should I document and disclose when I have used generative AI within my work? What level of use warrants disclosure of use? What level of transparency is required to satisfy privacy requirements? How do I ensure everyone involved in the work I am doing understands how we will use (or not use) generative AI?
- I will include a statement: “[Name of generative AI tool] was used for [this purpose – creation, drafting, revision, image generation, etc.] in this document/presentation.”
- I will include this statement whenever I use something created by generative AI in my work. I will not include a disclosure statement if I used generative AI to brainstorm ideas that I do not ultimately use.
- I will use data or information from others only with Microsoft Copilot and only with their agreement.
- When setting up new projects or working with a new team we will discuss how we want to use generative AI in that team or project.
- When and how will these considerations be revisited? How will I share my experiences using generative AI with my supervisor/team?
- Routine informal discussion at the start of new projects or with new teams
- I will share my experience in team meetings and when I identify challenges or questions with use that we could discuss as a team
- Formally once in the first four months of use, and again every year after that
Appendix B: Supervisor Generative AI Conversation Guide
Discussing generative AI use with your employees and teams is an important step in establishing expectations, opening opportunities for experimentation and innovation, and identifying where more information, training or guidance is needed. Communication and transparency are important to building shared understanding of how and when to use generative AI in operations.
The “Employee Generative AI Considerations Checklist” asks employees to work with supervisors to discuss and document how generative AI should be used as part of their role and responsibilities. Supervisors should likewise discuss the same considerations for their own work with their Director, Dean, AVP or similar role.
As a supervisor you may already have employees using generative AI tools, or you may be unsure about how the technology works and hesitant to encourage its use without knowing more. Integrating generative AI into operational work will be a long process with different individuals and teams at different stages of that work.
Recognizing the wide range of uses and varying approaches, supervisors should plan to hold these conversations with an aim of learning more about how employees might already be using these tools, imagining together where there could be use, and further opening lines of communication and learning together.
Just as responses will vary, the formality of these conversations and the documentation of the discussion will be contextual. Supervisors can consider documenting conversations in an electronic file, in meeting notes, or as part of regular check-ins with team members. The formality of this process may change as the University gains more experience with integrating generative AI into operational work.
While responses will vary based on the unique context of roles across the University, in what follows we offer some questions and examples that may guide you in these conversations.
Possible supervisor strategies:
- Review job description with employee and discuss activities that could benefit from generative AI (e.g. brainstorming, summary of notes, email drafting, copy editing)
- Discuss day-to-day tasks with employee and consider which tasks could benefit from experimentation or use of generative AI (e.g. data analysis, report writing, documentation)
- Invite employee to develop an initial list of tasks or activities that they see as benefiting from generative AI and discuss together.
Possible discussion prompts:
- What value might generative AI bring to this task?
- How might generative AI support or assist in your work on this task?
- What are some of the possible uses of generative AI in your work or responsibilities?
- Do you foresee any risks in using generative AI in this work?
Possible supervisor strategies:
- Review job description with employee and discuss activities that should not use generative AI and discuss why generative AI use would be inappropriate (e.g. use of personal information outside of Copilot)
- Discuss day-to-day tasks with employee and consider which tasks should not use generative AI and consider why not (e.g. analyzing qualitative program evaluation data from a focus group without the informed consent of all participants)
- Collaborate to create a list of activities or tasks, or parts of the role, where generative AI would be problematic, risky or inappropriate
Possible discussion prompts:
- What might be some of the risks of using generative AI to complete [this task] or [this part of your role]?
- What could be some of the negative impacts on our community/students/colleagues if generative AI was used in this work?
Possible supervisor strategies:
- Citation and disclosure practices vary by context. Check with colleagues in your area or similar roles at other institutions to consider what emerging norms for citation or disclosure may be.
- Collaborate with other supervisors in your area or role to develop citation or disclosure language or draw from the McMaster Library’s Generative AI citation guide.
- Develop with the employee or team a working document that includes use cases of generative AI. Work together to make decisions for each use case on whether and how to disclose use.
- Sample acknowledgement could read: “[Name of generative AI tool] was used in the creation/drafting/editing of this document. I have evaluated this document for accuracy.”
- Connect with macgenai@mcmaster.ca to share any processes or approaches that you feel could benefit other areas on campus, or if you are seeking additional input or support.
Possible discussion prompts:
- What might be some reasons our [key consulted groups] might need or want to be aware that generative AI was used in this [type of work]?
- How do we ensure that everyone involved in a project or process that uses generative AI is aware and agrees to the use?
- What possible risks to our credibility or expertise are present if we do not disclose use of generative AI in this [type of work]?
- What professional obligations do we have to be transparent with our use of generative AI in our area?
Appendix C: Generative AI Tool Risk Assessment Processes – Data Security, Privacy and Intellectual Property
Many AI tools or software collect, transfer and store user data. For some tools there are options to not have data collected – either by selecting options or paying for premium features. Understanding what data a specific tool is collecting, transferring or storing is important for limiting the risk of sensitive or personal data being used in ways you did not anticipate or want.
For any personal or sensitive data, McMaster has an enterprise licence for Microsoft Copilot, which ensures that, when logged in using McMaster credentials, data used is not shared with either Microsoft or McMaster; confidential, personal, or proprietary information can therefore be used.
For other generative AI tools there are ways to mitigate some risk and better protect your data security. To begin this process, University Technology Services’ Information Security team provides a template for artificial intelligence (AI) and machine learning (ML) data security review. If you are implementing an AI/ML project, your first step should be to contact Information Security to request this template by initiating a new Security Review ticket; you do not need to fill in all details.
The template includes a thorough list of what a full security review for an AI/ML project could look like. In planning your tool or software implementation, you will work with Information Security to co-determine which sections of this review are both applicable and feasible.
If you are interested in learning more consider checking out these resources:
- Threat Modeling AI/ML Systems and Dependencies – Microsoft Learn
- Failure mode analysis – Azure Architecture Center | Microsoft Learn
- Artificial Intelligence Risk Management Framework (AI RMF 1.0) (nist.gov)
If you have questions about data security, please reach out to macitgo@mcmaster.ca.
The Privacy Office is responsible for ensuring organizational compliance with relevant legislation, acting as a source of advice, guidance, and policy direction to ensure McMaster complies with the Freedom of Information and Protection of Privacy Act (FIPPA), the Personal Health Information Protection Act (PHIPA), the Canadian Anti-Spam Legislation (CASL), and other relevant legislation. In cases where the work of university employees includes access to personal or personal health information collected and retained by partner healthcare institutions, such information must remain in the custody and control of that institute.
When considering the use of generative AI tools, especially when the use will include personal or personal health information, begin by completing an Early Privacy Risk Check, which may lead to a fuller Privacy and Algorithmic Impact Assessment (PAIA). This process enables you to work in partnership with the Privacy Office to identify risks of regulatory non-compliance and to manage those risks through mitigation planning.
If you have questions about privacy, please reach out to privacy@mcmaster.ca.
Legal questions of copyright and intellectual property remain an emerging area in Canada, with no current case law to guide decisions.
Accordingly, when using generative AI tools, users should be aware that there are open legal questions about copyright and ownership of the data used to train large language models. Depending on how these questions are settled, the allowable use of outputs from these tools may change.
Likewise, when using a generative AI tool, users should be cautious about inputting prompts that include intellectual property unless the user has completed data security and/or privacy reviews and is confident the intellectual property data is secure.
If you have questions about copyright or intellectual property and generative artificial intelligence, please reach out to the Office of Legal Services.