Office of the Provost & Vice-President (Academic)

Academic Excellence

Provisional Guidelines on the Use of Generative AI in Research

Guidelines for the use of generative AI tools for McMaster researchers – Published January 2025

These Guidelines were developed by the AI Expert Panel on Research.

Provisional Guidelines: The Use of Generative Artificial Intelligence (AI) in Research at McMaster University – January 2025 by McMaster University is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License

Introduction and Context

McMaster’s Provisional Guidelines on the Use of Generative Artificial Intelligence in Research are structured to provide guidance for the use of generative AI across the three primary stages of the research lifecycle: preparation, conducting research, and dissemination. Each stage presents opportunities and risks, which these guidelines aim to anticipate and address.  

The rapid pace at which technology is developing means these guidelines will need review and revision to ensure they remain current with technological advancements and emerging practices.

Questions, comments or suggestions about these Guidelines may be directed to the Special Advisor, Generative AI at macgenai@mcmaster.ca.  

The audience for these Guidelines is ‘researchers’ at McMaster, a broad category intended to include faculty researchers, graduate students engaged in research activities, and staff and undergraduate students who may be taking part in research.  

Moreover, while these guidelines are written to be broadly applicable, there are contextual differences across disciplines and research areas, as well as differences for graduate student researchers. Graduate students should attend to both these Provisional Guidelines and the Guidelines for the Use of Generative AI in Teaching and Learning, as both apply: the Teaching and Learning guidelines are especially relevant to coursework, while the guidelines here are pertinent to graduate research.  

Where possible, the guidelines note how these contextual differences may alter their application; feedback is encouraged on how the guidelines may be further refined to reflect these differences.  

Differences among disciplines are mirrored in differences among granting agencies and publication venues. Researchers should carefully review the expectations and guidelines of these external bodies before using generative AI in any stage of the research process.  

Provisional Guidelines on the Use of Generative AI in Research

  • Generative AI tools may create false, misleading, or biased outputs. Critically evaluate and personally verify any outputs used in the research process. Researchers are personally accountable for the accuracy and integrity of their work.  
  • Many legal and ethical debates around the appropriate use of generative AI, including the inputting of personal information or copyrighted texts into generative tools, have yet to be settled. Researchers should ensure they are informed about these debates and make appropriate legal, ethical, and political decisions in their research. 
  • Researchers who use generative AI should complete this module to help them review and reflect on broader societal implications of the use of generative AI, including labour, copyright, bias, and environmental impact. 
  • Researchers who use generative AI in any context should cite or acknowledge its use drawing on McMaster Libraries’ LibGuide and follow any publication/granting specific instructions. 

To support researchers in deciding if and when to use generative AI in the research enterprise, and in using it responsibly and well, further work is required. Specifically, the AI Expert Panel on Research will work over the 2024–25 year to develop and share: 

  • A Protocol for documenting the use of generative AI in all stages of research 
  • A module specific to researchers that describes the risks and challenges of using generative AI 

Appendix A: Supervisor and Graduate Student Researcher and/or Research Team Conversation Guide

Graduate students and supervisors should also consult the “Communication tool for supervisory relationships” before discussing generative AI use. 

What do you [graduate researcher/supervisor/team member] already know about generative artificial intelligence and what might you need to learn before incorporating these tools into your graduate research? What is your individual approach to generative artificial intelligence? What do you believe about its value or risks? 

Possible discussion prompts: 

  • How would you describe your ‘philosophy of AI use’? When, how, and why do you think AI should be used in research?  
  • How might generative AI intersect with, influence, or impact your professional goals? 

Permitted Activities 

What scholarly activities within my graduate research/research may benefit from the use of generative AI?  

Possible discussion prompts: 

  • What value might generative AI bring? 
  • For graduate student researchers: What impact might using generative AI for [this task] have on my core learning experience as a graduate student researcher? 
  • What phases of the research process and research activities would most benefit from the inclusion of generative AI? 

Examples: translation, copy-editing, brainstorming, concept explanation, drafting, coding, data analysis, data visualization, simulations, literature reviews 

Prohibited Activities  

What scholarly activities within my graduate research/research should not involve the use of generative AI? 

Possible discussion prompts: 

  • What might be some of the risks of using generative AI to complete [this task] or [this part]? 
  • What could be some of the negative impacts on my work, colleagues or my disciplinary community if generative AI was used for [this part]? 
  • For graduate student researchers: What impact might using generative AI for [this task] have on my core learning experience as a graduate student researcher? 

Examples: translation, drafting, data analysis, data visualization, interpretation and analysis, synthesis, literature reviews 

What benefit or risk does the use of generative AI pose for me as a graduate researcher/researcher? 

Examples: accessibility features, data sovereignty, implicit bias, data protection, privacy, data contamination, and international data agreements. 

How should I document and disclose when I have used generative AI in my work? What level of use (e.g., brainstorming, drafting, copy editing, coding) warrants disclosure? How do I ensure everyone involved in the work I am doing understands how we will use (or not use) generative AI?  

Possible supervisor strategies 

  • Citation and disclosure practices vary by context. Check with colleagues, journals, and funding agencies in your area to identify emerging norms for citation or disclosure. 
  • Sample acknowledgement could read: “[Name of generative AI tool] was used in the creation/drafting/editing of this [scholarly output]. I have evaluated this document for accuracy.” 

Possible discussion prompts 

  • What research ethics implications and obligations do we have to consider? 
  • What might be some reasons our [key consulted groups] might need or want to be aware that generative AI was used in this [type of work]? 
  • How do we ensure that everyone involved in a project or process that uses generative AI is aware and agrees to the use? 
  • What possible risks to our credibility or expertise are present if we do not disclose use of generative AI in this [type of work]? 
  • What professional obligations do we have to be transparent with our use of generative AI in our area?