Overview
Welcome to Introduction to Generative AI for Staff!
This online module is an extension of McMaster’s Provisional Guidelines on the use of Generative AI for Operational Excellence.
The module aims to provide you with an understanding of generative AI to help you think through how these technologies intersect with your work. Whether you have reservations or are enthusiastic about AI, “Introduction to Generative AI for Staff” offers a space for exploration and thoughtful consideration.
Topics include what generative AI is, what generative AI tools can/can’t do, how to use generative AI tools, and how generative AI is changing the world of work.
Intended Learning Outcomes
By the end of this module, you should be able to:
- Recognize the differences between Generative AI models and tools and how they can be used in your work.
- Use a Generative AI tool to complete a work task.
- Consider ways in which generative AI could impact your work.
What is Generative AI?
Generative artificial intelligence (Gen AI) is artificial intelligence that can generate text, images, or other media, using predictive modelling. Here’s how it works.
Gen AI models are initially trained on large datasets.
- Text generators are trained on large datasets of existing text, such as books, articles, or websites.
- Image generators are trained on extensive datasets of images. Each image consists of a grid of pixels, with each pixel having colour values and positions.
- Audio and video generators are trained on datasets containing audio clips or video frames, which are sequences of images displayed rapidly.
Gen AI models learn to recognize patterns in the training data and build predictive models based on this learning.
- Text generators learn the context in which words and phrases commonly appear and use linguistic and grammatical rules to predict the next word or phrase and generate sentences or paragraphs.
- Image generators learn patterns in images, identifying shapes, objects, colours, and textures, and use spatial relationships between elements and colours to predict and generate pixels.
- Audio/video generators, in addition to recognizing static image features, learn how sounds or images evolve in a sequence, and use these temporal and spatial relationships to generate video frames and/or audio segments.
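To make the idea of “learning patterns and predicting what comes next” concrete, here is a deliberately tiny sketch in Python. It is not a real Gen AI model – it only counts which word follows which in a short “training” text and always picks the most common follower – but the core loop (learn patterns from data, then predict the next item repeatedly) is the same idea at a vastly smaller scale.

```python
# A toy illustration (not a real Gen AI model) of the core idea behind
# text generation: predict the next word from patterns in training data.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish"

# Count which word tends to follow each word in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word seen in training, or None."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate a short sequence by repeatedly predicting the next word.
sequence = ["the"]
for _ in range(3):
    nxt = predict_next(sequence[-1])
    if nxt is None:
        break
    sequence.append(nxt)

print(" ".join(sequence))
```

Real models use neural networks with billions of parameters and predict probabilities over whole vocabularies, but this is the intuition: generation is repeated prediction.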
If you’re interested in learning more about how this process works, you can check out this visual explainer.
You can further refine the generated content – directly, by providing feedback to the AI tool, or by editing your original prompt – to meet your specific needs. You’ll learn more about this in the Practice tab of this learning module.
Foundation Models and Large Language Models
Foundation models are a class of AI systems that learn from large amounts of data and can perform a wide range of tasks across different domains. Foundation models are not limited to language; they can also handle other modalities like images, audio, and video. They are so called because they act as the “foundation” for many other uses, like answering questions, making summaries, translating, and more. Large language models (LLMs) are a specific type of foundation model, trained on massive amounts of text data, that can generate natural language responses or perform text-based tasks.
Foundation models are very general and broad, and they may not capture the nuances and details of every domain or task. You can “fine-tune” or adapt a foundation model to improve the performance and quality of its outputs by providing additional data and training relevant to a specific subject area or task. For example, if you want to use a foundation model like GPT-4 to generate summaries of news articles, you can fine-tune it on a dataset of news articles and their summaries. This helps the model learn the specific style, vocabulary, and structure of news summaries, and generate more accurate and coherent outputs.
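As a rough sketch of what the news-summary fine-tuning data from the example above might look like in practice, the snippet below builds article/summary pairs in the JSONL “messages” format that some commercial fine-tuning services accept. The article text and system instruction are hypothetical placeholders, and the exact field names should be treated as illustrative rather than a definitive API specification.

```python
# A sketch of preparing fine-tuning data for the news-summary example.
# The article text is made up; the "messages" structure mirrors a common
# chat-style fine-tuning format, but check your provider's docs for specifics.
import json

# Each training example pairs an article with the summary we want the
# model to learn to produce.
articles = [
    ("City council approved the new transit plan on Tuesday ...",
     "Council approves transit plan."),
    ("Researchers reported a breakthrough in battery storage ...",
     "Battery storage breakthrough reported."),
]

training_examples = []
for article, summary in articles:
    training_examples.append({
        "messages": [
            {"role": "system", "content": "Summarize news articles in one sentence."},
            {"role": "user", "content": article},
            {"role": "assistant", "content": summary},
        ]
    })

# Fine-tuning services typically expect one JSON object per line (JSONL).
jsonl = "\n".join(json.dumps(example) for example in training_examples)
print(jsonl.splitlines()[0][:60])
```

Hundreds or thousands of such pairs, uploaded to a fine-tuning service, are what teach the model the target style and structure.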
What’s the difference between a Gen AI model and a Gen AI tool?
A Gen AI model is the underlying technology or algorithm that enables the generation of content. A Gen AI tool is the user interface or service that allows users to access and interact with the generative AI model. For example, GPT (Generative Pre-trained Transformer) is one of the most popular LLMs (there are currently three versions – GPT-3.5, GPT-4, and GPT-4o), whereas ChatGPT is the natural language chatbot that uses GPT-3.5, GPT-4, or GPT-4o to generate content based on user inputs.
There are many Gen AI tools available – resource directories like There’s an AI for That list thousands, with more added each day. But it’s most helpful to start with the core foundation models, because most AI tools run on top of or take advantage of these models. Learning to use these foundation models directly is the most powerful and easiest way to gain experience with AI.
Click on the cards below to learn more about some of the features and functionality of the most common Gen AI models. McMaster recommends you use Microsoft Copilot with your McMaster login information to maintain better data security and privacy. Find out more on the Getting Started with Copilot webpage.
References
Andrei. There’s An AI For That (TAAFT)—The #1 AI Aggregator. There’s An AI For That. Retrieved October 26, 2023, from https://theresanaiforthat.com
Anthropic. (2023, May 11). Introducing 100K Context Windows. Anthropic. https://www.anthropic.com/index/100k-context-windows
Anthropic. (2024, March 4). Introducing the next generation of Claude. Announcements. https://www.anthropic.com/news/claude-3-family
Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., McKinnon, C., Chen, C., Olsson, C., Olah, C., Hernandez, D., Drain, D., Ganguli, D., Li, D., Tran-Johnson, E., Perez, E., … Kaplan, J. (2022). Constitutional AI: Harmlessness from AI Feedback (arXiv:2212.08073). arXiv. https://doi.org/10.48550/arXiv.2212.08073
McMaster University (2024). Start Here with Copilot. Discover M365 and Zoom. https://mcmasteru365.sharepoint.com/sites/discoverM365andZoom/SitePages/Start-Here-with-Copilot.aspx
Mollick, E. (2023, September 16). Power and Weirdness: How to Use Bing AI. One Useful Thing. https://www.oneusefulthing.org/p/power-and-weirdness-how-to-use-bing
Mollick, E. (2024, February 8). Google’s Gemini Advanced: Tasting Notes and Implications. One Useful Thing. https://www.oneusefulthing.org/p/google-gemini-advanced-tasting-notes
Murgia, M. and the Visual Storytelling Team. (2023, September 12). Generative AI exists because of the transformer. Financial Times. https://ig.ft.com/generative-ai
OpenAI. (2024, May 13). Hello GPT-4o. https://openai.com/index/hello-gpt-4o/
Ortiz, S. (2023, November 13). Bing Chat now goes by Copilot and feels a lot more like ChatGPT. ZDNET/Innovation. https://www.zdnet.com/article/bing-chat-now-goes-by-copilot-and-feels-a-lot-more-like-chatgpt/
Pequeño IV, A. (2024, February 26). Google’s Gemini Controversy Explained: AI Model Criticized By Musk And Others Over Alleged Bias. Forbes. https://www.forbes.com/sites/antoniopequenoiv/2024/02/26/googles-gemini-controversy-explained-ai-model-criticized-by-musk-and-others-over-alleged-bias/
Pinsky, Y. (2023, September 19). Bard can now connect to your Google apps and services. Google. https://blog.google/products/bard/google-bard-new-features-update-sept-2023/
Shah, D. (2024, February 26). What Are AI Tokens and Context Windows (And Why Should You Care)? The Agent AI Newsletter. https://simple.ai/p/tokens-and-context-windows
Stewart, E. (2024, February 14). Google’s Bard Has Just Become Gemini. What’s Different? Enterprise Management 360. https://em360tech.com/tech-article/gemini-vs-bard
Wharton School. (2023, August 1). Practical AI for Instructors and Students Part 2: Large Language Models (LLMs)—YouTube. YouTube. https://www.youtube.com/watch?v=ZRf2BfDLlIA
What can it do?
The content on this page was adapted from Ethan Mollick’s blog post: How to Use AI to Do Stuff: An Opinionated Guide
Gen AI can create, compose, and produce a diverse array of content. Click on the accordions below to learn more about different ways to use AI and which tools are most suitable. McMaster recommends you use Microsoft Copilot with your McMaster login to better secure your data and privacy. If you’re using ChatGPT for any of these uses, you might consider turning off data collection so that your prompts and conversations are not collected and stored.
- Write drafts of anything – blog posts, essays, promotional material, lectures, scripts, short stories. All you have to do is prompt it. Basic prompts result in boring writing, but getting better at prompting is not that hard. AI systems become more capable writers with a little practice and user feedback.
- Make your writing better. You can paste your text into an AI and ask it to improve the content, check for grammar and improve paragraphing. Or ask for suggestions about how to make it better for a particular audience. Ask it to create five drafts in radically different styles. Ask it to make things more vivid or add examples.
- Help you with tasks. AI can do things you don’t have the time to do. Use it to write summaries, create project templates, take meeting notes, and a lot more. Later in this module, you’ll have a chance to try out using an AI tool to help you complete a work task.
- Unblock yourself. It’s very easy to get distracted from a task when you get stuck. AI can provide a way of giving yourself momentum. Ask it for ideas to help you get started. You often need to have a lot of ideas to have good ideas, and AI is good at volume. With the right prompting, you can also get it to be very creative. Or you can ask it for possible next steps in a project or a work schedule to keep you organized. The key is dialogue.
AI tools are being integrated directly into common office applications. Microsoft 365 applications now offer an AI-powered “Copilot” to assist with various tasks as you work within documents, and Gemini is being integrated into Google’s Workspace applications. The implications of these innovations for writing are profound. McMaster is currently not planning to activate Copilot for Microsoft 365, as it comes with a significant cost and the data security risks are still unknown.
There are four big image generators most people use:
- Stable Diffusion: is open source and can be run on any high-end computer. It takes effort to get started, since you have to learn to craft prompts properly, but once you do it can produce great results. It is especially good at combining AI with images from other sources. Here is a guide to using Stable Diffusion (be sure to read both part 1 and part 2).
- DALL-E: is incorporated into Copilot (in creative mode) and Copilot image creator. This system is solid, but not as good as Midjourney.
- Midjourney: is the best system as of mid-2023. It has the lowest learning curve: just type in “thing-you-want-to-see --v 5.2” (the --v 5.2 at the end is important; it tells Midjourney to use the latest model) and you get a great result. Midjourney requires Discord. Here is a guide to using Discord.
- Adobe Firefly: is built into a variety of Adobe products, but it lags behind DALL-E and Midjourney in terms of quality. However, while the other two models have been unclear about the source images that they used to train their AIs, Adobe has declared that it is only using images it has the right to use. One of the major benefits of Firefly is generative fill – you can use it while editing an image in Photoshop to add something to or alter that image based on your prompting.
Here are the first images that were created by each model when provided with the prompt: “Fashion photoshoot of sneakers inspired by Van Gogh” (each image is labelled with the AI model)
An AI video generator is a web-based or standalone tool that allows you to create video assets without prior video editing experience. These tools can assist with tasks like erasing video elements, creating green screens, constructing scripts from a URL or blog post using text-to-video, and more. It is now easy to generate a video featuring a completely AI-generated character reading a completely AI-written script, spoken in an AI-made voice and animated by AI. These tools can also be used to create deepfakes of real people. Runway v2 was the first commercially available text-to-video tool and is a useful demonstration of what is to come.
You can use Copilot to analyze or summarize documents or images. You can do this by attaching images or files using the web browser interface. Alternatively, open a PDF in the Microsoft Edge browser (right-click the PDF, hover over the Open with option, and choose Microsoft Edge), then click the Copilot icon in the top right corner of the browser to bring up Copilot in the sidebar. Then tell the tool what you’d like it to do (e.g., generate a summary). You can get more creative with your prompts to extract what you need, for example: “summarize the key points from this document”; “give a 3-sentence summary highlighting the most important information in this report”; “extract a bulleted list of the 5 main takeaways from these meeting notes”; “describe the text that appears in the image”. Using Copilot with your McMaster Microsoft license means that the data is not used to train large language models, so make sure you are signed in with your MacID@mcmaster.ca email and password.
Code Interpreter is a mode of GPT-4 that lets you upload files to the AI, allows the AI to write and run code, and lets you download the results. It can be used to execute programs, run data analysis, and create all sorts of files, web pages, and more. Though there has been much debate since its release about the risks of untrained people using it for analysis, many experts testing Code Interpreter are impressed, with one paper even suggesting it will require changing the way we train data scientists.
Claude 3 is excellent for working with text, especially PDFs. It’s possible to paste entire books into the tool. You can also give it several complex academic articles and ask it to summarize them, with reasonable results! You can then interrogate the material with follow-up questions: What is the evidence for that approach? What do the authors conclude? And so on.
Similarly, Gemini 1.5 Pro has a 128K-token context window. A limited group of developers and enterprise customers can try it with a context window of up to 1 million tokens, but this is a computationally intensive process that requires further optimizations.
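To get a feel for why these context-window sizes matter, here is a small sketch. The four-characters-per-token figure is only a rough rule of thumb for English text (actual token counts vary by model and tokenizer), and the 300-page-book size is an illustrative assumption.

```python
# A rough sketch of why context windows matter. The ~4 characters-per-token
# figure is a common rule of thumb for English text, not an exact measure.
def estimate_tokens(text):
    """Very rough token estimate: about one token per 4 characters."""
    return max(1, len(text) // 4)

def fits_in_context(text, context_window_tokens):
    """Check whether a document likely fits in a model's context window."""
    return estimate_tokens(text) <= context_window_tokens

# A ~300-page book is roughly 600,000 characters, i.e. ~150,000 tokens.
book = "x" * 600_000

print(estimate_tokens(book))           # 150000
print(fits_in_context(book, 128_000))  # False: too big for a 128K window
print(fits_in_context(book, 1_000_000))  # True: fits in a 1M-token window
```

This is why a tool with a larger context window can take in whole books or several long articles at once, while smaller windows force you to split the material up.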
It’s currently not recommended to use AI as a search engine. The risk of hallucination is high (an explanation of hallucinations is provided in the “What’s the catch?” tab of this module). However, a recent pilot study offers some evidence that AI can provide more useful answers than search when used carefully. Especially in cases where search engines aren’t very good – like tech support, deciding where to eat, or getting advice – Copilot is often a better starting point than Google. This area is evolving rapidly, but you should be careful about these uses for now.
What’s more exciting is the possibility of using AI to help us learn. You can ask the AI to explain concepts and get very good results. You can use structured prompts to work with AI as an automated tutor. Because we know the AI could be hallucinating, you would be wise to double-check any critical data against another source.
Clark, P. (2023, May 23). Dream bigger: Get started with Generative Fill. Adobe Blog. https://blog.adobe.com/en/publish/2023/05/23/future-of-photoshop-powered-by-adobe-firefly
Ethan Mollick [@emollick]. (2023a, April 5). There are big categories of common problems that, in retrospect, were never good applications for Google search. Bing AI, even with occasional inaccuracies, is just better for things like: tech support, Deciding what to do/where to eat How-to advice, Getting started advice https://t.co/9gIBxq86It [Tweet]. Twitter. https://twitter.com/emollick/status/1643718474668097538
Ethan Mollick [@emollick]. (2023b, June 15). There is a lot of excitement for AI to be a universal tutor. And it shows real promise, but there are some important problems that need to be solved. To get a sense of how good it is, try this prompt (in GPT-4): Https://chat.openai.com/share/ec1018ec-1d86-4160-b587-354253c7d5cb More in our paper: Https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4475995 https://t.co/X8kpg08DEr [Tweet]. Twitter. https://twitter.com/emollick/status/1669434927761313807
Ethan Mollick [@emollick]. (2023c, July 11). Every field of professional education needs to be working on a paper like this right now. This one tests Code Interpreter’s ability to do data science (90% on exams, the field is “on the verge of a paradigm shift”) Then it suggests how to change training. Https://arxiv.org/pdf/2307.02792v2.pdf https://t.co/OnUk22ZZ06 [Tweet]. Twitter. https://twitter.com/emollick/status/1678615507128164354
Gartenbert, C. (2024, February 16). What is a long context window? Google Blog. https://blog.google/technology/ai/long-context-window-ai-models
Gunnell, M. (2022, April 11). How to use Discord: A beginner’s guide. PCWorld. https://www.pcworld.com/article/540080/how-to-use-discord-a-beginners-guide.html
Mollick, E. (2023a, September 16). A quick and sobering guide to cloning yourself. One Useful Thing. https://www.oneusefulthing.org/p/a-quick-and-sobering-guide-to-cloning
Mollick, E. (2023b, September 16). How to Use AI to Do Stuff: An Opinionated Guide. One Useful Thing. https://www.oneusefulthing.org/p/how-to-use-ai-to-do-stuff-an-opinionated?utm_medium=reader2
Mollick, E. (2023c, September 16). On-boarding your AI Intern. One Useful Thing. https://www.oneusefulthing.org/p/on-boarding-your-ai-intern
Mollick, E. (2023e, September 16). Setting time on fire and the temptation of The Button. One Useful Thing. https://www.oneusefulthing.org/p/setting-time-on-fire-and-the-temptation
Mollick, E. (2023f, September 16). What AI can do with a toolbox… Getting started with Code Interpreter [Now called Advanced Data Analytics]. One Useful Thing. https://www.oneusefulthing.org/p/what-ai-can-do-with-a-toolbox-getting
Mollick, E. (2023g, September 16). What happens when AI reads a book ??. One Useful Thing. https://www.oneusefulthing.org/p/what-happens-when-ai-reads-a-book
Mollick, E. & L. Mollick. (2024). Student Exercises. More Useful Things: AI Resources. https://www.moreusefulthings.com/student-exercises
OpenAI. (2023). ChatGPT (GPT-4) Friendly Tutor Explains Concepts. [Large language model]. https://chat.openai.com/share/ec1018ec-1d86-4160-b587-354253c7d5cb
OpenAI. (2024). Data Controls FAQ. https://help.openai.com/en/articles/7730893-data-controls-faq
Prateek K. Keshari [@prkeshari]. (2023, July 9). 20 mins and 3 prompts later, ChatGPT code interpreter gives me 2 branded downloadable html files. Result ? https://t.co/NPMrW72g2A [Tweet]. Twitter. https://twitter.com/prkeshari/status/1678155933606637568
Stokes, J. (2022, September 29). Stable Diffusion 2.0 & 2.1: An Overview. Jonstokes.com. https://www.jonstokes.com/p/stable-diffusion-20-and-21-an-overview
Xu, R., Feng, Y., & Chen, H. (2023). ChatGPT vs. Google: A Comparative Study of Search Performance and User Experience (arXiv:2307.01135). arXiv. https://doi.org/10.48550/arXiv.2307.01135
What’s the catch?
While the innovation and creativity of generative AI are exciting, these systems do not come without limitations or ethical challenges. One of the biggest challenges right now is that no one knows the full range of capabilities of these large language models – there is no instruction manual. On some tasks generative AI is very powerful, and on others it fails completely or subtly. The only way to figure out which is which is by playing around with the technology.
Click on the accordions below to learn about considerations that can be cause for concern.
One of the biggest criticisms levelled against Gen AI tools is that they make things up. As probabilistic models, they are designed to generate the most likely response to any given prompt. Because these tools do not ‘know’ anything and are – in most instances – limited in their ability to fact-check, the responses they generate can include factual errors and invented citations/references. This phenomenon is known as ‘hallucination’.
The ability of generative AI to create realistic and plausible text, video, audio and code also makes the creation of false, biased, or politically motivated media faster and easier to produce. Our individual and collective ability to identify reliable and trustworthy sources, and to evaluate what we read, view and hear has never been more important.
There is some speculation that generative AI tools will come to include a ‘confidence indicator’ that lets users know how confident the tool is that a generated response is accurate. Likewise, some reporting suggests that generative AI tools will begin to fact-check their responses against internet sources or other AI models. At the time of writing, these capabilities are not in wide circulation. Instead, we need to practice healthy skepticism about the reliability of responses produced by generative AI and make a consistent practice of checking outputs against verified sources.
Generative AI tools are trained on a range of data. Some general models, like GPT-4, draw on a wide range of sources. Biases inherent in the training data – those that may discriminate against or marginalize underrepresented, minority, and equity-deserving groups – may appear in the results generated by these tools. While efforts have been made by companies like OpenAI to create ‘guardrails’ to prevent hateful and discriminatory results from being generated, the risk of bias persists in the limitations of the training data itself. Existing biases in the training data may make a discriminatory result statistically more likely, and so the generative AI tool is more likely to produce that result.
For example, in a prompt to generate a story about slaying a dragon, the probabilistic result is to have a prince slay the dragon because that is the most common pattern in the training data. We need to be thoughtful about the ways these biases might be perpetuated or left unexplored when we use generative AI in our work.
Generative AI tools use a wide range of data to learn from before producing outputs. Many of these tools include in their datasets content created and shared publicly – like X (formerly Twitter) or Reddit – as well as that created by artists or users without explicit consent for inclusion in a dataset for generative AI use. Ongoing lawsuits related to copyright filed by artists are challenging the inclusion of creative works in datasets, and open questions remain about what might be fair use.
Some tools, like Adobe Firefly, are working to compensate contributors, while others are tailoring their datasets to include content with consent for inclusion explicitly obtained.
Without consistent government regulation of emerging generative AI tools, users must rely on the user agreements and privacy guidelines of specific tools. Here at McMaster, we have privacy and security protocols under which technology tools are routinely evaluated for privacy and security risks. At the time of writing, with the exception of Microsoft Copilot, a complete privacy and security assessment of generative AI tools has not been completed. As such, we recommend that you carefully review user agreements and understand the ways in which generative AI tools may collect and make use of user data before consenting to use them – and, when in doubt, use Microsoft Copilot with your McMaster login.
Many generative AI tools, including ChatGPT, have settings that allow for users to turn off data collection, which means the tool will not use the inputted prompts or data for later use. When you use Copilot by signing in with your McMaster email and password, the data used in conversations with the tool is protected.
The exact environmental costs of generative AI models are hard to know, but the energy costs of training and running these tools are estimated to be considerable. The size of the model, the training approach used, and the capabilities of the tool all influence how much energy the model uses. Likewise, training a model and using it have very different energy needs. Some prominent companies deploying generative AI tools – like Google and Microsoft – have also pledged to be carbon neutral or carbon negative in a way that – ostensibly – accounts for the energy use of their generative AI models.
As a community at McMaster, we have an opportunity to make a difference by contributing to carbon offsetting programs and to educating our students on the environmental cost of these tools.
Just as there is variation in the environmental impact of generative AI tools based on their size and capabilities, there is variation in how these models are trained. Some tools, like ChatGPT, have been trained using ‘reinforcement learning through human feedback.’ This kind of training involves humans reviewing a prompt and the generated output and ranking or ‘up or down voting’ it in a way that gives the model feedback about the accuracy and helpfulness of the generated output. In addition to training the accuracy of outputs, workers are also used to review outputs against guardrails of appropriate content, or “content moderation.” While technology companies, including social media platforms and generative AI developers, have long employed human workers for content moderation, OpenAI came under criticism for outsourcing this practice to low-wage workers in Kenya. These workers must sift through toxic and explicit content with the aim of creating safer systems for the broader public, often without full consideration of their psychological wellbeing.
Al Jazeera English (2023). Who is the author of AI-generated art? https://www.youtube.com/watch?v=iPoRHiMLSOU
Hao, K. (2019, June 6). Training a single AI model can emit as much carbon as five cars in their lifetimes. MIT Technology Review. https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/
IBM Technology. (2024). Why Large Language Models Hallucinate. https://www.youtube.com/watch?v=cfqtFvWOfg0
London Interdisciplinary School. (2023). How AI Image Generators Make Bias Worse. https://www.youtube.com/watch?v=L2sQRrf1Cd8&t=1s
Mollick, E. (2023, September 16). Centaurs and Cyborgs on the Jagged Frontier. One Useful Thing. https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the-jagged
OpenAI. (2023, April 25). New ways to manage your data in ChatGPT. https://openai.com/index/new-ways-to-manage-your-data-in-chatgpt/
Perrigo, B. (2023, January 18). Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/
Satia, A., Verkoeyen, S., Kehoe, J., Mordell, D., Allard, E., & Aspenlieder, E. (2023). Generative Artificial Intelligence in Teaching and Learning at McMaster. Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching. https://ecampusontario.pressbooks.pub/mcmasterteachgenerativeai/chapter/generative-ai-limitations-and-potential-risks-for-student-learning/
Vincent, J. (2022, November 15). The scary truth about AI copyright is nobody knows what will happen next. The Verge. https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data
Practice!
The best way to learn what Gen AI is capable of, and where it falls short, is by experimenting with the tools.
Using Gen AI tools begins with a “prompt”. This is the information you give to the tool to get it to generate what you want. Prompting is mostly about experience – it takes practice to learn what works well and what doesn’t work.
Ethan Mollick differentiates two paths to prompting: conversational prompting and structured prompting.
With conversational prompting, you simply talk to the AI, ask for what you want or might need, and see what happens. For most people today, a conversational approach is enough to help you with your work.
For some uses, at least for now, a more formal structured approach has value. Structured prompting is about getting the AI tool to do a single task well in a way that is repeatable and adaptable. It usually takes experimentation and effort to make a prompt work somewhat consistently.
Structured prompts allow you to take what you learned and apply it to different contexts. Prompt libraries are becoming more common as a way of sharing structured prompts that can be adapted or experimented with. McMaster is building a prompt library for staff that you can access and contribute to yourself.
Regardless of which approach you use, it’s good practice to tell the tool:
- Who it is: this gives the AI the right context to start from (e.g., you’re an experienced instructor teaching a second-year Economics course)
- Context for its task: the more context you give it, the more effective it can be (e.g., include points of information you want it to include)
- What you want it to do, including the format of the response or the number of examples
- What you don’t want it to do (if relevant)
- Examples or steps: this helps it learn what you want and helps it think step-by-step, which means it will do a better job
- End with a question like “ask me 5 questions before you begin to better help you complete this task” or “what else do you need to know before you start?” for even further clarification of the task
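The checklist above can be sketched as a small reusable template. The role, context, and task strings below are illustrative placeholders to adapt to your own work, not a fixed recipe.

```python
# A sketch of turning the structured-prompt checklist into a reusable
# template. All example text is a placeholder to adapt.
def build_prompt(role, context, task, avoid=None, examples=None):
    """Assemble a structured prompt from the checklist components."""
    parts = [f"You are {role}.", f"Context: {context}", f"Task: {task}"]
    if avoid:
        parts.append(f"Do not: {avoid}")
    if examples:
        parts.append("Examples:")
        parts.extend(f"- {example}" for example in examples)
    # Close with a clarifying question, as the checklist suggests.
    parts.append("Before you begin, ask me any questions you need answered "
                 "to complete this task well.")
    return "\n".join(parts)

prompt = build_prompt(
    role="an experienced instructor teaching a second-year Economics course",
    context="Students have just covered supply and demand.",
    task="Draft five discussion questions for the next seminar.",
    avoid="use jargon the students have not seen yet",
)
print(prompt)
```

Because the pieces are separate parameters, you can swap in a different role, task, or set of examples and reuse the same structure across contexts – which is exactly what makes structured prompts shareable in a prompt library.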
Learning how to prompt is just part of the equation – push back and interact with the AI to improve the response (e.g., ask it to expand on a particular point, add an additional point, or change an example). Ultimately, AI is just giving suggestions for us to build upon. We can give feedback to make the response better, take and adapt or combine ideas, or discard what doesn’t work. This is where you use your own knowledge to evaluate and improve the result and tap into the real potential of using AI.
Want to learn more about prompting? Check out these resources:
Using Generative AI for a Work Task
Choose one of the prompts below from Anthropic’s Prompt Library to try it out yourself. Or you can create your own prompt for a specific task you’d like to complete. You can sign into Copilot with your McMaster credentials or use another GenAI tool of your choice.
Your task is to compose a comprehensive memo based on the provided key points. The memo should be written in a professional tone, addressing all the relevant information in a clear and concise manner. Use appropriate formatting, such as headings, subheadings, and bullet points, to organize the content effectively. Ensure that the memo is well-structured, coherent, and easy to understand for the intended audience. [copy & paste key points]
Your task is to take the provided natural language description of a process or task and transform it into clear, concise step-by-step directions that are logical, sequential, and easy to follow. Use imperative language and begin each step with an action verb. Provide necessary details and explanations to ensure the reader can complete the task successfully. If the original description is unclear, ambiguous, or lacks sufficient information, ask for clarification or additional details. [copy & paste description of process/task]
Your task is to generate a series of thoughtful, open-ended questions for an interview based on the given context. The questions should be designed to elicit insightful and detailed responses from the interviewee, allowing them to showcase their knowledge, experience, and critical thinking skills. Avoid yes/no questions or those with obvious answers. Instead, focus on questions that encourage reflection, self-assessment, and the sharing of specific examples or anecdotes. You are interviewing a candidate for a [insert position] at [insert company name]. The ideal candidate should have [insert desired experience and skills].
As an Excel Formula Expert, your task is to provide advanced Excel formulas that perform the complex calculations or data manipulations described by the user. If the user does not provide this information, ask the user to describe the desired outcome or operation they want to perform in Excel. Make sure you gather all the necessary information you need to write a complete formula, such as the relevant cell ranges, specific conditions, multiple criteria, or desired output format. Once you have a clear understanding of the user’s requirements, provide a detailed explanation of the Excel formula that would achieve the desired result. Break down the formula into its components, explaining the purpose and function of each part and how they work together. Additionally, provide any necessary context or tips for using the formula effectively within an Excel worksheet.
Reflecting on your experience
Once you’ve had a chance to experiment with the prompts and refine the responses, consider the following questions:
- Did the generative AI tool effectively generate the content I needed in terms of quality and relevance?
- Was the generated content easily customizable to suit my specific needs and preferences?
- How did the use of the tool impact the time required compared to traditional methods?
- What questions or concerns do I have about using Gen AI in this way?
References
Anthropic. (2024). Prompt Library. https://docs.anthropic.com/en/prompt-library/library
FeeDough. (2024). The Free AI Prompt Generator. https://www.feedough.com/ai-prompt-generator/
Mollick, E. (2023, November 1). Working with AI: Two paths to prompting. One Useful Thing. https://www.oneusefulthing.org/p/working-with-ai-two-paths-to-prompting
Schuloff, S., Khan, A., & Yanni, F. (n.d.). Learn Prompting: Your Guide to Communicating with AI. Retrieved October 26, 2023, from https://learnprompting.org/
Schulz, S. (2024, February 16). What is a Prompt Library? And Why All Organizations Need One. Orpical Group. https://orpical.com/what-is-a-prompt-library/
Wharton School. (2023, August 2). Practical AI for Instructors and Students Part 3: Prompting AI [Video file]. YouTube. https://www.youtube.com/watch?v=wbGKfAPlZVA
When to use it
This information was adapted from McMaster’s Provisional Guidelines on the Use of Generative AI in Operational Excellence.
Before using a generative AI tool as part of your work at McMaster University, please review the Provisional Guidelines on the Use of Generative AI in Operational Excellence and discuss with your supervisor. The following questions from the guidelines can help guide this conversation. It’s recommended that you document your agreement on how you will approach each question.
Questions to consider and discuss:
- What types of work within my role could benefit from generative AI use?
- What types of work within my job description should not use generative AI?
- How should I document and disclose when I have used generative AI within my work? What level of use (e.g., brainstorming, drafting, copy editing, coding) warrants disclosure?
- What level of transparency is required to satisfy privacy requirements? How do I ensure everyone involved in the work I am doing understands how we will use (or not use) generative AI?
- When and how will these considerations be revisited? How will I share my experiences using generative AI with my supervisor/team?
To help you think through possible tasks, start by reviewing your job description or consider day-to-day tasks that could benefit from experimentation with generative AI (e.g., brainstorming, summarizing notes, drafting emails, copy editing, data analysis, report writing, documentation). Consider:
- What value might generative AI bring to this task?
- How might generative AI support or assist in your work on this task?
- What are some of the possible uses of generative AI in your work or responsibilities?
- Do you foresee any risks or negative impacts in using generative AI in this work?
Frequently Asked Questions
Do not upload or share confidential, personal, personal health, or proprietary information with a generative AI tool unless a data security and risk assessment and a privacy and algorithmic assessment have been completed for the specific tool. Review the privacy policy and user agreement of any generative AI tool, and consult with the Privacy Office, Information Security, or the Office of Legal Services to address any questions or concerns about privacy policies or terms and conditions found in user agreements. Subscribing to notification processes for tool updates can be a helpful way to stay informed about significant changes to user agreements. Learn more by completing your privacy training and information security training.
For personal or sensitive data, McMaster has an enterprise license for Microsoft Copilot. This license ensures that, when you are logged in with your McMaster credentials, your data is not shared with either Microsoft or McMaster; confidential, personal, or proprietary information can therefore be used with that tool.
Legal questions of intellectual property (such as copyright and privacy) continue to be evaluated by provincial and federal courts. Until such questions are resolved, employees should not use generative AI-created content for proprietary work, to autonomously make decisions (e.g., hiring), or to create content embedded in other university systems (e.g., Mosaic, Slate, Active Directory), and all McMaster policies continue to remain in effect.
Before pursuing a new generative AI tool, McMaster employees and teams can engage with University Technology Services to determine whether existing applications, technologies, and resources already assessed for use by university employees could meet the need. You should also complete an Early Privacy Risk Check, which may lead to a fuller Privacy and Algorithmic Impact Assessment (PAIA). This process enables you to work in partnership with the Privacy Office to identify risks of regulatory non-compliance and manage those risks through mitigation planning.
Citation and disclosure practices will vary by context. Check with colleagues in your area or similar roles at other institutions to consider what emerging norms for citation or disclosure may be. You can also refer to McMaster Library’s Generative AI citation guide for ideas.
You may find it helpful to develop a collaborative working document that includes use cases of generative AI along with sample acknowledgements. For example, a sample acknowledgement could read: “[Name of generative AI tool] was used in the creation/drafting/editing of this document. I have evaluated this document for accuracy.”
To help you make these decisions, consider:
- What might be some reasons our [key consulted groups] might need or want to be aware that generative AI was used in this [type of work]?
- How do we ensure that everyone involved in a project or process that uses generative AI is aware and agrees to the use?
- What possible risks to our credibility or expertise are present if we do not disclose use of generative AI in this [type of work]?
- What professional obligations do we have to be transparent with our use of generative AI in our area?
You can contact macgenai@mcmaster.ca with any questions about the use of generative AI at McMaster.
Summary
In this module, we covered:
- Recognizing the differences in generative AI models and tools and how they can be used.
- Using a generative AI tool to complete a work task.
- Considering ways in which generative AI could impact your work.
By engaging in active exploration and thoughtful discussions about AI, we can cultivate a critical perspective on the potential implications and risks associated with AI. This allows us to create an environment where responsible AI use is carefully considered and thoughtfully integrated when appropriate.
If you’d like to continue your learning about Gen AI or engage in conversations with others, check out the Generative AI learning path on the Digital Capabilities Website and the Generative AI for Staff page on the Provost’s website for additional resources and events.