What tools and resources can help me learn more about generative AI?
When getting started, it’s important to have a solid understanding of how generative AI works and how to create effective prompts — the words and phrases that guide AI to perform specific tasks or generate specific outputs.
Below is a list of recommended resources and policies to review before using generative AI:
- Basics of generative AI: It’s recommended to spend some time understanding exactly how generative AI works before using it to support your work, so you can better understand its strengths and limitations.
- Prompting: The results from generative AI are only as good as the questions or tasks assigned to it. A well-crafted prompt can guide the tool to generate high-quality responses. Poor prompting can result in irrelevant responses or hallucinations, which are responses that are made up or inaccurate. Taking the time to learn effective prompting will help to ensure success.
- Review the prompting job aid resources in ARC
- LinkedIn Learning is a fantastic resource, with a variety of courses and modules that offer more role-specific guidance.
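The prompting guidance above can be sketched as a simple template. The role/context/task/format breakdown below is a common prompting pattern, not an official university template, and the example wording and function name are illustrative assumptions:

```python
# Illustrative sketch: assembling a well-structured prompt from its parts.
# The role/context/task/format structure is a common prompting pattern;
# the wording here is hypothetical, not a mandated format.

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Combine the pieces of an effective prompt into one message."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {output_format}"
    )

# A vague prompt often yields vague or irrelevant answers:
vague = "Write something about our event."

# A specific prompt gives the tool the details it needs:
specific = build_prompt(
    role="a university communications writer",
    context="a student wellness fair on October 12 at the campus union",
    task="draft a 100-word announcement for the staff newsletter",
    output_format="one short paragraph, friendly and factual",
)
print(specific)
```

The contrast between `vague` and `specific` illustrates the point in the bullet above: the more context and constraints a prompt supplies, the more likely the tool is to return a relevant, high-quality response.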
What generative AI-powered tools and resources can I use?
With new AI-powered tools emerging seemingly daily, it is important to take the necessary steps to confirm they comply with university data policies. If a tool's AI functionality has not been previously reviewed, you will need to initiate a formal risk assessment with OTDI.
- Microsoft Copilot with Corporate Data Protection is currently the only generative AI tool that has been vetted and approved for use at Ohio State.
- Many other tools include generative AI-powered support features. The following tools have AI-powered features that have undergone a risk assessment:
- Adobe Creative Suite
- Salesforce Marketing Cloud (Einstein)
Using generative AI with integrity
Using generative AI ethically and with integrity is important to ensure transparency and maintain trust with our audiences. Because of the ease with which one can create content with generative AI, you must protect yourself and the university by confirming that all generated content is accurate and ethical.
Risks of using generative AI include bias, copyright issues, inaccurate representations and misinformation. A disclosure statement can help mitigate risks associated with publishing generative AI-infused content; however, if you find yourself considering a disclosure statement, it is strongly recommended that you also reconsider the use of generative AI in that instance.