Using AI Tools in the Research Process

Cowles Library recognizes that the rise of generative AI lets writers and researchers generate new content and seek out new information faster than ever before. To help educate the students, staff, and faculty of Drake University on the benefits and risks of generative AI, the library faculty have created a set of guiding principles to consider when using these tools in the research and scholarship process.

Below is a concise version of those principles covering the use of generative AI in the research process, especially with regard to searching for, finding, and evaluating information.

This document focuses primarily on text-based generative AI built on large language models as it relates to the research and scholarship process and to library technologies. However, some of these topics also apply to AI image-generation tools.

  • Skills, Habits, & Tools: Good research requires the repetition of certain skills, the development of positive habits, & the use of proper tools.
  • AI as a Tool: AI can be a helpful tool, but only if used carefully, correctly, and in moderation.
  • Over-reliance: Leaning too heavily on AI tools can erode skills & habits, leaving the researcher dependent on the AI rather than on their own abilities.
  • Critical Thinking & Accountability: A quality research product requires human critical thinking and not just the output of an AI. The researcher is accountable for what they produce and put into the world.
  • Critical Thinking & Ethics: A quality research product must be produced in an ethical way otherwise the value, validity, & truthfulness of the resulting research can be called into question.
  • Critical Thinking & Reliability: A quality research product must be reliable, or other researchers will not use it in their own work; therefore, AI tools must be used sparingly to avoid introducing unreliable information into the writing, results, & conclusions.
  • Information Risks
    • Hallucinations: AI can fabricate false information and present it as truth, including inventing fake sources when asked to create a reference list.
    • Misinterpretation: AI can misstate, distort, or inaccurately summarize information.
    • Incomplete information: AI can leave out important information.
  • Human Bias: Because AI systems are trained on historical data and use algorithms created by human programmers, they recreate and perpetuate human biases that exist in society. AI responses can contain racial and gender stereotypes as well as various other biases (e.g., disability, age, sexual orientation) in both image- and language-based models.
  • Privacy & Security Concerns
    • User Data Protection: Information about what user data is collected and how data will be used, clear term agreements, the right to have companies delete user personal data, and robust protection from data breaches are all critical for protecting user privacy (Koerner, 2023).
    • Intellectual Property Theft: Generative AI tools can inadvertently use copyrighted or proprietary material, leading to potential legal issues.
    • Malicious Use: Generative AI output can be used for malicious activities such as creating deepfakes and spreading potentially harmful misinformation.
  • How to Address the Risks
    • Include domain experts: AI responses “should be checked by a domain expert for accuracy, bias, relevance, and reasoning” (Hosseini, Rasmussen, and Resnik, p. 4). Students are also accountable for reviewing and engaging with AI responses as the human creators of the content (Mollick and Mollick, 2023, p. 3).
    • Author accountability: Authors “must be held accountable for inaccuracies, fallacies, or any other problems in manuscripts” (Hosseini, Rasmussen, and Resnik, p. 4).
    • Human involvement: There is a need for a “human in the loop” to review and engage with AI responses (Mollick and Mollick, 2023, p. 3). According to the Association of Research Libraries (2024, April, p. 2), “Libraries believe No human, No AI,” meaning humans are needed to direct, manage, and review AI-driven processes for the sake of integrity and to create dependable systems for research.
  • There are three copyright issues related to generative AI output: scraping (or copying) content without consent; using that content to respond to prompts; and publishing the output of a generative AI while claiming to be the sole author (i.e., plagiarism).
  • AI-generated information should not be cited like a published article because the information is not permanent. Scholars should acknowledge when they have used an AI to help create an article or other written document (Nogueira and Rein, 2024). APA provides a method of acknowledging the use of AI that could be adopted by many fields (McAdoo, 2024).
  • Authority Is Constructed and Contextual: Critically evaluate the information produced by generative AI tools to ensure accuracy, reliability, and inclusivity, because these tools draw on authorities of varying quality, some of which promote biased views. Note that "authority" itself is a form of bias.
  • Information Has Value: Avoid sharing sensitive or unauthorized information with generative AI tools, stay informed about legal and socioeconomic issues, be transparent, and address equitable access. Note that generative AI tools raise significant ethical and practical concerns.
  • Information Creation as a Process: Assess the information produced by generative AI to ensure it meets specific needs and aligns with current principles. Note that the stages of creating information—researching, creating, revising, and disseminating—vary by discipline, and influence the final product.
  • Research as Inquiry, Searching as Strategic Exploration & Scholarship as Conversation: Refine prompts by giving clearer, more specific instructions for better generative AI results. Note that these tools learn from human interactions and that well-crafted prompts can expand understanding or open new lines of inquiry.

Effective, appropriate, and ethical generative AI use requires awareness of context, legal issues, and best practices. The university governing documents listed below can guide Drake community members in using generative AI tools in the research process.

Related Drake University Policies
Related Drake University Resources

The information on this page was compiled by the Cowles Library AI Task Force, comprising the following members of the Cowles Library faculty:

Priya Shenoy (chair)
Dan Chibnall
Doreen Dixon
Marcia Keyser
Teri Koch
Andrew Welch

For a more detailed version of this information, including references for the sources cited in the above sections, please see the full AI Guiding Principles document.