Making Smart Choices with AI

Since OpenAI first released ChatGPT in November 2022, the impact of Artificial Intelligence (AI) developments has been huge, both at a societal level and, more locally, within the sphere of Higher Education (HE). For researchers wanting to explore the role that AI might play in research processes, navigating the AI landscape can easily feel overwhelming. Alongside Generative AI (GenAI) tools like ChatGPT, Gemini and Claude, there is an abundance of AI tools that researchers could use for discrete tasks (Futurepedia gives an idea of the sheer volume of tools available).

[Image: A screenshot of Futurepedia showing the number of AI tools across six different themes.]

These tools tend to be available on a freemium or subscription basis and, because they are not covered by the University’s subscriptions, require a researcher to sign up to terms and conditions that may be problematic and need careful consideration before use. This creates uncertainty: a researcher can easily end up caught between a desire to use AI in a meaningful way to advance their research and doubt about whether the tool they have selected risks harming the integrity of that research.

For postgraduate researchers (PGRs), the uncertainty is no different. For example, while the university provides access to an Enterprise version of the GenAI-powered Microsoft Copilot (a version of Copilot that provides security for your data), its use still has to be considered alongside the university guidelines for PGRs on Using GenAI ethically for work and the guidance on acceptable use of GenAI in theses (see the ‘Editorial help for PGR theses’ section in particular).

The challenge for any researcher, then, is how to evaluate when and whether AI can be used in a particular aspect of research and, if so, how to go about selecting an AI tool. To support researchers with selecting AI tools responsibly, the library has developed an Evaluative Framework for AI Tools. This framework (also available as a one-page Word document) provides a series of questions to consider before selecting and using an AI tool to assist with a research-related task. The questions are grouped into categories, and the intention is that, by working through and answering each question, researchers can make an informed choice about whether to pursue the use of an AI tool.

There are seven categories in total:

  1. Relevance of tool
  2. Prompting and input rights
  3. Outputs
  4. Policy compliance and ethics
  5. Corpus
  6. Costs
  7. Terms, conditions and data security

For example, the first set of questions prompts the researcher to consider whether an AI tool is the most appropriate option for the task at hand. If the AI tool hasn’t been purpose-built for that research task, would a relevant non-AI tool be more suitable?

Once an AI tool has been selected, the library’s Quick Review Checklist can assist a researcher with reviewing the tool’s terms and conditions, so that they can make an informed choice about whether to accept the terms or seek further advice.

For a fuller introduction to the framework and how it can be applied to the AI tools landscape, sign up for one of the Research Skills Team’s Introduction to AI Tools for Researchers sessions. We also suggest monitoring the AI for Researchers page on the UoB AI Hub for updates on university guidelines, resources and support on the use of AI.



Authors

James Barnett and Georgina Hardy, Libraries and Learning Resources
