
AI & ChatGPT

What is Artificial Intelligence?

"AI is intelligence exhibited by machines, where a machine can learn from information (data) and then use that learned knowledge to do something." Julie Mehan in Artificial Intelligence: Ethical, Social, and Security Impacts for the Present and Future

AI is usually contrasted with human intelligence. When you consider the variety of kinds of intelligence a human brain can exhibit, the breadth of computing tasks that can be considered AI quickly becomes apparent.

What is Machine Learning?

"Machine learning is a subset of artificial intelligence. This field of computer science explores a computer’s ability to learn from data." Andrew Lowe and Steve Lawless in Artificial Intelligence Foundations: Learning From Experience

Machine learning involves developing algorithms that allow computers to improve their performance at specified tasks. 
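To make "learning from data" concrete, here is a minimal sketch using the open-source scikit-learn library. The toy dataset is hypothetical, invented purely for illustration:

```python
# A minimal illustration of "learning from data" with scikit-learn.
# The dataset below is a made-up example, not from any real study.
from sklearn.linear_model import LinearRegression

# Toy data: hours of practice (input) vs. test score (output).
X = [[1], [2], [3], [4], [5]]   # features
y = [52, 57, 61, 68, 73]        # labels

model = LinearRegression()
model.fit(X, y)                 # the "learning" step: fit parameters to the data

# The fitted model now generalizes to inputs it has never seen.
print(model.predict([[6]]))     # prints a predicted score near 78
```

The key idea is that the program's behavior on new inputs is derived from patterns in the data, not written out by hand.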

What is a large language model?

"Large language models (LLMs) are AI models that are designed to understand and generate human language, code, and much more." In this context, "understand" means accurately interpreting and identifying context, not full human understanding. Sinan Ozdemir in The Quick Start Guide to Large Language Models

What is ChatGPT?

A chatbot that uses a large language model and machine learning to mimic human conversational patterns. You can think of it like the predictive text on a smartphone - it is trained to answer questions in a way that sounds plausible. It has not been trained to answer questions accurately.
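To illustrate the predictive-text analogy, here is a toy sketch (a hypothetical bigram model, vastly simpler than a real LLM) that produces fluent-sounding word sequences with no notion of whether they are true:

```python
# A toy "predictive text" model, to illustrate the analogy above.
# It picks the word that most often follows the previous word in its
# training text - plausible continuations, with no concept of truth.
# This hypothetical example is ours, not from the guide's sources.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
).split()

# Count which word follows which.
next_words = defaultdict(Counter)
for prev, curr in zip(training_text, training_text[1:]):
    next_words[prev][curr] += 1

def predict(word):
    """Return the statistically most plausible next word."""
    return next_words[word].most_common(1)[0][0]

word = "the"
for _ in range(5):
    print(word, end=" ")
    word = predict(word)
# Prints "the cat sat on the" - fluent word sequences, never checked
# against reality. ChatGPT's model is enormously more sophisticated,
# but it shares this fundamental design.
```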

Learn more about how ChatGPT works

Is ChatGPT a Search Engine?

No. If you ask ChatGPT for a list of sources, it will provide one, but it is guessing at which words seem like plausible responses to the question posed rather than consulting any bibliography or authoritative list of sources that actually exist. Its knowledge of which words frequently appear near each other often lets it name plausible publications, researchers, or authors for a topic, but you should always double-check any citations ChatGPT provides against an external source before assuming they are accurate. Do not simply ask ChatGPT whether the citations are real - it will not provide a useful response. Sometimes it will even argue with you when you point out that a citation it provided is not real.

If you're not sure if a citation is real, please ask a librarian. Let us know that ChatGPT provided the citation. 

Fake ChatGPT Citations in the News

What is a Generative Search Engine?

While ChatGPT is not a search engine, a new class of "generative search engines" does intend to provide accurate and verifiable responses to natural language questions. Sometimes they succeed. Most of these tools provide both a brief answer to a question and supporting links where users can verify the information in the response.

In general, these tools do a much better job than ChatGPT of linking to the sources they used to answer a question, but you should always click through and read the cited pieces in full. Do not assume that the summary responses provided by the chat tools are accurate. Generative search engines cannot always identify an article that answers the question that was asked, nor can they always accurately summarize the information in their citations. They also have a strong bias toward popular sources: if you specifically request a scholarly source, they may make up a citation or link to a popular website.

Learn more about the accuracy of Generative Search Engines

Explore Generative Search Engines

What is an AI Research Assistant?

AI Research Assistants take natural language questions or "seed papers" and return citations for scholarly papers and books (e.g., Elicit, Research Rabbit). This class of tools is designed for professional researchers seeking peer-reviewed scholarship, so they tend to do better at retrieving genuine citations for scholarly work than either ChatGPT or generative search engines. However, researchers using these tools should be mindful of limitations in the indexes they search: none is as comprehensive as Google Scholar or the Summon Index, nor do they tend to have the quality control of a tool like Web of Science or a disciplinary index. Typically there is very little information available about the algorithms used to rank results. Some attempt to summarize the findings of an article; others simply display the traditional abstract written by humans. Library staff recommend not assuming that AI summaries are accurate.

As of August 2023, the Oberlin Library staff have not yet found a scientific study evaluating how effectively these tools interpret research questions, retrieve results relevant to those questions, or summarize the conclusions of articles. Preliminary tests by library staff suggest that some tools may be a helpful way to kickstart a research process, as long as users are mindful of their potential limitations. A handful of systematic reviews and meta-analyses available in PubMed mention using elicit.org as one of their search strategies.

Learn more about AI research assistants

Explore AI Research Assistants

Does ChatGPT change over time?

Yes! OpenAI updates the model regularly, and changes that make it better at one task may make it worse at another. There is relatively little transparency about which changes are being implemented at any given time or what their implications are, which is why it is important to double-check information ChatGPT provides against another source. Double-checking is important even when you've heard from a reliable evaluator that it is usually good at a particular task.
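For readers who reach ChatGPT's models through the API, one way to reduce surprises from silent updates is to pin a dated model snapshot instead of the continuously updated alias. Here is a sketch using the OpenAI Python library's pre-1.0 interface, current as of this guide's August 2023 timeframe (the snapshot name is an example; snapshots are eventually retired):

```python
# Sketch: pinning a dated model snapshot with the OpenAI Python library
# (pre-1.0 interface, current as of August 2023).
# "gpt-3.5-turbo" is an alias that OpenAI updates silently over time;
# "gpt-3.5-turbo-0613" names one fixed June 2023 snapshot.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",  # pinned snapshot, not the moving alias
    messages=[{"role": "user", "content": "Summarize what an LLM is."}],
)
print(response["choices"][0]["message"]["content"])
```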

Learn more about how ChatGPT's performance at specific tasks changes over time

How can I tell if text was generated by ChatGPT or another AI tool?

This is very hard to do, and you should be wary of people and tools that claim to be able to do it successfully! Asking the tool that generated the text, like ChatGPT, will not work - these tools are simply not trained to identify their own output and do not perform that task accurately, but they can provide seemingly authoritative responses anyway.

As far as external evaluations go, Weber-Wulff et al. tested 14 AI detection tools and found that they struggled to identify AI-generated text if it was slightly rewritten by a human or even by a machine paraphrasing tool. Jakesch, Hancock, and Naaman found that humans struggle to identify AI-generated text even in training scenarios where they are given immediate feedback on incorrect selections and financial incentives for correct identifications.

Learn more about accuracy in AI detection