
Chatting to ChatGPT About My Own Book (or Why You Should Read the Original)

By Dr. Jem Tosh



The recent explosion of AI applications, and the discussion of their potential uses for advancing a wide range of industries, has been so pervasive that in the short time since ChatGPT launched I'm already almost sick of hearing about it. I've been researching the implications of technological development, specifically its relationship with violence and abuse, for over 15 years. I cover this work in our Virtual Violence Research Centre, including in my forthcoming talk, Falling in Love with AI: A Critical Analysis of Advertisements for 'Digital Women'.


I asked ChatGPT to summarise my first book, Perverse Psychology, and here's how that conversation went.

I'm not anti-AI. I appreciate its potential to alleviate stress and to streamline processes, making things more efficient, more productive, and less time-consuming. I see it as a means of freeing up our time for other things, and of making some things more accessible. I also know that technology is not neutral, despite the apparent neutrality of the inanimate objects that make up the devices we use. From military conflict and colonial violence in areas of the globe that hold minerals essential to the production of technological items, like the current genocide in the Democratic Republic of the Congo, to the harmful environmental impacts of huge and ever-increasing data centres, to AI-generated images of sexual abuse - any new technology not only can reflect the structures, oppressions, and values of those making it, but also can be taken up in ways that reflect the context it is released within. So, introduce a technology that was created in a capitalist system, release it into a capitalist culture, and the result is unlikely to be one that frees up people's time to work less and live more; it is more likely to put profit over people.


That doesn't mean that AI can't be used to counter, subvert, or resist that culture or context. AI can offer an opportunity for students who have a large amount of literature to cover in a short amount of time while juggling multiple other roles or responsibilities, such as working several jobs to cover costs during a 'cost-of-living' crisis, or parenting alone while working and studying. It can also assist with accessibility, such as providing an overview of a paper and its key points that a student can read before the full text - helping to provide a framework and to begin fitting new concepts into their current understanding.


...any new technology not only can reflect the structures, oppressions, and values of those making it, but also can be taken up in ways that reflect the context it is released within.

Even these uses have their limits, though. There is a difference between someone who is highly experienced in an area and uses AI to save time, but knows the topic well enough to notice the exclusions and errors (or 'hallucinations' - where the AI generates details that are not true), and someone who is learning about a topic for the first time and is more likely to take those errors as fact, not realising that there are significant absences in the AI-generated summary.


For myself, I wanted to test how accurately ChatGPT could summarise my own work, appreciating that some students, researchers, or other academics may be using it (or similar applications) to learn about my research. So I asked ChatGPT to summarise my first book, Perverse Psychology, and here's how that conversation went.


TLDR: Perverse Psychology by [Jem] Tosh



First I had to address the frustration of having to use my deadname, as that is the name associated with the first edition - something I will be rectifying with the forthcoming second edition. What I found interesting was that while the summary wasn't incorrect, it had missed not only significant parts of the book but also many of its most important parts. So, if a student were to summarise my work like this, or write a summary in an essay or a paper based on this overview, it would miss the entire point of the book. The other very interesting (and, as an author and educator, concerning) part is that for some reason it skipped the first half of the book entirely. So, I took a slightly different approach and asked whether ChatGPT could pick up what the main argument was, because maybe it was just about finding the right wording or question to ask.


...while the summary wasn't incorrect, it had missed not only significant parts of the book but also many of its most important parts.


Unfortunately, while the details it generated next are not incorrect, they are not the main thesis of the book. The summary continues to exclude half of the book's contents, which is probably why it misses the point of the argument, as the book compares these two topics to produce a novel conclusion. So, I persevered. I asked, 'Why haven't you included the discussion of sexual violence in the book?' ChatGPT apologised (it might not be thorough or accurate, but at least it's polite?) and then added a section on sexual violence to the summary:



This addition isn't wrong, but it isn't right either. Half of the book critiques psychiatric diagnoses that refer to sexual violence: I completed a detailed historical discourse analysis and examined the boundaries between what psy discourses frame as 'normal' or 'abnormal' rape - and the problematic ways they do so. While I'm sure I mention survivors, they are definitely not the focus. The analysis is of the construction of rape and perpetrators, and this was completely missed by ChatGPT and replaced by something I did not focus on significantly (I have in other publications; it just wasn't the point of this analysis).


Unfortunately, the more help I gave, the more errors started to appear (a.k.a. 'hallucinations').

Given that ChatGPT was struggling here, I thought I'd be more direct. I gave it a description of what the book was about as a kind of 'helping hand' and asked it to incorporate this into the summary. Here I was wondering: could ChatGPT be helpful in providing summaries of my work to students? Is this something that I could use in my own work and teaching? Unfortunately, the more help I gave, the more errors started to appear (a.k.a. 'hallucinations').



The summary continues to say that the main focus of my book is comparing survivors of sexual violence with gender nonconforming people, and that psychology pathologises and marginalises both. Now that's not a bad point, and I'm sure there are a few sentences in the book that make it, but it is far from the main point of the book, and the huge focus on how rapists are constructed in psychological discourse is still, despite prompting, completely absent. The description foregrounds issues around stigma, discrimination, acceptance, and internalised oppression - none of which are key aspects of the book.


...it sounds to me like psychology is the one that is perverse and in need of an intervention.

The book actually compares psychology and psychiatry's constructions of sexual violence and of gender nonconforming people. It concludes that psychological discourse tends to normalise sexual violence in numerous ways, while pathologising gender nonconforming people. I argue that, 'If psychology considers [gender nonconformity] to be ‘abnormal’ and rape to be ‘normal’, or argues that trans people should be prevented but masculine aggression should be encouraged, then it sounds to me like psychology is the one that is perverse and in need of an intervention' (Tosh, 2014, p. 116). See, that's quite a different argument from the sanitised and oversimplified one ChatGPT gave, isn't it?


At this point, I might ask ChatGPT questions from time to time in my own work, on topics in which I am already well versed, but I'm certainly not going to trust it to be accurate or its information to be complete.


If you choose to use AI to summarise it for you, you might save time but miss the point.

What also concerns me about people potentially accessing my work through AI rather than reading the original is all the detail that is missed. I include so many examples, and so much important history, that students should be aware of when entering a career in psychology, because histories of oppression and harm are essential to preventing further harm in the future and to addressing past abuse. So my advice to those who are interested but strapped for time: rather than asking ChatGPT about my work, try reading one of my blog posts where I summarise the book for you, or read the last chapter to find the conclusion of years of work on this research project. If you choose to use AI to summarise it for you, you might save time but miss the point.






