21 What aspect of the “Blythewood Study” did Emma find most surprising?
A The high success rate of detection tools
B The speed at which deepfakes are created
C The low number of participants who spotted fakes
22 Professor Vance says current detection software struggles mostly with
A poor lighting conditions.
B compressed video files.
C unnatural audio syncing.
23 For her main project, Emma has decided to focus on
A political campaign videos.
B social media influencer clips.
C historical documentary footage.
24 What do both speakers agree is the biggest ethical issue?
A False accusations against real people
B The cost of running detection software
C Loss of trust in mainstream news
25 To improve her methodology, Professor Vance suggests Emma should
A interview more software developers.
B use a larger sample of videos.
C analyze the data manually.
Questions 26 to 30
What detection technique is associated with each of the following researchers?
Choose FIVE answers from the box and write the correct letter, A-G, next to Questions 26-30.
Detection Techniques
A blinking patterns
B background shadows
C blood flow changes in the face
D lip movement consistency
E digital watermarking
F emotional mismatch
G pixel blending errors
Researchers
26 Dr. Aris Thorne
27 The Kinsley Group
28 Professor Vance
29 Julian Crosse
30 The Oakhaven Institute
Keys
21 C
22 B
23 B
24 A
25 B
26 C
27 B
28 D
29 F
30 G
Transcripts
Part 3: You will hear a student named Emma discussing her deepfake detection project with her tutor, Professor Vance.
EMMA: Hi Professor Vance. Thanks for meeting me to discuss my deepfake detection project.
PROFESSOR VANCE: No problem, Emma. I read your initial notes. You mentioned the Blythewood Study first. What did you think of the findings?
EMMA: Well, I already knew these fake videos are made incredibly fast now. But what really shocked me was the human element. The unexpectedly low number of participants who actually spotted the fakes was just scary. Like, only fifteen percent even noticed anything was wrong!
PROFESSOR VANCE: Yes, human perception is lagging behind. And even our computer tools aren’t perfect.
EMMA: Right. I read that detection software is getting good at catching unnatural audio syncing.
PROFESSOR VANCE: It is. That used to be a hurdle, but the algorithms figured it out. The real struggle now is when videos get uploaded to social media and shrink in size. Compressed video files lose so much vital data that the software just gets entirely confused. Poor lighting conditions are tricky too, but compression is definitely the main issue currently facing developers.
EMMA: Got it. Okay, for my project focus, I was originally going to look at political campaign videos.
PROFESSOR VANCE: A very popular choice.
EMMA: Yeah, maybe a bit overdone. So, I completely changed my mind yesterday. I’m going to look at social media influencer clips instead. They use beauty filters, which I suspect might seriously mess with the detection algorithms.
PROFESSOR VANCE: Good pivot. Analyzing historical documentary footage would be too hard to source anyway. Now, what about the ethical side? Loss of trust in mainstream news is a massive topic.
EMMA: It is, but I think the worst part is the personal damage. You know, false accusations against real people just because a fake video looks so convincing.
PROFESSOR VANCE: Absolutely. It’s devastating. Let’s talk methodology. You plan to interview software developers.
EMMA: Yeah, three industry experts. Do I need more?
PROFESSOR VANCE: Three is plenty. But looking at your dataset, ten test videos isn’t enough. You really need to use a significantly larger sample of videos to get solid, reliable results. Don’t analyze the data manually, though; let the program do the heavy lifting.
EMMA: I’ll bump my dataset up to fifty videos.
PROFESSOR VANCE: Perfect. Now, for your literature review, who are you looking at?
EMMA: Let’s start with Dr. Aris Thorne.
PROFESSOR VANCE: Ah, Thorne. Is he looking at blinking patterns?
EMMA: Actually, no. I thought so too at first, but he’s measuring microscopic blood flow changes in the face. Like, he can basically read a heartbeat through the camera.
PROFESSOR VANCE: Fascinating. What about the Kinsley Group?
EMMA: They look for obvious mistakes in the environment. They’re not looking at digital watermarking; specifically, they analyze background shadows. AI generators almost always get the angle of the light source entirely wrong.
PROFESSOR VANCE: They do struggle with physics. You should probably include my past research, right?
EMMA: Of course! I read your famous paper on audio-visual sync last night.
PROFESSOR VANCE: That’s an old one. My recent government grant was actually for analyzing lip movement consistency. The shape of the mouth has to perfectly match the exact vowels being spoken.
EMMA: I’ll update my notes. I also read Julian Crosse’s controversial work.
PROFESSOR VANCE: He thinks looking at psychology is better than looking at pixels.
EMMA: Yeah, he focuses entirely on emotional mismatch. Like, if a person is saying something furious, but their micro-expressions look completely relaxed.
PROFESSOR VANCE: It’s subjective, but interesting. Who is your last source?
EMMA: The Oakhaven Institute. They essentially scan for pixel blending errors right around the outer edges of the face mask. It’s super technical.
PROFESSOR VANCE: Excellent choice. Well, you have a solid foundation, Emma. Let’s meet next Wednesday.