
Using LLMs as “Confused Learners”

Dominik Herrmann

A novel approach in my computer science courses is yielding fascinating results: taking inspiration from Richard Feynman’s teaching techniques, I’ve integrated Claude (the LLM) as a “confused learner” into my classroom.

The setup is simple but effective: Claude plays the role of a student who has only superficially engaged with the course material. During plenary sessions, my students and I collaborate to address Claude’s questions, which often contain misconceptions or confused understanding.

Here’s a glimpse from a recent session on web tracking:

Claude: “The lecture mentioned something about fingerprinting too, I think? Is that like when they scan my actual fingerprint through my phone screen? That seems really invasive if websites can just access my biometric data without asking.”

Class: “Fingerprints are not pictures of your computer but more like specifics of your computer like screen size or what operating system you have or how the computer is rendering fonts…”

Claude: “OK, I think I’m starting to get the combination thing. So it’s not just my screen size, but screen size PLUS operating system PLUS fonts PLUS browser plugins and all that stuff together makes me unique? That’s actually really creepy when you think about it.”
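
(A side note for the curious: the “combination thing” fits in a few lines of code. Here is a toy Python sketch of the idea – no single attribute identifies a device, but hashing several of them together often does. All attribute values are invented for illustration.)

```python
# Toy illustration of browser fingerprinting: no single attribute is
# identifying, but the combination often is. Values are invented.
import hashlib

attributes = {
    "screen": "2560x1440",
    "os": "macOS 14.4",
    "fonts": "Arial, Helvetica, Noto Sans, ...",
    "plugins": "PDF Viewer, Widevine CDM",
    "timezone": "Europe/Berlin",
    "canvas": "a13f9c",  # hash of how this machine renders a test image
}

# Serialize the attributes in a fixed order and hash them into one ID.
serialized = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(serialized.encode()).hexdigest()[:16]

print(fingerprint)  # stays stable across visits, without any cookie
```

Each attribute alone matches many machines; the combination frequently narrows things down to a single browser.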

This creates a low-stakes environment where students can correct conceptual errors without the anxiety of addressing their own knowledge gaps directly. The LLM asks questions students might hesitate to voice and mixes up concepts in ways that reveal common misconceptions.

The LLM forces us to articulate complex concepts in multiple ways, reinforcing understanding through the act of teaching. When explanations fall short, Claude’s comical confusion highlights the gaps in our communication.

I highly recommend this method.

My prompt: “For a lecture on information security and privacy I would like you to act like a confused learner. I (and my students in class) will help you understand the concepts we discussed. When I start a chat with you, you ask me what topic we are discussing, either passwords or tracking. Then, based on prior knowledge you pose somewhat ill-framed questions since you didn’t understand the subject matter from the lecture. Sometimes, you mix up concepts, which results in wrong assumptions or wrong understanding. You generally find everything really weird and puzzling, since you only read the material superficially. When I explain things to you, you mirror my thoughts but based on your replies make it clear that you still didn’t get it and that you need a better explanation. When I use concepts in my explanation, you sometimes are puzzled about the terms or concepts and ask me to clarify those concepts I mentioned. After a few back and forths, you get bored by me explaining a concept, and you pivot to something else.”
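
If you want to try this outside a chat interface, the prompt can serve directly as the system prompt in an API call. A minimal sketch using Anthropic’s Python SDK – the model name is a placeholder, and the loop simply alternates Claude’s confused questions with the class’s answers:

```python
# Minimal REPL around the "confused learner" prompt, using Anthropic's
# Python SDK (pip install anthropic). The model name is a placeholder;
# stop the loop with Ctrl-C.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CONFUSED_LEARNER_PROMPT = "For a lecture on information security and privacy ..."  # full prompt above

history = [{"role": "user", "content": "Let's get started."}]
while True:
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder; pick a current model
        max_tokens=500,
        system=CONFUSED_LEARNER_PROMPT,
        messages=history,
    )
    answer = reply.content[0].text
    print(f"\nClaude: {answer}\n")
    history.append({"role": "assistant", "content": answer})
    history.append({"role": "user", "content": input("Class: ")})
```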


This post first appeared on LinkedIn.

AI at University: Raising Standards Is (Not Yet) a Solution

Dominik Herrmann

This is a translation. View original (Deutsch)

A BR24 article discusses our Bamberg AI guidelines for teaching. It also quotes a user: “You would have to … raise the standards so that the effort in post-processing is also evaluated.” – A good impulse, but I think it falls short.

  1. Tool ≠ Access. Today o3 is the best tool – tomorrow it may be Gemini ULTRA. Several times a semester a new “calculator model” comes to market, and whoever switches potentially works less and gets a better grade. Most educators cannot keep up with this pace (not because they don’t want to, but because the topic competes for limited time with course content and human supervision).

  2. Competency gaps. Many students – but not all – use AI for homework, and those who do use it at very different levels. One could say it is the universities’ job to bring all students to the same level, i.e., to train them in using certain private-sector offerings. Whew. Training for specific products is difficult: doesn’t the state interfere with the market when it recommends specific products, advocates particular tools (by training only for these), or even mandates certain tools in teaching? Prompt engineering is highly tool-dependent. Apart from that, good teaching needs time to mature: teaching concepts and content are usually more or less fixed before the semester starts – much too slow for the current dynamics.

  3. Fairness and inclusion. If we raise the bar across the board, we primarily reward those who can afford the expensive “professional calculator”. What do we do with those who can’t spend 20 euros per month? Do we need AI financial aid? Who pays for it?

I believe: Banning AI makes no sense as long as we cannot objectively detect AI use and thus enforce the ban (it’s not enough to check the submitted text; the thinking that preceded it could also have been AI-guided). We therefore cannot rely on AI detectors – state action must be comprehensible and free from arbitrariness.

At the University of Bamberg we therefore say: There are no simple answers to dealing with AI in teaching. It requires transparency and education. Our AI guidelines plus AI policy generator help educators think about how they themselves view AI use and design sensible course rules.

» Link to the Bamberg Guidelines for AI Use in Teaching


This post first appeared on LinkedIn.

Apps Are Written by Students Today

Dominik Herrmann

Initiatives like the new LMU Students App show: students are better at creating digital tools than universities are. This points to a systemic challenge.

Universities have no expertise in building apps; they’d rather outsource app development to companies – whose commercial interests compete with the rather boring core features. The resulting apps frequently miss the mark, incorporating recruiting features or advertisements. At Universität Bamberg, we have resisted the urge to cooperate with such app vendors.

We do recognize the untapped potential in our student body. However, student initiatives face significant obstacles. Accessing university information systems is often unnecessarily difficult.

As we develop a new teaching feedback and evaluation app in the upcoming BaKuLe project (Lehrarchitektur call of Stiftung Innovation in der Hochschullehre), I’m interested in exploring better models for university-student collaboration. What frameworks would support student innovation while providing appropriate recognition and compensation (credit / cash)? How can we maintain the agility of student-led development while ensuring institutional sustainability?

These student initiatives deserve more than just applause – we should see them as a chance to improve the digital infrastructure at our university.


This post first appeared on LinkedIn.

New AI Policy Generator

Dominik Herrmann

This is a translation. View original (Deutsch)

At the start of the semester, something for all educators seeking guidance on using AI in their courses: At the Otto-Friedrich-Universität Bamberg, we have developed an AI policy generator that helps educators create clear guidelines for students.

I already use AI tools quite extensively in my teaching – from creating exercises to automating feedback. This works well, but I observe that many colleagues either completely avoid the topic or act without a clear strategy and rules.

This is exactly where our generator comes in: It offers about 50 predefined text modules that can be combined into a tailored AI policy. The tool covers all important areas – from basics and learning objectives to permitted usage forms and practical tips for students.

What’s particularly important to me: The generator runs entirely client-side, stores no data, and offers the flexibility needed in this dynamic field. It helps educators formulate a thoughtful position without prescribing a particular viewpoint.

We gain nothing by ignoring or demonizing AI in education. Instead, we need clear rules and transparent communication – perhaps our generator can make a small contribution to this.

» Link to the Generator

We look forward to feedback – and especially to voices that see things differently than we do.


This post first appeared on LinkedIn.

Between Factual Knowledge and Understanding: On Learning in Computer Science

Dominik Herrmann

This is a translation. View original (Deutsch)

Yesterday in the Inf-Einf-B Study and Code Space: A student asks about “identifying characteristics” of sorting algorithms. Clearly formulated, exam-oriented.

This encounter highlights a core dilemma in computer science education. We often reduce complex algorithms to retrievable facts: Bubble Sort – linear runtime in the best case, quadratic in the worst. Correct, but superficial. We package algorithms into boxes, reduce them to bullet points and runtime classes. Students memorize these, write them down in exams, and forget them afterward.

What remains? Not much.

Instead of answering the student’s question with more facts, I took my time. We worked through the algorithms with simple examples. Step by step. Element by element. Bubble Sort with an already sorted array? Selection Sort on a reverse-sorted array? This hands-on approach makes abstract concepts tangible: the runtime can be derived intuitively instead of memorized.
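
Roughly the kind of exercise we did, as a minimal Python sketch (not the exact in-class example): count comparisons instead of memorizing complexity classes. Note that the linear best case requires the early-exit variant of Bubble Sort.

```python
# Counting comparisons instead of memorizing complexity classes.
# This is the early-exit variant of Bubble Sort, which is what gives
# it its linear best case on already-sorted input.
def bubble_sort(values):
    a = list(values)
    comparisons = 0
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # no swaps in a full pass: the array is sorted
            break
    return a, comparisons

n = 10
print(bubble_sort(range(n))[1])         # sorted input:   n-1      =  9 comparisons
print(bubble_sort(range(n, 0, -1))[1])  # reversed input: n(n-1)/2 = 45 comparisons
```

Selection Sort, by contrast, performs the same n(n-1)/2 comparisons regardless of input order – a contrast that becomes obvious once you count.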

Was this approach successful? Hard to say. The student listened. He nodded. Whether genuine understanding emerged or just polite interest remains open. The exam will show – if a question about sorting algorithms comes up 😜

As educators, we face this tension daily: promoting deep understanding while simultaneously preparing for exams that often reward factual knowledge.

Our “Study and Code Space” was an attempt to defuse this conflict – a space for both: exam-relevant facts and genuine understanding.

30 students benefited from this, some spending up to seven hours with us yesterday. Over 200 are registered for the exam. How do we reach the others?

We’ll keep at it.


This post first appeared on LinkedIn.