
When the Story Isn't the Story: Reflections on Uncertainty

Dominik Herrmann

Earlier today, we published Annemarie’s story Twists and Turns: A Non-linear Lecture. The story is an authentic account of my experiment in using Twine to create interactive, non-linear lectures where students make choices that determine the session’s direction, solving cybersecurity problems embedded in a workplace narrative.

Her account captures both the planned elements and an unplanned moment that turned out to be surprisingly engaging – when I genuinely couldn’t solve an SQL injection problem and had to work through the documentation with my students. That vulnerability wasn’t planned. I was just stuck.

While editing the story, I had an ‘aha’ moment about something that had eluded me until then: I had achieved real uncertainty. Not uncertainty about which path students would choose, but uncertainty about what would happen next. The students leaned in because the professor wasn’t just dispensing knowledge – we were all genuinely solving the problem together.

This makes me question what I thought I was optimizing for. Yet I’m somewhat torn. The storytelling elements do serve a purpose beyond engagement. Through stereotypical characters – the cynical security analyst, the clueless manager, and the overconfident developer – I can bring industry dynamics into the classroom in a way that tutorial exercises can’t. Students chuckle at the exaggerated workplace behaviors, but they’re also absorbing insights about professional ethics and organizational dysfunction that I want them to recognize – workplace realities that academic environments often sanitize or overlook.

The most powerful combination seems to be this light-hearted entertainment that draws students in, paired with moments of genuine uncertainty where none of us controls the outcome. The story provides the bait and the vulnerability creates the hook.

The Twine structure enables this: decision points that can lead anywhere, embedded stereotypes that illuminate workplace realities, and problems complex enough that I cannot anticipate every solution path.

So perhaps when preparing a lecture, I don’t necessarily have to choose between storytelling and imparting knowledge, but simply create conditions that enable a conversation on equal terms.

And on the meta level, this episode shows one of the unexpected benefits of documenting my experiments through storytelling on this site: it encourages reflection that wouldn’t happen otherwise.

Using LLMs as “Confused Learners”

Dominik Herrmann

I’ve been trying a novel approach in my computer science courses, and it’s yielding fascinating results: taking inspiration from Richard Feynman’s teaching techniques, I’ve integrated Claude (the LLM) as a “confused learner” into my classroom dynamics.

The setup is simple but effective: Claude plays the role of a student who has only superficially engaged with the course material. During plenary sessions, my students and I collaborate to address Claude’s questions, which often contain misconceptions or confused understanding.

Here’s a glimpse from a recent session on web tracking:

Claude: “The lecture mentioned something about fingerprinting too, I think? Is that like when they scan my actual fingerprint through my phone screen? That seems really invasive if websites can just access my biometric data without asking.”

Class: “Fingerprints are not pictures of your computer but more like specifics of your computer like screen size or what operating system you have or how the computer is rendering fonts…”

Claude: “OK, I think I’m starting to get the combination thing. So it’s not just my screen size, but screen size PLUS operating system PLUS fonts PLUS browser plugins and all that stuff together makes me unique? That’s actually really creepy when you think about it.”
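The idea the class is explaining can be sketched in a few lines: each attribute on its own is common, but hashing the combination yields an identifier that is far more distinctive than any single value. This is only a minimal illustration with made-up attribute names and values, not how real tracking scripts work:

```python
import hashlib

# A handful of attributes a site can read without asking. Values are
# invented for illustration; real fingerprinting scripts collect many more.
attributes = {
    "screen": "1920x1080",
    "os": "Linux x86_64",
    "fonts": "Arial,DejaVu Sans,Noto",
    "timezone": "Europe/Berlin",
    "plugins": "pdf-viewer",
}

def fingerprint(attrs: dict) -> str:
    """Combine all attributes into one short, stable identifier."""
    # Sort keys so the same attributes always produce the same hash.
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

print(fingerprint(attributes))
```

Changing any single attribute (a different screen size, an extra font) changes the fingerprint – which is exactly why the combination, not any individual value, is what makes a browser identifiable.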

This creates a low-stakes environment where students can correct conceptual errors without the anxiety of addressing their own knowledge gaps directly. The LLM asks questions students might hesitate to voice and mixes up concepts in ways that reveal common misconceptions.

The LLM forces us to articulate complex concepts in multiple ways, reinforcing understanding through the act of teaching. When explanations fall short, Claude’s comical confusion highlights the gaps in our communication.

I highly recommend this method.

My prompt: “For a lecture on information security and privacy I would like you to act like a confused learner. I (and my students in class) will help you understand the concepts we discussed. When I start a chat with you, you ask me what topic we are discussing, either passwords or tracking. Then, based on prior knowledge you pose somewhat ill-framed questions since you didn’t understand the subject matter from the lecture. Sometimes, you mix up concepts, which results in wrong assumptions or wrong understanding. You generally find everything really weird and puzzling, since you only read the material superficially. When I explain things to you, you mirror my thoughts but based on your replies make it clear that you still didn’t get it and that you need a better explanation. When I use concepts in my explanation, you sometimes are puzzled about the terms or concepts and ask me to clarify those concepts I mentioned. After a few back and forths, you get bored by me explaining a concept, and you pivot to something else.”


This post first appeared on LinkedIn.

AI at University: Raising Standards Is (Not Yet) a Solution

Dominik Herrmann

This is a translation. View original (Deutsch)

A BR24 article discusses our Bamberg AI guidelines for teaching. It also quotes a user: “You would have to … raise the standards so that the effort in post-processing is also evaluated.” A good impulse – but I think it is short-sighted.

  1. Tool ≠ Access. Today o3 is the best tool – tomorrow maybe Gemini ULTRA. Several times per semester a new “calculator model” comes to market and whoever switches potentially has to work less and gets a better grade. Most educators cannot keep up with this development (not because they don’t want to, but because this topic competes with content-related topics and human supervision for limited time).

  2. Competency gaps. Many, but not all, use AI for homework – and those who use it do so at very different levels. One could say: it’s the universities’ job to bring all students to the same level – i.e., to train them in using certain private sector offerings. Whew. Training for specific products is difficult. Doesn’t the state interfere with the market when it recommends specific products, advocates for the use of specific tools (by only training these) or even mandates certain tools in teaching? Prompt engineering is highly tool-dependent. Apart from that: Good teaching needs time to mature. Teaching concepts and content are often more or less fixed before the semester starts, that’s much too slow for the current dynamics.

  3. Fairness and inclusion. If we raise the bar across the board, we primarily reward those who can afford the expensive “professional tool calculator.” What do we do with those who can’t spend 20 euros per month? Do we need AI financial aid? Who pays for it?

I believe: Banning AI makes no sense as long as we cannot objectively detect AI use and thus enforce the ban (it’s not enough to check the text, the thought process beforehand could also have been AI-guided). We therefore cannot rely on AI detectors – state action must be comprehensible and free from arbitrariness.

At the University of Bamberg we therefore say: There are no simple answers to dealing with AI in teaching. It requires transparency and education. Our AI guidelines, together with the AI policy generator, help educators think through their own stance on AI use and design sensible course rules.

» Link to the Bamberg Guidelines for AI Use in Teaching


This post first appeared on LinkedIn.

Apps Are Written by Students Today

Dominik Herrmann

Initiatives like the new LMU Students App show that students are better at creating digital tools than universities are. This points to a systemic challenge.

Universities have no expertise in building apps; they’d rather outsource app development to companies – whose commercial interests compete with the rather boring core features. The resulting apps frequently miss the mark, incorporating recruiting features or advertisements. At Universität Bamberg, we have resisted the urge to cooperate with such app vendors.

We do recognize the untapped potential in our student body. However, student initiatives face significant obstacles: accessing university information systems is often unnecessarily difficult.

As we develop a new teaching feedback and evaluation app in the upcoming BaKuLe project (Lehrarchitektur call of Stiftung Innovation in der Hochschullehre), I’m interested in exploring better models for university-student collaboration. What frameworks would support student innovation while providing appropriate recognition and compensation (credit / cash)? How can we maintain the agility of student-led development while ensuring institutional sustainability?

These student initiatives deserve more than just applause – we should see them as a chance to improve the digital infrastructure at our university.


This post first appeared on LinkedIn.

New AI Policy Generator

Dominik Herrmann

This is a translation. View original (Deutsch)

At the start of the semester, something for all educators seeking guidance on using AI in their courses: At the Otto-Friedrich-Universität Bamberg, we have developed an AI policy generator that helps educators create clear guidelines for students.

I already use AI tools quite extensively in my teaching – from creating exercises to automating feedback. This works well, but I observe that many colleagues either completely avoid the topic or act without a clear strategy and rules.

This is exactly where our generator comes in: It offers about 50 predefined text modules that can be combined into a tailored AI policy. The tool covers all important areas – from basics and learning objectives to permitted usage forms and practical tips for students.

What’s particularly important to me: The generator runs entirely client-side, stores no data, and offers the flexibility needed in this dynamic field. It supports formulating a thoughtful position without committing to specific viewpoints.

We gain nothing by ignoring or demonizing AI in education. Instead, we need clear rules and transparent communication – perhaps our generator can make a small contribution to this.

» Link to the Generator

We look forward to feedback – and especially to voices that see things differently than we do.


This post first appeared on LinkedIn.