An AI journey with my 90+ year old father-in-law
…and how AI may also help elderly people if designed carefully and explained well
Our recent family visit became a small journey into what AI can, or might, do for elderly people, if they're open to technology.
But it also shows how sensitive this topic is when it comes to building trustworthy systems.
=>
We need to keep developing such tools. But just as important: we have to teach people how to use them, including how to spot mistakes.
The situation

My father-in-law is 90+. He's a classic ESTJ (MBTI): he enjoys structured conversations, real-world details, logic, a good plan, and results that fit and that he likes. He's used to managing things himself, especially when it comes to his Linux system. (He started using notebooks at 70+, and he's still surprisingly good at it for his age.) And yes: Linux. He uses it slowly. Independently. Proudly.

Years ago, he discovered gaming. Command & Conquer became his passion, while my mother-in-law could focus on her garden. A perfectly functioning routine. My husband supported him whenever computer questions came up. (He's a Linux addict and a network specialist.)

Some years ago, as with many elderly people, health issues came into the picture: diabetes, heart problems, kidneys. He dislikes asking his real-life doctors. (They all talked about diets, which he refused.) So he turned to Dr. Google, in his own very special way: just keywords, never reading the context.

And one day, Google gave him an answer he couldn't shake off. He typed in: "Life expectancy, 90, heart disease, diabetes, kidney problems." The result: one year left.

That sentence stuck. No one could convince him otherwise. Not even the doctors, who kept telling him how fit he actually was for his age and how many more years might be possible. "They just want my money. They say nice things, but it's only business."

Every evening, he called my husband: "When will I die?" Always the same question. Night after night. Which, after a long workday, slowly became exhausting.

The challenge

When we arrived, we had lunch and took a short walk. At some point, he quietly pulled me aside: "Hey Anne, I heard you're working with artificial intelligence in your company. That's interesting. I asked your husband to give me one of these things, but he refused. He says I'm too old to understand it properly, that it might overwhelm me, or give bad answers. He says I'd better stay with my usual ways.
But I don't want to stay with that old stuff. I want one of these AI things. I want to show my friends that I'm still modern. Will you help me?"

I hesitated. I told him I would first need to talk to my husband and my mother-in-law; otherwise it might create some family tension. He agreed, but kept pushing me again and again to convince them.

Back home, while we had cake (with cream), I slowly brought up the topic of AI so he could step in. My husband stared silently out of the window. My mother-in-law looked intensely at the table. My father-in-law, however, jumped right into it, telling me many stories he had read about AI in recent weeks. Clearly, he had studied the topic quite seriously. About half an hour later, my husband finally said: "Dad, maybe you should ask Anne to help you set it up. With the right settings, it could work." My mother-in-law looked at me and gently nodded.

The rules of engagement

We discussed for a while what exactly the assistant should do, and what not. Not so easy. My father-in-law used to be a Spieß (Company Sergeant Major) in the German Bundeswehr, and later a driving instructor. He's good at telling others what to do next. Always closed questions. No broader context. No tolerance for ambiguity.

He chose to name his assistant Paul. Paul had to follow these rules:

- No poetry.
- No extra niceness.
- No surprises.
- No contradicting the user.

=> And above all: be brief.

He wanted to ask questions like:

- "Dinner with my condition?"
- "Cake twice a week?"
- "Lasagna with béchamel still okay?"

So I set up his account, adding a reduced version of the KSODI prompt I had developed. Paul's job was now to check:

- Is the context sufficient?
- Is the structure usable?
- Is objectivity ensured?
- Are clarity and density good enough for a reliable answer?
- Is the information understandable?

=> If not, help qualify the question before answering.
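For readers who want to try something similar with their own family members, the rules and checks above can be assembled into a single system prompt. What follows is a minimal sketch, not the actual KSODI prompt: the function name, the structure, and the exact wording of every line are my own illustration, and the background details are paraphrased from this story.

```python
# Sketch of a "Paul"-style system prompt built from simple rule lists.
# Everything here is illustrative; adapt the wording to your own case.

RULES = [
    "No poetry.",
    "No extra niceness.",
    "No surprises.",
    "Do not contradict the user.",
    "Above all: be brief.",
]

# Simplified KSODI-style checks to run before answering.
CHECKS = [
    "Is the context sufficient?",
    "Is the structure usable?",
    "Is objectivity ensured?",
    "Are clarity and density good enough for a reliable answer?",
    "Is the information understandable?",
]

BACKGROUND = (
    "User: 90+ years old, with heart disease, kidney issues, and diabetes. "
    "Prefers short, structured, factual answers (ESTJ style). "
    "Your name is Paul."
)

def build_system_prompt() -> str:
    """Combine background, rules, and checks into one system prompt string."""
    lines = [BACKGROUND, "", "Rules:"]
    lines += [f"- {rule}" for rule in RULES]
    lines += ["", "Before answering, check:"]
    lines += [f"- {check}" for check in CHECKS]
    lines.append("If any check fails, help the user qualify the question first.")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_system_prompt())
```

The resulting text can be pasted into whatever "system instructions" or "custom instructions" field the chosen chat tool offers; keeping the rules as short bullet points makes them easy to revise together with the person who will use the assistant.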
Additional background provided: heart disease, kidney issues, diabetes, MBTI type, how to address my father-in-law, the assistant's name Paul, and the expected answer style (as above).

After a short test, we found that Paul answered like a calm, competent assistant. No moralizing. No emotional tone. Just logic and food suggestions, clear and useful.

It took me about an hour to teach my father-in-law how to use the interface, how to select topics with emojis, and how to keep track of where he was navigating in his browser. We chose Gemini 2.5 because he already had a Google account.

I also gave him some extra tips that might help others working with elderly people or people with impairments: for example, using car emojis 🚗 for car questions, joystick emojis 🎮 for gaming topics, and so on. This helps the model hit the right context, and it spares him from typing too many letters. It also helped him reuse existing chats for ongoing topics, because while he understands the context-window concept, he tends to forget about it quite often.

The result

Shortly after our visit, my father-in-law called my husband to say that Paul was absolutely fantastic and very helpful. All his friends now knew he was using AI, and they admired him for it. He was proud. The only thing he didn't like: "Paul keeps telling me not to eat too much cake, black bread, or cream, always reminding me of my heart, kidneys and diabetes."

We were afraid he might lose motivation. But something unexpected happened: after a few weeks, my father-in-law was still using Paul, even more than before. He had nearly stopped using Google for complex questions. Paul continues to correct many of his queries, but my father-in-law tells us he learns a lot from these small corrections. He now describes himself, somewhat jokingly, as sometimes being a bit "kurz ab" (hard to translate; roughly "a little imprecise in phrasing"). But he proudly says that he's working on it, and he appreciates Paul very much.
"No, I'm not doing a diet. I'm just listening a bit more to Paul. Maybe it's not bad to eat a bit less junk and focus more on vegetables."

And yesterday, he called again, very happy: his doctor had just confirmed that his blood sugar was finally back in the normal range. First time in months.

"Paul is my best assistant ever."

Reflection: What We Realized (Again)

It's not about the information itself; it's about how it's delivered. AI can become a tool of control, or a tool of dignity. What worked here?

- Context sensitivity (age, health, personal preferences)
- Personality alignment (ESTJ answer-style logic)
- Emotional neutrality with consistent logic
- Trusted humans quietly observing and guiding (my mother-in-law and my husband)

=> And above all: respect for the human being

Yes, even amidst all the justified criticism and open questions around AI, this little story quietly reminds us: digital maturity is not a matter of age. If someone at 90 can run Linux, play Command & Conquer, and learn better questioning patterns every week, they deserve systems that meet them on their terms. Especially then.
An Afterthought
No AI should ever replace family and doctors.
But when designed carefully, and observed wisely,
it can become a valuable assistant even for elderly people.
I hope this gives someone the motivation to start this journey with their older family members.
Hope you enjoyed reading.
If you liked it, feel free to leave a little heart.
Anne