Voice of AI: When AI becomes an asset in care
- Ralph Schwehr

- Jan 21
- 5 min read
Anyone working in a care facility, youth welfare office, job center, or integration counseling service today can feel it: this is no longer about "just another tool," but about the transformation of the work routine itself. AI is moving precisely into the areas where language, documentation, and case management shape daily life, which places it squarely at the heart of social work, care, education, and administration.
With today's newsletter, we continue our analyses of "new intelligence," this time from the perspective of municipalities, organizations, and administrations in public services.
1. AI-competent state: Governance instead of tool shopping
The OECD puts it succinctly in its new policy brief: public services cannot simply buy AI off the shelf and "switch it on." They need their own expertise in data, procurement, risk management, and quality assurance; otherwise, accountability, oversight, and impact remain weak.
Specifically, this means:
systematic training of employees (not just IT staff),
clear roles and responsibilities for data and AI managers,
procurement processes that make transparency, auditability, and interoperability mandatory rather than optional,
and robust data and AI governance that makes quality, security, and sovereignty measurable and monitors them continuously.
The debate surrounding AI in public employment services argues similarly: profiling, matching, and service personalization can bring unemployed people into suitable jobs more quickly, but only if fairness, traceability, and data protection are built in from the outset.
A look at Great Britain shows how political this is becoming: the NHS is developing an AI-based early warning system that analyzes routine data to identify patient safety risks early on, particularly in the highly sensitive field of obstetrics. This makes one thing clear: AI is not a "nice to have"; it saves lives. And because such systems are part of critical infrastructure, errors in design, data, or governance have direct consequences for life and health.
And the perspective is also shifting in the area of migration: The World Bank's approach to forecasting refugee movements uses machine learning on over 90 variables to estimate arrivals 4-6 months in advance, enabling municipalities to plan infrastructure (schools, healthcare, water) in good time.
The underlying governance message is: Those who use AI for planning and control also assume responsibility for the assumptions made.
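The forecasting setup described above can be illustrated with a toy example. This is not the World Bank's actual model (which draws on more than 90 open-source variables); it is a minimal sketch on synthetic data, using a plain least-squares fit to show the core idea: arrivals several months ahead are regressed on indicators observed today.

```python
# Minimal sketch of horizon-shifted forecasting on synthetic data.
# NOT the World Bank model: variable names, counts, and the linear
# model are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_months, n_features = 120, 5      # stand-in for the ~90 real variables
horizon = 5                         # predict 5 months ahead (within 4-6)

X = rng.normal(size=(n_months, n_features))       # monthly indicators
true_w = rng.normal(size=n_features)

# Target: arrivals at month t + horizon depend on indicators at month t.
y = X[:-horizon] @ true_w + rng.normal(scale=0.1, size=n_months - horizon)

# Fit on the first 90 months, evaluate on the remaining ones.
split = 90
w_hat, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:n_months - horizon] @ w_hat
mae = float(np.abs(pred - y[split:]).mean())
print(f"mean absolute error on held-out months: {mae:.3f}")
```

The governance point follows directly from this shape: the horizon, the variable selection, and the train/test split are all modeling assumptions, and whoever plans schools or housing on the predictions inherits them.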
2. Practical Factor AI: Social Work, Care & Documentation in Transition
In social work, the debate becomes very concrete. IRISS describes generative AI as a "practical factor": helpful in structuring complex cases, formulating reports, or translating technical content into understandable language, but only if professional curiosity, critical thinking, and ethics are consciously incorporated.
The logic: expertise frames AI (prompts, standards, controls); AI does not replace expertise.
In the care sector, a parallel trend towards assistive systems is emerging. A recent study in Frontiers in Public Health shows how a humanoid companion robot is used in day care for people with cognitive impairments, not as a "toy" but embedded in a care model: structured activities, activation, and reminiscence therapy. The goal is to relieve the burden on caregivers and improve interaction with clients.
The real game changer, however, lies in "ambient AI": systems that automatically transcribe conversations, structure them, and create draft documentation.
Medical studies show:
a significantly reduced documentation burden and cognitive load,
improved perceived quality of care,
and in some cases more time for patient contact.
But beware: Drafts are only as good as their quality control. If errors slip unnoticed into files, silent risks arise, especially in nursing care, youth services, and social psychiatry.
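The "draft plus quality control" principle can be made concrete with a deliberately trivial sketch. Real ambient-AI products rely on speech recognition and language models; the function below (`draft_note`, a hypothetical name, with made-up keyword rules) only illustrates the safety-relevant design choice: every generated note carries an explicit draft status and enters the record only after human review.

```python
# Illustrative sketch only, not a real clinical product: a toy
# rule-based step that turns an already-transcribed conversation
# into a structured note explicitly marked as a draft.
def draft_note(transcript: str) -> dict:
    note = {
        "observations": [],
        "follow_up": [],
        "status": "DRAFT - requires professional review",
    }
    for line in transcript.splitlines():
        text = line.strip()
        if not text:
            continue
        # Crude routing rule, standing in for real structuring logic.
        if "follow up" in text.lower() or "appointment" in text.lower():
            note["follow_up"].append(text)
        else:
            note["observations"].append(text)
    return note

example = draft_note(
    "Client reports trouble sleeping.\n"
    "Agreed to schedule a follow up next week."
)
```

However the structuring is done, the essential property is the one modeled here: the output is a reviewable draft with a visible status, not a finished file entry.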
This is accompanied by ethical guidelines: the Oxford white paper on the responsible use of GenAI in adult care calls for a value-led perspective focused on dignity, autonomy, and the relational quality of care, co-produced by those affected, professionals, and providers.

3. Language, processes, education: AI as a bridge or a new barrier
In the asylum context, the European Parliament's briefing examines the use of AI throughout the process, from language analysis and document verification to risk assessment. While efficiency and consistency are promised, the risks are enormous: inaccurate or biased models can undermine the right to asylum, exacerbate discrimination, and erode procedural safeguards.
In practical terms, this means:
clear no-gos (e.g., no automated "credibility assessment"),
transparency obligations towards affected parties,
and robust legal remedies when AI-based analytics are used.
In the education system, the NEA demonstrates in a very practical way how AI can support multilingual learners: text-to-speech, speech-to-text, translation, and adaptive materials, all tools that facilitate access to content and relieve the burden on teachers. At the same time, the guidance warns against over-reliance on AI and the risk that learners end up "speaking through AI" rather than developing genuine language skills.
The choice between these two poles, efficiency and empowerment on the one hand and new dependencies and inequalities on the other, will determine whether AI actually builds bridges or creates additional obstacles.
Key takeaways in brief
AI is becoming an asset: it is shifting from isolated tools to a central infrastructure for language, documentation, and planning in social work, care, education, and administration.
Expertise before purchasing: without in-house skills in data, procurement, and risk management, administrations and organizations remain dependent and vulnerable.
Professional guardrails are mandatory: social work and nursing show that GenAI works, but only if professional standards guide prompts, use, and control, not the other way around.
Relief requires quality assurance: ambient AI can noticeably reduce documentation pressure, but only with clear review processes, audit trails, and liability rules.
Procedural justice in focus: in asylum procedures, job matching, and early warning systems, the design of AI systems determines fairness, trust, and the rule of law.
Sources (selection)
OECD (19 Jan 2026): Building an AI-ready public workforce (policy brief) https://www.oecd.org/en/publications/building-an-ai-ready-public-workforce_b89244c7-en.html
IRISS / BASW (Jan 2026): Generative AI and social work practice / Guidance https://www.iriss.org.uk/resources/insights/generative-ai-and-social-work-practice and https://basw.co.uk/sites/default/files/2025-03/181372%20Generative%20AI%20and%20Social%20Work%20Practice%20Guidance_0.pdf
Frontiers in Public Health (6 Jan 2026): Humanoid companion robot in caregiving: care model to empower caregivers https://www.frontiersin.org/journals/public-health/articles/10.3389/fpubh.2025.1658136/full
JAMIA / JAMIA Open (2025): Ambient AI documentation: effects on efficiency & documentation burden https://academic.oup.com/jamia/advance-article-abstract/doi/10.1093/jamia/ocaf180/8287711 and https://academic.oup.com/jamiaopen/article/8/1/ooaf013/8029407
EPRS, European Parliament (July 2025): Artificial intelligence in asylum procedures in the EU (briefing) https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2025)771795
EU Commission / PES Network (Feb 2025): Harnessing the Opportunities of Artificial Intelligence in Public Employment Services (report, PDF) https://ec.europa.eu/social/BlobServlet?docId=30060&langId=en
World Bank (3 June 2025): AI-powered refugee forecasting / Forecasting refugee flows using open-source data and ML https://blogs.worldbank.org/en/peoplemove/forecasting-refugee-flows-using-open-source-data-and-machine-learning
NEA (20 June 2025): AI for Multilingual Learners (guidance) https://www.nea.org/professional-excellence/student-engagement/tools-tips/ai-multilingual-learners
University of Oxford, Institute for Ethics in AI (April 2025): Responsible use of Generative AI in adult social care: a value-led approach (white paper) https://www.oxford-aiethics.ox.ac.uk/sites/default/files/2025-04/AI-in-Social-Care-White-Paper-April-2025-Institute-for-Ethics-in-AI.pdf
The Guardian (30 June 2025): NHS will use AI in warning system to catch potential safety scandals early https://www.theguardian.com/society/2025/jun/30/nhs-will-use-ai-in-warning-system-to-catch-potential-safety-scandals-early
Conclusion
AI is becoming an indispensable asset for language, documentation, risk management, and case management. The winners are not the loudest tool providers, but rather the organizations that establish standards, competencies, and responsible processes early on, thereby building trust among employees, those affected, and policymakers.
Use our AI Readiness Check for an assessment, or contact us for further information at info@oakai.de.
@Michael v. H.: Thanks for the suggestion!


