As workplaces everywhere continue to grapple with the impact of artificial intelligence (AI) and where it fits into the workforce, social work is no exception.
And while the US has seen recent regulatory movement concerning AI’s use in the mental health space—Illinois recently passed legislation prohibiting AI therapy in the state—the profession is still largely finding its footing and comfort level with the technology.
Recent research from Fordham Graduate School of Social Service Professor Lauri Goldkind, Ph.D., explored how clinical social workers are using—or avoiding—the technology, and the reasoning behind those decisions.
In an article titled “Clinical Social Workers’ Perceptions of Large Language Models in Practice: Resistance to Automation and Prospects for Integration,” Goldkind and her co-authors interviewed 21 clinical social workers and explored how they experience their work in the context of growing LLM use.
The answer: clinicians have found both benefits and drawbacks to AI in their practice. The study's two overarching themes were:
- factors that enhanced social workers’ perceived usefulness of LLMs in clinical practice, including support for administrative tasks and client engagement; and
- factors that diminished perceived usefulness, such as concerns about confidentiality, loss of nuance, and limitations in conveying empathy and contextual understanding.
The study used the Technology Acceptance Model (TAM) as a framework to explore the perceived usefulness of LLMs, the ease with which they can be adopted, and the types of support practitioners need to engage with them effectively.
“While many of the perceived benefits of LLMs align with traditional TAM constructs such as job relevance, ease of use, and output quality, social work practitioners also filtered these evaluations through the lens of professional ethics and relational values,” the study reads.
Goldkind et al. found that the efficiency these tools bring isn’t the be-all and end-all for the interviewed social workers; empathy, client autonomy, cultural responsiveness, and ethical accountability also shape how professionals engage with LLMs.
“Rather than simply conforming to what colleagues might endorse, practitioners appeared to rely on deeper internalized standards rooted in the ethics and culture of the profession,” the article reads. “Even when LLMs demonstrated high output quality or increased efficiency, practitioners questioned their appropriateness if the tools appeared to compromise relational depth or client safety.”
But social workers aren’t rejecting LLMs outright. Among the positive use cases identified were documentation assistance, brainstorming, and supervision—situations where human judgment retained authority over the important decisions.
“These patterns suggest a preference for augmentation over automation, a vision of AI as a tool that strengthens professional insight rather than substitutes for it,” the study reads.
Goldkind et al. also addressed the Illinois legislation, noting that while some might read it as an outright ban on LLM use in the mental health space, it is better understood as an effort to keep clinicians in control.
“While the legislation may be interpreted by some clinicians as a blanket restriction on AI, it is more accurately understood as an effort to promote ethical LLM integration in mental health services, regulating standalone LLM-based therapy apps while preserving space for the supervised use of LLMs by licensed professionals. This overall approach closely mirrors participants’ concerns about the limitations of AI in clinical relationships and their conditional openness to its use in administrative and idea-generating capacities that assist, rather than replace, human judgment,” the study reads.
Read the entire article in the Journal of Evidence-Based Social Work.