Feeling left behind in this dizzying age of artificial intelligence (AI)? You’re not the only one, says Fordham GSS Professor Lauri Goldkind, Ph.D.
Goldkind and colleagues Clara Berridge, Ph.D., and John Bricout, Ph.D., recently wrote a piece for the spring 2024 edition of Social Work Advocates, the National Association of Social Workers’ digital magazine, noting that many other professionals feel the same way.
“While the infusion of AI tools in social work practice is spreading, many people say they know little about AI overall,” the article reads. “In fact, in a poll of more than 300 attendees of our recent NASW webinar on AI—most of whom had an MSW—72% rated their knowledge of AI as ‘low’ compared with only 1% who selected ‘high.’”
Goldkind et al. argue that while the tech sector may be “driving the train” on the AI movement, social workers cannot be excluded from the process. Social workers may not have the digital savvy that produces venture capital-backed applications (though they can!), but their skill set can help ensure this technology doesn’t further marginalize the people and communities society too often leaves behind.
“It is not ethical to move fast and break things, because unregulated, market-driven AI first harms those most marginalized,” the article reads. “This should drive home the critical need for social workers to be engaged. AI has been deployed in public and private sectors in ways that foreclose housing opportunities, bar individuals from employment, and restrict access to health care services—and it incentivizes physical surveillance. But we are more likely to hear from AI enthusiasts about unrealized promises and potential of ‘tech for social good’ than about these present-day algorithmic harms.”
The authors outline three questions to help social workers guide their ongoing engagement with AI:
1. How were end users and impacted people involved in design and development? How were their experiences incorporated? (We know it is problematic to develop a social work intervention without engaging the professionals who implement it and the people it is designed to help, so we must also recognize that excluding the people impacted by AI tools from the outset is likely to cause harm.)
2. What’s being automated, why, and who benefits from this automation? How much money is invested in it? What are the potential opportunity costs? How well does it work for the specific case we’re using it for? How are success and accuracy measured? What are the evaluation metrics? Where does accountability lie when something goes wrong?
3. We haven’t delved into large language models (LLMs) here, but we suggest reading or listening to experts like Emily Bender and Alex Hanna (co-hosted podcast: dair-institute.org/maiht3k) and asking questions that include:
What texts was the bot trained on, what or who is excluded, and which cultural assumptions feed its analysis? Regardless of whether it’s an LLM tool, it’s critical to understand what data an AI tool was trained on.
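As one concrete illustration of that last question, here is a minimal sketch (not from the article) of how someone might check what training data a publicly documented model declares, assuming the tool is built on a model hosted on the Hugging Face Hub; the model name used below is purely illustrative:

```python
# Hypothetical sketch: look up a public model's card and see what training
# data it documents. Assumes the huggingface_hub package is installed and
# the model in question is openly documented on the Hub.
from huggingface_hub import ModelCard

card = ModelCard.load("bert-base-uncased")  # illustrative model name only

# Structured card metadata sometimes names the training datasets directly.
declared = card.data.to_dict().get("datasets")
print("Declared training datasets:", declared)

# The free-text portion of the card often has a "Training data" section
# worth reading in full; this just flags lines that mention it.
for line in card.text.splitlines():
    if "training data" in line.lower():
        print("Card mentions:", line.strip())
```

Many commercial AI tools publish no such documentation at all, which is itself an answer worth noting when weighing the questions above.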
Artificial intelligence’s impacts are inevitable. But rather than becoming demoralized or apathetic about social work’s future influence on AI (and vice versa), Goldkind and colleagues argue, the profession should take a proactive approach.
“…there’s a lot of energy and creative work out there to flip the power script either through acts of refusal or by using AI in the service of social or economic justice…We challenge you to consider how each of us can in our own spheres of practice contribute to naming and shifting this concentration of power.”