We are at the start of a fundamental shift in how humans interact with the digital aspects of our lives. Driving this shift are Large Language Models (LLMs) and, more recently, the Large Action Model (LAM), a term put forward by the startup Rabbit with the launch of its r1 device. These AI tools are bringing social agents deeper into our lives.

LLMs and LAMs are transformative AI tools that are already having an impact on our world, though not yet to the degree the hype suggests. LLMs, like ChatGPT, Claude and Grok, are quite well known. They are largely contextual tools rather than social agents, used for generating content, from images and videos to book-length writing. LAMs are fairly new and are all about taking action, performing the tasks we ask of them.

Combined, the two usher in a new way of engaging with the digital sphere and digital technologies. LAMs now make it possible to create social AI agents that can take actions on our behalf.

This raises the question of how humans will accept, adopt and adapt to engaging with these social agents. How much agency will we give them, and how much will we anthropomorphise them? And can we socialise these agents, and do we even want to?

The term “thinking machines” is bandied about a fair bit, but so far machines can’t really “think” in the way that humans do. This ties in with social agents: essentially personalised AI tools that help humans do things, from ordering dinner and a taxi to completing complex analyses of spreadsheets and documents at work.

The closest we have come to a fully functional social AI agent is the chatbot, and humans have shown an increasing tendency to become emotionally attached to these tools. Research has shown that chatbots can have significant psychological impacts on their users. Today, many thousands of chatbots are available, and services like Replika, Hugging Face, Kuki and others let people create their own quite easily and quickly. They remain fairly narrow in scope, not very high functioning, and certainly nowhere near “thinking.”

In the near future, we may have an AI agent that we use at work, assigned to us by the company. We may have another that is our personal AI agent, one we can name and train to take actions on our behalf. It is a...