Years ago, I was an avid Twitter user. I didn’t post much, but it was the way I preferred to find out what was going on in the world. At some point, I started to notice what it was doing to me. Nearly every time I closed the app, I was angrier, or sadder, or more worried than I was before opening it. I eventually decided to delete my account and the app. I was slightly less informed, but it was worth it. That move spared me from much of the Twitter drama of recent years. I’m thankful for the distance from which I’ve watched it.
Then last summer, Threads was launched. It was supposed to be a less toxic alternative to Twitter, and I think it was—for a while. Many of the features that fueled Twitter’s toxicity were absent. No hashtags (at first), no trending topics, no bots (again, at first). And for those who joined directly from Twitter, it was a breath of fresh air. So, I hopped on Threads. I immediately remembered how comfortable I am on a text-focused platform. It’s been nice.
Recently, Threads has demonstrated its own flavor of toxicity. Political camps are forming. Gates are being kept. Comment sections are getting spammed. I’m disappointed to see this happening, but I can’t say that I’m surprised. No one should be surprised.
Here’s the thing: Social media platforms are a dynamic kind of technology, and the impact of a technology has more to do with the people who use it than with the technology itself.
The Moral Potential of Technology
In saying this, I’m taking a position in a philosophical debate about the moral nature of technology.[1] Broadly speaking, the debate is over whether technologies are morally neutral. Are there good and bad technologies? Or are there just technologies that are used in good or bad ways? Philosophers have good arguments for multiple positions.
Most people are interested in this debate, even if they don’t know it. Whenever people talk about gun control, or internet regulation, or even driving speed limits, they are really talking about the moral implications of technology. Philosophers of technology debate whether those implications are properties of the technologies themselves, or whether they are simply the results of cultural context and the ways technologies are used.
I think it’s helpful to consider the nature of technology, or what technology actually is. Technology is essentially the enhancement of our natural capacities. It may be more than that, but I can’t think of an example where it’s less than that. In other words, technology magnifies what we are able to do on our own.
That said, it seems that technologies do not have moral agency. That is, they don’t make moral decisions.[2] However, technologies do impact the decisions we make, often to a great degree. Just think about transportation and communication. We make daily decisions that are only possible because of the available technologies. It is important for us to recognize that many of those decisions have moral implications.
So, while technologies do not have moral agency, they do have moral consequences. They magnify our ability to do things—for better or worse. My argument is that the more a given technology magnifies our natural capacities to do good or bad, the greater that technology’s moral potential is.
An Analogy
Let’s say I were in a weight room with plates ranging from 45 lbs. down to 2.5 lbs. Holding a 45 lb. plate over my head is a much more dangerous action than holding a 2.5 lb. plate over my head. Why? In terms of physics, the potential energy of the 45 lb. plate held at that height is much greater than that of the 2.5 lb. plate. In terms of blunt force trauma, 45 lbs. will cause quite a bit more damage than 2.5 lbs.
In a similar way, the more a given technology enables us to do, the more moral potential that technology seems to have. For example, no parent is concerned about how old their child should be before using a calculator. However, one of the biggest decisions parents make these days is when to allow their children access to a smartphone. The reason is clear: you can do a lot more with a smartphone than with a calculator.
I’ll take the analogy a step further. The kind of person using the technology makes a significant difference. My 12-year-old son lifting 45 lbs. over his head is much more dangerous than me lifting the same weight. As a person’s strength increases, the potential danger decreases. In a similar way, as my son matures and demonstrates responsibility and self-control, I’m willing to give him access to more dynamic technologies.
I’ll take the analogy another step further. Imagine increasing the weight until it is too dangerous for anyone to lift. Similarly, we have technologies that give people the ability to do things that no one should be doing. Internet technologies in particular enable us to do everything from expressing hatred to exploiting people.
The Problems We Face and the Formation We Need
The problems we see on social media (and with any new technology) are not new. Contempt, worry, arrogance, envy, exploitation—they’re all old problems that are intensified by the convenience and reach of technology. Personal problems are now published for the world to see. Distant problems are now brought constantly to our attention. The conversation about what to do about social media has been active for a while. Yet we are still disappointed when a new platform doesn’t keep people from being mean. I understand that certain features lead to certain problems. If nixing those features might help, then by all means get rid of them. But we had those problems long before we had the technology.
The incredible moral potential of social media demands that we become the kind of people who can handle such technology. There are certainly things we are allowing these platforms to do for us and to us that never should have been allowed. But for the most part, the impact of social media has more to do with us than with the platforms themselves. We don’t need better features; we need better formation.
Think how much better off we would be if we worked to be more compassionate and to think more critically. But compassion and discernment are not developed online. They are developed in the spiritual formation of our character. We must cultivate these and other virtues within ourselves to truly counteract the negative potential of social media—and of any technology, for that matter.
[1] Most philosophers use the terms ethical and moral interchangeably. Personally, I don’t like that. But the distinction I’d like to make is beyond the scope of this article.
[2] This is also a philosophical debate that has become more complicated with the recent popularity of artificial intelligence.