Conversation with Caroline Sinders

Caroline Sinders is a machine-learning-design researcher and artist. For the past few years, she has been examining the intersections of technology’s impact on society, interface design, artificial intelligence, abuse, and politics in digital, conversational spaces. Sinders is the founder of Convocation Design + Research, an agency focusing on the intersections of machine learning, user research, designing for public good, and solving difficult communication problems. As a designer and researcher, she has worked with Amnesty International, Intel, IBM Watson, the Wikimedia Foundation, and others.

Caroline Sinders spoke with Ashley Hopkinson on May 22, 2024. Click here to read the full conversation with insights highlighted.



Ashley Hopkinson: Hi Caroline, can you introduce yourself?

Caroline Sinders: I’m Caroline Sinders. I’m an artist and human-rights researcher. I’m based between the US and the UK. I look at how technology impacts marginalized groups through the lens of consumer safety and the design of those systems on a local and global scale.

Ashley Hopkinson: Describe the journey that brought you to that intersection of human rights and design. What led you to that? 

Caroline Sinders: I had an almost wobbly journey to human rights. I don’t see it as wobbly, but I think when I describe it for folks they are like, that’s an interesting jump. I was originally trained as a photographer, and so the thing I always like to ask people is, do you believe that photography involves research? Often people say yes. Well, do you believe that photojournalism is a part of journalism? Yes. Do you believe that journalism is research? Yes. Do you believe that research is related to human rights? Yes. Then you can see how I made my way to human rights. I was trained in photography and photojournalism. That also manifests in various ways of thinking through visual and cultural anthropology. There are aspects of ethnography that get folded into photography studies and photojournalism.

I’ve always been interested in technology, even as a photographer. I went to photo school in the mid-aughts, and I was really interested in how platforms are designed–designing for storytellers and with storytellers. How do you design platforms and spaces for photographers to tell stories online or digitally? How do you think through using technology to tell in-depth research stories and journalistic stories? How is photojournalism a part of that? And then how do you design ways to share that story effectively that are both compelling and true to the research and true to the event? I think technology is a deep part of that. 

That’s what brought me to thinking about how design and technology have impacts on research and on journalism. The more I got into that, the more I had to learn a variety of other skills like UX design, and I started looking more and more critically at systems and platforms. I got very into social media, and how stories were breaking on social media, and how social media was a form of the commons and that was becoming an integral part of activism and human rights and documentation. And then also, what are the politics of the platforms that are sharing those stories? Now that doesn’t seem very controversial, but imagine it’s 2010 or 2008 or 2012. That was when I was an undergrad and in my master’s program.

For me, it was very clear that all these things were intersecting. I started thinking more and more about how design and human rights are related [when] I was in grad school using video-game engines to tell nonfiction stories, because I saw that interaction as an interesting space where maybe photojournalism or photography or other forms of documentary could go. That also meant that at that time I ended up studying aspects of Gamergate. Gamergate, for those that don’t know what it is, was a massive online harassment campaign that targeted marginalized groups, women, and people of color in video games and in video-game communities. There are a lot of other things related to Gamergate, but that’s the best way to summarize it. A lot of the harassment was occurring on platforms and social media, and it became really clear that UX had a role to play in safety, and it wasn’t mitigating the harm people were facing. That was where I could see very directly how design was having an impact both on policy and on people’s safety.

When I talk about the design of platforms, I think there are three main levels. There’s the policy design, so, what are the rules that shape the platform, and how does that manifest for users inside the platform? In some cases that might be like, oh, I can’t download this image from this platform; I have to take a screenshot. Or, I can’t upload this thing. Then there’s the technical infrastructure of what you can do, like, can you attach a GIF to the post? That is also a form of design. Then there is the user experience design, or what you see visually. All of those things are intersecting to create the system, so design has these very direct impacts on consumers and consumer safety. And how you see the policy or don’t see the policy is also a design element. This is where design in particular impacts things like human rights and consumer safety.

Ashley Hopkinson: Can you tell me a little bit about how your approach to human-rights work with the design lab is distinctive? What sets it apart from other things that are happening within the field?

Caroline Sinders: We take a design-research approach to our human-rights work. If the role of design is to look at a problem and use your design toolkit to unpack that problem, we actually work in the opposite way: we spend a lot of time gaining domain expertise to understand all points of the problem, including how communities are very directly impacted over time by the issue we’re looking at. Then we start to look at the faults of the software and hardware, and how that also relates to design.

When you dig into it with this background of understanding the harms that the community is facing and the policy landscape around those issues, then it’s much easier to surface the design harms. But it also means that when we’re providing analysis or feedback or recommendations for change, we’re able to take a multipronged approach: [we’re] able to talk explicitly about the harms people are facing, how the design contributes to that, how the policy contributes to that, and then taking the extra level of ensuring that whatever we’re recommending is technically feasible and possible, because we work closely with all different kinds of technologists who are leaders in their field. A big thing for us is ensuring that whatever recommendations we’re making can be implemented, and we’re thinking on a short-term and long-term basis.

I think what really sets us apart is that amount of care. We’re not trying to go into a space and recommend things that are impossible. We’re also trying to understand what it means when a platform says something can’t happen. Does that mean that it’s technically not feasible because of how they’ve built something, or it’s not possible because of a decision that an executive is making? 

If it’s not possible because of a decision that an executive is making, we can more easily challenge that, but also it makes it possible for us to then think about the policy implications of that being impossible. Looking at law and regulation of the area we’re working in is very important.

It’s rare that we will go into a geographical region that we have no ties to. We’re either working closely with local groups and local communities that have invited us in to collaborate or it’s a space in which we have either community or familial ties, meaning someone from our team is actually living there. We haven’t done a project without a local organization that has connected us. They have to be involved. We can’t just parachute in. We need a lot of local connections for a variety of reasons, [including] user trust and community trust and the right perspective when we’re engaging with a problem.

Ashley Hopkinson: Can you describe your process of reaching out to collaborate with another organization? Is there an illustration you can give me? 

Caroline Sinders: For example, we’re working on a project in Canada. Through the internet I met a local organization there called UKAI. We’re big fans of them and they’re big fans of me. We’ve spent three or four years chatting. I discovered their work through a friend who’s based in the UK who’s working with them. They’re an arts organization that works a lot in Toronto but also across different regions in Canada around AI and the arts. They’ve been fans of some of the work that I’ve done, and they’ve shared some art projects and articles of mine.

They do a lot of work that’s driven by their community and their community’s needs. If they’re working with artists and artists are like, I really want to learn about X, or, I’m not totally understanding Y, they bring that into their programming, and then apply for, let’s say, money from the Canadian government to support programming. In this particular case, they approached me [because they’d been] hearing from community members who want to learn more about data and creative ways to engage with it. I have an art project called Feminist Data Set, which is a way to use intersectionality, as defined by Professor Kimberlé Crenshaw, to think about how data impacts AI. They were like, “We think your project would be really resonant. Would you want to help us write this toolkit and do some workshops?”

This made sense to me. I’m not driving the research agenda, they are, and the research agenda is driven by a series of surveys and community meetings they had over a year to help shape this new programming. I can see that there’s something you need help on that has either a Caroline- or Convocation-specific flavor, it’s already been requested by your community, and I can see the logic of how I am slotting into this. But the organization there is really leading the project. An opposite way would be, let’s say, a funder approaching me to do that toolkit in Toronto, and then I bring in UKAI and they’re doing a small bit—like if we flipped it. We wouldn’t do that. We effectively have to be invited in.

There are some cases where we get invited to do stuff and we’re like, that’s not totally our expertise. In those cases, we will ask the organization, why us? Why do you want to work with us, or how do you see us fitting in? We’re going to follow your lead. We might still turn that project down. A lot of it is trying to understand, are we a good fit? For example, we don’t do a lot of blockchain or decentralization work. But because we do a lot of work with folks facing online harassment—and that ranges from state-level threats to intimate partner violence all the way to the harassment you can face just by existing on the internet—I’m joining an advisory group for a decentralized banking research project because the person is a friend, and they were like, with your intersectional harms background, I really want you to be advising or helping us think through how this would impact someone facing, let’s say, online harassment or intimate domestic partner violence. That’s such a small role anyway, and it feels organic to what we’re doing. That’s the kind of feedback [we would give for] an expert interview or a stakeholder interview anyway. Whereas after the project, if that person then was like, do you want to write a grant that would focus on this? Then I would say, maybe–who would be the other partners? We’d have to ensure that that made sense. When it’s a new domain or topic, we have to make sure that we are the right group.

Ashley Hopkinson: There is this idea that the internet is this free, open space, so whatever happens, happens. As we’re coming into an awareness of some of the harm that technology can cause, can you walk me through what you mean by systems harm? How have you seen it show up? How does your human rights and design lab work to create some protections, create support, create tools?

Caroline Sinders: There are quite a few examples. I’ll start with some that might be easier to grok. In a US context, we don’t have a lot of regulation around privacy or data protection, but there is policy especially on platforms related to online harassment and tech-facilitated violence and online gender-based violence, and we see that actually manifested fairly consistently up until recently across major social media platforms. I say up until recently because of Twitter being purchased by Elon Musk, but for the most part the policy that was on Facebook, that was on Twitter, that was on Instagram, that was on YouTube was actually fairly consistent and pretty good if it was enacted at scale. The issue is, how do you enact that stuff at scale inside of a company? That’s part of the design challenge, if you will, in heavy quotes.

One of the things that we research and advocate for is pushing companies to staff up to be able to respond to harm, having more people on trust and safety teams, having more content moderators, ensuring that content moderators are paid well and that they’re trained and that they receive support and care, because they’re looking at some of the most traumatic information on the internet. We need to push for those things, but those are long-term things a platform might not engage in, so we also need to push for short-term solutions. That’s where design mitigations come into play. That is, here’s what different privacy settings could look like [based on what we’ve learned from interviews with] folks who are facing all different kinds of online gender-based violence or digital violence and threats related to their identity or the kind of work that they do, if they’re a journalist, an activist, a human rights defender, et cetera.

By being able to show—here’s the threat, this is how they received it, this is what happened, and then this is what they couldn’t do—we can then do qualitative and quantitative research to think about design changes. We try to get as specific as possible, and then we try to test those changes. We will test them with the folks that we’re interviewing, or we’ll convene workshops with impacted groups, and that will end up in a report. We do occasionally get pushback from policy folks [who say], this puts a lot of onus on the individual. I will be the first to say it does—that’s why it’s a short-term solution—but our clients, who have sometimes had to flee their houses, need immediate solutions. We can’t wait a month, six months, a year, two years, five years for platforms to make these changes. People need help now. That’s why we think of them as short-term and long-term solutions.

Another big thing we advocate for is user agency. Even if we have these longer-term solutions in place already, with quicker turnaround, I still believe that people need all different kinds of privacy settings, of support, of mitigation tools, because they just deserve to have that kind of agency on the internet. When people are like, “Well, the internet’s open,” or, “It’s not so bad,” the thing I always say to that is, “Maybe it’s not so bad for you.” The internet is not actually a fun place to be, let’s say, a Black woman. Black women face more harassment than any other group. If you are a Black trans woman, you’re facing even more harassment. As you add different aspects to your identity, you’ll face more and more harassment. If you’re a cis man, you face the least amount of harassment. If you’re a cis woman, you will face harassment. If you’re a white cis woman, you’re going to face more harassment than a white cis man. If you’re a white trans person, you’re facing more harassment than the white cis woman.

I think that that’s incredibly important to outline: there isn’t an equal way we experience the internet. And then depending upon what your job is—you might be, let’s say, a white cis man but you might be a political reporter reporting on a very contentious subject that makes you a target not only of regular trolling and harassment but also [harassment from] a state authority, like a foreign government. Those are real things on the internet. I hear people say, oh, I don’t know if it’s real or not. It is real. There are so many different kinds of harm. If you’re only thinking of digital harm as someone saying you don’t look pretty today, that’s such a narrow view to understand harm, when in fact we work with a lot of people that end up with spyware on their phones.

We also work with a lot of folks who have been the victims of targeted, planned harassment campaigns. We work with people who have had their home addresses doxxed and who have received very specific violent threats. When a threat is very specific about where you live or what’s going to happen to you with a date and time, that’s usually enough for law enforcement to take it seriously. But all kinds of harassment are bad. It impacts your mental health if you receive a thousand messages that say, “You look awful today.” People say you should just get offline, [but] even if you get offline and eventually come back, there’s no way to delete your mentions. That’s another thing we’ve pushed for, that people can just clear their mentions, because you have to wait for new content to come in to clear out that stuff. You can’t go to inbox zero on your Twitter mentions, you can’t select delete all, so seeing that is really harmful. There’s no way for you to filter that out.

But imagine if you received a thousand death threats or a thousand rape threats. That also is awful. I should’ve given a content warning, but for a lot of folks the threat of rape is very real. It’s not just words. It’s like, that is something that I could encounter at any moment. To have that amount of digital violence thrown at you, even if it’s just through the written word, it’s still violence, it’s still things you have to read. I think this is where it is important for people to understand that the online space is as real as the offline space. If we can fall in love online, which we can, if we can make friendships online, which we do, then it is a real space of interaction. 

Luckily, for the most part, we haven’t had to deal with pushback to that in the past few years in the work we do. As older generations age out of thought-leadership positions, Millennials onward do understand that the internet is as real as the physical space in terms of community building, in terms of threats of violence, but also just in terms of building friendships. It is as real and legitimate as offline spaces.

Another way we are thinking about design is [looking at what are] called harmful design patterns. Other people refer to them as deceptive design patterns or dark patterns. Those are design patterns that unintentionally or intentionally manipulate, confuse, or nudge users into making decisions they normally wouldn’t make. If you’ve ever tried to unsubscribe from something and found out you were still subscribed, you’ve encountered a harmful design pattern. It’s sometimes the easiest example to show how design does harm. You encounter them all the time.

The FTC in the U.S. is doing a lot of really great work on trying to curb the harm of harmful design patterns, particularly around subscriptions, to ensure that consumers have an equal playing field. There are a lot of really pertinent reasons for this. For big capitalist mindsets, this is to ensure a fair market; also, you don’t want to end up with an internet where we have so many harmful design patterns that you can’t easily navigate the web. It’s worth reading how the FTC is utilizing U.S. law and policy to talk about how harmful design patterns are subverting that. Some of that is in treating subscriptions like a contract in which they are misconstruing what it is you’re opting out of. That has legal ramifications, and it has monetary ramifications for consumers.

While we have certain privacy laws in the U.S., harmful design patterns in the UK and EU do subvert privacy regulation. We’re seeing court cases and fines related to that. When people are like, “Oh, well, you should just be more aware,” I’m always like, well, no, it’s also just illegal. I don’t need to be more aware. We have laws and standards, and this is not following that. That is sometimes the argument I make to Americans: that there actually are laws around how things are marketed to you, how information is presented to you, how that impacts you as a consumer, and where your money is going. You do actually have those kinds of consumer protections, and they have to be enacted inside of technology. And design is a layer of that.

Ashley Hopkinson: Can you give me an example of something from the work you’ve done—a tool you produced, or a practice, or a solution that came through a collaboration—that gives you a glimmer of hope?

Caroline Sinders: There have been quite a few things. One thing is that I’ve been involved for quite a few years in trying to make Slack safer. I worked on a really large campaign with the Mozilla Foundation and Fight for the Future on this in particular. Slack has often said that they are not a social network, they are a workplace tool, so they will not implement things like blocking, which now luckily almost all platforms have. I think of blocking as a necessary part of internet infrastructure. I think of it as the seat belts of online safety. You need blocking and you need muting.

We’ve been pushing Slack to make themselves safer and to make their DMs safer since 2019 or 2018, but the campaign with Mozilla and Fight for the Future started in 2023. A few months later, on July 5th, 2023, Slack announced that it would be introducing a feature to hide messages from other members. While it’s not a block button, it is definitely a win. Harassment can happen in the workplace, but Slack is also used by so many non-workplace communities. It at least allows a sort of additional step of safety. So for me, that was a big win.

I also do work on harmful design patterns court cases, and those individual cases make me really hopeful. Over the past few years over here in Europe with the Digital Services Act, or the DSA as it’s called, and the DMA, the Digital Markets Act, we’ve seen the term dark patterns explicitly mentioned in regulation. In 2021 Senator Ron Wyden introduced the DETOUR Act, which also mentioned dark patterns. It wasn’t passed, but that does give me hope that it could be. The California Privacy Act does mention dark patterns by name and also mentions trying to better regulate data brokers, which is a whole other story but something else we focus on. Data brokers are effectively why your information is so readily available on the internet, why it’s easy for a bad actor to find your previous addresses. The fact that, at least in California, we’re starting to see that clamped down on, we’re starting to see movement in the U.S. around the harms of data brokers, and we have been seeing that over here in Europe—all those things make me really hopeful.

The thing I always want to emphasize to the general public, to everyday folks, is that change sadly does take a while. This is a marathon, not a sprint, but you have to celebrate these smaller wins. A big win for me: when I started this research in 2012 and 2013, almost no platforms had rules around doxxing. Now it’s such a common rule that you can’t dox people. We also see it enacted fairly well at times, sometimes not totally, but it is a rule and it is being enacted. A lot of that was post-Gamergate. While that’s a small thing, it has big impacts on people.

It’s important to celebrate those things and remember that the internet we have right now in 2024 does look really different from the internet I was on in 2014. There are more safety features. It might not feel that much safer at times, but there are more safety features, there are more things you can do, and a lot of these things are at least now defined in the policies of the platforms. That gives us a great space to work from as activists. We’re not starting from zero. We just have to keep pushing and pushing and pushing and try to not burn out from what we’re advocating for.

Ashley Hopkinson: What is a teachable lesson or takeaway that you’ve learned in the process of working in the intersection of design with human rights? 

Caroline Sinders: A big thing for me has been learning how to navigate the different waters I swim in. I used to navigate all of them the same. I’d go in very intense, very strong, and be like, “This is the thing.” It took me a while to realize that people maybe don’t know what I’m talking about, or I haven’t met them where they’re at. I was lucky that that was happening during Gamergate when I was a relatively young researcher, and I was able to learn.

Some of that is learning how to negotiate these bigger conversations that I’m now becoming a part of—how to interact with policy teams at big technology companies. Some of that is learning how I can, to the best of my ability, advocate for my clients and my community in a language that this group understands and not back down. A lot of that is looking at my message and saying, Okay, what are they saying is impossible? How can I help them see that it’s not impossible? How do I advocate for this? How do I surface examples other than just the most immediate ones? A lot of that is learning how to describe the impact and theory of change of what I’m pushing for and making sure it resonates with that audience.

For a while, I really struggled to see why that was important, but when I realized that that was one of the barriers to making the argument palpable or understandable or at the very least easier for that team to try to enact change, I was like, oh, this is a very clear part of what I need to be doing. In other cases, when I’m working with different community organizations, it’s being able to speak to their immediate needs, and understand too that some communities will have very particular ideas around what they want to do. It might not be something that I would personally or professionally recommend, but as long as it’s not hurting them it’s also [about] allowing them to make those decisions.

Because we do a lot of security training, in some cases we’ll see people engage in what we call security theater, where they’ll buy things or do certain activities that don’t technically make them safer but seem safer, like putting their phone in a refrigerator or using mic jammers. If you’re really worried about that, that means that there’s malware on your phone, and what we actually need to do is look at your phone. It’s like, let’s address the problem, but also it’s not going to harm you if you turn your phone off and put it in the microwave or the refrigerator. You don’t need to do that, but it’s also not harming you if you do do that.

It can be really hard for people to understand, especially when you’re talking about privacy and security and surveillance. You can give people a checklist, but they might not actually understand how that checklist relates to the technical problem that’s happening, and so one of the things we’re trying to do now is provide materials at my lab so people understand that when we say turn your phone off, here’s why we’re saying that and what that does. And if you’re worried about eavesdropping, here’s what that actually looks like. Then if you’re leaning towards this other solution, let’s say putting your phone in the microwave or the refrigerator, what we actually need you to do is try to assess if your phone has been compromised.

So, trying to get people to understand that if you’re worried your phone is listening to you because of a sensitive conversation, that means it’s listening to you all the time, and it’s not just randomly doing that. That means that something is on your phone that shouldn’t be on your phone. That’s something we are trying to deal with in balance. We view ourselves as community support. When do we step in, especially if our opinion is not asked for? For the most part, we will try to step in when there’s advice being given that we think will harm the community, whereas with some of these things it’s like, if you want to do that, we’ll tell you why we don’t recommend it, but it’s not really going to harm you. You’re not doing a negative security practice, you’re just doing one that is superfluous.

Ashley Hopkinson: What are some of the challenges that you face in doing this work, and how do you work to overcome those challenges?

Caroline Sinders: Some of the biggest challenges are honestly basic ones that a lot of organizations run into, like lack of funding. It can be very hard as a small organization to get funding. A lot of funders are driven by trends, and so it can be hard to get funding for projects right now related to online harassment because it’s seen as such a seasoned topic or an older topic, but it’s still a problem. It’s now more difficult to get funding to support projects and programs related to it. 

Ashley Hopkinson: When the funding and human resources are there, what would you like to see expand or grow that prioritizes well-being and human rights in technology?

Caroline Sinders: It’s hard to say because there are so many answers. I would love to see companies make more paid efforts to collaborate with smaller organizations that are more expert in this. There’s often an idea with software companies, and particularly this happens a lot with designers and technologists, where they’re like, I’ll interview these 10 or 15 people and then I will become the expert. I’ve seen this throughout the past 11 years of my career, where someone will be on maybe a trust and safety team—they’re new to trust and safety but they’re a very good designer, and they’ll go through some of the basic mistakes to become an expert over a few years, and then they get moved to another team or they decide they want to focus on something else. That is one of the things that makes it harder to get much more iterative change related to digital harms.

I was trained as a UX designer. One of the more negative traits that exists in our design field is the idea that you can join any specific team and become an expert in it if you’re already a design expert. We need to let that go. We need to move beyond that. If we’re thinking about what a T-shaped designer or technologist or project manager is, it shouldn’t be, can you do graphic design or front-end programming and UX? It should be, do you have a deep background in misinformation and can you do UX? That’s what we should be aiming for. Or, do you have a deep background in understanding tech-facilitated gender-based violence? By deep background, I mean you spent a few years researching it, not, like, you read a handful of articles and you care a lot about it from an activist standpoint.

While that’s important, we need people who can come in and speak to all different kinds of harm and understand the policy around it. Let’s say doxxing—how that impacts different groups in many different ways and what are the material effects that that group or community will face, and also what is open to them or not open to them. [For example], putting people in contact with law enforcement, we don’t really do that at my organization. We have a very serious conversation around what people can do legally and what are the upsides and downsides. We have a very trauma-informed care approach. Someone who’s being doxxed and being digitally stalked, we will talk about some of the things you might have to do to start a legal case in the U.S., and that does involve having to engage with the police.

Then we will often say, we’re an abolitionist group. This is not something we would normally recommend, but it is something we’re going to tell you about and tell you that it’s open to you and the pros and cons of doing this. If you decide later on you want to file or start a legal case against this person, at least having a police report documented is really helpful, but the police probably aren’t going to do anything to actually help you. In some cases, if someone is living in, let’s say Seattle, and they’re worried that they’re going to be swatted, which is where the SWAT team is called to your house under false pretenses, we know that there is an initiative with the Seattle Police Department where you can preemptively go and say, I live at this address. I’m worried I’m going to be swatted. They will write it down, they will take it seriously, but other cities don’t do that.

If I worked at a platform that was like, you should go report this to law enforcement, I would be like, I don’t actually know if that’s the right answer, because that doesn’t work for a lot of people. I wouldn’t direct someone to the NOPD if they were afraid of being swatted. NOPD doesn’t know what to do. I wouldn’t direct someone to the NOPD if they were like, I’m being digitally stalked by this person. No. They’re an incompetent and very racist police department, and that’s not going to result in anything helpful for our client, and it might be even more traumatizing. These are the things we are constantly thinking about. 

Inside design technology, we have to start insisting that folks come to the table with some expertise already, and stop allowing the status quo where you’re trained on the job to gain that expertise.

Some people might be like, how will I have this expertise? Well, these are things you can focus on in school. You can take gender-studies classes, you can seek out information around things like online gender-based violence. There’s so much material out there. Even if you can’t take a class on online harassment, you can spend a lot of time reading about it and incorporating that into your design technology projects and trying to engage with experts and professionals, and you can also learn another level of intersectionality by engaging with design justice meetups. I’m part of a meetup called Human Rights Centered Design. You can engage with us. You can also read more about data feminism and think about how you would apply that to your own expertise. There are ways, in fact, to train yourself up and become knowledgeable in this space versus just assuming that you will become a trust and safety expert when you get hired on a trust and safety team.

Ashley Hopkinson: Do you think technology and well-being can intersect? In other words, can technology be a space where we can have human rights wins, activism can happen, and well-being is supported?

Caroline Sinders: I would like to think so. I guess it depends on your definition of well-being. One of my really good friends here in the UK is a psychiatrist with the NHS. She’s been working in some of the first clinics established around online addiction, meaning addiction to video games or internet-related things. She’s a psychiatrist and doctor, and I think that’s very different from [an ethicist like] Tristan Harris. Addiction is an issue. I say this as the adult child of an alcoholic. My view on addiction is not that it’s hard to put down your phone. There need to be clinical definitions of this, there need to be thresholds that are met, and then there needs to be intervention. Obviously there is a space inside of technology to support that kind of well-being, but it needs to be led by expertise and in a measured and non-panic-inducing way.

In terms of other forms of well-being, I mean, the internet is a space of community. I suffer from long COVID. It’s really great to interact with other people that are in the long COVID community, and a lot of that is online. It’s been really great for me over the years finding communities of similar interests, even fandoms, and interacting with people, and I think those are also sources of well-being. Not to give a non-answer, but my answers back to people are usually like, what do you mean by X? What do we mean by well-being? Well-being can be very expansively defined. But in terms of community well-being, I think the internet can and does support that. Then in terms of, let’s say, mental health well-being, I think software is exacerbating aspects of our mental health, including increasing anxiety, but that means that there is stuff we can do to mitigate that. It just needs to be led by experts, and it needs to be led in a measured way and not as a moral panic.

Ashley Hopkinson: What do you think leaders and decision-makers can do to help advance progress when it comes to human-rights technology work? You talked about interacting with policy teams—people who are operating at the level of legislation and decision-making in governance and policy. What do you think they can do to help move things forward?

Caroline Sinders: A lot of it is very similar: make sure you’re not engaging in a moral panic and that you’re engaging with the right experts. There are plenty of issues, but these are issues we’re also starting to see mitigated. I think of certain senators and congresspeople not understanding how technology functions, but we have seen other senators and congressional representatives hiring technologists onto their staff as experts. The integration of technologists as key political researchers is incredibly important, because that’s how we help craft policy so people actually understand, let’s say, how machine learning works versus making crazy assumptions. 

It’s always really cringe to see the older congressional reps and senators try to talk about, let’s say, Facebook when it’s very clear they don’t understand how it functions. While that’s funny, cringe, and I laugh at it, I’m also horrified—you’re making laws and you don’t even understand how this thing functions? I think we should be shaming those senators more. Where is your technologist? Whomst did you hire? We need to know: whomst is fulfilling that role on your team? If there’s not someone fulfilling that role, then that needs to be a requirement. We could get into a whole side conversation of how should those roles be filled, but a lot of this is engaging with the right kind of expert who understands the domain you are focusing on. Luckily, again, over the past decade we have seen so much movement of senators and congressional reps bringing technologists into their staff and recognizing that those are key positions to hire for. Even if that person isn’t doing technology building, they can assess and explain how technology functions.

Ashley Hopkinson: Thank you, Caroline.

Ashley Hopkinson is an award-winning journalist, newsroom entrepreneur and leader dedicated to excellent storytelling and mission-driven media. She currently manages the Solutions Insights Lab, an initiative of the Solutions Journalism Network. She is based in New Orleans, Louisiana.

* This conversation has been edited and condensed.
