‘Trusted Friends’ and ‘hateful’ language filter: Twitter’s concept features to allow users to choose who & what they want to hear

Proposed features that would enable Twitter users to limit their audience to only “trusted friends” and to blacklist chosen phrases have prompted accusations that the social media platform is encouraging “echo chambers.”

In a series of tweets on Thursday, designer Andrew Courter noted that Twitter is “exploring a bunch of ways to control who can see your Tweets.” He shared three such early design concept features to solicit public discussion and feedback, but pointed out that the company is “not building these yet.”

With the ‘Trusted Friends’ feature, comparable to Instagram’s ‘Close Friends’ functionality for its stories, users can “control who can see” their tweets – potentially toggling privacy settings to tailor their audience according to what they put out.

Reasoning that it “could be simpler to talk to who you want, when you want” instead of “juggling alt accounts,” Courter tweeted that “perhaps (users) could also see trusted friends’ tweets first” in their timelines – as opposed to the current ‘Home’ timeline, which is either algorithmically ranked or chronologically ordered.


According to TechCrunch, the feature would build on existing controls that let original posters pick who is able to ‘reply’ to their tweets – those mentioned in the tweet, people they follow, or the default option, ‘everyone’. However, that control leaves the tweet itself visible to, and shareable by, anyone.

The second proposed change, under the working name ‘Facets’, would allow people to categorize tweets according to context by “embracing an obvious truth: we’re different people in different contexts” with respect to friends, family, work, and public lives.

According to Courter, this concept lets people tweet “from distinct personas within 1 account,” while enabling other individuals to “follow the whole account” or just the ‘facets’ they find interesting. For instance, a personal persona could relate to hobbies, while a professional persona is work-related.

Meanwhile, the third feature would allow users to filter out phrases deemed “hateful, hurtful and violent” or considered “profanity” that they would “prefer not to see” in replies to their tweets. Users could also choose “automatic actions,” such as “moving violating replies to the bottom of the conversation” and “muting accounts that violate twice” despite being prompted.


Followers would then see these phrases “highlighted” in their replies, and a prompt would nudge them to “learn why,” though they could simply “ignore the guidance,” according to Courter. Likening it to a “spellcheck” against “sounding like a jerk,” he noted that it could help “set boundaries” for conversations.

The proposals drew a mixed reaction from the platform’s users, with several people raising concerns that the prospect of tweeters picking and choosing their audiences and the replies they would prefer to receive increased the likelihood of “echo chambers” and “virtue signalling.”



“Twitter is a public forum. It’s what makes it different. Close the communities enough, and it gets turned into a Facebook clone. One Facebook style social network is definitely enough,” one person tweeted.

When some users pointed out that they could simply “block accounts” instead, Courter responded that “blocks are underused” and claimed there is a “need to normalize blocking and teach how it works.”

In response to Courter’s contention that the reply filters would “help people be their best selves,” a number of people agreed that it would “set a model for empathetic phrasing,” but others said it sounded like “another attempt to pressure users into ‘acceptable’ speech.”

Other users said the concept of tying different personas to one account had been explored previously by Google with ‘Circles’ on its defunct Google+ network, with one person saying it would lead to individual privacy concerns that are not present with the current workaround of using alternate accounts.
