Online bullying is a reality, whether we like it or not. But what can we do about it? Everyone has freedom of speech, right? Can online platforms offer a solution?
Earlier this year, Instagram rolled out a feature that uses AI to warn users about their comments. When a comment is deemed offensive, the app shows you a notification and gives you a chance to rephrase it.
This feature is now set to cover captions too. It follows research suggesting that when you get a prompt offering a chance to remove offensive words, you are likely to take it. The approach has worked in the comment section, and it will probably work for captions as well.
Note that, as reported by Mashable, you can still post the caption as is, even after you get the prompt.
When you try to write something deemed offensive, the AI prompts you, warning that "the caption is similar to one that has been reported for bullying." No one wants to be reported to IG, and no one wants to be banned or suspended, so this nudge is likely to work.
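As a rough illustration of this warn-before-post flow, here is a minimal sketch. The word list, scoring function, threshold, and messages are purely illustrative assumptions, not Instagram's actual classifier, which is a trained machine-learning model.

```python
# Hypothetical sketch of a warn-before-post flow like the one described.
# FLAGGED_TERMS stands in for a real trained toxicity model.
FLAGGED_TERMS = {"idiot", "loser", "stupid"}

def offense_score(text: str) -> float:
    """Fraction of words matching the flagged list (toy 'classifier')."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in FLAGGED_TERMS for w in words) / len(words)

def submit_caption(text: str, confirm_anyway=lambda: False) -> str:
    """Warn if the caption looks offensive, but let the user post anyway."""
    if offense_score(text) > 0.2:  # arbitrary illustrative threshold
        print("This caption is similar to one reported for bullying.")
        if not confirm_anyway():
            return "edit"    # user chose to rephrase
    return "posted"          # posted as is, even after the warning
```

The key design point mirrors the article: the warning is a nudge, not a block, so the user can still confirm and post the original caption.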
Even without AI doing it for us, we all need to work at making our social spaces better places for everyone.