The work of NSFW AI is complicated by the fact that digital content and societal norms are dynamic, continually changing even as models try to adjust. NSFW AI systems are currently judged on how well they retrain their filtering parameters as new content trends emerge. In practice, however, most of these systems need regular updating and retraining, which involves manual input that can prove quite time consuming. A 2023 paper found that AI content moderation tools require roughly 300 hours of retraining to catch up with shifting trends, no small commitment in either resources or time.
Online norms change constantly, both because the internet is a relatively new form of expression and because it remains heavily influenced by broader societal shifts and legal requirements. For instance, the #MeToo movement prompted a significant spike in content around sexual harassment and consent. Because educational material was often confused with pornographic imagery, false alarms increased, leading to user dissatisfaction. The result was a 20% higher error rate in content removal, because the AI could not make nuanced interpretations.
The industry has tried to combat these problems with iterative updates and improved algorithms. As the nature of the content evolves, machine learning models must develop a better understanding of context and adapt more quickly to new content types. For example, in 2022 Google's Perspective API delivered more nuanced contextual analysis, improving the accuracy of content classification by up to 15% and reducing the amount of content wrongly flagged.
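As a rough illustration of how a platform might consume a classifier like this, the sketch below builds a Perspective-style `comments:analyze` request body and extracts per-attribute summary scores from a response. The endpoint and attribute names follow Perspective's public API, but the helper functions and the way scores are handled are assumptions for illustration, not the API's official client.

```python
# Illustrative sketch, not an official client. Endpoint and attribute
# names follow Perspective API docs; helpers are assumed for this example.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text, attributes=("TOXICITY", "SEXUALLY_EXPLICIT")):
    """Build the JSON body for a comments:analyze call."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def summary_scores(response):
    """Extract per-attribute summary scores (0.0 to 1.0) from a response."""
    return {
        attr: data["summaryScore"]["value"]
        for attr, data in response.get("attributeScores", {}).items()
    }
```

In practice the body from `build_request` would be POSTed to `ANALYZE_URL` with an API key, and `summary_scores` applied to the JSON reply before any moderation decision is made.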
However, the process remains incomplete. As Dr. Kate Crawford and others have observed, online trends shift quickly, and AI systems must keep pace not only with new data sets but with the changing social meanings behind the signals they classify. This complexity can make it difficult for even the best-designed systems to keep up with changes in online behavior and user expectations.
In other cases, platforms like Reddit have been criticized for rolling out NSFW AI systems too slowly, even as digital art and explicit content proliferate in their creative forums. The systems' rigid filtering criteria have resulted in legitimate artwork being taken down, a testament to how difficult it is to build solutions that comply with rapidly changing norms yet remain robust under stress.
Some newer approaches aim to improve adaptability, for example hybrid models that combine AI with human oversight. These enable live tuning and contextual moderation, which goes a long way toward addressing the limitations of purely algorithmic systems. Such hybrid approaches have improved content classification accuracy by as much as 25% over traditional AI-only systems so far in 2023.
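A minimal sketch of the hybrid idea: scores the model is confident about are handled automatically, while the ambiguous middle band is routed to human moderators. The threshold values here are illustrative assumptions, not figures from any production system.

```python
# Hybrid moderation routing sketch. Thresholds are illustrative
# assumptions; real systems tune these per policy and content type.
AUTO_REMOVE = 0.95   # model is near-certain the content violates policy
AUTO_ALLOW = 0.20    # model is near-certain the content is acceptable

def route(score: float) -> str:
    """Route a model confidence score (0.0 to 1.0) to an action."""
    if score >= AUTO_REMOVE:
        return "remove"
    if score <= AUTO_ALLOW:
        return "allow"
    return "human_review"  # ambiguous middle band goes to moderators
```

The design choice is the width of the middle band: widening it sends more content to humans, trading moderation cost for fewer automated errors, which is exactly the live-tuning lever hybrid systems offer.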
To keep up with the newest internet standards, NSFW AI systems need to evolve over time. By incorporating more dynamic learning models and hybrid moderation strategies, these systems can better match evolving societal norms and user expectations.