Meta AI Introduces New AI Technology Called “Few-Shot Learner (FSL)” To Combat Harmful Content

Source: https://ai.facebook.com/blog/harmful-content-can-evolve-quickly-our-new-ai-system-adapts-to-tackle-it/

Training AI models typically requires a large number of labeled examples, often tens of thousands to millions, and collecting and labeling this data can take months. This manual effort delays the deployment of AI systems capable of detecting new types of harmful content on social media platforms. To address this problem, Meta has deployed a new AI model called "Few-Shot Learner" (FSL) that can detect harmful content even when little labeled data is available.

Meta's FSL deployment is a step toward more generalized AI models that require very little or no labeled data for training. FSL belongs to an emerging field of AI called meta-learning, where the focus is "learning to learn" rather than learning a single task, as traditional AI models do. The FSL is first trained on generic natural language examples, which act as the training set. The model is then trained with new policy texts describing the targeted harmful content, together with policy-violating and non-violating content labeled in the past, which serves as the support set. Meta reported that its FSL outperforms several existing leading few-shot learning methods by an average of 12% across various systematic evaluation schemes. For more details, see Meta's research paper.
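The training setup described above can be sketched in code. The cited paper frames classification as textual entailment: each post (premise) is paired with a policy description (hypothesis), and a pretrained model scores whether the post entails the policy. The sketch below illustrates only the data framing; the function names, policy text, and examples are hypothetical, not Meta's actual API.

```python
# Sketch of entailment-style few-shot data framing, the idea behind FSL.
# All names and examples here are illustrative assumptions.

def build_entailment_pairs(posts, policy_text):
    """Pair each post (premise) with the policy description (hypothesis).

    A pretrained entailment model would then score whether each post
    "entails" the policy text, i.e. violates the policy.
    """
    return [(post, policy_text) for post in posts]

def few_shot_support_set(labeled_examples, policy_text):
    """Combine a small labeled support set with the new policy text.

    labeled_examples: list of (post, label), label 1 = violating, 0 = not.
    Returns entailment-style triples (premise, hypothesis, label) that can
    fine-tune an entailment model from only a handful of examples.
    """
    return [(post, policy_text, label) for post, label in labeled_examples]

policy = "Content that discourages people from getting vaccinated"
support = [
    ("Vaccines contain microchips, avoid them!", 1),
    ("I got my booster today, feeling fine.", 0),
]

triples = few_shot_support_set(support, policy)
```

Because the policy itself is given as natural-language input, adapting to a new harm type only requires writing the new policy text and supplying a few labeled examples, rather than collecting a large labeled corpus.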

Thanks to its pre-training on generic language, the FSL model can implicitly learn policy texts and quickly act to remove harmful content on social media platforms. The deployment of FSL on Facebook and Instagram has been a success: it has helped stop the spread of newly evolving harmful content, such as hate speech and posts that promote vaccine hesitancy. Meta aims to further improve these AI models toward generalized systems that could read pages of policy text and implicitly learn how to apply them.

Beyond detecting and removing harmful content, FSL can play a major role in accelerating research in computer vision, especially in tasks such as action localization, character recognition, and video motion tracking. In natural language processing (NLP), it can be useful for sentiment classification, sentence completion, and translation. Another major application of few-shot learning may be voice cloning, which is becoming increasingly important with the arrival of new digital assistant products on the market.

Meta plans to integrate FSL and other meta-learning strategies into its existing and future AI models, building a shared knowledge base for addressing various types of violations. In the future, FSL will likely help enforce new policies and shorten the time from policy creation to enforcement by orders of magnitude. Meta also plans to increase its investment in zero-shot learning research to develop technologies that go beyond a literal understanding of content and infer the behavioral and conversational context behind it. These early milestones indicate a shift in AI/DL research from traditional single-task model training toward generalized systems that can multitask, require little labeling effort, and deploy in less time.

Article: https://arxiv.org/pdf/2104.14690.pdf

