IEEE P3168
This standard specifies test methods for evaluating the robustness of a Natural Language Processing (NLP) service that uses machine learning. NLP models generally feature a discrete input space and, for some tasks, an almost infinite output space. The robustness of an NLP service is affected by various perturbations, including adversarial attacks. A methodology for categorizing the perturbations, along with test cases for evaluating the robustness of an NLP service against the different perturbation categories, is specified. Metrics for the robustness evaluation of an NLP service are defined. NLP use cases and their corresponding applicable test methods are also described.
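As a minimal illustrative sketch of perturbation-based test-case generation (the function names, categories, and parameters below are assumptions for illustration, not definitions taken from the standard), character-level general corruptions such as deletion, repetition, and insertion might be applied to clean inputs like this:

```python
import random


def perturb_delete(text: str, rng: random.Random) -> str:
    """General corruption: randomly delete one character."""
    if not text:
        return text
    i = rng.randrange(len(text))
    return text[:i] + text[i + 1:]


def perturb_repeat(text: str, rng: random.Random) -> str:
    """General corruption: randomly repeat one character."""
    if not text:
        return text
    i = rng.randrange(len(text))
    return text[:i + 1] + text[i] + text[i + 1:]


def perturb_insert(text: str, rng: random.Random) -> str:
    """General corruption: insert one random lowercase character."""
    i = rng.randrange(len(text) + 1)
    c = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return text[:i] + c + text[i:]


def make_test_cases(text: str, n: int = 3, seed: int = 0) -> list[str]:
    """Generate n perturbed variants of a clean input per corruption category."""
    rng = random.Random(seed)
    categories = [perturb_delete, perturb_repeat, perturb_insert]
    return [p(text, rng) for p in categories for _ in range(n)]
```

Each perturbed variant would then be submitted to the NLP service under test, and the service outputs compared against the expected outputs for the clean inputs.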
The purpose of the standard is to provide test methods for evaluating the robustness of an NLP service. The test methods can be used by service developers, service providers, and service users to determine the robustness of an NLP service.
New IEEE Standard – Active – Draft. Natural Language Processing (NLP) services using machine learning are widely deployed to solve a broad range of tasks, and are usually accessible through API calls. The robustness of these services is challenged by various well-known general corruptions and by adversarial attacks. Examples of general corruptions include the inadvertent or random deletion, addition, or repetition of characters or words. Adversarial attacks generate adversarial character, word, or sentence samples that cause the models underpinning the NLP services to produce incorrect results. This standard proposes a method for quantitatively evaluating the robustness of NLP services. Under the method, the different cases against which the evaluation needs to be performed are specified, and robustness metrics and their calculation are defined. With the standard, service stakeholders, including service developers, service providers, and service users, can develop an understanding of the robustness of the services. The evaluation can be performed during various phases of the NLP service life cycle, such as the testing phase, the validation phase, and after deployment.
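The standard itself defines the robustness metrics and their calculation; as a hedged sketch of what a quantitative evaluation of a black-box NLP service could look like (the metric names and formulas here are illustrative assumptions, not the standard's definitions), one might measure accuracy on perturbed inputs and the relative performance drop versus clean inputs:

```python
def robust_accuracy(predict, clean_inputs, labels, perturb):
    """Accuracy of an NLP service on perturbed copies of the inputs.

    predict: callable mapping input text -> predicted label
             (e.g. a wrapper around the service's API call)
    perturb: callable mapping clean text -> perturbed text
    """
    correct = sum(
        1 for x, y in zip(clean_inputs, labels) if predict(perturb(x)) == y
    )
    return correct / len(clean_inputs)


def performance_drop_rate(clean_acc, perturbed_acc):
    """Relative accuracy drop under perturbation (0.0 = fully robust).

    Illustrative metric: (clean - perturbed) / clean.
    """
    if clean_acc == 0:
        return 0.0
    return (clean_acc - perturbed_acc) / clean_acc
```

In practice the same calculation would be repeated per perturbation category, so stakeholders can see which categories the service is most sensitive to.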