The Impact of an Adversary in a Language Model

Zhengzhong Liang, Gregory Ditzler

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Neural networks have been quite successful at complex classification tasks, in part because they can learn from large volumes of data. Unfortunately, not all available data sources are secure, and an adversary in the environment may maliciously poison a training dataset to degrade the neural network's generalization error. It is therefore important to understand how susceptible a neural network is to its free parameters (e.g., gradient thresholds, hidden layer size) and to the presence of adversarial data. In this work, we study the impact of an adversary on language models built with Long Short-Term Memory (LSTM) networks and their configurations. We experimented with the Penn Tree Bank (PTB) dataset and adversarial text sampled from works of a different era. Our results show that there are several effective ways to poison such an LSTM language model. Furthermore, our experiments allow us to suggest steps that can be taken to reduce the impact of such attacks.
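The poisoning setting the abstract describes, mixing adversarial text from a different era into a clean training corpus, can be illustrated with a minimal sketch. This is not the authors' code; the function name, the poisoning rate, and the toy sentences are all assumptions for illustration only.

```python
import random

def poison_corpus(clean_sentences, adversarial_sentences, rate, seed=0):
    """Replace a fraction `rate` of clean training sentences with
    adversarial ones, mimicking a training-set poisoning attack.
    All names here are hypothetical, not from the paper."""
    rng = random.Random(seed)
    poisoned = list(clean_sentences)
    n_poison = int(rate * len(poisoned))
    # Pick distinct positions to corrupt, then swap in adversarial text.
    for i in rng.sample(range(len(poisoned)), n_poison):
        poisoned[i] = rng.choice(adversarial_sentences)
    return poisoned

# Toy stand-ins: modern-register "clean" text vs. text from another era.
clean = ["the market rose today", "stocks fell sharply"] * 50
adv = ["thou art a villain", "hark what light breaks"]
poisoned = poison_corpus(clean, adv, rate=0.2)
n_adv = sum(s in adv for s in poisoned)
```

An LSTM language model trained on `poisoned` instead of `clean` would then be evaluated on held-out clean text to measure the attack's effect on perplexity.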

Original language: English (US)
Title of host publication: Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI 2018
Editors: Suresh Sundaram
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 658-665
Number of pages: 8
ISBN (Electronic): 9781538692769
DOIs: 10.1109/SSCI.2018.8628894
State: Published - Jan 28 2019
Event: 8th IEEE Symposium Series on Computational Intelligence, SSCI 2018 - Bangalore, India
Duration: Nov 18 2018 - Nov 21 2018

Publication series

Name: Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI 2018

Conference

Conference: 8th IEEE Symposium Series on Computational Intelligence, SSCI 2018
Country: India
City: Bangalore
Period: 11/18/18 - 11/21/18

ASJC Scopus subject areas

  • Artificial Intelligence
  • Theoretical Computer Science


  • Cite this

Liang, Z., & Ditzler, G. (2019). The Impact of an Adversary in a Language Model. In S. Sundaram (Ed.), Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI 2018 (pp. 658-665). [8628894] (Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI 2018). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/SSCI.2018.8628894