### Abstract

Data mining is a technique for discovering patterns and trends in data, and it can be used to build a model that predicts those patterns and trends. This is particularly useful for data sets that are not amenable to traditional statistical analysis. One data mining task is classification: predicting a quantity that can take on only a finite number of values. An important class of binary classifiers is the Support Vector Machine (SVM). Traditional SVMs use constrained optimization to find a separating hyperplane, and a new data point is classified according to which side of that hyperplane it falls on. All SVMs try to minimize the number of potential errors the classifier will make by minimizing a sum of distances from the hyperplane. However, the actual task of classification places no importance on distance. To model this more closely, we propose the Integer Support Vector Machine classifier (ISVM), which uses binary indicator error variables to directly minimize the number of potential errors the classifier can make.
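The contrast the abstract draws can be sketched in formulas. The first program is the standard soft-margin SVM; the second is an illustration of the indicator-variable idea the abstract describes, written here as a big-M mixed-integer program — the constant \(M\) and the exact constraint form are assumptions of this sketch, not necessarily the paper's model.

```latex
% Standard soft-margin SVM: the objective penalizes the *distance*
% (slack \xi_i \ge 0) by which each point violates the margin.
\min_{w,\,b,\,\xi}\;\; \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i
\qquad \text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i,\;\; \xi_i \ge 0.

% Indicator-based sketch of the ISVM idea: a binary z_i marks whether
% point i is allowed to violate the margin, so the objective *counts*
% potential errors directly rather than summing distances.
% (M must be large enough to deactivate the constraint when z_i = 1.)
\min_{w,\,b,\,z}\;\; \sum_{i=1}^{n} z_i
\qquad \text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1 - M z_i,\;\; z_i \in \{0,1\}.
```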

Original language | English (US)
---|---
Title of host publication | Proceedings - Sixth Int. Conf. on Softw. Eng., Artificial Intelligence, Netw. and Parallel/Distributed Computing and First ACIS Int. Workshop on Self-Assembling Wireless Netw., SNPD/SAWN 2005
Pages | 144-149
Number of pages | 6
Volume | 2005
DOIs | https://doi.org/10.1109/SNPD-SAWN.2005.16
State | Published - 2005
Event | 6th International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing and 1st ACIS International Workshop on Self-Assembling Wireless Networks, SNPD/SAWN 2005 - Towson, MD, United States. Duration: May 23 2005 → May 25 2005

### Other

Other | 6th International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing and 1st ACIS International Workshop on Self-Assembling Wireless Networks, SNPD/SAWN 2005
---|---
Country | United States
City | Towson, MD
Period | 5/23/05 → 5/25/05

### ASJC Scopus subject areas

- Engineering (all)

### Cite this

**An integer support vector machine.** / Domm, Maryanne; Engel, Andrew; Pierre-Louis, Péguy; Goldberg, Jeffrey B.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Domm, M., Engel, A., Pierre-Louis, P., & Goldberg, J. B. (2005). An integer support vector machine. In *Proceedings - Sixth Int. Conf. on Softw. Eng., Artificial Intelligence, Netw. and Parallel/Distributed Computing and First ACIS Int. Workshop on Self-Assembling Wireless Netw., SNPD/SAWN 2005* (Vol. 2005, pp. 144-149). [1434881]. https://doi.org/10.1109/SNPD-SAWN.2005.16


TY - GEN

T1 - An integer support vector machine

AU - Domm, Maryanne

AU - Engel, Andrew

AU - Pierre-Louis, Péguy

AU - Goldberg, Jeffrey B

PY - 2005

Y1 - 2005

AB - Data mining is a technique to discover patterns and trends in data and can be used to create a model to predict those patterns and trends. This is particularly useful for data sets that are not amenable to traditional statistical analysis. One particular data mining task is classification, predicting a quantity that can only take on a finite number of values. An important class of binary classifiers are Support Vector Machines (SVMs). Traditional SVMs use constrained optimization to find a separating hyperplane. A new data point is classified based on which side of the separating hyperplane it happens to fall on. All SVMs try to minimize the number of potential errors the classifier will make by minimizing a sum of distances from the hyperplane. However, the actual task of classification does not place any importance on a distance. In order to model this more closely, we propose the Integer Support Vector Machine Classifier (ISVM). ISVM uses binary indicator error variables to directly minimize the number of potential errors the classifier can make.

UR - http://www.scopus.com/inward/record.url?scp=33749417877&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=33749417877&partnerID=8YFLogxK

U2 - 10.1109/SNPD-SAWN.2005.16

DO - 10.1109/SNPD-SAWN.2005.16

M3 - Conference contribution

SN - 0769522947

SN - 9780769522944

VL - 2005

SP - 144

EP - 149

BT - Proceedings - Sixth Int. Conf. on Softw. Eng., Artificial Intelligence, Netw. and Parallel/Distributed Computing and First ACIS Int. Workshop on Self-Assembling Wireless Netw., SNPD/SAWN 2005

ER -