Abstract
We study the problem of balancing effectiveness and efficiency in automated feature selection. Surveying existing feature selection methods, we observe a computational dilemma: 1) traditional feature selection (e.g., mRMR) is mostly efficient but struggles to identify the best feature subset; 2) the emerging reinforced feature selection automatically navigates the feature space to search for the best subset, but is usually inefficient. Can we bridge the gap between effectiveness and efficiency under automation? Motivated by this dilemma, we aim to develop a novel feature space navigation method. In our preliminary work, we leveraged interactive reinforcement learning to accelerate feature selection via external trainer-agent interaction. In this journal version, we propose a novel interactive and closed-loop architecture that simultaneously models interactive reinforcement learning and decision tree feedback. Our preliminary work can be significantly improved by modeling the structured knowledge of the downstream task (e.g., a decision tree) as learning feedback. In particular, the tree-structured feature hierarchy of the decision tree is leveraged to improve state representation, and the feature importance hierarchy derived from the decision tree is exploited to develop a new reward scheme. In addition, the agents' historical action records serve as feedback for yet another new reward scheme. Finally, extensive experiments demonstrate the improved performance of our methods.
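The abstract describes a reward scheme that incorporates decision tree feature importances as feedback for the selection agents. As a minimal illustrative sketch (not the paper's actual formulation), one could blend the downstream model's accuracy with the mean importance of the currently selected features; the weighting `alpha` and the function name are assumptions for illustration:

```python
import numpy as np

def importance_reward(selected, importances, accuracy, alpha=0.5):
    """Illustrative reward blending downstream accuracy with
    decision-tree feature importances of the selected subset.

    selected    -- indices of currently selected features
    importances -- per-feature importance scores (e.g., from a fitted tree)
    accuracy    -- downstream predictive accuracy on the selected subset
    alpha       -- assumed trade-off weight (hypothetical, not from the paper)
    """
    if len(selected) == 0:
        return 0.0  # no features selected: no reward signal
    tree_score = float(np.mean([importances[i] for i in selected]))
    return alpha * accuracy + (1.0 - alpha) * tree_score

# Example: three features, two selected
imps = [0.5, 0.1, 0.4]
r = importance_reward([0, 2], imps, accuracy=0.8)  # 0.5*0.8 + 0.5*0.45 = 0.625
```

In practice the importances could come from `DecisionTreeClassifier.feature_importances_` after fitting on the selected subset, so that the tree's structured knowledge closes the feedback loop each selection step.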
Original language | English (US) |
---|---|
Journal | IEEE Transactions on Knowledge and Data Engineering |
DOIs | |
State | Accepted/In press - 2021 |
Externally published | Yes |
Keywords
- Automation
- Correlation
- Decision Tree in the Loop
- Decision trees
- Feature Selection
- Feature extraction
- Interaction Mechanism
- Reinforcement Learning
- Space exploration
- Task analysis
ASJC Scopus subject areas
- Information Systems
- Computer Science Applications
- Computational Theory and Mathematics