Knowledge-based systems (KBS) can potentially enhance individual decision making, yet recommendations from these systems continue to be met with resistance. This is particularly troubling in professions associated with deception detection (e.g., border control), where humans are accurate only about half the time. In this research-in-progress, we examine how the fit between KBS explanations and users' internal explanations influences acceptance of system recommendations. We characterize the explanations using Toulmin's argument classifications and draw on cognitive fit theory to explain why fit matters for user acceptance of the system's evaluation. We describe a two-phase research approach: in the first phase, we develop the arguments, evaluate their relative strength, and validate their fit with key argument types. We then describe an experiment examining how users process KBS-provided explanations in a credibility assessment task.