Benchmark frameworks and τbench

Stephen W. Thomas, Richard Thomas Snodgrass, Rui Zhang

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Software engineering frameworks tame the complexity of large collections of classes by identifying structural invariants, regularizing interfaces, and increasing sharing across the collection. We wish to appropriate these benefits for families of closely related benchmarks, say for evaluating query engine implementation strategies. We introduce the notion of a benchmark framework, an ecosystem of benchmarks that are related in semantically rich ways and enabled by organizing principles. A benchmark framework is realized by iteratively changing one individual benchmark into another, say by modifying the data format, adding schema constraints, or instantiating a different workload. Paramount to our notion of benchmark frameworks are the ease of describing the differences between individual benchmarks and the utility of methods to validate the correctness of each benchmark component by exploiting the overarching ecosystem. As a detailed case study, we introduce τBench, a benchmark framework consisting of ten individual benchmarks, spanning XML, XQuery, XML Schema, and PSM, along with temporal extensions to each. The second case study examines the Mining Unstructured Data benchmark framework, and the third examines the potential benefits of rendering the TPC family as a benchmark framework.

Original language: English (US)
Pages (from-to): 1047-1075
Number of pages: 29
Journal: Software - Practice and Experience
Volume: 44
Issue number: 9
DOI: 10.1002/spe.2189
State: Published - 2014

Keywords

  • benchmarks
  • temporal databases
  • XML

ASJC Scopus subject areas

  • Software

Cite this

@article{96474cd5debf4e7b8c7048b137aad82a,
title = "Benchmark frameworks and τbench",
keywords = "benchmarks, temporal databases, XML",
author = "Thomas, {Stephen W.} and Snodgrass, {Richard Thomas} and Rui Zhang",
year = "2014",
doi = "10.1002/spe.2189",
language = "English (US)",
volume = "44",
pages = "1047--1075",
journal = "Software - Practice and Experience",
issn = "0038-0644",
publisher = "John Wiley and Sons Ltd",
number = "9",
}
