Heterogeneity at Work: Implications of the 2012 Clinical Translational Science Award Evaluators Survey

Cathleen Kane, Angela Alexander, Janice A. Hogle, Helen M. Parsons, Lauren Phelps

Research output: Contribution to journal › Article

6 Citations (Scopus)

Abstract

The Clinical and Translational Science Award (CTSA) program is an ambitious multibillion-dollar initiative sponsored by the National Institutes of Health (NIH) organized around the mission of facilitating the improved quality, efficiency, and effectiveness of translational health sciences research across the country. Although the NIH explicitly requires internal evaluation, funded CTSA institutions are given wide latitude to choose the structure and methods for evaluating their local CTSA program. The National Evaluators Survey was developed by a peer-led group of local CTSA evaluators as a voluntary effort to understand emerging differences and commonalities in evaluation teams and techniques across the 61 CTSA institutions funded nationwide. This article presents the results of the 2012 National Evaluators Survey, finding significant heterogeneity in evaluation staffing, organization, and methods across the 58 CTSA institutions responding. The variety reflected in these findings represents both a liability and a strength. A lack of standardization may impair the ability to make use of common metrics, but variation is also a successful evolutionary response to complexity. Additionally, the peer-led approach and simple design demonstrated by the questionnaire itself have value as an example of an evaluation technique with potential for replication in other areas across the CTSA institutions or in any large-scale investment where multiple related teams across a wide geographic area are given the latitude to develop specialized approaches to fulfilling a common mission.

Original language: English (US)
Pages (from-to): 447-463
Number of pages: 17
Journal: Evaluation and the Health Professions
Volume: 36
Issue number: 4
DOIs: 10.1177/0163278713510378
State: Published - Dec 2013

Fingerprint

  • National Institutes of Health (U.S.)
  • Peer Group
  • Aptitude
  • Surveys and Questionnaires
  • Organizations
  • Efficiency
  • Health
  • Research

Keywords

  • clinical translational science awards
  • evaluation methods
  • peer led
  • translational science

ASJC Scopus subject areas

  • Health Policy

Cite this

Heterogeneity at Work: Implications of the 2012 Clinical Translational Science Award Evaluators Survey. / Kane, Cathleen; Alexander, Angela; Hogle, Janice A.; Parsons, Helen M.; Phelps, Lauren.

In: Evaluation and the Health Professions, Vol. 36, No. 4, 12.2013, pp. 447-463.

Research output: Contribution to journal › Article

Kane, Cathleen; Alexander, Angela; Hogle, Janice A.; Parsons, Helen M.; Phelps, Lauren. / Heterogeneity at Work: Implications of the 2012 Clinical Translational Science Award Evaluators Survey. In: Evaluation and the Health Professions. 2013; Vol. 36, No. 4, pp. 447-463.
@article{8e8724c1995e49e394f94036b65d9b22,
title = "Heterogeneity at Work: Implications of the 2012 Clinical Translational Science Award Evaluators Survey",
abstract = "The Clinical and Translational Science Award (CTSA) program is an ambitious multibillion dollar initiative sponsored by the National Institutes of Health (NIH) organized around the mission of facilitating the improved quality, efficiency, and effectiveness of translational health sciences research across the country. Although the NIH explicitly requires internal evaluation, funded CTSA institutions are given wide latitude to choose the structure and methods for evaluating their local CTSA program. The National Evaluators Survey was developed by a peer-led group of local CTSA evaluators as a voluntary effort to understand emerging differences and commonalities in evaluation teams and techniques across the 61 CTSA institutions funded nationwide. This article presents the results of the 2012 National Evaluators Survey, finding significant heterogeneity in evaluation staffing, organization, and methods across the 58 CTSAs institutions responding. The variety reflected in these findings represents both a liability and strength. A lack of standardization may impair the ability to make use of common metrics, but variation is also a successful evolutionary response to complexity. Additionally, the peer-led approach and simple design demonstrated by the questionnaire itself has value as an example of an evaluation technique with potential for replication in other areas across the CTSA institutions or any large-scale investment where multiple related teams across a wide geographic area are given the latitude to develop specialized approaches to fulfilling a common mission.",
keywords = "clinical translational science awards, evaluation methods, peer led, translational science",
author = "Cathleen Kane and Angela Alexander and Hogle, {Janice A.} and Parsons, {Helen M.} and Lauren Phelps",
year = "2013",
month = "12",
doi = "10.1177/0163278713510378",
language = "English (US)",
volume = "36",
pages = "447--463",
journal = "Evaluation and the Health Professions",
issn = "0163-2787",
publisher = "SAGE Publications Inc.",
number = "4",

}

TY - JOUR

T1 - Heterogeneity at Work

T2 - Implications of the 2012 Clinical Translational Science Award Evaluators Survey

AU - Kane, Cathleen

AU - Alexander, Angela

AU - Hogle, Janice A.

AU - Parsons, Helen M.

AU - Phelps, Lauren

PY - 2013/12

Y1 - 2013/12

N2 - The Clinical and Translational Science Award (CTSA) program is an ambitious multibillion-dollar initiative sponsored by the National Institutes of Health (NIH) organized around the mission of facilitating the improved quality, efficiency, and effectiveness of translational health sciences research across the country. Although the NIH explicitly requires internal evaluation, funded CTSA institutions are given wide latitude to choose the structure and methods for evaluating their local CTSA program. The National Evaluators Survey was developed by a peer-led group of local CTSA evaluators as a voluntary effort to understand emerging differences and commonalities in evaluation teams and techniques across the 61 CTSA institutions funded nationwide. This article presents the results of the 2012 National Evaluators Survey, finding significant heterogeneity in evaluation staffing, organization, and methods across the 58 CTSA institutions responding. The variety reflected in these findings represents both a liability and a strength. A lack of standardization may impair the ability to make use of common metrics, but variation is also a successful evolutionary response to complexity. Additionally, the peer-led approach and simple design demonstrated by the questionnaire itself have value as an example of an evaluation technique with potential for replication in other areas across the CTSA institutions or in any large-scale investment where multiple related teams across a wide geographic area are given the latitude to develop specialized approaches to fulfilling a common mission.

AB - The Clinical and Translational Science Award (CTSA) program is an ambitious multibillion-dollar initiative sponsored by the National Institutes of Health (NIH) organized around the mission of facilitating the improved quality, efficiency, and effectiveness of translational health sciences research across the country. Although the NIH explicitly requires internal evaluation, funded CTSA institutions are given wide latitude to choose the structure and methods for evaluating their local CTSA program. The National Evaluators Survey was developed by a peer-led group of local CTSA evaluators as a voluntary effort to understand emerging differences and commonalities in evaluation teams and techniques across the 61 CTSA institutions funded nationwide. This article presents the results of the 2012 National Evaluators Survey, finding significant heterogeneity in evaluation staffing, organization, and methods across the 58 CTSA institutions responding. The variety reflected in these findings represents both a liability and a strength. A lack of standardization may impair the ability to make use of common metrics, but variation is also a successful evolutionary response to complexity. Additionally, the peer-led approach and simple design demonstrated by the questionnaire itself have value as an example of an evaluation technique with potential for replication in other areas across the CTSA institutions or in any large-scale investment where multiple related teams across a wide geographic area are given the latitude to develop specialized approaches to fulfilling a common mission.

KW - clinical translational science awards

KW - evaluation methods

KW - peer led

KW - translational science

UR - http://www.scopus.com/inward/record.url?scp=84887469151&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84887469151&partnerID=8YFLogxK

U2 - 10.1177/0163278713510378

DO - 10.1177/0163278713510378

M3 - Article

C2 - 24214662

AN - SCOPUS:84887469151

VL - 36

SP - 447

EP - 463

JO - Evaluation and the Health Professions

JF - Evaluation and the Health Professions

SN - 0163-2787

IS - 4

ER -