Can routine hospital data be used to detect poor quality service delivery among surgeons?
For over 20 years, routine data sources such as hospital episode statistics have been widely perceived as of little value because of problems with completeness and accuracy, and the Department of Health has in the past dismissed their use for identifying poor quality services.
But, despite these concerns, hospital episode statistics data were used to investigate hospital level death rates after heart surgery in the Bristol inquiry, and more recently death rates for individual cardiac consultants have been published.
A paper published online by the BMJ extends this theme to more general measures of clinical quality where death may not be the outcome.
The authors investigated whether routinely collected data from hospital episode statistics could be used to identify the gynaecologist Rodney Ledward, who was suspended in 1996 and was the subject of the Ritchie inquiry into quality and practice within the NHS.
The research team compared the performance of 142 gynaecology consultants with Ledward's over a five year period, to determine whether Ledward was a statistical outlier on seven indicators derived from hospital episode statistics. The indicators were specifically chosen for their potential link with poor quality of service.
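The general approach described above — screening many consultants against their peers on several indicators and flagging statistical outliers — can be sketched as follows. This is a minimal illustration only, not the paper's actual statistical method: the z-score rule, threshold, and indicator names here are all hypothetical assumptions.

```python
# Illustrative sketch (NOT the authors' actual method): flag consultants whose
# indicator values stand out from their peers, using a simple z-score rule.
# The threshold and indicator names are hypothetical.
from statistics import mean, stdev

def flag_outliers(rates, z_threshold=3.0):
    """rates: {consultant: {indicator: value}} for one time period.
    Returns {consultant: [indicators on which they are a high outlier]}."""
    indicators = {ind for vals in rates.values() for ind in vals}
    outliers = {}
    for ind in indicators:
        values = [vals[ind] for vals in rates.values() if ind in vals]
        mu, sd = mean(values), stdev(values)
        for consultant, vals in rates.items():
            # Flag only unusually HIGH values, since each indicator is
            # assumed to measure something adverse (e.g. complication rate).
            if ind in vals and sd > 0 and (vals[ind] - mu) / sd > z_threshold:
                outliers.setdefault(consultant, []).append(ind)
    return outliers
```

In practice such screening would be repeated for each year and, as the authors note, would need adjustment for case mix before outlier status could be interpreted.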
Their analysis identified Ledward as an outlier in three of the five years. Eight other consultants were also flagged as outliers, but the researchers caution strongly against interpreting this as evidence of poor performance, because valid explanations for their results may exist.
For example, cancer specialists may have high values for several indicators (such as surgical complications and long stays in hospital) because they carry out difficult operations on very ill patients, say the authors. The method therefore needs to be refined to deal with case mix variation.
The authors also warn of the potential limitations of statistics, including missing or poor quality data that can hamper any analysis, and they stress that the interpretation of outlier status remains unclear. They recommend a structured approach to seeking explanations for outlier status.
"Further evaluation of our method is warranted, but our overall approach may be potentially useful in other settings, especially where performance entails several indicator variables," they conclude.
Contact: Mike Harley, Director, Inter-Authority Comparisons and Consultancy, Health Services Management Centre, University of Birmingham, UK Tel: +44 (0)121 414 7062 or +44 (0)121 414 7066 (secretary) Email: [email protected]