Abstract

Background: UK general practitioners largely conduct computer-mediated consultations. Although historically there were many small general practice (GP) computer suppliers, there are now around five widely used electronic patient record (EPR) systems. A new method has been developed for assessing the impact of the computer on doctor-patient interaction through detailed observation of the consultation and computer use.

Objective: To pilot the latest version of a method for measuring the difference in coding and prescribing times on two different brands of general practice EPR system.

Method: We compared two GP EPR systems by observing their use in real-life consultations. Three video cameras recorded the consultation and screen capture software recorded computer activity. We piloted semi-automated user action recording (UAR) software to record mouse and keyboard use, overcoming the limitations of manual measurement. Six trained raters analysed the videos using data capture software to measure the doctor-patient-computer interactions; we used intraclass correlation coefficients (ICC) to measure reliability.

Results: Raters demonstrated high inter-rater reliability for verbal interactions and prescribing (ICC 0.74 to 0.99), but were not reliable for measures of computer use. We used UAR to capture computer use and found it more reliable. Coded data entry time varied between the systems: 6.8 compared with 11.5 seconds (P = 0.006). However, the EPR system with the shorter coding time had a longer prescribing time: 27.5 compared with 23.7 seconds (P = 0.64).

Conclusion: This methodological development improves the reliability of our method for measuring the impact of different computer systems on the GP consultation. UAR added more objectivity to the observation of doctor-computer interactions. If larger studies were to reproduce the differences between computer systems demonstrated in this pilot, it might be possible to make objective comparisons between systems. © 2008 PHCSG, British Computer Society.
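The abstract does not specify which ICC form the raters' agreement was assessed with; as a minimal sketch, assuming a two-way random-effects, absolute-agreement, single-rater ICC(2,1) (Shrout and Fleiss) computed over a subjects-by-raters matrix of timed measures, the calculation would look like the following. All names and the demo data are illustrative, not taken from the study.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_subjects, k_raters) matrix, e.g. one timed
    consultation measure scored independently by each rater.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per-subject means
    col_means = ratings.mean(axis=0)  # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # raters
    sse = (np.sum((ratings - grand) ** 2)
           - (n - 1) * msr - (k - 1) * msc)
    mse = sse / ((n - 1) * (k - 1))                        # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative use: 8 consultations, 6 raters, synthetic data in which
# raters mostly agree, so the ICC comes out high.
rng = np.random.default_rng(0)
demo = rng.normal(10, 2, size=(8, 1)) + rng.normal(0, 0.5, size=(8, 6))
print(f"ICC(2,1) = {icc_2_1(demo):.2f}")
```

An ICC near 1 indicates that most variance lies between consultations rather than between raters, which is the sense in which the 0.74 to 0.99 values above indicate high inter-rater reliability.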

Original publication

DOI

10.14236/jhi.v16i2.683

Type

Journal article

Journal

Informatics in Primary Care

Publication Date

01/01/2008

Volume

16

Pages

119-127