Authors: Sarah Mullin, Peter Elkin
Journal Article: Studies in health technology and informatics, vol. 270, 2020
|
Authors: Abhishek C Sawant, Arnav Kumar, Wilmon Mccray, Sheldon Tetewsky, Linda Parone, Srilekha Sridhara, Meghana Prakash Hiriyur Prakash, Gary Tse, Tong Liu, Nidhi Kanwar, Aishwarya Bhardwaj, Sahoor Khan, Christopher Manion, Ankush Lahoti, Ashish Pershad, Peter Elkin, John Corbelli
Journal Article: Journal of geriatric cardiology : JGC, vol. 16, 2019
|
Authors: Carmelo Gaudioso, Peter Elkin
Journal Article: Studies in health technology and informatics, vol. 245, 2018
|
Authors: Peter L. Elkin, Steven H. Brown
Journal-article: International Archives of Medicine, 2016
|
Authors: Peter L Elkin, Daniel R Schlegel, Edwin Anand
Journal Article: Studies in health technology and informatics, vol. 221, 2016
|
Authors: Daniel R Schlegel, Chris Crowner, Peter L Elkin
Journal Article: Studies in health technology and informatics, vol. 216, 2015
|
Authors: Glenn T. Gobbel, Ruth Reeves, Shrimalini Jayaramaraja, Dario Giuse, Theodore Speroff, Steven H. Brown, Peter L. Elkin, Michael E. Matheny
Keywords: Bayesian prediction; CSV; FN; FP; IQV; MCVS; Machine learning; Multi-threaded Clinical Vocabulary Server; NLP; Natural language processing; Opt; Perf; RapTAT; Rapid Text Annotation Tool; SNOMED-CT; SVM; Systematized Nomenclature of Medicine-Clinical Terms; Systematized nomenclature of medicine; TP; UMLS; Unified Medical Language System; comma-separated value; false negative; false positive; index of qualitative variation; natural language processing; optimism; performance; support vector machine; true positive
Abstract: Rapid, automated determination of the mapping of free text phrases to pre-defined concepts could assist in the annotation of clinical notes and increase the speed of natural language processing systems. The aim of this study was to design and evaluate a token-order-specific naïve Bayes-based machine learning system (RapTAT) to predict associations between phrases and concepts. Performance was assessed using a reference standard generated from 2860 VA discharge summaries containing 567,520 phrases that had been mapped to 12,056 distinct Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) concepts by the MCVS natural language processing system. It was also assessed on the manually annotated, 2010 i2b2 challenge data. Performance was established with regard to precision, recall, and F-measure for each of the concepts within the VA documents using bootstrapping. Within that corpus, concepts identified by MCVS were broadly distributed throughout SNOMED CT, and the token-order-specific language model achieved better performance based on precision, recall, and F-measure (0.95±0.15, 0.96±0.16, and 0.95±0.16, respectively; mean±SD) than the bag-of-words based, naïve Bayes model (0.64±0.45, 0.61±0.46, and 0.60±0.45, respectively) that has previously been used for concept mapping. Precision, recall, and F-measure on the i2b2 test set were 92.9%, 85.9%, and 89.2% respectively, using the token-order-specific model. RapTAT required just 7.2 ms to map all phrases within a single discharge summary, and mapping rate did not decrease as the number of processed documents increased. The high performance attained by the tool in terms of both accuracy and speed was encouraging, and the mapping rate should be sufficient to support near-real-time, interactive annotation of medical narratives. These results demonstrate the feasibility of rapidly and accurately mapping phrases to a wide range of medical concepts based on a token-order-specific naïve Bayes model and machine learning.
Journal-article: Journal of Biomedical Informatics, vol. 48, pp. 54-65, 2014
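A minimal sketch of the modeling idea described in the abstract above: a naïve Bayes phrase-to-concept mapper that conditions token probabilities on token position (order-specific) versus a plain bag-of-words variant. Class names, the Laplace smoothing choice, and the toy SNOMED CT training pairs are illustrative assumptions, not RapTAT's implementation.

```python
# Sketch of a token-order-specific naive Bayes phrase-to-concept mapper,
# contrasted with a bag-of-words variant. Illustrative only, not RapTAT.
from collections import defaultdict
import math

class NaiveBayesMapper:
    def __init__(self, order_specific=True, alpha=1.0):
        self.order_specific = order_specific  # key tokens by position if True
        self.alpha = alpha                    # Laplace smoothing
        self.token_counts = defaultdict(lambda: defaultdict(float))
        self.concept_counts = defaultdict(float)
        self.vocab = set()

    def _features(self, phrase):
        tokens = phrase.lower().split()
        if self.order_specific:
            return [(i, t) for i, t in enumerate(tokens)]  # (position, token) pairs
        return tokens                                      # bag of words

    def train(self, phrase, concept):
        self.concept_counts[concept] += 1
        for f in self._features(phrase):
            self.token_counts[concept][f] += 1
            self.vocab.add(f)

    def map(self, phrase):
        total = sum(self.concept_counts.values())
        best, best_score = None, float("-inf")
        for concept, c_count in self.concept_counts.items():
            score = math.log(c_count / total)  # log prior
            denom = sum(self.token_counts[concept].values()) + self.alpha * len(self.vocab)
            for f in self._features(phrase):
                score += math.log((self.token_counts[concept][f] + self.alpha) / denom)
            if score > best_score:
                best, best_score = concept, score
        return best

mapper = NaiveBayesMapper(order_specific=True)
mapper.train("chest pain", "29857009 |Chest pain|")
mapper.train("pain in chest", "29857009 |Chest pain|")
mapper.train("abdominal pain", "21522001 |Abdominal pain|")
print(mapper.map("chest pain"))
```

Keying features on (position, token) pairs is what distinguishes the order-specific model from the bag-of-words baseline the abstract compares against.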
|
Authors: P. L. Elkin, S. H. Brown, G. Wright
Keywords: Medical informatics; bioinformatics; expert consensus; ontology; publication record
Abstract: This article is part of a For-Discussion-Section of Methods of Information in Medicine on "Biomedical Informatics: We are what we publish". It is introduced by an editorial and followed by a commentary paper with invited comments. In subsequent issues the discussion may continue through letters to the editor. Informatics experts have attempted to define the field via consensus projects, which have led to consensus statements by both AMIA and IMIA. We add to the output of this process the results of a study of the PubMed publications with abstracts from the field of Biomedical Informatics. We took the terms from the AMIA consensus document and the terms from the IMIA definitions of the field of Biomedical Informatics and combined them through human review to create the Health Informatics Ontology. We built a terminology server using the Intelligent Natural Language Processor (iNLP). Then we downloaded the entire set of articles in Medline identified by searching the literature for "Medical Informatics" OR "Bioinformatics". The articles were parsed with the joint AMIA/IMIA terminology and then again using SNOMED CT; for Bioinformatics they were also parsed using the HGNC Ontology. We identified 153,580 articles using "Medical Informatics" and 20,573 articles using "Bioinformatics". This resulted in 168,298 unique articles and an overlap of 5,855 articles. Of these, 62,244 articles (37%) had titles and abstracts that contained at least one concept from the Health Informatics Ontology. SNOMED CT indexing showed that the field interacts with nearly all clinical fields of medicine. Further defining the field by what we publish can add value to the consensus-driven processes that have been the mainstay of the efforts to date. Next steps should be to extract terms from the literature that are not yet covered and to create class hierarchies and relationships for this content. We should also examine frequently occurring MeSH terms as markers to define Biomedical Informatics. Greater understanding of the Biomedical Informatics literature has the potential to lead to improved self-awareness for our field.
Journal-article: Methods of Information in Medicine, vol. 52, pp. 538-546, 2013
|
Authors: Fern FitzHenry, Harvey J. Murff, Michael E. Matheny, Nancy Gentry, Elliot M. Fielstein, Steven H. Brown, Ruth M. Reeves, Dominik Aronsky, Peter L. Elkin, Vincent P. Messina, Theodore Speroff
Abstract: The aim of this study was to build electronic algorithms using a combination of structured data and natural language processing (NLP) of text notes for potential safety surveillance of 9 postoperative complications. Postoperative complications from 6 medical centers in the Southeastern United States were obtained from the Veterans Affairs Surgical Quality Improvement Program (VASQIP) registry. Development and test datasets were constructed using stratification by facility and date of procedure for patients with and without complications. Algorithms were developed from VASQIP outcome definitions using NLP-coded concepts, regular expressions, and structured data. The VASQIP nurse reviewer served as the reference standard for evaluating sensitivity and specificity. The algorithms were designed in the development dataset and evaluated in the test dataset. Sensitivity and specificity in the test set were 85% and 92% for acute renal failure, 80% and 93% for sepsis, 56% and 94% for deep vein thrombosis, 80% and 97% for pulmonary embolism, 88% and 89% for acute myocardial infarction, 88% and 92% for cardiac arrest, 80% and 90% for pneumonia, 95% and 80% for urinary tract infection, and 77% and 63% for wound infection, respectively. A third of the complications occurred outside of the hospital setting. Computer algorithms on data extracted from the electronic health record produced respectable sensitivity and specificity across a large sample of patients seen in 6 different medical centers. This study demonstrates the utility of combining NLP with structured data for mining the information contained within the electronic health record.
Journal-article: Medical Care, vol. 51, pp. 509-516, 2013
|
Authors: E Andrew Balas, Peter L Elkin
Keywords: CTSA; biomedical innovation; performance improvement; quality measures; technology transfer; translational research
Abstract: Studies have documented a 17-year transfer time from clinical trials to the practice of care. Launched in 2002, the National Institutes of Health (NIH) translational research initiative needs to develop metrics for impact assessment. A recent White House report highlighted that research and development productivity is declining: research spending has increased while new drug output remains flat. The goal of this study was to develop an expanded model of research-based innovation and performance thresholds of transfer from research to practice. Models for transfer of research to practice have been collected and reviewed. Subsequently, innovation pathways have been specified based on common characteristics. An integrated, intellectual property transfer model is described. The central but often disregarded role of research innovation disclosure is highlighted. Measures of research transfer and milestones of progress have been identified based on the Association of University Technology Managers 2012 performance reports. Numeric milestones of technology transfer are recommended at threshold (top 50%), target (top 25%), and stretch goal (top 10%) performance levels. Transfer measures and corresponding target levels include research spending to disclosure (<$1.88 million), disclosure to patents (>0.81), patents to start-up (>0.1), patents to licenses (>2.25), and average per-license income (>$48,000). Several limitations of measurement are described. Academic institutions should take strategic steps to bring innovation to the center of scholarly discussions. Research on research, particularly on pathways to disclosures, is needed to improve R&D productivity. Researchers should be informed about the technology transfer performance of their institution, and regulations should better support innovators.
Journal-article: Evaluation & the health professions, vol. 36, 2013
|
Authors: Jennifer H. Garvin, Peter L. Elkin, Shuying Shen, Steven Brown, Brett Trusko, Enlai Wang, Linda Hoke, Ylenia Quiaoit, Joan LaJoie, Mark G. Weiner, Pauline Graham, Theodore Speroff
Keywords: evaluation of health information technology; natural language processing (NLP); performance measurement
Abstract: Quality measurement is an important issue for the United States Department of Veterans Affairs (VA). In this study, we piloted the use of an informatics tool, the Multithreaded Clinical Vocabulary Server (MCVS), which extracted automatically whether the VA Office of Quality and Performance measures of quality of care were met for the completion of discharge instructions for inpatients with congestive heart failure. We used a single document, the discharge instructions, from one section of the medical records for 152 patients and developed a reference standard using two independent reviewers to assess performance. When evaluated against the reference standard, MCVS achieved a sensitivity of 0.87, a specificity of 0.86, and a positive predictive value of 0.90. The automated process using the discharge instruction document worked effectively. The use of the MCVS tool for concept-based indexing resulted in mostly accurate data capture regarding quality measurement, but improvements are needed to further increase the accuracy of data extraction.
Journal-article: Journal For Healthcare Quality, vol. 35, pp. 16-24, 2013
|
Authors: Peter L Elkin, Steven H Brown
Journal Article: Studies in health technology and informatics, vol. 192, 2013
|
Authors: Peter L Elkin
Keywords: Health Informatics; Human Factors Engineering; Patient Safety; Usability Testing; User-Centered Design
Abstract: Human factors engineering is a discipline that deals with computer and human systems and processes and provides a methodology for designing and evaluating systems as they interact with human beings. This article reviews important current and past efforts in human factors engineering in health informatics in the context of current trends in health informatics. The methodology of human factors engineering, and usability testing in particular, is reviewed. This methodology arises from the field of human factors engineering, which uses principles from cognitive science and applies them to implementations such as a computer-human interface and user-centered design. Patient safety and best practice of medicine require a partnership between patients, clinicians and computer systems that serves to improve the quality and safety of patient care. People approach work and problems with their own knowledge base and set of past experiences, and their ability to use systems properly and with low error rates is directly related to the usability as well as the utility of computer systems. Unusable systems have been responsible for medical error and patient harm and have even led to the death of patients and increased mortality rates. Electronic Health Record and Computerized Physician Order Entry systems, like any medical device, should come with a known safety profile that minimizes medical error and harm.
Journal-article: Healthcare informatics research, vol. 18, 2012
|
Authors: Casimir A Kulikowski, Edward H Shortliffe, Leanne M Currie, Peter L Elkin, Lawrence E Hunter, Todd R Johnson, Ira J Kalet, Leslie A Lenert, Mark A Musen, Judy G Ozbolt, Jack W Smith, Peter Z Tarczy-Hornoch, Jeffrey J Williamson
Abstract: The AMIA biomedical informatics (BMI) core competencies have been designed to support and guide graduate education in BMI, the core scientific discipline underlying the breadth of the field's research, practice, and education. The core definition of BMI adopted by AMIA specifies that BMI is 'the interdisciplinary field that studies and pursues the effective uses of biomedical data, information, and knowledge for scientific inquiry, problem solving and decision making, motivated by efforts to improve human health.' Application areas range from bioinformatics to clinical and public health informatics and span the spectrum from the molecular to population levels of health and biomedicine. The shared core informatics competencies of BMI draw on the practical experience of many specific informatics sub-disciplines. The AMIA BMI analysis highlights the central shared set of competencies that should guide curriculum design and that graduate students should be expected to master.
Journal-article: Journal of the American Medical Informatics Association : JAMIA, vol. 19, 2012
|
Authors: Peter L. Elkin, Steven H. Brown
Book-chapter: Health Informatics, pp. 71-94, 2012
|
Authors: Steven Brown, Trent S. Rosenbloom, Shawn P. Hardenbrook, Terry Clark, Elliot Fielstein, Peter Elkin, Ted Speroff
Journal-article: Journal of Data and Information Quality, vol. 3, pp. 1-17, 2012
|
Authors: Michael E. Matheny, Fern FitzHenry, Theodore Speroff, Jennifer K. Green, Michelle L. Griffith, Eduard E. Vasilevskis, Elliot M. Fielstein, Peter L. Elkin, Steven H. Brown
Abstract: The majority of clinical symptoms are stored as free text in the clinical record, and this information can inform clinical decision support and automated surveillance efforts if it can be accurately processed into computer interpretable data. We developed rule-based algorithms and evaluated a natural language processing (NLP) system for infectious symptom detection using clinical narratives. Training (60) and testing (444) documents were randomly selected from VA emergency department, urgent care, and primary care records. Each document was processed with NLP and independently manually reviewed by two clinicians with adjudication by referee. Infectious symptom detection rules were developed in the training set using keywords and SNOMED-CT concepts, and subsequently evaluated using the testing set. Overall symptom detection performance was measured with a precision of 0.91, a recall of 0.84, and an F measure of 0.87. Overall symptom detection with assertion performance was measured with a precision of 0.67, a recall of 0.62, and an F measure of 0.64. Among those instances in which the automated system matched the reference set determination for symptom, the system correctly detected 84.7% of positive assertions, 75.1% of negative assertions, and 0.7% of uncertain assertions. This work demonstrates how processed text could enable detection of non-specific symptom clusters for use in automated surveillance activities.
Journal-article: International Journal of Medical Informatics, vol. 81, pp. 143-156, 2012
|
Authors: Peter L. Elkin, David A. Froehling, Dietlind L. Wahner-Roedler, Steven H. Brown, Kent R. Bailey
Abstract: An effective national biosurveillance system expedites outbreak recognition and facilitates response coordination at the federal, state, and local levels. The BioSense system, used at the Centers for Disease Control and Prevention, incorporates chief complaints but not data from the whole encounter note into its surveillance algorithms. To evaluate whether biosurveillance by using data from the whole encounter note is superior to that using data from the chief complaint field alone. 6-year retrospective case-control cohort study. Mayo Clinic, Rochester, Minnesota. 17,243 persons tested for influenza A or B virus between 1 January 2000 and 31 December 2006. The accuracy of a model based on signs and symptoms to predict influenza virus infection in patients with upper respiratory tract symptoms, and the ability of a natural language processing technique to identify definitional clinical features from free-text encounter notes. Surveillance based on the whole encounter note was superior to the chief complaint field alone. For the case definition used by surveillance of the whole encounter note, the normalized partial area under the receiver-operating characteristic curve (specificity, 0.1 to 0.4) for surveillance using the whole encounter note was 92.9% versus 70.3% for surveillance with the chief complaint field (difference, 22.6%; P < 0.001). Comparison of the 2 models at the fixed specificity of 0.4 resulted in sensitivities of 89.0% and 74.4%, respectively (P < 0.001). The relative risk for missing a true case of influenza was 2.3 by using the chief complaint field model. Participants were seen at 1 tertiary referral center. The cost of comprehensive biosurveillance monitoring was not studied. A biosurveillance model for influenza using the whole encounter note is more accurate than a model that uses only the chief complaint field. Because case-defining signs and symptoms of influenza are commonly available in health records, the investigators believe that the national strategy for biosurveillance should be changed to incorporate data from the whole health record. Centers for Disease Control and Prevention.
Journal-article: Annals of Internal Medicine, vol. 156, pp. 11, 2012
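A hedged sketch of the headline metric in the abstract above: the normalized partial area under the ROC curve over a fixed specificity band (0.1 to 0.4). The paper's exact normalization is not stated in the abstract; this version divides the partial area by the band width so a perfect test scores 1.0, and the toy data are invented.

```python
# Sketch: normalized partial AUC over a specificity band, assuming
# "divide by band width" normalization. Toy data only.
import numpy as np
from sklearn.metrics import roc_curve

def normalized_partial_auc(y_true, y_score, spec_lo=0.1, spec_hi=0.4):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    spec = 1.0 - fpr                      # specificity decreases along the ROC curve
    grid = np.linspace(spec_lo, spec_hi, 200)
    # np.interp needs increasing x, so reverse the (specificity, sensitivity) pairs
    sens = np.interp(grid, spec[::-1], tpr[::-1])
    return np.trapz(sens, grid) / (spec_hi - spec_lo)

# invented scores for influenza-positive (1) vs negative (0) encounters
rng = np.random.default_rng(0)
y = np.r_[np.ones(100), np.zeros(400)]
scores = np.r_[rng.normal(1.0, 1.0, 100), rng.normal(0.0, 1.0, 400)]
print(round(normalized_partial_auc(y, scores), 3))
```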
|
Authors: Peter L. Elkin, Mark Samuel Tuttle
Book-chapter: Health Informatics, pp. 1-4, 2012
|
Authors: Peter L. Elkin, Mark Samuel Tuttle, Marjorie Rallins, Jennifer Trajkovski, Elizabeth Lumakovska, Steven H. Brown
Book-chapter: Health Informatics, pp. 125-175, 2012
|
Authors: Peter L. Elkin, Mark Samuel Tuttle
Book-chapter: Health Informatics, pp. 211-212, 2012
|
Authors: Peter L. Elkin, Mark Samuel Tuttle
Book-chapter: Health Informatics, pp. 177-209, 2012
|
Authors: Jennifer H. Garvin, Peter L. Elkin, Shuying Shen, Steven Brown, Brett Trusko, Enlai Wang, Linda Hoke, Ylenia Quiaoit, Joan LaJoie, Mark G. Weiner, Pauline Graham, Theodore Speroff
Journal-article: Journal for Healthcare Quality, pp. n/a-n/a, 2012
|
Authors: Peter L. Elkin, Mark Samuel Tuttle
Book-chapter: Health Informatics, pp. 5-10, 2012
|
Authors: Peter L. Elkin, Steven H. Brown
Book-chapter: Health Informatics, pp. 11-50, 2012
|
Authors: M. Ball, C. Boyer, P. L. Elkin, K. Ishikawa, C. Jaffe, A. March, H. Marin, J. Mykkänen, O. Rienhoff, J. Silva, D. F. Sittig, J. Talmon, C. Lovis
Journal-article: Yearbook of Medical Informatics, vol. 20, pp. 73-82, 2011
|
Authors: A Y S Lau, K A Siek, L Fernandez-Luque, H Tange, P Chhanabhai, S Y W Li, P L Elkin, A Arjabi, L Walczowski, C S Ang, G Eysenbach
Journal Article: Yearbook of medical informatics, vol. 6, 2011
|
Authors: K. A. Siek, L. Fernandez-Luque, H. Tange, P. Chhanabhai, S. Y. W. Li, P. L. Elkin, A. Arjabi, L. Walczowski, C. S. Ang, G. Eysenbach, A. Y. S. Lau
Journal-article: Yearbook of Medical Informatics, vol. 20, pp. 131-138, 2011
|
Authors: Peter L Elkin, John S Carter, Manasi Nabar, Mark Tuttle, Michael Lincoln, Steven H Brown
Journal Article: Studies in health technology and informatics, vol. 166, 2011
|
Authors: C Lovis, M Ball, C Boyer, P L Elkin, K Ishikawa, C Jaffe, A March, H Marin, J Mykkänen, O Rienhoff, J Silva, D F Sittig, J Talmon
Congresses: Yearbook of medical informatics, vol. 6, 2011
|
Authors: Harvey J. Murff, Fern FitzHenry, Michael E. Matheny, Nancy Gentry, Kristen L. Kotter, Kimberly Crimin, Robert S. Dittus, Amy K. Rosen, Peter L. Elkin, Steven H. Brown, Theodore Speroff
Abstract: Currently most automated methods to identify patient safety occurrences rely on administrative data codes; however, free-text searches of electronic medical records could represent an additional surveillance approach. To evaluate a natural language processing search approach to identify postoperative surgical complications within a comprehensive electronic medical record. Cross-sectional study involving 2974 patients undergoing inpatient surgical procedures at 6 Veterans Health Administration (VHA) medical centers from 1999 to 2006. Postoperative occurrences of acute renal failure requiring dialysis, deep vein thrombosis, pulmonary embolism, sepsis, pneumonia, or myocardial infarction identified through medical record review as part of the VA Surgical Quality Improvement Program. We determined the sensitivity and specificity of the natural language processing approach to identify these complications and compared its performance with patient safety indicators that use discharge coding information. The proportion of postoperative events for each sample was 2% (39 of 1924) for acute renal failure requiring dialysis, 0.7% (18 of 2327) for pulmonary embolism, 1% (29 of 2327) for deep vein thrombosis, 7% (61 of 866) for sepsis, 16% (222 of 1405) for pneumonia, and 2% (35 of 1822) for myocardial infarction. Natural language processing correctly identified 82% (95% confidence interval [CI], 67%-91%) of acute renal failure cases compared with 38% (95% CI, 25%-54%) for patient safety indicators. Similar results were obtained for venous thromboembolism (59%, 95% CI, 44%-72% vs 46%, 95% CI, 32%-60%), pneumonia (64%, 95% CI, 58%-70% vs 5%, 95% CI, 3%-9%), sepsis (89%, 95% CI, 78%-94% vs 34%, 95% CI, 24%-47%), and postoperative myocardial infarction (91%, 95% CI, 78%-97% vs 89%, 95% CI, 74%-96%). Both natural language processing and patient safety indicators were highly specific for these diagnoses. Among patients undergoing inpatient surgical procedures at VA medical centers, natural language processing analysis of electronic medical records to identify postoperative complications had higher sensitivity and lower specificity compared with patient safety indicators based on discharge coding.
Journal-article: JAMA, vol. 306, 2011
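The sensitivity confidence intervals quoted above can be reproduced with a standard binomial interval. A sketch using the Wilson score interval follows; the abstract does not state which interval method the authors used, and the count of 32 detected out of 39 acute renal failure cases is inferred from the reported 82%.

```python
# Sketch: Wilson score interval for a sensitivity estimate. The method
# and the 32/39 count are assumptions made for illustration.
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_interval(32, 39)
print(f"sensitivity {32/39:.0%}, 95% CI {lo:.0%}-{hi:.0%}")
```

With these counts the interval evaluates to roughly 67%-91%, matching the reported CI to rounding.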
|
Authors: Dietlind L. Wahner-Roedler, Jeffrey M. Thompson, Connie A. Luedtke, Susan M. King, Stephen S. Cha, Peter L. Elkin, Barbara K. Bruce, Cynthia O. Townsend, Jody R. Bergeson, Andrea L. Eickhoff, Laura L. Loehrer, Amit Sood, Brent A. Bauer
Abstract: Most patients with fibromyalgia use complementary and alternative medicine (CAM). Properly designed controlled trials are necessary to assess the effectiveness of these practices. This study was a randomized, double-blind, placebo-controlled, early phase trial. Fifty patients seen at a fibromyalgia outpatient treatment program were randomly assigned to a daily soy or placebo (casein) shake. Outcome measures were scores of the Fibromyalgia Impact Questionnaire (FIQ) and the Center for Epidemiologic Studies Depression Scale (CES-D) at baseline and after 6 weeks of intervention. Analysis was with standard statistics based on the null hypothesis, and separation test for early phase CAM comparative trials. Twenty-eight patients completed the study. Use of standard statistics with intent-to-treat analysis showed that total FIQ scores decreased by 14% in the soy group (P = .02) and by 18% in the placebo group (P < .001). The difference in change in scores between the groups was not significant (P = .16). With the same analysis, CES-D scores decreased in the soy group by 16% (P = .004) and in the placebo group by 15% (P = .05). The change in scores was similar in the groups (P = .83). Results of statistical analysis using the separation test and intent-to-treat analysis revealed no benefit of soy compared with placebo. Shakes that contain soy and shakes that contain casein, when combined with a multidisciplinary fibromyalgia treatment program, provide a decrease in fibromyalgia symptoms. Separation between the effects of soy and casein (control) shakes did not favor the intervention. Therefore, large-sample studies using soy for patients with fibromyalgia are probably not indicated.
Journal-article: Evidence-Based Complementary and Alternative Medicine, vol. 2011, pp. 1-6, 2011
|
Authors: Michael E Matheny, Fern Fitzhenry, Theodore Speroff, Jacob Hathaway, Harvey J Murff, Steven H Brown, Elliot M Fielstein, Robert S Dittus, Peter L Elkin
Journal Article: AMIA ... Annual Symposium proceedings. AMIA Symposium, vol. 2009, 2010
|
Authors: Peter L Elkin, Brett E Trusko, Ross Koppel, Ted Speroff, Daniel Mohrer, Saoussen Sakji, Inna Gurewitz, Mark Tuttle, Steven H Brown
Journal Article: Studies in health technology and informatics, vol. 155, 2010
|
Authors: Peter L. Elkin, Mark Liebow, Brent A. Bauer, Swarna Chaliki, Dietlind Wahner-Roedler, John Bundrick, Mark Lee, Steven H. Brown, David Froehling, Kent Bailey, Kathleen Famiglietti, Richard Kim, Ed Hoffer, Mitchell Feldman, G. Octo Barnett
Abstract: In an era of short inpatient stays, residents may overlook relevant elements of the differential diagnosis as they try to evaluate and treat patients. However, if a resident's first principal diagnosis is wrong, the patient's appropriate evaluation and treatment may take longer, cost more, and lead to worse outcomes. A diagnostic decision support system may lead to the generation of a broader differential diagnosis that more often includes the correct diagnosis, permitting a shorter, more effective, and less costly hospital stay. We provided residents on General Medicine services access to DXplain, an established computer-based diagnostic decision support system, for 6 months. We compared charges and cost of service for diagnostically challenging cases seen during the fourth through sixth month of access to DXplain (intervention period) to control cases seen in the 6 months before the system was made available. 564 cases were identified as diagnostically challenging by our criteria during the intervention period along with 1173 cases during the control period. Total charges were $1281 lower (p=.006), Medicare Part A charges $1032 lower (p=0.006) and cost of service $990 lower (p=0.001) per admission in the intervention cases than in control cases. Using DXplain on all diagnostically challenging cases might save our medical center over $2,000,000 a year on the General Medicine Services alone. Using clinical diagnostic decision support systems may improve quality and decrease cost substantially at teaching hospitals.
Journal-article: International Journal of Medical Informatics, vol. 79, pp. 772-777, 2010
|
Authors: Brett Trusko, S. Trent Rosenbloom, Diane Montella, James C. Jackson, Fern Fitzhenry, Steven H. Brown, Peter L. Elkin, Elliot Fielstein, Kristen Kotter, Mark Tuttle, Richard J. Iannelli, Theodore Speroff
Abstract: The authors sought to evaluate how well the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED-CT) controlled vocabulary represents terms commonly used clinically when documenting posttraumatic stress disorder (PTSD). A list was constructed based on the PTSD criteria in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV; American Psychiatric Association, 1994), symptom assessment instruments, and publications. Although two teams mapping the terms to SNOMED-CT differed in their approach, the consensus mapping accounted for 91% of the 153 PTSD terms. They found that the words used by clinicians in describing PTSD symptoms are represented in SNOMED-CT. These results can be used to codify mental health text reports for health information technology applications such as automated chart abstraction, algorithms for identifying documentation of symptoms representing PTSD in clinical notes, and clinical decision support.
Journal-article: Journal of Traumatic Stress, vol. 23, pp. 794-801, 2010
|
Authors: Peter L. Elkin, David Froehling, Dietlind Wahner-Roedler, Brett Trusko, Gail Welsh, S. Trent Rosenbloom, Theodore Speroff, Steven H. Brown
Abstract: In this manuscript we report an evaluation of the reliability of clinical research rule creation by multiple clinicians using the Health Archetype Language (HAL-42) and its user interface. HAL-42 is a language that allows real-time epidemiological inquiry using automatically derived clinical encodings with any health ontology. This evaluation used SNOMED CT as the underlying ontology. The inquiries were performed on a population of 17,731 patients whose 50,000 clinical records have all been fully encoded in SNOMED CT. Four subject matter experts (SMEs) were asked independently to encode and run 10 rules/studies. The inter-rater agreement was 74.8% (p=0.6526), with a Kappa statistic of 0.49217 (p=0.5722). The ten rules were divided into three easy, four moderate, and three complex rules. There was no significant difference in the SMEs' agreement when representing easy and complex rules (p=0.6243). We conclude that although the HAL-42 language is usable enough to achieve reasonable inter-rater reliability, some training will be necessary to reach high levels of reliability for ad hoc queries. We also conclude that SMEs are just as competent to perform complex queries as easy queries of ontologically indexed clinical data.
Journal-article: International Journal of Medical Informatics, vol. 79, pp. e71-e75, 2010
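A minimal illustration of the chance-corrected agreement statistic reported above. Whether a pairwise (Cohen) or multi-rater (Fleiss) kappa was computed across the four SMEs is not stated in the abstract; the sketch below shows the two-rater case with invented labels.

```python
# Sketch: raw agreement vs chance-corrected agreement (Cohen's kappa)
# between two raters. Labels are invented for illustration.
from sklearn.metrics import cohen_kappa_score

rater_a = ["match", "match", "mismatch", "match", "mismatch", "match", "match", "mismatch"]
rater_b = ["match", "mismatch", "mismatch", "match", "match", "match", "match", "mismatch"]

raw_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"raw agreement: {raw_agreement:.1%}")
print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.3f}")
```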
|
Authors: Peter L Elkin, Mark S Tuttle, Brett E Trusko, Steven H Brown
Abstract: BioProspecting is a novel approach that enabled our team to mine data related to genetic markers from the New England Journal of Medicine (NEJM) utilizing SNOMED CT and the Human Gene Ontology (HUGO). The Biomedical Informatics Research Collaborative was able to link genes and disorders using the Multi-threaded Clinical Vocabulary Server (MCVS) and natural language processing engine, whose output creates an ontology-network using the semantic encodings of the literature that is organized by these two terminologies. We identified relationships between (genes or proteins) and (diseases or drugs) as linked by metabolic functions and identified potentially novel functional relationships between, for example, genes and diseases (e.g. Article #1 ([Gene - IL27] => {Enzyme - Dipeptidyl Carboxypeptidase 1}) and Article #2 ({Enzyme - Dipeptidyl Carboxypeptidase 1} <= [Disorder - Type II DM]) showing a metabolic link between IL27 and Type II DM). In this manuscript we describe our method for developing the database and its content as well as its potential to assist in the discovery of novel markers and drugs.
Journal-article: BMC Bioinformatics, vol. 10, pp. S9, 2009
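A sketch of the linking idea the abstract describes: if one article connects a gene to an enzyme and another article connects the same enzyme to a disorder, the shared enzyme suggests a candidate gene-disorder relationship. The NLP/encoding step performed by MCVS is assumed to have already produced the assertions; the data structures and names are illustrative.

```python
# Sketch: bridge gene-enzyme and disorder-enzyme assertions mined from
# different articles via the shared enzyme. Illustrative, not the MCVS pipeline.
from collections import defaultdict
from itertools import product

# (article_id, subject, enzyme) assertions, e.g. from encoded abstracts
gene_enzyme = [("article1", "IL27", "Dipeptidyl carboxypeptidase 1")]
disorder_enzyme = [("article2", "Type II diabetes mellitus", "Dipeptidyl carboxypeptidase 1")]

by_enzyme_genes = defaultdict(set)
by_enzyme_disorders = defaultdict(set)
for art, gene, enzyme in gene_enzyme:
    by_enzyme_genes[enzyme].add((gene, art))
for art, disorder, enzyme in disorder_enzyme:
    by_enzyme_disorders[enzyme].add((disorder, art))

# candidate gene-disorder links bridged by a shared enzyme
for enzyme in by_enzyme_genes.keys() & by_enzyme_disorders.keys():
    for (gene, a1), (disorder, a2) in product(by_enzyme_genes[enzyme],
                                              by_enzyme_disorders[enzyme]):
        print(f"{gene} --[{enzyme}]--> {disorder}  (via {a1}, {a2})")
```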
|
Authors: V Herasevich, BE Trusko, M Yilmaz, M Tsapenko, R Kashyap, AC Hanson, J Patrick, P Elkin, O Gajic
Proceedings-article: B44. ORGANIZATION AND GUIDELINES IN THE ICU, 2009
|
Authors: S. T. Rosenbloom, S. H. Brown, D. Froehling, B. A. Bauer, D. L. Wahner-Roedler, W. M. Gregg, P. L. Elkin
Abstract: Interface terminologies are designed to support interactions between humans and structured medical information. In particular, many interface terminologies have been developed for structured computer based documentation systems. Experts and policy-makers have recommended that interface terminologies be mapped to reference terminologies. The goal of the current study was to evaluate how well the reference terminology SNOMED CT could map to and represent two interface terminologies, MEDCIN and the Categorical Health Information Structured Lexicon (CHISL). Automated mappings between SNOMED CT and 500 terms from each of the two interface terminologies were evaluated by human reviewers, who also searched SNOMED CT to identify better mappings when this was judged to be necessary. Reviewers judged whether they believed the interface terms to be clinically appropriate, whether the terms were covered by SNOMED CT concepts and whether the terms' implied semantic structure could be represented by SNOMED CT. Outcomes included concept coverage by SNOMED CT for study terms and their implied semantics. Agreement statistics and compositionality measures were calculated. The SNOMED CT terminology contained concepts to represent 92.4% of MEDCIN and 95.9% of CHISL terms. Semantic structures implied by study terms were less well covered, with some complex compositional expressions requiring semantics not present in SNOMED CT. Among sampled terms, those from MEDCIN were more complex than those from CHISL, containing an average 3.8 versus 1.8 atomic concepts respectively, p<0.001. Our findings support using SNOMED CT to provide standardized representations of information created using these two terminologies, but suggest that enriching SNOMED CT semantics would improve representation of the external terms.
Journal-article: Journal of the American Medical Informatics Association, vol. 16, pp. 81-88, 2009
|
Authors: S. T. Rosenbloom, R. A. Miller, K. B. Johnson, P. L. Elkin, S. H. Brown
Abstract: Evaluations of individual terminology systems should be driven in part by the intended usages of such systems. Clinical interface terminologies support interactions between healthcare providers and computer-based applications. They aid practitioners in converting clinical "free text" thoughts into the structured, formal data representations used internally by application programs. Interface terminologies also serve the important role of presenting existing stored, encoded data to end users in human-understandable and actionable formats. The authors present a model for evaluating functional utility of interface terminologies based on these intended uses. Specific parameters defined in the manuscript comprise the metrics for the evaluation model. Parameters include concept accuracy, term expressivity, degree of semantic consistency for term construction and selection, adequacy of assertional knowledge supporting concepts, degree of complexity of pre-coordinated concepts, and the "human readability" of the terminology. The fundamental metric is how well the interface terminology performs in supporting correct, complete, and efficient data encoding or review by humans. Authors provide examples demonstrating performance of the proposed evaluation model in selected instances. A formal evaluation model will permit investigators to evaluate interface terminologies using a consistent and principled approach. Terminology developers and evaluators can apply the proposed model to identify areas for improving interface terminologies.
Journal-article: Journal of the American Medical Informatics Association, vol. 15, pp. 65-76, 2008
|
Authors: Peter L Elkin, Steven H Brown, Andrew Balas, Zelalem Temesgen, Dietlind Wahner-Roedler, David Froehling, Mark Liebow, Brett Trusko, S Trent Rosenbloom, Greg Poland
Journal Article: Studies in health technology and informatics, vol. 136, 2008
|
Authors: M. Cabrer, M. Hansen, C. Paton, P. L. Elkin, W. S. Erdley, P. J. Murray
Journal-article: Yearbook of Medical Informatics, vol. 17, pp. 44-51, 2008
|
Authors: Dietlind L. Wahner-Roedler, Swarna S. Chaliki, Brent A. Bauer, John B. Bundrick, Larry R. Bergstrom, Mark C. Lee, Stephen S. Cha, Peter L. Elkin
Abstract: Advances in medicine have led to a multitude of diagnostic tests. The contribution of the clinical skills of the general internist in the context of all these advances is unknown. Our objective was to assess the relative contributions of clinical skills and diagnostic test results in arriving at a final diagnosis. Records were retrospectively reviewed from 248 consecutive patients admitted to a general internal medicine hospital service during a 3-month period in 2000. All diagnostic evaluations that yielded the final diagnosis were recorded along with the date and time they were performed. Diagnostic credit was given to the evaluation that yielded the diagnosis at the earliest point in time. All cases had a firm diagnosis by 3 months after hospitalization. Of the 248 patients, 246 received a final diagnosis during hospitalization. The diagnoses were made by use of the clinical judgement of the general internist in 50.4% of the cases, a radiologic study in 31.7%, a blood test or culture result in 9.4%, biopsy findings in 3.3% and various other diagnostic studies (endoscopy, echocardiography, electromyography and electroencephalography) in 5.2%. Clinicians provided the correct diagnosis significantly more often than radiologic studies (P = 0.0015), which was the next most useful type of diagnostic evaluation. Although technology has become increasingly available in clinical practice, clinical expertise and skills are still important factors with respect to making correct, timely diagnoses in hospitalized patients.
Journal-article: Journal of Evaluation in Clinical Practice, vol. 13, pp. 321-325, 2007
|
Authors: Peter L Elkin, David Froehling, Brent A Bauer, Dietlind Wahner-Roedler, S Trent Rosenbloom, Kent Bailey, Steven H Brown
Journal Article: Studies in health technology and informatics, vol. 129, 2007
|
Authors: David A. Froehling, Paul R. Daniels, Stephen J. Swensen, John A. Heit, Jayawant N. Mandrekar, Jay H. Ryu, Peter L. Elkin
Abstract: To determine the sensitivity and specificity of a quantitative plasma fibrin D-dimer latex immunoassay (LIA) for the diagnosis of acute pulmonary embolism. Study subjects were Mayo Clinic Rochester inpatients and outpatients with suspected acute pulmonary embolism; all had undergone quantitative D-dimer LIA testing and multidetector-row computed tomographic (CT) angiography between August 3, 2001, and November 10, 2003. Multidetector-row CT angiography was the diagnostic reference standard. Of 1355 CT studies, 208 (15%) were positive for acute pulmonary embolism. Median D-dimer levels were significantly higher for patients with acute pulmonary embolism (1425 ng/mL) than for patients without (500 ng/mL) (P<.001). The highest specificity that optimizes sensitivity for acute pulmonary embolism was achieved by using a discriminant value of 300 ng/mL, which yielded a sensitivity of 0.94 (95% confidence interval [CI], 0.89-0.97), a specificity of 0.27 (95% CI, 0.25-0.30), and a negative predictive value of 0.96 (95% CI, 0.93-0.98). The quantitative D-dimer LIA with a discriminant value of 300 ng/mL had high sensitivity and high negative predictive value but low specificity for the diagnosis of acute pulmonary embolism. On the basis of these results, we believe that a negative quantitative D-dimer LIA result and a low pretest probability of thromboembolism together are sufficient to exclude acute pulmonary embolism.
Journal-article: Mayo Clinic Proceedings, vol. 82, pp. 556-560, 2007
|
Authors: Steven H. Brown, Theodore Speroff, Elliot M. Fielstein, Brent A. Bauer, Dietlind L. Wahner-Roedler, Robert Greevy, Peter L. Elkin
Abstract: To evaluate an electronic quality (eQuality) assessment tool for dictated disability examination records. We applied automated concept-based indexing techniques to automated quality screening of Department of Veterans Affairs spine disability examinations that had previously undergone gold standard quality review by human experts using established quality indicators. We developed automated quality screening rules and refined them iteratively on a training set of disability examination reports. We applied the resulting rules to a novel test set of spine disability examination reports. The initial data set was composed of all electronically available examination reports (N=125,576) finalized by the Veterans Health Administration between July and September 2001. Sensitivity was 91% for the training set and 87% for the test set (P=.02). Specificity was 74% for the training set and 71% for the test set (P=.44). Human performance ranged from 4% to 6% higher (P<.001) than the eQuality tool in sensitivity and 13% to 16% higher in specificity (P<.001). In addition, the eQuality tool was equivalent or higher in sensitivity for 5 of 9 individual quality indicators. The results demonstrate that a properly authored computer-based expert systems approach can perform quality measurement as well as human reviewers for many quality indicators. Although automation will likely always rely on expert guidance to be accurate and meaningful, eQuality is an important new method to assist clinicians in their efforts to practice safe and effective medicine.
Journal-article: Mayo Clinic Proceedings, vol. 81, pp. 1472-1481, 2006
|
Authors: Casey S Husser, Jeffrey R Buchhalter, O Scott Raffo, Amnon Shabo, Steven H Brown, Karen E Lee, Peter L Elkin
Journal Article: Methods in molecular biology (Clifton, N.J.), vol. 316, 2006
|
Authors: Dietlind L. Wahner-Roedler, Ann Vincent, Peter L. Elkin, Laura L. Loehrer, Stephen S. Cha, Brent A. Bauer
Abstract: The purpose of this study was to evaluate the attitudes of physicians at an academic medical center toward complementary and alternative medicine (CAM) therapies and the physicians' knowledge base regarding common CAM therapies. A link to a Web-based survey was e-mailed to 660 internists at Mayo Clinic in Rochester, MN, USA. Physicians were asked about their attitudes toward CAM in general and their knowledge regarding specific CAM therapies. The level of evidence a physician would require before incorporating such therapies into clinical care was also assessed. Of the 233 physicians responding to the survey, 76% had never referred a patient to a CAM practitioner. However, 44% stated that they would refer a patient if a CAM practitioner were available at their institution. Fifty-seven percent of physicians thought that incorporating CAM therapies would have a positive effect on patient satisfaction, and 48% believed that offering CAM would attract more patients. Most physicians agreed that some CAM therapies hold promise for the treatment of symptoms or diseases, but most of them were not comfortable in counseling their patients about most CAM treatments. Prospective, randomized controlled trials were considered the level of evidence required for most physicians to consider incorporating a CAM therapy into their practice. The results of this survey provide insight into the attitudes of physicians toward CAM at an academic medical center. This study highlights the need for educational interventions and the importance of providing physicians ready access to evidence-based information regarding CAM.
Journal-article: Evidence-Based Complementary and Alternative Medicine, vol. 3, pp. 495-501, 2006
|
Authors: Peter L. Elkin, Steven H. Brown, Casey S. Husser, Brent A. Bauer, Dietlind Wahner-Roedler, S. Trent Rosenbloom, Ted Speroff
Abstract: To evaluate the ability of SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms) version 1.0 to represent the most common problems seen at the Mayo Clinic in Rochester, Minn. We selected the 4996 most common nonduplicated text strings from the Mayo Master Sheet Index that describe patient problems associated with inpatient and outpatient episodes of care. From July 2003 through January 2004, 2 physician reviewers compared the Master Sheet Index text with the SNOMED CT terms that were automatically mapped by a vocabulary server or that they identified using a vocabulary browser and rated the "correctness" of the match. If the 2 reviewers disagreed, a third reviewer adjudicated. We evaluated the specificity, sensitivity, and positive predictive value of SNOMED CT. Of the 4996 problems in the test set, SNOMED CT correctly identified 4568 terms (true-positive results); 36 terms were true negatives, 9 terms were false positives, and 383 terms were false negatives. SNOMED CT had a sensitivity of 92.3%, a specificity of 80.0%, and a positive predictive value of 99.8%. SNOMED CT, when used as a compositional terminology, can exactly represent most (92.3%) of the terms used commonly in medical problem lists. Improvements to synonymy and adding missing modifiers would lead to greater coverage of common problem statements. Health care organizations should be encouraged and provided incentives to begin adopting SNOMED CT to drive their decision-support applications.
Journal-article: Mayo Clinic Proceedings, vol. 81, pp. 741-748, 2006
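The abstract above reports the full 2x2 counts, so the quoted sensitivity, specificity, and positive predictive value can be checked directly:

```python
# Counts taken from the abstract: 4568 TP, 36 TN, 9 FP, 383 FN.
tp, tn, fp, fn = 4568, 36, 9, 383

sensitivity = tp / (tp + fn)   # 4568 / 4951
specificity = tn / (tn + fp)   # 36 / 45
ppv         = tp / (tp + fp)   # 4568 / 4577

print(f"sensitivity = {sensitivity:.1%}")  # 92.3%
print(f"specificity = {specificity:.1%}")  # 80.0%
print(f"PPV         = {ppv:.1%}")          # 99.8%
```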
|
Authors: S. T. Rosenbloom, R. A. Miller, K. B. Johnson, P. L. Elkin, S. H. Brown
Abstract: Previous investigators have defined clinical interface terminology as a systematic collection of health care-related phrases (terms) that supports clinicians' entry of patient-related information into computer programs, such as clinical "note capture" and decision support tools. Interface terminologies also can facilitate display of computer-stored patient information to clinician-users. Interface terminologies "interface" between clinicians' own unfettered, colloquial conceptualizations of patient descriptors and the more structured, coded internal data elements used by specific health care application programs. The intended uses of a terminology determine its conceptual underpinnings, structure, and content. As a result, the desiderata for interface terminologies differ from desiderata for health care-related terminologies used for storage (e.g., SNOMED-CT), information retrieval (e.g., MeSH), and classification (e.g., ICD9-CM). Necessary but not sufficient attributes for an interface terminology include adequate synonym coverage, presence of relevant assertional knowledge, and a balance between pre- and post-coordination. To place interface terminologies in context, this article reviews historical goals and challenges of clinical terminology development in general and then focuses on the unique features of interface terminologies.
Journal-article: Journal of the American Medical Informatics Association, vol. 13, pp. 277-288, 2006
|
Authors: Dietlind L. Wahner-Roedler, Peter L. Elkin, Ann Vincent, Laura L. Loehrer, Stephen S. Cha, Brent A. Bauer
Journal-article: Gender Medicine, vol. 3, pp. S66-S67, 2006
|
Authors: Naseema Gangat, Yi Lin, Peter L Elkin
Journal-article: Mayo Clinic proceedings, vol. 80, 2005
|
Authors: Dietlind L. Wahner-Roedler, Peter L. Elkin, Ann Vincent, Jeffery M. Thompson, Terry H. Oh, Brent A. Bauer
Journal-article: Mayo Clinic Proceedings, vol. 80, pp. 826-827, 2005
|
Authors: Dietlind L. Wahner-Roedler, Peter L. Elkin, Ann Vincent, Jeffrey M. Thompson, Terry H. Oh, Laura L. Loehrer, Jayawant N. Mandrekar, Brent A. Bauer
Abstract: To evaluate the frequency and pattern of complementary and alternative medicine (CAM) use in patients referred to a fibromyalgia treatment program at a tertiary care center. Patients referred to the Mayo Fibromyalgia Treatment Program between February 2003 and July 2003 were invited on their initial visit to participate in a survey regarding CAM use during the previous 6 months. An 85-question survey that addressed different CAM domains was used. Of the 304 patients invited to participate, 289 (95%) completed the survey (263 women and 26 men). Ninety-eight percent of the patients had used some type of CAM therapy during the previous 6 months. The 10 most frequently used CAM treatments were exercise for a specific medical problem (48%), spiritual healing (prayers) (45%), massage therapy (44%), chiropractic treatments (37%), vitamin C (35%), vitamin E (31%), magnesium (29%), vitamin B complex (25%), green tea (24%), and weight-loss programs (20%). CAM use is common in patients referred to a fibromyalgia treatment program.
Journal-article: Mayo Clinic Proceedings, vol. 80, pp. 55-60, 2005
|
Authors: Dietlind L Wahner-Roedler, Peter L Elkin, Ann Vincent, Laura L Loehrer, Tiffani D Schilling, Brent A Bauer
Journal-article: Evidence-Based Integrative Medicine, vol. 2, pp. 9-12, 2005
|
Authors: Peter L Elkin, Steven H Brown, Brent A Bauer, Casey S Husser, William Carruth, Larry R Bergstrom, Dietlind L Wahner-Roedler
Abstract: Identification of negation in electronic health records is essential if we are to understand the computable meaning of the records. Our objective is to compare the accuracy of an automated mechanism for assignment of negation to clinical concepts within a compositional expression with human-assigned negation, and to perform a failure analysis to identify the causes of poorly identified negation (i.e. missed conceptual representation, inaccurate conceptual representation, missed negation, inaccurate identification of negation). 41 clinical documents (Medical Evaluations; sometimes outside of Mayo these are referred to as History and Physical Examinations) were parsed using the Mayo Vocabulary Server Parsing Engine. SNOMED-CT was used to provide concept coverage for the clinical concepts in the record. These records resulted in identification of concepts and textual clues to negation. These records were reviewed by an independent medical terminologist, and the results were tallied in a spreadsheet. Where questions arose on review, Internal Medicine faculty made a final determination. SNOMED-CT was used to provide concept coverage of the 14,792 concepts in 41 health records from Johns Hopkins University. Of these, 1,823 concepts were identified as negative by human review. The sensitivity (recall) of the assignment of negation was 97.2% (p < 0.001, Pearson chi-square test, when compared to a coin flip). The specificity of assignment of negation was 98.8%. The positive likelihood ratio of the negation was 81. The positive predictive value (precision) was 91.2%. Automated assignment of negation to concepts identified in health records based on review of the text is feasible and practical. Lexical assignment of negation is a good test of true negativity as judged by the high sensitivity, specificity and positive likelihood ratio of the test. SNOMED-CT had overall coverage of 88.7% of the concepts being negated.
Journal-article: BMC Medical Informatics and Decision Making, vol. 5, 2005
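An illustrative, NegEx-style sketch of lexical negation assignment: a concept mention is marked negated when a trigger phrase precedes it within the same sentence. This is a generic stand-in for the idea being evaluated, not the Mayo Vocabulary Server parsing engine used in the study; the trigger list and window size are assumptions.

```python
# Sketch: mark a concept mention negated if a trigger phrase precedes it
# in the same sentence. Generic illustration, not the MCVS approach.
import re

NEGATION_TRIGGERS = ["no", "denies", "without", "negative for", "no evidence of"]

def is_negated(text, concept_term, window=6):
    """Return True if any mention of concept_term is preceded by a negation trigger."""
    text = text.lower()
    for m in re.finditer(re.escape(concept_term.lower()), text):
        # look back only within the current sentence, up to `window` words
        fragment = re.split(r"[.;]", text[:m.start()])[-1]
        preceding = " ".join(fragment.split()[-window:])
        if any(f" {trig} " in f" {preceding} " for trig in NEGATION_TRIGGERS):
            return True
    return False

note = "Patient denies chest pain. No evidence of pneumonia. Reports cough."
for term in ["chest pain", "pneumonia", "cough"]:
    print(term, "->", "negated" if is_negated(note, term) else "affirmed")
```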
|
Authors: Steven H Brown, Peter L Elkin, S Trent Rosenbloom, Casey Husser, Brent A Bauer, Michael J Lincoln, John Carter, Mark Erlbaum, Mark S Tuttle
Evaluation Studies: Studies in health technology and informatics, vol. 107, 2004
|
Authors: Dietlind L Wahner-Roedler, Peter L Elkin, Mark C Lee, Ann Vincent, Ralph E Gay, Laura L Loehrer, V Shane Pankratz, Brent A Bauer
Journal-article: Evidence-Based Integrative Medicine, vol. 1, pp. 253-260, 2004
|
Authors: David A. Froehling, Peter L. Elkin, Stephen J. Swensen, John A. Heit, V. Shane Pankratz, Jay H. Ryu
Abstract: To determine the sensitivity and specificity of the semiquantitative latex agglutination plasma fibrin D-dimer assay for the diagnosis of acute pulmonary embolism by using computed tomographic (CT) angiography as the diagnostic reference standard. From January 1, 1998, to June 26, 2000, patients who had both semiquantitative latex agglutination plasma fibrin D-dimer testing and CT angiography for suspected acute pulmonary embolism were selected for the study. A D-dimer value greater than 250 ng/mL was considered positive for thromboembolic disease. Diagnosis of acute pulmonary embolism was based solely on the interpretation of the CT angiogram. The D-dimer assay results were then compared with the CT angiographic diagnoses. Of 946 CT studies, 172 (18%) were positive for acute pulmonary embolism. The D-dimer assay was positive for 612 (65%) of the 946 patients. For acute pulmonary embolism, the D-dimer assay had a sensitivity of 0.83 (95% confidence interval [CI], 0.76-0.88), a specificity of 0.39 (95% CI, 0.36-0.43), a negative likelihood ratio of 0.44 (95% CI, 0.32-0.62), and a negative predictive value of 0.91 (95% CI, 0.87-0.94). The semiquantitative latex agglutination plasma fibrin D-dimer assay had moderate sensitivity and low specificity for the diagnosis of acute pulmonary embolism. When used alone, the results of this test were insufficient to exclude this serious and potentially fatal disorder. Approximately two thirds of our patients had positive D-dimer assays and required further evaluation to exclude acute pulmonary embolism.
Journal-article: Mayo Clinic Proceedings, vol. 79, pp. 164-168, 2004
|
Authors: Peter L Elkin
Abstract: Bioinformatics is the discipline that develops and applies informatics to the field of molecular biology. Although a comprehensive review of the entire field of bioinformatics is beyond the scope of this article, I review the basic tenets of the field and provide a topical sampling of the popular technologies available to clinicians and researchers. These technologies include tools and methods for sequence analysis (nucleotide and protein sequences), rendering of secondary and tertiary structures for these molecules, and protein fold prediction that can lead to rational drug design. I then discuss signaling pathways, new standards for data representation of genes and proteins, and finally the promise of merging these molecular data with the clinical world (the new science of phenomics).
Journal-article: Mayo Clinic proceedings, vol. 78, 2003
|
Authors: Ralph Gay, Dietlind Wahner-Roedler, Peter Elkin, Brent Bauer, Randy Shelerud
Journal-article: The Spine Journal, vol. 3, pp. 77, 2003
|
Authors: Peter L. Elkin, Steven H. Brown, Michael J. Lincoln, Michael Hogarth, Alan Rector
Journal-article: International Journal of Medical Informatics, vol. 71, pp. 89-102, 2003
|
Authors: Peter L Elkin, Steven H Brown, Michael J Lincoln, Mark Pittelkow, Brent A Bauer, Dietlind Wahner-Roedler, Rhonda Thomas, Larry Bergstrom
Journal Article: Studies in health technology and informatics, vol. 95, 2003
|
Authors: P L Elkin, S H Brown
Journal Article: Journal of biomedical informatics, vol. 35, 2003
|
Authors: Peter L Elkin, Steven H Brown, Michael J Lincoln, Michael Hogarth, Alan Rector
Evaluation Studies: International journal of medical informatics, vol. 71, 2003
|
Authors: Peter L Elkin, Barb Sorensen, Diane De Palo, Greg Poland, Kent R Bailey, Douglas L Wood, Nicholas F LaRusso
Evaluation Studies: Journal of the American Medical Informatics Association : JAMIA, vol. 9, 2002
|
Authors: P. L. Elkin, B. Sorensen, D. De Palo, G. Poland, K. R. Bailey, D. L. Wood, N. F. LaRusso
Journal-article: Journal of the American Medical Informatics Association, vol. 9, pp. 472-478, 2002
|
Authors: P.L Elkin, S.H Brown
Journal-article: Journal of Biomedical Informatics, vol. 35, pp. 281-288, 2002
|
Authors: Brent A. Bauer, Peter L. Elkin, Dana Erickson, George G. Klee, Michael D. Brennan
Abstract: An 87-year-old woman was referred for evaluation of nervousness, tremor, insomnia, and fatigue of 2 months' duration. Initial laboratory evaluation revealed a suppressed thyrotropin level and an elevated triiodothyronine level. A review of her medications revealed that she had started taking several dietary supplements at the recommendation of her chiropractor before the onset of symptoms. One of these was tiratricol (3,5,3'-triiodothyroacetic acid or Triac), a substance sold as a dietary supplement despite classification as a drug by the Food and Drug Administration. Tiratricol has weak thyromimetic effects, can inhibit pituitary thyrotropin secretion, and in higher doses can significantly stimulate metabolism. Such was the case with this patient who presented with signs, symptoms, and biochemical evidence of hyperthyroidism that promptly resolved after discontinuation of tiratricol therapy. To our knowledge, this is the first reported case of documented thyrotoxicosis secondary to tiratricol use. Because tiratricol is still available for sale on several Internet sites, this case emphasizes the importance of inquiring about the use of dietary supplements in all patients. The availability of such products on the Internet increases the already complex task of monitoring patients' use of dietary supplements.
Journal-article: Mayo Clinic Proceedings, vol. 77, pp. 587-590, 2002
|
Authors: P. L. Elkin
Journal-article: Journal of the American Medical Informatics Association, vol. 9, pp. 128S-132, 2002
|
Authors: Peter L Elkin, Paul N Gorman
Journal Article: Journal of the American Medical Informatics Association : JAMIA, vol. 9, 2002
|
Authors: P L Elkin, M Peleg, R Lacson, E Bernstam, S Tu, A Boxwala, R Greenes, E H Shortliffe
Journal Article: M.D. computing : computers in medical practice, vol. 17, 2001
|
Authors: S. H. Brown, M. Lincoln, S. Hardenbrook, O. N. Petukhova, S. T. Rosenbloom, P. Carpenter, P. Elkin
Journal-article: Journal of the American Medical Informatics Association, vol. 8, pp. 379-390, 2001
|
Authors: P L Elkin, S H Brown, C G Chute
Journal Article: Studies in health technology and informatics, vol. 84, 2001
|
Authors: M. R. Harris, J. R. Graves, H. R. Solbrig, P. L. Elkin, C. G. Chute
Journal-article: Journal of the American Medical Informatics Association, vol. 7, pp. 539-549, 2000
|
Authors: M R Harris, J R Graves, H R Solbrig, P L Elkin, C G Chute
Journal Article: Journal of the American Medical Informatics Association : JAMIA, vol. 7, 2000
|
Authors: P L Elkin, M Tuttle, K Keck, K Campbell, G Atkin, C G Chute
Journal Article: Studies in health technology and informatics, vol. 52 Pt 1, 1999
|
Authors: Ronnier J. Aviles, Stacey A. Vlahakis, Peter L. Elkin
Journal-article: Annals of Internal Medicine, vol. 131, pp. 549, 1999
|
Authors: R J Aviles, S A Vlahakis, P L Elkin
Case Reports: Annals of internal medicine, vol. 131, 1999
|
Authors: M S Tuttle, N E Olson, K D Keck, W G Cole, M S Erlbaum, D D Sherertz, C G Chute, P L Elkin, G E Atkin, B H Kaihoi, C Safran, D Rind, V Law
Journal Article: Methods of information in medicine, vol. 37, 1998
|
Authors: N. E. Olson, K. D. Keck, W. G. Cole, M. S. Erlbaum, D. D. Sherertz, C. G. Chute, P. L. Elkin, G. E. Atkin, B. H. Kaihoi, C. Safran, D. Rind, V. Law, M. S. Tuttle
Journal-article: Methods of Information in Medicine, vol. 37, pp. 373-383, 1998
|
Authors: P K Nijhawan, P L Elkin
Journal-article: Mayo Clinic proceedings, vol. 73, 1998
|
Authors: A E Denktas, P L Elkin
Journal-article: Mayo Clinic proceedings, vol. 73, 1998
|
Authors: J J Cimino, P L Elkin, G O Barnett
Journal Article: Computers and biomedical research, an international journal, vol. 25, 1992
|
Authors: James J. Cimino, Peter L. Elkin, G.Octo Barnett
Journal-article: Computers and Biomedical Research, vol. 25, pp. 238-263, 1992
|
Authors: Francis L. Belloni, Peter L. Elkin, Beverly Giannotto
Journal-article: British Journal of Pharmacology, vol. 85, pp. 441-446, 1985
|
Authors: F L Belloni, P L Elkin, B Giannotto
Journal Article: British journal of pharmacology, vol. 85, 1985
|
Authors: Steven H. Brown, Michael J. Lincoln, Peter L. Elkin
Book-chapter: Health Informatics, pp. 104-124,
|