About Artificial Intelligence in Medicine
Artificial Intelligence in Medicine is a software engineering firm that develops and commercializes tools that use AI and natural language processing to extract cancer-related information from clinical documents, such as pathology reports, molecular testing reports, treatment plans, and clinicians' notes.
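To make the extraction task concrete, here is a minimal, purely illustrative sketch of the kind of structured fields such tools pull from a pathology report. The patterns, field names, and sample report are assumptions for illustration; production systems rely on far more sophisticated language models and curated ontologies, not simple regular expressions.

```python
import re

# Hypothetical field patterns -- illustrative only, not the company's method.
PATTERNS = {
    "diagnosis": re.compile(r"Diagnosis:\s*(.+)", re.IGNORECASE),
    "stage": re.compile(r"\bStage\s+(0|I{1,3}V?|IV)\b"),
    "er_status": re.compile(r"ER[:\s]+(positive|negative)", re.IGNORECASE),
}

def extract_fields(report_text: str) -> dict:
    """Return the first match for each cancer-related field, if present."""
    results = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(report_text)
        if match:
            results[field] = match.group(1).strip()
    return results

report = """Diagnosis: Invasive ductal carcinoma
Stage II disease. ER: positive, PR: negative."""
print(extract_fields(report))
```

The point of the sketch is the shape of the problem: free-text clinical narrative in, structured cancer-specific fields out.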
Expert Collections containing Artificial Intelligence in Medicine
Expert Collections are analyst-curated lists that highlight the companies you need to know in the most important technology spaces.
Artificial Intelligence in Medicine is included in 3 Expert Collections, including Artificial Intelligence.
The artificial intelligence collection includes companies developing artificial intelligence solutions, including cross-industry applications, industry-specific products, and AI infrastructure solutions.
The digital health collection includes vendors developing software, platforms, sensor & robotic hardware, health data infrastructure, and tech-enabled services in healthcare. The list excludes pure-play pharma/biopharma, sequencing instruments, gene editing, and assistive tech.
This collection includes companies applying technology to cancer care, diagnosis, and treatment. Examples include vendors offering cancer detection and diagnosis, oncology clinical decision support, real-world data, and AI oncology drug discovery.
Latest Artificial Intelligence in Medicine News
Nov 22, 2023
Journal of Medical Internet Research: "Guidelines, Consensus Statements, and Standards for the Use of Artificial Intelligence in Medicine: Systematic Review" (e-collection/theme issue of February 4, 2023). The authors are affiliated with West China Hospital, Sichuan University (Chengdu, China) and Queen's University Belfast (Belfast, United Kingdom); corresponding author: Yonggang Zhang, PhD, Department of Periodical Press, National Clinical Research Center for Geriatrics, Chinese Evidence-based Medicine Center, Nursing Key Laboratory of Sichuan Province, West China Hospital.

Abstract

Background: The application of artificial intelligence (AI) in the delivery of health care is a promising area, and guidelines, consensus statements, and standards on AI regarding various topics have been developed.

Objective: We performed this study to assess the quality of guidelines, consensus statements, and standards in the field of AI for medicine and to provide a foundation for recommendations about the future development of AI guidelines.

Methods: We searched 7 electronic databases from database establishment to April 6, 2022, and screened articles involving AI guidelines, consensus statements, and standards for eligibility.
The AGREE II (Appraisal of Guidelines for Research & Evaluation II) and RIGHT (Reporting Items for Practice Guidelines in Healthcare) tools were used to assess the methodological and reporting quality of the included articles.

Results: This systematic review included 19 guideline articles, 14 consensus statement articles, and 3 standard articles published between 2019 and 2022. Their content involved disease screening, diagnosis, and treatment; AI intervention trial reporting; AI imaging development and collaboration; AI data application; and AI ethics governance and applications. Our quality assessment revealed that the average overall AGREE II score was 4.0 (range 2.2-5.5; 7-point Likert scale) and the mean overall reporting rate of the RIGHT tool was 49.4% (range 25.7%-77.1%).

Conclusions: The results indicated important differences in the quality of different AI guidelines, consensus statements, and standards. We made recommendations for improving their methodological and reporting quality.

Trial Registration: PROSPERO International Prospective Register of Systematic Reviews (CRD42022321360); https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=321360

J Med Internet Res 2023;25:e46089

Discussion

Summary of the Findings

In recent years, medical technology, AI technology, and their combined application have developed rapidly. With the expansion of medical data, the application of medical images, improvements in AI algorithm models, and the optimization of software and hardware devices, more AI technologies are being applied in health care scenarios to assist in diagnosis and treatment decisions. More medical institutions, internet companies, and nascent AI companies are seeking cooperation with each other and vigorously developing medical AI products, and more hospitals are actively involved in collaborative research projects on medical AI.
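The two summary statistics the review reports reduce to simple arithmetic: a mean of per-item AGREE II ratings on a 7-point Likert scale, and a RIGHT reporting rate, the proportion of checklist items an article reports. The sketch below illustrates both with made-up ratings; the function names, the example scores, and the 35-item checklist size in the example are assumptions for illustration, not figures from the paper.

```python
def agree_ii_overall(item_scores: list[int]) -> float:
    """Mean of per-item AGREE II ratings (1 = strongly disagree ... 7)."""
    return sum(item_scores) / len(item_scores)

def right_reporting_rate(items_reported: int, total_items: int) -> float:
    """Fraction of the RIGHT checklist's items reported by an article."""
    return items_reported / total_items

# Illustrative article: six hypothetical item ratings, and 17 items reported
# out of an assumed 35-item checklist.
print(round(agree_ii_overall([4, 5, 3, 4, 6, 2]), 1))        # 4.0
print(round(right_reporting_rate(17, 35) * 100, 1))          # 48.6
```

This is only a worked illustration of the scoring arithmetic; the actual AGREE II and RIGHT instruments define the items and rating procedures.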
As a result, the field of medical AI has attracted many top scholars in terms of guideline development and scientific research, and some guidelines, expert consensus statements, and standards have been published in international journals in the field. To ensure that health care practitioners make well-informed decisions about the use of AI and have access to more reliable evidence-based resources, this study presents a systematic review of 36 articles published in English and Chinese between 2019 and 2022, evaluating them for methodological and reporting quality.

This study included 19 guideline articles, 14 consensus statement articles, and 3 standard articles, which were classified into 5 categories based on their content: (1) disease screening, diagnosis, and treatment; (2) AI intervention trial reporting guidelines; (3) AI imaging development and collaboration; (4) AI data application; and (5) AI ethics governance and applications. The average scores from the assessment of methodological quality using the AGREE II tool ranged from 2.2 to 5.5 on a 7-point Likert scale. The mean reporting quality rate using the RIGHT tool was 49.4%, ranging from 25.7% to 77.1%. Guideline articles scored higher than consensus statement articles and standard articles, and AI intervention trial reporting guidelines scored higher than the other classifications. Domains 2, 3, and 5 of the methodological quality tool and sections 2, 3, 4, 5, 6, and 7 of the reporting quality tool are most in need of improvement.

Recommendations for Improving the Quality of AI Guidelines, Consensus Statements, and Standards

The development of guidelines must adhere to a strict systematic technique, and strict criteria must be developed to assure their quality.
The main phases for guidelines, consensus statements, and standards are essentially the same: subject selection, evidence synthesis, recommendation creation, peer review, publishing, implementation, and updating [55]. However, in the 36 included studies, the development methods were not ideal. The methodological quality of these documents needs to be improved in several categories, particularly rigor of development, stakeholder involvement, applicability, and reporting quality (background, evidence, recommendations, review, quality assurance, funding, declaration, management of interest, and other information). Basic information, including scope and purpose, is already of a good standard.

Background for AI Guidelines, Consensus Statements, and Standards

Based on the results of the RIGHT assessment, the topics in the articles involving guidelines and consensus statements need to be more clearly described. They need to cover the medical problems that the AI would be applied to (eg, disease screening or diagnosis), the aims and specific objectives (eg, how AI applications are regulated), and the principal targets or any subgroups covered by the recommendations (eg, clinical practitioners, medical data, or a certain type of AI technology), and they need to identify the primary users of the guidelines (eg, technicians and clinicians).

Methodological Design for AI Guidelines, Consensus Statements, and Standards

Based on the stakeholder involvement and rigor of development domains in AGREE II and section 5 (review and quality assurance) in RIGHT, the guideline developer should determine the targeted objects, technology, and population, and consider their preferences or development status. A reasonable evidence selection process, such as a systematic review, survey, or voting, should also be determined by the guideline developer, with clear criteria stated for selecting evidence, conducting surveys, or voting.
At the same time, the guideline's external evaluation scheme, comprising the list of evaluation experts and the process for handling evaluation opinions, should be determined. After the draft guideline is finalized, it should be sent to specialists in relevant fields for review and made publicly available on the internet for public comment. Finally, the collected opinions should be evaluated and used to amend the guideline, and a mechanism should be put in place for updating it.

Sources and Evaluation of Evidence for AI Guidelines, Consensus Statements, and Standards

Based on the rigor of development domain in AGREE II and section 3 (evidence) in RIGHT, there are several areas for improvement. These include stating the key questions for the recommendations in PICOS (Patient/Population, Intervention, Comparison, Outcome, Study design) or another appropriate format and indicating whether the guideline is based on a new systematic review conducted specifically for it. The entire reference retrieval process, including the period, databases, and keywords, should be reported in detail for the systematic review. Evidence inclusion and exclusion criteria should be established and followed, and formal techniques or methodologies (such as the GRADE [Grading of Recommendations Assessment, Development and Evaluation] system) should be used to assess the strengths and limitations of the evidence.

Formation Method and Strength of Recommendations for AI Guidelines, Consensus Statements, and Standards

Based on the rigor of development and clarity of presentation domains in AGREE II and sections 4 (recommendations) and 5 (review and quality assurance) in RIGHT, the guideline should include a full description of the process used to create the recommendations, including how consensus was established and obtained.
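A PICOS-framed key question is essentially a small structured record. The sketch below is one possible way to represent it; the field names and the example question are illustrative assumptions, not a schema from the paper.

```python
from dataclasses import dataclass

@dataclass
class PicosQuestion:
    """Illustrative record for a guideline key question in PICOS format."""
    population: str    # P: patient/population of interest
    intervention: str  # I: the AI intervention under consideration
    comparison: str    # C: the comparator (eg, standard practice)
    outcome: str       # O: outcomes the recommendation targets
    study_design: str  # S: eligible study designs for the evidence review

# Hypothetical example question for an imaging-AI guideline.
q = PicosQuestion(
    population="adults screened for diabetic retinopathy",
    intervention="deep learning fundus-image triage",
    comparison="ophthalmologist grading alone",
    outcome="referable-retinopathy detection sensitivity/specificity",
    study_design="diagnostic accuracy studies",
)
print(q.intervention)
```

Stating each key question in this explicit form is what allows the subsequent evidence search and inclusion criteria to be checked against it.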
The guideline should also clearly state the grade of evidence, recommendations, and strength of any suggestions, as determined by methods such as GRADE. The benefits and hazards of using AI in the medical profession should be explored, and there should be an explicit link between the recommendations and the supporting research. If the users of AI guidelines are intended to include different populations, or cost and resource implications are considered, the different advice for managing the AI issue should be clearly presented. The document should also indicate how the draft guideline underwent review and how this informed the quality assurance process described in the methodological design.

Promotion and Application of AI Guidelines, Consensus Statements, and Standards

Based on the applicability domain in AGREE II and section 7 (access, suggestions for further research, and limitations) in RIGHT, the guideline's promotion and implementation strategy, which includes the target people, objects, technology, and data, should be developed. The potential benefits and hazards of implementing the recommendations, as well as the expenses and resources required to promote the guideline, should be included, along with information on how the recommendations can be implemented and the parameters and methods used by AI applications. Moreover, mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Furthermore, adequate and accessible redress should be ensured, especially in critical applications [56]. Finally, because AI is developing so quickly, it is important to specify where the guideline and its related materials can be found, the limitations and suggestions for further research, and plans for keeping the document up to date.
Disclosure and Management of Conflicts of Interest for AI Guidelines, Consensus Statements, and Standards

Based on the editorial independence domain in AGREE II and section 6 (funding, declaration, and management of interest) in RIGHT, guideline documents should provide precise information on conflicts of interest. For example, each team member should submit a conflict of interest disclosure statement, which can be used as a reference for guideline developers and should include a declaration of employment, research grants, and other research support, among other things.

Trends in the Application of AI in Health Care

In addition to the 5 AI application classifications identified in this review, AI applications in health care currently include intelligent guidance for patients to find the most appropriate departments and experts for consultation, clinical intelligence to assist in decision-making, early warning of clinical behavior, patient prognosis analysis, intelligent rationalization of treatment recommendations, and prediction of personal health or disease status. In the future, AI may also be used in more profound therapeutic areas, such as brain-machine interfaces (also known as brain-machine fusion perception), and to reconstruct special senses (eg, vision) and motor functions in paralyzed patients.

Possible Challenges of AI

The 14 studies identified for classification 4 (AI data application) showed that data, computing power, and algorithms are the 3 core elements of AI, bringing new challenges for the implementation of AI in health care. The challenges of data include data quality, annotation, storage, and security. To improve the learning efficiency of AI applications, a large amount of data annotation work is necessary, giving rise to more relevant guidelines and expert consensus statements.
Because of the special nature of health care and health care systems, application system standards in different countries, regions, and hospitals are not uniform, making data collection irregular and imperfect. The challenges of massive data governance, technical robustness, and safety will also become increasingly important factors affecting the implementation of AI products, along with ethical approval, human oversight, privacy, transparency, nondiscrimination and fairness, societal well-being, and costs [56]. This means there is a great need for higher quality and more instructive guidelines to address these challenges.

Future Research Directions for AI

Although the development of AI still faces many challenges, countries and industries are increasing their investment in AI applications owing to the technology's significant potential to improve productivity, reduce costs, and improve service quality. The rapid year-on-year growth in the number of scientific and technical papers published on medical AI in recent years indicates that AI has become a key research area of interest for experts and scholars. Research directions include deep learning, machine learning, biomedical engineering, automation, oncology, complementary diagnosis, and adjuvant therapy. In the future, increasing research evidence on medical AI will emerge, which will help in developing AI guidelines, writing authoritative guidelines for more types of AI health issues, and making more standardized recommendations.

Innovation

To our knowledge, this systematic review is the first to use the AGREE II and RIGHT tools to evaluate AI guidelines, consensus statements, and standards. We reviewed and summarized articles involving international guidelines, consensus statements, and standards on the use of AI in health care published in recent years, as well as the main research directions.
We also provide suggestions for methodological and reporting quality improvement for different types of documents. The rapid development of AI technology will see it increasingly widely used in fields such as medical imaging, disease screening, and data learning, and this paper has also discussed future development trends, benefits, and potential hazards of AI applications in health care. We hope that it provides a scientific research and application reference for colleagues involved with AI in health care, helps improve the quality and reporting of medical AI guidelines, and provides a much needed foundation for improvements in the quality of research and practice [6].

Limitations

Only 7 Chinese and English databases were included in the search strategy, and only English and Chinese articles involving guidelines, consensus statements, and standards were included, which may limit the findings owing to restricted research sources and languages. An important limitation of this systematic review is that it relied on studies published in a few journals with high impact factors. Thus, there are disparities between some of the guidelines and others in terms of quality and authority, and the findings may not be fully representative of AI guidelines, consensus statements, and standards published around the world. Moreover, the articles included in this paper were classified as guidelines, consensus statements, or standards according to the definitions of the authors and journals themselves, so the authority of these definitions may be limited owing to differences in quality among authors and journals. Furthermore, we found that some items in the AGREE II and RIGHT tools are not fully applicable for evaluating medical guidelines related to AI, particularly expert consensus statements and standards.
As the clinical content of guidelines, consensus statements, and standards was not evaluated, no conclusions concerning the clinical appropriateness of the recommendations could be reached.

Conclusions

Our systematic review identified 36 articles involving guidelines, consensus statements, and standards on the application of AI in health care. The main areas for the development and application of AI guidelines are disease screening and diagnosis, reporting of trials of AI interventions, AI image development and cooperation, AI data application, and AI ethics governance and applications. The application of AI in health care was generally encouraged in these articles, including the development of more standardized algorithms, quality control of AI data, and clinical application of AI data for certain diseases. However, the quality of the included articles was not uniform, and there were differences in the methodological and reporting quality of guidelines for different research content. Most of the deficiencies were concentrated in domains 2, 3, and 5 of the AGREE II tool for methodological quality and sections 2, 3, 4, 5, 6, and 7 of the RIGHT tool for reporting quality. Health care providers face challenges in gaining knowledge about the safe and effective use of AI. If the suggestions made for methodological and reporting quality improvements are followed, we believe that health care providers will have better access to higher quality guidance. This will be important if AI is to meet its potential for more powerful data induction and learning capabilities, which could significantly improve the application capabilities of medical imaging, disease screening, and diagnosis.
We recommend that AI guidelines be further standardized in the future to improve AI deep learning and medical structured data services and sharing, and to strengthen the collection and fusion analysis of multicenter and multimodal medical data, allowing practitioners and scholars to cooperate in the best way to promote scientific research and clinical application.

Acknowledgments

The study was supported by the National Natural Science Foundation of China (grant number 82004213) and the Project of Sichuan Provincial Department of Science and Technology (number 2021YFH0191).

Data Availability

All data needed to evaluate the conclusions in the paper are present in the paper and the multimedia appendices.

Authors' Contributions

YZ and MC are co-corresponding authors. YW and NL are co-first authors. YZ, NL, and MC conceived and designed the analysis. YW and MW collected articles involving guidelines and consensus statements. YW, MW, ZD, and SM performed the data analysis and assessment of guidelines and consensus statements. YW, LC, and NL drafted the manuscript. LC and NL defined the research method. YZ and MC supervised the whole research process. All authors read and approved the final manuscript.

Conflicts of Interest
Artificial Intelligence in Medicine Frequently Asked Questions (FAQ)
Where is Artificial Intelligence in Medicine's headquarters?
Artificial Intelligence in Medicine's headquarters is located at 403-2 Berkeley St, Toronto.
What is Artificial Intelligence in Medicine's latest funding round?
Artificial Intelligence in Medicine's latest funding round is Acquired.
Who are the investors of Artificial Intelligence in Medicine?
Investors of Artificial Intelligence in Medicine include Inspirata.