Artificial Intelligence (AI) is going to seep into the healthcare industry. Healthcare organizations are generating and collecting enormous amounts of data, and the conditions are ripe for AI to change the industry forever. IBM is conducting research into oncology treatment, and NYU and Facebook are collaborating to speed up MRI scans. The technology can support healthcare research, manage automation, and guide healthcare decisions. The Association for the Advancement of Medical Instrumentation (AAMI) and the British Standards Institution (BSI) held two workshops, one in London and one in Arlington, to lay the groundwork for future terminology and concepts for the safe use of AI in healthcare technology.
During the workshops, participants worked on defining high-level principles. It is important to reach consensus on common terminology, such as defining what an "AI-utilizing device" would mean. According to Sara Walton, market development manager at BSI, it is essential to set standards and clarify from the very beginning whether issues concern process, discipline, or technology. Doing so can help ensure a level playing field for established market players and new entrants trading with the healthcare system, and encourage knowledge sharing and innovation. AI is a global issue that needs a global response, and with these workshops the organizers are trying to encompass all perspectives from the start.
Opinion on AI splits in two directions. One view holds that it will replace humans completely; the other, that it will act as an extension of us, assisting in our jobs without replacing human beings. Should a complex, learning system like AI change its behavior from patient to patient, or should it only be updated periodically? At the end of the day, good software development is good software development, and when it comes to AI, much has yet to be thought through.
Major ongoing discussions include determining risk, software life cycles, the safety and effectiveness of algorithms, verification and validation, understanding the quality of data, and defining levels of complexity in AI. AI needs training, and data that wasn't collected with AI in mind may not be good for training. Data is the key here: an unexpected output may come from an AI system that is working correctly but was fed inappropriate data. AI can appear to be a black box, so it will be very important to be able to track down the point of failure. We should trust it but remain skeptical; we have to gain confidence in what the software says.
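The data-quality concern can be illustrated with a minimal sketch of a screening step run before repurposed data is used for training. All field names and plausibility ranges below are illustrative assumptions, not taken from any standard or from the workshops:

```python
# Hypothetical quality gate for reused clinical records before training.
# EXPECTED_FIELDS and PLAUSIBLE_RANGES are illustrative assumptions.

EXPECTED_FIELDS = {"patient_id", "age", "heart_rate"}
PLAUSIBLE_RANGES = {"age": (0, 120), "heart_rate": (20, 250)}

def screen_record(record):
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    missing = EXPECTED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    for field, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is not None and not (lo <= value <= hi):
            problems.append(f"{field}={value} outside [{lo}, {hi}]")
    return problems

def screen_dataset(records):
    """Split records into usable training data and rejects with reasons."""
    usable, rejects = [], []
    for record in records:
        problems = screen_record(record)
        if problems:
            rejects.append((record, problems))
        else:
            usable.append(record)
    return usable, rejects
```

A gate like this will not catch every problem with repurposed data (systematic bias, for instance, survives such checks), but it makes one failure mode, implausible or incomplete records, visible before training rather than after an unexpected output.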