There’s a huge amount of AI-related movement within the medical profession. IBM’s Watson is being used to trawl through vast amounts of medical literature using natural language processing, and then to assist doctors in diagnosing patients. In the linked article, AI capabilities are being used to predict heart failure. In other articles I’ve seen the steady march of AI towards evaluating medical imagery at a competence level that matches human radiologists.
This latter example is so immediate that there have been some calls to stop training future radiologists, on the grounds that most will be redundant by the time they graduate. Of course, that created a pretty heated debate in radiology circles!
In these machine learning cases, the disruption process is usually the same:
1. Set up a neural network (or other learning machine)
2. Feed the machine a large amount of training data along with the ‘factual’ outcomes (the labels)
3. Let the machine work out the underlying indicators by itself
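The three steps above can be sketched in a few lines of code. This is a deliberately minimal illustration, not a medical model: it uses a single logistic unit rather than a full neural network, and the data, features, and labels are synthetic stand-ins invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 2: training data plus the 'factual' outcomes (labels).
# Two synthetic features; the hidden rule is "label = 1 when their
# sum exceeds 1.0" -- we never tell the machine this rule directly.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

# Step 1: a minimal 'learning machine' -- one logistic unit.
w = np.zeros(2)
b = 0.0

def predict(X, w, b):
    # Sigmoid of a weighted sum: outputs a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Step 3: let gradient descent work out the indicators by itself.
for _ in range(500):
    p = predict(X, w, b)
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean((predict(X, w, b) > 0.5) == (y == 1))
```

After training, the learned weights end up positive and roughly equal, meaning the machine has recovered the hidden "sum of the two features" rule purely from examples, which is exactly the leap of faith described below: nobody coded the rule in.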
This requires a bit of a leap of faith and can seem counter-intuitive, since you’re not feeding our vast trove of expert knowledge and rules into the system directly. It will also cause problems for heavily regulated industries, because it is (almost by definition) impossible to accurately document the rules the machine is using to derive its outcomes.