February 9, 2016
Testing newly created drugs is an incredibly time-consuming process, and one that is difficult to get right. Now, a team of scientists at Carnegie Mellon University (CMU) is working to streamline the task, creating a robotically driven experimentation process that can reduce the number of tests that must be carried out by as much as 70 percent.
When working on a new drug, scientists have to determine its effects to ensure that it is both an effective treatment and not harmful to patients. This is hugely time-consuming, and it is simply not practical to perform experiments for every possible set of biological conditions.
This is where CMU's new robotic process comes in. It uses a machine learning approach to select which experiments to conduct, using patterns in the data to accurately predict the results of experiments without actually carrying them out.
The process can conduct the selected experiments on its own, using liquid-handling robots and an automated microscope. Its abilities were put to the test in a study examining the effects of 96 drugs on 96 cultured mammalian cell clones, each containing a different fluorescently tagged protein. A total of 9,216 experiments were possible, each of which involved testing the effect of a drug by taking an image of it interacting with the target cells.
The machine began by imaging all 96 clones, pinpointing the location of the tagged protein in each. The effects of each drug were then recorded in the same way, with the machine learning algorithm gradually identifying patterns in the locations of the proteins, known as phenotypes.
By grouping similar images together, the machine learner was able to identify new phenotypes without help from the researchers. As additional data was gathered, it was used to build a predictive model that guessed the outcomes of unmeasured experiments.
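The measure-then-predict loop described above can be sketched in miniature. The code below is purely illustrative and is not the CMU team's method: it uses a tiny hypothetical drug-by-clone grid, random batch selection in place of the real system's informed choices, and a deliberately crude per-drug majority-vote predictor, just to show the overall shape of iteratively measuring some experiments and predicting the rest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: an 8-drug x 8-clone grid, where each cell's
# outcome is one of 3 discrete phenotypes (all sizes/values illustrative).
n_drugs, n_clones, n_phenotypes = 8, 8, 3
truth = rng.integers(0, n_phenotypes, size=(n_drugs, n_clones))

measured = np.full((n_drugs, n_clones), -1)  # -1 marks "not yet measured"

def predict(measured):
    """Fill unmeasured cells with the most common phenotype seen so far
    for the same drug (a deliberately crude stand-in for a real model)."""
    pred = measured.copy()
    for d in range(n_drugs):
        row = measured[d][measured[d] >= 0]
        fill = np.bincount(row, minlength=n_phenotypes).argmax() if row.size else 0
        pred[d][measured[d] < 0] = fill
    return pred

# Loop: each round, "run" a small batch of experiments drawn from the
# unmeasured set, then re-predict everything that was not measured.
for _ in range(6):
    unmeasured = np.argwhere(measured < 0)
    batch = unmeasured[rng.choice(len(unmeasured), size=8, replace=False)]
    for d, c in batch:
        measured[d, c] = truth[d, c]  # perform the experiment

pred = predict(measured)
accuracy = (pred == truth).mean()
print(f"measured {int((measured >= 0).sum())} of {truth.size} experiments, "
      f"accuracy {accuracy:.2f}")
```

After six rounds, only 48 of the 64 possible experiments have been run, and the remaining 16 outcomes are predicted; the real system replaces the random batch choice with an informed selection of which experiments are most worth running.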
A total of 30 rounds of testing were carried out by the automated process, completing 2,697 of the possible 9,216 experiments. The remaining outcomes were predicted by the machine, with an impressive accuracy rate of 92 percent.
The researchers believe that their work demonstrates that machine learning techniques are viable for use in medical testing, and could have a big impact on both the practical and financial challenges faced by the field.
“The immediate challenge will be to use these methods to reduce the cost of achieving the goals of large, multi-site projects, such as The Cancer Genome Atlas, which aims to accelerate understanding of the molecular basis of cancer with genome analysis technologies,” said senior paper author Robert F. Murphy.
The findings of the research were published online in the journal eLife.