April 5, 2016
In conversations about artificial intelligence and the day when machines will be able to function as well as, or better than, human beings, it is often said that one thing computers will never be able to do is create art and music the way we do. Well, that argument just lost a bit of steam thanks to a project carried out by Microsoft and ING. Working with the Delft University of Technology and two museums in the Netherlands, the project, called "The Next Rembrandt," used algorithms and a 3D printer to create a brand-new Rembrandt painting that looks like it could easily have been produced by the Dutch Master's own hand 350 years ago.
To create the new painting, the team used computer software and a deep-learning algorithm to analyze 346 of Rembrandt's paintings. In addition to studying elements of his work, such as how he drew an eye or how faces were proportioned, the project also included an analysis of the height of the paint on the surface of the canvas.
Because Rembrandt produced more portraits than any other kind of painting, the group decided to focus its efforts on that type of artwork, specifically those created between 1632 and 1642, when he painted the greatest number of them. After using software to analyze the portraits, the computer suggested that the ideal Rembrandt painting to create would be a portrait of a Caucasian man with facial hair, aged between 30 and 40. He should also be wearing black clothes and have a collar and a hat, said the analysis.
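For a rough sense of how a "most typical subject" can fall out of this kind of analysis, here is a toy sketch in Python. The records and field names below are invented for illustration; they are not the project's actual data or code. The idea is simply to tally each attribute across portrait records and keep its most frequent value.

```python
from collections import Counter

# Invented metadata for a handful of portraits; the real project
# analyzed 346 paintings with far richer data than this.
portraits = [
    {"gender": "male",   "age_band": "30-40", "facial_hair": True,  "clothing": "black", "hat": True},
    {"gender": "male",   "age_band": "30-40", "facial_hair": True,  "clothing": "black", "hat": True},
    {"gender": "male",   "age_band": "40-50", "facial_hair": True,  "clothing": "dark",  "hat": True},
    {"gender": "female", "age_band": "20-30", "facial_hair": False, "clothing": "black", "hat": False},
]

def most_typical(records):
    """Return the most frequent value of every attribute across the records."""
    return {
        key: Counter(r[key] for r in records).most_common(1)[0][0]
        for key in records[0]
    }

profile = most_typical(portraits)
print(profile)
# → {'gender': 'male', 'age_band': '30-40', 'facial_hair': True, 'clothing': 'black', 'hat': True}
```

With this toy data, the "ideal" subject comes out as a man in his thirties with facial hair, black clothes, and a hat, which mirrors the profile the project's analysis produced.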
The next step, and the one that really lies at the crux of the project (and of the idea that computers can create art every bit as great as that of our revered masters), is that the team created software that all but decoded how Rembrandt did what he did.
“To master his style, we created a software system that could understand Rembrandt based on his use of geometry, composition, and painting materials,” says the website on which the project is featured. “A facial-recognition algorithm identified and classified the most typical geometric patterns used by Rembrandt to paint human features. It then used the learned principles to replicate the style and generate new facial features for our painting.”
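The project hasn't published its code, but the simplest conceivable stand-in for "learning a typical feature" is to average corresponding landmark points for the same feature across several paintings. The coordinates below are invented, and the real system used a learned model rather than a plain mean; this sketch only illustrates the idea of distilling many examples into one typical shape.

```python
# (x, y) outlines of an eye from three hypothetical paintings,
# with landmarks listed in the same order for each painting.
eye_landmarks = [
    [(0.0, 0.0), (1.0, 0.4), (2.0, 0.0), (1.0, -0.4)],
    [(0.1, 0.0), (1.1, 0.5), (2.1, 0.1), (1.0, -0.5)],
    [(-0.1, 0.0), (0.9, 0.3), (1.9, -0.1), (1.0, -0.3)],
]

def average_shape(shapes):
    """Point-wise mean of corresponding landmarks across shapes."""
    n = len(shapes)
    return [
        (sum(s[i][0] for s in shapes) / n, sum(s[i][1] for s in shapes) / n)
        for i in range(len(shapes[0]))
    ]

typical_eye = average_shape(eye_landmarks)
print(typical_eye)
```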
That means the computer was able to create eyes, a nose, and other facial features by mimicking Rembrandt's style. Then it was time to put those features together.
“An algorithm measured the distances between the facial features in Rembrandt's paintings and calculated them based on percentages,” says the site. “Next, the features were transformed, rotated, and scaled, then accurately placed within the frame of the face. Finally, we rendered the light based on gathered data in order to cast authentic shadows on each feature.” The rendering process took 500 hours.
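The transform, rotate, scale, and place step the quote describes amounts to applying a similarity transform to each feature's points. Here is a minimal Python sketch with invented numbers, not anything from the project's pipeline:

```python
import math

def place_feature(points, scale, angle_deg, offset):
    """Scale, rotate, then translate a feature's landmark points
    into the coordinate frame of the face."""
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    placed = []
    for x, y in points:
        x, y = x * scale, y * scale                           # scale
        x, y = x * cos_t - y * sin_t, x * sin_t + y * cos_t   # rotate
        placed.append((x + offset[0], y + offset[1]))         # translate
    return placed

# Example: drop a unit-square "feature" into a face frame at half size,
# rotated 90 degrees, anchored at (10, 20).
feature = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(place_feature(feature, 0.5, 90, (10, 20)))
```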
Next came the moment for math and machine to do what only skin and bone had once done. Using the paint height map that had been created, a 3D printer laid down 13 layers of ink to create a textured work of art that could well have been set to canvas by Rembrandt himself.
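One simple way to turn a height map into printable layers, purely as an illustration of the idea (the project's actual printing pipeline is not public), is to quantize each pixel's paint height into a layer count between 0 and 13:

```python
def layers_for_pixel(height, max_height, num_layers=13):
    """Quantize a paint height (e.g. in millimeters) into a number
    of ink layers for the printer, from 0 up to num_layers."""
    if max_height <= 0:
        return 0
    frac = min(max(height / max_height, 0.0), 1.0)  # clamp to [0, 1]
    return round(frac * num_layers)

# A tiny 2x3 "height map" with invented values; the real map
# covered a canvas of billions of pixels.
height_map = [
    [0.0, 0.3, 0.9],
    [1.2, 0.9, 0.3],
]
max_h = 1.2
layer_map = [[layers_for_pixel(h, max_h) for h in row] for row in height_map]
print(layer_map)  # → [[0, 3, 10], [13, 10, 3]]
```

The printer would then deposit that many ink layers at each pixel, building up the raised brushstroke texture the height analysis captured.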
In the end, the painting took 18 months to create and consists of 148 billion pixels. It is on temporary exhibit at the Looiersgracht 60 gallery in Amsterdam; a permanent home has yet to be revealed.
You can gain deeper insight into the artist, the process by which his style was recreated, and the new painting itself on The Next Rembrandt website or through the accompanying video.