This light-powered 3D printer materializes objects all at once

3D printing has changed the way people approach hardware design, but most printers share a basic limitation: they essentially build objects layer by layer, generally from the bottom up. This new system from UC Berkeley, however, builds them all at once, more or less, by projecting a video through a jar of light-sensitive resin.

The device, which its creators call the replicator (but shouldn’t, because that’s a MakerBot trademark), is mechanically quite simple. It’s hard to explain it better than Berkeley’s Hayden Taylor, who led the research:

Basically, you’ve got an off-the-shelf video projector, which I literally brought in from home, and then you plug it into a laptop and use it to project a series of computed images, while a motor turns a cylinder that has a 3D-printing resin in it.

Obviously there are a lot of subtleties to it — how you formulate the resin, and, above all, how you compute the images that are going to be projected, but the barrier to creating a very simple version of this tool is not that high.

Using light to print isn’t new — many devices out there use lasers or other forms of emitted light to cause material to harden in desired patterns. But they still do things one thin layer at a time. Researchers did demonstrate a “holographic” printing method a bit like this using intersecting beams of light, but it’s much more complex. (In fact, Berkeley worked with Lawrence Livermore on this project.)

In Taylor’s device, the object to be recreated is scanned first in such a way that it can be divided into slices, a bit like a CT scanner — which is in fact the technology that sparked the team’s imagination in the first place.

By projecting light into the resin as it revolves, the material for the entire object is resolved more or less at once, or at least over a series of brief revolutions rather than hundreds or thousands of individual drawing movements.
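To get a feel for the "CT scanner in reverse" idea, here is a minimal sketch of computing the projection images for one 2D slice of a target object. It sums the slice's density along lines at many angles (the classic Radon transform), giving one projector line per rotation angle. This is an illustrative toy, not the team's actual algorithm, which also has to account for how the resin responds to accumulated light; the function names and the nearest-neighbor rotation are my own simplifications.

```python
import numpy as np

def rotate_nn(img, theta_deg):
    """Nearest-neighbor rotation of a 2D array about its center."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    t = np.deg2rad(theta_deg)
    yy, xx = np.mgrid[0:h, 0:w]
    # Inverse-rotate each output pixel back into the source image.
    xs = np.cos(t) * (xx - cx) + np.sin(t) * (yy - cy) + cx
    ys = -np.sin(t) * (xx - cx) + np.cos(t) * (yy - cy) + cy
    xi, yi = np.rint(xs).astype(int), np.rint(ys).astype(int)
    ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out = np.zeros_like(img)
    out[ok] = img[yi[ok], xi[ok]]
    return out

def projection_images(target_slice, num_angles=90):
    """Sum the slice's density along one axis at each rotation angle.
    Each column of the result is one line of the image the projector
    would display while the vial sits at that angle."""
    angles = np.linspace(0, 180, num_angles, endpoint=False)
    sinogram = np.empty((target_slice.shape[0], num_angles))
    for i, theta in enumerate(angles):
        sinogram[:, i] = rotate_nn(target_slice, theta).sum(axis=1)
    return sinogram

# Toy slice: a filled disk, one horizontal cross-section of the object.
yy, xx = np.mgrid[-32:32, -32:32]
disk = (xx**2 + yy**2 < 20**2).astype(float)
sino = projection_images(disk)
```

Playing the columns of `sino` back as projector lines while the vial turns is, very roughly, how light dose accumulates in the right places in the resin.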

This has a number of benefits besides speed. Objects come out smooth — if a bit crude in this prototype stage — and they can have features and cavities that other 3D printers struggle to create. The resin can even cure around an existing object, as they demonstrate by manifesting a handle around a screwdriver shaft.

Naturally, different materials and colors can be swapped in, and the uncured resin is totally reusable. It’ll be some time before it can be used at scale or at the level of precision traditional printers now achieve, but the advantages are compelling enough that it will almost certainly be pursued in parallel with other techniques.

The paper describing the new technique was published this week in the journal Science.

This 3D-printed AI construct analyzes by bending light

Machine learning is everywhere these days, but it’s usually more or less invisible: it sits in the background, optimizing audio or picking out faces in images. But this new system is not only visible, but physical: it performs AI-type analysis not by crunching numbers, but by bending light. It’s weird and unique, but counter-intuitively, it’s an excellent demonstration of how deceptively simple these “artificial intelligence” systems are.

Machine learning systems, which we frequently refer to as a form of artificial intelligence, at their heart are just a series of calculations made on a set of data, each building on the last or feeding back into a loop. The calculations themselves aren’t particularly complex — though they aren’t the kind of math you’d want to do with a pen and paper. Ultimately all that simple math produces a probability that the data going in is a match for various patterns it has “learned” to recognize.
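That "just a series of calculations" claim is easy to see in miniature. The sketch below runs a tiny two-layer network forward; the weights are random stand-ins for values training would have produced, but the arithmetic — multiply, apply a simple nonlinearity, multiply again, normalize — is genuinely all there is to inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen weights of a tiny two-layer network, stand-ins for
# values a training process would have produced. After training they
# never change; every prediction is the same fixed arithmetic.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x):
    h = np.maximum(0.0, x @ W1)        # layer 1: multiply, then ReLU
    logits = h @ W2                    # layer 2: multiply again
    e = np.exp(logits - logits.max())  # softmax: turn scores into
    return e / e.sum()                 # probabilities that sum to 1

probs = forward(np.array([0.2, -1.0, 0.5, 0.3]))
```

The output is exactly what the paragraph above describes: a probability for each pattern the network has "learned" to recognize.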

The thing is, though, that once these “layers” have been “trained” and the math finalized, in many ways it’s performing the same calculations over and over again. Usually that just means it can be optimized and won’t take up that much space or CPU power. But researchers from UCLA show that it can literally be solidified, the layers themselves actual 3D-printed layers of transparent material, imprinted with complex diffraction patterns that do to light going through them what the math would have done to numbers.

If that’s a bit much to wrap your head around, think of a mechanical calculator. Nowadays it’s all done digitally in computer logic, but back in the day calculators used actual mechanical pieces moving around — something adding up to 10 would literally cause some piece to move to a new position. In a way this “diffractive deep neural network” is a lot like that: it uses and manipulates physical representations of numbers rather than electronic ones.

As the researchers put it:

Each point on a given layer either transmits or reflects an incoming wave, which represents an artificial neuron that is connected to other neurons of the following layers through optical diffraction. By altering the phase and amplitude, each “neuron” is tunable.

“Our all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement,” write the researchers in the paper describing their system, published today in Science.

To demonstrate the approach, they trained a deep learning model to recognize handwritten numerals. Once the model was finalized, they took the layers of matrix math and converted them into a series of optical transformations. For example, a layer might add values together by refocusing the light from both onto a single area of the next layer — the real calculations are much more complex, but hopefully you get the idea.
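A crude way to picture the optical version: each printed plate multiplies the light field point by point (shifting phase and amplitude), and free-space diffraction between plates spreads each point's light to its neighbors, connecting "neurons" on one layer to many on the next. The simulation below is my own much-simplified stand-in — random masks instead of trained ones, a blur instead of real diffraction physics — meant only to show the structure, not the researchers' actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16  # pixels per side of each printed layer

# Hypothetical "learned" plates: each shifts the phase of the light at
# every point. In the real device these values are fixed by training,
# then physically 3D-printed.
layers = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))) for _ in range(3)]

def propagate(field):
    """Crude stand-in for free-space diffraction: every point spreads
    its light evenly to its 3x3 neighborhood, which is what links one
    layer's 'neurons' to many neurons on the next."""
    out = np.zeros_like(field)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(field, dy, 0), dx, 1) / 9.0
    return out

def d2nn(input_image):
    field = input_image.astype(complex)
    for mask in layers:
        field = propagate(field) * mask  # diffract, then pass the plate
    return np.abs(field) ** 2            # a detector measures intensity

intensity = d2nn(rng.random((N, N)))
```

In the real system, the bright spot in the output intensity pattern lands on the detector region assigned to the recognized digit.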

By arranging millions of these tiny transformations on the printed plates, the light that enters one end comes out the other structured in such a way that the system can tell whether it’s a 1, 2, 3 and so on with better than 90 percent accuracy.

What use is that, you ask? Well, none in its current form. But neural networks are extremely flexible tools, and it would be perfectly possible to have a system recognize letters instead of numbers, making an optical character recognition system work totally in hardware with almost no power or calculation required. And why not basic face or figure recognition, no CPU necessary? How useful would that be to have in your camera?

The real limitations here are manufacturing ones: it’s difficult to create the diffractive plates with the level of precision required to perform some of the more demanding processing. After all, if you need to calculate something to the seventh decimal place, but the printed version is only accurate to the third, you’re going to run into trouble.

This is only a proof of concept — there’s no dire need for giant number-recognition machines — but it’s a fascinating one. The idea could prove to be influential in camera and machine learning technology — structuring light and data in the physical world rather than the digital one. It may feel like it’s going backwards, but perhaps the pendulum is simply swinging back the other direction.