How hard is it to 3D scan a lightbulb? Part 1

Last year, whilst curating the temporary Digital Frontiers exhibition in the brilliant Octagon gallery at UCL, Nick Booth (UCL curator of the science and engineering collections) and I became a bit obsessed with light bulbs.  From this slightly odd obsession, and thanks to a kind research grant from the Institute of Making, Nick and I get to play with light bulbs and call it research.

We’re looking at the process of materials and making, using 3D scanning and printing to see if creating new objects encourages closer inspection and a deeper understanding of historical objects – in our case, light bulbs.  Neither Nick nor I claim to be experts in 3D scanning museum objects; we wanted to see what we could do with the bare minimum of training on the devices.  How easy is it for a relatively normal person to scan and print museum objects?

Firstly we played with 3D scanning.   In the next blog post I’ll talk about the process of 3D printing.

We wanted to look at different ways to scan and create a 3D mesh of a light bulb.  We tried two main approaches: first using easily accessible and relatively cheap (in this case free) technology, 123D Catch, and then having a go with a NextEngine.

123D Catch is a free application from Autodesk that enables you to take a series of photos and turn them into 3D models.  We used the handy iPhone app.  It works by taking multiple digital photos shot around a stationary object and submitting them to a cloud-based server for processing, where the images are stitched together to produce a 3D model.
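
The actual cloud pipeline behind 123D Catch is proprietary, but the core photogrammetry idea – spotting the same physical surface points across overlapping photos so they can be triangulated into 3D – can be sketched with OpenCV.  This is just a minimal illustration of that first matching stage, under the assumption of two hypothetical overlapping photo files; it is not Autodesk's actual method.

```python
# A minimal sketch of the feature-matching step behind photogrammetry
# tools like 123D Catch (the real cloud pipeline is proprietary; this
# only shows the first stage). Requires opencv-python; the image
# filenames are hypothetical.
import cv2

# Two overlapping photos taken while walking around the object
img1 = cv2.imread("bulb_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("bulb_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive keypoints and compute descriptors in each photo
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between the two views; each good match is the same
# physical point seen from two angles, which is what lets the solver
# triangulate a 3D position for it
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences between the two photos")
```

That matching step is also a hint at why glass is such trouble: a transparent bulb has almost no stable surface features, and the "features" it does show – reflections and refractions – move as the camera moves, so the correspondences fall apart.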

NextEngine is a desktop 3D scanner which captures 3D objects in full colour with multi-laser precision.

We knew a light bulb wasn’t going to be easy to scan because scanners don’t tend to like transparent, shiny or mirrored objects.  But we thought we’d have a go anyway.

Scanning transparent objects in practice

Before we experimented with the historical science and engineering collection, we used a normal everyday bulb to see what worked and what didn’t.

Firstly, 123D Catch

And now NextEngine


As you can see, the fitting shows up pretty well, but the glass bulb itself really doesn’t work, and 123D Catch has gone completely funny.  So after a bit of googling, tweeting and advice from the 3D pros at UCL, we decided to try to disguise the transparency with talcum powder.

So here are the versions with talc.

123D Catch

NextEngine


Which amazingly worked!

After checking with conservation, we decided to cover the historical lightbulb in talc and try scanning that. Here are the results:

123D Catch

NextEngine


The NextEngine scan is pretty good. It doesn’t quite capture the peak at the top of the bulb, but it isn’t a bad representation.  I don’t think we can quite call it a replica but it definitely looks like a lightbulb.

Obviously, covering a historical object in talc throws up a lot of questions about how museums could utilise 3D scanning if they have to cover delicate and fragile glass objects in powder to get an adequate scan. We’d be really interested to hear if anyone has found a more conservation-friendly way of dealing with transparent objects without having to coat them in talc.

It also brings up questions about how accurate 3D representations of museum objects should be.  Should they be identical? Or is an approximate object acceptable?

Creating a mini me: Playing with 3D printing


As part of the Digital Frontiers exhibition I have been experimenting a bit with 3D printing.  This is why working in a university is brilliant – there are so many clever people and bits of kit about, and people are happy to let you have a bit of a play.

3D tech is becoming quite big in museum discussions right now, and many museums are looking to embed 3D features permanently into their services, but there are a few challenges in doing this. Check out the post by Andrew Lewis of the V&A, How ready is 3D for delivering museum services?, and my post from bits to blogs about Crapjects.

Because 3D is emerging and turning out to be a playfully disruptive technology, I felt it was important to experiment with just what could be done relatively quickly with 3D tech for an exhibition.

A couple of months ago I was quickly scanned by Jan Boehm and John Hindmarch from UCL Engineering’s Virtual Environments, Imaging & Visualisation group; the scan was then printed out on Andy Hudson Smith’s (CASA) 3D printer and produced this prototype:

3D Me!

Last night Steve Gray and I had another play, this time creating an object mesh with a Kinect and the software ReconstructMe.

Microsoft’s Kinect is an awesome piece of tech.  Instead of using it for gameplay, you can use its infrared sensor to do depth scanning!  We were trying to work out a reusable workflow, so we could then scan everybody! We started with the Kinect on a desk drawer and moved it around, but that didn’t really cut it.  Eventually, with a bit of tinkering, we had success with an office swivel chair, allowing the object (aka me) to revolve slowly in front of the Kinect!
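
ReconstructMe does the clever part – fusing a stream of depth frames into a single mesh in real time – but the raw ingredient it works from can be sketched fairly simply.  Here is a rough illustration using the open-source libfreenect Python bindings (an assumption on our part; ReconstructMe uses its own drivers), grabbing one depth frame from a Kinect v1 and back-projecting it into a point cloud.  The camera intrinsics are approximate values commonly quoted for the Kinect, not a calibrated set.

```python
# A rough sketch of pulling one depth frame from a Kinect v1 with the
# open-source libfreenect Python bindings and converting it to a point
# cloud. ReconstructMe fuses many such frames in real time; this only
# shows the raw ingredient.
import freenect
import numpy as np

# Grab a single depth frame in millimetres (640x480, uint16)
depth, _ = freenect.sync_get_depth(format=freenect.DEPTH_MM)

# Approximate pinhole intrinsics for the Kinect v1 depth camera
fx, fy, cx, cy = 594.2, 591.0, 339.5, 242.7

# Back-project every valid pixel into 3D camera coordinates
v, u = np.indices(depth.shape)
z = depth.astype(np.float32) / 1000.0   # convert to metres
valid = z > 0                           # zero means "no depth reading"
x = (u[valid] - cx) * z[valid] / fx
y = (v[valid] - cy) * z[valid] / fy
points = np.column_stack((x, y, z[valid]))

print(f"point cloud with {len(points)} points from one frame")
```

This is also why the swivel chair worked so well: keeping the Kinect still and rotating the subject gives the fusion software a steady stream of overlapping depth frames from every angle.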

The scans produced are not faultless, but they are really very good for such simple and cheap kit.  We (I say we, but actually Steve) cleaned up the scan using free tools. Here is a scan of myself showing the problem areas. This is in MeshMixer:

3D me in MeshMixer


The final result used a mix of MeshMixer, MeshLab and NetFabb Basic to fix gaps in the models.
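
We did that cleanup by hand in those GUI tools, but the same sort of gap-fixing can also be scripted.  Here is a minimal sketch using the free Python library trimesh – our choice for illustration, not what we actually used – with a hypothetical scan filename:

```python
# A minimal scripted equivalent of the gap-fixing we did by hand in
# MeshMixer/MeshLab/NetFabb, using the free Python library trimesh.
# The filenames are hypothetical; real scans often need manual work too.
import trimesh

mesh = trimesh.load("kinect_scan.ply")

# Close small holes and make the triangle winding/normals consistent,
# so the mesh is watertight enough for a 3D printer to slice
trimesh.repair.fill_holes(mesh)
trimesh.repair.fix_normals(mesh)

print("watertight:", mesh.is_watertight)
mesh.export("kinect_scan_fixed.stl")
```

Watertightness matters because slicing software needs a closed surface to decide what is "inside" the model; any leftover gap from the scan can produce a failed or mangled print.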

3D Steve and Claire

And if you so wish, you can download and print either Steve or me, or both of us! We added ourselves to Thingiverse.  Now everyone can have a mini Claire!  Since last night there have already been 12 downloads of us! Weird!