Creating a mini me: Playing with 3D printing


As part of the Digital Frontiers exhibition I have been experimenting a bit with 3D printing. This is why working in a university is brilliant: there are so many clever people, and bits of kit about, who will let you have a bit of a play.

3D tech is becoming quite big in museum discussions right now, and many museums are looking to embed 3D features permanently into their museum services, but there are a few challenges to doing this. Check out Andrew Lewis' (V&A) post, 'How ready is 3D for delivering museum services?', and my post on Bits to Blogs about Crapjects.

Because 3D is emerging, and is turning out to be a playfully disruptive technology, I felt it was important to experiment with just what could be done relatively quickly with 3D tech for an exhibition.

A couple of months ago I had myself quickly scanned by Jan Boehm and John Hindmarch from UCL Engineering's Virtual Environments, Imaging & Visualisation group. The scan was then printed out on Andy Hudson Smith's (CASA) 3D printer, producing this prototype:

3D Me!

Last night Steve Gray and I had another play, this time creating a mesh of an object using a Kinect and the software ReconstructMe.

Microsoft’s Kinect is an awesome piece of tech. Instead of gameplay, you can use its infrared sensors to do depth-of-field scanning! We were trying to work out a reusable workflow, so we could then scan everybody! We started with a desk drawer and moving the Kinect around, but that didn’t really cut it. Eventually, with a bit of tinkering, we had success using an office swivel chair to allow the object (aka me) to revolve slowly in front of the Kinect!

The scans produced are not faultless, but they are really very good for such simple and cheap kit.  We (I say we, but actually Steve) cleaned up the scan using free tools. Here is a scan of myself showing the problem areas. This is in MeshMixer:

3D me in MeshMixer

The final result came from using a mix of MeshMixer, MeshLab and NetFabb Basic to fix gaps in the models.

3D Steve and Claire

And if you so wish, you can download and print either Steve or me, or both of us! We added ourselves to Thingiverse. Now everyone can have a mini Claire! Since last night there have already been 12 downloads of us! Weird!

Blurring the boundaries between art and data: Listening Post

Yesterday I paid a visit to the Science Museum to try and make sense of all the ideas, objects and themes that are pinging around my head in relation to the new exhibition I’m creating.  I originally went to look at the narrative structures the Science Museum uses when talking about telecommunications and how they deal with a historical thread in different themes. But after looking at lots and lots of labels and text panels, my brain started to melt.

One of the aims of ‘my’ exhibition is to explore the difference between art and technology, and to ask questions about what is art and what is data. Can art be data and can data be art? With this in mind, I stumbled into the Listening Post installation by Mark Hansen and Ben Rubin.

It’s a mesmerising experience, classed as a ‘dynamic portrait’ of online communication. The installation displays uncensored fragments of text, sampled in real time from public internet chatrooms, accompanied by the rhythm of computer-synthesised voices reading – or, as some put it, “singing” – the words that flicker over the screens. It’s really quite beautiful and you do get lost listening to it. It really does challenge the visitor to think differently about data.

I’m really looking forward to delving deeper into this idea about the difference between art and data, or lack thereof, using UCL Art Museum collections as a base for discussion. I’d be interested to know if anyone has any other beautiful examples of installations that blur the boundaries between art and data.