Lab Preparation for the Cuneiform Project

Following on from the two photogrammetry tests I carried out using natural light, I started a series of tests in the Imaging Lab with the artificial continuous lighting we would be using in the Russell Library and the soft cube/light boxes we have access to. On the 1st of November, after our visit to the Russell Library, Amanda (who was also assigned to processing and the role of group rep, as it was suggested that two people should be involved in processing), Sarah (who wanted to practise capturing, since her task in her group was digitisation) and I used the Imaging Lab for the remainder of the day. We had previously used the lab on several occasions to do test set-ups for photogrammetry and to familiarise ourselves with the equipment. We decided to work together, helping each other with capturing ideas and suggestions, and we produced several data sets from a variety of setups.

We also experimented with different ways of making masks. We tried creating a mask from a photograph of the background taken without the object. This only generates a mask for the parts of the scene where the background does not change; where there is movement, in our case the oasis and turntable changing position from image to image, a mask from background will not work. We tried it because we felt it might save some time masking, and to an extent it did. However, as the camera angle changed to a higher position looking down on the scene, most of the background became the oasis and turntable, so the approach no longer made sense and we abandoned it in favour of manual masking.
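For anyone curious about the principle behind mask-from-background, it boils down to flagging pixels that differ from the empty-scene shot. Below is a minimal Python/OpenCV sketch of that idea (not PhotoScan's own implementation); the file names and the threshold value are assumptions for illustration only.

```python
import cv2
import numpy as np

# Hypothetical file names, for illustration only.
background = cv2.imread("background.jpg", cv2.IMREAD_GRAYSCALE)
capture = cv2.imread("capture_01.jpg", cv2.IMREAD_GRAYSCALE)

# Pixels that differ from the empty-scene shot are treated as foreground.
diff = cv2.absdiff(capture, background)
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

# Clean up speckle with a small morphological open and close.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

cv2.imwrite("capture_01_mask.png", mask)
```

The threshold controls how different a pixel must be before it counts as foreground; set it too low and anything that moves or changes brightness (like the turntable) ends up unmasked, which is exactly the limitation we ran into.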

Amanda and I processed one of the data sets separately, to try different solutions to the issues that had arisen.

We were using a 3D-printed cuneiform tablet to practise with.

We had been using a white background, but it was causing severe reflections on the plastic tablet, so a green background was suggested. This appeared to cause fewer obvious reflections on the tablet, and it had the added bonus that the Magic Wand masking tool worked very well with the green-screen background. We were transferring the images into PhotoScan to check their quality and trying out low-quality alignments as we went. However, when the models were later (partially) generated, we found that there were still green reflections on the tablet.
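The Magic Wand is a manual selection tool, but the same effect can be approximated in code by keying out the green backdrop. Here is a minimal Python/OpenCV sketch of that approach, assuming a hypothetical input file and a rough HSV range that would need tuning to the actual backdrop and lighting:

```python
import cv2

# Hypothetical input file; the HSV range is a starting guess for a
# green-screen backdrop and would need adjusting in practice.
image = cv2.imread("capture_01.jpg")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Everything within the green range becomes background (0);
# everything else is kept as foreground (255).
green = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
mask = cv2.bitwise_not(green)

cv2.imwrite("capture_01_mask.png", mask)
```

Note that keying out the backdrop does nothing about green light bouncing onto the object itself, which is why the reflections still appeared on the tablet.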

We tried a number of techniques to remove the green from the tablets; while effective to some extent, none of them removed it all, and for this reason I decided not to use the green-screen background at the Russell Library.

Removing a selected colour from the point cloud: the colour is selected and the sensitivity of the tool is set to expand the range of neighbouring colours to be removed.
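That point-cloud tool is essentially a colour-distance test with a tolerance. As a rough illustration of the idea (not PhotoScan's actual algorithm), here is a small numpy sketch; the target colour and tolerance values are placeholders:

```python
import numpy as np

def select_by_colour(colours, target, tolerance):
    """Return a boolean mask of points whose RGB colour lies within
    `tolerance` (Euclidean distance on a 0-255 scale) of `target`.

    colours : (N, 3) uint8 array of per-point RGB values
    target  : the colour to remove, e.g. the green spill
    """
    dist = np.linalg.norm(colours.astype(float) - np.asarray(target, float), axis=1)
    return dist <= tolerance

# Example with placeholder data: flag greenish points, keep the rest.
colours = np.random.randint(0, 256, size=(1000, 3), dtype=np.uint8)
to_remove = select_by_colour(colours, target=(60, 200, 60), tolerance=80)
kept = colours[~to_remove]
```

Raising the tolerance widens the band of "neighbouring" colours that get removed, which is the sensitivity setting described above.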

I decided to use a white background if the tablets were dark, or a black one if they were light, to create contrast with the background. The tablets I ended up documenting were all light in colour, so I used a black background.

The most serious issue we ran into during processing was trying to get the two halves of the plastic test tablet to align.

We tried many different approaches to get the two orientations of the tablet to align: separating the chunks by orientation, separating them by camera position, and combining all the images into one chunk. None of it worked.

I then tried adding markers to the same points in each aligned orientation. This did produce a complete tablet with both halves pieced together, but very unsuccessfully: one half appeared wider at the join than the other.
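PhotoScan performs the marker-based merge internally, but the underlying step is a rigid-body fit between the marker positions shared by the two alignments. For context, here is a small numpy sketch of that fit (the Kabsch algorithm) using made-up marker coordinates; it is not the software's own code.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    (Kabsch algorithm). src, dst: (N, 3) arrays of matching marker positions."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical marker coordinates, one set per chunk's own frame.
markers_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
rotation = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
markers_b = markers_a @ rotation.T + np.array([0.5, 0.2, 0.0])
R, t = rigid_fit(markers_a, markers_b)
```

If the two halves genuinely disagree in shape, as ours seemed to, the residuals of such a fit stay large no matter how carefully the markers are placed, which can be a sign that the problem lies in the alignments themselves rather than in the merging step.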

The tablet was captured in 360-degree rotations by turning the turntable on which the oasis holding the 3D-printed tablet sat. These rotations were captured from three camera positions, with about 33 images covering the 360 degrees at each position: two positions with the tablet in one orientation and only one with it in the other.

This was probably too restrictive, and I think it was the biggest reason the tablet would not align. (We took this on board when we went to the library and captured three vertical camera positions for each orientation of the tablets, with more images at each level. This was later reduced to three positions for one orientation and two for the other, as there seemed to be enough overlap and it made better use of our time. It proved sufficient in the cases where we did this.)
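As a rough sanity check on coverage, the arithmetic is simple enough to sketch; the figures below are the approximate numbers mentioned above, not exact counts.

```python
# Rough coverage arithmetic for a turntable capture.
images_per_rotation = 33            # approx. shots in one 360-degree pass
step_deg = 360 / images_per_rotation
print(f"angular step: {step_deg:.1f} degrees between consecutive shots")

# With three camera heights per orientation instead of one or two,
# the same step gives far more overlapping views of each face.
camera_positions = 3
total_images = camera_positions * images_per_rotation
print(f"images per orientation: {total_images}")
```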

The many different projects based on this data set, created to experiment with settings.

Aside from the insufficient data set, my main concern during the practice was the light box we were using, which had built-in LEDs. From my previous photography experience, and my basic understanding of photogrammetry up to that point, I felt the light was too hard and too directional. The light box had a series of bright LEDs in the top of the cube only, and smooth reflective sides. The LEDs were very small and not diffused in any way, so the setup amounted to a lot of small bright lights producing hard, directional light with the potential to create a lot of specular highlights.

Apart from producing a substandard texture map, because the hard shadows are baked into the images, I felt it could also cause problems with alignment: the hard shadows in the cuneiform inscriptions would be reversed when the tablet was flipped vertically to capture it in its entirety.

(Again, this was taken into consideration when going to the library: we took the soft light boxes/tents, which are made of translucent material that diffuses light. They have no internal light source and were lit externally from all around, top and sides, creating much softer light.)

Our conclusion was that the combination of an insufficient number of capture images and hard lighting was what caused the failure to align the two halves of the plastic cuneiform tablet.

We learnt a lot about capturing and processing from this lab work, and took what we learnt with us to the Russell Library, ensuring the capture day was a success.
