
Experiencing Collaboration

The first article about the Art of Transformation project is published.

There is increasing chatter about collaboration all around and among us at the university. To collaborate is becoming an aspiration. So many of us want to reach out and work together, preferably on something meaningful, and the process itself is meaningful. At a time when national leaders struggle to work together to meet the public's critical challenges, it feels good to seriously engage one another, particularly across the boundaries that so often isolate and divide us. In this spirit we want to share a newly published article about a project the IRC has been deeply involved in, The Art of Transformation: Cultural Organizing by Reinventing Media. It appears in the current issue of PUBLIC, the journal of the Imagining America consortium, and it is about the process and practice of university people and communities working to engage one another in a way that is truly collaborative.

If you are not yet aware of Imagining America (IA), or have not been able to join the many UMBC faculty, students, staff, and administrators who have become involved with it, this article should give you a feel for what it's all about. IA describes itself as: "Publicly engaged artists, designers, scholars, and community activists working toward the democratic transformation of higher education and civic life." A lofty goal to be sure, but as we hope the article conveys, it is one that is surprisingly and wonderfully possible.

Writing the article itself was a deeply collaborative effort. Six of us wrote it together. Lots of articles have co-authors, and many readers will be familiar with the standard protocol for writing and publishing an article or edited volume: a lead author pulls together the thoughts of co-authors into an article that is clear and written with a single voice. It is a practical and successful tradition. Nonetheless, it was not suitable for writing about the Art of Transformation, because the very foundation of the project is about not privileging one voice over another. So with no one of us put "in charge," we worked together, each adding what we could, until the writing was done. Our names are listed alphabetically rather than in an order that would describe a hierarchy of responsibility and involvement. We had no idea how to write the article when we began—people had very different ideas about structure and content—but getting the pieces to work together didn't feel like a series of compromises. It felt natural. Each voice found its place. Though writing a 9,000-word article is always a major undertaking, in retrospect, this one was probably no harder, and took no longer, than such writing normally does. We only had to rewrite the whole thing once!

Filming interviews on the street in Morrell Park

The article begins shortly after UMBC hosted the Imagining America conference in 2015. The conference organizers, who represented UMBC and nearly fifty other organizations and institutions in Baltimore, pledged to stay organized—to keep meeting, at least long enough to launch some kind of continued collaboration. That collaboration became the Art of Transformation project. The article chronicles the first phase of the project, in which UMBC participants worked with four Baltimore communities and organizations, including CultureWorks, WombWork Productions, and Chesapeake Center for the Arts, and with those involved in other UMBC projects such as Baltimore Traces and Mill Stories. In each community, we listened to people and documented their stories about living there—what they'd seen in their lifetimes, what the community means to them, and what they imagine for the future. We captured those stories in films and visualizations that will be organized into a new kind of public media during phase two of the project. The process of recording, editing, analyzing, and compiling media, then playing it back to those who had been recorded, definitely had its ups, downs, and bumps along the road. Given the often-troubled relationship between researchers and community residents, or as it's often phrased, "town and gown," pursuing the project in a way that upheld the spirit of collaboration was, like writing the article, challenging but possible, and it is paving the way for more possibilities going forward. We are encouraged and, together with our community partners, eager to go much further along this path.

Both the article and the project it describes are things we at the IRC are proud to be involved with. The project is a great match for us because it combines reinventing media with rethinking and reworking the practices we use to create the stories we tell ourselves, about ourselves, and the ways we tell them. This is what IRC work is all about. Please check it out, and have a look around the rest of the issue once you do. In alphabetical order, the article's authors are: Frank Anderson, Doctoral Student, Language, Literacy and Culture and Assistant Director, Choice Program, UMBC; Beverly Bickel, Clinical Associate Professor, Language, Literacy and Culture, UMBC; Lee Boot, Director, Imaging Research Center, UMBC; Sherella Cupid, Doctoral Student, Language, Literacy and Culture, UMBC; Denise Griffin Johnson, Cultural Organizer, Culture Works & USDAC; Christopher Kojzar, Graduate Student, Intermedia and Digital Arts, UMBC.

Lee Boot

January 30, 2018

Faculty Fellows Work: Shabamanetica

One of the most popular installations at the 2017 Baltimore Light City Festival was Eric Dyer’s Shabamanetica, two spinning sculptures that resembled massive ships’ wheels. As the wheels spin, a strobe light flashes, and the static images on the zoetropes come to life. One features spinning parasols and Panamanian waterfalls; the other shows Shanghai cargo cyclists and a machine that spits out poop-emoji pillows. Shabamanetica’s swirl of colors and objects unites three disparate places (SHAnghai, BAltimore, and the PanaMA Canal) with kinetics. As Dyer explains, “The animations combine imagery from Shanghai, Panama, and Baltimore: three places connected anew by the recent expansion of the Panama Canal and dredging of the Port of Baltimore in preparation for the gigantic Neopanamax container ships.” This work was supported by one of the IRC’s Summer Faculty Research Fellowships.

Shabamanetica on display at Light City.

Dyer knew that he wanted the project to explore the connections between Baltimore, the East Coast port best suited to the enormous Neopanamax ships, and Shanghai, where Dyer had recently taught a zoetrope-making workshop. (Zoetropes are optical devices that create the illusion of motion from a series of static images.) While in Shanghai, he found himself fascinated by the wide variety of umbrellas and parasols that people used. Dyer speculates that he was drawn to them by their zoetrope-like radial shape and ease of spinning, and he shot video of his students playing with them. While visiting the Baltimore Museum of Industry, Dyer discovered that Baltimore was once the largest manufacturer of umbrellas in the world. But the Panama Canal, which made it possible to ship goods more cheaply from the Pacific Rim to the Atlantic, ended that.

Dyer drew on the BMI’s collections, including actual Baltimore umbrellas and commercials for the “Teeny Popper.” As Dyer recalled, “the actress in the ad has this fantastic look of confusion and distress when she notices it’s starting to rain. To me she was wondering, ‘what happened to our industries?’ There’s another great shot I used of her opening and closing the Teeny Popper, which I looped forward and backward to make it appear as if she is opening and closing the umbrella endlessly.”

A close up shot showing the details of the umbrellas.

Dyer wanted Shabamanetica to create a seamless, interactive experience for the public. He achieved this first by having individuals spin the wheels themselves to activate the artwork. He also hid the technology, placing the strobe light at the edge of the viewer’s peripheral vision and tucking the electronics into the base and behind the wheel.

This is where the IRC comes in. Our faculty research assistant, Mark Murnane, worked with Dyer to build the 400-watt LED strobe light system. The system uses an encoder to measure the speed at which the wheel is spinning and syncs the strobe to it; the strobe must be carefully synchronized with the wheel for the images to appear as a smooth animation. One challenge was building circuitry powerful enough to drive 400 watts of LEDs. Another was keeping the strobe light smooth at lower speeds, so as not to trigger seizures in any visitors.
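The actual firmware isn't published, but the core sync relationship is simple: the flash rate must equal the wheel's rotation rate times the number of printed frames, so each flash catches the next frame in the same apparent position. A minimal sketch in Python, with a hypothetical encoder resolution and frame count:

```python
# Sketch of zoetrope strobe timing (not the IRC's actual firmware).
# The encoder reports ticks; dividing by ticks-per-revolution gives
# wheel speed, and one flash per printed frame freezes the animation.

def strobe_interval_s(ticks_per_s, ticks_per_rev, frames_per_rev):
    """Seconds between flashes so each flash lands on the next frame."""
    revs_per_s = ticks_per_s / ticks_per_rev       # wheel speed
    flashes_per_s = revs_per_s * frames_per_rev    # one flash per frame
    return 1.0 / flashes_per_s

# Hypothetical numbers: a 1024-tick encoder reading 2048 ticks/s
# (2 revolutions per second) on a wheel with 18 printed frames
# gives 36 flashes per second.
interval = strobe_interval_s(2048, 1024, 18)
```

At low wheel speeds the flash rate computed this way drops into the range where a strobe looks like flicker rather than continuous light, which is the low-speed smoothness (and photosensitivity) problem described above.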

The circuit board designed by Mark Murnane to control the LED lighting.

Finally, Dyer discovered that because the temporal resolution (frame rate) of the images was sometimes double that of film and video, and the printed resolution of the images was about 16 times that of HD video, the combination made for imagery that seemed hyper-real and richly dimensional.

Shabamanetica (2017) - Eric Dyer, Artist from Eric Dyer on Vimeo.

Anne Sarah Rubin

September 21, 2017

Faculty Fellows Work: Plantelligence

Within a city like Baltimore, the landscape is generally considered the ‘background’ for human activity – a largely undifferentiated expanse of green, without much thought about the actual plants that fill the space. But UMBC Professor of Visual Arts Lynn Cazabon wants to move those plants to the foreground, and she is working with the IRC to explore the ways that plants respond to the environment around them. She also wants people to think about species that thrive in urban areas, and to that end is working with one common species, Conyza canadensis, better known as horseweed, a native annual. While most people think of horseweed as a nuisance, or weed, Cazabon is fascinated by its adaptability to the stresses of living in human-created landscapes.

Cazabon received a 2016 IRC Summer Faculty Fellowship for her project Plantelligence. As she explains, “Plantelligence emerged from my interest in how plants perceive and respond to events occurring in their surrounding environment, as a means to bring attention to how global warming impacts the way plants and in turn urban landscapes are currently evolving. Recent research in the field of plant neurobiology has led to inquiries into the evolutionary purposes of the many ways that plants sense their surroundings, including through analogues to sight, hearing, touch, taste and smell as well as through perception of electrical, magnetic, and chemical input.” But, plants move and react at speeds below human perception—and this is where the IRC’s research in photogrammetry, 3D modeling, animation, and virtual reality comes into play. As Cazabon explained, “I am using time-lapse photography and photogrammetry to study the movements of growing plants in order to translate these movements for human perception through animation…My goal in using VR is to create an immersive environment for the viewer which blurs conventional distinctions between inside and outside.”

Plantelligence first took shape in the photogrammetry rig, where we took scans of growing plants every 30 minutes for about two months, ultimately generating 8 terabytes of data. But the plan to create a 3D time-lapse film ran into a few temporal snags. The first was that the plants did not grow as quickly as we had hoped. The second involved the time needed to process the scans themselves: it took the computer 3-4 days to produce a model from each scan, so processing every single scan would have taken far too long. As a result, faculty research assistant Mark Murnane is working on a way to process the images on UMBC’s High Performance Computing Facility, which will speed up the process enough to make it feasible in the future.
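A quick back-of-envelope using the figures above (a scan every 30 minutes for roughly two months, 3-4 days of processing per scan) shows why sequential processing was a non-starter; the concurrent job count is a hypothetical illustration, not the HPC Facility's actual capacity:

```python
# Rough estimate of the processing backlog; all figures approximate.
scans = 2 * 30 * 48              # ~2 months of scans at 48 per day
days_per_scan = 3.5              # midpoint of the 3-4 day estimate

sequential_days = scans * days_per_scan       # one scan at a time
sequential_years = sequential_days / 365      # roughly 27 years

# With, say, 200 scan-processing jobs running concurrently on the
# cluster (hypothetical), the backlog shrinks to a couple of months.
parallel_days = sequential_days / 200
```

Because each scan can be processed independently of the others, the work is embarrassingly parallel, which is exactly the kind of job an HPC batch system handles well.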

We also learned a lot about the challenges of scanning a plant, because the models that were initially generated needed a lot of cleaning up by hand. Technical director Ryan Zuber calls these irregular models, with holes and deformation, “crunchy,” and he went to work smoothing them out. He cleaned up one model plant, imposing quadrangular polygons on its surface, which allow the model to be textured and animated.

But, as Zuber and Cazabon realized, it’s not easy to create and animate a realistic plant that is designed to be seen individually and up close. The horseweed has many leaves, tiny hairs, and variable textures, all of which need to be able to move independently of each other, and all of which need to be seen at multiple stages of growth. Zuber is treating each leaf as an individual ‘character’ and has built a rig that can work for all of the leaves, regardless of their specific geometry. He studied time-lapse films of plants growing in order to get a sense of the way the leaves grow and unfurl, and is now able to animate the plant.

The next step involves placing the plant in VR space where people can interact with it, which Cazabon envisions as a generic, unadorned gallery space. The goal here is to bring the outside to the inside: to isolate the plant against a neutral space that is more ideal for human perception. The final step will be to bring the animated plant and gallery environment together with custom software that will enable a viewer to explore and interactively affect the plant’s growth.

Anne Sarah Rubin

July 14, 2017

Pinhole Photogrammetry

Have you ever used a pinhole camera before? Maybe you made one to look at a solar eclipse, or as a summer camp project, or in art class. The pinhole camera, or camera obscura, is one of the oldest image-making technologies we have, and here at the IRC we are seeing its potential to improve our photogrammetry work.

The principles behind the pinhole camera go all the way back to Euclid around 300 BCE, but Leonardo da Vinci is widely credited with refining and improving the camera obscura. A camera obscura is a light-proof box (it could actually be an entire room) with one very small hole. The hole lets in light, and directly opposite the hole is an upside-down projection of whatever is outside. This happens because light travels in straight lines: rays from the top of the scene pass through the hole and strike the bottom of the opposite wall, while rays from the bottom strike the top. Da Vinci realized that this is essentially the same way the human eye works; our brain flips the image right-side up. The smaller the hole, the sharper the image (up to a point, as we'll see below).
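The projection geometry is just similar triangles, since every ray passes straight through the same point. A small sketch (all numbers illustrative):

```python
# Pinhole projection by similar triangles: a point at height h, a
# distance d in front of the hole, lands at height -h * (f / d) on a
# wall f behind the hole. The negative sign is the inversion Da Vinci
# observed: straight-line rays cross at the hole.

def pinhole_project(object_height_m, object_dist_m, image_dist_m):
    """Projected image height for a point seen through a pinhole."""
    return -object_height_m * image_dist_m / object_dist_m

# A 2 m tall subject standing 10 m from the hole, with the far wall
# of the box 0.2 m behind it, projects as a 4 cm inverted image.
h = pinhole_project(2.0, 10.0, 0.2)   # -0.04 m
```

The same formula is the idealized "pinhole camera model" that photogrammetry software fits to each real camera, before distortion terms are layered on top.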

First published picture of camera obscura in Gemma Frisius' 1545 book De Radio Astronomica et Geometrica

What does this have to do with our 94-camera photogrammetry rig? Mark Murnane, the IRC's post-baccalaureate faculty research assistant, is our photogrammetry expert, and over the course of his research he has grappled with two related issues. The first is that when a photon of light hits any given camera, the computer has to figure out the location in the photo of the corresponding pixel. That location is a function of the angle from which the light is entering the camera. But camera lenses complicate this process, because they add distortion. Lens distortion warps the image in a variety of ways, causing the image of a single point to appear as a larger blob. This blob is described by the point spread function, which characterizes how photons entering from a particular angle move through the lens. In the sample image below, each star is the point spread function of a sharp point of light. In an ideal world, each photon would light up only one pixel. With practical pinhole lenses, however, diffraction causes the image of a single point to spread into an Airy disk.

Measured PSF of a traditional lens

Traditional lenses exhibit radial distortion, tangential distortion, skew, and other optical effects. Accounting for all of these can lead to a model with 16 or more parameters, and with 94 cameras to solve together, this becomes an incredibly difficult problem.

Measured angular changes in PSF of a pinhole lens
Measured angular changes in PSF of a traditional lens

But a pinhole lens, especially one with an extremely small and precise hole (the one Mark tested is .2 mm), has much less distortion, and that distortion is also more predictable and regular. Furthermore, with the pinhole, the distortion doesn't vary by location within the image. The sample image from the pinhole (below) shows straighter lines, so the math needed to correct it is correspondingly simpler, resulting in a 3-parameter model. Having fewer free parameters means we can find a more accurate reconstruction in less time, and moving from 16 parameters to 3 makes a world of difference.
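The post doesn't spell out which three parameters the pinhole model uses, but a purely radial model with three coefficients (the radial part of the common Brown-Conrady lens model) is a representative example of how small the fit becomes:

```python
import numpy as np

# A 3-parameter radial distortion model (illustrative, not necessarily
# the IRC's exact parameterization). Ideal normalized image coordinates
# are scaled by a polynomial in the squared radius:
#     x' = x * (1 + k1*r^2 + k2*r^4 + k3*r^6)
# A full lens model adds tangential terms, skew, and more on top.

def radial_distort(points, k1, k2, k3):
    """Map ideal normalized coordinates to radially distorted ones."""
    pts = np.asarray(points, dtype=float)
    r2 = np.sum(pts ** 2, axis=-1, keepdims=True)  # squared radius
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3)

# With all coefficients zero the mapping is the identity (no
# distortion); points farther from the center are displaced more.
grid = np.array([[0.0, 0.0], [0.3, 0.0], [0.6, 0.0]])
warped = radial_distort(grid, k1=0.1, k2=0.0, k3=0.0)
```

Calibration runs this mapping in reverse: solve for the k coefficients that straighten known-straight lines, and with only three unknowns per camera that search is far easier to run jointly across 94 cameras.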

PSF of an ideal pinhole
PSF of a practical pinhole

Having a predictable point spread function also opens the door to programmatic image sharpening. By deconvolving the point spread function across a captured image, some detail lost in the capturing process may be recovered. This method has become a crucial tool in other fields such as astronomy, where it has been used to sharpen images taken from many observatories.
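The post doesn't name a specific deconvolution method; Richardson-Lucy is one standard choice (it is among the algorithms astronomers used to sharpen flawed early Hubble images). A minimal 1-D sketch with NumPy, recovering a point of light blurred by a known PSF:

```python
import numpy as np

# Minimal 1-D Richardson-Lucy deconvolution (an illustration of the
# general technique, not the IRC's pipeline). Each iteration compares
# the blurred estimate to the observation and reweights the estimate.

def richardson_lucy(observed, psf, iterations=100):
    psf = np.asarray(psf, dtype=float)
    psf_flipped = psf[::-1]
    estimate = np.full(len(observed), 0.5)   # flat initial guess
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Blur a single bright point with a small PSF, then recover it.
truth = np.zeros(21)
truth[10] = 1.0
psf = np.array([0.25, 0.5, 0.25])            # simple symmetric blur
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf, iterations=100)
```

The catch, as in astronomy, is that the result is only as good as the PSF measurement, which is exactly why a predictable pinhole PSF makes this approach attractive.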

The trick is to figure out how to get a perfect pinhole, without any aberrations. The less precise the pinhole, the less detail in the image. However, once we have figured out the optimal pinhole, then we will switch all of the cameras in the photogrammetry rig to pinholes, which should in turn allow for more accurate scanning.

There is a trade-off here: is less light on the individual camera sensors worth knowing exactly where the light came from? We think that it is.

Mark Murnane, Anne Sarah Rubin

June 14, 2017

Scan Your Stuff Wrapup

On April 7-8, the IRC held its inaugural Scan Your Stuff event, where we invited members of the UMBC community to bring in objects to be scanned using our photogrammetry rig. People could bring in anything they wanted, as long as it was bigger than a basketball and smaller than a suitcase. In return, the objects would be photographed by 94 cameras simultaneously, and then the images would be stitched together into a 3D model. By scanning a wide variety of items, with different sizes, shapes, colors, and textures, and seeing what best converts to 3D geometry, the IRC will be able to refine the algorithms that govern the photogrammetry process and build more complete, more accurate models.

Over the two days, 19 people brought in a total of 39 objects to be scanned. These included eight dolls or stuffed animals, and three handmade sculptures brought in by their makers. We took a trip down memory lane with an 8mm film projector, several Apple II components, and Star Wars models from the 1970s. The oldest item we scanned was a two-handled cup from the 6th century BCE, known as a bucchero, while the newest was probably a tin of Blistex. We scanned ice skates and a cowboy hat. The Athletic Department brought over the 2008 Men’s Basketball trophy. Two people brought in large paper wasp nests—one that was about 50 years old, and one that was collected recently. And two people brought ukuleles.

UMBC Men's Basketball Trophy, 2008

We learned a lot from processing all of these images. First, we learned that our 94 cameras might not be enough. They do a great job of capturing the sides of objects, but not as good a job with the tops. We may need to restructure the rig (and buy a few more cameras) in order to get better density of coverage from the top down.

We were surprised to find that specularity (reflectivity) was less of an issue than we had anticipated. We did spray some shiny objects (like the blades of ice skates) with powder so that they would be more matte, but in general the software handled reflections well. We were also surprised to find that the algorithms didn’t reject the white background of the rig as completely as we might have expected. And we found more instances of shelling, where the model had thin echoes or copies that didn’t fully match up; these look like splashes of texture coming off the objects.

Finally, we discovered that disabling CUDA (which allows us to run code on the GPU rather than the CPU) generated different results than leaving it enabled. We generally use the GPU to reconstruct the 3D models because it is significantly faster. However, for several of these scans we relied on the CPU exclusively, and found that the resulting models were more accurate but took much longer to construct.
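We can't say for certain why our particular pipeline diverged, but one well-known contributor to GPU/CPU differences is that floating-point addition is not associative, and the two devices typically accumulate sums in different orders. A toy illustration in NumPy single precision, where grouping alone decides whether a small value survives:

```python
import numpy as np

# Floating-point addition is not associative. In float32, the spacing
# between representable numbers near 1e8 is 8, so adding 1.0 to 1e8
# rounds straight back to 1e8. The same three numbers summed in a
# different order give a different answer.
a = np.float32(1e8)
b = np.float32(1.0)
c = np.float32(-1e8)

lost = (a + b) + c   # 1e8 + 1 rounds to 1e8, so the 1.0 vanishes
kept = (a + c) + b   # the large values cancel first; the 1.0 survives
```

Across the millions of accumulations in a 3D reconstruction, these tiny per-sum differences can nudge the optimizer toward measurably different models, which is consistent with (though not proof of) what we observed.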

All in all, Scan Your Stuff gave us a lot to think about for the future of photogrammetry at UMBC. We plan on holding a similar event in the fall, so start thinking about what you want to see scanned.

Anne Sarah Rubin

May 17, 2017