As a reminder, the two main goals of this project are:
- To enhance the street life of the city by offering delightful destinations and interesting experiences – a new kind of urban design
- To engage, entertain, educate and reinforce the image of Yonkers as an historic center of innovation and to inspire the creativity of its current residents
We started out with a wide variety of content that entertains, educates and reinforces the residents’ understanding of their city. As the City government takes over full control of this, the next phase will be about deepening the engagement and interactivity with pedestrians – what will really make this a new tool of urban design.
This post is devoted to just a few of the possible ways that a digital experience on the streets can become more interactive.
First, a note about equipment and software. I’ve mentioned the high-quality HD projectors and outdoor speakers. I haven’t mentioned the cameras that are also installed. So far, those cameras have been used to make sure the system is operating properly. But the real value of cameras is in seeing – and, with the proper software, analyzing – what people do when they see the projections or hear something.
The smartphones that people carry as they pass by also let them communicate via websites, social media or even their movements.
With all this in place, it helps to think of what can happen in these four categories:
- Contests and Voting
- Control of Text
- Physical Interaction
- Teleportation
Contests and Voting

What’s your favorite part of the city? Show a dozen or so pictures and let people vote on them – and display the results in real time. It’s not a deeply significant engagement, but it will bring people out to show support for their area or destination.
Or people can be asked to pick their favorites: the winner of an amateur poetry contest (which only requires audio), the best photograph of the waterfront or of a beautiful park, or the best item that has been 3D printed inside the library’s makerspace. Or???
Even the content itself can be assessed in this way. We can ask passersby to give a thumbs up or down for what they are seeing at that moment. (Since the schedule of content is known precisely, we would also know what each person was referring to.)
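Because the playout schedule is known, a vote’s timestamp alone identifies what was on screen. Here is a minimal sketch of that lookup in Python – the schedule entries and times are invented for illustration:

```python
import bisect
from datetime import datetime

# Hypothetical playout schedule: (start time, content item), sorted by start.
SCHEDULE = [
    (datetime(2017, 6, 1, 18, 0), "Yonkers history reel"),
    (datetime(2017, 6, 1, 18, 10), "Waterfront photo montage"),
    (datetime(2017, 6, 1, 18, 20), "Local poetry reading"),
]

def item_playing_at(when):
    """Return the content item that was on screen at time `when`."""
    starts = [start for start, _ in SCHEDULE]
    i = bisect.bisect_right(starts, when) - 1
    if i < 0:
        return None  # vote arrived before the schedule began
    return SCHEDULE[i][1]
```

So a thumbs-up texted at 18:12 would be credited to whatever item started at 18:10.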
People could vote on what music they would like to hear at the moment – like an outdoor jukebox – or on what videos they might want to see.
Contests of this kind are a pretty straightforward use of either smartphones or physical gestures. Cameras can detect when people point to something to make a choice. Phone SMS texting can also register votes, and the nice thing about this use of SMS is that no one has to edit or censor what people write, since they can only select among the (usually numerical) choices they’re given. SMS voting can be supplemented with voting on a website.
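Tallying numeric SMS votes is simple precisely because free text never reaches the screen. A minimal Python sketch – the choices and messages below are invented for illustration:

```python
from collections import Counter

# Hypothetical choices shown on the screen; voters text just the number.
CHOICES = {"1": "Getty Square", "2": "The waterfront", "3": "Untermyer Park"}

def tally(messages):
    """Count incoming SMS bodies, ignoring anything that isn't a listed choice.

    Because only the listed numbers count, no editing or censorship of
    free text is needed before showing real-time results.
    """
    votes = Counter()
    for body in messages:
        key = body.strip()
        if key in CHOICES:
            votes[CHOICES[key]] += 1
    return votes

results = tally(["1", "2", "1", "pizza", "3 ", "1"])
```

Anything that is not one of the offered numbers (like "pizza" above) is simply dropped.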
Control of Text
Control implies that the person in front of a site can change what’s there merely by typing some text on a smartphone – or eventually by speaking into a microphone backed by speech recognition software.
People can explore the history of families who have moved to Yonkers by typing in a family name, which then triggers an app that searches the local family database.
This kind of interaction requires that someone – or a service – provide basic editing of the text people submit (i.e., filtering out words and ideas not appropriate for a site frequented by the general public).
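As a rough illustration of that gatekeeping, here is a minimal blocklist check in Python. The word list is a placeholder; a real deployment would rely on a maintained moderation service plus human review, not a hard-coded list.

```python
import re

# Illustrative placeholders only; a real system would use a maintained
# moderation service and human review rather than a fixed word list.
BLOCKED = {"badword", "worseword"}

def is_displayable(text, max_len=80):
    """Accept only short messages that contain no blocked words."""
    if len(text) > max_len:
        return False
    words = re.findall(r"[a-z']+", text.lower())
    return not any(w in BLOCKED for w in words)
```

Messages that fail the check would be held for review instead of appearing on the wall.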
Physical Interaction

With software that can understand – or at least react to – the movement of human hands, feet and bodies, there are all kinds of ways that people can interact with a blended physical/digital environment.
In a place like Getty Square where the projectors point down to the ground, it’s possible to show dance steps. Or people can modify an animation or visual on a wall by waving their arms in a particular way.
Originally in Australia, but now elsewhere, stairs have been digitized so that they play musical notes when people walk on them. These “piano stairs” are relatively easy to create and don’t really need to be stairs at all – the same effect can be created on a flat surface, and it doesn’t have to generate piano sounds only.
In Eindhoven, the Netherlands, there is an installation called Lightfall, where a person’s movements control the lighting. See https://vimeo.com/192203302
Pedestrians could even become part of the visual on a wall and, using augmented reality, even be transformed – say, into the founder of the city, complete with period clothing. Again, the only limit is the creativity of those designing these opportunities.
Teleportation

The last category I’m calling teleportation, although it’s not quite what we’ve seen in Star Trek. Instead, with cameras, microphones, speakers and screens in one city and a companion setup in another, people in both places could casually chat as if they were sitting on neighboring benches in the same park.
In this way, the blending of the physical and digital provides the residents with a “window” to another city.
I hope this three-part series has given city leaders and others who care about the urban environment a good sense of how to create 21st century blended environments – how they might start with available content and then go beyond that to interaction with the people walking by.
Of course, even three blog posts are limited, so feel free to contact me @NormanJacknis for more information and questions.
© 2017 Norman Jacknis, All Rights Reserved