You might have to wait a second for your GPS to kick in, and/or refresh the page a couple of times until you see the location icon in your phone's toolbar.
Location-based generative music could be really cool. This is a proof of concept.
For example, what if you composed music for each city block within a neighborhood? A listener would be able to traverse the neighborhood how they wished, and to experience the piece they'd have to explore and discover the relationship between the actual place and the music.
This would probably work by defining zones and then writing rules for each zone. I've started to prototype this; more to come soon.
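As a minimal sketch of the zone idea: define each zone as a bounding box with an attached rule, look up which zone contains the listener, and feed the rule's output to the synth. The zone names, coordinates, rule contents, and the `applyRule` function below are all made up for illustration; the only real API is the browser's `navigator.geolocation.watchPosition`.

```javascript
// Hypothetical zones: bounding boxes with a rule that returns synth parameters.
const zones = [
  { name: "park",
    bounds: { minLat: 40.000, maxLat: 40.002, minLng: -74.002, maxLng: -74.000 },
    rule: () => ({ tempo: 70, scale: "pentatonic" }) },
  { name: "market",
    bounds: { minLat: 40.002, maxLat: 40.004, minLng: -74.002, maxLng: -74.000 },
    rule: () => ({ tempo: 120, scale: "mixolydian" }) },
];

// Return the first zone whose bounding box contains the listener, or null.
function findZone(lat, lng) {
  return zones.find(z =>
    lat >= z.bounds.minLat && lat <= z.bounds.maxLat &&
    lng >= z.bounds.minLng && lng <= z.bounds.maxLng
  ) || null;
}

// In the browser, wire this to the Geolocation API. applyRule is a
// placeholder for whatever actually drives the synth.
if (typeof navigator !== "undefined" && navigator.geolocation) {
  navigator.geolocation.watchPosition(pos => {
    const zone = findZone(pos.coords.latitude, pos.coords.longitude);
    if (zone) applyRule(zone.rule());
  });
}
```

Bounding boxes are the simplest possible zone shape; real city blocks would want polygons and a point-in-polygon test, but the rule-lookup structure stays the same.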
You could get more granular than a city block though. How about you hear something different when you pass under those trees over there? Or, you could algorithmically generate music for large areas you haven’t even been to, based on city data for example.
Places have aesthetic qualities that can be really evocative, just like music. We have the technology to put the two together. Also, advances on the horizon in AR, computer vision, and generative graphics are going to bring about some pretty exciting interactive art/games that you play in the real world, and sound needs to be a component of that. We might be waiting on headset tech to mature, but we can get going on the sound component today.
- Use phone accelerometer to know when someone’s likely to be traveling by car vs. walking. What if the beat dropped as you got off the bus?
- Gait detection. Probably possible with the phone's accelerometer. Map the tempo of a song to the rhythm of someone's steps. These kinds of synchronistic relationships between what you're doing with your body and the music you hear are a bit like what it's like to play an instrument. I'm interested to see how it feels.
- Make better music. I’m new to the library I used for synthesis (flocking.js), and the relationship between location and the synth here is pretty one-to-one.
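The gait-to-tempo idea above could be sketched like this: watch the magnitude of the acceleration vector, count upward threshold crossings as step peaks, and turn the average step interval into a BPM. The threshold and the one-beat-per-step mapping are guesses that would need tuning on a real device; in the browser the samples would come from `devicemotion` events.

```javascript
// Magnitude (m/s^2) above which an upward crossing counts as a step peak.
// This value is a guess and would need tuning against real accelerometer data.
const STEP_THRESHOLD = 12;

// Given samples like [{t, mag}] (t in ms, mag = |acceleration|), return the
// timestamps where the magnitude crosses the threshold upward (step peaks).
function detectSteps(samples) {
  const steps = [];
  for (let i = 1; i < samples.length; i++) {
    if (samples[i - 1].mag < STEP_THRESHOLD && samples[i].mag >= STEP_THRESHOLD) {
      steps.push(samples[i].t);
    }
  }
  return steps;
}

// Convert step timestamps to a tempo in BPM, assuming one beat per step.
function stepsToBpm(stepTimes) {
  if (stepTimes.length < 2) return null;
  let total = 0;
  for (let i = 1; i < stepTimes.length; i++) {
    total += stepTimes[i] - stepTimes[i - 1];
  }
  const avgIntervalMs = total / (stepTimes.length - 1);
  return 60000 / avgIntervalMs;
}

// In the browser, samples would be collected from the real sensor, e.g.:
// window.addEventListener("devicemotion", e => {
//   const a = e.accelerationIncludingGravity;
//   samples.push({ t: e.timeStamp, mag: Math.hypot(a.x, a.y, a.z) });
// });
```

A step every 500 ms would come out as 120 BPM, which you could then feed to the synth's scheduler. Real step detection would also want smoothing and a refractory period so one footfall doesn't register as several peaks.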