Here’s a project I’ve had on the back-burner for many years. Following the natural progression of generating stuff based on Markov chains, I decided a while ago to port the algorithm to music.
Music presents many challenges that I haven’t been able to address well so far. As a result, what the algorithm produces has always had a bitter, unfinished aftertaste to me, which is why I haven’t published anything about it for years.
- Music is multidimensional: time is relevant and needs its own analysis and subsequent generation.
- The interconnectedness of the different instruments within a piece is important as well.
- Random generation, even based on Markov chains, fails to produce any large-scale structure. The pieces all sound like one long solo, without a chorus or any other repetition that would give us what we strive for: anticipation (the sketch after this list shows why). In other words, it’s perfect for jazz.
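To make the first and last points concrete, here is a minimal sketch of the general idea, not the code behind this project: a first-order Markov chain whose states are (pitch, duration) pairs, so timing is learned alongside pitch instead of being bolted on separately. The toy melody and the function names are made up for illustration.

```python
import random
from collections import defaultdict

# Hypothetical training data: one monophonic line as (MIDI pitch, duration in beats).
melody = [(60, 1.0), (62, 0.5), (64, 0.5), (65, 1.0), (64, 0.5), (62, 0.5), (60, 2.0)]

def build_transitions(notes):
    """Count which (pitch, duration) states follow each state in the corpus."""
    transitions = defaultdict(list)
    for current, nxt in zip(notes, notes[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length=16):
    """Random-walk the chain. This is exactly the 'long solo' problem:
    nothing ever pulls the walk back toward earlier material."""
    state = start
    out = [state]
    for _ in range(length - 1):
        candidates = transitions.get(state)
        if not candidates:  # dead end: restart from a random known state
            state = random.choice(list(transitions))
        else:
            state = random.choice(candidates)
        out.append(state)
    return out

if __name__ == "__main__":
    chain = build_transitions(melody)
    print(generate(chain, melody[0]))
```

Each run wanders plausibly from note to note, but there is no mechanism for repetition or phrase-level form, which is the gap the "infuse structure" item below is about.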
I’m hoping that publishing this will give me the kick in the nuts necessary to keep improving it. Without further ado, here’s what I have so far.
Future improvements:
- add to corpus
- clean the noise out of the analyzed pieces
- try to infuse structure