One of the classics and must-reads in music technology.

I read it over and over again when I was building: https://glicol.org/

One of the motivations for building Glicol is to help more people quickly understand sound synthesis and music programming in the browser.

also recommend:

Designing Audio Effect Plugins in C++ by Will Pirkle

Audio Effects: Theory, Implementation and Application by Joshua Reiss and Andrew McPherson

And all the books by Julius O. Smith III https://ccrma.stanford.edu/~jos/filters/Book_Series_Overview...

A few additional/alternative reading recommendations:

Proceedings of the International Conference on Digital Audio Effects (DAFx). All open access at https://dafx.de/

Jon Dattorro's Effect Design papers:

https://ccrma.stanford.edu/~dattorro/EffectDesignPart1.pdf

https://ccrma.stanford.edu/~dattorro/EffectDesignPart2.pdf

https://ccrma.stanford.edu/~dattorro/EffectDesignPart3.pdf

Vadim Zavalishin - The Art Of VA Filter Design https://www.native-instruments.com/fileadmin/ni_media/downlo...

Proceedings of the International Computer Music Conference (open access) https://quod.lib.umich.edu/i/icmc/

Andy Farnell, "Designing Sound"

A standard introductory DSP textbook such as Ifeachor and Jervis, Orfanidis, Oppenheim and Schafer.

"The Computer Music Tutorial" and "Microsound" by Curtis Roads

Audio Anecdotes book series

"Music, Cognition and Computerised Sound", edited by Perry Cook

what am I missing?

Boulanger and Lazzarini "The Audio Programming Book" - great if you want to start working straight away in C

Boulanger - "The Csound Book" (another classic language I still use today)

Greenbaum and Barzel - "Audio Anecdotes". A fascinating series of 3 volumes with all sorts of wisdom on FX design, studio systems and composition

Wilson, Cottle and Collins - "The Supercollider Book". In the style of The Csound Book, but with SC.

Loy - "Musimathics". A rare and much underrated two-volume set on the equations behind audio DSP

Bilbao - "Numerical Sound Synthesis". A hard but rewarding journey to understanding audio physics as linear difference systems and implementing them efficiently in C. Goes well with Perry Cook's stuff.

Benson - "Music: A Mathematical Offering". A very unusual book that analyses many subjects in music physics. Equations but no code.

Miranda - "Computer Sound Design". More about music synthesis than "sound design" imho, but has some interesting fringe methods like cellular automata and genetic algorithms.

>Boulanger - "The Csound Book"

Probably the gold standard for such books; wish all the audio DSLs had a book of such quality. Between it and The Computer Music Tutorial, Csound is ahead of the rest when it comes to books.

Edit: I was thinking The Computer Music Tutorial was filled with Csound examples, but on second thought I don't think it actually is. It's been a while since I last browsed it.

No, The Computer Music Tutorial doesn't have any code as far as I remember. I know there is a second edition, but I'm not sure what is new in there.

The Csound Book is so great though. It would be nice if the orc/sco files on the CD that came with it were available. I still have the book, but the CD is long gone and so is owning a CD-ROM drive.

I just came back to Csound recently and I think it has taken me about 25 years to actually like the sco. If one is used to a piano roll/DAW, the sco seems utterly ridiculous.

The Csound manual now actually has good working examples too. If I remember right, that was not the case when The Csound Book came out, which was part of what made the book so great.

>I have the book still but the CD is long gone and so is owning a CD rom drive.

https://github.com/SamKomesarook/The-Csound-Book

Awesome. With this, the book is such a bargain used on Amazon.
> "The Computer Music Tutorial" and "Microsound" by Curtis Roads

It's worth noting that the second edition was just released last year - 27 years after the first edition! It's a massive book: https://mitpress.mit.edu/9780262044912/the-computer-music-tu...

Do you think it is worth upgrading from the first edition? Does it update much of the old information? The new chapters alone are not really enough to sell me on buying, at least not until my first edition falls apart.
> Does it update much of the old information?

There is lots of information and even whole chapters that couldn't possibly have been written in 1996, but I haven't read the first edition so I can't really compare in detail.

FM Theory and Applications by Dr. John Chowning and David Bristow https://www.burnkit2600.com/manuals/fm_theory_and_applicatio...

Mostly focused on FM as applied in the DX7, IIRC. But a really good overview of how FM works, since it's by the guy who invented it.
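For context, Chowning-style FM boils down to one equation: a carrier sinusoid whose phase is modulated by a second sinusoid, y(t) = A sin(2π fc t + I sin(2π fm t)), where I is the modulation index. A minimal Python sketch of that idea (function and parameter names are mine, not from the book):

```python
import math

def fm_sample(t, fc=220.0, fm=110.0, index=2.0, amp=1.0):
    """One sample of two-operator FM a la Chowning:
    carrier frequency fc, modulator frequency fm, modulation index I."""
    return amp * math.sin(2 * math.pi * fc * t
                          + index * math.sin(2 * math.pi * fm * t))

# Render 100 ms at 44.1 kHz
sr = 44100
buf = [fm_sample(n / sr) for n in range(sr // 10)]
```

Harmonic fc/fm ratios give pitched, harmonic spectra; irrational ratios give the bell-like inharmonic sounds the DX7 became famous for, and sweeping `index` over time is what puts motion into the timbre.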

Glicol looking super cool! Reminds me of ChucK https://chuck.cs.princeton.edu
Your "on phone" or desktop synthesizer is amazing. Thanks for that. I shared it with my more musical, synth-type friends. Your program reminds us of how amazing music can be with such a simple backbone. (Just as with a harmonica or whistle, where a few notes can invoke a range of emotions.) So easy to see the results of one's own experimentation.
Charles Dodge and Thomas A. Jerse, 1997. Computer Music: Synthesis, Composition and Performance. Schirmer Books, New York. 2nd Ed. 455pp. ISBN 0-02-864682-7. {Glossary; Index. 100s of figures, formulas.} Highly recommended.
Thanks for pointing to Glicol; this is amazing; it looks like some kind of marriage of ChucK and Faust. Or to which other language would you compare it? I just looked around a bit on the sites, but didn't see a language specification. Can you provide a hint, please?

Is this some kind of master or PhD work, or just a hobby project?

it's one of my PhD works. there was a conference paper, but it seems that the database of the whole conference is gone...

still, you can find some of the philosophy here:

https://github.com/chaosprint/glicol

the glicol-cli is also a relatively special work:

https://github.com/glicol/glicol-cli

Is there a link or copy of the conference paper or the PhD thesis?
here:

https://webaudioconf.com/posts/2021_8/

the video is still there

Thanks. I was able to download the paper from https://webaudioconf.com/_data/papers/pdf/2021/2021_8.pdf (the original link didn't work). Is the PhD thesis also available somewhere?

What are your experiences with Rust? Was it worth it, or would you rather consider another language for similar projects?

Rust is, in my opinion, probably the best language so far for audio/music infrastructure. So for me personally, I would not consider another language for this type of work. But I am definitely not saying that Rust is suitable for every job.
Interesting, thanks.
glicol is super super cool, thanks for sharing!
Finally, I get to reference one of the few kickstarters that were worth it.

"Since its first edition in 1972, Electronic Music: Systems, Techniques and Controls has been acknowledged as the definitive text on modular synthesis"

For those who missed either of the kickstarter runs, there's a reprint due via Schneider's Berlin.

https://schneidersladen.de/en/allen-strange-electronic-music...

There are so many resources around timbral construction and manipulation, but never enough about the rhythmic domain. This book is another example.

I've read Godfried Toussaint's book, and looking for more recommendations in this area.

I've been tinkering with a cybernetic folk drumming project, trying to create rhythms using oscillators, with beats triggered at zero-crossings, so I can build and manipulate patterns in real time. (demo: https://www.youtube.com/watch?v=yVlgPoTpL94) Results have been interesting, but perhaps not "good" in Toussaint's sense. I'm hoping to find a model that works better. Advice and pointers appreciated.
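For readers curious what "beats triggered at zero-crossings" might look like, here is a minimal Python sketch of the general idea (my own toy reconstruction, not the poster's actual system): sample a sum of low-frequency oscillators at a control rate and emit a beat whenever the signal crosses zero going upward.

```python
import math

def zero_crossing_beats(freqs, duration=4.0, rate=1000):
    """Trigger a 'beat' at each upward zero crossing of a sum of
    low-frequency oscillators (freqs in Hz). Returns beat times in seconds."""
    beats, prev = [], None
    for n in range(int(duration * rate)):
        t = n / rate
        s = sum(math.sin(2 * math.pi * f * t) for f in freqs)
        if prev is not None and prev < 0 <= s:  # upward crossing
            beats.append(t)
        prev = s
    return beats
```

With a single oscillator this just gives an even pulse; the interesting patterns come from summing oscillators at irrational frequency ratios, or modulating the frequencies while the loop runs.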

McLean, Alex, Giovanni Fanfani, and Ellen Harlizius-Klück. "Cyclic patterns of movement across weaving, epiplokē and live coding." Dancecult: Journal of Electronic Dance Music Culture 10.1 (2018).

McLean, Alex. "Algorithmic Pattern." NIME. 2020.

One thing that could be great for experiments is Eurorack modules. There are, I feel, about a thousand modules for drum patterns - Euclidean (like https://modulargrid.net/e/vpme-de-euclidean-circles-v2- ), but also plenty of others, starting from Mutable Instruments Grids (https://modulargrid.net/e/mutable-instruments-grids). I think it's a much faster way to experiment hands-on than writing code. Also, with Eurorack the only interface between modules is analog voltage, which means clocks, triggers, pulses, low-frequency and audio-frequency signals can be freely mixed (a common way to abuse this is to generate lower-frequency audio using clock dividers: /2 from 800 Hz audio gives you a 400 Hz square wave, etc.).

Ah, and most of that can also be explored without a physical rack using VCV Rack.
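The clock-divider trick mentioned above is simple to sketch in software: a /n divider emits one pulse for every n input pulses, so an 800 Hz pulse train through /2 comes out at 400 Hz. A toy Python version (names are mine):

```python
def clock_divider(triggers, n):
    """Divide a clock by n: given a list of 0/1 trigger samples,
    output a 1 on every n-th input trigger, 0 otherwise."""
    out, count = [], 0
    for t in triggers:
        if t:
            count += 1
            if count == n:
                out.append(1)
                count = 0
                continue
        out.append(0)
    return out
```

Feeding an audio-rate pulse train through a chain of these is exactly the "abuse" described above: each /2 stage drops the output an octave.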

Creating Rhythms by Hollos and Hollos. I am not much of a fan of it, but my expectations were high: the person who recommended it to me made it out to be exactly what I was looking for, and it was not at all that. It primarily focuses on things of the Euclidean-rhythm ilk - simple to execute but potentially quite effective. Despite it not being what I was after, I think I got my $10 worth.
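For anyone who hasn't met them: a Euclidean rhythm E(k, n) spreads k onsets as evenly as possible over n steps. A compact Bresenham-style sketch in Python (this yields a rotation of the pattern Bjorklund's algorithm produces):

```python
def euclidean(pulses, steps):
    """Spread `pulses` onsets as evenly as possible over `steps` slots
    using an error accumulator (Bresenham-style)."""
    pattern, bucket = [], 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)   # onset
        else:
            pattern.append(0)   # rest
    return pattern

# euclidean(3, 8) → [0, 0, 1, 0, 0, 1, 0, 1], a rotation of the tresillo
```

Toussaint's book catalogues how many traditional rhythms fall out of exactly this construction for small k and n.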
I too bought this book and was somewhat disappointed. Some of the algorithms seem too contrived and specific. (All the code is here: https://abrazol.com/books/rhythm1/software.html)

I guess my wish would be a companion to The Geometry of Musical Rhythm with lots of code implementations to demonstrate the concepts. Euclidean rhythm implementations seem to be everywhere, but there is SO MUCH MORE to cover.

Still, I'm kind of stuck on this idea that there may be a way to map the discrete mathematics of this rhythm stuff into a continuous mathematics for more natural and fluid rhythmic expression. It's just a theory and a few experiments so far.

Sounds like you might be interested in Mark Fell?
Yup.
If anyone's interested in a more hands-on approach towards learning how to build patches from basic oscillators, etc., I can highly recommend Syntorial.

https://www.syntorial.com

I recommend a Behringer 2600. The Arp 2600 was designed to facilitate learning synthesis.

And hardware is a useful abstraction.

I would choose Korg instead. Support originality.
Korg is not ARP. It paid a license fee for a fifty-year-old brand named after a dead man.

If the Korg is right for you, that’s cool with me. My B2600 sits on my desk. I don’t need a rolling case and I already have monitors for when I am not using headphones.

In my opinion, Behringer designed a new more capable instrument and Korg made the 2600 Mini a less capable one by intent (missing everything that was on Arp’s keyboards) to avoid cannibalizing sales of the premium full size version.

And, the Korg has a copy of the Moog ladder filter. [1] Lots of guitars are shaped like Stratocasters and Telecasters. Synthesizers.com sells knockoffs of Moog modules. Your DAW probably has a Rhodes VST or two.

[1] When Alan R. Pearlman built the first 2600s for Tonus (before changing the name of his company to "ARP"), he copied Bob Moog's ladder filter. Moog sued for patent infringement. Pearlman had to change the filter, and that's what most original 2600s have.

What a fantastic thread! Thanks for all the comments + resources. <3 <3
This was a classic, must-read book back in the day .. but these days, I think that Loopop's Incomplete Guide of Electronic Music Ideas, Tips and Tricks is a better investment of one's time. It's just more broadly applicable and far more dense in terms of tooling and methodology.

https://www.docdroid.net/3K4UL8i/loopop-toc-pdf

In general, screeds documenting the theory and technique of electronic music would be better served if their authors didn't orient the works around a particular tool or method - such as Pure Data, in this case. Even though Pd is indeed an extremely powerful tool, it's not really all-encompassing when it comes to making electronic music - it's a digital tool, and electronic music covers the gamut from analog to digital and beyond...

I have this book and used the ideas from it to create a synth using 12 GI AY-3-8910 chips for a total of 36 voices. Might have been 25-30 years ago.

https://en.m.wikipedia.org/wiki/General_Instrument_AY-3-8910

The second best book about Pure Data, so I've heard.

[ Yes Miller, that's fighting talk round here too :) ]

Seriously, this is a very very good place to start learning audio DSP in general because you hit the ground running, making sounds you can compose actual music with right away.

What is the best book?

I don't think I would call this a book about Pure Data; it just uses Pure Data for its examples, and the knowledge it provides is more general.

I don’t know what the best book would be, but I found this extract from Andy Farnell’s book “Designing Sound” to be a very helpful introduction to Pure Data:

http://aspress.co.uk/ds/pdf/pd_intro.pdf

Another useful book is "Loadbang - Programming Electronic Music in Pd" by Johannes Kreidler. The 2nd edition is evidently out of print, but a free download is available here:

https://www.wolke-verlag.de/musikbuecher/johannes-kreidler-l...

Of course Miller's is the best book, I'm just joshing.

He created the language.

And I'd argue it's more than just a "language", it's a creative paradigm.

What's more, TTEM is still available for free download from World Scientific Press, whereas MIT allowed me to make only a subset of mine free.

Another thing of note: Miller has carefully conserved the development of vanilla Pd such that every example in his book (and mine) still works exactly as it did more than 10 years ago. How many languages can boast that stability?

> Finally, a realizable filter, whose frequency response is always a continuous function of frequency, must have a frequency band over which the gain drops from the passband gain to the stopband gain; this is called the transition band. The thinner this band can be made, the more nearly ideal the filter.

Perhaps ideal from some mathematical view, but musically a brick-wall filter could sound like shit where a lower-order filter would be fine, because in that situation you need a more nuanced blend of the range of frequencies than taking everything at 100% amplitude below the wall and 0% above.
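To make the contrast concrete: a one-pole lowpass has a very wide transition band (roughly 6 dB/octave rolloff), which is exactly that "nuanced blend" of frequencies, as opposed to a brick wall's all-or-nothing gain. A quick Python sketch (hypothetical helper, not from the book):

```python
import math

def one_pole_lowpass(x, cutoff, sr=44100):
    """Gentle 6 dB/octave one-pole lowpass: y[n] = (1-a)*x[n] + a*y[n-1],
    with a chosen from the cutoff frequency. Wide transition band,
    in contrast to an 'ideal' brick-wall response."""
    a = math.exp(-2 * math.pi * cutoff / sr)
    y, prev = [], 0.0
    for s in x:
        prev = (1 - a) * s + a * prev
        y.append(prev)
    return y
```

DC passes through at unity gain while content near Nyquist is attenuated but not removed; frequencies above the cutoff are shaded down gradually rather than chopped off.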

Apart from the books quoted below (thanks!!) - are there video resources / tutorials / online courses that knowledgeable folks would recommend?

Particularly on theory + praxis of the state of the art?

Many Thanks!!

Re Theory and Praxis, Thor Magnusson's recent books come to mind:

https://www.bloomsbury.com/uk/sonic-writing-9781501313868/

https://livecodingbook.toplap.org/

Thank you THANK YOU!! This is fantastic, esp. the Live Coding one. I was looking for something like this. Immediately downloaded it. Gonna check the Thor Magnusson book as well.

<3

This is a weird title for the book, because there's very little musical content in the book at all. It's about sound synthesis and signal processing. It's audio engineering, which is a nice skill to have for music making, but it's not music theory.
Found the classically trained musician.

Any and all formal, mathematical, informal theory about music can be called music theory. Music theory is about modeling music. Period. It does not matter if it's about harmony, rhythm, pitch, form, timbre, dynamics, or some other aspect of music. Whether it helps with understanding music, composing music, improvising music etc is a separate topic. Music theory neither has to be about practical music making skills, nor it has to be about music of a particular artistic tradition. It just needs to present some model that can be a helpful tool in some musical context. Maybe what you call "audio engineering" is a specialized skill for some musical traditions, but for musical traditions where the expressive content primarily comes from timbre and synthesizers are common instruments, it will be an essential music making skill.

Nevertheless, there is a huge difference between composing or improvising a musical piece and programming a filter or oscillator.

Btw, "audio engineering" is what audio engineers do (see e.g. https://aes2.org/), and that is yet another completely different profession.

<deleted>
Bad analogy since timbre has been part of the compositional process for decades now.
It's actually a good analogy. It doesn't mention "timbre" and doesn't claim that timbre is not part of the compositional process. A classical composer can indeed specify timbre to a certain degree, and modern composers created new kinds of scores which offered more means of specifying many features related to "timbre", but there is still a difference between composing music, playing music, or building instruments in formal musical education.
No you're simply wrong. This is how a "classical Western" musician thinks of music, but this is not necessarily what music is. Timbre is the main expressive content in many cultures.

Check my comment here: https://news.ycombinator.com/threads?id=gnulinux#42368137

Maybe there is a difference in how formally trained musicians and computer scientists see it. But actually I don't see a contradiction between your statement and what I've written.

And don't forget that Miller Puckette also comes from the Western musical tradition and developed important works at IRCAM.

The difference is in the model. The same way you can model mechanics with Newtonian mechanics, or statistical mechanics, or quantum mechanics and each of them can be useful in different scenarios, and irrelevant in others.

If you're making Western classical music in classicist, romanticist or modernist style, the model of music you have will carry a lot of information about harmony and the application of harmonic techniques throughout the piece. Given a core musical idea you can then apply peripheral techniques (such as orchestration) to build a full piece. E.g. when people study counterpoint, the model of music originates from the vertical harmony of notes and when they can be used with respect to each other. The assumption is that orchestration is something that'll be separately developed, "skinning" the composition. E.g. a common technique in this tradition is composing a piece for piano four hands and then orchestrating it (e.g. Holst's "The Planets" suite was composed this way).

However, this stops being a useful model once you step into other musical traditions. In some cultures harmony would be like how Western music treats orchestration, peripheral to composition (like how extreme speed is irrelevant to Newtonian mechanics because it was never designed for near-lightspeed motion). So you'd first design timbres, and have an idea about how timbres interact, change, and transform into each other. You may have a theory of counterpoint of timbres. Once you have this, you can apply any standard "harmony skin" on the composition and you have a piece. This is not even restricted to non-Western music. If you look at postmodernism in Western music you'll find instances of it. Easy example: a lot of people say that Philip Glass "makes the same music" again and again; what is being missed is that the point he's trying to convey is that even if you pick the exact same 4 chords you can still create variation in music via other means. It just won't be expressed through the traditional harmony-centric Western musical model.

By the way, I studied CS and my full-time job is a Software Engineer. So I doubt our disagreement comes from my background in computer science.

> So you'd first design timbres, and have an idea about how timbres interact, change, and transform into each other. You may have a theory of counterpoint of timbres. Once you have this, you can apply any standard "harmony skin" on the composition and you have a piece

I asked the same question above, because I'm not sure if you're alluding to the same thing here or something different. May I have some examples of traditions which do this, with something to go listen to?

You seem to be hell-bent on a disagreement. Let's invest our time better, for example, making music. Do you have any musical works that can be listened to?
How are you so knowledgeable about music theory and classical music given that you didn't study music? Just curious
Which musical traditions use harmony as a “skin” in the sense of designing a timbral skeleton before anything else?
Isn't spectralism a thing in the modern classical world since the 20th century?
This is exactly what I'm talking about. In Western music timbre is akin to fonts. You have a composition for piano, you play it, record it in MIDI, and reskin it with some other timbre in the studio. This is an extremely Western way of looking at music. There are countless cultures where timbre is the "main" part of the music, where harmony and/or rhythm would be like fonts/reskins and timbre is the main juice composers and improvisers try to squeeze out. This type of distorted view of music is rooted in 18th/19th-century beliefs that non-Western art is "primitive" art, even though every single culture known to humanity has a unique musical tradition. This is an extremely anti-humanistic look at music.
I think there is just a certain kind of ambiguity with the word "theory". Miller is really focused on the theory of sound synthesis and does not really deal with composition or aesthetic theory. People who are more interested in the latter might enjoy "Composing Electronic Music: A New Aesthetic" by Curtis Roads (https://global.oup.com/us/companion.websites/9780195373240/b...).
I think in order to have a book about music theory, there should be some explanation as to how to make music for some particular expressive purpose and not just the technical details of how to make sounds. A guide to constructing a piano is not music theory. I don't care if it's classical music or techno or gamelan, or if the theory is formal or traditional, if there's not some discussion of how and why to express musical ideas, it's a technical manual and has very little to do with music.
The traditional concept of notes (and accessory ones like scales and traditional notation) marks the boundary between "traditional" music theory, that treats notes as the final result (when you have notes written down, it's only a matter of concretely playing them with a given instrument) and theory of electronic synthesis, which treats notes as an input, both optional and taken for granted, and audio signals as the product.
Traditional music didn't even have the concept of notes in the way modern music does. Of course, modern music developed over hundreds of years, perhaps thousands, and along the way people started taking common themes in traditional music and calling them notes, but real traditional musicians don't think of notes in the same way as modern music does.
Also, notation systems were further developed. Many famous composers came up with their own scores for their electronic or electro-acoustic compositions (see e.g. Xenakis or Stockhausen).
Huh? There's a huge amount of theory and writing about electronic music that isn't just technical. See Mark Fell's PhD thesis, for example.

Your comment seems to suggest the other person is ignorant, but really it just shows your ignorance of theory and writing about experimental and electronic music. Not all music theory is Western classical.

I mean, how do you even consider Stockhausen and Xenakis from your perspective?

You're misunderstanding what I'm saying. Music theory from other cultures is of course music theory as well. Each composer can (and almost always will) have their own idiosyncratic music theory as well. There is nothing contradictory here.

Also, I'm a composer with extensive knowledge of how to make, orchestrate, and mix acoustic or electronic music. This thread has an extreme Western bias; just because something is studied in a particular way from a Western music theory perspective, it doesn't mean it has to be that way.

Check my comment here: https://news.ycombinator.com/threads?id=gnulinux#42368137

That comment just seems like an overly disparaging and ignorant take on this so-called "Western music", which you claim is somehow anti-human. Why would I care what you think about electronic music when you think Western music is anti-human?
I didn't say Western music is anti-human. Treating it as the only possible form of music is.
> musical traditions where the expressive content primarily comes from timbre and synthesizers are common instruments

What traditions are you alluding to?

Experimenting with timbre and the nature of sound itself is absolutely musical. That's a big reason why people love to listen to many different kinds of electronic music in the first place (or things like heavily distorted guitars).

Music is not just about combining 12-TET pitches in different ways. Everything about the experience of music is fair game for creative expression.

During my university studies, I took courses in electro-acoustic music composition. Significant amounts of time dealt with synthesis and signal processing because those were critical elements in these kinds of compositions.

It's absolutely different from composition for traditional instruments in this regard, because the sounds you compose with are created by the composer just as much as the notes, rhythms, and structure of the composition are.

So for me, the title makes perfect sense.

The first sentence of the foreword gets to the point of what the book is about:

"The Theory and Technique of Electronic Music is a uniquely complete source of information for the computer synthesis of rich and interesting musical timbres."

Whereas tools like Max Mathews' MUSIC programs (btw, Mathews is the author of the foreword) and their successors clearly separate music composition from instrument building (i.e. sound synthesis), later tools like Max, Pd or SuperCollider blur this difference. Nevertheless, the difference is still maintained by all institutions where electronic music is studied and performed (e.g. IRCAM).

> "The Theory and Technique of Electronic Music is a uniquely complete source of information for the computer synthesis of rich and interesting musical timbres."

It's really a great book, but it is far from "complete", as it omits some very important synthesis techniques - most notably granular synthesis and physical modeling! To be fair, no single book could cover the entire spectrum of electronic sound synthesis. The second edition of "The Computer Music Tutorial" by Curtis Roads (https://mitpress.mit.edu/9780262044912/the-computer-music-tu...) comes close, but it is a massive book with over 1200 pages and literally took decades to write. (The second edition was published 27 years after the first!)

What I find really cool about Miller's book is that all examples are written in Pd so anyone can try them out and experiment further.
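Granular synthesis, one of the techniques the book omits, is easy to sketch: chop a source sound into short windowed "grains" and scatter many of them across an output buffer. A toy Python version (all names and parameters are illustrative, not from any of the books above):

```python
import math
import random

def granulate(source, n_grains=200, grain_len=1024, out_len=44100, seed=0):
    """Toy granular synthesis: scatter short Hann-windowed grains
    drawn from random positions in `source` across an output buffer."""
    rng = random.Random(seed)
    out = [0.0] * out_len
    # Hann window to avoid clicks at grain boundaries
    win = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
           for i in range(grain_len)]
    for _ in range(n_grains):
        src = rng.randrange(0, len(source) - grain_len)
        dst = rng.randrange(0, out_len - grain_len)
        for i in range(grain_len):
            out[dst + i] += source[src + i] * win[i]
    return out
```

Even this crude version shows the characteristic "cloud" texture; real implementations add per-grain pitch shifting, envelopes, and density control (see Roads' "Microsound" for the full treatment).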

On the matter of institutions: IRCAM is the paradigmatic example of composer/technologist role demarcation, but I would question whether this extreme position "is still maintained by all institutions" -- it certainly was not at my alma mater, and I doubt it is at UCSD either. As you say, Max (coincidentally a product of Miller Puckette and IRCAM) and its more recent ilk have empowered composers to independently build their own instruments, and this practice has been ongoing within the academy for at least 35 years now.
As someone who studied computer music in the mid 2010s I can second that! All the composers in my generation who use live electronics do it themselves.

The divide between composer and programmer has disappeared for the most part, and I think the main reason is that both hardware and software have become so affordable and accessible. Back in the old days, you needed expensive computers, synthesizers and tape machines, and people who could assist you with operating them. Today, anyone can buy a laptop and learn Pd / Max / SuperCollider!

That being said, institutions like IRCAM still have their place, as they allow composers to work with technology that is not easily accessible, e.g. large multi-channel systems or 360° projections. They do a lot of research, too.

> Today, anyone can buy a laptop and learn Pd / Max / SuperCollider!

And anyone can buy a laptop and contribute to the development of Pd, SuperCollider, Chuck, et al.

Not sure how much overlap there is between those two groups. Arguing against my earlier point: there still seems to be a separation between music systems users and music systems developers.

> there still seems to be a separation between music systems users and music systems developers.

That's true, but just like a pianist typically doesn't need to build their own piano, computer musicians rarely need to build their own DAWs or audio programming languages. However, computer musicians do build their own systems on top of Pd, SC, etc. and these can evolve into libraries or whole applications. So the line between computer musicians and audio application developers is blurry.

That being said, I can tell for sure that only a few computer musicians end up contributing code to Pd, SC, etc., simply because most of them have no experience with other programming languages and are not really interested in actual software development. Of course, there are other important ways to contribute that are often overlooked, like being active on forums, filing bug reports, etc.

Maybe I'm a bit biased because I was there for a study visit in the eighties. Of course it depends on the use case; if the composition is fully electronic, the composer can essentially be the same person as the performer, conductor and producer, so there is no big need for a score; live coding goes even further and "the composition" appears during the performance; specific tools have been implemented for these use cases (e.g. Stanford has a long tradition of such tools).
this assumes the pre-eminence of even-tempered music based on european art music traditions and the associated staff notation. this is extremely limiting when considering the breadth of music that exists in the real world.

this is a theory of music, and while most pedagogy will reinforce the special position of this system, it is not THE theory of music. there are alternative systems of notation. there are harmonic systems that incorporate tones that do not exist in even tempered western scales. there are drumming traditions that are taught and passed down by idiomatic onomatopoeia.

this is especially apparent in electronic music where things like step sequencers obviate the need to know any western music notation to get an instrument to produce sound.

the western classical tradition is a pedagogically imposed straitjacket. it's important to keep a more open mind about what music actually is.

The book is from basically the “experimental music” school of electronic music. The idea was/is that music will be completely transmuted by electronics and computers, leaving traditional music behind. Here “traditional music” means almost everything people actually listen to, from orchestras to GarageBand electronica to pop.

The claim may be a bit aspirational right now, but in theory "electronic music" subsumes all music. Or it enlarges music so much that traditional musical ideas become special cases, not necessarily relevant.

I’m trying to pitch this properly as a very cool concept. But I no longer believe it will happen in my lifetime.
