I read it over and over again when I was building: https://glicol.org/
One of the motivations for building Glicol was to let more people quickly understand sound synthesis and music programming in the browser.
also recommend:
Designing Audio Effect Plugins in C++ by Will Pirkle
Audio Effects: Theory, Implementation and Application by Joshua Reiss and Andrew McPherson
And all the books by JULIUS O. SMITH III https://ccrma.stanford.edu/~jos/filters/Book_Series_Overview...
Proceedings of the International Conference on Digital Audio Effects (DAFx). All open access at https://dafx.de/
Jon Dattorro's Effect Design papers:
https://ccrma.stanford.edu/~dattorro/EffectDesignPart1.pdf
https://ccrma.stanford.edu/~dattorro/EffectDesignPart2.pdf
https://ccrma.stanford.edu/~dattorro/EffectDesignPart3.pdf
Vadim Zavalishin - The Art Of VA Filter Design https://www.native-instruments.com/fileadmin/ni_media/downlo...
Proceedings of the International Computer Music Conference (open access) https://quod.lib.umich.edu/i/icmc/
Andy Farnell, "Designing Sound"
A standard introductory DSP textbook, such as Ifeachor and Jervis, Orfanidis, or Oppenheim and Schafer.
"The Computer Music Tutorial" and "Microsound" by Curtis Roads
Audio Anecdotes book series
"Music, Cognition and Computerised Sound", edited by Perry Cook
what am I missing?
Boulanger - "The Csound Book" (another classic language I still use today)
Greenbaum and Barzel - "Audio Anecdotes". A fascinating series of 3 volumes with all sorts of wisdom on FX design, studio systems and composition
Wilson, Cottle and Collins - "The SuperCollider Book". In the style of The Csound Book, but with SC.
Loy - "Musimathics". A rare and much under-rated two-volume set on the equations behind audio DSP
Bilbao - "Numerical Sound Synthesis". A hard but rewarding journey to understanding audio physics as systems of differential equations and implementing them efficiently in C. Goes well with Perry Cook's stuff.
Benson - "Music: A Mathematical Offering". A very unusual book that analyses many subjects in music physics. Equations but no code.
Miranda - "Computer Sound Design". More about music synthesis than "sound design" imho, but has some interesting fringe methods like cellular automata and genetic algorithms.
Probably the gold standard for such books; I wish all the audio DSLs had a book of such quality. Between it and The Computer Music Tutorial, Csound is ahead of the rest when it comes to books.
Edit: I was thinking The Computer Music Tutorial was filled with Csound examples, but on second thought I don't think it actually is. It's been a while since I last browsed it.
The Csound book is so great though. It would be nice if the orc/sco files from the CD that came with it were available. I still have the book, but the CD is long gone, and so is my CD-ROM drive.
I just came back to Csound recently, and I think it has taken me about 25 years to actually like the sco. If one is used to a piano roll/DAW, the sco seems utterly ridiculous.
The Csound manual now actually has good working examples too. If I remember correctly, that was not the case when the Csound book came out, which was part of what made the book so great.
It's worth noting that the second edition was released just last year, 27 years after the first edition! It's a massive book: https://mitpress.mit.edu/9780262044912/the-computer-music-tu...
There is lots of information and even whole chapters that couldn't have been possibly written in 1996, but I haven't read the first edition so I can't really compare in detail.
Mostly focused on FM as applied in the DX7, IIRC. But it's a really good overview of how FM works, since it's by the guy who invented it.
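For anyone who hasn't seen it: the core of two-operator FM is just a sine carrier whose phase is modulated by another sine. A minimal sketch (the frequencies and modulation index here are arbitrary illustration values, not from the book):

```python
import math

def fm_sample(t, fc=220.0, fm=110.0, index=2.0):
    """One sample of simple two-operator FM: a carrier at fc whose
    phase is modulated by a sine at fm. `index` controls how far
    the sidebands spread around the carrier."""
    return math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

# Render one second at 44.1 kHz.
sr = 44100
samples = [fm_sample(n / sr) for n in range(sr)]
```

With fc/fm in a simple integer ratio like this, the sidebands land on harmonics, which is why FM can sound pitched rather than clangorous.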
Is this some kind of master or PhD work, or just a hobby project?
still you can find some philosophy here:
https://github.com/chaosprint/glicol
the glicol-cli is also a relatively special work:
https://webaudioconf.com/posts/2021_8/
the video is still there
What are your experiences with Rust? Was it worth it, or would you rather consider another language for similar projects?
"Since its first edition in 1972, Electronic Music: Systems, Techniques and Controls has been acknowledged as the definitive text on modular synthesis"
For those who missed either of the Kickstarter runs, there's a reprint due via Schneidersladen in Berlin.
https://schneidersladen.de/en/allen-strange-electronic-music...
I've read Godfried Toussaint's book, and looking for more recommendations in this area.
I've been tinkering with a cybernetic folk drumming project, trying to create rhythms using oscillators, with beats triggered at zero-crossings, so I can build and manipulate patterns in real time. (demo: https://www.youtube.com/watch?v=yVlgPoTpL94) Results have been interesting, but perhaps not "good" in Toussaint's sense. I'm hoping to find a model that works better. Advice and pointers appreciated.
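For context, here's a minimal sketch of the zero-crossing idea (not the parent's actual code; the oscillator frequencies and control rate are made-up illustration values). Mix a few slow oscillators and emit a trigger at every upward zero-crossing of the sum:

```python
import math

def zero_cross_triggers(freqs, sr=100, seconds=2.0):
    """Sum a few sine oscillators (freqs in Hz) sampled at a slow
    control rate, and return the times (in seconds) of every upward
    zero-crossing of the mixed signal, as beat triggers."""
    n = int(sr * seconds)
    sig = [sum(math.sin(2 * math.pi * f * t / sr) for f in freqs) for t in range(n)]
    return [t / sr for t in range(1, n) if sig[t - 1] < 0 <= sig[t]]

triggers = zero_cross_triggers([2.0, 3.0])  # frequencies in a 3:2 ratio
```

Changing an oscillator's frequency or phase on the fly then reshapes the whole pattern continuously, which is presumably the appeal over a fixed grid.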
McLean, Alex. "Algorithmic Pattern." NIME. 2020.
Ah, and most of that can also be explored without a physical rack using VCV Rack.
I guess my wish would be a companion to The Geometry of Musical Rhythm with lots of code implementations to demonstrate concepts. Euclidean rhythm implementations seem to be everywhere, but there is SO MUCH MORE to cover.
Still, I'm kind of stuck on this idea that there may be a way to map the discrete mathematics of this rhythm stuff into a continuous mathematics for more natural and fluid rhythmic expression. It's just a theory and a few experiments so far.
And hardware is a useful abstraction.
If the Korg is right for you, that’s cool with me. My B2600 sits on my desk. I don’t need a rolling case and I already have monitors for when I am not using headphones.
In my opinion, Behringer designed a new, more capable instrument, while Korg intentionally made the 2600 Mini less capable (missing everything that was on ARP's keyboards) to avoid cannibalizing sales of the premium full-size version.
And, the Korg has a copy of the Moog ladder filter. [1] Lots of guitars are shaped like Stratocasters and Telecasters. Synthesizers.com sells knockoffs of Moog modules. Your DAW probably has a Rhodes VST or two.
[1] When Alan R. Pearlman built the first 2600s for Tonus (before renaming his company to "ARP"), he copied Bob Moog's ladder filter. Moog sued for patent infringement. Pearlman had to change the filter, and that's what most original 2600s have.
https://www.docdroid.net/3K4UL8i/loopop-toc-pdf
In general, screeds documenting the theory and technique of electronic music would be better served if their authors didn't orient the works around a particular tool or method, such as Pure Data in this case. Even though Pd is indeed an extremely powerful tool, it's not really all-encompassing when it comes to making electronic music: it's a digital tool, and electronic music covers the gamut from analog to digital and beyond...
https://en.m.wikipedia.org/wiki/General_Instrument_AY-3-8910
[ Yes Miller, that's fighting talk round here too :) ]
Seriously, this is a very very good place to start learning audio DSP in general because you hit the ground running, making sounds you can compose actual music with right away.
I don't think I would call this a book about Pure Data; it just uses Pure Data for examples, and the knowledge it provides is more general.
http://aspress.co.uk/ds/pdf/pd_intro.pdf
Another useful book is "Loadbang - Programming Electronic Music in Pd" by Johannes Kreidler. The 2nd edition is evidently out of print, but a free download is available here:
https://www.wolke-verlag.de/musikbuecher/johannes-kreidler-l...
He created the language.
And I'd argue it's more than just a "language", it's a creative paradigm.
What's more, TTEM is still available for free download from World Scientific Press, whereas MIT allowed me to make only a subset of mine free.
Another thing of note: Miller has carefully conserved the development of vanilla Pd such that every example in his book (and mine) still works exactly as it did more than 10 years ago. How many languages can boast that stability?
Perhaps ideal from some mathematical view, but musically a brick-wall filter can sound like shit where a lower-order filter would be fine, because in that situation you need a more nuanced blend of the range of frequencies than taking everything below the wall at 100% amplitude and everything above it at 0%.
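To make that concrete: a first-order (6 dB/octave) lowpass rolls off gradually above the cutoff instead of the all-or-nothing split of an ideal brick wall. A minimal sketch (the cutoff and sample rate here are arbitrary example values):

```python
import math

def one_pole_lowpass(x, cutoff, sr=44100):
    """First-order lowpass: y[n] = (1-a)*x[n] + a*y[n-1], with the
    feedback coefficient derived from the cutoff frequency.
    Frequencies above the cutoff are attenuated gently, not removed."""
    a = math.exp(-2 * math.pi * cutoff / sr)
    y, out = 0.0, []
    for s in x:
        y = (1 - a) * s + a * y
        out.append(y)
    return out

smoothed = one_pole_lowpass([1.0] * 2000, 1000.0)  # step input settles toward 1.0
```

The gentle slope is exactly the "nuanced blend" point: content just above the cutoff is still audible, only quieter, so the transition doesn't ring the way a steep cutoff can.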
Particularly on theory + praxis of the state of the art?
Many Thanks!!
<3
Any and all formal, mathematical, or informal theory about music can be called music theory. Music theory is about modeling music. Period. It does not matter if it's about harmony, rhythm, pitch, form, timbre, dynamics, or some other aspect of music. Whether it helps with understanding music, composing music, improvising music, etc. is a separate topic. Music theory neither has to be about practical music-making skills, nor does it have to be about music of a particular artistic tradition. It just needs to present some model that can be a helpful tool in some musical context. Maybe what you call "audio engineering" is a specialized skill for some musical traditions, but for musical traditions where the expressive content primarily comes from timbre and synthesizers are common instruments, it will be an essential music-making skill.
Btw. "audio engineering" is what audio engineers are doing (see e.g. https://aes2.org/), and yet another completely different profession.
Check my comment here: https://news.ycombinator.com/threads?id=gnulinux#42368137
And don't forget that also Miller Puckette comes from the Western musical tradition and developed important works at IRCAM.
If you're making Western classical music in a classicist, romanticist, or modernist style, the model of music you have will carry a lot of information about harmony and the application of harmonic techniques throughout the piece. Given a core musical idea you can then apply peripheral techniques (such as orchestration) to build a full piece. E.g. when people study counterpoint, the model of music originates from the vertical harmony of notes and how they can be used with respect to each other. The assumption is that orchestration is something separately developed, "skinning" the composition. E.g. a common technique in this tradition is composing a piece for piano four hands and then orchestrating it (Holst's orchestral suite "The Planets" was composed this way).
However, this stops being a useful model once you step into other musical traditions. In some cultures harmony is treated the way Western music treats orchestration, peripheral to composition (like how extreme speed is irrelevant to Newtonian mechanics because it was never designed for near-lightspeed motion). So you'd first design timbres, and have an idea about how timbres interact, change, and transform into one another. You may have a theory of counterpoint of timbres. Once you have this, you can apply any standard "harmony skin" to the composition and you have a piece. This is not even restricted to non-Western music. If you look at postmodernism in Western music you'll find instances of it. Easy example: a lot of people say that Philip Glass "makes the same music" again and again; what is being missed is the point he's trying to convey, that even if you pick the exact same 4 chords you can still create variation in music via other means. It's just not the traditional harmony-centric Western musical model.
By the way, I studied CS and my full-time job is a Software Engineer. So I doubt our disagreement comes from my background in computer science.
I asked the same question above, because I'm not sure if you're alluding to the same thing here or something different. May I have some examples of traditions which do this, with something to go listen to?
Your comment seems to suggest the other person is ignorant but really it just shows your ignorance of theory and writing about experimental and electronic music. Not all music theory is western classical.
I mean, how do you even consider Stockhausen and Xenakis from your perspective?
Also, I'm a composer with extensive knowledge of how to make, orchestrate, and mix acoustic and electronic music. This thread has an extreme Western bias; just because something is studied in a particular way from a Western music theory perspective, it doesn't mean it has to be that way.
Check my comment here: https://news.ycombinator.com/threads?id=gnulinux#42368137
What traditions are you alluding to?
Music is not just about combining 12-TET pitches in different ways. Everything about the experience of music is fair game for creative expression.
It's absolutely different from composition for traditional instruments in this regard, because the sounds you compose with are created by the composer just as much as the notes, rhythms, and structure of the composition are.
So for me, the title makes perfect sense.
"The Theory and Technique of Electronic Music is a uniquely complete source of information for the computer synthesis of rich and interesting musical timbres."
Whereas tools like Max Mathews' MUSIC programs (btw. he's the author of the foreword) and their successors clearly separate music composition from instrument building (i.e. sound synthesis), later tools like Max, Pd, or SuperCollider blur this distinction. Nevertheless, the distinction is still maintained by all institutions where electronic music is studied and performed (e.g. IRCAM).
It's really a great book, but it is far from "complete" as it omits some very important synthesis techniques - most notably granular synthesis and physical modeling! To be fair, no single book would be able to cover the entire spectrum of electronic sound synthesis. The second edition of "The Computer Music Tutorial" by Curtis Roads (https://mitpress.mit.edu/9780262044912/the-computer-music-tu...) comes close, but it is a massive book with over 1200 pages and took literally decades to write. (The second edition has been published 27 years after the first edition!)
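For readers who haven't met granular synthesis: its basic building block is a short windowed slice of a source sound (a "grain"), many of which get overlapped at varying positions and rates. A minimal sketch of a single grain (the window choice, source, and sizes here are arbitrary illustration values):

```python
import math

def grain(src, start, length):
    """Extract one grain: a slice of `src` shaped by a Hann window,
    which fades each end to zero so overlapped grains don't click."""
    return [src[start + n] * 0.5 * (1 - math.cos(2 * math.pi * n / (length - 1)))
            for n in range(length)]

# Source: one second of a 440 Hz sine at 44.1 kHz.
sr = 44100
src = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
g = grain(src, 1000, 256)
```

A full granular engine is then mostly a scheduler: pick a grain position, size, and pitch per grain, overlap-add hundreds of them per second, and the texture emerges from the statistics of those choices.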
What I find really cool about Miller's book is that all examples are written in Pd so anyone can try them out and experiment further.
The divide between composer and programmer has disappeared for the most part, and I think the main reason is that both hardware and software have become so affordable and accessible. Back in the old days, you needed expensive computers, synthesizers, and tape machines, plus people who could assist you with operating them. Today, anyone can buy a laptop and learn Pd / Max / SuperCollider!
That being said, institutions like IRCAM still have their place, as they allow composers to work with technology that is not easily accessible, e.g. large multichannel systems or 360° projections. They do a lot of research, too.
And anyone can buy a laptop and contribute to the development of Pd, SuperCollider, Chuck, et al.
Not sure how much overlap there is between those two groups. Arguing against my earlier point: there still seems to be a separation between music systems users and music systems developers.
That's true, but just like a pianist typically doesn't need to build their own piano, computer musicians rarely need to build their own DAWs or audio programming languages. However, computer musicians do build their own systems on top of Pd, SC, etc. and these can evolve into libraries or whole applications. So the line between computer musicians and audio application developers is blurry.
That being said, I can tell for sure that only few computer musicians end up contributing code to Pd, SC, etc., simply because most of them have no experience in other programming languages and are not really interested in actual software development. Of course, there are other important ways to contribute that are often overlooked, like being active on forums, filing bug reports, etc.
this is a theory of music, and while most pedagogy will reinforce the special position of this system, it is not THE theory of music. there are alternative systems of notation. there are harmonic systems that incorporate tones that do not exist in equal-tempered western scales. there are drumming traditions that are taught and passed down by idiomatic onomatopoeia.
this is especially apparent in electronic music where things like step sequencers obviate the need to know any western music notation to get an instrument to produce sound.
the western classical tradition is a pedagogically imposed straitjacket. it's important to keep a more open mind about what music actually is.
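To the step-sequencer point above: a pattern in that world is just on/off steps plus a tempo, with no staff notation involved. A toy sketch (the step count, pattern, and tempo here are arbitrary):

```python
# An 8-step sequencer reduced to its essence: a list of on/off steps
# and a clock that converts step indices into trigger times.
pattern = [1, 0, 0, 1, 0, 0, 1, 0]
bpm = 120
step_dur = 60.0 / bpm / 2  # eighth-note steps at 120 bpm = 0.25 s each

events = [(i * step_dur, "trig") for i, on in enumerate(pattern) if on]
print(events)  # trigger times in seconds for one bar
```

Everything a traditional score encodes about rhythm is here as plain data you can mutate live, which is the whole point.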
The claim may be a bit aspirational right now, but in theory "electronic music" subsumes all music, or enlarges music so much that traditional musical ideas become special cases, not necessarily relevant ones.
I’m trying to pitch this properly as a very cool concept. But I no longer believe it will happen in my lifetime.