5 tips to get the most from your drum sequencer plugins


Battalion lets you control the depth of randomization via the Depth parameter in the settings menu. This parameter controls how much randomization is applied to knobs, sliders, etc. when you click the plugin’s Dice buttons. Larger Depth values lead to more extreme changes.

To prevent parameter values from drifting over time, as you continuously click a Dice button, you can enable Drift Prevention in Battalion’s settings. With this feature enabled, parameters will randomize within a certain range, no matter how many times you click a Dice button. For example, a parameter set to 0% using a small Depth value of 5 will never reach 100% when Drift Prevention is toggled on.
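Battalion’s actual randomization algorithm isn’t documented here, but the interplay between Depth and Drift Prevention can be sketched conceptually in Python. The function name and the normalized 0.0–1.0 parameter range are assumptions for illustration, mirroring the article’s example of a parameter at 0% with a small Depth:

```python
import random

def randomize(value, depth, drift_prevention=True, base=None):
    """Nudge a normalized parameter (0.0-1.0) by a random amount.

    `depth` scales the maximum change per roll. With drift prevention
    on, the result is also clamped to a window around `base`, so
    repeated rolls can never wander far from the starting value.
    """
    base = value if base is None else base
    offset = random.uniform(-depth, depth)
    new_value = value + offset
    if drift_prevention:
        low, high = base - depth, base + depth
        new_value = max(low, min(high, new_value))
    # Keep the parameter in its legal range either way.
    return max(0.0, min(1.0, new_value))
```

With `base=0.0` and `depth=0.05`, no number of repeated "dice rolls" can push the parameter past 5%, which is the behavior the article describes.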



Source link


What is beatmatching? Understanding the science of syncing beats


What is beatmatching

Beatmatching is a necessary skill for the majority of DJ mixes. The goal of beatmatching is to synchronize the BPM of two tracks so that they play in time with each other, and it requires the ability to hear which of the two tracks is faster.

In this guide, you’ll learn what beatmatching is, how to prepare your tracks and equipment to beatmatch, how to beatmatch in Traktor, and how to use Sync in Traktor. When you’re able to successfully beatmatch, you’re on your way to becoming a great DJ.

Jump to these sections:

Follow along with a free trial of Traktor Pro 3, professional DJ software that can help you

Demo Traktor for free

What is beatmatching?

Beatmatching is a fundamental skill in DJing where a DJ aligns the tempos of two tracks so they play in sync with each other. This involves adjusting the playback speed of one track, typically via pitch control or tempo sliders on DJ equipment, to match the BPM of the other track.

The goal is to seamlessly transition from one track to another without disrupting the flow of the music.

To learn how to beatmatch, you’ll need to develop good listening skills to hear which track plays at a faster speed, and then make adjustments to the tracks until they match. The tempo of your tracks is measured in BPM, which stands for beats per minute.

Beatmatching is a challenging skill to learn, and the best way to master it is to practice. Using Traktor, you can practice beatmatching and make use of Traktor’s Sync function to match the beats together.

Setting up to beatmatch

Let’s take a look at the steps you need to take to beatmatch.

1. Select percussion-heavy tracks

Select two tracks that are percussion-heavy and have intros and outros that are mainly drums. It’s easier to match drum rhythms together than it is to mix dense musical arrangements. Ideally, the tracks should be in the same key or compatible keys.

To learn more about harmonic mixing and how to use the key of each track to create better mixes, check out our guide.

Setting up two tracks to beat match


2. Use a DJ monitor

Use a DJ monitor. You will need to hear track one (Deck A) playing out of the monitor, while you listen to track two (Deck B) in your headphones.

Traktor X1 MK3 DJ Controller and Monitors


3. Find the beat and set a cue point

Find the first beat (also called the downbeat) of each track. In most cases, this is automatically marked in Traktor. If the first beat is not marked, use Cue 1 to add a cue point. Using a cue point allows you to easily return to the first beat of the track.

Learn more about how to become a DJ and the DJ equipment you’ll need to get started.

How to use Traktor to beatmatch

Let’s take a look at Traktor Pro 3 and Traktor hardware and how they work with beatmatching.

Ensure that Traktor is connected to a DJ monitor. If you’re using Traktor as stand-alone software on your computer, select your audio device in the Preferences/Audio Setup. If you’re using Traktor with a DJ controller, connect the outputs of the DJ controller to your monitors.

Traktor, Native Instruments Kontrol S4 and DJ monitors


Load up a track on Deck A, and another track in Deck B.

Find the first beat of each track. Traktor will often mark this with an automatic cue point. If it’s not marked, create a cue point on the first beat on both tracks.

Press play on Deck A and bring up the channel fader all the way. Traktor will display the BPM in the upper right hand corner.

Sara Simms – 855 (Simmetry Sounds)

On Deck B, turn down the channel fader and select the headphone cue button. This will allow you to hear the track in your headphones when it’s playing, but it won’t play over your speakers. The BPM of the track is displayed in the upper right hand corner.

Two tracks in Traktor


If you’re using a DJ controller, try moving the first beat of Deck B back and forth. Press play on the track, or release the track at the beginning of an eight-bar sequence. Listen to the BPM of Deck B, and decide if the track needs to be sped up or slowed down to match the BPM of Deck A.

Deck A will be playing in your DJ monitors, while Deck B will be audible only in your headphones. Use the BPM information that’s displayed in Traktor as a guide. Quickly adjust the BPM of Deck B using the Pitch Fader until the BPM of Deck B matches the BPM of Deck A. If the tracks go out of sync (called a ‘train wreck’), simply bring Deck B back to the first beat and start the mix again.
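The arithmetic behind that pitch fader move is simple ratio math. As a quick sketch (the function name and the 126/128 BPM values are hypothetical, chosen only to illustrate), this is the percentage change the fader needs to apply:

```python
def pitch_adjust_percent(target_bpm, current_bpm):
    """Percent pitch-fader change needed to match this deck to the target deck."""
    return (target_bpm / current_bpm - 1.0) * 100.0

# To bring a 126 BPM track on Deck B up to Deck A's 128 BPM:
adjust = pitch_adjust_percent(128.0, 126.0)  # about +1.59%
```

In practice you nudge the fader by ear rather than computing this, but it shows why small fader moves are enough for tracks whose tempos are already close.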

Once the two tracks are beatmatched, you may need to start Deck B again from the beginning so that you can perfect the mix and bring in Deck B at the right time. To do this, simply press the corresponding cue button (e.g., Cue 1) in Traktor, or use your DJ controller’s buttons to jump to Cue 1. Press Cue 1 on Deck B to prepare the track to play from beat one.

When the track in Deck A reaches the part you’d like to mix in on Deck B, press play on Deck B.

Tempo bend in Traktor


Bring up the channel fader on Deck B. The two tracks should be playing in sync! To make a slight adjustment forward or backward, use the Tempo Bend feature. This feature is the two arrow keys located on the top right hand side of each track and is often mapped to buttons on DJ controllers. Tempo Bend is a useful feature for making slight adjustments to the BPM of the track.

Sara Simms – 855 (Simmetry Sounds) & Simina Grigoriu – Raw Sugar (Kuukou Records)

If the two tracks aren’t playing exactly in sync, repeat the steps above until the BPMs are matched. Traktor makes this process a lot easier by displaying the BPM.

If you are relying primarily on your ears to beatmatch, you will need to teach yourself how to listen to two different sources at the same time. With one ear, listen to the BPM of Deck A that’s playing on your monitors. Using your headphones, listen to Deck B. When you first begin beat matching, it’s easier to listen to one track or deck at a time in your headphones. When beat matching becomes easier, you can use the Headphone Cue buttons on both decks to listen to both decks simultaneously.

To learn more about DJing with Traktor, refer to our guide.

How to use Sync in Traktor for better beatmatching

Beatmatching can be a challenging skill to master. Luckily, advances in technology have made beatmatching easier.

Traktor’s Sync feature allows DJs to instantly beatmatch. Using Sync in Traktor can allow you to focus on more creative aspects of mixing, such as looping and applying effects.

To use Traktor’s Sync feature, Traktor’s beatgrids must be correctly aligned. The beatgrid is a fine white grid displayed on the track’s waveform. When you analyze a track in Traktor, the program calculates the BPM of your track based on the transients, which are the initial hits of each sound that appear as peaks in the waveform display. Traktor adds a beatgrid to each track.

While Traktor does a good job of automatically aligning the beatgrids, occasionally you may need to adjust them manually. Learn more about how to beatgrid with Traktor in this helpful guide to ensure your tracks are perfectly in sync.

Once your beatgrids are set, open Traktor’s Preferences and navigate to Transport.

In Sync Mode, select Tempo Sync. Press Play on Deck A and it will automatically be set to Master. This means that when Sync is activated on another Deck, its tempo will be matched to the Master Deck.

On Deck B, press the Sync button in the software or on your DJ controller. When Deck A reaches the point you wish to begin mixing, press play on Deck B. The two tracks will play at exactly the same speed.

Continue to mix the two tracks together, then bring down the channel fader on Deck A to finish the mix.

Weska ft. MC Flipside – Life Lines

Weska ft. MC Flipside – Life Lines (Weska) & Adam James – The Way You Move (Simmetry Sounds)

Start beatmatching in your sets

Now that you’ve learned beatmatching techniques, it’s time to start beatmatching in Traktor in your DJ sets. When you beatmatch tracks together, you can create smooth mixes, radio shows, and seamless club sets. Beatmatching is one of the first steps to mastering the art of DJing.

While these techniques are fresh in your mind, pick up a copy of Traktor and your favorite Native Instruments DJ controller and start beatmatching in your sets today.

Demo Traktor Pro 3 free

The post What is beatmatching? Understanding the science of syncing beats first appeared on Native Instruments Blog.





How to make future house music with cutting-edge production techniques


future house music

If you’re a fan of electronic music, you’ve likely heard of future house. So, what is future house? Future house is a subgenre of house music that rose to prominence in the early 2010s and continues to dominate dance charts today. With its bright synth leads, powerful bass lines, and catchy hooks, the genre provides a playground that is fun and rewarding for producers to work within.

If you’re a producer who wants to make future house but doesn’t know where to start, you’ve come to the right place. In this article, we’ll guide you through the process of creating future house music, from finding inspiration to producing a mastered track that can sound like this:

Jump to these sections:

Follow along with this tutorial using Komplete 14, the leading production suite with 145+ instruments and effects, 100+ Expansions, and over 135,000 sounds.

Learn more

What is future house music?

While people often lump many subgenres of EDM together, future house has some key elements that distinguish it from other styles of electronic music. These characteristics are what make the genre so uplifting and cutting-edge.

Let’s listen to a few future house tracks.

Here’s “On My Mind” by Don Diablo:

And the more recent “Electric Elephants” Edit by Dastic:

What stands out in these tracks that can help us understand how to make future house music?

  1. A catchy melody: Future house music incorporates memorable and singable melodies that we can hum back after listening.
  2. Bass lines: The groovy syncopated bass lines are another mainstay of the genre. Bass often drives these songs and lays a strong foundation for any future house track.
  3. Synthesizers: One contributing factor to this subgenre’s “futuristic” sound is its extensive use of synthesizers. The synths embody the complex and layered sound design inherent to the genre.
  4. Four-to-the-floor beats: Like many other subsets of house music, future house tracks almost always feature a four-to-the-floor beat with a kick drum falling on every quarter note.
  5. Buildups and drops: Future house producers use tension and release extensively in their music. Anticipation builds so that the dynamic drops are laced with impact.

Who started future house music?

No single artist created future house. While it’s difficult to trace any genre to its exact inception, a few musicians have been so influential that we can credit them with the genre’s beginnings and popularization. Two names that are practically synonymous with the term “future house” are Oliver Heldens and Tchami.

Heldens broke out in 2013 with “Gecko (Overdrive)” with Becky Hill—a groovy and forward-thinking track that caught the electronic music community’s attention. After his early successes, Heldens established his label, Heldeep Records, in 2015, where he showcased his music alongside that of emerging future house artists.

Tchami is another pioneer of the future house genre. His innovative production techniques, use of vocal sampling, and diverse influences shaped future house into what it is today. Listen to his early work “Promesses,” which shot him to stardom:

How to make future house music

Future house today has become quite a broad term and ranges from a more melodic sound to the harder, darker varieties. For this tutorial, we’ll be making something a bit more on the melodic side, but feel free to change up your melodies and sounds for a more intense energy.

With that in mind, let’s make an original future house track. We’re using Battery and The Gentleman, as well as the Wake and Bump expansions for Massive X. We’ll start by making each section and instrument separately, and then combine everything in an arrangement for a final mastered track.

We’re using Ableton Live in this tutorial, but you can follow along with your preferred DAW.

1. Program your drums

We’ll start by setting the tempo to 126 BPM.

Master tempo


As we saw earlier, future house tracks generally have a four-to-the-floor beat that lays the foundation for the track. Let’s make our own. We’re going to use the Arena Kit from Battery’s factory selection.

The Arena Kit in Battery


We’ll start by placing a kick on every beat. We’re using the kick that’s on C#3.

Kick pattern


From there, we’ll grab and layer a snare and clap on beats two and four of every bar.

Kick and snare pattern


The last step to creating a basic four-to-the-floor is a hi-hat placed in between every kick.

A basic house beat


This beat is the essential starting point for any future house track, but as it stands it lacks groove. Let’s add in some additional percussion to fill in the gaps.

The full beat pattern


With that, we have the beat that we’ll be using for the drop of the song.
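To recap the pattern we just built, a four-to-the-floor groove can be written out as a simple step sequence. This is an illustrative Python sketch, not anything Battery exposes; the 16-step grid and timing math are conventions assumed for the example:

```python
# One bar of 4/4 at 16th-note resolution: 16 steps, 4 steps per beat.
STEPS = 16

kick  = [1 if step % 4 == 0 else 0 for step in range(STEPS)]  # every quarter note
snare = [1 if step % 8 == 4 else 0 for step in range(STEPS)]  # beats 2 and 4
hat   = [1 if step % 4 == 2 else 0 for step in range(STEPS)]  # offbeat 8ths, between kicks

def step_time(step, bpm=126.0):
    """Start time of a step in seconds (4 steps per beat)."""
    return step * (60.0 / bpm) / 4.0
```

At the track’s 126 BPM, step 4 (beat two, where the snare and clap land) falls just under half a second into the bar.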

2. Add chords

The Gentleman piano is a classic choice for your chordal instrument, but you can experiment with any of these free alternatives. We’re setting the tonal color like this.

The tonal color knob


We’re also adding on some Supercharger GT compression and saturation:

Supercharger GT


A good chord progression is essential in future house. It’s an emotive genre that uses melodies extensively. So you’ll want to use a chord progression that suits the mood you’re trying to evoke.

We’re going with a i-v-VI-VII in E minor. That means the chords that we’re using are E minor (i), B minor (v), C major (VI), and D major (VII). This ascending progression gives the track a euphoric and uplifting feeling.
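For reference, the MIDI notes behind this progression can be worked out from the triad shapes. The octave placement below is a choice made for illustration, not taken from the project file:

```python
# Triad shapes in semitones above the root.
MINOR = (0, 3, 7)
MAJOR = (0, 4, 7)

# i-v-VI-VII in E minor: (root MIDI note, triad shape).
PROGRESSION = [
    (64, MINOR),  # E minor  (i)   - E4, G4, B4
    (59, MINOR),  # B minor  (v)   - B3, D4, F#4
    (60, MAJOR),  # C major  (VI)  - C4, E4, G4
    (62, MAJOR),  # D major  (VII) - D4, F#4, A4
]

chords = [[root + interval for interval in shape] for root, shape in PROGRESSION]
```

Laying the chords out as numbers like this also makes it easy to transpose the whole progression later: add the same offset to every root.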

The chord progression we’re using


Of course, playing the chords in such a static way is boring. Let’s change the rhythm by adding some stabs, and create anticipation by bringing the chords in just before the start of each count. We’ll also remove the bass notes from the chords to leave space in the mix for our bass synth.

The chord progression with an interesting rhythm


Finally, let’s add a Raum reverb to create an epic space to put our piano in.

Raum reverb on the piano


3. Write a bass line

The bass patches we’re focusing on were made in Massive X. We’re layering the top end of the “Building Block” preset from Bump:

The “Building Block” preset


We’re layering it with a simple sub bass patch for low-end fatness. Group these two tracks together, then EQ the sub frequencies out of “Building Block” so the layers don’t overload the low end.

High-pass filtering


With our chord progression in mind, we know which notes to highlight in our bass line – E, B, C, and D. We’ll place them sparsely and double the bass notes with octaves to create a spacious, bouncy, and hard-hitting feel.

Bass MIDI pattern


4. Create a synth lead

Future house is full of metallic-sounding synthesizers, so for our synth lead we’re using the “Stargator” preset from the Wake expansion with a few edited settings like the “Gate” macro and the main envelope.

The “Stargator” preset


Let’s add some space by sending it to another instance of Raum on the “Basic Synth Hall” preset.

We’ll also add an Auto Filter onto the track (if you’re on another DAW you can use a low-pass filter on any EQ). We’re not doing anything with it just yet, but you’ll need it later for the arrangement section.

Auto Filter


5. Sidechain to your kick

Sidechaining allows us to use particular sounds as triggers to lower the volume of other sounds while the trigger sound is playing.
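Conceptually, a sidechain compressor computes a time-varying gain from the trigger signal. Here’s a deliberately simplified Python sketch of that ducking behavior; the linear release curve and the parameter names are assumptions for illustration, and a real compressor smooths the detected level with attack and release time constants:

```python
def sidechain_gain(t, kick_times, duck_depth=0.7, release=0.25):
    """Gain multiplier at time t (seconds): drops to (1 - duck_depth)
    on each kick hit, then recovers linearly over `release` seconds."""
    gain = 1.0
    for hit in kick_times:
        if 0.0 <= t - hit < release:
            recovery = (t - hit) / release           # 0 -> 1 after the hit
            gain = min(gain, 1.0 - duck_depth * (1.0 - recovery))
    return gain
```

Multiplying a bass or pad signal by this gain curve is what produces the rhythmic "ducking out" described below.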

To let our kick cut through the mix, we’ll need to sidechain certain elements of the song using the kick as a trigger.

In order to do that, we’ll need to send our kick to a separate audio channel outside of Battery. Simply right-click on the kick in Battery and send it to direct out 3/4.

Kick routing from Battery


Now create an audio channel that receives audio from Battery’s 3/4 output channels.

Kick routing to audio


With that done, we can set up sidechain compressors that react to the kick’s signal, causing elements to “duck out” of the way and give room to the kick’s punchiness. Here are the sidechain compression settings so you can dial them in yourself:

A sidechain compressor


Add the compressor onto any channel that is taking up a lot of space in the mix. In this case, that’s the bass and lead.

Let’s see what sidechaining is doing to our mix by comparing the piano sound without sidechaining to our piano with sidechaining.

Can you hear how the piano is “ducking out” of the way when the kick is playing?

6. Arrange your track

You’ve got all the ingredients for this track, so it’s time to get creative with the arrangement. We’ve decided to add the piano to the intro section of the song and layer it with the sub bass playing legato notes, plus a clap on every second and fourth beat.

After that, we’re bringing in the melody subtly by automating the cutoff frequency with an auto filter, while increasing the intensity and timing of the snare. Remember if you’re working on another DAW, you can simply use a low-pass filter on an EQ here.

Cutoff automation


The snare building tension


A repeating snare can start to sound very stiff, so don’t forget to adjust the velocity for each hit in this build-up.

Let’s add a bit more tension by inserting a noise swell in Massive X that will cut out just before the drop. The preset is called “Super Swell.”

The build-up will lead straight into the drop which will feature all the elements we composed earlier, except for the piano. We’ve removed the piano here to give the synths breathing room in the mix.

We’ve also added in one-shots from Battery in a few spots. Here is a screenshot of the full arrangement, but we encourage you to get creative here and chop up the elements we’ve created in different ways:

The full arrangement


7. Master your composition

Mastering is what will take your track over the line and get it ready for distribution onto digital platforms. Mastering involves compression, EQing, limiting, and a few other processes.

For industry-standard mastering at the click of a button, we’re going to use Ozone’s “Learn” function. This will analyze your track and master it for you in seconds. Any suggestions it makes are tweakable, so if you know what you’re doing you can take a look under the hood and adjust any parameters yourself.

Mastering with Ozone Master Assistant


We’re happy with what it has suggested for this track. Let’s take a listen to our fully mastered future house track.

Start making future house music

Everything above is for demonstration purposes, which is why we have such a short track on our hands. But with all of those basics in mind, feel free to experiment and create unique variations of this future house track that repeat certain sections and chop up others.

Once you’ve gotten that down, you can make original future house tracks from scratch using all the techniques we discussed here. Get creative and use the tools at your disposal in Komplete 14 to make the next future house banger.

Get Komplete 14

The post How to make future house music with cutting-edge production techniques first appeared on Native Instruments Blog.





Understanding time signatures in music


How to use time signatures in a DAW

For most of us working in music nowadays, using a digital audio workstation (DAW) is central to our process. Time signatures help us organize the content on our timeline and make edits much more easily. We can also use time signatures to create space between sections by adding additional beats to the end or beginning of a section.

Most DAWs have a dedicated feature or setting where you can specify the time signature for your project. This setting allows you to choose the number of beats per measure and the note value that represents one beat (e.g., 4/4, 3/4, 6/8, etc.).

Sometimes slowing the tempo down to create a transition can be a good approach, but sometimes just adding a couple of extra beats (for example, a 3/4 measure after the last measure of a phrase in 4/4) is a way to “stretch time” while maintaining the same click-track tempo in the session. This comes in handy when collaborators are working in a different DAW, or when we want to export a quick mix bounce and simply tell them the session BPM so they can build their own click track without worrying about tempo changes or MIDI tempo maps.
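The time arithmetic here is easy to verify. This small sketch (the function name is assumed for illustration) computes how long a measure lasts against a fixed quarter-note click, showing how an inserted 3/4 bar stretches a 4/4 phrase without any tempo change:

```python
def measure_seconds(beats_per_measure, beat_unit, bpm, click_unit=4):
    """Length of one measure in seconds, with the click on `click_unit` notes.

    e.g. a 4/4 bar at 120 BPM (quarter-note click) lasts 2.0 s.
    """
    return beats_per_measure * (60.0 / bpm) * (click_unit / beat_unit)

# Appending one 3/4 bar to a 4/4 phrase at 120 BPM "stretches time" by:
extra = measure_seconds(3, 4, 120)  # 1.5 seconds, tempo unchanged
```

Because the tempo never changes, a collaborator only needs the BPM to rebuild a matching click track on their end.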

In the context of a DAW, the piano roll is our playground for experimenting with time and how notes fall into place. We can create a note or rhythm pattern, copy it, paste it, duplicate it, transpose it, expand it in time, and create all sorts of combinations stemming from an initial musical cell.

Having a clear understanding of the time signature we have chosen can facilitate the compositional process as well as speed up our editing when working with both MIDI and audio. When we take into consideration the time signature and how our musical ideas are reflected in relationship to the musical grid, we are fully in control of time and how it all gets contextualized.





What is swing in music production?


What is swing in music?

Swing in music is a way of organizing rhythms to make them slightly irregular, enhancing their groove and personality. In its simplest form, swing involves delaying the second of each pair of beats in a rhythm to create a loping long-short, long-short feel. It makes rhythms less regular and, when applied well, more interesting.

Think of it as the difference between a person walking and a galloping horse. The walk is even, steady, and predictable, with the same length of time between each step. The horse’s gallop has an uneven rhythm that makes it feel dynamic and driving. Swing captures this feeling in musical form.

In the first audio example below, our beat has no swing. In the second example, we added a simple 16th note swing. Hear how every second 16th note is delayed to create that galloping feeling. It’s especially audible in the hi-hats.
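In code terms, 16th-note swing is just a conditional delay applied to every second grid step. A minimal Python sketch, assuming note onsets measured in beats with one 16th note equal to 0.25 beats (the function name and swing scaling are illustrative conventions, not any particular sequencer’s):

```python
def apply_swing(step_times, swing=0.5, grid=0.25):
    """Delay every second 16th note to create a long-short feel.

    `step_times` are note onsets in beats; `grid` is one 16th note.
    `swing` = 0 leaves the rhythm straight; 1.0 pushes each offbeat
    16th halfway toward the next step.
    """
    swung = []
    for t in step_times:
        index = round(t / grid)
        if index % 2 == 1:                 # every second 16th note
            t += swing * grid * 0.5
        swung.append(t)
    return swung
```

Running a straight 16th-note hi-hat line through this turns the even “walk” into the galloping long-short pattern described above.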





Ableton Live 12 Lite – Out Now


Ableton Live 12 Lite is out now and free for all Live Lite users.

A simple and intuitive way to write, record, and perform your musical ideas, it includes core Live features and introduces some of Live 12’s latest improvements, too. 

Download Live 12 Lite now ›

New sounds, new workflows

Live 12 Lite comes with two new Packs previously only available in paid versions of Live – Beat Tools and Build and Drop – featuring hundreds of Instrument, Drum, and Effect Racks, audio loops, and MIDI clips designed to spark your creativity.

You can now create your perfect take with comping, too. Piece together the best moments of individual recordings, chop and combine samples in creative new ways, and more. 

Live 12 features in Lite

Create more intuitively with an improved interface and browser. Find the right sound faster with Sound Similarity Search – swap individual samples or entire drum racks with the closest matches from your library. Explore new tonalities with tuning systems, keep your ideas in harmony with Keys and Scales, and much more.

Learn more about Live Lite ›

Free for all Note users

Get a free Live 12 Lite license when you download Ableton Note – our playable iOS app for forming musical ideas. Take your Note sketches further when you transfer them to Live 12 Lite.

Discover Note ›

Get started with Live 12 Lite

Learn to make your first track in Live 12 Lite, and zoom in on specific features with the Learn Live video series.





Synth pads: how to create atmospheric textures in your music


What are synth pads?

A synth pad is a sustained, pitched sound typically created using a synthesizer. It could be a single note or a chord (two or more notes at once). Synth pads tend to have a long loudness envelope, meaning they fade in gently, stick around for a while, and trail away slowly when their moment has passed. Synth pads are smooth, atmospheric sounds that play an important supporting role in many musical contexts.
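That loudness envelope can be sketched as a simple piecewise function. The shape below (slow linear attack, long sustain, slow linear release) is a simplified illustration with hypothetical timings, not a model of any particular synth:

```python
def pad_envelope(t, attack=2.0, sustain_level=0.8, release=4.0, note_off=8.0):
    """Amplitude of a pad-style envelope at time t (seconds):
    slow fade-in, long sustain, then a slow fade-out after note_off."""
    if t < attack:                           # fade in gently
        return sustain_level * (t / attack)
    if t < note_off:                         # stick around for a while
        return sustain_level
    fade = 1.0 - (t - note_off) / release    # trail away slowly
    return max(0.0, sustain_level * fade)
```

Compare this with a lead or pluck, whose envelope would jump to full level almost instantly and decay quickly; the long ramps are what give a pad its smooth, atmospheric character.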

We can understand pads better by comparing them with other synthesizer elements you might hear in electronic music. A synth lead is more of a star player: it sits in the foreground of a track, outlining a single melody or topline. This might be the part you hum along to when listening to the track. A synth arpeggio, meanwhile, is a rhythmic sequence of notes that can be used to create movement and energy.

Synth pads, by contrast, don’t supply melody or add rhythmic movement to a track. Instead, they add atmosphere and depth, helping to fill out an arrangement and give your music richness and shine. Synth pads are used in all kinds of music, from pop and electronic tracks through to film and video game scores.





Sub bass: how to add that deep tone to your tracks


Final pro tips for sub bass

Whether you have your sub bass follow along with your bass line, use it only to accentuate certain notes, or let it steal the show by occupying the entire low end, here are some of my top pro tips for using sub bass in your productions.

Layer your sub bass

Sometimes, one sub bass isn’t enough to achieve the desired impact. Layering different sub bass sounds can add depth and complexity. For example, layering a clean sine wave with a subtly distorted triangle wave can combine the best of both worlds: solid low-end power with a hint of harmonic interest. When layering, pay close attention to phase alignment and check your EQs to make sure the layers complement rather than conflict with each other.
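As a toy illustration of this layering idea, here’s a per-sample sketch in Python: a clean sine fundamental plus a subtly distorted triangle at the same pitch, mixed at an assumed 80/20 balance. All values are hypothetical, and in practice the two layers would be separate synth tracks balanced with faders and EQ rather than summed in code:

```python
import math

def sub_layer(t, freq=55.0, drive=2.0):
    """One sample of a layered sub at time t (seconds): a clean sine
    plus a soft-clipped triangle for a hint of harmonic interest."""
    sine = math.sin(2 * math.pi * freq * t)
    # Naive triangle wave, phase-shifted to start at zero like the sine.
    phase = (freq * t + 0.75) % 1.0
    triangle = 4.0 * abs(phase - 0.5) - 1.0
    distorted = math.tanh(drive * triangle)   # subtle saturation
    return 0.8 * sine + 0.2 * distorted
```

Starting both layers at the same phase is the code analogue of the phase-alignment check mentioned above: misaligned layers can partially cancel and thin out the low end instead of reinforcing it.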

Use sidechain compression

Sidechain compression is a powerful technique to ensure your sub bass and kick drum coexist harmoniously in the mix, each with its own clear space. By applying sidechain compression to your sub bass, triggered by the kick drum, the sub bass volume temporarily ducks each time the kick hits. This creates a rhythmic pulse that not only prevents frequency clashes but also adds a dynamic movement to your track, enhancing the groove.

Monitor your volume levels

Volume leveling is a crucial step in integrating your sub bass seamlessly into the mix, ensuring it provides a solid foundation without overshadowing the other musical elements. By carefully adjusting the volume of your sub bass, you can maintain a balance that complements the overall track, allowing it to support the mix rather than dominate it.

Use automation

For parts of your track that might have varying energy levels or where the sub bass needs to be more prominent or subdued, volume automation becomes an invaluable tool. This technique allows for precise control over the sub bass volume throughout different sections of your song, adapting in real-time to the dynamic needs of the track to keep the energy consistent and engaging.

Play with spatial placement

While sub bass frequencies inherently possess an omnidirectional quality, making them feel like they’re coming from everywhere, a strategic approach to spatial placement can elevate your track. By applying slight stereo widening to the upper harmonics of your sub bass, you introduce an element of spatial intrigue without compromising the core low-end focus. However, it’s important to tread lightly with stereo effects on low frequencies – overdoing it can lead to phase issues that muddy your mix or weaken the impact of your sub bass when played on mono systems, such as club PA setups.

Reference and test

Ensuring your sub bass performs consistently across a variety of listening environments is crucial for its effectiveness in your mix. Reference and test your track on different systems – these can range from high-quality studio monitors to basic headphones and even built-in smartphone speakers. This will show you how your sub bass translates in real-world scenarios, highlighting any issues with balance, clarity, or presence that might not be apparent in the studio. By making adjustments based on these tests, you can achieve a sub bass that maintains its impact, whether it’s felt on a club sound system or heard through earbuds during a morning commute.





Meet the new Push | Ableton


The new Push is a standalone instrument that invites you to step away from your computer and be fully in the moment with your music. Play expressive MPE-enabled pads in your own style, and plug your other hardware straight into Push’s built-in audio interface. And as an upgradeable instrument with replaceable components, it’s set for a long life at the center of your setup. 

See what’s new with Push





AI and Music-Making Part 1: The State of Play


Listen to this article.

This is Part 1 of a two-part article. Read Part 2 here.

The word “AI” provokes mixed emotions. It can inspire excitement and hope for the future - or a shiver of dread at what’s to come. In the last few years, AI has gone from a distant promise to a daily reality. Many of us use ChatGPT to write emails and Midjourney to generate images. Each week, it seems, a new AI technology promises to change another aspect of our lives.

Music is no different. AI technology is already being applied to audio, performing tasks from stem separation to vocal deepfakes, and offering new spins on classic production tools and music-making interfaces. One day soon, AI might even make music all by itself.

The arrival of AI technologies has sparked heated debates in music communities. Ideas around creativity, ownership, and authenticity are being reexamined. Some welcome what they see as exciting new tools, while others say the technology is overrated and won’t change all that much. Still others are scared, fearing the loss of the music-making practices and cultures they love.

In this two-part article, we will take a deep dive into AI music-making to try to unpick this complex and fast-moving topic. We’ll survey existing AI music-making tools, exploring the creative possibilities they open up and the philosophical questions they pose. And we will try to look ahead, examining how AI tools might change music-making in the future.

The deeper you go into the topic, the stronger those mixed emotions become. The future might be bright, but it’s a little scary too.

Defining terms

Before we go any further, we should get some terms straight.

First of all, what is AI? The answer isn’t as simple as you might think. Coined in the 1950s, the term has since been applied to a range of different technologies. In its broadest sense, AI refers to many forms of computer program that seem to possess human-like intelligence, or that can do tasks that we thought required human intelligence. 

The AI boom of the last few years rests on a specific technology called machine learning. Rather than needing to be taught entirely by human hand, a machine learning system is able to improve itself using the data it’s fed. But machine learning has been around for decades. What’s new now is a specific kind of machine learning called deep learning. 

Deep learning systems are made up of neural networks: sets of algorithms loosely modeled on the human brain that can interpret incoming data and recognize patterns. The “deep” part tells us that there are multiple layers to these networks, allowing the system to interpret data in more sophisticated ways. This makes a deep learning system very skilled at making sense of unstructured data. In other words, you can throw random pictures or text at it and it will do a good job of spotting the patterns.

But deep learning systems aren’t “intelligent” in the way often depicted in dystopian sci-fi movies about runaway AIs. They don’t possess a “consciousness” as we would understand it - they are just very good at spotting the patterns in data. For this reason, some argue that the term “AI” is a misnomer. 

The sophistication of deep learning makes it processor-hungry, hence the technology only becoming widely accessible in the last few years. But deep learning technology has been present in our lives for longer, and in more ways, than you might think. Deep learning is used in online language translators, credit card fraud detection, and even the recommendation algorithms in music streaming services. 

These established uses of deep learning AI mostly sit under the hood of products and services. Recently, AI has stepped into the limelight. Tools such as Dall-E and ChatGPT don’t just sift incoming data to help humans recognize the patterns. They produce an output that attempts to guess what the data will do next. This is called generative AI.

Where other forms of deep learning chug along in the background of daily life, generative AI draws attention to itself. By presenting us with images, text, or other forms of media, it invites us into a dialogue with the machine. It mirrors human creativity back at us, and makes the potentials - and challenges - of AI technology more starkly clear.

No ChatGPT for music?

Deep learning technology can be applied to digital audio just as it can to images, text, and other forms of data. The implications of this are wide-ranging, and we’ll explore them in depth in these articles. But AI audio is lagging behind some other applications of the technology. There is, as yet, no ChatGPT for music. That is: there’s no tool trained on massive amounts of audio that can accept text or other kinds of prompts and spit out appropriate, high-quality music. (Although there may be one soon - more on this in Part 2). 

There are a few possible reasons for this. First of all, audio is a fundamentally different kind of data from image or text, as Christian Steinmetz, an AI audio researcher at Queen Mary University, explains. “[Audio] has relatively high sample rate - at each point in time you get one sample, assuming it’s monophonic audio. But you get 44,000 of those samples per second.” This means that generating a few minutes of audio is the data equivalent of generating an absolutely enormous image.
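Steinmetz’s point about scale is easy to check with back-of-the-envelope arithmetic. The figures below (a 44.1 kHz sample rate, a 1024 x 1024 image) are illustrative assumptions, not values from the article:

```python
# Rough comparison of audio vs. image data sizes. The numbers
# (44.1 kHz mono, a 1024x1024 image) are illustrative assumptions.

SAMPLE_RATE = 44_100          # CD-quality mono audio, samples per second
SONG_SECONDS = 3 * 60         # a three-minute song
IMAGE_PIXELS = 1024 * 1024    # a large generated image

audio_samples = SAMPLE_RATE * SONG_SECONDS
print(f"3-minute mono song: {audio_samples:,} samples")   # 7,938,000 samples
print(f"1024x1024 image:    {IMAGE_PIXELS:,} pixels")     # 1,048,576 pixels
print(f"The song is roughly {audio_samples // IMAGE_PIXELS}x more values")
```

Even a short song holds several times more raw values than a large image - and a generative model has to get all of them right.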

As AI audio researchers and innovators the Dadabots observe, this puts a limit on how fast currently available systems can work. “Some of the best quality methods of generating raw audio can require up to a day to generate a single song.”

Unlike images or text, audio has a time dimension. It matters to us how the last minute of a song relates to the first minute, and this poses specific challenges to AI. Music also seems harder to reliably describe in words, making it resistant to the text-prompt approach that works so well for images. “Music is one of our most abstract artforms,” say the Dadabots. “The meaning of timbres, harmonies, rhythms alone are up to the listener's interpretation. It can be very hard to objectively describe a full song in a concise way where others can instantly imagine it.”

Added to this, our auditory perception seems to be unusually finely tuned. “We may be sensitive to distortions in sound in a different way than our visual system is sensitive,” says Steinmetz. He gives the example of OpenAI’s Jukebox, a generative music model launched in 2020 - the most powerful at the time. It could create “super convincing music” in the sense that the important elements were there. “But it sounded really bad from a quality perspective. It’s almost as if for audio, if everything is not in the exact right place, even an untrained listener is aware that there's something up. But for an image it seems like you can get a lot of the details mostly right, and it's fairly convincing as an image. You don't need to have every pixel exactly right.”

It’s tempting to conclude that music is simply too hard a nut to crack: too mysterious, too ephemeral an aesthetic experience to be captured by the machines. That would be naive. In fact, efforts to design effective AI music tools have been progressing quickly in recent years. 

There is a race on to create a “general music model” - that is, a generative music AI with a versatility and proficiency equivalent to Stable Diffusion or ChatGPT. We will explore this, and its implications for music-making, in Part 2 of this series.

But there are many potential uses for AI in music beyond this dream of a single totalizing system. From generative MIDI to wacky-sounding synthesis, automated mixing to analog modeling, AI tools have the potential to shake up the music-making process. In Part 1, we’ll explore some of what’s out there now, and get a sense of how these tools might develop in the future. In the process, we’ll address what these tools might mean for music-making. Does AI threaten human creativity, or simply augment it? Which aspects of musical creation might change, and which will likely stay the same?

Automating production tasks

At this point you may be confused. If you are a music producer or other audio professional, “AI music production tools” might not seem like such a novel idea. In fact, the “AI” tag has been floating around in the music tech world for years. 

For example, iZotope have integrated AI into products like their all-in-one mixing tool, Neutron 4. The plug-in’s Mix Assistant listens to your whole mix and analyzes the relationships between the sounds, presenting you with an automated mix that you can tweak to taste.

Companies like Sonible, meanwhile, offer “smart” versions of classic plug-in effects such as compression, reverb, and EQ. These plug-ins listen to the incoming audio and adapt to it automatically. The user is then given a simpler set of macro controls for tweaking the settings. pure:comp, for instance, offers just one main “compression” knob that controls parameters such as threshold, ratio, attack, and release simultaneously. 

Other tools offer to automate parts of the production process that many producers tend to outsource. LANDR will produce an AI-automated master of your track for a fraction of the cost of hiring a professional mastering engineer. You simply upload your premaster to their website, choose between a handful of mastering styles and loudness levels, and download the mastered product. 

What is the relationship between these tools and the deep learning technologies that are breaking through now? Here we come back to the vagueness of the term “AI.” Deep learning is one kind of AI technology, but it’s not the only one. Before that, we had “expert systems.” 

As Steinmetz explains, this method works “by creating a tree of options.” He describes how an automated mixing tool might work following this method. “If the genre is jazz, then you go to this part of the tree. If it’s jazz and the instrument is an upright bass, then you go to this part of the tree. If it's an upright bass and there's a lot of energy at 60 hertz, then maybe decrease that. You come up with a rule for every possible scenario. If you can build a complicated enough set of rules, you will end up with a system that appears intelligent.”
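Steinmetz’s “tree of options” can be sketched as a few hand-written rules. Everything below - the genre, the instrument, the dB threshold, and the suggested cut - is invented for illustration; a real expert system would need thousands of such branches:

```python
# A toy "expert system" mixing rule, along the lines Steinmetz describes:
# hand-written rules branch on genre, instrument, and spectral content.
# All genres, instruments, and dB values here are invented for illustration.

def eq_suggestion(genre, instrument, energy_60hz_db):
    if genre == "jazz":
        if instrument == "upright bass":
            if energy_60hz_db > -6:
                return "cut 60 Hz by 3 dB"   # too much low-end energy
            return "leave low end flat"
    return "no rule matched"  # a real system needs rules for every scenario

print(eq_suggestion("jazz", "upright bass", -3))  # cut 60 Hz by 3 dB
```

The scaling problem is visible even in this sketch: every new genre, instrument, or frequency band multiplies the number of branches a human expert has to write.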


It’s difficult to say for sure what technology is used in individual products. But it’s likely that AI-based music tech tools that have been around for more than a few years use some variation of this approach. (Of course, deep learning methods may have been integrated into these tools more recently).  

This approach is effective when executed well, but it has limitations. As Steinmetz explains, such technology requires expert audio engineers to sit down with programmers and write all the rules. And as anyone who has mixed a track will know, it’s never so simple as following the rules. A skilled mix engineer makes countless subtle decisions and imaginative moves. The number of rules you’d need to fully capture this complexity is simply too vast. “The problem is of scale, basically,” says Steinmetz.

Here’s where deep learning comes in. Remember: deep learning systems can teach themselves from data. They don’t need to be micromanaged by a knowledgeable human. The more relevant data they’re fed, and the more processor power they have at their disposal, the more proficient they can become at their allotted task. 

This means that a deep learning model fed on large amounts of music would likely do a better job than an expert systems approach - and might, by some metrics, even surpass a human mix engineer.

This is not yet a reality in the audio domain, but Steinmetz points to image classification as an example of AI tools reaching this level. “The best model is basically more accurate than a human at classifying the contents of an image, because we've trained it on millions of images - more images than a human would even be able to look at. So that's really powerful.”

This means that AI will probably get very good at various technical tasks that music producers have until now considered an essential part of the job. From micro-chores like setting your compressor’s attack and decay, to diffuse tasks like finalizing your entire mixdown, AI may soon be your very own in-house engineer. 

How will this change things for music-makers? Steinmetz draws an analogy with the democratization of digital photography through smartphone cameras. Professional photographers who did workaday jobs like documenting events lost out; demand for fine art photographers stayed the same.

“In mixing or audio engineering, it's a similar thing. If you're doing a job that could theoretically be automated - meaning that no one cares about the specifics of the artistic outputs, we just need it to fit some mold - then that job is probably going to be automated eventually.” But when a creative vision is being realized, the technology won’t be able to replace the decision-maker. Artists will use “the AI as a tool, but they're still sitting in the pilot's seat. They might let the tool make some decisions, but at the end of the day, they're the executive decision-maker.”

Of course, this won’t be reassuring to those who make their living exercising their hard-won production or engineering skills in more functional ways. We might also wonder whether the next generation of producers could suffer for it. There is a creative aspect to exactly how you compress, EQ, and so on. If technology automates these processes, will producers miss out on opportunities to find creative new solutions to age-old problems - and to make potentially productive mistakes?

On the other hand, by automating these tasks, music-makers will free up time and energy - which they can spend expanding the creative scope of their music in other ways. Many tasks that a current DAW executes in seconds would, in the era of analog studios, have taken huge resources, work hours, and skill. We don’t consider the music made on modern DAWs to be creatively impoverished as a result. Instead, the locus of creativity has shifted, as new sounds, techniques, and approaches have become accessible to more and more music-makers.

“It is true that some aspects of rote musical production are likely to be displaced by tools that might make light work of those tasks,” says Mat Dryhurst, co-founder - alongside his partner, the musician Holly Herndon - of the AI start-up Spawning. “But that just shifts the baseline for what we consider art to be. Generally speaking artists we cherish are those that deviate from the baseline for one reason or another, and there will be great artists in the AI era just as there have been great artists in any era.”

In the beginning there was MIDI

Making a distinction between functional production tasks and artistry is relatively easy when thinking about technical tasks such as mixing. But what about the composition side? AI could shake things up here too.  

An early attempt to apply machine learning in this field was Magenta Studio, a project from Google’s Magenta research lab that was made available as a suite of Max For Live tools in 2019. These tools offer a range of takes on MIDI note generation: creating a new melody or rhythm from scratch; completing a melody based on notes given; “morphing” between two melodic clips. Trained on “millions” of melodies and rhythms, these models offer a more sophisticated - and, perhaps, more musical - output than traditional generative tools.

AI-powered MIDI note generation has been taken further by companies like Orb Plugins, who have packaged the feature into a set of conventional soft synths – similar to Mixed In Key's Captain plug-ins. Drum sequencers, meanwhile, have begun to incorporate the technology to offer users rhythmic inspiration.

Why the early interest in MIDI? MIDI notation is very streamlined data compared to audio’s 44,000 samples per second, meaning models can be simpler and run lighter. When the technology was in its infancy, MIDI was an obvious place to start. 
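The difference in scale is stark when written out. A sketch, using approximate figures (a 3-byte MIDI note-on/note-off pair, 16-bit 44.1 kHz mono audio):

```python
# Rough comparison: a MIDI note vs. one second of raw audio.
# Figures are approximate - a note-on/note-off pair is 3 bytes each,
# and one second of 16-bit 44.1 kHz mono audio is tens of kilobytes.

midi_note_bytes = 2 * 3            # note-on + note-off, 3 bytes each
audio_second_bytes = 44_100 * 2    # 44.1 kHz, 16-bit (2 bytes) mono

print(f"MIDI note event pair: {midi_note_bytes} bytes")
print(f"One second of audio:  {audio_second_bytes:,} bytes")
print(f"Audio is ~{audio_second_bytes // midi_note_bytes:,}x larger")
```

A model that only has to predict the next note event is working with orders of magnitude less data than one predicting every audio sample.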

Of course, MIDI’s compactness comes with limitations. Pitches and rhythms are only part of music’s picture. Addressing the preference for MIDI among machine learning/music hackers a few years ago, the Dadabots wrote: “MIDI is only 2% of what there is to love about music. You can’t have Merzbow as MIDI. Nor the atmosphere of a black metal record. You can’t have the timbre of Jimi Hendrix’s guitar, nor Coltrane’s sax, nor MC Ride. Pure MIDI is ersatz.”

As AI technology gets more sophisticated and processor power increases, tools are emerging that allow musicians to work directly with raw audio. So are MIDI-based AI tools already a thing of the past? 

Probably not. Most modern musicians rely on MIDI and other “symbolic” music languages. Electronic producers punch rhythms into a sequencer, draw notes in the piano roll, and draw on techniques grounded in music theory traditions (such as keys and modes). AI can offer a lot here. Besides generating ideas, we could use MIDI-based AI tools to accurately transcribe audio into notation, and to perform complex transformations of MIDI data. (For instance, transforming rhythms or melodies from one style or genre into another).

In a talk arguing for the continued importance of “symbolic music generation,” Julian Lenz of AI music company Qosmo pointed out that raw audio models aren’t yet good at grasping the basics of music theory. For example, Google’s MusicLM, a recent general music model trained on hundreds of thousands of audio clips, has trouble distinguishing between major and minor keys. Lenz concluded by demonstrating a new Qosmo plugin that takes a simple tapped rhythm and turns it into a sophisticated, full-kit drum performance. While raw audio AI tools remain somewhat janky, MIDI-based tools may offer quicker routes to inspiration. 

Such tools pose tricky questions about the attribution of creativity. If an AI-based plug-in generates a melody for you, should you be considered the “composer” of that melody? What if you generated the melody using an AI model trained on songs by the Beatles? Is the melody yours, the AI’s, or should the Beatles get the credit?

These questions apply to many forms of AI music-making, and we’ll return to them in Part 2. For now it’s sufficient to say that, when it comes to MIDI-based melody and rhythm generation, the waters of attribution have been muddied for a long time. Modern electronic composers often use note randomizers, sophisticated arpeggiators, Euclidean rhythm generators, and so on. The generated material is considered a starting point, to be sifted, edited, and arranged according to the music-maker’s creative vision. AI tools may give us more compelling results straight out the gate. But a human subjectivity will still need to decide how the generated results fit into their creative vision.

Timbre transfer: Exploring new sounds

When we think of a radical new technology like AI, we might imagine wild new sounds and textures. MIDI is never going to get us there. For this, we need to turn to the audio realm. 

In the emerging field of “neural synthesis,” one of the dominant technologies is timbral transfer. Put simply, timbral transfer takes an audio input and makes it sound like something else. A voice becomes a violin; a creaking door becomes an Amen break. 

How does this work? Timbre transfer models, such as IRCAM’s RAVE (“Realtime Audio Variational autoEncoder”), feature two neural networks working in tandem. One encodes the audio it receives, capturing it according to certain parameters (like loudness or pitch). Using this recorded data, the other neural net then tries to reconstruct (or decode) the input. 

The sounds that an autoencoder spits out depend on the audio it’s been trained on. If you’ve trained it on recordings of a flute, then the decoder will output flute-like sounds. This is where the “timbre transfer” part comes in. If you feed your flute-trained encoder a human voice, it will still output flute sounds. The result is a strange amalgam: the contours of the voice with the timbre of a flute.
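As a rough intuition - not how RAVE actually works, since it learns both stages with neural networks - the encode/decode split can be sketched without any machine learning at all: capture a loudness contour from one sound, then re-render it with another timbre. The sample rate, frame size, and “flute” harmonic weights below are all invented:

```python
import math

# A drastically simplified, non-neural sketch of the timbre-transfer idea:
# "encode" an input sound as a loudness contour, then "decode" that contour
# with a different timbre. Real systems like RAVE learn both steps from data;
# the rates and harmonic weights here are illustrative assumptions.

SR = 8_000                      # sample rate (Hz)
N = SR                          # one second of audio

# Input "voice": a 200 Hz tone that swells and fades
voice = [math.sin(2 * math.pi * 200 * n / SR) * math.sin(math.pi * n / N)
         for n in range(N)]

# Encode: keep only a frame-wise loudness envelope (10 ms frames)
F = SR // 100
envelope = []
for i in range(0, N, F):
    frame = voice[i:i + F]
    rms = math.sqrt(sum(x * x for x in frame) / F)
    envelope.extend([rms] * F)

# Decode: re-render the envelope with a "flute" recipe
# (strong fundamental, weak upper harmonics)
flute = [sum(w * math.sin(2 * math.pi * 200 * k * n / SR)
             for k, w in [(1, 1.0), (2, 0.3), (3, 0.1)])
         for n in range(N)]
output = [e * f for e, f in zip(envelope, flute)]

print(len(output))  # 8000 - the voice's contour, rendered with the flute timbre
```

The neural version replaces both hand-written stages with learned ones, which is what lets it capture far subtler qualities than a loudness curve.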

Timbre transfer is already available in a number of plug-ins, though none have yet been presented to the mass market. Perhaps the most accessible is Qosmo’s Neutone, a free-to-download plug-in that allows you to try out a number of neural synthesis techniques in your DAW. This includes RAVE and another timbre transfer method called DDSP (Differentiable Digital Signal Processing). DDSP is a kind of hybrid of the encoder technology and the DSP found in conventional synthesis. It’s easier to train and can give better-sounding outputs - providing the input audio is monophonic. 

Timbre transfer technology has been making its way into released music for some years. In an early example, the track “Godmother” from Holly Herndon’s album PROTO, a percussive track by the producer Jlin is fed through a timbre transfer model trained on the human voice. The result is an uncanny beatboxed rendition, full of strange details and grainy artifacts.

“Godmother” has an exploratory quality, as if it is feeling out a new sonic landscape. This is a common quality to music made using timbral transfer. On A Model Within, the producer Scott Young presents five experimental compositions with just such a quality. Each explores a different preset model found in Neutone, capturing the unfamiliar interaction between human and machine. 

Even before he’d encountered AI tools, a busy life made Young interested in generative composition approaches. When he started out making music, the producer recalls, “I spent a month making a tune. It was quite romantic. But my life in Hong Kong couldn't allow me to do that too much. And so I slowly attuned to Reaktor generators, to making sequences and stitching them together.”

Last year, the musician Eames suggested that he could speed things up further with generative AI. Young began exploring and came across RAVE, but struggled to get it to work, in spite of his background in software engineering. Then he discovered Neutone. “The preset models were so impressive that I eagerly began creating tunes with them. The results were mind-blowing. The output’s really lifelike.”

A typical fear surrounding AI tools is that they might remove creativity from music-making. Young’s experience with timbre transfer was the opposite. Timbre transfer models are - for now at least - temperamental. The sound quality is erratic, and they respond to inputs in unpredictable ways. For Young, this unpredictability offered a route out of tired music-making habits. “There's much more emphasis on serendipity in the making [process], because you can't always predict the output based on what you play.”

Once the material was generated, he still had to stitch it into an engaging composition - a process he likened to the editing together of live jazz recordings in an earlier era. “When using this generative approach, the key as a human creator is to know where to trim and connect the pieces into something meaningful that resonates with us.”

In the EP’s uncanniest track, “Crytrumpet,” Young feeds a recording of his crying baby daughter through a model trained on a trumpet. Moments like this neatly capture the sheer strangeness of AI technology. But timbral transfer is far from the only potential application of AI in plug-ins.

In March, Steinmetz co-organized the Neural Audio Plugin Competition alongside Andrew Fyfe of Qosmo and the Audio Programmer platform. The competition aimed to stimulate innovation by offering cash prizes for the most impressive entries. “As far as making neural networks inside plugins, it really hadn't been established yet,” says Steinmetz. “We need a way to encourage more people to work in this space, because I know there's stuff here to be done that's going to be really impactful.”

Of the 18 entries, some offered neural takes on conventional effects such as compression, and others proposed generative MIDI-based tools. Then there were the more surprising ideas. Vroom, a sound design tool, allows you to generate single sounds using text prompts. HARD is a novel “audio remixer,” enabling you to crossfade between the harmonic and rhythmic parts of two tracks independently. Everyone was required to open source their code, and Steinmetz hopes future plug-in designers will build on this work. He sees the start of a “movement of people interested in this topic.”

Analog modeling

So, AI can do new sounds. But it can also do old ones - perhaps better than we could before. Analog modeling is a cornerstone of the plug-in industry. According to some, AI could be its future. Plug-ins like Baby Audio’s TAIP (emulating “a 1971 European tape machine”) and Tone Empire’s Neural Q (“a well-known vintage German equalizer”) use neural network-based methods in place of traditional modeling techniques.

Baby Audio explain how this works on their website: 

“Where a normal DSP emulation would entail ‘guesstimating’ the effect of various analog components and their mutual dependencies, we can use AI / neural networks to accurately decipher the sonic characteristics that make a tape machine sound and behave in the way it does. This happens by feeding an algorithm various training data of dry vs. processed audio and teaching it to identify the exact characteristics that make up the difference. Once these differences have been learned by the AI, we can apply them to new audio.”
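The dry-vs-processed training loop Baby Audio describe can be caricatured with simple curve-fitting. In this sketch the unknown “tape machine” is stood in by a tanh saturation curve, and a cubic is fitted to the paired samples by least squares - a real product would train a neural network on far richer data:

```python
import math

# A toy version of "dry vs. processed" training: given paired samples,
# fit a curve mapping dry input to processed output, then apply it to
# new audio. The tanh "tape" target and the cubic model are stand-ins.

dry = [i / 50 - 1 for i in range(101)]          # dry samples in [-1, 1]
processed = [math.tanh(1.5 * x) for x in dry]   # unknown "tape" behavior

# Learn y = a*x + b*x^3 by least squares (normal equations for two unknowns)
s_xx = sum(x * x for x in dry)
s_x4 = sum(x**4 for x in dry)
s_x6 = sum(x**6 for x in dry)
s_xy = sum(x * y for x, y in zip(dry, processed))
s_x3y = sum(x**3 * y for x, y in zip(dry, processed))
det = s_xx * s_x6 - s_x4 * s_x4
a = (s_xy * s_x6 - s_x3y * s_x4) / det
b = (s_xx * s_x3y - s_x4 * s_xy) / det

def model(x):
    """Apply the learned 'tape' curve to new audio samples."""
    return a * x + b * x**3

err = max(abs(model(x) - math.tanh(1.5 * x)) for x in dry)
print(f"worst-case fit error: {err:.3f}")
```

Two fitted coefficients already approximate the saturation curve closely; the neural-network version of this idea scales the same principle up to capture frequency-dependent and time-dependent behavior too.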

Why use AI instead of traditional modeling methods? One reason is better results. Tone Empire claims that traditional circuit modeling “can never produce as authentic an analog emulation” as AI-based approaches.

Another is speed. Analog modeling using neural processing could potentially save a lot of time and money for plug-in companies. This means we might be looking at a proliferation of low-cost, high-quality analog models - no bad thing for producers who enjoy playing with new toys.

More radically, it means that modeling can be placed in the hands of music-makers themselves. This is already happening in the guitar world, via products like IK Multimedia’s TONEX and Neural DSP’s Quad Cortex. The Quad Cortex floor modeling unit comes with an AI-powered Neural Capture feature that allows guitarists to model their own amps and pedals. It’s simple: the Quad Cortex sends a test tone through the target unit and, based on the output audio, creates a high quality model in moments.
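At its core, the capture idea is: play a known signal through the unit, compare input and output, and fit a model to the difference. A minimal sketch, with the “pedal” stood in by a simple gain (Neural Capture models far richer, nonlinear behavior with a neural network; all values here are illustrative):

```python
import math

# A heavily simplified sketch of the "capture" idea: send a known test
# tone through a black-box unit, then estimate a model from the response.
# The stand-in "pedal" just applies gain; real captures learn nonlinear,
# frequency-dependent behavior.

def unknown_pedal(samples):          # stand-in for the hardware under test
    return [0.5 * s for s in samples]

SR = 8_000
tone = [math.sin(2 * math.pi * 440 * n / SR) for n in range(SR)]
response = unknown_pedal(tone)

# "Model" the unit from the measurement: least-squares gain estimate
gain = sum(i * o for i, o in zip(tone, response)) / sum(i * i for i in tone)
print(f"captured gain: {gain:.2f}")  # captured gain: 0.50
```

Everything interesting in a real capture lies in replacing that single gain number with a learned model expressive enough to reproduce distortion, compression, and EQ all at once.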

This presents exciting possibilities. Many of us have that one broken old pedal or piece of rack gear whose idiosyncratic sound we love. What if you could model it for further use in-the-box - and share the model with friends? Until now, modeling has mostly been the domain of technical specialists. It’s exciting to think what musicians might do with it.

Democratizing music tech

This theme - of bringing previously specialized technical tasks into the hands of musicians - recurs when exploring AI music-making tools. For Steinmetz, analog modeling is just one application of deep learning technology, and not the most exciting. He invites us to imagine a tool like Midjourney or Stable Diffusion, but instead of producing images on command, it generates new audio effects. 

“[This] enables anyone to create an effect, because you don't need to be a programmer to do it. I can search a generative space - just how I might search Stable Diffusion - for tones or effects. I could discover some new effect and then share that with my friends, or use it for my own production. It opens up a lot more possibilities for creativity."

We looked earlier at how certain production tasks may be automated by AI, freeing up musicians to focus their creativity in other areas. One such area might be the production tools they’re using. AI technology could enable everyone to have their own custom music-making toolbox. Perhaps making this toolbox as creative and unique as possible will be important in the way that EQing or compression is today.

Steinmetz envisions “the growth of a breed of programmer/musician/audio engineer, people that are both into the tech and the music side.” These people will either find creative ways to “break” the AI models available, or “build their own new models to get some sort of new sound specifically for their music practice.” He sees this as the latest iteration of a longstanding relationship between artists and their tools. “Whenever a [new] synthesizer is on the scene, there's always some musicians coming up with ideas to tinker with it and make it their own.”

Dryhurst also sees a future in artists building their own custom models, just as he and Herndon have done for PROTO and other projects. “I feel that is closer to how many producers will want to use models going forward, building their own ‘rig’ so to speak, that produces idiosyncratic results. I think that over time, we might also begin to see models themselves as a new medium of expression to be shared and experienced. I think that is where it gets very exciting and novel; it may transpire that interacting with an artist model is as common as interacting with an album or another traditional format. We have barely scratched the surface on the possibilities there yet.”

Read Part 2 of this article.

Text: Angus Finlayson
Images: Veronika Marxer

Have you tried making music with AI tools? Share your results and experience with the Loop Community on Discord. If you’re not already a member, sign up to get started.




