Wow, I suck… I really haven’t updated this since summer 2017? Well then, time for an update. Still playing/engineering, though not nearly as much as I would like. Last week I had to renew a subscription to a site called Produce Like A Pro, run by a rather interesting producer I’ve been following named Warren Huart. It’s a cool music production/engineering site that offers the original sessions of songs for people to remix. I hadn’t downloaded one in a while, so I thought I would see if anything interested me, and this was the first song I listened to. I downloaded it right away based on the first 10 seconds, because I knew immediately that this was one I had to do.
I try not to listen to any more than a snippet of the original, because I like the exercise of closing my eyes, listening to the song, and then mixing it according to what my head interprets it to be. This one painted a story and a feeling right away. It felt like a hot, humid summer night, kind of stuffy but in a pleasant way. In a bar that was basically a large screened-in wooden porch with dim lighting, some tables and a dance floor. It smelt and tasted like alcohol after you’ve had a bit too much to drink: smooth and appealing, and you really want to keep going with the full knowledge that it means bad news tomorrow, but who cares?
And so I start making choices about the way the guitar should sound in the imaginary bar where this is all taking place, like a beat-up old Strat playing through an older Fender amp that breaks up just the tiniest bit in a really pleasant way. I start working on the vocals and the picture gets painted even further: an ex walking into the bar, and just like the alcohol, you’re drawn to them knowing there is nothing but a release and bad news ahead. It ends up in a head-to-head dance with the sweet smell of whiskey on each other’s breath and the lyrics of the song running through your head. That of course fuels the choice to make the vocals slightly more “breathy” and to try to have the overall sound be some weird mash-up of One Of These Nights by The Eagles and something by Vince Gill. So I start chasing a sound I can hear in my head, knowing I will never get it quite right.
Jesus, this sounds like a 50-year-old man’s version of a trashy romance novel. OK, enough of that. The short version is, this was a really great song that I just wanted to do. So here’s my version of the mix of Whiskey by Steve Maggoria:
Afterwards I had a chance to listen to the original, as well as some other people’s interpretations of it, and it really amazes me how the same song can be presented so differently from person to person. Listening back to my mix, I can still hear the little imperfections that would normally drive me crazy, but in this case they work in the context of the song for me. And the cool part is I’m actually happy with this one, because it’s the way I want to listen to this song. That’s the first time that’s happened for me, so I’m glad I decided to try this song.
It’s been a while. I’ve got a couple of these on the go and thought I would throw this one up. Definitely out of my wheelhouse with this one, but a good exercise nonetheless. This is a song called Treat Me Right by a young lady named Lauren Taylor, who is best known as an actress on the Disney Channel show Best Friends Whenever. I typically like to work in the hard rock/metal genre, so something like this is not in my comfort zone, but then again, most of the fun stuff happens outside your comfort zone, right? While I was working on it, my wife and kids and their friends were in the area of my mixing room, and I got several comments about how catchy it was and questions about who sang it, so it’s definitely an earworm. All that being said, here’s my remix of Lauren Taylor’s Treat Me Right.
That time again, this time another remix of a country song by a band called The Gallery. This song is entitled Dream Girl. It’s been interesting bouncing around genres a little bit. What I’ve noticed is that I tend to approach songs from a rock point of view. Not really surprising, because metal and rock are what my musical DNA is based on, but I have an ongoing philosophical debate with myself about whether I should spend time learning the idiosyncrasies of mixing each specific genre, or simply allow the mix to be filtered through my musical preferences.
I was having a discussion about this with a guitarist friend of mine the other week (he’s allowing me to re-record and remix some of his work), and I made the comment that, in a way, the producer and engineers are additional band members from a certain point of view, because through their choices they dictate what the end song sounds like. Just going through the exercise of writing this post out makes me think it is worthwhile to learn how certain styles like country are engineered and mixed, because that knowledge is just another tool in the larger engineering box o’ tricks. Enough blabbering, have a listen to a good country song by a talented group of guys!
It’s that time again! The latest remix is a faith-rock type song called Eternal Eyes by a gentleman by the name of John Demena. There’s a really interesting vibe to this song that is right up my alley, so this one was a pleasure to work on (and I’ll be damned, it’s still stuck in my head). I recently picked up a PreSonus FaderPort 8 and a UAD-2 PCIe card, so this was my first chance to mix at home with faders instead of a keyboard and mouse, and to use some pretty amazing hardware-emulation plug-ins like the Neve 33609 compressor. At some point I’ll review them, but I want to get some more time and experience in with them first. Anyway, I’ll stop yapping now and get to the music! Here’s my mix of Eternal Eyes!
So I’m about halfway through the engineering program I’m enrolled in. All that’s left are essentially technical courses I’m already more than familiar with; the actual mixing and production courses are done. My final exam was a three-parter, the big part being to mix a song in 3 hours, which is an insanely short amount of time to do that in. I managed to do it and walk away with an A+ average, so I’m extremely happy with that. So where does that leave me? With lots of time to practice what I’ve learned and to start building a portfolio of what I’m capable of, which leads to this post.
A while back I joined a site called ProduceLikeAPro.com, run by a gentleman by the name of Warren Huart. He’s worked with artists like Aerosmith, Ace Frehley, The Fray, Korn and James Blunt (the list goes on and on), but he also loves sharing what he’s learned, and most importantly for me, he shares Pro Tools sessions so that guys like me can build out our own portfolios. I’ll probably do a post at some point on these types of sites, and how and why I ended up at Warren’s, but if you’re like me and want to continue learning about audio engineering and production, I would recommend his site. It is well worth the cost.
So this is one of those sessions: a song called Locked Up by a young lady named London Lawhon. Going into this, I did not look her or the original mix of the song up ahead of time. I wanted to go into it blind (so to speak), so that the end result was completely uninfluenced. After the mix was complete and I was happy with it, I went back and listened to the original for the first time with my wife. Even though it’s the same song from the same source, the two versions sound very different, and we both much preferred my mix of it. That’s not a slight against the original; music is an extremely subjective medium, and my taste and approach is that less is more, which is apparent in my mix, with London’s voice, the piano and the guitars being the focus. So here’s my remix, and I’ll link the original video just after it:
YouTube Video and Original mix by Warren Huart can be found here.
Hey everyone, coming from an IT background I thought I would spend a few minutes going over some tips and tricks you can use to get the most possible out of your DAW. I recently did an episode of my Five Minute Sound Study Podcast where I went over the basics of what components you need. As you can imagine, in that amount of time you can’t even begin to scratch the surface of the surface you’re trying to scratch 😉 So over time I will write a series of articles that go into more detail about the various components, peripherals and software you can use in the audio creation process.
I’m starting off at a bit of a weird place when it comes to all the pieces involved: optimizing an already-built Windows-based PC for digital audio. The reason I’m starting here is because I spent last night doing exactly that, so I thought I would tackle the subject while it was fresh in my mind. I use a bunch of different computers in my life, but the one I would call my favorite is a PC I built about a year and a half ago. It was designed for digital audio and for virtual reality, and thus is a bit of a monster. I had a crash recently that forced me to start from a bare-bones OS again, and after installing all the drivers and my DAW(s) of choice I encountered every home recording enthusiast’s arch-nemesis: latency.
What is latency? The time between your audio entering your computer and it reaching your ears. In my case, when I hit a chord on my guitar there would be a half-second or so of delay before the result got back to me. There are lots of workarounds for this, but I know that my computer, and most modern computers for that matter, is more than capable of providing undetectable amounts of latency; the problem is that my system wasn’t properly configured for it. So let’s get to it.
1. BIOS settings that will affect your performance. SpeedStep and Cool’n’Quiet are not your friends. What these technologies do is reduce the clock speed of your CPU based on demand. For the average user this is awesome; for audio creation it is not. The reason why is that there is a brief lag between the CPU detecting the demand and addressing it, and under the right conditions that can cause problems for you, so start by disabling that feature. Depending on your processor, you may also see options called Turbo Boost or Turbo Core. These do the opposite: they increase the maximum speed of your CPU depending on demand. So if you have a 4.0 GHz CPU, you may get a boost to 4.18 GHz when needed. I would (and do) turn that function on, but make sure you have adequate cooling installed in your system to deal with the added heat that running at higher clock speeds causes. C-States (it may also be referred to as CPU Idle State): we want to make sure that is disabled. We don’t want cores being disabled or enabled; we want to keep the data path as consistent as possible. Onboard sound cards: if you have one, make sure it’s enabled, and I’ll explain why in a later setting.
2. Power settings. Open up your Control Panel, head over to the Power Options icon, and choose High Performance. Wait, we’re not done. The goal here, like in the BIOS settings, is to achieve as consistent a data path as possible, so hit Change Plan Settings. I’m not really worried about when the display turns off, but change that sleep option to “Never”; we don’t want the PC going to sleep, because I’ve seen it cause problems with audio devices when it wakes up. Now open “Change Advanced Power Settings”. There are two settings in this plan I would change. First, USB selective suspend: disable it if you’re using a USB-based audio device. Again, predictable and consistent. An argument could be made for wanting to suspend unused devices, but I would address that in the physical setup of the computer, and we will talk about that at a future point. Next, Processor Power Management: the Minimum Processor State should be 100%. In audio terms, think about data like you would transients. We want to be able to deal with them (if we choose to) as they occur, not “oh crap, there goes a transient, I should have been ready for that”. That should be it for your power plan settings.
3. Background services. This one here is a biggie. Go to the System icon in your Control Panel, then to Advanced system settings and Performance. First adjust for best performance, then go to the Advanced tab and adjust for best performance of Background services. I know, I know: “but my DAW is a program and I want that to perform the best possible.” Here’s the thing: the drivers for your audio interface run as a background service, and when they get delayed in favor of a foreground task, you get latency problems. This one setting might be the best move you make in reducing latency, so make sure you do it.
4. The Sound icon in your Control Panel. OK, I’m including this here even though it’s more than just a setting I’ll be discussing. Earlier on I mentioned enabling the sound card on your motherboard if you have one. Ideally, what you should do is get an average set of computer speakers, connect them to your onboard audio device, and set it as the default audio device. Your studio monitors should be hooked up to your audio interface and used exclusively by your DAW. This way Windows sounds get routed through the onboard audio and don’t interfere with your DAW output. The added bonus of doing this is that when you do your mixdown, you can listen to what it sounds like on a regular set of speakers.
5. Anti-virus, firewall, Windows Update, etc. Again, we’re getting into territory here that is not strictly in the settings domain. In an ideal world I would tell you to get your software installed, patched up to the latest version, and then disconnect the machine from the network and disable all network cards, all anti-virus, firewalls and unnecessary services. This is not always practical, but if you’re running a dedicated PC for audio, that’s the route to go. All those added pieces of software take memory, disk bandwidth and CPU cycles away from you, but it’s not always practical or safe to disable them. I will delve into this a little deeper when I discuss system design, but it may be something to consider as well.
6. Latency settings in your DAW. You will see settings for buffer size, but not in one place. There are two buffers you have to worry about: one used by the bus (meaning USB, FireWire, etc.) and another used by the device itself in its ASIO driver. So it’s buffers on top of buffers. There’s a simple formula that tells you how much latency each buffer adds, and it looks like this:
Latency (seconds) = Buffer Size (samples) / Sample Rate (Hz)
So with that in mind, let’s say we’re recording at 48 kHz, and we’re going to set our FireWire buffer to 256 samples and our ASIO driver buffer to 512 samples.
512 / 48000 = 0.010667 s (≈ 10.7 ms)
256 / 48000 = 0.005333 s (≈ 5.3 ms)
Combined, we’re talking about 16 ms of latency. Our goal is to reduce each buffer until we have problems. Start at a setting of 512 samples on the bus and move it down until you start hearing problems like crackling, dropouts, popping or distortion. Then do the same for the device. What you end up with is, well, what you end up with.
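If you want to play with the numbers before touching your DAW, the buffer math above is easy to sketch out. This is just my own little calculator for the formula, not something from any DAW or driver vendor:

```python
# Latency added by one buffer: buffer size (samples) / sample rate (Hz).
# Multiply by 1000 to get milliseconds.

def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Return the delay one buffer adds, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

# The example from the post: a 48 kHz session with a 512-sample
# ASIO buffer stacked on top of a 256-sample bus buffer.
asio_ms = buffer_latency_ms(512, 48000)
bus_ms = buffer_latency_ms(256, 48000)

print(round(asio_ms, 2))           # 10.67
print(round(bus_ms, 2))            # 5.33
print(round(asio_ms + bus_ms, 2))  # 16.0
```

Plug in your own sample rate and buffer sizes to see what each step down actually buys you; halving a buffer halves its share of the delay.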
How much latency is a problem? I don’t really want to open that can of worms, because everyone has an opinion, but here’s a thought. Sound travels at roughly 1,100 feet per second, which works out to about a foot per millisecond. So with the example above, 16 ms of latency is about the delay you’d get standing 16 feet away from your amp: that’s how long a note you hit on your guitar would take to get back to you. Personally, I don’t have a problem with that. Some people say they can hear that amount of latency; I can’t, so go with what works for you. If you are still running into problems with latency, I would recommend downloading a product called LatencyMon from here. It’s a great little piece of software that will analyze your system and give you ideas of what needs to be addressed.
So that’s some basic optimizations you can use on your Windows 10 PC to get the most out of your DAW. Hope it helps, and if you have any questions or comments don’t be afraid to ask!
Hello everyone and welcome to Episode Two of the Five Minute Sound Study Podcast discussing computers. Last episode we covered a basic description of what a sound wave is. Obviously we haven’t even scratched the surface of that topic, but rather than bombard you with science, the approach I’m going to take on this show is to mix the science in with the stuff we care about so that there’s context and you can understand why it’s important.
So with that in mind, we’re going to start talking about what we need to capture and do something with those sound waves we discussed last episode. Our focus here is recording audio, be that music, sounds, or voice like this podcast. To do that, we need a few basic things. Now, I’m sure some of you will remember mixing consoles (those big boards with lots of sliders and knobs), tape machines, multi-track recorders and so on and so forth. Recording pristine audio used to be a challenge and very expensive to do. These days it’s relatively inexpensive. All you really need is a microphone, a device to capture the audio from it, and somewhere to store it and listen back to it. The easiest way to do that these days is something called “in the box” recording. All that means is using your computer to do it.
Over the next few episodes we’ll spend some time going over each of those components in a little more detail, but since working in the IT world is my primary job, I thought I would start by discussing the computer first. Put away any preconceived stereotypes you may have about Macs and PCs; both are more than capable of doing what you need. In terms of music production, like many of you, I’m learning, and as I learn, I share. When it comes to computers, however, I’ve been doing this professionally for a quarter of a century, and I’ve been proficient much longer than that. I use both Macs and PCs in my personal and professional life, and either one will do the job you need. There is a common theme I have discovered while learning and speaking with teachers and music professionals, however: use what works for you. Music is all about personal preferences in style, content and approach. The people who work in that space tend to have very strong opinions about what they like and what works for them. BUT, you are your own person and will need to figure out what you are comfortable with. So if you are happy with Windows-based PCs, go with that. Macs are your thing? Go for that. What you feel comfortable working on should be what you use.
So with that in mind, let’s talk about some of the commonalities between the two platforms that will make a difference. There are some basic parts of the computer that will impact what you are able to do. We’ll start with the CPU. Honestly, these days any basic computer will have the processing power to record and manipulate audio, but the CPU will limit the amount of manipulation you can do; using a large number of effects on your audio will impact performance. If you are just recording a voice and a guitar, no problem, but if you start getting into large track counts with loads of effects, you could run into problems. So the more powerful the CPU, the more you will be able to do. In the early-2017 landscape, an Intel Core i5-based CPU should keep you happy; if you can afford an i7, you will have more horsepower than you need, and the gadget-junkie geek in me says there’s nothing wrong with that.
Memory, of the RAM variety. This one is really important, because the more your system can load into RAM, the better it will perform. I would recommend 8 GB as a starting point, and 16 GB as the sweet spot.
OK, hard disks. This one may get a little complicated, but it’s worth understanding. This is the chief bottleneck in today’s systems. Your system has a main drive where it stores all the files it needs: programs, operating-system files and even data. There are two popular types of drive these days: conventional mechanical drives and SSDs. Conventional drives use magnetic heads that move across a spinning platter to read and write data; they provide more space at a cheaper price. SSDs store data in semiconductor memory and have no moving parts. SSDs are much faster, but also more expensive, and they don’t approach the storage capacity of conventional drives. If you can afford to go the SSD route, however, it is the way to go: you can get monstrous track counts without any hiccups. A mechanical drive can still handle recording more than enough tracks for the average user and will give you more storage space for your buck, but I will make a recommendation: get a separate drive to store your music projects on. If you have a low amount of RAM, the system will begin swapping information to disk, and if that is the drive where your audio lives, you could run into problems. Personally, I use SSDs for my working drives and conventional drives for my long-term storage.
Finally, the connection to the interface. We will touch on the actual audio interface in a future episode, but this is what will connect you to it. As of today, the available options are internal cards and USB, FireWire, Thunderbolt and Ethernet-based devices. Like a lot of things in this hobby we love, this boils down to what you can afford and what you plan on achieving. FireWire ports have been discontinued, and so have most devices based on that port. You can get those devices heavily discounted these days, BUT you may run into problems with support for them down the road. USB is pretty well the standard for entry-level musicians/producers. Thunderbolt, Ethernet and internal cards are in the higher-end range. Like I mentioned, these tie into the interface you will be using, and there is a lot more to discuss about that in the next episode, so we will do it there.
I want to end this off by contradicting everything I just said. Wait, what? Ready? OK, here we go. We are making music; that is the goal. You don’t need the best and fastest stuff. I would argue that it may even ultimately hurt your music. Why? Because when you are challenged by the process, it forces you to be inventive. Pink Floyd’s The Dark Side of the Moon was recorded using only sixteen tracks! Think about that. In a recent studio session I worked on, I used that many tracks just for the drums, and I can assure you it was no Dark Side of the Moon! The point I’m getting at is that what really matters is the music. Use what you have and can afford, and don’t get too obsessed with the equipment you think you may need. Be creative, be inventive, enjoy the process! And I will leave you with that thought. Until next time, I’m Rob Blazik and this has been the Five Minute Sound Study.
There’s an interesting documentary on YouTube called The Art of Listening that I watched the other night and thought I would share. It’s basically about all the work that goes into making music, right from the stage of building instruments, through an artist choosing theirs, and then all the people involved in the process, ending with the listener. For someone like me it was a fascinating watch, and I loved it right up until the end, where it started to sound like an ad for HD audio formats, and that’s where I take issue with it a bit. I’ll explain why and then get to the documentary. Right now I’m going through the process of learning how to be an audio engineer, and I’m realizing how much goes on in a song that I’ve never heard before. Especially when it comes to EQing, there is stuff I simply can’t hear, yet. I say yet because every day I seem to have “Ah ha! I hear it!” moments as my ears are trained in what to listen for. The average person, however, doesn’t care about that, and I’m willing to bet would never care about the difference between an MP3 and a 96 kHz/24-bit HD audio track. They care more about the style, lyrics and music than the nuanced details of it.
That being said, I really want to emphasize that this is a small criticism that may be more of a personal bias. The focus of the documentary is the attention to detail that everyone involved puts into making music, so logically that same attention to detail should be carried through to the end format, so the listener can enjoy the piece the way it was intended to be heard. I encourage you to watch it, because it is a beautiful film made by people who you can tell are passionate about music.
Hello everyone, my name is Rob Blazik and I would like to welcome you to the inaugural edition of the Five Minute Sound Study Podcast! The Five Minute Sound Study Podcast is exactly what it sounds like: learning about audio and sound engineering in teeny-tiny five-minute segments. And what better place to start than by defining what we’re learning about? So let’s take a look at how Webster’s defines audio:
Definition of audio
1: of or relating to acoustic, mechanical, or electrical frequencies corresponding to normally audible sound waves which are of frequencies approximately from 15 to 20,000 hertz
a : of or relating to sound or its reproduction and especially high-fidelity reproduction
b : relating to or used in the transmission or reception of sound — compare video
c : of, relating to, or utilizing recorded sound
So What Is Sound?
Audible sound waves, frequencies, hertz… sounds like a lot of sciency-type stuff, doesn’t it? I know what you’re thinking: “I don’t want to learn sciency stuff, I want to record some sick beats, make lots of money, and sit by a pool drinking Mai Tais somewhere…” And to that I say: patience, grasshopper, this will help you get there. Really, the basics aren’t all that difficult to understand. Sound is basically a vibration that moves through a medium and is then received by a listener. When you speak, your vocal cords vibrate the air particles around them; those particles keep bumping up against the particles around them and passing the vibration along, until it reaches your eardrum, vibrates it, and your brain recognizes that as sound. Let’s take a quick look at what a sound wave looks like:
Whoa, squiggly lines! OK, so I can’t take a picture of a sound wave, but let’s pretend we can, and that that is what it looks like. In that diagram you can see two labels: one is amplitude, the other is frequency. Amplitude is how big the sound wave is, and to your ears that translates to how loud it is. Amplitude is basically the size of the vibration; we typically measure it in decibels, and we’ll get to that soon enough. How fast the vibration is, we call the frequency. See how in the low-frequency example the waves are further apart, and in the high-frequency example they’re closer together? That’s what our brains translate into bass, mid-range and treble. The faster the particles vibrate, the higher the sound, and that’s what hertz is: the number of vibrations per second.
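If you like seeing ideas as code, here’s a toy sketch of those squiggly lines (my own illustration, not anything standard): a sound wave is just a value oscillating over time, where amplitude sets how big the swing is (loudness) and frequency sets how many cycles happen per second (pitch, in hertz).

```python
import math

def sine_wave(freq_hz, amplitude, sample_rate=48000, duration_s=0.01):
    """Generate samples of a sine wave: amplitude * sin(2*pi*f*t)."""
    n = int(sample_rate * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

low = sine_wave(100, 1.0)    # low frequency: peaks far apart (bass)
high = sine_wave(1000, 1.0)  # high frequency: peaks close together (treble)
quiet = sine_wave(440, 0.1)  # same pitch as a loud 440 Hz wave, smaller swing
```

Plot any of those lists and you get exactly the diagram above: `low` versus `high` shows frequency, and `quiet` versus a full-amplitude 440 Hz wave shows amplitude.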
So they say humans can hear in the range of 15 to 20,000 hertz. In reality, however, that’s not always the case. As we age, our hearing range decreases; as a man in his late 40s, I can only hear up to about 14,000 Hz (or 14 kHz, same thing, damn metric prefixes, I know). If we listen to music too loudly, we also damage our eardrums and our ability to hear. Being someone who has loved metal all his life, you can imagine the damage I’ve done. Want to check what frequency you can hear up to? Here’s a neat link where you can do that.
And we learned this why?
Going back to why we need to learn all this sciency stuff about sound waves and frequencies and all that other mumbo jumbo: well, we’re interested in audio recording, and as you will see later in our discussions about mixing and engineering, all this stuff comes into play. Recording and mixing audio is like putting together a giant puzzle; the pieces have to fit together. Drums occupy a certain frequency range; so do guitars, vocals and bass, and they overlap in areas. How do we record each instrument properly so that it sounds good, or better yet, amazing? How do we make each instrument stand out in a mix? Or how do we get all of them to sound great together?
If we have a good understanding of the basics, that becomes much easier as we progress. And progress we will! Thank you very much for listening and/or reading, and I look forward to our time together in future episodes of the Five Minute Sound Study!