Interference simulation from first principles

I've got some experiments with subs that I want to do but current simulation software that I have doesn't allow me to try them (more on this at a later date). This means that I need to build my own simulation tool. I thought it'd be interesting to document the process.

My First Simulation! (blue = quiet, red = loud)


Maths is not my forte, and I have no idea how acoustics simulation works "under the hood". But I do know my fundamentals (I have my time working at d&b audiotechnik to thank for that). So I'm going to go from first principles and build a really simple interference visualisation tool. I have no idea if what I've done is correct, but it feels about right.

Let's keep it simple and say that there can only be two sources, both sources are perfect point sources (they are infinitely small, have a flat frequency response from 0 Hz to infinity, and can go infinitely loud), and we are only going to work in 2D.

So what am I trying to do? Well I want to see how loud a specific frequency is at every point on a plane. That's too complicated, let's simplify. How about: I want to find out how loud a single frequency is at one point in space, given two sound sources.

That's probably simple enough. That's addition and subtraction of sound waves. That's electroacoustics 101 (if you haven't done this course you really should. It's free! Also read this book. It's not free!).

Time for some maths. Yep, we need some, because we actually want a loudness number at the end!


Addition and Subtraction of Soundwaves

A sine wave: 
y = sin(x) 
But because computers like to work in radians and human beans like to talk about degrees, we just need to add a little bit to this equation so that x can be in degrees:
y=sin((x*pi)/180)

Great, there is one sine wave. But we want to add two together:
y=sin((x*pi)/180) + sin((x*pi)/180)

But our two sine waves are not going to arrive at our measurement point at the same time. There will be an offset in the arrival time of the two sine waves, which we can see as a phase difference between the two waves. So we need to add a phase difference to the formula above. I'm going to call the phase difference p:
y=sin((x*pi)/180) + sin(((x-p)*pi)/180)
Or if we take out our radian conversions to make it look nicer:
y=sin(x) + sin(x-p)

If you plot the above with p=90 for x=0:540 you get this image (the two signals are red and green, the sum is black):


I made that plot in R using this code:
   p <- 90                          # phase difference in degrees
   x <- seq(0, 540, 1)              # 0 to 540 degrees, 1 degree steps
   w1 <- sin((x * pi) / 180)        # first wave
   w2 <- sin(((x - p) * pi) / 180)  # second wave, offset by p
   out <- w1 + w2                   # the sum

   plot(x, out, type = "l", ylim = c(-2, 2))  # sum in black
   lines(x, w1, col = "green")
   lines(x, w2, col = "red")

But you could use LibreOffice Calc or Excel too.
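If you'd rather check the plot numerically, here's the same sum as a quick Python sketch (using NumPy; the trig identity sin(x) + sin(x - p) = 2*cos(p/2)*sin(x - p/2) predicts what the peak should be):

```python
import numpy as np

p = 90                     # phase difference in degrees
x = np.arange(0, 541)      # 0 to 540 degrees, 1 degree steps
w1 = np.sin(np.radians(x))
w2 = np.sin(np.radians(x - p))
out = w1 + w2

peak = out.max()
predicted = 2 * np.cos(np.radians(p / 2))  # from the trig identity
print(peak, predicted)     # both ~1.414 for p = 90
```

For p = 0 the peak is 2 (full addition) and for p = 180 it's 0 (full cancellation).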

Calculating the phase difference

In the calculation above we just decided that p should be 90. But how do we work out what it actually is?

Well first of all we need to work out the difference in distance between our measurement point and each of the sources. The easiest way to do this is to put everything on a grid and give everything coordinates.

So let's put source one at (0, -0.86) and source two at (0, 0.86). Why? Those numbers give us a distance of 1.72 m between the two sources, and 1.72 is half of 3.44. The speed of sound is roughly 344 m/s (a good value to choose when working to two d.p.!), so we will be able to get predictable patterns for 50 Hz, 100 Hz, 200 Hz, and so on. It keeps the maths easy!

For now, let's put the mic at (10, 0). We've got a long, thin isosceles triangle with the two sources at one end and the measurement point at the tip. I've set it up like that so we know there should be no phase difference between the two arrivals, which means we can check our maths really easily.


Now we need to find the distance between two points on a graph (thanks, Google! Or just remember Pythagoras from high school...). That gives us two distances (d1 and d2). Now we just need the difference between them (d2 - d1), which I'm going to call d (for the positions described above, d should be 0).

We've got the difference in distance; now how do we turn that into a difference in phase angle? With this cute little formula, where d is the difference between the two source-to-measurement-point distances:
p = 360d/λ
Let's say we're calculating for 100 Hz. Using the wave equation:
λ = c/f
344/100 = 3.44, so (360*0)/3.44 = 0! Great, we can now calculate the phase angle! Check it with a measurement position of (0, 10); you should get a nice predictable number!
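Here's the whole calculation so far as a Python sketch (the function name is my own invention):

```python
import math

C = 344.0  # speed of sound in m/s, as above

def phase_difference(src1, src2, mic, freq):
    """Phase difference p (in degrees) between the two arrivals at the mic."""
    d1 = math.dist(src1, mic)   # Pythagoras, courtesy of math.dist
    d2 = math.dist(src2, mic)
    d = d2 - d1                 # difference in path length
    wavelength = C / freq       # wave equation: lambda = c / f
    return 360 * d / wavelength

src1, src2 = (0, -0.86), (0, 0.86)
print(phase_difference(src1, src2, (10, 0), 100))  # 0: equidistant
print(phase_difference(src1, src2, (0, 10), 100))  # -180: half a cycle out
```

Note that the (0, 10) position comes out as -180 rather than +180, because d2 - d1 is negative there; half a cycle out is half a cycle out either way.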

Calculating the level

This is all very well, but we've just been talking about distance and haven't taken into account that sources get quieter as you move away from them. So we need to work out what the level should be at the measurement point, which means updating the formula for adding sound waves together. All we need to do is add a level multiplier to each sine wave:
y=L1sin((x*pi)/180) + L2sin(((x-p)*pi)/180)
Now we can plot with level differences:
But we need to calculate what the level should actually be. Because we're using point sources we can use the inverse square law, where sourceInitialLevel is the level of the source at 1 m:
level drop in dB = 20*log10(distance)
level at point = sourceInitialLevel - level drop
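In Python that's just a couple of lines (the function name is mine):

```python
import math

def level_at_distance(source_initial_level, distance):
    """Level in dB at `distance` metres from a point source whose
    level at 1 m is `source_initial_level` (inverse square law)."""
    level_drop = 20 * math.log10(distance)
    return source_initial_level - level_drop

print(level_at_distance(100, 1))   # 100.0: no drop at the 1 m reference
print(level_at_distance(100, 2))   # ~94.0: about -6 dB per doubling of distance
print(level_at_distance(100, 10))  # 80.0: -20 dB at ten times the distance
```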
We've nearly got everything we need. Now we just need to find the actual summation level.

Finding the final level at the measurement point

This is the bit I got stuck on for ages. I was trying to differentiate the sum formula to find the maxima, and it was getting really quite messy. Then I took a step back and decided: it's only 360 points (I think you actually only need to do 180 of them), so why don't I just rattle through them and find the maximum by brute force?! So that's what I did.

Now we come to a point that I need to think about a bit more before I can give a nice justification for why it's needed: I chose to convert my dB SPL into pressure when it came to adding the two sine waves together. I calculated the level at the measurement point in dB SPL (using all the stuff above) but then converted that level into raw pascals. Once I had found the maximum level of the summed sine wave I converted the raw pascals back into dB SPL. I think that's the right way to get correct dB SPL results.
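For reference, the dB SPL to pascals conversion uses 20 µPa as the reference pressure. A quick Python sketch of the round trip (function names are mine):

```python
import math

P_REF = 20e-6  # reference pressure for dB SPL: 20 micropascals

def db_spl_to_pascals(level_db):
    return P_REF * 10 ** (level_db / 20)

def pascals_to_db_spl(pressure):
    return 20 * math.log10(pressure / P_REF)

pa = db_spl_to_pascals(94)     # 94 dB SPL is about 1 Pa
print(pa)                      # ~1.002
print(pascals_to_db_spl(pa))   # back to 94.0
```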

All done!

Putting it all together.

We have now got all of the tools needed to calculate the level at any point in space from two given omnidirectional sources. Now we need to display this data.

First build a grid of measurement points, then iterate through the grid, calculating the level at each point using the maths from above:
  1. Find the distance to each source from the measurement point.
  2. Find the difference between the two distances.
  3. Calculate the level of each source at the measurement point.
  4. Sum the two sources.
  5. Find the peak of the summed sine wave.
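The five steps above, sketched in Python (a brute-force peak search over one cycle, as in the previous section; the names are mine and the source levels are hypothetical):

```python
import math

C = 344.0      # speed of sound, m/s
P_REF = 20e-6  # 20 micropascal reference for dB SPL

def level_at_point(mic, sources, freq):
    """Summed level in dB SPL at `mic` for a list of
    (position, level_at_1m_in_db_spl) point sources."""
    dists = [math.dist(pos, mic) for pos, _ in sources]            # step 1
    wavelength = C / freq
    # step 2, folded into a per-source phase (only relative phase matters)
    phases = [360 * d / wavelength for d in dists]
    # step 3: inverse square law, then dB SPL -> pascals
    amps = [P_REF * 10 ** ((lvl - 20 * math.log10(d)) / 20)
            for (_, lvl), d in zip(sources, dists)]
    # steps 4 and 5: sum the sines over one cycle and take the peak
    peak = max(sum(a * math.sin(math.radians(x - p))
                   for a, p in zip(amps, phases))
               for x in range(360))
    return 20 * math.log10(peak / P_REF)                           # back to dB SPL

sources = [((0, -0.86), 100), ((0, 0.86), 100)]
print(level_at_point((10, 0), sources, 100))  # in phase: ~6 dB above one source alone
```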
Now that you have a level for each measurement point it's just a case of assigning a colour to that level (I have mapped the level range to 0-50% of hue on an HSB colour picker, but you could also find the maximum and work down in 1, 3, or 6 dB increments to show exactly where the -3/-6/-9/-12 dB points are).

So you've got a grid of colours, all you have to do now is display them!

I did my version in Processing because it's such a fast way to chuck a proof of concept together, but you can probably use any language you like. If anyone would like to see my code, or fancies fixing my mistakes, ask me and I'll make the BitBucket repository public (warning: the project has moved on quite a bit from here).

Below you can see two screenshots from my Processing project. I created a grid with 0.25 m spacing for the measurement points. You can see the sources as white cubes. Each measurement point is coloured depending on how loud it is: red is loud, then it goes through yellow and green, ending with blue for the quiet areas.


The mark in the middle of this image just shows the centre of the plane.
I wrote this post a good few days ago. The project can now do delay, polarity, and most importantly many sources. Here are some screenshots:

End-fire cardioid simulation with 3dB per colour band.
Development mode showing actual level values at measurement points.

However, I think I have made a mistake, because here is a screenshot with identical settings in ArrayCalc (from d&b audiotechnik) and my simulator. The sources are the same, the resolution is the same, and the physics should be the same! But the pattern is totally different.

My Simulator vs ArrayCalc - I think I've made a mistake somewhere!
All feedback gratefully received etc etc.....

A very british hotel - A very poor advert for the entertainment industry

A few weeks ago I was watching a Channel 4 documentary about an exclusive London hotel (the Mandarin Oriental). It's a very posh hotel and they were putting on a wedding which seemed to be a very high budget event.

In my opinion (as everything is on this blog) the event was let down by the tech. Look at the care and attention that went into dressing the room, and then look at the plastic speakers on tatty black stands that were supplied. In one picture you can see that the power cable to the loudspeaker wasn't even taped to the stand!







Perhaps this is an education thing? Perhaps people aren't aware that you can have really high-quality audio, colour matched to your event, hiding in the background, with cables colour matched to your carpet or walls, or even hidden under the carpet. You've just got to pay for it and find someone willing to go that extra mile.

The event manager says that he won't allow an event to go wrong, and yet he is happy to rely on loudspeakers that are worth about the same as a glass of wine to his clients. I don't understand.

I would love to hear from the rental company that did this job and hear their side of the story. Did they educate the clients in what is possible? Why didn't they make extra effort, knowing that there was a camera crew in the room?

Also, I would love to hear from the Hotel to know if they are aware that it is possible to provide lighting and video equipment that is up to the same standard as their world-class dining and service.

Perhaps I should start a company specialising in top quality, bespoke audio and lighting systems....

I-Simpa: open-source acoustic modelling

In my search for an open-source, lower-cost, and hopefully more user-friendly alternative to EASE, I discovered I-Simpa. On the face of it, I-Simpa looks great. Unfortunately, they hadn't released an installer since 2014.

UPDATE: The developer of I-Simpa commented below and pointed me towards a Windows installer, which can be found here. I've not had the opportunity to delve into it yet, so the rest of this post is now obsolete...

If I-Simpa ran on Ubuntu (and I could still justify running Ubuntu), building from source wouldn't be an issue. Unfortunately, because most audio software doesn't support Ubuntu, I am now bound to Windows, and building from source on Windows is a totally different story for me! I've actually given up trying to get I-Simpa to build (I didn't want to have to download and learn Microsoft Visual Studio, and I couldn't get it to compile from the terminal).

It is a shame. It looks like a great bit of software, but unless it's easy to build, it won't gain any traction.

In the meantime I'll continue looking for a low-cost acoustic simulation tool.

As an aside: has everyone found "Bash on Ubuntu on Windows"? It's epic. Find out how to enable it here.

Entertainment industry press - Yawn

Ok, enough about me and what I've been doing; it's time to ruffle some feathers. I'm sure anyone reading this blog has read at least one of the industry magazines, either online or in print (PSN, LS&I, AMI, Installation, etc.). Let me summarise the content of the next issue:
  1. Here is a review of a show, wasn't it epic?!
  2. Look at this brand new product, it's going to revolutionise the industry.
  3. Here is an interview with a person saying the same stuff as the person before them.
  4. etc.....

What's the pattern?
They can't criticise. The industry press is sponsored by the industry; their income comes from advertising. That means they can't give negative reviews for fear of losing revenue. I don't know about you, but I'm bored! Not every product released is the next amazing thing. Some of them are, to put it mildly, rather disappointing. Wouldn't it be amazing to have a product review that was honest? Yes, it could damage a company's reputation (and so it should if they keep releasing bad products), but it could also give them great feedback and help drive innovation in the industry.

This is true for show reviews too: imagine a review of a show saying "this felt just like the last four shows I've been to; the only difference was a new person stood on stage" or "it sounded awful, I couldn't understand a word the lead singer was saying". If I were working on either of those shows I would welcome the feedback, learn from it, and invite the reviewer to my next show to see if I'd got better.

The press isn't supposed to be nice and keep everyone happy, but whilst it's funded by the very people it's reviewing I don't think anything is likely to change.

Immersive Audio

I've been experimenting with ways of demonstrating immersive soundscapes to people without having to have a large number of loudspeakers.

Introducing VR audio. Oh, how I wish everyone could agree on a nice way of representing a 3D sound field. Most people seem to be settling on ambisonics, but should it be 1st order or 2nd order? What order should the tracks be in (alphabetical order, obviously; looking at you, YouTube...)? What format should the content be in?

So here is what I tried. First I mixed an example in 3D and uploaded it to YouTube, only to discover that the sound field doesn't move when you look around! Perhaps I did something wrong in the convoluted process of channel ordering and metadata; I'm not sure. Frankly, the process is far too difficult to be of any use in the real world at the moment. You can listen to that failed experiment here if you wish.

Then, using Bruce's convolutions and correction filter, I mixed an example in ambisonics and uploaded it to SoundCloud as binaural. You can find that example here. I have never been impressed with binaural audio, even if it's a recording made on a dummy head with pinnae. I think perhaps it's linked to the art of foley: a realistic gunshot does not sound realistic when it's shown on screen. If you put yourself in an unrealistic situation, everything must be exaggerated in order to be believed. I think it's the same with immersive audio: things that try to be too realistic lose their realism.

But that's a topic for a different day - "The path of academic research into immersive audio".

Art-Net Q-SYS plugin

UPDATE:
 - This post continues to get quite a lot of traffic. The plugin still exists, but I know the download link is broken. I've left the freelance market since I wrote this, so keeping side projects up to date has had to take a back seat for a while. I hope I can pick it up again in the future!


I've written a plugin for QSC Q-SYS that enables it to output Art-Net. You can download it here.

Q-SYS is an amazingly powerful integration tool that I believe can be used for much more than the boardroom A/V or commercial audio processing (airports, shopping centres etc) that it's known for.

With a little programming knowledge it is pretty simple to write plugins for Q-SYS, and it's easy to get TCP or UDP communications up and running. The real power of Q-SYS lies in its tried and tested architecture: it is amazing at redundancy, yet so simple to use and set up.

So with a little imagination it's easy to make things like custom control interfaces: a small microcontroller with a network stack (I use the Texas C series, but even an Arduino would do fine) plus a custom plugin for Q-SYS, and you're away. Using the same method it's easy to make adapters: perhaps a Q-SYS to DMX512-A adapter, or Q-SYS to CAN bus? How about a Q-SYS-hosted object-based surround processor (I've made one; perhaps I'll share it on here...)?

The wonder of technology that's this open is that it's just a toolkit; it's up to you how you use it.

Who am I and what am I doing?

Who am I?

My name is Tom, and my LinkedIn page informs me that I “provide technical services to the entertainment industry on a freelance basis”, which just about sums it up. My world ranges from technical literature through system design, integration, and bits of research and design, to some RFID race timing. You can find out more about me on LinkedIn should you wish.

What am I doing?

Well, as I find myself with a bit of time on my hands, I thought I would begin a blog to share my thoughts on various audio-related topics. I want to share my reasonably non-technical take on some often controversial subjects and add my voice to the online debate. Hopefully I can learn some things along the way and, if I’m lucky, get the chance to impart some of my knowledge to someone else.

What will I cover?

I’ve got some plans for topics. Yes, I do want to cover the cliché topics such as my thoughts on how to tune a system and hi-fi snake oil, and no doubt I’ll mention sub arrays at some point. But I also want to address the boundary between the audio industry, audio academia, and computer science. Occasionally I may even scratch psychology (unconscious bias and double-blind tests). I won't make it too technical; I like to talk about things in the simplest way I can, because that's how I understand them best. Warning: there will be lots of analogies!

I hope you find this blog interesting and I hope I have some fun writing it too. Please join in the conversation. Comment, share, and discuss.
