Search This Blog

Friday 6 November 2020

Composing some organ music and some spooky Halloween music, but with a twist and a view of 2030

My first job out of high school was as a pipe organ builder; I was one of the few apprentices to start that year. I’ve lost touch with Peter Jewkes, who was my boss and who taught me. But there’s a funny cross-over between my first job and my current work today.
I realise that organ music isn’t everyone’s cup of tea, but would you listen to 30 seconds of something original and interesting?
And as a composition, it’s not exactly Bach, but it’s believably something you might hear during Mass at a traditional Catholic service or at a high-Anglican church.
Here’s the punchline: it wasn’t really composed by a human being. I created a one-line melody with a few random notes and then told AWS DeepComposer to create a similar melody and fugue based on it.

There’s no musical genius behind this: it’s just an autoregression algorithm. I’m really quite surprised at how well it works.
The Deepcomposer platform also has a way of creating accompaniments in a variety of styles (rock, pop, orchestral) which is so spectacularly bad it has to be heard to be believed. About the best I could do was to make it create background breathy noises and ocean effects (which sort of came in at the right time):
Incidentally, if you want to upvote that track on SoundCloud, I might win their Halloween competition. It’s a spooky enough track, although I cheated a little bit. After each set of iterations I went back into the resulting composition and put it firmly back into F minor. For some reason it couldn’t quite cope with the idea that E natural and A-flat could co-exist in one composition — which is odd, because they sit together in both the F harmonic minor and the F melodic minor scales. So I had to go in and turn the A-naturals back into A-flats, and the E-flats back into E-naturals.
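For anyone who wants to double-check the theory claim, here’s a throwaway sketch (note names only, spelled as in the key of F minor):

```python
# F minor's key signature flattens B, E, A and D. The harmonic minor raises
# the 7th degree (Eb -> E natural); the melodic minor (ascending) also raises
# the 6th (Db -> D). Both scales therefore contain Ab and E natural together.
F_HARMONIC_MINOR = ["F", "G", "Ab", "Bb", "C", "Db", "E"]
F_MELODIC_MINOR_ASC = ["F", "G", "Ab", "Bb", "C", "D", "E"]

for name, scale in [("harmonic", F_HARMONIC_MINOR),
                    ("melodic (ascending)", F_MELODIC_MINOR_ASC)]:
    print(f"F {name} minor contains both Ab and E natural:",
          "Ab" in scale and "E" in scale)
```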
I’m not about to start composing professionally again, so I don’t see DeepComposer as any kind of threat. It’s more of a fun toy at the moment. I can easily believe it could get to the point of creating an orchestration from a piano accompaniment quite quickly, though. And creating a backing track for a pop song melody isn’t something that requires more than a superficial understanding of music theory; it’s more a matter of fashion and adapting to this year’s newest soundscape. I wouldn’t put this beyond about 10 years in the future.
So then we have:
  • voice-synthesis tools that can do the vocals
  • Garritan and other sound providers who can provide the authentic-sounding instruments to play an arrangement
  • DeepComposer embellishing a melody and creating an accompaniment.
So I’ll predict that by 2030 it should be possible to take a melody line and some lyrics and generate a pleasant song performance with a decent backing automatically. If you can play chopsticks on the piano, you will probably have enough knowledge to create something good.
I’m hoping that this will expand the pool of people who can take some joy in composing. At the moment it’s not something that everyone can do, but creating new music is such a wonderful feeling that I wish more people could experience it.

Sunday 27 September 2020

Something funny is about to happen to some prices

Landfill is an interesting product. Its price fluctuates enough that it can swing from positive to negative and back again. If someone is building a tunnel, you can probably get paid to take landfill off their hands. If someone is building an island, you can probably get paid for supplying it. There are projects that can only go ahead at certain times: the Hornsby Quarry remediation project was only viable because NorthConnex was being built.

That's slightly obscure, and there's not much you can do with landfill other than, well, fill land. But we are about to hit a very unusual combination of circumstances which will create some head-scratching economics.

Solar panels are getting cheaper. The learning rate for solar energy is (probably) the steepest of any energy technology yet deployed. So we can be pretty much assured that within the next decade we'll have enough solar power to power anything we want (as long as it is day time). We'll overprovision our houses and factories (and offices if they still exist) with generation capacity so that in the morning or evening when the sun is hitting at an oblique angle, we still have enough for our needs.

That means that at peak times, we'll have more than enough solar energy. What do we do with that surplus energy?

Actually, it's slightly worse than that, because some (non-renewable) power plants can't fully idle. A nuclear reactor can't be switched off for a few minutes in the middle of the day and then restarted; neither can a coal power plant. So they will be running at a time when nobody needs their power. That may be OK -- if you make a lot of money generating power overnight, you might be prepared to pay some small penalties during the day for overproduction if that's what it takes to keep the system running.

Which means that someone, somewhere is going to get paid to create a dummy electrical load. We've never (as far as I can tell) seen "grid stability" payments for using electricity before. But it's the inevitable conclusion of where we are at the moment: discounted rates already exist for consumers who can shift their demand to help with grid stability; there's no reason that wouldn't still be necessary when the price of electricity hits zero.

Therefore, there will be times of the day (in certain weather conditions) when some companies will get paid to use electricity.

The kinds of consumers that will get paid are likely to be doing something that is energy inefficient (obviously), and not particularly capital intensive (since their equipment will be idle a lot of the time), and making something that can be stored indefinitely (because you definitely can't do just-in-time manufacturing if you are waiting for times when the price for electricity goes negative). But quite a lot of products could be made to fit that.

A possible product that I think is likely to play out this way is water desalination. Desalinated water is very expensive because of the embedded energy cost: it's often described as "bottled electricity". Currently we tend to use reverse osmosis because it's more energy efficient; but if you don't have to worry about energy costs, you might choose instead to boil the water and distil it, since the capital costs are lower.

Now things get even weirder. Let's say you are getting paid 0.5c per kWh that you use (not an unreasonable price). You might find that you can sell the water for -0.01c per kL. Yes, you can still make a profit: not just by giving the water away, but by paying someone to consume it.
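The arithmetic is easy to sanity-check. Here's a sketch with made-up but plausible numbers (the energy-per-kilolitre figure is purely an assumption for simple thermal distillation without heat recovery; real plants vary enormously):

```python
payment_per_kwh = 0.005       # $ you are PAID per kWh consumed (0.5c)
energy_per_kl = 650           # kWh to distil 1 kL of seawater (assumed)
water_price_per_kl = -0.0001  # $ per kL: you PAY the buyer 0.01c

grid_payments = payment_per_kwh * energy_per_kl   # ~$3.25 received per kL
net_per_kl = grid_payments + water_price_per_kl   # still comfortably positive
print(f"Net profit per kL of water: ${net_per_kl:.4f}")
```

The energy-hungry process is the point: the more electricity you burn per kilolitre, the more you get paid, which is exactly backwards from normal economics.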

If this all plays out, we might have water and electricity (at times) having a negative price. These are common inputs to other processes, so it doesn't end there: there might be a business that requires water and electricity occasionally, but the input costs normally make the business unviable. They in turn might be able to make some money selling the end product at a negative price if their inputs have a negative price.

If this all sounds a bit unbelievable, remember that until very recently many economists thought it was impossible to have negative interest rates... right up until the moment they started happening. We already have examples of products that oscillate between positive and negative prices; it's just that they were never fundamental inputs to other processes. My prediction is that by 2030 we will have a small part of the economy ticking away profitably making negative-price products.

(Post scriptum: now for the completely weird: how do negative price products interact with negative interest rates?)

Tuesday 16 June 2020

Micro Focus (formerly HP) Data Protector, Azure backups and archiving

Data Protector can only restore from Azure container blobs that have an access tier of Cool or Hot. This is a pity, because Archive is so much cheaper, particularly if you buy it up-front. If the medium you are trying to restore from is in the Archive tier, you will get an error like this:

[Normal] From: ""  Time: 16/06/2020 11:06:26 AM
Restore session 2020/06/16-8 started.

[Normal] From: "Azure_gw_via_cellmgr [GW 6588:0:16462780082409836283]"  Time: 16/06/2020 11:06:27 AM
STARTING Media Agent "Azure_gw_via_cellmgr [GW 6588:0:16462780082409836283]"

[Normal] From: "Azure_gw_via_cellmgr [GW 6588:0:16462780082409836283]"  Time: 16/06/2020 11:06:28 AM
Loading medium from slot \\\general\531504b0_5ee2f861_04e4_038f to device Azure_gw_via_cellmgr [GW 6588:0:16462780082409836283]

[Major] From: "Azure_gw_via_cellmgr [GW 6588:0:16462780082409836283]"  Time: 16/06/2020 11:06:29 AM
[90:54]  \\\general\531504b0_5ee2f861_04e4_038f
Cannot open device (System error)

[Normal] From: "Azure_gw_via_cellmgr [GW 6588:0:16462780082409836283]"  Time: 16/06/2020 11:06:29 AM
Unloading medium to slot \\\general\531504b0_5ee2f861_04e4_038f from device Azure_gw_via_cellmgr [GW 6588:0:16462780082409836283]

[Normal] From: "Azure_gw_via_dc1hdp01 [GW 6588:0:16462780082409836283]"  Time: 16/06/2020 11:06:29 AM
ABORTED Media Agent "Azure_gw_via_cellmgr [GW 6588:0:16462780082409836283]"
It’s not a big deal: you just have to find the corresponding blob(s) in the Azure portal and bring them out of the Archive tier.
If you are reading this blog post because you’ve just seen that error, it’s pretty straightforward to find what you need to fix. The medium it is loading is: \\\general\531504b0_5ee2f861_04e4_038f
... which corresponds to the name of the blob, except for the _0 on the end. (Azure has limitations on the size of a blob, so Data Protector media can span several blobs.)
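If you’d rather script the lookup, here’s a sketch that turns the medium path from the log into the blob names to search for. (The parsing assumes the container name is the first path component, as it appears in my setup, and the span count of 4 is an arbitrary guess.)

```python
def blobs_for_medium(medium_path, max_spans=4):
    r"""Split a Data Protector medium path like \\\general\531504b0_... into
    (container, candidate blob names): each spanned blob is the medium id
    plus an _0, _1, ... suffix."""
    container, medium_id = medium_path.strip("\\").split("\\")
    return container, [f"{medium_id}_{i}" for i in range(max_spans)]

container, blobs = blobs_for_medium(r"\\\general\531504b0_5ee2f861_04e4_038f")
print(container, blobs[0])  # general 531504b0_5ee2f861_04e4_038f_0
# For each blob that actually exists and sits in the Archive tier, move it to
# Cool or Hot (in the portal, or via azure-storage-blob's
# set_standard_blob_tier), wait for rehydration to finish, then retry.
```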


On the other hand, if you are reading this blog because you are writing procedures for the future, then go to the Media tab, select "All media" and widen the columns until you can see the medium ID.

Then you can double-check in the Azure portal that it is in the right tier.

Saturday 30 May 2020

What's the etiquette for greetings on video calls?

When you meet face-to-face, you can start with “good morning” or “good afternoon”. This can be achieved with a subtle check of your watch, or some convenient clock, or even — and I’ve heard of people doing this — actually looking outside to see where the sun is.
But when it’s a video call (or even a telephone call) and the other person is in a different time zone, do you still say “good morning” (because it is your morning) or should you say “good afternoon” (if it’s their afternoon)? What do you say when you join a call with people in many different time zones, for some of whom it might even be the middle of the night? Should you keep an almanac of all the world’s timezones on hand (including summer/winter daylight-saving changes), so that you can give the right greeting?
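For what it’s worth, the almanac already ships in every modern language’s standard library. A tongue-in-cheek sketch (the greeting boundaries are my own arbitrary choices):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; handles daylight saving for you


def greeting_for(tz_name, now_utc):
    """Pick a greeting appropriate to the other party's local time."""
    hour = now_utc.astimezone(ZoneInfo(tz_name)).hour
    if 5 <= hour < 12:
        return "good morning"
    if 12 <= hour < 18:
        return "good afternoon"
    if 18 <= hour < 22:
        return "good evening"
    return "shouldn't you be asleep?"


call_time = datetime(2020, 5, 30, 0, 0, tzinfo=ZoneInfo("UTC"))
print(greeting_for("Australia/Sydney", call_time))  # 10am AEST: good morning
print(greeting_for("Europe/London", call_time))     # 1am BST: rather you than me
```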
And on that, is there a way of subtly emphasising that you had to get up at some ungodly hour (like 8:30am) to make the call at a time convenient for everyone else? Is this a situation where you should deliberately put the video on so that everyone can see you are still in your pyjamas, or is that a bit too unsubtle to get sympathy?

Monday 25 May 2020

Looking on the bright side

Since 2007, even in boom times our emissions have declined, just very slowly.

During times of recession though, Australia's greenhouse gas emissions decline more quickly (e.g. 1991-1992).

So would a long, protracted COVID-19 recession get us on track for the Paris accord?

Wednesday 18 March 2020

How I teach classes remotely

I’ve been teaching classes remotely for over a decade now — mostly to adult learners — and so I thought I’d share what I’ve learned. With a lot of schools and universities having to switch to remote classes, here’s what I can suggest:
  • One of the most useful tools is Krisp.AI (there’s an affiliate link that gets you one month free, and a free version for students and educators at this time): it automatically removes background noise, so you can deliver a class in a noisy coffee shop and it sounds to your listeners like you are in a quiet recording studio.
  • The second most useful tool is a good quality microphone. I use the Blue Yeti (affiliate link: Blue Yeti) and it is very, very good. You can then use cheap earphones to listen, because the Yeti has a good headphone-out audio monitor.
  • If you have a modern laptop, you can use it to remove the background behind you automatically. Otherwise you can make a green screen very cheaply (affiliate link: tension wire). Either way it means that you can run a class from your bedroom (or wherever) without much invasion of privacy.
  • If you are presenting a computer desktop (e.g. I teach programming classes, so I do a lot of this), then get your IT department to organise an Amazon Workspace for you. Join the conference call twice — once from your laptop, and once from the Workspaces session. Share the Workspaces screen, not your home laptop. That way, if a message pops up on your screen, the students won’t see it.
  • If you are low on bandwidth (which shouldn’t be the case in Australia, as Optus and Telstra have lifted their link speeds for everyone), you can use Workspaces and phone in to the call. Your telephone voice has priority over data, so your voice will be clear and crisp, and because the Workspaces session runs in AWS, the screen you share is never affected by your home bandwidth.
  • Remote classes scale up better than face-to-face ones. It is a brave teacher who would be willing to teach a 60-person class face-to-face, but it can be done remotely. So ideally, pair up with another teacher teaching the same class at the same time: one of you delivers the lesson, and the other handles audio muting, responds to comments in the chat channel and keeps the class on track. I’ve generally done this with a model of senior instructor plus junior support teacher.
  • Depending on the class, you can have an open mic for everyone, or use structured question-asking: e.g. if you want to ask a question, raise a flag in the conferencing tool or type it in the chat channel, and the support instructor can interrupt the class with it.
  • It is OK to watch a video (e.g. from Khan Academy who have got daily schedules organised) together, and then discuss it afterwards. It’s OK to admit that there is a resource on the internet that can be better than you at explaining something through a computer screen.
  • You need to have some kind of “exit tickets” from each lesson (or at least each week) where students tell you what the lesson was about, and any questions that they have. It is often quite enlightening.
  • The chat channels in Zoom and Google Hangouts don’t quite work; but those tools are definitely the right choice for running the video session, as you can mute participants, create breakout rooms and so on. Slack or Discord can work better as a chat channel, particularly if you have a co-instructor monitoring it for you.