It's T minus 5 days until the opening of the 2020 G̶r̶a̶z̶ online edition; today we recorded the last panel Q&A. Next week they will all go public, and there is an online party on Wednesday, July 8, 6 pm CET. We'll post the link.

Fragment (2017)

"nur die fließende, stille Bewegung wird von der Macht unerkannt bleiben"

"only the flowing, quiet movement will remain undetected by power."

Just after eight, we encountered these strange patches on a house wall. On closer inspection, they turned out to be spider webs, moist after the rain and thus reflecting the sunset (it took me five minutes to get the camera, so the illumination was already less impressive than when we first saw it; I'll try again tomorrow to shoot this).

For , we found 47 of these acrylic discs, which only need to be laser cut to serve as speaker walls. This was such an amazing "lucky find" in the light lab of TU Graz; we only need to get one extra plate.

If you are not familiar with acoustics: speakers are placed either inside an enclosure (the conventional "box") or on a large surface, where the travel distance from the back of the driver around the boundary to the front determines the lowest reproducible frequency.
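As a rough sketch of the second approach: below the frequency whose half-wavelength matches that back-to-front path, the inverted rear wave wraps around and cancels the front wave. This is my own back-of-the-envelope estimate, not a formula from any particular source:

```python
# Rough open-baffle estimate: the inverted rear wave cancels the front
# wave below the frequency where the edge path equals half a wavelength.
SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 °C

def lowest_frequency(path_m: float) -> float:
    """Estimate the lowest reproducible frequency for a baffle where
    `path_m` is the travel distance (in metres) from the driver's back,
    around the boundary, to the front."""
    return SPEED_OF_SOUND / (2.0 * path_m)

# e.g. a disc giving a 0.5 m back-to-front path:
print(round(lowest_frequency(0.5)))  # -> 343 Hz
```

So larger discs reach lower; halving the path doubles the cutoff.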

Announcing the upcoming exhibitions of - still lots of work, but we're on track; going to be a busy summer/autumn 🐙

Status: freeing my laptop by moving some rendering over to a Raspi cluster 🤓

Q: since I launched the rendering on them by logging in through ssh – what happens if I unplug the Ethernet? I suppose ssh will just choke, but the processes will keep running until I reconnect the cable. Right?

Uh, this should be perfect reconstruction (2D -> 1D -> 2D Hilbert curve encoding).

Any ideas for esoteric ways to translate particular types of images to sound? I tried all the usual suspects, from direct encoding to Fourier and wavelet transforms, to 2D -> 1D singular value decomposition. Nothing seems to produce anything interesting from this…
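For reference, the 2D ↔ 1D Hilbert mapping mentioned above can be sketched with the classic bit-twiddling algorithm (the standard textbook version, not Mellite/FScape's actual implementation), which is exactly invertible, hence the perfect reconstruction:

```python
def xy2d(n: int, x: int, y: int) -> int:
    """Map (x, y) on an n-by-n grid (n a power of two) to a distance d
    along the Hilbert curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:            # rotate/flip quadrant into standard orientation
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

def d2xy(n: int, d: int) -> tuple:
    """Inverse mapping: distance d along the curve back to (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:            # undo the quadrant rotation/flip
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# round trip is lossless on the whole grid
n = 8
assert all(d2xy(n, xy2d(n, x, y)) == (x, y)
           for x in range(n) for y in range(n))
```

The curve preserves locality: neighbouring pixels mostly stay neighbouring samples, which is why it is attractive for image-to-sound scanning.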

(vs. St-Exupéry friendship)
"you have my back"

barfuß den Steinen ausweichen
barefoot, dodging the stones

Determining regions in rendered sounds based on loudness contour (lots of FScape debugging as a "fun" side activity)
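A minimal sketch of the region-finding idea (my own illustration, not FScape's actual algorithm): threshold a loudness envelope and collect the contiguous runs above it.

```python
def regions_above(env, thresh):
    """Return (start, stop) index pairs of contiguous runs where the
    loudness envelope `env` exceeds `thresh` (stop is exclusive)."""
    spans = []
    start = None
    for i, v in enumerate(env):
        if v > thresh and start is None:
            start = i                     # region opens
        elif v <= thresh and start is not None:
            spans.append((start, i))      # region closes
            start = None
    if start is not None:                 # envelope ended inside a region
        spans.append((start, len(env)))
    return spans

env = [0.1, 0.6, 0.8, 0.2, 0.1, 0.7, 0.9, 0.9, 0.3]
print(regions_above(env, 0.5))  # -> [(1, 3), (5, 8)]
```

In practice one would smooth the envelope first and merge regions separated by very short gaps, but the segmentation core is this single pass.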

Some advances in developing the sound installation 'Through Segments' with a layer running on a Pi 4.

Definitely slower than on the laptop, but no real issues so far. A few xruns with the binaural simulation, and no JACK parameters adjusted yet, so I'm fairly optimistic.

It will have to run for a long time to see if there are any memory leaks or such.

In 2011, I was pencilling around to understand how confluently persistent data structures work.

I'm adding a Histogram UGen to FScape; such a simple mechanism, just throw stuff into buckets and count… I will need this to look at the distribution of spectral features of a microphone signal.

Sometimes I wish I had more input on what building blocks other people would need, but I guess that's ok.

The plot shows a 17-bin histogram fed with a sine oscillator, and the histogram being flushed every ten steps.
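The bucket-counting mechanism can be sketched in a few lines (plain Python for illustration, not the actual FScape UGen code); here 1000 samples of a sine are counted into 17 equal-width bins, and the arcsine-shaped distribution piles up at the edges:

```python
import math

def histogram(xs, bins, lo=-1.0, hi=1.0):
    """Count each value into one of `bins` equal-width buckets over [lo, hi]."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in xs:
        i = int((x - lo) / width)
        counts[min(max(i, 0), bins - 1)] += 1  # clamp boundary values
    return counts

# ten full periods of a sine, 100 samples each
sine = [math.sin(2 * math.pi * k / 100) for k in range(1000)]
counts = histogram(sine, bins=17)
# the outermost bins collect far more samples than the middle one,
# because a sine dwells longest near its extremes
```

Flushing every ten steps, as in the plot, just means resetting `counts` to zeros after every tenth input batch.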

The latest Raspbian Buster comes with JDK 11, QJackCtl, and SuperCollider 3.10.0 preinstalled; the Mellite universal download ran out of the box 🎉

Seems much more powerful than the Pi 3; my bet is that the 4 GB of RAM make a significant difference when running the JDK app. So far all smooth ✔️

Another one of my favourites: I shot that landscape for the video piece 'Moor' (2016).

