Nose Ladder Filter

This is a pun on the Moog ladder filter emulation available in AudioKit: https://github.com/AudioKit/SoundpipeAudioKit/blob/main/Sources/SoundpipeAudioKit/Effects/MoogLadder.swift. It's called the "Nose" ladder filter because I figured out a way to control it with my nose, thanks to Core ML and the Vision framework. Using the output from PoseNet, I extracted the position of my nose and mapped its x and y coordinates to the filter's cutoff frequency and resonance values, respectively.
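The core of the control scheme is a linear remapping from screen coordinates to parameter ranges. The delegate code below calls a `mapped(from:to:)` range-mapping extension; a minimal standalone sketch of such a helper (my own illustrative version, not necessarily the exact implementation used in the project) might look like this:

```swift
import Foundation

extension BinaryFloatingPoint {
    /// Linearly map a value from one closed range onto another.
    func mapped(from source: ClosedRange<Self>, to target: ClosedRange<Self>) -> Self {
        let normalized = (self - source.lowerBound) / (source.upperBound - source.lowerBound)
        return target.lowerBound + normalized * (target.upperBound - target.lowerBound)
    }
}

// Example: a nose at x = 160 in a 320-point-wide preview maps to the
// midpoint of a hypothetical 12...20_000 Hz cutoff range.
let cutoff = Double(160).mapped(from: 0 ... 320, to: 12 ... 20_000)
print(cutoff) // 10006.0
```

Because the helper is generic over `BinaryFloatingPoint`, the same code works for `CGFloat` view coordinates and for `Float`/`AUValue` parameter ranges.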

Tap to view Swift Code (PoseNet Delegate Method)

extension ViewController: PoseNetDelegate {
    func poseNet(_ poseNet: PoseNet, didPredict predictions: PoseNetOutput) {
        defer {
            // Release `currentFrame` when exiting this method.
            self.currentFrame = nil
        }

        guard let currentFrame = currentFrame else {
            return
        }

        let poseBuilder = PoseBuilder(output: predictions,
                                      configuration: poseBuilderConfiguration,
                                      inputImage: currentFrame)

        let poses = algorithm == .single
            ? [poseBuilder.pose]
            : poseBuilder.poses

        for pose in poses {
            // Find the valid nose joint and map its position to filter parameters.
            for joint in pose.joints.values.filter({ $0.isValid }) where joint.name == .nose {
                guard let filter = filter else { continue }

                let bounds = previewImageView.bounds
                let cutoffFrequency = joint.position.x.mapped(
                    from: bounds.minX ... bounds.maxX,
                    to: CGFloat(filter.$cutoffFrequency.minValue) ... CGFloat(filter.$cutoffFrequency.maxValue))
                let resonance = joint.position.y.mapped(
                    from: bounds.minY ... bounds.maxY,
                    to: CGFloat(filter.$resonance.minValue) ... CGFloat(filter.$resonance.maxValue))

                filter.cutoffFrequency = AUValue(cutoffFrequency)
                filter.resonance = AUValue(resonance)
            }
        }

        previewImageView.show(poses: poses, on: currentFrame)
    }
}

I later ended up writing a UGen (unit generator plugin) in C++ for SuperCollider, an open-source sound synthesis platform. This UGen used the Kinect sensor to map my body motion to sound.

Kinect Audio Control at the Lab

Kinect Audio Control for MIDI and iOS Music