Introducing StrangeLine and More RetroStrange-ness

This weekend I did some work on RetroStrange infrastructure and scheduling.

RetroStrange TV (our 24/7 streaming TV channel) is now fully autonomous and publishes a notification to Twitter with the #RSTV hashtag when each show or movie begins. You can find my TV station code on GitHub. The current setup of two Linode 4GB servers should give us enough space and power to run it basically forever at $40/month. Support via Patreon is appreciated.

The next RetroStrange Movie Night is November 23rd, and we’re showing the film noir classic D.O.A. (1949) — see the Facebook Event.

The other big RetroStrange feature is the StrangeLine. I’ve set up a phone number you can call for various RetroStrange stuff. Right now you can call to get info on the next Movie Night, or listen to “Skulking Permit” by Robert Sheckley as heard on LOFI SCIFI. We’ll add and change up the content regularly, so go ahead and give (814) 787-2643 (that’s 814-STRANGE) a call.

Introducing The OpenCV AI Kit

For the last several months I’ve been helping OpenCV ready their biggest launch ever, and today it’s here. The OpenCV AI Kit is now available on Kickstarter.

A Spatial AI platform so small, it’s going to be huge.

The best press mention so far has been Devin Coldewey’s piece for TechCrunch: OpenCV AI Kit aims to do for computer vision what Raspberry Pi did for hobbyist hardware

The campaign has been up for a little over 4 hours, and we’ve passed 500 backers, smashed our goal, and are about to cross the $100,000 mark.

Squeezing the soul out of digital video

The image above is from the original Teenage Mutant Ninja Turtles cartoon title sequence. Pretty iconic, right? It is the result of a new video technique I came up with. For more examples and a thorough explanation, read on:

I was taken by a Strange Mood and created a small combination of shell and python scripts that:

1) Creates a still image from every frame of a given input video, then
2) Compares each of these images against each other, round-robin style, in order to
3) Find the two images (and therefore, the two “shots”) which are the LEAST like each other in the source video.

Essentially, it takes a video input and finds the two frames that are least like each other. My theory is that all of this will Tell Us Something. I don’t really know what. This is something like digital mysticism, trying to find the soul of a string of bits and surface it.
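The core of the idea can be sketched in a few lines of Python. This is a minimal illustration, not my actual scripts: it assumes each frame has already been extracted and flattened into a list of grayscale pixel values, and it uses mean absolute pixel difference as the comparison metric (the post doesn’t pin down the real metric, so treat that as a placeholder).

```python
from itertools import combinations

def frame_distance(a, b):
    # Hypothetical metric: mean absolute difference between
    # corresponding grayscale pixel values of two frames.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def least_alike(frames):
    # Round-robin: compare every frame against every other frame
    # and keep the pair with the greatest distance, i.e. the two
    # frames that are LEAST like each other.
    best_pair, best_d = None, -1.0
    for (i, a), (j, b) in combinations(enumerate(frames), 2):
        d = frame_distance(a, b)
        if d > best_d:
            best_pair, best_d = (i, j), d
    return best_pair, best_d
```

In the real pipeline you’d feed this frames pulled out of the video by something like ffmpeg, but the round-robin search itself is this simple.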

The current method is sub-optimal in several ways; for one, it takes a long time to run on a laptop. Remember: we’re comparing every second of video to every other second of video, and that adds up. Running the script against a full 22-minute episode of a TV show would require 970,921 comparisons, so I’ll set that up to run tonight and maybe it’ll be done by morning? This sounds like a job for EC2.
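The reason it adds up is that round-robin comparisons grow quadratically: for n stills you do n·(n−1)/2 pairwise comparisons. As a quick sanity check (my arithmetic, not from the post), the 970,921 figure works out to exactly 1,394 stills:

```python
def comparisons(n_frames):
    # Round-robin (all-pairs) comparisons: n choose 2.
    return n_frames * (n_frames - 1) // 2
```

So doubling the number of sampled frames roughly quadruples the work, which is why a full episode is an overnight job.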

Some more examples:

A Statement From Louis C.K.

His direct-download, no-DRM concert video has sold over 100,000 copies. Louis:

I really hope people keep buying it a lot, so I can have shitloads of money, but at this point I think we can safely say that the experiment really worked. If anybody stole it, it wasn’t many of you. Pretty much everybody bought it. And so now we all get to know that about people and stuff. I’m really glad I put this out here this way and I’ll certainly do it again. If the trend continues with sales on this video, my goal is that i can reach the point where when I sell anything, be it videos, CDs or tickets to my tours, I’ll do it here and I’ll continue to follow the model of keeping my price as far down as possible, not overmarketing to you, keeping as few people between you and me as possible in the transaction.

This is news that I am very happy about.