Introducing TapMeasure – Occipital’s new measurement / room CAD generation tool

TapMeasure is a whole new way to build a 3D model of any room in a few seconds. It works with Apple’s new ARKit framework, and adds Occipital’s special sauce to provide artwork alignment, quick measurements, and the aforementioned generation of (SketchUp-compatible) CAD files. It’s free, and will ship as soon as iOS 11 drops later today.

See the website at tapmeasure.io.

For my part, I got to push some iOS code to this one! Besides helping out with some of the graphic design and UX, I also designed and edited the tutorial videos and the launch trailer embedded above. We have the luxury of one of the most seasoned computer vision teams in the world here, and I think it shows.

Rassler Release 8 AKA Random Events Part II

Just pushed a new release of my free retro pro wrestling RPG, Rassler. This one contains some small updates and an interesting, complicated, unintended consequence:

The last bullet point, explaining the changes to the activity system, is essentially a naive implementation of drug dependency. The player needs their health to be above 0 in order to work or go out. If the player has $40, they can buy some painkillers to get them through the next match, but painkillers only provide a +6 to short-term health (and a -6 to max health, affecting their future prospects and longevity), so they are not a sustainable solution. Either the player gets lucky, their next several matches are relatively easy, and they aren't further injured, or something bad happens and they're rendered immobile and bankrupt in the end.
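The mechanic above can be sketched in a few lines. This is a hypothetical illustration, not Rassler's actual code: the `Wrestler` class, its starting values, and the method names are my assumptions; only the $40 cost, the health-above-0 rule, and the +6/-6 trade-off come from the post.

```python
class Wrestler:
    """Hypothetical sketch of the painkiller dependency loop."""

    def __init__(self):
        # Starting values are made up for illustration.
        self.cash = 100
        self.health = 3
        self.max_health = 50

    def can_work(self):
        # The player needs health above 0 to work or go out.
        return self.health > 0

    def buy_painkillers(self):
        # $40 buys +6 short-term health at the cost of -6 max health,
        # so each use lowers the ceiling the player can recover to --
        # which is why it isn't a sustainable solution.
        if self.cash >= 40:
            self.cash -= 40
            self.max_health -= 6
            self.health = min(self.health + 6, self.max_health)
```

Because `max_health` shrinks with every purchase, a player leaning on painkillers drifts toward exactly the immobile-and-bankrupt end state the post describes.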

More in the Devlog.

Squeezing the soul out of digital video

The image above is from the original Teenage Mutant Ninja Turtles cartoon title sequence. Pretty iconic, right? It is the result of a new video technique I came up with. For more examples and a thorough explanation, read on:

I was taken by a Strange Mood and created a small combination of shell and python scripts that:

1) Creates a still image from every frame of a given input video, then
2) Compares each of these images against each other, round-robin style, in order to
3) Find the two images (and therefore, the two “shots”) which are the LEAST like each other in the source video.

Essentially, it takes a video as input and finds the two frames that are least like each other. My theory is that all of this will Tell Us Something. I don’t really know what. This is something like digital mysticism, trying to find the soul of a string of bits and surface it.
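Steps 2 and 3 can be sketched as follows. This assumes the stills have already been extracted (step 1, e.g. with ffmpeg) and loaded as equal-length grayscale pixel sequences; the distance metric here is mean absolute pixel difference, which is my assumption — the post doesn't say what comparison the original scripts use.

```python
from itertools import combinations

def frame_distance(a, b):
    # Mean absolute difference between two equal-length grayscale
    # pixel sequences; a higher score means the frames are less alike.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def least_alike_pair(frames):
    # Round-robin: compare every frame against every other frame
    # (n*(n-1)/2 comparisons) and keep the pair with the greatest
    # distance, i.e. the two shots LEAST like each other.
    return max(combinations(range(len(frames)), 2),
               key=lambda ij: frame_distance(frames[ij[0]], frames[ij[1]]))
```

The quadratic round-robin is what makes a laptop run take all night, as noted below.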

The current method is sub-optimal in several ways; for one, it takes a long time to run on a laptop. Remember: we’re comparing every second of video to every other second of video, and that adds up. Running the script against a full 22-minute episode of a TV show would require 970,921 comparisons, so I’ll set that up to run tonight and maybe it’ll be done by morning? This sounds like a job for EC2.
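That comparison count is just the number of unordered pairs, n(n-1)/2. Back-deriving from the figure above, n = 1394 stills — one per second of a bit over 22 minutes of video — gives exactly 970,921:

```python
from math import comb

# One still per second: n frames yields n*(n-1)/2 round-robin pairs.
n = 1394
pairs = comb(n, 2)  # equivalent to n * (n - 1) // 2
print(pairs)  # 970921
```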

Some more examples: