Happy spring, everyone! The weather is finally getting warmer here in Maine, and I’ve been getting outside more and feeling inspired to go places and take pictures. During our kids’ school vacation week last month, we took a family trip to New York, which included a day at the Met. One day is barely enough time to scratch the surface of their collection, but I made sure to check out the current William Eggleston: Los Alamos exhibition. The dye-transfer prints looked amazing in person, and gave me some good food for thought about emulating alternative color processes in future versions of FilmLab.
My favorite photograph from the Eggleston exhibition
I spent the rest of April developing a new frame detection engine for FilmLab. This is the code that looks at an image coming from the camera and tries to automatically find film frames and slides. I had been feeling uncertain about the approach to take for this task: whether to use classic computer vision techniques or cutting-edge machine learning. In the end, I decided to keep it simple and leave machine learning out for now, and I think that was the right call. The code is working well for 90% of use cases, and you’ll be able to override it when necessary. Most importantly, I was able to get it working pretty quickly without it turning into a multi-month project. I’m building the frame detection engine using techniques I already know, and staying focused on shipping version 1.0. There will be plenty of time to explore new technology later.
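To give a rough idea of what "classic computer vision" means here, a minimal sketch along these lines might look like the following. This is illustrative only, not FilmLab's actual code: it assumes OpenCV in Python, a backlit piece of film photographed from above, and a made-up `find_frame_candidates` helper.

```python
# Illustrative sketch of classic-CV frame detection (not FilmLab's actual code).
# Assumes a backlit negative or slide: the frame shows up as a roughly
# rectangular region that is darker than the light panel around it, so
# thresholding plus contour analysis can locate candidate frames.
import cv2
import numpy as np

def find_frame_candidates(image_bgr, min_area_ratio=0.05):
    """Return 4-corner quadrilaterals that could be film frames or slides."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu's threshold separates the bright light source from the darker frame.
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    min_area = min_area_ratio * image_bgr.shape[0] * image_bgr.shape[1]
    candidates = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue  # ignore dust specks and small highlights
        # Approximate the contour; a clean frame edge collapses to 4 corners.
        peri = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, 0.02 * peri, True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            candidates.append(approx.reshape(4, 2))
    return candidates
```

The appeal of this kind of approach is exactly what I mentioned above: it uses well-understood building blocks, it's fast enough to run on a phone, and when it guesses wrong you can simply override the detected frame by hand.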
Related to frame detection is the task of tracking the position of a known frame over time, even when the camera moves. This is necessary to have a smooth live preview, and also to be able to align multiple camera captures to produce a single high-quality image. This week I’m going to be working on getting all this new frame detection and tracking code integrated into the app, and playing nice with automatic film type detection, exposure, and color balance. Having this technology working will enable some big UI improvements in the live preview, making the capture interface more intuitive and predictable.
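For a sense of how that kind of tracking can work, here is another hedged sketch (again illustrative, not the app's real code, and again assuming OpenCV in Python). It matches ORB features between the previous and current capture, estimates a homography with RANSAC, and uses it to carry the frame's corners from one image to the next:

```python
# Illustrative sketch of tracking a known frame between two captures.
# A homography estimated from matched features maps the old frame corners
# into the new image, which keeps a preview overlay steady as the camera
# moves and lets multiple captures be aligned before merging.
import cv2
import numpy as np

def track_frame(prev_gray, curr_gray, prev_corners):
    """Estimate where prev_corners (a 4x2 float array) moved in curr_gray."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None  # not enough texture to track

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    if len(matches) < 8:
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects matches that moved inconsistently (hands, reflections).
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    pts = prev_corners.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H).reshape(4, 2)
```

The same estimated transform can also warp the captures into alignment before they're merged into a single higher-quality image, which is the other reason this piece matters.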
Now many of you are probably wondering: What about that Android update? I had planned on releasing version 0.3 for Android this month, but once I got into the work and looked at the schedule, I decided to skip the 0.3 release on Android and go right to 0.4. This will probably save a week or two of work. Building any new FilmLab feature is kind of a two-step process: first I get the code functioning, and then I figure out how to optimize it so it can run in real time on phone hardware. Sometimes the optimization step requires rewriting the code several different ways until I find a good solution, so it can be pretty time-consuming. In this case, I knew the new frame detection / tracking code would be ready soon, so it made sense to wait until those parts were done before diving back into Android optimization. So the next beta release will be version 0.4, for both iOS and Android, coming in May.
I started working on FilmLab because I really wanted this app to exist so I could use it. That remains true today, more than ever. Like many of you, I have lots of film and slides to scan. I use the beta version of FilmLab all the time, and often get frustrated by the parts that are missing or buggy. But increasingly, I’m getting little glimpses of what the finished app is going to look like, and the quality of images it will be able to produce, and it’s really looking promising. We’re getting close to the home stretch here. Thanks to everybody for your continued support!