The first 12 months of FilmLab: 2017 year in review

I will remember 2017 as the year when FilmLab went from an idea to something real. I’d been thinking about the concept of a film viewing / scanning app for a couple of years, although more as something I wished existed than as a viable business idea. But during 2016, I started taking the idea more seriously, and talking about it with my wife and business partner Hannah.

If I was ever going to work on the film scanning app idea, the time seemed right. Film photography was making a comeback, but almost all the local photo labs had gone out of business, so many film shooters had started developing and scanning their own film at home. Existing film scanning software was designed for desktop computers, and some of it was no longer being updated and only ran on old operating systems. Meanwhile, more and more computing was moving to mobile devices. Smartphone camera capabilities had been improving rapidly, with features like manual controls and raw capture available to developers for the first time. Didn’t it make sense that there should be smartphone software for viewing and capturing film?

Hannah and I decided it made sense for me to work on the app for a month or two, as an experiment. If it turned out to be too much work, or there wasn’t enough interest, we’d move on to other projects. But it was time to give the idea a try.

January to March: Bootstrapping the Prototype

On January 25, I checked in the first bit of code for what would become FilmLab. It didn’t actually run on a phone, just on my computer, and it didn’t have any user interface at all. But it did load a JPEG “scan” of some film I’d taken with my camera phone, and attempt to find the outlines of the film frames.
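
For readers curious what “finding the outlines of the film frames” can look like in code, here’s a minimal sketch of one common approach, assuming OpenCV: threshold the backlit image to separate film from the light box, then keep contours large enough to be frames. The function name and thresholds are illustrative, not the actual FilmLab code.

```cpp
// Minimal sketch of frame-outline detection on a backlit negative scan.
// Assumes OpenCV; names and thresholds are illustrative only.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> findFrameOutlines(const cv::Mat& scanBgr) {
    cv::Mat gray, mask;
    cv::cvtColor(scanBgr, gray, cv::COLOR_BGR2GRAY);

    // Exposed frames are darker than the bright light box and the clear film
    // rebate, so a simple Otsu threshold separates them from the background.
    cv::threshold(gray, mask, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Keep only regions large enough to plausibly be a film frame.
    std::vector<cv::Rect> frames;
    const double minArea = 0.01 * scanBgr.rows * scanBgr.cols;
    for (const auto& contour : contours) {
        cv::Rect box = cv::boundingRect(contour);
        if (box.area() > minArea) frames.push_back(box);
    }
    return frames;
}
```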

By early February, I had the code running on iOS on my iPhone 7. This required a bunch of computer plumbing: all the FilmLab image processing is written in C++, a classic albeit boring programming language, while the iOS user interface code is written in Apple’s modern programming language, Swift. But Swift can’t talk to C++ directly, so I had to write a bridge layer between the two in an older Apple programming language called Objective-C. Fortunately there were some helpful tutorials online about how to do this.

On February 9, the app started to do something useful. If you pointed it at some black and white film with clearly defined borders, it would identify the frames and draw an outline around the one closest to the middle. Then if you tapped the frame, it would take a full resolution photo, extract the frame, invert it to a positive, and display the result.
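
The simplest possible version of that invert-to-positive step for black-and-white film looks something like the sketch below. This again assumes OpenCV and is a simplified illustration, not the actual FilmLab conversion (which does considerably more):

```cpp
// Toy negative-to-positive conversion for a black-and-white frame:
// invert the tones, then stretch the contrast so the clear film base
// ends up near white instead of gray. Illustrative only.
#include <opencv2/opencv.hpp>

cv::Mat negativeToPositive(const cv::Mat& frameGray) {
    cv::Mat positive;

    // Invert: dense (dark) areas of the negative become bright highlights.
    cv::bitwise_not(frameGray, positive);

    // Stretch so the darkest and lightest values span the full 0-255 range,
    // roughly compensating for the base density of the film.
    cv::normalize(positive, positive, 0, 255, cv::NORM_MINMAX);
    return positive;
}
```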

I made my first demo video and put it up on Instagram. (Apparently by that point I’d settled on the name FilmLab, since I mention it in the post description). I was really curious to know what other film shooters thought of the idea. I think I only had about 40 followers at the time, but there were enough positive comments to make me feel encouraged.

The rest of February was spent improving the automatic frame detection, initially targeting 35mm film, 120 film, and slides. (I ended up setting slide support aside, and still haven’t gotten back to it, although it’s still on my to-do list).

On March 10, I posted another demo video showing an updated prototype that could detect multiple frames, in different types of film, simultaneously. It still had obvious bugs: the frame borders were jumpy, and the color was way off (check out those super-saturated blues), but it had come a long way from the previous month.

April to June: The Kickstarter Campaign

In early April, I spent a couple of days doing the first real UI work on FilmLab. I added the ability to zoom in on a frame to focus (which you had to do by moving the phone closer to or farther from the negative, since autofocus wasn’t working yet), and buttons to capture and save/export the image. This meant that, for the first time, I could actually save an image I’d scanned with FilmLab! It was starting to feel like a real app. I posted another Instagram demo video:


Over the past few days I’ve made some progress on FilmLab. Now you get a live preview of the negative-to-positive conversion (instead of waiting until after you pick a frame to scan) which is really nice for quickly viewing a sheet of negs. And I added preliminary support for color negatives, as you can see in this video. I’m still working on improving the code that automatically detects frame boundaries, but it’s getting better—here it’s able to correctly capture the individual 35mm frame I tapped. I’ve been thinking about the future of this project, and one thing it’s definitely going to need (besides a whole lot of coding and design work) is an audience. If you could help spread the word by tagging a friend or two who shoots film, I’d really appreciate it!


At this point I had been working on FilmLab close to full time for about four months. I was excited about the possibilities of the app from a technical standpoint, and I was getting an increasing amount of positive feedback about the idea from film shooters. But at the same time, I could see that it was going to be a big project, with months more work before I’d have something I’d be able to sell on the App Store. I’d need some financing to be able to invest that much time in development.

I had never pictured myself doing a Kickstarter project, but FilmLab just seemed like a perfect product to crowdfund. Film shooters are a passionate community, and based on my own experience I thought it was going to meet a real need that people would be willing to pay for. And if I was going to owe something to investors, I liked the idea of answering to the people who were actual users of the product – it felt like our interests would be aligned. Plus, I hoped the Kickstarter project itself would get some press and help spread the word about FilmLab.

I wanted to start spreading the word ahead of time, so I decided to bite the bullet and get in front of the camera (not something I was looking forward to) and make a YouTube video about the project. I bought an audio recorder and a mic, set up my digital camera on a tripod, and tried to find an angle in our small house where the background wouldn’t be distracting (I ended up sitting at the dining room table with the living room behind me). After many takes, I managed to record this video:

If you look closely, you can see that at the end of the video it’s getting dark outside, and Hannah is patiently sitting in the car in the driveway waiting for me to be done so she can come inside. I think one of my repeated phrases this year was “it took longer than I thought it would.”

The rest of April and early May were a whirlwind of Kickstarter preparations. There were app features to finish, a video to make, bank accounts to open, music to license, and press to reach out to. I had to move back the launch date a couple of times, and by the end I was physically and emotionally exhausted. Finally, everything was ready, and at noon on May 11 it was time to push the launch button.

At the moment the Kickstarter campaign went live, I was feeling excited but also very nervous. I had invested five months in this project, more than we had originally planned on, and now it was going to sink or swim based on how people responded to it. I believed in it, but it definitely wasn’t a sure thing, and the possibility of massive public failure felt very real.

Thankfully, there was a great response to the Kickstarter campaign. In the end, thousands of people came forward to donate to the project. The campaign ended up almost doubling its funding goal (which has proved useful since then, since development has ended up taking longer than I thought it would). I carry a huge debt of gratitude toward everybody who supported the Kickstarter project, especially the people who jumped in to pledge on the first day to help the campaign get some momentum. It’s a cliche to say “I couldn’t have done it without you”, but in this case it’s literally true. If it wasn’t for the Kickstarter backers, I don’t think FilmLab could have had a chance to succeed. Many thanks to everybody who supported it!

Looking back on my Kickstarter campaign now, there are a couple of things I would change if I were doing it over again. First, I cringe at my “three to six month” estimate for completing version 1.0 of FilmLab. That was overly optimistic. “More than six months” would have been more accurate.[1]

Second, I wish I hadn’t used the word “beta” to describe the early releases I’d be delivering to testers. A “beta” release is usually an almost-finished product, which may still have bugs but which has all major features completed. But with FilmLab, I promised to deliver in-progress snapshots of the app starting only a month after the Kickstarter campaign ended. Calling these “beta” set expectations too high. I was trying to keep things simple by using a word people are familiar with, but I chose the wrong word. If I could do it over I’d call these “sneak peek preview builds” or something along those lines.

All things considered, though, I’m super happy with how the Kickstarter campaign went. And in the months since then, the community of backers has been great to work with. They’re an extremely supportive, understanding, and helpful group. They’ve provided much more than just financial backing, and FilmLab is going to be a better product because of them.

July to September: The Core of a Cross-Platform App

After the Kickstarter campaign ended, I started working on a new version of FilmLab, with some major changes from the existing prototype version. This new version would be cross-platform: it would have a flexible core that could run on both iOS and Android (and be adapted for other platforms and uses in the future). And it would be designed to work in the messy, complicated real world. Instead of only needing to work with my own phone model, my light box, and the film stocks I usually shoot, it would need to support all the different phone models, light sources, and film stocks out in the real world.

The first step was to get the Android version bootstrapped. Up to this point, my experience developing Android apps was minimal, so there was a bit of a learning curve to get up to speed. And like the iOS app, there was some additional plumbing work necessary to get the Android UI (written in Java) talking to FilmLab’s C++ image processing layer. I learned a lot about Android’s JNI layer and how to pass image data around efficiently.
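
As a rough illustration of that plumbing, here’s what a JNI entry point for handing a camera frame from Java to the C++ engine might look like, using a direct ByteBuffer so the Java and C++ sides share the same memory instead of copying it. The package, class, and engine names here are hypothetical, not FilmLab’s actual API:

```cpp
// Hypothetical JNI bridge: Java passes a direct ByteBuffer of pixel data,
// and the native side reads it in place before handing it to the C++
// processing engine. Names are illustrative, not FilmLab's real API.
#include <jni.h>
#include <cstdint>

namespace filmlab {
// Stand-in for the real C++ processing entry point.
void processFrame(const uint8_t* /*pixels*/, int /*width*/, int /*height*/,
                  int /*rowStride*/) {
    // ...image processing would happen here...
}
}  // namespace filmlab

extern "C" JNIEXPORT void JNICALL
Java_com_example_filmlab_NativeBridge_processFrame(
        JNIEnv* env, jclass /*clazz*/,
        jobject pixelBuffer, jint width, jint height, jint rowStride) {
    // GetDirectBufferAddress avoids a copy: Java and C++ see the same bytes.
    auto* pixels = static_cast<uint8_t*>(env->GetDirectBufferAddress(pixelBuffer));
    if (pixels == nullptr) return;  // Not a direct buffer; nothing to do.
    filmlab::processFrame(pixels, width, height, rowStride);
}
```

On the Java side, the matching native method would be declared in a class like the hypothetical com.example.filmlab.NativeBridge above and fed buffers from the camera callback.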

On August 1st, I released the first preview build of FilmLab for iOS, followed by an Android build on August 9. These builds were made available to Kickstarter backers (as part of their reward for supporting the project), and they immediately started trying out the app and giving feedback, even posting some of their FilmLab scans on Instagram. It was an exciting moment the first time I saw a FilmLab-scanned image in the wild:

As summer went on, and I kept working on improving the iOS and Android app builds, I realized the app could be thought of as a set of separate components that work together. First, there’s an augmented reality camera, which analyzes the video stream coming from your phone’s camera, detects film, and gives you a real-time preview of negatives as positives. Second, there’s still image capture, which has the job of controlling the exposure and focus of the device camera, capturing the negative data at as high a resolution as possible, and processing raw files. Third, there’s the film processing engine, which takes negatives as input and simulates the analog printing process to produce positive output. And finally, there’s the UI, which gives you the ability to see and control what the app is doing. I’m building that part separately for each platform (iOS and Android), so the user interface can have a native look and feel.
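
To make that split a little more concrete, here’s a hedged sketch of how the first three pieces could be expressed as separate interfaces in the shared C++ core, with the per-platform UI talking to them through a bridge. The names and types are illustrative, not FilmLab’s real classes:

```cpp
// Illustrative component boundaries for the shared C++ core.
// None of these names come from the actual FilmLab source.
#include <cstdint>
#include <vector>

struct FrameImage {                      // One captured or detected film frame.
    int width = 0, height = 0;
    std::vector<uint16_t> pixels;        // Raw sensor or linear RGB data.
};

struct DetectedFrame { float x, y, w, h; };  // Frame location in preview coordinates.

class LivePreviewCamera {                // The "augmented reality" camera.
public:
    virtual ~LivePreviewCamera() = default;
    virtual std::vector<DetectedFrame> detectFrames(const FrameImage& videoFrame) = 0;
};

class StillCapture {                     // Controls exposure/focus, captures raw.
public:
    virtual ~StillCapture() = default;
    virtual FrameImage captureHighRes(const DetectedFrame& target) = 0;
};

class FilmProcessingEngine {             // Negative in, simulated print out.
public:
    virtual ~FilmProcessingEngine() = default;
    virtual FrameImage negativeToPositive(const FrameImage& negative) = 0;
};
```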

The challenge was (and continues to be) deciding how to split my time between these different components. Ultimately they all have to get done, so in a sense the order I work on them doesn’t really matter. But since people are actively using the preview releases, I hoped to work on the components in an order that helped the app become useful even while it was incomplete.

I decided to focus first on still image capture, the film processing engine, and the UI for editing captured images. My thinking was that this way, even if the automatic film detection wasn’t working yet, people could still use FilmLab as an editor for their film scans (even those captured through other means like DSLRs with macro lenses). I was able to make some progress in these areas during September, even while much of my time ended up being spent working through device-specific issues on Android. (It turns out that Android camera capture is quite complicated, and there are a lot of differences between phone models). On October 12, I posted this demo video to Instagram, showing the currently available tools for editing a captured negative on Android:

October to December: Tackling the Hard Problem of Accurate Color

Before I moved on to improving the other components of FilmLab, I wanted to both finish the image editor and improve the quality of negative-to-positive color conversion. In FilmLab, these two things really go hand in hand. One of my goals has been to make an app that’s grounded in the analog techniques and technology of film. I want the colors and tones of images produced with FilmLab to look like what you’d get if you made your prints with chemicals in a lab. And I want the creative controls available in FilmLab to be based on what a skilled darkroom printer would have been able to do. There will always be other image editors available for digital postprocessing, but FilmLab’s editing is all about controlling the negative-to-positive process.

This turned out to be way more work than I thought it was going to be. I had a lot to learn about color, and there were a bunch of hard problems to solve to make FilmLab’s color conversion accurate, good-looking, and fast. In particular, I struggled to properly emulate the subtractive CMY color of chromogenic paper dyes. This work involved a lot of math and algorithms, which aren’t my strongest suit as a developer, and when things went wrong it was hard to debug. It was easy to see when the results looked bad (and they often did), but harder to figure out exactly what was wrong, mathematically, with the digital output, and then trace it back to a specific step in the system. And even after isolating a problem, there would be questions: Was my mental model of how the analog process worked wrong, or was my code buggy? Or both? It ended up being as much a research problem as it was a coding problem.
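
The core of the subtractive math is actually simple to state, even if getting it right isn’t: each dye layer blocks light in proportion to its optical density, so the light that makes it through a given channel follows a 10^(-density) law. Here’s a toy sketch of just that piece; a real chromogenic model also has to account for dye crosstalk (each dye absorbs a little in the “wrong” channels), the orange mask of color negative film, and the paper’s own dye response. Everything here is illustrative, not the FilmLab implementation:

```cpp
// Toy subtractive CMY model: cyan mostly absorbs red, magenta mostly
// absorbs green, yellow mostly absorbs blue. Transmission per channel
// is 10^(-density). Illustrative only; no crosstalk, mask, or paper model.
#include <cmath>

struct DyeDensities { double cyan, magenta, yellow; };
struct RgbLight { double r, g, b; };  // Linear light, roughly 0..1

RgbLight transmitThroughDyes(const RgbLight& light, const DyeDensities& d) {
    return {
        light.r * std::pow(10.0, -d.cyan),
        light.g * std::pow(10.0, -d.magenta),
        light.b * std::pow(10.0, -d.yellow),
    };
}
```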

All of that took a lot of time, which has made me worry that, from the outside, it looks like FilmLab development has stalled out. That’s not the case at all – I’m really excited about the work I’ve been doing, and I think it’s important for the future of FilmLab. I want people to be really happy with the output colors and tones in FilmLab, and now I think they will be. And the underlying technology is super cool. It’s recreating the whole analog process in software, and I think it will open the door to interesting possibilities in the future.

Looking forward to 2018

What’s in store for FilmLab in 2018? First up will be preview release 0.3, coming in January. This will include the new print simulation code for negative to positive conversion, as well as a new editing UI which gives more manual controls based on traditional film processes. I’m excited about it.

After that, my goal is to work towards releasing FilmLab 1.0 by the end of the first quarter (that’s the end of March, three months from now). Most of the remaining work is already designed and planned out, but there is one significant piece that I haven’t quite figured out yet. Which is:

Smarter frame detection

Since the very first preview release of FilmLab went out to backers at the beginning of August, I haven’t spent any time working on improving automatic detection of film frames. This is a core feature of FilmLab, and having this part unfinished makes the whole app feel kind of broken. So I’m really looking forward to improving it. However, before I get back to work on it, I have a big technical decision to make.

There’s a revolution happening in software development, which is the use of machine learning to solve problems. In the past, the job of the software developer was to write super-specific instructions, telling the program exactly what to do in every possible situation it might encounter. But with machine learning, the developer teaches a system how to do a task in a looser way, similar to how you might teach another person: by providing it with lots of examples to learn from, and giving feedback so it can learn from its mistakes. Surprisingly, this turns out to work better than traditional programming for many types of problems, including computer vision (teaching a computer to recognize and differentiate objects).

Machine learning seems super promising for FilmLab’s film detection, but I personally don’t have any experience with it, so it’s hard to know how long it will take to get working, and just how much better the results could be than if I use more traditional computer vision techniques. Probably what I’ll do is give myself a week or two to experiment with machine learning and see how far I can get. I’m pretty sure that machine learning represents the future of computer vision, and FilmLab will adopt it eventually, but I don’t want to delay the version 1.0 release if another solution can be made to work in less time. If anybody out there has experience implementing machine learning for this kind of problem, please feel free to get in touch – I’d love to chat!

My goals for the new year

My number one goal for 2018 is to be better at communication, especially through the FilmLab mailing list / blog and on social media. As a one-man development team, my tendency is to feel like the most important thing I can do at any given moment is work on developing the app. But I also need to prioritize time for communication. Everybody who backed FilmLab expects (and deserves) updates about how development is going. And for FilmLab to be successful in the future, after the 1.0 launch, the pool of people who know about it needs to keep growing. So part of my job is to keep talking about FilmLab and posting samples and demos to show what it’s currently capable of.

In addition, I’m going to try to do better at staying on top of email. I try to respond to everyone who writes, but sometimes the email piles up and then days or weeks go by before I get to the bottom of it. Hopefully sometime in the future I’ll be able to hire a support person who can focus on answering questions quickly. But for now, my goal is going to be to respond within 1 business day. In any case, please know that I do read and appreciate all email (and DMs on social media). Please keep in touch, and I apologize if I don’t get back to you as quickly as I wish I could.

If you’ve read this far, thanks for taking the time! Writing this year-in-review has been a helpful exercise for me, in that it’s helped me get some perspective. I’ve been neck-deep in this project, focusing on what needs to get done today or this week. It’s refreshing to step back and consider the progress on a larger time scale. A year ago, FilmLab didn’t exist. Now it does. It’s not finished yet, but it’s well on its way. 2017 was a huge year for FilmLab, but I think 2018 is going to be even better. Thanks to all of you for your continued support!

-Abe

Footnotes
  1. I fell into a known software development trap, which is believing that you’ve solved all the hard problems in the course of getting a basic version of your program working. In reality, that’s never true. In The Mythical Man-Month, the classic 1975 book on software development, Fred Brooks estimated that making an existing program usable for other people represents a 3X increase in time and expense, and that making a program integrate reliably with other software is another 3X increase in time and expense. He wrote that, compared to the initial program, the finished software “costs nine times as much. But it is the truly useful object, the intended product of most system programming efforts.”

    As a software developer, once you get the basic program working you tend to feel like the hard part is over. But according to Brooks you’re only one-ninth of the way there. There are a lot of engineering challenges that only become clear after software starts to deal with the complexity of the real world.