Just gave the new beta a quick test. I’m using a real lightpad now (as opposed to my iPad) and that definitely helps.
The new beta seems to ‘settle’ on the correct colors much more quickly, so that’s definitely moving in the right direction.
I tried it with some color positives (transparencies), but it insisted on seeing them as negatives, and I couldn’t persuade it otherwise. I think the final version should perhaps include a neg/pos and color/mono switch to help the algorithm search more efficiently. I could even imagine it having film profiles, so you could choose the particular film used (though maybe that’s better done in post-processing on the desktop).
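Just to make the idea concrete, here’s a very rough sketch of what I mean by a neg/pos and color/mono hint. This is purely my own illustration in Swift; the type names, the mask handling, and the conversion math are my guesses, not anything from FilmLab’s actual code:

```swift
// Hypothetical sketch: an explicit film-type hint that lets the conversion
// step skip auto-detection entirely. Not FilmLab's real API.

enum FilmKind {
    case colorNegative
    case colorPositive   // transparencies / slides
    case monoNegative
    case monoPositive
}

struct RGB {
    var r: Double   // channels normalized to 0...1
    var g: Double
    var b: Double
}

/// Convert one sampled pixel according to the user-supplied hint.
/// `maskColor` stands in for the orange base of color negative film,
/// which would normally be estimated from the unexposed film rebate.
func convert(_ p: RGB, kind: FilmKind, maskColor: RGB) -> RGB {
    switch kind {
    case .colorPositive, .monoPositive:
        // Positives need no inversion; pass them straight through.
        return p
    case .colorNegative:
        // Divide out the orange mask, then invert each channel.
        return RGB(r: 1.0 - min(p.r / maskColor.r, 1.0),
                   g: 1.0 - min(p.g / maskColor.g, 1.0),
                   b: 1.0 - min(p.b / maskColor.b, 1.0))
    case .monoNegative:
        // Plain luminance inversion for black-and-white negatives.
        let y = 0.2126 * p.r + 0.7152 * p.g + 0.0722 * p.b
        return RGB(r: 1.0 - y, g: 1.0 - y, b: 1.0 - y)
    }
}

// Example: converting one sampled pixel from a color negative,
// with a made-up mask estimate.
let sample = RGB(r: 0.55, g: 0.38, b: 0.30)
let mask = RGB(r: 0.82, g: 0.56, b: 0.44)
print(convert(sample, kind: .colorNegative, maskColor: mask))
```

With an explicit hint like that, the app would only have to estimate the mask color and exposure, rather than also guessing whether it’s looking at a slide or a negative.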
With some of my color negatives, it seemed to find and display the correct colors quite quickly and accurately. When I touched the screen to scan the image, however, it was something of a crapshoot: once, it produced an acceptable color positive image (a bit blown out, but that could be the original); other times, I just got a photograph of the negative.
It’s pretty clear that it’s early days yet and that this is better described as ‘dev’ or ‘pre-alpha’ than ‘beta’ software, but that’s what we expected. It’s definitely moving in the right direction.
I wonder if it would give better results with some kind of stand to hold the iDevice steady and at a fixed distance from the film. I would think that hand-holding must lead inevitably to blur. Maybe someone will start a hardware Kickstarter for a FilmLab Stand/Scanning Station …