Interview with Andy Somerfield, lead Affinity Photo developer

With Affinity Photo approaching its fifth birthday we talk to the app’s lead developer, Andy Somerfield, to learn more about the creation of the app, what we can look forward to in the 1.8 version, and how it feels to have developed the first (and at the time of writing, only!) fully-fledged image editing app on iPad.
After its launch almost five years ago, you described Affinity Photo as “groundbreaking, coarse and incomplete. It is a version 1 after all!”—how would you describe it now?

“Getting better” I suppose! It’s a long game with products like this. I’m pleased to see that we are finding enough time to refine and improve the core tools and features in Photo—tools and features which benefit pretty much every user. That said, there is no ‘finishing line’ as such. Photo is being used by more people in more industries with more/different requirements each and every day. Keeping the product where those people need it to be is the ongoing challenge.

How do you go about adding new features? And how do you decide which ones to include?

It’s a mixture of a few things to be honest. We have a roadmap internally—things which we always wanted to be in Photo. That list is somewhat ordered; items on the list depend on other items and we try to implement at least some of the list with each point release.

Secondly, feedback from users is very important. With more and more people using Photo, we keep a close eye on what they’re asking for and implement things which we think will benefit a broad enough section of the user base. User-suggested features account for about 50% of what we add.

Finally, we try to predict where things are going to go in the future. Whilst this doesn’t always result in a new user-facing feature, it can dictate what ‘under the hood’ work we need to do to prepare ourselves for when features are actually required. Support for EDR monitors in 1.7 is a great example of this. On Mac, only the Pro Display XDR has genuinely useful EDR support. That device is not targeted at most of our users right now, but in time, all displays will have similar capability and it will fundamentally change how people shoot, develop, and edit photographs.

Support for HDR & EDR displays was added in Affinity Photo 1.7, including support for the new Apple Pro Display XDR.
How did developing the iPad version differ from developing the desktop version?

Well, we had the core of the application already—Affinity is completely platform agnostic aside from UI. The iPad just needed some UI and some extra work to make it go fast, such as Metal support.

UI was an interesting challenge for us and one of those things which will never be truly finished. Each subsequent version of Photo for iPad iterates more and more on UI—something which can be annoying for users, I concede—but there are no 30-year-old rules to follow here, so we feel our way in and listen to feedback even more than usual, both from users and internally within the product teams.

Why do you think no other development team has been able to recreate the success/power of Affinity Photo for iPad?

I would say timing. We started work on the Affinity codebase in 2009, so we had a good idea of what devices we would be targeting over the next 10 years or so. Affinity was not actually designed for iPad from the start (a common misconception)—it would be more correct to say that Affinity was designed for ‘a future iPad, perhaps three or four years down the line’. There was some guesswork here and fortunately the iPad hardware evolved in the way we predicted (desktop class CPU, GPU with shared memory, etc.).

That said, we made it more complicated for ourselves by deciding that it wasn’t worth shipping Photo on iPad unless it had all the features of the desktop product (or as near as was reasonable). Other companies who have entered this space talk about the concept of shipping the ‘MVP’ (Minimum Viable Product), which is fine if your users don’t have an expectation of what they’ll be buying. A cut down Photo-lite product, whilst possibly commercially successful, didn’t really interest us—there is a sea of them already and we wouldn’t stand out.

Affinity Photo for iPad

So, we kind of went for the ‘Maximum Viable Product’—like ‘how much of the desktop app can we actually get in here before we start to make compromises which would harm the experience too much?’. We got pretty much all of it in and the stuff we didn’t, we are starting to circle back to and fill in the holes.

This meant that, to be honest, some of the UI for some of the more advanced features was/is a little rough, but it’s like building a house—you don’t build the foundations for just one wall, brick it, plaster it, paint it and hang a picture on it before you start the foundations for the next wall. You build all the foundations first, then all the walls, and so on.

We’ve ended up with a nice big house which has a roof on but isn’t fully painted inside and lacks a few carpets. Other approaches we could have taken would have ended up with something uninhabitable, or a garden shed :)

What makes Affinity Photo unique among photo editing apps—what sets it apart?

I don’t know if we’re unique as such—being unique isn’t something we necessarily aim for—at least in terms of software engineering. That said, because we are working with a new, modern codebase, there are things that we can do architecturally that other products based on older code would never have the computing horsepower to realise (Live Filters are a good example of this).

Live Filters in Affinity Photo
Is there a feature or other specific part of the app that you feel most proud of?

The GPU support on Mac and iPad is pretty cool, although it’s still early days (we still need to ship it on Windows—soon!). I think it’s cool because to an extent it validates the underlying architecture. Photo was designed to accommodate GPU support from day one, but nobody knew what form that would take (eventually Metal). We made architectural decisions early on that we hoped would allow us to add good GPU support after a few years. It worked for pretty much every feature in Photo and enabled the iPad version of Photo to exist.

What has been the most challenging task in developing Affinity Photo?

Trying to predict where the industry will be in three to five years’ time and making sure we are laying the groundwork for that now—that’s by far the most challenging aspect of what we do.

How does it feel now knowing how successful both the desktop and iPad versions have become—were there any ‘pinch me’ moments or a specific event that made you feel especially proud of what you and your team had created?

The WWDC launch was a big highlight for us, although quite nerve-racking! We joked about it here when we got an early internal alpha build working on iPad—“This is keynote material right here! Lol!”. Who would have thought!

What can we look forward to with the 1.8 update?

PSD smart object support! At last!

Where do you see Affinity Photo in the next few years?

More stable on more platforms with more features for more types of users.