A couple of things happened recently that brought home something I’ve found increasingly important: technology decisions are social.
Social decisions in software architecture
The other day, in a conversation about team structures, the Guardian’s lead software architect, Mat Wall, mentioned that architecture is social. This is a good, and often disregarded, observation. In that context he meant that architectural decisions influence who works with whom, what issues they need to sort out together, how they regard their work, and so on. Three examples…
The Guardian has an API that allows access to its content. That means that when developing an iPhone application, the iOS team can work much more closely together, needing less knowledge of the guts of the back-end, and can focus more on the user experience.
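To make the decoupling concrete, here’s a minimal sketch of what that looks like from the client’s side. It’s in Scala rather than the Objective-C an iOS team would actually write, and the endpoint and parameter names are assumptions for illustration rather than the Content API’s exact interface. The point is only that the client depends on a narrow public surface, not on the CMS behind it:

    // Illustrative only: the search path and api-key parameter are assumed here,
    // not taken from the real Content API documentation.
    import scala.io.Source

    object ContentApiClient {
      def search(query: String, apiKey: String): String = {
        val url = s"http://content.guardianapis.com/search?q=$query&api-key=$apiKey"
        Source.fromURL(url).mkString   // raw JSON back; no back-end knowledge required
      }
    }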
Consider also the team embracing Scala. Our platform lead, Graham Tackley, has gone on record saying that one consequence is that it has “reinvigorated the team”. That’s not the only reason to take up a technology, but it is a significant consideration.
Third, I’ll relate the story of one of our recent projects. The team devised four candidate architectures and needed to choose between them. The first decision was to make the call in conjunction with the business owner. The second was to ask not “which architecture do you prefer?” but rather “what do you want your business to look like?”, because each architecture had a different impact on the end users inside and outside the company: certain things would be difficult, others would be easy, and so on.
Social decisions in algorithms
And it’s not just about architecture and software design. It’s true also of the more abstract matter of algorithms, as seen in a couple of examples from the masters (or should that be “slaves”?) of the algorithm: Google.
The most recent example concerns Google+. Rocky Agrawal wrote:
I finally blocked Robert Scoble in Google+. I have absolutely nothing against Scoble. I quite admire him, actually. He’s a great asset to the startup scene and he works damn hard. I’ve met him a few times and I’m sure we’ll meet again. But he was just getting to be way too much.
My Google+ feed was dominated by him. I tried to take a half-step and just remove Scoble from my circles. But then he became Google’s perpetual #1 suggestion for a new friend.
Google have created an algorithm which provides recommendations, and of course highly-referenced people will be recommended more than others. But Robert Scoble’s star power is clearly so significant that it’s ended up disrupting Rocky’s experience and getting in the way of him using the service effectively. The algorithm needs to be tweaked for the optimum user experience, and how that is done is entirely at the discretion of Google’s engineers. They could cap the influence of exceptional stars like Scoble if they wanted. They have choices, and ultimately those choices are based on social, human instincts.
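To make that concrete, here’s a minimal sketch of the kind of choice involved, written in Scala since that’s what we’ve been working in. None of it is Google’s actual code: the weights and the cap are invented parameters. The point is simply that a human has to pick them, and that picking them is a social act:

    // Hypothetical "people you may know" scoring. The followerCap is the kind of
    // knob an engineer could turn to stop celebrity accounts dominating everyone's list.
    case class Candidate(name: String, mutualContacts: Int, followers: Int)

    object FriendSuggestions {
      val followerCap = 10000   // invented value: caps the influence of exceptional stars

      def score(c: Candidate): Double =
        c.mutualContacts * 2.0 + math.min(c.followers, followerCap) / 1000.0

      def suggest(candidates: Seq[Candidate], n: Int): Seq[Candidate] =
        candidates.sortBy(c => -score(c)).take(n)
    }

With the cap in place, a well-connected colleague outranks a megastar with millions of followers; without it, the megastar wins every time. Either behaviour is a legitimate design, and choosing between them is not a mathematical question.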
The second Google example comes from the spat earlier this year between the Google search team and Bing. Google noticed that Bing was copying some of its search results; they demonstrated this by deliberately generating irrelevant Google search results for particular nonsense queries, and then observing that the same results appeared in Bing for the same queries; Microsoft said, sure, “We use multiple signals and approaches”, including observing what search results people click on, even if those clicks happen to be by Google engineers conducting a sting operation; Google cried foul; the world moved on.
What was happening was that the Bing toolbar was tracking search results that the user clicked on. If a user had the Bing toolbar installed, did a Google search, and clicked on a link, then the toolbar would send a message back to Bing HQ saying “Hey, someone thought this link was good for this query” and Bing would consider that next time it needed to respond to the same query.
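In code, the idea is something like this hypothetical sketch: a (query, clicked link) pair flows back as a signal, and the ranker nudges future results towards links people actually clicked. This is not Bing’s real mechanism or wire format, just the shape of the feedback loop:

    import scala.collection.mutable

    // One signal per click observed by the toolbar: "someone thought this link
    // was good for this query".
    case class ClickSignal(query: String, clickedUrl: String)

    class ClickstreamRanker {
      private val clicks = mutable.Map.empty[(String, String), Int].withDefaultValue(0)

      def record(s: ClickSignal): Unit =
        clicks((s.query, s.clickedUrl)) += 1

      // Prefer results that users have previously clicked for this query.
      def rank(query: String, candidates: Seq[String]): Seq[String] =
        candidates.sortBy(url => -clicks((query, url)))
    }

For a nonsense query with no genuinely relevant pages, the only signal available is the click itself, which is exactly why Google’s planted results came back out of Bing.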
I found the whole episode quite amusing and slightly baffling, most notably these rather indignant words on the Official Google Blog:
At Google we strongly believe in innovation and are proud of our search quality. We’ve invested thousands of person-years into developing our search algorithms because we want our users to get the right answer every time they search, and that’s not easy. We look forward to competing with genuinely new search algorithms out there—algorithms built on core innovation, and not on recycled search results from a competitor.
To be fair, I think that post was written before Google knew exactly how Bing was doing its job, but even so, they jumped to conclusions. My reading of the above is that, in Google’s view, “pure” algorithms using only digital, abstract data are good and acceptable, while algorithms that use human data and feedback from a wide variety of sources are cheating and bad.
Unfortunately that’s just wrong. Google may like to say they don’t editorialise and their search results are purely machine-generated. But what they choose to feed into their algorithm, and how they weight those things, is an entirely human affair influenced by very human traits: culture, prejudice, politics and more. That’s not a critique of Google or its staff; it’s a critique of human beings.
Jonathan Stray has made this point, too. He said “It’s impossible to build a computer system that helps people find or filter information without at some point making editorial judgements.” And he quoted Matt Cutts of Google: “In some sense when people come to Google, that’s exactly what they’re asking for — our editorial judgment. They’re expressed via algorithms.”
So however “pure” we think a technical decision might be, ultimately it’s not. Whether it’s software architecture, algorithms, or something else, it’s all influenced by and influences our social world.