Search engines are not neutral things. Their algorithms have the power to shape how we understand everything from breakfast cereals to geopolitics, and their sway is being thrown into relief at a time when we increasingly rely on the internet for a quick grasp of complex topics. In a climate of misinformation and fake news, search engines are a lens on how we see the world.
AI looks set to deepen this relationship, with search engines becoming more intelligent in how they respond to ripples across the internet, shifting their weight to kick up the most authoritative sources. At least, that’s the hope. The worry is that the less humans have to do with tweaking search algorithms, the more scope there is for machine learning to gloss over epistemological nuance.
While much attention has been on Google’s efforts to refit search, Microsoft has been quietly introducing changes to Bing. At the tail end of last year, it started to roll out “multi-perspective answers” on its platform. Instead of returning a standard list of links, the search engine associates particular questions with subjects that have multiple points of view, and presents these as equally weighted answers.
Search “Is coffee good for you?”, for example, and you’ll be met with a box-out at the top of the results containing two answers from two different sources, separated by a little ‘vs’ icon. According to Microsoft, this is a way to tackle the polarisation big internet platforms face, popping online filter bubbles by presenting multiple perspectives on a subject.
“In the current climate of fake news and misinformation on the web, we feel that search engines need to take a step up and provide as comprehensive results as possible,” Jordi Ribas, corporate vice president of AI Products at Microsoft and Bing, tells me.
“Otherwise people tend to be in their bubbles. If you use traditional algorithms you find that, whether it’s in social media or search, you provide results that are personalised but ultimately reinforce the biases that the person would already have. With this technology we’re trying to break those bubbles.”
Providing several responses to a given question could be read as letting users pick their own truth, and there’s also the danger of false equivalence on subjects that don’t warrant it: you don’t want pros and cons for flat-earth theories, for example. Ribas emphasises that the system rests on the authoritative weight of sources, however, and stresses that the approach is intended to encompass nuance, not misinformation.
Whatever the ethical thinking behind Bing’s new approach, the number of subjects currently given the multi-perspective treatment is tellingly limited. Innocuous questions about the relative merits of coffee or cholesterol have ‘positive’ and ‘negative’ responses, but anything approaching the political is off the table.
Type “Is Brexit a good idea?”, for example, or “Should Donald Trump be impeached?” and there’s no sign of a “multi-perspective answer”. Type in “Is there a God?” and the first answer links to an article with the title “Six straightforward reasons to believe that God is really there” – not exactly the nuanced future of search Microsoft talks about.
Ribas admits that Bing’s coverage of multifaceted search results is “relatively low at the moment”, and explains that the AI behind the multi-perspective answers needs to be refined before it can be set loose on contentious topics. “We’re going to try and fine tune the algorithms so that we can understand authoritativeness better, and have that lead to more objective results. Are we going to get it right all the time? No. There are so many questions people ask, and we’re going to make mistakes along the way, but we have the inspiration to provide as trustworthy information as possible.”
He explains that Bing uses something it calls “sentiment analysis” to gauge whether or not a question requires a multi-perspective answer. The machine-learning algorithms do this by clustering the documents returned for a given query into various “sentiments”, weighting these by the perceived authoritativeness of their sources. If one cluster drastically outweighs all the others, it is taken as the truth and presented as a definitive answer. If, however, two clusters of responses carry relatively equal weight, the query is flagged as one with more than one “valid” answer.
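The decision step Ribas describes can be sketched in a few lines. This is a hypothetical illustration, not Bing’s actual implementation: the function name, the `(cluster, authority)` input format and the dominance threshold are all assumptions, and the upstream sentiment clustering itself is left out.

```python
from collections import defaultdict

def answer_mode(docs, dominance_ratio=2.0):
    """Decide between a single definitive answer and a multi-perspective one.

    `docs` is a list of (sentiment_cluster, authority_weight) pairs, as might
    come out of an upstream clustering step (hypothetical format).
    """
    # Sum authority weight per sentiment cluster.
    weights = defaultdict(float)
    for cluster, authority in docs:
        weights[cluster] += authority

    # Rank clusters by total authority, heaviest first.
    ranked = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)

    if len(ranked) == 1 or ranked[0][1] >= dominance_ratio * ranked[1][1]:
        # One cluster drastically outweighs the rest: present it as definitive.
        return "single", [ranked[0][0]]
    # The top clusters carry comparable weight: surface both perspectives.
    return "multi", [ranked[0][0], ranked[1][0]]
```

On this sketch, “Is coffee good for you?” would return `"multi"` when the ‘positive’ and ‘negative’ clusters carry similar authority, and `"single"` when one side dominates.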
A spokesperson for the company told Alphr that publishers can submit themselves to the search engine as a verified source, which would flag their content as authoritative for the purposes of Bing’s algorithms: “Publishers can become a verified source of information on Bing, via pubhub.bing.com. News sites will be judged based on the following criteria and also manually checked by a team of journalists for newsworthiness, originality, authority, and readability.”
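One plausible way verification could feed into the weighting described above is as a simple boost to a source’s authority score. Again, this is purely illustrative: the multiplier and function are assumptions, not anything Microsoft has documented.

```python
VERIFIED_BOOST = 1.5  # assumed multiplier, not a documented Bing value

def authority_weight(base_score, is_verified):
    """Scale a source's base relevance score by its verification status.

    Hypothetical sketch of how a verified-publisher flag might raise a
    source's weight in the sentiment-cluster ranking.
    """
    return base_score * VERIFIED_BOOST if is_verified else base_score
```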
Despite the currently quite limited scope of “multi-perspective answers”, Ribas emphasises he wants his company to stand for “trustworthy AI”. An important part of this is having clear parameters in place to guide how Microsoft’s deep-learning systems develop, particularly around the insidious effects of training AI with biased data.
“AI has the particularity that it can learn and evolve, but at the end of the day it’s just algorithms,” he says. “What we are seeing is that these algorithms need to have parameters so that they do the right thing. For example, we talked about bias. If we let these algorithms learn from biased data then they’ll become biased. We need to have guidelines that help us ensure the data we use to train the algorithms is as unbiased as possible.”
Microsoft is a member of several groups dedicated to interrogating the practical and ethical questions around artificial intelligence, including the Partnership on AI and MIT’s Center for Brains, Minds and Machines (CBMM). These are the sorts of centres that endeavour to write the guidelines that Ribas gestures towards. It’s not only Western universities and tech companies forging practice around AI, however. China in particular is pushing to become a world leader in AI, with the surveillance-tech company SenseTime recently becoming the world’s most valuable AI startup.
(Jordi Ribas. Credit: Microsoft)
According to Ribas, competition between the West and other countries is “very close everywhere right now”. He points to the Stanford Question Answering Dataset (SQuAD) test, in which an AI must provide exact answers to more than 100,000 questions drawn from over 500 Wikipedia articles. Microsoft surpassed human performance on the test in January, but was pipped to the post by Chinese giant Alibaba.
Does the competition between Chinese companies and Western companies give Microsoft cause for concern, particularly considering the question of “trustworthy AI” and the differing societal values? “There are implications that definitely need to be thought through, not only with [societal] values but also [the] privacy laws of each country,” says Ribas. “In that sense, some countries may develop these technologies faster, because they’ll be more relaxed in terms of the privacy laws versus others. As time goes on we’ll have to see how it all evolves.”
These are large questions about the future ethical direction of AI. For Bing, the issue that’s closer to hand is how best to use its machine-learning capabilities for search. At a time when attention is being directed to how algorithms can shape the way we see the world, Ribas wants Bing to offer a new model for information on the internet. Whether or not it manages to do this remains to be seen.
“If we’re the only one doing it and it creates some significant differentiation and people come to Bing, great, at the end of the day we want that too. But this is more profound. We want to send a signal for search engines in particular to be as objective as possible. Objectivity in search couldn’t be more important than it is today.”