‘Project Owl’ – Google’s action to fix fake news and hateful content

Google has been well aware of the fake news and troubling search suggestions appearing at the top of its search results since late last year, and on Tuesday 25th April it announced some major changes to fix the problem.

After recently being reprimanded for producing some questionable (and some rather humorous) featured snippet answers, Google is now taking steps to clean up these results.

Whilst some of the answers might be quite amusing (and the majority of us can tell fact from fiction), Google, of course, has a responsibility to provide true and accurate information, given that so many people look to it as a trusted source.

‘Do you want [Google] to err on the side of free speech or on the side of censorship?’

Just as Facebook has been criticised for allowing “fake news” to circulate across its platform around the time of the US election, influencing people’s opinions of the candidates, Google’s featured snippets are promoted as an authoritative information source and therefore have a similar ability to influence.

Fake news has become more prevalent over recent months, with more and more fake stories making it to the top position in the search results and taking featured snippets, so it’s clear that something has to change. Google is taking control of the problem with Project Owl.


What is Project Owl?

Project Owl is Google’s new initiative that gives users options to report problematic content or other problems they uncover while searching.

There are four points of attack: the first two are improved ways for users to report an issue, the third is a more detailed manual assessment of content to ensure excellent search quality, and the fourth is ongoing tweaks to the algorithm.


1.      Modified report form to flag an inappropriate search prediction

Below you can see there is now an option in the search autocomplete suggestions to report a problematic prediction.

[Image: Google search showing the new report form beneath the search bar]

If you click on this, you can report one or more selected search predictions as ‘hateful’, ‘sexually explicit’, ‘violent’, ‘dangerous’ or something else.

[Image: The report form with categories for flagging a search prediction]

However, whilst this crackdown on dodgy content had to come eventually, Google must tread a careful line in deciding which content to demote. Whilst some fake news is easy to spot, ensuring all ranking content is truthful and accurate is not always as easy as it sounds.

Take this rather amusing example:

[Image: Google search showing a questionable featured snippet]

Whilst one study has been conducted and produced some findings on the topic, others have argued that its results are out of context or just plain wrong. Reports challenging the claim appear in 2nd, 6th and 9th position, and are dotted sparsely further down the SERPs. This is because sites are jumping on the opportunity to newsjack this controversial, click-baity topic, regardless of the wider context.


I saw this example last week, and the featured snippet still has not been corrected to reflect that only one study has suggested farts could be a good preventative method for avoiding cancer, and that the topic is still hotly debated.

This brings us to the second improvement Google is rolling out.


2.      New feedback form for reporting problematic or inappropriate featured snippets

Following on from our example above, you can now leave feedback if you see a featured snippet that looks a little questionable by clicking the highlighted Feedback link below the snippet:

[Image: The Feedback link beneath a featured snippet]

This greys out the SERPs and opens a form that allows you to flag the featured snippet as problematic – or leave positive feedback, if the snippet is in fact useful!

[Image: The feedback form for reporting a featured snippet]

Google’s featured snippets, and their ability to discern fact from fiction, may improve over time as Google gets better at vetting the sources of a piece of content, and as AI brings further possibilities for the algorithm to understand a piece within the context of the wider conversation. For now, though, the algorithm simply cannot catch everything, and humans are needed to reset the balance.


3.      Hiring 10,000 search quality raters to manually check content

Google has hired 10,000 search quality raters as part of a new process to manually flag and vet ‘upsetting or offensive’ content – this came shortly after queries such as whether the Holocaust ever happened brought the problem to public attention.

These raters will work from a comprehensive, previously released set of guidelines (first made public in 2013) outlining the possible problems that can occur with a search query and its results.

But having humans at the helm brings its own issues: bias, error and, once again, the possibility of failing to flag fiction presented as fact.

Andrew Nusca over at Fortune Tech discusses (very quickly!) some of the issues Google might face with journalist Jeff John Roberts.

The most poignant part is Nusca’s demand that we need ‘to get rid of hate speech.’

And whilst Google needs to try to be fair and produce truthful information, the real question is:

‘Do you want [Google] to err on the side of free speech or on the side of censorship?’

In my opinion, it has to be free speech – the good stuff just needs to sound out over the bad. This is what Google is going for, and frankly, I think they’ve hit the nail on the head.


4.      Continuing to update the algorithm to demote non-authoritative information

Google will always continue to update its algorithm to improve the quality of search results, but it is currently putting extra focus on ensuring it can ‘surface more authoritative pages and demote low-quality content’, as stated by Google’s Ben Gomes in his recent blog post.

In a statement Google gave to Search Engine Land, a spokesperson noted:

‘When non-authoritative information ranks too high in our search results, we develop scalable, automated approaches to fix the problems, rather than manually removing these one-by-one. We recently made improvements to our algorithm that will help surface more high quality, credible content on the web. We’ll continue to change our algorithms over time in order to tackle these challenges.’


So, what now?

Fake news and hateful content will always be out there on the web – there’s no question about that – and, for the sake of free speech, perhaps there always should be. The good stuff, the true stuff, just needs to drown out the unjust, the nasty and the incorrect.

Internet users are Google’s biggest resource, so enlisting them to help weed out hurtful and misleading content is the right place to start – and, thankfully, Google has done just that.

They have also recently made a move to fund some big fact-checking organisations, investing €150,000 in projects such as Full Fact, Snopes, PolitiFact, Factmata (created at University College London and University of Sheffield) and a few others including an Italian fact-checking project which received €47,000.

Whilst this was a great move on Google’s part in the crackdown on unverified content online, these projects and organisations also rely on manual vetting, so they hit the same snags discussed throughout this article.

However, the fact that this issue is now being properly addressed means that it is at the forefront of our minds and, over time, new faster and more efficient solutions will emerge.

Empowering people to be their own fact checkers is key, and Factmata’s founder, Dhruv Ghulati, said in his pitch to Google that they aimed to use machine intelligence ‘to empower people to question the digital content they read on a daily basis, and not take anything for granted’.
