Predicting a Google antitrust settlement, and creating a Twitter truth-o-meter
A man checks out the homepage of Google internet search engine in an office in Washington, DC, on February 8, 2011.
It's likely that Google will go into the holidays with what some see as a nice bouquet of holiday flowers courtesy of the Federal Trade Commission. The FTC could wrap up its antitrust investigation without forcing Google to make big changes to the way it displays search results. At the heart of the case were complaints about the way Google sometimes puts its own products at the head of the pack when listing results.
"If Google's able to get away with only having to put some sort of icon next to the results that are specific Google results because they're Google properties," says University of Chicago Law professor Randy Picker, "I think Google will have escaped."
Escaped what, exactly? The kind of penalties Microsoft faced in the early 2000s in an antitrust settlement over its operating system, Windows. Professor Picker says there are ways for Google to make its search results more transparent.
"You could structure those results so that for example," he says, "I could choose to make Yelp my reviews provider if I wanted to do that."
But Professor Picker doesn't think that will happen. Google has not been commenting ahead of a settlement, which could force it to make it easier for competitors to use some of its patents. Some of its critics want the search dispute to go beyond the FTC and be taken up by the Justice Department.
Microsoft this week has been needling Google, saying Google's online shopping guides sometimes only display the results of paying customers. Microsoft has taken out ads suggesting consumers risk getting quote "Scroogled" if they do a query with Google's "shopping" search. Microsoft would prefer customers use its Bing search engine instead.
A rumor mill is one thing, but in an emergency, does social media become an outright misinformation factory? Even as authorities in Newtown, Connecticut were dealing with vastly more pressing matters in the wake of the school shootings, they had to take time to warn the public not to believe what they were reading online. Stuff like threats from people pretending to be the killer, or an utterly fake letter supposedly written by one of the young victims. If only there were a "truth-o-meter," some software algorithm to help separate social media fact from fiction. Slate Magazine noticed that there's already a Chilean team on the case, and they're publishing a report on their findings.
"It all started because I was in Chile during the earthquake we had a few years ago," says Barbara Poblete, assistant professor of computer science at the University of Chile and co-author of a paper on the topic being published in the next issue of Internet Research. "There were a lot of rumors that were propagated on Twitter. Some of them were true, and others turned out to be false."
What are the signs a tweet is a bunch of horse-hockey? Her team found that question marks in a tweet are a red flag. Links to an internet address--a URL--raise the credibility score. First-person pronouns are a good sign, but third-person pronouns are suspect. Poblete says the age of a user's profile can also be a factor--a negative one if the profile hasn't been around for long. Another crucial finding is that what people in social networks judge as credible tends to be credible. But can you get a machine to simulate what is--essentially--critical thinking?
"Our results show that you can," says Poblete. "You can't say whether something is true or not, but we can say what the network thinks. In most cases when the network feels that information is credible, in real life it ends up being true."
Slate also notes research from India showing that swear words were signs of bunk. Plus, tweets with emoticons that look like frowns were more credible than those with smileys.
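To give a flavor of how those signals could combine, here is a toy scoring function built from the cues mentioned in this story. The weights, thresholds, and function name are all hypothetical illustrations; the researchers' actual system is a trained classifier, not a set of hand-tuned rules like this.

```python
import re

# Hypothetical pronoun lists used only for this sketch.
FIRST_PERSON = {"i", "we", "me", "us", "my", "our"}
THIRD_PERSON = {"he", "she", "they", "him", "her", "them"}

def credibility_score(tweet_text, account_age_days, network_endorsements=0):
    """Toy credibility score based on signals reported in the article.

    All weights are invented for illustration. Positive scores suggest
    more credible, negative scores suggest less credible.
    """
    score = 0.0

    # Question marks in a tweet are a red flag.
    score -= 1.0 * tweet_text.count("?")

    # A link to a URL raises the credibility score.
    if re.search(r"https?://\S+", tweet_text):
        score += 1.0

    # First-person pronouns are a good sign; third-person are suspect.
    words = re.findall(r"[a-z']+", tweet_text.lower())
    score += 0.5 * sum(w in FIRST_PERSON for w in words)
    score -= 0.5 * sum(w in THIRD_PERSON for w in words)

    # A profile that hasn't been around for long counts against it.
    if account_age_days < 30:
        score -= 1.0

    # What the network judges as credible tends to be credible.
    score += 0.25 * network_endorsements

    return score
```

In this sketch, a first-person eyewitness tweet with a photo link from an established account scores well above a question-riddled rumor from a brand-new profile, which is roughly the pattern the researchers describe.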