Notice how Google News shows the source at the bottom of each article excerpt.

The Guardian - ABC News - Reuters - Bloomberg

I'm trying to imitate that.

For example, given the URL http://www.washingtontimes.com/news/2010/dec/3/debt-panel-fails-test-vote/ I want to get back The Washington Times.

How can I do this with PHP?


Solution 1:

My answer expands on @AI W's suggestion of using the title of the page. Below is the code to accomplish what he described.

<?php

function get_title($url){
  $str = file_get_contents($url);
  if($str !== false && strlen($str) > 0){
    $str = trim(preg_replace('/\s+/', ' ', $str)); // collapse whitespace so line breaks inside <title> still match
    if(preg_match("/<title>(.*?)<\/title>/i", $str, $title)){ // case-insensitive match
      return $title[1];
    }
  }
  return null; // fetch failed or no <title> found
}
//Example:
echo get_title("http://www.washingtontimes.com/");

?>

OUTPUT

Washington Times - Politics, Breaking News, US and World News

As you can see, it is not exactly what Google displays, which leads me to believe that they take a URL's hostname and match it against their own list of source names, along the lines of:

http://www.washingtontimes.com/ => The Washington Times
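
A minimal sketch of that lookup idea, assuming you maintain the hostname-to-name map yourself (the get_source_name() helper and the array entries are just illustrative):

<?php
// Hypothetical lookup table mapping hostnames to display names.
$sources = array(
    'www.washingtontimes.com' => 'The Washington Times',
    'www.theguardian.com'     => 'The Guardian',
    'www.reuters.com'         => 'Reuters',
);

function get_source_name($url, array $sources){
    $host = parse_url($url, PHP_URL_HOST); // e.g. "www.washingtontimes.com"
    if($host && isset($sources[$host])){
        return $sources[$host];
    }
    return $host; // fall back to the bare hostname if it is not in the list
}

echo get_source_name("http://www.washingtontimes.com/news/2010/dec/3/debt-panel-fails-test-vote/", $sources);
// The Washington Times
?>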

Solution 2:

$doc = new DOMDocument();
@$doc->loadHTMLFile('http://www.washingtontimes.com/news/2010/dec/3/debt-panel-fails-test-vote/');
$xpath = new DOMXPath($doc);
echo $xpath->query('//title')->item(0)->nodeValue."\n";

Output:

Debt commission falls short on test vote - Washington Times

Obviously you should also implement basic error handling.
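
For example, a rough sketch of what that error handling might look like for the code above, using libxml_use_internal_errors() instead of the @ suppression:

<?php
$url = 'http://www.washingtontimes.com/news/2010/dec/3/debt-panel-fails-test-vote/';

libxml_use_internal_errors(true); // collect HTML parse warnings instead of printing them

$doc = new DOMDocument();
if (!$doc->loadHTMLFile($url)) {
    die("Could not load or parse $url\n");
}

$xpath = new DOMXPath($doc);
$titles = $xpath->query('//title');
if ($titles->length === 0) {
    die("No <title> element found\n");
}

echo trim($titles->item(0)->nodeValue) . "\n";
?>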

Solution 3:

Using get_meta_tags() on the domain's home page (for the NYT, for example) brings back something that might need truncating but could be useful.

$b = "http://www.washingtontimes.com/news/2010/dec/3/debt-panel-fails-test-vote/" ;

$url = parse_url( $b ) ;

$tags = get_meta_tags( $url['scheme'].'://'.$url['host'] );
var_dump( $tags );

The dump includes the description 'The Washington Times delivers breaking news and commentary on the issues that affect the future of our nation.'
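
If the description is what you want, a rough sketch of pulling it out and truncating it (the 80-character cap is arbitrary, and it assumes the home page actually sets a description meta tag):

<?php
$b = "http://www.washingtontimes.com/news/2010/dec/3/debt-panel-fails-test-vote/";
$url = parse_url($b);

$tags = get_meta_tags($url['scheme'].'://'.$url['host']);

if (isset($tags['description'])) {
    // Keep only the first sentence, capped at 80 characters.
    $firstSentence = strtok($tags['description'], '.');
    echo substr($firstSentence, 0, 80) . "\n";
} else {
    echo "No description meta tag found\n";
}
?>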

Solution 4:

You could fetch the contents of the URL and do a regular expression search for the content of the title element.

<?php
$urlContents = file_get_contents("http://example.com/");
preg_match("/<title>(.*)<\/title>/i", $urlContents, $matches);

print($matches[1] . "\n"); // "Example Web Page"
?>

Or, if you'd rather not use a regular expression just to match something near the top of the document, you can use a DOMDocument object:

<?php
$urlContents = file_get_contents("http://example.com/");

$dom = new DOMDocument();
@$dom->loadHTML($urlContents);

$title = $dom->getElementsByTagName('title');

print($title->item(0)->nodeValue . "\n"); // "Example Web Page"
?>

I leave it up to you to decide which method you like best.

Solution 5:

A basic example from the PHP manual on cURL:

<?php

$ch = curl_init("http://www.example.com/");
$fp = fopen("example_homepage.txt", "w");

curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);

curl_exec($ch);
curl_close($ch);
fclose($fp);
?>

And an example from the PHP manual on Perl-compatible regex matching (preg_match()):

<?php
$subject = "abcdef";
$pattern = '/^def/';
preg_match($pattern, $subject, $matches, PREG_OFFSET_CAPTURE, 3);
print_r($matches); // prints an empty array: even with offset 3, ^ still anchors to the start of the whole string
?>

And putting those two together:

<?php 
// create curl resource 
$ch = curl_init(); 

// set url 
curl_setopt($ch, CURLOPT_URL, "example.com"); 

//return the transfer as a string 
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); 

// $output contains the output string 
$output = curl_exec($ch); 

$pattern = '/<title>([^<]*)<\/title>/i';

preg_match($pattern, $output, $matches);

print_r($matches);

// close curl resource to free up system resources 
curl_close($ch);      
?>

I can't promise this example will work since I don't have PHP here, but it should help you get started.