Tag Archives: scraping

Based on my last post about scraping the Google SERP, I decided to make a small change and scrape the organic search results from Bing.

I wasn’t able to find a way to display 100 results per page on Bing, so this script only returns the top 10. It could be extended to loop through the pages of results; I’ve left that out of the script itself, but there’s a sketch of the idea after the code below.

Example Usage:

$ python BingScrape.py
http://twitter.com/halotis
http://www.halotis.com/
http://www.halotis.com/progress/
http://doi.acm.org/10.1145/367072.367328
http://runtoloseweight.com/privacy.php
http://twitter.com/halotis/statuses/2391293559
http://friendfeed.com/mfwarren
http://www.date-conference.com/archive/conference/proceedings/PAPERS/2001/DATE01/PDFFILES/07a_2.pdf
http://twitterrespond.com/
http://heatherbreen.com

Here’s the Python Code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# (C) 2009 HalOtis Marketing
# written by Matt Warren
# http://halotis.com/
 
import urllib, urllib2
 
from BeautifulSoup import BeautifulSoup
 
def bing_grab(query):
    """Return the organic result URLs from the first page of a Bing search."""
    address = "http://www.bing.com/search?q=%s" % (urllib.quote_plus(query))
    # Spoof a browser User-Agent so Bing serves the normal HTML results page
    request = urllib2.Request(address, None, {'User-Agent':'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)'} )
    urlfile = urllib2.urlopen(request)
    page = urlfile.read(200000)
    urlfile.close()
 
    # Each organic result title is an <h3> inside the div with id="results"
    soup = BeautifulSoup(page)
    links = [x.find('a')['href'] for x in soup.find('div', id='results').findAll('h3')]
 
    return links
 
if __name__=='__main__':
    # Example: print the result URLs for a search
    links = bing_grab('halotis')
    print '\n'.join(links)
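
If you do want more than the top 10, Bing’s result pages can be stepped through with a first query parameter in the URL (first=11 for the second page, and so on). Here’s a rough sketch of how the script above could be extended; the first parameter and the 10-results-per-page assumption are based on how Bing’s URLs looked when I checked, and the bing_grab_pages name is just mine, so treat this as a starting point rather than gospel:

import time
import urllib, urllib2
 
from BeautifulSoup import BeautifulSoup
 
def bing_grab_pages(query, num_pages=3):
    """Collect organic result URLs from the first few Bing result pages.
 
    Assumes Bing paginates with a 'first' parameter (first=1, 11, 21, ...).
    """
    links = []
    for page in range(num_pages):
        address = "http://www.bing.com/search?q=%s&first=%d" % (
            urllib.quote_plus(query), page * 10 + 1)
        request = urllib2.Request(address, None,
            {'User-Agent':'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)'})
        page_html = urllib2.urlopen(request).read(200000)
        soup = BeautifulSoup(page_html)
        results = soup.find('div', id='results')
        if not results:
            break  # no results container -- probably ran past the last page
        links += [x.find('a')['href'] for x in results.findAll('h3')]
        time.sleep(1)  # be polite between requests
    return links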

Sometimes it’s useful to know where all the back-links to a website are coming from.

Back-link data tells you how your competition is promoting their site. By finding out who links to your competitors, you can shortcut the process of finding good places to get links from, and spot who might be a client or a good contact for your business.

If you’re buying or selling a website, the number and quality of its back-links helps determine its value. Checking the links to a site should be on the checklist you use when buying a website.

With that in mind I wrote a short script that scrapes the links to a particular domain from the list that Alexa provides.

import urllib2
 
from BeautifulSoup import BeautifulSoup
 
def get_alexa_linksin(domain):
    """Return (url, anchor text) tuples for the back-links Alexa lists for domain."""
    page = 0
    linksin = []
 
    while True:
        # Alexa's "links in" pages are numbered after the semicolon
        url = 'http://www.alexa.com/site/linksin;' + str(page) + '/' + domain
        req = urllib2.Request(url)
        HTML = urllib2.urlopen(req).read()
        soup = BeautifulSoup(HTML)
 
        # A "next" link inside the linksin div means there are more pages
        next_link = soup.find(id='linksin').find('a', attrs={'class':'next'})
 
        # Grab every anchor in the linksin div (may include navigation links)
        linksin += [(link['href'], link.string) for link in soup.find(id='linksin').findAll('a')]
 
        if next_link:
            page = page + 1
        else:
            break
 
    return linksin
 
if __name__=='__main__':
    # Example: print the list of (url, anchor text) tuples for halotis.com
    linksin = get_alexa_linksin('halotis.com')
    print linksin
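
If you’d rather have the results in a spreadsheet than dumped to the console, the list of (url, anchor text) tuples maps naturally onto a CSV file. Here’s a minimal sketch using Python’s csv module; the save_linksin_csv name and the linksin.csv filename are just examples:

import csv
 
def save_linksin_csv(domain, filename='linksin.csv'):
    # One row per back-link: the linking URL and its anchor text
    linksin = get_alexa_linksin(domain)
    f = open(filename, 'wb')
    writer = csv.writer(f)
    writer.writerow(['url', 'anchor text'])
    for href, text in linksin:
        # link.string can be None, so substitute an empty string
        writer.writerow([href, (text or u'').encode('utf-8')])
    f.close()
 
save_linksin_csv('halotis.com')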