Tag Archives: clickbank

I got an email the other day from Frank Kern who was pimping another make-money-online product from his cousin Trey. The Number Effect is a DVD containing the results of an experiment where he created an affiliate link for every one of the 12,000 products for sale on ClickBank, sent paid (PPV) traffic to all of those links, and tracked which ones were profitable. Out of those 12,000 products he found 54 niches with profitable campaigns.

Trey went on to talk about the software that he had written for this experiment. It apparently took his outsourced programmer a bit of work to get it going.

I thought it would be fun to try to implement the same script myself. It took about an hour to program the whole thing.

So if you want to create your own ClickBank affiliate link for every ClickBank product for sale, here's a script that will do it. Keep in mind that I never did any work to make this thing fast, and it takes about 8 hours to scrape all 13,000 products, create the affiliate links, and resolve the URLs they redirect to. Sure, I could make it faster, but I'm lazy (a quick sketch of one way to parallelize it follows the script below).

Here’s the Python script to do it:

#!/usr/bin/env python
# encoding: utf-8
"""
ClickBankMarketScrape.py
 
Created by Matt Warren on 2010-09-07.
Copyright (c) 2010 HalOtis.com. All rights reserved.
 
"""
 
 
 
CLICKBANK_URL = 'http://www.clickbank.com'
MARKETPLACE_URL = CLICKBANK_URL+'/marketplace.htm'
AFF_LINK_FORM = CLICKBANK_URL+'/info/jmap.htm'
 
AFFILIATE = 'mfwarren'
 
import urllib, urllib2
from BeautifulSoup import BeautifulSoup
import re
 
product_links = []
product_codes = []
pages_to_scrape = []
 
def get_category_urls():
	request = urllib2.Request(MARKETPLACE_URL, None)
	urlfile = urllib2.urlopen(request)
	page = urlfile.read()
	urlfile.close()
 
	soup = BeautifulSoup(page)
	parentCatLinks = [x['href'] for x in soup.findAll('a', {'class':'parentCatLink'})]
	return parentCatLinks
 
def get_products():
 
	fout = open('ClickBankLinks.csv', 'w')
 
	while len(pages_to_scrape) > 0:
 
		url = pages_to_scrape.pop()
		request = urllib2.Request(url, None)
		urlfile = urllib2.urlopen(request)
		page = urlfile.read()
		urlfile.close()
 
		soup = BeautifulSoup(page)
 
		results = [x.find('a') for x in soup.findAll('tr', {'class':'result'})]
 
		nextLink = soup.find('a', title='Next page')
		if nextLink:
			pages_to_scrape.append(nextLink['href'])
 
		for product in results:
			try:
				product_code = str(product).split('.')[1]
				product_codes.append(product_code)
				m = re.search('^<(.*)>(.*)<', str(product))
				title = m.group(2)
				my_link = get_hoplink(product_code)
				request = urllib2.Request(my_link)
				urlfile = urllib2.urlopen(request)
				display_url = urlfile.url
				#page = urlfile.read()  #continue here if you want to scrape keywords etc from landing page
 
				print my_link, display_url
				product_links.append({'code':product_code, 'aff_link':my_link, 'dest_url':display_url})
				fout.write(product_code + ', ' + my_link + ', ' + display_url + '\n')
				fout.flush()
			except:
				continue  # handle cases where destination url is offline
 
	fout.close()
 
def get_hoplink(vendor):
	request = urllib2.Request(AFF_LINK_FORM + '?affiliate=' + AFFILIATE + '&promocode=&submit=Create&vendor='+vendor+'&results=', None)
	urlfile = urllib2.urlopen(request)
	page = urlfile.read()
	urlfile.close()
	soup = BeautifulSoup(page)
	link = soup.findAll('input', {'class':'special'})[0]['value']
	return link
 
if __name__=='__main__':
	urls = get_category_urls()
	for url in urls:
		pages_to_scrape.append(CLICKBANK_URL+url)
	get_products()
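
As mentioned above, the slow part is resolving each hoplink one at a time, so if the 8 hour run bothers you the easiest win is to do that step in parallel. Here's a rough sketch of how that could look using a thread pool; it's not part of the original script, and resolve_product() is a hypothetical helper that just wraps the same get_hoplink() and urllib2 calls from above:

# Rough sketch only (not from the original post): parallelize the slow
# hoplink-resolution step with a thread pool so the 8 hour run shrinks.
# resolve_product() is a hypothetical helper wrapping the same get_hoplink()
# and urllib2 calls used in the script above.
from multiprocessing.dummy import Pool  # thread-backed Pool, good enough for network I/O

def resolve_product(product_code):
    try:
        my_link = get_hoplink(product_code)
        display_url = urllib2.urlopen(urllib2.Request(my_link)).url
        return {'code': product_code, 'aff_link': my_link, 'dest_url': display_url}
    except:
        return None  # destination url offline or hoplink creation failed

def resolve_all(product_codes, workers=20):
    pool = Pool(workers)
    results = pool.map(resolve_product, product_codes)
    pool.close()
    pool.join()
    return [r for r in results if r is not None]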

Clickbank is an amazing service that allows anyone to easily either create and sell information products as a publisher, or promote other people's products for a commission as an affiliate. Clickbank handles the credit card transactions and refunds, while affiliates can earn as much as 90% of the price of the products as commission. It's a pretty easy-to-use system and I have used it both as a publisher and as an affiliate to make significant amounts of money online.

The script I have today is a Python program that uses Clickbank’s REST API to download the latest transactions for your affiliate IDs and stuffs the data into a database.

The reason for doing this is that it keeps the data in your control and allows you to more easily see all of the transactions for all your accounts in one place, without having to constantly log in to your accounts at clickbank.com. I'm going to be including this data in my Business Intelligence Dashboard Application.

One of the new things I did while writing this script was to make use of SQLAlchemy to abstract the database. This means it should be trivial to convert it over to use MySQL – just change the connection string.
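
For example, switching over to MySQL should just mean swapping the CONNSTRING value; the user, password, and database name below are made up, and you'd also need a MySQL driver such as MySQLdb installed:

# default: local SQLite file (what the script below uses)
CONNSTRING = 'sqlite:///clickbank_stats.sqlite'

# hypothetical MySQL equivalent - same SQLAlchemy code, different connection string
#CONNSTRING = 'mysql://db_user:db_password@localhost/clickbank_stats'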

Also note that to use this script you'll need to get the "Clerk API Key" and the "Developer API Key" from your Clickbank account. To generate those keys, go to the Account Settings tab from the account dashboard. If you have more than one affiliate ID, you'll need one Clerk API Key per affiliate ID.
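
So if you were tracking two affiliate IDs, the ACCOUNTS and DEV_API_KEY settings at the top of the script would look something like this (the IDs and keys here are just placeholders):

# one Clerk API Key per affiliate ID, one Developer API Key for the whole account
ACCOUNTS = [
    {'account': 'myfirstid',  'API_key': 'CLERK-API-KEY-FOR-MYFIRSTID'},
    {'account': 'mysecondid', 'API_key': 'CLERK-API-KEY-FOR-MYSECONDID'},
]
DEV_API_KEY = 'YOUR-DEVELOPER-API-KEY'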

This is the biggest script I have shared on this site yet. I hope someone finds it useful.

Here’s the code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# (C) 2009 HalOtis Marketing
# written by Matt Warren
# http://halotis.com/
 
import csv
import httplib
import logging
from datetime import datetime
 
from sqlalchemy import Table, Column, Integer, String, MetaData, Date, DateTime, Float
from sqlalchemy.schema import UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
 
LOG_FILENAME = 'ClickbankLoader.log'
logging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG,filemode='w')
 
#generate these keys in the Account Settings area of ClickBank when you log in.
ACCOUNTS = [{'account':'YOUR_AFFILIATE_ID',  'API_key': 'YOUR_API_KEY' },]
DEV_API_KEY = 'YOUR_DEV_KEY'
 
CONNSTRING='sqlite:///clickbank_stats.sqlite'
 
Base = declarative_base()
class ClickBankList(Base):
    __tablename__ = 'clickbanklist'
    __table_args__ = (UniqueConstraint('date','receipt','item'),{})
 
    id                 = Column(Integer, primary_key=True)
    account            = Column(String)
    processedPayments  = Column(Integer)
    status             = Column(String)
    futurePayments     = Column(Integer)
    firstName          = Column(String)
    state              = Column(String)
    promo              = Column(String)
    country            = Column(String)
    receipt            = Column(String)
    pmtType            = Column(String)
    site               = Column(String)
    currency           = Column(String)
    item               = Column(String)
    amount             = Column(Float)
    txnType            = Column(String)
    affi               = Column(String)
    lastName           = Column(String)
    date               = Column(DateTime)
    rebillAmount       = Column(Float)
    nextPaymentDate    = Column(DateTime)
    email              = Column(String)
 
    format = '%Y-%m-%dT%H:%M:%S'
 
    def __init__(self, account, processedPayments, status, futurePayments, firstName, state, promo, country, receipt, pmtType, site, currency, item, amount , txnType, affi, lastName, date, rebillAmount, nextPaymentDate, email):
        self.account            = account
        if processedPayments != '':
            self.processedPayments  = processedPayments
        self.status             = status
        if futurePayments != '':
            self.futurePayments     = futurePayments
        self.firstName          = firstName
        self.state              = state
        self.promo              = promo
        self.country            = country
        self.receipt            = receipt
        self.pmtType            = pmtType
        self.site               = site
        self.currency           = currency
        self.item               = item
        if amount != '':
            self.amount             = amount
        self.txnType            = txnType
        self.affi               = affi
        self.lastName           = lastName
        self.date               = datetime.strptime(date[:19], self.format)
        if rebillAmount != '':
            self.rebillAmount       = rebillAmount
        if nextPaymentDate != '':
            self.nextPaymentDate    = datetime.strptime(nextPaymentDate[:19], self.format)
        self.email              = email
 
    def __repr__(self):
        return "<clickbank ('%s - %s - %s - %s')>" % (self.account, self.date, self.receipt, self.item)
 
def get_clickbank_list(API_key, DEV_key):
    conn = httplib.HTTPSConnection('api.clickbank.com')
    conn.putrequest('GET', '/rest/1.0/orders/list')
    conn.putheader("Accept", 'text/csv')
    conn.putheader("Authorization", DEV_key+':'+API_key)
    conn.endheaders()
    response = conn.getresponse()
 
    if response.status != 200:
        logging.error('HTTP error %s' % response)
        raise Exception(response)
 
    csv_data = response.read()
 
    return csv_data
 
def load_clickbanklist(csv_data, account, dbconnection=CONNSTRING, echo=False):
    engine = create_engine(dbconnection, echo=echo)
 
    metadata = Base.metadata
    metadata.create_all(engine) 
 
    Session = sessionmaker(bind=engine)
    session = Session()
 
    data = csv.DictReader(iter(csv_data.split('\n')))
 
    for d in data:
        item = ClickBankList(account, **d)
        #check for duplicates before inserting
        checkitem = session.query(ClickBankList).filter_by(date=item.date, receipt=item.receipt, item=item.item).all()
 
        if not checkitem:
            logging.info('inserting new transaction %s' % item)
            session.add(item)
 
    session.commit()
 
if __name__ == '__main__':
    try:
        for account in ACCOUNTS:
            csv_data = get_clickbank_list(account['API_key'], DEV_API_KEY)
            load_clickbanklist(csv_data, account['account'])
    except:
        logging.exception('Crashed')
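
Once the transactions are in the database you can query them back out with the same SQLAlchemy model. Here's a quick sketch, not part of the script itself, that prints the ten most recent transactions from the SQLite file:

# quick sketch: read the stored transactions back out of the database
engine = create_engine(CONNSTRING)
Session = sessionmaker(bind=engine)
session = Session()
for txn in session.query(ClickBankList).order_by(ClickBankList.date.desc()).limit(10):
    print txn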