Category Archives: Internet

I’m ramping up a couple of eCommerce websites as part of a strategy to create a home-based business that my wife can take charge of when our daughter starts school in 9 months.  Part of building the foundation for a business like that is getting re-acquainted with the world of internet marketing… something I was deep into several years ago but am now way out of date on.

Facebook Ads seems to be an interesting advertising platform now, and it is significantly different from search advertising.  I’ve been reading and learning about various strategies for marketing on Facebook, and I’m hearing about some really crazy and convoluted advertising funnels, like leveraging cheaper advertising markets for likes and engagement to drive down costs and improve social proof in the target market.

The interesting marketing psychology is that people are not on Facebook looking to buy products, so ads that overtly sell something stand out as ads and fail to engage. There really needs to be some kind of entertainment or curiosity value.  Like paid advertorials in print media, the goal should be to blend in with the rest of a person’s news feed and not stand out as an advertisement.

To that end I’m going to be pumping a fair bit of money into Facebook over the coming months to test ad campaigns and find viable ways to run an eCommerce business.


One of the welcome additions to Amazon’s AWS offerings is Lightsail, a simplified server provisioning service that competes directly with DigitalOcean.  Lightsail provides a nicer web UI for launching instances, many quick-launch options for common apps like WordPress or GitLab, and simplified billing (yay!).  With Lightsail you don’t need to pre-pay for Reserved Instances to get a good price on an EC2 server.

Dokku is a mini Heroku you can run on your own servers.  It uses the same buildpacks that Heroku does to enable git push deployments. Because it builds on top of Docker, a collection of available Dokku plugins makes it easy to spin up databases, caching, or other services.  In this tutorial I add PostgreSQL and get an SSL cert using Let’s Encrypt.

Together, Lightsail and Dokku create an easy way to manage your application deployment on an inexpensive server.

Get started on Lightsail by launching a new virtual server and selecting an Ubuntu image.

In the launch flow there’s a spot labeled ‘Add launch script’ where you can drop in these commands to automatically install Dokku on first boot:

wget https://raw.githubusercontent.com/dokku/dokku/v0.7.2/bootstrap.sh
sudo DOKKU_TAG=v0.7.2 bash bootstrap.sh

Give it a name and press Create to start booting up the server. You should be able to SSH to the new server very quickly, though you can connect before Dokku and the package updates have finished installing (it’ll take a couple of minutes for the dokku command to become available).

After a couple of minutes have passed and everything is installed and running, visit your server’s public IP address in a web browser to reach the Dokku setup page.

For the public key you’ll want to grab the key from your computer.  If you’re on Linux or macOS you can grab the contents of ~/.ssh/id_rsa.pub.  If you need to generate a key there’s a good how-to on GitHub about generating them.
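
If it helps, the whole key dance from a terminal looks something like this (the email address is just a placeholder):

ssh-keygen -t rsa -b 4096 -C "you@example.com"   # only if you don't have a key yet
cat ~/.ssh/id_rsa.pub                            # paste this output into the setup page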

Set the hostname you’ll use for the server if you have one, and click Finish Setup.

The next step is to SSH to the server and fiddle with it there, using the private key you can download from Lightsail:

ssh -i LightsailDefaultPrivateKey.pem ubuntu@<YOUR PUBLIC IP ADDRESS>

And create the app you will be deploying:

dokku apps:create your-app

Add a PostgreSQL database (there are other great plugins available for Dokku too):

sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git
dokku postgres:create database-name
dokku postgres:link database-name your-app
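
If the link worked, the connection string should now be in the app’s environment (assuming the plugin’s default behavior of exporting a DATABASE_URL variable), which you can verify with:

dokku config your-app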

Now, back in your local app, add a git remote pointing at the new server and deploy:

git remote add dokku dokku@<PUBLIC IP OR HOSTNAME>:your-app
git push dokku master

If that is successful then the project should be visible online. Yay!

Then there are some next steps to help complete the app. Set any environment variables you need for the app:

dokku config:set your-app ENV=prod

You can install an SSL cert using Let’s Encrypt very easily:

sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
dokku letsencrypt your-app
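
One note: if I remember the plugin’s requirements correctly, it wants a contact email for Let’s Encrypt registration set before you run the command above (the address here is a placeholder):

dokku config:set --no-restart your-app DOKKU_LETSENCRYPT_EMAIL=you@example.com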

You can configure some pre- and post-deploy hooks in an app.json file in your project repository to run checks or execute database migrations.
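
As a rough sketch of what that file can look like (assuming a Django-style migrate command; check the Dokku docs for the exact schema):

{
  "scripts": {
    "dokku": {
      "predeploy": "echo 'running pre-deploy checks'",
      "postdeploy": "python manage.py migrate"
    }
  }
}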

That’s about it! From now on, a git push deploys updates to your project whenever you want.

I consider myself a fairly well-rounded developer. So when it comes time to choose a technology to accomplish a task I will happily choose the one that is the best fit, whether that’s Ruby, Python, Java, Objective-C, PHP, CSS, Bash, or anything else.

Recently I was given a project to take on.  At first it looked like it would be adding some PHP code to an inherited code base.  Upon starting, it became apparent that the code was a disaster and too far gone to be recoverable… a full rewrite would probably have been best, but it was way out of budget for now.  Rather than continue patching in the new features we needed, we decided to create a new side project that tied into the existing database to accomplish what we needed.  It gave me a chance to evaluate how I wanted to build this application.

Generally my team does Ruby on Rails work.  However, given that the existing database schema was not even close to normalized or conventional, Rails would have been a difficult way to go.  There would have needed to be a lot of boilerplate code to map things to new field names.  It just wasn’t a good fit.

Instead I opted to go with a simple, lightweight Node.js app.  It would leverage the SQL queries copied out of the original PHP code with minimal overhead.  The app is heavy on database I/O and has zero application logic, so Node is a good fit.  We already have JavaScript skills on the team and experience with testing frameworks for JavaScript code, so supporting it wouldn’t be difficult.

Turns out, it didn’t go over well with some people on the team.

“We use Rails as a technology stack”

Passion can run high in technology circles over tools. Apple vs. Windows, Vim vs. Emacs: there are religious fights about which is better.

I like to be good at a breadth of technologies; that means staying sharp on a fistful of languages and tools while always looking to learn new things.  Others like to focus on being world-class with one technology stack.

Which is the better approach?

The web is still suffering through a page refresh disaster. HTTP started with simple pages, where a full page reload on navigation wasn’t so bad. Modern sites have boatloads of image, CSS, JS, and HTML files on each page that must be requested on every page load and checked to see if they have changed and need re-downloading.

It’s wasteful and slows down everyone’s browsing experience.

There are various approaches to overcoming these shortfalls: gzip everything, build image atlas files, concatenate and uglify JavaScript, and so on. These add further technical complexity on the server side and nuances that make web development more difficult to get right.

The rise of client side MVC frameworks is a breath of fresh air in web development.

Ajax was just a stepping stone to this new web development paradigm.

There are many competing frameworks right now: Ember.js, Backbone.js, Angular.js, and Meteor.js to name a few.  They provide several big wins for web developers: simplification on the server side (by removing the need for template engines, asset management, and a multitude of views).  This also allows for greater performance on the server side, since template rendering is generally slow and rendering a full page is no longer required.  Web backends can be lean and optimized much more easily.  It lowers the bar for more performant languages like Go to compete with Django and Ruby on Rails.

With an API-first approach, front-end web development can be completely isolated.  All front-end files can reside on high-performance, highly available, cheap hosting like Amazon S3 and be CDN’d around the world.

Done smartly, the backend can serve multiple front-end technologies: iOS, Android, Windows Phone, and desktop apps can all use the same API.  It’s the best of both worlds: you get the superior feel of a native app while maintaining much of the code savings of consolidating business logic on the server.

For the past few years we’ve been living in a web development world where there is a tug of war over which pages, or parts of pages, should be rendered on the server side and what should be done with JavaScript and DOM manipulation.  It’s a design decision that has to be thought about constantly when working on complex websites, and it generally creates a messy code base with JavaScript scattered everywhere.  The future of API-first web design makes it very clear: the server provides JSON, and JavaScript on the client side does the rest.
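
To make that split concrete, the entire server-side contract for, say, a comments feature boils down to JSON over HTTP. The endpoint and fields here are hypothetical, purely for illustration:

$ curl http://api.example.com/comments/42
{"id": 42, "author": "matt", "body": "API first!", "created_at": "2014-01-01T00:00:00Z"}

Everything else (templating, rendering, interaction) happens in the client-side framework.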

From a testing perspective the backend becomes easier to unit test. CRUD interfaces and validations are simple to assert.

The messiness of testing the UI now falls entirely on the JavaScript application, where those tests can be written without needing a complete web stack of infrastructure behind them. The MVC frameworks nudge you towards more organized and easier-to-test JavaScript code.

For your next web development project, consider taking this new approach to the web.  Simplify your backend by building a RESTful JSON API only, and simplify your front-end by using one of the JavaScript MVC frameworks.

I’m looking at launching a series of new web services in the future, and that got me thinking about finding ways to improve my development and deployment workflow to make it super trivial to launch new services and APIs.

Recent experience with Heroku’s PaaS got me looking in that direction. Pushing a git repository to kick off a server build is an elegantly simple approach that makes spinning up new services a snap.

However, Heroku costs can quickly spiral out of control. Running something similar myself on raw Amazon EC2 or a local dev server would be ideal.

Thanks to several new projects it’s possible to create an easy-to-use mini Heroku on your own server.

Docker is a brilliantly simple idea that has sparked a huge following in the last few months.  It is one of the most exciting developments in open source at the moment.  Basically, Docker lets you run extremely lightweight containers, like tiny virtual machines, on top of Linux.  You can spin services up and down in 1-2 seconds, isolated from the host server, which lets you play with various server configurations quickly, easily, and without touching the core system.

Docker allows you to specify your entire server configuration in a Dockerfile, of which there are already thousands defined, shared, and searchable at http://index.docker.io.  Need a WordPress server? Redis? Memcached? You can now get them up and running in a virtualized environment extremely easily.
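
The basic workflow is only a couple of commands. A minimal sketch using the stock Ubuntu base image:

docker pull ubuntu                    # fetch the base image from the index
docker run -i -t ubuntu /bin/bash     # boot a container and get a shell inside it

From there you can install whatever services you need and commit the result as a new image.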

Dokku builds on top of Docker to replicate the most impressive feature of Heroku: git push to deploy.  Once set up, you can configure a git remote and have Dokku automatically build and run a Docker container for your app and host it on a subdomain.  Brilliant.

Managing all these Docker services can be done with another awesome tool called Shipyard, which brings your virtualized multi-host environment under control.  It even supports automatic restarts of crashed services.

Getting it all up and running has been fairly straightforward. Once in place, it will make managing my own cloud of services much more robust.

I’ve become a little more paranoid over the last few weeks about computer security. I used to think I’d be fine: why would anyone want to hack my computer and dig through my personal files? But after a bit of reading I’m more concerned about the far more likely scenario of mass spying and infiltration programs either churning through my network traffic or roping my computers into a botnet. In those cases people collect everything they can and data mine it afterwards.

Running my own VPN should help greatly with preserving my privacy.  It also gives me the added benefit of having an IP address in the USA, which allows me to watch Hulu and other US-only content sites.

After several false starts trying to get OpenVPN working I decided to try something easier.

Starting with an existing AMI was easy.  I searched for and launched a new instance using ami-744c971d, which is pre-configured for PPTP with a simple username/password.  Create a new security group for the instance when configuring it.

Once the server is launched and running you’ll need to open up some ports so that it is accessible.  PPTP uses TCP port 1723.

In the Security Group settings open up TCP port 1723 and TCP port 22 (SSH).

SSH to the server using the key pair you selected when starting the instance and the username ‘ubuntu’.  Edit the file /etc/ppp/chap-secrets and change the username ‘seamus’ and password ‘wiles’ to something else.
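
If the file follows the standard pppd layout, each entry is one line of client, server, secret, and allowed IP addresses; something like this, with placeholder credentials:

# client    server    secret        IP addresses
myuser      pptpd     mypassword    *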

Restart pptpd: ‘sudo service pptpd restart’

You should be able to connect then to the new VPN server.


There are a lot of people out there who have their favorite technologies, and there are few more passionate debates than whether to use HTML5 or native development for mobile apps.

It’s still early days for HTML5 on mobile platforms; the technologies are still maturing, so there are fewer libraries out there for things like writing games or interfacing with the hardware. Due to the nature of HTML and interpreted JavaScript, performance will never match that of a native app, but with a fast enough phone and a well-performing renderer, performance issues will eventually not be issues at all.

Native development allows apps to build on the look and feel that users are used to on their devices. Usability should therefore be better in a native app because buttons will use standard (tested) sizes and provide the feedback that the user expects. Because native apps are compiled and optimized for the hardware, they naturally have a performance advantage over HTML.

Personally I prefer to go native whenever possible. But sometimes it is just far easier to hook into a web server to deliver some dynamic content, in which case it can be simpler to display it in its HTML form.

The current feature I’m working on is adding a commenting system to a bunch of my apps. Using the Django comments app and jQuery Mobile I was able to get a pretty good web-based commenting system in place in a few hours. Translating that to a native form in Objective-C would have taken much longer. So I’m just going to throw a web rendering widget into the app to load the webpage.

I see the lines blurring over the next few years in this way: HTML is catching up to native in a lot of ways, but native will always have some natural advantages and is also improving. Any new hardware features will be accessible to native APIs first, before they are wrapped up by WebKit and exposed for use in HTML. And differing platforms will always result in HTML5 apps that have to do things like “if iPhone then …”. The main benefits of HTML5, then, are centralizing logic on servers that can be quickly updated with bug fixes, and the fact that people with HTML/JS/CSS skillsets want to work on mobile apps.

Right now it is simply too early to jump into a pure HTML5 mobile app for anything serious, though, as seen with the HTML5-based Facebook app, which is terribly slow on both iOS and Android and got bad user reviews as a result of those performance woes. A native Facebook app is in the works to address those shortcomings. But I would predict that it won’t be until the iPhone 7 comes out that HTML5 will be able to compete with native apps from a user experience perspective.

Fabric is a pretty awesome tool for deploying projects. But it turns out that it’s also pretty awesome for lots of other stuff.

At PyCon 2012 there was a talk given by Ricardo Kirkner (which you can watch here) that inspired me to play around with fabric in some new ways.

It’s possible to use Fabric as a wrapper around the standard Django ./manage.py script, and to help set up virtualenvs and install packages. Using Fabric to script these things means there are fewer tools that new developers need to set up and learn how to use. Scripts that might otherwise have been loose bash files can now be collected, organized, and documented.

I’m currently working on a large Django project with 4 other developers who are new to Python and Django. Getting everyone’s development environment working was a big pain since there were multiple platforms (Mac and Linux) and different configurations of base packages. If I had thought of this sooner I might have been able to create a reliable Fabric script so that “easy_install fabric; fab init_project” would take them from zero to a running Django app.

There are also several one-liners that I run fairly regularly which can be saved in fabfile.py and called much more easily. For example:

from fabric.api import local

def clean():
    # Delete compiled .pyc files throughout the project tree
    local(r'find . -name "*.pyc" -exec rm -r {} \;')

Now

$ fab clean

will clear out any .pyc files in the project.

It’s also possible to manage the virtualenv environment through fabric:

import os
import sys

from fabric.api import env, local

VIRTUALENV = '.virtualenv/'

def setup_virtualenv():
    """Create the project virtualenv if needed, then activate it."""
    created = False
    virtual_env = os.environ.get('VIRTUAL_ENV', None)
    if virtual_env is None:
        if not os.path.exists(VIRTUALENV):
            _create_virtualenv()
            created = True
        virtual_env = VIRTUALENV
    env.virtualenv = os.path.abspath(virtual_env)
    _activate_virtualenv()
    return created

def _activate_virtualenv():
    """Activate the virtualenv inside this Python process."""
    activate_this = os.path.abspath("%s/bin/activate_this.py" % env.virtualenv)
    execfile(activate_this, dict(__file__=activate_this))

def _create_virtualenv(clear=False):
    """Build a fresh virtualenv in the project directory."""
    if not os.path.exists(VIRTUALENV) or clear:
        args = '--no-site-packages --distribute --clear'
        local("%s /usr/local/bin/virtualenv %s %s" % (sys.executable, args, VIRTUALENV), capture=False)

def virtualenv_local(command, capture=True):
    """Run a shell command with the virtualenv's activate script sourced first."""
    prefix = ''
    virtual_env = env.get('virtualenv', None)
    if virtual_env:
        prefix = ". %s/bin/activate && " % virtual_env
    command = prefix + command
    return local(command, capture=capture)

def manage(command, *args):
    """Invoke Django's manage.py inside the virtualenv."""
    virtualenv_local("python manage.py {0} {1}".format(command, ' '.join(args)), capture=False)

def runserver(*args):
    """Start the Django development server."""
    manage('runserver', *args)

These functions let you create the virtualenv and run commands inside it (without having to activate it manually). virtualenv_local wraps Fabric’s local function and sources the activate script before launching the specified command. The manage function provides a way to call Django’s manage.py script, and the runserver function launches the development server (in the virtualenv). So now the fab command consolidates both the virtualenv tools and the manage.py script into one documentable fabfile.py with a consistent command-line interface.
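
Day to day that boils down to short commands like these, using Fabric’s task:argument syntax (syncdb here is just an example manage.py subcommand):

$ fab setup_virtualenv
$ fab manage:syncdb
$ fab runserver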


I was surprised that there wasn’t already a script out there to download iAd reports from Apple’s iTunes Connect website.

Apple released a Java-based command line tool to download the sales reports for apps but neglected to provide something similar for iAd publishers. After some Googling around, I was further surprised that I couldn’t find any 3rd party scripts to download this data.

I did, however, find a Python script called appsalesdaily.py, a web scraper that downloads the sales reports. It handled a lot of the nasty HTTP cookie and login stuff that is usually very tricky to do in a script. I modified that script and extended it with a function to download daily iAd reports.

The rather fascinating thing was just how complex the single page of the iAd publisher dashboard is. It was built with GWT, which is perhaps the worst thing ever developed. It produced a webpage containing 40,000+ lines of JavaScript, and all it does is draw a few graphs. The code was terribly convoluted and nearly impossible to reverse engineer. But 6 frustrating hours later I was able to download the files I wanted.

I don’t wish this sort of struggle on anyone so I’m making the code available. https://github.com/mfwarren/appdailysales

After getting UFO Invader published, a couple of things became obvious points of improvement from a marketing perspective.

Websites have the advantage that you can count on having internet access when someone visits, so it’s easy to tie things in dynamically all over the place.  Things such as AdSense ads are loaded and managed by other people using other services; they change depending on who you are, what the page is like, and what time of day it is.

With the turnaround time for publishing native apps to the iPhone being more than a few days, having anything from news messages to advertisements to dynamic content becomes a bit more complex.  You can’t simply push an app update out to change a message saying you’re working on a big update, for example.  You also can’t count on internet access being available.

The idea for the iPhone App Control backend is to pair a web admin service with a library of Objective-C UIKit components.  An API call would be made out to the service to get marketing messages, news, alert text, or whatever else, and it would also serve as a bit of an analytics service, tracking clicks on ads and perhaps performing A/B split tests on offers.
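
I haven’t settled on a wire format yet, but I imagine the shape of it would be something like this hypothetical call (endpoint and fields invented for illustration):

$ curl "http://appcontrol.example.com/api/messages?app_id=ufo-invader"
{"messages": [{"id": 1, "type": "news", "text": "Big update coming soon!"}]}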

Once built, this software will help control all of my current and future iOS Apps.

Will I release this software?  I’m not sure.  What do you think?  Should I sell it? Open-source it? Sell accounts and access?  Let me know your thoughts…