If you benefit from web2py, I hope you feel encouraged to pay it forward by contributing back to society in whatever form you choose!

You can download the full script here: https://raw.github.com/rochacbruno/web2py/master/scripts/setup-web2py-nginx-uwsgi-ubuntu.sh


On your Ubuntu 12.04 server:


$ wget https://raw.github.com/rochacbruno/web2py/master/scripts/setup-web2py-nginx-uwsgi-ubuntu.sh

$ chmod +x setup-web2py-nginx-uwsgi-ubuntu.sh

$ ./setup-web2py-nginx-uwsgi-ubuntu.sh

Follow the setup instructions and you are done!



Optimized script for nginx and uWSGI on Ubuntu 12.04


I've installed this script on several machines, and after a month of testing I worked out a baseline configuration for a performant uWSGI.


Without these settings, uWSGI can be a memory eater (see the thread about WSGI by Bruce Wade on the web2py mailing list; this script was based on Bruce's tips).


  1. Use a unix socket instead of tcp
  2. Enable the master process - useful for monitoring with uwsgitop and for no-orphans
  3. Set processes to 4 (this depends on the machine, but for web2py 4 is the minimum)
  4. harakiri 60 - every request that takes longer than the harakiri timeout (in seconds) will be dropped and the corresponding worker recycled
  5. Set the maximum number of seconds to wait for a worker to die during a graceful reload
  6. cpu-affinity - http://lists.unbit.it/pipermail/uwsgi/2011-March/001594.html
  7. Stats - write the stats socket to /tmp so you can run $ uwsgitop /tmp/stats.socket to monitor the workers


  1. reload-mercy - set the maximum number of seconds to wait for a worker to die during a graceful reload

  2. limit-as - limit the address space usage of each uWSGI process using POSIX/UNIX setrlimit()

  3. max-requests 2000 - set the maximum number of requests for each worker; when a worker reaches this number it is recycled. You can use this option as a blunt weapon against memory leaks (even if reload-on-as and reload-on-rss are more useful for this kind of problem)

  4. reload-on-as - recycle a worker when its address space usage exceeds the specified limit

  5. reload-on-rss - works like reload-on-as but monitors physical unshared memory (RSS); you can enable both

  6. no-orphans - automatically kill workers left without a master process

  7. vacuum - automatically remove unix sockets and pidfiles on server exit
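Taken together, the options above can be sketched as a uWSGI ini file. This is an illustrative fragment, not the exact file the setup script writes; the socket and stats paths are assumptions based on the nginx config later in this post, and the cpu-affinity value is a placeholder you should tune per the linked thread.

    [uwsgi]
    ; unix socket instead of tcp (matches the uwsgi_pass path in nginx)
    socket = /run/uwsgi/app/web2py/web2py.socket
    ; master process: needed for uwsgitop monitoring and for no-orphans
    master = true
    processes = 4
    ; drop any request running longer than 60 seconds and recycle its worker
    harakiri = 60
    ; max seconds to wait for a worker to die during a graceful reload
    reload-mercy = 8
    ; cap each worker's address space (MB) via setrlimit()
    limit-as = 512
    ; recycle a worker after this many requests (blunt defence against leaks)
    max-requests = 2000
    ; recycle a worker when address space / unshared RSS (MB) exceed these
    reload-on-as = 256
    reload-on-rss = 96
    ; pin workers to CPUs - see the mailing-list thread for the right value
    cpu-affinity = 1
    ; kill workers whose master has gone away
    no-orphans = true
    ; remove unix sockets and pidfiles on server exit
    vacuum = true
    ; stats socket for: $ uwsgitop /tmp/stats.socket
    stats = /tmp/stats.socket

The reload-mercy, limit-as, and memory-threshold numbers here are example values for a 1 GB machine, derived from the doubled 2 GB figures quoted below; check them against your own workload.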

After a lot of testing, this was the best setup I found on a Linode 1024 machine with Ubuntu 12.04.
If you run a Linode 2048 (or any machine with 2GB RAM), the config can be doubled to:

max-requests 4000, reload-on-as 512, reload-on-rss 192, limit-as 1024, processes 8, cpu-affinity 3
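For a 2 GB machine, those doubled values translate into uWSGI ini settings roughly like this (an illustrative sketch; the option names are standard uWSGI options and the values are the ones quoted above):

    [uwsgi]
    processes = 8
    max-requests = 4000
    reload-on-as = 512
    reload-on-rss = 192
    limit-as = 1024
    cpu-affinity = 3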

This thread on cpu-affinity is important reading: http://lists.unbit.it/pipermail/uwsgi/2011-March/001594.html

Also, in nginx.conf it is useful to set the number of worker processes - I am using 12:

user www-data;
worker_processes 12;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    # ... (other http-level settings omitted) ...

    server {
        # web2py app mounted at "/"
        location / {
                uwsgi_pass      unix:///run/uwsgi/app/web2py/web2py.socket;
                include         uwsgi_params;
                uwsgi_param     UWSGI_SCHEME $scheme;
                uwsgi_param     SERVER_SOFTWARE    nginx/$nginx_version;
        }
    }
}



Comments (2)


  • sabbir-ahmed-10163, 8 years ago

    Superb! It just works.

    I am trying to set up NGINX to serve multiple domains. As a first step I changed the path

    from this: root /home/www-data/web2py/applications/;

    to this: root /home/www-data/web2py/applications/examples/;

    This is to check whether I can serve a specific app, but it is not working and always shows the welcome app.

  • bryan-leasure-11055, 8 years ago

    That was awesome. I have it running in a Proxmox OpenVZ container. It took about 10 minutes to set up my IP address and then run the script. It was almost as simple as TurnKey Linux, except I got nginx and all of your optimizations. Nice work. Seriously.

Hosting graciously provided by:
Python Anywhere