
NODEBB: Nginx error performance & High CPU

Solved Performance
  • @DownPW Change your config.json so that it looks like the below

    {
        "url": "https://XX-XX.net",
        "secret": "XXXXXXXXXXXXXXXX",
        "database": "mongo",
        "mongo": {
            "host": "127.0.0.1",
            "port": "27017",
            "username": "XXXXXXXXXXX",
            "password": "XXXXXXXXXXX",
            "database": "nodebb",
            "uri": ""
        },
        "redis": {
            "host":"127.0.0.1",
            "port":"6379",
            "database": 5
        },
        "port": ["4567", "4568", "4569"]  // will start three processes
    }
    

    Your nginx.conf also needs modification (see the commented lines for the changes):

    # add the below block for nodeBB clustering
    upstream io_nodes {
        ip_hash;
        server 127.0.0.1:4567;
        server 127.0.0.1:4568;
        server 127.0.0.1:4569;
    }
    
    server {
    	server_name XX-XX.net www.XX-XX.net;
    	listen XX.XX.XX.X;
    	listen [XX:XX:XX:XX::];
    	root /home/XX-XX/nodebb;
    	index index.php index.htm index.html;
    	access_log /var/log/virtualmin/XX-XX.net_access_log;
    	error_log /var/log/virtualmin/XX-XX.net_error_log;
    
    # the below named location forwards traffic into the NodeBB cluster
    location @nodebb {
    	proxy_pass http://io_nodes;
    }

    location / {
    	limit_req zone=flood burst=100 nodelay;
    	limit_conn ddos 10;
    	proxy_read_timeout 180;

    	proxy_set_header X-Real-IP $remote_addr;
    	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    	proxy_set_header Host $http_host;
    	proxy_set_header X-NginX-Proxy true;
    	# proxy to the io_nodes upstream so that clustering works
    	# (proxy_pass cannot reference the @nodebb named location directly)
    	proxy_pass http://io_nodes;
    	proxy_redirect off;

    	# Socket.IO Support
    	proxy_http_version 1.1;
    	proxy_set_header Upgrade $http_upgrade;
    	proxy_set_header Connection "upgrade";
    }
    
    	listen XX.XX.XX.XX:443 ssl http2;
    	listen [XX:XX:XX:XX::]:443 ssl http2;
    	ssl_certificate /etc/ssl/virtualmin/166195366750642/ssl.cert;
    	ssl_certificate_key /etc/ssl/virtualmin/166195366750642/ssl.key;
    	if ($scheme = http) {
    		rewrite ^/(?!\.well-known)(.*) "https://XX-XX.net/$1" break;
    	}
    }
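A note on the rewrite at the bottom of the server block: it uses a PCRE negative lookahead so that requests under /.well-known (used for Let's Encrypt validation) are not redirected to HTTPS. You can sanity-check that pattern outside nginx; a small sketch, assuming GNU grep with PCRE support (grep -P):

```shell
# The HTTP-to-HTTPS rewrite pattern skips /.well-known via a
# negative lookahead; the escaped dot (\.) is the stricter form.
pattern='^/(?!\.well-known)(.*)'
printf '%s\n' "/topic/123" "/.well-known/acme-challenge/token" \
  | grep -P "$pattern"
# prints only: /topic/123
```

If /.well-known ever appears in the output, the lookahead is wrong and certificate renewals would get bounced to HTTPS.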
    
  • @phenomlab

    Ok, I'll have to talk to the staff members first before I do anything.

    So there is nothing to put in the vhost for XX-XX.net? Everything goes in nginx.conf and config.json, if I understand correctly?

    In terms of disk space, memory, and CPU, what impact does this have?

    After all of that, we need to restart NodeBB, I imagine?

    Do these new ports (4567, 4568, 4569) also need to be opened in the Hetzner interface?

    I would like all the details if possible

  • @DownPW said in NODEBB: Nginx error performance & High CPU:

    Everything goes in nginx.conf and config.json, if I understand correctly

    Yes, that’s correct

    @DownPW said in NODEBB: Nginx error performance & High CPU:

    In terms of disk space, memory, and CPU, what impact does this have?

    There should be little change in terms of usage. What clustering essentially does is provide multiple processes to carry out the same tasks, obviously much faster than a single process ever could.

    @DownPW said in NODEBB: Nginx error performance & High CPU:

    After all of that, we need to restart NodeBB, I imagine?

    Correct

    @DownPW said in NODEBB: Nginx error performance & High CPU:

    Do these new ports (4567, 4568, 4569) also need to be opened in the Hetzner interface?

    That won't be necessary: the ports are not exposed publicly, only to the reverse proxy on 127.0.0.1.

  • OK.

    @phenomlab

    Here is a detailed summary of the steps.

    1- Stop nodebb
    2- Stop iframely
    3- Stop nginx
    4- Install the redis server: sudo apt install redis-server
    5- Change the NodeBB config.json file (can you verify this syntax please? The NodeBB documentation says "database": 0 and not "database": 5, but maybe it's just a name and I can use the same as mongo, like "database": "nodebb"? I also moved the port directive):

    {
        "url": "https://XX-XX.net",
        "secret": "XXXXXXXXXXXXXXXX",
        "database": "mongo",
       "port": [4567, 4568,4569],
        "mongo": {
            "host": "127.0.0.1",
            "port": "27017",
            "username": "XXXXXXXXXXX",
            "password": "XXXXXXXXXXX",
            "database": "nodebb",
            "uri": ""
        },
        "redis": {
            "host":"127.0.0.1",
            "port":"6379",
            "database": 5
        }
    }
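Before moving on to step 6, it is worth confirming the edited file is still strictly valid JSON (no comments, no trailing commas), since NodeBB will fail to start otherwise. A quick sketch using a throwaway sample; run the same python3 check against your real config.json (python3 is assumed to be installed):

```shell
# Build a minimal sample in the same shape as config.json,
# then validate it; any syntax error makes json.tool exit non-zero.
cat > /tmp/config-sample.json <<'EOF'
{
    "url": "https://example.net",
    "database": "mongo",
    "port": [4567, 4568, 4569],
    "redis": { "host": "127.0.0.1", "port": "6379", "database": 5 }
}
EOF
python3 -m json.tool /tmp/config-sample.json > /dev/null && echo "valid JSON"
# prints: valid JSON
```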
    

    6- Change nginx.conf :

    # add the below block for nodeBB clustering
    upstream io_nodes {
        ip_hash;
        server 127.0.0.1:4567;
        server 127.0.0.1:4568;
        server 127.0.0.1:4569;
    }
    
    server {
    	server_name XX-XX.net www.XX-XX.net;
    	listen XX.XX.XX.X;
    	listen [XX:XX:XX:XX::];
    	root /home/XX-XX/nodebb;
    	index index.php index.htm index.html;
    	access_log /var/log/virtualmin/XX-XX.net_access_log;
    	error_log /var/log/virtualmin/XX-XX.net_error_log;
    
    # the below named location forwards traffic into the NodeBB cluster
    location @nodebb {
    	proxy_pass http://io_nodes;
    }

    location / {
    	limit_req zone=flood burst=100 nodelay;
    	limit_conn ddos 10;
    	proxy_read_timeout 180;

    	proxy_set_header X-Real-IP $remote_addr;
    	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    	proxy_set_header Host $http_host;
    	proxy_set_header X-NginX-Proxy true;
    	# proxy to the io_nodes upstream so that clustering works
    	# (proxy_pass cannot reference the @nodebb named location directly)
    	proxy_pass http://io_nodes;
    	proxy_redirect off;

    	# Socket.IO Support
    	proxy_http_version 1.1;
    	proxy_set_header Upgrade $http_upgrade;
    	proxy_set_header Connection "upgrade";
    }
    
    	listen XX.XX.XX.XX:443 ssl http2;
    	listen [XX:XX:XX:XX::]:443 ssl http2;
    	ssl_certificate /etc/ssl/virtualmin/166195366750642/ssl.cert;
    	ssl_certificate_key /etc/ssl/virtualmin/166195366750642/ssl.key;
    	if ($scheme = http) {
    		rewrite ^/(?!\.well-known)(.*) "https://XX-XX.net/$1" break;
    	}
    }
    

    7- Restart the redis server: systemctl restart redis-server.service
    8- Restart nginx
    9- Restart iframely
    10- Restart nodebb
    11- Test the configuration

  • @DownPW said in NODEBB: Nginx error performance & High CPU:

    5- Change the NodeBB config.json file (can you verify this syntax please? The NodeBB documentation says "database": 0 and not "database": 5,

    All fine from my perspective - no real need to stop iFramely, but ok. The database number doesn’t matter as long as it’s not being used. You can use 0 if you wish - it’s in use on my side, hence 5.

  • @DownPW I’d add another set of steps here. When you move the sessions away from mongo to redis, you are going to encounter login issues, as the session tables will no longer match, meaning none of your users will be able to log in.

    To resolve this issue

    Review https://sudonix.com/topic/249/invalid-csrf-on-dev-install and implement all steps. Note that you will also need the below string when connecting to mongodb:

    mongo -u admin -p <password> --authenticationDatabase=admin
    

    Obviously, substitute <password> with the actual value. So in summary:

    1. Open the mongodb console
    2. Select your database - in my case use sudonix;
    3. Issue this command db.objects.update({_key: "config"}, {$set: {cookieDomain: ""}});
    4. Press enter and then type quit() into the mongodb shell
    5. Restart NodeBB
    6. Clear cache on browser
    7. Try connection again
  • @phenomlab

    Hmm, ok. When should I perform these steps?

  • @DownPW After you’ve setup the cluster and restarted NodeBB

  • @phenomlab said in NODEBB: Nginx error performance & High CPU:

    @crazycells said in NODEBB: Nginx error performance & High CPU:

    4567, 4568, and 4569… Is your NodeBB set up this way?

    It’s not (I set their server up). Sudonix is not configured this way either, but from memory, this also requires redis to handle the session data. I may configure this site to do exactly that.

    yes, you might be right about the necessity. We have redis installed.

  • @DownPW since you pointed it out, I just remembered: since we know when this crowd will arrive and be online on our forum, on that particular day we switch off iframely and all preview plugins. That also helps the pages open faster.

  • @phenomlab a general but related question: since opening three ports helps, is it possible to increase this number? For example, can we run NodeBB on 5 ports at the same time to smooth the web page experience, or is 3 the Goldilocks number for maximum efficiency?

  • @crazycells It’s not necessarily the “Goldilocks” standard - it really depends on the system resources you have available. You could easily extend it as long as you allow for the additional port(s) in the nginx.conf file also.

    Personally, I don’t see the need for more than 3 though.
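To make the scaling step concrete, going from three processes to four (port 4570 here is just an illustrative choice) only means keeping the two files in sync:

```
# in config.json (one extra port in the array):
#   "port": [4567, 4568, 4569, 4570]

# in nginx.conf (one extra server in the upstream pool):
upstream io_nodes {
    ip_hash;
    server 127.0.0.1:4567;
    server 127.0.0.1:4568;
    server 127.0.0.1:4569;
    server 127.0.0.1:4570;
}
```

Then restart NodeBB and reload nginx.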

  • @phenomlab

    Ok, redis is working now. Thanks for your help 🙂

    I would now like to obtain the connecting clients' real IP in the nginx logs.

    I read that I need the ngx_http_realip_module for nginx, which is not active by default, but I don't know whether virtualmin has this module enabled.

  • @DownPW said in NODEBB: Nginx error performance & High CPU:

    @phenomlab

    Ok, redis is working now. Thanks for your help 🙂

    I would now like to obtain the connecting clients' real IP in the nginx logs.

    I read that I need the ngx_http_realip_module for nginx, which is not active by default, but I don't know whether virtualmin has this module enabled.

    EDIT: OK, it is enabled by default on virtualmin; listing the compiled-in modules confirms it:

    nginx -V
    

    (screenshot of the nginx module output)

  • @phenomlab

    I have activated the ngx_http_realip_module in /etc/nginx/nginx.conf, in the http block, like this:

    (screenshot of the realip configuration in the http block)

    Does it look good to you?

  • @DownPW yes, that looks fine.
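For anyone following along, the directives that ngx_http_realip_module provides look like the below in the http block. Note this only matters when another proxy (a CDN like Cloudflare, for example) sits in front of nginx; the address in set_real_ip_from is an assumption and should list whatever upstream proxies you actually trust:

```
# inside http {} in /etc/nginx/nginx.conf
set_real_ip_from 127.0.0.1;   # trust real-IP headers from this proxy only
real_ip_header X-Real-IP;     # header carrying the original client IP
real_ip_recursive on;
```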

  • @phenomlab

    We sometimes get this error; what do you think about it?

    (screenshot of the error message)

  • @DownPW I suspect that’s a failure of the socket server to talk to redis, but the NodeBB Devs would need to confirm.

  • @phenomlab

    It seems better with some scaling fixes for redis in redis.conf. I haven't seen the message since the changes I made:

    # Increased to the value of /proc/sys/net/core/somaxconn
    tcp-backlog 4096


    # Commented out, since RDB snapshotting can slow Redis down
    # (these save points are enabled by default)
    #save 900 1
    #save 300 10
    #save 60 10000
    

    If you have any other Redis optimizations, I'll take all your advice:

    https://severalnines.com/blog/performance-tuning-redis/
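One caveat on the tcp-backlog value above: the kernel caps every listen backlog at net.core.somaxconn, so redis only gets a backlog of 4096 if that sysctl is at least as large (redis logs a warning otherwise). A quick check, Linux-only:

```shell
# Show the kernel's cap on listen backlogs; if it is below 4096,
# the tcp-backlog 4096 setting in redis.conf is silently truncated.
cat /proc/sys/net/core/somaxconn

# To raise it for the running kernel (needs root), and persist via
# a file in /etc/sysctl.d/:
# sudo sysctl -w net.core.somaxconn=4096
```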

  • DownPW has marked this topic as solved

