
Virtualmin SQL problem

Solved Performance
  • @DownPW To my mind, you need to increase the thread_stack value, but I don’t see a my.cnf file, and I think Virtualmin manages this another way.

    When did this start happening? From what I can see, you have data waiting to be committed to MySQL, which usually indicates that no backups are running; a backup would normally flush those transactions and truncate the logs, which are now out of control.

  • @phenomlab This is now fixed. For reference, your system was running at maximum capacity, with even the virtual memory 100% allocated. I needed to reboot the server to release the lock (which I’ve completed with no issues) and have also modified

    /etc/mysql/mysql.conf.d/mysqld.cnf
    

    and increased the thread_stack size from 128K to 256K. The MySQL service has now started successfully. You should run a backup of all databases ASAP so that the remaining transactions are committed and the transaction logs are flushed.
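    For reference, the change above would look something like this in mysqld.cnf (a sketch; the exact file layout on your build may differ):

```ini
[mysqld]
# Raise the per-thread stack from the previous 128K value
thread_stack = 256K
```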

    22450ab2-01db-4fdb-bbed-b6f700a2bec0-image.png

  • phenomlab has marked this topic as solved
  • @phenomlab

    I’m coming back to you regarding the MySQL problem on Virtualmin.

    The service runs smoothly and the Virtualmin backups complete without error, but I still have some very large files that keep growing.

    The /var/log/mysql/error.log is empty.

    mysql.ibd is at 15 GB ?!! Very big.
    Same for undo_001 (4.12 GB) and undo_002 (14.32 GB)

    image.png

    Virtualmin backup log:
    becbd82e-dc1d-446c-9fe1-402f7406f410-image.png

    I still don’t understand.

    Your help is welcome 😉

  • phenomlab has marked this topic as unsolved
  • @DownPW You should consider using the below inside the my.cnf file, then restart the MySQL service

    SET GLOBAL innodb_undo_log_truncate=ON;
    
  • @phenomlab

    Are you sure about the my.cnf file? On my system it is located at /etc/alternatives/my.cnf

    863ce682-b3f5-4cd4-b4a1-0d7f5cf63750-image.png

    And here is the file:

    e00d4526-09ec-43aa-ae5a-5af8388ef104-image.png

    Like this?:

    be3fa85c-ef10-4b69-8096-7461cce87cc8-image.png

    If I look at the MySQL server settings in Webmin, it’s already activated:

    0154ab2a-dac9-4dbc-94be-b439a49dccd8-image.png

  • @phenomlab

    If I add this line to the my.cnf file, the MySQL service fails to start.

    This is problematic because MySQL is consuming 36 GB of disk space, so it alone takes up half of the server’s disk space.

    I don’t think this is a normal situation.

  • @DownPW It’s certainly not normal, as I’ve never seen this on any Virtualmin build and I’ve created hundreds of them. Are you able to manually delete the undo files?

  • @phenomlab

    I have deleted these two files manually with Webmin, then stopped and started the service.

    3d2cc6df-8c05-4bf2-8abd-02cb4d864968-image.png

    I will monitor this and get back to you if it happens again.

  • –> For the mysql.ibd file, is its size normal? (15.6 GB)

  • @DownPW Not normal, no, but you mustn’t delete it or it will cause you issues.
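    As a side note, the reported size of that tablespace can be checked from the MySQL console; this is a generic sketch assuming MySQL 8, which exposes FILE_SIZE and ALLOCATED_SIZE in INNODB_TABLESPACES:

```sql
-- Report the on-disk vs. allocated size of the mysql system tablespace
SELECT NAME, FILE_SIZE, ALLOCATED_SIZE
FROM INFORMATION_SCHEMA.INNODB_TABLESPACES
WHERE NAME = 'mysql';
```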

  • @DownPW the thing that concerns me here is that I’ve never seen an issue like this occur with no cause or action taken by someone else.

    Do you know if anyone else who has access to the server has made any changes ?

  • @phenomlab said in Virtualmin SQL problem:

    the thing that concerns me here is that I’ve never seen an issue like this occur with no cause or action taken by someone else.
    Do you know if anyone else who has access to the server has made any changes ?

    Hmm, no, nothing special. We don’t touch MySQL, but it seems to be a known problem.
    We manage our NodeBB/Virtualmin/wiki backups, manage iframely or NodeBB, and update packages, but nothing more…

    –> Could you take a look at it when you have time?

  • @DownPW yes, of course. I’ll see what I can do with this over the weekend.

  • @phenomlab That’s great, Thanks Mark 👍

  • @DownPW I’ve just re-read this post and apologies - this command

    SET GLOBAL innodb_undo_log_truncate=ON;
    

    has to be entered within the MySQL console, and then the service stopped and restarted.

    Can you try this first before we do anything else?
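    For clarity, the sequence would be something like the below (a sketch; note that SET GLOBAL does not survive a restart, whereas SET PERSIST, available in MySQL 8, does):

```sql
-- Allow InnoDB to truncate undo tablespaces that exceed
-- innodb_max_undo_log_size (1 GiB by default)
SET PERSIST innodb_undo_log_truncate = ON;
```

    Then restart the service from the shell, e.g. systemctl restart mysql (the unit name may differ per distribution).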

  • @phenomlab

    Hello 😊

    Can you read this post and the screenshot at the end (the MySQL variable is already activated):

    https://sudonix.com/user/downpw

  • @DownPW Can you post the output of the below from the MySQL console

    SELECT NAME, SPACE_TYPE, STATE FROM INFORMATION_SCHEMA.INNODB_TABLESPACES WHERE SPACE_TYPE = 'Undo' ORDER BY NAME;
    

    I’m interested to see exactly which tables are causing this. It’s absolutely an artefact of a transaction that has not been completed. The question here is exactly what has caused this. I considered the possibility that this could be a bug in the virtualmin version you are running, although mine is the same, and I’m not experiencing this issue at all.

    To be completely sure, I built another instance on my local network at home and couldn’t replicate this either.

    Can you check with anyone else who has access to this server to see if any installations or upgrades have been attempted that might have failed? Understanding the origin is important at this stage in order to prevent recurrence.

    The below SQL statement should produce a list of running transactions

    SELECT trx.trx_id,
           trx.trx_started,
           trx.trx_mysql_thread_id
    FROM INFORMATION_SCHEMA.INNODB_TRX trx
    JOIN INFORMATION_SCHEMA.PROCESSLIST ps ON trx.trx_mysql_thread_id = ps.id
    WHERE trx.trx_started < CURRENT_TIMESTAMP - INTERVAL 1 SECOND
      AND ps.user != 'system_user';
    

    Finally, you should be able to identify the thread behind the transaction, and kill it, by using the below SQL (substitute the trx_mysql_thread_id value returned by the previous query)

    SELECT *
    FROM performance_schema.threads
    WHERE processlist_id = <trx_mysql_thread_id>;
    

    Ideally, once the rogue process has been killed, the rollback attempt should be terminated and the disk space reclaimed (after a few hours).
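    Killing it is then a one-liner from the MySQL console (assuming, purely as an illustration, that the processlist id returned above was 1234):

```sql
-- Terminate the connection owning the stuck transaction
KILL 1234;
```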

    Let me know how you get on.

    You should also perhaps review this article as it will likely be very useful

    https://stackoverflow.com/questions/62740079/mysql-undo-log-keep-growing

  • @phenomlab

    Yep Mister. I will do this tonight after work 😉

    Thanks

  • @DownPW Great. Keep me updated. Interested to know the outcome.

  • @phenomlab

    Since I manually deleted those two big files (undo_001 and undo_002), they have been regenerated and have not changed in size for two days (16 MB).

    Only the mysql.ibd file (15.6 GB) is still big, but it isn’t changing in size for the moment.

    We have no other information to our knowledge, but I did notice a kernel update that had not yet been applied because it required a reboot at the time.

    – Here is the output of:

    SELECT NAME, SPACE_TYPE, STATE FROM INFORMATION_SCHEMA.INNODB_TABLESPACES WHERE SPACE_TYPE = 'Undo' ORDER BY NAME;

    mysql> SELECT NAME, SPACE_TYPE, STATE FROM INFORMATION_SCHEMA.INNODB_TABLESPACES WHERE SPACE_TYPE = 'Undo' ORDER BY NAME;
    +-----------------+------------+--------+
    | NAME            | SPACE_TYPE | STATE  |
    +-----------------+------------+--------+
    | innodb_undo_001 | Undo       | active |
    | innodb_undo_002 | Undo       | active |
    +-----------------+------------+--------+
    2 rows in set (0.05 sec)
    

    – Here is the output of:

    SELECT trx.trx_id,
           trx.trx_started,
           trx.trx_mysql_thread_id
    FROM INFORMATION_SCHEMA.INNODB_TRX trx
    JOIN INFORMATION_SCHEMA.PROCESSLIST ps ON trx.trx_mysql_thread_id = ps.id
    WHERE trx.trx_started < CURRENT_TIMESTAMP - INTERVAL 1 SECOND
      AND ps.user != 'system_user';
    

    I’m not sure whether I entered it correctly in the SQL console:

    mysql> SELECT trx.trx_id,
        ->        trx.trx_started,
        ->        trx.trx_mysql_thread_id
        -> FROM INFORMATION_SCHEMA.INNODB_TRX trx
        -> JOIN INFORMATION_SCHEMA.PROCESSLIST ps ON trx.trx_mysql_thread_id = ps.id
        -> WHERE trx.trx_started < CURRENT_TIMESTAMP - INTERVAL 1 SECOND
        ->   AND ps.user != 'system_user';
    Empty set (0.00 sec)
    

    Well, that still doesn’t tell me why the mysql.ibd file is 15.6 GB 😒
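    To narrow down what is actually consuming that space, a generic query like the below (not specific to this server) lists the largest tables across all schemas; bear in mind that in MySQL 8, mysql.ibd holds the data dictionary, and InnoDB tablespace files do not shrink on their own even after rows are removed:

```sql
-- Top 10 tables by combined data + index size, in MB
SELECT table_schema,
       table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
ORDER BY (data_length + index_length) DESC
LIMIT 10;
```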

