
OGProxy - a replacement for iFramely

Announcements
  • Can’t wait 😲

    🙂

  • @DownPW Coming soon…

  • A release date maybe?

  • @DownPW in the coming weeks.

  • @DownPW sorry for the delay here - real life is getting in the way, but the latest release will be pushed out soon.

  • No problem.
    I was waiting for this new version before changing my server, but I think I’ll install the old one for now.

    Keep up the good work bro 🙂

  • @phenomlab said:

    @DownPW sorry for the delay here - real life is getting in the way, but the latest release will be pushed out soon.

    Is the new version still in development, @phenomlab?

    I ask because personally I think OGProxy is really cool, but it uses a lot of RAM on the server - sometimes up to 2 GB.
    That’s quite a lot.

    Optimizing it in this area would be really great.

  • @phenomlab

    I have a Claude AI Pro account and asked it to optimize the code for the memory consumption problem.

    Here is its answer:

    Root causes of the 2 GB RAM usage

    1. Unbounded cache (memory-cache)

    The original code stores every visited URL forever, with no cap and no reliable automatic expiration. On an active forum with thousands of unique links, this grows indefinitely.

    Fix → lru-cache: maximum 500 entries (configurable via CACHE_MAX), automatic TTL expiration at 1 hour. Least-recently-used entries are evicted automatically.
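    The bounded-eviction behaviour this fix relies on can be sketched with a plain Map (an illustration of the principle only - the actual fix uses the lru-cache package, which adds TTL expiry on top):

```javascript
// Minimal bounded LRU using a Map's insertion order. Illustration only:
// the real server uses the lru-cache package, which also handles TTL.
class TinyLRU {
  constructor(max) {
    this.max = max;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    // Re-insert to mark this key as most recently used
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    // Evict the least recently used entry once the cap is exceeded
    if (this.map.size > this.max) {
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const lru = new TinyLRU(2);
lru.set("a", 1);
lru.set("b", 2);
lru.get("a");    // "a" becomes most recently used
lru.set("c", 3); // cap exceeded: evicts "b", not "a"
```

    However many unique links the forum generates, the cache can never hold more than `max` entries - that is what turns unbounded growth into a fixed ceiling.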

    2. Redundant fetches per request

    For every uncached URL, the original performs: OGS scrape + axios (full page) + cheerio + meta-parser + axios (favicon). If multiple users click the same link simultaneously, all of these are duplicated.

    Fix → In-flight request deduplication (inFlight Map): if 5 requests arrive for youtube.com/xxx at the same time, only one fetch is fired — the other 4 await the same promise.
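    The coalescing itself is only a few lines. A sketch with a hypothetical `slowFetch` standing in for the real scrape:

```javascript
// Promise coalescing: concurrent callers for the same key share one
// underlying fetch. `slowFetch` is a stand-in for the real OGS scrape.
const inFlight = new Map();
let fetchCount = 0;

function slowFetch(url) {
  fetchCount += 1;
  return new Promise((resolve) =>
    setTimeout(() => resolve(`data for ${url}`), 50)
  );
}

function dedupedFetch(url) {
  if (!inFlight.has(url)) {
    // First caller starts the fetch and removes the entry when it settles
    inFlight.set(url, slowFetch(url).finally(() => inFlight.delete(url)));
  }
  // Later callers (while it is still pending) get the same promise
  return inFlight.get(url);
}

// Five simultaneous requests, one real fetch.
Promise.all(Array.from({ length: 5 }, () => dedupedFetch("https://example.com")))
  .then((results) => {
    console.log(fetchCount);     // 1
    console.log(results.length); // 5
  });
```

    The `.finally()` cleanup matters: it removes the map entry on both success and failure, so a failed scrape does not get pinned as permanently "in flight".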

    3. Unbounded favicon buffering

    The original downloads any favicon as an arraybuffer without checking its size first. A favicon URL pointing to a large image can load several MB into memory.

    Fix → HEAD request first to check Content-Length, then skip if > 50 KB (FAVICON_MAX). The HTML response body is also explicitly freed (htmlRes.data = null) once parsed.

    4. Axios with no timeout or size limit

    Without a timeout, slow requests remain suspended in memory indefinitely. Without maxContentLength, a 50 MB page is fully loaded into RAM.

    Fix → REQ_TIMEOUT=8000ms and MAX_BODY=2MB per outbound request.

    5. No compression

    Without gzip, large JSON responses consume more bandwidth and remain in transit longer.

    Fix → compression middleware.


    Near-certain gains:

    • The bounded LRU cache will prevent unbounded memory growth. This is most likely the main cause of the 2 GB if your forum has a large number of unique links.
    • Timeouts will prevent stalled requests from sitting in memory indefinitely.
    • Page and favicon size limits will eliminate pathological edge cases.

    Traffic-dependent gains:

    • In-flight request deduplication only helps if multiple users click the same link simultaneously — very useful on a busy forum, negligible on a small one.

    What may not change:

    • If the 2 GB comes from something else (a leak inside open-graph-scraper itself, or Node.js not returning memory to the OS after objects are freed), this won’t fix everything. Node.js tends to hold onto allocated memory even after objects are released — this is normal V8 behaviour.
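    That V8 behaviour can be observed directly with `process.memoryUsage()` (a minimal sketch; the exact numbers depend on Node version and GC timing):

```javascript
// Observe Node/V8 memory behaviour: `rss` is what the OS sees,
// `heapUsed` is live JS objects. After a large allocation is dropped,
// heapUsed can fall (once GC runs) while rss often stays elevated,
// because V8 keeps the pages for reuse.
function mb(bytes) {
  return Math.round(bytes / 1024 / 1024);
}

let big = new Array(5_000_000).fill("x"); // allocate a few tens of MB
const during = process.memoryUsage();
big = null; // drop the only reference; the array is now eligible for GC

const after = process.memoryUsage();
console.log(`during: rss=${mb(during.rss)} MB, heapUsed=${mb(during.heapUsed)} MB`);
console.log(`after:  rss=${mb(after.rss)} MB, heapUsed=${mb(after.heapUsed)} MB`);
```

    Running with `node --expose-gc` and calling `global.gc()` before the second reading makes the heapUsed drop visible while rss typically stays high - which is exactly why a scheduled restart is the pragmatic safety net.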

    In that case, a simple complementary solution is to add an automatic nightly restart to the systemd service:

    [Service]
    Restart=always
    # restart after 24h max (systemd does not allow trailing comments on directive lines)
    RuntimeMaxSec=86400
    

    This doesn’t fix the root cause but prevents accumulation over several days.

    – The JS Code (server.js) :

    "use strict";
    
    /**
     * OGProxy - Optimized server
     * Improvements over original:
     *  - LRU cache (bounded) instead of unbounded memory-cache
     *  - Request deduplication (coalescing) to avoid redundant fetches
     *  - Compression middleware
     *  - Rate limiting
     *  - Helmet security headers
     *  - Axios timeout + max response size
     *  - Favicon size cap + skip if too large
     *  - Graceful shutdown
     *  - Winston structured logging
     *  - Joi input validation
     *  - .env support
     *  - Health check endpoint
     */
    
    require("dotenv").config();
    
    const express     = require("express");
    const cors        = require("cors");
    const helmet      = require("helmet");
    const compression = require("compression");
    const rateLimit   = require("express-rate-limit");
    const morgan      = require("morgan");
    const winston     = require("winston");
    const { LRUCache }= require("lru-cache");
    const Joi         = require("joi");
    const axios       = require("axios");
    const ogs         = require("open-graph-scraper");
    const cheerio     = require("cheerio");
    const metaParser  = require("meta-parser");
    const { URL }     = require("url");
    const path        = require("path");
    
    // ─── Config ────────────────────────────────────────────────────────────────
    
    const PORT         = parseInt(process.env.PORT   || "2000", 10);
    const API_KEY      = process.env.API_KEY          || "YOUR_API_KEY_HERE";
    const ORIGIN       = process.env.ORIGIN           || "https://your-forum.example.com";
    const CACHE_MAX    = parseInt(process.env.CACHE_MAX    || "500",  10);  // max entries
    const CACHE_TTL    = parseInt(process.env.CACHE_TTL    || "3600", 10) * 1000; // ms (default 1h)
    const REQ_TIMEOUT  = parseInt(process.env.REQ_TIMEOUT  || "8000", 10);  // ms per outbound request
    const MAX_BODY     = parseInt(process.env.MAX_BODY      || "2",    10) * 1024 * 1024; // MB → bytes
    const FAVICON_MAX  = parseInt(process.env.FAVICON_MAX   || "51200",10);  // bytes (50 KB)
    const RATE_WINDOW  = parseInt(process.env.RATE_WINDOW   || "60",   10) * 1000; // ms
    const RATE_LIMIT   = parseInt(process.env.RATE_LIMIT    || "30",   10);  // req per window
    
    // ─── Logger ────────────────────────────────────────────────────────────────
    
    const logger = winston.createLogger({
      level: process.env.LOG_LEVEL || "info",
      format: winston.format.combine(
        winston.format.timestamp(),
        winston.format.errors({ stack: true }),
        winston.format.json()
      ),
      transports: [
        new winston.transports.Console({ format: winston.format.simple() }),
        new winston.transports.File({ filename: "error.log",  level: "error" }),
        new winston.transports.File({ filename: "access.log" }),
      ],
    });
    
    // ─── LRU Cache (bounded) ───────────────────────────────────────────────────
    
    const cache = new LRUCache({
      max: CACHE_MAX,       // max number of items
      ttl: CACHE_TTL,       // auto-expire entries
      updateAgeOnGet: false,
    });
    
    // ─── In-flight request deduplication ──────────────────────────────────────
    // If two requests arrive for the same URL before the first completes,
    // the second one waits for the first promise instead of spawning a new fetch.
    
    const inFlight = new Map(); // url → Promise
    
    // ─── Axios instance (shared, keep-alive, bounded) ─────────────────────────
    
    const http  = require("http");
    const https = require("https");
    
    const axiosInstance = axios.create({
      timeout: REQ_TIMEOUT,
      maxContentLength: MAX_BODY,
      maxBodyLength: MAX_BODY,
      headers: {
        "User-Agent":
          "Mozilla/5.0 (compatible; OGProxy/2.0; +https://github.com/phenomlab/ogproxy)",
        "Accept-Language": "en-US,en;q=0.9",
      },
      httpAgent:  new http.Agent ({ keepAlive: true, maxSockets: 10 }),
      httpsAgent: new https.Agent({ keepAlive: true, maxSockets: 10 }),
    });
    
    // ─── Input validation ─────────────────────────────────────────────────────
    
    const urlSchema = Joi.object({
      url: Joi.string().uri({ scheme: ["http", "https"] }).required(),
    });
    
    // ─── App setup ────────────────────────────────────────────────────────────
    
    const app = express();
    
    // Security headers
    app.use(helmet({ contentSecurityPolicy: false }));
    
    // CORS
    app.use(cors({ origin: ORIGIN }));
    
    // Gzip compression
    app.use(compression());
    
    // HTTP request logging (stream to winston)
    app.use(morgan("combined", {
      stream: { write: (msg) => logger.info(msg.trim()) },
    }));
    
    // Rate limiting per IP
    app.use(
      "/ogproxy",
      rateLimit({
        windowMs: RATE_WINDOW,
        max: RATE_LIMIT,
        standardHeaders: true,
        legacyHeaders: false,
        message: { error: "Too many requests, please try again later." },
      })
    );
    
    // Static images directory
    app.use("/images", express.static(path.join(__dirname, "images")));
    
    // ─── Health check ─────────────────────────────────────────────────────────
    
    app.get("/health", (_req, res) => {
      res.json({
        status: "ok",
        uptime: process.uptime(),
        cacheSize: cache.size,
        inFlight: inFlight.size,
        memory: process.memoryUsage(),
      });
    });
    
    // ─── API key middleware ────────────────────────────────────────────────────
    
    function requireApiKey(req, res, next) {
      const key = req.headers["x-api-key"];
      if (!key || key !== API_KEY) {
        return res.status(401).json({ error: "Unauthorized" });
      }
      next();
    }
    
    // ─── Core fetch logic ─────────────────────────────────────────────────────
    
    async function fetchOGData(targetUrl) {
      // 1. OGS scrape
      const { result: ogsResult, error: ogsError } = await ogs({ url: targetUrl });
      if (ogsError) throw new Error(`OGS failed: ${ogsError}`);
    
      // 2. Fetch HTML (with size cap)
      let metadata = {};
      let faviconB64 = null;
    
      try {
        const htmlRes = await axiosInstance.get(targetUrl, {
          responseType: "text",
          decompress: true,
        });
    
        const $ = cheerio.load(htmlRes.data);
    
        // meta-parser on the raw HTML
        try {
          const parsed = metaParser(htmlRes.data);
          metadata = parsed || {};
        } catch (_) { /* non-fatal */ }
    
        // 3. Favicon – only fetch if small enough
        let faviconHref =
          $('link[rel="icon"]').attr("href") ||
          $('link[rel="shortcut icon"]').attr("href");
    
        if (faviconHref) {
          try {
            const base = new URL(targetUrl);
            const faviconUrl = new URL(faviconHref, base).href;
    
            // HEAD first to check Content-Length before downloading
            const headRes = await axiosInstance.head(faviconUrl).catch(() => null);
            const contentLength = headRes
              ? parseInt(headRes.headers["content-length"] || "0", 10)
              : 0;
    
            if (contentLength === 0 || contentLength <= FAVICON_MAX) {
              const iconRes = await axiosInstance.get(faviconUrl, {
                responseType: "arraybuffer",
                maxContentLength: FAVICON_MAX,
              });
              const mime =
                iconRes.headers["content-type"] || "image/x-icon";
              faviconB64 =
                `data:${mime};base64,` +
                Buffer.from(iconRes.data).toString("base64");
            } else {
              logger.info(`Favicon too large (${contentLength}B), skipping: ${faviconUrl}`);
            }
          } catch (err) {
            logger.warn("Favicon fetch failed", { url: targetUrl, err: err.message });
          }
        }
    
        // Free the HTML string early
        htmlRes.data = null;
      } catch (err) {
        logger.warn("HTML fetch failed (using OGS only)", {
          url: targetUrl,
          err: err.message,
        });
      }
    
      return {
        ...ogsResult,
        metaProperties: metadata,
        faviconUrl: faviconB64,
      };
    }
    
    // ─── /ogproxy route ───────────────────────────────────────────────────────
    
    app.get("/ogproxy", requireApiKey, async (req, res, next) => {
      try {
        // Validate input
        let { url: targetUrl } = req.query;
    
        const { error } = urlSchema.validate({ url: targetUrl });
        if (error) {
          return res
            .status(400)
            .json({ error: `Invalid URL: ${error.details[0].message}` });
        }
    
        // Normalise (strip trailing slash etc.)
        targetUrl = new URL(targetUrl).href;
    
        // Cache hit?
        const cached = cache.get(targetUrl);
        if (cached) {
          res.setHeader("X-Cache", "HIT");
          return res.json(cached);
        }
    
        res.setHeader("X-Cache", "MISS");
    
        // Deduplicate concurrent requests for the same URL
        if (!inFlight.has(targetUrl)) {
          const promise = fetchOGData(targetUrl)
            .then((data) => {
              cache.set(targetUrl, data);
              return data;
            })
            .finally(() => inFlight.delete(targetUrl));
    
          inFlight.set(targetUrl, promise);
        }
    
        const data = await inFlight.get(targetUrl);
        return res.json(data);
      } catch (err) {
        next(err);
      }
    });
    
    // ─── Centralised error handler ────────────────────────────────────────────
    
    // eslint-disable-next-line no-unused-vars
    app.use((err, _req, res, _next) => {
      logger.error("Unhandled error", { message: err.message, stack: err.stack });
      res.status(500).json({ error: "Internal server error" });
    });
    
    // ─── Start ────────────────────────────────────────────────────────────────
    
    const server = app.listen(PORT, () => {
      logger.info(`OGProxy listening on port ${PORT}`);
      logger.info(`Cache: max=${CACHE_MAX} entries, TTL=${CACHE_TTL / 1000}s`);
    });
    
    // ─── Graceful shutdown ────────────────────────────────────────────────────
    
    function shutdown(signal) {
      logger.info(`${signal} received – shutting down gracefully`);
      server.close(() => {
        logger.info("HTTP server closed");
        process.exit(0);
      });
    
      // Force exit after 10 s if still busy
      setTimeout(() => {
        logger.warn("Forcing exit after timeout");
        process.exit(1);
      }, 10_000).unref();
    }
    
    process.on("SIGTERM", () => shutdown("SIGTERM"));
    process.on("SIGINT",  () => shutdown("SIGINT"));
    
    process.on("uncaughtException", (err) => {
      logger.error("Uncaught exception", { err });
      shutdown("uncaughtException");
    });
    
    process.on("unhandledRejection", (reason) => {
      logger.error("Unhandled rejection", { reason });
    });
    

    What do you think of this code?

  • @DownPW yes, I’ve noticed similar and it is to do with the cache because it is stored in memory and not on disk…

    However, when I wrote the code, I did allow for a maximum cache age so I’m puzzled as to why this isn’t being honoured.

    A review of the code is long overdue to be honest, but it’s slipped down the priority list. The fastest way to resolve this in the short term is to perform a daily restart of the service which will flush the cache.

  • Don’t hesitate to tell me what you think of this modified code.

    Until next time.

