Native Docker approach for automated feed updates in FreshRSS with Ofelia

I've started using FreshRSS as my RSS backend (with Reeder on iOS, of course). I was rather surprised that the tool doesn't refresh feeds automatically out of the box. Further, the documentation didn't offer any container-native approach to keeping my feeds up to date.

I'd recently discovered ofelia and thought this might be a good use case for it. I added the following to the services block in my FreshRSS docker-compose.yml file:

```yaml
ofelia:
  image: mcuadros/ofelia:latest
  depends_on:
    - freshrss
  command: daemon --docker
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
```

Then I added the following labels to my freshrss container definition:

```yaml
labels:
  - "ofelia.enabled=true"
  - "ofelia.job-exec.datecron.schedule=@every 30m"
  - "ofelia.job-exec.datecron.command=/usr/bin/php /var/www/FreshRSS/app/actualize_script.php"
```

A couple of notes:

  • ofelia lets you run commands inside the other container with job-exec; there are a few other options for job types.
  • Output from the command you pass as a label will show up in ofelia's logs.
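To apply the change and confirm the job fires, something like the following works — a sketch, assuming the freshrss and ofelia service names used above:

```shell
# Recreate the stack so the new ofelia service and labels take effect
docker compose up -d

# Watch ofelia's logs; output from each scheduled run shows up here
docker compose logs -f ofelia

# Optionally trigger a refresh by hand to verify the command itself works
docker compose exec freshrss /usr/bin/php /var/www/FreshRSS/app/actualize_script.php
```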

I suspect I will find many more use cases for ofelia now that I've discovered it.

Any projects coming down the tracks that you are looking forward to?
  • The problem is these devices don’t have the hardware to process input locally — it’s all sent back to their respective clouds for processing.

    I believe Siri on newer phones can do some processing entirely locally, but it’s not the norm.

  • 
    What radicalized you? For me it was Spore.
  • This is the way. HA is what really got me thinking about self-hosting more seriously — I realized Google and Amazon likely knew what room I was in and when I was in it. That was enough to go down the rabbit hole.

  • 
    Setting up docker, qbittorrent, and gluetun with protonVPN - seems to work cept wrong pw on web-ui
  • Just ran into this myself. I fixed it by disabling auth for my local network in the config file. IMPORTANT: Stop the container first because it overwrites config files on shutdown.

    I was then able to log in and set the password. After that the login worked fine.
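    For reference, the relevant keys live in qBittorrent.conf under the [Preferences] section — roughly like this (the subnet below is an example; adjust it to your LAN):

```ini
[Preferences]
; Skip WebUI authentication for clients on the whitelisted subnet
WebUI\AuthSubnetWhitelistEnabled=true
WebUI\AuthSubnetWhitelist=192.168.1.0/24
```

    Remember to stop the container before editing, then start it again and set a proper password through the WebUI.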

  • 
    Help to set up HomeLab in a newly built house
  • Hey friend, you’re aiming for a setup very close to what I run. Some lessons from my fumbling:

    • Get a NUC and install HASSOS on it for Home Assistant. Treat it like an appliance and leave it to do its thing.
    • I run Frigate on a separate host with 2x Coral USB TPUs processing 5x 4K cameras. It sits at about 20% utilization.
    • Check out Simply NUC and UCTRONICS for rack shelves for NUCs and Pis. If you have other random hubs you want to rack, check out this Etsy store: https://www.etsy.com/shop/Print3DSteve
    • I highly recommend a NAS for media and other storage. I have a Synology RS819 and it’s solid for this, but I just bought a used Dell R720XD that I plan on loading up with drives and installing TrueNAS on. I’d check out used Synology options on eBay for your use case.

    Given your low power consumption requirements, I’d probably look at something like this:

    • 1x NUC for HASS
    • 1x NUC with Coral TPU for Frigate + other apps
    • 1-3x Pi4s or 5s for other random apps. I run mine using PoE hats to my UniFi gear.
    • 1x Synology rack mount w/ 5400 RPM drives (lower power consumption).

    If you want to do any AI/ML beyond Frigate, you’ll want a desktop GPU in the rack. I still haven’t found a good option here. I’ll likely get a rack case that works with desktop hardware and throw a 3090 into it.

  • 
    "Simple" centralized logging solution?
  • Did you check out the Loki Docker plugin for the daemon? That worked like a charm for me.

    Promtail will grab host level logs as well.

    DM if you’re comfortable with Ansible; I have the whole stack (host + Docker services) automated and can share.
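    If you go that route, the daemon-level config is small. After installing the plugin, /etc/docker/daemon.json looks roughly like this (the Loki URL is an example — point it at your own Loki instance):

```json
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://loki.example.local:3100/loki/api/v1/push",
    "loki-batch-size": "400"
  }
}
```

    Restart the Docker daemon afterwards; newly created containers then ship their logs to Loki automatically.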

  • 
    Best GPUs for self-hosted AI?
  • Much appreciated — I think the rack mounted desktop GPU approach is best for now. Another commenter suggested we should see better options in 1-2 years and I strongly suspect they’re correct.

  • 
    Best GPUs for self-hosted AI?
  • I totally agree on the Coral TPUs. Great for Frigate, but not much else. I’ve got 2x of the USB ones cranking on half a dozen 4K streams - works wonderfully.

    And I agree in theory these Nanos should be great for all sorts of stuff, but nothing supports them. Everything I’ve seen is custom one-offs outside of DeepStack (though CodeProject.AI purports there’s someone now working on a Nano port).

    Sounding like a decent gaming GPU and a 2-3U box is the ticket here.

  • Best GPUs for self-hosted AI?

    Hello friends,

    I’m pretty deep into self-hosting - especially on the home automation side. I’ve got a couple of options for self-hosted AI, but I don’t think they’ll meet my long term goals:

    • Coral TPUs: I have 2x processing my Frigate data. These seem fine for that purpose, but not useful for generative AIs?

    • Jetson Nano: Near as I can tell nothing supports these things except DeepStack, which appears to be abandoned. Bummed these haven’t gotten broader support in the community.

    I’ve got plenty of rack space and my day job is managing thousands of machines, so not afraid of a more technical setup.

    The used NVIDIA rack mounted Tesla GPU servers look interesting. What are y’all using?

    Requirements:

    • Rack mounted
    • Supports local LLM and GenAI
    • Linux-based
    • Works with Docker