I’m in the process of selecting a web-based SSH app so I can keep all my SSH servers in one place. I’ve tried Apache Guacamole and it’s been working fine.
I’m also trying Sshwifty, but the catch is that Sshwifty has no login screen in front of the data, which isn’t ideal. I went ahead and installed it anyway, and I’m asking whether my current setup is good enough.
I don’t run Authelia or Authentik to put it behind a 2FA app, and I don’t plan to install one soon. Instead, I installed Sshwifty on an OCI VM with the public IP 123.123.123.123, and in the security list I only allowed port 8182 from 123.123.123.123/32, so nothing can reach the app except the VM itself (localhost). Then I installed a Cloudflare Tunnel on that VM, enabled OTP by email, and allowed only my own email address.
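For context, the Cloudflare Tunnel on that VM just points at the local Sshwifty port; its config is roughly like this (the hostname is a placeholder and the tunnel ID/credentials file are whatever cloudflared generated; the email OTP policy itself lives in Cloudflare Access, not in this file):

# ~/.cloudflared/config.yml on the OCI VM (hostname and tunnel ID are placeholders)
tunnel: <tunnel-id>
credentials-file: /root/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: ssh.example.com        # assumed hostname
    service: http://localhost:8182   # Sshwifty listening locally on 8182
  - service: http_status:404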
So my question is, is this secure enough?
Tailscale 👍👍👍
So, I've been struggling with this for a few days. I have an Ubuntu server with Docker and Portainer installed, and Radarr and qBittorrent are deployed through Portainer.
This is my Radarr docker-compose within Portainer: https://imgur.com/hBh2XU9
And this is my qBittorrent docker-compose within Portainer: https://imgur.com/a/QfAKtss
These are the only docker-compose files that have worked for me.
Inside Radarr I created a remote path mapping: https://imgur.com/Fz8dlh6
qBittorrent downloads the files I pick in Radarr without any problem: https://imgur.com/Ou31i4S
However, there are two problems:
1: Radarr is stuck on this error
https://imgur.com/eDzsTPP
and in Events it reports:
Import failed, path does not exist or is not accessible by Radarr: /download/Bao (2018) [BluRay] [1080p] [YTS.AM]. Ensure the path exists and the user running Radarr has the correct permissions to access this file/folder
2: The original path /mnt/data-500-g/downloads, which is mounted in both apps as /download, has no downloaded files in it?! I mean, it's empty. Shouldn't it at least contain the files qBittorrent downloaded?
The whole path /mnt/data-500g has 777 permissions, so I don't think it's a permissions issue. So what could it be? 🤦♂️
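For reference, the mapping described above boils down to both stacks mounting the same host folder at the same container path. A sketch of what that looks like (shown as one file for brevity, although in Portainer they are two separate stacks; the images and any other options are assumptions, since the actual compose files are only in the screenshots):

# Sketch of the volume mapping described above (host path taken from the post)
services:
  radarr:
    image: lscr.io/linuxserver/radarr:latest        # assumed image
    volumes:
      - /mnt/data-500-g/downloads:/download
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest   # assumed image
    volumes:
      - /mnt/data-500-g/downloads:/download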
So, I've been trying to figure out how to install Jellyfin with the best options here. I made my choice and it's finally installed, but I have an issue with refreshing the library.
I have a Proxmox server (host) with a 1 TB HDD installed, and I created an Ubuntu Server VM (not a container) and installed Jellyfin on that VM.
I attached my 1 TB HDD as a virtual disk from the host to the Jellyfin VM using this command:
/sbin/qm set 500 -virtio2 /dev/disk/by-id/UUID
and made it auto-mount using fstab, and that part works great.
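For reference, the fstab entry for that virtio disk is along these lines (a sketch; the UUID, filesystem, and mount point are placeholders, not taken from the post):

# /etc/fstab inside the Jellyfin VM (illustrative values only)
UUID=1234-abcd-5678  /mnt/media  ext4  defaults  0  2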
However, if I make any changes to the original directory on the host, they don't show up in the Jellyfin VM until I reboot it.
i.e., if I add a new movie, say Batman Begins, to the original directory on the host, it doesn't show up at the mount point in the Jellyfin VM (even with ls -l) until I reboot the VM.
This is a very painful process, and I don't think it's supposed to work this way: for every new file, or even a folder rename, I have to reboot the whole VM.
I tried mount -a to remount the vda, but it doesn't help.
I also tried the "sync" option in fstab, but that doesn't work either.
What could be wrong, and how do I fix this issue?
The SMB share is only used by the Windows PC to add/update/delete library content.
Plex mounts the HDD as a virtual disk.
So my setup is:
- Proxmox server with the main HDD for media
- Plex VM with auto library refresh enabled
- My main windows pc at home
I copied all my media folders/files to a folder named media-server on the Proxmox server's HDD.
I attached this HDD as vda and mounted it in the Plex VM so Plex can read all the files and add them to the library.
I created an SMB share of the Proxmox server's HDD so I can update my library with new movies or series.
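For reference, the share on the Proxmox side would be along these lines (a sketch; the share name, path, and user are assumptions, not taken from an actual config):

# /etc/samba/smb.conf on the Proxmox host (illustrative values only)
[media-server]
    path = /mnt/media/media-server
    browseable = yes
    read only = no
    valid users = mediauser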
So I thought that with this setup, whenever I add a new movie to the Proxmox HDD from my Windows PC, Plex would auto-refresh and scan the library to add the new data.
However, that isn't happening: when I add a new folder or change anything, it doesn't show up in Plex. I even tried scanning the library manually, and sometimes it works and sometimes it doesn't.
What could be wrong with this setup?
Thanks, now I got to the UI. I added a static DNS entry for the IP address in my DNS server, so when I ping zabbix.local it replies with my server IP 10.0.10.249, but when I go to zabbix.local I get
I'm running this script but cannot open the UI or log in to the shell using admin@example.com and changeme.
OK, I fixed this error and NPM is running now, but I still can't reverse proxy a Proxmox VM through NPM.
I modified my yml file to match yours:
version: '3.8'
services:
networks:
  reverse_proxy:
    external: true
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    networks:
      - reverse_proxy
    ports:
      # These ports are in format <host_port>:<container_port>
      - '80:80'   # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81'   # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21' # FTP
    # Uncomment the next line if you uncomment anything in this section
    # environment:
      # Uncomment this if you want to change the location of
      # the SQLite DB file within the container
      # DB_SQLITE_FILE: "/data/database.sqlite"
      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
But when I run docker compose up I get this error:
validating /home/ubuntu/nginx-npm/docker-compose.yml: networks.app Additional property restart is not allowed
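For comparison, the error says the validator found app (and its restart key) under networks:. The stock NPM compose layout keeps app under services:, with networks: as a separate top-level key, roughly like this (a sketch based on the file above):

version: '3.8'

services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    networks:
      - reverse_proxy
    ports:
      - '80:80'   # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81'   # Admin Web Port
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

networks:
  reverse_proxy:
    external: true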
So I have a VM hosted on my Proxmox server with Docker and Nginx Proxy Manager; it can reverse proxy any container on the same "bridge" network without any problems.
However, I have other, non-Docker VMs on Proxmox that I want to reverse proxy to, and I couldn't get it to work. I tried (network_mode: host) and (network: external), but neither seems to work; whenever I inspect the Docker network, it always says NPM is running on the bridge network.
TL;DR: I need the containerized NPM to reverse proxy other, non-Docker Proxmox VMs.
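For reference, the network_mode: host attempt mentioned above would look roughly like this in the compose file (a sketch; note that network_mode cannot be combined with a networks: list, and published ports are ignored in host mode):

# Sketch of the host-network variant described above
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    network_mode: host            # container shares the VM's network stack
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt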
Proxmox 👍👍👍
I’m using a MikroTik as my main router, with IP 10.0.10.254.
I got a new desktop and turned it into a Proxmox server with IP 10.0.10.253,
and installed AdGuard Home as my local DNS server at 10.0.10.250, with 1.1.1.1 and 8.8.8.8 as its upstream resolvers.
I also have another AdGuard server VM hosted in the cloud, with IP 190.190.190.200 for example.
I configured the MikroTik to use the local AdGuard as my primary DNS server and the cloud one as secondary (under IP > DNS).
But it only resolves hostnames when "Allow remote requests" is checked; when I uncheck it and remove my cloud DNS VM, nothing resolves.
Also, in the AdGuard query log the only client IP is my MikroTik, 10.0.10.254, when it should be my local devices' IPs.
What could be wrong?
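For reference, the router-side setup described above corresponds roughly to this in RouterOS (IPs taken from the post; a sketch, not an actual export):

# Point the MikroTik at the local and cloud AdGuard instances,
# and let it answer DNS queries coming from the LAN
/ip dns set servers=10.0.10.250,190.190.190.200 allow-remote-requests=yes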
I think Tailscale has this feature; look at the admin panel.