A few weeks late to pitch in now, but I can +1 docker-mailserver.
It has almost everything included and the configuration files are quite straightforward and flexible enough that you can drop little edits into the individual services if you need to tweak something.
My setup is very close to what you want: I use fetchmail to pull in from my old gmail and yahoo inboxes. I also have my own domain, so I configured the MX records so that emails go straight to my server, with a fallback to my email provider (any mail that doesn't make it directly to my mailserver will still get pulled into my inbox with fetchmail when it comes back online).
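The fallback is just standard MX priorities; here's a sketch with placeholder names (the lower preference value is tried first):

```
example.com.   3600  IN  MX  10 mail.example.com.        ; my own server, tried first
example.com.   3600  IN  MX  20 mx.mailprovider.example. ; provider fallback
```

If my server is unreachable, sending servers retry against the higher-numbered record, and fetchmail later pulls that mail down from the provider.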
Docker-mailserver allows you to set the SMTP of your instance to relay through your provider. This is important because it means they handle all the reputation stuff so that your emails get delivered properly (and neither my home ISP nor my VPS provider allows sending over port 25 anyway).
So when I need to connect a new client (like Thunderbird) to my email I don't need to configure anything manually, as docker-mailserver provides all the autoconfig responses, so it's really seamless. At the same time my risk is low, because even if my server is off my provider will still receive anything on my behalf. I can only send using the username I pay my provider for, and switching between gmail and yahoo is not possible without rewriting configs and restarting services, but that's not something I want anyway. On the receiving side I can have any number of aliased usernames that will all be received by my server (but only while it's on, so I use them rarely and only for disposable addresses).
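For anyone curious, the relevant knobs live in docker-mailserver's mailserver.env; this is a sketch from memory with placeholder hosts and credentials, so double-check the variable names against the current docs:

```shell
# mailserver.env (excerpt) - hosts/credentials are placeholders
ENABLE_FETCHMAIL=1                       # pull mail in from existing providers
RELAY_HOST=smtp.mailprovider.example     # send all outbound via the paid provider
RELAY_PORT=587
RELAY_USER=me@mailprovider.example
RELAY_PASSWORD=app-password
```

The fetchmail poll definitions (which upstream inboxes to pull, and which local user receives them) then go in the fetchmail config file alongside the rest of the docker-data config.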
Big downsides are:
- backups are now my problem as I don't keep duplicates.
- I route my traffic via a VPS+VPN to get a static public IP address - this was a headache to get all the little details just right, but it's stable now
- I have to host my own webmail separately, although I mostly use my phone with K-9 Mail and Thunderbird (I use Roundcube)
- Getting server-side filtering rules working was also annoying, and so far I still have to add new rules through Roundcube (there was a plugin for Thunderbird, but I don't want to open the additional ports it requires)
- !!! Spam !!! Docker-mailserver ships great Rspamd defaults out of the box, so it's actually fine, but now I have to manage all the additional rules myself and it's not super intuitive, especially because I am doing all this just for myself (yay!) while the tools are clearly meant for managing a fleet of inboxes, so everything takes me longer to figure out
- Integrating contacts is not included and might be important for your experience (again, I was able to add this as a Roundcube plugin ..... eventually)
Most of my complaints stem from the fact that I'm not very good at this but in the end it has been very satisfying to drop the occasional: "I host my own email BTW"
Good luck! Let us know how you get along!
And they have known about it for a long time..... https://climateintegrity.org/plastics-fraud
Many great reasons have been raised to explain why the current system persists, so it's clear to me that there isn't just a single roadblock. One reason, in my mind, is the challenge of reaching consensus. Even in small groups this can be problematic, and more so for an entire population. It frustrates me no end that small disagreements over the details of an idea are often enough to prevent any positive change, even though at a macro scale the idea is good and universally supported.
Thank you for the post, I do like reading what experts have to say about our digital privacy. What I don't like is that many of these articles/discussions focus on specific choices an individual user can make to gain more privacy. Please can we stop pretending that there is any alternative to WhatsApp. The network effect is why we use their platform, not any other reason. It's like advising someone to speak Fuzhou instead of Mandarin when in China: it's not that it's wrong to do so, just that it's poor advice, or at the very least it assumes your priority is speaking Fuzhou over actually communicating with other people. The author says as much themselves: “Collective problems need collective solutions.” This is great! But shortly after we read: “Instead of using WhatsApp, use Signal.” Groan + face-palm. I want to end on a positive note and reiterate that I am happy this is being debated at all.
Network effect is the biggest hurdle for sure. I think it is true for so many other services too. I think we can agree there is no real technical problem to solve; we only look at the technical problems because trying to "fix" the social and political issues is a lot harder. The Digital Markets Act is supposed to address this, but time will tell whether it has any lasting impact (in the EU).
I'm trying to do 3-2-1, but instead I'm doing 4-3-0. The original is on an SSD with scheduled backups to two separate HDs, so I have 3 copies on two different media (if SSD + HD counts as distinct enough). Then I added BDR as an infrequent 4th manual copy for my most irreplaceable data (and I'm very strict about what counts as irreplaceable, so the total is just over 100GB at this point). Eventually I need to get a copy of the disks off-site, but for now they are in the basement.
I have no illusions about how long the BDRs will last (estimates seem to range anywhere from 100 days to 100 years). My aim is just to have another copy that is distinct from magnetic or flash storage. My plan is to burn new updated copies so that any data on an old disc gets burned to a newer disc at some point. Maybe in ten years I'll abandon this approach, but for now it makes me feel better.
Grossman, who co-founded the Grossman Burn Foundation, killed Mark Iskander, 11, and brother Jacob, 8, in a speeding car in 2020.
cross-posted from: https://lemmy.world/post/16402340
> California socialite Rebecca Grossman sentenced to 15 to life for killing 2 kids in crosswalk
>
> A wealthy California woman who co-founded a burn center foundation in the Los Angeles area was sentenced to 15 years to life in prison Monday for the hit-and-run killings of two children while they were in a crosswalk more than three years ago.
>
> Rebecca Grossman was speeding when she struck and killed Mark Iskander, 11, and his brother Jacob, 8, while they were in a crosswalk in the Los Angeles-area city of Westlake Village on Sept. 29, 2020.
>
> “The loss of these two innocent lives has devastated their family and our community. Ms. Grossman’s blatant disregard for human life is a stark reminder of the grave consequences of irresponsible behavior behind the wheel,” Los Angeles County District Attorney George Gascón said in a statement.
>
> A jury convicted Grossman in February on two counts of second-degree murder, two counts of vehicular manslaughter with gross negligence and one count of hit-and-run driving resulting in death.
I am so sorry! I took another look at SMS Import / Export and, to my surprise, it does support scheduled backups! I had seen it before but had overlooked that feature, so I disqualified it. I'm going to test it now, but if you don't hear back it's probably because I made the switch without issue.
I saw this, but it's missing a key feature: daily backups. SMS Backup & Restore can make a backup every day, and then I can sync the file out to Nextcloud or similar. It also supports Dropbox if that's your thing.
This and so many others are irreplaceable because of the network effect. Google Maps, Uber and so on...
However, if you are looking for a self-contained app to bring into the FOSS ecosystem, then I would recommend making a game that you like.
My first game purchase on Google Play was Osmos; making an open source version of it would make me happy....
SMS Backup & Restore? Unless there is an alternative that I'm missing? Play store link
cross-posted from: https://startrek.website/post/10912845
> Glad I was too dumb to finish college...
Your average user is comparing the time to set up a new game vs. a punch in the face; no contest, punch in the face all day! Now, if you're getting punched in the face for more than 5 hours, then maybe they'll start considering an alternative....
YunoHost is trying to make it easier than a Synology NAS to install services and get them set up properly, but I agree that configuring your network properly is difficult, and everyone's setup is different, so specific knowledge is required.
Been keeping my eye on these guys hoping they can turn the tide: Taler
Now I'm just waiting for someone to do the same thing for the Nest Thermostats .....
A cursory review of all the non-metasearch, indexing search engines I have been able to find.
cross-posted from: https://reddthat.com/post/19788762
> If you ever wanted to know too much about where the majority of our search results come from and the many niche alternatives trying something different.....
I really hope they do but I'm keeping my expectations realistic.
An open alternative to Apple/Google/Samsung pay seems impossible.....
The actual % numbers are probably not that important. Software developers and hardware manufacturers are looking for a critical mass of users for their product. So if 20% of the world switches from Windows to Linux, but they're the 20% that only use a web browser, why would the compatibility landscape change? Adobe is not going to do the hard work of supporting Linux just because schools and libraries switch to Linux. Even if every government mandated Linux for government offices, would Cricut suddenly support Linux?
I think this is the only feature that matters. For a user switching away from Windows, I would love to hear a comparison of the user experience of buying a System76 (or another Linux system seller's) laptop vs. a Mac laptop. Complaining that Linux doesn't work with your hardware is like complaining that the hackintosh you built doesn't work with your hardware.
Unfortunately this is mostly true.......
I had a similar idea: Could search engines be broken up and distributed instead of being just a couple of monoliths?
Reading the HN thread, the short answer is: NO.
Still, it's fun to imagine what it might look like, if only......
I think the OP is looking for an answer to the problem of Google having a monopoly that gives them the power to make themselves impossible to challenge. The cost to replicate their search service is so astronomical that it's basically impossible to replace them. Would the OP be satisfied if we could make cheaper components that all fit together into a competing but decentralized search service? Breaking down the technical problems is just the first step; the basic concepts for me are:
Crawling -> Indexing -> Storing/host index -> Ranking
All of them are expensive because the internet is massive! If each of these were isolated but still interoperable then we get some interesting possibilities: Basically you could have many smaller specialized companies that can focus on better ranking algorithms for example.
- What if crawling was done by the owners of each website, with the results submitted to an index database of their choice? This flips the model around, so things like robots.txt might become less relevant. Bad actors and spammers, however, would no longer need any SEO tricks to flood a database or misrepresent their actual content; they could just submit whatever they like! These concerns feed into the next step:
- What if there were standard indexing functions, similar to how there are many standard hash functions? How a site is indexed plays an important role in how ranking will work (or not) later. You could have a handful of popular general-purpose index algorithms that most sites would produce and then submit (e.g. keywords, images, podcasts, etc.), combined with many more domain-specific indexing algorithms (e.g. product listings, travel data, mapping, research). Also, if the functions were open standards, it would be possible for a browser to run the index function on the current page and compare the result to the submitted index listing. It could warn users that the page they are viewing is probably either spam or misconfigured in some way that makes the index not match what was submitted.
- What if the stored indexes were hosted in a distributed way similar to DNS? Sharing the database would lower individual costs. Companies with bigger budgets could replicate the database to provide their users with a faster service. Companies with fewer resources would be able to use the publicly available indexes yet still be competitive.
- Enabling more competition between different ranking methods would hopefully reduce the effectiveness of SEO gaming (or maybe make it worse, as the same content gets repackaged for each and every index/rank combination). Ranking could even happen locally (this would probably not be efficient at all, but the fact that it might be possible at all is quite a novel thought).
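The browser-side check in the indexing idea above can be sketched in a few lines. Everything here is a toy stand-in (the "standard" index function is just top-N keywords), not a real protocol:

```python
# Toy sketch of an "open standard index function": a page is reduced to a
# keyword listing, and a client can recompute the index locally to check it
# against what the site operator submitted to the shared database.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is"}

def index_page(text: str, top_n: int = 5) -> list[str]:
    """Reduce a page to its top-N keywords (stand-in for a real indexing standard)."""
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    return sorted(w for w, _ in Counter(words).most_common(top_n))

def verify_listing(page_text: str, submitted: list[str]) -> bool:
    """Recompute the index client-side; a mismatch suggests spam or misconfiguration."""
    return index_page(page_text) == sorted(submitted)

page = "self hosted email email server server server setup guide"
listing = index_page(page)
assert verify_listing(page, listing)                                   # honest submission
assert not verify_listing(page, ["cheap", "pills", "casino", "win", "now"])  # spam listing
```

A real standard would need a much more robust (and probably fuzzier) comparison, but the shape of the check is the same.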
Sigh enough daydreaming already........
So upsetting that it takes decades for this information to surface.....
https://climateintegrity.org/plastics-fraud
Following on from this discussion: https://reddthat.com/post/6044040 I finally updated my VPS setup - deleted everything and started fresh with a whole new approach. I decided to do a full writeup for anyone who might find it useful, or at least mildly interesting. I'm not an expert in any of the concepts I wrote about, so I'm looking forward to the many, many ways I can improve! Many thanks!
TL;DR: Networking is deceptively easy to grasp conceptually and infuriatingly fiddly to implement in the real world. I am looking for help and advice to design a solution that fits my needs but is done 'the right way'.
The Hardware and Physical Network:
The main server lives in my home; it is an Intel NUC running Ubuntu. I rent a tiny VPS (Linode) running Debian with a public-facing static IP (hello internet!). My networking is fairly standard consumer-grade hardware, with most things wired into my main gigabit switch. I have more than one wifi access point, but all that is managed by my router. The router is connected to my ISP router in a way that creates a double-NAT situation. Before you comment: I can't change ISP, I can't open ports, I can't change the ISP router, and my ISP doesn't hand out static IP addresses; if you have any questions about my ISP, the worst possible answer is probably correct. (The connection, however, is fiber and I'm getting about 800Mbps down / 80Mbps up.)
The Software and Setup:
I now have about 65 docker containers running all the usual services, with plenty of extra apps that are somewhat useful but also just for fun (the number of containers is a bit misleading because I often have separate containers for databases or cron jobs and the like). The greatest hits include: Nextcloud, Homeassistant, Jellyfin, Photoprism, Vaultwarden, Pihole, Mailu and more. I also have some services set up natively (apt install): tailscale, wireguard and mergerfs. About half of my services are 100% local-only, where I access them on an assigned port number (e.g. jellyfin would be "server_name:port_number"), and I can reach those services in a pinch through either tailscale or wireguard, which brings us nicely onto the VPS: the VPS is running a wireguard 'server' and I have wireguard client configs for my devices and for my home server, so that if I connect to the wireguard VPN I can access my server and also route traffic to the internet just like with any other VPN provider.
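For context, this is a standard hub-and-spoke wireguard layout; a sketch with made-up keys, and the 10.0.0.0/24 range is my own assumption:

```ini
# VPS hub: /etc/wireguard/wg0.conf (keys and addresses are placeholders)
[Interface]
Address    = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the home server
PublicKey  = <home-server-public-key>
AllowedIPs = 10.0.0.2/32
```

On the home-server side, the mirror-image config points Endpoint at the VPS and sets PersistentKeepalive (25 seconds is the usual value); because of the double NAT, the home server must dial out and keep the tunnel alive so the VPS can always reach back in.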
Reverse Proxy
Now this is the really tricky part. I also have my own domain, and I have configured a series of subdomains for services that I want to access seamlessly from anywhere. I don't want to route through the VPS/VPN unnecessarily when I'm at home, and I don't want to have to remember to toggle wireguard/tailscale whenever I go out or come home. For the most part I have solved this, but I am sure I have done it in an amateur way: I run two duplicate nginx reverse proxy containers, one on the VPS and one on the home server. The VPS is able to request and obtain HTTPS certificates from Let's Encrypt, and these allow either reverse proxy to terminate the encrypted web traffic. When connecting from the public internet, nginx (on the VPS) proxies the connection unencrypted down through the wireguard tunnel (so still encrypted in transit) back to my server at home. At home I run a Pihole DNS server, so when I request the same domain from my local network the request goes instead to the duplicate nginx reverse proxy. When connecting to nginx on my home server, the HTTPS termination is handled there and the traffic is then forwarded over the wireguard network, but now this all happens on the server and not across the internet. Happy days? Yes, for the most part this all works great, but it does seem like the wrong way to do it.
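Roughly what each of the duplicate proxies looks like per subdomain (the wireguard IP, hostname and port here are placeholders based on my setup above):

```nginx
# VPS copy: terminate TLS, then forward in plaintext through the
# wireguard tunnel (so still encrypted on the wire) to the home server.
server {
    listen 443 ssl;
    server_name jellyfin.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://10.0.0.2:8096;   # service port on the home server
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The home-server copy is essentially identical except proxy_pass points at the local address, and Pihole's local DNS records make the same hostname resolve to the home server instead of the VPS.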
Mailu (but not really)
Currently my Mailu setup is limited: I am using the fetchmail service to get incoming emails from my existing email providers (gmail etc.) and an SMTP relay to send email. Mailu is not actually doing the job of a fully independent email server, which I'm fine with for now, and it gives me the possibility to expand its functionality in the future. So really, everything up to this point has been the back-story to explain my situation with Mailu now. Currently I have Mailu set up exactly as the developers suggest: Mailu is a collection of docker containers that make up the individual parts of an email server, and the stack is all wrapped up nicely by its own specially configured nginx reverse proxy (if you've been keeping score, that makes 3). This reverse proxy manages the HTTPS connections and then routes them to the back-end servers (containers). I was able to make this work for the webmail service because I configured it to serve unencrypted over HTTP and then used my existing nginx (VPS and home server) the same way as for all my other services. However, I wanted access to the IMAP and SMTP services, and for this I took a different approach: I used the nginx "stream" block in the configuration to forward the IMAP and SMTP connections from the public internet via the VPS, while on my local network the connection goes directly to the home server. So I can connect Thunderbird from my laptop and K-9 from my phone and send and receive email. And with the magic of having my own DNS server this actually works totally fine, because all the connections require authentication and I don't need to know the source IP address of the connecting client for this to work. So when I set up my mail clients I use my own domain for both the IMAP and SMTP servers - very satisfying!
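The stream forwarding is plain TCP, so unlike the http blocks nothing is terminated at the proxy; a sketch (the ports and tunnel IP are from my assumed layout):

```nginx
# In nginx.conf on the VPS, outside the http{} context.
stream {
    server {
        listen 993;                 # IMAPS, passed straight through to Mailu
        proxy_pass 10.0.0.2:993;
    }
    server {
        listen 587;                 # SMTP submission
        proxy_pass 10.0.0.2:587;
    }
}
```

Because the TLS handshake happens end-to-end with Mailu itself, the clients authenticate as normal and nginx never needs to inspect the mail protocols.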
Sending and Receiving Email Properly:
Firstly, if the reports about actually sending email from your own server are true, then I can safely say it's not for me. I am happy to use a relay; I'm not a masochist.... I do want to properly receive emails to my email server, though, and I think it should be much easier than sending. I did manage to receive some emails: I configured my domain's MX records correctly, and then by opening port 25 on my VPS and having nginx stream that connection to Mailu I was able to receive email directly (yay!), but with one major problem: the source IP for all the connections was my internal wireguard IP and not the actual public IP of the server sending me the email. This was a huge problem for my spam filter, as it needs the source IP when trying to identify spam. The number of false positives just skyrocketed.
Issues with Proxy Forwarding
I am obviously not the first person to face this issue, so I know that technically this can be configured correctly. The issue I am facing is that if I try to enable any of the proxy protocols in nginx to send the client IP in the header, then Mailu spits out an error (like 500 bad command), because we are interfering with the email protocol and the connecting server doesn't like that. I have tried changing settings in the Mailu reverse proxy, but everything I have tried so far has had no effect. Alternatively, we can go up a level and use IP forwarding rules to rewrite the destination IP of the incoming packets (maybe using iptables instead of nginx); this will transparently preserve the 'from IP' in the packet header, but then we need to set up my home server to relay everything back to the VPS, otherwise the return packets will come from my home IP and not the VPS IP, which will fail to correctly establish the connection. My current thinking is to create another wireguard client (in a container this time) that is part of the Mailu stack, give it its own virtual NIC and IP address, bind port 25 to that unique IP, and then have a routing rule to redirect all the traffic from that IP back to the VPS.
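The "route the replies back through the VPS" idea would look something like this on the home side; the interface name, mark and table number are all assumptions, and I have not verified this end to end:

```shell
# Mark inbound port-25 connections that arrive over the mail tunnel,
# restore that mark onto the reply packets, and send marked replies
# back out through the tunnel instead of the default (home ISP) route.
iptables -t mangle -A PREROUTING -i wg-mail -p tcp --dport 25 \
         -j CONNMARK --set-mark 25
iptables -t mangle -A OUTPUT -p tcp --sport 25 \
         -j CONNMARK --restore-mark
ip rule  add fwmark 25 lookup 125
ip route add default dev wg-mail table 125
```

The point is that the sending server then only ever sees the VPS's public IP, while the home server still sees the real client IP for the spam filter.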
Cry for help:
How do I get this to work without breaking everything!? Have I made this more complicated than it needs to be? Have I just reached the edge of what these systems were designed to do?
Thank you!
Just fill in this form – then the problems start. They look fine in Preview or another app, but don’t work right. Here’s why.
I was forced to fill out an XFA form (pretending to be a PDF) from the Canadian government, and the experience left me feeling completely subjugated. The lengths that Adobe goes to to make sure you have the most frustrating experience possible are unbelievable. Searching for alternatives or help leads you to either be forced to buy their premium software (or a licensed equivalent) or subscribe to Adobe's online tools. Why is this proprietary format allowed in government forms? What is so fantastic/irreplaceable about it?
These are toys for kids, but the original company makes real vacuum cleaners that look almost identical: Henry (vacuum). From the wiki page:

> Hetty HET200 - Short for Henrietta, pink "feminine" version of Henry, introduced on 7 July 2007 to be marketed towards women. Her face is different from all the other models' in having long eyelashes.
The connotation behind the word "criminal" has evolved as society has. 'Criminal' used to refer, and still does, to those who committed inexcusable, unlawful and unethical acts.
Is there really no alternative justice system to crime and punishment? It seems that punishment is taken for granted as necessary, and we only debate the reasons it is accepted.
Penkesu Computer - A Homebrew Retro-style Handheld PC
Very Nokia Communicator-esque....
The GRIZ Sextant is an expression of my love of 80s computers. But what computers exactly? There is a multitude of wild machines from the 80s to choose from but in the end, I ended up choosing a somewhat surprising choice as my design inspiration, the GRID Compass. It is difficult to properly desc
Can't match that 80's style!
Do you like hurting other people? Ten years later, it's probably the music that drove you to it, let's be honest.
Listened to the soundtrack recently but haven't played for a while......