Bug reports on any software
Every Lemmy instance chooses its own name for the meta community. Some don’t even have one. Some choose quite bizarre names.
That’s shit. If you walk into an office building, the receptionist is almost always close to the main entrance. When you enter a restaurant, the host(ess) is either close to the front door or there is a clear path to the host(ess). Yet Lemmy is terribly organised in this way. The power of defaults can go a long way here. A meta community should be created by default with a default name. And by default it should be listed at the top on the communities list.
The best way to cope with the madness is to sort communities chronologically, oldest first. But even that is not reliable: sometimes the meta community is created late in the game.
- fedia.io AZERTY→QWERTY keyboard conversion painful - Europe - Fedia
My laptop’s keyboard is QWERTY and the OS is configured for that. But when I attach an external AZERTY keyboard, it’s a disaster. Only five keys need to be rearranged (Q, W, A, Z, M), or so it seems superficially. But the punctuation is all wrong. It’s not just an arrangement problem but in fact the...
Both Lemmy and mbin have a shitty way of treating authors of content that is censored by a moderator.
Lemmy: if your post is removed from a community timeline, you still have the content. In fact, your logged-in profile looks no different, as if the message is still there. It’s quite similar to shadow banning. Slightly better though because if you pay attention or dig around, you can at least discover that you were censored. But shitty nonetheless that you get no notification of the censorship.
Mbin: if your post is removed, you are subjected to data loss. I just wrote a high-effort post to !europe@feddit.org and it was censored for not being “news”. There is no rule that your post must be news, just a subtle mention of news in the community topic. In fact they delete posts that are not news, despite having no rule along those lines. So my article is lost to this heavy-handed moderation style. Mbin authors are not deceived about the status of their post like on Lemmy, but they suffer data loss: they get no copy of what they wrote, so they cannot recover it and post it elsewhere.
It’s really disgusting that a moderator’s trigger-happy delete button causes data loss for someone else. I probably spent 30 minutes writing the post only to have that effort thrown away by a couple of clicks. Data loss is obviously a significant software defect.
Discuss. (But plz, it’s only interesting to hear from folks who have some healthy degree of contempt for exclusive corporate walled-gardens and the technofeudal system the fedi is designed to escape.)
And note that links can come into existence that are openly universally accessible and then later become part of a walled-garden... and then later be open again. For example, youtube. And a website can become jailed in Cloudflare but then be open again at the flip of a switch. So a good solution would be a toggle of sorts.
When an arrogant presumptuous dick dumps hot-headed uncivil drivel into a relatively apolitical thread about plumbing technology and reduces the quality of the discussion to a Trump vs. $someone style shitshow of threadcrap, the tools given to the moderator are:
- remove the comment (chainsaw)
- ban the user from the community (sledge hammer)
Where are the refined sophisticated tools?
When it comes to nannying children, we don’t give teachers a baseball bat. It’s the wrong tool. We are forced into a dilemma: either let the garbage float, or censor. This encourages moderators to be tyrants and too many choose that route. Moderators often censor civil ideas purely because they want to control the narrative (not the quality).
I want to do quality control, not narrative control. I oppose the tyranny of censorship in all but the most vile cases of bullying or spam. The modlog does not give enough transparency. If I wholly remove that asshole’s comment, then I become an asshole too.
He is on-topic. Just poor quality drivel that contributes nothing of value. Normally voting should solve this. X number of down votes causes the comment to be folded out of view, but not censored. It would rightfully keep the comment accessible to people who want to pick through the garbage and expand the low quality posts.
Why voting fails:
- tiny community means there can never be enough down votes to fold a comment.
- votes have no meaning. Bob votes emotionally and down votes every idea he dislikes, while Alice down votes off-topic or uncivil comments, regardless of agreement.
Solutions:
I’m not trying to strongly prescribe a fix in particular, but have some ideas to brainstorm:
- Mods get the option to simply fold a shitty comment when the msg is still on-topic and slightly better quality than spam. This should come with a one-line field (perhaps mandatory) where the mod must rationalise the action (e.g. “folded for uncivil rant with no useful contribution to the technical information sought”).
- A warning counter. Mods can send a warning to a user in connection with a comment. This is already possible, but it requires moderators to have an inhuman memory. A warning should not be just another DM: it should be tracked and counted. Mods should see a counter next to participants indicating how many warnings they have received, and a page to view them all, to aid decisions on whether to ban a user from a community.
- Moderator votes should be heavier than user votes. Perhaps an ability to choose how many votes they want to cast on a particular comment to have an effect like folding. Of course this should be transparent so it’s clear that X number of votes were cast by a mod. Rationale:
- mods have better awareness of the purpose and rules of the community
- mods are stakeholders with more investment into the success of a community than users
- Moderators could control the weight of other users’ votes. When 6 people upvote an uncivil post and only 2 people down vote it, voting as a tool is rendered impotent and in fact harm-inducing. Lousy/malicious voters face no consequences for harmful voting and thus have no incentive to use voting as an effective tool for good. A curator should be able to adjust voting weight accordingly, e.g. take an action on a particular poll that results in a weight adjustment (positive or negative) on the users who voted a particular direction. The effect would be to cause voters to prioritize civil quality above whether they simply like/dislike an idea, so that votes take on a universal meaning. Which of course then makes voting an effective tool for folding poor-quality content (as it was originally intended).
- (edit) Ability for a moderator to remove a voting option. If a comment is uncivil, allowing upvotes is only detrimental, so a moderator should be able to narrow the ballot to down vote or neutral. Perhaps the contrary as well (upvote-only, as Beehaw does instance-wide by disabling down votes). And perhaps the option to neutralise voting on a specific comment entirely.
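The heavier-mod-vote idea above can be sketched minimally. The fold threshold, the (weight, value) vote shape, and all the numbers here are invented for illustration; this is not how Lemmy or Mbin actually tally votes.

```python
# Sketch of mod-weighted vote folding. Threshold and weights are invented.

FOLD_THRESHOLD = -5  # net score at which a comment folds out of view

def net_score(votes):
    """votes: list of (weight, value) pairs; value is +1 or -1.
    An ordinary user casts weight 1; a mod may transparently cast more."""
    return sum(weight * value for weight, value in votes)

def is_folded(votes):
    return net_score(votes) <= FOLD_THRESHOLD

# Six emotional upvotes vs. two quality downvotes: voting is impotent...
votes = [(1, +1)] * 6 + [(1, -1)] * 2
assert not is_folded(votes)

# ...until a moderator casts a heavier, openly-labelled vote.
votes.append((10, -1))
assert is_folded(votes)
```

The point of keeping weight explicit per vote is transparency: the tally can show that X of the votes came from a mod rather than hiding the adjustment.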
If you open a PDF document in the browser (thus in pdf.js) and click the down arrow (↓) to save it locally, it redownloads the document instead of simply saving it from the cache. If you lose network connectivity or disconnect then try to save the PDF locally for later viewing, the browser reports connection issues when there was no need for the network.
Tor Browser (Firefox based) does not have this problem.
- fedia.io Onion hosts are not recognized as URLs and thus get funny treatment. - Fedia Discussions - Fedia
As the linked post demonstrates, if you enter a link like this:...
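For what it’s worth, recognising v3 onion hosts takes only a slightly wider hostname pattern: 56 base32 characters before “.onion”. A sketch (this says nothing about how fedia’s actual linkifier is implemented):

```python
import re

# v3 onion hostnames are exactly 56 base32 characters (a-z, 2-7).
# A URL recognizer only needs to accept that alphabet and length.
ONION_URL = re.compile(r"https?://[a-z2-7]{56}\.onion(?:/\S*)?")

assert ONION_URL.match(
    "http://xxtbwyb5z5bdvy2f6l2yquu5qilgkjeewno4qfknvb3lkg3nmoklitid.onion/"
)
assert not ONION_URL.match("http://example.com/")
```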
An important part of YouTube content is the transcript at the bottom of the video description. Some third-party sites collect and share YT transcripts separately, but then the naive admins put the service inside Cloudflare’s walled garden, which is worse than YT itself and largely purpose-defeating. (Exceptionally, this service is CF-free, but it says “Transcript is disabled on this video” in my test: https://youtubetranscript.io)
Invidious should be picking up the slack here.
And Lemmy could do better by automatically fetching the transcript of YouTube/Invidious links and including it, perhaps collapsed spoiler-style.
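IIUC Lemmy’s markdown already supports collapsible spoiler blocks, so a fetched transcript could be wrapped like this (the fetching itself is hand-waved; only the formatting is shown):

```python
def as_spoiler(title, body):
    """Wrap text in Lemmy's collapsible '::: spoiler' markdown block."""
    return f"::: spoiler {title}\n{body}\n:::"

md = as_spoiler("Transcript", "0:01 hello and welcome...")
assert md.startswith("::: spoiler Transcript")
assert md.endswith(":::")
```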
Mastodon links on open decentralised nodes are auto redirected to access-restricted Cloudflare nodes
I browse with images disabled. But sometimes I encounter a post where I want to see the image, like this one:
https://iejideks5zu2v3zuthaxu5zz6m5o2j7vmbd24wh6dnuiyl7c6rfkcryd.onion/@JosephMeyer@c.im/112923392848232303
When opening that link in a browser configured to fetch images, it redirects to the original instance, which is inside an access-restricted walled garden. This seems like a new behaviour for Mastodon thus may be a regression.
It’s a terrible design because it needlessly forces people on open decentralised networks into centralised walled gardens. The behaviour arises out of the incorrect assumption that everyone has equal access. As Cloudflare proves, access equality is non-existent. The perversion in this particular case is an onion is redirecting to Cloudflare (an adversary to all those who have onion access).
There should be two separate links to each post: one to the source node, and one to the mirror. This kind of automatic redirect is detrimental. Lemmy demonstrates the better approach of giving two links and not redirecting. (But Lemmy has that problem of not mirroring images).
There are some very slow nodes (like Beehaw) where the server is apparently so overworked it cannot render a login form most of the time. The browser times out waiting. In the rare moments that there is a login opportunity, about ½ the time the login fails with a 2 second popup saying “incorrect login credentials”.
It’s quite terrible because obviously users would assume their account has been deleted --- because that’s how most online services work. Admins do not generally warn you or say why an account was deleted. They just hit the delete button. Like Milton in Office Space, who was never told he was laid off… they just “fixed the payroll glitch”. This is generally how communication works on communication platforms… admins just pull the plug.
So because of how people learn that their account is deleted, users cannot distinguish a purposeful account removal from a faulty server. If you have a Beehaw account and you are told “incorrect login credentials”, don’t believe it. Keep trying. Eventually you’ll get in.
In the stock Lemmy web client there is apparently no mechanism for users to fetch their history of posts. The settings page gives only a way to download settings. This contrasts with Mastodon where users can grab an archive of everything they have posted which is still stored on the server.
Or am I missing something?
IIUC, there is no GDPR issue here because no data is personal (because all Lemmy accounts are anonymous). But if a Lemmy server were to hypothetically require users to identify themselves with first+last name, then the admin would have a substantial manual burden to comply with GDPR Art.20 requests. Correct?
These environment variables designate a parameter that holds the value of an HTTP proxy:
http_proxy
https_proxy
HTTP_PROXY
HTTPS_PROXY
It’s a convention, but the name “HTTP proxy” can only mean an HTTP proxy, not a SOCKS proxy. Yet the golang¹ standard libraries expect the above HTTP proxy parameters to specify a SOCKS proxy. How embarrassing is that? So any Go app that offers a proxy feature replicates this reversal of proxy kinds. Such as hydroxide, which requires passing a SOCKS proxy as an HTTP proxy.
¹ “Go” is such a shitty unsearchable name for a language. It’s no surprise that the developers of the language infra itself struggle with the nuances of natural language. HTTP≠SOCKS. And IIUC, this language is a product of Google. WTF. It’s the kind of amateurish screwup you would expect to come from some teenager’s mom’s basement, not a fortune 500 company among the world’s biggest tech giants.
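For contrast, most tooling honours the convention literally. E.g. Python’s stdlib reads these variables and hands the value back as the HTTP(S) proxy URL, no SOCKS reinterpretation involved:

```python
import os
import urllib.request

# The *_proxy convention: the value names an HTTP proxy URL.
os.environ["http_proxy"] = "http://127.0.0.1:8080"
os.environ["https_proxy"] = "http://127.0.0.1:8080"

proxies = urllib.request.getproxies()
assert proxies["http"] == "http://127.0.0.1:8080"
```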
(edit) It’s a bit amusing and simultaneously disappointing that reporting bugs and suggesting enhancements to Google’s language requires using Microsoft’s platform:
https://github.com/golang/proposal#the-proposal-process
FOSS developers: plz avoid Golang - it’s a shit show.
Lingva & Simply Translate are two different front-ends to Google Translate. I’m not running the software myself because I run Argos locally (for privacy), but when Argos gives a really bad translation I resort to Lingva and Simply Translate instances.
I tried to translate a privacy policy. Results:
Lingva instances:
- translate.plausibility.cloud ← goes to lunch
- lingva.lunar.icu ← gives “414 Request-URI Too Large”
- lingva.ml & lingva.garudalinux.org ← fuck off Cloudflare! Obviously foolishly purpose defeating to surreptitiously expose people to CF who are trying to avoid direct Google connections.
- translate.igna.wtf ← dead
- translate.dr460nf1r3.org ← dead
Simply Translate instances (the list of instances is broken for me, but I found a year-old mirror of it):
- simplytranslate.org ← just gives a blank
- st.tokhmi.xyz ← up but results are just CSS garbage
- translate.bus-hit.me (ST fork mozhi) ← shoots a blank result
- simplytranslate.pussthecat.org ← redirects to mozhi.pussthecat.org
- mozhi.pussthecat.org (ST fork mozhi) ← shoots a blank result
- translate.projectsegfau.lt (ST fork mozhi) ← translates the first word then drops the rest; this instance is incorrectly listed as Lingva
- translate.northboot.xyz ← up but results are just CSS garbage
- st.privacydev.net ← up but results are just CSS garbage
- tl.vern.cc ← up but results are just CSS garbage
It looks as if Simply Translate is not keeping up with Google API changes. (edit: actually the CSS garbage is what we get when feeding it bulky input -- those instances work on small input)

Graveyard of dead sites:
- simplytranslate.manerakai.com ← redirects to vacated site
- translate.josias.dev
- translate.riverside.rocks
- translate.tiekoetter.com
- simplytranslate.esmailelbob.xyz
- translate.slipfox.xyz
- translate.priv.pw
- st.odyssey346.dev
- fyng2tsmzmvxmojzbbwmfnsn2lrcyftf4cw6rk5j2v2huliazud3fjid.onion
- xxtbwyb5z5bdvy2f6l2yquu5qilgkjeewno4qfknvb3lkg3nmoklitid.onion
- translate.prnoid54e44a4bduq5due64jkk7wcnkxcp5kv3juncm7veptjcqudgyd.onion
- simplytranslate.esmail5pdn24shtvieloeedh7ehz3nrwcdivnfhfcedl7gf4kwddhkqd.onion
- tl.vernccvbvyi5qhfzyqengccj7lkove6bjot2xhh5kajhwvidqafczrad.onion
- st.g4c3eya4clenolymqbpgwz3q3tawoxw56yhzk4vugqrl6dtu3ejvhjid.onion
Why this is a bug --- Front-ends and proxies exist to circumvent the anti-features of the service they facilitate access to. So if there is a volume limitation, the front-end should be smart enough to split the content into pieces, translate the pieces separately, and reassemble them. In fact that should be done anyway for privacy, to disassociate pieces of text from each other.
Alternatively (and probably better) would be a front-end for the front-ends: something that gives a different paragraph to several different Lingva/ST instances and reassembles the results. This would (perhaps?) link a different IP to each piece, assuming the front-ends also proxy (not sure if that’s the case).
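The splitting step is straightforward to sketch: break bulky input at sentence boundaries into chunks under the size limit, translate each piece separately, reassemble. Here `translate` is a stand-in callable, not any real Lingva/Simply Translate API:

```python
import re

def chunked_translate(text, translate, max_len=500):
    """Split text at sentence boundaries into chunks <= max_len chars,
    run each through `translate` (a stand-in for a real instance call),
    and reassemble the results in order."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_len:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return " ".join(translate(c) for c in chunks)
```

Handing each chunk to a different instance (for the IP-disassociation idea) would just mean cycling `translate` over a pool of callables.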
cross-posted from: https://slrpnk.net/post/11375008
> Whoever designed the OSM db either never uses ATM machines or they have never experienced anything like the ATM disaster in Netherlands. The OSM db has most ATM brands incorrect for Netherlands and seriously needs more fields so travelers can actually find a functioning ATM.
>
> ## brands are mostly incorrect
>
> Pick any Dutch city. Search » Categories » custom search » Finance » ATM. The brands are mostly misinfo. These ATM brands do not exist anywhere in Netherlands:
>
> * Rabobank
> * ABN AMRO
> * Ing
> * SNS
>
> All those banks removed all their ATM machines and joined a monopolistic consortium called “Geldmaat”. There is generally an ATM at those locations but it’s always a Geldmaat ATM. So a simple find and replace is needed on all the Dutch maps.
>
> For indoor ATMs, the brand is often incorrectly named after the shop it’s in. That’s useful for finding it but still missing important info: the actual ATM brand. ATM brand is very important because different ATM brands give differing degrees of shitty treatment. If brand X refuses your card, all instances of that ATM brand will likely refuse your card. So the “brand” field should always reflect the ATM operator. Having a separate shop name field would be useful for locating the machine.
>
> ## missing key attributes
>
> Travelers should not have to spend hours running from one ATM to another until they find one that works. There are lots of basic variables that need to be accounted for in the db:
>
> * (real or fixed point) ATM fee
> * (enum set) currencies other than local (a rare but very useful option is to e.g. pull out GBP or USD in the eurozone)
> * (enum set) card networks supported (visa, amex, discover, maestro, etc)
> * (enum set) UI languages supported
> * (integer) transaction limit for domestic cards
> * (integer) transaction limit for foreign cards
> * (integer set) denominations in the machine (Netherlands quietly removed all banknotes >€50 from all ATMs IIUC)
> * (boolean) whether customers can control the denominations
> * (boolean) indoor/outdoor (if the txn limit field is empty, indoor machines often have higher limits)
> * (string) hours of operation (if indoor)
> * (string) name of shop the ATM is inside (if indoor)
> * (enum) whether a balance check is supported: [no | only some cards | any card]; this feature is non-existent in Belgium but common in Netherlands. Note that some ATMs only give balance on their own cards.
> * (enum) whether the balance is on screen or printed to the receipt, or both
> * (boolean) insertion style -- whether the card is sucked into the machine (this is very important because if the card is sucked in by a motor there is a real risk that the machine keeps the card [yes, that’s deliberate]). Motorised insertion is more reliable but carries the risk of confiscation. Manual insertion can be fussy and take many tries to get it to read the card but you never have to worry about confiscation.
> * (boolean) dynamic currency conversion (DCC)
> * (boolean) whether there is an earphone port for blind people (not sure if that’s always there)
In the Lemmy web client it used to be possible to open a new tab (control-tab) which would naturally be logged in. That goes for most websites. With Lemmy it started getting flakey (sometimes works, sometimes not). Lately it’s working less often and it seems browser flavor is a factor. Tor Browser (FF) generally works, but Ungoogled Chromium new tabs are logged out. So in UC, I have to do everything for a Lemmy instance under one tab.
I wonder what kind of funny business causes session cookies to fail. My guess is they are not using session cookies for logins but rather one of the rare alternatives.
update --- With just one tab running, I did a hard refresh (control-shift-R). That logged me out presumably doing the same as getting a new tab. Using the /back/ button does not recover from this.
I installed the Aria2 app from f-droid. I just want to take a list of URLs of files to download and feed it to something that does the work. That’s what Aria2c does on the PC. The phone app is a strange beast and it’s poorly described & documented. When I launch it, it requires creating a profile. This profile wants an address. It’s alienating as fuck. I have a long list of URLs to fetch, not just one. In digging around, I see sparse vague mention of an “Aria server”. I don’t have an aria server and don’t want one. Is the address it demands under the “connection” tab supposed to lead to a server?
The `readme.md` is useless: https://github.com/devgianlu/Aria2App
The app points to this link which has no navigation chain:
https://github.com/devgianlu/Aria2App/wiki/Create-a-profile
Following the link at the bottom of the page superficially seems like it could have useful info:
“To understand how DirectDownload work and how to set it up go here.”
but clicking /here/ leads to a dead page. I believe the correct link is this one. But on that page, this so-called “direct download” is not direct in the slightest. It talks about setting up a server and running Python scripts. WTF… why do I need a server? I don’t want a server. I want a direct download in the true sense of the word.
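For reference, the batch workflow the app fails to offer is one flag on desktop aria2c: `-i`/`--input-file`, which reads one URI per line. A sketch that just builds the input file and shows the command shape (URLs are made up; run the command only where aria2c is installed):

```python
# Batch-feed a URL list to aria2c via its real -i/--input-file flag.
urls = [
    "https://example.com/a.pdf",
    "https://example.com/b.pdf",
]
with open("uris.txt", "w") as f:
    f.write("\n".join(urls) + "\n")

cmd = ["aria2c", "--input-file=uris.txt"]
# import subprocess; subprocess.run(cmd, check=True)  # where aria2c exists
```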
If fedi node A and node B both have an anti-spam rule, it makes good sense that when a moderator removes a post for spam that it would be removed from both nodes. But what about other cases? Lemmy is a bit blunt and nuance-lacking in this regard.
For example, the parent of this thread was censored despite not breaking any rules. More importantly, it breaks no rules on slrpnk.net. Yet the slrpnk version was also removed.
I’m not sure exactly what the fix is. But in principle an author should be able to ask a slrpnk admin to restore the post in the slrpnk version of that community, so long as no slrpnk rules are broken by the post.
It’s one thing for various nodes to federate based on having compatible site-wide rules, but they aren’t necessarily 100% aligned, and there are also rogue moderators who apply a different set of rules than what’s prescribed for a community.
If you long-tap an image that someone sent, options are:
- share with…
- copy original URL
- delete image
The URL is not the local URL, it’s the network URL for fetching the image again. When you send outbound images, Snikket stores them in one place, but it’s nowhere near the place where it stores inbound images. I found it once after a lengthy hunt but did not take notes. I cannot find it now. I think it’s well buried somewhere. What a piece of shit.
Those who condemn centralised social media naturally block these nodes:
- #LemmyWorld
- #shItjustWorks
- #LemmyCA
- #programmingDev
- #LemmyOne
- #LemmEE
- #LemmyZip
The global timeline is the landing page on Mbin nodes. It’s swamped with posts from communities hosted on the above shitty centralised nodes, which break interoperability for all demographics that Cloudflare Inc. marginalises.
Mbin gives a way for users to block specific magazines (Lemmy communities), but no way to block a whole node. So users face the very tedious task of blocking hundreds of magazines, which is effectively a game of whack-a-mole. Whenever someone else on the Mbin node subscribes to a CF/centralised node, the global timeline gets polluted with exclusive content and potentially many other users have to find the block button.
Secondary problem: (unblocking) My blocked list now contains hundreds of magazines spanning several pages. What if LemmEE decides one day to join the decentralised free world? I would likely want to stop blocking all communities on that node. But unblocking is also very tedious because you have to visit every blocked magazine and click “unblock”.
the fix --- ① Nix the global timeline. Lemmy also lacks whole-node blocking at the user level, but Lemmy avoids this problem by not even having a global timeline. Logged-in users see a timeline that’s populated only with communities they subscribe to.
«OR»
② Enable users to specify a list of nodes they want filtered out of their view of the global timeline.
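Option ② is a simple filter at render time. A sketch, with the timeline entry shape and blocklist invented for illustration:

```python
# Sketch of user-level host filtering for a global timeline.
from urllib.parse import urlparse

blocked_hosts = {"lemmy.world", "sh.itjust.works", "lemm.ee"}

def visible(timeline, blocked=blocked_hosts):
    """Drop entries whose community lives on a user-blocked node."""
    return [e for e in timeline
            if urlparse(e["community_url"]).hostname not in blocked]

timeline = [
    {"title": "a", "community_url": "https://lemmy.world/c/news"},
    {"title": "b", "community_url": "https://feddit.org/c/europe"},
]
assert [e["title"] for e in visible(timeline)] == ["b"]
```

Unblocking a whole node (the secondary problem above) then reduces to removing one hostname from the set, instead of visiting hundreds of magazines.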
While composing this post the Lemmy web client went to lunch. This is the classic behaviour of Lemmy when it has a problem. No error, just infinite spinner. After experimentation, it turns out that it tries to be smart but fails when treating URLs written with the gemini:// scheme.
(edit) It’s probably trying to visit the link for that convenience feature of pre-filling the title. If it does not recognise the scheme, it should just accept it without trying to be fancy. It likely screws up on other schemes as well, like dict, ftp, news, etc.
The workaround is to embed the #Gemini link in the body of the post.
I think the stock Lemmy client stops you from closing a browser tab if you have an editor open on a message, to protect you from accidental data loss.
Mbin does not.
A vast majority of the fediverse (particularly the threadiverse) is populated by people who have no sense of infosec or privacy, who run stock browsers over clearnet (e.g. #LemmyWorld users, the AOL users of today). They have a different reality than street-wise people. They post a link to a page that renders fine in the world they see, totally oblivious that they are sending the rest of the fediverse into an exclusive walled garden.
There is no practical way for street wise audiences to signal “this article is exclusive/shitty/paywalled/etc”. Voting is too blunt of an instrument and does not convey the problem. Writing a comment “this article is unreachable/discriminatory because it is hosted in a shitty place” is high effort and overly verbose.
the fix --- The status quo:
- (👍/👎) ← no meaning.. different people vote on their own invented basis for voting
We need refined categorised voting. e.g.
- linked content is interesting and civil (👍/👎)
- body content is interesting and civil (👍/👎)
- linked article is reachable & inclusive (👎)¹
- linked content is garbage-free (no ads, popups, CAPTCHA, cookie walls, etc) (👍/👎)
¹ Indeed a thumbs up is not useful on inclusiveness because we know every webpage is reachable to someone or some group and likely a majority. Only the count of people excluded is worth having because we would not want to convey the idea that a high number of people being able to reach a site in any way justifies marginalization of others. It should just be a raw count of people who are excluded. A server can work out from the other 3 voting categories the extent by which others can access a page.
From there, how the votes are used can evolve. A client can be configured to not show an egalitarian user exclusive articles. An author at least becomes aware that a site is not good from a digital rights standpoint, and can dig further if they want.
update --- The fix needs to expand. We need a mechanism for people to suggest alternative replacement links, and those links should also be voted on. When a replacement link is more favorable than the original link, it should float to the top and become the most likely link for people to visit.
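The categorised tallies could be as simple as this sketch; the category names mirror the list above and the class shape is invented for illustration:

```python
# Sketch of categorised voting replacing the single up/down vote.
from collections import Counter

CATEGORIES = ("link_quality", "body_quality", "garbage_free")

class CommentVotes:
    def __init__(self):
        self.tallies = {c: Counter() for c in CATEGORIES}
        self.excluded = 0  # raw count of people the linked site shuts out

    def vote(self, category, up):
        self.tallies[category]["up" if up else "down"] += 1

v = CommentVotes()
v.vote("link_quality", up=False)
v.excluded += 1  # reachability gets only an exclusion count, no upvote
assert v.tallies["link_quality"]["down"] == 1
```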
Some will regard this as an enhancement request. To each his own, but IMO *grep has always had a huge deficiency when processing natural languages due to line breaks. PDFGREP especially because most PDF docs carry a payload of natural language.
If I need to search for “the.orange.menace“ (dots are 1-char wildcards), of course I want to be told of cases like this:
A court whereby no one is above the law found the orange menace guilty on 34 counts of fraud..
When processing a natural language, a sentence terminator is almost always a more sensible boundary. There’s probably no command older than grep that’s still in use today, so it’s bizarre that it has not evolved much. In the 90s there was a LexisNexis search tool which was far superior for natural language queries. E.g. (IIRC):
foo w/s bar
:: matches if “foo” appears within the same sentence as “bar”

foo w/4 bar
:: matches if “foo” appears within four words of “bar”

foo pre/5 bar
:: matches if “foo” appears before “bar”, within five words

foo w/p bar
:: matches if “foo” appears within the same paragraph as “bar”
Newlines as record separators are probably sensible for all things other than natural language. But for natural language grep is a hack.
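A sentence-scoped “w/s” operator is not much code. A minimal sketch, with a deliberately naive sentence splitter and a flattening step to undo hard line wraps (the thing that defeats grep):

```python
import re

def within_sentence(text, a, b):
    """LexisNexis-style 'a w/s b': true if both patterns match inside
    the same sentence, using terminators (not newlines) as boundaries."""
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        flat = " ".join(sentence.split())  # undo hard line wraps
        if re.search(a, flat) and re.search(b, flat):
            return True
    return False

# The document's example: a line break falls inside the phrase.
text = ("A court whereby no one is above the law found\n"
        "the orange menace guilty on 34 counts of fraud.")
assert within_sentence(text, r"the.orange.menace", r"guilty")
```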
I cannot believe how stupid Chromium is considering it’s the king of browsers from a US tech giant. It’s another bug that should be embarrassing for Google.
If you visit a PDF, it fetches the PDF and launches pdf.js as expected. If you use the download button within pdf.js, you would expect it to simply copy the already fetched PDF from the cache to the download folder. But no.. the stupid thing goes out on the WAN and redownloads the whole document from the beginning.
I always suspected this, but it became obvious when I recently fetched a 20 MB PDF from a slow server. It struggled for a while to get the whole thing just for viewing. Then after clicking download within pdf.js, it was crawling again from 1% progress.
What a stupid waste of bandwidth, energy and time.
cross-posted from: https://sopuli.xyz/post/12858874
> When an image is posted by someone on a Cloudflared instance like the following:
>
> * #LemmyWorld
> * #ShitJustworks
> * #LemmyCA
> * #LemmyEE
> * #LemmyZip
> * #LemmyOne
>
> the image is inaccessible to all demographics of people who Cloudflare discriminates against, because images are not mirrored to federated nodes.
>
> We expect corporations to not give a shit about marginalising people who are not profitable enough to care about. But when naive asshole users outnumber progressive egalitarians, it highlights a problem with the fedi, which still lacks the tooling needed to keep oppression at bay.
>
> The six nodes listed above effectively host the AOL users of our time, lacking the sophistication needed to detect and grasp situations of eroded digital rights, blind to and unconcerned with centralised corporate control.
>
> Suggestions needed for Lemmy nodes that are defederated from the six listed above.
Different apps expect passwords in the `.netrc` file to be quoted in different ways. E.g. fetchmail expects passwords to be quoted in a bash-style way (quotes needed if there are special chars, but quotes themselves need quoting), while cURL gives no special meaning to quotes and takes them literally if present.

Who to blame for this is a bit unclear, but I believe the original purpose of `.netrc` was for the standard CLI FTP program, so in principle everything should be aligned on that, IMO.

Some apps will complain if they spot a `.netrc` syntax they don’t like, as if they get to decide that -- even if the line complained about is not the record the app is looking for. OTOH, it’s useful to know what an app accepts and rejects.

What a mess.
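Yet another data point: Python ships a stdlib `netrc` module, which IIUC tokenises the file on whitespace. A quick check with a throwaway file (the machine name and credentials are made up):

```python
import netrc
import os
import tempfile

# Parse a minimal, unquoted record with Python's stdlib netrc module.
content = "machine example.com login alice password s3cret\n"
with tempfile.NamedTemporaryFile("w", suffix=".netrc", delete=False) as f:
    f.write(content)
    path = f.name

# Passing an explicit path; only the default ~/.netrc gets the
# ownership/permission check.
login, account, password = netrc.netrc(path).authenticators("example.com")
assert (login, password) == ("alice", "s3cret")
os.unlink(path)
```

Whether any given app agrees with this parse once quotes or special characters enter the picture is exactly the mess described above.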
Updating my browser apparently caused extensions to get updated as well. Now uMatrix 1.1.2 is installed. The config box is very small compared to the size available to the browser window area. You have to scroll horizontally to reach the columns on the right, and the name of the 3rd party entity scrolls out of the window. This makes it inconvenient and cumbersome to alter the settings.
I suppose this change was motivated by complaints that the config window was too large on small screens:
https://github.com/gorhill/uMatrix/issues/483 https://github.com/gorhill/uMatrix/issues/683
- broken: Ungoogled Chromium ver. 90.0.4430.212-1.sid1
- works: Ungoogled Chromium ver. 112.0.5615.165-1
If anyone has problems getting Ungoogled Chromium (and likely Google’s Chromium as well) to work on Lemmy, note the versions above. The Lemmy web client is a dysfunctional disaster in the old version, but they fixed whatever the problem was in recent versions.
I installed #neonmodem by simply grabbing the tarball, which expands files directly into the $CWD instead of nesting them in a folder named after the app. Not a big deal but it gave a slight hint that this project might have quality issues.
This command executes just fine:
$ torsocks neonmodem connect --type lemmy --url https://sopuli.xyz
It’s irritating that it does not tell the user where the data is stored, and that’s also undocumented. You have to guess how to use it, and it’s misleading (I think the connect command does not actually make a connection; it apparently just stores the login creds).
Simply running it crashes instantly:

```
$ torsocks neonmodem
panic: Error(s) loading system(s)

goroutine 1 [running]:
github.com/mrusme/neonmodem/cmd.glob..func1(0x1771140?, {0xe973eb?, 0x0?, 0x0?})
	/home/runner/work/neonmodem/neonmodem/cmd/root.go:128 +0x268
github.com/spf13/cobra.(*Command).execute(0x1771140, {0xc00008c1f0, 0x0, 0x0})
	/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:944 +0x847
github.com/spf13/cobra.(*Command).ExecuteC(0x1771140)
	/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
	/home/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
github.com/mrusme/neonmodem/cmd.Execute(0xc0000061a0?)
	/home/runner/work/neonmodem/neonmodem/cmd/root.go:141 +0x3e
main.main()
	/home/runner/work/neonmodem/neonmodem/neonmodem.go:13 +0x25
```
The 112.be website drops all Tor traffic, which in itself is a shit show. No one should be excluded from access to emergency app info.
So this drives pro-privacy folks to visit http://web.archive.org/web/112.be/ but that just gets trapped in an endless loop of redirection.
Workaround: appending “en” breaks the loop. But that only works in this particular case. There are many redirection loops on archive.org and 112.be is just one example.
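To diagnose loops like this without a browser, curl can follow the chain with a hard cap so a loop fails fast instead of spinning forever; a sketch (the cap of 5 is an arbitrary choice, the URL is the one from the post):

```shell
# Follow redirects but give up after 5 hops; report how many
# redirects were taken and the last HTTP status seen.
curl -sIL --max-redirs 5 -o /dev/null \
     -w '%{num_redirects} redirects, final code %{http_code}\n' \
     "http://web.archive.org/web/112.be/"
```

A genuine loop will exhaust the cap and curl exits non-zero, which at least confirms the server-side loop without burning bandwidth indefinitely.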
Why posted here: archive.org has their own bug tracker, but if you create an account on archive.org they will arbitrarily delete the account without notice or reason. I am not going to create a new account every time there is a new archive.org bug to report.
The cross-post mechanism has a limitation whereby you cannot simply enter a precise community to post to. Users are forced to search and select. When searching for “android” on infosec.pub within the cross-post page, the list of possible communities is totally clusterfucked with shitty centralized Cloudflare instances (lemmy world, sh itjust works, lemm ee, programming dev, etc). The list of these junk instances is so long that !android@hilariouschaos.com does not make it onto the list.
The workaround is of course to just create a new post with the same contents. And that is what I will do.
There are multiple bugs here:
① When a list of communities is given in this context, the centralized instances should be listed last (at best) because they are antithetical to fedi philosophy.
② Subscribed communities should be listed first, at the top.
③ Users should always be able to name a community in its full form, e.g.:
- !android@hilariouschaos.com
- hilariouschaos.com/android
④ Users should be able to name just the instance (e.g. hilariouschaos.com) and the search should populate with the subscribed communities therein.
Tedious to use. No way to import a list of URLs to download. Must enter URLs one by one by hand.
No control over when it downloads. Starts immediately when there is an internet connection. This can be costly for people on measured rate internet connections. Stop and Go buttons needed. And it should start in a stopped state.
When entering a new file to the list, the previous file shows a bogus “error” status.
Error messages are printed simply as “Error”. No information.
There is an embedded browser. What for?
When files are already present in the download directory because another app put them there, GigaGet lists those files at “100%”. How does GigaGet know those files are complete when it does not even have a URL for them (and thus no way to check the content-length)?
Navi is an app in f-droid to manage downloads. It’s really tedious to use because there is no way to import a list of URLs. You either have to tap out each URL one at a time, or you have to do a lot of copy-paste from a text file. Then it forces you to choose filename for each download -- it does not default to the name of the source file.
bug 1 --- For a lot of files it gives:
Error: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found.
The /details/ page for the broken download neglects to give the error message, much less what the error means.
bug 2 --- Broken downloads are listed under a tab named “completed”.
bug 3 --- Every failed fetch generates notification clutter that cannot be cleaned up. I have a dozen or so notifications of failed downloads. Tapping the notification results in no action and the notification is never cleared.
bug 4 --- With autostart and auto connect both disabled, Navi takes the liberty of making download attempts as soon as there is an internet connection.
bug 5? --- A web browser is apparently built-in. Does it make sense to embed a web browser inside a download manager?
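Until either of these apps grows a batch mode, a plain shell loop over a URL list does what they can’t; a sketch (urls.txt, one URL per line, is an assumed filename):

```shell
#!/bin/sh
# Workaround sketch: batch-download a list of URLs with curl.
# -O keeps the remote filename (what Navi refuses to default to),
# -L follows redirects, --retry retries transient failures.
while IFS= read -r url; do
    [ -n "$url" ] || continue    # skip blank lines
    curl -LO --retry 3 "$url"
done < urls.txt
```

This also downloads nothing until you actually run it, which is exactly the stopped-by-default behavior both apps lack.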
Images can be fully embedded inline directly in the HTML. Tor Browser displays them unconditionally, regardless of the permissions.default.image setting, which when set to “2” indicates images should not be loaded. An example is demonstrated by the privacy-respecting search service called “dogs”:
If you search for a specific object like “sweet peppers”, embedded images appear in the results. This feature could easily be abused by advertisers. I’m surprised that it’s currently relatively rare.
It’s perhaps impossible to prevent embedded images from being fetched, because the HTML standard does not put the length of the base64 blob ahead of it. Thus the browser has no way to know which position in the file to skip to and continue fetching from.
Nonetheless, the browser does not know /why/ the user disables images. Some people do it because they are on measured rate connections and need to keep their consumption low, like myself, and we are fucked in this case. But some people disable images just to keep garbage off the screen. In that case, the browser can (and should) respect their choice whether the images are embedded or not.
There should really be two config booleans:
- fetch non-local images
- render images that have been obtained

The first controls whether the browser makes requests for images over the WAN. The second would just control whether the images are displayed.
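A minimal sketch of why the fetch toggle is powerless here: with a data: URI the image bytes ride inside the HTML itself, so blocking image *requests* cannot stop the transfer; only a “don’t render” option would help. (File names are made up, and the payload is plain text standing in for real PNG/JPEG bytes.)

```shell
#!/bin/sh
# Build a page with an inline data: URI. No separate image request
# is ever made -- the payload arrives with the HTML document.
printf 'hello' > payload.bin
b64=$(base64 < payload.bin | tr -d '\n')
printf '<img src="data:application/octet-stream;base64,%s">\n' "$b64" > page.html
cat page.html
# → <img src="data:application/octet-stream;base64,aGVsbG8=">
```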
I was trying to work out how I managed to waste so much of my bandwidth allowance in a short time. With a Lemmy profile page loaded, I hit control-r to refresh while looking at the bandwidth meter.
Over 1 meg! wtf. I have images disabled in my browser, so it should only be fetching a small amount of compressed text. For comparison, loading ~25 IRC channels with 200 line buffers is 0.1mb.
So what’s going on? Is Lemmy transferring thumbnails even though images are disabled in the browser config?
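One way to quantify this outside the browser and its bandwidth meter; a sketch (the profile URL is illustrative; swap in the actual Lemmy page being tested):

```shell
#!/bin/sh
# Measure what one page load costs in bytes, without a browser.
# -s silences progress, -L follows redirects, the body is discarded.
curl -sL -o /dev/null \
     -w 'downloaded %{size_download} bytes\n' \
     "https://sopuli.xyz/u/example"
```

If the bare HTML alone clocks in near a megabyte, the browser is not at fault; if it’s small, something the client fetches afterward (thumbnails, scripts) is eating the allowance.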
I simply wanted to submit a bug report. This is so fucked up. The process so far:
① Solved a CAPTCHA just to reach a registration form (I have image loading disabled but the graphical CAPTCHA puzzle displayed anyway (wtf Firefox?)).
② Disposable email address rejected (so Bitbucket can protect themselves from spam but other people cannot? #hypocrisy).
③ Tried a forwarding acct instead of a disposable one (accepted).
④ Another CAPTCHA, this time Google reCAPTCHA. I never solve these because they violate so many digital rights principles and I boycott Google. But I made an exception for this experiment. The puzzle was empty because I disable images (can’t afford the bandwidth). Exceptionally, I enabled images and solved the piece of shit. Could not work out if a furry cylindrical blob sitting on a sofa was a “hat”, but managed to solve enough puzzles.
⑤ Got the green checkmark ✓.
⑥ Clicked “sign up”.
⑦ “We are having trouble verifying reCAPTCHA for this request. Please try again. If the problem persists, try another browser/device or reach out to Atlassian Support.”
Are you fucking kidding me?! Google probably profited from my CAPTCHA work before showing me the door. Should be illegal. Really folks, a backlash of some kind is needed. I have my vision and couldn’t get registered (from Tor). Imagine a blind Tor user.. or even a blind clearnet user going through this shit. I don’t think the first CAPTCHA to reach the form even had an audio option.
Shame on #Bitbucket!
⑧ Attempted to e-mail the code author:
status=bounced (host $authors_own_mx_svr said: 550-host $my_ip is listed at combined.mail.abusix.zone (127.0.0.11); 550 see https://lookup.abusix.com/search?q=$my_ip (in reply to RCPT TO command))
#A11y #enshitification
- web.archive.org Lance R. Vick (@lrvick@mastodon.social)
It's official. After 3 months of back and forth, a major medical provider has elected to drop me as a patient for not having a Google or Apple device. It is unclear if this is legal, but it is very clearly discriminatory and unethical. Any tech journalists or lawyers interested in this? I would...
There used to be no problem archiving a Mastodon thread in the #internetArchive #waybackMachine. Now on recent threads it just shows a blank page:
https://web.archive.org/web/20240318210031/https://mastodon.social/@lrvick/112079059323905912
Or is it my browser? Does that page have content for others?
If you’re logged out and reading a thread, you should be able to log in in another tab and then do a forced refresh (control-shift-R); and it should show the thread with logged-in controls. For some reason the cookie isn’t being passed or (perhaps more likely) the cookie is insufficient because Lemmy is using some mechanism other than cookies.
Scenario 2:
You’re logged in and reading threads in multiple tabs. Then one tab becomes spontaneously logged out after you take some action. Sometimes a hard-refresh (control-shift-R) recovers, sometimes not. It’s unpredictable. But note that the logged-in state is preserved in other tabs. So if several hard refreshes fail, I have to close the tab and use another tab to navigate to where I was in the other tab. And it seems navigation is important.. if I just copy the URL for where I was (same as opening a new tab), it’s more likely to fail.
In any case, there are no absolutes.. the behavior is chaotic and could be related to this security bug.
(all Cloudflared websites) Missing content-length header → harms the poor and environmentalists
People on a tight budget are limited to capped internet connections. So we disable images in our browser settings. Some environmentalists do the same to avoid energy waste. If we need to download a web-served file (image, PDF, or anything potentially large), we run this command:
$ curl -LI "$URL"
The HTTP headers should contain a content-length field. This enables us to know before we fetch something whether we can afford it (like seeing a price tag before buying something).
#Cloudflare has taken over at least ~20% of the web. It fucks us over in terms of digital rights in so many ways. And apparently it also makes the web less usable to poor people in two ways:
- Cloudflare withholds content length information
- Cloudflare blocks people behind CGNAT, which is commonly used in impoverished communities due to the limited number of IPv4 addresses.
The problem:
- !cashless_society@nano.garden is created
- node A users subscribe and post
- node B users subscribe and post
- nano.garden disappears forever
- users on nodes A and B have no idea; they carry on posting to their local mirror of cashless_society
- node C never federated with nano.garden before it was unplugged
So there are actually 3 bugs AFAICT:
- ① Transparency: users on nodes A and B get no indication that they are interacting with a ghost community.
- ② Broken comms: posts to the ghost community from node A are never sync’d, thus never seen by node B users; and vice-versa.
- ③ Discoverability: users on node C have no way to join the conversation because the search function only finds non-ghost communities.
The fix for ① is probably as simple as adding a field to the sidebar showing the timestamp of the last sync operation.
w.r.t. ②, presumably A and B do not connect directly because they are each federated to the ghost node. So there is no way for node A posts to reach node B. Correct? Lemmy should be designed to accommodate a node disappearing at any time with no disruption to other nodes. Nodes A and B should directly synchronize.
w.r.t. ③ node C should still be able to join the conversation between A and B w.r.t the ghost community.
There are “announcement” communities where all posts are treated as announcements. This all-or-nothing blunt choice at the time of community creation could be more flexible. In principle, a community founder should have four choices:
- all posts are announcements (only mods can post)
- all posts are discussions
- (new) all posts are announcements (anyone can post)
- (new) authors choose at posting time whether their post is an announcement or a discussion
This would be particularly useful if an author cross-posts to multiple communities but prefers not to split the discussion; in that case the carbon copies could use the announcement option (or vice versa).
There is a side-effect here with pros and cons. This capability could be used for good by forcing a conversation to happen outside of a walled garden. E.g. you post to a small free-world instance then crosspost an “announcement” in a walled garden like sh.itjust.works, then the whole discussion takes place in the more socially responsible venue with open access. OTOH, the same capability in reverse could also be used detrimentally, e.g. by forcing a discussion onto the big centralized platforms.
update --- Perhaps the community creator should get a more granular specification. E.g. a community creator might want:
- Original posts → author’s choice
- Cross-posts coming from [sh.itjust.works, lemmy.world] → discussions only
- Cross-posts coming from [*] → author’s choice