
As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.

What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?

57
Reddit is making sitewide protests basically impossible
  • Lemmy was better before last year's Reddit exodus; that's when people started insulting each other as tankies and fascists. Before that, it was much more peaceful.

    -31
  • Most of you will say that the successor to eMule is BitTorrent, as it is the most widely used P2P network today, but there are some things that BitTorrent lacks and eMule provides. The most notable for me are the following:

    • Built-in network-wide search
    • Easy sharing
    • Unique links

    Maybe you don't consider these features important, but the fact is that with the approach BitTorrent takes, we are highly dependent on central points that make the network vulnerable. With BitTorrent we depend on trackers and link-listing websites to share content. A torrent client is useless on its own if we don't have a link-listing site to get torrents or magnet links from. On the other hand, with the built-in search eMule provides, one can start downloading without needing a website to take links from.

    Easy sharing is also very important, because it provides more peers to download files from. This is especially important for rare files, because with torrents the seeds for a file can become scattered across different torrents: there can be five different torrents seeding the same data, yet they don't share peers. One torrent with multiple seeds is clearly preferable to multiple torrents with one seed each.

    When there is one single way to identify a file on the network (like with ed2k hash links), even the less tech-savvy users are able to contribute. Sharing on eMule is as simple as dropping the file you want to share into your incoming folder (even if that is not the optimal way to do it). With BitTorrent, to seed files you downloaded previously, you must download an existing torrent file or magnet link, stop the download, replace the half-downloaded files with the ones you already had, make sure you use the same directory structure and filenames defined in the torrent, then recheck the torrent and start it. Tell a noob user to do that to help you download some rare file…
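
    For the technically inclined, this dance can at least be scripted. Here is a minimal sketch of the recheck workflow using the libtorrent Python bindings (the torrent filename and save path are made up for illustration):

    ```python
    import libtorrent as lt

    # Load the existing torrent and point it at the data you already have.
    ses = lt.session()
    info = lt.torrent_info("rare-file.torrent")  # hypothetical existing torrent
    handle = ses.add_torrent({"ti": info, "save_path": "/data/already-downloaded"})

    # Verify the data on disk against the torrent's piece hashes instead of
    # re-downloading it; pieces that match are then seeded to other peers.
    handle.force_recheck()
    ```

    Even scripted, this only works if your directory structure and filenames match what the torrent defines, which is exactly the problem described next.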

    And now imagine that you have an entire drive full of material to share, but the directory structure and filenames differ from the ones used in the torrents (because you like to keep your hard drive organized). That makes it impossible to share those files on the torrent network without creating brand-new torrents, so you can't contribute as one more seed on already existing torrents.

    Why not use eMule then? Because it's slow and inefficient, and there is practically only one client, which is no longer actively developed. Searching for alternatives, the most similar program that has multiple clients and is cross-platform is Direct Connect, but it is not decentralized, and different servers don't communicate with each other, so peers for the same file are not shared globally and are instead scattered across different hubs.

    Is there really no other program that works the way eMule does? Is there no true spiritual successor to eMule nowadays?

    18
    There should be a way to give directly to the developers
  • I also had a similar thought: piracy software that enables users to download any content and encourages them to voluntarily donate to creators using cryptocurrency. The program would automatically distribute payments to verified creators who have provided a unique cryptocurrency address, ensuring that creators receive their rightful compensation for their work.

    At present, I only pay for content created by individual creators who receive funding through platforms like OpenCollective, Patreon, or similar services. I don't even donate to Lemmy, because there are multiple developers but only one person in charge of receiving and distributing donations, and I don't want to spend time and effort making sure the funds are distributed among everyone involved. I'd prefer open-source software that simplifies the process and ensures that everyone receives their fair share.
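
    As a toy illustration of that "automatic distribution" idea, a payout could be split proportionally across verified addresses. A minimal Python sketch (addresses and weights are hypothetical; a real implementation would hand the results to a cryptocurrency library or payment API):

    ```python
    def split_donation(total: float, creators: dict[str, float]) -> dict[str, float]:
        """Split a donation across creator addresses in proportion to an
        agreed weight (hours worked, commits, or any negotiated share)."""
        total_weight = sum(creators.values())
        return {addr: total * w / total_weight for addr, w in creators.items()}

    # Hypothetical verified creators: address -> agreed share.
    creators = {"addr_dev1": 3.0, "addr_dev2": 2.0, "addr_designer": 1.0}
    print(split_donation(60.0, creators))
    # {'addr_dev1': 30.0, 'addr_dev2': 20.0, 'addr_designer': 10.0}
    ```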

    0
  • I'm excited to see the new meme browsing interface in PieFed. I had expected PieFed to be yet another Reddit clone on a different software stack, without any innovation. I believe there's an opportunity to take things a step further by blending the best elements of platforms like Reddit with image boards like Safebooru.

    The problem I have with Reddit is the time-consuming process of posting content: I should be able to post something in a few seconds, but finding the right community often takes longer than the posting itself, and you have to decide whether to post in every relevant community or only the one that fits best. On Lemmy, the existence of multiple similar communities across different instances makes this even worse.

    I like how image boards like Safebooru offer a streamlined posting experience, allowing users to share content within seconds. The real strength of these platforms lies in their curation and filtering capabilities. Users can post and curate content, and others can contribute to the curation process by adding or modifying tags. Leaderboards showcasing top taggers, posters, and commenters promote active participation and foster a sense of community. Thanks to the comprehensive tagging system, finding previously viewed content becomes a breeze, unlike the challenges often faced on Reddit and Lemmy. Users can easily filter out unwanted content by hiding specific tags, something that would require blocking entire communities on platforms like Lemmy.
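
    The tag-hiding mechanic is simple enough to sketch. Assuming each post carries a set of tags (booru engines store these server-side; the data here is invented), filtering is a set-intersection check:

    ```python
    posts = [
        {"id": 1, "tags": {"cat", "meme"}},
        {"id": 2, "tags": {"spider", "macro_photo"}},
        {"id": 3, "tags": {"dog", "meme"}},
    ]

    hidden_tags = {"spider"}  # the user's personal blocklist

    # Keep only posts that share no tags with the blocklist.
    visible = [p for p in posts if not (p["tags"] & hidden_tags)]
    print([p["id"] for p in visible])  # [1, 3]
    ```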

    However, image boards have limitations of their own: they are primarily suited to image-based content and often lack robust text discussion capabilities or threaded comments, which are essential for fostering meaningful conversations.

    Ideally, I envision a platform that combines the best of both worlds: the streamlined posting experience of image boards with the robust text discussion capabilities of platforms like Reddit and Lemmy.

    I would be thrilled to contribute to a platform that considered features along these lines.

    I would also like to see more community-driven development: asking users for feedback periodically in a post, and publicly stating which features the devs will be working on. Code repositories' issue trackers have some limitations: a threaded, tree-like comment system is better for discussions, and upvotes/downvotes help surface the best ideas. I propose using a Lemmy community as the issue tracker instead.

    2

    GenAI Banner Chaos on Piracy Community

    https://lemmy.dbzer0.com/post/16895485

    > Things got heated on the piracy community at lemmy.dbzer0.com when the admin, db0, announced plans to use a GenerativeAI tool to rotate the community's banner daily with random images.
    >
    > While some praised the creative idea, others strongly objected, arguing that AI-generated art lacks soul and meaning. A heated debate ensued over the artistic merits of AI art versus human-created art.
    >
    > One user threatened to unsubscribe from the entire instance over the "wasteful BS" of randomly changing the banner every day. The admin defended the experiment as a fun way to inject randomness and chaos.
    >
    > Caught in the crossfire were arguments about corporate ties to AI image generators, electricity waste, and whether the banner switch-up even belonged on a piracy community in the first place.
    >
    > In the end, the admin stubbornly insisted on moving forward with the AI banner rotation, leaving unhappy users to either embrace the chaotic visuals or jump ship. Such is the drama and controversy that can emerge from a seemingly innocuous banner change!

    — Claude, Anthropic AI

    2
    *Permanently Deleted*
  • I don't understand platforms like Mastodon that mimic Twitter without incorporating the features that made it popular. If I were looking for a newest-first sorting algorithm, I would use a chat app.

    0
    *Permanently Deleted*
  • Well, that could only be implemented if the API offered it; otherwise, just use what is available right now: votes and the number of comments. I find it more invasive that other users can see the post history on my profile than that admins can see how much time I spend reading each post. Revealing my feed feels akin to exposing my browsing history.

    -1
    *Permanently Deleted*
  • I thought the ‘hot’ ranking was a mixture of votes and comment engagement?

    > Hot: Like active, but uses time when the post was published

    https://join-lemmy.org/docs/users/03-votes-and-ranking.html

    > I do feel like there needs to be some further tweaking; Controversial should have a time falloff so it shows recent controversy instead of something six months old, for example.

    Yeah, I believe the "Most Comments" sort should have a time limit too. There is an open issue about it: Controversial post sort should have time limit
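
    For reference, the formula in the linked docs boils down to votes dampened by a logarithm and decayed by post age, roughly like this sketch (the gravity constant is the documented default; Lemmy's production code may differ):

    ```python
    from math import log

    def hot_rank(score: int, hours_old: float, gravity: float = 1.8) -> float:
        """Approximation of Lemmy's documented hot rank: log-dampened votes
        divided by an age-based decay term."""
        return log(max(1, 3 + score)) / (hours_old + 2) ** gravity

    # A fresh post with a modest score outranks an old, higher-scored one.
    print(hot_rank(score=20, hours_old=1))    # ~0.43
    print(hot_rank(score=200, hours_old=48))  # ~0.005
    ```

    Controversial and Most Comments have no such decay term, which would explain why stale posts linger there.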

    2
    Looking for a good left alternative for Lemmygrad
  • Human bias is a pervasive element in many online communities, and finding a platform entirely free of it is akin to searching for the holy grail. Maybe look into self-hosting an instance and punishing moderators who don't follow their own rules.

    7
    *Permanently Deleted*
  • > This is not possible because sorting is done in the database, so adding a new sort option requires a database migration with new indexes, columns and updated queries. Not something that can be done with a simple plugin.

    @nutomic@lemmy.ml in https://github.com/LemmyNet/lemmy/issues/3936#issuecomment-1738847763

    An alternative approach could involve an API endpoint that provides metadata for recent posts, letting users implement custom sorting logic client-side in JavaScript. However, this endpoint is currently accessible only to moderators and administrators.

    > There is already such an API endpoint, which is available for mods and admins.

    @nutomic@lemmy.ml in https://lemmy.ml/comment/9159963
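
    As a sketch of that client-side idea (assuming the public GET /api/v3/post/list endpoint and the response shape of recent Lemmy versions; the mod-only metadata endpoint mentioned above would work the same way), fetch a page of posts and order them however you like locally:

    ```python
    import requests

    # Fetch recent posts from an instance's public HTTP API.
    resp = requests.get(
        "https://lemmy.ml/api/v3/post/list",
        params={"sort": "New", "limit": 50},
        timeout=10,
    )
    posts = resp.json()["posts"]

    # Custom client-side sort: comments per upvote, a ratio Lemmy
    # itself does not offer as a built-in sort.
    def engagement(p: dict) -> float:
        counts = p["counts"]
        return counts["comments"] / max(1, counts["upvotes"])

    for p in sorted(posts, key=engagement, reverse=True)[:10]:
        print(p["post"]["name"])
    ```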

    8
    The playground schematic analogy for designing a fediverse service.
  • Regrettably, complaining tends to be a common pastime for many individuals. I understand your frustration with users who seem entitled or unappreciative of the considerable effort you've dedicated to developing Lemmy. Still, shifting to a mindset that treats complaints as opportunities for improvement can be transformative. Establishing transparent rules or guidelines for how you prioritize issues and feature requests would help manage expectations and foster a more collaborative relationship with your community. While not every complaint is actionable, actively listening to feedback and explaining your prioritization criteria goes a long way toward building trust and goodwill. Open communication and a willingness to consider diverse perspectives lead to a stronger, more user-centric product in the long run.

    The philosophy of Complaint-Driven Development provides a simple, transparent way to prioritize issues based on user feedback:

    1. Get the platform in front of as many users as possible.
    2. Listen openly to all user complaints and feedback. Expect a lot of it.
    3. Identify the top 3 most frequently reported issues/pain points.
    4. Prioritize fixing those top 3 issues.
    5. Repeat the process, continuously improving based on prominent user complaints.

    Following these straightforward rules allows you to address the most pressing concerns voiced by your broad user community, rather than prioritizing the vocal demands of a few individuals. It keeps development efforts focused on solving real, widespread issues in a transparent, user-driven manner.
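
    Step 3 is mechanical enough to script. A toy sketch, assuming complaints have already been bucketed into themes (the themes and counts below are invented):

    ```python
    from collections import Counter

    # One entry per top-level complaint comment, already grouped by theme.
    complaints = [
        "federation lag", "moderation tools", "federation lag",
        "onboarding", "moderation tools", "federation lag", "search",
    ]

    # The three most frequently reported issues become the next cycle's focus.
    for issue, count in Counter(complaints).most_common(3):
        print(f"{count}x {issue}")
    # 3x federation lag
    # 2x moderation tools
    # 1x onboarding
    ```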

    Here's a suggestion that could help you implement this approach: consider periodically making a post like "What are your complaints about Lemmy? Developers may want your feedback." Encourage users to leave one top-level comment per complaint, with replies reserved for ideas or existing GitHub issues that could address it. This will help you identify common complaints and potential solutions from your community.

    Once you have a collection of complaints and suggestions, review them carefully and choose the top 3 most frequently reported issues to focus on for the next development cycle. Clearly communicate to the community which issues you and the team will be prioritizing based on this user feedback, and explain why you've chosen those particular issues. This transparency will help users understand your thought process and feel heard.

    As you work on addressing those prioritized issues, keep the community updated on your progress. When the issues are resolved, make a new release and announce it to the community, acknowledging their feedback that helped shape the improvements.

    Then, repeat the process: Make a new post gathering complaints and suggestions, review them, prioritize the top 3 issues, communicate your priorities, work on addressing them, release the improvements, and start the cycle again.

    By continuously involving the community in this feedback loop, you foster a sense of ownership and leverage the collective wisdom of your user base in a transparent, user-driven manner.

    2
  • I like open-source projects with transparency and a community-driven approach to development. How does Sublinks ensure transparency and community involvement in its development process? Could you shed some light on the guidelines or process by which feature requests are evaluated, approved, rejected, and prioritized for inclusion in the roadmap?

    As someone with a background in Java from college and a newfound interest in Spring Boot, I am eager to contribute to the Sublinks codebase. However, transitioning from small example projects to a large, complex codebase can be intimidating. Could Sublinks offer a mentorship program or pair-programming opportunities to help new contributors navigate the codebase? Having a mentor to guide me through the initial stages would be invaluable for building my confidence and understanding, enabling me to eventually tackle issues independently. Later, I could mentor a new contributor myself; I believe that's a nice way to recruit new contributors.

    2

    I've been pondering the idea of creating a community right here on Discuss Online that mirrors the activity from the GitHub issue trackers across the various Sublinks repositories. My goal is to establish a space where both a bot and community members can share updates on issues, as well as provide feedback and suggestions in a more discussion-friendly format.

    Previously, I set up a similar system for the Lemmy issue tracker at !issue_tracker@lemm.ee, but unfortunately, bot accounts were banned due to excessive activity. I'm seeking approval beforehand to avoid setting it up only to face potential bans later on.

    This community would serve as a real-time mirror of the GitHub issues from repositories like sublinks-api and others within https://github.com/sublinks. It would not only facilitate better visibility for the issues but also allow for a more structured conversation flow, thanks to the nested comments feature. Plus, the ability to sort comments by votes can help us quickly identify the most valuable ideas and feedback.
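
    For concreteness, here is a minimal sketch of such a bot, assuming GitHub's public issues endpoint and Lemmy's POST /api/v3/post with a bearer JWT (the community id and token are placeholders, and auth details vary between Lemmy versions):

    ```python
    import requests

    GITHUB_REPO = "sublinks/sublinks-api"
    LEMMY_INSTANCE = "https://discuss.online"
    LEMMY_JWT = "..."    # bot account token from /api/v3/user/login
    COMMUNITY_ID = 1234  # placeholder id of the mirror community

    # Pull the most recently updated open issues from GitHub's public API.
    issues = requests.get(
        f"https://api.github.com/repos/{GITHUB_REPO}/issues",
        params={"state": "open", "sort": "updated", "per_page": 5},
        timeout=10,
    ).json()

    # Mirror each issue as a Lemmy post linking back to GitHub.
    for issue in issues:
        requests.post(
            f"{LEMMY_INSTANCE}/api/v3/post",
            json={
                "name": f"[{GITHUB_REPO}#{issue['number']}] {issue['title']}",
                "url": issue["html_url"],
                "body": issue.get("body") or "",
                "community_id": COMMUNITY_ID,
            },
            headers={"Authorization": f"Bearer {LEMMY_JWT}"},
            timeout=10,
        )
    ```

    Given the ban for excessive activity mentioned above, rate-limiting and batching the posts would be the first refinement to add.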

    Before moving forward with this initiative, I'd love to hear your thoughts. Do you think this would be a valuable addition to this community? Are there any concerns regarding the potential activity levels from bot postings?

    Looking forward to your feedback and hoping to make our collaboration even more productive and enjoyable!

    3


    The current state of moderation across various online communities, especially on platforms like Reddit, has been a topic of much debate and dissatisfaction. Users have voiced concerns over issues such as moderator rudeness, abuse, bias, and a failure to adhere to their own guidelines. Moreover, many communities suffer from a lack of active moderation, as moderators often disengage due to the overwhelming demands of what essentially amounts to an unpaid, full-time job. This has led to a reliance on automated moderation tools and restrictions on user actions, which can stifle community engagement and growth.

    In light of these challenges, it's time to explore alternative models of community moderation that can distribute responsibilities more equitably among users, reduce moderator burnout, and improve overall community health. One promising approach is the implementation of a trust level system, similar to that used by Discourse. Such a system rewards users for positive contributions and active participation by gradually increasing their privileges and responsibilities within the community. This not only incentivizes constructive behavior but also allows for a more organic and scalable form of moderation.

    Key features of a trust level system include:

    • Sandboxing New Users: Initially limiting the actions new users can take to prevent accidental harm to themselves or the community.
    • Gradual Privilege Escalation: Allowing users to earn more rights over time, such as the ability to post pictures, edit wikis, or moderate discussions, based on their contributions and behavior.
    • Federated Reputation: Considering the integration of federated reputation systems, where users can carry over their trust levels from one community to another, encouraging cross-community engagement and trust.

    Implementing a trust level system could significantly alleviate the current strains on moderators and create a more welcoming and self-sustaining community environment. It encourages users to be more active and responsible members of their communities, knowing that their efforts will be recognized and rewarded. Moreover, it reduces the reliance on a small group of moderators, distributing moderation tasks across a wider base of engaged and trusted users.

    For communities within the Fediverse, adopting a trust level system could mark a significant step forward in how we think about and manage online interactions. It offers a path toward more democratic and self-regulating communities, where moderation is not a burden shouldered by the few but a shared responsibility of the many.

    As we continue to navigate the complexities of online community management, it's clear that innovative approaches like trust level systems could hold the key to creating more inclusive, respectful, and engaging spaces for everyone.
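
    To make the escalation concrete, here is a toy sketch of trust-level assignment in the spirit of Discourse's system (the thresholds and privilege names are invented for illustration):

    ```python
    from dataclasses import dataclass

    # Invented thresholds per level: (minimum days active, minimum posts read).
    LEVELS = [(0, 0), (2, 30), (15, 200), (50, 500)]
    PRIVILEGES = {
        0: ["post text"],  # sandboxed newcomer
        1: ["post text", "post images"],
        2: ["post text", "post images", "edit wiki"],
        3: ["post text", "post images", "edit wiki", "flag for removal"],
    }

    @dataclass
    class User:
        days_active: int
        posts_read: int

    def trust_level(user: User) -> int:
        """Return the highest level whose thresholds the user meets;
        privileges are earned gradually, never granted all at once."""
        level = 0
        for lvl, (days, read) in enumerate(LEVELS):
            if user.days_active >= days and user.posts_read >= read:
                level = lvl
        return level

    newcomer = User(days_active=1, posts_read=5)
    regular = User(days_active=20, posts_read=350)
    print(trust_level(newcomer), PRIVILEGES[trust_level(newcomer)])  # 0 ['post text']
    print(trust_level(regular), PRIVILEGES[trust_level(regular)])    # 2 [...]
    ```

    A federated version would additionally need a way to attest these counters across instances, which is where the "Federated Reputation" idea above comes in.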

    Related

    • Grant users privileges based on activity level
    • Understanding Discourse Trust Levels
    • Federated Reputation

    54

    If Stack Overflow taught us anything, it's that

    > "people will do anything for fake internet points" > > Source: Five years ago, Stack Overflow launched. Then, a miracle occurred.

    Ever noticed how people online will jump through hoops, climb mountains, and even summon the powers of ancient memes just to earn some fake digital points? It's a wild world out there in the realm of social media, where karma reigns supreme and gamification is the name of the game.

    But what if we could harness this insatiable thirst for validation and turn it into something truly magnificent? Imagine a social media platform where an army of monkeys tirelessly tags every post with precision and dedication, all in the pursuit of those elusive internet points. A digital utopia where every meme is neatly categorized, every cat video is meticulously labeled, and every shitpost is lovingly sorted into its own little corner of the internet.

    Reddit tried this strategy to increase their content quantity, but alas, the monkeys got a little too excited and flooded the place with reposts and low-effort bananas. Stack Overflow, on the other hand, employed their chimp overlords for moderation and quality control, but the little guys got a bit too overzealous and started scaring away all the newbies with their stern glares and downvote-happy paws.

    But fear not, my friends! For we shall learn from the mistakes of our primate predecessors and strike the perfect balance between order and chaos, between curation and creativity. With a leaderboard showcasing the top users per day, week, month, and year, the competition would be fierce, but not too fierce. Who wouldn't want to be crowned the Tagging Champion of the Month or the Sultan of Sorting? The drive for recognition combined with the power of gamification could revolutionize content curation as we know it, without sacrificing the essence of what makes social media so delightfully weird and wonderful.

    And the benefits? Oh, they're endless! Imagine a social media landscape where every piece of content is perfectly tagged, letting users browse without fear of inadvertently stumbling upon material that touches phobias, past trauma, or other sensitive topics. It's like a digital safe haven where you can frolic through memes and cat videos without a care in the world, all while basking in the glory of a well-organized and properly tagged online paradise.

    So next time you see someone going to great lengths for those fake internet points, just remember - they might just be part of the Great Monkey Tagging Army, working tirelessly to make your online experience safer, more enjoyable, and infinitely more entertaining. Embrace the madness, my friends, for in the chaos lies true innovation! But not too much chaos, mind you – just the right amount to keep things interesting.

    Related

    17