Is all my privacy effectively gone if I visit China or South Korea?
  • And they are going to know the difference between my flagship loaded with creds and the base model I just bought and loaded with just the travel apps? I didn't say don't bring anything, I said don't bring anything with real access.

    I don't care about privacy more than I care about having my gear confiscated and it taking months to return should they decide to make me their day.

    7
    Is all my privacy effectively gone if I visit China or South Korea?
  • if I were travelling abroad I would not bring anything that has universal access to my accounts or data (my primary phone, for example). I would most likely get a new device loaded only with what I needed to get things done, using a new account I transferred money into. For the most part I would not expect any issues, but the laws at most borders, even US international borders, generally say agents can copy all of your data and force you to log into any devices you have with you. So the two countries mentioned are only part of the story when it comes to privacy when travelling internationally.

    11
    is the lemmyverse dying already?
  • plenty of content on my screen, though I admit I'm posting less everywhere right now while hunting for a job.

    35
    Teaching with AI
  • You still use data stores for that, either token databases or classic ones. Using the model itself for data mining is mostly a novelty at this point due to the error rate (a rough sketch of the data-store approach is below).

    1
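A minimal sketch of the data-store approach mentioned in the comment above: answers are looked up in a small embedding index and returned from the stored source text rather than generated by a model. The `embed` function and the documents are hypothetical placeholders, not any particular product's API.

```python
import numpy as np

# Hypothetical embedding function: stands in for whatever embedding model or
# service you already use. A hash-seeded random vector is NOT semantically
# meaningful; it only keeps this sketch self-contained and runnable.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

# Toy "classic" store of course material (hypothetical content).
documents = [
    "Syllabus: grading is 40% homework, 60% exams.",
    "Office hours are Tuesdays at 3pm in room 204.",
]
index = np.stack([embed(d) for d in documents])  # one unit vector per document

def lookup(question: str, k: int = 1) -> list[str]:
    q = embed(question)
    scores = index @ q                    # cosine similarity (all vectors are unit length)
    best = np.argsort(scores)[::-1][:k]   # top-k most similar documents
    return [documents[i] for i in best]   # return the stored text itself, not a model's guess

print(lookup("When are office hours?"))
```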
  • openai.com Teaching with AI

    We’re releasing a guide for teachers using ChatGPT in their classroom—including suggested prompts, an explanation of how ChatGPT works and its limitations, the efficacy of AI detectors, and bias.

    2
    github.com GitHub - BerriAI/litellm: lightweight package to simplify LLM API calls - Azure, OpenAI, Cohere, Anthropic, Replicate. Manages input/output translation


    0
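A short usage sketch for the litellm package linked above. It assumes the library's OpenAI-style `completion()` entry point and that the relevant provider API keys are set in the environment; the model names are placeholders, so check the project's docs for the current list.

```python
from litellm import completion

messages = [{"role": "user", "content": "Summarize ActivityPub in one sentence."}]

# The call shape stays the same across providers; only the model string changes.
# Assumes OPENAI_API_KEY / ANTHROPIC_API_KEY etc. are set in the environment.
openai_resp = completion(model="gpt-3.5-turbo", messages=messages)
claude_resp = completion(model="claude-instant-1", messages=messages)

# Responses follow the OpenAI chat-completion shape.
print(openai_resp["choices"][0]["message"]["content"])
print(claude_resp["choices"][0]["message"]["content"])
```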
    Skibidi Romper
  • it's spreading quickly, like butt bacteria.

    12
  • spacy.io spaCy · Industrial-strength Natural Language Processing in Python

    spaCy is a free open-source library for Natural Language Processing in Python. It features NER, POS tagging, dependency parsing, word vectors and more.

    0
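A minimal sketch of the NER, POS tagging, and dependency parsing features mentioned above; it assumes the small English pipeline has been installed with `python -m spacy download en_core_web_sm`.

```python
import spacy

# Load the small English pipeline (install it first with:
#   python -m spacy download en_core_web_sm)
nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# Named entities
for ent in doc.ents:
    print(ent.text, ent.label_)

# Part-of-speech tags and dependency relations
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)
```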
    Bleeding edge tech
  • the computer wrote the second one by accident when someone asked it to bake a cake.

    12
    Will Lemmy ever add federated delete?
  • updates are federated, so really it's just a matter of the client changing behavior a bit (a rough sketch of the relevant activity is below).

    20
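For illustration, a minimal sketch of what a federated delete looks like at the protocol level: an ActivityPub `Delete` activity whose object is replaced by a `Tombstone`. The actor and object IDs are invented, and Lemmy's actual payload may differ in detail.

```python
# Shape of an ActivityPub Delete activity (ActivityStreams vocabulary).
# The IDs are invented for illustration; Lemmy's real payload may differ.
delete_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Delete",
    "actor": "https://example.social/u/alice",       # hypothetical user
    "object": {
        "type": "Tombstone",                          # what remains of the deleted post
        "id": "https://example.social/post/12345",    # hypothetical post ID
    },
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
}

# Instances receiving this would hide or purge the post; the point in the
# comment above is that clients mostly just need to render these tombstones
# and federated updates consistently.
```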
  • github.com GitHub - danswer-ai/danswer: Ask Questions in natural language and get Answers backed by private sources. Connects to tools like Slack, GitHub, Confluence, etc.


    0
    Agent Protocol
  • bound to happen, though AFAIK, this is the first attempt

    know of others?

    2
  • www.agentprotocol.ai Agent Protocol

    Agent Protocol - The open source communication protocol for AI agents.

    2
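A rough client-side sketch against the Agent Protocol linked above, using plain `requests`. The base URL, the `/ap/v1/agent/tasks` route, and the `task_id` / `is_last` field names reflect one reading of the spec at the time and should be treated as assumptions rather than a definitive reference.

```python
import requests

# Hypothetical local agent implementing the protocol; the port and the
# /ap/v1/agent/tasks route are assumptions, not verified against a live server.
BASE_URL = "http://localhost:8000/ap/v1"

# Create a task from a single natural-language input.
task = requests.post(
    f"{BASE_URL}/agent/tasks",
    json={"input": "Summarize today's new GitHub issues"},
).json()

# Step the task forward until the agent reports it is finished.
while True:
    step = requests.post(
        f"{BASE_URL}/agent/tasks/{task['task_id']}/steps",
        json={},
    ).json()
    print(step.get("output"))
    if step.get("is_last"):
        break
```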
    Raspberry Pi 4 replacement
  • switching to a Khadas VIM4 myself.

    1
  • cross-posted from: https://lemmy.intai.tech/post/234347

    > https://twitter.com/i/broadcasts/1nAJErpDYgRxL

    https://www.youtube.com/watch?v=6yQEA18C-XI

    1

    cross-posted from: https://lemmy.intai.tech/post/215134

    > Blog Post: https://cloudblogs.microsoft.com/opensource/2023/08/01/introducing-onnx-script-authoring-onnx-with-the-ease-of-python/

    0
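A small sketch of the idea behind the ONNX Script post linked above: ONNX functions authored as plain Python and compiled to standard ONNX. The `@script` decorator, `FLOAT` annotation, opset import, and `to_model_proto()` call are based on the announcement and may not match the current API exactly.

```python
# Sketch of authoring a tiny ONNX function with onnxscript; the names of the
# decorator, opset module, and export call are assumptions from the blog post.
from onnxscript import script, FLOAT
from onnxscript import opset15 as op

@script()
def matmul_add_relu(X: FLOAT[...], W: FLOAT[...], B: FLOAT[...]) -> FLOAT[...]:
    # Plain Python; op.* calls and arithmetic operators become ONNX nodes.
    return op.Relu(op.MatMul(X, W) + B)

model = matmul_add_relu.to_model_proto()  # a standard ONNX ModelProto
print(type(model))
```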
    HashiCorp changes license from Mozilla Public License 2.0 to Business Source License 1.1 on their products
  • was about to include it in my stack, guess I won't be now.

    10
    raspberry pi 4 cooling
  • should be fine. If you don't like how warm it gets, a set of small heatsinks (the kind sold for amplifiers) will run you a few bucks and take all of 10 seconds to install.

    2
    raspberry pi 4 cooling
  • I like both the Argon and the simple heatsink setups; either works great. I did end up adding an additional heatsink to the Argon, since the flat case does not provide great heat exchange in an enclosed space.

    you can do passive cooling as well; it all depends on how hot the location gets.

    5
  • github.com GitHub - geekan/MetaGPT: 🌟 The Multi-Agent Framework: Given one line Requirement, return PRD, Design, Tasks, Repo


    0
    Anyone know what this is?
  • old floppy disks of different sizes. the bottom looks like 5 1/4", and the ones on top with the metal centers are all 3 1/2". Both standards needed sleeves to be read. Many of these are likely trash now, but that wouldn't stop me from trying to load them.

    5
  • NSFW
    Whats up with my little pony?
  • why do this to yourself?

    7
    *LLMs and AI art stepping over the corpse of NFTs*
  • this one changes entirely depending on whether you know what the image is from or not.

    4
    What's the fundamental difference between Fediverse/ActivityPub/Lemmy and the Usenet?
  • as time goes on I think technologies that mark human-made content will become more practical.

    the only reason that read as "off" is that the poster did not put any time into it, prob just a simple question in a default chat somewhere. Well-made systems tuned to their use are going to be surprisingly effective.

    0
    What's the fundamental difference between Fediverse/ActivityPub/Lemmy and the Usenet?
  • I think it may have been. Might as well, even if you put in time writing, it's likely to be assumed AI anyway, esp. as it improves.

    3
    We’re rolling out a bunch of small updates to improve the ChatGPT experience. Shipping over the next week
  • pretty sure Bing picked it up from one of the many apps that were doing it first, like Poe.

    strange criticism

    2
    https://twitter.com/OpenAI/status/1687159114047291392

    cross-posted from: https://lemmy.intai.tech/post/171416

    > We’re rolling out a bunch of small updates to improve the ChatGPT experience. Shipping over the next week:
    >
    > 1. Prompt examples: A blank page can be intimidating. At the beginning of a new chat, you’ll now see examples to help you get started.
    > 2. Suggested replies: Go deeper with a click. ChatGPT now suggests relevant ways to continue your conversation.
    > 3. GPT-4 by default, finally: When starting a new chat as a Plus user, ChatGPT will remember your previously selected model — no more defaulting back to GPT-3.5.
    > 4. Upload multiple files: You can now ask ChatGPT to analyze data and generate insights across multiple files. This is available with the Code Interpreter beta for all Plus users.
    > 5. Stay logged in: You’ll no longer be logged out every 2 weeks! When you do need to log in, you’ll be greeted with a much more welcoming page.
    > 6. Keyboard shortcuts: Work faster with shortcuts, like ⌘ (Ctrl) + Shift + ; to copy last code block. Try ⌘ (Ctrl) + / to see the complete list.

    12

    cross-posted from: https://lemmy.intai.tech/post/149044

    > tweet > > github

    0

    cross-posted from: https://lemmy.intai.tech/post/141942

    > https://github.com/nomic-ai/gpt4all

    https://huggingface.co/blog/starcoder

    3
    blog.google A new partnership to promote responsible AI

    Today, Google, Microsoft, OpenAI and Anthropic published a joint announcement establishing the Frontier Model Forum.

    5

    https://duckai.org/

    cross-posted from: https://lemmy.intai.tech/post/134262

    > DuckAI is an open and scalable academic lab and open-source community working on various Machine Learning projects. Our team consists of researchers from the Georgia Institute of Technology and beyond, driven by our passion for investigating large language models and multimodal systems.
    >
    > Our present endeavors concentrate on the development and analysis of a variety of dataset projects, with the aim of comprehending the depth and performance of these models across diverse domains.
    >
    > Our objective is to welcome people with a variety of backgrounds to cutting-edge ML projects and rapidly scale up our community to make an impact on the ML landscape.
    >
    > We are particularly devoted to open-sourcing datasets that can turn into an important infrastructure for the community and exploring various ways to improve the design of foundation models.

    0

    cross-posted from: https://lemmy.intai.tech/post/133548

    > https://arxiv.org/pdf/1706.03762.pdf
    >
    > Attention Is All You Need
    >
    > By Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin
    >
    > Word count: 4221
    >
    > Estimated read time: 17 minutes
    >
    > Links:
    >
    > - Paper: https://arxiv.org/abs/1706.03762
    > - Code: https://github.com/tensorflow/tensor2tensor
    >
    > Summary:
    > This paper proposes a new neural network architecture called the Transformer that is based solely on attention mechanisms, without using sequence-aligned RNNs or convolutions. The Transformer achieves state-of-the-art results in machine translation while being more parallelizable and requiring significantly less time to train. Key contributions:
    >
    > Proposes multi-head self-attention as a replacement for recurrence and convolutions in encoder-decoder architectures. Self-attention connects all positions with a constant number of sequentially executed operations, whereas recurrent layers require O(n) sequential operations.
    >
    > Introduces scaled dot-product attention, which performs better than additive attention for large values of the attention dimension. Applies attention scaling to improve training.
    >
    > Employs positional encodings instead of recurrence to enable the model to make use of sequence order. Shows that learned positional embeddings can replace sinusoids with negligible loss in quality.
    >
    > Achieves state-of-the-art BLEU scores on WMT 2014 English-to-German and English-to-French translation at a fraction of the training cost of previous models. Outperforms all previously published models on English constituency parsing with limited training data.
    >
    > The Transformer's reliance on attention and positional encodings rather than recurrence makes it very promising for parallelization and scaling to longer sequences. The results demonstrate the potential of attention-based models to supplant RNNs and CNNs in sequence transduction tasks.
    >
    > Evaluation:
    > The Transformer architecture presents several advantages for large language models and generative adversarial networks:
    >
    > The Transformer is highly parallelizable since it does away with sequence-aligned RNNs. This makes it very suitable for scaling up with more parameters and data.
    >
    > The multi-head self-attention provides a way to jointly attend to information from different representation subspaces at different positions, allowing modeling of dependencies regardless of distance. This is useful for long-range dependencies in large contexts.
    >
    > Positional encodings allow the model to make use of sequence order without recurrence. This can enable generating coherent, ordered outputs in GANs and large LMs.
    >
    > The Transformer achieves excellent results with limited training data, suggesting its representations transfer well. This is promising for few-shot learning and fine-tuning large LMs.
    >
    > The paper provides useful analysis of the roles different attention heads learn, which can inform work on interpretable attention-based representations.
    >
    > Overall, the Transformer architecture seems very promising as a foundation for large-scale language modeling and GAN training. The representations it learns appear powerful yet transparent. The results on parsing suggest it can capture linguistic phenomena well. The parallelizability enables scaling. Much follow-on work has already adapted and refined the Transformer, making it very relevant today.

    2
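A minimal numpy sketch of the scaled dot-product attention described in the summary above, i.e. Attention(Q, K, V) = softmax(QKᵀ / √d_k) V; batching, masking, and the multi-head projections are omitted, and this is an illustration rather than the paper's reference code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over key positions
    return weights @ V                              # weighted sum of value vectors

# Toy example: 3 query positions attending over 4 key/value positions, d_k = 8.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```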