
We're releasing a guide for teachers using ChatGPT in their classroom, including suggested prompts, an explanation of how ChatGPT works and its limitations, the efficacy of AI detectors, and bias.

And are they going to know the difference between my flagship loaded with creds and the base model I just bought and loaded with only the travel apps? I didn't say don't bring anything, I said don't bring anything with real access.
I don't care about privacy more than I care about having my gear confiscated and it taking months to get back should they decide to make me their day.
If I were travelling abroad I would not bring anything that has universal access to my accounts or data (my primary phone, for example). I would most likely get a new device loaded only with what I needed to get things done, using a new account I transferred money into. For the most part I would not expect any issues, but the laws at most borders, even US international borders, generally say they can clean you out of all your data and force you to log into any devices you have with you. So the two countries mentioned are only part of the story when it comes to privacy when travelling internationally.
Applied Machine Learning (Cornell Tech CS 5787, Fall 2020)
DeepMind x UCL | Reinforcement Learning Course 2018
is the lemmyverse dying already?
plenty of content on my screen. I do admit I'm posting less everywhere right now while hunting for a job.
You still use data stores for that, either token databases or classic ones. Using the model itself for data mining is mostly a novelty at this point due to the error rate.
Introducing ChatGPT Enterprise: enterprise-grade security, unlimited high-speed GPT-4 access, extended context windows, and much more.
Get enterprise-grade security & privacy and the most powerful version of ChatGPT yet.
GitHub - BerriAI/litellm: lightweight package to simplify LLM API calls - Azure, OpenAI, Cohere, Anthropic, Replicate. Manages input/output translation.
it's spreading quickly, like butt bacteria.
Dr Stephen Wolfram says THIS about ChatGPT, Natural Language and Physics
spaCy is a free open-source library for Natural Language Processing in Python. It features NER, POS tagging, dependency parsing, word vectors and more.
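The pipeline the blurb describes is a couple of lines to try. A minimal sketch of spaCy's API: `spacy.blank("en")` builds a bare English tokenizer only; the NER, POS tagging, and parsing features mentioned above would come from a pretrained pipeline such as `en_core_web_sm`, which has to be downloaded separately.

```python
import spacy

# Bare tokenizer; swap in spacy.load("en_core_web_sm") for a full
# pipeline with NER, POS tags, and dependency parses.
nlp = spacy.blank("en")
doc = nlp("spaCy tokenizes text into Doc objects.")
tokens = [t.text for t in doc]
print(tokens)
```

With a loaded model, the same `doc` object would also expose `doc.ents` and per-token `.pos_` attributes.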
the computer wrote the 2nd one by accident when someone asked it to bake a cake.
GitHub - facebookresearch/audiocraft: Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable...
updates are federated, so really it's just a matter of the client changing behavior a bit.
GitHub - danswer-ai/danswer: Ask questions in natural language and get answers backed by private sources. Connects to tools like Slack, GitHub, Confluence, etc.
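The core loop behind tools like this is retrieve-then-answer: score indexed documents against the question, then hand the top hits to an LLM as context. A toy stdlib-only sketch of just the retrieval step, with made-up document names and naive keyword-overlap scoring where a real system would use embeddings:

```python
import re

def retrieve(question: str, docs: dict[str, str], k: int = 2) -> list[str]:
    # Rank documents by how many question words they share.
    q_terms = set(re.findall(r"\w+", question.lower()))
    return sorted(
        docs,
        key=lambda name: len(q_terms & set(re.findall(r"\w+", docs[name].lower()))),
        reverse=True,
    )[:k]

docs = {
    "slack": "deploys are announced in the ops channel every friday",
    "confluence": "the onboarding guide covers laptop setup and vpn access",
    "github": "the deploy script lives in infra/deploy.sh",
}
top = retrieve("how do we run a deploy?", docs)
print(top)
```

The retrieved snippets would then be pasted into the LLM prompt as grounding context.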
bound to happen, though AFAIK, this is the first attempt
know of others?
Agent Protocol - The open source communication protocol for AI agents.
switching to a khadas vim4 myself.
George Hotz and Eliezer Yudkowsky AI Safety Debate - 8/15/23 5pm ET
Something like Ansible+Proxmox will do what you want. https://vectops.com/2020/01/provision-proxmox-vms-with-ansible-quick-and-easy/
Introducing ONNX Script: Authoring ONNX with the ease of Python - Microsoft Open Source Blog
was about to include it in my stack, guess I won't be now.
Should be fine. If you don't like how warm it gets, a set of small heatsinks for amplifiers will run you a few bucks and takes all of 10 seconds to install.
I like both the Argon and the simple heatsink setups; either works great. I did end up adding an additional heatsink to the Argon, as the flat case does not provide great heat exchange in an enclosed space.
You can do passive cooling as well, it just all depends on how hot the location gets.
GitHub - geekan/MetaGPT: The Multi-Agent Framework: Given one line Requirement, return PRD, Design, Tasks, Repo.
Old floppy disks of different sizes. The bottom one looks like 5 1/4"; the ones on top with the metal centers are all 3 1/2". Both standards needed sleeves to be read. Many of these are likely trash now, but that wouldn't stop me from trying to load them.
What's up with My Little Pony?
why do this to yourself?
this one changes entirely depending on if you know what the image is from or not.
As time goes on I think technologies that mark human-made content will become more practical.
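Purely as an illustration of what "marking" content could mean mechanically: a publisher can tag content with a keyed signature at creation time, and anyone holding the key can later verify it hasn't been altered. Real provenance schemes (e.g. C2PA-style signing) use public-key certificates rather than a shared secret; this stdlib sketch with a made-up key just shows the tag-and-verify shape:

```python
import hmac, hashlib

SECRET = b"publisher-signing-key"  # made-up shared secret, illustration only

def sign(content: str) -> str:
    # Produce an HMAC-SHA256 tag over the content bytes.
    return hmac.new(SECRET, content.encode(), hashlib.sha256).hexdigest()

def verify(content: str, tag: str) -> bool:
    # Constant-time comparison against a freshly computed tag.
    return hmac.compare_digest(sign(content), tag)

tag = sign("this paragraph was written by a human")
print(verify("this paragraph was written by a human", tag))  # True
print(verify("tampered text", tag))                          # False
```

Note this proves only who signed the content and that it is unmodified, not that a human actually wrote it; that gap is exactly why provenance marking is hard.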
the only reason that read as "off" is because the poster did not put any time into it, probably just a simple question in a default chat somewhere. Well-made systems tuned to their use are going to be surprisingly effective.
I think it may have been. Might as well; even if you put time into writing, it's likely to be assumed AI anyway, especially as it improves.
Generally Available
pretty sure Bing picked it up from one of the many apps that were doing it first, like POE.
strange criticism
We're rolling out a bunch of small updates to improve the ChatGPT experience. Shipping over the next week
cross-posted from: https://lemmy.intai.tech/post/171416
We're rolling out a bunch of small updates to improve the ChatGPT experience. Shipping over the next week:
- Prompt examples: A blank page can be intimidating. At the beginning of a new chat, you'll now see examples to help you get started.
- Suggested replies: Go deeper with a click. ChatGPT now suggests relevant ways to continue your conversation.
- GPT-4 by default, finally: When starting a new chat as a Plus user, ChatGPT will remember your previously selected model; no more defaulting back to GPT-3.5.
- Upload multiple files: You can now ask ChatGPT to analyze data and generate insights across multiple files. This is available with the Code Interpreter beta for all Plus users.
- Stay logged in: You'll no longer be logged out every 2 weeks! When you do need to log in, you'll be greeted with a much more welcoming page.
- Keyboard shortcuts: Work faster with shortcuts, like ⌘ (Ctrl) + Shift + ; to
claude2 is apparently competent with XML
cross-posted from: https://lemmy.intai.tech/post/149044
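If a model does reliably emit well-formed XML, the practical payoff is that stock parsers can extract fields instead of regex-scraping free text. A sketch with Python's stdlib, assuming a made-up `<answer>`/`<source>` reply format (tag names here are hypothetical, not from any model's spec):

```python
import xml.etree.ElementTree as ET

# Hypothetical model reply wrapped in XML tags.
reply = """
<response>
  <answer>Paris</answer>
  <source>geography notes</source>
</response>
"""

root = ET.fromstring(reply.strip())
answer = root.findtext("answer")
source = root.findtext("source")
print(answer, source)
```

A parse failure here also doubles as a cheap validity check on the model's output before you act on it.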
Today, Google, Microsoft, OpenAI and Anthropic published a joint announcement establishing the Frontier Model Forum.
DuckAI - An open-source ML research community
cross-posted from: https://lemmy.intai.tech/post/134262
DuckAI is an open and scalable academic lab and open-source community working on various Machine Learning projects. Our team consists of researchers from the Georgia Institute of Technology and beyond, driven by our passion for investigating large language models and multimodal systems.
Our present endeavors concentrate on the development and analysis of a variety of dataset projects, with the aim of comprehending the depth and performance of these models across diverse domains.
Our objective is to welcome people with a variety of backgrounds to cutting-edge ML projects and rapidly scale up our community to make an impact on the ML landscape.
We are particularly devoted to open-sourcing datasets that can turn into an important infrastructure for the community and exploring various ways to improve the design of foundation models.
Attention Is All You Need
cross-posted from: https://lemmy.intai.tech/post/133548
https://arxiv.org/pdf/1706.03762.pdf
Attention Is All You Need
By Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin
Word count: 4221
Estimated read time: 17 minutes
Summary: This paper proposes a new neural network architecture called the Transformer that is based solely on attention mechanisms, without using sequence aligned RNNs or convolutions. The Transformer achieves state-of-the-art results in machine translation while being more parallelizable and requiring significantly less time to train. Key contributions:
Proposes multi-head self-attention as a replacement for recurrence and convolutions in encoder-decoder architectures. Self-attention connects all positions with a constant number of sequentially executed operations.
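The core operation is compact enough to sketch directly. This is a from-scratch NumPy rendering of the paper's scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V, for a single head, not the authors' code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al.)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 query positions, d_k = 4
K = rng.normal(size=(5, 4))  # 5 key positions
V = rng.normal(size=(5, 2))  # values with d_v = 2
out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape, weights.shape)
```

Because every query attends to every key in one matrix product, all positions are connected in a constant number of sequential steps, which is the parallelism claim summarized above; multi-head attention just runs several of these with separate learned projections and concatenates the results.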