Slack trains machine-learning models on user messages, files, and other content without explicit permission. The training is opt-out, meaning your private data gets leeched by default.
At first, all the companies were afraid of giving these models access, over trade secrets and security. But then they basically all met at the White House and agreed they'd make way more fucking money stealing the data than they'd ever pay out in restitution or damages to people and small businesses.
Suddenly everybody had a chatbot and generated art ready for commercial sale. They also had to make the shift quickly, before official laws and protections (mostly from the EU) came in.
Now that AI is plateauing a bit, they must hurry to get valued at 10 trillion dollars, get their energy needs subsidized, and have taxpayers invest in the nation's energy requirements on their behalf.
I doubt that most corporations would even consider allowing Slack as a trusted app if they weren't hosting their own instances.
I have to assume this training is done exclusively on instances hosted on Slack's servers. So probably lots of smaller businesses that don't know any better. And this was probably agreed to in the ToS as part of using the free, easy-to-set-up cloud service.
It's funny how the conventional wisdom at the end of the last decade was that Slack was preferred over simpler/free alternatives because of its UX. People hailed it for how simple and intuitive it was to use.
Five or six years later, it has become a bloated piece of crap riddled with bugs. And the UI changes that come unannounced... it should be a criminal offense to change a UI through automated updates.
Anyway, here we are: companies have handed their data to this monster, and we'll see how they react when the data gets misused. Hopefully that will be the beginning of the end for it.
I miss the Slack of several years back, though. It “just worked” on almost any platform, without the BS or “help”.
Wouldn’t like it now, I’m sure, but I haven’t had a chance to use it since I started working for a company that’s “all in” on MS, including foisting AI on us.
I am capable of drafting an email or message, bitches. If I am concerned about tone, etc., I’d prefer to employ an actual human I have a close relationship with to review the same.
I have zero desire to be constantly corrected, and there are certain niche scenarios where very minor errors are actually endearing, and indicate enthusiasm.
“Bob, I saw the posting for your role, can you tell me about your avg day?” is effective because it’s honest, coherent, and just excited enough that you made a minor error that slipped through.
When Bob gets 25 of those emails and they all look the same because AI, it’s much harder to make the connection.
It was the comma splice, wasn't it? Depending on Bob's cohort, he may never notice.
... and if I were receiving notes and questions about a role, an error like "emails" would earn relegation for sure; so be careful which error you leave in.
At this point, you should at least be able to ask it whether you missed something important in the last few years: is there any open conversation waiting for a reply somewhere?
Edit: if they use our data, they should at least give us some useful tools so we can see what personal information of ours is out there ...
It's a safe bet that if you've put something on the internet, it's been scraped by a bot for training by now. I don't like that, for the record; I'm just saying I'm not surprised at this point. Companies are morally bankrupt.
I don't know why everyone is shocked all of a sudden; there have been scraper bots collecting text for many years now, LONG before LLMs came onto the scene.
I agree, but it's one thing if I post to public places like Lemmy or Reddit and it gets scraped.
It's another thing if my private DMs or private channels are being scraped and put into a database that will most likely be outsourced for data prep before training.
Not only that, but the trained model will end up with internal knowledge of things that are sure to give any cyber-security expert anxiety. If users know how to manipulate the model, they could get it to divulge some of that information.
The more they push to train AI on our shitpostings on social networks, the more I'm certain we're fucking doomed if their AI ever reaches consciousness.
We may very well be doomed if AI reaches consciousness, but I'm not quite convinced LLMs are the way to get there. Even if they were, and one was trained solely on social media content, I still wouldn't expect it to adopt the behaviour of your typical social media commenter. The toxic behaviour on social media is, in my view, almost solely driven by human ego and pettiness. It's not obvious to me that an AI would care about things like winning arguments or coming up with snide remarks.

What I see as the most likely outcome is an endlessly patient, quite autistic-like being that's balanced in its views and pretty difficult to argue against. I doubt humans are anywhere near the far end of the intelligence spectrum, and something with information-processing capability orders of magnitude greater than ours would more than likely not get caught up in confirmation bias, partisan thinking, motivated reasoning, being tossed around by emotions, cognitive dissonance, etc. Those are by definition human features.
Sounds like a lot of this is for non-generative AI. It’s for dumb things like that frequently used emoji feature.
Knowing how the legal teams at my tech companies have worked, I'd bet that a lawyer updated the terms language to comply with privacy legislation, but did a shit job and didn't clarify what specifically the ToS covers. They were lazy and drafted something broad so they wouldn't have to actually talk to product or marketing people in their org.
Anyone aware whether they're also taking data from their Slack-for-government offering? I was looking at the GovSlack site and can't tell one way or the other. While they claim to meet most of the big compliance regs, I don't see anything about AI training being included or excluded.
I know stealing trade secrets is a concern, but stealing state secrets seems like it would have implications of a different order. You're not supposed to discuss classified info on Slack, but that doesn't mean sensitive info isn't shared there, which is rather profound in its own right.
I can't really tell you which one is best, since I've never used any of these (except Session) for an extended period. Briar seems to be the best for anonymity, because it routes everything through the Tor network. SimpleX lets you host your own node, which is pretty cool.