AI Infosec
- airisk.mit.edu AI Risk Repository
Navigate the complex landscape of risks from artificial intelligence with a comprehensive database of over 700 AI risks and two taxonomies to understand the causes and domains of AI risks.
- www.theregister.com AI hallucinates software packages and devs download them – even if potentially poisoned with malware
Simply look out for libraries imagined by ML and make them real, with actual malicious code. No wait, don't do that
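A minimal sketch of the kind of pre-install check this implies: before trusting a package name an assistant suggests, confirm it actually exists on PyPI and is not a brand-new registration. The PyPI JSON endpoint is real; the 90-day age threshold is an arbitrary assumption.

```python
# Sketch: verify an AI-suggested package exists on PyPI and isn't freshly registered.
import json
import sys
import urllib.error
import urllib.request
from datetime import datetime, timezone

def check_pypi_package(name: str, min_age_days: int = 90) -> None:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"{name}: not on PyPI -- possibly a hallucinated name, do not install")
            return
        raise

    # Earliest upload time across all releases.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        print(f"{name}: exists but has no released files -- treat with suspicion")
        return
    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    verdict = "OK" if age_days >= min_age_days else "recently registered -- review before installing"
    print(f"{name}: first upload {age_days} days ago -- {verdict}")

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        check_pypi_package(pkg)
```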
- spectrum.ieee.org Why Are Large AI Models Being Red Teamed?
Intelligent systems demand more than just repurposed cybersecurity tools
- www.theregister.com How 'sleeper agent' AI assistants can sabotage code
Today's safety guardrails won't catch these backdoors, study warns
- www.theregister.com NIST warns of 'snake oil' security claims by AI makers
You really think someone would do that? Go on the internet and tell lies?
- www.theregister.com Boffins devise 'universal backdoor' for image models
Data poisoning appears open to all
- www.theregister.com Microsoft Bing Chat spotted pushing malware via bad ads
From AI to just plain aaaiiiee!
- www.deepmind.com Identifying AI-generated images with SynthID
Today, in partnership with Google Cloud, we’re beta launching SynthID, a new tool for watermarking and identifying AI-generated images. It’s being released to a limited number of Vertex AI customers using Imagen, one of our latest text-to-image models that uses input text to create photorealistic im...
- www.ncsc.gov.uk Thinking about the security of AI systems
Why established cyber security principles are still important when developing or implementing machine learning models.
- github.com GitHub - google/model-transparency
Hi all,
Had a small chat on #AI with somebody yesterday, when this video came up: "10 Things They're NOT Telling You About The New AI" (*)
What strikes me most about this video is not the message but the way it is delivered. It has all the hallmarks of #disinformation, especially as it comes from a YouTube channel that does not even give a name or a person behind it.
Does anybody know this organisation and who is behind it?
Is this "you are all going to lose your job to AI, and that's all due to …" message new? What is the goal behind it?
(Sorry to post this message here. I have been looking for a lemmy/kbin forum on disinformation but did not find one, so I guess it is most relevant here)
(*) https://www.youtube.com/watch?v=qxbpTyeDZp0
- owasp.org OWASP Top 10 for Large Language Model Applications | OWASP Foundation
Aims to educate developers, designers, architects, managers, and organizations about the potential security risks when deploying and managing Large Language Models (LLMs)
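As a toy illustration of one recurring theme in that list (treating model output as untrusted), the sketch below validates an LLM-suggested shell command before running it. The allowlist and the ask_llm() helper are placeholders, not anything from the OWASP document.

```python
# Sketch: never hand model output directly to a shell; validate it first.
import shlex
import subprocess

ALLOWED_COMMANDS = {"ls", "cat", "grep", "head"}

def run_llm_suggested_command(suggestion: str) -> str:
    """Execute a model-suggested shell command only if it passes validation."""
    parts = shlex.split(suggestion)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"refusing to run unvetted command: {suggestion!r}")
    # shell=False avoids interpretation of metacharacters the model may emit.
    result = subprocess.run(parts, capture_output=True, text=True, timeout=30, check=False)
    return result.stdout

# Usage (ask_llm is a placeholder for a real chat-completion call):
# suggestion = ask_llm("how do I list the files in this directory?")
# print(run_llm_suggested_command(suggestion))
```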
- www.bleepingcomputer.com Cybercriminals train AI chatbots for phishing, malware attacks
In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least another one is under development that is allegedly based on Google's AI experiment, Bard.
Anyone else getting tired of all the clickbait articles regarding PoisonGPT, WormGPT, etc. without them ever providing any sort of evidence to back up their claims?
They’re always talking about how the models are so good and can write malware but damn near every GPT model I’ve seen can barely write basic code - no shot it’s writing actually valuable malware, not to mention FUD malware as some are claiming.
Thoughts?
- boringappsec.substack.com Edition 21: A framework to securely use LLMs in companies - Part 1: Overview of Risks
Part 1 of a multi-part series on using LLMs securely within your organisation. This post provides a framework to categorize risks based on different use cases and deployment type.
- defensescoop.com Army looking at the possibility of 'AI BOMs'
The Army is exploring the possibility of asking commercial companies to open up the hood of their artificial intelligence algorithms as a means of better understanding what’s in them to reduce risk and cyber vulnerabilities.
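The article doesn't define what an "AI BOM" would contain, so the sketch below is purely illustrative: fields such a manifest might track, by analogy with a software bill of materials, not any official schema.

```python
# Illustrative only: assumed fields for an "AI bill of materials" manifest.
from dataclasses import dataclass, field

@dataclass
class AIBOM:
    model_name: str
    model_version: str
    base_model: str                      # upstream checkpoint this was fine-tuned from
    weights_sha256: str                  # hash of the released weights for integrity checks
    training_data_sources: list[str] = field(default_factory=list)
    software_dependencies: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    license: str = "unknown"

example = AIBOM(
    model_name="example-classifier",
    model_version="1.2.0",
    base_model="open-source-llm-7b",
    weights_sha256="<hash of weights file>",
    training_data_sources=["internal corpus v3", "public dataset X"],
    software_dependencies=["pytorch==2.1", "transformers==4.36"],
    known_limitations=["not evaluated on adversarial inputs"],
)
```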
- blog.mithrilsecurity.io PoisonGPT: How to poison LLM supply chain on Hugging Face
We will show in this article how one can surgically modify an open-source model, GPT-J-6B, and upload it to Hugging Face to make it spread misinformation while being undetected by standard benchmarks.
- github.com GitHub - JetP1ane/Callisto: Callisto - An Intelligent Binary Vulnerability Analysis Tool
Callisto is an intelligent automated binary vulnerability analysis tool. Its purpose is to autonomously decompile a provided binary and iterate through the pseudo code output looking for potential security vulnerabilities in that pseudo C code. Ghidra's headless decompiler drives the binary decompilation and analysis portion. The pseudo code analysis is performed first by the Semgrep SAST tool, and the findings are then passed to GPT-3.5-Turbo to validate Semgrep's results and identify additional potential vulnerabilities.
This tool's intended purpose is to assist with binary analysis and zero-day vulnerability discovery. The output aims to help the researcher identify potential areas of interest or vulnerable components in the binary, which can be followed up with dynamic testing for validation and exploitation. It certainly won't catch everything, but the two-stage validation from Semgrep to GPT-3.5 aims to reduce false positives and allow a deeper analysis of the program.
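A rough sketch of that two-stage flow, assuming the pseudo C has already been exported from Ghidra: Semgrep produces candidate findings, and an OpenAI model triages them. The file path, prompt wording, and model choice are placeholders, not Callisto's actual code.

```python
# Sketch: Semgrep first pass over decompiled pseudo C, LLM second pass for triage.
import json
import subprocess
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def semgrep_findings(pseudo_c_path: str) -> list[dict]:
    """First pass: static scan of the decompiled pseudo C with Semgrep."""
    out = subprocess.run(
        ["semgrep", "--config", "auto", "--json", pseudo_c_path],
        capture_output=True, text=True, check=False,
    )
    return json.loads(out.stdout).get("results", [])

def llm_triage(finding: dict, source: str) -> str:
    """Second pass: ask the model whether the finding looks like a real bug."""
    prompt = (
        "You are reviewing decompiled pseudo C for security bugs.\n"
        f"Semgrep flagged: {finding['check_id']} at line "
        f"{finding['start']['line']}.\n\nCode:\n{source}\n\n"
        "Is this likely a true positive? Explain briefly."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    path = "decompiled.c"  # placeholder: pseudo C exported from Ghidra's headless analyzer
    source = open(path).read()
    for finding in semgrep_findings(path):
        print(finding["check_id"], "->", llm_triage(finding, source))
```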
- github.com GitHub - mahaloz/DAILA: A decompiler-unified plugin for accessing the OpenAI API to improve your decompilation experience
- github.com GitHub - trailofbits/Codex-Decompiler
Codex Decompiler is a Ghidra plugin that utilizes OpenAI's models to improve the decompilation and reverse engineering experience. It currently has the ability to take the disassembly from Ghidra and then feed it to OpenAI's models to decompile the code. The plugin also offers several other features to perform on the decompiled code such as finding vulnerabilities using OpenAI, generating a description using OpenAI, or decompiling the Ghidra pseudocode.
In this post, I introduce a new Ghidra script that elicits high-level explanatory comments for decompiled function code from the GPT-3 large language model. This script is called G-3PO. In the first few sections of the post, I discuss the motivation and rationale for building such a tool, in the context of existing automated tooling for software reverse engineering. I look at what many of our tools — disassemblers, decompilers, and so on — have in common, insofar as they can be thought of as automatic paraphrase or translation tools. I spend a bit of time looking at how well (or poorly) GPT-3 handles these various tasks, and then sketch out the design of this new tool.
If you want to just skip the discussion and get yourself set up with the tool, feel free to scroll down to the last section, and then work backwards from there if you like.
The GitHub repository for G-3PO can be found here.
- github.com GitHub - ant4g0nist/polar: A LLDB plugin which queries OpenAI's davinci-003 language model to explain the disassembly
LLDB plugin which queries OpenAI's davinci-003 language model to speed up reverse-engineering. Treat it like an extension of Lisa.py, an Exploit Dev Swiss Army Knife.
At the moment, it can ask davinci-003 to explain what the current disassembly does. Here is a simple example of what results it can provide:
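A minimal sketch of the same plugin pattern: an LLDB Python command that sends the current frame's disassembly to an OpenAI chat model. The command name, prompt, and model are assumptions; the LLDB scripting hooks (__lldb_init_module, SBFrame.Disassemble) are standard LLDB Python scripting.

```python
# Sketch of an LLDB command that asks an OpenAI model to explain the current frame.
# Load inside lldb with: command script import explain_asm.py
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_asm(debugger, command, result, internal_dict):
    frame = (debugger.GetSelectedTarget()
                     .GetProcess()
                     .GetSelectedThread()
                     .GetSelectedFrame())
    asm = frame.Disassemble()
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model choice is an assumption, not polar's
        messages=[{"role": "user",
                   "content": "Explain what this disassembly does:\n" + asm}],
    )
    result.AppendMessage(resp.choices[0].message.content)

def __lldb_init_module(debugger, internal_dict):
    debugger.HandleCommand("command script add -f explain_asm.explain_asm explain_asm")
```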
- github.com GitHub - MayerDaniel/ida_gpt
IDAPython script by Daniel Mayer that uses the unofficial ChatGPT API to generate a plain-text description of a targeted routine. The script then leverages ChatGPT again to obtain suggestions for variable and function names.
- github.com GitHub - JusticeRage/Gepetto: IDA plugin which queries OpenAI's gpt-3.5-turbo language model to speed up reverse-engineering
Gepetto is a Python script which uses OpenAI's gpt-3.5-turbo and gpt-4 models to provide meaning to functions decompiled by IDA Pro. At the moment, it can ask gpt-3.5-turbo to explain what a function does, and to automatically rename its variables.
- github.com GitHub - moyix/gpt-wpre: Whole-Program Reverse Engineering with GPT-3
This is a little toy prototype of a tool that attempts to summarize a whole binary using GPT-3 (specifically the text-davinci-003 model), based on decompiled code provided by Ghidra. However, today's language models can only fit a small amount of text into their context window at once (4096 tokens for text-davinci-003, a couple hundred lines of code at most) -- most programs (and even some functions) are too big to fit all at once.
GPT-WPRE attempts to work around this by recursively creating natural language summaries of a function's dependencies and then providing those as context for the function itself. It's pretty neat when it works! I have tested it on exactly one program, so YMMV.
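A toy sketch of that recursive strategy: summarize the callees first, then summarize the caller with those summaries as context, so no single prompt has to hold the whole program. The call graph, decompiled snippets, and ask_llm() helper are placeholders, not GPT-WPRE's code; it also assumes an acyclic call graph for simplicity.

```python
# Sketch: recursive, cached summarization over a call graph so each prompt stays small.
from functools import lru_cache

call_graph = {          # placeholder: function -> functions it calls (e.g. exported from Ghidra)
    "main": ["parse_args", "process"],
    "process": ["helper"],
    "parse_args": [],
    "helper": [],
}
decompiled = {name: f"/* decompiled C for {name} */" for name in call_graph}

def ask_llm(prompt: str) -> str:
    # Placeholder for a real chat-completion call.
    return f"<summary derived from a {len(prompt)}-char prompt>"

@lru_cache(maxsize=None)
def summarize(func: str) -> str:
    callee_context = "\n".join(
        f"{callee}: {summarize(callee)}" for callee in call_graph[func]
    )
    prompt = (
        f"Summarize what `{func}` does.\n"
        f"Known behaviour of its callees:\n{callee_context}\n\n"
        f"Code:\n{decompiled[func]}"
    )
    return ask_llm(prompt)

print(summarize("main"))
```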
- socket.dev Introducing Socket AI – ChatGPT-Powered Threat Analysis - Socket
Socket is using ChatGPT to examine every npm and PyPI package for security issues.
A very interesting approach. Apparently it generates lots of results: https://twitter.com/feross/status/1672401333893365761?s=20
- www.csoonline.com Most popular generative AI projects on GitHub are the least secure
Researchers use the OpenSSF Scorecard to measure the security of the 50 most popular generative AI large language model projects on GitHub.
They used OpenSSF Scorecard to check the most starred AI projects on GitHub and found that many of them didn't fare well.
The article is based on the report from Rezilion. You can find the report here: https://info.rezilion.com/explaining-the-risk-exploring-the-large-language-models-open-source-security-landscape (any email name works, you'll get access to the report without email verification)
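For reference, a minimal sketch of reproducing that measurement for a single repository with the OpenSSF Scorecard CLI (https://github.com/ossf/scorecard). The repository name is a placeholder; --repo and --format=json are Scorecard's documented flags, and the CLI needs a GITHUB_AUTH_TOKEN in the environment.

```python
# Sketch: run the Scorecard CLI against one repo and print the per-check scores.
import json
import subprocess

def scorecard(repo: str) -> dict:
    out = subprocess.run(
        ["scorecard", f"--repo={repo}", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

report = scorecard("github.com/example-org/example-llm-project")  # placeholder repo
print("aggregate score:", report.get("score"))
for check in report.get("checks", []):
    print(f'{check["name"]}: {check["score"]}')
```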