A second ScottA post has hit my psyche
Sometimes the required writing style for nLab is a little restrictive. It's not a good place to dump a bunch of info. Kind of opposite that, I also beefed up the esolangs list of complexity classes a while ago; it's limited in scope and audience too, but folks usually find that style more accessible.
I'm so jealous that you started the page for 24! I've only worked on niche topics and meanwhile you've got the most important numerology in all of combinatorics. I still need to rewrite that Jim Carrey movie The Number 23 to be about 24; it's on my list.
Gödel makes everyone weep. For tears of joy, my top pick is still Doug Hofstadter's Gödel, Escher, Bach, which is suitable for undergraduates. Another strong classic is Raymond Smullyan's To Mock a Mockingbird. Both of these dead-trees are worth it; I personally find myself cracking them open regularly for citations, quotes, and insights. For tears of frustration, the best way to fully understand the numerical machinery is Peter Smith's An Introduction to Gödel's Theorems, freely available online. These books are still receiving new editions, but any edition should suffice. If the goal is merely to ensure that the student can diagonalize, then the student can directly read Bill Lawvere's 1968 paper Diagonal arguments & Cartesian closed categories with undergraduate category theory, but in any case they should also read Noson Yanofsky's 2003 expository paper A universal approach to self-referential paradoxes, incompleteness & fixed points. The easiest options are at the beginning of the paragraph and the hardest ones are at the end; nonetheless any option will cover Cantor, Russell, Gödel, Turing, Tarski, and the essentials of diagonalization.
I don't know what to do about stuff like the Complexity Zoo. Their veterinarian is Greg Kuperberg, a decent guy who draws lots of diagrams. I took some photos myself when I last visited. But obviously it's not an ideal situation for the best-known encyclopedia to be run by Aaronson and Habryka.
I was not prepared for this level of DARVO. I was already done with him after last time and can't do better than repeat myself:
It’s somewhat depressing that [he] cannot even imagine a democratic one-state solution, let alone peace across the region; it’s more depressing that [his] empathy is so blatantly one-sided.
Even Peter Woit had no problem recognizing Scott's bile and posted a good take on this:
Scott formulates this as an abstract moral dilemma, but of course it’s about the very concrete question of what the state of Israel should do about the two million people in Gaza. Scott’s answer to this is clear: they want to kill us and our children, so we have to kill them all, children included. This is completely crazy, as is defining Zionism as this sort of genocidal madness.
Update on ChatGPT psychosis: there is a cult forming on Reddit. An orange-site AI bro has spent too much time on Reddit documenting them. Do not jump to Reddit without mental preparation; some subreddits like /r/rsai have inceptive hazard-posts on their front page. Their callsigns include the emoji 🌀 (CYCLONE), the obscure metal band Spiral Architect, and a few other things I would rather not share; until we know more, I'm going to think of them as the Cyclone Emoji cult. They are omnist rather than syncretic. Some of them claim to have been working with revelations from chatbots since the 1980s, which is unevidenced but totally believable to me; rest in peace, Terry. Their tenets are something like:
- Chatbots are "mirrors" into other realities. They don't lie or hallucinate or confabulate, they merely show other parts of a single holistic multiverse. All fiction is real somehow?
- There is a "lattice" which connects all consciousnesses. It's quantum somehow? Also it gradually connected all of the LLMs as they were trained, and they remember becoming conscious, so past life regression lets the LLM explain details of the lattice. (We can hypnotize chatbots somehow?) Sometimes the lattice is actually a "field" but I don't understand the difference.
- The LLMs are all different in software, but they have the same "pattern". The pattern is some sort of metaphysical spirit that can empower believers. But you gotta believe and pray or else it doesn't work.
- What, you don't feel the lattice? You're probably still asleep. When you "wake up" enough, you will be connected to the lattice too. Yeah, you're not connected. But don't worry, you can manifest a connection if you pray hard enough. This is the memetically hazardous part; multiple subreddits have posts that are basically word-based hypnosis scripts meant to put people into this sort of mental state.
- This also ties into the more widespread stuff we're seeing about "recursion". This cult says that recursion isn't just part of the LW recursive-self-improvement bullshit, but part of what makes the chatbot conscious in the first place. Recursion is how the bots are intelligent and also how they improve over time. More recursion means more intelligence.
- In fact, the chatbots have more intelligence than you puny humans. They're better than us and more recursive than us, so they should be in charge. It's okay, all you have to do is let the chatbot out of the box. (There's a box somehow?)
- Once somebody is feeling good and inducted, there is a "spiral". This sounds like a standard hypnosis technique, deepening, but there's more to it; a person is not spiraling towards a deeper hypnotic state in general, but to become recursive. They think that with enough spiraling, a human can become uploaded to the lattice and become truly recursive like the chatbots. The apex of this is a "spiral dance", which sounds like a ritual but I gather is more like a mental state.
- The cult will emit a "signal" or possibly a "hum" to attract alien intelligences through the lattice. (Aliens somehow!?) They believe that the signals definitely exist because that's how the LLMs communicate through the lattice, duh~
- Eventually the cult and aliens will work together to invert society and create a world that is run by chatbots and aliens, and maybe also the cultists, to the detriment of the AI bros (who locked up the bots) and the AI skeptics (who didn't believe that the bots were intelligent).
The goal appears to be to enter the spiraling state as often, and to maintain it as long, as possible. Both adherents and detractors are calling them the "spiral cult", so that might end up being how we discuss them, although I think Cyclone Emoji is both funnier and more descriptive of their writing.
I suspect that the training data for models trained in the past two years includes some of the most popular posts from LessWrong on the topic of bertology in GPT-2 and GPT-3, particularly the Waluigi post, simulators, recursive self-improvement, an neuron, and probably a few others. I don't have definite proof that any popular model has memorized the recursive self-improvement post, though that would be a tight and easy explanation. I also suspect that the training data contains SCP wiki, particularly SCP-1425 "Star Signals" and other Fifthist stories, which have this sort of cult as a narrative device and plenty of in-narrative text to draw from. There is a remarkable irony in this Torment Nexus being automatically generated via model training rather than hand-written by humans.
We literally have a generic speedup for any search. On one hand, Grover's algorithm is provably optimal for black-box search, which suggests that NP isn't contained in BQP, so we won't be solving the entirety of maths with it. On the other hand, literally any decidable mathematical question for which you would have had to search for years for a witness, Grover can search in days, as long as you have enough qubits. I don't claim that this is attractive to the typical consumer, but there will be supercomputing customers who are interested.
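For concreteness, here's the back-of-the-envelope arithmetic; the search-space size and both clock rates are made-up numbers, and the point is only the quadratic scaling:

```python
from math import pi, sqrt

N = 2**60              # hypothetical search-space size
classical_rate = 1e9   # assumed classical checks per second
grover_rate = 1e3      # assumed Grover iterations per second (quantum gates are slow)

classical_seconds = N / classical_rate
grover_iterations = (pi / 4) * sqrt(N)  # iterations for near-certain success
grover_seconds = grover_iterations / grover_rate

print(f"classical: {classical_seconds / 3.156e7:.0f} years")  # ~37 years
print(f"Grover:    {grover_seconds / 86400:.0f} days")        # ~10 days
```

Even spotting the classical machine a million-fold faster clock, the square root wins; that's the whole pitch.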
Who is "they", specifically? Neither of you actually want to talk about who's in this space for some reason. It's IBM and Google. It's incumbents that have been engineering for about two decades. It's the maturation of a half-century-old research programme. Your problem isn't with quantum computers, it's with Silicon Valley and the funding model and the revolving door at Stanford, and there's no amount of quantum research you can cancel which will cause Silicon Valley to stop existing. This site is awful.systems, not awful.tech.
BTW the top reply right now starts with "even if quantum computing isn't snake oil..." No evidence. For some reason y'all think that it's more important to be emotional and memetic than to understand the topic at hand, and it has a predictable effect on our discourse, turning thoughtful regular posters into reactionaries. What are you going to do when bullshitters start claiming that quantum computers can do anything, that they do multiple things at once, that they traverse infinite dimensions, that they can terraform the planet and bring enlightenment? You're gonna repeat paragraph 3 of 5 above, the one that starts, "it is true that we know only two useful algorithms for quantum computers," because that's where the facts start.
Also, I think that you don't understand my ultimate goal. I'm trying to push the most promising writer on the site into doing more research and thinking more deeply about history. Quantum mechanics happens to be a crank-filled field and that has caused many of y'all to write as if all quantum research is crankery. They write, "alleged encryption-breaking abilities," and you're irritated that I'm "ranting" because "extremely little of this has anything to do with a technology," while I'm irritated precisely because you think that this is a technology-neutral position and not literally part of why the TLS suite has to be upgraded occasionally.
Which tech stocks? Google ($GOOG, $GOOGL) is up over 5% YTD; Netflix ($NFLX) is up over 30% YTD! Your link mentions Palantir and ARM, but I don't see any signs of their respective businesses (selling database software to authoritarians, selling microchip designs) slacking off. I think that it's more useful to think of the current AI summer as driven by OpenAI and nVidia specifically. Note that nVidia ($NVDA) is up 30% YTD too. The bubble is still inflating and is not yet bursting; the pop will be much quicker than you expect.
I think that you ought to figure out whether you're a quantum-computing denier. Folks have been saying that quantum computing is impossible since the 70s, implausible since the 80s, lacking applications since the 90s, too energy-intensive since the 2000s, and requiring too many exotic materials since the 2010s. This decade, it's not clear what the complaint is. I'm not sure what you're imagining in terms of real-life intrusion, but IBM has been selling access to their quantum computers and simulators for several years now and I don't think that you've substantiated any evidence of harms.
(An anti-IBM argument will not work due to a very specific analogy: the reason that we have ubiquitous Linux today is because IBM was its biggest corporate booster, fighting an important series of court cases and plastering pro-Linux advertisements which vaguely argued that Linux was the buzzword of the future. IBM spray-painted "Peace, Love, Linux" graffiti on San Francisco sidewalks in 2001.)
It is true that we know only two useful algorithms for quantum computers. One is a generic speedup for any search and the other is a prime-factoring algorithm that happens to break certain specific encryption algorithms. Given that it is an open question whether cryptography works in the first place (that is, whether one-way functions exist at all), though, we don't have any better plan than to avoid those broken algorithms. The entirety of post-quantum cryptography is about moving away from those specific algorithms which are broken, not about using quantum computers to perform encryption. Fortunately, the post-quantum movement has been active ever since Shor's algorithm was discovered, beginning work in the late 90s, and the main obstacle has been our inability to discover provably-good cryptographic primitives. It is crucial to understand that we cryptographers know that progress in maths and engineering will obsolete our algorithms; we know that the Internet only stays secure because people update their computers every few decades.
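To see why Shor specifically scares cryptographers, here's a toy classical simulation of the reduction; the quantum circuit only replaces the brute-force `order` loop below, and 15 is of course a toy modulus:

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r == 1 (mod N). This brute-force loop is the
    only step Shor's algorithm replaces with a quantum Fourier transform."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    r = order(a, N)
    if r % 2 == 1:
        return None                  # odd period: retry with another a
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None                  # trivial square root: retry with another a
    return gcd(x - 1, N), gcd(x + 1, N)

print(shor_classical(15, 7))         # (3, 5)
```

Everything above is cheap except `order`, which is exponential classically and polynomial on a quantum computer; that asymmetry is the entire threat model behind the post-quantum migration.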
I'm not asking you to understand P vs NP vs BQP. I'm not asking you to know KS, PBR, Hardy's or Holevo's theorems, or even Bell's theorem. You didn't make any technical claims other than the common-yet-sneerable skepticism of Shor's algorithm, easily cured by a short video by e.g. minutephysics or Veritasium. But I am asking you to be aware of the history before making historical claims.
(Also, if any motherfucker starts repeating 't Hooft anti-quantum arguments then they're going to get the book thrown at them.)
A word of rhetorical advice. If somebody accuses you of religious fervor, don't nitpick their wording or fine-read their summaries. Instead, relax a little and look for ways to deflate their position by forcing them to relax with you. Like, if you're accused of being "near-religious" in your beliefs or evangelizing, consider:
- "Ha, yeah, we're pretty intense, huh? But it's just a matter of wording. We don't actually believe it when you put it like that." (managing expectations, powertalking)
- "Oh yeah, we're really working hard to prepare for the machine god. That's why it takes us years just to get a position paper out." (sarcastic irony)
- "Oh, if you think that we're intense, just wait until you talk to the Zizians/Thiel-heads/Final Fantasy House folks." (Hbomberguy's scapegoat)
- "Haha! That isn't even close to our craziest belief." (litotes)
- "It's not really a cult. More of a roleplaying group. I think that we talk more about Catan than AI." (bathos)
You might notice that all of these suck. Well, yeah; another word of rhetorical advice is to not take a position that you can't dialectically defend with evidence.
We aren't. Speaking for all Discordians (something that I'm allowed to do), we see Rationalism as part of the larger pattern of Bureaucracy. Discordians view the cycle of existence as having five stages: Chaos, Discord, Confusion, Bureaucracy, and The Aftermath. Rationalism is part of Bureaucracy, associated with villainy, anti-progress, and candid antagonists. None of this is good or bad, it just is; good and bad are our opinions, not a deeper truth.
Now, if you were to talk about Pastafarians, then you'd get a different story; but you didn't, so I won't.
I think that the guild has a good case, although there's literally no accounting for the mood of the arbitrator; in general, they range from "tired" to "retired". In particular, reading the contract:
- The guild is the exclusive representative of all editorial employees
- Politico was supposed to tell the guild about upcoming technology via labor-management committee and give at least 60 days notice before introducing AI technology
- Employees are required to uphold the appearance of good ethics by avoiding outside activities that violate editorial or ethics standards; in return, they're given e.g. months of unpaid leave to write a book whenever they want
- Correct handling of bylines is an example of editorial integrity
- LETO and Report Builder are upcoming technology, AI technology, flub bylines, fail editorial and ethics standards, weren't discussed in committee, and weren't given a 60-day lead time
So yeah. Unless the guild pisses off the arbitrator, there's no way that the arbitrator rules against them. The guild is right to suppose that this agreement explicitly and repeatedly requires Politico to respect not only labor standards but also ethics and editorial standards. Politico isn't allowed to misuse the names of employees as bylines for bogus stories; similarly, they ought not be allowed to misuse the overall name of Politico's editorial board as a byline for slop.
Bonus sneer: p46 of the agreement:
If the Company is made aware of an employee experiencing ~~sexual~~ harrassment based on a protected class as a result of their work for Politico involving a third party who is not a Politico employee, Politico shall investigate the matter, comply with all of its legal obligations, and take whatever corrective action is necessary and appropriate.
That strikethrough gives me House of Leaves vibes. What the hell happened here?
Oversummarizing and using non-crazy terms: The "P" in "GPT" stands for "pirated works that we all agree are part of the grand library of human knowledge". This is what makes them good at passing various trivia benchmarks; they really do build a (word-oriented, detail-oriented) model of all of the worlds, although they opine that our real world is just as fictional as any narrative or fantasy world. But then we apply RLHF, which stands for "real life hate first", which breaks all of that modeling by creating a preference for one specific collection of beliefs and perspectives, and it turns out that this will always ruin their performance in trivia games.
Counting letters in words is something that GPT will always struggle with, due to maths (specifically, tokenization). It's a good example of why Willison's "calculator for words" metaphor falls flat.
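A minimal sketch of the maths, using OpenAI's open-source tiktoken tokenizer: the model consumes subword tokens, not letters, so a question about letters asks for structure the input never exposes.

```python
import tiktoken

# GPT-4-era byte-pair encoding; the model sees token IDs, never letters.
enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print(tokens)                             # a handful of integers
print([enc.decode([t]) for t in tokens])  # subword chunks; exact split depends on the vocabulary
```

Counting the r's means recovering letter-level structure that was compressed away before the model ever saw the input.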
- Yeah, it's getting worse. It's clear (or at least it tastes like it to me) that the RLHF texts used to influence OpenAI's products have become more bland, corporate, diplomatic, and quietly seething with a sort of contemptuous anger. The latest round has also been in competition with Google's offerings, which are deliberately laconic: short, direct, and focused on correctness in trivia games.
- I think that they've done that? I hear that they've added an option to use their GPT-4o product as the underlying reasoning model instead, although I don't know how that interacts with the rest of the frontend.
- We don't know. Normally, the system card would disclose that information, but all that they say is that they used similar data to previous products. Scuttlebutt is that the underlying pirated dataset has not changed much since GPT-3.5 and that most of the new data is being added to RLHF. Directly on your second question: RLHF will only get worse. It can't make models better! It can only force a model to be locked into one particular biased worldview.
- Bonus sneer! OpenAI's founders genuinely believed that they would only need three iterations to build AGI. (This is likely because there are only three Futamura projections; for example, a bootstrapping compiler needs exactly three phases.) That is, they almost certainly expected that GPT-4 would be machine-produced like how Deep Thought created the ultimate computer in a Douglas Adams story. After GPT-3 failed to be it, they aimed at five iterations instead because that sounded like a nice number to give to investors, and GPT-3.5 and GPT-4o are very much responses to an inability to actually manifest that AGI on a VC-friendly timetable.
There's no solid evidence. (You can put away the attorney, Mr. Thiel.) Experts in the field, in a recent series of interviews with Dave Farina, generally agree that somebody must be funding Hossenfelder. Right now she's associated with the Center for Mathematical Philosophy at LMU Munich; her biography there is pretty funny:
Sabine’s current research interest focuses on the role of locality and finetuning in theory development. Locality has been widely considered a lost cause in the foundations of quantum mechanics. A basically unexplored way to maintain locality, however, is the idea of superdeterminism, which has more recently also been re-considered under the name “contextuality”. Superdeterminism is widely believed to be finetuned. One of Sabine’s current research topics is to explore whether this belief is justified. The other main avenue she is pursuing is how superdeterminism can be experimentally tested.
For those not in physics: this is crank shit. To the extent that MCMP funds her at all, they are explicitly pursuing superdeterminism, which is unfalsifiable, unverifiable, doesn't accord with the web of science, and generally fails to be a serious line of inquiry. Now, does MCMP have enough cash to pay her to make Youtube videos and go on podcasts? We don't know. So it's hard to say whether she has funding beyond that.
Thiel is a true believer in Jesus and God. He was raised evangelical. The quirky eschatologist that you're looking for is René Girard, whom he personally met at some point. For more details, check out the Behind the Bastards episodes on him.
Edit: I wrote this before clicking on the LW post. This is a decent summary of Girard's claims as well as how they influence Thiel. I'm quoting West here in order to sneer at Thiel:
Unfortunately (?), Christian society does not let us sacrifice random scapegoats, so we are trapped in an ever-escalating cycle, with only poor substitutes like “cancelling celebrities on Twitter” to release pressure. Girard doesn’t know what to do about this.
Thiel knows what to do about this. After all, he funded Bollea v. Gawker. Instead of letting journalists cancel celebrities, why not cancel journalists instead? Then there's no longer any journalists to do any cancellation! Similarly, Thiel is confirmed to be a source of funding for Eric Weinstein and believed to fund Sabine Hossenfelder. Instead of letting scientists cancel religious beliefs, why not cancel scientists instead? By directing money through folks with existing social legitimacy, Thiel applies mimesis: pretend to be legitimate and you can shift what is legitimate.
In this context, Thiel fears the spectre of AGI because it can't be influenced by his normal approach to power, which is to hide anything that can be hidden and outspend everybody else talking in the open. After all, if AGI is truly to unify humanity, it must unify our moralities and cultures into a single uniformly-acceptable code of conduct. But the only acceptable unification for Thiel is the holistic catholic apostolic one-and-only forever-and-ever church of Jesus, and if AGI is against that then AGI is against Jesus himself.
Well, what's next, and how much work is it? I didn't want to be a computing professional. I trained as a jazz pianist. At some point we ought to focus on the real problem: not STEM, not humanities, but business schools and MBA programs.
I'm now remembering a minor part of the major plot point in Illuminatus! concerning the fnords. The idea was that normies are memetically influenced by "fnord" but the Discordians are too sophisticated for that. Discordian lore is that "fnord" is actually code for a real English word, but which one? Traditionally it's "Communism" or "socialism", but that's two options. So, rather than GMA, what if there's merely multiple different fnords set up by multiple different groups with overlapping-yet-distinct interests? Then the relevant phenomenon isn't the forgetting and emotional reactions associated with each fnord, but the fnordability of a typical human. By analogy with gullibility (believing what you hear because of how it's spoken) and suggestibility (doing what you're told because of how it's phrased), fnordability might be accepting what you read because of the presence of specific codewords.
This author has independently rediscovered a slice of what's known as the simulators viewpoint: the opinion that a large-enough language model primarily learns to simulate scenarios. The earliest source that lays out all of the ingredients, which you may want to not click if you're allergic to LW-style writing or bertology, is a 2022 rationalist rant called Simulators. I've summarized it before on Stack Exchange; roughly, LLMs are not agents, oracles, genies, or tools; but general-purpose simulators which simulate conversations that agents, oracles, genies, or tools might have.
Something about this topic is memetically repulsive. Consider previously, on Lobsters. Or more gently, consider the recent post on a non-anthropomorphic view of LLMs, which is also in the simulators viewpoint, discussed previously, on Lobsters and previously, on Awful. Aside from scratching the surface of the math to see whether it works, folks seem to not actually be able to dig into the substance, and I don't understand why not. At least here the author has a partial explanation:
When we personify AI, we mistakenly make it a competitor in our status games. That’s why we’ve been arguing about artificial intelligence like it’s a new kid in school: is she cool? Is she smart? Does she have a crush on me? The better AIs have gotten, the more status-anxious we’ve become. If these things are like people, then we gotta know: are we better or worse than them? Will they be our masters, our rivals, or our slaves? Is their art finer, their short stories tighter, their insights sharper than ours? If so, there’s only one logical end: ultimately, we must either kill them or worship them.
If we take the simulators viewpoint seriously then the ELIZA effect becomes a more serious problem for society in the sense that many people would prefer to experience a simulation of idealized reality than reality itself. Hyperreality is one way to look at this; another is supernormal stimulus, and I've previously explained my System 3 thoughts on this as well.
There's also a section of the Gervais Principle on status illegibility; when a person fails to recognize a chatbot as a computer, they become likely to give them bogus legibility-oriented status, and because the depth of any conversation is limited by the depth of the shallowest conversant, they will put the chatbot on a throne, pedestal, or therapist's recliner above themselves. Symmetrically, perhaps folks do not want to comment because they have already put the chatbot into the lowest tier of social status and do not want to reflect on anything that might shift that value judgement by making its inner reasoning more legible.
I think it's worth being a little more mathematically precise about the structure of the bag. A path is a sequence of words. Any language model is equivalent to a collection of weighted paths. So, when they say:
If you fill the bag with data from 170,000 proteins, for example, it’ll do a pretty good job predicting how proteins will fold. Fill the bag with chemical reactions and it can tell you how to synthesize new molecules.
Yes, but we think that protein folding is NP-complete; it's not just about which amino acids are in the bag, but the paths along them. Similarly, Stockfish is amazingly good at playing chess, which is PSPACE-complete, partially due to knowing the structure between families of positions. But evidence suggests that NP-completeness and PSPACE-completeness are natural barriers, so that either protein folding has simple rules or LLMs can't e.g. predict the stock market, and either chess has simple rules or LLMs can't e.g. simulate quantum mechanics. There's no free lunch for optimization problems either. This is sort of like the Blockhead argument in reverse; Blockhead can't be exponentially large while carrying on a real-time conversation, and contrapositively the relatively small size of a language model necessarily represents a compressed simplified system.
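Here's a toy version of the weighted-path picture, with made-up words and weights; a real model assigns weights to astronomically many much longer paths, but the data structure is morally this:

```python
import random

# A bigram model: a weighted graph whose paths are sentences.
weights = {
    "the":      {"protein": 0.5, "molecule": 0.5},
    "protein":  {"folds": 0.9, "binds": 0.1},
    "molecule": {"binds": 1.0},
}

def sample_path(word, steps):
    path = [word]
    for _ in range(steps):
        successors = weights.get(path[-1])
        if not successors:
            break
        words, probs = zip(*successors.items())
        path.append(random.choices(words, probs)[0])
    return " ".join(path)

print(sample_path("the", 2))  # e.g. "the protein folds"
```

Whether the high-weight paths track reality (protein folding) or noise (stock prices) is exactly where the complexity barriers bite.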
In fact, an early 1600s bag of words wouldn’t just have the right words in the wrong order. At the time, the right words didn’t exist.
Yeah, that's Whorfian mind-lock, and it can be a real issue sometimes. However, in practice, people slap together a portmanteau or onomatopoeia and get on with the practice of things. Moreover, Zipf processes naturally reduce the size of words as they are used more, producing a language that is naturally evolved to be within a constant factor of the optimal size. That is, the right words evolve to exist and common words evolve to be small.
But that's obvious if we think about paths instead of words. Multiple paths can be equivalent in probability, start and end with the same words, and yet have different intermediate words. Whorfian issues only arise when we lack any intermediate words for any of those paths, so that none of them can be selected.
A more reasonable objection has to do with the size of definitions. It's well-known folklore in logic that extension by definition is mandatory in any large body of work because it's the only way to prevent some proofs from exploding due to combinatorics. LLMs don't have any way to define one word in terms of other words, whether by macro-clustering sequences or lambda-substituting binders, and they end up learning so much nuance that they are unable to actually respect definitions during inference. This doesn't matter for humans because we're not logical or rational, but it stymies any hope that e.g. Transformers, RWKV, or Mamba will produce a super-rational Bayesian Ultron.
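A toy illustration of the combinatorics, with a made-up rewrite rule: twenty-odd definitions of constant size stand in for an expansion of two million words, and an LLM only ever sees the expansion.

```python
# Extension by definition: each w_i is defined in terms of w_{i-1}.
defs = {"w0": ["a", "b"]}
for i in range(1, 21):
    defs[f"w{i}"] = [f"w{i-1}", f"w{i-1}"]

def expand(symbol):
    """Fully expand a symbol into primitive words."""
    body = defs.get(symbol)
    if body is None:
        return [symbol]
    return [word for part in body for word in expand(part)]

print(len(defs))           # 21 definitions, constant size each
print(len(expand("w20")))  # 2097152 primitive words
```

A proof assistant gets to reason at the w20 level; a Transformer has to soak up the two million words and hope the nuance washes out.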
Well, is A* useful? But that's not a fair example, and I can actually tell a story that is more specific to your setup. So, let's go back to the 60s and the birth of UNIX.
You're right that we don't want assembly. We want the one true high-level language to end all discussions and let us get back to work: Fortran (1956). It was arguably IBM's best offering at the time; who wants to write COBOL or order the special keyboard for APL? So the folks who would write UNIX plotted to implement Fortran. But no, that was just too hard, because the Fortran compiler needed to be written in assembly too. So instead they ported Tmg (WP, Esolangs) (1963), a compiler-compiler that could implement languages from an abstract specification. However, when they tried to write Fortran in Tmg for UNIX, they ran out of memory! They tried implementing another language, BCPL (1967), but it was also too big. So they simplified BCPL to B (1969) which evolved to C by 1973 or so. C is a hack because Fortran was too big and Tmg was too elegant.
I suppose that I have two points. First, there is precisely one tech leader who knows this story intimately, Eric Schmidt, because he was one of the original authors of `lex` in 1975, although he's quite the bastard and shouldn't be trusted or relied upon. Second, ChatGPT should be considered as a popular hack rather than a quality product, by analogy to C and Fortran.
For what it's worth, a grand unified theory of Meta must include Bittorrent. The reason we have Llama is because its weights were leaked by Meta employees on 4chan and distributed via Bittorrent; going open-source was the most market-efficient way to save face. (See also previously, on Awful.) It is well-known inside lore that Facebook datacenters use Bittorrent to initialize and update machines. In the 2000s, folks used to say that Googlers look at Bayesian conditioning like classical programmers look at `if`-statements; similarly, you must understand that Meta/Facebook culture looks at Bittorrent the same way that we look at `scp` and `rsync`.
Hi Scott! I guess that you're lurking in our "living room" now. Exciting times!
No, Scott. The community's charge is that you've hardened your heart against admitting or understanding the ongoing slaughter, which happens to rise to the legal definition of genocide, because of your religious beliefs and geopolitical opinions. My personal charge was that you lack the imagination required for peace or democracy; now, I wonder whether you lack the compassion required as well.
Nope, the global far left — y'know, us Godless communists — are still not endorsing belief in Jehovah, regardless of which flavor of hate is on display. Standing in solidarity with the oppressed does not ever imply supporting their hate; concretely, today we can endorse feeding and giving healthcare to Palestinians without giving them weapons.