Posts: 0 · Comments: 43 · Joined: 1 mo. ago
  • See for example the storage systems from Vast or Pure. You can increase the compression window size and dedup far smaller blocks. Fast random IO also lets you do that ”online”, in the background. In the case of Vast, you also have multiple readers on the same SSD doing that compression and dedup.

    So the feature isn’t that special. What you can do with it in practice changes drastically.
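
    To make the dedup part concrete, here's a toy sketch of fixed-block dedup - the block size is an assumption, and this is nothing Vast- or Pure-specific:

    ```python
    # Toy block-level dedup (hypothetical 4 KiB blocks, nothing vendor-specific):
    # hash every block and store each unique block exactly once. Smaller blocks
    # find more duplicates but need far more random IO per byte, which is why
    # flash makes doing this online practical and spinning rust doesn't.
    import hashlib

    BLOCK_SIZE = 4096  # assumed block size

    def dedup(data: bytes):
        store = {}   # hash -> unique block actually kept on disk
        layout = []  # ordered list of hashes describing the original data
        for off in range(0, len(data), BLOCK_SIZE):
            block = data[off:off + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)
            layout.append(digest)
        return store, layout

    data = bytes(1024) * 64 + b"unique" * 1000  # mostly repeated zero blocks
    store, layout = dedup(data)
    print(f"{len(layout)} blocks referenced, {len(store)} actually stored")
    ```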

  • The flaw with hard drives shows up in large pools. When a drive fails, the rebuild is simply too slow, unless you build huge pools where the rebuild is spread across many drives. So you need additional drives for more parity.

    I don’t know who cares about shelf life. Drives spin all their lives, which is 5-10 years. Use M-DISC or something if you want shelf life.
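
    Just to put rough numbers on the rebuild problem (drive size and throughput are assumptions, best case):

    ```python
    # Back-of-envelope rebuild time for one failed drive, sequential best case.
    # Real rebuilds share the drives with production IO and do random reads,
    # so think days rather than hours.
    capacity_tb = 24         # assumed drive size
    throughput_mb_s = 250    # assumed best-case sequential speed

    hours = capacity_tb * 1e12 / (throughput_mb_s * 1e6) / 3600
    print(f"best case: ~{hours:.0f} hours")   # ~27 hours
    ```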

  • I agree with you, mostly. Margins in the datacenter are thin for some players. Not for Nvidia; they are at something like 60% pure profit per chip, even after software and R&D. That will have an effect on how we design stuff in the next few years.

    I think we’ll need both ”GPUs” and traditional CPUs for the foreseeable future: GPU-style for bandwidth- or compute-constrained workloads, and CPU-style for latency-sensitive workloads or pointer chasing. Now, I do think we’ll slap them both on top of the same memory, APU-style à la the MI300A.

    That is, as long as x86 has the single-threaded advantage, RISC-V won’t take over that market, and as long as GPUs have higher bandwidth, RISC-V won’t take over that market either.

    Finally, I doubt we’ll see a performant RISC-V chip from China in the next decade - they simply lack the EUV fabs. From outside of China, maybe, but the demand isn’t nearly as large.

  • Not economical. Storage is already done on far larger fab nodes than CPUs and other components. This is a case where higher density actually can be cheaper. ”Mature” nodes are most likely cheaper than the ”ancient” process nodes simply due to age and efficiency. (See also the disaster in the auto industry during covid. Car makers stopped ordering parts made on ancient process nodes, so the nodes were shut down permanently due to cost. After covid, fun times for automakers that had to modernise.)

    Go compare prices: a new NVMe M.2 drive will most likely be cheaper per TB than a SATA 2.5”. The extra plastic shell, extra shipping volume and the SATA controller make up the difference. 3.5” would make it even worse. In the datacenter, we are moving towards ”rulers”, with 61TB available now and probably 120TB soon. Now, these are expensive, but the cost per TB is actually not that horrible when compared to consumer drives.
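
    Something like this, with completely made-up example prices - plug in whatever your local shops actually charge:

    ```python
    # $/TB comparison with hypothetical prices, just to show the math.
    drives = {
        'SATA 2.5" 4TB':   {"price": 280,  "tb": 4},   # hypothetical price
        'NVMe M.2 4TB':    {"price": 250,  "tb": 4},   # hypothetical price
        'E1.L ruler 61TB': {"price": 6500, "tb": 61},  # hypothetical price
    }
    for name, d in drives.items():
        print(f"{name}: {d['price'] / d['tb']:.0f} $/TB")
    ```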

  • It’s your country, your attempt at a democratic system and your mess to deal with internally. That the system you have over there is fundamentally broken has been known (and in some cases mathematically proven) for a long time now. Personally, I’m getting tired of the ”Not all Americans” stuff. It kinda worked the first time around, but you had four years to deal with him peacefully through your legal system, demonstrations, rallies and public pressure. He got reelected and there was a peaceful transition of power, possibly the last.

    Enough people voted for him. The margins weren’t even that thin. You are now represented by President Orange in your international affairs and force projection. We can’t really help you that much either, as you have positioned yourselves as a dominant global power, with economic, soft and hard power.

    My entirely unqualified guess - we’ll start accepting political refugees from the US, fearing for their lives, fairly soon. I’m guessing LGBTQ+, some ethnicities, some scientists and some public servants are in the danger zone. Stop being ”horrified” and start being ”absolutely fucking terrified”.

    In short, go deal with your carrot man, we can’t do it for you. We can hopefully provide some refuge. But we can only deal with you as a nation, represented by Trump. Sorry.

  • Unless you have actual tooling (e.g. Red Hat errata plus some service on top of that), just don’t even try.

    Stop downloading random shit from Docker Hub and GitHub. Pick a distro that has whatever you need packaged, install from the repositories and turn on automatic updates. If you need stuff outside of the repos, use first-party packages and turn on auto updates. If there aren’t any decent packages, just don’t do it. There is a reason people pay Red Hat a shitton of money, and that’s because they deal with much of this bullshit for you.

    At home, I simply won’t install anything unless I can enable automatic updates. NixOS solves much of it. Twice a year I need to bump the distro version, bump the Nextcloud release and deal with deprecations, and that’s it.

    I also highly recommend turning on automatic periodic reboots, so you actually get new kernels running…
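
    Since the ”did I actually get the new kernel” part bites people, here's a tiny sketch of a check. The paths assume a normal FHS distro (so not NixOS):

    ```python
    # Toy check: warn when the running kernel is older than the newest one
    # installed on disk, i.e. the box needs a reboot before the update helps.
    # Assumes the /lib/modules layout of a conventional distro, not NixOS.
    import os
    import re

    def version_key(name: str):
        # "6.8.0-45-generic" -> (6, 8, 0, 45) so versions sort numerically
        return tuple(int(p) for p in re.findall(r"\d+", name))

    running = os.uname().release
    installed = sorted(os.listdir("/lib/modules"), key=version_key)

    if installed and installed[-1] != running:
        print(f"running {running}, but {installed[-1]} is installed - reboot needed")
    else:
        print(f"running the newest installed kernel: {running}")
    ```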

  • Dumbware.io - Stupid Simple Software

  • Just going off the marketing here:

    Git server with CI/CD, kanban, and packages.

    From the looks of it, they also seem to bundle the VS Code server and a bunch of other stuff. I’m actually kinda surprised they do it with only 1GB of RAM.

  • Dumbware.io - Stupid Simple Software

  • Not to be that guy, but 12% of 8GB (just under 1GB) isn’t even close to ”heavy as fuck” for a CI/CD and collaboration suite that seems aimed at enterprise users.

    You can also tweak how much memory the JVM grabs: ’-Xms100m’ sets the initial heap, and ’-Xmx’ caps the maximum. Any defaults are most likely aimed at much larger deployments than yours.

    But yes, Java is a disease.

  • The thing is, consumers didn’t push Nvidia’s stock sky high, AI did. Microsoft isn’t pushing anything sane to consumers, Microsoft is pushing AI. AMD, Intel, Nvidia and Qualcomm are all pushing AI to consumers. Additionally, on the graphics side of things, AMD is pushing APUs to consumers. They are all pushing things that require higher memory bandwidth.

    Consumers will get ”trickle-down silicon”, like it or not. Out-of-package memory will die. Maybe not with your next gaming rig, but maybe the one after that.

  • Wrote a longer reply to someone else, but briefly, yes, you are correct. Kinda.

    Caches won’t help with bandwidth-bound compute (read: ”AI”) if the streamed dataset is significantly larger than the cache. A cache will only speed up repeated access to a limited set of data.
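
    A toy back-of-envelope, all numbers made up, just to show the shape of it:

    ```python
    # Toy numbers, all assumed: repeatedly streaming a dataset (say, model
    # weights read once per token) that dwarfs the last-level cache.
    cache_gb = 1       # last-level cache (assumed)
    dataset_gb = 80    # streamed working set (assumed)
    cache_bw = 2000    # GB/s out of cache (assumed)
    dram_bw = 100      # GB/s out of DRAM (assumed)

    # At best, the slice that stays resident in cache is served from it;
    # everything else comes from DRAM. Average by time, not by bandwidth.
    hit_rate = min(1.0, cache_gb / dataset_gb)
    time_per_gb = hit_rate / cache_bw + (1 - hit_rate) / dram_bw
    print(f"hit rate ~{hit_rate:.1%}, effective ~{1 / time_per_gb:.0f} GB/s")
    # -> hit rate ~1.2%, effective ~101 GB/s: basically DRAM speed.
    ```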

  • Yeah, the cache hierarchy is behaving kinda wonky lately. Many AI workloads (and that’s what’s driving development lately) are constrained by bandwidth, and cache will only help you with a part of that. Cache will help with repeated access, not as much with streaming access to datasets much larger than the cache (i.e. many current AI models).

    Intel already tried selling CPUs with both on-package HBM and slotted DDR RAM. No one wanted it, as the performance gains of the expensive HBM evaporated completely as soon as you touched memory out-of-package. (Assuming workloads bound by memory bandwidth, which currently dominate the compute market.)

    To get good performance out of that, you may need to explicitly code the memory transfers to enable prefetch (preferably asynchronous) from the slower memory into the faster, à la classic GPU programming. YMMV.
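
    For flavour, the pattern looks something like this - plain Python with made-up fetch/process stand-ins, not any particular API; on a GPU you'd use async copies into HBM or shared memory instead of a thread:

    ```python
    # Double buffering: while chunk N is processed in fast memory, chunk N+1
    # is already being copied in from the slow tier on a background worker.
    from concurrent.futures import ThreadPoolExecutor

    def fetch(i):
        # stand-in for a copy from slow memory (DDR/CXL/disk) into fast memory
        return [i] * 1_000_000

    def process(chunk):
        # stand-in for the bandwidth-bound compute over the chunk
        return sum(chunk)

    def stream(num_chunks):
        total = 0
        with ThreadPoolExecutor(max_workers=1) as pool:
            pending = pool.submit(fetch, 0)                 # prefetch first chunk
            for i in range(num_chunks):
                chunk = pending.result()                    # wait for the copy
                if i + 1 < num_chunks:
                    pending = pool.submit(fetch, i + 1)     # overlap next copy
                total += process(chunk)
        return total

    print(stream(4))
    ```

    In real code the fetch would be an async DMA or IO call that genuinely runs concurrently; the point is just that the overlap has to be written explicitly, it doesn't fall out of the cache hierarchy for free.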