dynomight
Posts: 63 · Comments: 33 · Joined: 1 yr. ago

Posts (all in the dynomight internet forum community @lemmy.world):

  • New colors without shooting lasers into your eyes

  • Should the Federal Government Sell Land?

  • My 9-week unprocessed food self-experiment

  • Links for July

  • Do blue-blocking glasses improve sleep?

  • The Good Sides Of Nepotism

  • Response to Dynomight on Scribble-based Forecasting

  • Scribble-based forecasting and AI 2027

  • "One of the options is a burrito. But there's only one. You can put in a bid, and 1/3 of the way into the flight the person who bids the most gets it."

  • "The Yoruba people have the highest rate of twinning in the world, possibly because of high consumption of a specific type of yam"

  • I think this is a fair argument. Current AIs are quite bad at "knowing whether they know". I think it's likely that we can and will solve this problem, but I don't have any particularly compelling reason to believe that, and I agree that my argument fails if it never gets solved.

  • Post: The AI safety problem is wanting

  • Post: Thoughts on the AI 2027 discourse

  • Post: A deep critique of AI 2027’s bad timeline models

  • I'm sure many people feel the same way. But wouldn't that just make that observation even stronger—people care about animal welfare so much that they'd like to go even further than in-ovo testing?

  • Post: Moral puzzles: Man vs. machine

  • Agree with your first point. For the second point, I felt like I had to add some artifice, because otherwise the morally correct choice in almost all situations would obviously be "ask humanity and let it choose for itself"! Which is correct, but not very interesting.

    (In any case, I'm not actually that interested in these particular moral puzzles, I have other purposes in asking...)

  • Post: Please take my weird moral puzzles quiz

  • Post: The Fordow Paradox: Where do Iran and Israel go from here?

  • Post: StretchText

  • Post: The magic of through running

  • Post: Futarchy’s fundamental flaw

  • Post: In Which I Defend Fruit's Honor

  • Ah, I see, very nice. I wonder if it might make sense to declare the dimensions that are supposed to match once and for all, when you wrap the function?

    E.g. perhaps you could write:

        @new_wrap('m, n, m n->')
        def my_op(x, y, a):
            return y @ jnp.linalg.solve(a, x)

    to declare the matching dimensions of the wrapped function, and then call it with something like:

        Z = my_op('i [:], j [:], i j [: :]->i j', X, Y, A)

    It's a small thing, but it seems like the matching declaration should be done "once and for all"?

    (On the other hand, I guess there might be cases where the way things match depends on the arguments...)

    Edit: Or perhaps, if you declare the matching shapes when you wrap the function, you wouldn't actually need to use brackets at all, and could just call it as:

        Z = my_op('i :, j :, i j : :->i j', X, Y, A)

    ?
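
    For concreteness, here's a rough sketch of how the bracket-free variant might sit on top of einx.vmap. (To be clear: new_wrap and the ':' placeholder syntax are my inventions from above, not existing einx API.)

        import einx
        from jax import numpy as jnp

        def new_wrap(inner_pattern):
            # Hypothetical decorator: record which axes of each argument
            # must match (e.g. 'm, n, m n->') once, at wrap time.
            inner = [s.split() for s in inner_pattern.removesuffix('->').split(',')]

            def decorator(fun):
                def wrapped(outer_pattern, *args):
                    lhs, rhs = outer_pattern.split('->')
                    specs = []
                    for outer, names in zip(lhs.split(','), inner):
                        it = iter(names)
                        tokens, group = [], []
                        for ax in outer.split():
                            if ax == ':':
                                group.append(next(it))  # fill ':' with a wrap-time name
                            else:
                                if group:
                                    tokens.append('[' + ' '.join(group) + ']')
                                    group = []
                                tokens.append(ax)
                        if group:
                            tokens.append('[' + ' '.join(group) + ']')
                        specs.append(' '.join(tokens))
                    # e.g. 'i :, j :, i j : :->i j' expands to
                    #      'i [m], j [n], i j [m n]->i j'
                    return einx.vmap(', '.join(specs) + '->' + rhs, *args, op=fun)
                return wrapped
            return decorator

        @new_wrap('m, n, m n->')
        def my_op(x, y, a):
            return y @ jnp.linalg.solve(a, x)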

  • OK, I gave it a shot on the initial example in my post:

        import einx
        import numpy as onp
        from jax import numpy as jnp

        X = jnp.array(onp.random.randn(20, 5))
        Y = jnp.array(onp.random.randn(30, 5))
        A = jnp.array(onp.random.randn(20, 30, 5, 5))

        def my_op(x, y, a):
            # Receives x: (5,), y: (5,), a: (5, 5); returns a scalar.
            print(x.shape)
            return y @ jnp.linalg.solve(a, x)

        # Loop over i and j, applying my_op to the bracketed axes.
        # Z has shape (20, 30).
        Z = einx.vmap("i [m], j [n], i j [m n]->i j", X, Y, A, op=my_op)

    Aaaaand, it seemed to work the first time! Well done!

    I am a little confused, though, because if I use "i [a], j [b], i j [c d]->i j" it still seems to work, so maybe I don't actually 100% understand that bracket notation after all...

    Two more thoughts:

    1. I added a link.
    2. You gotta add def wrap(fun): return partial(einx.vmap, op=fun) for easy wrapping (see the sketch below). :)
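
    That is, something like this, continuing from the snippet above (wrap is a hypothetical name, not existing einx API):

        from functools import partial

        import einx

        def wrap(fun):
            # Bind op once, so the wrapped function is called with just a
            # pattern and its arguments.
            return partial(einx.vmap, op=fun)

        my_op_wrapped = wrap(my_op)
        Z = my_op_wrapped("i [m], j [n], i j [m n]->i j", X, Y, A)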
  • Hey, thanks for pointing this out! I quite like the bracket notation for indicating axes that operations should be applied "to" vs. "over".

    One question I have—is it possible for me as a user to define my own function and then apply it with einx-type notation?

  • Thanks. The one problem with that is that you have to use dumpy.wrap whenever you create a function that uses loops and then want to call it inside another loop. But I don't see any way around that.

  • Well, Einstein summation is good, but it only does multiplication and sums. (Or, more generally, one scalar operation and one scalar reduction.) I want a notation that works for ANY type of operation, including non-scalar ones, and that's what DumPy does. So I'd argue it goes further than Einstein summation.
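
    To make that concrete, here's a small sketch using plain jax.vmap rather than DumPy's notation:

        import jax
        import numpy as onp
        from jax import numpy as jnp

        A = jnp.array(onp.random.randn(7, 5, 5))
        b = jnp.array(onp.random.randn(7, 5))

        # A batched matrix-vector product is just multiply-then-sum, so
        # Einstein summation can express it:
        mv = jnp.einsum('ijk,ik->ij', A, b)

        # A batched linear solve is NOT a scalar op plus a reduction, so
        # einsum can't express it; it needs vmap/loop-style notation:
        x = jax.vmap(jnp.linalg.solve)(A, b)  # shape (7, 5)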

  • At one point, I actually had some (LLM-generated) boxes where you could click to switch between the different implementations of the same problem. But in the end I didn't like how it looked, so I switched to simple expandy-boxes. Design is hard...

    There's no magical significance to the assert x.ndim==1 check. I think I just wanted to demonstrate that the softmax code was "simple" and didn't have to think about high dimensions. I'll just remove it, thanks.
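
    (For reference, the softmax in question was presumably something like this sketch:)

        from jax import numpy as jnp

        def softmax(x):
            assert x.ndim == 1  # only the one-dimensional case
            e = jnp.exp(x - jnp.max(x))  # shift by the max for numerical stability
            return e / jnp.sum(e)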

  • Yeah, I totally agree with this point! DNA is definitely not sufficient to build an organism. Originally, I thought there was a large (albeit hard-to-quantify) amount of information embodied in the cells, though there's been some debate about how large that really is. For example, if I provided a single photograph of an adult human and—I don't know—the typical fractions of different atoms in a human body, could a sufficiently intelligent alien race reverse-engineer how to make a zygote?

    In any case, my (annoying) answer to this challenge is to retreat: I don't technically have to solve this problem, because I'm not trying to estimate the amount of information in a cell, just the information in DNA.

  • Yeah, I tried to draw the line at "trading money", as opposed to a general examination of libertarian principles. But I agree that for euthanasia, once you start considering higher-order effects, it's not clear that it's net positive for society. For example, if I'm certain I never want euthanasia, then legalizing it does seem to hurt me. Because maybe someday I'm old and disabled and my children have to go to enormous effort to take care of me. Even if they'd never consider the idea of euthanasia, the mere possibility of it might make me feel like more of a burden to them and make me feel guilty for not choosing it.

    Of course, there are obviously downsides to making it illegal, too! I don't really have a strong view on which is net positive. Seems very hard.

  • I don't think sexism is a very useful concept here. After all, you could equally well argue that it's sexist to forbid surrogacy, since that removes autonomy.

    Personally, I'm squishy enough that I'm willing to be convinced by empirical data. Like, if there were data showing that a huge percentage of surrogate mothers regret agreeing to it, then that would matter a lot to me, though I'd still probably lean towards education / screening / etc. before jumping all the way to making it illegal.

    > There’s a reason that voluntary slavery is illegal: Desperate people would do it (and have historically done it), and that didn’t make it right.

    I think this is the point I was trying to make at the end of the post. If someone does surrogacy (or donates a kidney) out of desperation, that seems gross. Whereas if they're OK financially and decide to do it for some "extra money" (whatever that means), then that seems less gross.

  • My instinct is that $20 per A wouldn't be enough to move the needle, and might even be net-harmful once you consider intrinsic motivation. But how about $500 per A? (Or $1,000 for straight As.) Still might be cheaper than tutoring?