

What Do Neural Networks Really Learn? Exploring the Brain of an AI Model

2 comments
  • The first half of this video is entirely dumb, which is shocking, because the second half accurately describes the very issues the first half makes out to be "mysterious". They aren't mysterious at all.

    We can inspect model decisions AFTER they execute; they're just too fast to observe live. This is why constraints are put in place for reinforcement learning models in the first place: you want an expected outcome, just delivered fast.

    This video confuses two different worlds that operate completely differently from each other: computer vision models and generative models.

    We know exactly why vision models do what they do, because the task is predetermined and a specific result is expected. Training these models involves large, labeled sample sets that can be inspected, and the trained model produces outputs describing what happened during training. There are a jillion tools out there that let you step through such a model, see the before and after of an input, and adjust to your liking if the result is not correct (see the sketch after this comment). We wouldn't be able to build them otherwise.

    Generative models that are predictive operate differently. They attempt to guess a variation of the input after a few processing steps, and then more or less run on their own. This is not reinforcement learning, which is why it differs heavily from what this video describes.

    There's a massive difference between how the various kinds of neural networks operate, and this video confuses them in some spots while describing them accurately in others. It's all over the place.

    The base fact is that a model meant for vision does not have the same issues as one meant for language or other deep learning tasks.

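To make the step-through point above concrete, here is a minimal sketch of inspecting a vision model's intermediate activations with a forward hook. PyTorch and torchvision are assumed (the comment names no tools), and the choice of ResNet-18 and of the layer2 block is purely illustrative:

```python
# Minimal sketch: inspecting an intermediate activation of a vision model
# with a forward hook. Assumes PyTorch + torchvision; ResNet-18 and the
# "layer2" block are arbitrary examples. A real inspection would load
# pretrained weights instead of weights=None.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()

captured = {}

def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Register a hook on an intermediate block to see the "before and after".
model.layer2.register_forward_hook(save_activation("layer2"))

x = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed image batch
with torch.no_grad():
    logits = model(x)

print("layer2 activation shape:", captured["layer2"].shape)  # [1, 128, 28, 28]
print("top-1 class index:", logits.argmax(dim=1).item())
```
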
    • Yeah, the video doesn't make it super clear: it's not a generative model at all. Those weird "AI-looking" images are the result of taking a specific node and applying a filter to visualize what that node is looking for (a minimal sketch of that idea follows below).

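For the curious, a minimal sketch of that visualization idea, feature visualization by activation maximization: start from noise and run gradient ascent on the input so a chosen node responds as strongly as possible. PyTorch/torchvision, the layer ("layer3"), and the channel index are all assumptions here; a real run would use pretrained weights plus regularizers (jitter, blur, etc.) to get the cleaner images shown in the video.

```python
# Minimal sketch: feature visualization by activation maximization.
# Assumes PyTorch + torchvision; the layer and channel are arbitrary examples.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the input image gets optimized

captured = {}
model.layer3.register_forward_hook(lambda m, i, o: captured.update(act=o))

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    model(img)
    # Maximize the mean activation of one channel of the hooked layer.
    loss = -captured["act"][0, 7].mean()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        img.clamp_(0.0, 1.0)  # keep the image in a displayable range

print("optimized image range:", img.min().item(), img.max().item())
```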