  1. #1
    Mandeemoo007

    Project Deep Dream

    Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don’t. So let’s take a look at some simple techniques for peeking inside these networks.

    We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.
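
    If the "stacked layers" talk sounds abstract, here is roughly what such a stack looks like in code. This is just a toy sketch in PyTorch with made-up layer sizes and random stand-in data, not the actual network from the blog post:

    Code:
import torch
import torch.nn as nn

# Toy stack of layers: pixels go in at the top, class scores come out at the bottom.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # "input" layer sees the raw pixels
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # intermediate layer talks to the next one
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1000),                          # "output" layer: one score per class
)

# Training = show it labelled examples and gradually adjust the parameters
# until it gives the classifications we want (real training uses millions of images).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)        # stand-in batch of images
labels = torch.randint(0, 1000, (8,))       # stand-in labels
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()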

    One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer may look for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations—these neurons activate in response to very complex things such as entire buildings or trees.
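
    One way you could actually peek at what each layer puts out is with forward hooks. This is a rough sketch using the pretrained GoogLeNet from torchvision (the same family of network the Deep Dream post is about); the layer names are just whatever torchvision calls them:

    Code:
import torch
from torchvision import models

# Load a pretrained classifier (downloads ImageNet weights on first run).
net = models.googlenet(weights="DEFAULT").eval()

# Record what each top-level layer outputs for one image.
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in net.named_children():
    module.register_forward_hook(save_activation(name))

with torch.no_grad():
    net(torch.randn(1, 3, 224, 224))   # stand-in image

for name, out in activations.items():
    if isinstance(out, torch.Tensor):
        # Early layers keep big spatial maps of simple features; later layers
        # shrink spatially but represent higher-level features; the last one
        # is a single score per class.
        print(name, tuple(out.shape))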

    One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work in [1], [2], [3], [4]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
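
    In code, the "turn the network upside down" trick is basically gradient ascent on the pixels. Here is a very rough sketch of the idea; the class index (954, which is "banana" in ImageNet), the step size, and the occasional blur standing in for the natural-image prior are my own guesses, not the exact regularizers the researchers used:

    Code:
import torch
import torch.nn.functional as F
from torchvision import models

net = models.googlenet(weights="DEFAULT").eval()
for p in net.parameters():
    p.requires_grad_(False)             # we only want gradients w.r.t. the image

img = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from random noise
target_class = 954                                       # ImageNet index for "banana"

for step in range(200):
    score = net(img)[0, target_class]   # how banana-ish does the net think this is?
    score.backward()
    with torch.no_grad():
        # Nudge the pixels in the direction that raises the banana score.
        img += 0.5 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        # Crude stand-in for the "looks like a natural image" prior:
        # occasionally blur so neighbouring pixels stay correlated.
        if step % 5 == 0:
            img.copy_(F.avg_pool2d(img, 3, stride=1, padding=1))
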
    So basically you can make pictures look fucking trippy.
    The research write-up is here: http://googleresearch.blogspot.com/2...to-neural.html
    They also just posted how to do it yourself: http://googleresearch.blogspot.com/2...sualizing.html
    And there is a website that will do it for you if you are too dumb to figure it out yourself (me): http://dreamdeeply.com/
    I did it myself and this is what happened (rough script sketch after the pictures if you want to try it in code).

    Before: iZxfK.jpg
    After: iZxc1.jpg
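
    If you would rather script it than use dreamdeeply.com, here is a rough sketch of the basic Deep Dream loop pieced together from the posts linked above: instead of pushing toward one class, you amplify whatever a middle layer already "sees" in your photo. The file names and the choice of the inception4c layer are placeholders, and this skips the multi-scale tricks in Google's released notebook:

    Code:
import torch
from torchvision import models, transforms
from PIL import Image

net = models.googlenet(weights="DEFAULT").eval()
for p in net.parameters():
    p.requires_grad_(False)

# Keep the activations of one intermediate layer around after each forward pass.
captured = {}
net.inception4c.register_forward_hook(lambda m, i, o: captured.update(act=o))

# "before.jpg" is a placeholder for whatever photo you start from.
preprocess = transforms.Compose([transforms.Resize(512), transforms.ToTensor()])
img = preprocess(Image.open("before.jpg")).unsqueeze(0).requires_grad_(True)

for step in range(50):
    net(img)
    loss = captured["act"].norm()       # "whatever you see here, show me more of it"
    loss.backward()
    with torch.no_grad():
        img += 0.02 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)

transforms.ToPILImage()(img.detach().squeeze(0)).save("after.jpg")
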
    【=◈︿◈=】

  2. #2
    Mandeemoo007
    Here is a 9/11 edit someone did. Because Bush did it. http://dreamdeeply.com/result/Nc5BA5A5vXdHQRmJL2bUFhGr
    【=◈︿◈=】
