MDMA makes octopuses cuddle

https://www.npr.org/sections/health-shots/2018/09/20/648788149/octopuses-get-strangely-cuddly-on-the-mood-drug-ecstasy

Eric Edsinger, Gül Dölen. A Conserved Role for Serotonergic Neurotransmission in Mediating Social Behavior in Octopus. Current Biology, September 20, 2018.


Neural CGI

“In this work, published in Science (Open Access version), we introduce the Generative Query Network (GQN), a framework within which machines learn to perceive their surroundings by training only on data obtained by themselves as they move around scenes…The GQN model is composed of two parts: a representation network and a generation network. The representation network takes the agent’s observations as its input and produces a representation (a vector) which describes the underlying scene. The generation network then predicts (‘imagines’) the scene from a previously unobserved viewpoint…The representation network does not know which viewpoints the generation network will be asked to predict, so it must find an efficient way of describing the true layout of the scene as accurately as possible. It does this by capturing the most important elements, such as object positions, colours and the room layout, in a concise distributed representation…The GQN’s generation network can ‘imagine’ previously unobserved scenes from new viewpoints with remarkable precision. When given a scene representation and new camera viewpoints, it generates sharp images without any prior specification of the laws of perspective, occlusion, or lighting. The generation network is therefore an approximate renderer that is learned from data”

https://deepmind.com/blog/neural-scene-representation-and-rendering/

Neural scene representation and rendering by S. M. Ali Eslami*,†, Danilo Jimenez Rezende†, Frederic Besse, Fabio Viola, Ari S. Morcos, Marta Garnelo, Avraham Ruderman, Andrei A. Rusu, Ivo Danihelka, Karol Gregor, David P. Reichert, Lars Buesing, Theophane Weber, Oriol Vinyals, Dan Rosenbaum, Neil Rabinowitz, Helen King, Chloe Hillier, Matt Botvinick, Daan Wierstra, Koray Kavukcuoglu, Demis Hassabis.

https://deepmind.com/documents/211/Neural_Scene_Representation_and_Rendering_preprint.pdf
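To make the two-network structure described in the quote concrete, here is a heavily simplified Python (PyTorch) sketch of the idea: a representation network maps (image, viewpoint) pairs to vectors that are summed, order-invariantly, into a scene representation, and a generation network predicts the image seen from a query viewpoint. The real GQN generator is a recurrent latent-variable model trained with a variational objective; the feed-forward decoder and all layer sizes below are illustrative assumptions, not DeepMind's architecture.

```python
# Simplified GQN-style sketch: representation network + generation network.
# All sizes (64x64 images, 5-dim viewpoints, 256-dim scene representation) are assumptions.
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    def __init__(self, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        # viewpoint = (x, y, z, yaw, pitch), appended after the conv features
        self.fc = nn.Linear(64 * 8 * 8 + 5, repr_dim)

    def forward(self, image, viewpoint):
        h = self.conv(image).flatten(1)
        return torch.relu(self.fc(torch.cat([h, viewpoint], dim=1)))

class GenerationNet(nn.Module):
    def __init__(self, repr_dim=256):
        super().__init__()
        self.fc = nn.Linear(repr_dim + 5, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid()  # 32 -> 64
        )

    def forward(self, scene_repr, query_viewpoint):
        h = self.fc(torch.cat([scene_repr, query_viewpoint], dim=1))
        return self.deconv(h.view(-1, 64, 8, 8))

# Usage: aggregate a few context observations of one scene, then "imagine" a new view.
rep_net, gen_net = RepresentationNet(), GenerationNet()
context_images = torch.rand(3, 3, 64, 64)   # 3 observations of the scene
context_views = torch.rand(3, 5)            # their camera viewpoints
r = rep_net(context_images, context_views).sum(dim=0, keepdim=True)  # order-invariant sum
query_view = torch.rand(1, 5)
predicted_image = gen_net(r, query_view)    # (1, 3, 64, 64)
```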

Stochastic computing

One method of computing is to require that all numbers be real values between 0 and 1, and then, instead of encoding these numbers into bit streams using binary, represent each one with a long stream of random bits, each of which is 1 with probability x, where x is the number being encoded. One advantage is that computations which require many logic gates in binary can be implemented much more simply (assuming the input bit streams are uncorrelated); e.g., x*y can be implemented by ANDing the two bit streams together, and (x+y)/2 can be implemented by evenly sampling both inputs (select about half the bits from x and the other half from y, and concatenate the selected bits, in any order, to produce the output). Another advantage is that this method is naturally tolerant of noise.
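A minimal sketch of this, assuming the unipolar encoding just described (a value x in [0, 1] is a stream whose bits are independently 1 with probability x): multiplication is a bitwise AND of two independent streams, and the scaled sum (x+y)/2 is a 2-to-1 multiplexer driven by a random 50/50 select line.

```python
# Stochastic-computing sketch: unipolar encoding, AND for multiplication,
# random multiplexing for the scaled sum. Stream length N is an illustrative choice.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # longer streams give more precise estimates

def encode(x, n=N):
    """Unipolar stochastic encoding: each bit is 1 with probability x."""
    return rng.random(n) < x

def decode(stream):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return stream.mean()

x_stream, y_stream = encode(0.3), encode(0.6)

product = x_stream & y_stream                      # AND gate computes x * y
select = rng.random(N) < 0.5                       # random select line for a 2-to-1 MUX
scaled_sum = np.where(select, x_stream, y_stream)  # computes (x + y) / 2

print(decode(product))     # ~0.18
print(decode(scaled_sum))  # ~0.45
```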

If the circuit is tolerant to noise, power can be saved because circuit elements can be designed to consume less power at the cost of producing noisy results.

A disadvantage is that the number of bits needed to represent each number scales exponentially with the required precision, as opposed to radix encodings such as binary, which scale linearly (e.g., to distinguish one of 256 values you need 8 bits in binary but at least 256 bits using stochastic computing).
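A quick numerical illustration of that cost, continuing the unipolar encoding from the sketch above: the standard error of the decoded value only shrinks like 1/sqrt(stream length), so each additional bit of precision roughly quadruples the number of stream bits needed, and even 256 bits resolves a value far more coarsely than 1/256.

```python
# Estimation error vs. stream length for a value needing 8-bit (1/256) resolution.
import numpy as np

rng = np.random.default_rng(1)
x = 100 / 256

for length in (256, 4096, 65_536):
    estimates = (rng.random((200, length)) < x).mean(axis=1)
    print(length, estimates.std())  # std falls roughly as 1 / sqrt(length)
```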

Obviously, this sort of thing is a candidate neural code.


Memory improvement via stimulation of temporal cortex

Stimulating the temporal cortex with electrodes at memory encoding time boosted recall by 15% in humans.

In the first phase, researchers recorded brain activity while subjects memorized nouns, and trained a model to predict, from the activity at encoding time, whether each word would later be remembered. In the second phase, they ran the model while subjects were memorizing words, and if the model predicted that a word was more than 50% likely to be forgotten, they stimulated the brain for 0.5 seconds through a single pair of adjacent electrodes in the lateral temporal cortex, at amplitudes ranging from 0.5 mA to 1.5 mA (for electrodes deep in the cortex) or up to 3.5 mA (for the cortical surface); the amplitude used was the maximum within this range at which stimulation didn't appear to cause afterdischarges. Stimulation in this fashion improved recall by 15%. After stimulation, the classifier was more likely to say that the subject would remember the word, which might suggest that the stimulation improved recall by sometimes nudging the brain into a state that the classifier recognized as good for memory encoding.
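A hedged sketch of that closed-loop logic in Python, with placeholder data: train a classifier to predict recall from encoding-time neural features, then during encoding stimulate whenever the predicted recall probability drops below 0.5. The feature vectors, the logistic-regression classifier, and trigger_stimulation() are illustrative stand-ins, not the authors' actual recording or stimulation pipeline.

```python
# Closed-loop memory stimulation, schematically: record-only phase to train a
# recall classifier, then a stimulation phase gated on the classifier's prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Phase 1: record-only sessions. Each row stands in for neural features recorded
# while one word was being encoded; each label is whether that word was later recalled.
encoding_features = rng.normal(size=(500, 40))   # placeholder neural features
recalled = rng.integers(0, 2, size=500)          # placeholder recall outcomes
classifier = LogisticRegression(max_iter=1000).fit(encoding_features, recalled)

def trigger_stimulation(duration_s=0.5):
    """Stand-in for delivering 0.5 s of stimulation through one electrode pair."""
    print(f"stimulating for {duration_s} s")

# Phase 2: closed-loop sessions. Score the current encoding-time features and
# stimulate only when the model predicts the word is likely to be forgotten.
def closed_loop_step(current_features):
    p_recall = classifier.predict_proba(current_features.reshape(1, -1))[0, 1]
    if p_recall < 0.5:
        trigger_stimulation()
    return p_recall

closed_loop_step(rng.normal(size=40))
```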

As an aside, IMO one should keep in mind that this doesn't necessarily mean it would be a good thing to do every time you are learning something. The way I like to think about this experiment is to imagine that you have some big machine whose inner workings you don't understand. The machine sometimes makes humming noises and other times it makes sputtering noises. You notice that the sputtering noise correlates somewhat with the machine not doing its job so well. So, whenever you hear a sputtering noise, you kick it really hard. Sometimes the kick makes it hum again. You record data and find that kicking it when it sputters improves output by 15%. That's very interesting, but does it mean it's a good idea to kick the machine whenever it sputters? Not necessarily: maybe kicking the machine damages it a little (or has some small probability of damaging it), or maybe the sputtering was something (such as a self-cleaning cycle) that the machine needs to do for its long-term health even at the cost of short-term performance. In other words, there is a clear gain to kicking the machine when it sputters, but it is unknown whether there is also a subtle cost.

 

Youssef Ezzyat, Paul A. Wanda, Deborah F. Levy, Allison Kadel, Ada Aka, Isaac Pedisich, Michael R. Sperling, Ashwini D. Sharan, Bradley C. Lega, Alexis Burks, Robert E. Gross, Cory S. Inman, Barbara C. Jobst, Mark A. Gorenstein, Kathryn A. Davis, Gregory A. Worrell, Michal T. Kucewicz, Joel M. Stein, Richard Gorniak, Sandhitsu R. Das, Daniel S. Rizzuto & Michael J. Kahana. Closed-loop stimulation of temporal cortex rescues functional networks and improves memory. Nature Communications, 2018.

 

https://www.wired.com/story/ml-brain-boost

Gene ‘Arc’ transports mRNA across cells and is required for some forms of plasticity

Also it appears to have evolved from viruses.

Elissa D. Pastuzyn, Cameron E. Day, Rachel B. Kearns, Madeleine Kyrke-Smith, Andrew V. Taibi, John McCormick, Nathan Yoder, David M. Belnap, Simon Erlendsson, Dustin R. Morado, John A.G. Briggs, Cédric Feschotte, Jason D. Shepherd. The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer. Cell, 2018.

James Ashley, Benjamin Cordy, Diandra Luci, Lee G. Fradkin, Vivian Budnik, Travis Thomson. Retrovirus-like Gag Protein Arc1 Binds RNA and Traffics across Synaptic Boutons. Cell, 2018.

 

Neural Engineering System Design programme (DARPA funding award announced)

https://www.darpa.mil/news-events/2017-07-10

 

“The NESD program looks ahead to a future in which advanced neural devices offer improved fidelity, resolution, and precision sensory interface for therapeutic applications,” said Phillip Alvelda, the founding NESD Program Manager. “By increasing the capacity of advanced neural interfaces to engage more than one million neurons in parallel…”

 

  • A Brown University team led by Dr. Arto Nurmikko will seek to decode neural processing of speech, focusing on the tone and vocalization aspects of auditory perception. The team’s proposed interface would be composed of networks of up to 100,000 untethered, submillimeter-sized “neurograin” sensors implanted onto or into the cerebral cortex. A separate RF unit worn or implanted as a flexible electronic patch would passively power the neurograins and serve as the hub for relaying data to and from an external command center that transcodes and processes neural and digital signals.
  • A Columbia University team led by Dr. Ken Shepard will study vision and aims to develop a non-penetrating bioelectric interface to the visual cortex. The team envisions layering over the cortex a single, flexible complementary metal-oxide semiconductor (CMOS) integrated circuit containing an integrated electrode array. A relay station transceiver worn on the head would wirelessly power and communicate with the implanted device.
  • A Fondation Voir et Entendre team led by Drs. Jose-Alain Sahel and Serge Picaud will study vision. The team aims to apply techniques from the field of optogenetics to enable communication between neurons in the visual cortex and a camera-based, high-definition artificial retina worn over the eyes, facilitated by a system of implanted electronics and micro-LED optical technology.
  • A John B. Pierce Laboratory team led by Dr. Vincent Pieribone will study vision. The team will pursue an interface system in which modified neurons capable of bioluminescence and responsive to optogenetic stimulation communicate with an all-optical prosthesis for the visual cortex.
  • A Paradromics, Inc., team led by Dr. Matthew Angle aims to create a high-data-rate cortical interface using large arrays of penetrating microwire electrodes for high-resolution recording and stimulation of neurons. As part of the NESD program, the team will seek to build an implantable device to support speech restoration. Paradromics’ microwire array technology exploits the reliability of traditional wire electrodes, but by bonding these wires to specialized CMOS electronics the team seeks to overcome the scalability and bandwidth limitations of previous approaches using wire electrodes.
  • A University of California, Berkeley, team led by Dr. Ehud Isacoff aims to develop a novel “light field” holographic microscope that can detect and modulate the activity of up to a million neurons in the cerebral cortex. The team will attempt to create quantitative encoding models to predict the responses of neurons to external visual and tactile stimuli, and then apply those predictions to structure photo-stimulation patterns that elicit sensory percepts in the visual or somatosensory cortices, where the device could replace lost vision or serve as a brain-machine interface for control of an artificial limb.

See https://www.darpa.mil/attachments/FactsheetNESDKickoffFinal.pdf for more details.