How are Web Networks similar to the brain?


Web networks resemble the brain if we think of web pages as neurons: the interconnections have a complex structure, while the nodes produce time series of activations (visits to web pages and spike trains of neurons). A detailed and very engaging comparison is available on ExplainThatStuff.com. Here, we focus on the memory properties of the Web and show in what ways it is similar to human memory.

The Wikipedia Web network as a global memory

In my previous post, I showed how we can identify the collective interests of people over time using the Wikipedia Web network and its page-view statistics. We called these collective interests collective memories. We demonstrated that the structures we extracted from the Wikipedia Web network comprise pages related to particular events. If we look carefully at the structures of the learned graphs, we see that they have an associative nature. This insight led us to another interesting question: are these structures similar to artificial models of human memory?

To answer this question, we performed a few experiments. We took one of the most popular models of associative memory, the Hopfield network, and modeled the recall process. The Hopfield network is a recurrent artificial neural network that serves as an associative memory. This model has also been used to study human memory, so we decided to use it to test our assumption.
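To fix intuition, here is a minimal textbook sketch of a Hopfield network (not the paper's actual pipeline): weights are learned from ±1 patterns with the Hebbian rule, and recall repeatedly pushes a corrupted state back toward a stored pattern.

```python
import numpy as np

def hebbian_weights(patterns):
    """Learn Hopfield weights from rows of +/-1 patterns (Hebbian rule)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def recall(W, state, steps=10):
    """Synchronous recall: repeatedly update the state until it settles."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1       # break ties deterministically
    return state

# Store one toy pattern, then recall it from a corrupted copy.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = hebbian_weights(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]                # flip one "neuron"
print(recall(W, noisy))             # recovers the stored pattern
```

With a single stored pattern, one update step already flips the corrupted neuron back, which is exactly the associative-recall behavior the experiments below rely on.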

Method

Instead of learning the weights of the neural network in the conventional way, we took the weighted adjacency matrices of the graph structures that correspond to the detected collective memories. As you may remember from the previous post, we learned them using a modified Hebbian learning rule. Then, we applied these weight matrices to the time series of Wikipedia page views (for more details, read the preprint on arXiv).
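A rough sketch of this substitution, with a random matrix `A` standing in for one learned adjacency matrix and random `views` standing in for binarized page-view activations (both purely hypothetical stand-ins for the real data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: `A` plays the role of the weighted adjacency
# matrix of one collective-memory graph, and `views` plays the role of
# binarized page-view activations, one column per timestep.
A = rng.random((5, 5))
A = (A + A.T) / 2                   # symmetric, as for an undirected graph
np.fill_diagonal(A, 0)              # no self-loops
views = rng.choice([-1, 1], size=(5, 30))

# The only change from a textbook Hopfield network: the weight matrix is
# the graph itself rather than weights fitted to stored patterns.
recalled = np.sign(A @ views)
recalled[recalled == 0] = 1
print(recalled.shape)               # one recalled activation vector per timestep
```

The design choice here is the interesting part: the graph learned from co-activations already has the Hebbian form a Hopfield network expects, so it can be dropped in as a weight matrix directly.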

Experiments

In the first experiment, we used the collective-memory graphs that we learned for each month. The images below show the recall results: the model reinforced the activity levels for every month in our dataset. Red areas show the periods when the memory is inactive, while green areas correspond to the moments of memory activation.

[Recall results: monthly memory-activation timelines for October through April]

After that, we tested whether our model is able to reconstruct missing collective memories. As an example, we show the collective memory of the second wave of the Ferguson unrest, which occurred on 24 November 2014. We removed 20% of the activations in the cluster of the collective memory and applied the Hopfield recall model. The image below illustrates a recall from a partial pattern.

As you can see, the model's behavior in this case is quite interesting. It not only reconstructs the missing memories but also acts as a filter: it keeps only those parts of the signal that are relevant to the event. Indeed, this behavior is reminiscent of the way our own memory works. We memorize things better when we can associate them with something else; we focus our memory on the period of time when an event occurred and forget scattered, unassociated memories.

Check out the latest version of our paper on arXiv for the technical details of the experiments. The code is available on GitHub. If you want to reproduce the experiments but are new to Spark Scala projects, take a look at this tutorial.

We also realized that these memory properties can be used to recover missing or corrupted records reflecting the dynamics of social and Web networks. Take a look at another blog post for more details.

Wikipedia, Machine Learning, Research, Network Analysis, Collective Memory, Hopfield Network, Neural Network, Memory