Connectomics and the future of Neuroscience


In my previous article, I talked about fMRI (functional magnetic resonance imaging) and complained about its main pitfall: it cannot record the activity of neurons. Instead, the images you see of particular brain regions lighting up reflect blood flow in certain areas of the brain. In fact, the signal reflects the relative presence of oxygenated versus deoxygenated blood; active regions require more oxygenated blood, so scientists can infer neuronal activity from this measure.

So, to be clear, fMRIs don’t record neuronal activity but blood flow over large voxels of about 1 mm³, which can contain an average…


Shedding Some Light On The Brain Scans Crisis


I get it, brain scans are cool. For the past 25 years, brain scans have been a very attractive research topic and have managed to attract vast amounts of funding. I can understand that titles like “Vegetative state patients can respond to questions” or “This is your brain on writing” have the potential to draw people’s attention to neuroscience for the first time, and that’s a good thing.

However, things got messy in 2009, when researchers put a dead salmon into an fMRI scanner, just to see what would happen. To their surprise, parts of the brain lit…


Never Waste Time Tuning Hyperparameters Again

Source: https://en.wikipedia.org/wiki/The_Persistence_of_Memory

I became a data scientist because I like finding solutions to complex problems; the creative part of the job and the insights I gain from the data are what I enjoy the most. The boring stuff, like cleaning data, preprocessing, and tuning hyperparameters, brings me little joy, and that’s why I try to automate these tasks as much as possible.

If you also like automating the boring stuff, you will love the library I am about to introduce in this article.

As I mentioned in a previous article, the current state of the art in machine learning is dominated by…


Causal Forecasting with CausalNex


Causal networks are having a huge impact on the world of Artificial Intelligence, and their importance is only going to grow. Since Judea Pearl and his colleagues created a mathematical language for causality termed “do-calculus”, we have had a way to explicitly calculate causal effects, and therefore we can answer “what-if” questions; in other words, we can make predictions about interventions.
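As a hedged illustration of what do-calculus makes possible (the adjustment set Z below is an assumption for the sake of the example, not something from the article): when Z satisfies the backdoor criterion, the interventional distribution reduces to purely observational quantities,

```latex
P\bigl(Y \mid \mathrm{do}(X = x)\bigr) = \sum_{z} P\bigl(Y \mid X = x,\, Z = z\bigr)\, P\bigl(Z = z\bigr)
```

which is exactly the kind of “what-if” quantity in question: the effect of forcing X to the value x, computed without actually intervening.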

I’ve picked boxing for this article for two reasons. First, it’s my favorite sport; but most importantly, it is a very explicit example of cause and effect, isn’t it?

I believe even a 5-year-old kid knows pretty well that it’s…


Hands-on Tutorials

Build and Deploy Your Own AI Platform with Python and Django

Source: https://en.wikipedia.org/wiki/Artificial_intelligence

The rise of Artificial Intelligence has made machine learning platforms (ML as a service) popular. If your company is not building a framework and deploying its own models, chances are it is using one of these platforms, such as H2O or KNIME.

Even many data scientists who want to save some time use these tools to quickly prototype and test their models, deciding later whether they are worth further work or not. …


This Is Where We Are in 2021 on Our Roadmap to the Connectome

Credit: Alex Norton for EyeWire from the Seung Lab at MIT

During the last decade, we have seen an explosion of efforts to map the human connectome, which translates into drawing 10¹⁵ connections between 100 billion neurons in the brain. The advances made in brain imaging, together with an astonishing increase in sheer computing power, have paved the way for scientists to start seeing such an intimidating task as feasible.

In this article, I will leave aside the questions about how useful the brain map will be (would you understand how a car works just by looking at the engine?) or what kind of connectome we want to map (every…


Deep Learning for Climate Time Series Forecasting

Source: https://unsplash.com/photos/JZRlnfsdcj0

In my previous article about climate change, I complained about the relative scarcity of AI research dedicated to such an important topic. In this post, I want to dig deeper into the lack of deep learning efforts for climate forecasting and contribute to the topic by showing a nice Keras project.

The truth is that most climate models are currently built using either multiple regression or time-series forecasting techniques such as ARIMA or ARMA. This is mainly because neural networks have focused on unstructured perceptual data (like images, video, text, or speech), which is difficult to deal with…
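To make the contrast concrete, here is a minimal sketch of the autoregressive baseline those classical techniques build on: an AR(1) model fitted by least squares on a synthetic series (the data and the model order are assumptions for illustration, not a real climate model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a climate series: AR(1) with true coefficient 0.8.
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.1)

# Fit the AR(1) coefficient by ordinary least squares: y[t] ≈ phi * y[t-1].
phi = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

# One-step-ahead forecast from the last observation.
forecast = phi * y[-1]
print(phi)  # should land close to the true 0.8
```

A full ARIMA adds differencing and a moving-average term on top of this idea, but the least-squares fit above is the core of the “AR” part.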


All you need to know about feature engineering

Source: https://unsplash.com/photos/Kp9z6zcUfGw

When building a machine learning model, most likely you will not use all the variables available in your training dataset. In fact, training datasets with hundreds or thousands of features are not uncommon, creating a problem for the beginner data scientist, who might struggle to decide which variables to include in her model.

There are two main groups of techniques that we can use in dimensionality reduction: feature selection and feature extraction.

For example, a 256 × 256-pixel color image can be transformed into 196,608 features. Furthermore, because each of these pixels can take one of 256 possible values…
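The arithmetic behind that figure is easy to check in a couple of lines (a minimal sketch; a random array stands in here for a real photo):

```python
import numpy as np

# A 256 x 256 color image has 3 channels (R, G, B).
image = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)

# Flattening turns every pixel channel into one feature.
features = image.reshape(-1)
print(features.shape[0])  # 256 * 256 * 3 = 196608
```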


Source: https://unsplash.com/photos/1qIsv86S79E

Climate change is one of those critical issues that don’t receive enough attention from the AI community. The main reason why machine learning developers and data scientists are building so few climate models is that climate change is painfully hard to forecast in the long run.

While weather forecasts are increasing in accuracy every year, climate predictions and their socioeconomic impact are much harder to estimate; this is due to the huge number of human variables that play a role in climate change.

However, climatic models have experienced a boost in recent years thanks to Integrated Assessment Models.

Integrated assessment…


Improve Your Predictions with XGBoost

Source: https://unsplash.com/photos/twukN12EN7c

In this article, I want to share my approach to solving a house-price forecasting competition from Kaggle. This is a classic multivariable regression problem that data scientists face very often in their jobs, which is why I think it is worth diving deeper into.

First of all, we load our data into a Pandas DataFrame and then perform data cleaning and data preprocessing, consisting of:

  • Replacing categorical variables with numbers using Pandas’ “replace” function
  • Filling empty “NaN” values with Pandas’ “fillna(0)” function
  • Deleting columns of low-correlation variables with Pandas’ “drop()” function, in this particular…
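Those three steps can be sketched with pandas as follows (the column names, such as "neighborhood" and "pool_quality", are hypothetical placeholders, not the competition’s actual fields):

```python
import pandas as pd

# Toy stand-in for the Kaggle training data; column names are hypothetical.
df = pd.DataFrame({
    "neighborhood": ["A", "B", "A", None],
    "lot_area": [8450.0, None, 11250.0, 9550.0],
    "pool_quality": [None, "Gd", None, None],
    "price": [208500, 181500, 223500, 140000],
})

# 1. Replace categorical labels with numbers.
df["neighborhood"] = df["neighborhood"].replace({"A": 0, "B": 1})

# 2. Fill missing values with zero.
df = df.fillna(0)

# 3. Drop a column assumed to correlate poorly with the target.
df = df.drop(columns=["pool_quality"])

print(df.columns.tolist())  # ['neighborhood', 'lot_area', 'price']
```

In a real notebook you would pick the dropped columns from a correlation matrix (e.g. `df.corr()`) rather than hard-coding them.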

Diego Salinas

Follow me on a journey through AI and Neuroscience on https://medium.com/artificial-intelligence-and-cognition
