TIdDLe Forum

... where Deep Learning happens in Toulouse


#1 Job offers / tips » Postdoc position Deep Learning for weather forecast generation » 2021-12-10 15:12:09

Rufin VanRullen
Replies: 0

Dear all,

Météo-France/CNRM (Toulouse, France) is opening a postdoc position on the generation of weather forecasts with deep learning techniques.

Feel free to forward this information to anyone who could be interested.

Best regards,
Laure Raynaud

This post-doctoral position on the probabilistic prediction of extreme weather events based on AI/physics synergy is part of the POESY project, funded by the French National Research Agency.
Application deadline: 15 January 2022
Duration of contract: 21 to 27 months, depending on experience
Expected starting date: 1 April 2022

Context
High-Impact Weather (HIW) events have devastating effects on society, causing human losses, damage to infrastructure and large economic impacts. Severe precipitation events, damaging thunderstorms and strong winds are among the most impactful events from a meteorological point of view, with severe indirect effects such as flooding, landslides and marine submersion. Being rare, HIW events lie in the tail of the climatological distribution of weather events. Although meteorological services such as Météo-France have made significant progress in weather prediction over the last decades, accurately predicting the occurrence, intensity, location and timing of HIW remains challenging.
Current operational weather forecasts rely on physically-based modelling approaches, and Numerical Weather Prediction (NWP) models are run daily to determine future atmospheric states and the risk of HIW. In particular, Ensemble Prediction Systems (EPSs) aim at sampling the probability distribution of future atmospheric states. They consist of running several NWP forecasts in order to account for the different sources of uncertainty. At Météo-France, the operational AROME-EPS, which runs 16 perturbed forecasts at a spatial resolution of 1.3 km, is used to anticipate the risk of HIW. However, properly capturing the associated uncertainty requires very-high-resolution (a few hundred metres), large (a few hundred members) ensembles. Such enhanced systems are currently unfeasible for operational NWP because of the associated computational cost.
In this context, the main objective of the POESY project is to explore the scientific feasibility and relevance of an innovative hybrid EPS design, combining standard physical modelling with computationally-efficient Artificial Intelligence (AI) techniques, in order to produce disruptive probabilistic forecasts for high-impact weather.

Objectives
The goal of this post-doctoral position is to improve the representation of forecast probability distributions by increasing the AROME-EPS sampling from O(10) to O(1000) forecasts thanks to complementary AI-generated forecasts. For that purpose, physically-constrained deep generative models such as GANs and Variational Autoencoders will be developed and evaluated. Besombes et al. (2021) provide a first example of GAN-based weather scenario generation. A crucial part of the work will be to adapt off-the-shelf learning architectures to the particularities of this geophysical problem. Specific attention will be paid to the following points: learning extremes, ensuring spatial, temporal and physical consistency in the generated forecasts, and avoiding mode collapse.
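The ensemble-augmentation goal can be illustrated with a deliberately crude stand-in: fitting a per-grid-point Gaussian to a small ensemble and resampling it to O(1000) synthetic members. This is not the physically-constrained GAN/VAE approach the project targets, and all data here is made up, but it shows the O(10) → O(1000) sampling step in miniature:

```python
import random
import statistics

def augment_ensemble(members, n_new, rng=None):
    """Draw n_new synthetic members by Gaussian resampling at each
    grid point of a small ensemble (each member is a flat list)."""
    rng = rng or random.Random(0)
    n_points = len(members[0])
    # Per-point mean and standard deviation across the real members.
    means = [statistics.mean(m[i] for m in members) for i in range(n_points)]
    stds = [statistics.stdev(m[i] for m in members) for i in range(n_points)]
    return [[rng.gauss(means[i], stds[i]) for i in range(n_points)]
            for _ in range(n_new)]

# A toy 10-member "ensemble" over 4 grid points.
real = [[float(r) + 0.1 * k for k in range(4)] for r in range(10)]
synthetic = augment_ensemble(real, n_new=1000)
```

A real system would of course have to preserve spatial correlations and extremes, which is exactly why deep generative models are proposed instead of independent per-point sampling.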

Required skills
The ideal candidate would have the following qualifications:
- A PhD degree in atmospheric sciences, statistics or artificial intelligence
- A strong background in deep learning algorithms, in particular convolutional neural networks and deep generative models (GANs)
- Experience with geophysical problems would be appreciated; at minimum, a strong interest in applied research in atmospheric physics is highly recommended
- Proficiency with Python programming and AI libraries
- Experience with processing large volumes of data
- Experience working in a Linux-based environment
- Aptitude for scientific work and for written and oral communication in English; meetings abroad are possible
- Scientific curiosity, autonomy, and rigor in the interpretation of results

Bibliography
Besombes et al., 2021 : Producing realistic climate data with Generative Adversarial Networks, Nonlin. Processes Geophys., 28, 347–370, https://doi.org/10.5194/npg-28-347-2021.

For more information and application please directly contact Laure Raynaud (laure.raynaud@meteo.fr).

#2 Job offers / tips » CNRS researcher position on advanced methods of artificial intelligence » 2021-12-10 15:11:29

Rufin VanRullen
Replies: 0

Dear colleagues,

One of the novelties at this year’s edition of the “concours chercheurs CNRS”,

   https://www.dgdr.cnrs.fr/drhchercheurs/ … ult-en.htm

is the availability of five permanent positions as junior full-time researchers in the newly created interdisciplinary section “sciences and data”. They can be found at the very end of the list of positions on the website quoted above (“interdisciplinary committee no. 55”). The “concours chercheurs” is a competitive recruitment process used by CNRS to hire its researchers.

None of the five positions is attached to a given CNRS laboratory, i.e. candidates are free to propose a research project with any CNRS laboratory. One of the five positions is on the subject

  “Advanced methods of artificial intelligence for the processing, reconstruction and analysis of ATLAS experiment data at the Large Hadron Collider”.

Are you a young colleague with demonstrated experience in the field of advanced artificial intelligence? Are you interested in applications to particle physics, but do not have much/any hands-on experience with physics? Please do not hesitate to get in touch with members of the ATLAS team at the “Laboratoire des 2 Infinis – Toulouse” (L2IT). We are happy to discuss ATLAS and the LHC with you.

One of our recent conference proceedings on the use of artificial intelligence for ATLAS and the LHC can be found here:

  https://www.epj-conferences.org/article … 03047.html

More general information on L2IT, a new laboratory created in January 2020, can be found here:

  https://indico.in2p3.fr/event/24978/

Best regards,
Jan

[Do not hesitate to forward this message as you deem appropriate.]

#3 Seminars » CerCo (online) Conference on April 16th @9am | Randall O'Reilly » 2021-04-13 17:32:13

Rufin VanRullen
Replies: 0

CerCo Conference - Friday, 16 April 2021, 9:00 AM
Title: Predictive Error-driven Learning in the Brain
Speaker: Randall O'Reilly (Center for Neuroscience, University of California Davis)

Abstract: How do humans learn from raw sensory experience?  Throughout life, but most obviously in infancy, we learn without explicit instruction.  We propose a detailed biological mechanism for the widely-embraced idea that learning is driven by the differences between predictions and actual outcomes (i.e., predictive error-driven learning).  Specifically, numerous weak projections into the pulvinar nucleus of the thalamus generate top-down predictions, and sparse, focal driver inputs from lower areas supply the actual outcome, originating in layer 5 intrinsic bursting (5IB) neurons.  Thus, the outcome representation is only briefly activated, roughly every 100 ms (i.e., 10 Hz, alpha), resulting in a temporal difference error signal, which drives local synaptic changes throughout the neocortex.  This results in a biologically-plausible form of error backpropagation learning.  We implemented these mechanisms in a large-scale model of the visual system, and found that the simulated inferotemporal (IT) pathway learns to systematically categorize 3D objects according to invariant shape properties, based solely on predictive learning from raw visual inputs.  These categories match human judgments on the same stimuli, and are consistent with neural representations in IT cortex in primates.  The same learning mechanisms apply to sensory-motor predictive learning, and are being tested in an immersive rodent-level navigation and foraging environment, and in speech perception and production.

Zoom link:
https://univ-tlse3-fr.zoom.us/j/8504047 … ZPWm02QT09

Meeting ID: 850 4047 0167
Passcode: 616413

#4 Re: Seminars » ANITI seminar: Mathieu Chalvidal, @LAAS-CNRS, Friday October 30th @3pm » 2020-10-30 11:19:39

[lockdown update]:
This talk will still happen, but 100% virtual. Obviously, do not go to LAAS for this talk. The link is still valid: https://seminar.laas.fr/b/gui-lyf-mp4-6v7
See you all at 3pm!

#5 Seminars » ANITI seminar: Mathieu Chalvidal, @LAAS-CNRS, Friday October 30th @3pm » 2020-10-28 20:55:37

Rufin VanRullen
Replies: 1

Mathieu Chalvidal, ANITI PhD student in the chairs of Thomas Serre and Rufin VanRullen, will present his recent work on:

"A control perspective on parameterization in infinite-depth neural networks"
Abstract: Viewing artificial neural modules as mathematical functions describing a vector field over a topological activity space opens interesting perspectives for deep learning: this view interprets the sequential computation of stacked or recurrent neural modules in traditional deep neural networks as the discrete evolution of a dynamical system, such that a particular parametrization of the module gives rise to particular trajectories of the hidden state/activity tensor given its initial point (the input stimuli). Recently, new approaches have developed this view further, suggesting to solve the ordinary differential equation (Neural ODE) associated with the module's function, such that model depth corresponds to integration over time. This interpretation culminates in the assimilation of neural networks to particular flows. Despite their elegant formulation and lightweight memory cost, neural ordinary differential equations (NODEs) suffer from known representational limitations. In particular, the single flow learned by NODEs cannot express all homeomorphisms from a given data space to itself, and their static weight parametrization restricts the type of functions they can learn compared to discrete architectures with layer-dependent weights. Here, we describe a new module called neurally-controlled ODE (N-CODE) designed to improve the expressivity of NODEs. The parameters of N-CODE modules are dynamic variables governed by a trainable map from the initial or current activation state, resulting in forms of open-loop and closed-loop control, respectively.
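The NODE/N-CODE distinction can be sketched in miniature (a toy under simplifying assumptions, not the authors' implementation; the 2-D vector field, Euler integrator and control map below are all invented for the example): a plain NODE integrates a vector field with static parameters, while an N-CODE-style module lets the parameters depend on the activation state:

```python
import math

def f(h, theta):
    # Vector field: dh/dt = tanh(theta @ h) for a 2-D hidden state.
    return [math.tanh(theta[0][0] * h[0] + theta[0][1] * h[1]),
            math.tanh(theta[1][0] * h[0] + theta[1][1] * h[1])]

def integrate(h0, theta_fn, steps=100, dt=0.01):
    """Forward-Euler integration of the ODE; theta_fn maps the current
    state to the parameters, so a constant theta_fn gives a plain NODE
    while a state-dependent one gives closed-loop control."""
    h = list(h0)
    for _ in range(steps):
        theta = theta_fn(h)
        dh = f(h, theta)
        h = [h[0] + dt * dh[0], h[1] + dt * dh[1]]
    return h

W = [[0.5, -1.0], [1.0, 0.5]]                 # static weights: classic NODE
node_out = integrate([1.0, 0.0], lambda h: W)

# N-CODE-style closed-loop control: parameters vary with the current state
# through a (hypothetical) control map; here simply W scaled by the state.
ncode_out = integrate([1.0, 0.0],
                      lambda h: [[w * (1 + 0.1 * h[0]) for w in row]
                                 for row in W])
```

In the real method the control map is itself trainable; the point of the sketch is only that state-dependent parameters trace a different trajectory than static ones.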

Anyone is welcome to join this ANITI seminar in person at LAAS (just say you are coming for the ANITI talk). The seminar will also be accessible through the link: https://seminar.laas.fr/b/gui-lyf-mp4-6v7 (no need for a password, just identify yourself with a name or pseudonym to enter the virtual room).

#6 Seminars » TIdDLe/ANITI Keynote Lecture by Thomas Serre (ANITI/Brown University) » 2020-01-06 17:02:37

Rufin VanRullen
Replies: 1

TIdDLe/ANITI Keynote Lecture by Thomas Serre (ANITI/Brown University): Jan 14, 11am@B612 (6th floor)

Dear TIdDLers,

To kick off the new year, we're delighted to present our first TIdDLe/ANITI Keynote Lecture, by Thomas Serre (Brown University & ANITI external chair), on January 14, 11am at the B612 building (3 rue Tarfaya, 6th floor). All TIdDLe members are invited to attend the Lecture, which will be followed by a cocktail reception (generously sponsored by ANITI). Don't miss this opportunity to catch up with your TIdDLe/ANITI colleagues, and to meet a fabulous guest speaker who perfectly embodies the intersection between Deep Learning and Neuroscience. (Keep reading for the Lecture Summary and Speaker Bio.)

Hope to see you next Tuesday!
Rufin (on behalf of the TIdDLe organization committee).

-
Title:        Feedforward and feedback processes in visual recognition
Speaker:  Thomas Serre
Affiliation: Cognitive, Linguistic & Psychological Sciences Department
               Carney Institute for Brain Science, Brown University (USA)
Abstract:  Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive fields that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions, providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.

Bio:         Dr. Serre is an Associate Professor in Cognitive Linguistic & Psychological Sciences and an affiliate of the Carney Institute for Brain Science at Brown University. He received a Ph.D. in Neuroscience from MIT in 2006 and an MSc in EECS from Télécom Bretagne (France) in 2000. His research seeks to understand the neural computations supporting visual perception and has been featured in the BBC series “Visions from the Future” and other news articles (The Economist, New Scientist, Scientific American, IEEE Computing in Science and Technology, Technology Review and Slashdot). Dr. Serre is the Faculty Director of the Center for Computation and Visualization and co-Director of the Initiative for Computation in Brain and Mind at Brown University. He also holds an International Chair in AI within the Artificial and Natural Intelligence Toulouse Institute (France). Dr. Serre has served as an area chair and a senior program committee member for top-tier machine learning and computer vision conferences including AAAI, CVPR, and NeurIPS. He is currently serving as a domain expert for IARPA’s Machine Intelligence from Cortical Networks (MICrONS) program and as a scientific advisor for Vium, Inc. He was the recipient of an NSF Early Career Award as well as DARPA’s Young Faculty Award and Director’s Award.

#7 Seminars » Oct 1, 2019, 10:00 - ENAC amphi Breguet - N. Bouaynaya (Rowan Univ.) » 2019-09-26 09:44:54

Rufin VanRullen
Replies: 0

Title: Deep Learning: Promise and Challenges - A Bayesian Perspective
By:    Dr. Nidhal Carla Bouaynaya, Rowan University (USA)

Abstract: Within the field of machine learning, deep learning approaches have resulted in state-of-the-art accuracy in visual object detection, speech recognition, and many other domains including transportation. Deep learning techniques hold the promise of emerging technologies, such as autonomous unmanned vehicles, smart cities infrastructure, and cybersecurity. However, deep learning models are deterministic, and as a result are unable to understand or assess their uncertainty, a critical part of any predictive system’s output. This can have disastrous consequences, especially when the output of such models is fed into higher-level decision making procedures, such as autonomous drones and vehicles. This talk is divided into two parts. First, we provide intuitive insights into deep learning models and show their applications in flight safety and operational efficiency. We then introduce Bayesian deep learning to assess the model’s confidence in its prediction and show pioneering results on robustness to noise and artifacts in the data as well as resilience to adversarial attacks.
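One common concrete instance of Bayesian deep learning for uncertainty is Monte-Carlo dropout. The sketch below (a toy linear "model" with made-up weights, not the speaker's method) shows how repeated stochastic forward passes yield a predictive mean plus a spread that can serve as a confidence estimate:

```python
import random
import statistics

def predict_with_dropout(x, weights, rng, p_drop=0.5):
    """One stochastic forward pass of a toy linear model: each weight is
    dropped with probability p_drop (and rescaled to keep the mean)."""
    kept = [w * (rng.random() >= p_drop) / (1 - p_drop) for w in weights]
    return sum(wi * xi for wi, xi in zip(kept, x))

def mc_uncertainty(x, weights, n_samples=200, seed=0):
    """Monte-Carlo dropout: repeat the stochastic pass and report the
    predictive mean and standard deviation as an uncertainty estimate."""
    rng = random.Random(seed)
    preds = [predict_with_dropout(x, weights, rng) for _ in range(n_samples)]
    return statistics.mean(preds), statistics.stdev(preds)

mean, std = mc_uncertainty([1.0, 2.0, 3.0], [0.5, -0.2, 0.1])
```

A downstream decision procedure could then reject or defer predictions whose standard deviation exceeds a threshold, rather than trusting a single deterministic output.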

Location: ENAC amphi Breguet **Valid identification required for entry**

#8 Re: Seminars » April 29, 2019, 12:30 - UPS amphi 1R3-Schwartz - F. Malgouyres (IMT) » 2019-04-22 09:57:05

ATTENTION: DATE CHANGE!

This will now take place on May 9th, 2019 from 10:30am to 11:45am (same place).

Abstract: This talk is the continuation of the 15 April talk on theoretical results for deep learning. We will present two types of results: guarantees on the statistical complexity of feed-forward networks (which imply control of the generalization error), and properties related to the (non-convex) optimization of these networks' parameters: gradient computation and properties of the objective function.

#9 Seminars » April 29, 2019, 12:30 - UPS amphi 1R3-Schwartz - F. Malgouyres (IMT) » 2019-04-09 12:49:36

Rufin VanRullen
Replies: 1

Title: A quick overview of deep learning theory: part 2
By:    François Malgouyres, IMT

Abstract: The recent successful practical results in deep learning have triggered a renewed interest in deep learning theory. In this talk we will briefly review old and new results about optimization algorithms and guarantees for feed-forward neural networks.

Location: UPS amphi 1R3-Schwartz

Note: These introductory lectures, organized by IMT and the AOC group, are directed to an audience of deep learning beginners *with an already solid mathematical background*.

#10 Seminars » April 15, 2019, 12:30 - UPS amphi 1R3-Schwartz - S. Gerchinovitz (IMT) » 2019-04-09 12:47:54

Rufin VanRullen
Replies: 0

Title: A quick overview of deep learning theory: part 1
By:    Sébastien Gerchinovitz, IMT

Abstract: The recent successful practical results in deep learning have triggered a renewed interest in deep learning theory. In this talk we will briefly review old and new results about approximation theory and statistical complexity for feed-forward neural networks.

Location: UPS amphi 1R3-Schwartz

Note: These introductory lectures, organized by IMT and the AOC group, are directed to an audience of deep learning beginners *with an already solid mathematical background*.

#11 Seminars » April 8th, 2019, 12:30 - UPS amphi 1R3-Schwartz - E. Rachelson (ISAE) » 2019-04-03 10:09:10

Rufin VanRullen
Replies: 0

Title: An Introduction to Deep Reinforcement Learning
By:    Emmanuel Rachelson, ISAE-SUPAERO.

Abstract: Deep Reinforcement Learning has become a hot topic over recent years. This talk is designed for those who want to discover the topic from a rigorous (yet illustrative) point of view. Here is a tentative outline:
+ Introduction [target duration: 5 minutes],
+ Deep RL is RL, but deep. So we need an overview of RL foundations [target duration: 20 minutes],
+ Deep Q-networks [target duration: 10 minutes],
+ Towards the state of the art / hot topics [target duration: 10 minutes].

Location: UPS amphi 1R3-Schwartz

Note: These introductory lectures, organized by IMT and the AOC group, are directed to an audience of deep learning beginners *with an already solid mathematical background*.

#12 Web presence » Missing forum categories/topics? » 2019-03-12 00:22:01

Rufin VanRullen
Replies: 0

If there's something you want to discuss, but don't find the right place for it, feel free to propose here your ideas for new forum categories or topics.

#13 Seminars » March 28th, 2019, 2pm - SUPAERO - Tim Masquelier (CerCo, CNRS) » 2019-03-08 20:54:19

rufin
Replies: 0

Title: Spike-based computing and learning in brains, machines, and visual systems in particular
By:    Tim Masquelier, CerCo (CNRS).

Abstract: In recent years, deep learning has been a revolution in the field of machine learning, for computer vision in particular. In this approach, a deep (multilayer) artificial neural network (ANN) is usually trained in a supervised manner, using a stochastic gradient descent method known as backpropagation. Huge amounts of labeled examples are required, but the resulting classification accuracy is truly impressive, often outperforming humans. Neurons in an ANN are characterized by a single, static, continuous-valued activation. Yet biological neurons use discrete electrical impulses called spikes to compute and transmit information, and the spike times, in addition to the spike rates, matter. Spiking neural networks (SNNs) are thus more biologically realistic than ANNs, and arguably the only viable option if one wants to understand how the brain computes. SNNs are also more hardware friendly and energy-efficient than ANNs, and are thus appealing for technology, especially for edge computing. However, training deep SNNs remains a challenge. Spiking neurons' transfer function is usually non-differentiable, which prevents using classic backpropagation. I will review recent supervised and unsupervised methods to train deep SNNs, and compare them in terms of accuracy, but also computational cost and hardware friendliness.
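The non-differentiability mentioned in the abstract, and the surrogate-gradient workaround used in much of the reviewed literature, can be sketched with a single leaky integrate-and-fire neuron (a minimal toy with made-up constants, not any specific method from the talk): the forward pass emits hard spikes, while a smooth "fast sigmoid" surrogate stands in for the spike derivative during backpropagation:

```python
def lif_forward(inputs, tau=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential v leaks
    by a factor tau each step, integrates the input, and emits a spike
    (1) when it crosses the threshold, after which v is reset."""
    v, spikes, potentials = 0.0, [], []
    for x in inputs:
        v = tau * v + x
        s = 1 if v >= threshold else 0
        potentials.append(v)
        if s:
            v = 0.0          # hard reset after a spike
        spikes.append(s)
    return spikes, potentials

def surrogate_grad(v, threshold=1.0):
    """Fast-sigmoid surrogate for the (zero-almost-everywhere) true
    derivative of the spike function, used in place of d(spike)/dv
    when backpropagating through a spiking layer."""
    return 1.0 / (1.0 + abs(v - threshold)) ** 2

spikes, potentials = lif_forward([0.6, 0.6, 0.6, 0.0, 1.2])
# spikes → [0, 1, 0, 0, 1]
```

The spike train itself is binary and non-differentiable, but the surrogate is largest when the potential is near threshold, which is what lets gradient descent assign credit through spiking layers.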

Location: ISAE-SUPAERO, room 05.035
Access to the ISAE-SUPAERO campus requires that you send an email to emmanuel.rachelson@isae-supaero.fr no later than the day before the seminar.

#14 Web presence » TIdDLE logo? » 2019-03-08 19:17:08

rufin
Replies: 0

Anyone interested (and artistically skilled enough) to make us a shiny TIdDLe logo?

For inspiration: in English, a "tiddler" actually means "small fish", both in a literal and figurative sense. "To tiddle" means to deal with unimportant things. ;-)
