MLFWR1 
Machine Learning Fundamentals with R 
14 hours 
The aim of this course is to provide basic proficiency in applying Machine Learning methods in practice. Using the R platform and its libraries, and drawing on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, how to interpret the outputs of the algorithms, and how to validate the results.
Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and to avoid the common pitfalls of Data Science applications.
Introduction to Applied Machine Learning
Statistical learning vs. Machine learning
Iteration and evaluation
Bias-Variance tradeoff
Regression
Linear regression
Generalizations and Nonlinearity
Exercises
Classification
Bayesian refresher
Naive Bayes
Logistic regression
K-Nearest neighbors
Exercises
Cross-validation and Resampling
Cross-validation approaches
Bootstrap
Exercises
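The bootstrap topic above lends itself to a tiny illustration. A minimal sketch in Python (the course itself works in R; the data and the helper name `bootstrap_se` are invented for illustration): estimate the standard error of the sample mean by resampling with replacement.

```python
import random

def bootstrap_se(data, stat, n_resamples=1000, seed=0):
    """Estimate the standard error of `stat` by resampling with replacement."""
    rng = random.Random(seed)
    n = len(data)
    estimates = []
    for _ in range(n_resamples):
        resample = [rng.choice(data) for _ in range(n)]
        estimates.append(stat(resample))
    mean = sum(estimates) / n_resamples
    var = sum((e - mean) ** 2 for e in estimates) / (n_resamples - 1)
    return var ** 0.5

data = [2.1, 2.5, 2.8, 3.0, 3.4, 3.9, 4.2, 4.8]
se_mean = bootstrap_se(data, lambda xs: sum(xs) / len(xs))
```

The same resampling loop works for any statistic (median, slope, …) by swapping the `stat` function.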
Unsupervised Learning
K-means clustering
Examples
Challenges of unsupervised learning and beyond K-means
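As a taste of the clustering block, here is a minimal K-means sketch in Python (the course works in R; the toy points and starting centroids are made up): alternate assigning points to the nearest centroid and moving each centroid to its cluster mean.

```python
import math

def kmeans(points, k, centroids, iters=20):
    """Plain k-means: alternate assignment and centroid-update steps."""
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[j].append(p)
        # Update step: move each non-empty centroid to the mean of its cluster.
        for j, cluster in enumerate(clusters):
            if cluster:
                centroids[j] = tuple(sum(c) / len(cluster) for c in zip(*cluster))
    return centroids, clusters

points = [(1, 1), (1.5, 2), (1, 0.6), (8, 8), (9, 11), (8, 9)]
centroids, clusters = kmeans(points, 2, [(0, 0), (10, 10)])
```

With well-separated toy data the algorithm converges in one or two iterations; sensitivity to the starting centroids is one of the "challenges" the outline refers to.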

annmldt 
Artificial Neural Networks, Machine Learning, Deep Thinking 
21 hours 
DAY 1 - ARTIFICIAL NEURAL NETWORKS
Introduction and ANN Structure.
Biological neurons and artificial neurons.
Model of an ANN.
Activation functions used in ANNs.
Typical classes of network architectures.
Mathematical Foundations and Learning mechanisms.
Revisiting vector and matrix algebra.
State-space concepts.
Concepts of optimization.
Error-correction learning.
Memory-based learning.
Hebbian learning.
Competitive learning.
Single layer perceptrons.
Structure and learning of perceptrons.
Pattern classifier - introduction and Bayes' classifiers.
Perceptron as a pattern classifier.
Perceptron convergence.
Limitations of the perceptron.
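The perceptron learning rule covered above fits in a few lines. A minimal Python sketch (the toy data and function names are invented for illustration): weights are nudged toward every misclassified sample until a linearly separable set is classified correctly.

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    """Rosenblatt's rule: nudge weights on every misclassified sample."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:  # target is +1 or -1
            predicted = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
            if predicted != target:
                w[0] += lr * target * x[0]
                w[1] += lr * target * x[1]
                b += lr * target
    return w, b

# Linearly separable toy data: an AND-like gate encoded with +/-1 targets.
samples = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
```

Convergence is guaranteed only for separable data; running the same loop on XOR never terminates with all samples correct, which is exactly the limitation noted above.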
Feedforward ANN.
Structures of Multilayer feedforward networks.
Back propagation algorithm.
Back propagation - training and convergence.
Functional approximation with back propagation.
Practical and design issues of back propagation learning.
Radial Basis Function Networks.
Pattern separability and interpolation.
Regularization Theory.
Regularization and RBF networks.
RBF network design and training.
Approximation properties of RBF.
Competitive Learning and Self-organizing ANN.
General clustering procedures.
Learning Vector Quantization (LVQ).
Competitive learning algorithms and architectures.
Self-organizing feature maps.
Properties of feature maps.
Fuzzy Neural Networks.
Neuro-fuzzy systems.
Background of fuzzy sets and logic.
Design of fuzzy systems.
Design of fuzzy ANNs.
Applications
A few examples of Neural Network applications, their advantages and problems will be discussed.
DAY 2 - MACHINE LEARNING
The PAC Learning Framework
Guarantees for finite hypothesis set – consistent case
Guarantees for finite hypothesis set – inconsistent case
Generalities
Deterministic vs. stochastic scenarios
Bayes error and noise
Estimation and approximation errors
Model selection
Rademacher Complexity and VC Dimension
Bias-Variance tradeoff
Regularisation
Overfitting
Validation
Support Vector Machines
Kriging (Gaussian Process regression)
PCA and Kernel PCA
Self-Organising Maps (SOM)
Kernel induced vector space
Mercer Kernels and Kernel-induced similarity metrics
Reinforcement Learning
DAY 3 - DEEP LEARNING
This will be taught in relation to the topics covered on Day 1 and Day 2
Logistic and Softmax Regression
Sparse Autoencoders
Vectorization, PCA and Whitening
Self-Taught Learning
Deep Networks
Linear Decoders
Convolution and Pooling
Sparse Coding
Independent Component Analysis
Canonical Correlation Analysis
Demos and Applications

deeplearning1 
Introduction to Deep Learning 
21 hours 
This course is a general overview of Deep Learning, without going too deeply into any specific method. It is suitable for people who want to start using Deep Learning to enhance the accuracy of their predictions.
Backprop, modular models
Log-sum module
RBF Net
MAP/MLE loss
Parameter Space Transforms
Convolutional Module
Gradient-Based Learning
Energy for inference
Objective for learning
PCA; NLL
Latent Variable Models
Probabilistic LVM
Loss Function
Handwriting recognition

cntk 
Using the Computational Network Toolkit (CNTK)
28 hours 
The Computational Network Toolkit (CNTK) is Microsoft's open-source, multi-machine, multi-GPU, highly efficient machine learning framework for training RNNs on speech, text, and image data.
Audience
This course is directed at engineers and architects aiming to utilize CNTK in their projects.
Getting started
Setup CNTK on your machine
Enabling 1-bit SGD
Developing and Testing
CNTK Production Test Configurations
How to contribute to CNTK
Tutorial
Tutorial II
CNTK usage overview
Examples
Presentations
Multiple GPUs¹ and machines
Configuring CNTK
Config file overview
Simple Network Builder
BrainScript Network Builder
SGD block
Reader block
Train, Test, Eval
Top-level configurations
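To make the configuration blocks above concrete, here is a schematic CNTK v1-style config combining the Simple Network Builder, SGD, and reader blocks. This is an illustrative sketch, not a verbatim working file: the reader's input-stream definitions are omitted, the file paths and layer sizes are invented, and the exact key names should be checked against the CNTK documentation.

```
command = trainSimple

trainSimple = [
    action = "train"
    modelPath = "Models/simple.dnn"

    SimpleNetworkBuilder = [
        layerSizes = 28:50:10
        trainingCriterion = "CrossEntropyWithSoftmax"
        evalCriterion = "ErrorPrediction"
    ]

    SGD = [
        epochSize = 0          # 0 = use the full data set per epoch
        minibatchSize = 25
        learningRatesPerMB = 0.1
        maxEpochs = 10
    ]

    reader = [
        readerType = "CNTKTextFormatReader"
        file = "train.txt"
        # input stream definitions (features/labels) omitted here
    ]
]
```

The same top-level pattern (a `command` list naming named blocks, each with an `action`) also covers the Test and Eval configurations mentioned above.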
Describing Networks
Basic concepts
Expressions
Defining functions
Full Function Reference
Data readers
Text Format Reader
CNTK Text Format Reader
UCI Fast Reader (deprecated)
HTKMLF Reader
LM sequence reader
LU sequence reader
Image reader
Evaluating CNTK Models
Overview
C++ Evaluation Interface
C# Evaluation Interface
Evaluating Hidden Layers
C# Image Transforms for Evaluation
Advanced topics
Command line parsing rules
Top-level commands
Plot command
ConvertDBN command
¹ The topic related to the use of CNTK with a GPU is not available as part of a remote course. This module can be delivered during classroom-based courses, but only by prior agreement, and only if both the trainer and all participants have laptops with supported NVIDIA GPUs (not provided by NobleProg). NobleProg cannot guarantee the availability of trainers with the required hardware. 
aiintrozero 
From Zero to AI 
35 hours 
This course is created for people who have no previous experience in probability and statistics.
Probability (3.5h)
Definition of probability
Binomial distribution
Everyday usage exercises
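The binomial distribution topic above can be illustrated directly. A minimal Python sketch (the coin-toss example is invented for illustration): compute P(X = k) from the binomial formula.

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p): choose k successes out of n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Everyday example: probability of exactly 2 heads in 4 fair coin tosses.
p_two_heads = binomial_pmf(2, 4, 0.5)  # C(4, 2) * 0.5^2 * 0.5^2 = 0.375
```

Summing the PMF over k = 0..n returns 1, a quick sanity check on the definition of probability.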
Statistics (10.5h)
Descriptive Statistics
Inferential Statistics
Regression
Logistic Regression
Exercises
Intro to programming (3.5h)
Procedural Programming
Functional Programming
Object-Oriented Programming
Exercises (writing logic for a game of choice, e.g. noughts and crosses)
Machine Learning (10.5h)
Classification
Clustering
Neural Networks
Exercises (write AI for a computer game of choice)
Rules Engines and Expert Systems (7 hours)
Intro to Rule Engines
Write AI for the same game and combine the solutions into a hybrid approach

aiauto 
Artificial Intelligence in Automotive 
14 hours 
This course covers AI (emphasizing Machine Learning and Deep Learning) in the Automotive Industry. It helps to determine which technologies can (potentially) be used in various situations in a car, from simple automation and image recognition to autonomous decision making.
Current state of the technology
What is used
What may be potentially used
Rules based AI
Simplifying decision
Machine Learning
Classification
Clustering
Neural Networks
Types of Neural Networks
Presentation of working examples and discussion
Deep Learning
Basic vocabulary
When to use Deep Learning, when not to
Estimating computational resources and cost
Very short theoretical background to Deep Neural Networks
Deep Learning in practice (mainly using TensorFlow)
Preparing Data
Choosing loss function
Choosing the appropriate type of neural network
Accuracy vs speed and resources
Training neural network
Measuring efficiency and error
Sample usage
Anomaly detection
Image recognition
ADAS

Neuralnettf 
Neural Networks Fundamentals using TensorFlow as Example 
28 hours 
This course will give you knowledge of neural networks and, more generally, of machine learning and deep learning algorithms and applications.
The training focuses on fundamentals, but will also help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc. The examples are made in TensorFlow.
TensorFlow Basics
Creating, initializing, saving, and restoring TensorFlow variables
Feeding, Reading and Preloading TensorFlow Data
How to use TensorFlow infrastructure to train models at scale
Visualizing and Evaluating models with TensorBoard
TensorFlow Mechanics
Inputs and Placeholders
Build the Graph
Inference
Loss
Training
Train the Model
The Graph
The Session
Train Loop
Evaluate the Model
Build the Eval Graph
Eval Output
The Perceptron
Activation functions
The perceptron learning algorithm
Binary classification with the perceptron
Document classification with the perceptron
Limitations of the perceptron
From the Perceptron to Support Vector Machines
Kernels and the kernel trick
Maximum margin classification and support vectors
Artificial Neural Networks
Nonlinear decision boundaries
Feedforward and feedback artificial neural networks
Multilayer perceptrons
Minimizing the cost function
Forward propagation
Back propagation
Improving the way neural networks learn
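The forward- and back-propagation steps listed above can be sketched end to end. A minimal one-hidden-layer network in plain Python (the XOR data, layer sizes, and learning rate are illustrative choices, not the course's own example): stochastic gradient descent on squared error drives the loss below its initial value.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A minimal 2-2-1 network trained with backpropagation on the XOR problem.
rng = random.Random(0)
w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
b1 = [0.0, 0.0]
w2 = [rng.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
b2 = 0.0
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial_loss = total_loss()
lr = 0.5
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: chain rule through the output and hidden sigmoids.
        delta_out = (y - t) * y * (1 - y)
        delta_h = [delta_out * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w2[j] -= lr * delta_out * h[j]
            w1[j][0] -= lr * delta_h[j] * x[0]
            w1[j][1] -= lr * delta_h[j] * x[1]
            b1[j] -= lr * delta_h[j]
        b2 -= lr * delta_out
final_loss = total_loss()
```

XOR needs the hidden layer: the nonlinear decision boundary it creates is exactly what a single perceptron cannot represent.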
Convolutional Neural Networks
Goals
Model Architecture
Principles
Code Organization
Launching and Training the Model
Evaluating a Model

mldt 
Machine Learning and Deep Learning 
21 hours 
This course covers AI (emphasizing Machine Learning and Deep Learning).
Machine Learning
Introduction to Machine Learning
Applications of machine learning
Supervised Versus Unsupervised Learning
Machine Learning Algorithms
Regression
Classification
Clustering
Recommender System
Anomaly Detection
Reinforcement Learning
Regression
Simple & Multiple Regression
Least Square Method
Estimating the Coefficients
Assessing the Accuracy of the Coefficient Estimates
Assessing the Accuracy of the Model
Post Estimation Analysis
Other Considerations in the Regression Models
Qualitative Predictors
Extensions of the Linear Models
Potential Problems
Bias-variance trade-off [underfitting/overfitting] for regression models
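The least-squares material above reduces to two closed-form estimates in the simple, one-predictor case. A minimal Python sketch (the toy data are invented): slope = Sxy / Sxx, intercept = ȳ − slope · x̄.

```python
def fit_simple_ols(xs, ys):
    """Least-squares estimates for y = intercept + slope * x."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    return intercept, slope

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x
b0, b1 = fit_simple_ols(xs, ys)
```

Assessing the accuracy of these estimates (standard errors, R²) builds directly on the same sums of squares.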
Resampling Methods
Cross-Validation
The Validation Set Approach
Leave-One-Out Cross-Validation
k-Fold Cross-Validation
Bias-Variance Trade-Off for k-Fold
The Bootstrap
Model Selection and Regularization
Subset Selection [Best Subset Selection, Stepwise Selection, Choosing the Optimal Model]
Shrinkage Methods/ Regularization [Ridge Regression, Lasso & Elastic Net]
Selecting the Tuning Parameter
Dimension Reduction Methods
Principal Components Regression
Partial Least Squares
Classification
Logistic Regression
The Logistic Model cost function
Estimating the Coefficients
Making Predictions
Odds Ratio
Performance Evaluation Metrics
[Sensitivity/Specificity/PPV/NPV, Precision, ROC curve etc.]
Multiple Logistic Regression
Logistic Regression for >2 Response Classes
Regularized Logistic Regression
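The logistic model and its cost function above can be sketched compactly. A minimal Python illustration (the toy data, learning rate, and epoch count are invented): gradient descent on the cross-entropy loss, whose gradient with respect to the logit is simply (p − t).

```python
import math

def train_logistic(samples, lr=0.1, epochs=2000):
    """Fit w, b by gradient descent on the cross-entropy (negative log-likelihood)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in samples:  # t is 0 or 1
            p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            # d(loss)/d(logit) = p - t, so each parameter moves by lr * (p - t) * input.
            w[0] -= lr * (p - t) * x[0]
            w[1] -= lr * (p - t) * x[1]
            b -= lr * (p - t)
    return w, b

samples = [((0.5, 1.0), 0), ((1.0, 0.5), 0), ((3.0, 3.5), 1), ((3.5, 3.0), 1)]
w, b = train_logistic(samples)
prob = lambda x: 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
```

Exponentiating a coefficient gives the odds ratio for a one-unit change in that predictor, which connects this sketch to the interpretation topics above.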
Linear Discriminant Analysis
Using Bayes’ Theorem for Classification
Linear Discriminant Analysis for p=1
Linear Discriminant Analysis for p >1
Quadratic Discriminant Analysis
K-Nearest Neighbors
Classification with Nonlinear Decision Boundaries
Support Vector Machines
Optimization Objective
The Maximal Margin Classifier
Kernels
One-Versus-One Classification
One-Versus-All Classification
Comparison of Classification Methods
Introduction to Deep Learning
ANN Structure
Biological neurons and artificial neurons
Nonlinear Hypothesis
Model Representation
Examples & Intuitions
Transfer Functions / Activation Functions
Typical classes of network architectures
Feedforward ANN
Structures of multilayer feedforward networks
Back propagation algorithm
Back propagation - training and convergence
Functional approximation with back propagation
Practical and design issues of back propagation learning
Deep Learning
Artificial Intelligence & Deep Learning
Softmax Regression
Self-Taught Learning
Deep Networks
Demos and Applications
Lab:
Getting Started with R
Introduction to R
Basic Commands & Libraries
Data Manipulation
Importing & Exporting data
Graphical and Numerical Summaries
Writing functions
Regression
Simple & Multiple Linear Regression
Interaction Terms
Nonlinear Transformations
Dummy variable regression
Cross-Validation and the Bootstrap
Subset selection methods
Penalization [Ridge, Lasso, Elastic Net]
Classification
Logistic Regression, LDA, QDA, and KNN,
Resampling & Regularization
Support Vector Machine
Resampling & Regularization
Artificial Neural Network
Deep Learning
Note:
For ML algorithms, case studies will be used to discuss their application, advantages & potential issues.
Analysis of different data sets will be performed using R

deepmclrg 
Machine Learning & Deep Learning with Python and R 
14 hours 
MACHINE LEARNING
1: Introducing Machine Learning
The origins of machine learning
Uses and abuses of machine learning
Ethical considerations
How do machines learn?
Abstraction and knowledge representation
Generalization
Assessing the success of learning
Steps to apply machine learning to your data
Choosing a machine learning algorithm
Thinking about the input data
Thinking about types of machine learning algorithms
Matching your data to an appropriate algorithm
Using R for machine learning
Installing and loading R packages
Installing an R package
Installing a package using the point-and-click interface
Loading an R package
Summary
2: Managing and Understanding Data
R data structures
Vectors
Factors
Lists
Data frames
Matrices and arrays
Managing data with R
Saving and loading R data structures
Importing and saving data from CSV files
Importing data from SQL databases
Exploring and understanding data
Exploring the structure of data
Exploring numeric variables
Measuring the central tendency – mean and median
Measuring spread – quartiles and the five-number summary
Visualizing numeric variables – boxplots
Visualizing numeric variables – histograms
Understanding numeric data – uniform and normal distributions
Measuring spread – variance and standard deviation
Exploring categorical variables
Measuring the central tendency – the mode
Exploring relationships between variables
Visualizing relationships – scatterplots
Examining relationships – two-way cross-tabulations
Summary
3: Lazy Learning – Classification Using Nearest Neighbors
Understanding classification using nearest neighbors
The k-NN algorithm
Calculating distance
Choosing an appropriate k
Preparing data for use with k-NN
Why is the k-NN algorithm lazy?
Diagnosing breast cancer with the k-NN algorithm
Step 1 – collecting data
Step 2 – exploring and preparing the data
Transformation – normalizing numeric data
Data preparation – creating training and test datasets
Step 3 – training a model on the data
Step 4 – evaluating model performance
Step 5 – improving model performance
Transformation – z-score standardization
Testing alternative values of k
Summary
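The chapter's k-NN workflow can be miniaturized. A sketch in Python rather than the book's R (the toy two-feature "tumor" points and labels are invented): classify a query by majority vote among its k nearest training examples.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training examples."""
    neighbors = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Invented two-feature examples, already normalized to comparable scales.
train = [((1.0, 1.0), "benign"), ((1.2, 0.8), "benign"), ((0.9, 1.1), "benign"),
         ((3.0, 3.2), "malignant"), ((3.1, 2.9), "malignant"), ((2.8, 3.0), "malignant")]
label = knn_predict(train, (2.9, 3.1), k=3)
```

Because the distance computation dominates, the normalization and z-score steps above matter: a feature on a larger scale would otherwise swamp the vote.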
4: Probabilistic Learning – Classification Using Naive Bayes
Understanding naive Bayes
Basic concepts of Bayesian methods
Probability
Joint probability
Conditional probability with Bayes' theorem
The naive Bayes algorithm
The naive Bayes classification
The Laplace estimator
Using numeric features with naive Bayes
Example – filtering mobile phone spam with the naive Bayes algorithm
Step 1 – collecting data
Step 2 – exploring and preparing the data
Data preparation – processing text data for analysis
Data preparation – creating training and test datasets
Visualizing text data – word clouds
Data preparation – creating indicator features for frequent words
Step 3 – training a model on the data
Step 4 – evaluating model performance
Step 5 – improving model performance
Summary
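The naive Bayes mechanics of this chapter, including the Laplace estimator, fit in a short sketch. Python rather than the book's R; the tiny "spam/ham" word lists are invented for illustration.

```python
import math
from collections import Counter

def train_nb(docs):
    """Count words per class for multinomial naive Bayes."""
    word_counts, class_counts, vocab = {}, Counter(), set()
    for words, label in docs:
        class_counts[label] += 1
        wc = word_counts.setdefault(label, Counter())
        for w in words:
            wc[w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def predict_nb(model, words):
    """Pick the class with the highest log posterior, with add-one smoothing."""
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total_docs)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            # The +1 is the Laplace estimator; log-space avoids underflow.
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

docs = [(["win", "cash", "now"], "spam"), (["free", "cash", "prize"], "spam"),
        (["meeting", "tomorrow", "noon"], "ham"), (["lunch", "tomorrow"], "ham")]
model = train_nb(docs)
```

Without the +1 smoothing, any unseen word would zero out an entire class's probability, which is exactly the problem the Laplace estimator section addresses.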
5: Divide and Conquer – Classification Using Decision Trees and Rules
Understanding decision trees
Divide and conquer
The C5.0 decision tree algorithm
Choosing the best split
Pruning the decision tree
Example – identifying risky bank loans using C5.0 decision trees
Step 1 – collecting data
Step 2 – exploring and preparing the data
Data preparation – creating random training and test datasets
Step 3 – training a model on the data
Step 4 – evaluating model performance
Step 5 – improving model performance
Boosting the accuracy of decision trees
Making some mistakes more costly than others
Understanding classification rules
Separate and conquer
The One Rule algorithm
The RIPPER algorithm
Rules from decision trees
Example – identifying poisonous mushrooms with rule learners
Step 1 – collecting data
Step 2 – exploring and preparing the data
Step 3 – training a model on the data
Step 4 – evaluating model performance
Step 5 – improving model performance
Summary
6: Forecasting Numeric Data – Regression Methods
Understanding regression
Simple linear regression
Ordinary least squares estimation
Correlations
Multiple linear regression
Example – predicting medical expenses using linear regression
Step 1 – collecting data
Step 2 – exploring and preparing the data
Exploring relationships among features – the correlation matrix
Visualizing relationships among features – the scatterplot matrix
Step 3 – training a model on the data
Step 4 – evaluating model performance
Step 5 – improving model performance
Model specification – adding nonlinear relationships
Transformation – converting a numeric variable to a binary indicator
Model specification – adding interaction effects
Putting it all together – an improved regression model
Understanding regression trees and model trees
Adding regression to trees
Example – estimating the quality of wines with regression trees and model trees
Step 1 – collecting data
Step 2 – exploring and preparing the data
Step 3 – training a model on the data
Visualizing decision trees
Step 4 – evaluating model performance
Measuring performance with mean absolute error
Step 5 – improving model performance
Summary
7: Black Box Methods – Neural Networks and Support Vector Machines
Understanding neural networks
From biological to artificial neurons
Activation functions
Network topology
The number of layers
The direction of information travel
The number of nodes in each layer
Training neural networks with backpropagation
Modeling the strength of concrete with ANNs
Step 1 – collecting data
Step 2 – exploring and preparing the data
Step 3 – training a model on the data
Step 4 – evaluating model performance
Step 5 – improving model performance
Understanding Support Vector Machines
Classification with hyperplanes
Finding the maximum margin
The case of linearly separable data
The case of nonlinearly separable data
Using kernels for nonlinear spaces
Performing OCR with SVMs
Step 1 – collecting data
Step 2 – exploring and preparing the data
Step 3 – training a model on the data
Step 4 – evaluating model performance
Step 5 – improving model performance
Summary
8: Finding Patterns – Market Basket Analysis Using Association Rules
Understanding association rules
The Apriori algorithm for association rule learning
Measuring rule interest – support and confidence
Building a set of rules with the Apriori principle
Example – identifying frequently purchased groceries with association rules
Step 1 – collecting data
Step 2 – exploring and preparing the data
Data preparation – creating a sparse matrix for transaction data
Visualizing item support – item frequency plots
Visualizing transaction data – plotting the sparse matrix
Step 3 – training a model on the data
Step 4 – evaluating model performance
Step 5 – improving model performance
Sorting the set of association rules
Taking subsets of association rules
Saving association rules to a file or data frame
Summary
9: Finding Groups of Data – Clustering with k-means
Understanding clustering
Clustering as a machine learning task
The k-means algorithm for clustering
Using distance to assign and update clusters
Choosing the appropriate number of clusters
Finding teen market segments using k-means clustering
Step 1 – collecting data
Step 2 – exploring and preparing the data
Data preparation – dummy coding missing values
Data preparation – imputing missing values
Step 3 – training a model on the data
Step 4 – evaluating model performance
Step 5 – improving model performance
Summary
10: Evaluating Model Performance
Measuring performance for classification
Working with classification prediction data in R
A closer look at confusion matrices
Using confusion matrices to measure performance
Beyond accuracy – other measures of performance
The kappa statistic
Sensitivity and specificity
Precision and recall
The F-measure
Visualizing performance tradeoffs
ROC curves
Estimating future performance
The holdout method
Cross-validation
Bootstrap sampling
Summary
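The performance measures discussed in this chapter follow directly from the confusion matrix. A minimal Python sketch (the example label vectors are invented): precision, recall (sensitivity), and the F-measure for a chosen positive class.

```python
def confusion_metrics(actual, predicted, positive):
    """Precision, recall (sensitivity), and F-measure from the 2x2 confusion matrix."""
    tp = sum(a == positive and p == positive for a, p in zip(actual, predicted))
    fp = sum(a != positive and p == positive for a, p in zip(actual, predicted))
    fn = sum(a == positive and p != positive for a, p in zip(actual, predicted))
    precision = tp / (tp + fp)          # of predicted positives, how many are right
    recall = tp / (tp + fn)             # of actual positives, how many were found
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

actual    = ["spam", "spam", "spam", "ham", "ham", "ham", "ham", "spam"]
predicted = ["spam", "spam", "ham",  "ham", "ham", "spam", "ham", "spam"]
precision, recall, f1 = confusion_metrics(actual, predicted, positive="spam")
```

These are the "beyond accuracy" measures: a classifier that always predicts "ham" scores 50%+ accuracy here but has zero recall for spam.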
11: Improving Model Performance
Tuning stock models for better performance
Using caret for automated parameter tuning
Creating a simple tuned model
Customizing the tuning process
Improving model performance with meta-learning
Understanding ensembles
Bagging
Boosting
Random forests
Training random forests
Evaluating random forest performance
Summary
DEEP LEARNING with R
1: Getting Started with Deep Learning
What is deep learning?
Conceptual overview of neural networks
Deep neural networks
R packages for deep learning
Setting up reproducible results
Neural networks
The deepnet package
The darch package
The H2O package
Connecting R and H2O
Initializing H2O
Linking datasets to an H2O cluster
Summary
2: Training a Prediction Model
Neural networks in R
Building a neural network
Generating predictions from a neural network
The problem of overfitting data – the consequences explained
Use case – build and apply a neural network
Summary
3: Preventing Overfitting
L1 penalty
L1 penalty in action
L2 penalty
L2 penalty in action
Weight decay (L2 penalty in neural networks)
Ensembles and model averaging
Use case – improving out-of-sample model performance using dropout
Summary
4: Identifying Anomalous Data
Getting started with unsupervised learning
How do autoencoders work?
Regularized autoencoders
Penalized autoencoders
Denoising autoencoders
Training an autoencoder in R
Use case – building and applying an autoencoder model
Fine-tuning autoencoder models
Summary
5: Training Deep Prediction Models
Getting started with deep feedforward neural networks
Common activation functions – rectifiers, hyperbolic tangent, and maxout
Picking hyperparameters
Training and predicting new data from a deep neural network
Use case – training a deep neural network for automatic classification
Working with model results
Summary
6: Tuning and Optimizing Models
Dealing with missing data
Solutions for models with low accuracy
Grid search
Random search
Summary
DEEP LEARNING WITH PYTHON
I Introduction
1 Welcome
Deep Learning The Wrong Way
Deep Learning With Python
Summary
II Background
2 Introduction to Theano
What is Theano?
How to Install Theano
Simple Theano Example
Extensions and Wrappers for Theano
More Theano Resources
Summary
3 Introduction to TensorFlow
What is TensorFlow?
How to Install TensorFlow
Your First Examples in TensorFlow
Simple TensorFlow Example
More Deep Learning Models
Summary
4 Introduction to Keras
What is Keras?
How to Install Keras
Theano and TensorFlow Backends for Keras
Build Deep Learning Models with Keras
Summary
5 Project: Develop Large Models on GPUs Cheaply In the Cloud
Project Overview
Setup Your AWS Account
Launch Your Server Instance
Login, Configure and Run
Build and Run Models on AWS
Close Your EC2 Instance
Tips and Tricks for Using Keras on AWS
More Resources For Deep Learning on AWS
Summary
III Multilayer Perceptrons
6 Crash Course In Multilayer Perceptrons
Crash Course Overview
Multilayer Perceptrons
Neurons
Networks of Neurons
Training Networks
Summary
7 Develop Your First Neural Network With Keras
Tutorial Overview
Pima Indians Onset of Diabetes Dataset
Load Data
Define Model
Compile Model
Fit Model
Evaluate Model
Tie It All Together
Summary
8 Evaluate The Performance of Deep Learning Models
Empirically Evaluate Network Configurations
Data Splitting
Manual k-Fold Cross Validation
Summary
9 Use Keras Models With Scikit-Learn For General Machine Learning
Overview
Evaluate Models with Cross Validation
Grid Search Deep Learning Model Parameters
Summary
10 Project: Multiclass Classification Of Flower Species
Iris Flowers Classification Dataset
Import Classes and Functions
Initialize Random Number Generator
Load The Dataset
Encode The Output Variable
Define The Neural Network Model
Evaluate The Model with k-Fold Cross Validation
Summary
11 Project: Binary Classification Of Sonar Returns
Sonar Object Classification Dataset
Baseline Neural Network Model Performance
Improve Performance With Data Preparation
Tuning Layers and Neurons in The Model
Summary
12 Project: Regression Of Boston House Prices
Boston House Price Dataset
Develop a Baseline Neural Network Model
Lift Performance By Standardizing The Dataset
Tune The Neural Network Topology
Summary
IV Advanced Multilayer Perceptrons and Keras
13 Save Your Models For Later With Serialization
Tutorial Overview
Save Your Neural Network Model to JSON
Save Your Neural Network Model to YAML
Summary
14 Keep The Best Models During Training With Checkpointing
Checkpointing Neural Network Models
Checkpoint Neural Network Model Improvements
Checkpoint Best Neural Network Model Only
Loading a Saved Neural Network Model
Summary
15 Understand Model Behavior During Training By Plotting History
Access Model Training History in Keras
Visualize Model Training History in Keras
Summary
16 Reduce Overfitting With Dropout Regularization
Dropout Regularization For Neural Networks
Dropout Regularization in Keras
Using Dropout on the Visible Layer
Using Dropout on Hidden Layers
Tips For Using Dropout
Summary
17 Lift Performance With Learning Rate Schedules
Learning Rate Schedule For Training Models
Ionosphere Classification Dataset
Time-Based Learning Rate Schedule
Drop-Based Learning Rate Schedule
Tips for Using Learning Rate Schedules
Summary
V Convolutional Neural Networks
18 Crash Course In Convolutional Neural Networks
The Case for Convolutional Neural Networks
Building Blocks of Convolutional Neural Networks
Convolutional Layers
Pooling Layers
Fully Connected Layers
Worked Example
Convolutional Neural Networks Best Practices
Summary
19 Project: Handwritten Digit Recognition
Handwritten Digit Recognition Dataset
Loading the MNIST dataset in Keras
Baseline Model with Multilayer Perceptrons
Simple Convolutional Neural Network for MNIST
Larger Convolutional Neural Network for MNIST
Summary
20 Improve Model Performance With Image Augmentation
Keras Image Augmentation API
Point of Comparison for Image Augmentation
Feature Standardization
ZCA Whitening
Random Rotations
Random Shifts
Random Flips
Saving Augmented Images to File
Tips For Augmenting Image Data with Keras
Summary
21 Project: Object Recognition in Photographs
Photograph Object Recognition Dataset
Loading The CIFAR-10 Dataset in Keras
Simple CNN for CIFAR-10
Larger CNN for CIFAR-10
Extensions To Improve Model Performance
Summary
22 Project: Predict Sentiment From Movie Reviews
Movie Review Sentiment Classification Dataset
Load the IMDB Dataset With Keras
Word Embeddings
Simple Multilayer Perceptron Model
One-Dimensional Convolutional Neural Network
Summary
VI Recurrent Neural Networks
23 Crash Course In Recurrent Neural Networks
Support For Sequences in Neural Networks
Recurrent Neural Networks
Long ShortTerm Memory Networks
Summary
24 Time Series Prediction with Multilayer Perceptrons
Problem Description: Time Series Prediction
Multilayer Perceptron Regression
Multilayer Perceptron Using the Window Method
Summary
25 Time Series Prediction with LSTM Recurrent Neural Networks
LSTM Network For Regression
LSTM For Regression Using the Window Method
LSTM For Regression with Time Steps
LSTM With Memory Between Batches
Stacked LSTMs With Memory Between Batches
Summary
26 Project: Sequence Classification of Movie Reviews
Simple LSTM for Sequence Classification
LSTM For Sequence Classification With Dropout
LSTM and CNN For Sequence Classification
Summary
27 Understanding Stateful LSTM Recurrent Neural Networks
Problem Description: Learn the Alphabet
LSTM for Learning One-Char to One-Char Mapping
LSTM for a Feature Window to One-Char Mapping
LSTM for a Time Step Window to One-Char Mapping
LSTM State Maintained Between Samples Within A Batch
Stateful LSTM for a One-Char to One-Char Mapping
LSTM with Variable Length Input to One-Char Output
Summary
28 Project: Text Generation With Alice in Wonderland
Problem Description: Text Generation
Develop a Small LSTM Recurrent Neural Network
Generating Text with an LSTM Network
Larger LSTM Recurrent Neural Network
Extension Ideas to Improve the Model
Summary

datamodeling 
Pattern Recognition 
35 hours 
This course provides an introduction to the field of pattern recognition and machine learning. It also touches on practical applications in statistics, computer science, signal processing, computer vision, data mining, and bioinformatics. The course is interactive and includes plenty of hands-on exercises, continuous feedback, and testing of the knowledge and skills acquired.
Audience
Data analysts
PhD students, researchers and practitioners
Introduction
Probability theory, model selection, decision and information theory
Probability distributions
Linear models for regression and classification
Neural networks
Kernel methods
Sparse kernel machines
Graphical models
Mixture models and EM
Approximate inference
Sampling methods
Continuous latent variables
Sequential data
Combining models

aiint 
Artificial Intelligence Overview 
7 hours 
This course has been created for managers, solutions architects, innovation officers, CTOs, software architects and everyone who is interested in an overview of applied artificial intelligence and the near-term forecast for its development.
Artificial Intelligence History
Intelligent Agents
Problem Solving
Solving Problems by Searching
Beyond Classical Search
Adversarial Search
Constraint Satisfaction Problems
Knowledge and Reasoning
Logical Agents
FirstOrder Logic
Inference in FirstOrder Logic
Classical Planning
Planning and Acting in the Real World
Knowledge Representation
Uncertain Knowledge and Reasoning
Quantifying Uncertainty
Probabilistic Reasoning
Probabilistic Reasoning over Time
Making Simple Decisions
Making Complex Decisions
Learning
Learning from Examples
Knowledge in Learning
Learning Probabilistic Models
Reinforcement Learning
Communicating, Perceiving, and Acting
Natural Language Processing
Natural Language for Communication
Perception
Robotics
Conclusions
Philosophical Foundations
AI: The Present and Future

Torch 
Torch: Getting started with Machine and Deep Learning 
21 hours 
Torch is an open source machine learning library and a scientific computing framework based on the Lua programming language. It provides a development environment for numerics, machine learning, and computer vision, with a particular emphasis on deep learning and convolutional nets. It is one of the fastest and most flexible frameworks for Machine and Deep Learning and is used by companies such as Facebook, Google, Twitter, NVIDIA, AMD, Intel, and many others.
In this course we cover the principles of Torch, its unique features, and how it can be applied in real-world applications. We step through numerous hands-on exercises throughout, demonstrating and practicing the concepts learned.
By the end of the course, participants will have a thorough understanding of Torch's underlying features and capabilities as well as its role and contribution within the AI space compared to other frameworks and libraries. Participants will have also received the necessary practice to implement Torch in their own projects.
Audience
Software developers and programmers wishing to enable Machine and Deep Learning within their applications
Format of the course
Overview of Machine and Deep Learning
Inclass coding and integration exercises
Test questions sprinkled along the way to check understanding
Introduction to Torch
Like NumPy but with CPU and GPU implementation
Torch's usage in machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking
Installing Torch
Linux, Windows, Mac
Bitmapi and Docker
Installing Torch packages
Using the LuaRocks package manager
Choosing an IDE for Torch
ZeroBrane Studio
Eclipse plugin for Lua
Working with the Lua scripting language and LuaJIT
Lua's integration with C/C++
Lua syntax: datatypes, loops and conditionals, functions, tables, and file I/O
Object orientation and serialization in Torch
Coding exercise
Loading a dataset in Torch
MNIST
CIFAR-10, CIFAR-100
ImageNet
Machine Learning in Torch
Deep Learning
Manual feature extraction vs convolutional networks
Supervised and Unsupervised Learning
Building a neural network with Torch
N-dimensional arrays
Image analysis with Torch
Image package
The Tensor library
Working with the REPL interpreter
Working with databases
Networking and Torch
GPU support in Torch
Integrating Torch
C, Python, and others
Embedding Torch
iOS and Android
Other frameworks and libraries
Facebook's optimized deep-learning modules and containers
Creating your own package
Testing and debugging
Releasing your application
The future of AI and Torch 
neuralnet 
Introduction to the use of neural networks 
7 hours 
The training is aimed at people who want to learn the basics of neural networks and their applications.
The Basics
Can computers think?
Imperative and declarative approaches to problem solving
The purpose of research into artificial intelligence
Definitions of artificial intelligence. The Turing test. Other criteria
The development of the concept of intelligent systems
The most important achievements and directions of development
Neural Networks
The Basics
Concept of neurons and neural networks
A simplified model of the brain
Capabilities of a single neuron
The XOR problem and the distribution of values
The polymorphic nature of the sigmoid function
Other activation functions
Constructing neural networks
Connecting neurons
A neural network as a graph of nodes
Building a network
Neurons
Layers
Weights
Input and output data
Scaling to the range 0 to 1
Normalization
Learning Neural Networks
Backpropagation
The steps of propagation
Network training algorithms
Range of application
Estimation
Limits of approximation capability
Examples
XOR problem
Lotto?
Equities
OCR and image pattern recognition
Other applications
Implementing a neural network model to predict the stock prices of listed companies
Current challenges
Combinatorial explosion and gaming issues
The Turing test revisited
Overconfidence in the capabilities of computers
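The XOR and backpropagation topics above can be illustrated with a minimal sketch: a tiny 2-2-1 sigmoid network trained by backpropagation on the XOR table. This is a hypothetical pure-Python illustration, not course material (the course works with dedicated neural-network tools):

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR is not linearly separable: a single neuron cannot fit it,
# but one hidden layer of two neurons can.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# 2 inputs -> 2 hidden -> 1 output; the last weight in each row is the bias
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def mean_squared_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

error_before = mean_squared_error()

lr = 0.5
for _ in range(10000):
    for x, t in data:
        h, o = forward(x)
        # backward pass: the sigmoid derivative is s * (1 - s)
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        # gradient-descent weight updates
        for j in range(2):
            w_o[j] -= lr * d_o * h[j]
            w_h[j][0] -= lr * d_h[j] * x[0]
            w_h[j][1] -= lr * d_h[j] * x[1]
            w_h[j][2] -= lr * d_h[j]
        w_o[2] -= lr * d_o

error_after = mean_squared_error()
# the mean squared error should drop substantially after training
print(round(error_before, 3), round(error_after, 3))
```

The backward pass is exactly the "steps of propagation" from the outline: compute the output error, push it back through the weights to get each hidden neuron's error, then nudge every weight against its gradient.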

OpenNN 
OpenNN: Implementing neural networks 
14 hours 
OpenNN is an open-source class library written in C++ that implements neural networks for use in machine learning.
In this course we go over the principles of neural networks and use OpenNN to implement a sample application.
Audience
Software developers and programmers wishing to create Deep Learning applications.
Format of the course
Lecture and discussion coupled with handson exercises.
Introduction to OpenNN, Machine Learning and Deep Learning
Downloading OpenNN
Working with Neural Designer
Using Neural Designer for descriptive, diagnostic, predictive and prescriptive analytics
OpenNN architecture
CPU parallelization
OpenNN classes
Data set, neural network, loss index, training strategy, model selection, testing analysis
Vector and matrix templates
Building a neural network application
Choosing a suitable neural network
Formulating the variational problem (loss index)
Solving the reduced function optimization problem (training strategy)
Working with datasets
The data matrix (columns as variables and rows as instances)
Learning tasks
Function regression
Pattern recognition
Compiling with QT Creator
Integrating, testing and debugging your application
The future of neural networks and OpenNN 
rneuralnet 
Neural Network in R 
14 hours 
This course is an introduction to applying neural networks to real-world problems using the R-project software.
Introduction to Neural Networks
What are Neural Networks
The current status of applying neural networks
Neural Networks vs regression models
Supervised and Unsupervised learning
Overview of packages available
nnet, neuralnet and others
Differences between packages and their limitations
Visualizing neural networks
Applying Neural Networks
Concept of neurons and neural networks
A simplified model of the brain
Capabilities of a single neuron
The XOR problem and the distribution of values
The polymorphic nature of the sigmoid function
Other activation functions
Constructing neural networks
Connecting neurons
A neural network as a graph of nodes
Building a network
Neurons
Layers
Weights
Input and output data
Scaling to the range 0 to 1
Normalization
Learning Neural Networks
Backpropagation
The steps of propagation
Network training algorithms
Range of application
Estimation
Limits of approximation capability
Examples
OCR and image pattern recognition
Other applications
Implementing a neural network model to predict the stock prices of listed companies
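The input-scaling step listed in the outline above (mapping each variable into the 0-to-1 range) is min-max normalization. A minimal sketch in Python with made-up values (the course itself does this in R):

```python
def min_max_normalize(values):
    """Rescale a list of numbers linearly into the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # a constant column carries no information: map everything to 0.0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# hypothetical stock prices before feeding them to a network
prices = [120.0, 80.0, 100.0, 160.0]
print(min_max_normalize(prices))  # -> [0.5, 0.0, 0.25, 1.0]
```

Scaling matters because sigmoid units saturate for large inputs; keeping every feature in the same 0-to-1 range keeps the gradients usable.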

d2dbdpa 
From Data to Decision with Big Data and Predictive Analytics 
21 hours 
Audience
If you are trying to make sense of the data you have access to, or want to analyse unstructured data available on the net (such as Twitter, LinkedIn, etc.), this course is for you.
It is mostly aimed at decision makers and people who need to choose what data is worth collecting and what is worth analyzing.
It is not aimed at people configuring the solution, though those people will benefit from the big picture.
Delivery Mode
During the course delegates will be presented with working examples of mostly open source technologies.
Short lectures will be followed by presentations and simple exercises carried out by the participants.
Content and Software used
All software used is updated each time the course is run, so we use the newest available versions.
The course covers the whole process, from obtaining, formatting, processing and analysing data to automating the decision-making process with machine learning.
Quick Overview
Data Sources
Mining Data
Recommender systems
Target Marketing
Datatypes
Structured vs unstructured
Static vs streamed
Attitudinal, behavioural and demographic data
Data-driven vs user-driven analytics
Data validity
Volume, velocity and variety of data
Models
Building models
Statistical Models
Machine learning
Data Classification
Clustering
k-groups, k-means, nearest neighbours
Ant colonies, birds flocking
Predictive Models
Decision trees
Support vector machine
Naive Bayes classification
Neural networks
Markov Model
Regression
Ensemble methods
ROI
Benefit/Cost ratio
Cost of software
Cost of development
Potential benefits
Building Models
Data Preparation (MapReduce)
Data cleansing
Choosing methods
Developing model
Testing Model
Model evaluation
Model deployment and integration
Overview of Open Source and commercial software
Selection of R-project packages
Python libraries
Hadoop and Mahout
Selected Apache projects related to Big Data and Analytics
Selected commercial solutions
Integration with existing software and data sources
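The clustering topics listed above (k-means and its relatives) come down to a simple alternating loop. A bare-bones one-dimensional sketch in pure Python with invented data, not taken from the course material:

```python
def kmeans(points, k, iterations=20):
    """Bare-bones 1-D k-means: alternate assignment and centroid update."""
    # initialise centroids with the first k distinct values
    centroids = sorted(set(points))[:k]
    for _ in range(iterations):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # update step: move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# two obvious groups, near 1.0 and near 9.5
data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
print(sorted(kmeans(data, 2)))  # -> [1.0, 9.5]
```

Real deployments would use an R package or Mahout as covered in the outline; the loop above just shows why k must be chosen up front and why initialisation matters.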

mlintro 
Introduction to Machine Learning 
7 hours 
This training course is for people who would like to apply basic Machine Learning techniques in practical applications.
Audience
Data scientists and statisticians who have some familiarity with machine learning and know how to program in R. The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization. The purpose is to give a practical introduction to machine learning to participants interested in applying the methods at work.
Sector specific examples are used to make the training relevant to the audience.
Naive Bayes
Multinomial models
Bayesian categorical data analysis
Discriminant analysis
Linear regression
Logistic regression
GLM
EM Algorithm
Mixed Models
Additive Models
Classification
KNN
Ridge regression
Clustering
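As a flavour of the KNN topic listed above: classify a new point by a majority vote among its k nearest labelled neighbours. A toy pure-Python sketch with invented 2-D points (the course itself works in R):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.

    `train` is a list of ((x, y), label) pairs; squared Euclidean
    distance preserves the ranking, so the square root is skipped.
    """
    by_distance = sorted(
        train,
        key=lambda item: (item[0][0] - query[0]) ** 2
                         + (item[0][1] - query[1]) ** 2,
    )
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# two well-separated hypothetical classes
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_predict(train, (0.5, 0.5)))  # -> a
print(knn_predict(train, (5.5, 5.5)))  # -> b
```

Because KNN votes over raw distances, it inherits the data-preparation concerns the course emphasises: features on different scales must be normalized before distances are meaningful.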

appliedml 
Applied Machine Learning 
14 hours 
This training course is for people who would like to apply Machine Learning in practical applications.
Audience
This course is for data scientists and statisticians who have some familiarity with statistics and know how to program in R (or Python or another chosen language). The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization.
The purpose is to give participants a practical introduction to applying Machine Learning methods at work.
Sector specific examples are used to make the training relevant to the audience.
Naive Bayes
Multinomial models
Bayesian categorical data analysis
Discriminant analysis
Linear regression
Logistic regression
GLM
EM Algorithm
Mixed Models
Additive Models
Classification
KNN
Bayesian Graphical Models
Factor Analysis (FA)
Principal Component Analysis (PCA)
Independent Component Analysis (ICA)
Support Vector Machines (SVM) for regression and classification
Boosting
Ensemble models
Neural networks
Hidden Markov Models (HMM)
State Space Models
Clustering
