Deep Learning Training in Exeter

Exeter - The Senate

The Senate
Southernhay Gardens
Exeter EX1 1UG
United Kingdom
The business centre is located in Exeter, with excellent transport links; the city is served by two mainline train stations, Exeter Central and Exeter St Davids.

Client Testimonials

Neural Networks Fundamentals using TensorFlow as Example

I was amazed at the standard of this class - I would say that it was university standard.

David Relihan - INTEL R&D IRELAND LIMITED

Neural Networks Fundamentals using TensorFlow as Example

Topic selection. Style of training. Practice orientation

Commerzbank AG

Introduction to Deep Learning

Interesting subject

Wojciech Wilk - Dolby Poland Sp. z o.o.

Machine Learning and Deep Learning

Coverage and depth of topics

Anirban Basu - Travix International

Artificial Neural Networks, Machine Learning and Deep Thinking

flexibility

Werner Philipp - Robert Bosch GmbH

Advanced Deep Learning

Doing exercises on real examples using Keras. Mihaly totally understood our expectations about this training.

Paul Kassis - OSONES

Introduction to Deep Learning

Topic. Very interesting!

Piotr - Dolby Poland Sp. z o.o.

Introduction to Deep Learning

The deep knowledge of the trainer about the topic.

Sebastian Görg - FANUC Europe Corporation

TensorFlow for Image Recognition

Very up-to-date approach and APIs (TensorFlow, Keras, TFLearn) for doing machine learning

Paul Lee - Hong Kong Productivity Council

Machine Learning and Deep Learning

The training provided the right foundation that allows us to further to expand on, by showing how theory and practice go hand in hand. It actually got me more interested in the subject than I was before.

Jean-Paul van Tillo - Travix International

Neural Networks Fundamentals using TensorFlow as Example

The outlook given on the technology: what technologies and processes might become more important in the future, and what the technology can be used for

Commerzbank AG

Neural Networks Fundamentals using TensorFlow as Example

Very good all-round overview. Good background on why TensorFlow operates as it does.

Kieran Conboy - INTEL R&D IRELAND LIMITED

Machine Learning and Deep Learning

We have gained a lot more insight into the subject matter. Some good discussions were had about real subjects within our company

Sebastiaan Holman - Travix International

Advanced Deep Learning

The exercises are sufficiently practical and do not require advanced knowledge of Python to complete.

Alexandre GIRARD - OSONES

Artificial Neural Networks, Machine Learning and Deep Thinking

Very flexible

Frank Ueltzhöffer - Robert Bosch GmbH

Introduction to Deep Learning

The topic is very interesting

Wojciech Baranowski - Dolby Poland Sp. z o.o.

Introduction to Deep Learning

Exercises after each topic were really helpful, even though they were too complicated at the end. In general, the presented material was very interesting and involving! Exercises with image recognition were great.

- Dolby Poland Sp. z o.o.

Neural Networks Fundamentals using TensorFlow as Example

Knowledgeable trainer

Sridhar Voorakkara - INTEL R&D IRELAND LIMITED

Neural Networks Fundamentals using TensorFlow as Example

I liked the opportunities to ask questions and get more in depth explanations of the theory.

Sharon Ruane - INTEL R&D IRELAND LIMITED

Introduction to Deep Learning

The trainer's theoretical knowledge and willingness to solve the problems with the participants after the training

Grzegorz Mianowski - Dolby Poland Sp. z o.o.

Advanced Deep Learning

The global overview of deep learning

Bruno Charbonnier - OSONES

Deep Learning Course Events - Exeter

Code Name Venue Duration Course Date Course Price [Remote / Classroom]
radvml Advanced Machine Learning with R Exeter - The Senate 21 hours Tue, 2018-01-30 09:30 £3900 / £4800
deeplearning1 Introduction to Deep Learning Exeter - The Senate 21 hours Wed, 2018-01-31 09:30 £3900 / £4800
matlabdl Matlab for Deep Learning Exeter - The Senate 14 hours Thu, 2018-02-01 09:30 £2200 / £2800
tsflw2v Natural Language Processing with TensorFlow Exeter - The Senate 35 hours Mon, 2018-02-05 09:30 £6500 / £8000
dlfornlp Deep Learning for NLP (Natural Language Processing) Exeter - The Senate 28 hours Mon, 2018-02-05 09:30 £4400 / £5600
openface OpenFace: Creating Facial Recognition Systems Exeter - The Senate 14 hours Wed, 2018-02-07 09:30 £2200 / £2800
Torch Torch: Getting started with Machine and Deep Learning Exeter - The Senate 21 hours Wed, 2018-02-07 09:30 £3900 / £4800
dlv Deep Learning for Vision Exeter - The Senate 21 hours Mon, 2018-02-12 09:30 £3900 / £4800
tensorflowserving TensorFlow Serving Exeter - The Senate 7 hours Mon, 2018-02-12 09:30 £1100 / £1400
tf101 Deep Learning with TensorFlow Exeter - The Senate 21 hours Mon, 2018-02-12 09:30 £3300 / £4200
OpenNN OpenNN: Implementing neural networks Exeter - The Senate 14 hours Tue, 2018-02-13 09:30 £2600 / £3200
embeddingprojector Embedding Projector: Visualizing your Training Data Exeter - The Senate 14 hours Wed, 2018-02-14 09:30 £2200 / £2800
w2vdl4j NLP with Deeplearning4j Exeter - The Senate 14 hours Thu, 2018-02-15 09:30 £2600 / £3200
tfir TensorFlow for Image Recognition Exeter - The Senate 28 hours Mon, 2018-02-26 09:30 £4400 / £5600
mldt Machine Learning and Deep Learning Exeter - The Senate 21 hours Mon, 2018-02-26 09:30 £3900 / £4800
bspkannmldt Artificial Neural Networks, Machine Learning and Deep Thinking Exeter - The Senate 21 hours Wed, 2018-02-28 09:30 £3300 / £4200
dl4jir DeepLearning4J for Image Recognition Exeter - The Senate 21 hours Mon, 2018-03-05 09:30 £3300 / £4200
pythonadvml Python for Advanced Machine Learning Exeter - The Senate 21 hours Wed, 2018-03-07 09:30 £3300 / £4200
dsstne Amazon DSSTNE: Build a recommendation system Exeter - The Senate 7 hours Thu, 2018-03-08 09:30 £1100 / £1400
facebooknmt Facebook NMT: Setting up a Neural Machine Translation System Exeter - The Senate 7 hours Mon, 2018-03-12 09:30 £1100 / £1400
datamodeling Pattern Recognition Exeter - The Senate 35 hours Mon, 2018-03-12 09:30 £6500 / £8000
Neuralnettf Neural Networks Fundamentals using TensorFlow as Example Exeter - The Senate 28 hours Mon, 2018-03-12 09:30 £5200 / £6400
dladv Advanced Deep Learning Exeter - The Senate 28 hours Tue, 2018-03-13 09:30 £5200 / £6400
mlbankingpython_ Machine Learning for Banking (with Python) Exeter - The Senate 21 hours Wed, 2018-03-14 09:30 £3300 / £4200
Fairseq Fairseq: Setting up a CNN-based machine translation system Exeter - The Senate 7 hours Thu, 2018-03-15 09:30 £1100 / £1400
t2t T2T: Creating Sequence to Sequence models for generalized learning Exeter - The Senate 7 hours Mon, 2018-03-19 09:30 £1100 / £1400
mlbankingr Machine Learning for Banking (with R) Exeter - The Senate 28 hours Mon, 2018-03-19 09:30 £4400 / £5600
undnn Understanding Deep Neural Networks Exeter - The Senate 35 hours Mon, 2018-03-19 09:30 £5500 / £7000
dl4j Mastering Deeplearning4j Exeter - The Senate 21 hours Tue, 2018-03-20 09:30 £3300 / £4200
caffe Deep Learning for Vision with Caffe Exeter - The Senate 21 hours Wed, 2018-03-21 09:30 £3300 / £4200
tpuprogramming TPU Programming: Building Neural Network Applications on Tensor Processing Units Exeter - The Senate 7 hours Thu, 2018-03-22 09:30 £1100 / £1400
deeplearning1 Introduction to Deep Learning Exeter - The Senate 21 hours Mon, 2018-03-26 09:30 £3900 / £4800
matlabdl Matlab for Deep Learning Exeter - The Senate 14 hours Tue, 2018-03-27 09:30 £2200 / £2800
tsflw2v Natural Language Processing with TensorFlow Exeter - The Senate 35 hours Mon, 2018-04-02 09:30 £6500 / £8000
dlfornlp Deep Learning for NLP (Natural Language Processing) Exeter - The Senate 28 hours Tue, 2018-04-03 09:30 £4400 / £5600
tf101 Deep Learning with TensorFlow Exeter - The Senate 21 hours Wed, 2018-04-04 09:30 £3300 / £4200
dlv Deep Learning for Vision Exeter - The Senate 21 hours Wed, 2018-04-04 09:30 £3900 / £4800
radvml Advanced Machine Learning with R Exeter - The Senate 21 hours Wed, 2018-04-04 09:30 £3900 / £4800
Torch Torch: Getting started with Machine and Deep Learning Exeter - The Senate 21 hours Wed, 2018-04-04 09:30 £3900 / £4800
w2vdl4j NLP with Deeplearning4j Exeter - The Senate 14 hours Mon, 2018-04-09 09:30 £2600 / £3200
OpenNN OpenNN: Implementing neural networks Exeter - The Senate 14 hours Tue, 2018-04-10 09:30 £2600 / £3200
embeddingprojector Embedding Projector: Visualizing your Training Data Exeter - The Senate 14 hours Wed, 2018-04-11 09:30 £2200 / £2800
openface OpenFace: Creating Facial Recognition Systems Exeter - The Senate 14 hours Wed, 2018-04-11 09:30 £2200 / £2800
tensorflowserving TensorFlow Serving Exeter - The Senate 7 hours Mon, 2018-04-16 09:30 £1100 / £1400
MicrosoftCognitiveToolkit Microsoft Cognitive Toolkit 2.x Exeter - The Senate 21 hours Wed, 2018-04-18 09:30 £3300 / £4200
mldt Machine Learning and Deep Learning Exeter - The Senate 21 hours Mon, 2018-04-23 09:30 £3900 / £4800
bspkannmldt Artificial Neural Networks, Machine Learning and Deep Thinking Exeter - The Senate 21 hours Tue, 2018-04-24 09:30 £3300 / £4200
dsstne Amazon DSSTNE: Build a recommendation system Exeter - The Senate 7 hours Fri, 2018-04-27 09:30 £1100 / £1400
facebooknmt Facebook NMT: Setting up a Neural Machine Translation System Exeter - The Senate 7 hours Mon, 2018-04-30 09:30 £1100 / £1400
dl4jir DeepLearning4J for Image Recognition Exeter - The Senate 21 hours Mon, 2018-04-30 09:30 £3300 / £4200
Fairseq Fairseq: Setting up a CNN-based machine translation system Exeter - The Senate 7 hours Fri, 2018-05-04 09:30 £1100 / £1400
pythonadvml Python for Advanced Machine Learning Exeter - The Senate 21 hours Mon, 2018-05-07 09:30 £3300 / £4200
datamodeling Pattern Recognition Exeter - The Senate 35 hours Mon, 2018-05-07 09:30 £6500 / £8000
dladv Advanced Deep Learning Exeter - The Senate 28 hours Mon, 2018-05-07 09:30 £5200 / £6400
tfir TensorFlow for Image Recognition Exeter - The Senate 28 hours Tue, 2018-05-08 09:30 £4400 / £5600
Neuralnettf Neural Networks Fundamentals using TensorFlow as Example Exeter - The Senate 28 hours Tue, 2018-05-08 09:30 £5200 / £6400
t2t T2T: Creating Sequence to Sequence models for generalized learning Exeter - The Senate 7 hours Thu, 2018-05-10 09:30 £1100 / £1400
tpuprogramming TPU Programming: Building Neural Network Applications on Tensor Processing Units Exeter - The Senate 7 hours Thu, 2018-05-10 09:30 £1100 / £1400
dl4j Mastering Deeplearning4j Exeter - The Senate 21 hours Mon, 2018-05-14 09:30 £3300 / £4200
caffe Deep Learning for Vision with Caffe Exeter - The Senate 21 hours Mon, 2018-05-14 09:30 £3300 / £4200
mlbankingr Machine Learning for Banking (with R) Exeter - The Senate 28 hours Tue, 2018-05-15 09:30 £4400 / £5600
deeplearning1 Introduction to Deep Learning Exeter - The Senate 21 hours Wed, 2018-05-16 09:30 £3900 / £4800
matlabdl Matlab for Deep Learning Exeter - The Senate 14 hours Wed, 2018-05-16 09:30 £2200 / £2800
undnn Understanding Deep Neural Networks Exeter - The Senate 35 hours Mon, 2018-05-21 09:30 £5500 / £7000
mlbankingpython_ Machine Learning for Banking (with Python) Exeter - The Senate 21 hours Wed, 2018-05-23 09:30 £3300 / £4200
tf101 Deep Learning with TensorFlow Exeter - The Senate 21 hours Tue, 2018-05-29 09:30 £3300 / £4200
dlv Deep Learning for Vision Exeter - The Senate 21 hours Tue, 2018-05-29 09:30 £3900 / £4800
radvml Advanced Machine Learning with R Exeter - The Senate 21 hours Tue, 2018-05-29 09:30 £3900 / £4800
dlfornlp Deep Learning for NLP (Natural Language Processing) Exeter - The Senate 28 hours Tue, 2018-05-29 09:30 £4400 / £5600
Torch Torch: Getting started with Machine and Deep Learning Exeter - The Senate 21 hours Tue, 2018-05-29 09:30 £3900 / £4800
w2vdl4j NLP with Deeplearning4j Exeter - The Senate 14 hours Wed, 2018-05-30 09:30 £2600 / £3200
OpenNN OpenNN: Implementing neural networks Exeter - The Senate 14 hours Wed, 2018-05-30 09:30 £2600 / £3200
openface OpenFace: Creating Facial Recognition Systems Exeter - The Senate 14 hours Thu, 2018-05-31 09:30 £2200 / £2800
embeddingprojector Embedding Projector: Visualizing your Training Data Exeter - The Senate 14 hours Mon, 2018-06-04 09:30 £2200 / £2800
tsflw2v Natural Language Processing with TensorFlow Exeter - The Senate 35 hours Mon, 2018-06-04 09:30 £6500 / £8000
tensorflowserving TensorFlow Serving Exeter - The Senate 7 hours Tue, 2018-06-05 09:30 £1100 / £1400
mldt Machine Learning and Deep Learning Exeter - The Senate 21 hours Wed, 2018-06-13 09:30 £3900 / £4800
MicrosoftCognitiveToolkit Microsoft Cognitive Toolkit 2.x Exeter - The Senate 21 hours Wed, 2018-06-13 09:30 £3300 / £4200
bspkannmldt Artificial Neural Networks, Machine Learning and Deep Thinking Exeter - The Senate 21 hours Mon, 2018-06-18 09:30 £3300 / £4200
dsstne Amazon DSSTNE: Build a recommendation system Exeter - The Senate 7 hours Tue, 2018-06-19 09:30 £1100 / £1400
facebooknmt Facebook NMT: Setting up a Neural Machine Translation System Exeter - The Senate 7 hours Wed, 2018-06-20 09:30 £1100 / £1400
dl4jir DeepLearning4J for Image Recognition Exeter - The Senate 21 hours Mon, 2018-06-25 09:30 £3300 / £4200
Fairseq Fairseq: Setting up a CNN-based machine translation system Exeter - The Senate 7 hours Tue, 2018-06-26 09:30 £1100 / £1400
t2t T2T: Creating Sequence to Sequence models for generalized learning Exeter - The Senate 7 hours Fri, 2018-06-29 09:30 £1100 / £1400
tpuprogramming TPU Programming: Building Neural Network Applications on Tensor Processing Units Exeter - The Senate 7 hours Fri, 2018-06-29 09:30 £1100 / £1400
pythonadvml Python for Advanced Machine Learning Exeter - The Senate 21 hours Mon, 2018-07-02 09:30 £3300 / £4200
datamodeling Pattern Recognition Exeter - The Senate 35 hours Mon, 2018-07-02 09:30 £6500 / £8000
dladv Advanced Deep Learning Exeter - The Senate 28 hours Mon, 2018-07-02 09:30 £5200 / £6400
Neuralnettf Neural Networks Fundamentals using TensorFlow as Example Exeter - The Senate 28 hours Mon, 2018-07-02 09:30 £5200 / £6400
tfir TensorFlow for Image Recognition Exeter - The Senate 28 hours Tue, 2018-07-03 09:30 £4400 / £5600
dl4j Mastering Deeplearning4j Exeter - The Senate 21 hours Wed, 2018-07-04 09:30 £3300 / £4200
caffe Deep Learning for Vision with Caffe Exeter - The Senate 21 hours Wed, 2018-07-04 09:30 £3300 / £4200
matlabdl Matlab for Deep Learning Exeter - The Senate 14 hours Thu, 2018-07-05 09:30 £2200 / £2800
mlbankingr Machine Learning for Banking (with R) Exeter - The Senate 28 hours Mon, 2018-07-09 09:30 £4400 / £5600
deeplearning1 Introduction to Deep Learning Exeter - The Senate 21 hours Wed, 2018-07-11 09:30 £3900 / £4800
undnn Understanding Deep Neural Networks Exeter - The Senate 35 hours Mon, 2018-07-16 09:30 £5500 / £7000
mlbankingpython_ Machine Learning for Banking (with Python) Exeter - The Senate 21 hours Wed, 2018-07-18 09:30 £3300 / £4200
OpenNN OpenNN: Implementing neural networks Exeter - The Senate 14 hours Thu, 2018-07-19 09:30 £2600 / £3200
openface OpenFace: Creating Facial Recognition Systems Exeter - The Senate 14 hours Mon, 2018-07-23 09:30 £2200 / £2800
dlfornlp Deep Learning for NLP (Natural Language Processing) Exeter - The Senate 28 hours Mon, 2018-07-23 09:30 £4400 / £5600

Course Outlines

Code Name Duration Outline
mldt Machine Learning and Deep Learning 21 hours

This course covers AI, with an emphasis on Machine Learning and Deep Learning.

Machine learning

Introduction to Machine Learning

  • Applications of machine learning
  • Supervised Versus Unsupervised Learning
  • Machine Learning Algorithms
    • Regression
    • Classification
    • Clustering
    • Recommender System
    • Anomaly Detection
    • Reinforcement Learning

Regression

  • Simple & Multiple Regression
    • Least Square Method
    • Estimating the Coefficients
    • Assessing the Accuracy of the Coefficient Estimates
    • Assessing the Accuracy of the Model
    • Post Estimation Analysis
    • Other Considerations in the Regression Models
    • Qualitative Predictors
    • Extensions of the Linear Models
    • Potential Problems
    • Bias-variance trade-off [under-fitting/over-fitting] for regression models

Resampling Methods

  • Cross-Validation
  • The Validation Set Approach
  • Leave-One-Out Cross-Validation
  • k-Fold Cross-Validation
  • Bias-Variance Trade-Off for k-Fold
  • The Bootstrap
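
To make the k-fold idea concrete, here is a minimal sketch in plain Python (an illustration only, not part of the course materials): the indices are shuffled once, dealt into k folds, and each fold serves as the held-out validation set exactly once.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 once and deal them into k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(n, k):
    """Yield (train, test) index lists; each index is tested exactly once."""
    folds = k_fold_indices(n, k)
    for i, test in enumerate(folds):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

# Every index ends up in exactly one test fold:
tested = sorted(i for _, test in cross_validate(10, 5) for i in test)
```

The same splitting scheme underlies leave-one-out cross-validation, which is simply the case k = n.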

Model Selection and Regularization

  • Subset Selection [Best Subset Selection, Stepwise Selection, Choosing the Optimal Model]
  • Shrinkage Methods/ Regularization [Ridge Regression, Lasso & Elastic Net]
  • Selecting the Tuning Parameter
  • Dimension Reduction Methods
    • Principal Components Regression
    • Partial Least Squares

Classification

  • Logistic Regression

    • The Logistic Model cost function

    • Estimating the Coefficients

    • Making Predictions

    • Odds Ratio

    • Performance Evaluation Metrics [Sensitivity/Specificity/PPV/NPV, Precision, ROC curve, etc.]

    • Multiple Logistic Regression

    • Logistic Regression for >2 Response Classes

    • Regularized Logistic Regression

  • Linear Discriminant Analysis

    • Using Bayes’ Theorem for Classification

    • Linear Discriminant Analysis for p=1

    • Linear Discriminant Analysis for p >1

  • Quadratic Discriminant Analysis

  • K-Nearest Neighbors

  • Classification with Non-linear Decision Boundaries

  • Support Vector Machines

    • Optimization Objective

    • The Maximal Margin Classifier

    • Kernels

    • One-Versus-One Classification

    • One-Versus-All Classification

  • Comparison of Classification Methods
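
The logistic regression topics above (the cost function, estimating the coefficients, making predictions) can be sketched in a few lines of plain Python. This toy example, not taken from the course materials, fits a small linearly separable dataset by stochastic gradient descent.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=500):
    """Fit weights and bias by stochastic gradient descent on the logistic cost."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # derivative of the cross-entropy cost w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5)

# Tiny separable dataset: the label is 1 only when both features are 1
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, b = train_logistic(X, y)
preds = [predict(w, b, xi) for xi in X]
```

Regularized logistic regression would add a penalty term (ridge or lasso style) to the same gradient update.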

Introduction to Deep Learning

ANN Structure

  • Biological neurons and artificial neurons

  • Non-linear Hypothesis

  • Model Representation

  • Examples & Intuitions

  • Transfer Function/ Activation Functions

  • Typical classes of network architectures

Feedforward ANN

  • Structures of Multi-layer feed forward networks

  • Back propagation algorithm

  • Back propagation - training and convergence

  • Functional approximation with back propagation

  • Practical and design issues of back propagation learning
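
The back propagation algorithm listed above reduces to the chain rule. The sketch below (an illustration, not course material) computes the analytic gradient for a single sigmoid neuron and verifies it against a central finite-difference estimate, a standard sanity check for back propagation implementations.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x, t):
    """Squared error of a single sigmoid neuron for one training example."""
    y = sigmoid(w * x)
    return 0.5 * (y - t) ** 2

def backprop_grad(w, x, t):
    """Analytic dL/dw via the chain rule: dL/dy * dy/dz * dz/dw."""
    y = sigmoid(w * x)
    return (y - t) * y * (1.0 - y) * x

def numeric_grad(w, x, t, eps=1e-6):
    """Central finite-difference estimate of the same gradient."""
    return (loss(w + eps, x, t) - loss(w - eps, x, t)) / (2 * eps)

g_analytic = backprop_grad(0.7, 1.5, 1.0)
g_numeric = numeric_grad(0.7, 1.5, 1.0)
```

In a multi-layer network the same chain rule is applied layer by layer, propagating the error term backwards from the output.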

Deep Learning

  • Artificial Intelligence & Deep Learning

  • Softmax Regression

  • Self-Taught Learning

  • Deep Networks

  • Demos and Applications

Lab:

Getting Started with R

  • Introduction to R

  • Basic Commands & Libraries

  • Data Manipulation

  • Importing & Exporting data

  • Graphical and Numerical Summaries

  • Writing functions

Regression

  • Simple & Multiple Linear Regression

  • Interaction Terms

  • Non-linear Transformations

  • Dummy variable regression

  • Cross-Validation and the Bootstrap

  • Subset selection methods

  • Penalization [Ridge, Lasso, Elastic Net]

Classification

  • Logistic Regression, LDA, QDA, and KNN

  • Resampling & Regularization

  • Support Vector Machine

Note:

  • For ML algorithms, case studies will be used to discuss their application, advantages & potential issues.

  • Analysis of different data sets will be performed using R.

matlabdl Matlab for Deep Learning 14 hours

In this instructor-led, live training, participants will learn how to use Matlab to design, build, and visualize a convolutional neural network for image recognition.

By the end of this training, participants will be able to:

  • Build a deep learning model
  • Automate data labeling
  • Work with models from Caffe and TensorFlow-Keras
  • Train data using multiple GPUs, the cloud, or clusters

Audience

  • Developers
  • Engineers
  • Domain experts

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

deepmclrg Machine Learning & Deep Learning with Python and R 14 hours

MACHINE LEARNING

1: Introducing Machine Learning

  • The origins of machine learning
  • Uses and abuses of machine learning
  • Ethical considerations
  • How do machines learn?
  • Abstraction and knowledge representation
  • Generalization
  • Assessing the success of learning
  • Steps to apply machine learning to your data
  • Choosing a machine learning algorithm
  • Thinking about the input data
  • Thinking about types of machine learning algorithms
  • Matching your data to an appropriate algorithm
  • Using R for machine learning
  • Installing and loading R packages
  • Installing an R package
  • Installing a package using the point-and-click interface
  • Loading an R package
  • Summary

2: Managing and Understanding Data

  • R data structures
  • Vectors
  • Factors
  • Lists
  • Data frames
  • Matrices and arrays
  • Managing data with R
  • Saving and loading R data structures
  • Importing and saving data from CSV files
  • Importing data from SQL databases
  • Exploring and understanding data
  • Exploring the structure of data
  • Exploring numeric variables
  • Measuring the central tendency – mean and median
  • Measuring spread – quartiles and the five-number summary
  • Visualizing numeric variables – boxplots
  • Visualizing numeric variables – histograms
  • Understanding numeric data – uniform and normal distributions
  • Measuring spread – variance and standard deviation
  • Exploring categorical variables
  • Measuring the central tendency – the mode
  • Exploring relationships between variables
  • Visualizing relationships – scatterplots
  • Examining relationships – two-way cross-tabulations
  • Summary

3: Lazy Learning – Classification Using Nearest Neighbors

  • Understanding classification using nearest neighbors
  • The kNN algorithm
  • Calculating distance
  • Choosing an appropriate k
  • Preparing data for use with kNN
  • Why is the kNN algorithm lazy?
  • Diagnosing breast cancer with the kNN algorithm
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
  • Transformation – normalizing numeric data
  • Data preparation – creating training and test datasets
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Transformation – z-score standardization
  • Testing alternative values of k
  • Summary
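
The kNN algorithm above fits in a few lines of plain Python. This toy example (not taken from the course materials) stands in for the breast-cancer case study, with made-up two-feature points that already share a scale, so the normalization step is omitted.

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify query by majority vote among its k nearest training points."""
    ranked = sorted(zip(train, labels),
                    key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Made-up two-feature points standing in for normalized measurements
train = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
labels = ["benign", "benign", "benign",
          "malignant", "malignant", "malignant"]
pred = knn_predict(train, labels, [2, 2], k=3)
```

The algorithm is "lazy" in exactly the sense the chapter describes: all work happens at prediction time, since there is no training phase at all.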

4: Probabilistic Learning – Classification Using Naive Bayes

  • Understanding naive Bayes
  • Basic concepts of Bayesian methods
  • Probability
  • Joint probability
  • Conditional probability with Bayes' theorem
  • The naive Bayes algorithm
  • The naive Bayes classification
  • The Laplace estimator
  • Using numeric features with naive Bayes
  • Example – filtering mobile phone spam with the naive Bayes algorithm
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
  • Data preparation – processing text data for analysis
  • Data preparation – creating training and test datasets
  • Visualizing text data – word clouds
  • Data preparation – creating indicator features for frequent words
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Summary

5: Divide and Conquer – Classification Using Decision Trees and Rules

  • Understanding decision trees
  • Divide and conquer
  • The C5.0 decision tree algorithm
  • Choosing the best split
  • Pruning the decision tree
  • Example – identifying risky bank loans using C5.0 decision trees
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
  • Data preparation – creating random training and test datasets
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Boosting the accuracy of decision trees
  • Making some mistakes more costly than others
  • Understanding classification rules
  • Separate and conquer
  • The One Rule algorithm
  • The RIPPER algorithm
  • Rules from decision trees
  • Example – identifying poisonous mushrooms with rule learners
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Summary

6: Forecasting Numeric Data – Regression Methods

  • Understanding regression
  • Simple linear regression
  • Ordinary least squares estimation
  • Correlations
  • Multiple linear regression
  • Example – predicting medical expenses using linear regression
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
  • Exploring relationships among features – the correlation matrix
  • Visualizing relationships among features – the scatterplot matrix
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Model specification – adding non-linear relationships
  • Transformation – converting a numeric variable to a binary indicator
  • Model specification – adding interaction effects
  • Putting it all together – an improved regression model
  • Understanding regression trees and model trees
  • Adding regression to trees
  • Example – estimating the quality of wines with regression trees and model trees
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
    • Step 3 – training a model on the data
  • Visualizing decision trees
    • Step 4 – evaluating model performance
  • Measuring performance with mean absolute error
    • Step 5 – improving model performance
  • Summary
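
Ordinary least squares for simple linear regression has a closed form built from the centered sums. The sketch below (an illustration, not course material) recovers the slope and intercept exactly for points that lie on a line.

```python
from statistics import mean

def ols_fit(x, y):
    """Ordinary least squares for simple linear regression y = a + b*x."""
    mx, my = mean(x), mean(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Points lying exactly on y = 2x + 1, so the fit recovers a = 1, b = 2
x = [0, 1, 2, 3, 4]
y = [1, 3, 5, 7, 9]
a, b = ols_fit(x, y)
```

Multiple linear regression generalizes the same idea to several predictors, solved via the normal equations or, in practice, R's `lm()`.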

7: Black Box Methods – Neural Networks and Support Vector Machines

  • Understanding neural networks
  • From biological to artificial neurons
  • Activation functions
  • Network topology
  • The number of layers
  • The direction of information travel
  • The number of nodes in each layer
  • Training neural networks with backpropagation
  • Modeling the strength of concrete with ANNs
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Understanding Support Vector Machines
  • Classification with hyperplanes
  • Finding the maximum margin
  • The case of linearly separable data
  • The case of non-linearly separable data
  • Using kernels for non-linear spaces
  • Performing OCR with SVMs
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Summary

8: Finding Patterns – Market Basket Analysis Using Association Rules

  • Understanding association rules
  • The Apriori algorithm for association rule learning
  • Measuring rule interest – support and confidence
  • Building a set of rules with the Apriori principle
  • Example – identifying frequently purchased groceries with association rules
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
  • Data preparation – creating a sparse matrix for transaction data
  • Visualizing item support – item frequency plots
  • Visualizing transaction data – plotting the sparse matrix
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Sorting the set of association rules
  • Taking subsets of association rules
  • Saving association rules to a file or data frame
  • Summary

9: Finding Groups of Data – Clustering with k-means

  • Understanding clustering
  • Clustering as a machine learning task
  • The k-means algorithm for clustering
  • Using distance to assign and update clusters
  • Choosing the appropriate number of clusters
  • Finding teen market segments using k-means clustering
    • Step 1 – collecting data
    • Step 2 – exploring and preparing the data
  • Data preparation – dummy coding missing values
  • Data preparation – imputing missing values
    • Step 3 – training a model on the data
    • Step 4 – evaluating model performance
    • Step 5 – improving model performance
  • Summary
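
The k-means loop described above (assign each point to its nearest centroid, then recompute the means) can be sketched as follows; a toy illustration with made-up points, not course material.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: assign to nearest centroid, then recompute means."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # New centroid = mean of its cluster (kept in place if a cluster empties)
        centroids = [
            [sum(c) / len(pts) for c in zip(*pts)] if pts else centroids[i]
            for i, pts in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of three points each
points = [[1, 1], [1, 2], [2, 1], [9, 9], [9, 10], [10, 9]]
centroids, clusters = kmeans(points, k=2)
```

Choosing the appropriate k, as the chapter notes, is the hard part; the assignment/update loop itself is this simple.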

10: Evaluating Model Performance

  • Measuring performance for classification
  • Working with classification prediction data in R
  • A closer look at confusion matrices
  • Using confusion matrices to measure performance
  • Beyond accuracy – other measures of performance
  • The kappa statistic
  • Sensitivity and specificity
  • Precision and recall
  • The F-measure
  • Visualizing performance tradeoffs
  • ROC curves
  • Estimating future performance
  • The holdout method
  • Cross-validation
  • Bootstrap sampling
  • Summary
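
The measures listed above all derive from the four confusion-matrix counts. The sketch below (illustrative only, with made-up counts) computes them in plain Python.

```python
def classification_metrics(tp, fp, fn, tn):
    """Derive the standard measures from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # also called PPV
    recall = tp / (tp + fn)             # also called sensitivity
    specificity = tn / (tn + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f_measure

# Made-up counts: 90 TP, 10 FP, 30 FN, 870 TN
acc, prec, rec, spec, f1 = classification_metrics(90, 10, 30, 870)
```

Note how accuracy (0.96 here) can look strong while recall (0.75) tells a different story; this is exactly the "beyond accuracy" point made above.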

11: Improving Model Performance

  • Tuning stock models for better performance
  • Using caret for automated parameter tuning
  • Creating a simple tuned model
  • Customizing the tuning process
  • Improving model performance with meta-learning
  • Understanding ensembles
  • Bagging
  • Boosting
  • Random forests
  • Training random forests
  • Evaluating random forest performance
  • Summary

DEEP LEARNING with R

1: Getting Started with Deep Learning

  • What is deep learning?
  • Conceptual overview of neural networks
  • Deep neural networks
  • R packages for deep learning
  • Setting up reproducible results
  • Neural networks
  • The deepnet package
  • The darch package
  • The H2O package
  • Connecting R and H2O
  • Initializing H2O
  • Linking datasets to an H2O cluster
  • Summary

2: Training a Prediction Model

  • Neural networks in R
  • Building a neural network
  • Generating predictions from a neural network
  • The problem of overfitting data – the consequences explained
  • Use case – build and apply a neural network
  • Summary

3: Preventing Overfitting

  • L1 penalty
  • L1 penalty in action
  • L2 penalty
  • L2 penalty in action
  • Weight decay (L2 penalty in neural networks)
  • Ensembles and model averaging
  • Use case – improving out-of-sample model performance using dropout
  • Summary

4: Identifying Anomalous Data

  • Getting started with unsupervised learning
  • How do auto-encoders work?
  • Regularized auto-encoders
  • Penalized auto-encoders
  • Denoising auto-encoders
  • Training an auto-encoder in R
  • Use case – building and applying an auto-encoder model
  • Fine-tuning auto-encoder models
  • Summary

5: Training Deep Prediction Models

  • Getting started with deep feedforward neural networks
  • Common activation functions – rectifiers, hyperbolic tangent, and maxout
  • Picking hyperparameters
  • Training and predicting new data from a deep neural network
  • Use case – training a deep neural network for automatic classification
  • Working with model results
  • Summary

6: Tuning and Optimizing Models

  • Dealing with missing data
  • Solutions for models with low accuracy
  • Grid search
  • Random search
  • Summary

DEEP LEARNING WITH PYTHON

I Introduction

1 Welcome

  • Deep Learning The Wrong Way
  • Deep Learning With Python
  • Summary

II Background

2 Introduction to Theano

  • What is Theano?
  • How to Install Theano
  • Simple Theano Example
  • Extensions and Wrappers for Theano
  • More Theano Resources
  • Summary

3 Introduction to TensorFlow

  • What is TensorFlow?
  • How to Install TensorFlow
  • Your First Examples in TensorFlow
  • Simple TensorFlow Example
  • More Deep Learning Models
  • Summary

4 Introduction to Keras

  • What is Keras?
  • How to Install Keras
  • Theano and TensorFlow Backends for Keras
  • Build Deep Learning Models with Keras
  • Summary

5 Project: Develop Large Models on GPUs Cheaply In the Cloud

  • Project Overview
  • Setup Your AWS Account
  • Launch Your Server Instance
  • Login, Configure and Run
  • Build and Run Models on AWS
  • Close Your EC2 Instance
  • Tips and Tricks for Using Keras on AWS
  • More Resources For Deep Learning on AWS
  • Summary

III Multilayer Perceptrons

6 Crash Course In Multilayer Perceptrons

  • Crash Course Overview
  • Multilayer Perceptrons
  • Neurons
  • Networks of Neurons
  • Training Networks
  • Summary

7 Develop Your First Neural Network With Keras

  • Tutorial Overview
  • Pima Indians Onset of Diabetes Dataset
  • Load Data
  • Define Model
  • Compile Model
  • Fit Model
  • Evaluate Model
  • Tie It All Together
  • Summary

8 Evaluate The Performance of Deep Learning Models

  • Empirically Evaluate Network Configurations
  • Data Splitting
  • Manual k-Fold Cross Validation
  • Summary
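
Manual k-fold cross validation boils down to splitting the sample indices into k disjoint folds, holding each fold out in turn. A self-contained sketch (the function name is hypothetical; scikit-learn's `KFold` does the same job):

```python
def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for manual k-fold cross-validation.
    The first n % k folds get one extra sample so all n indices are used."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

for train, test in kfold_indices(6, 3):
    print(test)   # [0, 1] then [2, 3] then [4, 5]
```

Each iteration trains the model on `train` and evaluates it on `test`; the k scores are then averaged.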

9 Use Keras Models With Scikit-Learn For General Machine Learning

  • Overview
  • Evaluate Models with Cross Validation
  • Grid Search Deep Learning Model Parameters
  • Summary

10 Project: Multiclass Classification Of Flower Species

  • Iris Flowers Classification Dataset
  • Import Classes and Functions
  • Initialize Random Number Generator
  • Load The Dataset
  • Encode The Output Variable
  • Define The Neural Network Model
  • Evaluate The Model with k-Fold Cross Validation
  • Summary
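
Encoding the output variable for a multiclass problem means turning each class label into a one-hot vector so it matches a softmax output layer. A minimal sketch (the helper is hypothetical; Keras provides `to_categorical` for the same purpose):

```python
def one_hot(labels):
    """Map class labels to one-hot vectors; classes are indexed in sorted order."""
    classes = sorted(set(labels))
    index = {c: i for i, c in enumerate(classes)}
    return [[1 if index[lab] == j else 0 for j in range(len(classes))]
            for lab in labels]

print(one_hot(["setosa", "versicolor", "setosa"]))
# [[1, 0], [0, 1], [1, 0]]
```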

11 Project: Binary Classification Of Sonar Returns

  • Sonar Object Classification Dataset
  • Baseline Neural Network Model Performance
  • Improve Performance With Data Preparation
  • Tuning Layers and Neurons in The Model
  • Summary

12 Project: Regression Of Boston House Prices

  • Boston House Price Dataset
  • Develop a Baseline Neural Network Model
  • Lift Performance By Standardizing The Dataset
  • Tune The Neural Network Topology
  • Summary

IV Advanced Multilayer Perceptrons and Keras

13 Save Your Models For Later With Serialization

  • Tutorial Overview
  • Save Your Neural Network Model to JSON
  • Save Your Neural Network Model to YAML
  • Summary

14 Keep The Best Models During Training With Checkpointing

  • Checkpointing Neural Network Models
  • Checkpoint Neural Network Model Improvements
  • Checkpoint Best Neural Network Model Only
  • Loading a Saved Neural Network Model
  • Summary

15 Understand Model Behavior During Training By Plotting History

  • Access Model Training History in Keras
  • Visualize Model Training History in Keras
  • Summary

16 Reduce Overfitting With Dropout Regularization

  • Dropout Regularization For Neural Networks
  • Dropout Regularization in Keras
  • Using Dropout on the Visible Layer
  • Using Dropout on Hidden Layers
  • Tips For Using Dropout
  • Summary

17 Lift Performance With Learning Rate Schedules

  • Learning Rate Schedule For Training Models
  • Ionosphere Classification Dataset
  • Time-Based Learning Rate Schedule
  • Drop-Based Learning Rate Schedule
  • Tips for Using Learning Rate Schedules
  • Summary
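
The two schedules in this chapter follow simple formulas: time-based decay divides the initial rate by a factor that grows with the epoch, while drop-based decay scales the rate by a fixed factor every few epochs. A sketch (these functions would be wired into training via a callback such as Keras's `LearningRateScheduler`):

```python
import math

def time_based(lr0, decay, epoch):
    """Time-based schedule: lr = lr0 / (1 + decay * epoch)."""
    return lr0 / (1.0 + decay * epoch)

def drop_based(lr0, drop, epochs_drop, epoch):
    """Drop-based schedule: scale lr by `drop` every `epochs_drop` epochs."""
    return lr0 * math.pow(drop, math.floor(epoch / epochs_drop))

print(time_based(0.1, 0.01, 10))      # ≈ 0.0909
print(drop_based(0.1, 0.5, 10, 25))   # ≈ 0.025
```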

V Convolutional Neural Networks

18 Crash Course In Convolutional Neural Networks

  • The Case for Convolutional Neural Networks
  • Building Blocks of Convolutional Neural Networks
  • Convolutional Layers
  • Pooling Layers
  • Fully Connected Layers
  • Worked Example
  • Convolutional Neural Networks Best Practices
  • Summary
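
The two core building blocks above, convolution and pooling, can be demonstrated on a tiny matrix in plain Python (an illustrative sketch; the kernel is a hypothetical toy edge detector, and as in most DL libraries the "convolution" is really a cross-correlation):

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution: slide the kernel over the image,
    summing elementwise products; output shrinks by kernel size - 1."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(img) - kh + 1
    out_w = len(img[0]) - kw + 1
    return [[sum(img[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def maxpool2x2(fmap):
    """2x2 max pooling with stride 2: keep the max of each 2x2 block."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

img = [[1, 2, 0, 1],
       [0, 1, 3, 2],
       [2, 1, 0, 1],
       [1, 3, 2, 0]]
edge = [[1, -1]]                 # toy horizontal edge-detecting kernel
fmap = conv2d_valid(img, edge)
print(fmap)                      # [[-1, 2, -1], [-1, -2, 1], [1, 1, -1], [-2, 1, 2]]
print(maxpool2x2(fmap))          # [[2], [1]]
```

A fully connected layer would then flatten the pooled feature maps and feed them to ordinary dense neurons.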

19 Project: Handwritten Digit Recognition

  • Handwritten Digit Recognition Dataset
  • Loading the MNIST dataset in Keras
  • Baseline Model with Multilayer Perceptrons
  • Simple Convolutional Neural Network for MNIST
  • Larger Convolutional Neural Network for MNIST
  • Summary

20 Improve Model Performance With Image Augmentation

  • Keras Image Augmentation API
  • Point of Comparison for Image Augmentation
  • Feature Standardization
  • ZCA Whitening
  • Random Rotations
  • Random Shifts
  • Random Flips
  • Saving Augmented Images to File
  • Tips For Augmenting Image Data with Keras
  • Summary

21 Project: Object Recognition in Photographs

  • Photograph Object Recognition Dataset
  • Loading The CIFAR-10 Dataset in Keras
  • Simple CNN for CIFAR-10
  • Larger CNN for CIFAR-10
  • Extensions To Improve Model Performance
  • Summary

22 Project: Predict Sentiment From Movie Reviews

  • Movie Review Sentiment Classification Dataset
  • Load the IMDB Dataset With Keras
  • Word Embeddings
  • Simple Multilayer Perceptron Model
  • One-Dimensional Convolutional Neural Network
  • Summary

VI Recurrent Neural Networks

23 Crash Course In Recurrent Neural Networks

  • Support For Sequences in Neural Networks
  • Recurrent Neural Networks
  • Long Short-Term Memory Networks
  • Summary

24 Time Series Prediction with Multilayer Perceptrons

  • Problem Description: Time Series Prediction
  • Multilayer Perceptron Regression
  • Multilayer Perceptron Using the Window Method
  • Summary
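
The window method reframes a time series as a supervised learning problem: each sample is a fixed window of past values and the target is the value that follows. A sketch (the function name is hypothetical; the sample values are arbitrary):

```python
def make_windows(series, window):
    """Window method: each sample is `window` past values,
    the target is the next value in the series."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return X, y

X, y = make_windows([112, 118, 132, 129, 121], 3)
print(X)  # [[112, 118, 132], [118, 132, 129]]
print(y)  # [129, 121]
```

The resulting X and y can be fed to an MLP regressor exactly like any tabular dataset.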

25 Time Series Prediction with LSTM Recurrent Neural Networks

  • LSTM Network For Regression
  • LSTM For Regression Using the Window Method
  • LSTM For Regression with Time Steps
  • LSTM With Memory Between Batches
  • Stacked LSTMs With Memory Between Batches
  • Summary

26 Project: Sequence Classification of Movie Reviews

  • Simple LSTM for Sequence Classification
  • LSTM For Sequence Classification With Dropout
  • LSTM and CNN For Sequence Classification
  • Summary

27 Understanding Stateful LSTM Recurrent Neural Networks

  • Problem Description: Learn the Alphabet
  • LSTM for Learning One-Char to One-Char Mapping
  • LSTM for a Feature Window to One-Char Mapping
  • LSTM for a Time Step Window to One-Char Mapping
  • LSTM State Maintained Between Samples Within A Batch
  • Stateful LSTM for a One-Char to One-Char Mapping
  • LSTM with Variable Length Input to One-Char Output
  • Summary

28 Project: Text Generation With Alice in Wonderland

  • Problem Description: Text Generation
  • Develop a Small LSTM Recurrent Neural Network
  • Generating Text with an LSTM Network
  • Larger LSTM Recurrent Neural Network
  • Extension Ideas to Improve the Model
  • Summary
mlbankingr Machine Learning for Banking (with R) 28 hours

In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry. R will be used as the programming language.

Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete a number of live projects.

Audience

  • Developers
  • Data scientists
  • Banking professionals with a technical background

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Introduction

  • Difference between statistical learning (statistical analysis) and machine learning
  • Adoption of machine learning technology by finance and banking companies

Different Types of Machine Learning

  • Supervised learning vs unsupervised learning
  • Iteration and evaluation
  • Bias-variance trade-off
  • Combining supervised and unsupervised learning (semi-supervised learning)

Machine Learning Languages and Toolsets

  • Open source vs proprietary systems and software
  • R vs Python vs Matlab
  • Libraries and frameworks

Machine Learning Case Studies

  • Consumer data and big data
  • Assessing risk in consumer and business lending
  • Improving customer service through sentiment analysis
  • Detecting identity fraud, billing fraud and money laundering

Introduction to R

  • Installing the RStudio IDE
  • Loading R packages
  • Data structures
  • Vectors
  • Factors
  • Lists
  • Data Frames
  • Matrices and Arrays

How to Load Machine Learning Data

  • Databases, data warehouses and streaming data
  • Distributed storage and processing with Hadoop and Spark
  • Importing data from a database
  • Importing data from Excel and CSV

Modeling Business Decisions with Supervised Learning

  • Classifying your data (classification)
  • Using regression analysis to predict outcome
  • Choosing from available machine learning algorithms
  • Understanding decision tree algorithms
  • Understanding random forest algorithms
  • Model evaluation
  • Exercise

Regression Analysis

  • Linear regression
  • Generalizations and Nonlinearity
  • Exercise

Classification

  • Bayesian refresher
  • Naive Bayes
  • Logistic regression
  • K-Nearest neighbors
  • Exercise

Hands-on: Building an Estimation Model

  • Assessing lending risk based on customer type and history

Evaluating the performance of Machine Learning Algorithms

  • Cross-validation and resampling
  • Bootstrap aggregation (bagging)
  • Exercise

Modeling Business Decisions with Unsupervised Learning

  • When sample data sets are not available
  • K-means clustering
  • Challenges of unsupervised learning
  • Beyond K-means
  • Bayesian networks and hidden Markov models
  • Exercise

Hands-on: Building a Recommendation System

  • Analyzing past customer behavior to improve new service offerings

Extending your company's capabilities

  • Developing models in the cloud
  • Accelerating machine learning with additional GPUs
  • Applying Deep Learning neural networks for computer vision, voice recognition and text analysis

Closing Remarks

datamodeling Pattern Recognition 35 hours

This course provides an introduction to the field of pattern recognition and machine learning. It touches on practical applications in statistics, computer science, signal processing, computer vision, data mining, and bioinformatics.

The course is interactive and includes plenty of hands-on exercises, instructor feedback, and testing of knowledge and skills acquired.

Audience
    Data analysts
    PhD students, researchers and practitioners

 

Introduction

Probability theory, model selection, decision and information theory

Probability distributions

Linear models for regression and classification

Neural networks

Kernel methods

Sparse kernel machines

Graphical models

Mixture models and EM

Approximate inference

Sampling methods

Continuous latent variables

Sequential data

Combining models

 

mlbankingpython_ Machine Learning for Banking (with Python) 21 hours

In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry. Python will be used as the programming language.

Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete a number of team projects.

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Introduction

  • Difference between statistical learning (statistical analysis) and machine learning
  • Adoption of machine learning technology and talent by finance and banking companies

Different Types of Machine Learning

  • Supervised learning vs unsupervised learning
  • Iteration and evaluation
  • Bias-variance trade-off
  • Combining supervised and unsupervised learning (semi-supervised learning)

Machine Learning Languages and Toolsets

  • Open source vs proprietary systems and software
  • Python vs R vs Matlab
  • Libraries and frameworks

Machine Learning Case Studies

  • Consumer data and big data
  • Assessing risk in consumer and business lending
  • Improving customer service through sentiment analysis
  • Detecting identity fraud, billing fraud and money laundering

Hands-on: Python for Machine Learning

  • Preparing the Development Environment
  • Obtaining Python machine learning libraries and packages
  • Working with scikit-learn and PyBrain

How to Load Machine Learning Data

  • Databases, data warehouses and streaming data
  • Distributed storage and processing with Hadoop and Spark
  • Exported data and Excel

Modeling Business Decisions with Supervised Learning

  • Classifying your data (classification)
  • Using regression analysis to predict outcome
  • Choosing from available machine learning algorithms
  • Understanding decision tree algorithms
  • Understanding random forest algorithms
  • Model evaluation
  • Exercise

Regression Analysis

  • Linear regression
  • Generalizations and Nonlinearity
  • Exercise

Classification

  • Bayesian refresher
  • Naive Bayes
  • Logistic regression
  • K-Nearest neighbors
  • Exercise

Hands-on: Building an Estimation Model

  • Assessing lending risk based on customer type and history

Evaluating the performance of Machine Learning Algorithms

  • Cross-validation and resampling
  • Bootstrap aggregation (bagging)
  • Exercise

Modeling Business Decisions with Unsupervised Learning

  • When sample data sets are not available
  • K-means clustering
  • Challenges of unsupervised learning
  • Beyond K-means
  • Bayesian networks and hidden Markov models
  • Exercise

Hands-on: Building a Recommendation System

  • Analyzing past customer behavior to improve new service offerings

Extending your company's capabilities

  • Developing models in the cloud
  • Accelerating machine learning with GPU
  • Applying Deep Learning neural networks for computer vision, voice recognition and text analysis

Closing Remarks

deeplearning1 Introduction to Deep Learning 21 hours

This course is a general overview of Deep Learning that does not go too deeply into any specific method. It is suitable for people who want to start using Deep Learning to improve the accuracy of their predictions.

  • Backprop, modular models
  • Logsum module
  • RBF Net
  • MAP/MLE loss
  • Parameter Space Transforms
  • Convolutional Module
  • Gradient-Based Learning 
  • Energy for inference, objective for learning
  • PCA; NLL
  • Latent Variable Models
  • Probabilistic LVM
  • Loss Function
  • Handwriting recognition
Torch Torch: Getting started with Machine and Deep Learning 21 hours

Torch is an open source machine learning library and a scientific computing framework based on the Lua programming language. It provides a development environment for numerics, machine learning, and computer vision, with a particular emphasis on deep learning and convolutional nets. It is one of the fastest and most flexible frameworks for Machine and Deep Learning and is used by companies such as Facebook, Google, Twitter, NVIDIA, AMD, Intel, and many others.

In this course we cover the principles of Torch, its unique features, and how it can be applied in real-world applications. We step through numerous hands-on exercises all throughout, demonstrating and practicing the concepts learned.

By the end of the course, participants will have a thorough understanding of Torch's underlying features and capabilities as well as its role and contribution within the AI space compared to other frameworks and libraries. Participants will have also received the necessary practice to implement Torch in their own projects.

Audience
    Software developers and programmers wishing to enable Machine and Deep Learning within their applications

Format of the course
    Overview of Machine and Deep Learning
    In-class coding and integration exercises
    Test questions sprinkled along the way to check understanding

Introduction to Torch
    Like NumPy but with CPU and GPU implementation
    Torch's usage in machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking

Installing Torch
    Linux, Windows, Mac
    Bitmapi and Docker

Installing Torch packages
    Using the LuaRocks package manager

Choosing an IDE for Torch
    ZeroBrane Studio
    Eclipse plugin for Lua

Working with the Lua scripting language and LuaJIT
    Lua's integration with C/C++
    Lua syntax: datatypes, loops and conditionals, functions, tables, and file I/O
    Object orientation and serialization in Torch
    Coding exercise

Loading a dataset in Torch
    MNIST
    CIFAR-10, CIFAR-100
    Imagenet

Machine Learning in Torch
    Deep Learning
        Manual feature extraction vs convolutional networks
    Supervised and Unsupervised Learning
        Building a neural network with Torch    
    N-dimensional arrays

Image analysis with Torch
    Image package
    The Tensor library

Working with the REPL interpreter

Working with databases

Networking and Torch

GPU support in Torch

Integrating Torch
    C, Python, and others

Embedding Torch
    iOS and Android

Other frameworks and libraries
    Facebook's optimized deep-learning modules and containers

Creating your own package

Testing and debugging

Releasing your application

The future of AI and Torch

undnn Understanding Deep Neural Networks 35 hours

This course begins by giving you conceptual knowledge of neural networks and machine learning algorithms in general, then covers deep learning (algorithms and applications).

Part 1 (40%) of this training focuses more on fundamentals and will help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc.

Part 2 (20%) of this training introduces Theano, a Python library that makes writing deep learning models easy.

Part 3 (40%) of the training is extensively based on TensorFlow, the 2nd-generation API of Google's open source software library for Deep Learning. The examples and hands-on exercises will all be done in TensorFlow.

Audience

This course is intended for engineers seeking to use TensorFlow for their Deep Learning projects

After completing this course, delegates will:

  • have a good understanding of deep neural networks (DNN), CNNs and RNNs

  • understand TensorFlow’s structure and deployment mechanisms

  • be able to carry out installation / production environment / architecture tasks and configuration

  • be able to assess code quality, perform debugging, monitoring

  • be able to implement advanced production tasks such as training models, building graphs and logging
     

Due to the vastness of the subject, not all topics can be covered in a 35-hour public classroom course; the complete course runs around 70 hours rather than 35.

Part 1 – Deep Learning and DNN Concepts


Introduction AI, Machine Learning & Deep Learning

  • History, basic concepts and usual applications of artificial intelligence, far from the fantasies carried by this domain

  • Collective Intelligence: aggregating knowledge shared by many virtual agents

  • Genetic algorithms: to evolve a population of virtual agents by selection

  • Usual Machine Learning: definition

  • Types of tasks: supervised learning, unsupervised learning, reinforcement learning

  • Types of actions: classification, regression, clustering, density estimation, reduction of dimensionality

  • Examples of Machine Learning algorithms: Linear regression, Naive Bayes, Random Tree

  • Machine Learning vs Deep Learning: problems on which Machine Learning remains the state of the art today (Random Forests & XGBoost)


 

Basic Concepts of a Neural Network (Application: multi-layer perceptron)

  • Reminder of mathematical bases.

  • Definition of a neural network: classical architecture, activations and weighting of previous activations, depth of a network

  • Definition of the training of a neural network: cost functions, backpropagation, stochastic gradient descent, maximum likelihood.

  • Modeling of a neural network: modeling input and output data according to the type of problem (regression, classification, ...). Curse of dimensionality.

  • Distinction between Multi-feature data and signal. Choice of a cost function according to the data.

  • Approximation of a function by a neural network: presentation and examples

  • Approximation of a distribution by a neural network: presentation and examples

  • Data Augmentation: how to balance a dataset

  • Generalization of the results of a neural network.

  • Initialization and regularization of a neural network: L1 / L2 regularization, Batch Normalization

  • Optimization and convergence algorithms


 

Standard ML / DL Tools

A brief presentation of each tool's advantages, disadvantages, position in the ecosystem and uses is planned.

  • Data management tools: Apache Spark, Apache Hadoop Tools

  • Machine Learning: NumPy, SciPy, scikit-learn

  • DL high level frameworks: PyTorch, Keras, Lasagne

  • Low-level DL frameworks: Theano, Torch, Caffe, TensorFlow


 

Convolutional Neural Networks (CNN).

  • Presentation of the CNNs: fundamental principles and applications

  • Basic operation of a CNN: convolutional layer, use of a kernel, padding & stride, feature map generation, pooling layers. 1D, 2D and 3D extensions.

  • Presentation of the different CNN architectures that brought the state of the art in image classification: LeNet, VGG networks, Network in Network, Inception, ResNet. Presentation of the innovations brought by each architecture and their more global applications (1x1 convolutions or residual connections)

  • Use of an attention model.

  • Application to a common classification case (text or image)

  • CNNs for generation: super-resolution, pixel-to-pixel segmentation. Presentation of the main strategies for upscaling feature maps for image generation.


 

Recurrent Neural Networks (RNN).

  • Presentation of RNNs: fundamental principles and applications.

  • Basic operation of the RNN: hidden activation, backpropagation through time, unfolded version.

  • Evolutions towards Gated Recurrent Units (GRUs) and LSTM (Long Short-Term Memory).

  • Presentation of the different states and the evolutions brought by these architectures

  • Convergence and vanishing gradient problems

  • Classical architectures: prediction of a time series, classification, ...

  • RNN Encoder Decoder type architecture. Use of an attention model.

  • NLP applications: word / character encoding, translation.

  • Video Applications: prediction of the next generated image of a video sequence.


Generative models: Variational AutoEncoder (VAE) and Generative Adversarial Networks (GAN).

  • Presentation of generative models, link with CNNs

  • Auto-encoder: reduction of dimensionality and limited generation

  • Variational Auto-encoder: generative model and approximation of the data distribution. Definition and use of latent space. Reparameterization trick. Applications and limits observed

  • Generative Adversarial Networks: Fundamentals.

  • Dual Network Architecture (Generator and discriminator) with alternate learning, cost functions available.

  • Convergence of a GAN and difficulties encountered.

  • Improved convergence: Wasserstein GAN, BEGAN. Earth Mover's Distance.

  • Applications for the generation of images or photographs, text generation, super-resolution.

Deep Reinforcement Learning.

  • Presentation of reinforcement learning: control of an agent in an environment defined by a state and possible actions

  • Use of a neural network to approximate the state function

  • Deep Q Learning: experience replay, and application to the control of a video game.

  • Optimization of the learning policy. On-policy vs off-policy. Actor-critic architecture. A3C.

  • Applications: control of a single video game or a digital system.

 

Part 2 – Theano for Deep Learning

Theano Basics

  • Introduction

  • Installation and Configuration

Theano Functions

  • inputs, outputs, updates, givens

Training and Optimization of a neural network using Theano

  • Neural Network Modeling

  • Logistic Regression

  • Hidden Layers

  • Training a network

  • Computing and Classification

  • Optimization

  • Log Loss

Testing the model


Part 3 – DNN using Tensorflow

TensorFlow Basics

  • Creating, Initializing, Saving, and Restoring TensorFlow variables

  • Feeding, Reading and Preloading TensorFlow Data

  • How to use TensorFlow infrastructure to train models at scale

  • Visualizing and Evaluating models with TensorBoard

TensorFlow Mechanics

  • Prepare the Data

  • Download

  • Inputs and Placeholders

  • Build the Graph

    • Inference

    • Loss

    • Training

  • Train the Model

    • The Graph

    • The Session

    • Train Loop

  • Evaluate the Model

    • Build the Eval Graph

    • Eval Output

The Perceptron

  • Activation functions

  • The perceptron learning algorithm

  • Binary classification with the perceptron

  • Document classification with the perceptron

  • Limitations of the perceptron
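
The perceptron learning algorithm above can be sketched end-to-end in a few lines (an illustrative toy, not the course's own code): each mistake nudges the weights by `lr * (target - prediction) * x`, which provably converges on linearly separable data such as the AND function.

```python
def train_perceptron(data, epochs=10, lr=1):
    """Perceptron learning rule: on each error, w += lr * (target - pred) * x."""
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for x, target in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

AND = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(AND)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in AND]
print(preds)  # [0, 0, 0, 1]
```

The classic limitation is that no such single hyperplane exists for XOR, which motivates multilayer networks.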

From the Perceptron to Support Vector Machines

  • Kernels and the kernel trick

  • Maximum margin classification and support vectors

Artificial Neural Networks

  • Nonlinear decision boundaries

  • Feedforward and feedback artificial neural networks

  • Multilayer perceptrons

  • Minimizing the cost function

  • Forward propagation

  • Back propagation

  • Improving the way neural networks learn

Convolutional Neural Networks

  • Goals

  • Model Architecture

  • Principles

  • Code Organization

  • Launching and Training the Model

  • Evaluating a Model


 

Basic introductions to be given to the below modules (brief introduction to be provided based on time availability):

Tensorflow - Advanced Usage

  • Threading and Queues

  • Distributed TensorFlow

  • Writing Documentation and Sharing your Model

  • Customizing Data Readers

  • Manipulating TensorFlow Model Files


TensorFlow Serving

  • Introduction

  • Basic Serving Tutorial

  • Advanced Serving Tutorial

  • Serving Inception Model Tutorial

bspkannmldt Artificial Neural Networks, Machine Learning and Deep Thinking 21 hours

1. Understanding classification using nearest neighbors 

  • The kNN algorithm 
  • Calculating distance 
  • Choosing an appropriate k 
  • Preparing data for use with kNN 
  • Why is the kNN algorithm lazy?
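
kNN is "lazy" because there is no training step: all work happens at prediction time, when distances to the stored examples are computed. A minimal sketch in Python (the data and function name are illustrative):

```python
import math
from collections import Counter

def knn_predict(train, query, k):
    """Classify `query` by majority vote among the k nearest
    training points, using Euclidean distance."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((1, 1), "sweet"), ((1, 2), "sweet"), ((7, 8), "sour"), ((8, 8), "sour")]
print(knn_predict(train, (2, 2), 3))  # sweet
```

Preparing data matters here: features on different scales should be normalized first, or the largest-scale feature dominates the distance.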

2. Understanding naive Bayes 

  • Basic concepts of Bayesian methods 
  • Probability 
  • Joint probability
  • Conditional probability with Bayes' theorem 
  • The naive Bayes algorithm 
  • The naive Bayes classification 
  • The Laplace estimator
  • Using numeric features with naive Bayes
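
The Laplace estimator adds a small count to every word so that a word unseen in one class does not zero out the whole product of probabilities. A simplified per-class word probability in plain Python (illustrative; the vocabulary is taken as the words seen in this class, a simplification):

```python
from collections import Counter

def nb_word_prob(word, messages, laplace=1):
    """P(word | class) with a Laplace estimator:
    (count + laplace) / (total + laplace * vocabulary_size)."""
    counts = Counter(w for msg in messages for w in msg)
    vocab = len(counts)
    total = sum(counts.values())
    return (counts[word] + laplace) / (total + laplace * vocab)

spam = [["free", "prize"], ["free", "money"]]
print(nb_word_prob("free", spam))   # (2 + 1) / (4 + 3) = 3/7
print(nb_word_prob("hello", spam))  # (0 + 1) / (4 + 3) = 1/7, never zero
```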

3. Understanding decision trees 

  • Divide and conquer 
  • The C5.0 decision tree algorithm 
  • Choosing the best split 
  • Pruning the decision tree

4. Understanding classification rules 

  • Separate and conquer 
  • The One Rule algorithm 
  • The RIPPER algorithm 
  • Rules from decision trees

5. Understanding regression 

  • Simple linear regression 
  • Ordinary least squares estimation 
  • Correlations 
  • Multiple linear regression
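
Ordinary least squares for simple linear regression has a closed form: the slope is the covariance of x and y divided by the variance of x, and the intercept follows from the means. A sketch (illustrative data):

```python
def ols(xs, ys):
    """Ordinary least squares for simple linear regression:
    slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Points lying exactly on y = 2x + 1 recover slope 2 and intercept 1.
slope, intercept = ols([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)
```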

6. Understanding regression trees and model trees 

  • Adding regression to trees

7. Understanding neural networks 

  • From biological to artificial neurons 
  • Activation functions 
  • Network topology 
  • The number of layers 
  • The direction of information travel 
  • The number of nodes in each layer 
  • Training neural networks with backpropagation

8. Understanding Support Vector Machines 

  • Classification with hyperplanes 
  • Finding the maximum margin 
  • The case of linearly separable data 
  • The case of non-linearly separable data 
  • Using kernels for non-linear spaces

9. Understanding association rules 

  • The Apriori algorithm for association rule learning 
  • Measuring rule interest – support and confidence 
  • Building a set of rules with the Apriori principle

10. Understanding clustering

  • Clustering as a machine learning task
  • The k-means algorithm for clustering 
  • Using distance to assign and update clusters 
  • Choosing the appropriate number of clusters
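
The k-means loop, assign each point to its nearest center, then move each center to the mean of its assigned points, fits in a few lines for the one-dimensional case (an illustrative sketch; real data is multi-dimensional):

```python
def kmeans_1d(points, centers, iters=10):
    """One-dimensional k-means: alternate assignment and update steps."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

print(kmeans_1d([1, 2, 3, 10, 11, 12], [0, 5]))  # [2.0, 11.0]
```

Choosing k itself is the hard part; techniques such as the elbow method compare within-cluster distances across candidate values of k.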

11. Measuring performance for classification 

  • Working with classification prediction data 
  • A closer look at confusion matrices 
  • Using confusion matrices to measure performance 
  • Beyond accuracy – other measures of performance 
  • The kappa statistic 
  • Sensitivity and specificity 
  • Precision and recall 
  • The F-measure 
  • Visualizing performance tradeoffs 
  • ROC curves 
  • Estimating future performance 
  • The holdout method 
  • Cross-validation 
  • Bootstrap sampling
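
Precision, recall and the F-measure all derive directly from confusion-matrix counts, which makes them easy to verify by hand (a sketch with hypothetical counts):

```python
def prf(tp, fp, fn):
    """Precision, recall and F-measure from confusion-matrix counts:
    precision = tp / (tp + fp); recall = tp / (tp + fn);
    F1 is their harmonic mean."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f = prf(tp=8, fp=2, fn=2)
print(p, r, f)  # all ≈ 0.8 for these counts
```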

12. Tuning stock models for better performance 

  • Using caret for automated parameter tuning 
  • Creating a simple tuned model 
  • Customizing the tuning process 
  • Improving model performance with meta-learning 
  • Understanding ensembles 
  • Bagging 
  • Boosting 
  • Random forests 
  • Training random forests
  • Evaluating random forest performance

13. Deep Learning

  • Three Classes of Deep Learning
  • Deep Autoencoders
  • Pre-trained Deep Neural Networks
  • Deep Stacking Networks

14. Discussion of Specific Application Areas

OpenNN OpenNN: Implementing neural networks 14 hours

OpenNN is an open-source class library written in C++ that implements neural networks for use in machine learning.

In this course we go over the principles of neural networks and use OpenNN to implement a sample application.

Audience
    Software developers and programmers wishing to create Deep Learning applications.

Format of the course
    Lecture and discussion coupled with hands-on exercises.

Introduction to OpenNN, Machine Learning and Deep Learning

Downloading OpenNN

Working with Neural Designer
    Using Neural Designer for descriptive, diagnostic, predictive and prescriptive analytics

OpenNN architecture
    CPU parallelization

OpenNN classes
    Data set, neural network, loss index, training strategy, model selection, testing analysis
    Vector and matrix templates

Building a neural network application
    Choosing a suitable neural network
    Formulating the variational problem (loss index)
    Solving the reduced function optimization problem (training strategy)

Working with datasets
     The data matrix (columns as variables and rows as instances)

Learning tasks
    Function regression
    Pattern recognition

Compiling with QT Creator

Integrating, testing and debugging your application

The future of neural networks and OpenNN

dlfornlp Deep Learning for NLP (Natural Language Processing) 28 hours

Deep Learning for NLP allows a machine to learn simple to complex language processing. Among the tasks currently possible are language translation and caption generation for photos. DL (Deep Learning) is a subset of ML (Machine Learning). Python is a popular programming language that contains libraries for Deep Learning for NLP.

In this instructor-led, live training, participants will learn to use Python libraries for NLP (Natural Language Processing) as they create an application that processes a set of pictures and generates captions. 

By the end of this training, participants will be able to:

  • Design and code DL for NLP using Python libraries
  • Create Python code that reads a large collection of pictures and generates keywords
  • Create Python code that generates captions from the detected keywords

Audience

  • Programmers with interest in linguistics
  • Programmers who seek an understanding of NLP (Natural Language Processing) 

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Introduction to Deep Learning for NLP

Differentiating between the various types of DL models

Using pre-trained vs trained models

Using word embeddings and sentiment analysis to extract meaning from text 

How Unsupervised Deep Learning works

Installing and Setting Up Python Deep Learning libraries

Using the Keras DL library on top of TensorFlow to allow Python to create captions

Working with Theano (numerical computation library) and TensorFlow (general and linguistics library) to use as extended DL libraries for the purpose of creating captions. 

Using Keras on top of TensorFlow or Theano to quickly experiment on Deep Learning

Creating a simple Deep Learning application in TensorFlow to add captions to a collection of pictures

Troubleshooting

A word on other (specialized) DL frameworks

Deploying your DL application

Using GPUs to accelerate DL

Closing remarks

dladv Advanced Deep Learning 28 hours
  • Machine Learning Limitations
  • Machine Learning, Non-linear mappings
  • Neural Networks
  • Non-Linear Optimization, Stochastic/MiniBatch Gradient Descent
  • Back Propagation
  • Deep Sparse Coding
  • Sparse Autoencoders (SAE)
  • Convolutional Neural Networks (CNNs)
  • Successes: Descriptor Matching
  • Stereo-based Obstacle Avoidance for Robotics
  • Pooling and invariance
  • Visualization/Deconvolutional Networks
  • Recurrent Neural Networks (RNNs) and their optimization
  • Applications to NLP
  • RNNs continued
  • Hessian-Free Optimization
  • Language analysis: word/sentence vectors, parsing, sentiment analysis, etc.
  • Probabilistic Graphical Models
  • Hopfield Networks, Boltzmann Machines, Restricted Boltzmann Machines
  • Deep Belief Nets, Stacked RBMs
  • Applications to NLP; Pose and Activity Recognition in Videos
  • Recent Advances
  • Large-Scale Learning
  • Neural Turing Machines
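The stochastic/mini-batch gradient descent topic above can be sketched in a few lines of NumPy. This is a hypothetical toy example, not course material: it fits a one-variable linear model to synthetic data generated as y ≈ 3x + 1, reshuffling the data and stepping along averaged batch gradients each epoch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = 3x + 1 plus a little noise.
X = rng.uniform(-1, 1, size=(200, 1))
y = 3 * X[:, 0] + 1 + 0.01 * rng.normal(size=200)

w, b = 0.0, 0.0           # parameters of the linear model y_hat = w*x + b
lr, batch_size = 0.1, 20  # learning rate and mini-batch size

for epoch in range(200):
    order = rng.permutation(len(X))          # reshuffle each epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        xb, yb = X[idx, 0], y[idx]
        err = w * xb + b - yb                # prediction error on the batch
        w -= lr * np.mean(err * xb)          # gradient of MSE w.r.t. w
        b -= lr * np.mean(err)               # gradient of MSE w.r.t. b

print(round(w, 2), round(b, 2))  # recovers roughly 3 and 1
```

Full-batch gradient descent would use all 200 points per step; pure stochastic descent would use one. Mini-batches sit in between, trading gradient noise for update frequency.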

 

Fairseq Fairseq: Setting up a CNN-based machine translation system 7 hours

Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT).

In this training, participants will learn how to use Fairseq to carry out translation of sample content.

By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq-based machine translation solution.

Audience

  • Localization specialists with a technical background
  • Global content managers
  • Localization engineers
  • Software developers in charge of implementing global content solutions

Format of the course
    Part lecture, part discussion, heavy hands-on practice

Note

  • If you wish to use specific source and target language content, please contact us to arrange.

Introduction
    Why Neural Machine Translation?

Overview of the Torch project

Overview of a Convolutional Neural Machine Translation model
    Convolutional Sequence to Sequence Learning
    Convolutional Encoder Model for Neural Machine Translation
    Standard LSTM-based model

Overview of training approaches
    About GPUs and CPUs
    Fast beam search generation
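Beam search, mentioned above, decodes a likely output sequence by keeping only the k highest-scoring partial hypotheses at each step. A minimal sketch over a toy, context-independent log-probability table (a real NMT model would condition each step on the source sentence and the tokens generated so far):

```python
import math

# Toy next-token log-probabilities, the same at every step (hypothetical).
log_probs = {
    "a": math.log(0.5), "b": math.log(0.3), "c": math.log(0.2),
}

def beam_search(steps, beam_width):
    """Keep the beam_width highest-scoring partial sequences at each step."""
    beams = [([], 0.0)]  # (tokens so far, cumulative log-probability)
    for _ in range(steps):
        candidates = [
            (seq + [tok], score + lp)
            for seq, score in beams
            for tok, lp in log_probs.items()
        ]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams

best_seq, best_score = beam_search(steps=3, beam_width=2)[0]
print(best_seq)  # ['a', 'a', 'a'] under this toy distribution
```

With beam_width=1 this reduces to greedy decoding; widening the beam explores more hypotheses at higher cost, which is why fast beam search implementations matter for translation throughput.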

Installation and setup

Evaluating pre-trained models

Preprocessing your data

Training the model

Translating

Converting a trained model to use CPU-only operations

Joining the community

Closing remarks

tf101 Deep Learning with TensorFlow 21 hours

TensorFlow is the second-generation API of Google's open-source software library for Deep Learning. The system is designed to facilitate research in machine learning and to make the transition from research prototype to production system quick and easy.

Audience

This course is intended for engineers seeking to use TensorFlow for their Deep Learning projects

After completing this course, delegates will:

  • understand TensorFlow’s structure and deployment mechanisms
  • be able to carry out installation / production environment / architecture tasks and configuration
  • be able to assess code quality, perform debugging, monitoring
  • be able to implement advanced production like training models, building graphs and logging

Machine Learning and Recurrent Neural Networks (RNN) basics

  • NN and RNN
  • Backpropagation
  • Long short-term memory (LSTM)

TensorFlow Basics

  • Creation, Initializing, Saving, and Restoring TensorFlow variables
  • Feeding, Reading and Preloading TensorFlow Data
  • How to use TensorFlow infrastructure to train models at scale
  • Visualizing and Evaluating models with TensorBoard

TensorFlow Mechanics 101

  • Prepare the Data
    • Download
    • Inputs and Placeholders
  • Build the Graph
    • Inference
    • Loss
    • Training
  • Train the Model
    • The Graph
    • The Session
    • Train Loop
  • Evaluate the Model
    • Build the Eval Graph
    • Eval Output

Advanced Usage

  • Threading and Queues
  • Distributed TensorFlow
  • Writing Documentation and Sharing your Model
  • Customizing Data Readers
  • Using GPUs¹
  • Manipulating TensorFlow Model Files

TensorFlow Serving

  • Introduction
  • Basic Serving Tutorial
  • Advanced Serving Tutorial
  • Serving Inception Model Tutorial

¹ The Advanced Usage topic, “Using GPUs”, is not available as a part of a remote course. This module can be delivered during classroom-based courses, but only by prior agreement, and only if both the trainer and all participants have laptops with supported NVIDIA GPUs, with 64-bit Linux installed (not provided by NobleProg). NobleProg cannot guarantee the availability of trainers with the required hardware.

facebooknmt Facebook NMT: Setting up a Neural Machine Translation System 7 hours

Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT).

In this training, participants will learn how to use Fairseq to carry out translation of sample content.

By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq-based machine translation solution.

Audience

  • Localization specialists with a technical background
  • Global content managers
  • Localization engineers
  • Software developers in charge of implementing global content solutions

Format of the course

  • Part lecture, part discussion, heavy hands-on practice

Note

  • If you wish to use specific source and target language content, please contact us to arrange.

Introduction
    Why Neural Machine Translation?
    Borrowing from image recognition techniques

Overview of the Torch and Caffe2 projects

Overview of a Convolutional Neural Machine Translation model
    Convolutional Sequence to Sequence Learning
    Convolutional Encoder Model for Neural Machine Translation
    Standard LSTM-based model

Overview of training approaches
    About GPUs and CPUs
    Fast beam search generation

Installation and setup

Evaluating pre-trained models

Preprocessing your data

Training the model

Translating

Converting a trained model to use CPU-only operations

Joining the community

Closing remarks

tfir TensorFlow for Image Recognition 28 hours

This course explores, with specific examples, the application of TensorFlow to image recognition.

Audience

This course is intended for engineers seeking to utilize TensorFlow for the purposes of Image Recognition

After completing this course, delegates will be able to:

  • understand TensorFlow’s structure and deployment mechanisms
  • carry out installation / production environment / architecture tasks and configuration
  • assess code quality, perform debugging, monitoring
  • implement advanced production like training models, building graphs and logging

Machine Learning and Recurrent Neural Networks (RNN) basics

  • NN and RNN
  • Backpropagation
  • Long short-term memory (LSTM)

TensorFlow Basics

  • Creation, Initializing, Saving, and Restoring TensorFlow variables
  • Feeding, Reading and Preloading TensorFlow Data
  • How to use TensorFlow infrastructure to train models at scale
  • Visualizing and Evaluating models with TensorBoard

TensorFlow Mechanics 101

  • Tutorial Files
  • Prepare the Data
    • Download
    • Inputs and Placeholders
  • Build the Graph
    • Inference
    • Loss
    • Training
  • Train the Model
    • The Graph
    • The Session
    • Train Loop
  • Evaluate the Model
    • Build the Eval Graph
    • Eval Output

Advanced Usage

  • Threading and Queues
  • Distributed TensorFlow
  • Writing Documentation and Sharing your Model
  • Customizing Data Readers
  • Using GPUs¹
  • Manipulating TensorFlow Model Files

TensorFlow Serving

  • Introduction
  • Basic Serving Tutorial
  • Advanced Serving Tutorial
  • Serving Inception Model Tutorial

Convolutional Neural Networks

  • Overview
    • Goals
    • Highlights of the Tutorial
    • Model Architecture
  • Code Organization
  • CIFAR-10 Model
    • Model Inputs
    • Model Prediction
    • Model Training
  • Launching and Training the Model
  • Evaluating a Model
  • Training a Model Using Multiple GPU Cards¹
    • Placing Variables and Operations on Devices
    • Launching and Training the Model on Multiple GPU cards

Deep Learning for MNIST

  • Setup
  • Load MNIST Data
  • Start TensorFlow InteractiveSession
  • Build a Softmax Regression Model
  • Placeholders
  • Variables
  • Predicted Class and Cost Function
  • Train the Model
  • Evaluate the Model
  • Build a Multilayer Convolutional Network
  • Weight Initialization
  • Convolution and Pooling
  • First Convolutional Layer
  • Second Convolutional Layer
  • Densely Connected Layer
  • Readout Layer
  • Train and Evaluate the Model
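The "Convolution and Pooling" steps above can be sketched directly in NumPy before moving to TensorFlow. A minimal, hypothetical example: a valid 2-D convolution (stride 1, no padding) followed by 2×2 max-pooling, using a simple horizontal-difference filter:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution, stride 1, no padding. Kernel flipping is
    omitted, matching the cross-correlation DL frameworks actually compute."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max-pooling: keep the largest activation per window."""
    oh, ow = x.shape[0] // size, x.shape[1] // size
    return np.array([[x[i*size:(i+1)*size, j*size:(j+1)*size].max()
                      for j in range(ow)] for i in range(oh)])

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, -1.0]])     # horizontal-difference filter
feature_map = conv2d(image, edge_kernel)  # shape (5, 4)
pooled = max_pool(feature_map)            # shape (2, 2)
print(feature_map.shape, pooled.shape)
```

The pooling step is what gives convolutional nets a degree of translation invariance: small shifts in the input leave the pooled maxima largely unchanged.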

Image Recognition

  • Inception-v3
    • C++
    • Java

¹ Topics related to the use of GPUs are not available as a part of a remote course. They can be delivered during classroom-based courses, but only by prior agreement, and only if both the trainer and all participants have laptops with supported NVIDIA GPUs, with 64-bit Linux installed (not provided by NobleProg). NobleProg cannot guarantee the availability of trainers with the required hardware.

tpuprogramming TPU Programming: Building Neural Network Applications on Tensor Processing Units 7 hours

The Tensor Processing Unit (TPU) is an architecture that Google has used internally for several years and is only now becoming available to the general public. It includes several optimizations designed specifically for neural networks, including streamlined matrix multiplication and low-precision 8-bit integer arithmetic, which delivers adequate precision for inference at far higher throughput.
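The precision trade-off behind 8-bit arithmetic can be illustrated with simple linear quantization: map float weights onto the int8 range and back, losing only a bounded rounding error. This is a conceptual sketch, not the actual TPU pipeline:

```python
import numpy as np

def quantize_int8(x):
    """Linearly map floats onto the int8 range [-127, 127]."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map 8-bit codes back to approximate float values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(size=1000).astype(np.float32)  # hypothetical weights

q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = float(np.max(np.abs(weights - restored)))
print(max_err <= scale / 2 + 1e-6)  # rounding error bounded by half a step
```

Each weight now occupies one byte instead of four, and integer multiplies are far cheaper in silicon, which is where the throughput gain comes from.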

In this instructor-led, live training, participants will learn how to take advantage of the innovations in TPU processors to maximize the performance of their own AI applications.

By the end of the training, participants will be able to:

  • Train various types of neural networks on large amounts of data
  • Use TPUs to speed up the inference process by up to two orders of magnitude
  • Utilize TPUs to process intensive applications such as image search, cloud vision and photos

Audience

  • Developers
  • Researchers
  • Engineers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

dl4j Mastering Deeplearning4j 21 hours

Deeplearning4j is the first commercial-grade, open-source, distributed deep-learning library written for Java and Scala. Integrated with Hadoop and Spark, DL4J is designed to be used in business environments on distributed GPUs and CPUs.

 

Audience

This course is directed at engineers and developers seeking to utilize Deeplearning4j in their projects.

 

After this course, delegates will be able to apply Deeplearning4j in their own projects.

Getting Started

  • Quickstart: Running Examples and DL4J in Your Projects
  • Comprehensive Setup Guide

Introduction to Neural Networks

  • Restricted Boltzmann Machines
  • Convolutional Nets (ConvNets)
  • Long Short-Term Memory Units (LSTMs)
  • Denoising Autoencoders
  • Recurrent Nets and LSTMs

Multilayer Neural Nets

  • Deep-Belief Network
  • Deep AutoEncoder
  • Stacked Denoising Autoencoders

Tutorials

  • Using Recurrent Nets in DL4J
  • MNIST DBN Tutorial
  • Iris Flower Tutorial
  • Canova: Vectorization Lib for ML Tools
  • Neural Net Updaters: SGD, Adam, Adagrad, Adadelta, RMSProp

Datasets

  • Datasets and Machine Learning
  • Custom Datasets
  • CSV Data Uploads

Scaleout

  • Iterative Reduce Defined
  • Multiprocessor / Clustering
  • Running Worker Nodes

Text

  • DL4J's NLP Framework
  • Word2vec for Java and Scala
  • Textual Analysis and DL
  • Bag of Words
  • Sentence and Document Segmentation
  • Tokenization
  • Vocab Cache

Advanced DL4J

  • Build Locally From Master
  • Contribute to DL4J (Developer Guide)
  • Choose a Neural Net
  • Use the Maven Build Tool
  • Vectorize Data With Canova
  • Build a Data Pipeline
  • Run Benchmarks
  • Configure DL4J in Ivy, Gradle, SBT etc
  • Find a DL4J Class or Method
  • Save and Load Models
  • Interpret Neural Net Output
  • Visualize Data with t-SNE
  • Swap CPUs for GPUs
  • Customize an Image Pipeline
  • Perform Regression With Neural Nets
  • Troubleshoot Training & Select Network Hyperparameters
  • Visualize, Monitor and Debug Network Learning
  • Speed Up Spark With Native Binaries
  • Build a Recommendation Engine With DL4J
  • Use Recurrent Networks in DL4J
  • Build Complex Network Architectures with Computation Graph
  • Train Networks using Early Stopping
  • Download Snapshots With Maven
  • Customize a Loss Function
MicrosoftCognitiveToolkit Microsoft Cognitive Toolkit 2.x 21 hours

Microsoft Cognitive Toolkit 2.x (previously CNTK) is an open-source, commercial-grade toolkit that trains deep learning algorithms to learn like the human brain. According to Microsoft, CNTK can be 5 to 10 times faster than TensorFlow on recurrent networks, and 2 to 3 times faster than TensorFlow for image-related tasks.

In this instructor-led, live training, participants will learn how to use Microsoft Cognitive Toolkit to create, train and evaluate deep learning algorithms for use in commercial-grade AI applications involving multiple types of data such as speech, text, and images.

By the end of this training, participants will be able to:

  • Access CNTK as a library from within a Python, C#, or C++ program
  • Use CNTK as a standalone machine learning tool through its own model description language (BrainScript)
  • Use the CNTK model evaluation functionality from a Java program
  • Combine feed-forward DNNs, convolutional nets (CNNs), and recurrent networks (RNNs/LSTMs)
  • Scale computation capacity on CPUs, GPUs and multiple machines
  • Access massive datasets using existing programming languages and algorithms

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Note

  • If you wish to customize any part of this training, including the programming language of choice, please contact us to arrange.

To request a customized course outline for this training, please contact us.

singa Mastering Apache SINGA 21 hours

SINGA is a general distributed deep learning platform for training big deep learning models over large datasets. It is designed around an intuitive programming model based on the layer abstraction. A variety of popular deep learning models are supported, namely feed-forward models including convolutional neural networks (CNNs), energy models such as the restricted Boltzmann machine (RBM), and recurrent neural networks (RNNs). Many built-in layers are provided for users. The SINGA architecture is flexible enough to run synchronous, asynchronous and hybrid training frameworks. SINGA also supports different neural net partitioning schemes to parallelize the training of large models, namely partitioning on the batch dimension, the feature dimension, or hybrid partitioning.

Audience

This course is directed at researchers, engineers and developers seeking to utilize Apache SINGA as a deep learning framework.

After completing this course, delegates will:

  • understand SINGA’s structure and deployment mechanisms
  • be able to carry out installation / production environment / architecture tasks and configuration
  • be able to assess code quality, perform debugging, monitoring
  • be able to implement advanced production like training models, embedding terms, building graphs and logging

 

Introduction

Installation

Quick Start

Programming

  • NeuralNet
    • Layer
    • Param
  • TrainOneBatch
  • Updater 

Distributed Training

Data Preparation

Checkpoint and Resume

Python Binding

Performance test and Feature extraction

Training on GPU

Examples

  • Feed-forward models
    • CNN
    • MLP
  • RBM + Auto-encoder
  • Vanilla RNN for language modelling
  • Char-RNN
dsstne Amazon DSSTNE: Build a recommendation system 7 hours

Amazon DSSTNE is an open-source library for training and deploying recommendation models. It allows models with weight matrices that are too large for a single GPU to be trained on a single host.

In this instructor-led, live training, participants will learn how to use DSSTNE to build a recommendation application.

By the end of this training, participants will be able to:

  • Train a recommendation model with sparse datasets as input
  • Scale training and prediction models over multiple GPUs
  • Spread out computation and storage in a model-parallel fashion
  • Generate Amazon-like personalized product recommendations
  • Deploy a production-ready application that can scale at heavy workloads
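The objectives above can be grounded with a tiny illustration of learning from a sparse user-item matrix. The sketch below uses plain matrix factorization rather than DSSTNE's neural models, and all ratings are hypothetical; it shows the core idea of training only on observed entries and recommending the highest-predicted unrated item:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse user-item ratings: 0 marks "not rated" (hypothetical data).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0

k = 2  # number of latent factors per user/item
U = 0.1 * rng.normal(size=(R.shape[0], k))  # user factors
V = 0.1 * rng.normal(size=(R.shape[1], k))  # item factors

lr, reg = 0.02, 0.01
for _ in range(5000):
    E = (U @ V.T - R) * mask        # error only on observed entries
    U -= lr * (E @ V + reg * U)     # gradient step on user factors
    V -= lr * (E.T @ U + reg * V)   # gradient step on item factors

pred = U @ V.T
# Recommend the highest-predicted unrated item for user 0.
user0_unrated = np.where(~mask[0])[0]
print(int(user0_unrated[np.argmax(pred[0, user0_unrated])]))
```

DSSTNE scales this same idea to weight matrices too large for one GPU by spreading model parameters across devices.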

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

 

caffe Deep Learning for Vision with Caffe 21 hours

Caffe is a deep learning framework made with expression, speed, and modularity in mind.

This course explores the application of Caffe as a Deep learning framework for image recognition using MNIST as an example

Audience

This course is suitable for Deep Learning researchers and engineers interested in utilizing Caffe as a framework.

After completing this course, delegates will be able to:

  • understand Caffe’s structure and deployment mechanisms
  • carry out installation / production environment / architecture tasks and configuration
  • assess code quality, perform debugging, monitoring
  • implement advanced production like training models, implementing layers and logging

Installation

  • Docker
  • Ubuntu
  • RHEL / CentOS / Fedora installation
  • Windows

Caffe Overview

  • Nets, Layers, and Blobs: the anatomy of a Caffe model.
  • Forward / Backward: the essential computations of layered compositional models.
  • Loss: the task to be learned is defined by the loss.
  • Solver: the solver coordinates model optimization.
  • Layer Catalogue: the layer is the fundamental unit of modeling and computation – Caffe’s catalogue includes layers for state-of-the-art models.
  • Interfaces: command line, Python, and MATLAB Caffe.
  • Data: how to caffeinate data for model input.
  • Caffeinated Convolution: how Caffe computes convolutions.

New models and new code

  • Detection with Fast R-CNN
  • Sequences with LSTMs and Vision + Language with LRCN
  • Pixelwise prediction with FCNs
  • Framework design and future

Examples:

  • MNIST

 

 

t2t T2T: Creating Sequence to Sequence models for generalized learning 7 hours

Tensor2Tensor (T2T) is a modular, extensible library for training AI models on different tasks with different types of training data, for example: image recognition, translation, parsing, image captioning, and speech recognition. It is maintained by the Google Brain team.

In this instructor-led, live training, participants will learn how to prepare a deep-learning model to resolve multiple tasks.

By the end of this training, participants will be able to:

  • Install tensor2tensor, select a data set, and train and evaluate an AI model
  • Customize a development environment using the tools and components included in Tensor2Tensor
  • Create and use a single model to concurrently learn a number of tasks from multiple domains
  • Use the model to learn from tasks with a large amount of training data and apply that knowledge to tasks where data is limited
  • Obtain satisfactory processing results using a single GPU

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

dl4jir DeepLearning4J for Image Recognition 21 hours

Deeplearning4j is an Open-Source Deep-Learning Software for Java and Scala on Hadoop and Spark.

Audience

This course is meant for engineers and developers seeking to utilize DeepLearning4J in their image recognition projects.

Getting Started

  • Quickstart: Running Examples and DL4J in Your Projects
  • Comprehensive Setup Guide

Convolutional Neural Networks 

  • Convolutional Net Introduction
  • Images Are 4-D Tensors?
  • ConvNet Definition
  • How Convolutional Nets Work
  • Maxpooling/Downsampling
  • DL4J Code Sample
  • Other Resources

Datasets

  • Datasets and Machine Learning
  • Custom Datasets
  • CSV Data Uploads

Scaleout

  • Iterative Reduce Defined
  • Multiprocessor / Clustering
  • Running Worker Nodes

Advanced DL4J

  • Build Locally From Master
  • Use the Maven Build Tool
  • Vectorize Data With Canova
  • Build a Data Pipeline
  • Run Benchmarks
  • Configure DL4J in Ivy, Gradle, SBT etc
  • Find a DL4J Class or Method
  • Save and Load Models
  • Interpret Neural Net Output
  • Visualize Data with t-SNE
  • Swap CPUs for GPUs
  • Customize an Image Pipeline
  • Perform Regression With Neural Nets
  • Troubleshoot Training & Select Network Hyperparameters
  • Visualize, Monitor and Debug Network Learning
  • Speed Up Spark With Native Binaries
  • Build a Recommendation Engine With DL4J
  • Use Recurrent Networks in DL4J
  • Build Complex Network Architectures with Computation Graph
  • Train Networks using Early Stopping
  • Download Snapshots With Maven
  • Customize a Loss Function

 

embeddingprojector Embedding Projector: Visualizing your Training Data 14 hours

Embedding Projector is an open-source web application for visualizing the data used to train machine learning systems. Created by Google, it is part of TensorFlow.

This instructor-led, live training introduces the concepts behind Embedding Projector and walks participants through the setup of a demo project.

By the end of this training, participants will be able to:

  • Explore how data is being interpreted by machine learning models
  • Navigate through 3D and 2D views of data to understand how a machine learning algorithm interprets it
  • Understand the concepts behind embeddings and their role in representing images, words and numerals as mathematical vectors
  • Explore the properties of a specific embedding to understand the behavior of a model
  • Apply Embedding Projector to real-world use cases such as building a song recommendation system for music lovers

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

w2vdl4j NLP with Deeplearning4j 14 hours

Deeplearning4j is an open-source, distributed deep-learning library written for Java and Scala. Integrated with Hadoop and Spark, DL4J is designed to be used in business environments on distributed GPUs and CPUs.

Word2Vec is a method of computing vector representations of words introduced by a team of researchers at Google led by Tomas Mikolov.

Audience

This course is directed at researchers, engineers and developers seeking to utilize Deeplearning4J to construct Word2Vec models.

Getting Started

  • DL4J Examples in a Few Easy Steps
  • Using DL4J In Your Own Projects: Configuring the POM.xml File

Word2Vec

  • Introduction
  • Neural Word Embeddings
  • Amusing Word2vec Results
  • the Code
  • Anatomy of Word2Vec
  • Setup, Load and Train
  • A Code Example
  • Troubleshooting & Tuning Word2Vec
  • Word2vec Use Cases
  • Foreign Languages
  • GloVe (Global Vectors) & Doc2Vec
openface OpenFace: Creating Facial Recognition Systems 14 hours

OpenFace is open-source, real-time facial recognition software based on Python and Torch, implementing Google's FaceNet research.

In this instructor-led, live training, participants will learn how to use OpenFace's components to create and deploy a sample facial recognition application.

By the end of this training, participants will be able to:

  • Work with OpenFace's components, including dlib, OpenCV, Torch, and nn4, to implement face detection, alignment, and transformation
  • Apply OpenFace to real-world applications such as surveillance, identity verification, virtual reality, gaming, and identifying repeat customers

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

tsflw2v Natural Language Processing with TensorFlow 35 hours

TensorFlow™ is an open source software library for numerical computation using data flow graphs.

SyntaxNet is a neural-network Natural Language Processing framework for TensorFlow.

Word2Vec is used for learning vector representations of words, called "word embeddings". Word2vec is a particularly computationally-efficient predictive model for learning word embeddings from raw text. It comes in two flavors, the Continuous Bag-of-Words model (CBOW) and the Skip-Gram model (Sections 3.1 and 3.2 in Mikolov et al.).

Used in tandem, SyntaxNet and Word2Vec allow users to generate learned embedding models from natural language input.

Audience

This course is targeted at Developers and engineers who intend to work with SyntaxNet and Word2Vec models in their TensorFlow graphs.

After completing this course, delegates will:

  • understand TensorFlow’s structure and deployment mechanisms
  • be able to carry out installation / production environment / architecture tasks and configuration
  • be able to assess code quality, perform debugging, monitoring
  • be able to implement advanced production like training models, embedding terms, building graphs and logging

Getting Started

  • Setup and Installation

TensorFlow Basics

  • Creation, Initializing, Saving, and Restoring TensorFlow variables
  • Feeding, Reading and Preloading TensorFlow Data
  • How to use TensorFlow infrastructure to train models at scale
  • Visualizing and Evaluating models with TensorBoard

TensorFlow Mechanics 101

  • Prepare the Data
    • Download
    • Inputs and Placeholders
  • Build the Graph
    • Inference
    • Loss
    • Training
  • Train the Model
    • The Graph
    • The Session
    • Train Loop
  • Evaluate the Model
    • Build the Eval Graph
    • Eval Output

Advanced Usage

  • Threading and Queues
  • Distributed TensorFlow
  • Writing Documentation and Sharing your Model
  • Customizing Data Readers
  • Using GPUs
  • Manipulating TensorFlow Model Files

TensorFlow Serving

  • Introduction
  • Basic Serving Tutorial
  • Advanced Serving Tutorial
  • Serving Inception Model Tutorial

Getting Started with SyntaxNet

  • Parsing from Standard Input
  • Annotating a Corpus
  • Configuring the Python Scripts

Building an NLP Pipeline with SyntaxNet

  • Obtaining Data
  • Part-of-Speech Tagging
  • Training the SyntaxNet POS Tagger
  • Preprocessing with the Tagger
  • Dependency Parsing: Transition-Based Parsing
  • Training a Parser Step 1: Local Pretraining
  • Training a Parser Step 2: Global Training

Vector Representations of Words

  • Motivation: Why Learn word embeddings?
  • Scaling up with Noise-Contrastive Training
  • The Skip-gram Model
  • Building the Graph
  • Training the Model
  • Visualizing the Learned Embeddings
  • Evaluating Embeddings: Analogical Reasoning
  • Optimizing the Implementation
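The skip-gram model above can be sketched end-to-end in NumPy: each center word's embedding is trained to predict its context words. This toy version uses a full softmax over the vocabulary rather than the noise-contrastive training mentioned in the outline, and the corpus is a single hypothetical sentence:

```python
import numpy as np

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, dim, lr = len(vocab), 8, 0.05

# (center, context) training pairs with a window of 1 word on each side.
pairs = [(idx[corpus[i]], idx[corpus[j]])
         for i in range(len(corpus))
         for j in (i - 1, i + 1) if 0 <= j < len(corpus)]

rng = np.random.default_rng(0)
W_in = 0.1 * rng.normal(size=(V, dim))    # word (input) embeddings
W_out = 0.1 * rng.normal(size=(V, dim))   # context (output) embeddings

def avg_loss():
    """Mean cross-entropy of predicting each context word from its center."""
    total = 0.0
    for c, o in pairs:
        scores = W_out @ W_in[c]
        p = np.exp(scores - scores.max())
        p /= p.sum()
        total -= np.log(p[o])
    return total / len(pairs)

loss_before = avg_loss()
for _ in range(300):
    for c, o in pairs:
        scores = W_out @ W_in[c]
        p = np.exp(scores - scores.max())
        p /= p.sum()                       # softmax over the vocabulary
        grad = p.copy()
        grad[o] -= 1.0                     # gradient of cross-entropy
        d_in = W_out.T @ grad
        W_out -= lr * np.outer(grad, W_in[c])
        W_in[c] -= lr * d_in
loss_after = avg_loss()
print(loss_before > loss_after)  # training lowers the prediction loss
```

The rows of W_in are the learned word embeddings; the full softmax is what noise-contrastive training replaces to make this scale to real vocabularies.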

 

 

pythonadvml Python for Advanced Machine Learning 21 hours

In this instructor-led, live training, participants will learn the most relevant and cutting-edge machine learning techniques in Python as they build a series of demo applications involving image, music, text, and financial data.

By the end of this training, participants will be able to:

  • Implement machine learning algorithms and techniques for solving complex problems
  • Apply deep learning and semi-supervised learning to applications involving image, music, text, and financial data
  • Push Python algorithms to their maximum potential
  • Use libraries and packages such as NumPy and Theano

Audience

  • Developers
  • Analysts
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

dlv Deep Learning for Vision 21 hours

Audience

This course is suitable for Deep Learning researchers and engineers interested in utilizing available tools (mostly open source) for analyzing computer images

This course provides working examples.

Deep Learning vs Machine Learning vs Other Methods

  • When Deep Learning is suitable
  • Limits of Deep Learning
  • Comparing accuracy and cost of different methods

Methods Overview

  • Nets and  Layers
  • Forward / Backward: the essential computations of layered compositional models.
  • Loss: the task to be learned is defined by the loss.
  • Solver: the solver coordinates model optimization.
  • Layer Catalogue: the layer is the fundamental unit of modeling and computation
  • Convolution

Methods and models

  • Backprop, modular models
  • Logsum module
  • RBF Net
  • MAP/MLE loss
  • Parameter Space Transforms
  • Convolutional Module
  • Gradient-Based Learning 
  • Energy for inference, objective for learning
  • PCA; NLL
  • Latent Variable Models
  • Probabilistic LVM
  • Loss Function
  • Detection with Fast R-CNN
  • Sequences with LSTMs and Vision + Language with LRCN
  • Pixelwise prediction with FCNs
  • Framework design and future

Tools

  • Caffe
  • Tensorflow
  • R
  • Matlab
  • Others...
radvml Advanced Machine Learning with R 21 hours

In this instructor-led, live training, participants will learn advanced techniques for Machine Learning with R as they step through the creation of a real-world application.

By the end of this training, participants will be able to:

  • Use techniques as hyper-parameter tuning and deep learning
  • Understand and implement unsupervised learning techniques
  • Put a model into production for use in a larger application

Audience

  • Developers
  • Analysts
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

Neuralnettf Neural Networks Fundamentals using TensorFlow as Example 28 hours

This course will give you knowledge of neural networks and, more generally, of machine learning algorithms and deep learning (algorithms and applications).

This training focuses on fundamentals, but it will also help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc. The examples are made in TensorFlow.

TensorFlow Basics

  • Creation, Initializing, Saving, and Restoring TensorFlow variables
  • Feeding, Reading and Preloading TensorFlow Data
  • How to use TensorFlow infrastructure to train models at scale
  • Visualizing and Evaluating models with TensorBoard

TensorFlow Mechanics

  • Inputs and Placeholders
  • Build the Graph
    • Inference
    • Loss
    • Training
  • Train the Model
    • The Graph
    • The Session
    • Train Loop
  • Evaluate the Model
    • Build the Eval Graph
    • Eval Output

The Perceptron

  • Activation functions
  • The perceptron learning algorithm
  • Binary classification with the perceptron
  • Document classification with the perceptron
  • Limitations of the perceptron
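The perceptron learning algorithm above can be written in a few lines of NumPy. This hypothetical example trains a perceptron on a linearly separable toy set, where the label is 1 exactly when the two inputs sum to more than 1:

```python
import numpy as np

# Linearly separable toy data: label is 1 when x1 + x2 > 1, else 0.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1],
              [0.2, 0.3], [0.9, 0.8], [0.1, 0.9], [0.8, 0.9]], dtype=float)
y = (X.sum(axis=1) > 1).astype(int)

w = np.zeros(2)
b = 0.0

def predict(x):
    """Step activation: fire iff the weighted sum plus bias is positive."""
    return int(x @ w + b > 0)

# Perceptron rule: on each mistake, nudge the weights toward the example.
for _ in range(100):
    for xi, yi in zip(X, y):
        err = yi - predict(xi)
        w += err * xi
        b += err

print([predict(xi) for xi in X] == y.tolist())  # converges on separable data
```

On data that is not linearly separable (such as XOR) this loop never settles, which is exactly the limitation the outline points to and the motivation for the multilayer networks that follow.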

From the Perceptron to Support Vector Machines

  • Kernels and the kernel trick
  • Maximum margin classification and support vectors

Artificial Neural Networks

  • Nonlinear decision boundaries
  • Feedforward and feedback artificial neural networks
  • Multilayer perceptrons
  • Minimizing the cost function
  • Forward propagation
  • Back propagation
  • Improving the way neural networks learn
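The forward and back propagation steps above can be sketched on a two-layer network learning XOR, a nonlinear decision boundary a single perceptron cannot represent. A minimal NumPy version with a sigmoid hidden layer and squared-error loss (all sizes and the learning rate are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: the classic problem that requires a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward propagation: input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Back propagation: chain rule applied layer by layer (squared error).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(float(np.mean((out - y) ** 2)))  # final mean squared error
```

"Improving the way neural networks learn" covers replacements for exactly these choices: cross-entropy instead of squared error, better weight initialization, and adaptive learning rates.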

Convolutional Neural Networks

  • Goals
  • Model Architecture
  • Principles
  • Code Organization
  • Launching and Training the Model
  • Evaluating a Model
tensorflowserving TensorFlow Serving 7 hours

TensorFlow Serving is a system for serving machine learning (ML) models to production.

In this instructor-led, live training, participants will learn how to configure and use TensorFlow Serving to deploy and manage ML models in a production environment.

By the end of this training, participants will be able to:

  • Train, export and serve various TensorFlow models
  • Test and deploy algorithms using a single architecture and set of APIs
  • Extend TensorFlow Serving to serve other types of models beyond TensorFlow models

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.
