Artificial Intelligence Training in Scotland

Artificial intelligence (AI) is an area of computer science that seeks to enable computers to behave intelligently, like humans. Some examples of AI applications include robotics, natural language processing (NLP), voice recognition, text processing, speech processing and computer vision.

NobleProg onsite live AI training courses demonstrate through hands-on practice how to implement AI solutions for solving real-world problems.

AI training is available in various formats, including onsite live training and live instructor-led training using an interactive, remote desktop setup. Local AI training can be carried out live on customer premises or in NobleProg local training centers.

Glasgow

69 Buchanan St
Glasgow G1 3HL
United Kingdom
The Buchanan Street Centre is located in the heart of Glasgow, in Scotland's most famous shopping and retail district. A major feature of this stunning large...

Artificial Intelligence Course Events - Scotland

Code Name Venue Duration Course Date Course Price [Remote / Classroom]
spmllib Apache Spark MLlib Edinburgh 35 hours Mon, 2018-03-12 09:30 £6500 / £9000
textsum Text Summarization with Python Edinburgh 14 hours Mon, 2018-03-12 09:30 £2200 / £3200
annmldt Artificial Neural Networks, Machine Learning, Deep Thinking Edinburgh Training and Conference Venue 21 hours Mon, 2018-03-12 09:30 £3900 / £4125
pythonmultipurpose Advanced Python Edinburgh 28 hours Mon, 2018-03-12 09:30 £4400 / £6400
spmllib Apache Spark MLlib Aberdeen - Berry Street 35 hours Mon, 2018-03-12 09:30 £6500 / £8150
annmldt Artificial Neural Networks, Machine Learning, Deep Thinking Glasgow 21 hours Mon, 2018-03-12 09:30 £3900 / £4950
deeplearning1 Introduction to Deep Learning Aberdeen - Berry Street 21 hours Mon, 2018-03-12 09:30 £3900 / £4890
bdbitcsp Big Data Business Intelligence for Telecom and Communication Service Providers Glasgow 35 hours Mon, 2018-03-12 09:30 £5500 / £7250
mlrobot1 Machine Learning for Robotics Edinburgh Training and Conference Venue 21 hours Mon, 2018-03-12 09:30 £3300 / £3525
marvin Marvin Image Processing Framework - creating image and video processing applications with Marvin Glasgow 14 hours Mon, 2018-03-12 09:30 £2600 / £3300
noolsint Introduction to Nools Edinburgh 7 hours Mon, 2018-03-12 09:30 £1100 / £1600
tsflw2v Natural Language Processing with TensorFlow Aberdeen - Berry Street 35 hours Mon, 2018-03-12 09:30 £6500 / £8150
aifortelecom AI Awareness for Telecom Edinburgh Training and Conference Venue 14 hours Mon, 2018-03-12 09:30 £2200 / £2350
matlabml1 Introduction to Machine Learning with MATLAB Edinburgh Training and Conference Venue 21 hours Mon, 2018-03-12 09:30 £3300 / £3525
opennmt OpenNMT: Setting up a Neural Machine Translation System Aberdeen - Berry Street 7 hours Mon, 2018-03-12 09:30 £1100 / £1430
annmldt Artificial Neural Networks, Machine Learning, Deep Thinking Edinburgh 21 hours Mon, 2018-03-12 09:30 £3900 / £5400
python_nltk Natural Language Processing with Python Aberdeen - Berry Street 28 hours Mon, 2018-03-12 09:30 £4400 / £5720
dlforbankingwithr Deep Learning for Banking (with R) Aberdeen - Berry Street 28 hours Mon, 2018-03-12 09:30 £4400 / £5720
matlabfundamentalsfinance MATLAB Fundamentals + MATLAB for Finance Edinburgh 35 hours Mon, 2018-03-12 09:30 £5500 / £8000
bigdatar Programming with Big Data in R Aberdeen - Berry Street 21 hours Mon, 2018-03-12 09:30 £3300 / £4290
odmblockchain IBM ODM and Blockchain: Applying business rules to Smart Contracts Edinburgh 14 hours Tue, 2018-03-13 09:30 £2200 / £3200
nifidev Apache NiFi for Developers Edinburgh Training and Conference Venue 7 hours Tue, 2018-03-13 09:30 £1100 / £1175
dlfinancewithr Deep Learning for Finance (with R) Glasgow 28 hours Tue, 2018-03-13 09:30 £4400 / £5800
hadoopadm Hadoop Administration Aberdeen - Berry Street 21 hours Tue, 2018-03-13 09:30 £3300 / £4290
datavault Data Vault: Building a Scalable Data Warehouse Aberdeen - Berry Street 28 hours Tue, 2018-03-13 09:30 £4400 / £5720
highcharts Highcharts for Data Visualization Edinburgh Training and Conference Venue 7 hours Tue, 2018-03-13 09:30 £1100 / £1175
tpuprogramming TPU Programming: Building Neural Network Applications on Tensor Processing Units Aberdeen - Berry Street 7 hours Tue, 2018-03-13 09:30 £1100 / £1430
dladv Advanced Deep Learning Aberdeen - Berry Street 28 hours Tue, 2018-03-13 09:30 £5200 / £6520
glusterfs GlusterFS for System Administrators Edinburgh Training and Conference Venue 21 hours Tue, 2018-03-13 09:30 £3300 / £3525
magellan Magellan: Geospatial Analytics on Spark Edinburgh Training and Conference Venue 14 hours Tue, 2018-03-13 09:30 £2200 / £2350
datavisR1 Introduction to Data Visualization with R Aberdeen - Berry Street 28 hours Tue, 2018-03-13 09:30 £5200 / £6520
matlabdl Matlab for Deep Learning Aberdeen - Berry Street 14 hours Tue, 2018-03-13 09:30 £2200 / £2860
flockdb Flockdb: A Simple Graph Database for Social Media Edinburgh Training and Conference Venue 7 hours Tue, 2018-03-13 09:30 £1100 / £1175
apex Apache Apex: Processing big data-in-motion Edinburgh Training and Conference Venue 21 hours Tue, 2018-03-13 09:30 £3300 / £3525
cognitivecomputing Cognitive Computing: An Introduction for Business Managers Glasgow 7 hours Tue, 2018-03-13 09:30 £1100 / £1450
datamin Data Mining Edinburgh 21 hours Tue, 2018-03-13 09:30 £3900 / £5400
hadoopdeva Advanced Hadoop for Developers Glasgow 21 hours Tue, 2018-03-13 09:30 £3300 / £4350
dlforbankingwithr Deep Learning for Banking (with R) Glasgow 28 hours Tue, 2018-03-13 09:30 £4400 / £5800
droolsrlsadm Drools Rules Administration Edinburgh Training and Conference Venue 21 hours Tue, 2018-03-13 09:30 £3900 / £4125
powerbiforbiandanalytics Power BI for Business Analysts Edinburgh Training and Conference Venue 21 hours Tue, 2018-03-13 09:30 £3300 / £3525
matlabpredanalytics Matlab for Predictive Analytics Edinburgh Training and Conference Venue 21 hours Tue, 2018-03-13 09:30 £3300 / £3525
mlfinancer Machine Learning for Finance (with R) Glasgow 28 hours Tue, 2018-03-13 09:30 £4400 / £5800
glusterfs GlusterFS for System Administrators Glasgow 21 hours Tue, 2018-03-13 09:30 £3300 / £4350
solrdev Solr for Developers Aberdeen - Berry Street 21 hours Tue, 2018-03-13 09:30 £3300 / £4290
ApHadm1 Apache Hadoop: Manipulation and Transformation of Data Performance Edinburgh Training and Conference Venue 21 hours Tue, 2018-03-13 09:30 £3300 / £3525
matlabpredanalytics Matlab for Predictive Analytics Edinburgh 21 hours Tue, 2018-03-13 09:30 £3300 / £4800
matfin MATLAB for Financial Applications Edinburgh 21 hours Tue, 2018-03-13 09:30 £3900 / £5400
mdlmrah Model MapReduce and Apache Hadoop Edinburgh 14 hours Tue, 2018-03-13 09:30 £2200 / £3200
PentahoDI Pentaho Data Integration Fundamentals Edinburgh Training and Conference Venue 21 hours Tue, 2018-03-13 09:30 £3300 / £3525
dlforfinancewithpython Deep Learning for Finance (with Python) Glasgow 28 hours Tue, 2018-03-13 09:30 £4400 / £5800
dlfinancewithr Deep Learning for Finance (with R) Edinburgh Training and Conference Venue 28 hours Tue, 2018-03-13 09:30 £4400 / £4700
samza Samza for stream processing Edinburgh 14 hours Tue, 2018-03-13 09:30 £2200 / £3200
nifi Apache NiFi for Administrators Glasgow 21 hours Tue, 2018-03-13 09:30 £3300 / £4350
d2dbdpa From Data to Decision with Big Data and Predictive Analytics Aberdeen - Berry Street 21 hours Wed, 2018-03-14 09:30 £3900 / £4890
datavisualizationreports Data Visualization: Creating Captivating Reports Glasgow 21 hours Wed, 2018-03-14 09:30 £3300 / £4350
odmblockchain IBM ODM and Blockchain: Applying business rules to Smart Contracts Aberdeen - Berry Street 14 hours Wed, 2018-03-14 09:30 £2200 / £2860
d3js D3.js for Data Visualization Glasgow 7 hours Wed, 2018-03-14 09:30 £1100 / £1450
tf101 Deep Learning with TensorFlow Aberdeen - Berry Street 21 hours Wed, 2018-03-14 09:30 £3300 / £4290
druid Druid: Build a fast, real-time data analysis system Glasgow 21 hours Wed, 2018-03-14 09:30 £3300 / £4350
deckgl deck.gl: Visualizing Large-scale Geospatial Data Aberdeen - Berry Street 14 hours Wed, 2018-03-14 09:30 £2200 / £2860
Piwik Getting started with Piwik Glasgow 21 hours Wed, 2018-03-14 09:30 £3300 / £4350
nlg Python for Natural Language Generation Edinburgh 21 hours Wed, 2018-03-14 09:30 £3300 / £4800
druid Druid: Build a fast, real-time data analysis system Aberdeen - Berry Street 21 hours Wed, 2018-03-14 09:30 £3300 / £4290
ApHadm1 Apache Hadoop: Manipulation and Transformation of Data Performance Edinburgh 21 hours Wed, 2018-03-14 09:30 £3300 / £4800
highcharts Highcharts for Data Visualization Glasgow 7 hours Wed, 2018-03-14 09:30 £1100 / £1450
annmldt Artificial Neural Networks, Machine Learning, Deep Thinking Aberdeen - Berry Street 21 hours Wed, 2018-03-14 09:30 £3900 / £4890
powerbiforbiandanalytics Power BI for Business Analysts Aberdeen - Berry Street 21 hours Wed, 2018-03-14 09:30 £3300 / £4290
intror Introduction to R with Time Series Analysis Aberdeen - Berry Street 21 hours Wed, 2018-03-14 09:30 £3300 / £4290
Torch Torch: Getting started with Machine and Deep Learning Aberdeen - Berry Street 21 hours Wed, 2018-03-14 09:30 N/A / £4890
flockdb Flockdb: A Simple Graph Database for Social Media Edinburgh 7 hours Thu, 2018-03-15 09:30 £1100 / £1600
predmodr Predictive Modelling with R Aberdeen - Berry Street 14 hours Thu, 2018-03-15 09:30 £2200 / £2860
MLFWR1 Machine Learning Fundamentals with R Glasgow 14 hours Thu, 2018-03-15 09:30 £2600 / £3300
nifidev Apache NiFi for Developers Aberdeen - Berry Street 7 hours Thu, 2018-03-15 09:30 £1100 / £1430
flockdb Flockdb: A Simple Graph Database for Social Media Aberdeen - Berry Street 7 hours Thu, 2018-03-15 09:30 £1100 / £1430
osqlide Oracle SQL Intermediate - Data Extraction Edinburgh 14 hours Thu, 2018-03-15 09:30 £2200 / £3200
aiauto Artificial Intelligence in Automotive Aberdeen - Berry Street 14 hours Thu, 2018-03-15 09:30 £2600 / £3260
voldemort Voldemort: Setting up a key-value distributed data store Aberdeen - Berry Street 14 hours Thu, 2018-03-15 09:30 £2200 / £2860
tpuprogramming TPU Programming: Building Neural Network Applications on Tensor Processing Units Edinburgh Training and Conference Venue 7 hours Fri, 2018-03-16 09:30 £1100 / £1175
smtwebint Semantic Web Overview Aberdeen - Berry Street 7 hours Fri, 2018-03-16 09:30 £1100 / £1430
tidyverse Introduction to Data Visualization with Tidyverse and R Edinburgh Training and Conference Venue 7 hours Fri, 2018-03-16 09:30 £1100 / £1175
pmml Predictive Models with PMML Edinburgh 7 hours Fri, 2018-03-16 09:30 £1100 / £1600
aiint Artificial Intelligence Overview Edinburgh Training and Conference Venue 7 hours Fri, 2018-03-16 09:30 £1300 / £1375
dsstne Amazon DSSTNE: Build a recommendation system Edinburgh Training and Conference Venue 7 hours Fri, 2018-03-16 09:30 £1100 / £1175
datashrinkgov Data Shrinkage for Government Aberdeen - Berry Street 14 hours Mon, 2018-03-19 09:30 £2200 / £2860
kdbplusandq kdb+ and q: Analyze time series data Edinburgh Training and Conference Venue 21 hours Mon, 2018-03-19 09:30 £3300 / £3525
appliedml Applied Machine Learning Glasgow 14 hours Mon, 2018-03-19 09:30 £2600 / £3300
matlabfundamentalsfinance MATLAB Fundamentals + MATLAB for Finance Aberdeen - Berry Street 35 hours Mon, 2018-03-19 09:30 £5500 / £7150
predmodr Predictive Modelling with R Edinburgh 14 hours Mon, 2018-03-19 09:30 £2200 / £3200
dlforbankingwithpython Deep Learning for Banking (with Python) Edinburgh Training and Conference Venue 28 hours Mon, 2018-03-19 09:30 £4400 / £4700
devbot Developing a Bot Glasgow 14 hours Mon, 2018-03-19 09:30 £2200 / £2900
hadoopadm Hadoop Administration Edinburgh Training and Conference Venue 21 hours Mon, 2018-03-19 09:30 £3300 / £3525
hadoopdeva Advanced Hadoop for Developers Edinburgh 21 hours Mon, 2018-03-19 09:30 £3300 / £4800
textsum Text Summarization with Python Edinburgh Training and Conference Venue 14 hours Mon, 2018-03-19 09:30 £2200 / £2350
datashrinkgov Data Shrinkage for Government Edinburgh 14 hours Mon, 2018-03-19 09:30 £2200 / £3200
deckgl deck.gl: Visualizing Large-scale Geospatial Data Glasgow 14 hours Mon, 2018-03-19 09:30 £2200 / £2900
undnn Understanding Deep Neural Networks Edinburgh 35 hours Mon, 2018-03-19 09:30 £5500 / £8000
TalendDI Talend Open Studio for Data Integration Aberdeen - Berry Street 28 hours Mon, 2018-03-19 09:30 N/A / £5720
wfsadm WildFly Server Administration Edinburgh 14 hours Mon, 2018-03-19 09:30 £2600 / £3600
ApacheIgnite Apache Ignite: Improve speed, scale and availability with in-memory computing Edinburgh Training and Conference Venue 14 hours Mon, 2018-03-19 09:30 £2200 / £2350
tf101 Deep Learning with TensorFlow Edinburgh Training and Conference Venue 21 hours Mon, 2018-03-19 09:30 £3300 / £3525

Course Outlines

Code Name Duration Outline
iotemi IoT (Internet of Things) for Entrepreneurs, Managers and Investors 21 hours

Unlike other technologies, IoT is far more complex, encompassing almost every branch of core engineering: mechanical, electronics, firmware, middleware, cloud, analytics and mobile. For each of its engineering layers, there are aspects of economics, standards, regulations and an evolving state of the art. For the first time, a modest course is offered to cover all of these critical aspects of IoT engineering.

Summary

  • An advanced training program covering the current state of the art in Internet of Things

  • Cuts across multiple technology domains to develop awareness of an IoT system and its components and how it can help businesses and organizations.

  • Live demo of model IoT applications to showcase practical IoT deployments across different industry domains, such as Industrial IoT, Smart Cities, Retail, Travel & Transportation and use cases around connected devices & things

Target Audience

  • Managers responsible for business and operational processes within their respective organizations who want to know how to harness IoT to make their systems and processes more efficient.

  • Entrepreneurs and Investors who are looking to build new ventures and want to develop a better understanding of the IoT technology landscape to see how they can leverage it in an effective manner.

Estimates of the market value of the Internet of Things (IoT) are massive, since by definition the IoT is an integrated and diffuse layer of devices, sensors, and computing power that overlays entire consumer, business-to-business, and government industries. The IoT will account for an increasingly huge number of connections: 1.9 billion devices today, and 9 billion by 2018. At that point it will be roughly equal to the number of smartphones, smart TVs, tablets, wearable computers, and PCs combined.

In the consumer space, many products and services have already crossed over into the IoT, including kitchen and home appliances, parking, RFID, lighting and heating products, and a number of applications in Industrial Internet.

However, the underlying technologies of IoT are nothing new; M2M communication has existed since the birth of the Internet. What has changed in the last couple of years is the emergence of a number of inexpensive wireless technologies, coupled with the overwhelming adoption of smartphones and tablets in every home. The explosive growth of mobile devices has led to the present demand for IoT.

Due to the unbounded opportunities in IoT business, a large number of small and medium-sized entrepreneurs have jumped on the bandwagon of the IoT gold rush. Thanks to the emergence of open-source electronics and IoT platforms, the cost of developing an IoT system and managing its sizable production is increasingly affordable. Existing electronic product owners are under pressure to integrate their devices with the Internet or mobile apps.

This training is intended for a technology and business review of an emerging industry so that IoT enthusiasts/entrepreneurs can grasp the basics of IoT technology and business.

Course Objective

The main objective of the course is to introduce emerging technological options, platforms and case studies of IoT implementation in home and city automation (smart homes and cities), the Industrial Internet, healthcare, government, mobile cellular and other areas.

  1. Basic introduction to all the elements of IoT: mechanical, electronics/sensor platforms, wireless and wireline protocols, mobile-to-electronics integration, mobile-to-enterprise integration, data analytics and the total control plane

  2. M2M wireless protocols for IoT (WiFi, Zigbee/Z-Wave, Bluetooth, ANT+): when and where to use which one?

  3. Mobile/desktop/web apps for registration, data acquisition and control; available M2M data acquisition platforms for IoT, such as Xively, Omega and NovoTech

  4. Security issues and security solutions for IoT

  5. Open-source/commercial electronics platforms for IoT: Raspberry Pi, Arduino, ARM mbed LPC, etc.

  6. Open-source/commercial enterprise cloud platforms: AWS IoT, Azure IoT and Watson IoT, in addition to other minor IoT clouds

  7. Studies of the business and technology of common IoT devices, such as home automation, smoke alarms, vehicles, military and home health devices

bigdatar Programming with Big Data in R 21 hours
bspkannmldt Artificial Neural Networks, Machine Learning and Deep Thinking 21 hours
droolsdslba Drools 6 and DSL for Business Analysts 21 hours

This 3-day course introduces Drools 6 to business analysts responsible for writing tests and rules.

This course focuses on creating pure logic. After this course, analysts can write tests and logic which developers can then integrate with business applications.

simplecv Computer Vision with SimpleCV 14 hours

SimpleCV is an open source framework, meaning that it is a collection of libraries and software that you can use to develop vision applications. It lets you work with the images or video streams that come from webcams, Kinects, FireWire and IP cameras, or mobile phones. It helps you build software so that your various technologies can not only see the world, but understand it too.

Audience

This course is directed at engineers and developers seeking to develop computer vision applications with SimpleCV.

mldt Machine Learning and Deep Learning 21 hours

This course covers AI, with an emphasis on Machine Learning and Deep Learning.

Torch Torch: Getting started with Machine and Deep Learning 21 hours

Torch is an open source machine learning library and a scientific computing framework based on the Lua programming language. It provides a development environment for numerics, machine learning, and computer vision, with a particular emphasis on deep learning and convolutional nets. It is one of the fastest and most flexible frameworks for Machine and Deep Learning and is used by companies such as Facebook, Google, Twitter, NVIDIA, AMD, Intel, and many others.

In this course we cover the principles of Torch, its unique features, and how it can be applied in real-world applications. We step through numerous hands-on exercises all throughout, demonstrating and practicing the concepts learned.

By the end of the course, participants will have a thorough understanding of Torch's underlying features and capabilities as well as its role and contribution within the AI space compared to other frameworks and libraries. Participants will have also received the necessary practice to implement Torch in their own projects.

Audience
    Software developers and programmers wishing to enable Machine and Deep Learning within their applications

Format of the course
    Overview of Machine and Deep Learning
    In-class coding and integration exercises
    Test questions sprinkled along the way to check understanding

danagr Data and Analytics - from the ground up 42 hours

Data analytics is a crucial tool in business today. We will focus throughout on developing skills for practical, hands-on data analysis. The aim is to help delegates give evidence-based answers to the following questions:

What has happened?

  • processing and analyzing data
  • producing informative data visualizations

What will happen?

  • forecasting future performance
  • evaluating forecasts

What should happen?

  • turning data into evidence-based business decisions
  • optimizing processes
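
The "What will happen?" step above can be sketched in a few lines of Python. This is a minimal illustration on made-up numbers, not course material: a naive moving-average forecast, evaluated with mean absolute error.

```python
from statistics import mean

def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    return mean(history[-window:])

def mean_absolute_error(actual, predicted):
    """Average absolute gap between forecasts and what actually happened."""
    return mean(abs(a - p) for a, p in zip(actual, predicted))

# Hypothetical monthly sales; forecast each month from the prior three.
sales = [100, 110, 105, 115, 120, 118, 125]
forecasts = [moving_average_forecast(sales[:i]) for i in range(3, len(sales))]
actuals = sales[3:]
print(forecasts)
print(mean_absolute_error(actuals, forecasts))
```

Even this toy example covers both halves of the question: producing a forecast and then checking it against what actually happened.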

The course itself can be delivered either as a 6-day classroom course or remotely over a period of weeks if preferred. We can work with you to deliver the course to best suit your needs.

zeppelin Zeppelin for interactive data analytics 14 hours

Apache Zeppelin is a web-based notebook for capturing, exploring, visualizing and sharing Hadoop and Spark based data.

This instructor-led, live training introduces the concepts behind interactive data analytics and walks participants through the deployment and usage of Zeppelin in a single-user or multi-user environment.

By the end of this training, participants will be able to:

  • Install and configure Zeppelin
  • Develop, organize, execute and share data in a browser-based interface
  • Visualize results without referring to the command line or cluster details
  • Execute and collaborate on long workflows
  • Work with any of a number of plug-in languages/data-processing backends, such as Scala (with Apache Spark), Python (with Apache Spark), Spark SQL, JDBC, Markdown and Shell
  • Integrate Zeppelin with Spark, Flink and Map Reduce
  • Secure multi-user instances of Zeppelin with Apache Shiro

Audience

  • Data engineers
  • Data analysts
  • Data scientists
  • Software developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

openface OpenFace: Creating Facial Recognition Systems 14 hours

OpenFace is Python- and Torch-based open-source, real-time facial recognition software based on Google’s FaceNet research.

In this instructor-led, live training, participants will learn how to use OpenFace's components to create and deploy a sample facial recognition application.

By the end of this training, participants will be able to:

  • Work with OpenFace's components, including dlib, OpenCV, Torch, and nn4 to implement face detection, alignment, and transformation.
  • Apply OpenFace to real-world applications such as surveillance, identity verification, virtual reality, gaming, and identifying repeat customers.

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

tidyverse Introduction to Data Visualization with Tidyverse and R 7 hours

The Tidyverse is a collection of versatile R packages for cleaning, processing, modeling, and visualizing data. Some of the packages included are: ggplot2, dplyr, tidyr, readr, purrr, and tibble.

In this instructor-led, live training, participants will learn how to manipulate and visualize data using the tools included in the Tidyverse.

By the end of this training, participants will be able to:

  • Perform data analysis and create appealing visualizations
  • Draw useful conclusions from various datasets of sample data
  • Filter, sort and summarize data to answer exploratory questions
  • Turn processed data into informative line plots, bar plots, histograms
  • Import and filter data from diverse data sources, including Excel, CSV, and SPSS files

Audience

  • Beginners to the R language
  • Beginners to data analysis and data visualization

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
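
The Tidyverse itself is R, but the filter-sort-summarize workflow described above is language-agnostic. As a rough sketch only, here is the same idea in plain Python (standard library, hypothetical data), with the corresponding dplyr verbs noted in comments:

```python
from collections import defaultdict

# Hypothetical dataset: one record per order.
orders = [
    {"region": "North", "amount": 120.0},
    {"region": "South", "amount": 80.0},
    {"region": "North", "amount": 60.0},
    {"region": "South", "amount": 40.0},
    {"region": "North", "amount": 20.0},
]

# Keep orders of at least 50 (dplyr: filter()).
large = [o for o in orders if o["amount"] >= 50]

# Total per region (dplyr: group_by() + summarise()).
totals = defaultdict(float)
for o in large:
    totals[o["region"]] += o["amount"]

# Regions by total, descending (dplyr: arrange()).
for region, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(region, total)
```

In Tidyverse code these steps chain into a single readable pipeline, which is much of its appeal.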
dlfornlp Deep Learning for NLP (Natural Language Processing) 28 hours

Deep Learning for NLP allows a machine to learn simple to complex language processing. Among the tasks currently possible are language translation and caption generation for photos. DL (Deep Learning) is a subset of ML (Machine Learning). Python is a popular programming language that contains libraries for Deep Learning for NLP.

In this instructor-led, live training, participants will learn to use Python libraries for NLP (Natural Language Processing) as they create an application that processes a set of pictures and generates captions. 

By the end of this training, participants will be able to:

  • Design and code DL for NLP using Python libraries
  • Create Python code that reads a large collection of pictures and generates keywords
  • Create Python Code that generates captions from the detected keywords

Audience

  • Programmers with interest in linguistics
  • Programmers who seek an understanding of NLP (Natural Language Processing) 

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
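
The final step described above, turning detected keywords into a caption, can be illustrated with a toy template (everything here is hypothetical; a real system generates text with a trained neural decoder rather than a template):

```python
def caption_from_keywords(keywords):
    """Turn detected image keywords into a crude English caption.

    A stand-in for the decoder stage of a captioning pipeline: keywords in,
    caption out. Real systems learn this mapping instead of templating it.
    """
    if not keywords:
        return "An image."
    if len(keywords) == 1:
        return f"An image of a {keywords[0]}."
    *rest, last = keywords
    return "An image of a " + ", a ".join(rest) + f" and a {last}."

print(caption_from_keywords(["dog", "ball", "park"]))
# -> An image of a dog, a ball and a park.
```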
aifortelecom AI Awareness for Telecom 14 hours

AI is a collection of technologies for building intelligent systems capable of understanding data and the activities surrounding the data to make "intelligent decisions". For Telecom providers, building applications and services that make use of AI could open the door for improved operations and servicing in areas such as maintenance and network optimization.

In this course we examine the various technologies that make up AI and the skill sets required to put them to use. Throughout the course, we examine AI's specific applications within the Telecom industry.

Audience

  • Network engineers
  • Network operations personnel
  • Telecom technical managers

Format of the course

  • Part lecture, part discussion, hands-on exercises

bdbitcsp Big Data Business Intelligence for Telecom and Communication Service Providers 35 hours

Overview

Communications service providers (CSPs) are facing pressure to reduce costs and maximize average revenue per user (ARPU), while ensuring an excellent customer experience, but data volumes keep growing. Global mobile data traffic will grow at a compound annual growth rate (CAGR) of 78 percent to 2016, reaching 10.8 exabytes per month.

Meanwhile, CSPs are generating large volumes of data, including call detail records (CDR), network data and customer data. Companies that fully exploit this data gain a competitive edge. According to a recent survey by The Economist Intelligence Unit, companies that use data-directed decision-making enjoy a 5-6% boost in productivity. Yet 53% of companies leverage only half of their valuable data, and one-fourth of respondents noted that vast quantities of useful data go untapped. The data volumes are so high that manual analysis is impossible, and most legacy software systems can’t keep up, resulting in valuable data being discarded or ignored.

With Big Data & Analytics’ high-speed, scalable big data software, CSPs can mine all their data for better decision making in less time. Different Big Data products and techniques provide an end-to-end software platform for collecting, preparing, analyzing and presenting insights from big data. Application areas include network performance monitoring, fraud detection, customer churn detection and credit risk analysis. Big Data & Analytics products scale to handle terabytes of data, but implementing such tools requires a new kind of cloud-based database system, such as Hadoop, or massively parallel computing processors (KPU, etc.).

This course on Big Data BI for Telco covers all the emerging areas in which CSPs are investing for productivity gains and new business revenue streams. The course provides a complete 360-degree overview of Big Data BI in Telco, so that decision makers and managers can gain a wide, comprehensive view of the possibilities of Big Data BI in Telco for productivity and revenue gain.

Course objectives

The main objective of the course is to introduce new Big Data business intelligence techniques in four sectors of the telecom business (marketing/sales, network operations, financial operations and customer relationship management). Students will be introduced to the following:

  • Introduction to Big Data: the 4 Vs (volume, velocity, variety and veracity); Big Data generation, extraction and management from a Telco perspective
  • How Big Data analytics differs from legacy data analytics
  • In-house justification of Big Data from a Telco perspective
  • Introduction to the Hadoop ecosystem: familiarity with Hadoop tools like Hive, Pig and Spark, and when and how they are used to solve Big Data problems
  • How Big Data is extracted and analyzed with analytics tools, and how business analysts can reduce the pain points of collecting and analyzing data through an integrated Hadoop dashboard approach
  • Basic introduction to insight analytics, visualization analytics and predictive analytics for Telco
  • Customer churn analytics and Big Data: how Big Data analytics can reduce customer churn and customer dissatisfaction in Telco (case studies)
  • Network failure and service failure analytics from network metadata and IPDR
  • Financial analysis: fraud, wastage and ROI estimation from sales and operational data
  • The customer acquisition problem: target marketing, customer segmentation and cross-selling from sales data
  • Introduction and summary of all Big Data analytics products and where they fit into the Telco analytics space
  • Conclusion: how to take a step-by-step approach to introducing Big Data Business Intelligence in your organization

Target Audience

  • Network operations managers, financial managers, CRM managers and top IT managers in the Telco CIO office
  • Business Analysts in Telco
  • CFO office managers/analysts
  • Operational managers
  • QA managers

psr Introduction to Recommendation Systems 7 hours

Audience

Marketing department employees, IT strategists and other people involved in decisions related to the design and implementation of recommender systems.

Format

A short theoretical background, followed by analysis of working examples and short, simple exercises.

osqlide Oracle SQL Intermediate - Data Extraction 14 hours
python_nltk Natural Language Processing with Python 28 hours

This course introduces linguists or programmers to NLP in Python. During this course we will mostly use nltk.org (Natural Language Toolkit), but we will also use other libraries relevant and useful for NLP. At the moment we can conduct this course in Python 2.x or Python 3.x. Examples are in English or Mandarin (普通话). Other languages can also be made available if agreed before booking.
spmllib Apache Spark MLlib 35 hours

MLlib is Spark’s machine learning (ML) library. Its goal is to make practical machine learning scalable and easy. It consists of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, dimensionality reduction, as well as lower-level optimization primitives and higher-level pipeline APIs.

It divides into two packages:

  • spark.mllib contains the original API built on top of RDDs.

  • spark.ml provides a higher-level API built on top of DataFrames for constructing ML pipelines.
Audience

This course is directed at engineers and developers seeking to utilize MLlib, the built-in machine learning library for Apache Spark.

bpmndmncmmn BPMN, DMN, and CMMN - OMG standards for process improvement 28 hours

Business Process Model and Notation (BPMN), Decision Model and Notation (DMN) and Case Management Model and Notation (CMMN) are three Object Management Group (OMG) standards for process, decision, and case modelling. This course provides an introduction to all three and explains when to use which.
OpenNN OpenNN: Implementing neural networks 14 hours

OpenNN is an open-source class library, written in C++, which implements neural networks for use in machine learning.

In this course we go over the principles of neural networks and use OpenNN to implement a sample application.

Audience
    Software developers and programmers wishing to create Deep Learning applications.

Format of the course
    Lecture and discussion coupled with hands-on exercises.

Fairseq Fairseq: Setting up a CNN-based machine translation system 7 hours

Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT).

In this training participants will learn how to use Fairseq to carry out translation of sample content.

By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq based machine translation solution.

Audience

  • Localization specialists with a technical background
  • Global content managers
  • Localization engineers
  • Software developers in charge of implementing global content solutions

Format of the course
    Part lecture, part discussion, heavy hands-on practice

Note

  • If you wish to use specific source and target language content, please contact us to arrange.
samza Samza for stream processing 14 hours

Apache Samza is an open-source near-realtime, asynchronous computational framework for stream processing. It uses Apache Kafka for messaging and Apache Hadoop YARN for fault tolerance, processor isolation, security and resource management.

This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.

By the end of this training, participants will be able to:

  • Use Samza to simplify the code needed to produce and consume messages
  • Decouple the handling of messages from an application
  • Use Samza to implement near-realtime asynchronous computation
  • Use stream processing to provide a higher level of abstraction over messaging systems

Audience

  • Developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
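The decoupling of message production from message handling described above can be sketched in-process with a plain queue — a conceptual stand-in only; Samza provides this across machines via Kafka topics and YARN-managed jobs:

```python
from collections import deque

# The "topic" here is just an in-memory queue standing in for a
# Kafka topic; a Samza job would consume such a stream continuously.
topic = deque()

def produce(message):
    """Producers only append; they know nothing about consumers."""
    topic.append(message)

def consume(handler):
    """Consumers drain the topic and hand each message to a handler."""
    while topic:
        handler(topic.popleft())

seen = []
produce({"user": "alice", "action": "click"})
produce({"user": "bob", "action": "view"})
consume(lambda msg: seen.append(msg["user"]))
print(seen)  # → ['alice', 'bob']
```

Because producer and consumer share only the topic, either side can be changed or scaled independently — the property the course builds on.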
fiji Fiji: Introduction to scientific image processing 21 hours

Fiji is an open-source image processing package that bundles ImageJ (an image processing program for scientific multidimensional images) and a number of plugins for scientific image analysis.

In this instructor-led, live training, participants will learn how to use the Fiji distribution and its underlying ImageJ program to create an image analysis application.

By the end of this training, participants will be able to:

  • Use Fiji's advanced programming features and software components to extend ImageJ
  • Stitch large 3D images from overlapping tiles
  • Automatically update a Fiji installation on startup using the integrated update system
  • Select from a broad selection of scripting languages to build custom image analysis solutions
  • Use Fiji's powerful libraries, such as ImgLib on large bioimage datasets
  • Deploy their application and collaborate with other scientists on similar projects

Audience

  • Scientists
  • Researchers
  • Developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
nifi Apache NiFi for Administrators 21 hours

Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.

In this instructor-led, live training, participants will learn how to deploy and manage Apache NiFi in a live lab environment.

By the end of this training, participants will be able to:

  • Install and configure Apache NiFi
  • Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes
  • Automate dataflows
  • Enable streaming analytics
  • Apply various approaches for data ingestion
  • Transform Big Data into business insights

Audience

  • System administrators
  • Data engineers
  • Developers
  • DevOps

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
bigdatabicriminal Big Data Business Intelligence for Criminal Intelligence Analysis 35 hours

Advances in technologies and the increasing amount of information are transforming how law enforcement is conducted. The challenges that Big Data pose are nearly as daunting as Big Data's promise. Storing data efficiently is one of these challenges; effectively analyzing it is another.

In this instructor-led, live training, participants will learn the mindset with which to approach Big Data technologies, assess their impact on existing processes and policies, and implement these technologies for the purpose of identifying criminal activity and preventing crime. Case studies from law enforcement organizations around the world will be examined to gain insights on their adoption approaches, challenges and results.

By the end of this training, participants will be able to:

  • Combine Big Data technology with traditional data gathering processes to piece together a story during an investigation
  • Implement industrial big data storage and processing solutions for data analysis
  • Prepare a proposal for the adoption of the most adequate tools and processes for enabling a data-driven approach to criminal investigation

Audience

  • Law Enforcement specialists with a technical background

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
tableaupython Tableau with Python 14 hours

Tableau is a business intelligence and data visualization tool. Python is a widely used programming language which provides support for a wide variety of statistical and machine learning techniques. Tableau's data visualization power and Python's machine learning capabilities, when combined, help developers rapidly build advanced data analytics applications for various business use cases.

In this instructor-led, live training, participants will learn how to combine Tableau and Python to carry out advanced analytics. Integration of Tableau and Python will be done via the TabPy API.

By the end of this training, participants will be able to:

  • Integrate Tableau and Python using TabPy API
  • Use the integration of Tableau and Python to analyze complex business scenarios with few lines of Python code

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
apacheh Administrator Training for Apache Hadoop 35 hours

Audience:

The course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment

Goal:

Deep knowledge on Hadoop cluster administration.

MLFWR1 Machine Learning Fundamentals with R 14 hours

The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the R programming platform and its various libraries, and based on a multitude of practical examples this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, interpret the outputs of the algorithms and validate the results.

Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Sciences applications.

datapro Data Protection 35 hours

This is an instructor-led course, and is the non-certification version of the "CDP - Certificate in Data Protection" course

Those experienced in data protection issues, as well as those new to the subject, need to be trained so that their organisations are confident that legal compliance is continually addressed. It is necessary to identify issues requiring expert data protection advice in good time in order that organisational reputation and credibility are enhanced through relevant data protection policies and procedures.

Objectives:

The aim of the syllabus is to promote an understanding of how the data protection principles work rather than simply focusing on the mechanics of regulation. The syllabus places the Act in the context of human rights and promotes good practice within organisations. On completion you will have:

  • an appreciation of the broader context of the Act
  • an understanding of the way in which the Act and the Privacy and Electronic Communications (EC Directive) Regulations 2003 work
  • a broad understanding of the way associated legislation relates to the Act
  • an understanding of what has to be done to achieve compliance

Course Synopsis:

The syllabus comprises three main parts, each with sub-sections.

  • Context - this will address the origins of and reasons for the Act together with consideration of privacy in general.
  • Law – Data Protection Act - this will address the main concepts and elements of the Act and subordinate legislation.
  • Application - this will consider how compliance is achieved and how the Act works in practice.
mlfsas Machine Learning Fundamentals with Scala and Apache Spark 14 hours

The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the Scala programming language and its various libraries, and based on a multitude of practical examples this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, interpret the outputs of the algorithms and validate the results.

Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Sciences applications.

aiintrozero From Zero to AI 35 hours

This course is created for people who have no previous experience in probability and statistics.

dsbda Data Science for Big Data Analytics 35 hours

Big data refers to data sets that are so voluminous and complex that traditional data processing software is inadequate to deal with them. Big data challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating and information privacy.

drools7int Introduction to Drools 7 for Developers 21 hours

This three-day course introduces Drools 7 to developers. It does not cover Drools integration, performance or other complex topics.

facebooknmt Facebook NMT: Setting up a Neural Machine Translation System 7 hours

Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT).

In this training, participants will learn how to use Fairseq to carry out translation of sample content.

By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq-based machine translation solution.

Audience

  • Localization specialists with a technical background
  • Global content managers
  • Localization engineers
  • Software developers in charge of implementing global content solutions

Format of the course

  • Part lecture, part discussion, heavy hands-on practice

Note

  • If you wish to use specific source and target language content, please contact us to arrange.
flink Flink for scalable stream and batch data processing 28 hours

Apache Flink is an open-source framework for scalable stream and batch data processing.

This instructor-led, live training introduces the principles and approaches behind distributed stream and batch data processing, and walks participants through the creation of a real-time, data streaming application.

By the end of this training, participants will be able to:

  • Set up an environment for developing data analysis applications
  • Package, execute, and monitor Flink-based, fault-tolerant, data streaming applications
  • Manage diverse workloads
  • Perform advanced analytics using Flink ML
  • Set up a multi-node Flink cluster
  • Measure and optimize performance
  • Integrate Flink with different Big Data systems
  • Compare Flink capabilities with those of other big data processing frameworks

Audience

  • Developers
  • Architects
  • Data engineers
  • Analytics professionals
  • Technical managers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
kdbplusandq kdb+ and q: Analyze time series data 21 hours

kdb+ is an in-memory, column-oriented database and q is its built-in, interpreted, vector-based language. In kdb+, tables are columns of vectors, and q is used to perform operations on the table data as if it were a list. kdb+ and q are commonly used in high-frequency trading and are popular with major financial institutions, including Goldman Sachs, Morgan Stanley, Merrill Lynch and JP Morgan.

In this instructor-led, live training, participants will learn how to create a time series data application using kdb+ and q.

By the end of this training, participants will be able to:

  • Understand the difference between a row-oriented database and a column-oriented database
  • Select data, write scripts and create functions to carry out advanced analytics
  • Analyze time series data such as stock and commodity exchange data
  • Use kdb+'s in-memory capabilities to store, analyze, process and retrieve large data sets at high speed
  • Think of functions and data at a higher level than the standard function(arguments) approach common in non-vector languages
  • Explore other time-sensitive applications for kdb+, including energy trading, telecommunications, sensor data, log data, and machine and network usage monitoring

Audience

  • Developers
  • Database engineers
  • Data scientists
  • Data analysts

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
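The row-store versus column-store distinction at the heart of kdb+ can be illustrated in plain Python with invented sample data — in kdb+ the column vectors would be native q lists:

```python
# The same tiny table in two layouts.
# Row-oriented: each record is stored together.
rows = [("AAPL", 101.0), ("AAPL", 103.0), ("MSFT", 55.0)]

# Column-oriented: each column is one contiguous vector,
# which is how kdb+ stores tables.
columns = {
    "sym":   ["AAPL", "AAPL", "MSFT"],
    "price": [101.0, 103.0, 55.0],
}

# An aggregate over one column scans only that vector and
# never touches the symbol column -- the key performance win.
avg_price = sum(columns["price"]) / len(columns["price"])
print(avg_price)
```

This is why column stores excel at the time-series aggregations (average price per interval, volume-weighted averages) the course covers.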
nifidev Apache NiFi for Developers 7 hours

Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.

In this instructor-led, live training, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.

By the end of this training, participants will be able to:

  • Understand NiFi's architecture and dataflow concepts
  • Develop extensions using NiFi and third-party APIs
  • Develop their own custom Apache NiFi processors
  • Ingest and process real-time data from disparate and uncommon file formats and data sources

Audience

  • Developers
  • Data engineers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
botsazure Developing Intelligent Bots with Azure 14 hours

The Azure Bot Service combines the power of the Microsoft Bot Framework and Azure Functions to enable rapid development of intelligent bots.

In this instructor-led, live training, participants will learn how to easily create an intelligent bot using Microsoft Azure.

By the end of this training, participants will be able to:

  • Learn the fundamentals of intelligent bots
  • Learn how to create intelligent bots using cloud applications
  • Understand how to use the Microsoft Bot Framework, the Bot Builder SDK, and the Azure Bot Service
  • Understand how to design bots using bot patterns
  • Develop their first intelligent bot using Microsoft Azure

Audience

  • Developers
  • Hobbyists
  • Engineers
  • IT Professionals

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
echarts ECharts 14 hours

ECharts is a free JavaScript library used for interactive charting and data visualization.

In this instructor-led, live training, participants will learn the fundamental functionalities of ECharts as they step through the process of creating and configuring charts using ECharts.

By the end of this training, participants will be able to:

  • Understand the fundamentals of ECharts
  • Explore and utilize the various features and configuration options in ECharts
  • Build their own simple, interactive, and responsive charts with ECharts

Audience

  • Developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
brmsdrools Business Rule Management (BRMS) with Drools 7 hours

This course is aimed at enterprise architects, business and system analysts and managers who want to apply business rules to their solution. With Drools you can write your business rules using almost natural language, therefore reducing the gap between business and IT.

hadoopadm Hadoop Administration 21 hours

The course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment.

Course goal:

Gaining knowledge of Hadoop cluster administration.

mlfunpython Machine Learning Fundamentals with Python 14 hours

The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the Python programming language and its various libraries, and based on a multitude of practical examples this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, interpret the outputs of the algorithms and validate the results.

Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Sciences applications.
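One of the building blocks above — fitting a simple model and validating it on held-out data — can be sketched with a one-nearest-neighbour classifier in plain Python (toy data, no libraries; course exercises would use the real Python ML ecosystem):

```python
def predict(train_points, label_of, x):
    """Classify x by the label of its nearest training point
    (one-nearest-neighbour on 1-D data)."""
    nearest = min(train_points, key=lambda t: abs(t - x))
    return label_of[nearest]

# Toy training set: small values are "low", large values are "high".
train_points = {1.0: "low", 2.0: "low", 8.0: "high", 9.0: "high"}

# Held-out validation set, never used for fitting.
validation = [(1.5, "low"), (8.5, "high")]

correct = sum(
    predict(list(train_points), train_points, x) == y
    for x, y in validation
)
print(f"validation accuracy: {correct / len(validation):.0%}")
```

Measuring accuracy on data the model has not seen is exactly the validation discipline the course teaches, just at library scale.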

datama Data Mining and Analysis 28 hours

Objective:

Delegates will be able to analyse big data sets, extract patterns, and choose the right variables impacting the results so that a new model with predictive power can be built.

bigddbsysfun Big Data & Database Systems Fundamentals 14 hours

The course is part of the Data Scientist skill set (Domain: Data and Technology).

singa Mastering Apache SINGA 21 hours

SINGA is a general distributed deep learning platform for training big deep learning models over large datasets. It is designed with an intuitive programming model based on the layer abstraction. A variety of popular deep learning models are supported, namely feed-forward models including convolutional neural networks (CNN), energy models like restricted Boltzmann machine (RBM), and recurrent neural networks (RNN). Many built-in layers are provided for users. SINGA architecture is sufficiently flexible to run synchronous, asynchronous and hybrid training frameworks. SINGA also supports different neural net partitioning schemes to parallelize the training of large models, namely partitioning on batch dimension, feature dimension or hybrid partitioning.

Audience

This course is directed at researchers, engineers and developers seeking to utilize Apache SINGA as a deep learning framework.

After completing this course, delegates will:

  • understand SINGA’s structure and deployment mechanisms
  • be able to carry out installation / production environment / architecture tasks and configuration
  • be able to assess code quality, perform debugging, monitoring
  • be able to implement advanced production tasks such as training models, embedding terms, building graphs and logging

 

IntroToAvro Apache Avro: Data serialization for distributed applications 14 hours

This course is intended for

  • Developers

Format of the course

  • Lectures, hands-on practice, small tests along the way to gauge understanding
drools7dslba Drools 7 and DSL for Business Analysts 21 hours

This three-day course introduces Drools 7 to business analysts responsible for writing tests and rules.

This course focuses on creating pure logic. After this course, analysts can write tests and logic which developers can then further integrate with business applications.

matfin MATLAB for Financial Applications 21 hours

MATLAB is a numerical computing environment and programming language developed by MathWorks.

alluxio Alluxio: Unifying disparate storage systems 7 hours

Alluxio is an open-source virtual distributed storage system that unifies disparate storage systems and enables applications to interact with data at memory speed. It is used by companies such as Intel, Baidu and Alibaba.

In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio.

By the end of this training, participants will be able to:

  • Develop an application with Alluxio
  • Connect big data systems and applications while preserving one namespace
  • Efficiently extract value from big data in any storage format
  • Improve workload performance
  • Deploy and manage Alluxio standalone or clustered

Audience

  • Data scientists
  • Developers
  • System administrators

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
hypertable Hypertable: Deploy a BigTable like database 14 hours

Hypertable is an open-source software database management system based on the design of Google's Bigtable.

In this instructor-led, live training, participants will learn how to set up and manage a Hypertable database system.

By the end of this training, participants will be able to:

  • Install, configure and upgrade a Hypertable instance
  • Set up and administer a Hypertable cluster
  • Monitor and optimize the performance of the database
  • Design a Hypertable schema
  • Work with Hypertable's API
  • Troubleshoot operational issues

Audience

  • Developers
  • Operations engineers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice


matlabpredanalytics Matlab for Predictive Analytics 21 hours

Predictive analytics is the process of using data analytics to make predictions about the future. This process uses data along with data mining, statistics, and machine learning techniques to create a predictive model for forecasting future events.

In this instructor-led, live training, participants will learn how to use Matlab to build predictive models and apply them to large sample data sets to predict future events based on the data.

By the end of this training, participants will be able to:

  • Create predictive models to analyze patterns in historical and transactional data
  • Use predictive modeling to identify risks and opportunities
  • Build mathematical models that capture important trends
  • Use data from devices and business systems to reduce waste, save time, or cut costs

Audience

  • Developers
  • Engineers
  • Domain experts

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
devbot Developing a Bot 14 hours

A bot, or chatbot, is a computer assistant used to automate user interactions on various messaging platforms, getting things done faster without the need for users to speak to another human.

In this instructor-led, live training, participants will learn how to get started in developing a bot as they step through the creation of sample chatbots using bot development tools and frameworks.

By the end of this training, participants will be able to:

  • Understand the different uses and applications of bots
  • Understand the complete process in developing bots
  • Explore the different tools and platforms used in building bots
  • Build a sample chatbot for Facebook Messenger
  • Build a sample chatbot using Microsoft Bot Framework

Audience

  • Developers interested in creating their own bot

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
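Stripped of platform connectors, the core of a simple bot is pattern-matched rules. The sketch below (invented rules and replies) shows the logic that Messenger or Microsoft Bot Framework bots wrap with messaging-platform plumbing and, often, natural-language understanding:

```python
import re

# Ordered (pattern, reply) rules; the first match wins.
RULES = [
    (re.compile(r"\b(hi|hello)\b", re.I), "Hello! How can I help?"),
    (re.compile(r"\bhours?\b", re.I), "We are open 9am to 5pm."),
]

def reply(message):
    """Return the reply for the first matching rule,
    or a fallback when nothing matches."""
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return "Sorry, I didn't understand that."

print(reply("Hi there"))
print(reply("What are your hours?"))
```

A production bot replaces the regex table with intent classification, but the dispatch shape — match the message, pick a handler — is the same.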
drlpython Deep Reinforcement Learning with Python 21 hours

Deep Reinforcement Learning refers to the ability of an "artificial agent" to learn by trial and error, through rewards and punishments. An artificial agent aims to emulate a human's ability to obtain and construct knowledge on its own, directly from raw inputs such as vision. To realize deep reinforcement learning, deep learning and neural networks are used. Reinforcement learning is a branch of machine learning distinct from the supervised and unsupervised learning approaches.

In this instructor-led, live training, participants will learn the fundamentals of Deep Reinforcement Learning as they step through the creation of a Deep Learning Agent.

By the end of this training, participants will be able to:

  • Understand the key concepts behind Deep Reinforcement Learning and be able to distinguish it from Machine Learning
  • Apply advanced Reinforcement Learning algorithms to solve real-world problems
  • Build a Deep Learning Agent

Audience

  • Developers
  • Data Scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
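The trial-and-error, reward-driven loop described above can be sketched as tabular Q-learning on a tiny invented corridor environment; "deep" RL replaces the Q table below with a neural network:

```python
import random

random.seed(0)

# A 5-state corridor: the agent starts at state 0 and is rewarded
# only for reaching state 4. Actions move one step left or right.
N_STATES = 5
ACTIONS = (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):  # episodes of random exploration (off-policy)
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: move toward reward + discounted best future value.
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned values should prefer moving right in every non-terminal state.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(N_STATES - 1)))
```

The agent is never told the rules; the preference for "right" emerges purely from rewards propagating backwards through the Q values — the mechanism a Deep Learning Agent scales up to raw visual input.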
aiint Artificial Intelligence Overview 7 hours

This course has been created for managers, solutions architects, innovation officers, CTOs, software architects and everyone who is interested in an overview of applied artificial intelligence and the near-term forecast for its development.

datamin Data Mining 21 hours

This course can be provided with any tools, including free open-source data mining software and applications.

hadoopmapr Hadoop Administration on MapR 28 hours

Audience:

This course is intended to demystify Big Data/Hadoop technology and to show that it is not difficult to understand.

rintrob Introductory R for Biologists 28 hours

R is a free, open-source programming language for statistical computing, data analysis, and graphics. R is used by a growing number of managers and data analysts inside corporations and academia. R has also found followers among statisticians, engineers and scientists without computer programming skills who find it easy to use. Its popularity is due to the increasing use of data mining for various goals such as setting ad prices, finding new drugs more quickly or fine-tuning financial models. R has a wide variety of packages for data mining.

rprogda R Programming for Data Analysis 14 hours

This course is part of the Data Scientist skill set (Domain: Data and Technology)

caffe Deep Learning for Vision with Caffe 21 hours

Caffe is a deep learning framework made with expression, speed, and modularity in mind.

This course explores the application of Caffe as a deep learning framework for image recognition, using MNIST as an example.

Audience

This course is suitable for Deep Learning researchers and engineers interested in utilizing Caffe as a framework.

After completing this course, delegates will be able to:

  • understand Caffe’s structure and deployment mechanisms
  • carry out installation / production environment / architecture tasks and configuration
  • assess code quality, perform debugging, monitoring
  • implement advanced production tasks such as training models, implementing layers and logging
deepmclrg Machine Learning & Deep Learning with Python and R 14 hours
accumulo Apache Accumulo: Building highly scalable big data applications 21 hours

Apache Accumulo is a sorted, distributed key/value store that provides robust, scalable data storage and retrieval. It is based on the design of Google's BigTable and is powered by Apache Hadoop, Apache Zookeeper, and Apache Thrift.
 
This course covers the working principles behind Accumulo and walks participants through the development of a sample application on Apache Accumulo.

Audience

  • Application developers
  • Software engineers
  • Technical consultants

Format of the course

  • Part lecture, part discussion, hands-on development and implementation, occasional tests to gauge understanding
storm Apache Storm 28 hours

Apache Storm is a distributed, real-time computation engine used for enabling real-time business intelligence. It does so by enabling applications to reliably process unbounded streams of data (a.k.a. stream processing).

"Storm is for real-time processing what Hadoop is for batch processing!"

In this instructor-led live training, participants will learn how to install and configure Apache Storm, then develop and deploy an Apache Storm application for processing big data in real-time.

Some of the topics included in this training include:

  • Apache Storm in the context of Hadoop
  • Working with unbounded data
  • Continuous computation
  • Real-time analytics
  • Distributed RPC and ETL processing

Request this course now!

Audience

  • Software and ETL developers
  • Mainframe professionals
  • Data scientists
  • Big data analysts
  • Hadoop professionals

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
apex Apache Apex: Processing big data-in-motion 21 hours

Apache Apex is a YARN-native platform that unifies stream and batch processing. It processes big data-in-motion in a way that is scalable, performant, fault-tolerant, stateful, secure, distributed, and easily operable.

This instructor-led, live training introduces Apache Apex's unified stream processing architecture and walks participants through the creation of a distributed application using Apex on Hadoop.

By the end of this training, participants will be able to:

  • Understand data processing pipeline concepts such as connectors for sources and sinks, common data transformations, etc.
  • Build, scale and optimize an Apex application
  • Process real-time data streams reliably and with minimum latency
  • Use Apex Core and the Apex Malhar library to enable rapid application development
  • Use the Apex API to write and re-use existing Java code
  • Integrate Apex into other applications as a processing engine
  • Tune, test and scale Apex applications

Audience

  • Developers
  • Enterprise architects

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
pythonadvml Python for Advanced Machine Learning 21 hours

In this instructor-led, live training, participants will learn the most relevant and cutting-edge machine learning techniques in Python as they build a series of demo applications involving image, music, text, and financial data.

By the end of this training, participants will be able to:

  • Implement machine learning algorithms and techniques for solving complex problems
  • Apply deep learning and semi-supervised learning to applications involving image, music, text, and financial data
  • Push Python algorithms to their maximum potential
  • Use libraries and packages such as NumPy and Theano

Audience

  • Developers
  • Analysts
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
matlabdl Matlab for Deep Learning 14 hours

In this instructor-led, live training, participants will learn how to use Matlab to design, build, and visualize a convolutional neural network for image recognition.

By the end of this training, participants will be able to:

  • Build a deep learning model
  • Automate data labeling
  • Work with models from Caffe and TensorFlow-Keras
  • Train data using multiple GPUs, the cloud, or clusters

Audience

  • Developers
  • Engineers
  • Domain experts

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
sparkpython Python and Spark for Big Data (PySpark) 21 hours

Python is a high-level programming language famous for its clear syntax and code readability. Spark is a data processing engine used in querying, analyzing, and transforming big data. PySpark allows users to interface Spark with Python.

In this instructor-led, live training, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises.

By the end of this training, participants will be able to:

  • Learn how to use Spark with Python to analyze Big Data
  • Work on exercises that mimic real world circumstances
  • Use different tools and techniques for big data analysis using PySpark

Audience

  • Developers
  • IT Professionals
  • Data Scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
smtwebint Semantic Web Overview 7 hours

The Semantic Web is a collaborative movement led by the World Wide Web Consortium (W3C) that promotes common formats for data on the World Wide Web. The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.
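The triple model at the heart of the Semantic Web can be sketched in a few lines of plain Python. The names below are illustrative only, not drawn from any real vocabulary:

```python
# A minimal sketch of the Semantic Web's core data model: RDF-style triples.
# Each fact is a (subject, predicate, object) statement.

triples = [
    ("Alice", "knows", "Bob"),
    ("Alice", "worksFor", "Acme"),
    ("Bob", "worksFor", "Acme"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Who works for Acme?
print(query(triples, predicate="worksFor", obj="Acme"))
```

Because every data source can be reduced to such statements, data expressed this way can be merged and queried across application boundaries.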

droolsrlsadm Drools Rules Administration 21 hours

This course has been prepared for people who are involved in administering corporate knowledge assets (rules, processes), such as system administrators, system integrators, and application server administrators. We use the newest stable community version of Drools to run this course, but older versions are also possible if agreed before booking.

mdldromgdmn Modelling Decision and Rules with OMG DMN 14 hours

This course teaches how to design and execute business decisions and rules using the OMG DMN (Decision Model and Notation) standard.

dladv Advanced Deep Learning 28 hours
dmmlr Data Mining & Machine Learning with R 14 hours

R is a free, open-source programming language for statistical computing, data analysis, and graphics. It is used by a growing number of managers and data analysts in corporations and academia, and offers a wide variety of packages for data mining.

dl4jir DeepLearning4J for Image Recognition 21 hours

Deeplearning4j is an open-source deep-learning library for Java and Scala that runs on Hadoop and Spark.

Audience

This course is meant for engineers and developers seeking to utilize DeepLearning4J in their image recognition projects.

Piwik Getting started with Piwik 21 hours

Audience

  • Web analysts
  • Data analysts
  • Market researchers
  • Marketing and sales professionals
  • System administrators

Format of course

  • Part lecture, part discussion, heavy hands-on practice

druid Druid: Build a fast, real-time data analysis system 21 hours

Druid is an open-source, column-oriented, distributed data store written in Java. It was designed to quickly ingest massive quantities of event data and execute low-latency OLAP queries on that data. Druid is commonly used in business intelligence applications to analyze high volumes of real-time and historical data. It is also well suited for powering fast, interactive, analytic dashboards for end-users. Druid is used by companies such as Alibaba, Airbnb, Cisco, eBay, Netflix, Paypal, and Yahoo.

In this course we explore some of the limitations of data warehouse solutions and discuss how Druid can complement those technologies to form a flexible and scalable streaming analytics stack. We walk through many examples, offering participants the chance to implement and test Druid-based solutions in a lab environment.

Audience

  • Application developers
  • Software engineers
  • Technical consultants
  • DevOps professionals
  • Architecture engineers

Format of the course

  • Part lecture, part discussion, heavy hands-on practice, occasional tests to gauge understanding

glusterfs GlusterFS for System Administrators 21 hours

GlusterFS is an open-source distributed file storage system that can scale up to petabytes of capacity. It is designed to scale out by adding storage as the user's requirements grow. A common application for GlusterFS is cloud computing storage systems.

In this instructor-led training, participants will learn how to use commodity, off-the-shelf hardware to create and deploy a storage system that is scalable and always available.

By the end of the course, participants will be able to:

  • Install, configure, and maintain a full-scale GlusterFS system.
  • Implement large-scale storage systems in different types of environments.

Audience

  • System administrators
  • Storage administrators

Format of the Course

  • Part lecture, part discussion, exercises and heavy hands-on practice.
vespa Vespa: Serving large-scale data in real-time 14 hours

Vespa is an open-source big data processing and serving engine created by Yahoo. It is used to respond to user queries, make recommendations, and provide personalized content and advertisements in real time.

This instructor-led, live training introduces the challenges of serving large-scale data and walks participants through the creation of an application that can compute responses to user requests, over large datasets in real-time.

By the end of this training, participants will be able to:

  • Use Vespa to quickly compute data (store, search, rank, organize) at serving time while a user waits
  • Implement Vespa in existing applications involving feature search, recommendations, and personalization
  • Integrate and deploy Vespa with existing big data systems such as Hadoop and Storm

Audience

  • Developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
radvml Advanced Machine Learning with R 21 hours

In this instructor-led, live training, participants will learn advanced techniques for Machine Learning with R as they step through the creation of a real-world application.

By the end of this training, participants will be able to:

  • Use techniques such as hyper-parameter tuning and deep learning
  • Understand and implement unsupervised learning techniques
  • Put a model into production for use in a larger application

Audience

  • Developers
  • Analysts
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
powerbiforbiandanalytics Power BI for Business Analysts 21 hours

Microsoft Power BI is a free Software as a Service (SaaS) suite for analyzing data and sharing insights. Power BI dashboards provide a 360-degree view of the most important metrics in one place, updated in real time, and available on all devices.

In this instructor-led, live training, participants will learn how to use Microsoft Power BI to analyze and visualize data using a series of sample data sets.

By the end of this training, participants will be able to:

  • Create visually compelling dashboards that provide valuable insights into data
  • Obtain and integrate data from multiple data sources
  • Build and share visualizations with team members
  • Adjust data with Power BI Desktop

Audience

  • Business managers
  • Business analysts
  • Data analysts
  • Business Intelligence (BI) and Data Warehouse (DW) teams
  • Report developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

mlfinancepython Machine Learning for Finance (with Python) 21 hours

Machine learning is a branch of Artificial Intelligence wherein computers have the ability to learn without being explicitly programmed.
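The idea of learning from data rather than from explicit rules can be illustrated with a minimal sketch: fitting a line y = a*x + b to made-up points by ordinary least squares, in plain Python:

```python
# A minimal illustration of "learning from data": the slope and intercept
# are derived from the observations, not hand-coded. The data is made up.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 8.0]   # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares slope and intercept
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(round(a, 2), round(b, 2))  # slope ~ 1.99, intercept ~ 0.05
```

Real machine learning models generalize this idea to many parameters and far more flexible function families, but the principle is the same: parameters are estimated from observations.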

In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the finance industry. Python will be used as the programming language.

Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete a number of team projects.

By the end of this training, participants will be able to:

  • Understand the fundamental concepts in machine learning
  • Learn the applications and uses of machine learning in finance
  • Develop their own algorithmic trading strategy using machine learning with Python

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
d2dbdpa From Data to Decision with Big Data and Predictive Analytics 21 hours

Audience

If you are trying to make sense of the data you have access to, or want to analyse unstructured data available on the web (such as Twitter, LinkedIn, etc.), this course is for you.

It is mostly aimed at decision makers and people who need to choose what data is worth collecting and what is worth analyzing.

It is not aimed at people configuring the solution, though those people will benefit from the big picture.

Delivery Mode

During the course delegates will be presented with working examples of mostly open source technologies.

Short lectures will be followed by presentations and simple exercises completed by the participants.

Content and Software used

All software used is updated each time the course is run, so the newest possible versions are used.

The course covers the process of obtaining, formatting, processing and analysing data, then explains how to automate the decision-making process with machine learning.

bigdarch Big Data Architect 35 hours

Day 1 - provides a high-level overview of essential Big Data topic areas. The module is divided into a series of sections, each of which is accompanied by a hands-on exercise.

Day 2 - explores a range of topics relating to analysis practices and tools for Big Data environments. It does not get into implementation or programming details, but instead keeps coverage at a conceptual level, focusing on topics that enable participants to develop a comprehensive understanding of the common analysis functions and features offered by Big Data solutions.

Day 3 - provides an overview of the fundamental and essential topic areas relating to Big Data solution platform architecture. It covers Big Data mechanisms required for the development of a Big Data solution platform and architectural options for assembling a data processing platform. Common scenarios are also presented to provide a basic understanding of how a Big Data solution platform is generally used. 

Day 4 - builds upon Day 3 by exploring advanced topics relating to Big Data solution platform architecture. In particular, different architectural layers that make up the Big Data solution platform are introduced and discussed, including data sources, data ingress, data storage, data processing and security.

Day 5 - covers a number of exercises and problems designed to test the delegates' ability to apply knowledge of the topics covered in Days 3 and 4.

bigdatastore Big Data Storage Solution - NoSQL 14 hours

When traditional storage technologies cannot handle the amount of data you need to store, there are hundreds of alternatives. This course guides participants through the alternatives for storing and analyzing Big Data, and their pros and cons.

This course is mostly focused on discussion and presentation of solutions, though hands-on exercises are available on demand.

predmodr Predictive Modelling with R 14 hours

R is a free, open-source programming language for statistical computing, data analysis, and graphics. It is used by a growing number of managers and data analysts in corporations and academia, and offers a wide variety of packages for data mining.

w2vdl4j NLP with Deeplearning4j 14 hours

Deeplearning4j is an open-source, distributed deep-learning library written for Java and Scala. Integrated with Hadoop and Spark, DL4J is designed to be used in business environments on distributed GPUs and CPUs.

Word2Vec is a method of computing vector representations of words introduced by a team of researchers at Google led by Tomas Mikolov.

Audience

This course is directed at researchers, engineers and developers seeking to utilize Deeplearning4J to construct Word2Vec models.

DM7 Getting started with DM7 21 hours

Audience

  • Beginner or intermediate database developers
  • Beginner or intermediate database administrators
  • Programmers

Format of the course

  • Heavy emphasis on hands-on practice. Most of the concepts are learned through samples, exercises and hands-on development
nlpwithr NLP: Natural Language Processing with R 21 hours

It is estimated that unstructured data accounts for more than 90 percent of all data, much of it in the form of text. Blog posts, tweets, social media, and other digital publications continuously add to this growing body of data.

This course centers around extracting insights and meaning from this data. Utilizing the R Language and Natural Language Processing (NLP) libraries, we combine concepts and techniques from computer science, artificial intelligence, and computational linguistics to algorithmically understand the meaning behind text data. Data samples are available in various languages per customer requirements.

By the end of this training, participants will be able to prepare data sets (large and small) from disparate sources, then apply the right algorithms to analyze and report on their significance.

Audience

  • Linguists and programmers

Format of the course

  • Part lecture, part discussion, heavy hands-on practice, occasional tests to gauge understanding

kylin Apache Kylin: From classic OLAP to real-time data warehouse 14 hours

Apache Kylin is an extreme, distributed analytics engine for big data.

In this instructor-led live training, participants will learn how to use Apache Kylin to set up a real-time data warehouse.

By the end of this training, participants will be able to:

  • Consume real-time streaming data using Kylin
  • Utilize Apache Kylin's powerful features, including snowflake schema support, a rich SQL interface, Spark cubing and subsecond query latency

Note

  • We use the latest version of Kylin (as of this writing, Apache Kylin v2.0)

Audience

  • Big data engineers
  • Big Data analysts

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
ApacheIgnite Apache Ignite: Improve speed, scale and availability with in-memory computing 14 hours

Apache Ignite is an in-memory computing platform that sits between the application and data layer to improve speed, scale and availability.

In this instructor-led, live training, participants will learn the principles behind persistent and pure in-memory storage as they step through the creation of a sample in-memory computing project.

By the end of this training, participants will be able to:

  • Use Ignite for in-memory, on-disk persistence as well as a purely distributed in-memory database
  • Achieve persistence without syncing data back to a relational database
  • Use Ignite to carry out SQL and distributed joins
  • Improve performance by moving data closer to the CPU, using RAM as storage
  • Spread data sets across a cluster to achieve horizontal scalability
  • Integrate Ignite with RDBMS, NoSQL, Hadoop and machine learning processors

Audience

  • Developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
encogadv Encog: Advanced Machine Learning 14 hours

Encog is an open-source machine learning framework for Java and .Net.

In this instructor-led, live training, participants will learn advanced machine learning techniques for building accurate neural network predictive models.

By the end of this training, participants will be able to:

  • Implement different neural network optimization techniques to resolve underfitting and overfitting
  • Understand and choose from a number of neural network architectures
  • Implement supervised feed forward and feedback networks

Audience

  • Developers
  • Analysts
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
mlbankingr Machine Learning for Banking (with R) 28 hours

In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry. R will be used as the programming language.

Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete a number of live projects.

Audience

  • Developers
  • Data scientists
  • Banking professionals with a technical background

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
pythoncomputervision Computer Vision with Python 7 hours

Computer Vision is a field that involves automatically extracting, analyzing, and understanding useful information from digital media. Python is a high-level programming language famous for its clear syntax and code readability.

In this instructor-led, live training, participants will learn the basics of Computer Vision as they step through the creation of simple Computer Vision apps using Python.

By the end of this training, participants will be able to:

  • Understand the basics of Computer Vision
  • Use Python to implement Computer Vision tasks
  • Build their own Computer Vision apps using Python

Audience

  • Python programmers interested in Computer Vision

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
mdlmrah Model MapReduce and Apache Hadoop 14 hours

The course is intended for IT specialists who work with the distributed processing of large data sets across clusters of computers.
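The MapReduce model itself can be sketched in plain Python: a word count split into independent map, shuffle, and reduce phases. This is an illustration of the programming model only, not Hadoop API code:

```python
# Word count expressed in the MapReduce style.
from collections import defaultdict

documents = ["big data on clusters", "data processing on clusters"]

# Map phase: each document independently emits (word, 1) pairs
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group the emitted values by key
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce phase: combine the values for each key
counts = {word: sum(values) for word, values in groups.items()}
print(counts["data"], counts["clusters"])  # 2 2
```

In a real cluster the map and reduce phases run in parallel on different machines, and the framework handles the shuffle, fault tolerance, and data locality.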

optaprac OptaPlanner in Practice 21 hours

This course uses a practical approach to teaching OptaPlanner. It provides participants with the tools needed to perform the basic functions of this tool.

annmldt Artificial Neural Networks, Machine Learning, Deep Thinking 21 hours
hadoopdeva Advanced Hadoop for Developers 21 hours

Apache Hadoop is one of the most popular frameworks for processing Big Data on clusters of servers. This course delves into data management in HDFS, advanced Pig, Hive, and HBase. These advanced programming techniques will be beneficial to experienced Hadoop developers.

Audience: developers

Duration: three days

Format: lectures (50%) and hands-on labs (50%).

altdomexp Analytics Domain Expertise 7 hours

This course is part of the Data Scientist skill set (Domain: Analytics Domain Expertise).

tsflw2v Natural Language Processing with TensorFlow 35 hours

TensorFlow™ is an open source software library for numerical computation using data flow graphs.

SyntaxNet is a neural-network Natural Language Processing framework for TensorFlow.

Word2Vec is used for learning vector representations of words, called "word embeddings". Word2vec is a particularly computationally-efficient predictive model for learning word embeddings from raw text. It comes in two flavors, the Continuous Bag-of-Words model (CBOW) and the Skip-Gram model (Chapter 3.1 and 3.2 in Mikolov et al.).
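How Skip-Gram training pairs are generated from raw text can be sketched in plain Python: each word predicts its neighbours within a fixed window. This is illustrative only; real Word2Vec then learns embeddings from such pairs:

```python
# Generate (center, context) training pairs for the Skip-Gram model.

def skipgram_pairs(tokens, window=1):
    pairs = []
    for i, center in enumerate(tokens):
        # Every word within `window` positions of the center is a context word
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "quick", "brown", "fox"]))
```

CBOW inverts the task: instead of predicting the context from the center word, it predicts the center word from the surrounding context.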

Used in tandem, SyntaxNet and Word2Vec allow users to generate learned embedding models from natural language input.

Audience

This course is targeted at Developers and engineers who intend to work with SyntaxNet and Word2Vec models in their TensorFlow graphs.

After completing this course, delegates will:

  • understand TensorFlow's structure and deployment mechanisms
  • be able to carry out installation, production environment, and architecture tasks and configuration
  • be able to assess code quality, and perform debugging and monitoring
  • be able to implement advanced production settings such as training models, embedding terms, building graphs and logging
wolfdata Data Science: Analysis and Presentation 7 hours

The Wolfram System's integrated environment makes it an efficient tool for both analyzing and presenting data. This course covers aspects of the Wolfram Language relevant to analytics, including statistical computation, visualization, data import and export, and automatic generation of reports.

voldemort Voldemort: Setting up a key-value distributed data store 14 hours

Voldemort is an open-source distributed data store that is designed as a key-value store. It is used at LinkedIn by numerous critical services powering a large portion of the site.

This course will introduce the architecture and capabilities of Voldemort and walk participants through the setup and application of a key-value distributed data store.

Audience

  • Software developers
  • System administrators
  • DevOps engineers

Format of the course

  • Part lecture, part discussion, heavy hands-on practice, occasional tests to gauge understanding

matlabfundamentalsfinance MATLAB Fundamentals + MATLAB for Finance 35 hours

This course provides a comprehensive introduction to the MATLAB technical computing environment, plus an introduction to using MATLAB for financial applications. The course is intended for beginning users and those looking for a review. No prior programming experience or knowledge of MATLAB is assumed. Themes of data analysis, visualization, modeling, and programming are explored throughout the course. Topics include:

  • Working with the MATLAB user interface
  • Entering commands and creating variables
  • Analyzing vectors and matrices
  • Visualizing vector and matrix data
  • Working with data files
  • Working with data types
  • Automating commands with scripts
  • Writing programs with logic and flow control
  • Writing functions
  • Using the Financial Toolbox for quantitative analysis
snorkel Snorkel: Rapidly process training data 7 hours

Snorkel is a system for rapidly creating, modeling, and managing training data. It focuses on accelerating the development of structured or "dark" data extraction applications for domains in which large labeled training sets are not available or easy to obtain.

In this instructor-led, live training, participants will learn techniques for extracting value from unstructured data such as text, tables, figures, and images through modeling of training data with Snorkel.
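The data-programming idea behind Snorkel can be sketched in plain Python: several noisy labeling functions vote on each example, and the votes are combined (here by simple majority) into training labels. This illustrates the concept only and does not use the Snorkel API:

```python
# Weak supervision via labeling functions: a sketch of data programming.

SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_contains_free(text):
    return SPAM if "free" in text.lower() else ABSTAIN

def lf_contains_meeting(text):
    return HAM if "meeting" in text.lower() else ABSTAIN

def lf_many_exclamations(text):
    return SPAM if text.count("!") >= 2 else ABSTAIN

labeling_functions = [lf_contains_free, lf_contains_meeting, lf_many_exclamations]

def majority_label(text):
    """Combine the non-abstaining votes of all labeling functions."""
    votes = [lf(text) for lf in labeling_functions if lf(text) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

print(majority_label("FREE prizes!!! Click now!"))   # 1 (spam)
print(majority_label("Agenda for Monday's meeting")) # 0 (ham)
```

Snorkel replaces the naive majority vote with a generative model that learns the accuracies and correlations of the labeling functions, producing probabilistic labels for training an end model.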

By the end of this training, participants will be able to:

  • Programmatically create and label massive training sets
  • Train high-quality end models by first modeling noisy training sets
  • Use Snorkel to implement weak supervision techniques and apply data programming to weakly-supervised machine learning systems

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
encogintro Encog: Introduction to Machine Learning 14 hours

Encog is an open-source machine learning framework for Java and .Net.

In this instructor-led, live training, participants will learn how to create various neural network components using Encog. Real-world case studies will be discussed and machine learning based solutions to these problems will be explored.

By the end of this training, participants will be able to:

  • Prepare data for neural networks using the normalization process
  • Implement feed forward networks and propagation training methodologies
  • Implement classification and regression tasks
  • Model and train neural networks using Encog's GUI based workbench
  • Integrate neural network support into real-world applications
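The normalization step listed above can be sketched generically: min-max scaling of a feature to the range [0, 1], in plain Python rather than the Encog API:

```python
# Min-max normalization: a common way to prepare inputs for neural networks,
# which train more reliably when features share a comparable scale.

def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(normalize([10.0, 20.0, 15.0, 30.0]))  # [0.0, 0.5, 0.25, 1.0]
```

Encog provides its own normalization utilities; this sketch just shows what the transformation does to the data.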

Audience

  • Developers
  • Analysts
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
mlbankingpython_ Machine Learning for Banking (with Python) 21 hours

In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry. Python will be used as the programming language.

Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete a number of team projects.

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
mlfinancer Machine Learning for Finance (with R) 28 hours

Machine learning is a branch of Artificial Intelligence wherein computers have the ability to learn without being explicitly programmed. R is a popular programming language in the financial industry. It is used in financial applications ranging from core trading programs to risk management systems.

In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the finance industry. R will be used as the programming language.

Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete a number of team projects.

By the end of this training, participants will be able to:

  • Understand the fundamental concepts in machine learning
  • Learn the applications and uses of machine learning in finance
  • Develop their own algorithmic trading strategy using machine learning with R

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
nlp Natural Language Processing 21 hours

This course has been designed for people interested in extracting meaning from written English text, though the knowledge can be applied to other human languages as well.

The course will cover how to make use of text written by humans, such as blog posts, tweets, etc.

For example, an analyst can set up an algorithm which will reach a conclusion automatically based on an extensive data source.

mlintro Introduction to Machine Learning 7 hours

This training course is for people who would like to apply basic Machine Learning techniques in practical applications.

Audience

Data scientists and statisticians who have some familiarity with machine learning and know how to program in R. The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization. The purpose is to give a practical introduction to machine learning to participants interested in applying the methods at work.

Sector specific examples are used to make the training relevant to the audience.

dsguihtml5jsre Designing Intelligent User Interfaces with HTML5, JavaScript and Rule Engines 21 hours

Coding interfaces which allow users to get what they want easily is hard. This course shows you how to create an effective UI with the newest technologies and libraries.

It introduces the idea of coding logic in rule engines (mostly Nools and PHP Rules) to make it easier to modify and test. The course then shows a way of integrating the logic on the front end of the website using JavaScript. Logic coded this way can be reused on the backend.

hadoopdev Hadoop for Developers (4 days) 28 hours

Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. This course will introduce a developer to the various components of the Hadoop ecosystem (HDFS, MapReduce, Pig, Hive and HBase).

tf101 Deep Learning with TensorFlow 21 hours

TensorFlow is a second-generation API for Google's open source software library for Deep Learning. The system is designed to facilitate research in machine learning, and to make it quick and easy to transition from research prototype to production system.

Audience

This course is intended for engineers seeking to use TensorFlow for their Deep Learning projects.

After completing this course, delegates will:

  • understand TensorFlow's structure and deployment mechanisms
  • be able to carry out installation, production environment, and architecture tasks and configuration
  • be able to assess code quality, and perform debugging and monitoring
  • be able to implement advanced production settings such as training models, building graphs and logging
opencv Computer Vision with OpenCV 28 hours

OpenCV (Open Source Computer Vision Library: http://opencv.org) is an open-source BSD-licensed library that includes several hundred computer vision algorithms.

Audience

This course is directed at engineers and architects seeking to utilize OpenCV for computer vision projects.

neo4j Beyond the relational database: neo4j 21 hours

Relational, table-based databases such as Oracle and MySQL have long been the standard for organizing and storing data. However, the growing size and fluidity of data have made it difficult for these traditional systems to efficiently execute highly complex queries. Imagine replacing rows-and-columns-based data storage with object-based data storage, whereby entities (e.g., a person) are stored as data nodes, then easily queried on the basis of their vast, multi-linear relationships with other nodes. And imagine querying these connections and their associated objects and properties using a compact syntax, up to 20 times lighter than SQL. This is what graph databases, such as neo4j, offer.
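The node-and-relationship model can be sketched in plain Python: with entities as nodes and connections as direct references, a "friend of a friend" lookup becomes a traversal rather than a multi-table join. This is illustrative only; neo4j itself is queried with Cypher:

```python
# A toy graph stored as an adjacency dict: each node points straight at
# its neighbours, so traversal needs no join over intermediate tables.

friends = {
    "Alice": ["Bob", "Carol"],
    "Bob": ["Dave"],
    "Carol": ["Dave", "Eve"],
    "Dave": [],
    "Eve": [],
}

def friends_of_friends(graph, person):
    """People two hops away, excluding direct friends and the person."""
    direct = set(graph[person])
    result = set()
    for friend in direct:
        result.update(graph[friend])
    return result - direct - {person}

print(sorted(friends_of_friends(friends, "Alice")))  # ['Dave', 'Eve']
```

In SQL the same query would typically require a self-join on a friendship table for every hop; in a graph database each additional hop is just another step of traversal.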

In this hands-on course, we will set up a live project and put into practice the skills to model, manage and access your data. We contrast and compare graph databases with SQL-based databases as well as other NoSQL databases and clarify when and where it makes sense to implement each within your infrastructure.

Audience

  • Database administrators (DBAs)
  • Data analysts
  • Developers
  • System Administrators
  • DevOps engineers
  • Business Analysts
  • CTOs
  • CIOs

Format of the course

  • Heavy emphasis on hands-on practice. Most of the concepts are learned through samples, exercises and hands-on development.
BigData_ A practical introduction to Data Analysis and Big Data 35 hours

Participants who complete this training will gain a practical, real-world understanding of Big Data and its related technologies, methodologies and tools.

Participants will have the opportunity to put this knowledge into practice through hands-on exercises. Group interaction and instructor feedback make up an important component of the class.

The course starts with an introduction to elemental concepts of Big Data, then progresses into the programming languages and methodologies used to perform Data Analysis. Finally, we discuss the tools and infrastructure that enable Big Data storage, Distributed Processing, and Scalability.

Audience

  • Developers / programmers
  • IT consultants

Format of the course

  • Part lecture, part discussion, hands-on practice and implementation, occasional quizzing to measure progress
matlabdsandreporting MATLAB Fundamentals, Data Science & Report Generation 126 hours

In the first part of this training, we cover the fundamentals of MATLAB and its function as both a language and a platform.  Included in this discussion is an introduction to MATLAB syntax, arrays and matrices, data visualization, script development, and object-oriented principles.

In the second part, we demonstrate how to use MATLAB for data mining, machine learning and predictive analytics. To provide participants with a clear and practical perspective of MATLAB's approach and power, we draw comparisons between using MATLAB and using other tools such as spreadsheets, C, C++, and Visual Basic.

In the third part of the training, participants learn how to streamline their work by automating their data processing and report generation.

Throughout the course, participants will put into practice the ideas learned through hands-on exercises in a lab environment. By the end of the training, participants will have a thorough grasp of MATLAB's capabilities and will be able to employ it for solving real-world data science problems as well as for streamlining their work through automation.

Assessments will be conducted throughout the course to gauge progress.

Format of the course

  • Course includes theoretical and practical exercises, including case discussions, sample code inspection, and hands-on implementation.

Note

  • Practice sessions will be based on pre-arranged sample data report templates. If you have specific requirements, please contact us to arrange.
jupyter Jupyter for Data Science Teams 7 hours

Jupyter is an open-source, web-based interactive IDE and computing environment.

This instructor-led, live training introduces the idea of collaborative development in data science and demonstrates how to use Jupyter to track and participate as a team in the "life cycle of a computational idea". It walks participants through the creation of a sample data science project built on top of the Jupyter ecosystem.

By the end of this training, participants will be able to:

  • Install and configure Jupyter, including the creation and integration of a team repository on Git
  • Use Jupyter features such as extensions, interactive widgets, multiuser mode and more to enable project collaboration
  • Create, share and organize Jupyter Notebooks with team members
  • Choose from Scala, Python, and R to write and execute code against big data systems such as Apache Spark, all through the Jupyter interface

Audience

  • Data science teams

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice


Note

  • The Jupyter Notebook supports over 40 languages including R, Python, Scala, Julia, etc. To customize this course to your language(s) of choice, please contact us to arrange.
pythontextml Python: Machine Learning with Text 21 hours

In this instructor-led, live training, participants will learn how to use the right machine learning and NLP (Natural Language Processing) techniques to extract value from text-based data.

By the end of this training, participants will be able to:

  • Solve text-based data science problems with high-quality, reusable code
  • Apply different aspects of scikit-learn (classification, clustering, regression, dimensionality reduction) to solve problems
  • Build effective machine learning models using text-based data
  • Create a dataset and extract features from unstructured text
  • Visualize data with Matplotlib
  • Build and evaluate models to gain insight
  • Troubleshoot text encoding errors
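
As a minimal sketch of the feature-extraction step above, assuming nothing beyond the Python standard library (in practice scikit-learn's CountVectorizer does this job), a bag-of-words vectorizer might look like:

```python
from collections import Counter
import re

def tokenize(text):
    # Lowercase, then keep runs of letters (and apostrophes) as tokens
    return re.findall(r"[a-z']+", text.lower())

def bag_of_words(docs):
    # Build a shared, sorted vocabulary, then one count vector per document
    vocab = sorted({tok for doc in docs for tok in tokenize(doc)})
    vectors = []
    for doc in docs:
        counts = Counter(tokenize(doc))
        vectors.append([counts.get(word, 0) for word in vocab])
    return vocab, vectors

# Invented two-document corpus for illustration
docs = ["The spam filter flagged spam", "A clean, friendly message"]
vocab, vectors = bag_of_words(docs)
```

The resulting count vectors are exactly the kind of numeric features that classification and clustering estimators consume.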

Audience

  • Developers
  • Data Scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
opennlp OpenNLP for Text Based Machine Learning 14 hours

The Apache OpenNLP library is a machine learning based toolkit for processing natural language text. It supports the most common NLP tasks, such as language detection, tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing and coreference resolution.
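
OpenNLP itself is a Java library, but two of the tasks above (sentence segmentation and tokenization) can be illustrated with a naive, rule-based sketch in Python; trained models handle the many edge cases this ignores:

```python
import re

def split_sentences(text):
    # Naive rule: a sentence ends at ., ! or ? followed by whitespace
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]

def tokenize(sentence):
    # Words and standalone punctuation marks become separate tokens
    return re.findall(r"\w+|[^\w\s]", sentence)

text = "OpenNLP handles many tasks. Tokenization is the first step!"
sentences = split_sentences(text)
```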

In this instructor-led, live training, participants will learn how to create models for processing text-based data using OpenNLP. Sample training data as well as customized data sets will be used as the basis for the lab exercises.

By the end of this training, participants will be able to:

  • Install and configure OpenNLP
  • Download existing models as well as create their own
  • Train the models on various sets of sample data
  • Integrate OpenNLP with existing Java applications

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
dlfinancewithr Deep Learning for Finance (with R) 28 hours

Machine learning is a branch of Artificial Intelligence wherein computers have the ability to learn without being explicitly programmed. Deep learning is a subfield of machine learning which uses methods based on learning data representations and structures such as neural networks. R is a popular programming language in the financial industry. It is used in financial applications ranging from core trading programs to risk management systems.

In this instructor-led, live training, participants will learn how to implement deep learning models for finance using R as they step through the creation of a deep learning stock price prediction model.

By the end of this training, participants will be able to:

  • Understand the fundamental concepts of deep learning
  • Learn the applications and uses of deep learning in finance
  • Use R to create deep learning models for finance
  • Build their own deep learning stock price prediction model using R

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
wfsadm WildFly Server Administration 14 hours

This course is created for Administrators, Developers or anyone who is interested in managing WildFly Application Server (AKA JBoss Application Server).

This course usually runs on the newest version of the Application Server, but it can be tailored (as a private course) to older versions starting from version 5.1.

appliedml Applied Machine Learning 14 hours

This training course is for people who would like to apply Machine Learning in practical applications.

Audience

This course is for data scientists and statisticians who have some familiarity with statistics and know how to program in R (or Python or another chosen language). The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization.

The purpose is to give participants practical experience in applying Machine Learning methods at work.

Sector specific examples are used to make the training relevant to the audience.

datashrinkgov Data Shrinkage for Government 14 hours
cassdev Cassandra for Developers 21 hours

This course introduces Cassandra, a popular NoSQL database. It covers Cassandra principles, architecture and the data model. Students will learn data modeling in CQL (Cassandra Query Language) in hands-on, interactive labs. The session also discusses Cassandra internals and some administration topics.

Audience: Developers

intror Introduction to R with Time Series Analysis 21 hours

R is a free, open-source programming language for statistical computing, data analysis, and graphics. R is used by a growing number of managers and data analysts inside corporations and academia. R has a wide variety of packages for data mining.

datavis1 Data Visualization 28 hours

This course is intended for engineers and decision makers working in data mining and knowledge discovery.

You will learn how to create effective plots and how to present your data in ways that appeal to decision makers and help them understand hidden information.

datamodeling Pattern Recognition 35 hours

This course provides an introduction into the field of pattern recognition and machine learning. It touches on practical applications in statistics, computer science, signal processing, computer vision, data mining, and bioinformatics.

The course is interactive and includes plenty of hands-on exercises, instructor feedback, and testing of knowledge and skills acquired.

Audience
    Data analysts
    PhD students, researchers and practitioners


octnp Octave not only for programmers 21 hours

This course is dedicated to those who would like an alternative to the commercial MATLAB package. The three-day training provides comprehensive information on navigating the environment and using the Octave package for data analysis and engineering calculations. It is addressed to beginners, but also to those who already know the program and would like to systematize their knowledge and improve their skills. Knowledge of other programming languages is not required, but it will greatly ease the learning process. The course demonstrates the program through many practical examples.

tpuprogramming TPU Programming: Building Neural Network Applications on Tensor Processing Units 7 hours

The Tensor Processing Unit (TPU) is the architecture which Google has used internally for several years, and is only now becoming available for use by the general public. It includes several optimizations designed specifically for neural networks, including streamlined matrix multiplication and the use of 8-bit integers in place of wider formats, which preserves adequate precision while increasing throughput.
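
The 8-bit idea can be sketched with a toy symmetric quantizer in Python; the scale factor and rounding scheme here are illustrative assumptions, not the TPU's actual hardware behavior:

```python
def quantize_int8(values):
    # Map floats onto signed 8-bit integers with a single scale factor,
    # trading precision for smaller, faster arithmetic (as TPUs do)
    scale = max(abs(v) for v in values) / 127 or 1.0
    return [round(v / scale) for v in values], scale

def dequantize(ints, scale):
    # Approximate reconstruction of the original floats
    return [i * scale for i in ints]

weights = [0.5, -1.27, 0.02]          # invented example weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

The reconstruction error is bounded by half the scale factor, which is why neural network inference tolerates the reduced precision well.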

In this instructor-led, live training, participants will learn how to take advantage of the innovations in TPU processors to maximize the performance of their own AI applications.

By the end of the training, participants will be able to:

  • Train various types of neural networks on large amounts of data
  • Use TPUs to speed up the inference process by up to two orders of magnitude
  • Utilize TPUs to process intensive applications such as image search, cloud vision and photos

Audience

  • Developers
  • Researchers
  • Engineers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
datameer Datameer for Data Analysts 14 hours

Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion.

In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.

By the end of this training, participants will be able to:

  • Create, curate, and interactively explore an enterprise data lake
  • Access business intelligence data warehouses, transactional databases and other analytic stores
  • Use a spreadsheet user-interface to design end-to-end data processing pipelines
  • Access pre-built functions to explore complex data relationships
  • Use drag-and-drop wizards to visualize data and create dashboards
  • Use tables, charts, graphs, and maps to analyze query results

Audience

  • Data analysts

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
flockdb Flockdb: A Simple Graph Database for Social Media 7 hours

FlockDB is an open source distributed, fault-tolerant graph database for managing wide but shallow network graphs. It was initially used by Twitter to store relationships among users.

In this instructor-led, live training, participants will learn how to setup and use a FlockDB database to help answer social media questions such as who follows whom, who blocks whom, etc.

By the end of this training, participants will be able to:

  • Install and configure FlockDB
  • Understand the unique features of FlockDB, relative to other graph databases such as Neo4j
  • Use FlockDB to maintain a large graph dataset
  • Use FlockDB together with MySQL to provide distributed storage capabilities
  • Query, create and update extremely fast graph edges
  • Scale FlockDB horizontally for use in online, low-latency, high-throughput web environments

Audience

  • Developers
  • Database engineers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
undnn Understanding Deep Neural Networks 35 hours

This course begins by giving you conceptual knowledge of neural networks and machine learning algorithms in general, then introduces deep learning (algorithms and applications).

Part 1 (40%) of this training focuses on fundamentals and will help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc.

Part 2 (20%) of this training introduces Theano, a Python library that makes writing deep learning models easy.

Part 3 (40%) of the training is based extensively on TensorFlow, the second-generation API of Google's open-source software library for deep learning. All examples and hands-on exercises are done in TensorFlow.

Audience

This course is intended for engineers seeking to use TensorFlow for their Deep Learning projects

After completing this course, delegates will:

  • have a good understanding of deep neural networks (DNN), CNN and RNN

  • understand TensorFlow’s structure and deployment mechanisms

  • be able to carry out installation, production-environment and architecture tasks, and configuration

  • be able to assess code quality, and perform debugging and monitoring

  • be able to implement advanced production activities such as training models, building graphs and logging

Due to the vastness of the subject, not all topics can be covered in a 35-hour public classroom course.

The complete course runs around 70 hours, not 35.

dlforbankingwithpython Deep Learning for Banking (with Python) 28 hours

Machine learning is a branch of Artificial Intelligence wherein computers have the ability to learn without being explicitly programmed. Deep learning is a subfield of machine learning which uses methods based on learning data representations and structures such as neural networks. Python is a high-level programming language famous for its clear syntax and code readability.
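
As a toy illustration of "learning without being explicitly programmed", here is a single logistic neuron trained by gradient descent on a hypothetical debt-to-income dataset; the data and feature are invented for illustration, and a real Keras/TensorFlow credit risk model would stack many such units:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=2000):
    # One logistic "neuron": learn a weight and bias by gradient descent
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(w * x + b)     # predicted default probability
            w -= lr * (p - y) * x      # gradient step on the weight
            b -= lr * (p - y)          # gradient step on the bias
    return w, b

# Hypothetical toy data: debt-to-income ratio -> 1 means default
ratios = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
defaults = [0, 0, 0, 1, 1, 1]
w, b = train(ratios, defaults)
```

After training, low ratios score below 0.5 and high ratios above it; deep learning extends this one unit to many stacked layers of them.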

In this instructor-led, live training, participants will learn how to implement deep learning models for banking using Python as they step through the creation of a deep learning credit risk model.

By the end of this training, participants will be able to:

  • Understand the fundamental concepts of deep learning
  • Learn the applications and uses of deep learning in banking
  • Use Python, Keras, and TensorFlow to create deep learning models for banking
  • Build their own deep learning credit risk model using Python

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
drools6int Introduction to Drools 6 for Developers 21 hours

This three-day course introduces Drools 6 to developers. It does not cover Drools integration, performance or other complex topics.

apachemdev Apache Mahout for Developers 14 hours

Audience

Developers involved in projects that use machine learning with Apache Mahout.

Format

Hands on introduction to machine learning. The course is delivered in a lab format based on real world practical use cases.

bspkaml Machine Learning 21 hours
This course will be a combination of theory and practical work with specific examples used throughout the event.
hadoopba Hadoop for Business Analysts 21 hours

Apache Hadoop is the most popular framework for processing Big Data. Hadoop provides rich and deep analytics capability, and it is making inroads into the traditional BI analytics world. This course introduces analysts to the core components of the Hadoop ecosystem and its analytics.

Audience

Business Analysts

Duration

three days

Format

Lectures and hands on labs.

predio Machine Learning with PredictionIO 21 hours

PredictionIO is an open-source Machine Learning Server built on top of a state-of-the-art open-source stack.

Audience

This course is directed at developers and data scientists who want to create predictive engines for any machine learning task.

genealgo Genetic Algorithms 28 hours

This four-day course teaches how genetic algorithms work and how to select the model parameters of a genetic algorithm. The course covers many applications of genetic algorithms and uses them to tackle optimization problems.
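
A minimal genetic algorithm can be sketched in Python on the classic OneMax toy problem (maximize the number of 1-bits); the population size, mutation rate and other parameters below are illustrative assumptions:

```python
import random

random.seed(0)  # deterministic run for the example

def fitness(bits):
    # OneMax: the more 1s in the bitstring, the fitter
    return sum(bits)

def evolve(pop_size=20, length=16, generations=60, mutation=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: better of two random individuals
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = random.randrange(1, length)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if random.random() < mutation else bit
                     for bit in child]                    # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

Selection, crossover and mutation are the three operators whose parameters the course teaches you to tune.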

processmining Process Mining 21 hours

Process mining, or Automated Business Process Discovery (ABPD), is a technique that applies algorithms to event logs for the purpose of analyzing business processes. Process mining goes beyond data storage and data analysis; it bridges data with processes and provides insights into the trends and patterns that affect process efficiency. 
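
One of the simplest relations mined from an event log is the directly-follows graph, which many discovery algorithms (including those in the ProM framework) build on. A sketch in Python, using an invented example log:

```python
from collections import defaultdict

def directly_follows(event_log):
    # Count how often activity A is directly followed by activity B,
    # the basic relation behind many process-discovery algorithms
    graph = defaultdict(int)
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            graph[(a, b)] += 1
    return dict(graph)

# Hypothetical log: each trace is one case's ordered activities
log = [
    ["register", "check", "approve", "pay"],
    ["register", "check", "reject"],
    ["register", "check", "approve", "pay"],
]
dfg = directly_follows(log)
```

From these counts a discovery algorithm infers the control flow: which steps are sequential, which are alternatives, and how often each path is taken.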

Format of the course
    The course starts with an overview of the most commonly used techniques for process mining. We discuss the various process discovery algorithms and tools used for discovering and modeling processes based on raw event data. Real-life case studies are examined and data sets are analyzed using the ProM open-source framework.

Audience
    Data science professionals
    Anyone interested in understanding and applying process modeling and data mining

DatSci7 Data Science Programme 245 hours

The explosion of information and data in today’s world is unparalleled; our ability to innovate and push the boundaries of the possible is growing faster than it ever has. The role of Data Scientist is one of the most in-demand roles across industry today.

We offer much more than learning through theory; we deliver practical, marketable skills that bridge the gap between the world of academia and the demands of industry.

This seven-week curriculum can be tailored to your specific industry requirements; please contact us for further information or visit the NobleProg Institute website www.inobleprog.co.uk

Audience:

This programme is aimed at postgraduates as well as anyone with the required prerequisite skills, which will be determined by an assessment and interview.

Delivery:

Delivery of the course will be a mixture of instructor-led classroom and instructor-led online training; typically the first week will be 'classroom led', weeks 2 to 6 'virtual classroom', and week 7 back to 'classroom led'.


TalendDI Talend Open Studio for Data Integration 28 hours

Talend Open Studio for Data Integration is an open-source data integration product used to combine, convert and update data in various locations across a business.

In this instructor-led, live training, participants will learn how to use the Talend ETL tool to carry out data transformation, data extraction, and connectivity with Hadoop, Hive, and Pig.

By the end of this training, participants will be able to:

  • Explain the concepts behind ETL (Extract, Transform, Load) and propagation
  • Define ETL methods and ETL tools to connect with Hadoop
  • Efficiently amass, retrieve, digest, consume, transform and shape big data in accordance with business requirements
  • Upload to and extract large records from Hadoop, Hive, and NoSQL databases

Audience

  • Business intelligence professionals
  • Project managers
  • Database professionals
  • SQL Developers
  • ETL Developers
  • Solution architects
  • Data architects
  • Data warehousing professionals
  • System administrators and integrators

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
dsstne Amazon DSSTNE: Build a recommendation system 7 hours

Amazon DSSTNE is an open-source library for training and deploying recommendation models. It allows models with weight matrices that are too large for a single GPU to be trained on a single host.
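
The model-parallel idea (a weight matrix too large for one device, split across several) can be sketched in plain Python; the column-wise split and summed partial products below are a simplification of what DSSTNE does across GPUs:

```python
def split_columns(matrix, parts):
    # Partition a weight matrix column-wise across `parts` workers,
    # the model-parallel layout used for oversized layers
    cols = len(matrix[0])
    chunk = (cols + parts - 1) // parts
    return [[row[i:i + chunk] for row in matrix] for i in range(0, cols, chunk)]

def matvec(matrix, vec):
    # Plain matrix-vector product, for reference
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

def parallel_matvec(shards, vec_shards):
    # Each worker multiplies its shard; partial results are summed (all-reduce)
    partials = [matvec(shard, v) for shard, v in zip(shards, vec_shards)]
    return [sum(vals) for vals in zip(*partials)]

W = [[1, 2, 3, 4],
     [5, 6, 7, 8]]        # invented 2x4 weight matrix
x = [1, 1, 1, 1]
shards = split_columns(W, 2)      # two "workers", two columns each
x_shards = [x[0:2], x[2:4]]
```

Because each worker holds only a slice of the weights, the full matrix never has to fit in one device's memory.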

In this instructor-led, live training, participants will learn how to use DSSTNE to build a recommendation application.

By the end of this training, participants will be able to:

  • Train a recommendation model with sparse datasets as input
  • Scale training and prediction models over multiple GPUs
  • Spread out computation and storage in a model-parallel fashion
  • Generate Amazon-like personalized product recommendations
  • Deploy a production-ready application that can scale under heavy workloads

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
datavisualizationreports Data Visualization: Creating Captivating Reports 21 hours

In this instructor-led, live training, participants will learn the skills, strategies, tools and approaches for visualizing and reporting data for different audiences. Case studies are also analyzed and discussed to exemplify how data visualization solutions are being applied in the real world to derive meaning out of data and answer crucial questions.

By the end of this training, participants will be able to:

  • Write reports with captivating titles, subtitles, and annotations using the most suitable highlighting, alignment, and color schemes for readability and user friendliness.
  • Design charts that fit the audience's information needs and interests
  • Choose the best chart types for a given dataset (beyond pie charts and bar charts)
  • Identify and analyze the most valuable and relevant data quickly and efficiently
  • Select the best file formats to include in reports (graphs, infographics, references, GIFs, etc.)
  • Create effective layouts for displaying time series data, part-to-whole relationships, geographic patterns, and nested data
  • Use effective color-coding to display qualitative and text-based data such as sentiment analysis, timelines, calendars, and diagrams
  • Apply the most suitable tools for the job (Excel, R, Tableau, mapping programs, etc.)
  • Prepare datasets for visualization

Audience

  • Data analysts
  • Business managers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
d3js D3.js for Data Visualization 7 hours

D3.js (or D3 for Data-Driven Documents) is a JavaScript library that uses SVG, HTML5, and CSS for producing dynamic, interactive data visualizations in web browsers.

In this instructor-led, live training, participants will learn how to create web-based data-driven visualizations that run on multiple devices responsively.

By the end of this training, participants will be able to:

  • Use D3 to create interactive graphics, information dashboards, infographics and maps
  • Control HTML with jQuery-like selections
  • Transform the DOM by selecting elements and joining to data
  • Export SVG for use in print publications

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
dlforbankingwithr Deep Learning for Banking (with R) 28 hours

Machine learning is a branch of Artificial Intelligence wherein computers have the ability to learn without being explicitly programmed. Deep learning is a subfield of machine learning which uses methods based on learning data representations and structures such as neural networks. R is a popular programming language in the financial industry. It is used in financial applications ranging from core trading programs to risk management systems.

In this instructor-led, live training, participants will learn how to implement deep learning models for banking using R as they step through the creation of a deep learning credit risk model.

By the end of this training, participants will be able to:

  • Understand the fundamental concepts of deep learning
  • Learn the applications and uses of deep learning in banking
  • Use R to create deep learning models for banking
  • Build their own deep learning credit risk model using R

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
noolsint Introduction to Nools 7 hours
manbrphp Managing Business Rules with PHP Business Rules 14 hours

This course explains how to write declarative rules using PHP Business Rules (http://sourceforge.net/projects/phprules/). It shows how to write, organize and integrate rules with existing code. Most of the course consists of exercises, each preceded by a short introduction and examples.

matlab2 MATLAB Fundamentals 21 hours

This three-day course provides a comprehensive introduction to the MATLAB technical computing environment. The course is intended for beginning users and those looking for a review. No prior programming experience or knowledge of MATLAB is assumed. Themes of data analysis, visualization, modeling, and programming are explored throughout the course. Topics include:

  •     Working with the MATLAB user interface
  •     Entering commands and creating variables
  •     Analyzing vectors and matrices
  •     Visualizing vector and matrix data
  •     Working with data files
  •     Working with data types
  •     Automating commands with scripts
  •     Writing programs with logic and flow control
  •     Writing functions
dataar Data Analytics With R 21 hours

R is a very popular, open-source environment for statistical computing, data analytics and graphics. This course introduces the R programming language to students. It covers language fundamentals, libraries and advanced concepts, as well as advanced data analytics and graphing with real-world data.

Audience

Developers / data analytics

Duration

3 days

Format

Lectures and Hands-on

cntk Using Computational Network Toolkit (CNTK) 28 hours

The Computational Network Toolkit (CNTK) is Microsoft's open-source, multi-machine, multi-GPU, highly efficient machine learning framework for training RNNs on speech, text, and image data.

Audience

This course is directed at engineers and architects aiming to utilize CNTK in their projects.

datavisR1 Introduction to Data Visualization with R 28 hours

This course is intended for data engineers, decision makers and data analysts. It will lead you to create very effective plots using RStudio that appeal to decision makers and help them find hidden information and make the right decisions.


patternmatching Pattern Matching 14 hours

Pattern Matching is a technique used to locate specified patterns within an image. It can be used to determine the existence of specified characteristics within a captured image, for example the expected label on a defective product in a factory line or the specified dimensions of a component. It is different from "Pattern Recognition" (which recognizes general patterns based on larger collections of related samples) in that it specifically dictates what we are looking for, then tells us whether the expected pattern exists or not.

Audience
    Engineers and developers seeking to develop machine vision applications
    Manufacturing engineers, technicians and managers

Format of the course
    This course introduces the approaches, technologies and algorithms used in the field of pattern matching as it applies to Machine Vision.

mlentre Machine Learning Concepts for Entrepreneurs and Managers 21 hours

This training course is for people who would like to apply Machine Learning in practical applications for their team. The training will not dive into technicalities; it revolves around basic concepts and their business/operational applications.

Target Audience

  1. Investors and AI entrepreneurs
  2. Managers and Engineers whose company is venturing into AI space
  3. Business Analysts & Investors
pythonmultipurpose Advanced Python 28 hours

In this instructor-led training, participants will learn advanced Python programming techniques, including how to apply this versatile language to solve problems in areas such as distributed applications, finance, data analysis and visualization, UI programming and maintenance scripting.

Audience

  • Developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Notes

  • If you wish to add, remove or customize any section or topic within this course, please contact us to arrange.
cognitivecomputing Cognitive Computing: An Introduction for Business Managers 7 hours

Cognitive computing refers to systems that encompass machine learning, reasoning, natural language processing, speech recognition and vision (object recognition), human–computer interaction, and dialog and narrative generation, to name a few. A cognitive computing system often comprises multiple technologies working together to process in-memory ‘hot’ contextual data as well as large sets of ‘cold’ historical data in batch. Examples of such technologies include Kafka, Spark, Elasticsearch, Cassandra and Hadoop.

In this instructor-led, live training, participants will learn how Cognitive Computing complements AI and Big Data and how purpose-built systems can be used to realize human-like behaviors that improve the performance of human-machine interactions in business.

By the end of this training, participants will understand:

  • The relationship between cognitive computing and artificial intelligence (AI)
  • The inherently probabilistic nature of cognitive computing and how to use it as a business advantage
  • How to manage cognitive computing systems that behave in unexpected ways
  • Which companies and software systems offer the most compelling cognitive computing solutions

Audience

  • Business managers

Format of the course

  • Lecture, case discussions and exercises
nlg Python for Natural Language Generation 21 hours

Natural language generation (NLG) refers to the production of natural language text or speech by a computer.
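
At its simplest, NLG is the surface-realization step: turning a structured record into a sentence. A template-based sketch in Python, with invented market data (full NLG systems add content selection and sentence planning before this step):

```python
def realize(record):
    # Surface realization: turn a structured data record into English
    trend = "rose" if record["close"] > record["open"] else "fell"
    change = abs(record["close"] - record["open"])
    return (f'{record["name"]} shares {trend} '
            f'{change:.2f} points to close at {record["close"]:.2f}.')

# Hypothetical structured input, e.g. from a market data feed
report = realize({"name": "ExampleCorp", "open": 101.5, "close": 103.25})
```

Real systems replace the hard-coded template with learned or rule-based sentence planners, but the data-to-text pipeline has the same shape.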

In this instructor-led, live training, participants will learn how to use Python to produce high-quality natural language text by building their own NLG system from scratch. Case studies will also be examined and the relevant concepts will be applied to live lab projects for generating content.

By the end of this training, participants will be able to:

  • Use NLG to automatically generate content for various industries, from journalism, to real estate, to weather and sports reporting
  • Select and organize source content, plan sentences, and prepare a system for automatic generation of original content
  • Understand the NLG pipeline and apply the right techniques at each stage
  • Understand the architecture of a Natural Language Generation (NLG) system
  • Implement the most suitable algorithms and models for analysis and ordering
  • Pull data from publicly available data sources as well as curated databases to use as material for generated text
  • Replace manual and laborious writing processes with computer-generated, automated content creation

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
highcharts Highcharts for Data Visualization 7 hours

Highcharts is an open-source JavaScript library for creating interactive graphical charts on the Web. It is commonly used to represent data in a more user-readable and interactive fashion.

In this instructor-led, live training, participants will learn how to create high-quality data visualizations for web applications using Highcharts.

By the end of this training, participants will be able to:

  • Set up interactive charts on the Web using only HTML and JavaScript
  • Represent large datasets in visually interesting and interactive ways
  • Export charts to JPEG, PNG, SVG, or PDF
  • Integrate Highcharts with jQuery Mobile for cross-platform compatibility

Audience

  • Developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
dlforfinancewithpython Deep Learning for Finance (with Python) 28 hours

Machine learning is a branch of Artificial Intelligence wherein computers have the ability to learn without being explicitly programmed. Deep learning is a subfield of machine learning which uses methods based on learning data representations and structures such as neural networks. Python is a high-level programming language famous for its clear syntax and code readability.

In this instructor-led, live training, participants will learn how to implement deep learning models for finance using Python as they step through the creation of a deep learning stock price prediction model.

By the end of this training, participants will be able to:

  • Understand the fundamental concepts of deep learning
  • Learn the applications and uses of deep learning in finance
  • Use Python, Keras, and TensorFlow to create deep learning models for finance
  • Build their own deep learning stock price prediction model using Python

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
neuralnet Introduction to the use of neural networks 7 hours

The training is aimed at people who want to learn the basics of neural networks and their applications.

pmml Predictive Models with PMML 7 hours

The course is created for scientists, developers, analysts and anyone else who wants to standardize or exchange their models using the Predictive Model Markup Language (PMML) file format.
mlrobot1 Machine Learning for Robotics 21 hours

This course introduces machine learning methods in robotics applications.

It is a broad overview of existing methods, motivations and main ideas in the context of pattern recognition.

After a short theoretical background, participants will perform simple exercises using open source (usually R) or other popular software.

hadoopadm1 Hadoop For Administrators 21 hours

Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. In this three-day (optionally four-day) course, attendees will learn about the business benefits and use cases for Hadoop and its ecosystem, how to plan cluster deployment and growth, and how to install, maintain, monitor, troubleshoot and optimize Hadoop. They will also practice cluster bulk data loads, get familiar with various Hadoop distributions, and practice installing and managing Hadoop ecosystem tools. The course finishes with a discussion of securing the cluster with Kerberos.

“…The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising

Audience

Hadoop administrators

Format

Lectures and hands-on labs, approximate balance 60% lectures, 40% labs.

systemml Apache SystemML for Machine Learning 14 hours

Apache SystemML is a distributed and declarative machine learning platform.

SystemML provides declarative large-scale machine learning (ML) that aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans ranging from single node, in-memory computations, to distributed computations on Apache Hadoop and Apache Spark.

Audience

This course is suitable for Machine Learning researchers, developers and engineers seeking to utilize SystemML as a framework for machine learning.

dlv Deep Learning for Vision 21 hours

Audience

This course is suitable for Deep Learning researchers and engineers interested in utilizing available tools (mostly open source) for analyzing computer images.

This course provides working examples.

kdd Knowledge Discovery in Databases (KDD) 21 hours

Knowledge discovery in databases (KDD) is the process of discovering useful knowledge from a collection of data. Real-life applications for this data mining technique include marketing, fraud detection, telecommunication and manufacturing.

In this course, we introduce the processes involved in KDD and carry out a series of exercises to practice the implementation of those processes.

Audience
    Data analysts or anyone interested in learning how to interpret data to solve problems

Format of the course
    After a theoretical discussion of KDD, the instructor will present real-life cases which call for the application of KDD to solve a problem. Participants will prepare, select and cleanse sample data sets and use their prior knowledge about the data to propose solutions based on the results of their observations.
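The prepare/select/cleanse steps described above can be sketched in plain Python (field names and records are invented for illustration):

```python
raw = [
    {"age": 34, "income": 52000, "churned": False},
    {"age": None, "income": 48000, "churned": True},   # missing value
    {"age": 45, "income": None, "churned": False},     # missing value
    {"age": 29, "income": 61000, "churned": True},
]

# Cleanse: keep only complete records
clean = [r for r in raw if all(v is not None for v in r.values())]

# Select: project onto the attributes needed for the analysis at hand
selected = [{"age": r["age"], "churned": r["churned"]} for r in clean]
print(selected)
```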

opennmt OpenNMT: Setting up a Neural Machine Translation System 7 hours

OpenNMT is a full-featured, open-source (MIT) neural machine translation system that utilizes the Torch mathematical toolkit.

In this training participants will learn how to set up and use OpenNMT to carry out translation of various sample data sets. The course starts with an overview of neural networks as they apply to machine translation. Participants will carry out live exercises throughout the course to demonstrate their understanding of the concepts learned and get feedback from the instructor. By the end of this training, participants will have the knowledge and practice needed to implement a live OpenNMT solution.

Source and target language samples will be pre-arranged per the audience's requirements.

Audience

  • Localization specialists with a technical background
  • Global content managers
  • Localization engineers
  • Software developers in charge of implementing global content solutions

Format of the course

  • Part lecture, part discussion, heavy hands-on practice
MicrosoftCognitiveToolkit Microsoft Cognitive Toolkit 2.x 21 hours

Microsoft Cognitive Toolkit 2.x (previously CNTK) is an open-source, commercial-grade toolkit that trains deep learning algorithms to learn like the human brain. According to Microsoft, CNTK can be 5-10x faster than TensorFlow on recurrent networks, and 2 to 3 times faster than TensorFlow for image-related tasks.

In this instructor-led, live training, participants will learn how to use Microsoft Cognitive Toolkit to create, train and evaluate deep learning algorithms for use in commercial-grade AI applications involving multiple types of data such as speech, text, and images.

By the end of this training, participants will be able to:

  • Access CNTK as a library from within a Python, C#, or C++ program
  • Use CNTK as a standalone machine learning tool through its own model description language (BrainScript)
  • Use the CNTK model evaluation functionality from a Java program
  • Combine feed-forward DNNs, convolutional nets (CNNs), and recurrent networks (RNNs/LSTMs)
  • Scale computation capacity on CPUs, GPUs and multiple machines
  • Access massive datasets using existing programming languages and algorithms

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Note

  • If you wish to customize any part of this training, including the programming language of choice, please contact us to arrange.
datavault Data Vault: Building a Scalable Data Warehouse 28 hours

Data vault modeling is a database modeling technique that provides long-term historical storage of data that originates from multiple sources. A data vault stores a single version of the facts, or "all the data, all of the time". Its flexible, scalable, consistent and adaptable design encompasses the best aspects of 3rd normal form (3NF) and star schema.

In this instructor-led, live training, participants will learn how to build a Data Vault.

By the end of this training, participants will be able to:

  • Understand the architecture and design concepts behind Data Vault 2.0, and its interaction with Big Data, NoSQL and AI.
  • Use data vaulting techniques to enable auditing, tracing, and inspection of historical data in a data warehouse
  • Develop a consistent and repeatable ETL (Extract, Transform, Load) process
  • Build and deploy highly scalable and repeatable warehouses

Audience

  • Data modelers
  • Data warehousing specialists
  • Business Intelligence specialists
  • Data engineers
  • Database administrators

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
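The hub/link/satellite structures at the heart of Data Vault modeling can be illustrated with a toy SQLite schema (the table and column names are our own simplification, not the full Data Vault 2.0 standard):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hub: one row per business key, with load metadata
cur.execute("""CREATE TABLE hub_customer (
    customer_hk TEXT PRIMARY KEY,   -- hash key
    customer_id TEXT,               -- business key
    load_date   TEXT,
    record_src  TEXT)""")

# Satellite: descriptive attributes, historized by load_date
cur.execute("""CREATE TABLE sat_customer (
    customer_hk TEXT,
    load_date   TEXT,
    name        TEXT,
    city        TEXT,
    PRIMARY KEY (customer_hk, load_date))""")

cur.execute("INSERT INTO hub_customer VALUES ('hk1', 'C-1001', '2018-01-01', 'crm')")
cur.execute("INSERT INTO sat_customer VALUES ('hk1', '2018-01-01', 'Alice', 'Glasgow')")
# A changed attribute is appended, not overwritten:
cur.execute("INSERT INTO sat_customer VALUES ('hk1', '2018-02-01', 'Alice', 'Edinburgh')")

# "All the data, all of the time": both versions remain queryable
rows = cur.execute(
    "SELECT load_date, city FROM sat_customer ORDER BY load_date").fetchall()
print(rows)
```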
odmblockchain IBM ODM and Blockchain: Applying business rules to Smart Contracts 14 hours

Smart Contracts are used to encode and encapsulate the rules for automatically initiating and processing transactions on the Blockchain.

In this instructor-led, live training, participants will learn how to use IBM Operational Decision Manager (ODM) with Hyperledger Composer to implement the business logic of a Smart Contract using business rules.

By the end of this training, participants will be able to:

  • Use ODM's rule engine together with Blockchain to "unbury" rules from the codebase of a Blockchain application
  • Set up a system to allow specialists such as accountants, auditors, lawyers, and analysts to define the rules of exchange for themselves
  • Use Decision Center as a platform to collaboratively govern rules
  • Use ODM's rule engine to update, test and deploy rules without touching the code of the Smart Contract
  • Deploy the IBM ODM Rule Execution Server
  • Integrate IBM ODM with Hyperledger Composer running on Hyperledger Fabric

Audience

  • Developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
graphcomputing Introduction to Graph Computing 28 hours

A large number of real world problems can be described in terms of graphs. For example, the Web graph, the social network graph, the train network graph and the language graph. These graphs tend to be extremely large; processing them requires a specialized set of tools and mindset referred to as graph computing.

In this instructor-led, live training, participants will learn about the various technology offerings and implementations for processing graph data. The aim is to identify real-world objects, their characteristics and relationships, then model these relationships and process them as data using graph computing approaches. We start with a broad overview and narrow in on specific tools as we step through a series of case studies, hands-on exercises and live deployments.

By the end of this training, participants will be able to:

  • Understand how graph data is persisted and traversed
  • Select the best framework for a given task (from graph databases to batch processing frameworks)
  • Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel
  • View real-world big data problems in terms of graphs, processes and traversals

Audience

  • Developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
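Underneath every framework above is the same primitive: traversing vertices along edges. A minimal breadth-first traversal over an adjacency list in plain Python:

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal; returns vertices in the order visited."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for nbr in graph.get(v, []):
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

# A tiny "web graph": pages linking to other pages
web = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs(web, "a"))
```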
monetdb MonetDB 28 hours

MonetDB is an open-source database that pioneered the column-store technology approach.

In this instructor-led, live training, participants will learn how to use MonetDB and how to get the most value out of it.

By the end of this training, participants will be able to:

  • Understand MonetDB and its features
  • Install and get started with MonetDB
  • Explore and perform different functions and tasks in MonetDB
  • Accelerate the delivery of their project by maximizing MonetDB capabilities

Audience

  • Developers
  • Technical experts

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
sspsspas Statistics with SPSS Predictive Analytics Software 14 hours

Goal:

Learning to work with SPSS at the level of independence

The addressees:

Analysts, researchers, scientists, students and all those who want to acquire the ability to use SPSS package and learn popular data mining techniques.

68736 Hadoop for Developers (2 days) 14 hours
deeplearning1 Introduction to Deep Learning 21 hours

This course is a general overview of Deep Learning without going too deeply into any specific method. It is suitable for people who want to start using Deep Learning to enhance the accuracy of their predictions.
solrdev Solr for Developers 21 hours

This course introduces students to the Solr platform. Through a combination of lecture, discussion and labs students will gain hands on experience configuring effective search and indexing.

The class begins with basic Solr installation and configuration then teaches the attendees the search features of Solr. Students will gain experience with faceting, indexing and search relevance among other features central to the Solr platform. The course wraps up with a number of advanced topics including spell checking, suggestions, Multicore and SolrCloud.

Duration: 3 days

Audience: Developers, business users, administrators

tfir TensorFlow for Image Recognition 28 hours

This course explores, with specific examples, the application of TensorFlow to image recognition.

Audience

This course is intended for engineers seeking to utilize TensorFlow for the purposes of Image Recognition

After completing this course, delegates will be able to:

  • understand TensorFlow’s structure and deployment mechanisms
  • carry out installation / production environment / architecture tasks and configuration
  • assess code quality, perform debugging, monitoring
  • implement advanced production like training models, building graphs and logging
aiauto Artificial Intelligence in Automotive 14 hours

This course covers AI (with emphasis on Machine Learning and Deep Learning) in the automotive industry. It helps determine which technologies can (potentially) be used in multiple situations in a car: from simple automation and image recognition to autonomous decision making.

scylladb Scylla database 21 hours

Scylla is an open-source distributed NoSQL data store. It is compatible with Apache Cassandra but performs at significantly higher throughputs and lower latencies.

In this course, participants will learn about Scylla's features and architecture while obtaining practical experience with setting up, administering, monitoring, and troubleshooting Scylla.  

Audience
    Database administrators
    Developers
    System Engineers

Format of the course
    The course is interactive and includes discussions of the principles and approaches for deploying and managing Scylla distributed databases and clusters. The course includes a heavy component of hands-on exercises and practice.

Fairsec Fairseq: Setting up a CNN-based machine translation system 7 hours

Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT).

In this training, participants will learn how to use Fairseq to carry out translation of sample content. By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq-based machine translation solution. Source and target language content samples can be prepared according to the audience's requirements.

Audience

  • Localization specialists with a technical background
  • Global content managers
  • Localization engineers
  • Software developers in charge of implementing global content solutions

Format of the course
    Part lecture, part discussion, heavy hands-on practice

PentahoDI Pentaho Data Integration Fundamentals 21 hours

Pentaho Data Integration is an open-source data integration tool for defining jobs and data transformations.

In this instructor-led, live training, participants will learn how to use Pentaho Data Integration's powerful ETL capabilities and rich GUI to manage an entire big data lifecycle, maximizing the value of data to the organization.

By the end of this training, participants will be able to:

  • Create, preview, and run basic data transformations containing steps and hops
  • Configure and secure the Pentaho Enterprise Repository
  • Harness disparate sources of data and generate a single, unified version of the truth in an analytics-ready format.
  • Provide results to third-party applications for further processing

Audience

  • Data analysts
  • ETL developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
t2t T2T: Creating Sequence to Sequence models for generalized learning 7 hours

Tensor2Tensor (T2T) is a modular, extensible library for training AI models in different tasks, using different types of training data, for example: image recognition, translation, parsing, image captioning, and speech recognition. It is maintained by the Google Brain team.

In this instructor-led, live training, participants will learn how to prepare a deep-learning model to resolve multiple tasks.

By the end of this training, participants will be able to:

  • Install tensor2tensor, select a data set, and train and evaluate an AI model
  • Customize a development environment using the tools and components included in Tensor2Tensor
  • Create and use a single model to concurrently learn a number of tasks from multiple domains
  • Use the model to learn from tasks with a large amount of training data and apply that knowledge to tasks where data is limited
  • Obtain satisfactory processing results using a single GPU

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
fsharpfordatascience F# for Data Science 21 hours

Data science is the application of statistical analysis, machine learning, data visualization and programming for the purpose of understanding and interpreting real-world data. F# is a well suited programming language for data science as it combines efficient execution, REPL-scripting, powerful libraries and scalable data integration.

In this instructor-led, live training, participants will learn how to use F# to solve a series of real-world data science problems.

By the end of this training, participants will be able to:

  • Use F#'s integrated data science packages
  • Use F# to interoperate with other languages and platforms, including Excel, R, Matlab, and Python
  • Use the Deedle package to solve time series problems
  • Carry out advanced analysis with minimal lines of production-quality code
  • Understand how functional programming is a natural fit for scientific and big data computations
  • Access and visualize data with F#
  • Apply F# for machine learning

Explore solutions for problems in domains such as business intelligence and social gaming

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
pythonfinance Python Programming for Finance 35 hours

Python is a programming language that has gained huge popularity in the financial industry. Used by the largest investment banks and hedge funds, it is being employed to build a wide range of financial applications ranging from core trading programs to risk management systems.

In this instructor-led, live training, participants will learn how to use Python to develop practical applications for solving a number of specific finance related problems.

By the end of this training, participants will be able to:

  • Understand the fundamentals of the Python programming language
  • Download, install and maintain the best development tools for creating financial applications in Python
  • Select and utilize the most suitable Python packages and programming techniques to organize, visualize, and analyze financial data from various sources (CSV, Excel, databases, web, etc.)
  • Build applications that solve problems related to asset allocation, risk analysis, investment performance and more
  • Troubleshoot, integrate, deploy, and optimize a Python application

Audience

  • Developers
  • Analysts
  • Quants

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Note

  • This training aims to provide solutions for some of the principal problems faced by finance professionals. However, if you have a particular topic, tool or technique that you wish to append or elaborate on further, please contact us to arrange.
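As a flavour of the problems covered, here is a dependency-free sketch computing periodic returns and their volatility for a price series (real coursework would use packages such as NumPy and pandas; the prices are invented):

```python
import math

def returns(prices):
    """Simple periodic returns from a price series."""
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

def volatility(rets):
    """Standard deviation of returns (population form)."""
    mean = sum(rets) / len(rets)
    return math.sqrt(sum((r - mean) ** 2 for r in rets) / len(rets))

prices = [100.0, 102.0, 101.0, 105.0]
rets = returns(prices)
print([round(r, 4) for r in rets], round(volatility(rets), 4))
```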
apachedrill Apache Drill for On-the-Fly Analysis of Multiple Big Data Formats 21 hours

Apache Drill is a schema-free, distributed, in-memory columnar SQL query engine for Hadoop, NoSQL and other Cloud and file storage systems. Apache Drill's power lies in its ability to join data from multiple data stores using a single query. Apache Drill supports numerous NoSQL databases and file systems, including HBase, MongoDB, MapR-DB, HDFS, MapR-FS, Amazon S3, Azure Blob Storage, Google Cloud Storage, Swift, NAS and local files.

In this instructor-led, live training, participants will learn the fundamentals of Apache Drill, then leverage the power and convenience of SQL to interactively query big data without writing code. Participants will also learn how to optimize their Drill queries for distributed SQL execution.

By the end of this training, participants will be able to:

  • Perform "self-service" exploration on structured and semi-structured data on Hadoop
  • Query known as well as unknown data using SQL queries
  • Understand how Apache Drill receives and executes queries
  • Write SQL queries to analyze different types of data, including structured data in Hive, semi-structured data in HBase or MapR-DB tables, and data saved in files such as Parquet and JSON.
  • Use Apache Drill to perform on-the-fly schema discovery, bypassing the need for complex ETL and schema operations
  • Integrate Apache Drill with BI (Business Intelligence) tools such as Tableau, Qlikview, MicroStrategy and Excel

Audience

  • Data analysts
  • Data scientists
  • SQL programmers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
rneuralnet Neural Network in R 14 hours

This course is an introduction to applying neural networks in real world problems using R-project software.

dataminr Data Mining with R 14 hours

R is an open-source free programming language for statistical computing, data analysis, and graphics. R is used by a growing number of managers and data analysts inside corporations and academia. R has a wide variety of packages for data mining.

matlabml1 Introduction to Machine Learning with MATLAB 21 hours

MATLAB is a numerical computing environment and programming language developed by MathWorks.

sparkdev Spark for Developers 21 hours

OBJECTIVE:

This course will introduce Apache Spark. The students will learn how Spark fits into the Big Data ecosystem, and how to use Spark for data analysis. The course covers the Spark shell for interactive data analysis, Spark internals, Spark APIs, Spark SQL, Spark streaming, and machine learning and GraphX.

AUDIENCE :

Developers / Data Analysts

jenetics Jenetics 21 hours

Jenetics is an advanced genetic algorithm (evolutionary algorithm) library written in modern Java.

Audience

This course is directed at Researchers seeking to utilize Jenetics in their projects
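Jenetics implements the standard evolutionary cycle of selection, crossover and mutation. A minimal, library-free sketch of that cycle (in Python rather than Java, purely to illustrate the idea on the toy "OneMax" problem):

```python
import random

random.seed(42)

def fitness(bits):
    """'OneMax' toy problem: maximize the number of 1-bits."""
    return sum(bits)

def evolve(pop_size=20, length=12, generations=40):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < 0.1:           # occasional mutation
                i = random.randrange(length)
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the optimum of 12 after a few dozen generations
```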


Neuralnettf Neural Networks Fundamentals using TensorFlow as Example 28 hours

This course will give you knowledge of neural networks and, more generally, of machine learning algorithms and deep learning (algorithms and applications).

This training focuses more on fundamentals, but it will help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc. The examples are made in TensorFlow.

aitech Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP 21 hours

This course is aimed at developers and data scientists who wish to understand and implement AI within their applications. Special focus is given to Data Analysis, Distributed AI and NLP.

ApHadm1 Apache Hadoop: Manipulation and Transformation of Data Performance 21 hours


This course is intended for developers, architects, data scientists or any profile that requires access to data either intensively or on a regular basis.

The major focus of the course is data manipulation and transformation.

Among the tools in the Hadoop ecosystem this course includes the use of Pig and Hive both of which are heavily used for data transformation and manipulation.

This training also addresses performance metrics and performance optimisation.

The course is entirely hands on and is punctuated by presentations of the theoretical aspects.

hdp Hortonworks Data Platform (HDP) for administrators 21 hours

Hortonworks Data Platform is an open-source Apache Hadoop support platform that provides a stable foundation for developing big data solutions on the Apache Hadoop ecosystem.

This instructor-led live training introduces Hortonworks and walks participants through the deployment of Spark + Hadoop solution.

By the end of this training, participants will be able to:

  • Use Hortonworks to reliably run Hadoop at a large scale
  • Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
  • Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project
  • Process different types of data, including structured, unstructured, in-motion, and at-rest.

Audience

  • Hadoop administrators

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
deckgl deck.gl: Visualizing Large-scale Geospatial Data 14 hours

deck.gl is an open-source, WebGL-powered library for exploring and visualizing data assets at scale. Created by Uber, it is especially useful for gaining insights from geospatial data sources, such as data on maps.

This instructor-led, live training introduces the concepts and functionality behind deck.gl and walks participants through the set up of a demonstration project.

By the end of this training, participants will be able to:

  • Take data from very large collections and turn it into compelling visual representations
  • Visualize data collected from transportation and journey-related use cases, such as pick-up and drop-off experiences, network traffic, etc.
  • Apply layering techniques to geospatial data to depict changes in data over time
  • Integrate deck.gl with React (for Reactive programming) and Mapbox GL (for visualizations on Mapbox based maps).
  • Understand and explore other use cases for deck.gl, including visualizing points collected from a 3D indoor scan, visualizing machine learning models in order to optimize their algorithms, etc.

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
mlios Machine Learning on iOS 14 hours

In this instructor-led, live training, participants will learn how to use the iOS Machine Learning (ML) technology stack as they step through the creation and deployment of an iOS mobile app.

By the end of this training, participants will be able to:

  • Create a mobile app capable of image processing, text analysis and speech recognition
  • Access pre-trained ML models for integration into iOS apps
  • Create a custom ML model
  • Add Siri Voice support to iOS apps
  • Understand and use frameworks such as coreML, Vision, CoreGraphics, and GamePlayKit
  • Use languages and tools such as Python, Keras, Caffe, TensorFlow, scikit-learn, libsvm, Anaconda, and Spyder

Audience

  • Developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
rforfinance R Programming for Finance 28 hours

R is a popular programming language in the financial industry. It is used in financial applications ranging from core trading programs to risk management systems.

In this instructor-led, live training, participants will learn how to use R to develop practical applications for solving a number of specific finance related problems.

By the end of this training, participants will be able to:

  • Understand the fundamentals of the R programming language
  • Select and utilize R packages and techniques to organize, visualize, and analyze financial data from various sources (CSV, Excel, databases, web, etc.)
  • Build applications that solve problems related to asset allocation, risk analysis, investment performance and more
  • Troubleshoot, integrate, deploy, and optimize an R application

Audience

  • Developers
  • Analysts
  • Quants

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Note

  • This training aims to provide solutions for some of the principal problems faced by finance professionals. However, if you have a particular topic, tool or technique that you wish to append or elaborate on further, please contact us to arrange.
cortana Turning Data into Intelligent Action with Cortana Intelligence 28 hours

Cortana Intelligence Suite is a bundle of integrated products and services on the Microsoft Azure Cloud that enable entities to transform data into intelligent actions.

In this instructor-led, live training, participants will learn how to use the components that are part of the Cortana Intelligence Suite to build data-driven intelligent applications.

By the end of this training, participants will be able to:

  • Learn how to use Cortana Intelligence Suite tools
  • Acquire the latest knowledge of data management and analytics
  • Use Cortana components to turn data into intelligent action
  • Use Cortana to build applications from scratch and launch them on the cloud

Audience

  • Data scientists
  • Programmers
  • Developers
  • Managers
  • Architects

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
bdbiga Big Data Business Intelligence for Govt. Agencies 35 hours

Advances in technologies and the increasing amount of information are transforming how business is conducted in many industries, including government. Government data generation and digital archiving rates are on the rise due to the rapid growth of mobile devices and applications, smart sensors and devices, cloud computing solutions, and citizen-facing portals. As digital information expands and becomes more complex, information management, processing, storage, security, and disposition become more complex as well. New capture, search, discovery, and analysis tools are helping organizations gain insights from their unstructured data. The government market is at a tipping point, realizing that information is a strategic asset, and government needs to protect, leverage, and analyze both structured and unstructured information to better serve and meet mission requirements. As government leaders strive to evolve data-driven organizations to successfully accomplish mission, they are laying the groundwork to correlate dependencies across events, people, processes, and information.

High-value government solutions will be created from a mashup of the most disruptive technologies:

  • Mobile devices and applications
  • Cloud services
  • Social business technologies and networking
  • Big Data and analytics

IDC predicts that by 2020, the IT industry will reach $5 trillion, approximately $1.7 trillion larger than today, and that 80% of the industry's growth will be driven by these 3rd Platform technologies. In the long term, these technologies will be key tools for dealing with the complexity of increased digital information. Big Data is one of the intelligent industry solutions and allows government to make better decisions by taking action based on patterns revealed by analyzing large volumes of data — related and unrelated, structured and unstructured.

But accomplishing these feats takes far more than simply accumulating massive quantities of data. “Making sense of these volumes of Big Data requires cutting-edge tools and technologies that can analyze and extract useful knowledge from vast and diverse streams of information,” Tom Kalil and Fen Zhao of the White House Office of Science and Technology Policy wrote in a post on the OSTP Blog.

The White House took a step toward helping agencies find these technologies when it established the National Big Data Research and Development Initiative in 2012. The initiative included more than $200 million to make the most of the explosion of Big Data and the tools needed to analyze it.

The challenges that Big Data poses are nearly as daunting as its promise is encouraging. Storing data efficiently is one of these challenges. As always, budgets are tight, so agencies must minimize the per-megabyte price of storage and keep the data within easy access so that users can get it when they want it and how they need it. Backing up massive quantities of data heightens the challenge.

Analyzing the data effectively is another major challenge. Many agencies employ commercial tools that enable them to sift through the mountains of data, spotting trends that can help them operate more efficiently. (A recent study by MeriTalk found that federal IT executives think Big Data could help agencies save more than $500 billion while also fulfilling mission objectives.)

Custom-developed Big Data tools also are allowing agencies to address the need to analyze their data. For example, the Oak Ridge National Laboratory’s Computational Data Analytics Group has made its Piranha data analytics system available to other agencies. The system has helped medical researchers find a link that can alert doctors to aortic aneurysms before they strike. It’s also used for more mundane tasks, such as sifting through résumés to connect job candidates with hiring managers.

68780 Apache Spark 14 hours
osovv OpenStack Overview 7 hours

The course is dedicated to IT engineers and architects who are looking for a solution to host a private or public IaaS (Infrastructure as a Service) cloud.
It is also a great opportunity for IT managers to gain an overview of the possibilities that OpenStack can enable.

Before you spend a lot of money on an OpenStack implementation, you can weigh all the pros and cons by attending our course.
This topic is also available as individual consultancy.

Course goal:

  • gaining basic knowledge regarding OpenStack

hbasedev HBase for Developers 21 hours

This course introduces HBase – a NoSQL store on top of Hadoop. The course is intended for developers who will be using HBase to develop applications, and administrators who will manage HBase clusters.

We will walk developers through HBase architecture, data modelling and application development on HBase. The course will also discuss using MapReduce with HBase and some administration topics related to performance optimization. The course is very hands-on, with lots of lab exercises.


Duration : 3 days

Audience : Developers  & Administrators
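
At its core, HBase models a table as a sparse, sorted map: row key → column family → qualifier → value, and good row-key design keeps related rows adjacent. A minimal pure-Python sketch of that model (illustrative only, not the HBase client API; the key scheme is a hypothetical example):

```python
from collections import defaultdict

# HBase models a table as a sparse map:
#   row key -> column family -> qualifier -> value
table = defaultdict(lambda: defaultdict(dict))

def put(row_key, family, qualifier, value):
    table[row_key][family][qualifier] = value

# Composite row keys keep related rows adjacent in HBase's sorted order,
# e.g. "<user_id>:<reversed_timestamp>" so a user's newest events scan first.
def event_key(user_id, ts, max_ts=9_999_999_999):
    return f"{user_id}:{max_ts - ts:010d}"

put(event_key("u42", 1700000000), "info", "action", "login")
put(event_key("u42", 1700000500), "info", "action", "purchase")

# Rows sort lexicographically, as HBase regions do; the newest event
# (largest timestamp, hence smallest reversed key) scans first.
scan = sorted(k for k in table if k.startswith("u42:"))
```

Row-key design choices like this, and their impact on scan performance, are exactly what the data-modelling labs explore.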

dl4j Mastering Deeplearning4j 21 hours

Deeplearning4j is the first commercial-grade, open-source, distributed deep-learning library written for Java and Scala. Integrated with Hadoop and Spark, DL4J is designed to be used in business environments on distributed GPUs and CPUs.

 

Audience

This course is directed at engineers and developers seeking to utilize Deeplearning4j in their projects.

 

After this course delegates will be able to:

cassadmin Cassandra Administration 14 hours

This course will introduce Cassandra – a popular NoSQL database. It covers Cassandra principles, architecture and the data model. Students will learn data modeling in CQL (Cassandra Query Language) in hands-on, interactive labs. The session also discusses Cassandra internals and some administration topics.
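
A central architectural idea covered in the course is that Cassandra distributes rows by hashing each row's partition key onto a token ring, with each node owning a range of tokens. A toy sketch of that placement (illustrative only: Cassandra actually uses the Murmur3 partitioner and token ranges, not md5 and modulo):

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # hypothetical cluster

def token(partition_key: str) -> int:
    # Cassandra uses Murmur3; md5 stands in here for illustration
    return int(hashlib.md5(partition_key.encode()).hexdigest(), 16)

def owner(partition_key: str) -> str:
    # simplistic modulo placement instead of real token ranges
    return NODES[token(partition_key) % len(NODES)]

# All rows sharing a partition key land on the same replicas, which is
# why CQL data models are designed query-first around the partition key.
node = owner("sensor-1")
```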

marvin Marvin Image Processing Framework - creating image and video processing applications with Marvin 14 hours

Marvin is an extensible, cross-platform, open-source image and video processing framework developed in Java.  Developers can use Marvin to manipulate images, extract features from images for classification tasks, generate figures algorithmically, process video file datasets, and set up unit test automation.

Some of Marvin's video applications include filtering, augmented reality, object tracking and motion detection.

In this course participants will learn the principles of image and video analysis and utilize the Marvin Framework and its image processing algorithms to construct their own application.

Audience
    Software developers wishing to utilize a rich, plug-in based open-source framework to create image and video processing applications

Format of the course
    The basic principles of image analysis, video analysis and the Marvin Framework are first introduced. Students are given project-based tasks which allow them to practice the concepts learned. By the end of the class, participants will have developed their own application using the Marvin Framework and libraries.
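
Marvin's filters are plug-ins that transform an image pixel by pixel. As a language-neutral illustration of the idea (Python here rather than Marvin's Java API), a per-pixel grayscale filter using the standard luminance weights:

```python
def grayscale(image):
    """Apply a per-pixel luminance filter to an image given as nested
    lists of (r, g, b) tuples - the kind of transform a Marvin filter
    plug-in performs on an image."""
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in image
    ]

img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
gray = grayscale(img)
```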

hadoopforprojectmgrs Hadoop for Project Managers 14 hours

As more and more software and IT projects migrate from local processing and data management to distributed processing and big data storage, Project Managers are finding the need to upgrade their knowledge and skills to grasp the concepts and practices relevant to Big Data projects and opportunities.

This course introduces Project Managers to the most popular Big Data processing framework: Hadoop.  

In this instructor-led training, participants will learn the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems. In learning these foundations, participants will also improve their ability to communicate with the developers and implementers of these systems as well as the data scientists and analysts that many IT projects involve.

Audience

  • Project Managers wishing to implement Hadoop into their existing development or IT infrastructure
  • Project Managers needing to communicate with cross-functional teams that include big data engineers, data scientists and business analysts

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
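
The core Hadoop concept project managers need to grasp is MapReduce: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. A pure-Python word-count sketch of that flow (illustrative only, not Hadoop's API):

```python
from collections import defaultdict

docs = ["big data big plans", "big wins"]

# Map: each document emits (word, 1) pairs
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle: group the pairs by key, as Hadoop does between the phases
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: aggregate each group independently (and, in Hadoop, in parallel)
counts = {word: sum(vals) for word, vals in groups.items()}
```

In Hadoop the same three steps run distributed across a cluster, which is what lets the model scale to data that no single machine could hold.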

magellan Magellan: Geospatial Analytics on Spark 14 hours

Magellan is an open-source distributed execution engine for geospatial analytics on big data. Implemented on top of Apache Spark, it extends Spark SQL and provides a relational abstraction for geospatial analytics.

This instructor-led, live training introduces the concepts and approaches for implementing geospatial analytics and walks participants through the creation of a predictive analysis application using Magellan on Spark.

By the end of this training, participants will be able to:

  • Efficiently query, parse and join geospatial datasets at scale
  • Implement geospatial data in business intelligence and predictive analytics applications
  • Use spatial context to extend the capabilities of mobile devices, sensors, logs, and wearables

Audience

  • Application developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
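
The building block of a geospatial join such as Magellan's is the point-in-polygon predicate. A minimal pure-Python ray-casting sketch of that test (illustrative only; Magellan itself exposes this relationally through Spark SQL):

```python
def point_in_polygon(x, y, polygon):
    """Ray casting: count how many polygon edges a rightward ray from
    (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's height
            cross_x = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < cross_x:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

Joining billions of points against region polygons efficiently - rather than testing every pair as above - is exactly the scaling problem Magellan addresses on Spark.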
embeddingprojector Embedding Projector: Visualizing your Training Data 14 hours

Embedding Projector is an open-source web application for visualizing the data used to train machine learning systems. Created by Google, it is part of TensorFlow.

This instructor-led, live training introduces the concepts behind Embedding Projector and walks participants through the setup of a demo project.

By the end of this training, participants will be able to:

  • Explore how data is being interpreted by machine learning models
  • Navigate through 3D and 2D views of data to understand how a machine learning algorithm interprets it
  • Understand the concepts behind embeddings and their role in representing mathematical vectors for images, words and numerals
  • Explore the properties of a specific embedding to understand the behavior of a model
  • Apply Embedding Projector to real-world use cases such as building a song recommendation system for music lovers

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
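
An embedding maps each item to a vector so that similar items sit close together; Embedding Projector renders those neighbourhoods in 2D and 3D. A toy sketch with made-up 3-D word vectors (the values are illustrative, not real trained embeddings, which typically have hundreds of dimensions):

```python
import math

# hypothetical 3-D embeddings for illustration
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 for identical directions, lower as vectors diverge."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# the neighbourhood structure the Projector lets you explore visually
sim_royal = cosine(embeddings["king"], embeddings["queen"])
sim_fruit = cosine(embeddings["king"], embeddings["apple"])
```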
tensorflowserving TensorFlow Serving 7 hours

TensorFlow Serving is a system for serving machine learning (ML) models to production.

In this instructor-led, live training, participants will learn how to configure and use TensorFlow Serving to deploy and manage ML models in a production environment.

By the end of this training, participants will be able to:

  • Train, export and serve various TensorFlow models
  • Test and deploy algorithms using a single architecture and set of APIs
  • Extend TensorFlow Serving to serve other types of models beyond TensorFlow models

Audience

  • Developers
  • Data scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
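
A served model is typically queried over TensorFlow Serving's REST API with a `POST` to `/v1/models/<name>:predict` carrying a JSON body that lists input instances. A sketch building such a request with the standard library (the model name and input values are hypothetical, and actually sending it requires a running server, so only the payload is constructed here):

```python
import json

MODEL_NAME = "my_model"  # hypothetical model name
# 8501 is the REST port the official TensorFlow Serving Docker image exposes
url = f"http://localhost:8501/v1/models/{MODEL_NAME}:predict"

# TensorFlow Serving's REST "predict" body: one entry per input instance
payload = json.dumps({"instances": [[1.0, 2.0, 5.0]]})

# against a live server, e.g.:
#   urllib.request.urlopen(url, payload.encode())
```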
textsum Text Summarization with Python 14 hours

In Python, machine learning-based text summarization tools read input text and produce a text summary. This capability is available from the command line or as a Python API/library. One exciting application is the rapid creation of executive summaries, which is particularly useful for organizations that need to review large bodies of text data before generating reports and presentations.

In this instructor-led, live training, participants will learn to use Python to create a simple application that auto-generates a summary of input text.

By the end of this training, participants will be able to:

  • Use a command-line tool that summarizes text.
  • Design and create Text Summarization code using Python libraries.
  • Evaluate three Python summarization libraries: sumy 0.7.0, pysummarization 1.0.4, readless 1.0.17

Audience

  • Developers
  • Data Scientists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
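
Libraries such as sumy implement extractive summarization: score each sentence and keep the highest-scoring ones. A minimal word-frequency sketch of the approach in plain Python (illustrative only, not any library's actual API):

```python
import re
from collections import Counter

def summarize(text, n=1):
    """Extractive summary: rank sentences by the frequency of the words
    they contain, then return the top n in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))
    scored = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"\w+", sentences[i].lower())),
    )
    return " ".join(sentences[i] for i in sorted(scored[:n]))

text = ("Spark processes data fast. Spark scales to clusters. "
        "Bananas are yellow.")
summary = summarize(text, 1)  # keeps the highest-scoring sentence
```

The libraries evaluated in the course refine this idea with graph-based and semantic scoring, but the extract-and-rank skeleton is the same.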
intelligentmobileapps Building Intelligent Mobile Applications 35 hours

Intelligent applications are next-generation apps that can continually learn from user interactions to provide better value and relevance to users.

In this instructor-led, live training, participants will learn how to build intelligent mobile applications and bots.

By the end of this training, participants will be able to:

  • Understand the fundamental concepts of intelligent applications
  • Learn how to use various tools for building intelligent applications
  • Build intelligent applications using Azure, Cognitive Services API, Stream Analytics, and Xamarin

Audience

  • Developers
  • Programmers
  • Hobbyists

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice
bldrools Managing Business Logic with Drools 21 hours

This course is aimed at enterprise architects, business and system analysts, technical managers and developers who want to apply business rules to their solutions.

This course contains a lot of simple hands-on exercises during which the participants will create working rules. Please refer to our other courses if you just need an overview of Drools.

This course is usually delivered on the newest stable version of Drools and jBPM, but in case of a bespoke course, can be tailored to a specific version.
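
In Drools, business logic is expressed as DRL rules with a when/then structure: the conditions match facts in working memory, and the actions fire when they do. A minimal sketch of the shape participants author in the exercises (the `Order` fact type and threshold here are hypothetical):

```
rule "Volume discount"
when
    $o : Order( total > 1000, discount == 0 )
then
    $o.setDiscount( 0.10 );
    update( $o );
end
```

In class, rules like this are written against a real fact model and executed in a KieSession, with the engine deciding which rules fire and in what order.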


