Local, instructor-led live Big Data training courses start with an introduction to the elemental concepts of Big Data, then progress into the programming languages and methodologies used to perform Data Analysis. Tools and infrastructure for enabling Big Data storage, Distributed Processing, and Scalability are discussed, compared, and implemented in demo practice sessions.
Big Data training is available as "onsite live training" or "remote live training". Swindon onsite live Big Data trainings can be carried out locally on customer premises or in NobleProg corporate training centres. Remote live training is carried out via an interactive remote desktop.
NobleProg -- Your Local Training Provider
We are located on the western side of Swindon, only 300 yards from junction 16 of the M4, providing easy access to both the south west and south east.
By train, London Paddington is only 1 hour away and Bristol Temple Meads 45 minutes from Swindon's mainline station.
Swindon town centre is only a 5-minute drive away and offers all modern restaurant and shopping facilities; for those who prefer a slower pace, areas such as Lydiard Country Park are located within 1 mile of the...
This is one of the best quality online training courses I have ever taken in my 13-year career. Keep up the great work!
Course: Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
Liked very much the interactive way of learning.
Luigi Loiacono
Course: Data Analysis with Hive/HiveQL
Overall the Content was good.
Sameer Rohadia
Course: A Practical Introduction to Data Analysis and Big Data
We know a lot more about the whole environment.
John Kidd
Course: Spark for Developers
Richard's training style kept it interesting, the real world examples used helped to drive the concepts home.
Jamie Martin-Royle - NBrown Group
Course: From Data to Decision with Big Data and Predictive Analytics
The tutor, Mr. Michael An, interacted with the audience very well, and the instruction was clear. The tutor also went out of his way to add more information based on requests from the students during the training.
Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course: Programming with Big Data in R
I generally enjoyed the Hadoop ecosystem.
Adnan Rafiq
Course: Big Data Business Intelligence for Govt. Agencies
I think the trainer had an excellent style of combining humor and real life stories to make the subjects at hand very approachable. I would highly recommend this professor in the future.
Course: Spark for Developers
The example and training material were sufficient and made it easy to understand what you are doing.
Teboho Makenete
Course: Data Science for Big Data Analytics
I benefited from the competence and knowledge of the trainer.
Jonathan Puvilland
Course: Data Analysis with Hive/HiveQL
I benefited from the good overview and the good balance between theory and exercises.
Proximus
Course: Data Analysis with Hive/HiveQL
I really benefited from the willingness of the trainer to share more.
Balaram Chandra Paul
Course: A Practical Introduction to Data Analysis and Big Data
I really enjoyed the introduction of new packages.
Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course: Programming with Big Data in R
The trainer made the class interesting and entertaining which helps quite a bit with all day training.
Ryan Speelman
Course: Spark for Developers
I liked the examples.
Peter Coleman
Course: Data Visualization
He was interactive.
Suraj
Course: Semantic Web Overview
I enjoyed the good real world examples, reviews of existing reports.
Ronald Parrish
Course: Data Visualization
It was a very practical training, I liked the hands-on exercises.
Proximus
Course: Data Analysis with Hive/HiveQL
It covered a broad range of information.
Continental AG / Abteilung: CF IT Finance
Course: A Practical Introduction to Data Analysis and Big Data
It was very hands-on; we spent half the time actually doing things in Cloudera/Hadoop, running different commands, checking the system, and so on. The extra materials (books, websites, etc.) were really appreciated, as we will have to continue to learn. The installations were quite fun and very handy, and the cluster setup from scratch was really good.
Ericsson
Course: Administrator Training for Apache Hadoop
Ernesto did a great job explaining the high level concepts of using Spark and its various modules.
Michael Nemerouf
Course: Spark for Developers
Michael, the trainer, is very knowledgeable and skillful about the subject of Big Data and R. He is very flexible and quickly customises the training to meet clients' needs. He is also very capable of solving technical and subject matter problems on the go. Fantastic and professional training!
Xiaoyuan Geng - Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course: Programming with Big Data in R
I enjoyed the dynamic interaction and the hands-on approach to the subject, thanks to the Virtual Machine. Very stimulating!
Philippe Job
Course: Data Analysis with Hive/HiveQL
I benefited from some new and interesting ideas, and from meeting and interacting with other attendees.
TECTERRA
Course: IoT (Internet of Things) for Entrepreneurs, Managers and Investors
I am a hands-on learner and this was something that he did a lot of.
Lisa Comfort
Course: Data Visualization
The subject matter and the pace were perfect.
Tim - Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course: Programming with Big Data in R
I generally benefited from the presentation of technologies.
Continental AG / Abteilung: CF IT Finance
Course: A Practical Introduction to Data Analysis and Big Data
This is one of the best hands-on programming courses with exercises I have ever taken.
Laura Kahn
Course: Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
Code | Name | Venue | Duration | Course Date | Course Price [Remote / Classroom] |
---|---|---|---|---|---|
dataminpython | Data Mining with Python | Swindon | 14 hours | Thu, 2019-03-28 09:30 | £2200 / £2550 |
apachedrill | Apache Drill | Swindon | 21 hours | Tue, 2019-06-25 09:30 | £3300 / £3825 |
hadoopmapr | Hadoop Administration on MapR | Swindon | 28 hours | Tue, 2019-07-23 09:30 | £4400 / £5100 |
bigddbsysfun | Big Data & Database Systems Fundamentals | Swindon | 14 hours | Thu, 2019-07-25 09:30 | £2200 / £2550 |
processmining | Process Mining | Swindon | 21 hours | Mon, 2019-07-29 09:30 | £3900 / £4425 |
bdbiga | Big Data Business Intelligence for Govt. Agencies | Swindon | 35 hours | Mon, 2019-07-29 09:30 | £6500 / £7375 |
flink | Flink for Scalable Stream and Batch Data Processing | Swindon | 28 hours | Mon, 2019-07-29 09:30 | £4400 / £5100 |
storm | Apache Storm | Swindon | 28 hours | Mon, 2019-07-29 09:30 | £4400 / £5100 |
bigdata_ | A Practical Introduction to Data Analysis and Big Data | Swindon | 35 hours | Mon, 2019-07-29 09:30 | £6500 / £7375 |
bigdarch | Big Data Architect | Swindon | 35 hours | Mon, 2019-07-29 09:30 | £5500 / £6375 |
apacheh | Administrator Training for Apache Hadoop | Swindon | 35 hours | Mon, 2019-07-29 09:30 | £5500 / £6375 |
teraintro | Teradata Fundamentals | Swindon | 21 hours | Mon, 2019-07-29 09:30 | £3300 / £3825 |
hadoopadm1 | Hadoop For Administrators | Swindon | 21 hours | Wed, 2019-07-31 09:30 | £3300 / £3825 |
samza | Samza for Stream Processing | Swindon | 14 hours | Thu, 2019-08-01 09:30 | £2200 / £2550 |
bdatr | Big Data Analytics for Telecom Regulators | Swindon | 16 hours | Thu, 2019-08-01 09:30 | £3900 / £4250 |
bigdatam | Big Data and its Management Process | Swindon | 14 hours | Thu, 2019-08-01 09:30 | £2200 / £2550 |
beam | Unified Batch and Stream Processing with Apache Beam | Swindon | 14 hours | Thu, 2019-08-01 09:30 | £2200 / £2550 |
sparksql | Apache Spark SQL | Swindon | 7 hours | Fri, 2019-08-02 09:30 | £1100 / £1275 |
kafkastreams | Building Stream Processing Applications with Kafka Streams | Swindon | 7 hours | Fri, 2019-08-02 09:30 | £1100 / £1275 |
dmmlr | Data Mining & Machine Learning with R | Swindon | 14 hours | Mon, 2019-08-05 09:30 | £2600 / £2950 |
matlabpredanalytics | Matlab for Predictive Analytics | Swindon | 21 hours | Mon, 2019-08-05 09:30 | £3900 / £4425 |
introtostreamprocessing | A Practical Introduction to Stream Processing | Swindon | 21 hours | Mon, 2019-08-05 09:30 | £3300 / £3825 |
kdd | Knowledge Discovery in Databases (KDD) | Swindon | 21 hours | Mon, 2019-08-05 09:30 | £3300 / £3825 |
bigd_lbg | Big Data - Data Science | Swindon | 14 hours | Tue, 2019-08-06 09:30 | £2600 / £2950 |
vespa | Vespa: Serving Large-Scale Data in Real-Time | Swindon | 14 hours | Tue, 2019-08-06 09:30 | £2200 / £2550 |
hadoopdevad | Hadoop for Developers and Administrators | Swindon | 21 hours | Tue, 2019-08-06 09:30 | £3300 / £3825 |
iotemi | IoT (Internet of Things) for Entrepreneurs, Managers and Investors | Swindon | 21 hours | Wed, 2019-08-07 09:30 | N/A / £4425 |
bigdatar | Programming with Big Data in R | Swindon | 21 hours | Wed, 2019-08-07 09:30 | £3900 / £4425 |
amazonredshift | Amazon Redshift | Swindon | 21 hours | Wed, 2019-08-07 09:30 | £3300 / £3825 |
apachehama | Apache Hama | Swindon | 14 hours | Thu, 2019-08-08 09:30 | £2200 / £2550 |
Code | Name | Duration | Overview |
---|---|---|---|
smtwebint | Semantic Web Overview | 7 hours | The Semantic Web is a collaborative movement led by the World Wide Web Consortium (W3C) that promotes common formats for data on the World Wide Web. The Semantic Web provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. |
datameer | Datameer for Data Analysts | 14 hours | Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion. In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources. By the end of this training, participants will be able to: - Create, curate, and interactively explore an enterprise data lake - Access business intelligence data warehouses, transactional databases and other analytic stores - Use a spreadsheet user-interface to design end-to-end data processing pipelines - Access pre-built functions to explore complex data relationships - Use drag-and-drop wizards to visualize data and create dashboards - Use tables, charts, graphs, and maps to analyze query results Audience - Data analysts Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
sparkpython | Python and Spark for Big Data (PySpark) | 21 hours | In this instructor-led, live training in Swindon, participants will learn how to use Python and Spark together to analyze big data as they work on hands-on exercises. By the end of this training, participants will be able to: Learn how to use Spark with Python to analyze Big Data. Work on exercises that mimic real world circumstances. Use different tools and techniques for big data analysis using PySpark. |
bigdatabicriminal | Big Data Business Intelligence for Criminal Intelligence Analysis | 35 hours | Advances in technologies and the increasing amount of information are transforming how law enforcement is conducted. The challenges that Big Data pose are nearly as daunting as Big Data's promise. Storing data efficiently is one of these challenges; effectively analyzing it is another. In this instructor-led, live training, participants will learn the mindset with which to approach Big Data technologies, assess their impact on existing processes and policies, and implement these technologies for the purpose of identifying criminal activity and preventing crime. Case studies from law enforcement organizations around the world will be examined to gain insights on their adoption approaches, challenges and results. By the end of this training, participants will be able to: - Combine Big Data technology with traditional data gathering processes to piece together a story during an investigation - Implement industrial big data storage and processing solutions for data analysis - Prepare a proposal for the adoption of the most adequate tools and processes for enabling a data-driven approach to criminal investigation Audience - Law Enforcement specialists with a technical background Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
BDATR | Big Data Analytics for Telecom Regulators | 16 hours | To meet regulatory compliance, CSPs (Communication Service Providers) can tap into Big Data Analytics, which not only helps them meet compliance but, within the scope of the same project, also lets them increase customer satisfaction and thus reduce churn. In fact, since compliance is related to the Quality of Service tied to a contract, any initiative towards meeting compliance will improve the "competitive edge" of CSPs. It is therefore important that regulators be able to advise/guide a set of Big Data analytics practices for CSPs that will be of mutual benefit to both regulators and CSPs. 2 days of course: 8 modules, 2 hours each = 16 hours |
graphcomputing | Introduction to Graph Computing | 28 hours | In this instructor-led, live training in Swindon, participants will learn about the technology offerings and implementation approaches for processing graph data. The aim is to identify real-world objects, their characteristics and relationships, then model these relationships and process them as data using a Graph Computing (also known as Graph Analytics) approach. We start with a broad overview and narrow in on specific tools as we step through a series of case studies, hands-on exercises and live deployments. By the end of this training, participants will be able to: Understand how graph data is persisted and traversed. Select the best framework for a given task (from graph databases to batch processing frameworks.) Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel. View real-world big data problems in terms of graphs, processes and traversals. |
matlabpredanalytics | Matlab for Predictive Analytics | 21 hours | Predictive analytics is the process of using data analytics to make predictions about the future. This process uses data along with data mining, statistics, and machine learning techniques to create a predictive model for forecasting future events. In this instructor-led, live training, participants will learn how to use Matlab to build predictive models and apply them to large sample data sets to predict future events based on the data. By the end of this training, participants will be able to: - Create predictive models to analyze patterns in historical and transactional data - Use predictive modeling to identify risks and opportunities - Build mathematical models that capture important trends - Use data from devices and business systems to reduce waste, save time, or cut costs Audience - Developers - Engineers - Domain experts Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
nifidev | Apache NiFi for Developers | 7 hours | Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time. In this instructor-led, live training, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi. By the end of this training, participants will be able to: - Understand NiFi's architecture and dataflow concepts - Develop extensions using NiFi and third-party APIs - Develop their own custom Apache NiFi processors - Ingest and process real-time data from disparate and uncommon file formats and data sources Audience - Developers - Data engineers Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
nifi | Apache NiFi for Administrators | 21 hours | Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time. In this instructor-led, live training, participants will learn how to deploy and manage Apache NiFi in a live lab environment. By the end of this training, participants will be able to: - Install and configure Apache NiFi - Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes - Automate dataflows - Enable streaming analytics - Apply various approaches for data ingestion - Transform Big Data into business insights Audience - System administrators - Data engineers - Developers - DevOps Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
solrcloud | SolrCloud | 14 hours | Apache SolrCloud is a distributed data processing engine that facilitates the searching and indexing of files on a distributed network. In this instructor-led, live training, participants will learn how to set up a SolrCloud instance on Amazon AWS. By the end of this training, participants will be able to: - Understand SolrCloud's features and how they compare to those of conventional master-slave clusters - Configure a SolrCloud centralized cluster - Automate processes such as communicating with shards, adding documents to the shards, etc. - Use Zookeeper in conjunction with SolrCloud to further automate processes - Use the interface to manage error reporting - Load balance a SolrCloud installation - Configure SolrCloud for continuous processing and fail-over Audience - Solr Developers - Project Managers - System Administrators - Search Analysts Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
datavault | Data Vault: Building a Scalable Data Warehouse | 28 hours | In this instructor-led, live training in Swindon, participants will learn how to build a Data Vault. By the end of this training, participants will be able to: Understand the architecture and design concepts behind Data Vault 2.0, and its interaction with Big Data, NoSQL and AI. Use data vaulting techniques to enable auditing, tracing, and inspection of historical data in a data warehouse. Develop a consistent and repeatable ETL (Extract, Transform, Load) process. Build and deploy highly scalable and repeatable warehouses. |
tigon | Tigon: Real-time Streaming for the Real World | 14 hours | Tigon is an open-source, real-time, low-latency, high-throughput, native YARN, stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics, social media market analysis, location analytics, and real-time recommendations to users. This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application. By the end of this training, participants will be able to: - Create powerful, stream processing applications for handling large volumes of data - Process stream sources such as Twitter and Webserver Logs - Use Tigon for rapid joining, filtering, and aggregating of streams Audience - Developers Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
memsql | MemSQL | 28 hours | MemSQL is an in-memory, distributed, SQL database management system for cloud and on-premises. It's a real-time data warehouse that immediately delivers insights from live and historical data. In this instructor-led, live training, participants will learn the essentials of MemSQL for development and administration. By the end of this training, participants will be able to: - Understand the key concepts and characteristics of MemSQL - Install, design, maintain, and operate MemSQL - Optimize schemas in MemSQL - Improve queries in MemSQL - Benchmark performance in MemSQL - Build real-time data applications using MemSQL Audience - Developers - Administrators - Operation Engineers Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
ApacheIgnite | Apache Ignite: Improve Speed, Scale and Availability with In-Memory Computing | 14 hours | In this instructor-led, live training in Swindon, participants will learn the principles behind persistent and pure in-memory storage as they step through the creation of a sample in-memory computing project. By the end of this training, participants will be able to: Use Ignite for in-memory, on-disk persistence as well as a purely distributed in-memory database. Achieve persistence without syncing data back to a relational database. Use Ignite to carry out SQL and distributed joins. Improve performance by moving data closer to the CPU, using RAM as storage. Spread data sets across a cluster to achieve horizontal scalability. Integrate Ignite with RDBMS, NoSQL, Hadoop and machine learning processors. |
vespa | Vespa: Serving Large-Scale Data in Real-Time | 14 hours | Vespa is an open-source big data processing and serving engine created by Yahoo. It is used to respond to user queries, make recommendations, and provide personalized content and advertisements in real-time. This instructor-led, live training introduces the challenges of serving large-scale data and walks participants through the creation of an application that can compute responses to user requests, over large datasets in real-time. By the end of this training, participants will be able to: - Use Vespa to quickly compute data (store, search, rank, organize) at serving time while a user waits - Implement Vespa into existing applications involving feature search, recommendations, and personalization - Integrate and deploy Vespa with existing big data systems such as Hadoop and Storm. Audience - Developers Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
apex | Apache Apex: Processing Big Data-in-Motion | 21 hours | Apache Apex is a YARN-native platform that unifies stream and batch processing. It processes big data-in-motion in a way that is scalable, performant, fault-tolerant, stateful, secure, distributed, and easily operable. This instructor-led, live training introduces Apache Apex's unified stream processing architecture, and walks participants through the creation of a distributed application using Apex on Hadoop. By the end of this training, participants will be able to: - Understand data processing pipeline concepts such as connectors for sources and sinks, common data transformations, etc. - Build, scale and optimize an Apex application - Process real-time data streams reliably and with minimum latency - Use Apex Core and the Apex Malhar library to enable rapid application development - Use the Apex API to write and re-use existing Java code - Integrate Apex into other applications as a processing engine - Tune, test and scale Apex applications Audience - Developers - Enterprise architects Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
alluxio | Alluxio: Unifying Disparate Storage Systems | 7 hours | Alluxio is an open-source virtual distributed storage system that unifies disparate storage systems and enables applications to interact with data at memory speed. It is used by companies such as Intel, Baidu and Alibaba. In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio. By the end of this training, participants will be able to: - Develop an application with Alluxio - Connect big data systems and applications while preserving one namespace - Efficiently extract value from big data in any storage format - Improve workload performance - Deploy and manage Alluxio standalone or clustered Audience - Data scientist - Developer - System administrator Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
flink | Flink for Scalable Stream and Batch Data Processing | 28 hours | Apache Flink is an open-source framework for scalable stream and batch data processing. This instructor-led, live training introduces the principles and approaches behind distributed stream and batch data processing, and walks participants through the creation of a real-time, data streaming application. By the end of this training, participants will be able to: - Set up an environment for developing data analysis applications - Package, execute, and monitor Flink-based, fault-tolerant, data streaming applications - Manage diverse workloads - Perform advanced analytics using Flink ML - Set up a multi-node Flink cluster - Measure and optimize performance - Integrate Flink with different Big Data systems - Compare Flink capabilities with those of other big data processing frameworks Audience - Developers - Architects - Data engineers - Analytics professionals - Technical managers Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
samza | Samza for Stream Processing | 14 hours | Apache Samza is an open-source near-realtime, asynchronous computational framework for stream processing. It uses Apache Kafka for messaging, and Apache Hadoop YARN for fault tolerance, processor isolation, security, and resource management. This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution. By the end of this training, participants will be able to: - Use Samza to simplify the code needed to produce and consume messages. - Decouple the handling of messages from an application. - Use Samza to implement near-realtime asynchronous computation. - Use stream processing to provide a higher level of abstraction over messaging systems. Audience - Developers Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
zeppelin | Zeppelin for Interactive Data Analytics | 14 hours | Apache Zeppelin is a web-based notebook for capturing, exploring, visualizing and sharing Hadoop and Spark based data. This instructor-led, live training introduces the concepts behind interactive data analytics and walks participants through the deployment and usage of Zeppelin in a single-user or multi-user environment. By the end of this training, participants will be able to: - Install and configure Zeppelin - Develop, organize, execute and share data in a browser-based interface - Visualize results without referring to the command line or cluster details - Execute and collaborate on long workflows - Work with any of a number of plug-in language/data-processing-backends, such as Scala (with Apache Spark), Python (with Apache Spark), Spark SQL, JDBC, Markdown and Shell. - Integrate Zeppelin with Spark, Flink and Map Reduce - Secure multi-user instances of Zeppelin with Apache Shiro Audience - Data engineers - Data analysts - Data scientists - Software developers Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
magellan | Magellan: Geospatial Analytics on Spark | 14 hours | Magellan is an open-source distributed execution engine for geospatial analytics on big data. Implemented on top of Apache Spark, it extends Spark SQL and provides a relational abstraction for geospatial analytics. This instructor-led, live training introduces the concepts and approaches for implementing geospatial analytics and walks participants through the creation of a predictive analysis application using Magellan on Spark. By the end of this training, participants will be able to: - Efficiently query, parse and join geospatial datasets at scale - Implement geospatial data in business intelligence and predictive analytics applications - Use spatial context to extend the capabilities of mobile devices, sensors, logs, and wearables Audience - Application developers Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
hdp | Hortonworks Data Platform (HDP) for Administrators | 21 hours | Hortonworks Data Platform is an open-source Apache Hadoop support platform that provides a stable foundation for developing big data solutions on the Apache Hadoop ecosystem. This instructor-led, live training introduces Hortonworks and walks participants through the deployment of a Spark + Hadoop solution. By the end of this training, participants will be able to: - Use Hortonworks to reliably run Hadoop at a large scale - Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows - Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project - Process different types of data, including structured, unstructured, in-motion, and at-rest Audience - Hadoop administrators Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
hadooppython | Hadoop with Python | 28 hours | Hadoop is a popular Big Data processing framework. Python is a high-level programming language famous for its clear syntax and code readability. In this instructor-led, live training, participants will learn how to work with Hadoop, MapReduce, Pig, and Spark using Python as they step through multiple examples and use cases. By the end of this training, participants will be able to: - Understand the basic concepts behind Hadoop, MapReduce, Pig, and Spark - Use Python with the Hadoop Distributed File System (HDFS), MapReduce, Pig, and Spark - Use Snakebite to programmatically access HDFS within Python - Use mrjob to write MapReduce jobs in Python - Write Spark programs with Python - Extend the functionality of Pig using Python UDFs - Manage MapReduce jobs and Pig scripts using Luigi Audience - Developers - IT Professionals Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
monetdb | MonetDB | 28 hours | MonetDB is an open-source database that pioneered the column-store technology approach. In this instructor-led, live training, participants will learn how to use MonetDB and how to get the most value out of it. By the end of this training, participants will be able to: - Understand MonetDB and its features - Install and get started with MonetDB - Explore and perform different functions and tasks in MonetDB - Accelerate the delivery of their project by maximizing MonetDB capabilities Audience - Developers - Technical experts Format of the course - Part lecture, part discussion, exercises and heavy hands-on practice |
TalendDI | Talend Open Studio for Data Integration | 28 hours | In this instructor-led, live training in Swindon, participants will learn how to use the Talend ETL tool to carry out data transformation, data extraction, and connectivity with Hadoop, Hive, and Pig. By the end of this training, participants will be able to: Explain the concepts behind ETL (Extract, Transform, Load) and propagation. Define ETL methods and ETL tools to connect with Hadoop. Efficiently amass, retrieve, digest, consume, transform and shape big data in accordance with business requirements. Upload to and extract large records from Hadoop (optional), Hive (optional), and NoSQL databases. |
introtostreamprocessing | A Practical Introduction to Stream Processing | 21 hours | Stream Processing refers to the real-time processing of "data in motion", that is, performing computations on data as it is being received. Such data is read as continuous streams from data sources such as sensor events, website user activity, financial trades, credit card swipes, click streams, etc. Stream Processing frameworks are able to read large volumes of incoming data and provide valuable insights almost instantaneously. In this instructor-led, live training (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices. By the end of this training, participants will be able to: - Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streaming - Understand and select the most appropriate framework for the job - Process data continuously, concurrently, and in a record-by-record fashion - Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc. - Integrate the most appropriate stream processing library with enterprise applications and microservices Audience - Developers - Software architects Format of the Course - Part lecture, part discussion, exercises and heavy hands-on practice Notes - To request a customized training for this course, please contact us to arrange. |
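The record-by-record processing model described above can be sketched in plain Python, with a generator standing in for a real stream source such as Kafka or a sensor feed. The event data, function names, and window size below are invented for illustration:

```python
from collections import deque

def click_stream():
    """Stand-in data source: in practice this would be Kafka, sensors, etc."""
    for user, url in [("a", "/home"), ("b", "/buy"), ("a", "/buy"), ("c", "/home")]:
        yield {"user": user, "url": url}

def process(stream, window=3):
    """Process each record as it arrives, keeping a sliding window of
    recent events and a running count per URL."""
    recent = deque(maxlen=window)
    counts = {}
    for event in stream:
        recent.append(event)
        counts[event["url"]] = counts.get(event["url"], 0) + 1
        yield event["url"], counts[event["url"]], list(recent)

results = [(url, n) for url, n, _ in process(click_stream())]
# counts update as each record arrives, e.g. "/buy" reaches 2 on the third event
```

Frameworks like Spark Streaming and Kafka Streams apply this same idea at scale: state (counts, windows) is maintained incrementally as each record arrives, rather than re-scanning a stored dataset.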
confluent | Building Kafka Solutions with Confluent | 14 hours | This instructor-led, live training (onsite or remote) is aimed at engineers who wish to use Confluent (a distribution of Kafka) to build and manage a real-time data processing platform for their applications. By the end of this training, participants will be able to: - Install and configure Confluent Platform. - Use Confluent's management tools and services to run Kafka more easily. - Store and process incoming stream data. - Optimize and manage Kafka clusters. - Secure data streams. Format of the Course - Interactive lecture and discussion. - Lots of exercises and practice. - Hands-on implementation in a live-lab environment. Course Customization Options - This course is based on the open source version of Confluent: Confluent Open Source. - To request a customized training for this course, please contact us to arrange. |
dataminpython | Data Mining with Python | 14 hours | This instructor-led, live training (onsite or remote) is aimed at data analysts and data scientists who wish to implement more advanced data analytics techniques for data mining using Python. By the end of this training, participants will be able to: - Understand important areas of data mining, including association rule mining, text sentiment analysis, automatic text summarization, and data anomaly detection. - Compare and implement various strategies for solving real-world data mining problems. - Understand and interpret the results. Format of the Course - Interactive lecture and discussion. - Lots of exercises and practice. - Hands-on implementation in a live-lab environment. Course Customization Options - To request a customized training for this course, please contact us to arrange. |
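Association rule mining, the first technique listed above, rests on two simple measures: support and confidence. The sketch below computes both in plain Python over an invented set of market-basket transactions (a real course exercise would typically use a library such as mlxtend):

```python
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the set."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent): support of both over support of antecedent."""
    return support(antecedent | consequent) / support(antecedent)

# Rule {bread} -> {milk}: how often milk appears when bread does.
conf = confidence({"bread"}, {"milk"})
# support({"bread"}) is 3/4; support({"bread", "milk"}) is 2/4; conf is 2/3
```

Algorithms like Apriori and FP-Growth are essentially efficient ways of searching the space of itemsets for rules whose support and confidence exceed chosen thresholds.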
sparkcloud | Apache Spark in the Cloud | 21 hours | Apache Spark's learning curve is steep at the beginning: it takes a lot of effort to get the first return. This course aims to jump past that first tough part. After taking this course, participants will understand the basics of Apache Spark, clearly differentiate an RDD from a DataFrame, learn the Python and Scala APIs, understand executors and tasks, etc. Following best practices, this course also strongly focuses on cloud deployment, Databricks, and AWS. Participants will also understand the differences between AWS EMR and AWS Glue, one of the latest Spark services from AWS. Audience: Data Engineers, DevOps Engineers, Data Scientists |
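The RDD-versus-DataFrame distinction the course draws can be sketched in plain Python. Real code would use PySpark (`sc.parallelize` for RDDs, `spark.createDataFrame` for DataFrames); the point illustrated here is only the difference in what the engine can see and optimize:

```python
# RDD-style: an opaque collection of objects, transformed with arbitrary
# Python functions. Spark sees only a closure, so it cannot optimize inside it.
rdd = [("alice", 34), ("bob", 28), ("carol", 41)]
adults_rdd = [name for name, age in rdd if age > 30]

# DataFrame-style: named columns with a schema. Because the filter is
# expressed over a known column, the engine (Catalyst) can optimize it,
# e.g. with predicate pushdown and column pruning.
df = {"name": ["alice", "bob", "carol"], "age": [34, 28, 41]}
adults_df = [n for n, a in zip(df["name"], df["age"]) if a > 30]

assert adults_rdd == adults_df == ["alice", "carol"]
```

Both APIs express the same query, but the DataFrame version gives Spark a declarative, schema-aware plan to optimize, which is why it is usually the faster choice.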
bigdataanahealth | Big Data Analytics in Health | 21 hours | Big data analytics involves examining large amounts of varied data sets in order to uncover correlations, hidden patterns, and other useful insights. The health industry has massive amounts of complex, heterogeneous medical and clinical data. Applying big data analytics to health data presents huge potential for deriving insights that improve the delivery of healthcare. However, the enormity of these datasets poses great challenges for analysis and practical application in a clinical environment. In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises. By the end of this training, participants will be able to: - Install and configure big data analytics tools such as Hadoop MapReduce and Spark - Understand the characteristics of medical data - Apply big data techniques to deal with medical data - Study big data systems and algorithms in the context of health applications Audience - Developers - Data Scientists Format of the Course - Part lecture, part discussion, exercises and heavy hands-on practice. Note - To request a customized training for this course, please contact us to arrange. |
Course | Venue | Course Date | Course Price [Remote / Classroom] |
---|---|---|---
PostgreSQL for Administrators | Bristol, Temple Gate | Wed, 2019-03-06 09:30 | £2200 / £2700 |
RabbitMQ | Leicester Conferences | Wed, 2019-03-13 09:30 | £2200 / £2600 |
QMS Auditor / Lead Auditor (ISO 9001) | Cardiff | Wed, 2019-03-13 09:30 | £3300 / £4200 |
Understanding Modern Information Communication Technology | Swansea- Princess House | Mon, 2019-03-18 09:30 | £1100 / £1250 |
Strategic Planning in Practice | Etc Venues - Manchester | Mon, 2019-03-25 09:30 | £2200 / £2750 |
Natural Language Processing - AI/Robotics | London, Hatton Garden | Mon, 2019-04-01 09:30 | £3900 / £5025 |
Understanding Modern Information Communication Technology | Cardiff | Mon, 2019-05-06 09:30 | £1100 / £1400 |
React: Build Highly Interactive Web Applications | London, Hatton Garden | Tue, 2019-07-09 09:30 | £3300 / £4425 |
We are looking to expand our presence in the UK!
If you are interested in running a high-tech, high-quality training and consulting business, apply now!