Content: Business analytics encompasses a set of tools, technologies, processes, and best practices that are required to derive knowledge from data. It's an iterative and methodical exploration of data to derive insights from it and, in turn, make smarter, more strategic decisions that are grounded in facts. In this course, learn about the stages in business analytics that are used to predict and build the future: predictive analytics, prescriptive analytics, and experimental analytics. This course dives into each stage, discussing the tools and techniques used for each, as well as best practices leveraged in the field. In addition, the course lends a real-world context to these concepts by using a use case to demonstrate how to execute analytics in each stage. Duration: 00:42:10.00
Content: Cloud computing brings unlimited scalability and elasticity to data science applications. Expertise in the major platforms, such as Google Cloud Platform (GCP), is essential to the IT professional. This course, one of a series by veteran cloud engineering specialist and data scientist Kumaran Ponnambalam, shows how to design and build data warehouses using GCP. Explore the different types of storage options available in GCP for files, relational data, documents, and big data, including Cloud SQL, Cloud Bigtable, and Cloud BigQuery. Then learn how to use one solution, BigQuery, to perform data storage and query operations, and review advanced use cases, such as working with partition tables and external data sources. Finally, learn best practices for table design, storage and query optimization, and monitoring of data warehouses in BigQuery. Duration: 01:00:21.00
Content: Cloud computing brings unlimited scalability and elasticity to data science applications. Expertise in the major platforms, such as Google Cloud Platform (GCP), is essential to the IT professional. This course, one of a series by cloud engineering specialist and data scientist Kumaran Ponnambalam, shows how to conduct exploratory data analytics with GCP. First, review the concepts of segmentation and profiling. Then get hands-on as you learn to perform both text and visual analysis of data using tools provided by GCP: Cloud Datalab, BigQuery, Cloud Dataflow, and Data Studio. Finally, look at an end-to-end use case that applies what you've learned in the course. Duration: 00:57:30.00
Content: Apache Hadoop was a pioneer in the world of big data technologies, and it continues to be a leader in enterprise big data storage. Apache Spark is the top big data processing engine and provides an impressive array of features and capabilities. When used together, the Hadoop Distributed File System (HDFS) and Spark can provide a truly scalable big data analytics setup. In this course, learn how to leverage these two technologies to build scalable and optimized data analytics pipelines. Instructor Kumaran Ponnambalam explores ways to optimize data modeling and storage on HDFS; discusses scalable data ingestion and extraction using Spark; and provides tips for optimizing data processing in Spark. Plus, he provides a use case project that allows you to practice your new techniques. Duration: 01:01:55.00
Content: Exploratory data analytics is a key phase in data science that deals with investigating data to extract insights. In a world of big data, exploring massive datasets is a challenge, since it requires technologies that are scalable, fast, and feature rich. Apache Flink, the popular stream-processing platform, is well suited for this effort. This course focuses on exploring datasets with SQL on Apache Flink. Instructor Kumaran Ponnambalam starts off by reviewing the relational APIs that Flink provides for big data analytics. Kumaran then takes a deeper look at the Table API and SQL functions. He explores various SQL capabilities available for exploring data, including filtering, aggregations, and joins. To wrap up, he provides a use case project that allows you to practice your new skills. Duration: 01:07:36.00
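The three SQL capabilities the course names (filtering, aggregation, joins) can be sketched in plain Python over a toy in-memory dataset. The table and column names below are invented for illustration; Flink's Table API/SQL expresses the same operations declaratively over bounded or unbounded data.

```python
# Toy stand-in for filter/aggregate/join queries of the kind run with
# Flink SQL; dataset, keys, and values are invented for illustration.
from collections import defaultdict

orders = [
    {"order_id": 1, "customer": "alice", "amount": 120.0},
    {"order_id": 2, "customer": "bob", "amount": 75.0},
    {"order_id": 3, "customer": "alice", "amount": 30.0},
]
customers = {"alice": "US", "bob": "DE"}

# Filter: SELECT * FROM orders WHERE amount > 50
large = [o for o in orders if o["amount"] > 50]

# Aggregation: SELECT customer, SUM(amount) FROM orders GROUP BY customer
totals = defaultdict(float)
for o in orders:
    totals[o["customer"]] += o["amount"]

# Join: SELECT o.order_id, c.country FROM orders o JOIN customers c ON ...
joined = [(o["order_id"], customers[o["customer"]]) for o in orders]

print(len(large), totals["alice"], joined[0])
```

In Flink, each of these would be a single SQL statement executed against a registered table rather than explicit loops.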
Content: Batch mode consolidates data-related operations in order to reduce the load on networks. Batch mode helps software architects build big data applications that operate smoothly and efficiently under real-world conditions. In this course, you can learn about use cases and best practices for architecting batch mode applications using technologies such as Hive and Apache Spark. There is no coding involved. Instead, you will see how big data tools can help solve some of the most complex challenges for businesses that generate, store, and analyze large amounts of data. The use cases are drawn from a variety of industries, including ecommerce and IT. Instructor Kumaran Ponnambalam shows how to analyze a problem, draw an architectural outline, choose the right technologies, and finalize the solution. After each use case, he reviews related best practices for data acquisition, transport, processing, storage, and service. Each lesson is rich in practical techniques and insights from a developer who has experienced the benefits and shortcomings of these technologies firsthand. Duration: 01:37:32.00
Content: Real-time systems have guaranteed response times that can be sub-second from the trigger. That means when a user clicks a button, your app had better respond, and fast. Architecting applications under real-time constraints is an even bigger challenge when you're dealing with big data. Excessive latency can cost you money, in terms of system resources consumed and customers lost. Luckily, big data technology and efficient architecture can provide the real-time responsiveness your business needs. In this course, you can learn about use cases and best practices for architecting real-time applications with technologies such as Kafka, Hazelcast, and Apache Spark. There is no coding involved. Instead, you will see how big data tools can help solve some of the most complex challenges for businesses that generate, store, and analyze large amounts of data. The use cases are drawn from a variety of industries, including ecommerce and IT. Instructor Kumaran Ponnambalam shows how to analyze a problem, draw an architectural outline, choose the right technologies, and finalize the solution. After each use case, he reviews related best practices for real-time streaming, predictive analytics, parallel processing, and pipeline management. Each lesson is rich in practical techniques and insights from a developer who has experienced the benefits and shortcomings of these technologies firsthand. Duration: 01:04:56.00
Content: IT operations is one of the key business functions for modern enterprises. As data centers become large, distributed, and integrated, the need to monitor and manage hardware, software, networks, and data increases exponentially. And while the elements in a network generate tons of data in terms of logs and events, the need to collect and understand this data to predict future outcomes is also increasing. In this course, learn how to solve common challenges in IT operations using the power of AI. Instructor Kumaran Ponnambalam reviews the key issues that IT ops teams face in their day-to-day operations. He then goes over several use cases in the world of IT ops, explaining in detail how AI technology can speed up processes like root cause analysis, improve response times at your IT help desk, and more. Along the way, he uses Python, Jupyter Notebooks, Keras, and deep learning techniques to step through practical solutions. Duration: 01:15:30.00
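The course applies deep learning to operational data; as a far simpler illustration of the same underlying idea (flagging unusual behavior in a monitored metric), the sketch below marks any reading more than two standard deviations from the mean. The error counts are invented, and a real IT-ops model would be trained on far richer features.

```python
# Simple statistical anomaly flagging on an invented operational metric.
# This is a toy stand-in for the deep learning approaches the course uses.
import statistics

errors_per_minute = [2, 3, 2, 4, 3, 2, 3, 40, 3, 2]

mean = statistics.mean(errors_per_minute)
stdev = statistics.pstdev(errors_per_minute)

# Flag readings more than two standard deviations from the mean
anomalies = [x for x in errors_per_minute if abs(x - mean) > 2 * stdev]
print(anomalies)
```

The spike of 40 errors stands out against an otherwise flat baseline and is the only reading flagged.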
Content: Use big data to tell your customer's story with predictive analytics. In this course, you can learn about the customer life cycle and how predictive analytics can help improve every step of the customer journey. Start off by learning about the various phases in a customer's life cycle. Explore the data generated inside and outside your business, and ways the data can be collected and aggregated within your organization. Then review three use cases for predictive analytics in each phase of the customer's life cycle, including acquisition, upsell, service, and retention. For each phase, you also build one predictive analytics solution in Python. In the final videos, author Kumaran Ponnambalam introduces best practices for creating a customer analytics process from the ground up. Duration: 01:37:58.00
Content: Cloud computing brings unlimited scalability and elasticity to data science applications. Expertise in the major platforms, such as Google Cloud Platform (GCP), is essential to the IT professional. This course, one of a series by veteran cloud engineering specialist and data scientist Kumaran Ponnambalam, shows how to use the latest technologies in GCP to build a big data pipeline that ingests, transports, and transforms data entirely in the cloud. Learn how to set up data processing jobs using Apache Beam and Cloud Dataflow. Discover how to leverage Cloud Pub/Sub for stream ingestion and real-time messaging. Finally, find out how to process the stream events in Cloud Dataflow. The course uses an end-to-end use case that shows how to apply the knowledge and best practices from the course in a practical data science workflow. Duration: 01:07:37.00
Content: From an engineering perspective, scalability is one of the most pressing challenges in data science. Apache Flink, the powerful and popular stream-processing platform, offers features and functionality that can help developers tackle this challenge. In this course, learn how to build a real-time stream processing pipeline with Apache Flink. Instructor Kumaran Ponnambalam begins by reviewing key streaming concepts and features of Apache Flink. He then takes a deeper look at the DataStream API and explores various capabilities available for real-time stream processing, including windowing and joins. After delving into the platform's event-time processing and state management features, he provides a use case project that allows you to put your new skills to the test. Duration: 01:11:10.00
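Windowing, one of the DataStream API capabilities the course covers, can be sketched in plain Python: events carrying an event-time timestamp are assigned to fixed-size (tumbling) windows and aggregated per window. The events and the ten-second window size below are invented for illustration; Flink performs the same assignment continuously over an unbounded stream.

```python
# Toy tumbling event-time window: each event lands in the window
# covering its timestamp, and values are summed per window.
from collections import defaultdict

events = [  # (event_time_seconds, value) — invented sample data
    (1, 5), (4, 3), (9, 2),   # fall in window [0, 10)
    (12, 7), (18, 1),         # fall in window [10, 20)
]

WINDOW = 10
sums = defaultdict(int)
for ts, value in events:
    window_start = (ts // WINDOW) * WINDOW  # assign event to its window
    sums[window_start] += value

print(dict(sums))  # window-start -> aggregated sum
```

In Flink this assignment is declared once (e.g. a tumbling window on the stream) and the framework handles out-of-order events via watermarks, which this sketch omits.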
Content: Data science is an application area that's growing exponentially, consuming huge amounts of data and making revolutionary predictions. At the same time, Google Cloud Platform (GCP) is fast-tracking the cloud movement by providing cutting-edge tools and options. In this course, learn how to architect data science solutions on GCP and harness the power of these two technologies for your business. Instructor Kumaran Ponnambalam starts off by reviewing technology options available in GCP for executing various data science processes, as well as the benefits and shortcomings of this suite of cloud computing services. He then analyzes different technologies and steps through the architecture building process for various use cases, including customer analytics and real-time mobile couponing. Duration: 00:58:25.00
Content: In order to construct data pipelines and networks that stream, process, and store data, data engineers and data-science DevOps specialists must understand how to combine multiple big data technologies. In this course, discover how to build big data pipelines around Apache Spark. Join Kumaran Ponnambalam as he takes you through how to make Apache Spark work with other big data technologies. He covers the basics of Apache Kafka Connect and how to integrate it with Spark for real-time streaming. In addition, he demonstrates how to use the various technologies to construct an end-to-end project that solves a real-world business problem. Duration: 01:40:14.00
Content: One of the key components of a big data processing pipeline is a scalable and distributed message queue. Message queues enable real-time streaming capabilities with multiple producers and consumers of data. This enables real-time applications that can analyze data and produce insights in a scalable fashion. Apache Kafka provides these capabilities. As the de facto standard for open-source messaging, Apache Kafka is an essential skill for data scientists, big data engineers, data architects, and solution architects. In this course, instructor Kumaran Ponnambalam introduces Apache Kafka and explains its fundamental concepts and basic operations. Kumaran covers basic concepts like messages, topics, logs, and more. He shows you how to use the Kafka command line, as well as partitions and groups. He goes over Kafka Java programming, then concludes with a use case project. Duration: 01:18:21.00
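Partitions, one of the fundamental concepts listed above, distribute a topic's messages across brokers while keeping all messages for one key in order on one partition. The sketch below illustrates key-based partition assignment; Kafka's real default partitioner uses a Murmur2 hash of the key, and crc32 stands in here only so the example stays stdlib-only.

```python
# Illustrative key-based partition assignment. Kafka's actual default
# partitioner hashes the key with Murmur2; crc32 is a stand-in.
import zlib

NUM_PARTITIONS = 3  # invented partition count for illustration

def assign_partition(key: str) -> int:
    return zlib.crc32(key.encode("utf-8")) % NUM_PARTITIONS

# Messages with the same key always land on the same partition,
# which is what preserves per-key ordering across producers.
p1 = assign_partition("customer-42")
p2 = assign_partition("customer-42")
print(p1 == p2, 0 <= p1 < NUM_PARTITIONS)
```

Because the assignment is a pure function of the key, any producer computes the same partition for the same key without coordination.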
Content: Today's big data and analytics pipelines are consuming more and more text data generated through websites, social media, and private communications. But deriving insights from text isn't straightforward; it requires a series of techniques and forms for preparing text for analytics and machine learning. In this course, learn the essential techniques for cleansing and processing text in R, and discover how to convert text to a form that's ready for analytics and predictions. Kumaran Ponnambalam begins by reviewing techniques for extracting, cleansing, and processing text. He then shows how to convert text into an analytics-ready form, including how to use n-grams and TF-IDF. Throughout the course, he provides examples for exercising these techniques using R and the tm package. Duration: 00:55:57.00
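The two representations named above, n-grams and TF-IDF, are language-agnostic, so they can be sketched in a few lines of plain Python even though the course works in R with tm. The two-document toy corpus is invented for illustration.

```python
# Bigrams and TF-IDF over an invented toy corpus; the course computes
# the same representations in R with the tm package.
import math

docs = ["big data needs big tools", "small data fits small tools"]
tokenized = [d.split() for d in docs]

# Bigrams: consecutive token pairs
bigrams = [list(zip(t, t[1:])) for t in tokenized]

def tf_idf(term: str, doc: list, corpus: list) -> float:
    tf = doc.count(term) / len(doc)                 # term frequency
    df = sum(1 for d in corpus if term in d)        # document frequency
    idf = math.log(len(corpus) / df)                # inverse doc frequency
    return tf * idf

score_big = tf_idf("big", tokenized[0], tokenized)      # only in doc 0
score_tools = tf_idf("tools", tokenized[0], tokenized)  # in both docs
print(score_big > score_tools)
```

A term appearing in every document ("tools") gets an IDF of zero, while a term concentrated in one document ("big") scores high, which is exactly why TF-IDF highlights distinctive vocabulary.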
Content: Data engineering is the foundation for enabling analytics and data science applications in the world of big data. It requires building scalable data processing pipelines and delivering them in short time frames. Apache Flink, the powerful and popular stream-processing platform, was designed to help you achieve these goals. In this course, join Kumaran Ponnambalam as he focuses on how to build batch mode data pipelines with Apache Flink. Kumaran kicks off the course by reviewing the features and architecture of Apache Flink. He then takes a deeper look at the DataSet API and explores various capabilities available for transforming, aggregating, and combining data. To wrap up the course, he presents a use case project that allows you to leverage your new skills. Duration: 01:07:15.00
Content: Predictive analytics uses historic data to look forward, enabling organizations to make better decisions. However, making accurate predictions from big data can be an overwhelming task. Enter Google Cloud Platform (GCP), a suite of cloud-computing services that bring scalability, elasticity, and automated machine learning to predictive analytics. This course, one of a series by data scientist Kumaran Ponnambalam, shows how to apply the power of GCP to generate predictions for your business. Start off by exploring the different tools and features for predictive analytics in GCP, including Cloud Dataproc, Cloud ML Engine, and the machine learning APIs such as Cloud Translation, Cloud Vision, and Cloud Video Intelligence. Then learn how to build, train, and deploy models to create predictions. Plus, learn best practices for cost control, testing, and performance monitoring of predictive models. Duration: 00:39:37.00
Content: Frameworks such as Apache Flink can help you build fast, scalable stream processing applications, but big data engineers still need to design smart use cases to achieve maximum efficiency. In this course, instructor Kumaran Ponnambalam demonstrates how to use Apache Flink and associated technologies to build stream-processing use cases leveraging popular patterns. Kumaran begins by highlighting the opportunities and challenges that stream processing brings to big data. He then goes over four popular patterns for stream processing: streaming analytics, alerts and thresholds, leaderboards, and real-time predictions. Along the way, he reviews example use cases and explains how to leverage Flink, as well as key technologies like MariaDB and Redis, to implement key examples. Duration: 01:06:40.00
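Of the four patterns listed, the leaderboard is the simplest to sketch: keep a running score per key and read back the top N on demand. The player names and point values below are invented; in the course's stack, the incremental update would typically go to a Redis sorted set fed by a Flink job.

```python
# Toy leaderboard: incremental per-key score updates plus a top-N read,
# the role a Redis sorted set plays in a Flink-based pipeline.
from collections import Counter

scores = Counter()

def record_points(player: str, points: int) -> None:
    scores[player] += points  # incremental update, like a ZINCRBY

for player, pts in [("ana", 50), ("ben", 30), ("ana", 25), ("cai", 60)]:
    record_points(player, pts)

top2 = scores.most_common(2)  # highest totals first
print(top2)
```

The important property the pattern relies on is that updates are cheap and incremental, so the leaderboard stays current as events stream in rather than being recomputed in batch.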
Content: Scalable and distributed message queuing plays an important role in building real-time big data pipelines. Asynchronous publisher/subscriber models are required to handle unpredictable loads in these pipelines. Apache Kafka is the leading technology today that provides these capabilities and is an essential skill for a big data professional. In this course, Kumaran Ponnambalam provides insights into the scalability and manageability aspects of Kafka and demonstrates how to build asynchronous applications with Kafka and Java. Kumaran starts by demonstrating how to set up a Kafka cluster and explores the basics of Java programming in Kafka. He then takes a deep dive into the various messaging and schema options available. Kumaran also goes over some best practices for designing Kafka applications before finishing with a use case project that applies the lessons covered in the course. Duration: 01:17:33.00
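The asynchronous publisher/subscriber decoupling described above can be illustrated with a plain in-process queue: the producer publishes without waiting for the consumer to catch up. The topic name, messages, and None sentinel below are invented; a real Kafka topic additionally gives durability, partitioning, and multiple independent consumer groups, which this sketch does not.

```python
# In-process stand-in for asynchronous pub/sub: the producer enqueues
# messages and never blocks on the consumer.
import queue
import threading

topic = queue.Queue()  # stands in for a Kafka topic
received = []

def consumer() -> None:
    while True:
        msg = topic.get()
        if msg is None:    # sentinel: shut down the consumer
            break
        received.append(msg)

worker = threading.Thread(target=consumer)
worker.start()

for i in range(3):         # producer publishes without blocking
    topic.put(f"event-{i}")
topic.put(None)
worker.join()
print(received)
```

Because producer and consumer run independently, a slow consumer only grows the backlog instead of stalling the producer, which is the core reason pub/sub absorbs unpredictable loads.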
Content: Turn your knowledge and experiences into opportunity by writing on LinkedIn. With more than 450 million professionals worldwide, capturing the attention of even a fraction of this network can lead to internships, jobs, and valuable connections. Learn how to leverage LinkedIn's world-class publishing platform to showcase your ideas and skills and gain the attention of peers, recruiters, and future employers. Maya Pope-Chappell, news editor at LinkedIn, shows newly graduated college students and first-time jobseekers (anyone who is just beginning to build a professional presence and personal brand) how to succeed on LinkedIn. She explains why, what, and how to write. She helps you figure out what topics get the most traction and shares the best practices for building an audience and establishing your brand. Duration: 00:32:17.00
Content: Among all other JavaScript libraries, React.js stands out. It relies on reusable components, not templates, for UI development, allowing developers to render views where data changes over time. React applications are more scalable and more maintainable, making developers more efficient and users more satisfied. In this course, Eve Porcello introduces the basics of the React library using the most modern syntax and best practices for creating React components. Along the way, learn how to set up Chrome tools for React; create new components; use props and state to pass data between components; and more. By the end of the course, you'll understand the essentials of React.js and be able to start building your own browser-based projects. Duration: 01:25:39.00
Content: webpack has become a key standard among front-end development tools. A module bundler built for modern JavaScript applications, webpack is exploding in popularity due to its many configuration options. webpack 4 expands on the original promise of task, code, and dependency management with cleaner syntax, simpler configuration, and thorough support for ES6. In this course, instructor Eve Porcello covers the basics of this versatile tool. Discover how to install webpack, run a build, and edit the config file to facilitate automation. Find out how to use loaders to run tasks and process files such as CSS and inline images. Plus, Eve demos webpack plugins for managing tasks such as code splitting. Duration: 00:41:47.00
Content: ECMAScript, the standardized version of JavaScript, keeps getting more powerful. ES6 was a large leap forward, introducing features that changed how developers structure programs. But every year since, there have been updates and additional improvements. This course helps you create modern JavaScript applications leveraging the most interesting and useful features in ES6+. Eve Porcello introduces the new keywords and operators that can help simplify code, as well as new ways of creating functions and objects. She also shows you how to write and search through template strings, create map objects to store key/value pairs, move values from one array to another (or one object to another) with the spread operator, build reusable classes, and use arrow functions and generators. Plus, learn how to handle asynchronous data and tasks with promises, fetch, and the async/await syntax. This course was created by Eve Porcello. We are pleased to offer this training in our library. Duration: 01:15:57.00
Content: webpack is the latest and greatest addition to a front-end developer's toolset. It is a module bundler suitable for the largest single-page web applications, and it can process JavaScript, CSS, and more. Learn the basics of transforming, bundling, and processing JavaScript modules and other web assets with webpack, in this introductory course with Eve Porcello. Discover how to install webpack, run a build, and edit the config file. Find out how to use loaders to run tasks and process files such as CSS, Sass, and inline images. Eve then demos on-demand code splitting with webpack, which allows your code to run faster and more efficiently. In addition, learn how to set up a webpack-dev-server to serve and reload files in real time as you make changes. Duration: 01:02:17.00
Content: React.js is designed to make the process of building modular, reusable user interface components simple and intuitive. In this course, Eve Porcello guides you through the foundations of React development, including the use of React hooks, a feature introduced in React 16.8 that allows developers to add functionality to components without writing classes. Eve shows how to create components, display dynamic data with properties, and render components using JSX syntax. Eve also manages the state of components with hooks; uses powerful React enhancements, such as the useEffect hook for loading remote data; and leverages cloud deployment options for apps created with create-react-app. This course was created by Eve Porcello. We are pleased to offer this training in our library. Duration: 01:00:09.00
Program: Findus Internet-OPAC findus.pl V20.235/8 on server windhund2.findus-internet-opac.de,
last database update: 26.04.2024, 12:10. 4,174 accesses in April 2024; 509,689 accesses in total since January 2009