Resources

posit::conf(2023) Workshop: Big Data with Arrow

Register now: http://pos.it/conf
Instructors: Nic Crane and Stephanie Hazlitt
Workshop Duration: 1-Day Workshop

This course is for you if you:

• want to learn how to work with tabular data that is too large to fit in memory using existing R and tidyverse syntax implemented in Arrow
• want to learn about Parquet and other file formats that are powerful alternatives to CSV files
• want to learn how to engineer your tabular data storage for more performant access and analysis with Apache Arrow

Data analysis pipelines with larger-than-memory data are becoming more and more commonplace. In this workshop you will learn how to use Apache Arrow, a multi-language toolbox for working with larger-than-memory tabular data, to create seamless “big” data analysis pipelines with R.

The workshop will focus on using the arrow R package, a mature R interface to Apache Arrow, to process larger-than-memory files and multi-file datasets using familiar dplyr syntax. You’ll learn to create and use interoperable data file formats like Parquet for efficient data storage and access, with data stored both on disk and in the cloud, and also how to exercise fine control over data types to avoid common large data pipeline problems. This workshop will provide a foundation for using Arrow, giving you access to a powerful suite of tools for performant analysis of larger-than-memory data in R.
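The workflow the description outlines can be sketched in a few lines. This is a minimal illustration, not workshop material: it uses the built-in `mtcars` data frame as a stand-in for a genuinely larger-than-memory dataset, and assumes the arrow and dplyr packages are installed:

```r
library(arrow)
library(dplyr)

# Write a small demo table to Parquet (stands in for big data on disk)
path <- tempfile(fileext = ".parquet")
write_parquet(mtcars, path)

# open_dataset() scans the file lazily: no rows are read into R memory yet
ds <- open_dataset(path)

# Familiar dplyr verbs build a query plan; collect() executes it in Arrow
# and pulls only the (small) result into R
result <- ds |>
  filter(cyl > 4) |>
  group_by(cyl) |>
  summarise(mean_mpg = mean(mpg)) |>
  collect()

result
```

The key design point is that everything before `collect()` is deferred, so the same pipeline scales from a toy file to a multi-file dataset that never fits in memory.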


Transcript

This transcript was generated automatically and may contain errors.

Hi, my name is Steph Hazlitt, and I'm here to share with you three reasons why I hope you will join me at the Big Data with Arrow in R workshop at posit::conf this coming September.

My first reason, doing data analysis on bigger and bigger data is becoming commonplace for many data professionals, and I think that it is fair to say that our data is only going to get bigger. In the Big Data with Arrow in R workshop, you will learn how to wrangle your very large data, even when that data is so large it won't fit on the memory of your computer. This will feel like a superpower.

My second reason, you will learn to wrangle your too-large-for-memory data on your computer without having to learn a new programming language or even new R syntax. You'll be introduced to Apache Arrow, which is a toolkit designed for efficient in-memory analysis of large data. And the arrow R package is built with a dplyr backend, so you'll be wrangling this large data on your laptop with familiar dplyr and base R syntax.

My third reason, along the way while we are learning these things, you will pick up many data engineering tips and tricks that will save you time and friction when you're building out your data workflows using R, especially when you're building those workflows with growing or already very large datasets.
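One concrete example of the kind of tip meant here is partitioned Parquet storage. As a hedged sketch (again using `mtcars` as a stand-in for large data, and assuming arrow and dplyr are installed), `write_dataset()` can split a table into Hive-style directories so that later queries filtering on the partition column can skip irrelevant files entirely:

```r
library(arrow)
library(dplyr)

out_dir <- tempfile()

# Grouping by cyl before writing makes cyl the partition key:
# arrow writes one subdirectory (e.g. cyl=4/) per distinct value
mtcars |>
  group_by(cyl) |>
  write_dataset(out_dir, format = "parquet")

list.files(out_dir, recursive = TRUE)
# e.g. "cyl=4/part-0.parquet" "cyl=6/part-0.parquet" "cyl=8/part-0.parquet"
```

Opening `out_dir` later with `open_dataset()` and filtering on `cyl` would read only the matching partition directories, which is what makes this layout faster for large data.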

And I'm going to throw in a bonus reason for joining me at the Big Data with Arrow in R workshop, and that is I'll be co-instructing with Nic Crane. This is Nic. Not only is Nic an R educator, Nic is currently one of the core developers and maintainers of the arrow R package, so we will have an expert in the room with us. So if you are ready to tackle those too-big datasets in your life with Arrow using R, please join Nic and me at posit::conf(2023) in Chicago.