Pig: how to count the number of rows in an alias

Hadoop, Apache Pig

Hadoop Problem Overview


I did something like this to count the number of rows in an alias in Pig:

logs = LOAD 'log';
logs_w_one = foreach logs generate 1 as one;
logs_group = group logs_w_one all;
logs_count = foreach logs_group generate SUM(logs_w_one.one);
dump logs_count;

This seems to be too inefficient. Please enlighten me if there is a better way!

Hadoop Solutions


Solution 1 - Hadoop

COUNT is built into Pig; see the manual.

LOGS = LOAD 'log';
LOGS_GROUP = GROUP LOGS ALL;
LOG_COUNT = FOREACH LOGS_GROUP GENERATE COUNT(LOGS);
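
To materialize the result, you can then dump it; the output is a single tuple holding the count (the value 42 below is a hypothetical example):

DUMP LOG_COUNT; -- e.g. (42)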

Solution 2 - Hadoop

Arnon Rotem-Gal-Oz already answered this question a while ago, but I thought some may like this slightly more concise version.

LOGS = LOAD 'log';
LOG_COUNT = FOREACH (GROUP LOGS ALL) GENERATE COUNT(LOGS);

Solution 3 - Hadoop

Be careful: COUNT skips tuples whose first field is null, so null values will reduce the count. Use COUNT_STAR instead to count all rows, including those with a null first field.
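
For example, here is a minimal sketch of the difference (the file name 'log' and the two-field schema are assumptions for illustration):

logs = LOAD 'log' AS (user:chararray, action:chararray);
logs_group = GROUP logs ALL;
-- COUNT skips tuples whose first field (user) is null; COUNT_STAR counts every tuple
counts = FOREACH logs_group GENERATE COUNT(logs) AS non_null_rows, COUNT_STAR(logs) AS all_rows;
DUMP counts;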

Solution 4 - Hadoop

Basic counting is done as stated in the other answers and in the Pig documentation:

logs = LOAD 'log';
all_logs_in_a_bag = GROUP logs ALL;
log_count = FOREACH all_logs_in_a_bag GENERATE COUNT(logs);
DUMP log_count;

You are right that counting is inefficient, even when using Pig's built-in COUNT, because it will use one reducer. However, I had a revelation today: one way to speed it up is to reduce the RAM utilization of the relation we're counting.

In other words, when counting a relation we don't actually care about the data itself, so let's use as little RAM as possible. You were on the right track with your first iteration of the count script.

logs = LOAD 'log';
ones = FOREACH logs GENERATE 1 AS one:int;
counter_group = GROUP ones ALL;
log_count = FOREACH counter_group GENERATE COUNT(ones);
DUMP log_count;

This will work on much larger relations than the previous script and should be much faster. The main difference between this and your original script is that we don't need to sum anything.

This also avoids the problem other solutions have where null values affect the count: it counts all rows, regardless of whether the first column is null.

Solution 5 - Hadoop

Use COUNT_STAR:

LOGS = LOAD 'log';
LOGS_GROUP = GROUP LOGS ALL;
LOG_COUNT = FOREACH LOGS_GROUP GENERATE COUNT_STAR(LOGS);

Solution 6 - Hadoop

Here is an optimized version. All the solutions above require Pig to read and write the full tuples while counting; the macro below writes only 1s:

DEFINE row_count(inBag, name) RETURNS result {
    X = FOREACH $inBag GENERATE 1;
    $result = FOREACH (GROUP X ALL PARALLEL 1) GENERATE '$name', COUNT(X);
};

Then use it like this:

xxx = row_count(rows, 'rows_count');
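
For instance, a complete sketch under the assumption that the data lives in a file called 'log' (the count of 42 is a hypothetical value):

rows = LOAD 'log';
rows_counted = row_count(rows, 'rows_count');
DUMP rows_counted; -- emits something like (rows_count,42)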

Solution 7 - Hadoop

What you want is to count all the rows in a relation (a dataset, in Pig Latin terms).

This is easy, following these steps:

logs = LOAD 'log'; --relation called logs, using PigStorage with tab as field delimiter
logs_grouped = GROUP logs ALL; --gives a relation with one row, with logs as a bag
number = FOREACH logs_grouped GENERATE COUNT_STAR(logs); --show me the number

I have to say Kevin's point is important: using COUNT instead of COUNT_STAR, we would get only the number of rows whose first field is not null.

I also like Jerome's one-line syntax; it is more concise, but to be didactic I prefer to divide it in two and add some comments.

In general I prefer:

numerito = FOREACH (GROUP CARGADOS3 ALL) GENERATE COUNT_STAR(CARGADOS3);

over

name = GROUP CARGADOS3 ALL;
number = FOREACH name GENERATE COUNT_STAR(CARGADOS3);

Attributions

All content for this solution is sourced from the original question on Stack Overflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stack Overflow
Question | kee | View Question on Stack Overflow
Solution 1 - Hadoop | Arnon Rotem-Gal-Oz | View Answer on Stack Overflow
Solution 2 - Hadoop | Jerome Serrano | View Answer on Stack Overflow
Solution 3 - Hadoop | Kevin | View Answer on Stack Overflow
Solution 4 - Hadoop | WattsInABox | View Answer on Stack Overflow
Solution 5 - Hadoop | hello_abhishek | View Answer on Stack Overflow
Solution 6 - Hadoop | Igor Katkov | View Answer on Stack Overflow
Solution 7 - Hadoop | Javier Bañez | View Answer on Stack Overflow