PartitionedMapOutputFunction (Apache Crunch 0.10.0 API)

org.apache.crunch.impl.spark.fn
Class PartitionedMapOutputFunction<K,V>

java.lang.Object
  extended by scala.runtime.AbstractFunction1<T,R>
      extended by org.apache.spark.api.java.function.WrappedFunction1<T,scala.Tuple2<K,V>>
          extended by org.apache.spark.api.java.function.PairFunction<Pair<K,V>,IntByteArray,byte[]>
              extended by org.apache.crunch.impl.spark.fn.PartitionedMapOutputFunction<K,V>
All Implemented Interfaces:
Serializable, scala.Function1<Pair<K,V>,scala.Tuple2<IntByteArray,byte[]>>

public class PartitionedMapOutputFunction<K,V>
extends org.apache.spark.api.java.function.PairFunction<Pair<K,V>,IntByteArray,byte[]>

See Also:
Serialized Form

Constructor Summary
PartitionedMapOutputFunction(SerDe<K> keySerde, SerDe<V> valueSerde, PGroupedTableType<K,V> ptype, Class<? extends org.apache.hadoop.mapreduce.Partitioner> partitionerClass, int numPartitions, SparkRuntimeContext runtimeContext)
 
Method Summary
 scala.Tuple2<IntByteArray,byte[]> call(Pair<K,V> p)
 
Methods inherited from class org.apache.spark.api.java.function.PairFunction
keyType, valueType
 
Methods inherited from class org.apache.spark.api.java.function.WrappedFunction1
apply
 
Methods inherited from class scala.runtime.AbstractFunction1
andThen, andThen$mcDD$sp, andThen$mcDF$sp, andThen$mcDI$sp, andThen$mcDJ$sp, andThen$mcFD$sp, andThen$mcFF$sp, andThen$mcFI$sp, andThen$mcFJ$sp, andThen$mcID$sp, andThen$mcIF$sp, andThen$mcII$sp, andThen$mcIJ$sp, andThen$mcJD$sp, andThen$mcJF$sp, andThen$mcJI$sp, andThen$mcJJ$sp, andThen$mcVD$sp, andThen$mcVF$sp, andThen$mcVI$sp, andThen$mcVJ$sp, andThen$mcZD$sp, andThen$mcZF$sp, andThen$mcZI$sp, andThen$mcZJ$sp, apply$mcDD$sp, apply$mcDF$sp, apply$mcDI$sp, apply$mcDJ$sp, apply$mcFD$sp, apply$mcFF$sp, apply$mcFI$sp, apply$mcFJ$sp, apply$mcID$sp, apply$mcIF$sp, apply$mcII$sp, apply$mcIJ$sp, apply$mcJD$sp, apply$mcJF$sp, apply$mcJI$sp, apply$mcJJ$sp, apply$mcVD$sp, apply$mcVF$sp, apply$mcVI$sp, apply$mcVJ$sp, apply$mcZD$sp, apply$mcZF$sp, apply$mcZI$sp, apply$mcZJ$sp, compose, compose$mcDD$sp, compose$mcDF$sp, compose$mcDI$sp, compose$mcDJ$sp, compose$mcFD$sp, compose$mcFF$sp, compose$mcFI$sp, compose$mcFJ$sp, compose$mcID$sp, compose$mcIF$sp, compose$mcII$sp, compose$mcIJ$sp, compose$mcJD$sp, compose$mcJF$sp, compose$mcJI$sp, compose$mcJJ$sp, compose$mcVD$sp, compose$mcVF$sp, compose$mcVI$sp, compose$mcVJ$sp, compose$mcZD$sp, compose$mcZF$sp, compose$mcZI$sp, compose$mcZJ$sp, toString
 
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait
 

Constructor Detail

PartitionedMapOutputFunction

public PartitionedMapOutputFunction(SerDe<K> keySerde,
                                    SerDe<V> valueSerde,
                                    PGroupedTableType<K,V> ptype,
                                    Class<? extends org.apache.hadoop.mapreduce.Partitioner> partitionerClass,
                                    int numPartitions,
                                    SparkRuntimeContext runtimeContext)
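
The constructor carries no documented description. The sketch below shows one plausible way to build an instance from the parameters listed above; it is a minimal sketch only. The SerDe instances, the PGroupedTableType, and the SparkRuntimeContext are assumed to be supplied by the surrounding Crunch-on-Spark pipeline, and the import paths for the helper classes are assumed from the Crunch 0.10.0 package layout rather than documented on this page.

import org.apache.crunch.Pair;
import org.apache.crunch.impl.spark.IntByteArray;        // assumed package location
import org.apache.crunch.impl.spark.SparkRuntimeContext; // assumed package location
import org.apache.crunch.impl.spark.fn.PartitionedMapOutputFunction;
import org.apache.crunch.impl.spark.serde.SerDe;         // assumed package location
import org.apache.crunch.types.PGroupedTableType;
import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;
import org.apache.spark.api.java.function.PairFunction;

public class MapOutputFunctionSketch {  // hypothetical helper class, not part of Crunch

    // keySerde, valueSerde, ptype, and runtimeContext are placeholders handed in by the caller.
    static PairFunction<Pair<String, Long>, IntByteArray, byte[]> newMapOutputFunction(
            SerDe<String> keySerde,
            SerDe<Long> valueSerde,
            PGroupedTableType<String, Long> ptype,
            SparkRuntimeContext runtimeContext) {
        return new PartitionedMapOutputFunction<String, Long>(
                keySerde,
                valueSerde,
                ptype,
                HashPartitioner.class,  // any org.apache.hadoop.mapreduce.Partitioner subclass
                4,                      // numPartitions: number of shuffle partitions
                runtimeContext);
    }
}
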
Method Detail

call

public scala.Tuple2<IntByteArray,byte[]> call(Pair<K,V> p)
                                       throws Exception
Specified by:
call in class org.apache.spark.api.java.function.WrappedFunction1<Pair<K,V>,scala.Tuple2<IntByteArray,byte[]>>
Throws:
Exception
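
The call method is likewise undocumented. From its signature alone it maps one Crunch Pair to a scala.Tuple2 whose key is an IntByteArray and whose value is a byte[]; the sketch below simply shows how such an instance could be invoked directly. It is an assumption-laden illustration: fn is taken to be an already-constructed instance (for example from the hypothetical newMapOutputFunction above), and since call declares throws Exception the caller must propagate or handle it.

// Sketch only: fn is assumed to be constructed elsewhere, e.g. via the
// hypothetical newMapOutputFunction shown in the constructor sketch above.
static scala.Tuple2<IntByteArray, byte[]> emitOne(
        PartitionedMapOutputFunction<String, Long> fn) throws Exception {
    // call is declared to throw Exception, hence the throws clause here.
    scala.Tuple2<IntByteArray, byte[]> out = fn.call(Pair.of("word", 1L));
    IntByteArray shuffleKey = out._1();  // key half of the emitted tuple
    byte[] valueBytes = out._2();        // serialized value half of the emitted tuple
    return out;
}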


Copyright © 2014 The Apache Software Foundation. All Rights Reserved.