ug4
pcl Namespace Reference

Namespaces

 interface_tags
 Interface tags make it possible to differentiate between interfaces with different features.
 
 layout_tags
 Layout tags make it possible to differentiate between layouts with different features.
 

Classes

class  BasicInterface
 You may add elements to this interface and iterate over them. More...
 
class  DataTypeDirectlySupported
 
class  DataTypeIndirectlySupported
 
class  DataTypeTraits
 
class  DataTypeTraits< char >
 
class  DataTypeTraits< double >
 
class  DataTypeTraits< float >
 
class  DataTypeTraits< int >
 
class  DataTypeTraits< long >
 
class  DataTypeTraits< unsigned char >
 
class  DataTypeTraits< unsigned long >
 
class  DataTypeTraits< unsigned long long >
 
struct  FileBufferDescriptor
 
class  ICommunicationPolicy
 Specializations are responsible for packing and unpacking interface data during communication. More...
 
class  IDomainDecompositionInfo
 
class  InterfaceCommunicator
 Performs communication between interfaces on different processes. More...
 
class  MultiGroupCommunicator
 Communicator for simultaneous data exchange between many small groups. More...
 
class  MultiLevelLayout
 The standard multi-level-layout implementation. More...
 
class  OrderedInterface
 You may add elements to this interface and iterate over them. More...
 
class  ParallelArchive
 
class  ProcessCommunicator
 
struct  reduce_traits
 Methods defined in these traits are used by ComPol_AttachmentReduce. More...
 
struct  reduce_traits< double >
 
struct  reduce_traits< float >
 
class  Reducer
 
class  SelectionCommPol
 Communicates the selection status of interface elements. More...
 
class  SingleLevelLayout
 The standard single-level-layout implementation. More...
 
class  StandardDomainDecompositionInfo
 
struct  type_traits
 Associates internally used types with an external typename. More...
 
struct  type_traits< ug::Edge >
 Edge interfaces and layouts store elements of type Edge*. More...
 
struct  type_traits< ug::Face >
 Face interfaces and layouts store elements of type Face*. More...
 
struct  type_traits< ug::Vertex >
 Vertex interfaces and layouts store elements of type Vertex*. More...
 
struct  type_traits< ug::Volume >
 Volume interfaces and layouts store elements of type Volume*. More...
 

Typedefs

typedef MPI_Datatype DataType
 
typedef int ProcID
 
typedef MPI_Op ReduceOperation
 

Enumerations

enum  ProcessCommunicatorDefaults { PCD_EMPTY = 0 , PCD_WORLD = 1 , PCD_LOCAL = 2 }
 Values that can be passed to a ProcessCommunicator's constructor. More...
 

Functions

void Abort (int errorcode=1)
 Call this method to abort all MPI processes. More...
 
template<typename TLayout >
void AddLayout (TLayout &destLayout, const TLayout &sourceLayout)
 
bool AllProcsTrue (bool bFlag, ProcessCommunicator comm)
 
void AllReduce (void *sendBuf, void *recBuf, int count, DataType type, ReduceOperation op)
 Reduces the data to a single buffer using the specified ReduceOperation and distributes the result to all processes. More...
 
template<class TLayout >
size_t CollectAssociatedProcesses (std::vector< int > &procIDsOut, TLayout &layout)
 Collects the IDs of all processes to which interfaces exist. More...
 
void CollectData (ProcID thisProcID, int firstSendProc, int numSendProcs, void *pBuffer, int bufferSizePerProc, int tag)
 Collects the data sent with SendData from numSendProcs processes starting at firstSendProc, excluding destProc. More...
 
template<class TLayout >
void CollectElements (std::vector< typename TLayout::Element > &elemsOut, TLayout &layout, bool clearContainer=true)
 Writes all elements in the interfaces into the vector. More...
 
template<class TLayout >
void CollectUniqueElements (std::vector< typename TLayout::Element > &elemsOut, const TLayout &layout)
 Writes all elements in the interfaces into the resulting vector, avoiding duplicates. More...
 
void CommunicateInvolvedProcesses (std::vector< int > &vReceiveFromRanksOut, const std::vector< int > &vSendToRanks, const ProcessCommunicator &procComm=ProcessCommunicator())
 Exchanges information about which process wants to communicate with which other process. More...
 
void DisableMPIInit ()
 Call this method before 'Init' to avoid a call to MPI_Init. More...
 
void DistributeData (ProcID thisProcID, int *pRecProcMap, int numRecProcs, void *pBuffer, int *pBufferSegSizes, int tag)
 Sends the data in the different sections of the buffer to the specified processes. More...
 
void DistributeData (ProcID thisProcID, int firstRecProc, int numRecProcs, void *pBuffer, int *pBufferSegSizes, int tag)
 Sends the data in the different sections of the buffer to the specified processes. More...
 
void Finalize ()
 Call this method right before quitting your application. More...
 
size_t Get512Padding (size_t s)
 
size_t GetSize (const DataType &t)
 
void Init (int *argcp, char ***argvp)
 Call this method before any other pcl operations. More...
 
template<class TType , class TLayoutMap >
void LogLayoutMapStructure (TLayoutMap &lm)
 Logs the internals of a layout-map for a given type. More...
 
template<class TLayout >
void LogLayoutStructure (TLayout &layout, const char *prefix="")
 Logs the internals of a layout. More...
 
template<typename TKey , typename TValue , typename Compare >
void MinimalKeyValuePairAcrossAllProcs (TKey &keyInOut, TValue &valInOut, const Compare &cmp=Compare())
 Finds the minimal key/value pair across all processes. This function receives one key/value pair from each process. The pairs are gathered on proc 0, where the minimal key (w.r.t. the given Compare object, e.g., std::less<TKey>) and the corresponding value are determined. The minimal pair is then made known to all procs and returned. More...
 
int MPI_Wait (MPI_Request *request, MPI_Status *status=MPI_STATUS_IGNORE)
 
void MPI_Waitall (int count, MPI_Request *array_of_requests, MPI_Status *array_of_statuses)
 
void MPIErrorHandler (MPI_Comm *comm, int *err,...)
 
int NumProcs ()
 Returns the number of processes. More...
 
bool OneProcTrue (bool bFlag, ProcessCommunicator comm)
 
std::ostream & operator<< (std::ostream &out, const ProcessCommunicator &processCommunicator)
 
bool ParallelReadFile (std::string &filename, std::vector< char > &file, bool bText, bool bDistributedLoad, const ProcessCommunicator &pc=ProcessCommunicator())
 Utility function to read a file in parallel. More...
 
bool ParallelReadFile (string &filename, vector< char > &file, bool bText, bool bDistributedLoad, const ProcessCommunicator &pc)
 
template<typename TLayout >
bool PrintLayout (const pcl::ProcessCommunicator &processCommunicator, pcl::InterfaceCommunicator< TLayout > &com, const TLayout &masterLayout, const TLayout &slaveLayout)
 
template<typename TLayout , typename TValue >
bool PrintLayout (const pcl::ProcessCommunicator &processCommunicator, pcl::InterfaceCommunicator< TLayout > &com, const TLayout &masterLayout, const TLayout &slaveLayout, boost::function< TValue(typename TLayout::Element)> cbToValue=TrivialToValue< typename TLayout::Element >)
 
template<typename TLayout >
void PrintLayout (const TLayout &layout)
 
void PrintPC (const pcl::ProcessCommunicator &processCommunicator)
 
int ProcRank ()
 Returns the rank of the process. More...
 
void ReadCombinedParallelFile (ug::BinaryBuffer &buffer, std::string strFilename, pcl::ProcessCommunicator pc)
 
void ReceiveData (void *pBuffOut, ProcID srcProc, int bufferSize, int tag)
 Receives the data that was sent with SendData. More...
 
template<class TLayout >
void RemoveEmptyInterfaces (TLayout &layout)
 Removes all empty interfaces from the given layout. More...
 
template<class TLayout , class TSelector >
bool RemoveUnselectedInterfaceEntries (TLayout &layout, TSelector &sel)
 
template<class TType , class TLayoutMap , class TSelector >
bool RemoveUnselectedInterfaceEntries (TLayoutMap &lm, TSelector &sel)
 
void SendData (ProcID destProc, void *pBuffer, int bufferSize, int tag)
 Sends data to another process. The data may be received using ReceiveData. More...
 
bool SendRecvBuffersMatch (const std::vector< int > &recvFrom, const std::vector< int > &recvBufSizes, const std::vector< int > &sendTo, const std::vector< int > &sendBufSizes, const ProcessCommunicator &involvedProcs=ProcessCommunicator())
 Checks whether matching buffers in send- and recv-lists have the same size. More...
 
bool SendRecvListsMatch (const std::vector< int > &recvFrom, const std::vector< int > &sendTo, const ProcessCommunicator &involvedProcs=ProcessCommunicator())
 Checks whether proc-entries in send- and recv-lists on participating processes match. More...
 
void SetErrHandler ()
 Sets the error handler. More...
 
void SynchronizeProcesses ()
 
template<typename TLayout >
bool TestLayout (const pcl::ProcessCommunicator &processCommunicator, pcl::InterfaceCommunicator< TLayout > &com, const TLayout &masterLayout, const TLayout &slaveLayout, bool bPrint=false, bool compareValues=false)
 
template<typename TLayout , typename TValue >
bool TestLayout (const pcl::ProcessCommunicator &processCommunicator, pcl::InterfaceCommunicator< TLayout > &com, const TLayout &masterLayout, const TLayout &slaveLayout, bool bPrint=false, boost::function< TValue(typename TLayout::Element)> cbToValue=TrivialToValue< typename TLayout::Element >, bool compareValues=false)
 Checks whether the given layouts are consistent. More...
 
template<typename TLayout >
bool TestLayoutIsDoubleEnded (const pcl::ProcessCommunicator processCommunicator, pcl::InterfaceCommunicator< TLayout > &com, const TLayout &masterLayout, const TLayout &slaveLayout)
 Tests whether the master layout's proc IDs find a match in the corresponding slave layout's proc IDs. More...
 
template<typename TLayout , typename TValue >
bool TestSizeOfInterfacesInLayoutsMatch (pcl::InterfaceCommunicator< TLayout > &com, const TLayout &masterLayout, const TLayout &slaveLayout, bool bPrint=false, boost::function< TValue(typename TLayout::Element)> cbToValue=TrivialToValue< typename TLayout::Element >, bool compareValues=false)
 If processor P1 has an interface to P2, then the size of the interface P1->P2 has to match the size of the interface P2->P1. More...
 
double Time ()
 Returns the time in seconds. More...
 
std::string ToString (const ProcessCommunicator &pc)
 
template<class TElem >
TElem TrivialToValue (TElem e)
 Trivial implementation of a to-value callback. More...
 
void Waitall (std::vector< MPI_Request > &requests)
 
void Waitall (std::vector< MPI_Request > &requests, std::vector< MPI_Request > &requests2)
 
void Waitall (std::vector< MPI_Request > &requests, std::vector< MPI_Status > &statuses)
 
void WriteCombinedParallelFile (ug::BinaryBuffer &buffer, std::string strFilename, pcl::ProcessCommunicator pc)
 
template<typename TBuffer >
void WriteParallelArchive (pcl::ProcessCommunicator &pc, std::string strFilename, const std::map< std::string, TBuffer > &files)
 
void WriteParallelArchive (ProcessCommunicator &pc, std::string strFilename, const std::vector< FileBufferDescriptor > &files)
 

Variables

static bool PERFORM_MPI_INITIALIZATION = true
 

Function Documentation

◆ Get512Padding()

size_t pcl::Get512Padding ( size_t  s)

References s.

Referenced by WriteParallelArchive().

◆ MPIErrorHandler()

void pcl::MPIErrorHandler ( MPI_Comm *  comm,
int *  err,
  ... 
)

◆ ParallelReadFile()

bool pcl::ParallelReadFile ( string &  filename,
vector< char > &  file,
bool  bText,
bool  bDistributedLoad,
const ProcessCommunicator pc 
)

◆ ReadCombinedParallelFile()

void pcl::ReadCombinedParallelFile ( ug::BinaryBuffer buffer,
std::string  strFilename,
pcl::ProcessCommunicator  pc = pcl::ProcessCommunicator(pcl::PCD_WORLD) 
)

This function reads a BinaryBuffer on each participating core from a combined parallel file, i.e., one file which contains data for each core. Note that this is not ParallelReadFile: each core gets DIFFERENT data. It HAS to be used together with WriteCombinedParallelFile.

NOTE: use this function when doing I/O from a large number of cores (1000+); otherwise you will run into severe I/O problems.

Parameters
buffer  a BinaryBuffer to read the data into
strFilename  the filename
pc  a process communicator (default pcl::World)

References ug::BinaryBuffer::clear(), pcl::ProcessCommunicator::get_mpi_communicator(), pcl::ProcessCommunicator::get_proc_id(), NumProcs(), p, ProcRank(), ug::BinaryBuffer::reserve(), pcl::ProcessCommunicator::size(), UG_COND_THROW, UG_THROW, and ug::BinaryBuffer::write().

Referenced by ug::ReadFromFile().

◆ WriteCombinedParallelFile()

void pcl::WriteCombinedParallelFile ( ug::BinaryBuffer buffer,
std::string  strFilename,
pcl::ProcessCommunicator  pc = pcl::ProcessCommunicator(pcl::PCD_WORLD) 
)

This function writes the BinaryBuffers of all participating cores into one combined parallel file. Note that to read these files, you HAVE to use ReadCombinedParallelFile.

NOTE: use this function when doing I/O from a large number of cores (1000+); otherwise you will run into severe I/O problems.

The file format is as follows:

size_t numProcs
int    nextOffset[numProcs]
byte   data1[...]
byte   data2[...]
...

That is, the first entry in the file is size_t numProcs, followed by an array of ints holding the offset of the next data block (see below), and then the actual data. This function is executed in parallel. For example, if core 0 has 1024 bytes of data, core 1 has 256 bytes, and core 2 has 500 bytes, the header occupies sizeof(size_t) + 3*sizeof(int) = 4*8 = 32 bytes (the example counts each header entry as 8 bytes), so the file contains:

(size_t) 3
(int) 32+1024
(int) 32+1024+256
(int) 32+1024+256+500
32:          data1
32+1024:     data2
32+1024+256: data3

We store nextOffset to get access to the size of the written data.

Parameters
buffer  a BinaryBuffer with the data to write
strFilename  the filename
pc  a process communicator (default pcl::World)

References ug::BinaryBuffer::buffer(), pcl::ProcessCommunicator::get_mpi_communicator(), pcl::ProcessCommunicator::get_proc_id(), NumProcs(), ProcRank(), pcl::ProcessCommunicator::size(), UG_THROW, and ug::BinaryBuffer::write_pos().

Referenced by ug::SaveToFile().

◆ WriteParallelArchive() [1/2]

template<typename TBuffer >
void pcl::WriteParallelArchive ( pcl::ProcessCommunicator pc,
std::string  strFilename,
const std::map< std::string, TBuffer > &  files 
)

◆ WriteParallelArchive() [2/2]

void pcl::WriteParallelArchive ( ProcessCommunicator &  pc,
std::string  strFilename,
const std::vector< FileBufferDescriptor > &  files 
)

Variable Documentation

◆ PERFORM_MPI_INITIALIZATION

bool pcl::PERFORM_MPI_INITIALIZATION = true
static

Referenced by DisableMPIInit(), Finalize(), and Init().