Welcome to EDAboard.com

Compiling program on Beagle Board with Linux and profiling

Status
Not open for further replies.

sam33r

Member level 2
Joined: Jan 24, 2011
Hi fellas,

I am using the gettimeofday() function to calculate the time spent in a particular part of my program. Can someone please tell me how to convert the epoch time it returns into microseconds? I am also using GPROF for profiling. Are there any ways to improve the execution time of the program after profiling? Any help is much appreciated :)
 

I am using the gettimeofday() function to calculate the time spent in a particular part of my program. Can someone please tell me how to convert the epoch time it returns into microseconds?

To convert a timeval to microseconds, combine both members of the structure: multiply the seconds in timeval.tv_sec by 1,000,000 and add the microseconds in timeval.tv_usec.

For wall-clock time:
Code:
#include <sys/time.h>
#include <iostream>

int main(int argc, char **argv) {
    // WallClockTime example
    struct timeval startTime;
    struct timeval endTime;
    // get the current time
    // - NULL because we don't care about the time zone
    gettimeofday(&startTime, NULL);
    // algorithm goes here (placeholder for your own code)
    what_you_want_to_time();
    // get the end time
    gettimeofday(&endTime, NULL);
    // calculate time in microseconds
    // (multiply as a double: tv_sec * 1000000 can overflow a 32-bit long)
    double tS = startTime.tv_sec * 1000000.0 + startTime.tv_usec;
    double tE = endTime.tv_sec * 1000000.0 + endTime.tv_usec;
    std::cout << "Total Time Taken: " << tE - tS << " us" << std::endl;
    return 0;
}

For process time:
Code:
#include <sys/time.h>
#include <sys/resource.h>   // getrusage() and struct rusage
#include <iostream>
int main(int argc, char **argv) {
    // ProcessTime example
    struct timeval startTime;
    struct timeval endTime;
    // structure for rusage
    struct rusage ru;
    // get the current usage
    // - RUSAGE_SELF for the current process
    // - RUSAGE_CHILDREN for *terminated* subprocesses
    getrusage(RUSAGE_SELF, &ru);
    startTime = ru.ru_utime;   // user CPU time (ru_stime holds system CPU time)
    // algorithm goes here (placeholder for your own code)
    what_you_want_to_time();
    // get the end usage
    getrusage(RUSAGE_SELF, &ru);
    endTime = ru.ru_utime;
    // calculate time in microseconds
    // (multiply as a double: tv_sec * 1000000 can overflow a 32-bit long)
    double tS = startTime.tv_sec * 1000000.0 + startTime.tv_usec;
    double tE = endTime.tv_sec * 1000000.0 + endTime.tv_usec;
    std::cout << "Total Time Taken: " << tE - tS << " us" << std::endl;
    return 0;
}


Also I am using GPROF to do profiling. Are there any ways to improve the execution time of the program after doing the profiling?

Depends largely on the code being analyzed and GPROF's report. I recommend you post both so that we can examine them.

BigDog
 
Awesome BigDog, this is exactly what I was looking for. I am profiling with GPROF and OPROFILE. I am currently examining my code; if I am unable to come up with a solution I will post it here. Thanks for the help. :-D
 

I am using the function gettimeofday() to calculate the execution time of a certain function. I am running it 10,000 times to get the mean, max and min execution times. The mean is as expected, but the max value is much greater than the mean, approximately 6-7 times larger. Can you please tell me if there is a specific reason for this? Or is it just sporadic?
 

I am using the function gettimeofday() to calculate the execution time of a certain function. I am running it 10,000 times to get the mean, max and min execution times. The mean is as expected, but the max value is much greater than the mean, approximately 6-7 times larger. Can you please tell me if there is a specific reason for this? Or is it just sporadic?

The variations in execution time can result from several factors, the most likely being pipelining, cache memory and other background/concurrent processes. Cache effects coupled with OS process switching can produce wide fluctuations in execution times.

Have you actually logged each execution time to a file after the 10,000 iterations have completed?

BigDog
 

The variations in execution time can result from several factors, the most likely being pipelining, cache memory and other background/concurrent processes. Cache effects coupled with OS process switching can produce wide fluctuations in execution times.

Have you actually logged each execution time to a file after the 10,000 iterations have completed?

BigDog

Yes, I dumped the execution times of the 10,000 runs into a file and plotted a graph. I thought of the same reason, that there might be a cache miss prolonging the execution. The only thing that amazed me was that every time I ran the code, only one run took that much time and the rest were very near the mean. My next project consists of profiling this code; maybe I will find the exact reason then. Thanks for all your help :smile:
 

Where in relation to the other 9,999 execution times did this maximum execution time occur?

BigDog
 

Where in relation to the other 9,999 execution times did this maximum execution time occur?

BigDog

It was random. Sometimes it occurred between the 2000-3000th call and sometimes between the 5000-7000th call of the function. But again, it was only one run out of the 10,000 that took that much time.
 
