error handling in C

eLiu

Diamond Member
Jun 4, 2001
6,407
1
0
Hi guys,
I'd like to get people's ideas on how to handle errors in C, a language without any built-in 'try-catch' type of framework. I'll begin by describing what we (me + lab group) do now and what I think is good/bad about it. Our project is called "ProjectX"; it's a computational fluids package.

So at the moment, every function in the entire code returns an error code. Things like "NO_ERROR", "BAD_INPUT", "SEARCH_NOT_FOUND", "NON_PHYSICAL", etc.

Then almost anytime you call one of the ProjectX functions, you wrap the call with a macro:
Code:
PXErrorReturn( PXSomeFunction(arg1,...) );

Which will get 'translated' to:
Code:
{
  int _ierr = PXSomeFunction(arg1,...);
  if(_ierr != PX_NO_ERROR){
     printf("Error %s occured in file %s on line %d",...);
     return _ierr;
  }
}
The extra {} braces are there so that "_ierr" cannot interfere with any previous declarations. We could also just write "_ierr = ..." and depend on every function to have a local "_ierr" variable, but that seems cumbersome.
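For reference, the macro definition looks roughly like the following. This is a sketch rather than our exact code--PXErrorToString is a made-up helper here that maps an error code to its name:
Code:
#include <stdio.h>   /* for the printf in the expansion */

/* sketch -- PXErrorToString is hypothetical */
#define PXErrorReturn(call)                                        \
  {                                                                \
    int _ierr = (call);                                            \
    if (_ierr != PX_NO_ERROR) {                                    \
      printf("Error %s occurred in file %s on line %d\n",          \
             PXErrorToString(_ierr), __FILE__, __LINE__);          \
      return _ierr;                                                \
    }                                                              \
  }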

When the error is not catastrophic, you might write:
Code:
_ierr = PXSomeFunction(arg1,...);
if(_ierr == PX_SEARCH_NOT_FOUND){
  /* do something in response to search not found... like create the object */
}else if(_ierr != PX_NO_ERROR){
   printf("Error %s occured in file %s on line %d",...);
   return _ierr;
}

But the idea is that whenever a catastrophic error code is returned by a function, the code will propagate upward all the way to main. Then since the PXErrorReturn macro will print at every level, you get a 'call-stack' trace showing you where the error occurred & how you got there.

So advantages:
-easy to use, people are used to it & use it consistently
-prints out a full call-stack trace and displays information about which specific error code occurred

disadvantages:
-not easy to turn off
-does not "clean up" when exiting--once an error code occurs, the current function returns. Then the caller returns; then the caller's caller returns, etc. So if there are malloc() calls in the functions of that sequence (e.g., if myfunc allocates space to a pointer on its own stack), the corresponding free() calls never happen.

The last point is mostly annoying in our unit testing framework. Unit tests check the failure conditions for functions, so when free() isn't called and the stack is thrown out after return, valgrind reports lost memory.

We try to ensure that at least our unit tests have clean valgrind reports--no errors, no leaks. And it's pretty annoying/impossible to have to remember "oh this leak is OK b/c it is associated with an error condition."

Is it possible to change this behavior? An obvious choice would be to have appropriate free() calls before returning an error code. But this would lead to a lot of code duplication... and duplication of code that runs very rarely at that. I was thinking to do


Additionally, a ton of our error checks are related to input sanitization. These issues really only arise when the developer screwed up a function call or when the input file is misconfigured. It seems like in the former case, the standard C assert() macro would be much better. Even better, assert() is really easy to turn off. So the PXErrorReturn system would be reserved for errors like "the current solution is non-physical" whereas assert() would be used for "this input pointer is not null". The difference being that in the first case, any user would want to know about a non-physical solution, whereas in the second case, all such errors "should" be impossible outside of development.
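What I'm picturing for the developer-error checks is something like the sketch below. PXComputeFlux is a made-up example; the && "message" trick is just a common idiom for attaching a note to an assertion, and building with -DNDEBUG compiles the checks away entirely:
Code:
#include <assert.h>

int PXComputeFlux(const double *state, int nStates, double *flux)
{
  /* developer errors: "impossible" outside of development, so assert */
  assert(state != NULL && "state vector must not be NULL");
  assert(flux  != NULL && "flux output must not be NULL");
  assert(nStates > 0   && "need at least one state");

  /* ... real work; genuinely runtime errors (non-physical solution, etc.)
     would still come back as PX error codes ... */
  return PX_NO_ERROR;
}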

The only downside of assert() is that it doesn't automatically provide that call-stack trace. So people here will balk at it I'm sure, lol. But is it considered more "proper" to have assert() for pre/post condition + invariant checks? Then you could just invoke a debugger and place a breakpoint on the offending assert().

Thoughts?
-Eric
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
Consistency is key with C. Here are my thoughts.

With this sort of error handling, where there isn't a try/catch going on, this is a GOOD place to use a goto for cleanup. (That is a rare thing.) Try to fix errors, avoid them, etc. But at the end of the day, if you need to free up memory and don't want to copy lots of code, your best bet is to put the freeing in one portion of the function and jump to it when an error occurs.
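Something like this is the shape of it -- a sketch with made-up names (PX_MEMORY_ERROR, step_one, step_two), not anything specific to your code:
Code:
#include <stdlib.h>

int somefun(int n)
{
  int ierr = PX_NO_ERROR;
  double *buf = malloc(n * sizeof(double));
  if (buf == NULL) return PX_MEMORY_ERROR;

  ierr = step_one(buf, n);
  if (ierr != PX_NO_ERROR) goto cleanup;   /* jump straight to the freeing */

  ierr = step_two(buf, n);
  if (ierr != PX_NO_ERROR) goto cleanup;

cleanup:
  free(buf);               /* runs on success and on every error path */
  return ierr;
}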

For your functions, I would tend to lean toward returning an error code on noncatastrophic errors and asserting for catastrophic errors with a detailed error message (i.e., the parameters passed into the function, etc.). Looking at the call stack can be useful, sometimes. But, IMO, a solid error message is 100x better than having to dig through the call stack.

BTW, if an error will cause the program to crash, ignore allocated memory. There is no point in worrying about something the OS is going to take care of anyways (despite what valgrind might say)

These are my thoughts, which may or may not apply. If you do use the goto, make sure you stay within the function that you are working with, and try to limit yourself to one point of jumping, i.e., a single label such as "FreeMemory:".
 

Schmide

Diamond Member
Mar 7, 2002
5,694
936
126
Let me just say I hate ASSERTS. I've worked on many a team and the programmers that rely on ASSERTS are the ones who GPF on release more often than not. It's a lazy way of catching errors and some errors pop up only in release.

I personally follow a Microsoft HRESULT-style error code return policy. The nice thing about this is that HRESULT is declared as a long, and OS/MS library failure returns are negative because the high (severity) bit is set on failure. You can add your own codes for positive results and let zero mean success. If a function returns an error, I call an error reporting routine that outputs a series of messages based on the codes and can further call any other system error-to-string routines. This gives me the ability to set a breakpoint at one place to track down errors during debugging, and robust messaging for tracking runtime errors.
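A rough sketch of the same idea in plain C (made-up names and codes, not the actual Windows definitions):
Code:
#include <stdio.h>

/* negative = failure, zero = success, positive = success with extra info */
typedef long PXRESULT;
#define PX_OK          0L
#define PX_S_CREATED   1L          /* succeeded, but had to create the object */
#define PX_E_NOTFOUND  (-1L)
#define PX_E_BADINPUT  (-2L)

#define PX_FAILED(hr)  ((hr) < 0)

/* one central reporting routine: a single place to set a breakpoint */
void PXReportError(PXRESULT hr, const char *file, int line)
{
    fprintf(stderr, "error %ld at %s:%d\n", (long)hr, file, line);
}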

All my routines follow a cleanup procedure from the point of the error occurrence, and either correct and retry or exit gracefully.

I also avoid heavy use of MACROS. When you're debugging they can be annoying and I don't mind a bit of extra typing.
 

degibson

Golden Member
Mar 21, 2008
1,389
0
0
Macro-handling of errors is pretty good in C. Despite 'best practices', disciplined and correct use of macros improves code.
-not easy to turn off
Sure it is:
Code:
#ifdef DEBUG
#define PXErrorReturn(call) { \
  int _ierr = (call); \
  if(_ierr != PX_NO_ERROR){ \
     printf("Error %s occurred in file %s on line %d",...); \
     return _ierr; \
  } \
}
#else
#define PXErrorReturn(call) { \
  (void)(call); \
}
#endif

Furthermore:
-does not "clean up" when exiting ...

This relates to how you've orchestrated your macros. Consider giving each function an error_encountered flag. e.g.,

Code:
#define PXErrorReturn(call) { \
  int _ierr = PX_NO_ERROR; \
  if( error_encountered == PX_NO_ERROR ) _ierr = (call); \
  if(_ierr != PX_NO_ERROR){ \
     printf("Error %s occurred in file %s on line %d",...); \
     error_encountered = _ierr; \
  } \
}

Then, anything inside of a macro of this kind isn't even called if an error arises. Ergo, if you malloc'd something at the start of the function, you should eventually reach the end and free(). Of course, you have to architect the rest of the function accordingly.

Another option: have your macro accept an extra parameter: the function to call when an error arises. It's a good way to implement your own try/catch/finally clause if you really want it.
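E.g., something along these lines -- just a sketch, with a made-up macro name; the cleanup argument is whatever the caller wants run before the early return:
Code:
#include <stdio.h>

#define PXErrorReturnCleanup(call, cleanup_fn, cleanup_arg)            \
  {                                                                    \
    int _ierr = (call);                                                \
    if (_ierr != PX_NO_ERROR) {                                        \
      printf("Error %d occurred in file %s on line %d\n",              \
             _ierr, __FILE__, __LINE__);                               \
      (cleanup_fn)(cleanup_arg);                                       \
      return _ierr;                                                    \
    }                                                                  \
  }

/* usage: free 'buf' automatically if the call fails */
PXErrorReturnCleanup( PXSomeFunction(buf, n), free, buf );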

Alternatively, you can give each stack a data structure containing outstanding allocations -- error code can auto-free this.
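Roughly like this -- a sketch only, fixed-size list for brevity, all names made up:
Code:
#include <stdlib.h>

/* track a function's outstanding allocations so an error path can
   release them all at once */
typedef struct {
    void *ptrs[32];
    int   count;
} PXAllocList;

static void *PXTrackedMalloc(PXAllocList *list, size_t bytes)
{
    void *p = malloc(bytes);
    if (p != NULL && list->count < 32)
        list->ptrs[list->count++] = p;
    return p;
}

static void PXFreeAll(PXAllocList *list)
{
    int i;
    for (i = 0; i < list->count; i++)
        free(list->ptrs[i]);
    list->count = 0;
}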

Lastly, you can write valgrind suppressions for error conditions leaving behind leaked memory, but this assumes your errors are meant to fail-stop and not to be recoverable.

The only downside of assert() is that it doesn't automatically provide that call-stack trace.
That's probably a function of your environment. Asserts should give a core dump, and a core dump includes the stack. Try setting your core dump size to unlimited (e.g., 'ulimit -c unlimited' in bash, or 'limit coredumpsize unlimited' in csh) prior to running your asserting app -- I'm assuming you're using Linux, I suppose (valgrind for Windows?).
 

eLiu

Diamond Member
Jun 4, 2001
6,407
1
0
Consistency is key with C. Here are my thoughts.
With this sort of error handling, where there isn't a try/catch going on, this is a GOOD place to use a goto for cleanup. (That is a rare thing.) Try to fix errors, avoid them, etc. But at the end of the day, if you need to free up memory and don't want to copy lots of code, your best bet is to put the freeing in one portion of the function and jump to it when an error occurs.

Oh right, I meant to ask about 'goto'. In the OP, I wrote "I was thinking to do" and just stopped. Apparently I got distracted, lol. Anyway, I was thinking to have the PXErrorReturn macro have a "goto FREEBLOCK" or whatever in it. But what I'm confused about then is how to get back:
Code:
int somefun(int arg1){
  double *ptr = malloc(arg1 * sizeof(double));
  int _ierr;
  {
    _ierr = somefun2();
    if(_ierr != PX_NO_ERROR){
      goto FREEBLOCK;
      printf("error msg...",...);
      return _ierr;
    }
  }

FREEBLOCK:
  free(ptr);
  //how to get BACK to the error handler?
  //in particular, what if there are numerous expansions from PXErrorReturn macros?

  return PX_NO_ERROR; //under normal conditions, return no error
}

For your functions, I would tend to lean toward returning an error code on noncatastrophic errors and asserting for catastrophic errors with a detailed error message (i.e., the parameters passed into the function, etc.). Looking at the call stack can be useful, sometimes. But, IMO, a solid error message is 100x better than having to dig through the call stack.

It's not a real call-stack... like not from a core dump. It just arises because every function call is wrapped by PXErrorReturn, so when the bottom level function returns an error code, every caller in the sequence will print out an error message. Also if you're using assert(), how do you give a detailed message? As far as I know, assert() calls abort(), so not too much I can do? Maybe? And a detailed list of inputs is potentially impossible... not going to print out 1e6 numbers (array input), for example.

But yes, that is how I was envisioning using error codes & asserts together, supposing that's the solution I choose.


BTW, if an error will cause the program to crash, ignore allocated memory. There is no point in worrying about something the OS is going to take care of anyways (despite what valgrind might say)

These are my thoughts, which may or may not apply. If you do use the goto, make sure you stay within the function that you are working with, and try to limit yourself to one point of jumping, i.e., a single label such as "FreeMemory:".

I'm not sure what you mean by 'cause program to crash.' If that means that w/most errors, our desired behavior is to exit, then yes. Outside of development bugs, I'd say most errors you encounter while using the code result from an ill-posed problem, providing bad inputs, or something where we're not really interested in trying to recover.

That said, the reason it's annoying is that when we valgrind a set of unit tests, it's unclear whether the leak is genuine or arises from testing that some function fails out under certain inputs. Maybe this is just a downside of the unit testing framework we use (cutest), which has all unit tests for a .c file written into the same .cutest file. (Certainly splitting up the unit tests into more files would be a huge pita & cumbersome like all hell.)

And I was under the impression that GOTO has to stay within the current function environment?
 

eLiu

Diamond Member
Jun 4, 2001
6,407
1
0
Let me just say I hate ASSERTS. I've worked on many a team and the programmers that rely on ASSERTS are the ones who GPF on release more often than not. It's a lazy way of catching errors and some errors pop up only in release.

I personally follow a Microsoft HRESULT-style error code return policy. The nice thing about this is that HRESULT is declared as a long, and OS/MS library failure returns are negative because the high (severity) bit is set on failure. You can add your own codes for positive results and let zero mean success. If a function returns an error, I call an error reporting routine that outputs a series of messages based on the codes and can further call any other system error-to-string routines. This gives me the ability to set a breakpoint at one place to track down errors during debugging, and robust messaging for tracking runtime errors.

All my routines follow a cleanup procedure from the point of the error occurrence, and either correct and retry or exit gracefully.

I also avoid heavy use of MACROS. When you're debugging they can be annoying and I don't mind a bit of extra typing.

Hm, in my googling I did run across some people who objected to assert(). I encountered the assert() approach last summer with another computational fluids group at LBNL, and it struck me as an elegant solution. Why do you say it's a lazy way of catching errors?

Yeah I've noticed the difficulty of debugging w/macros. I guess to combat this, instead of just the naked printf, we should wrap it w/a function call, say "PXErrorReport" or something. Though for now, printfs occur infrequently enough that it isn't too painful to break on them.

Also PXErrorReturn is very nearly the only macro we use.

I don't think I'm fully understanding your MS-based system. (Keep in mind that I've never programmed a windows app so I have no familiarity.) It sounds like the system we have set up does something similar? Our error reporting abilities are probably more limited; but I'm not really sure what kind of extra info you're outputting.
 

eLiu

Diamond Member
Jun 4, 2001
6,407
1
0
Macro-handling of errors is pretty good in C. Despite 'best practices', disciplined and correct use of macros improves code.

Sure it is:
How do you deal with the following situation:
FuncA calls FuncB calls FuncC.

FuncC searches for objectX and can't find it, so FuncC returns PX_SEARCH_NOT_FOUND. FuncB does nothing if FuncC returns an error, simply passing the error up to FuncA. FuncA responds to the error by creating objectX.

With your macro FuncB won't return the error, and some undefined behavior will arise. So making that change right now will be a pain b/c it's completely unclear if/where/when these things happen.

Now sure you can just say "well, don't set up situations like this." But our group is one of aero engineering grad students, 90% of whom came to the group with minimal programming experience and 0 software engineering experience. So asking for more coding discipline can be a big headache... not to mention if everyone else can just be like "no, that's too much trouble, screw it."

Furthermore:
This relates to how you've orchestrated your macros. Consider giving each function an error_encountered flag. e.g.,

Code:
#define PXErrorReturn(call) { \
  int _ierr = PX_NO_ERROR; \
  if( error_encountered == PX_NO_ERROR ) _ierr = (call); \
  if(_ierr != PX_NO_ERROR){ \
     printf("Error %s occurred in file %s on line %d",...); \
     error_encountered = _ierr; \
  } \
}

Then, anything inside of a macro of this kind isn't even called if an error arises. Ergo, if you malloc'd something at the start of the function, you should eventually reach the end and free(). Of course, you have to architect the rest of the function accordingly.

Hm I kind of like this. Though I think it would be easier to use a GOTO instead, b/c generally after an error, you really don't want any subsequent code to run... all sorts of opportunities for undefined behaviours. So you could have error_encountered = PX_NO_ERROR to start. Then every function returns error_encountered. And "error_encountered" could be a parameter passed to the macro.
So like:
Code:
int myfun(int arg1){
  int error_encountered = PX_NO_ERROR;
  int _ierr;
  {
    _ierr = myfun2();
    if(_ierr != PX_NO_ERROR){
      error_encountered = _ierr;
      printf("failure message");
      goto CLEANUP;
    }
  }

  //other code...

CLEANUP:
  //whatever

  return error_encountered;
}
Thoughts? I kind of like this. At this point there's probably no reason to keep "_ierr" around, and just do everything with "error_encountered" instead (well, maybe let it be a parameter to the macro, so the coder can pick the name; or maybe force a single name for consistency).
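So the macro version of what I'm picturing would be roughly the following (a sketch; PXErrorGoto is a made-up name):
Code:
#include <stdio.h>
#include <stdlib.h>

/* record the error and jump to the cleanup label supplied by the caller */
#define PXErrorGoto(call, error_encountered, label)        \
  {                                                        \
    int _ierr = (call);                                    \
    if (_ierr != PX_NO_ERROR) {                            \
      printf("Error %d occurred in file %s on line %d\n",  \
             _ierr, __FILE__, __LINE__);                   \
      (error_encountered) = _ierr;                         \
      goto label;                                          \
    }                                                      \
  }

int myfun(int arg1)
{
  int error_encountered = PX_NO_ERROR;
  double *ptr = malloc(arg1 * sizeof(double));

  PXErrorGoto( myfun2(), error_encountered, CLEANUP );
  /* other code... */

CLEANUP:
  free(ptr);                 /* free(NULL) is a no-op, so this is safe */
  return error_encountered;
}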

Another option: have your macro accept an extra parameter: the function to call when an error arises. It's a good way to implement your own try/catch/finally clause if you really want it.

But then you'd need every function that frees something to have its own helper function for freeing stuff. Seems a little cumbersome? Not to mention potentially painful if there is a lot of freeing to be done--long, long argument lists.

Alternatively, you can give each stack a data structure containing outstanding allocations -- error code can auto-free this.

Mmm sounds too complicated. So much code would have to be edited, yuck.

Lastly, you can write valgrind suppressions for error conditions leaving behind leaked memory, but this assumes your errors are meant to fail-stop and not to be recoverable.

That's probably a function of your environment. Asserts should give a core dump, and a core dump includes the stack. Try setting your core dump size to unlimited (e.g., 'ulimit -c unlimited' in bash, or 'limit coredumpsize unlimited' in csh) prior to running your asserting app -- I'm assuming you're using Linux, I suppose (valgrind for Windows?).

Probably don't want to suppress valgrind output. It's hard to say a priori when to suppress.

Yeah I'm in linux. When you use assert.h & an assert(x==0) fails, there's a core dump? I didn't know that. I thought all you see/get is something like:
Assertion failed: expression, file filename, line line-number

Not having some version of the call stack probably isn't the end of the world. People need to learn to use debuggers; most people in my group rely on printf().
 

Schmide

Diamond Member
Mar 7, 2002
5,694
936
126
Hm, in my googling I did run across some people who objected to assert(). I encountered the assert() approach last summer with another computational fluids group at LBNL, and it struck me as an elegant solution. Why do you say it's a lazy way of catching errors?

I guess if used correctly, it has some merit; however, as I've noticed, it leads to disparities between debug and release error handling. When a program fails in the field, which can happen, a robust error code is a lot more professional than a GPF. More than a few times, I've had to take over a suite of code filled with asserts and implement a more robust error handling system.
 

degibson

Golden Member
Mar 21, 2008
1,389
0
0
How do you deal with the following situation:
FuncA calls FuncB calls FuncC.

FuncC searches for objectX and can't find it, so FuncC returns PX_SEARCH_NOT_FOUND. FuncB does nothing if FuncC returns an error, simply passing the error up to FuncA. FuncA responds to the error by creating objectX.

This is a different scenario than propagating an error to main and printing a call stack along the way. Obviously you need to write code in FuncA to handle the error, rather than just use a generic macro around the call to FuncB. My suggestion about the function pointer is a good one for this, in fact:
Another option: have your macro accept an extra parameter: the function to call when an error arises. It's a good way to implement your own try/catch/finally clause if you really want it.

Thoughts? I kind of like this. At this point there's probably no reason to keep "_ierr" around, and just do everything with "error_encountered" instead (well, maybe let it be a parameter to the macro, so the coder can pick the name; or maybe force a single name for consistency).
I've seen this in some large C projects. It takes discipline to work it out, but it is another way to handle errors, even if you use macros around all the calls. E.g., in our hypothetical funcA:
Code:
PXErrorReturn( funcB(...) );
if( error_encountered == WTF_IS_OBJECT_X ) {
  ...
}

Yeah I'm in linux. When you use assert.h & an assert(x==0) fails, there's a core dump? I didn't know that. I thought all you see/get is something like:
Assertion failed: expression, file filename, line line-number
My first guess is that coredumpsize limit is zero, as per my earlier post. But SIGABRT will dump core if the environment allows it to do so.
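If you can't get a core dump (or just want something printed immediately), another option on Linux/glibc is a SIGABRT handler that prints an approximate trace. A sketch, not production code -- link with -rdynamic if you want symbol names:
Code:
#include <execinfo.h>
#include <signal.h>
#include <unistd.h>

static void abrt_handler(int sig)
{
    void *frames[64];
    int n = backtrace(frames, 64);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);  /* avoids malloc in the handler */
    signal(sig, SIG_DFL);   /* restore default handling so the core dump still happens */
    raise(sig);
}

int main(void)
{
    signal(SIGABRT, abrt_handler);
    /* ... a failed assert() now prints a rough backtrace before dumping core ... */
    return 0;
}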
 

EagleKeeper

Discussion Club Moderator / Elite Member
Staff member
Oct 30, 2000
42,589
5
0
I am working with a decent size embedded avionics project in C.

Any method that allocates memory has the responsibility of cleaning it up.
When an error is detected from a called method, a block of cleanup code runs before the error code is returned to the parent.

The project after 3+ years is being deployed this fall.
No memory leaks have been detected by the development and test teams.
The test teams have hooks into the actual code, not just the exposed APIs; they run test cases that validate error handling by injecting error conditions into low-level code as well as into the exposed API code.
 

Markbnj

Elite Member / Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
How do you deal with the following situation:
FuncA calls FuncB calls FuncC.

FuncC searches for objectX and can't find it, so FuncC returns PX_SEARCH_NOT_FOUND. FuncB does nothing if FuncC returns an error, simply passing the error up to FuncA. FuncA responds to the error by creating objectX.

Personally, my first rule of exception handling is to use it to handle exceptions. If it can be a legitimate condition of the program's runtime state that objectX is not found, then that isn't an error, imo. Since there is a method that can create objectX in lieu of its being found, it seems this is a legitimate path of execution. In a case like that the search function can return a code, or a null pointer, but using a macro-based exception handling framework to "catch" this condition and respond to it would be like having the fire department show up and put out your barbeque.
 

degibson

Golden Member
Mar 21, 2008
1,389
0
0
Personally, my first rule of exception handling is to use it to handle exceptions ... using a macro-based exception handling framework to "catch" this condition and respond to it would be like having the fire department show up and put out your barbeque.

Does that mean that in Java, all barbeques are fire hazards?

All kidding aside, /agree.
 

lambchops511

Senior member
Apr 12, 2005
659
0
0
Here is an opportunity to use the goto statement.

Every function has a label such as "cleanup_and_exit:"; your macro can jump to that label so the function cleans up after itself too.
 

dighn

Lifer
Aug 12, 2001
22,820
4
81
At my workplace, one of the lead developers created a form of exception handling using some macro sorcery (implemented with gotos). Basically there are three clauses (try, catch, finally), and the "exceptions" are status returns, e.g. you'd call check(function_call(...)) within the try clause. It works very well.
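I can't post the actual macros, but the flavor is roughly like the sketch below (something I threw together just now, not the real implementation; the PX_* codes and some_step() are made up, and it assumes one try block per function):
Code:
#include <stdio.h>
#include <stdlib.h>

#define PX_TRY          int _status = PX_NO_ERROR; {
#define PX_CHECK(call)  do { _status = (call); \
                             if (_status != PX_NO_ERROR) goto _catch; } while (0)
#define PX_CATCH        } goto _finally; _catch: {
#define PX_FINALLY      } _finally: {
#define PX_END          }

int do_work(int n)
{
    double *buf = NULL;
    PX_TRY
        buf = malloc(n * sizeof(double));
        PX_CHECK(buf ? PX_NO_ERROR : PX_MEMORY_ERROR);
        PX_CHECK(some_step(buf, n));        /* the "exceptions" are status returns */
    PX_CATCH
        printf("do_work failed with code %d\n", _status);
    PX_FINALLY
        free(buf);                          /* always runs */
    PX_END
    return _status;
}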
 

Markey

Junior Member
Sep 28, 2010
20
0
0
The way I do it is to write code that handles every possible outcome of every call. For example, if a function can return a NULL pointer for a lookup, then the app needs to be able to detect that and take the appropriate action. The main application I work on has a command line interface and I've implemented a class mask for debug messages, to simplify debugging. Where things could go wrong, every path gets a printf that's wrapped in a check to see if the class for that debug logging is turned on. Then when the app is running, if I see bad behavior I can turn on the debug mask for the appropriate class and trace back through the code and find the bug.
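Roughly like this (a sketch with made-up class names; the mask itself gets set at runtime from the command line):
Code:
#include <stdio.h>

enum {
    DBG_LOOKUP = 1 << 0,
    DBG_PARSE  = 1 << 1,
    DBG_NET    = 1 << 2
};

static unsigned debug_mask = 0;

#define DPRINTF(cls, ...) \
    do { if (debug_mask & (cls)) printf(__VA_ARGS__); } while (0)

/* usage: DPRINTF(DBG_LOOKUP, "lookup for id %d returned NULL\n", id); */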
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
The way I do it is to write code that handles every possible outcome of every call. For example, if a function can return a NULL pointer for a lookup, then the app needs to be able to detect that and take the appropriate action. The main application I work on has a command line interface and I've implemented a class mask for debug messages, to simplify debugging. Where things could go wrong, every path gets a printf that's wrapped in a check to see if the class for that debug logging is turned on. Then when the app is running, if I see bad behavior I can turn on the debug mask for the appropriate class and trace back through the code and find the bug.

That is impractical for large projects with multiple developers.
 