Library Submission


Comments

  • Having spent some time playing with the library, I still cannot determine what its goal or scope is. I have as my input:

    Library name.
    Documentation.
    The code.
    Responses and clarifications from the author.

    But they do not give me a clear, unambiguous picture of what the library is meant to be. In fact, my impression is that the different inputs appear to contradict one another.

    I can think of the following answers to my question:

    A drop-in replacement for type int: a small integer that uses type int as its resource, and reports resource exhaustion via an exception.
    A library that solves any problem you ever had with type int (overflows, conversions): it offers a set of otherwise unrelated tools (like safe_compare).
    A library that solves any problem you ever had with types int or float (and their friends).

    Honestly, I am not arguing for arguing’s sake. I cannot determine the main goal of the library, and for this reason I cannot review it properly: I cannot compare it against its goal.

    • Robert Ramey says:

      Here is the simple case which I believe shows the utility of the library

      #include <iostream>

      int main(){
          int x, y, z;
          std::cin >> x >> y; // get integer values from the user
          z = x + y;
          std::cout << z; // display sum of the values
          return 0;
      }

      One expects from a plain reading of the code that, given any two integer input values, the program will display their arithmetic sum. But this is not true for all inputs! The program will actually display the result of the C++ "+" operator, which is not always the same as the arithmetic sum. There are two ways to address this:

      • Tell the user that he has to know in advance that the sum will be incorrect if it exceeds a certain number
      • Have the program detect the case and display an error message to the user.

      Clearly the first option defeats the whole purpose of computation. So how do we implement the second option? With the Safe Numerics library one can rewrite the program as:

      #include <iostream>
      #include "safe_integer.hpp" // the library's header

      int main(){
          boost::safe_numerics::safe<int> x, y, z;
          std::cin >> x >> y; // get integer values from the user
          try{
              z = x + y;
              std::cout << z << std::endl; // display sum of the values
          }
          catch(std::exception const & e){
              std::cout << "sum exceeds capacity of this computer" << std::endl;
          }
          return 0;
      }

      Now ask yourself, how would you implement this program without a library such as safe numerics?

      Robert Ramey

      • This is not what I meant. I do see value in using safe<int>. But I do not understand why the library also offers things like safe_compare functions. What do they have to do with safe<int>? I mean, I know what: they both help solve some problems with int. But should this answer be enough to pack them both into one library? I.e., why not only offer safe<int> in the library and have other libraries offer other conveniences?

        Or to put it in still different words: when you gave me the example of the usefulness of the library, you mentioned safe<int>, and not safe_compare. It makes me believe that safe<int> is the main character in the library, and helpers like safe_compare are of little relevance.

        Or to state the same concern again: is the Safe Numerics library a template safe<int> and that’s it? Or is it a set of loosely related tools for solving problems with int?

        I consider all the problems worth solving. I am just not sure which way the library wants to go. Only safe<int>? Then remove the rest. A set of different tools? Then do mention them in the tutorial section.

        • Robert Ramey says:

          “This is not what I meant. I do see value in using safe<int>. ”

          So we don’t have to discuss safe_unsigned_range<T> and safe_signed_range<T>? That leaves safe_compare<T, U> and safe_cast<T, U>.

          safe_cast<T, U> would trap a cast from U to T which changes the arithmetic value. Seems to me it fits well within the concept of the library by trapping cases where the result of the C++ operation is different from the expected arithmetic result. Let me know if you don’t agree here.
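
          For illustration, here is roughly what that behavior means in use (my own sketch; the exact call syntax is an assumption, not a quote from the docs):

          int i = 1000;
          char c = safe_cast<char>(i);  // invokes the error handler: 1000 is not representable as a char
          char d = safe_cast<char>(42); // fine: the value survives the conversion unchanged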

          safe_compare<T, U> addresses the same issue. From the documentation of safe_compare<T, U>:

          void f(){
              int i = -1;
              unsigned int j = 1;
              assert(i < j); // program traps because the expression is false: i converts to a huge unsigned value. But this is a surprise because -1 < 1
              assert(safe_compare::less_than(i, j)); // expression is true as we would expect
          }

          So again, we're trapping cases where the C++ operation doesn't match the expected result. But you're right that this doesn't quite fit in. It's really an implementation detail used to implement the < operator for safe integers. So the above example would more likely be rendered as

          void f(){
              safe<int> i = -1;
              safe<unsigned int> j = 1;
              assert(i < j); // program works as expected
          }

          In addressing your original comment: how would one describe this library in one concise sentence? Here's my attempt. (OK, it's two sentences.)

          "Operations on C++ intrinsic integer types do not always produce a result equal to the corresponding operations on integers. This library defines replacements for these C++ types which trap on operations that produce arithmetically incorrect results."

          It could probably be phrased even better - be my guest.

          Robert Ramey

          • Again, I am confused about the goal of the library. The sentence that confuses me now, about safe_compare, is “It’s really an implementation detail used to implement the < operator for safe integers.”

            My impression of the library (somewhat negative) is this: “We give you class template ‘safe’ for wrapping integer types so that they are safe. While implementing it we coded a number of functions that can perhaps be used in other contexts, so you may have them as well, although you do not need them when you use safe<int>”

            I would be more confident with either of the two below versions:

            (1). "This library provides a set of tools that help to deal with C++ integer scalar types in a way that avoids UB caused by integer overflows or integral conversions." But if you go this way, I would expect the docs to say less about safe<int> and give more examples of safe_cast, safe_compare, etc.

            (2). "This library offers a class template safe<T> that can be used as a drop-in replacement for any integral scalar type like int. It avoids any UB caused by integer overflow or integral conversions." But if you go this way, I would expect you to get rid of safe_cast, safe_compare, etc.

            I hope that it illustrates the difficulties I have with understanding the scope of the library. I have the same “concerns” about safe_unsigned_range and safe_cast. I just wanted to focus on one.

  • david stone says:

    This is an issue that is especially important to me, as I have written a library that also has the goal of making integer arithmetic safe: the bounded::integer library: http://doublewise.net/c++/bounded/ . I presented this library at C++Now 2014. It has a different philosophy from the Safe Numerics library: it has compile-time min and max bounds on each integer type, and the goal is to replace all uses of built-in integer types. The result of the arithmetic operators is a new type that is able to hold any result of the operation (so bounded::integer<a, b> + bounded::integer<c, d> == bounded::integer<a + c, b + d>).
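
    To sketch the idea (this is my own illustrative code, not the actual bounded::integer API):

    template<long Min, long Max>
    struct bounded_int {
        long value; // class invariant: Min <= value <= Max
    };

    // The result type of + is computed from the operands' bounds, so it can always
    // hold the true sum and no runtime overflow check is needed.
    template<long LMin, long LMax, long RMin, long RMax>
    bounded_int<LMin + RMin, LMax + RMax>
    operator+(bounded_int<LMin, LMax> a, bounded_int<RMin, RMax> b) {
        return { a.value + b.value };
    }

    bounded_int<0, 63> square{10};
    bounded_int<0, 63> offset{5};
    auto sum = square + offset; // sum has type bounded_int<0, 126>: overflow is impossible by construction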

  • akrzemi1 says:

    Hi,

    I was going through the Safe Numerics library in the Boost Library Incubator (my goal was to make a review), and I realized I disagree with the basic idea it is built on. I wanted to raise my concerns here.

    If I were to summarize in one sentence what this library is, I would say: a drop-in replacement for type int that checks for overflow at run time and throws an exception if it finds one. Did I get it right? The remainder of this post is based on this interpretation.

    If so, I am not sure this idea is a good one and worth promoting in Boost. BTW, this is one of my criteria for letting a library into Boost: whether it promotes worthy ideas. I agree with the statement that a program should be UB-free. But I do not think that letting the programmer do what he did before, having the library or some run-time tool check for potential UB, and throwing an exception instead, makes the program any better (or safer). It is just hiding the symptoms rather than curing the disease. The programmer should not plant the UB in the first place, agreed. But this is different from first making the mess and then having the run-time clean it up for you. I know it works for many people, in a number of languages, and it may even be considered a practical solution, but (by inclusion into Boost) I wouldn’t like to be sending the message “this is how you are supposed to code”.

    I try to recall how I use type int. I do not think I ever use it for anything that would be close to “numeric” as I know the term from math.

    Use Case 1 (an index):

    [code]
    for (size_t i = 0, I = v.size(); i != I; ++i) {
        if (i != 0) str += ",";
        str += v[i];
    }
    [/code]

    There doesn’t appear to be a good reason to wrap it into safe<int> here, even though the incrementation could possibly overflow. Plus, it would kill my performance.

    Use Case 2 (using reasonably small range):

    I used an int to represent a square on a chessboard. There are only 64 squares, so I couldn’t possibly overflow, on any platform. And even if there exists a platform where 64 doesn’t fit into an int, I would not use safe<int> there. I would rather go for something like double_int.

    If I were to use some numeric computations on integers and I perceived any risk that I may overflow, I would not be satisfied with having the computations stop because of an exception. I would rather use a bigger type (BigInt?). I do not think int is even meant to be used in numerical computations. I believe it is supposed to be a building block for building more useful types like BigInt.

    One good usage example I can think of is this. After a while of trying to chase a bug I came up with a hypothesis that my int could be overflowing. I temporarily replace it with safe<int> and put a break point in function overflow() to trap it and support my hypothesis. I would probably use a configurable typedef then:

    [code]
    #ifndef NDEBUG
    typedef safe<int> int_t;
    #else
    typedef int int_t;
    #endif
    [/code]

    But is this the intent?

    But perhaps it is just my narrow perspective. Can you give me a real-life example where substituting safe<int> for int has merit and is not controversial? I do not mean the code, just a story.

    Regards,
    &rzej

    • jmaddock says:

      > One good usage example I can think of is this. After a while of trying to
      > chase a bug I came up with a hypothesis that my int could be overflowing.
      > I temporarily replace it with safe<int> and put a break point in function
      > overflow() to trap it and support my hypothesis. I would probably use a
      > configurable typedef then:
      >
      > #ifndef NDEBUG
      > typedef safe<int> int_t;
      > #else
      > typedef int int_t;
      > #endif
      >
      > But is this the intent?
      >
      > But perhaps it is just my narrow perspective. Can you give me a real-life
      > example where substituting safe<int> for int has merit and is not
      > controversial? I do not mean the code, just a story.

      This is all a very good question, which I don’t have a good answer to, but I’ll add some comments anyway.

      One thing I’ve been asked from time to time is to extend support for boost::math::factorial or boost::math::binomial_coefficient to integer types, and it always gets the same response: “are you serious?”.

      With Boost.Multiprecision one of the first support requests was for integer exponentiation, and I reluctantly added it (as well as its modular version) because I know there are situations where it’s really needed, even though it’s clearly dangerous as hell.

      Now on to safe numerics: perhaps many folks don’t realise this, but boost::multiprecision::cpp_int has always supported a “safe mode” where all operations are checked for overflow etc. What’s more, you can use this to create checked 32-bit ints right now if you really want to (it’s a sledgehammer #include solution to the problem though). And yes, I have found bugs in number-theoretic coding problems by using those types (mostly in the algorithms within the multiprecision lib, including the modular exponentiation mentioned above).
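
      For instance, creating a checked 32-bit type looks roughly like this (a sketch; double-check the cpp_int_backend parameters against the Multiprecision docs for your version):

      #include <boost/multiprecision/cpp_int.hpp>
      #include <iostream>

      namespace mp = boost::multiprecision;
      // a fixed 32-bit integer whose operations are all checked
      typedef mp::number<mp::cpp_int_backend<32, 32, mp::signed_magnitude, mp::checked, void> > checked_int32;

      int main() {
          checked_int32 x = 0xFFFFFFFFu; // the largest magnitude this type can hold (the sign is stored separately)
          try {
              x = x + 1; // the checked backend detects the overflow
          } catch (std::exception const& e) {
              std::cout << "trapped: " << e.what() << std::endl;
          }
          return 0;
      }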

      However there is going to be a noticeable performance hit if you really do use this with 32-bit integers. But not for extended precision integers – in fact I doubt very much you will be able to detect whether checking is turned on or not for those types – because the check is a fundamental part of the addition/subtraction/multiplication code anyway – you simply check at the end of the operation whether there is an unused carry. It’s utterly trivial compared to everything else going on.

      So… I think yes, if you are writing a number theoretic algorithm then routine testing with a checked integer type is downright essential. However, for multiprecision types it has to be implemented as part of the number type’s own arithmetic algorithms, not as an external add on which would be so utterly expensive as to be useless (all those multi-precision divides would kill you). Which is to say the proposed library would be quite useless for multiprecision types.

      None of which really answers your question. I guess if your pacemaker or your aeroplane uses integer arithmetic for critical control systems, then I rather hope that some form of defensive programming is in use. Whether this is the correct method, or whether some form of hardware support would be more effective is another issue.

      And my “favourite” integer bug: why subtracting (or heaven forbid negating) unsigned integers of course!
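
      For anyone who hasn’t been bitten by it yet, a minimal reproduction (assuming a 32-bit unsigned int):

      #include <iostream>

      int main() {
          unsigned int a = 2, b = 3;
          std::cout << a - b << std::endl; // prints 4294967295, not -1
          std::cout << -a << std::endl;    // negation is no better: prints 4294967294
          return 0;
      }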

    • Robert Ramey says:

      If I were to summarize in one sentence what this library is, I would say: a drop-in replacement for type int that checks for overflow at run time and throws an exception if it finds one. Did I get it right?

      yep

      But note that the library also includes safe_integer_range and safe_unsigned_range.

      Also note that the library’s focus isn’t so much undefined behavior as incorrect arithmetic. Overflow of addition of unsigned integer types is defined even though it’s an incorrect result. Casting between types is defined, but can change the numeric value, etc.
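
      For example (assuming a 32-bit unsigned int):

      unsigned int u = 4294967295u; // UINT_MAX
      unsigned int v = u + 1;       // perfectly defined behaviour: v is 0, which is not the arithmetic sum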

      Use Case 1 (an index): …

      Agreed, I wouldn’t expect it to be used here.

      Use Case 2 (using reasonably small range): …

      A much more interesting case. You know that a chess board has 64 squares. But do you really know that your program has no mistakes? Suppose you accept user input and it exceeds 64. You’re supposed to check this, but suppose you forgot? Or suppose you get this value from someone else; again you’re supposed to check, but suppose you forgot. Suppose the chess board index is the product of some other calculation. Should you verify the result of every calculation? Suppose that there’s an overflow and the answer rolls over to a number less than 64. Are you going to check for that too? Can you really check every path through your program ahead of time and absolutely know that there will never be an overflow?
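
      To make the roll-over point concrete, a sketch (using unsigned arithmetic, where wraparound is at least well defined, and assuming a 32-bit unsigned int):

      unsigned int square = 65536u * 65536u + 7u; // wraps modulo 2^32: the result is 7
      assert(square < 64);                        // the bounds check passes, silently masking the broken calculation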

      If I were to use some numeric computations on integers and I perceived any risk that I may overflow, I would not be satisfied with having the computations stop because of an exception.

      What would you prefer instead? BTW, the library actually calls an “overflow” function. The default implementation is to throw an exception.

      I would rather use a bigger type (BigInt?). I do not think int is even meant to be used in numerical computations.

      Hmmm, how big to make BigInt? Lots of opportunity to pick wrong here. And a lot of work to figure out.

      I believe it is supposed to be a building block for building more useful types like BigInt.

      I believe it is supposed to represent the natural word size of the machine the compiler is designed for. The operations on int are designed to map to the primitive operations which the underlying hardware supports. The choice of the name “integer” was meant to promote portability of programs to different hardware. (This has been creating chaos ever since.) The problem is that when we say x + y we mean the arithmetic operation of addition, but the compiler implements the machine’s hardware add instruction. One could say that this library detects the cases where there is confusion on this point. Alternatively, one could say that this library implements correct arithmetic, which C/C++ lacks. The second is the more correct. Sometimes the underlying hardware just can’t represent the arithmetic answer, and this library will trap this case. People will use the syntax x + y when they mean the arithmetic operation x + y; I don’t think we can blame them for that.

      One good usage example I can think of is this. After a while of trying to chase a bug I came up with a hypothesis that my int could be overflowing.

      LOL – this is exactly how I got here. I was writing C code for the gameboy. It has an int which was 8 bits. It was convenient to use these integers for arithmetic and storage of values such as compass headings, pixel addresses, etc. Of course things overflowed, and it was a major bitch to find these and to know where they could be ignored. I had made a bunch of test programs to test different parts of the program and compiled them with MSVC. This permitted me to debug the code and test all the cases. This was a flight instrument, so it was impractical to fly my hang glider every time I needed to check for overflows. So I used SafeInteger from the MS website. This let me run all my tests knowing that any overflows would be trapped. I got all the code working on this basis. I then compiled with the gameboy CPU compiler and loaded it into my gameboy flight instrument. It hardly ever crashed.
      So I was convinced of the value of such a library. I effectively re-implemented it using TMP, added tests, etc., and there you have it.

      Note that using TMP has permitted me to detect many cases which can never overflow and to omit the checking and exceptions there. For example, suppose I store the chess board index in a character. C/C++ promotes these to integers before doing any calculation, so the result can never overflow. The library might trap if you try to store the result back into an 8-bit char, however, and this is valuable information as well.
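
      In plain C++ the promotion effect looks like this:

      #include <cstdint>

      void g() {
          std::int8_t a = 100, b = 100;
          int sum = a + b; // a and b promote to int, so 200 is computed exactly
          std::int8_t c = static_cast<std::int8_t>(a + b); // narrowing back to 8 bits silently truncates (typically to -56)
      }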

      Also note that the above considerations are also addressed for the safe range types.

      Again, the problem is that it’s natural to use C/C++ operations to do integer arithmetic, and computers don’t actually do that. This library traps the errors.

      Of course it’s a free country, and no one HAS to use this library, but I think that it’s valuable. I think that a lot of programs, especially embedded systems, ignore the problem and just use larger types, but that just makes the problem less frequent and harder to reproduce.

      • akrzemi1 says:

        A very interesting discussion. Let’s see where it gets us.

        LOL – this is exactly how I got here. I was making C code for the gameboy.

        This is the usage we appear to agree on, so let’s try to explore it. Suppose I have used safe<int> to debug my program and I indeed ‘trapped’ a bug. What is the next step I will take? I can see two ways to proceed:

        1. I have a bug somewhere. I will fix it and can go back to using int, or proceed to debugging further.
        2. I will realize that my assumption that int will fit all my results was wrong, and I need to use a bigger integer type (and probably debug with safe<bigger_int>).

        In any case I need the safe wrapper only at the stage of debugging, and then when I ship, I turn it back to an “unsafe” integer. Right? In that case what you need for trapping is an assert rather than a throw. I still need a wrapper type (like safe<int>) so that I know at which point to place an assert, but it is definitely an assert that I want as the default implementation of overflow(), not a throw.
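
        Something like the following is what I have in mind (a sketch; the handler’s actual name and signature in the library may differ):

        #include <cassert>

        // user-supplied replacement for the "overflow" customization point
        void overflow(char const * msg) {
            assert(false && msg); // break into the debugger at the offending operation instead of throwing
        }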

        • Robert Ramey says:

          In any case I need the safe wrapper only at the stage of debugging, and then when I ship, I turn it back to an “unsafe” integer. Right? In that case what you need for trapping is an assert rather than a throw. I still need a wrapper type (like safe<int>) so that I know at which point to place an assert, but it is definitely an assert that I want as the default implementation of overflow(), not a throw.

          This is a legitimate way to use it. Note that the library itself doesn’t throw an exception; it calls a “customization point” function called “overflow” whose default implementation is to throw an exception. But the user could override that to do anything else. Since this usage of the library would not be controversial I won’t spend any more time discussing it here.

          The far more interesting case, and the one raised by your original comment, is whether there is a case for using safe numerics as a permanent feature of the code. I believe that there are legitimate uses. My belief derives from the fact that C/C++ arithmetic operations are not the same as mathematical numeric operations, but the language syntax and our normal mental abstractions lead us to believe they are the same. A programmer expects the expression a + b to yield the arithmetic result of the values a + b, and safe integers enforce that while C/C++ integers do not. One might say that the programmer is wrong to expect this, but I think it’s a reasonable expectation on his part. One who agrees with me would also want to agree that usage of safe integers is useful in some number of cases.

          In general we can’t know ahead of time when some overflow will occur, and we’ll have to check for it somehow. See https://www.securecoding.cert.org/confluence/pages/viewpage.action?pageId=270. But adding in all this checking by hand is going to be very tedious and error prone, and the resulting code is going to be extremely tedious and time consuming to verify, if it can be done at all. The safe integer will do all this automatically.

          Your example with position 0-63 on a chess board might be written so that it can be proven to always work, but I think there are a lot of cases where this isn’t true. An obvious example is where we’re using an integer to encode a decimal amount of money in cents. There’s just no way to know ahead of time that someone isn’t going to try to break the bank. Then you’re faced with analyzing every line of code by hand and invoking error handling code where necessary, or using safe. Finally, safe numerics can do a lot of things like recognize that safe<char> + safe<char> can never overflow because of C operand promotion rules, so no checking needs to be done at all.

          There has already been significant discussion about this concept in terms of C++ library standardization. See https://groups.google.com/a/isocpp.org/forum/?fromgroups#!searchin/std-proposals/numeric/std-proposals/-KVPhEKsEQY/rPLY3l9TfcYJ

          • akrzemi1 says:

            Ok, I think I get it: a thrown exception does not indicate a programmer error, but an “implementation limitation”.

            For instance, imagine some type BigInt that can store arbitrarily big integers: it allocates heap memory to fit all the bits. If it runs out of memory, it throws an exception indicating the “implementation limitation” or the lack of resources.

            I understand that your safe<int> does the same, except that its resource is limited to 64 (or similar) bits in a memory location. Did I get your intention right?

            • Robert Ramey says:

              Ok, I think I get it: a thrown exception does not indicate a programmer error, but an “implementation limitation”.

              Right – and not a bad way of looking at it. Wish I’d thought of it.

              We use computers to do arithmetic, so we create syntax such as x + y to represent the arithmetic we want done. This permits us to think in terms of arithmetic or mathematics rather than computer instructions, and to write programs that we can understand. But our computers have (and will always have) limitations on the arithmetic that they are able to do. So when we can’t realize an arithmetic operation as the programmer intended, we throw an exception or call an error function. If we did not want to do this, we would have to go back and re-think our code without the benefit of the arithmetic abstraction. This is a very tedious, error prone and expensive alternative. With safe numerics, we don’t have to do this; the library effectively does it for us.

              Now that you make me think about it, this seems like a good subject for your C++ blog: “what are exceptions good for?”

          • akrzemi1 says:

            Note that the library itself doesn’t throw an exception; it calls a “customization point” function called “overflow” whose default implementation is to throw an exception. But the user could override that to do anything else.

            According to my interpretation of what the docs say and of the implementation, it is not possible to customize what the library does upon overflow. When I have exceptions enabled in my compiler (and I want them), the function overflow() is already defined, so there is no way for me to customize it. Unless you tell me to define BOOST_NO_EXCEPTIONS? But that is not acceptable to me: I still need exceptions in the other Boost libraries I use.

      • akrzemi1 says:

        One good usage example I can think of is this. After a while of trying to chase a bug I came up with a hypothesis that my int could be overflowing.

        LOL – this is exactly how I got here.

        You did not answer my question directly, but I understand you are saying that you expect the library to be used like this. For this use case to be supported and useful, I would need the ability to customize the overflow handler, so that I can insert a debug break or an assert inside it.

        However, I cannot do that unless I define BOOST_NO_EXCEPTIONS. And I do not want to define it, and cannot, because it affects the other Boost libraries I use, and I do not want to change their behavior. Effectively, my use scenario is not supported.

        • Robert Ramey says:

          OK – I just looked at the documentation and I agree this needs some work.

          My current intention is that the user should be able to override the overflow function to do whatever he wants. I don’t remember if this is what I originally intended, but it’s clear that this is what is needed.

  • akrzemi1 says:

    Just to let you know that the documentation under the link is not browsable. I can get to the index page, but when I try to click “Introduction -> Problem”, I get an error.

  • Vicente J. Botet Escriba says:

    Unable to Display Statistics if I’m not logged in :(
