Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Shaun Nykvist: Early Bird registrations nearly sold out for linux.conf.au

Wed, 2016-05-11 13:06

With just over 24 hours to go until registrations close for lca2011, we only have a few early bird registrations left to sell, which means they are very likely to sell out before midnight on 8 November. If you are keen to grab one of these discounted registrations, please visit the registration page as soon as possible and register and pay (you must pay to get the early bird rate). For those of you also looking for the volunteer form, please see the volunteer page.

Shaun Nykvist: lca2011 – prizes and more

Wed, 2016-05-11 13:06

A long time ago the lca2011 team decided to hold a small competition to capture ideas about what people wanted at the next linux.conf.au. We received a lot of responses, including some very, very special requests for things like stickers and cupcakes. While we all wanted a conference full of stickers and pretty cupcakes, and to take on board as many other suggestions as feasible, there was one suggestion the team decided should definitely be adopted: the inclusion of a poster-type session. Alec Clews, having made this suggestion through the competition website, has received a complimentary conference registration including the penguin dinner. Congratulations from the team, Alec!

There are only 26 days to go until lca2011 officially kicks off! Final decisions on numbers for many of the different events will be made shortly, so if you intend to be part of an awesome 2011 linux.conf.au please ensure that you register as soon as possible; time is running out and there is a lot planned for lca2011 delegates. A lot of the plans for lca2011 are now falling into place, and the team will be even busier in the couple of weeks leading up to the conference. We have also received some really cool Loongson Lemote mini computers which will be given away during the conference; a few more details on these shortly. Most of the goodies for the schwag bag have arrived and are waiting to be packed, while we are also testing the Internet connections to the main accommodation venue at Urbanest (straight across the river from the conference venue at South Brisbane).

Shaun Nykvist: lca2011 registrations are open

Wed, 2016-05-11 13:06

Yippee! lca2011 registrations are now open. Here is a direct link to the prices: http://lca2011.linux.org.au/register/prices. Please note that there have been a few changes this year, though we have tried to keep prices as low as possible for the 5-day conference. There is also a new miniconf this year which I am sure will excite a number of people in the community: the Rocket miniconf, which will include a live launch of the rockets built during the miniconf. There is an additional fee attached to this miniconf and places are limited to 24, so if this is something that interests you I would suggest registering as soon as possible so that you do not miss out. More announcements soon; for now I need a lot of sleep.

Shaun Nykvist: linux.conf.au 2011 early bird closes in 10 days

Wed, 2016-05-11 13:06

The early bird tickets for linux.conf.au 2011 are selling quickly, with over half of them sold. It really looks as though they will sell out before 8/11/2010, so if you are keen to take advantage of these discounted prices I would suggest that you act as soon as possible. The conference is on target to be another great lca, with an awesome line-up of keynote speakers, other conference speakers, and activities. The Rocket miniconf is also proving to be very popular, with numbers growing quickly. There will be a launch of the rockets on the Sunday after the main conference finishes (30/01/2011), so this should also be a heap of fun, especially given some of the ideas I am hearing about what people want to build.

Jonathan Adamczewski: What’s the difference between 0xffffffff and 0xffffffffu?

Wed, 2016-05-11 13:06

In C++, what is the difference between 0xffffffff and 0xffffffffu?

This one’s pretty easy to answer with this information from the C++ standard:

The type of an integer literal is the first of the corresponding list in Table 6 in which its value can be represented.

0xffffffff is a hexadecimal constant. It's too big to be represented in a (signed) int, so, by the terms of the standard, the type of 0xffffffff is unsigned int.

Furthermore, each of these hexadecimal literals will have a different type:

0x7fffffff   // int
0xffffffff   // unsigned int
0x1ffffffff  // long int (or long long int)
0x1ffffffffu // unsigned long int (or unsigned long long int)
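These types can be checked at compile time with std::is_same (a quick sketch of my own, not from the original post; it assumes an LP64 platform where int is 32 bits and long is 64 bits, so the two wider literals pick long rather than long long):

#include <type_traits>

// Assumption: LP64 data model (32-bit int, 64-bit long).
// On ILP32 or LLP64 the last two literals have type long long instead.
static_assert(std::is_same<decltype(0x7fffffff), int>::value, "");
static_assert(std::is_same<decltype(0xffffffff), unsigned int>::value, "");
static_assert(std::is_same<decltype(0x1ffffffff), long int>::value, "");
static_assert(std::is_same<decltype(0x1ffffffffu), unsigned long int>::value, "");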

But to answer the original question, there is no difference between 0xffffffff and 0xffffffffu apart from this:

@twoscomplement One is a commonly used curse when the compiler screws up.

— Colin Riley (@domipheus) January 30, 2015

Jonathan Adamczewski: The Growth of Modern C++ Support

Wed, 2016-05-11 13:06


Completing what I started here, I’ve charted the numbers from Christophe’s data for C++11, C++11 Concurrency, C++14 and C++17.

The data is taken entirely from the linked pdf with one exception: N3664 is a clarification that permits optimization, not a requirement for compliance. Compilers that do not perform this optimization are no less compliant with C++14. I’ve recomputed the percentages for all compiler versions to take this into account.

In addition to the references from the previous post, the approval date of C++14 was taken from http://en.wikipedia.org/wiki/C++14

Jonathan Adamczewski: C++14 and volatile implicity

Wed, 2016-05-11 13:06

[Update 2016-03-07: It appears that this was a bug in VS2015, and has been fixed in Update 2 RC]

In the process of upgrading Visual Studio 2012 to Visual Studio 2015, I encountered some brand new link errors that looked something like this:

error LNK2001: unresolved external symbol "public: __cdecl FooData::FooData(struct FooData& const &)"

It’s not a new error in VS2015 — VS2012 can certainly produce it. I mean “new” in the sense that there were no problems linking this code when using the older compiler.

The struct in question looks vaguely like this:

struct FooData
{
    int m_Bar;
    volatile int m_Baz;
};

The problem is m_Baz. In C++14, the language was changed to say that structs are not trivially constructible if they have non-static volatile members. And that, I think, is why there’s no default copy constructor being generated. I can’t quote chapter and verse to back up that assertion, though.

[Update: Actually… maybe not? I’m beginning to wonder if VS2015 is doing the wrong thing here.]

But the fix is simple: add a copy constructor. And then, when the program fails to compile, declare a default constructor (because adding a user-declared copy constructor suppresses the implicit default constructor).
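Putting the two pieces together, the fixed struct might look like this (a minimal sketch of the fix described above; it assumes plain memberwise copying is the desired behaviour):

struct FooData
{
    // Restore the default constructor, which is suppressed by the
    // user-declared copy constructor below.
    FooData() = default;

    // User-provided copy constructor: the memberwise copy the
    // compiler no longer generates for us.
    FooData(const FooData& other)
        : m_Bar(other.m_Bar)
        , m_Baz(other.m_Baz)
    {}

    int m_Bar;
    volatile int m_Baz;
};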

I found developing an understanding of exactly what was happening, and why, to be the more difficult problem. Initially because the compiler gave no indication that there was a problem at all, and willingly generated calls to a copy constructor that couldn't possibly exist. Deeper than that, I'm still trying to piece together my own understanding of exactly why (and how) this change was made to the standard.

Jonathan Adamczewski: The Growth of C++11 Support

Wed, 2016-05-11 13:06

Update: This chart has been updated and I’ve added charts for C++11 Concurrency, C++14, and C++17 here.

A few days ago, Christophe Riccio tweeted a link to a pdf that shows the level of support for “Modern C++” standards in four C++ compilers: Visual C++, GCC, Clang, and ICC.

One of the things I wanted to see was not just how support had advanced between versions of each compiler, but how compilers had changed relative to one another over time. I extracted the numbers for C++11 from Christophe’s document, found the release dates for each compiler, and created a chart that puts it all together.

It’s interesting to see how far behind Clang starts in comparison to the others, and that it ends up in a close dance with GCC on the way to full C++11 support. It also highlights how disappointing VC++ has been in terms of language feature advancement — particularly when VS2010 was ahead of Clang and ICC for C++11 features.

Creating the chart also served as an opportunity to play around with data visualization using Bokeh. As such, you can click on the chart above and you’ll see a version that you can zoom, pan, and resize (which is only a small part of what Bokeh offers). I intend to write about my experiences with Bokeh at a later date.


Release dates for each compiler were taken from the following pages:

The date used to mark the approval of the C++11 standard is taken from http://en.wikipedia.org/wiki/C++11

Jonathan Adamczewski: Standards vs Compilers: Warning C4146

Wed, 2016-05-11 13:06

warning C4146: unary minus operator applied to unsigned type, result still unsigned

I saw this warning recently.

“Aha!” I thought. “A common source of errors, able to strike down the unsuspecting programmer. Thank you crafters of Visual C++ compiler warnings, tirelessly laboring to uncover wrong assumptions and naively written code.”

“What?” I exclaimed. “Of course the result is still unsigned. That’s how the language is designed, and that’s what I wanted!”

Nevertheless, I read the documentation for the warning to see if there was anything I could glean from it — particularly to see if I could find sufficient reason to not just #pragma disable it.

This is what you can find in the documentation:

Unsigned types can hold only non-negative values, so unary minus (negation) does not usually make sense when applied to an unsigned type. Both the operand and the result are non-negative.

Negation of an unsigned value may not seem to make sense if you don't know what it means, but it is well defined. Regardless, this is a level 2 warning. It is designed to catch common mistakes and misunderstandings, and to prompt the programmer to look more closely. It may be an entirely reasonable thing to warn about.

The documentation continues with some rationale:

Practically, this occurs when the programmer is trying to express the minimum integer value, which is -2147483648. This value cannot be written as -2147483648 because the expression is processed in two stages:

  1. The number 2147483648 is evaluated. Because it is greater than the maximum integer value of 2147483647, the type of 2147483648 is not int, but unsigned int.
  2. Unary minus is applied to the value, with an unsigned result, which also happens to be 2147483648.

The first point is wrong. Wrong for a standards-conformant C++ implementation, anyway. The second would be accurate if the first was accurate (because 2^32 - 2^31 == 2^31).

Here’s what the most recent draft of the C++ standard says about the integer literal types:

The type of an integer literal is the first of the corresponding list in Table 6 in which its value can be represented.

2147483648 is a decimal constant with no suffix. When using VC++, with its 32 bit long int type, the first of the corresponding list in which the value can be represented is the 64 bit long long int. An unsigned type is never an option.

Unary minus should then be applied to the long long int 2147483648, which should result in the long long int -2147483648. There's nothing unsigned in this process.

Use of the result should behave in an unsurprising way, too — long long int -2147483648 can be assigned to a variable of type int and nothing unexpected will happen. The type can be converted without affecting the value.
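That conformant behaviour is easy to check directly (a sketch of my own; it assumes a platform where long is 32 bits, as with VC++, since on LP64 systems the literal's type would be long rather than long long):

#include <type_traits>

// Assumption: 32-bit long. On LP64 systems, substitute long for long long.
static_assert(std::is_same<decltype(-2147483648), long long>::value,
              "the literal is a long long, and so is its negation");
static_assert(-2147483648 < 0,
              "nothing unsigned happened: the result really is negative");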

According to the standard, the rationale is flawed, and the warning seems pointless to me.

In theory, there’s no difference between theory and practise

So I tried compiling the example program from the documentation to see what would happen.

// C4146.cpp
// compile with: /W2
#include <stdio.h>

void check(int i)
{
    if (i > -2147483648)   // C4146
        printf_s("%d is greater than the most negative int\n", i);
}

int main()
{
    check(-100);
    check(1);
}

The documentation predicts the following outcome:

The expected second line, 1 is greater than the most negative int, is not printed because ((unsigned int)1) > 2147483648 is false.

If I build the program with gcc 4.9.2, both lines print.

If I build the program with Visual C++ 2012, or even 2015 Preview, only one line is printed (as was predicted).

So there is legitimacy to this warning — this is an area that Visual C++ is not compliant with the standard.

Maybe it’s because the standard has changed? I looked at the earliest version of the text available in the cplusplus github repo dating from late 2011, and that has the same rules as quoted above.

I went back further and found copies of the standard from 2003 and 1998, both of which state:

The type of an integer literal depends on its form, value, and suffix. If it is decimal and has no suffix, it has the first of these types in which its value can be represented: int, long int; if the value cannot be represented as a long int, the behavior is undefined.

So it’s a detail that was previously undefined, which means that the compiler is permitted to do whatever it wants. In this case, we’ll get a warning, but only if the programmer has asked for it using option /W2.

The documentation is accurate, and Visual C++ hasn’t kept up with changes in the standard. This shouldn’t be surprising.

Update: long long int was added to the standard as part of C++11. It appears that VC++ has had long long support since at least Visual Studio .NET 2003.

So what?

This investigation arose from my reading of Visual C++ documentation in the context of what I knew of a recent draft of the C++ standard. It turns out that these two things are less connected than I had assumed. Unsurprisingly, the Visual C++ documentation describes Visual C++, not the standard.

While it would be nice if deviations from the standard were clearly marked in the documentation, and even nicer if the Visual C++ compiler was consistent with the ISO standard, the reality is that they are not and it is not.

One should always pay close attention to context, which happens to apply as much when reading about the C++ language as it does when writing C++ code.

Jonathan Adamczewski: What is -1u?

Wed, 2016-05-11 13:06

In C++, what exactly is -1u?

It doesn't seem like it should be difficult to answer. It's only three characters: -, 1, and u. And, knowing a little bit about C++, it seems like that'll be negative one (-1), with that u making ((-1)u) an unsigned int. Right?

To be more specific, on an architecture where int is a 32 bit type, and negative numbers are represented using two’s complement (i.e. just about all of them), negative one has the binary value 11111111111111111111111111111111. And converting that to unsigned int should … still be those same thirty two ones. Shouldn’t it?

I can test that hypothesis! Here’s a program that will answer the question once and for all:

#include <stdio.h>
#include <type_traits>

int main()
{
    static_assert(std::is_unsigned<decltype(-1u)>::value,
                  "actually not unsigned");
    printf("-1u is %zu bytes, with the value %#08x\n",
           sizeof -1u, -1u);
}

Compile and run it like this:

g++ -std=c++11 minus_one_u.cpp -o minus_one_u && minus_one_u

If I do that, I see the following output:

-1u is 4 bytes, with the value 0xffffffff

I'm using -std=c++11 to be able to use std::is_unsigned, decltype, and static_assert, which combine to assure me that (-1u) is actually unsigned; the program wouldn't have compiled otherwise. And the output shows the result I had hoped for: it's a four byte value, containing 0xffffffff (which is the same as that string of thirty-two ones I was looking for).

I have now proven that -1u means “convert -1 to an unsigned int.” Yay me!

Not so much.

It just so happened that I was reading about integer literals in a recent draft of the ISO C++ standard. Here’s the part of the standard that describes the format of decimal integer literals:

2.14.2 Integer literals
1 An integer literal is a sequence of digits that has no period or exponent part, with optional separating single quotes that are ignored when determining its value. An integer literal may have a prefix that specifies its base and a suffix that specifies its type. The lexically first digit of the sequence of digits is the most significant. A decimal integer literal (base ten) begins with a digit other than 0 and consists of a sequence of decimal digits.

Can you see where it describes negative integer literals?

I can’t see where it describes negative integer literals.

Oh.

I thought -1u was ((-1)u). I was wrong. Integer literals do not work that way.

Obviously -1u didn’t just stop producing an unsigned int with the value 0xffffffff (the program proved it!!1), but the reason it has that value is not the reason I thought.

So, what is -1u?

The standard says that 1u is an integer literal. So now I need to work out exactly what that - is doing. What does it mean to negate 1u? Back to the standard I go.

5.3.1 Unary operators
8 The operand of the unary – operator shall have arithmetic or unscoped enumeration type and the result is the negation of its operand. Integral promotion is performed on integral or enumeration operands. The negative of an unsigned quantity is computed by subtracting its value from 2^n, where n is the number of bits in the promoted operand. The type of the result is the type of the promoted operand.

I feel like I’m getting closer to some real answers.

So there’s a numerical operation to apply to this thing. But first, this:

Integral promotion is performed on integral or enumeration operands.

Believe me when I tell you that this section changes nothing and you should skip it.

I have an integral operand (1u), so integral promotion must be performed. Here is the part of the standard that deals with that:

4.5 Integral promotions
1 A prvalue of an integer type other than bool, char16_t, char32_t, or wchar_t whose integer conversion rank (4.13) is less than the rank of int can be converted to a prvalue of type int if int can represent all the values of the source type; otherwise, the source prvalue can be converted to a prvalue of type unsigned int.

I’m going to cut a corner here: integer literals are prvalues, but I couldn’t find a place in the standard that explicitly declares this to be the case. It does seem pretty clear from 3.10 that they can’t be anything else. This page gives a good rundown on C++ value categories, and does state that integer literals are prvalues, so let’s go with that.

If 1u is a prvalue, and its type is unsigned int, I can collapse the standard text a little:

4.5 Integral promotions (prvalue edition)
A value of an integer type whose integer conversion rank (4.13) is less than the rank of int …

and I’m going to stop right there. Conversion rank what now? To 4.13!

4.13 Integer conversion rank
1 Every integer type has an integer conversion rank defined as follows:

Then a list of ten different rules, including this one:

— The rank of any unsigned integer type shall equal the rank of the corresponding signed integer type.

Without knowing more about conversion ranks, this rule gives me enough information to determine what 4.5 means for unsigned int values: unsigned int has the same rank as int. So I can rewrite 4.5 one more time like this:

4.5 Integral promotions (unsigned int edition)
1 [This space intentionally left blank]

Integral promotion of an unsigned int value doesn’t change a thing.
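By contrast, a narrower unsigned type really is promoted, and that changes the result type of unary minus (a small sketch of my own, assuming the usual case where int can represent all unsigned short values):

#include <type_traits>

constexpr unsigned short s = 1;
// unsigned short promotes to int (int can represent all its values),
// so the negation is a plain signed int with the value -1.
static_assert(std::is_same<decltype(-s), int>::value, "");
static_assert(-s == -1, "");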

Where was I?

Now I can rewrite 5.3.1 with the knowledge that 1u requires no integral promotion:

5.3.1 Unary operators (unsigned int operand edition)
8 The [result of] the unary – operator … is the negation of its operand. The negative of an unsigned quantity is computed by subtracting its value from 2^n, where n is the number of bits in the promoted operand. The type of the result is the type of the operand.

And, at long last, I get to do the negating. For an unsigned value that means:

[subtract] its value from 2^n, where n is the number of bits in the promoted operand.

My unsigned int has 32 bits, so that would be 2^32 - 1. Which in hexadecimal looks something like this:

  0x100000000
- 0x000000001
  0x0ffffffff

But that leading zero I’ve left on the result goes away because

The type of the result is the type of the (promoted) operand.

And I am now certain that I know how -1u becomes an unsigned int with the value 0xffffffff. In fact, it's not even dependent on having a platform that uses two's complement: nothing in the conversion relies on that.
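The conclusion can be pinned down with a compile-time check (my sketch; it assumes a 32-bit unsigned int):

// 2^32 - 1 and 2^32 - 2, by the subtract-from-2^n rule above.
static_assert(-1u == 0xffffffffu, "");
static_assert(-2u == 0xfffffffeu, "");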

But… when could this possibly ever matter?

For -1u? I don’t see this ever causing actual problems. There are situations that arise from the way that C++ integer literals are defined that can cause surprises (i.e. bugs) for the unsuspecting programmer.

There is a particular case described in the documentation for Visual C++ compiler warning C4146. I think the rationale for that warning is wrong (or, at least, imprecise), but not because of something I've covered in this article. As I've already written far too many words about these three characters, I'll keep that discussion for some time in the future.

Jonathan Adamczewski: Aside: Over-engineered Min() [C++, variadic templates, constexpr, fold left]

Wed, 2016-05-11 13:06

Q: Given a function constexpr int Min(int a, int b), construct a function constexpr int Min(Args... args) that returns the minimum of all the provided args. Fail to justify your over-engineering.

A: Rename Min(int, int) as MinImpl(int, int) or stick it in a namespace. Overloading the function is not only unnecessary, it gets in the way of the implementation.

constexpr int MinImpl(int a, int b)
{
    return a < b ? a : b;
}

Implement a constexpr fold left function. If we can use it for Min(), we should be able to do the same for Max(), and other similar functions. Should we be able to find any (#prematuregeneralization).

template<typename ArgA, typename ArgB, typename Func>
constexpr auto foldl(Func func, ArgA a, ArgB b)
{
    return func(a, b);
}

template<typename ArgA, typename ArgB, typename Func, typename ...Args>
constexpr auto foldl(Func func, ArgA a, ArgB b, Args... args)
{
    return foldl(func, func(a, b), args...);
}

Combine the two.

template<typename ...Args>
constexpr auto Min(Args... args)
{
    return foldl(MinImpl, args...);
}

Add the bare minimum amount of testing for a constexpr function: slap a static_assert() on it.

static_assert(Min(6, 4, 5, 3, 9) == 3, "Nope");

I did so with Visual Studio 2015 Update 2. It did not object.
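As a side note on the #prematuregeneralization point above: the same foldl really does work unchanged for a Max() (my sketch, following the same pattern and reusing the foldl defined earlier):

constexpr int MaxImpl(int a, int b)
{
    return a > b ? a : b;
}

template<typename ...Args>
constexpr auto Max(Args... args)
{
    return foldl(MaxImpl, args...);
}

static_assert(Max(6, 4, 5, 3, 9) == 9, "Nope");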

Addendum: Some discussion with @nlguillemot and @DrPizza led to this attempt to do something similar with a C++17/C++1z fold-expression:

#include <limits.h>

constexpr int MinImpl1(int a, int b) { return a < b ? a : b; }
constexpr void MinImpl2(int* m, int a, int b) { *m = a < b ? a : b; }

template<typename ...Args>
constexpr int Min(Args... args)
{
    int m = INT_MAX;
    // a binary expression in an operand of a fold-expression
    // is not allowed, so this won't compile:
    //((m = MinImpl1(m, args)), ...);
    // But this does:
    (MinImpl2(/*out*/&m, m, args), ...);
    return m;
}

int main()
{
    static_assert(Min(3,4,5) == 3, "nope");
}

This compiles with a gcc-6 pre-release snapshot.

Update: Here’s a further updated version, based on a refinement by @dotstdy.

Jonathan Adamczewski: floats, bits, and constant expressions

Wed, 2016-05-11 13:06

Can you access the bits that represent an IEEE754 single precision float in a C++14 constant expression (constexpr)?

(Why would you want to do that? Maybe you want to run a fast inverse square root at compile time. Or maybe you want to do something that is actually useful. I wanted to know if it could be done.)

For context: this article is based on experiences using gcc-5.3.0 and clang-3.7.1 with -std=c++14 -march=native on a Sandy Bridge Intel i7. Where I reference sections from the C++ standard, I’m referring to the November 2014 draft.

Before going further, I’ll quote 5.20.6 from the standard:

Since this International Standard imposes no restrictions on the accuracy of floating-point operations, it is unspecified whether the evaluation of a floating-point expression during translation yields the same result as the evaluation of the same expression (or the same operations on the same values) during program execution. [88]

[88] Nonetheless, implementations are encouraged to provide consistent results, irrespective of whether the evaluation was performed during translation and/or during program execution.

In this post, I document things that worked (and didn’t work) for me. You may have a different experience.

Methods of conversion that won’t work

(Error text from g++-5.3.0)

You can’t access the bits of a float via a typecast pointer [which is undefined behavior, and covered by 5.20.2.5]:

constexpr uint32_t bits_cast(float f)
{
    return *(uint32_t*)&f; // [2]
}

error: accessing value of 'f' through a 'uint32_t {aka unsigned int}' glvalue in a constant expression

You can’t convert it via a reinterpret cast [5.20.2.13]

constexpr uint32_t bits_reinterpret_cast(float f)
{
    const unsigned char* cf = reinterpret_cast<const unsigned char*>(&f);
    // endianness notwithstanding
    return (cf[3] << 24) | (cf[2] << 16) | (cf[1] << 8) | cf[0];
}

error: '*(cf + 3u)' is not a constant expression

(gcc reports an error with the memory access, but does not object to the reinterpret_cast. clang produces a specific error for the cast.)

You can’t convert it through a union [gcc, for example, permits this for non-constant expressions, but the standard forbids it in 5.20.2.8]:

constexpr uint32_t bits_union(float f)
{
    union Convert
    {
        uint32_t u;
        float f;
        constexpr Convert(float f_) : f(f_) {}
    };
    return Convert(f).u;
}

error: accessing 'bits_union(float)::Convert::u' member instead of initialized 'bits_union(float)::Convert::f' member in constant expression

You can’t use memcpy() [5.20.2.2]:

constexpr uint32_t bits_memcpy(float f)
{
    uint32_t u = 0;
    memcpy(&u, &f, sizeof f);
    return u;
}

error: 'memcpy(((void*)(&u)), ((const void*)(&f)), 4ul)' is not a constant expression

And you can’t define a constexpr memcpy()-like function that is capable of the task [5.20.2.11]:

constexpr void* memcpy(void* dest, const void* src, size_t n)
{
    char* d = (char*)dest;
    const char* s = (const char*)src;
    while (n-- > 0)
        *d++ = *s++;
    return dest;
}

constexpr uint32_t bits_memcpy(float f)
{
    uint32_t u = 0;
    memcpy(&u, &f, sizeof f);
    return u;
}

error: accessing value of 'u' through a 'char' glvalue in a constant expression

So what can you do?

Floating point operations in constant expressions

For constexpr float f = 2.0f, g = 2.0f the following operations are available [as they are not ruled out by anything I can see in 5.20]:

  • Comparison of floating point values e.g.
    static_assert(f == g, "not equal");
  • Floating point arithmetic operations e.g.
    static_assert(f * 2.0f == 4.0f, "arithmetic failed");
  • Casts from float to integral value, often with well-defined semantics e.g.
    constexpr int i = (int)2.0f; static_assert(i == 2, "conversion failed");

So I wrote a function (uint32_t bits(float)) that will return the binary representation of an IEEE754 single precision float. The full function is at the end of this post. I’ll go through the various steps required to produce (my best approximation of) the desired result.

1. Zero

When bits() is passed the value zero, we want this behavior:

static_assert(bits(0.0f) == 0x00000000);

And we can have it:

if (f == 0.0f) return 0;

Nothing difficult about that.

2. Negative zero

In IEEE754 land, negative zero is a thing. Ideally, we’d like this behavior:

static_assert(bits(-0.0f) == 0x80000000)

But the check for zero also matches negative zero. Negative zero is not something that the C++ standard has anything to say about, given that IEEE754 is an implementation choice [3.9.1.8: “The value representation of floating-point types is implementation defined”]. My compilers treat negative zero the same as zero for all comparisons and arithmetic operations. As such, bits() returns the wrong value when considering negative zero, returning 0x00000000 rather than the desired 0x80000000.

I did look into other methods for detecting negative zero in C++, without finding something that would work in a constant expression. I have seen divide by zero used as a way to detect negative zero (resulting in ±infinity, depending on the sign of the zero), but that doesn’t compile in a constant expression:

constexpr float r = 1.0f / -0.0f;

error: '(1.0e+0f / -0.0f)' is not a constant expression

and divide by zero is explicitly named as undefined behavior in 5.6.4, and so by 5.20.2.5 is unusable in a constant expression.

Result: negative zero becomes positive zero.

3. Infinity

We want this:

static_assert(bits(INFINITY) == 0x7f800000);

And this:

else if (f == INFINITY) return 0x7f800000;

works as expected.

4. Negative Infinity

Same idea, different sign:

static_assert(bits(-INFINITY) == 0xff800000);

else if (f == -INFINITY) return 0xff800000;

Also works.

5. NaNs

There’s no way to generate arbitrary NaN constants in a constant expression that I can see (not least because casting bits to floats isn’t possible in a constant expression, either), so it seems impossible to get this right in general.

In practice, maybe this is good enough:

static_assert(bits(NAN) == 0x7fc00000);

NaN values can be anywhere in the range of 0x7f800001 -- 0x7fffffff and 0xff800001 -- 0xffffffff. I have no idea as to the specific values that are seen in practice, nor what they mean. 0x7fc00000 shows up in /usr/include/bits/nan.h on the system I’m using to write this, so — right or wrong — I’ve chosen that as the reference value.

It is possible to detect a NaN value in a constant expression, but not its payload. (At least that I’ve been able to find). So there’s this:

else if (f != f) // NaN
    return 0x7fc00000; // This is my NaN...

Which means that of the 2*(2^23 - 1) possible NaNs, one will be handled correctly (in this case, 0x7fc00000). For the other 16,777,213 values, the wrong value will be returned (in this case, 0x7fc00000).

So… partial success? NaNs are correctly detected, but the bits for only one NaN value will be returned correctly.

(On the other hand, the probability that it will ever matter could be stored as a denormalized float)

6. Normalized Values

// pseudo-code
static_assert(bits({ 0x1p-126f,  ..., 0x1.ffff7p127 })  == { 0x00800000, ..., 0x7f7fffff });
static_assert(bits({ -0x1p-126f, ..., -0x1.ffff7p127 }) == { 0x80800000, ..., 0xff7fffff });

[That 0x1pnnnf format happens to be a convenient way to represent exact values that can be stored as binary floating point numbers]

It is possible to detect and correctly construct bits for every normalized value. It does require a little care to avoid truncation and undefined behavior. I wrote a few different implementations; the one that I describe here requires relatively little code, and doesn't perform terribly [0].

The first step is to find and clear the sign bit. This simplifies subsequent steps.

bool sign = f < 0.0f;
float abs_f = sign ? -f : f;

Now we have abs_f — it’s positive, non-zero, non-infinite, and not a NaN.

What happens when a float is cast to an integral type?

uint64_t i = (uint64_t)f;

The value of f will be stored in i, according to the following rules:

  • The value will be rounded towards zero which, for positive values, means truncation of any fractional part.
  • If the value in f is too large to be represented as a uint64_t (i.e. f > 2^64 - 1) the result is undefined.

If truncation takes place, data is lost. If the number is too large, the result is (probably) meaningless.

For our conversion function, if we can scale abs_f into a range where it is not larger than (2^64 - 1), and it has no fractional part, we have access to an exact representation of the bits that make up the float. We just need to keep track of the amount of scaling being done.

Single precision IEEE 754 floating point numbers have, at most, (23+1) bits of precision (23 in the significand, 1 implicit). This means that we can scale down large numbers and scale up small numbers into the required range.

Multiplying by a power of two changes only the exponent of the float, and leaves the significand unmodified. As such, we can arbitrarily scale a float by a power of two and, so long as we don't over- or under-flow the float, we will not lose any of the bits in the significand.
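That property can itself be checked in a constant expression (a sanity-check sketch of my own; the chosen value is exactly representable and the scaling stays well within float range):

// Scaling by 2^41 and back is exact: only the exponent changes.
constexpr float x = 0x1.fffffep0f; // all 24 significand bits set
static_assert(x * 0x1p41f * 0x1p-41f == x, "power-of-two scaling is lossless");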

For the sake of simplicity (believe it or not [1]), my approach is to scale abs_f in steps of 2^41 so that (abs_f ≥ 2^87) like so:

int exponent = 254;
while (abs_f < 0x1p87f)
{
    abs_f *= 0x1p41f;
    exponent -= 41;
}

If abs_f ≥ 2^87, the least significant bit of abs_f, if set, is 2^(87-23) == 2^64.

Next, abs_f is scaled back down by 2^64 (which adds no fractional part, as the least significant bit is 2^64) and converted to an unsigned 64 bit integer.

uint64_t a = (uint64_t)(abs_f * 0x1p-64f);

All of the bits of abs_f are now present in a, without overflow or truncation. All that is needed now is to determine where they are:

int lz = count_leading_zeroes(a);

adjust the exponent accordingly:

exponent -= lz;

and construct the result:

uint32_t significand = (a << (lz + 1)) >> (64 - 23); // [3]
return (sign << 31) | (exponent << 23) | significand;

With this, we have correct results for every normalized float.

7. Denormalized Values

// pseudo-code
static_assert(bits({ 0x1.0p-149f,  ..., 0x1.ffff7p-127f })  == { 0x00000001, ..., 0x007fffff });
static_assert(bits({ -0x1.0p-149f, ..., -0x1.ffff7p-127f }) == { 0x80000001, ..., 0x807fffff });

The final detail is denormalized values. Handling of normalized values as presented so far fails because denormals will have additional leading zeroes. They are fairly easy to account for:

if (exponent <= 0)
{
    exponent = 0;
    lz = 8 - 1;
}

To attempt to demystify that lz = 8 - 1 a little: there are 8 leading bits that aren't part of the significand of a denormalized single precision float after the repeated 2^41 scaling that has taken place. There is also no leading 1 bit of the kind present in all normalized numbers (which is accounted for in the calculation of significand above as (lz + 1)). So the leading zero count (lz) is set to account for the 8 bits of offset to the start of the denormalized significand, minus the one that the subsequent calculation assumes it needs to skip over.

And that’s it. All the possible values of a float are accounted for.

(Side note: If you’re compiling with -ffast-math, passing denormalized numbers to bits() will return invalid results. That’s -ffast-math for you. With gcc or clang, you could add an #ifdef __FAST_MATH__ around the test for negative exponent.)

Conclusion

You can indeed obtain the bit representation of a floating point number at compile time. Mostly. Negative zero is wrong, NaNs are detected but otherwise not accurately converted.

Enjoy your compile-time bit-twiddling!

The whole deal:

// Based on code from
// https://graphics.stanford.edu/~seander/bithacks.html
constexpr int count_leading_zeroes(uint64_t v)
{
    constexpr char bit_position[64] = {
         0,  1,  2,  7,  3, 13,  8, 19,
         4, 25, 14, 28,  9, 34, 20, 40,
         5, 17, 26, 38, 15, 46, 29, 48,
        10, 31, 35, 54, 21, 50, 41, 57,
        63,  6, 12, 18, 24, 27, 33, 39,
        16, 37, 45, 47, 30, 53, 49, 56,
        62, 11, 23, 32, 36, 44, 52, 55,
        61, 22, 43, 51, 60, 42, 59, 58 };

    v |= v >> 1; // first round down to one less than a power of 2
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    v |= v >> 32;
    v = (v >> 1) + 1;

    return 63 - bit_position[(v * 0x0218a392cd3d5dbf) >> 58]; // [3]
}

constexpr uint32_t bits(float f)
{
    if (f == 0.0f)
        return 0; // also matches -0.0f and gives wrong result
    else if (f == INFINITY)
        return 0x7f800000;
    else if (f == -INFINITY)
        return 0xff800000;
    else if (f != f) // NaN
        return 0x7fc00000; // This is my NaN...

    bool sign = f < 0.0f;
    float abs_f = sign ? -f : f;

    int exponent = 254;
    while (abs_f < 0x1p87f)
    {
        abs_f *= 0x1p41f;
        exponent -= 41;
    }

    uint64_t a = (uint64_t)(abs_f * 0x1p-64f);
    int lz = count_leading_zeroes(a);
    exponent -= lz;

    if (exponent <= 0)
    {
        exponent = 0;
        lz = 8 - 1;
    }

    uint32_t significand = (a << (lz + 1)) >> (64 - 23); // [3]
    return (sign << 31) | (exponent << 23) | significand;
}
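And a few compile-time spot checks of the finished function (my own sketch, not from the post; it assumes <stdint.h> and <math.h> have been included for uint32_t and INFINITY):

// Well-known IEEE754 single precision encodings.
static_assert(bits(1.0f)  == 0x3f800000, "1.0f");
static_assert(bits(0.5f)  == 0x3f000000, "0.5f");
static_assert(bits(-2.0f) == 0xc0000000, "-2.0f");
static_assert(bits(INFINITY) == 0x7f800000, "infinity");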

[0] Why does runtime performance matter? Because that’s how I tested the conversion function while implementing it. I was applying Bruce Dawson’s advice for testing floats and the quicker I found out that I’d broken the conversion the better. For the implementation described in this post, it takes about 97 seconds to test all four billion float values on my laptop — half that time if I wasn’t testing negative numbers (which are unlikely to cause problems due to the way I handle the sign bit). The implementation I’ve described in this post is not the fastest solution to the problem, but it is relatively compact, and well behaved in the face of -ffast-math.

Admission buried in a footnote: I have not validated correct behavior of this code for every floating point number in actual compile-time constant expressions. Compile-time evaluation of four billion invocations of bits() takes more time than I’ve been willing to invest so far.

[1] It is conceptually simpler to multiply abs_f by two (or one half) until the result is exactly positioned so that no leading zero count is required after the cast — at least, that was what I did in my first attempt. The approach described here was found to be significantly faster. I have no doubt that better-performing constant-expression-friendly approaches exist.

[2] Update 2016-03-28: Thanks to satbyy for pointing out the missing ampersand — it was lost sometime after copying the code into the article.

[3] Update 2016-03-28: Thanks to louiswins for pointing out additional code errors.

Jonathan Adamczewski: Another another C++11 ‘countof’

Wed, 2016-05-11 13:06

My earlier post received this comment, which is a pretty neat little improvement over the one from g-truc.net.

Here it is, with one further tweak:

template<typename T, std::size_t N>
constexpr std::integral_constant<std::size_t, N> countof(T const (&)[N]) noexcept
{
    return {};
}

#define COUNTOF(...) decltype(countof(__VA_ARGS__))::value

The change I’ve made to pfultz2’s version is to use ::value rather than {} after decltype in the macro.

This makes the type of the result std::size_t rather than std::integral_constant, so it can be used in va_arg contexts without triggering compiler or static analysis warnings.

It also has the advantage of not triggering extra warnings in VS2015U1 (this issue).
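A quick usage sketch (mine, not from either post):

int xs[42];
static_assert(COUNTOF(xs) == 42, "count as a compile-time constant");

// Because the result is a std::size_t constant expression, it can
// size another array:
char buffer[COUNTOF(xs)];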

Jonathan Adamczewski: Another C++11 ‘countof’

Wed, 2016-05-11 13:06

Note: There’s an update here.

Read “Better array ‘countof’ implementation with C++ 11” for context. Specifically, it presents Listing 5 as an implementation of countof() using C++11 constexpr:

template<typename T, std::size_t N>
constexpr std::size_t countof(T const (&)[N]) noexcept
{
    return N;
}

But this falls short. Just a little.

There are arguments that could be passed to a naive sizeof(a)/sizeof(a[0]) macro that will cause the above to fail to compile.

Consider:

struct S
{
    int a[4];
};

void f(S* s)
{
    constexpr size_t s_a_count = countof(s->a);
    int b[s_a_count];
    // do things...
}

This does not compile. s is not constexpr, and countof() is a constexpr function whose result is needed at compile time, so it expects a constexpr-friendly argument, even though the argument's value is not used.

Errors from this kind of thing can look like this from clang-3.7.0:

error: constexpr variable 's_a_count' must be initialized by a constant expression
note: read of non-constexpr variable 's' is not allowed in a constant expression

or this from Visual Studio 2015 Update 1:

error: C2131: expression did not evaluate to a constant

(Aside: At the time of writing, the error C2131 seems to be undocumented for VS2015. But Visual Studio 6.0 had an error with the same number)

Here’s a C++11 version of countof() that will give the correct result for countof(s->a) above:

#include <type_traits>

template<typename Tin>
constexpr std::size_t countof()
{
    using T = typename std::remove_reference<Tin>::type;
    static_assert(std::is_array<T>::value,
                  "countof() requires an array argument");
    static_assert(std::extent<T>::value > 0, // [0]
                  "zero- or unknown-size array");
    return std::extent<T>::value;
}

#define countof(a) countof<decltype(a)>()

Some of the details:

Adding a countof() macro allows use of decltype() in the caller’s context, which provides the type of the member array of a non-const object at compile time.

std::remove_reference is needed to get the array type from the result of decltype(). Without it, std::is_array and std::extent produce false and zero, respectively.

The first static assert ensures that countof() is being called on an actual array. The upside over failed template instantiation or specialization is that you can write your own human-readable, slightly more context aware error message (better than mine).

The second static assert validates that the array size is known, and is greater than zero. Without it, countof<int[]>() will return zero (which will be wrong) without error. And zero-sized arrays will also result in zero — in practice they rarely actually contain zero elements. This isn’t a function for finding the size of those arrays.

And then std::extent<T>::value produces the actual count of the elements of the array.
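With that, the motivating example compiles (a sketch assuming the countof() macro above is in scope):

struct S { int a[4]; };

void f(S* s)
{
    // Only decltype(s->a) is used, so the non-constexpr s is no problem.
    constexpr size_t s_a_count = countof(s->a);
    static_assert(s_a_count == 4, "s->a has four elements");
    int b[s_a_count];
    (void)b;
    // do things...
}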

Addendum:

If replacing an existing sizeof-based macro with a constexpr countof() alternate, Visual Studio 2015 Update 1 will trigger warnings in certain cases where there previously were no warnings.

warning C4267: conversion from 'size_t' to 'int', possible loss of data

It is unfortunate to have to add explicit casts when the safety of such operations is able to be determined by the compiler. I have optimistically submitted this as an issue at connect.microsoft.com.

[0] Typo fix thanks to this commenter

Gabriel Noronha: The EV posts are coming!

Wed, 2016-05-11 13:06
July 2013 it arrived

In July we purchased a 2012 I-MIEV ex-demo (Dec 2012 rego) from Booths Motor Group, Gosford, for $25,000. It's now November, so in 3-4 months we have done ~5000km. It's had its 1500km service, and the next service is at 15,000km or 1 year.

Driving an EV is like driving a classic car: you rarely see them on the road, and there is a nice community around them.

Like I said, this is post number one; more are coming!

Gabriel Noronha: NSW Solar Feed in Tariff

Wed, 2016-05-11 13:06

If you were lucky enough to install your solar panels before 30 June 2012, then you have nothing to worry about: the NSW government guarantees 60c for those who joined the scheme before 28/10/2010, or 20c for those who joined later. I didn't own my house before that date, so I missed out; we'll mainly be looking at the VFIT (voluntary feed-in tariff).

So, to save you the research, I've put it in a table below, using the Energy Made Easy website to look up power companies available in NSW.

Company          VFIT c/kWh   GreenPower   Website
Click Energy     10           No           Click Energy
AGL              8            Yes          AGL
Diamond Energy   8            No**         Diamond Energy
Power Direct     7.7          Yes          Power Direct
EnergyAustralia  7.7*         Yes          Energy Australia
Lumo             6.6          Yes          Lumo Energy
Origin           6            Yes          Origin Energy
Red Energy       5*           Yes          Red Energy

*Can't confirm rate on company website; need to ring for a quote.
**Doesn't offer certified green energy, but does own green energy generators and no fossil fuel generators.

I export 3199.9 kWh annually, so at 8c that's $255.99, and at the top rate of 10c/kWh it's $319.99; so 10c gives me an extra $64 per year. But that's only a saving if the usage rate is the same.

There are other things to take into account when choosing a provider though, like the rates and discounts. For example, Click charges 27.39 c/kWh, less a 7% discount giving 25.47 c/kWh, which beats the discounted AGL rate of 26.44 c/kWh. Diamond Energy with discount is ~26.56 c/kWh. The daily service charge will also have to be factored in, but it has less of an effect on the bill size, with Click offering 78.10 c/day, AGL 74.877 c/day, and Diamond Energy 78.10 c/day.
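Spelled out as a quick worked check (my arithmetic, using the figures above):

VFIT:  3199.9 kWh x 8c = $255.99/year; x 10c = $319.99/year (difference $64.00)
Usage: Click    27.39 c/kWh x (1 - 0.07) = 25.47 c/kWh
       AGL      26.44 c/kWh (discounted)
       Diamond  ~26.56 c/kWh (discounted)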

I'm currently with AGL, but I will be doing some further number crunching to work out if I can get enough savings out of Click Energy to justify the contract break fee that AGL will charge me if I leave.
I also disapprove of AGL's submission to the RET review, so I'm not so willing to give them any more money.

Click Energy doesn't provide green power, which is a slight sticking point, as purchasing green energy means not that my power comes from green sources but that my money goes to them. But that can be purchased separately to your energy bill, and more directly from green providers, so that might be an option.

Gabriel Noronha: New Electricity Retailer

Wed, 2016-05-11 13:06

So after crunching some more numbers and reading the Greenpeace green energy guide, I decided to change electricity retailers. Based on my need for a high VFIT (see previous post), it was a choice between AGL (current provider), Click Energy and Diamond Energy.

Power Saving Calculations

OK, so the savings comparison is not completely fair on AGL: $55 of that $70 saving is the 100% green energy which I'm no longer buying, as Click doesn't offer it on their solar plan. But I can buy green energy from an environmental trust for 4.2c/kWh, and it's a tax deduction.

Click saved me the most money and has no contract, compared to AGL's 3 year killer and Diamond's 1 year one. It was also rated by Greenpeace as middle-range green. I've decided to move to Click Energy; I'll officially switch at my next meter read.

What about gas? Well, it's going to be switched later, when Click supports it. From Twitter today:

It’s official! We’re pleased to announce Click Energy will be a #naturalgassupplier by the end of the year http://t.co/SOtVNIIDJK

— Click Energy (@click4energy) September 4, 2014

If I've convinced you to switch and you want to get $50, Click has a mates rates referral program: drop me a message and we'll go from there.

Gabriel Noronha: EVSE for Sun Valley Tourist Park

Wed, 2016-05-11 13:06

So you might have seen a couple of posts about Sun Valley Tourist Park; that is because we go there a lot to visit grandma and grandpa (my wife's parents). Because it's outside our return range, we have to charge there to get home if we take the I-MIEV, but the Electric Vehicle Supply Equipment (EVSE) that comes with the car limits the charge rate to 10 amps max. So we convinced the park to install a 32 amp EVSE. This allows us to charge at the I-MIEV's full rate of 13 amps, so 30% faster.

Aeroviroment EVSE-RS at Sun Valley

If you want to know more about the EVSE, it's an AeroVironment EVSE-RS. It should work fine with the Holden Volt, Mitsubishi Outlander PHEV, I-MIEV 2012 or later (it may not work with 2010 models) and the Nissan LEAF.

If you are in the Central Coast and want somewhere to charge, you can find the details on how to contact the park on PlugShare. It's available for public use during office hours, depending on how busy the park is, with the driver paying a nominal fee and phoning ahead.


Gabriel Noronha: Charging Infrastructure

Wed, 2016-05-11 13:06

A lot of people ask where we charge. For nearly all EV drivers, the answer is at home. Sometimes the next question is whether you need special equipment, to which the answer is a powerpoint (more specifically, a 15 amp powerpoint for the cable provided with the LEAF or I-MIEV). When out and about, we have the following options.

 Commercial Infrastructure

There are two providers of commercial charging infrastructure in Australia, both American: ChargePoint and Blink. There was a third, Better Place, but unfortunately that company went broke. Blink is yet to set up an Australian office, so they are a bit harder to contact.

ChargePoint has an office in every state of Australia and has around 167 charge stations in the country. The ChargePoint model is low risk for them: it requires the person or business that wants a charging station to pay the capital costs of supplying and installing the charger. It's then up to the charging station owner whether they want to charge the EV driver; ChargePoint, through the RFID tags issued to drivers, takes care of the payment system and charges the driver accordingly. At present, all the ones in Australia are free to use.

As you can see, the Blink network is much smaller, with only 5 sites and 7 chargers. Blink doesn't let the site owner choose the price, but instead charges USD$1 per hour. I'm still waiting to hear from Blink sales on whether they have plans to expand in Australia.

Community Infrastructure

What if you don't want to charge people, or just want to provide a simple power point? For these sites there is a great website that EV drivers and charging spot owners can use to share information: http://www.recargo.com/search (you might have to pan to Australia). It allows you to sign up and add charge points to their map, tell other EV drivers that a charging spot works by checking in (think Foursquare and Facebook), and upload pictures to help people find charge locations. This has probably been the most useful tool so far when it comes to charging infrastructure; I highly recommend all EV drivers install the app on iPhone or Android.

Encouraging Infrastructure

Currently there still isn't enough charging infrastructure, not because current EV drivers need it, but mainly because the lack of it puts people off buying an EV.

In a small effort to make it easier for businesses to understand what's required to provide a service to EV drivers, I prepared a primer:

Electric Vehicle Charging Solutions for Businesses

Hopefully other EV drivers can use this when negotiating with companies about adding a charging station to their site.

Gabriel Noronha: Kickstarter Field Hockey Game

Wed, 2016-05-11 13:06

For years in high school I had my hockey-mad teammates telling me how great a field hockey game would be…

Looks like someone finally listened. There is currently a Kickstarter campaign to get one made: http://www.kickstarter.com/projects/urbanwarfarestudios/the-field-hockey-game-pc-mac-linux

Love hockey but think gaming is a waste of time? Well, think of this as promotion of the sport, well worth your time. I've backed it!