Review of the Purism Librem 13

Towards the end of last year, I got a new laptop: the Purism Librem 13. It replaced the Lenovo ThinkPad X250 that I was using previously, which maxed out at 8 GB RAM and was beginning to be unusable for Firefox builds.

This is my first professional laptop that isn’t a ThinkPad; as I’ve now been using it for over half a year, I thought I’d write some brief notes on what my experience with it has been like.

Why Purism?

My main requirement from a work point of view was having at least 16 GB RAM while staying in the same weight category as the X250. There were options meeting those criteria in the ThinkPad line (like the X270 or newer generations of X1 Carbon), so why did I choose Purism?

Purism is a social benefit corporation that aims to make laptops that respect your privacy and freedom — at the hardware and firmware levels in addition to software — while remaining competitive with other productivity laptops in terms of price and specifications.

The freedom-respecting features of the Librem 13 that you don’t typically find in other laptops include:

  • Hardware kill switches for WiFi/Bluetooth and the microphone/camera
  • An open-source bootloader (coreboot)
  • A disabled Intel Management Engine, a component of Intel CPUs that runs proprietary software at (very) elevated privilege levels, which Intel makes very hard to disable or replace
  • An attempt to ship hardware components with open-source firmware, though this is very much a work in progress
  • Tamper evidence via Heads, though this is a newer feature and was not available at the time I purchased my Librem 13.

These are features I’ve long wanted in my computing devices, and it was exciting to see someone producing competitively priced laptops with all the relevant configuration, sourcing of parts, compatibility testing etc. done for you.

Hardware

Material

The Librem’s aluminum chassis looks nicer and feels sturdier than the X250’s plastic one.

Screen

At 13.3″, the Librem’s screen size is a small but noticeable and welcome improvement over the X250’s 12.5″.

The X250 traded off screen size for battery life. It’s the same weight as the 14″ ThinkPad X1 Carbon; the weight savings from a smaller screen size go into extra thickness, which allows for a second battery. I was pleased to see that the Librem, which is the same thickness as the X1 Carbon and only has one battery, has comparable battery life to the X250 (5-6 hours on an average workload).

The Librem’s screen is not a touchscreen. I noticed this because I used the X250’s touchscreen to test touch event support in Firefox, but I don’t think the average user has much use for a touchscreen on a conventional laptop (it’s more useful on 2-in-1 laptops, a form factor Purism also offers, and that model does have a touchscreen), so I don’t hold this against Purism.

The maximum swivel angle between the Librem’s keyboard and its screen is 130 degrees, compared to the X250’s almost 180 degrees. I did occasionally use the X250’s greater swivel angle (e.g. when lying on a couch), but I didn’t find its absence in the Librem to be a significant issue.

Touchpad

The one feature of ThinkPad laptops that I miss the most in the Librem is the TrackPoint, the red pointing stick in the middle of the keyboard that allows you to move the cursor without having to move your hand down to the touchpad. I didn’t realize how much I relied on it until I didn’t have it, though I’ve been getting by without it. (I view it as additional motivation to use the keyboard more and the cursor less.)

Also missing in the Librem are the buttons above the touchpad for left-, right-, and middle-clicking; you instead have to click by tapping the touchpad with one, two, or three fingers (respectively), which I find more awkward and prone to accidental taps.

Finally, while I haven’t noticed this much myself (I tend not to be very discerning in this area), several people who have briefly used my Librem have commented that its touchpad is significantly less sensitive than the touchpads they’re used to.

Keyboard

The Librem’s keys feel better to press than the X250’s. However, I’ve found that you have to hit the keys fairly close to their centre for a press to register; the X250’s keys were more forgiving in this respect (hitting the side of a key would still trigger it), so this took some getting used to.

The keyboard can be backlit (at two different levels of intensity, though I don’t think I’ve ever used the second one). However, the shortcut to activate the backlight (Fn + F10) is significantly harder to find in the dark than the X250’s (Fn + Space).

I’ve also found that the Librem’s keys get sweaty more easily, presumably due to the different key material.

Layout

The Librem’s keyboard layout differs from the X250’s in several small but important ways. Some of the changes are welcome; others, less so.

Here is a picture of the keyboard to illustrate:

[Image: the Librem 13 keyboard]

  • One thing that I think the Librem’s keyboard gets right, and the X250 got wrong, is that the key in the bottom left corner is Ctrl, with Fn next to it, rather than the other way around. I find this significantly aids muscle memory when moving between the Librem’s keyboard and external / desktop keyboards (which invariably have Ctrl in the bottom left corner). (I know that issues like this can technically be worked around by remapping keys, but it’s nice not to have to.)
  • On the other hand, the biggest deficiency in the Librem’s keyboard is the lack of PageUp, PageDown, Home, and End keys. The X250 had all of these: PageUp and PageDown above the right and left arrow keys, Home and End in the top row. With the Librem, you have to use the arrow keys with the Fn modifier to invoke these operations. My typing style is such that I use these operations fairly heavily, and as such I’ve missed the separate keys a lot.
  • A related minor annoyance is that the rightmost key in the second row from the bottom is not Shift, as it usually is, but a second Fn key; that’s also an impediment to muscle memory across different keyboards.
  • Lastly, the key in the top right corner is the power key, not Delete which is what I was used to from the X250.

None of these are necessarily dealbreakers, but they did take some getting used to.

Microphone

Every time I’ve tried the Librem’s microphone so far, the recording quality has been terrible, with large amounts of static obscuring the signal. I haven’t yet had a chance to investigate whether this is a hardware or software issue.

Software

The Librem 13 comes with Purism’s own Linux distribution, PureOS. PureOS is basically a light repack of Debian and GNOME 3, with some common software pre-installed and, in some cases, re-branded.

I got the impression that PureOS and its software don’t get much in the way of maintenance. For example, for the re-branded browser that came with PureOS, “PureBrowser”, the latest version available in the PureOS repository at the time I got my Librem was based on Firefox 45 ESR, which had been out of support for some 6 months by that time!

I’m also not a huge fan of GNOME 3. I tolerated this setup for all of about two weeks, and then decided to wipe the PureOS installation and replace it with a plain Debian stable installation, with KDE, my preferred desktop environment. This went without a hitch, indicating that — as far as I can tell — there isn’t anything in the PureOS patches that’s necessary for running on this hardware.

Generally, running Linux on the Librem 13 has been a smooth experience; I haven’t seen much in the way of glitches or compatibility issues. Occasionally, I get something like a crashed power management daemon (shortcuts to increase/decrease brightness stop working), but nothing too serious.

Conclusion

The Purism Librem 13 has largely lived up to my goal of having a lightweight productivity laptop with a decent amount of memory (though I’m sad to say that the Firefox build has continued to get larger and slower over time, and linking is sometimes a struggle even with 16 GB of RAM…) while also going the extra mile to protect my privacy and freedoms. The Librem 13 has a few deficiencies in comparison to the ThinkPad line, but they’re mostly in the category of papercuts. At the end of the day it boils down to whether living with a few small annoyances to benefit from the additional privacy features is the right tradeoff for you. For me, so far, it has been, although I certainly hope the Purism folks take feedback like this into account and improve future iterations of the Librem line.

Trip Report: C++ Standards Meeting in Rapperswil, June 2018

Summary / TL;DR

Project | What’s in it? | Status
C++17 | See list | Published!
C++20 | See below | On track
Library Fundamentals TS v2 | source code information capture and various utilities | Published! Parts of it merged into C++17
Concepts TS | Constrained templates | Merged into C++20 with some modifications
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Approved for publication!
Transactional Memory TS | Transaction support | Published! Not headed towards C++20
Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Parts of it merged into C++20, more on the way
Executors | Abstraction for where/how code runs in a concurrent context | Final design being hashed out. Ship vehicle not decided yet.
Concurrency TS v2 | See below | Under development. Depends on Executors.
Networking TS | Sockets library based on Boost.ASIO | Published!
Ranges TS | Range-based algorithms and views | Published! Headed towards C++20
Coroutines TS | Resumable functions, based on Microsoft’s await design | Published! C++20 merge uncertain
Modules v1 | A component system to supersede the textual header file inclusion model | Published as a TS
Modules v2 | Improvements to Modules v1, including a better transition path | Under active development
Numerics TS | Various numerical facilities | Under active development
Graphics TS | 2D drawing API | No consensus to move forward
Reflection TS | Static code reflection mechanisms | Sent out for PDTS ballot
Contracts | Preconditions, postconditions, and assertions | Merged into C++20

A few links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of June 25, 2018). If you encounter such a link, please check back in a few days.

Introduction

A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Rapperswil, Switzerland. This was the second committee meeting in 2018; you can find my reports on preceding meetings here (March 2018, Jacksonville) and here (November 2017, Albuquerque), and earlier ones linked from those. These reports, particularly the Jacksonville one, provide useful context for this post.

At this meeting, the committee was focused full-steam on C++20, including advancing several significant features — such as Ranges, Modules, Coroutines, and Executors — for possible inclusion in C++20, with a secondary focus on in-flight Technical Specifications such as the Parallelism TS v2, and the Reflection TS.

C++20

C++20 continues to be under active development. A number of new changes have been voted into its Working Draft at this meeting, which I list here. For a list of changes voted in at previous meetings, see my Jacksonville report.

Technical Specifications

In addition to the C++ International Standard (IS), the committee publishes Technical Specifications (TS) which can be thought of as experimental “feature branches”, where provisional specifications for new language or library features are published and the C++ community is invited to try them out and provide feedback before final standardization.

At this meeting, the committee voted to publish the second version of the Parallelism TS, and to send out the Reflection TS for its PDTS (“Proposed Draft TS”) ballot. Several other TSes remain under development.

Parallelism TS v2

The Parallelism TS v2 was sent out for its PDTS ballot at the last meeting. As described in previous reports, this is a process where a draft specification is circulated to national standards bodies, who have an opportunity to provide feedback on it. The committee can then make revisions based on the feedback, prior to final publication.

The results of the PDTS ballot had arrived just in time for the beginning of this meeting, and the relevant subgroups (primarily the Concurrency Study Group) worked diligently during the meeting to go through the comments and address them. This led to the adoption of several changes into the TS working draft:

The working draft, as modified by these changes, was then approved for publication!

Reflection TS

The Reflection TS, based on the reflexpr static reflection proposal, picked up one new feature, static reflection of functions, and was subsequently sent out for its PDTS ballot! I’m quite excited to see efficient progress on this (in my opinion) very important feature.

Meanwhile, the committee has also been planning ahead for the next generation of reflection and metaprogramming facilities for C++, which will be based on value-based constexpr programming rather than template metaprogramming, allowing users to reap expressiveness and compile-time performance gains. In the list of proposals reviewed by the Evolution Working Group (EWG) below, you’ll see quite a few of them are extensions related to constexpr; that’s largely motivated by this direction.

Concurrency TS v2

The Concurrency TS v2 (no working draft yet), whose notable contents include revamped versions of async() and future::then(), among other things, continues to be blocked on Executors. Efforts at this meeting focused on moving Executors forward.

Library Fundamentals TS v3

The Library Fundamentals TS v3 is now “open for business” (it has an initial working draft based on the portions of v2 that have not been merged into the IS yet), but no new proposals have been merged into it yet. I expect that to start happening in the coming meetings, as proposals targeting it progress through the Library groups.

Future Technical Specifications

There are (or were, in the case of the Graphics TS) some planned future Technical Specifications that don’t have an official project or working draft at this point:

Graphics

At the last meeting, the Graphics TS, set to contain 2D graphics primitives with an interface inspired by cairo, ran into some controversy. A number of people started to become convinced that, since this was something that professional graphics programmers / game developers were unlikely to use, the large amount of time that a detailed wording review would require was not a good use of committee time.

As a result of these concerns, an evening session was held at this meeting to decide the future of the proposal. A paper arguing we should stay the course was presented, as was an alternative proposal for a much lighter-weight “diet” graphics library. After extensive discussion, however, neither the current proposal nor the alternative had consensus to move forward.

As a result – while nothing is ever set in stone and the committee can always change its mind – the Graphics TS is abandoned for the time being.

(That said, I’ve heard rumours that the folks working on the proposal and its reference implementation plan to continue working on it all the same, just not with standardization as the end goal. Rather, they might continue iterating on the library with the goal of distributing it as a third-party library/package of some sort (possibly tying into the committee’s exploration of improving C++’s package management ecosystem).)

Executors

SG 1 (the Concurrency Study Group) achieved design consensus on a unified executors proposal (see the proposal and accompanying design paper) at the last meeting.

At this meeting, another executors proposal was brought forward, and SG 1 has been trying to reconcile it with / absorb it into the unified proposal.

As executors are blocking a number of dependent items, including the Concurrency TS v2 and merging the Networking TS, SG 1 hopes to move them forward as soon as possible. Some members remain hopeful that it can be merged into C++20 directly, but going with the backup plan of publishing it as a TS is also a possibility (which is why I’m listing it here).

Merging Technical Specifications into C++20

Turning now to Technical Specifications that have already been published, but not yet merged into the IS, the C++ community is eager to see some of these merge into C++20, thereby officially standardizing the features they contain.

Ranges TS

The Ranges TS, which modernizes and Conceptifies significant parts of the standard library (the parts related to algorithms and iterators), has been making really good progress towards merging into C++20.

The first part of the TS, containing foundational Concepts that a large spectrum of future library proposals may want to make use of, has just been merged into the C++20 working draft at this meeting. The second part, the range-based algorithms and utilities themselves, is well on its way: the Library Evolution Working Group has finished ironing out how the range-based facilities will integrate with the existing facilities in the standard library, and forwarded the revised merge proposal for wording review.
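
To give a flavour of what is being merged, here is a minimal sketch using the C++20 spellings (the TS itself lives in namespace std::experimental::ranges, so the names differ slightly): constrained algorithms operate on whole ranges rather than iterator pairs, and views compose lazily.

    #include <algorithm>
    #include <ranges>
    #include <vector>

    int sum_of_evens(std::vector<int>& v) {
        std::ranges::sort(v);                       // whole-range algorithm
        int sum = 0;
        for (int i : v | std::views::filter([](int x) { return x % 2 == 0; }))
            sum += i;                               // lazily filtered view
        return sum;
    }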

Coroutines TS

The Coroutines TS was proposed for merger into C++20 at the last meeting, but ran into pushback from adopters who tried it out and had several concerns with it (which were subsequently responded to, with additional follow-up regarding optimization possibilities).

Said adopters were invited to bring forward a proposal for an alternative / modified design that addressed their concerns, no later than at this meeting, and so they did; their proposal is called Core Coroutines.

Core Coroutines was reviewed by the Evolution Working Group (I summarize the technical discussion below), which encouraged further iteration on this design, but also felt that such iteration should not hold up the proposal to merge the Coroutines TS into C++20. (What’s the point in iterating on one design if another is being merged into the IS draft, you ask? I believe the thinking was that further exploration of the Core Coroutines design could inspire some modifications to the Coroutines TS that could be merged at a later meeting, still before C++20’s publication.)

As a result, the merge of the Coroutines TS came to a plenary vote at the end of the week. However, it did not garner consensus; a significant minority of the committee at large felt that the Core Coroutines design deserved more exploration before enshrining the TS design into the standard. (At least, I assume that was the rationale of those voting against. Regrettably, due to procedural changes, there is very little discussion before plenary votes these days to shed light on why people have the positions they do.)

The window for merging a TS into C++20 remains open for approximately one more meeting. I expect the proponents of the Coroutines TS will try the merge again at the next meeting, while the authors of Core Coroutines will refine their design further. Hopefully, the additional time and refinement will allow us to make a better-informed final decision.

Networking TS

The Networking TS is in a situation where the technical content of the TS itself is in a fairly good shape and ripe for merging into the IS, but its dependency on Executors makes a merger in the C++20 timeframe uncertain.

Ideas have been floated around of coming up with a subset of Executors that would be sufficient for the Networking TS to be based on, and that could get agreement in time for C++20. Multiple proposals on this front are expected at the next meeting.

Modules

Modules is one of the most-anticipated new features in C++. While the Modules TS was published fairly recently, and thus merging it into C++20 is a rather ambitious timeline (especially since there are design changes relative to the TS that we know we want to make), there is a fairly widespread desire to get it into C++20 nonetheless.

I described in my last report that there was a potential path forward to accomplishing this, which involved merging a subset of a revised Modules design into C++20, with the rest of the revised design to follow (likely in the form of a Modules TS v2, and a subsequent merge into C++23).

The challenge with this plan is that we haven’t fully worked out the revised design yet, never mind agreed on a subset of it that’s safe for merging into C++20. (By safe I mean forwards-compatible with the complete design, since we don’t want breaking changes to a feature we put into the IS.)

There was extensive discussion of Modules in the Evolution Working Group, which I summarize below. The procedural outcome was that there was no consensus to move forward with the “subset” plan, but we are moving forward with the revised design at full speed, and some remain hopeful that the entire revised design (or perhaps a larger subset) can still be merged into C++20.

What’s happening with Concepts?

The Concepts TS was merged into the C++20 working draft previously, but excluding certain controversial parts (notably, abbreviated function templates (AFTs)).

As AFTs remain quite popular, the committee has been trying to find an alternative design for them that could get consensus for C++20. Several proposals were heard by EWG at the last meeting, and some refined ones at this meeting. I summarize their discussion below, but in brief, while there is general support for two possible approaches, there still isn’t final agreement on one direction.

The Role of Technical Specifications

We are now about 6 years into the committee’s procedural experiment of using Technical Specifications as a vehicle for gathering feedback based on implementation and use experience prior to standardization of significant features. Opinions differ on how successful this experiment has been so far, with some lauding the TS process as leading to higher-quality, better-baked features, while others feel the process has in some cases just added unnecessary delays.

The committee has recently formed a Direction Group, a small group composed of five senior committee members with extensive experience, which advises the Working Group chairs and the Convenor on matters related to priority and direction. One of the topics the Direction Group has been tasked with giving feedback on is the TS process, and there was an evening session at this meeting to relay and discuss this advice.

The Direction Group’s main piece of advice was that while the TS process is still appropriate for sufficiently large features, it’s not to be embarked on lightly; in each case, a specific set of topics / questions on which the committee would like feedback should be articulated, and success criteria for a TS “graduating” and being merged into the IS should be clearly specified at the outset.

Evolution Working Group

I’ll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week.

Unless otherwise indicated, proposals discussed here are targeting C++20. I’ve categorized them into the usual “accepted”, “further work encouraged”, and “rejected” categories:

Accepted proposals:

  • Standard library compatibility promises. EWG looked at this at the last meeting, and asked that it be revised to only list the types of changes the standard library reserves the right to make; a second list, of code patterns that should be avoided if you want a guarantee of future library updates not breaking your code, was to be removed as it follows from the first list. The revised version was approved and will be published as a Standing Document (pending a plenary vote).
  • A couple of minor tweaks to the contracts proposal:
    • In response to implementer feedback, the always checking level was removed, and the source location reported for precondition violations was made implementation-defined (previously, it had to be a source location in the function’s caller).
    • Virtual functions currently require that overrides repeat the base function’s pre- and postconditions. We can run into trouble in cases where the base function’s pre- or postcondition, interpreted in the context of the derived class, has a different meaning (e.g. because the derived class shadows a base member’s name, or due to covariant return types). Such cases were made undefined behaviour, with the understanding that this is a placeholder for a more principled solution to come at a future meeting.
  • try/catch blocks in constexpr functions. Throwing an exception is still not allowed during constant evaluation, but the try/catch construct itself can be present, as long as only the non-throwing codepaths are exercised at compile time. (See the sketch after this list.)
  • More constexpr containers. EWG previously approved basic support for using dynamic allocation during constant evaluation, with the intention of allowing containers like std::vector to be used in a constexpr context (which is now happening). This is an extension to that, which allows storage that was dynamically allocated at compile time to survive to runtime, in the form of a static (or automatic) storage duration variable.
  • Allowing virtual destructors to be “trivial”. This lifts an unnecessary restriction that prevented some commonly used types like std::error_code from being used at compile time.
  • Immediate functions. These are a stronger form of constexpr functions, spelt constexpr!, which not only can run at compile time, but have to. This is motivated by several use cases, one of them being value-based reflection, where you need to be able to write functions that manipulate information that only exists at compile-time (like handles to compiler data structures used to implement reflection primitives).
  • std::is_constant_evaluated(). This allows you to check whether a constexpr function is being invoked at compile time or at runtime. Again there are numerous use cases for this, but a notable one is related to allowing std::string to be used in a constexpr context. Most implementations of std::string use a “small string optimization” (SSO) where sufficiently small strings are stored inline in the string object rather than in a dynamically allocated block. Unfortunately, SSO cannot be used in a constexpr context because it requires using reinterpret_cast (and in any case, the motivation for SSO is runtime performance), so we need a way to make the SSO conditional on the string being created at runtime.
  • Signed integers are two’s complement. This standardizes existing practice that has been the case for all modern C++ implementations for quite a while.
  • Nested inline namespaces. In C++17, you can shorten namespace foo { namespace bar { namespace baz { to namespace foo::bar::baz {, but there is no way to shorten namespace foo { inline namespace bar { namespace baz {. This proposal allows writing namespace foo::inline bar::baz. The single-name version, namespace inline foo { is also valid, and equivalent to inline namespace foo {.
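
To illustrate a few of these (the constexpr try/catch item above, std::is_constant_evaluated(), and nested inline namespaces), here is a small combined sketch, written in the spellings that ended up in the C++20 working draft; at the time of the meeting these were still proposals, so the details were subject to change:

    #include <cstddef>
    #include <cstring>
    #include <type_traits>

    namespace lib::inline v2::detail {   // shorthand for namespace lib {
                                         //   inline namespace v2 {
                                         //     namespace detail { ... } } }

        // try/catch may now appear in a constexpr function, as long as no
        // exception is actually thrown during constant evaluation.
        constexpr int parse_digit(char c) {
            try {
                return c - '0';
            } catch (...) {
                return -1;
            }
        }

        // std::is_constant_evaluated() selects a compile-time-friendly code
        // path (the same idea underlies the std::string SSO example).
        constexpr std::size_t length(const char* s) {
            if (std::is_constant_evaluated()) {
                std::size_t n = 0;
                while (s[n] != '\0') ++n;    // plain loop, no pointer tricks
                return n;
            } else {
                return std::strlen(s);       // runtime: use the optimized routine
            }
        }
    }

    static_assert(lib::detail::parse_digit('7') == 7);
    static_assert(lib::detail::length("abc") == 3);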

There were also a few that, after being accepted by EWG, were reviewed by CWG and merged into the C++20 working draft the same week, and thus I already mentioned them in the C++20 section above:


Proposals for which further work is encouraged:

  • Generalizing alias declarations. The idea here is to generalize C++’s alias declarations (using a = b;) so that you can alias not only types, but also other entities like namespaces or functions. EWG was generally favourable to the idea, but felt that aliases for different kinds of entities should use different syntaxes. (Among other considerations, using the same syntax would mean having to reinstate the recently-removed requirement to use typename in front of a dependent type in an alias declaration.) The author will explore alternative syntaxes for non-type aliases and return with a revised proposal.
  • Allow initializing aggregates from a parenthesized list of values. This idea was discussed at the last meeting and EWG was in favour, but people got distracted by the quasi-related topic of aggregates with deleted constructors. There was a suggestion that perhaps the two problems could be addressed by the same proposal, but in fact the issue of deleted constructors inspired independent proposals, and this proposal returned more or less unchanged. EWG liked the idea and initially approved it, but during Core Working Group review it came to light that there are a number of subtle differences in behaviour between constructor initialization and aggregate initialization (e.g. evaluation order of arguments, lifetime extension, narrowing conversions) that need to be addressed. The suggested guidance was to have the behaviour with parentheses match the behaviour of constructor calls, by having the compiler (notionally) synthesize a constructor to call when this notation is used. The proposal will return with these details fleshed out. (A small sketch of the parenthesized form appears after this list.)
  • Extensions to class template argument deduction. This paper proposed seven different extensions to this popular C++17 feature. EWG didn’t make individual decisions on them yet. Rather, the general guidance was to motivate the extensions a bit better, choose a subset of the more important ones to pursue for C++20, perhaps gather some implementation experience, and come back with a revised proposal.
  • Deducing this. The type of the implicit object parameter (the “this” parameter) of a member function can vary in the same ways as the types of other parameters: lvalue vs. rvalue, const vs. non-const. C++ provides ways to overload member functions to capture this variation (trailing const, ref-qualifiers), but sometimes it would be more convenient to just template over the type of the this parameter. This proposal aims to allow that, with a syntax like this:

    template <typename Self>
    R foo(this Self&& self, /* other parameters */);

    EWG agreed with the motivation, but expressed a preference for keeping information related to the implicit object parameter at the end of the function declaration, (where the trailing const and ref-qualifiers are now), leading to a syntax more like this:

    template <typename Self>
    R foo(/* other parameters */) Self&& self

    (the exact syntax remains to be nailed down as the end of a function declaration is a syntactically busy area, and parsing issues have to be worked out).
    EWG also opined that in such a function, you should only be able to access the object via the declared object parameter (self in the above example), and not also using this (as that would lead to confusion in cases where e.g. this has the base type while self has a derived type).
  • constexpr function parameters. The most ambitious constexpr-related proposal brought forward at this meeting, this aimed to allow function parameters to be marked as constexpr, and accordingly act as constant expressions inside the function body (e.g. it would be valid to use the value of one as a non-type template parameter or array bound). It was quickly pointed out that, while the proposal is implementable, it doesn’t fit into the language’s current model of constant evaluation; rather, functions with constexpr parameters would have to be implemented as templates, with a different instantiation for every combination of parameter values. Since this amounts to being a syntactic shorthand for non-type template parameters, EWG suggested that the proposal be reformulated in those terms.
  • Binding returned/initialized objects to the lifetime of parameters. This proposal aims to improve C++’s lifetime safety (and perhaps take one step towards being more like Rust, though that’s a long road) by allowing programmers to mark function parameters with an annotation that tells the compiler that the lifetime of the function’s return value should be “bound” to the lifetime of the parameter (that is, the return value should not outlive the parameter).
    There are several options for the associated semantics if the compiler detects that the lifetime of a return value would, in fact, exceed the lifetime of a parameter:

    • issue a warning
    • issue an error
    • extend the lifetime of the returned object



    In the first case, the annotation could take the form of an attribute (e.g. [[lifetimebound]]). In the second or third case, it would have to be something else, like a context-sensitive keyword (since attributes aren’t supposed to have semantic effects). The proposal authors suggested initially going with the first option in the C++20 timeframe, while leaving the door open for the second or third option later on.
    EWG agreed that mitigating lifetime hazards is an important area of focus, and something we’d like to deliver on in the C++20 timeframe. There was some concern about the proposed annotation being too noisy / viral. People asked whether the annotations could be deduced (not if the function is compiled separately, unless we rely on link-time processing), or if we could just lifetime-extend by default (not without causing undue memory pressure and risking resource exhaustion and deadlocks by not releasing expensive resources or locks in time). The authors will investigate the problem space further, including exploring ways to avoid the attribute being viral, and comparing their approach to Rust’s, and report back. (A sketch of what the proposed annotation might look like appears after this list.)

  • Nameless parameters and unutterable specializations. In some corner cases, the current language rules do not give you a way to express a partial or explicit specialization of a constrained template (because a specialization requires repeating the constraint with the specialized parameter values substituted in, which does not always result in valid syntax). This proposal invents some syntax to allow expressing such specializations. EWG felt the proposed syntax was scary, and suggested coming back with better motivating examples before pursuing the idea further.
  • How to catch an exception_ptr without even trying. This aims to allow getting at the exception inside an exception_ptr without having to throw it (which is expensive). As a side effect, it would also allow handling exception_ptrs in code compiled with -fno-exceptions. EWG felt the idea had merit, even though performance shouldn’t be the guiding principle (since the slowness of throw is technically a quality-of-implementation issue, although implementations seem to have agreed to not optimize it).
  • Allowing class template specializations in associated namespaces. This allows specializing e.g. std::hash for your own type, in your type’s namespace, instead of having to close that namespace, open namespace std, and then reopen your namespace. EWG liked the idea, but the issue of which names — names in your namespace, names in std, or both — would be visible without qualification inside the specialization, was contentious.
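
As a reference point for the parenthesized-aggregate item above, the proposal aims to make code like the following sketch legal (this is roughly the behaviour that was eventually adopted for C++20; the details under discussion concern exactly how closely the parenthesized form mimics a constructor call, e.g. with respect to narrowing and lifetime extension):

    struct Point {
        int x;
        int y;
    };

    Point make_origin() {
        return Point(0, 0);   // parenthesized aggregate initialization;
                              // previously only Point{0, 0} was valid here
    }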
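
And here is a sketch of what the lifetime-binding annotation might look like, using the [[lifetimebound]] attribute spelling mentioned above; the exact spelling and semantics are still being worked out, so treat this purely as an illustration:

    #include <string>
    #include <string_view>

    // The annotation says the returned view must not outlive the parameter
    // it was derived from.
    std::string_view first_word(const std::string& s [[lifetimebound]]) {
        return std::string_view(s).substr(0, s.find(' '));
    }

    // A call like first_word(std::string("hello world")) returns a view into
    // a temporary that dies at the end of the full-expression; with the
    // annotation, the compiler can diagnose this.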

Rejected proposals:

  • Define basic_string_view(nullptr). This paper argued that since it’s common to represent empty strings as a const char* with value nullptr, the constructor of string_view which takes a const char* argument should allow a nullptr value and interpret it as an empty string. Another paper convincingly argued that conflating “a zero-sized string” with “not-a-string” does more harm than good, and this proposal was accordingly rejected.
  • Explicit concept expressions. This paper pointed out that if constrained-type-specifiers (the language machinery underlying abbreviated function templates) are added to C++ without some extra per-parameter syntax, certain constructs can become ambiguous (see the paper for an example). The ambiguity involves “concept expressions”, that is, the use of a concept (applied to some arguments) as a boolean expression, such as CopyConstructible<T>, outside of a requires-clause. The authors proposed removing the ambiguity by requiring the keyword requires to introduce a concept expression, as in requires CopyConstructible<T>. EWG felt this was too much syntactic clutter, given that concept expressions are expected to be used in places like static_assert and if constexpr, and given that the ambiguity is, at this point, hypothetical (pending what happens to AFTs) and there would be options to resolve it if necessary. (A short example of a concept expression appears after this list.)
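
For context, here is a short example of concept expressions in the places EWG had in mind, using C++20 standard-library concepts (the paper used the TS-era spellings like CopyConstructible<T>):

    #include <concepts>

    template <typename T>
    constexpr int storage_strategy() {
        // A concept applied to arguments, used as a plain boolean expression:
        static_assert(std::destructible<T>, "T must be destructible");
        if constexpr (std::copy_constructible<T>)
            return 1;   // we can freely keep copies around
        else
            return 2;   // we must manage a single instance
    }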

Concepts

EWG had another evening session on Concepts at this meeting, to try to resolve the matter of abbreviated function templates (AFTs).

Recall that the main issue here is that, given an AFT written using the Concepts TS syntax, like void sort(Sortable& s);, it’s not clear that this is a template (you need to know that Sortable is a concept, not a type).
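
To make the problem concrete: the abbreviated form is shorthand for a constrained template, which written out longhand would be something like the following (with a placeholder concept definition so the snippet is self-contained):

    template <typename T>
    concept Sortable = true;     // placeholder; a real definition would
                                 // constrain T to be a sortable sequence

    // What void sort(Sortable& s); is short for:
    template <Sortable S>
    void sort(S& s);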

The four different proposals in play at the last meeting have been whittled down to two:

  • An updated version of Herb’s in-place syntax proposal, with which the above AFT would be written void sort(Sortable{}& s); or void sort(Sortable{S}& s); (with S in the second form naming the concrete type deduced for this parameter). The proposal also aims to change the constrained-parameter syntax (with which the same function could be written template <Sortable S> void sort(S& s);) to require braces for type parameters, so that you’d instead write template <Sortable{S}> void sort(S& s);. (The motivation for this latter change is to make it so that ConceptName C consistently makes C a value, whether it be a function parameter or a non-type template parameter, while ConceptName{C} consistently makes C a type.)
  • Bjarne’s minimal solution to the concepts syntax problems, which adds a single leading template keyword to announce that an AFT is a template: template void sort(Sortable& s);. (This is visually ambiguous with one of the explicit specialization syntaxes, but the compiler can disambiguate based on name lookup, and programmers can use the other explicit specialization syntax to avoid visual confusion.) This proposal leaves the constrained-parameter syntax alone.

Both proposals allow a reader to tell at a glance that an AFT is a template and not a regular function. At the same time, each proposal has downsides as well. Bjarne’s approach annotates the whole function rather than individual parameters, so in a function with multiple parameters you still don’t know at a glance which parameters are concepts (and so e.g. in a case of a Foo&& parameter, you don’t know if it’s an rvalue reference or a forwarding reference). Herb’s proposal messes with the well-loved constrained-parameter syntax.

After an extensive discussion, it turned out that both proposals had enough support to pass, with each retaining a vocal minority of opponents. Neither proposal was progressed at this time, in the hope that some further analysis or convergence can lead to a stronger consensus at the next meeting, but it’s quite clear that folks want something to be done in this space for C++20, and so I’m fairly optimistic we’ll end up getting one of these solutions (or a compromise / variation).

In addition to the evening session on AFTs, EWG looked at a proposal to alter the way name lookup works inside constrained templates. The original motivation for this was to resolve the AFT impasse by making name lookup inside AFTs work more like name lookup inside non-template functions. However, it became apparent that (1) that alone will not resolve the AFT issue, since name lookup is just one of several differences between template and non-template code; but (2) the suggested modification to name lookup rules may be desirable (not just in AFTs but in all constrained templates) anyways. The main idea behind the new rules is that when performing name lookup for a function call that has a constrained type as an argument, only functions that appear in the concept definition should be found; the motivation is to avoid surprising extra results that might creep in through ADL. EWG was supportive of making a change along these lines for C++20, but some of the details still need to be worked out; among them, whether constraints should be propagated through auto variables and into nested templates for the purpose of applying this rule.

Coroutines

As mentioned above, EWG reviewed a modified Coroutines design called Core Coroutines, that was inspired by various concerns that some early adopters of the Coroutines TS had with its design.

Core Coroutines makes a number of changes to the Coroutines TS design:

  • The most significant change, in my opinion, is that it exposes the “coroutine frame” (the piece of memory that stores the compiler’s transformed representation of the coroutine function, where e.g. stack variables that persist across a suspension point are stored) as a first-class object, thereby allowing the user to control where this memory is stored (and, importantly, whether or not it is dynamically allocated).
  • Syntax changes:
    • To how you define a coroutine. Among other motivations, the changes emphasize that parameters to the coroutine act more like lambda captures than regular function parameters (e.g. for reference parameters, you need to be careful that the referred-to objects persist even after a suspension/resumption).
    • To how you call a coroutine. The new syntax is an operator (the initial proposal being [<-]), to reflect that coroutines can be used for a variety of purposes, not just asynchrony (which is what co_await suggests).
  • A more compact API for defining your own coroutine types, with fewer library customization points (basically, instead of specializing numerous library traits that are invoked by compiler-generated code, you overload operator [<-] for your type, with more of the logic going into the definition of that function). For contrast, a sketch of the TS-style customization points appears after this list.
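
For reference, here is a rough sketch of the kind of customization the Coroutines TS requires today, using the C++20 spellings the TS design eventually turned into; the members of promise_type are the customization points invoked by compiler-generated code that Core Coroutines would replace:

    #include <coroutine>
    #include <exception>
    #include <optional>
    #include <utility>

    // A minimal generator; every member of promise_type is a hook that the
    // compiler-generated coroutine code calls.
    template <typename T>
    struct generator {
        struct promise_type {
            std::optional<T> current;

            generator get_return_object() {
                return generator{
                    std::coroutine_handle<promise_type>::from_promise(*this)};
            }
            std::suspend_always initial_suspend() noexcept { return {}; }
            std::suspend_always final_suspend() noexcept { return {}; }
            std::suspend_always yield_value(T value) {
                current = std::move(value);
                return {};
            }
            void return_void() {}
            void unhandled_exception() { std::terminate(); }
        };

        explicit generator(std::coroutine_handle<promise_type> h) : handle(h) {}
        generator(generator&& other) noexcept
            : handle(std::exchange(other.handle, {})) {}
        generator(const generator&) = delete;
        ~generator() { if (handle) handle.destroy(); }

        // Resume the coroutine and return the next value; stop calling this
        // once it returns an empty optional.
        std::optional<T> next() {
            handle.resume();
            if (handle.done()) return std::nullopt;
            return handle.promise().current;
        }

        std::coroutine_handle<promise_type> handle;
    };

    generator<int> iota(int from, int to) {
        for (int i = from; i < to; ++i)
            co_yield i;   // suspends here; the coroutine frame keeps i alive
    }

A caller would then write something like auto g = iota(0, 10); while (auto v = g.next()) { /* use *v */ }.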

EWG recognized the benefits of these modifications, although there were a variety of opinions as to how compelling they are. At the same time, there were also a few concerns with Core Coroutines:

  • While having the coroutine frame exposed as a first-class object means you are guaranteed no dynamic memory allocations unless you place it on the heap yourself, it still has a compiler-generated type (much like a lambda closure), so passing it across a translation unit boundary requires type erasure (and therefore a dynamic allocation). With the Coroutines TS, the type erasure was more under the compiler’s control, and it was argued that this allows eliding the allocation in more cases.
  • There were concerns about being able to take the sizeof of the coroutine object, as that requires the size being known by the compiler’s front-end, while with the Coroutines TS it’s sufficient for the size to be computed during the optimization phase.
  • While making the customization API smaller, this formulation relies on more new core-language features. In addition to introducing a new overloadable operator, the feature requires tail calls (which could also be useful for the language in general), and lazy function parameters, which have been proposed separately. (The latter is not a hard requirement, but the syntax would be more verbose without them.)

As mentioned, the procedural outcome of the discussion was to encourage further work on Core Coroutines, while not blocking the merger of the Coroutines TS into C++20 on such work.

While in the end there was no consensus to merge the Coroutines TS into C++20 at this meeting, there remains fairly strong demand for having coroutines in some form in C++20, and I am therefore hopeful that some sort of joint proposal that combines elements of Core Coroutines into the Coroutines TS will surface at the next meeting.

Modules

As of the last meeting, there were two alternative Modules designs before the committee: the recently-published Modules TS, and the alternative proposal from the Clang Modules implementers called Another Take On Modules (“Atom”).

Since the last meeting, the authors of the two proposals have been collaborating to produce a merged proposal that combines elements from both proposals.

The merged proposal accomplishes Atom’s goal of providing a better mechanism for existing codebases to transition to Modules via modularized legacy headers (called legacy header imports in the merged proposal) – basically, existing headers that are not modules, but are treated as-if they were modules by the compiler. It retains the Modules TS mechanism of global module fragments, with some important restrictions, such as only allowing #includes and other preprocessor directives in the global module fragment.

Other aspects of Atom that are part of the merged proposal include module partitions (a way of breaking up the interface of a module into multiple files), and some changes to export and template instantiation semantics.
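
As a rough sketch of how these pieces fit together (the spellings below follow what the merged design later evolved into, so details differ from the 2018 papers, and legacy_api.h is just a placeholder header name):

    // math_helpers.cppm: a partition of module math
    export module math:helpers;
    export int cube(int x) { return x * x * x; }

    // math.cppm: the primary module interface unit
    module;                   // global module fragment: preprocessor directives only
    #include <cassert>        // a plain #include, confined to the fragment

    export module math;
    import "legacy_api.h";    // a legacy header import: the header is treated
                              // as if it were a module, macros and all
    export import :helpers;   // re-export the :helpers partition

    export int square(int x) {
        assert(x >= 0);       // macros from the fragment remain usable here
        return x * x;
    }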

EWG reviewed the merged proposal favourably, with a strong consensus for putting these changes into a second iteration of the Modules TS. Design guidance was provided on a few aspects, including tweaks to export behaviour for namespaces, and making export be “inherited”, such that e.g. if the declaration of a structure is exported, then its definition is too by default. (A follow-up proposal is expected for a syntax to explicitly make a structure definition not exported without having to move it into another module partition.) A proposal to make the lexing rules for the names of legacy header units be different from the existing rules for #includes failed to gain consensus.

One notable remaining point of contention about the merged proposal is that module is a hard keyword in it, thereby breaking existing code that uses that word as an identifier. There remains widespread concern about this in multiple user communities, including the graphics community where the name “module” is used in existing published specifications (such as Vulkan). These concerns would be addressed if module were made a context-sensitive keyword instead. There was a proposal to do so at the last meeting, which failed to gain consensus (I suspect because the author focused on various disambiguation edge cases, which scared some EWG members). I expect a fresh proposal will prompt EWG to reconsider this choice at the next meeting.

As mentioned above, there was also a suggestion to take a subset of the merged proposal and put it directly into C++20. The subset included neither legacy header imports nor global module fragments (in any useful form), thereby not providing any meaningful transition mechanism for existing codebases, but it was hoped that it would still be well-received and useful for new codebases. However, there was no consensus to proceed with this subset, because it would have meant having a new set of semantics different from anything that’s implemented today, and that was deemed to be risky.

It’s important to underscore that not proceeding with the “subset” approach does not necessarily mean the committee has given up on having any form of Modules in C++20 (although the chances of that have probably decreased). There remains some hope that the development of the merged proposal might proceed sufficiently quickly that the entire proposal — or at least a larger subset that includes a transition mechanism like legacy header imports — can make it into C++20.

Finally, EWG briefly heard from the authors of a proposal for modular macros, who basically said they are withdrawing their proposal because they are satisfied with Atom’s facility for selectively exporting macros via #export directives, which is being treated as a future extension to the merged proposal.

Papers not discussed

With the continued focus on large proposals that might target C++20 like Modules and Coroutines, EWG has a growing backlog of smaller proposals that haven’t been discussed, in some cases stretching back to two meetings ago (see the committee mailings for a list). A notable item on the backlog is a proposal by Herb Sutter to bridge the two worlds of C++ users — those who use exceptions and those who do not — by extending the exception model in a way that (hopefully) makes it palatable to everyone.

Other Working Groups

Library Groups

Having sat in EWG all week, I can’t report on technical discussions of library proposals, but I’ll mention where some proposals are in the processing queue.

I’ve already listed the library proposals that passed wording review and were voted into the C++20 working draft above.

The following are among the proposals that have passed design review and are undergoing (or awaiting) wording review:

The following proposals are still undergoing design review, and are being treated with priority:

The following proposals are also undergoing design review:

As usual, there is a fairly long queue of library proposals that haven’t started design review yet. See the committee’s website for a full list of proposals.

(These lists are incomplete; see the post-meeting mailing when it’s published for complete lists.)

Study Groups

SG 1 (Concurrency)

I’ve already talked about some of the Concurrency Study Group’s work above, related to the Parallelism TS v2, and Executors.

The group has also reviewed some proposals targeting C++20. These are at various stages of the review pipeline:

Proposals before the Library Evolution Working Group include latches and barriers, C atomics in C++, and a joining thread.

Proposals before the Library Working Group include improvements to atomic_flag, efficient concurrent waiting, and fixing atomic initialization.

Proposals before the Core Working Group include revising the C++ memory model. A proposal to weaken release sequences has been put on hold.

SG 7 (Compile-Time Programming)

It was a relatively quiet week for SG 7, with the Reflection TS having undergone and passed wording review, and extensions to constexpr that will unlock the next generation of reflection facilities being handled in EWG. The only major proposal currently on SG 7’s plate is metaclasses, and that did not have an update at this meeting.

That said, SG 7 did meet briefly to discuss two other papers:

  • PFA: A Generic, Extendable and Efficient Solution for Polymorphic Programming. This aims to make value-based polymorphism easier, using an approach similar to type erasure; a parallel was drawn to the Dyno library. SG 7 observed that this could be accomplished with a pure library approach on top of existing reflection facilities and/or metaclasses (and if it can’t, that would signal holes in the reflection facilities that we’d want to fill).
  • Adding support for type-based metaprogramming to the standard library. This aims to standardize template metaprogramming facilities based on Boost.Mp11, a modernized version of Boost.MPL. SG 7 was reluctant to proceed with this, given that it has previously issued guidance for moving in the direction of constexpr value-based metaprogramming rather than template metaprogramming. At the same time, SG 7 recognized the desire for having metaprogramming facilities in the standard, and urged proponents of the constexpr approach to bring forward a library proposal built on that soon. (A small taste of the Mp11 style is sketched below.)
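
For the curious, here is a small taste of the Boost.Mp11 style the paper proposes to standardize (assuming Boost is available):

    #include <boost/mp11.hpp>
    #include <type_traits>

    using namespace boost::mp11;

    // A type-level "map": turn a list of types into a list of pointers.
    using input  = mp_list<int, char, double>;
    using output = mp_transform<std::add_pointer_t, input>;

    static_assert(std::is_same_v<output, mp_list<int*, char*, double*>>);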

SG 12 (Undefined and Unspecified Behaviour)

SG 12 met to discuss several topics this week:

  • Reviewed a proposal to allow implicit creation of objects for low-level object manipulation (basically the way malloc() is used), which aims to standardize existing practice that the current standard wording makes undefined behaviour.
  • Reviewed a proposed policy around preserving undefined behaviour, which argues that in some cases, defining behaviour that was previously undefined can be a breaking change in some sense. SG 12 felt that imposing a requirement to preserve undefined behaviour wouldn’t be realistic, but that proposal authors should be encouraged to identify cases where proposals “break” undefined behaviour so that the tradeoffs can be considered.
  • Held a joint meeting with WG 23 (Programming Language Vulnerabilities) to collaborate further on a document describing C++ vulnerabilities. This meeting’s discussion focused on buffer boundary conditions and type conversions between pointers.

SG 15 (Tooling)

The Tooling Study Group (SG 15) held its second meeting during an evening session this week.

The meeting was heavily focused on dependency / package management in C++, an area that has been getting an increased amount of attention of late in the C++ community.

SG 15 heard a presentation on package consumption vs. development, whose author showcased the Build2 build / package management system and its abilities. Much of the rest of the evening was spent discussing what requirements various segments of the user community have for such a system.

The relationship between SG 15 and the committee is somewhat unusual; actually standardizing a package management system is beyond the committee’s purview, so the SG serves more as a place for innovators in this area to come together and hash out what will hopefully become a de facto standard, rather than advancing any proposals to change the standards text itself.

It was observed that the heavy focus on package management has been crowding out other areas of focus for SG 15, such as tooling related to static analysis and refactoring; it was suggested that perhaps those topics should be split out into another Study Group. As someone whose primary interest in tooling lies in these latter areas, I would welcome such a move.

Next Meetings

The next full meeting of the Committee will be in San Diego, California, the week of November 8th, 2018.

However, in an effort to work through some of the committee’s accumulated backlog, as well as to try to make a push for getting some features into C++20, three smaller, more targeted meetings have been scheduled before then:

  • A meeting of the Library Working Group in Batavia, Illinois, the week of August 20th, 2018, to work through its backlog of wording review for library proposals.
  • A meeting of the Evolution Working Group in Seattle, Washington, from September 20-21, 2018, to iterate on the merged Modules proposal.
  • A meeting of the Concurrency Study Group (with Library Evolution Working Group attendance also encouraged) in Seattle, Washington, from September 22-23, 2018, to iterate on Executors.

(The last two meetings are timed and located so that CppCon attendees don’t have to make an extra trip for them.)

Conclusion

I think this was an exciting meeting, and am pretty happy with the progress made. Highlights included:

  • The entire Ranges TS being on track to be merged into C++20.
  • C++20 gaining standard facilities for contract programming.
  • Important progress on Modules, with a merged proposal that was very well-received.
  • A pivot towards package management, including as a way to make graphical programming in C++ more accessible.

Stay tuned for future reports from me!

Other Trip Reports

Some other trip reports about this meeting include Bryce Lelbach’s, Timur Doumler’s, and Guy Davidson’s. I encourage you to check them out as well!

Trip Report: C++ Standards Meeting in Jacksonville, March 2018

Summary / TL;DR

Project | What’s in it? | Status
C++17 | See list | Published!
C++20 | See below | On track
Library Fundamentals TS v2 | source code information capture and various utilities | Published! Parts of it merged into C++17
Concepts TS | Constrained templates | Merged into C++20 with some modifications
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Sent out for PDTS ballot
Transactional Memory TS | Transaction support | Published! Not headed towards C++20
Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Parts of it merged into C++20, more on the way
Executors | Abstraction for where/how code runs in a concurrent context | Reached design consensus. Ship vehicle not decided yet.
Concurrency TS v2 | See below | Under development. Depends on Executors.
Networking TS | Sockets library based on Boost.ASIO | Publication imminent
Ranges TS | Range-based algorithms and views | Published!
Coroutines TS | Resumable functions, based on Microsoft’s await design | Published!
Modules TS | A component system to supersede the textual header file inclusion model | Voted for publication!
Numerics TS | Various numerical facilities | Under active development
Graphics TS | 2D drawing API | Under design review; some controversy
Reflection TS | Code introspection and (later) reification mechanisms | Initial working draft containing introspection proposal passed wording review
Contracts | Preconditions, postconditions, and assertions | Proposal under wording review, targeting C++20

A few links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of April 2, 2018). If you encounter such a link, please check back in a few days.

Introduction

A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Jacksonville, Florida. This was the first committee meeting in 2018; you can find my reports on 2017’s meetings here (February 2017, Kona), here (July 2017, Toronto), and here (November 2017, Albuquerque). These reports, particularly the Albuquerque one, provide useful context for this post.

With the final C++17 International Standard (IS) having been officially published, this meeting was focused on C++20, and the various Technical Specifications (TS) we have in flight.

C++17

As mentioned, C++17 was officially published around the end of last year. The official published version can be purchased from ISO’s website; a draft whose technical content is identical is available free of charge here.

See here for a list of new language and library features in C++17.

The latest versions of GCC and Clang both have complete support for C++17, modulo bugs. MSVC has significant partial support, but full support is still a work in progress.

C++20

C++20 is under active development. A number of new changes have been voted into its Working Draft at this meeting, which I list here. For a list of changes voted in at previous meetings, see my Toronto and Albuquerque reports.

Technical Specifications

In addition to the C++ International Standard, the committee publishes Technical Specifications (TS) which can be thought of as experimental “feature branches”, where provisional specifications for new language or library features are published and the C++ community is invited to try them out and provide feedback before final standardization.

The committee recently published four TSes – Coroutines, Ranges, Networking, and most recently, Modules – and several more are in progress.

Modules TS

The last meeting ended with the Modules TS close to being ready for a publication vote, but not quite there yet, as the Core Working Group (CWG) was still in the process of reviewing resolutions to comments sent in by national standards bodies in response to the PDTS (“Proposed Draft TS”) ballot. Determined not to leave the resolution of the matter to this meeting, CWG met via teleconference on four different occasions in between meetings to finish the review process. Their efforts were successful; in particular, I believe that the issues that I described in my last report as causing serious implementer concerns (e.g. the “views of types” issue) have been resolved. The revised document was voted for publication a few weeks before this meeting (also by teleconference).

That allowed the time during this meeting to be spent discussing design issues that were explicitly deferred until after the TS’s publication. I summarize that technical discussion below.

Parallelism TS v2

The Parallelism TS v2 has picked up one last major feature: data-parallel vector types and operations, also referred to as “SIMD”. With that in place, the Parallelism TS v2 was sent out for its PDTS ballot.

Concurrency TS v2

The Concurrency TS v2 (no working draft yet) is continuing to take shape. There’s a helpful paper that summarizes its proposed contents and organization.

A notable component of the Concurrency TS v2 that I didn’t mention in my last report is a revised version of future::then() (the original version appeared in the Concurrency TS v1, but there was consensus against moving forward with it in that form). This, however, depends on Executors, which will be published independently of the Concurrency TS v2, either in C++20 or a TS of its own.

Library Fundamentals TS v3

The Library Fundamentals TS is a sort of a grab-bag TS for library proposals that are not large enough to get their own TS (like Networking did), but experimental enough not to go directly into the IS. It’s now on its third iteration, with v1 and significant components of v2 having merged into the IS.

No new features have been voted into v3 yet, but an initial working draft has been prepared, basically by taking v2 and removing the parts of it that have merged into C++17 (including optional and string_view); the resulting draft will be open to accept new proposals at future meetings (I believe mdspan (a multi-dimensional array view) and expected<T> (similar to Rust’s Result<T>) are headed that way).

Reflection TS

After much anticipation, the Reflection TS is now an official project, with its initial working draft based on the latest version of the reflexpr static introspection proposal. I believe the extensions for static reflection of functions are targeting this TS as well.

It’s important to note that the Reflection TS is not the end of the road for reflection in C++; further improvements, including a value-based (as opposed to type-based) interface for reflection, and metaclasses, are being explored (I write more about these below).

Future Technical Specifications

There are some planned future Technical Specifications that don’t have an official project or working draft yet:

Graphics

The proposal for a Graphics TS, set to contain 2D graphics primitives with an interface inspired by cairo, continues to be under discussion in the Library Evolution Working Group (LEWG).

At this meeting, the proposal encountered some controversy. A library like this is unlikely to be used for high-performance production use cases like games and browsers; the target market is more people teaching and learning C++, and non-performance-intensive GUI applications. Some people consider that to be a poor use of committee time (it was observed that a large proposal like this would tie up the Library Working Group for one or two full meetings’ worth of wording review). On the other hand, the proposal’s authors have been “strung along” by the committee for a couple of years now, and have invested significant time into polishing the proposal to be standards-quality.

The committee plans to hold an evening session at the next meeting to decide the future of the proposal.

Executors

Executors are an important concurrency abstraction for which the committee has been trying to hash out a suitable design for a long time. There is finally consensus on a design (see the proposal and accompanying design paper), and the Concurrency Study Group had been planning to publish it in its own Technical Specification.

Meanwhile, it became apparent that several other proposals depend on executors, including Networking (which isn’t integrated with executors in its TS form, but people would like it to be prior to merging it into the IS), the planned improvements to future, and new execution policies for parallel algorithms. Coroutines doesn’t necessarily have a dependency, but there are still integration opportunities.

As a result, the Concurrency Study Group is eyeing the possibility of getting executors directly into C++20 (instead of going through a TS), to unblock dependent proposals sooner.

Merging Technical Specifications into C++20

After a TS has been published and has garnered enough implementation and use experience that the committee is confident enough to officially standardize its contents, it can be merged into the standard. This happened with e.g. the Filesystems and Parallelism TSes in C++17, and significant parts of the Concepts TS in C++20.

As the committee has a growing list of published-but-not-yet-merged TSes, there was naturally some discussion of which of these would be merged into C++20.

Coroutines TS

The Coroutines TS was proposed for merger into C++20 at this meeting. There was some pushback from adopters who tried it out and brought up several concerns (these concerns were subsequently responded to).

We had a lively discussion about this in the Evolution Working Group (EWG). I summarize the technical points below, but the procedural outcome was that those advocating for significant design changes will have until the next meeting to bring forward a concrete proposal for such changes, or else “forever hold their peace”.

Some felt that such a “deadline” is a bit heavy-handed, and I tend to agree with that. While there certainly needs to be a limit on how long we wait for hypothetical future proposals that improve on a design, the Coroutines TS was just published in November 2017; I don’t think it’s unreasonable to ask that implementers and users be given more than a few months to properly evaluate it and formulate high-quality proposals to improve it if appropriate.

Ranges TS

The Ranges TS modernizes and Conceptifies significant parts of the standard library (the parts related to algorithms and iterators).

Its merge into the IS is planned to happen in two parts: first, the foundational Concepts that a large spectrum of future library proposals may want to make use of, and then the range-based algorithms and utilities themselves. The purpose of the split is to allow the first part to merge into the C++20 working draft as soon as possible, thereby unblocking proposals that wish to use the foundational Concepts.

The first part is targeting C++20 pretty firmly; the second part is still somewhat up in the air, with technical concerns relating to what namespace the new algorithms will go into (there was previously talk of a std2 namespace to serve as a place to house new-and-improved standard library facilities, but that has since been scrapped) and how they will relate to the existing algorithms; however, the authors are still optimistic that the second half can make C++20 as well.

Networking TS

There is a lot of desire to merge the Networking TS into C++20, but the dependence on executors makes that timeline challenging. As a best case scenario, it’s possible that executors go into C++20 fairly soon, and there is time to subsequently merge the Networking TS into C++20 as well. However, that schedule can easily slip to C++23 if the standardization of executors runs into a delay, or if the Concurrency Study Group chooses to go the TS route with executors.

The remaining parts of the Concepts TS

The Concepts TS was merged into the C++20 working draft in Toronto, but without the controversial abbreviated function templates (AFTs) feature (and some related things).

I mentioned that there was still a lot of demand for AFTs, even if there was no consensus for them in their Concepts TS form, and that alternative AFT proposals targeting C++20 would be forthcoming. Several such proposals were brought forward at this meeting; I discuss them below. While there wasn’t final agreement on any of them at this meeting, there was consensus on a direction, and there is relative optimism about being able to get AFTs in some form into C++20.

What about Modules?

The Modules TS was just published a few weeks ago, so talk of merging it into the C++ IS is a bit premature. Nonetheless, it’s a feature that people really want, and soon, and so there was a lot of informal discussion about the possibility of such a merge.

There were numerous proposals for post-TS design changes to Modules brought forward at this meeting; I summarize the EWG discussion below. On the whole, I think the design discussions were quite productive. It certainly helped that the Modules TS is now published, and design concerns could no longer be postponed as “we’ll deal with this post-TS”.

I think it’s too early to speculate about the prospects of getting Modules into C++20, but there seems to be a potential path forward, which I describe below as well.

Evolution Working Group

I’ll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week.

Unless otherwise indicated, proposals discussed here are targeting C++20. I’ve categorized them into the usual “accepted”, “further work encouraged”, and “rejected” categories:

Accepted proposals:

  • A couple of minor tweaks to the Coroutines TS: symmetric coroutine transfer, and parameter preview for coroutine promise constructor.
  • Clarifications about the behaviour of contract checks that modify observable (e.g. global) state. The outcome was that evaluating such a contract check constitutes undefined behaviour.
  • Class types in non-type template parameters. This is a long-desired feature, with an example use case being format strings checked at compile-time, and one of the few remaining gaps in the language where user-defined types don’t have all the powers of built-in types. The feature had been blocked on the issue of how to determine the equivalence of two non-type template parameters of class type (which is needed to be able to establish the equivalence of template specializations). Default comparisons finally provided a way forward here; class types used as non-type template parameters need to have a defaulted operator<=> (as do their members).
  • Static reflection of functions. This is an extension to the reflexpr proposal to allow reflecting over functions. You can’t reflect over an overload set; rather, reflexpr can accept a function call expression as an argument, perform overload resolution (without evaluating the call), and reflect the chosen overload. This is targeting the Reflection TS, not C++20.
  • Standard containers and constexpr. This proposal aims to allow the use of dynamic allocation in a constexpr context, so as to make e.g. std::vector usable by constexpr functions. This is accomplished by allowing destructors to be constexpr, and allowing new-expressions and std::allocator to be used in a constexpr context. (The latter is necessary because something like std::vector, which maintains a partially initialized dynamic allocation, can’t be implemented using new-expressions alone. operator new itself isn’t supported, because it loses information about the type of the allocated storage; std::allocator::allocate(), which preserves such information, needs to be used instead.) The proposal as currently formulated does not allow dynamic allocations to “survive” beyond constant expression evaluation; there will be a future extension to allow this, where “surviving” allocations will be promoted to static or automatic storage duration as appropriate. (A minimal usage sketch appears after this list.)
  • char8_t: a type for UTF-8 characters and strings. This is a combined core language + library proposal; the language parts include introducing a new char8_t type, and changing the behaviour of u8 character and string literals to use that type. The latter changes are breaking, though the expected breakage is fairly slight, especially for u8 character literals which are new in C++17 and not heavily used yet.

    Discussion of this proposal centered around the big-picture plan of how UTF-8 adoption will work, and whether we can’t just work towards char itself implying a UTF-8 encoding. Several people argued that that’s unlikely to happen, due to large amounts of legacy code that don’t treat char as UTF-8, and due to the special role of char as an “aliasing” type (where an array of char is allowed to serve as the underlying storage for objects of other types) which prevents compilers from optimizing uses of char the way they could optimize char8_t (which, importantly, would be a non-aliasing type).

    In the end, EWG gave the green-light to the direction outlined in the paper. (There was a brief discussion of pursuing this as a TS, but there was no consensus for this, in part because people felt that if we’re going to change the meaning of u8 literals, we might as well do it now before the C++17 meaning gets a lot of adoption.)
  • explicit(bool). This allows constructors to be declared as “conditionally explicit”, based on a compile-time condition. This is mostly useful for wrapper types like pair or optional, where we want their constructors to be explicit iff the constructors of their wrapped types are. (A minimal sketch appears after this list.)
  • Checking for abstract class types. This tweaks the rules regarding when attempted use of an abstract type as a complete object is diagnosed, to avoid situations where a class definition retroactively makes a previously declared function that uses the type ill-formed.
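
Returning to the constexpr containers item above: here is a minimal sketch of the kind of code the proposal is meant to enable, written against C++20 as it eventually shipped (it needs a recent compiler and standard library with constexpr std::vector support):

  #include <vector>

  // Build and consume a std::vector entirely within constant evaluation; the
  // allocation is created and destroyed during the evaluation, as the proposal
  // requires.
  constexpr int sum_of_squares(int n) {
    std::vector<int> v;
    for (int i = 1; i <= n; ++i)
      v.push_back(i * i);
    int sum = 0;
    for (int x : v)
      sum += x;
    return sum;
  }

  static_assert(sum_of_squares(4) == 1 + 4 + 9 + 16);

  int main() {}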

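And here is a minimal sketch of explicit(bool) in action, using a hypothetical wrapper type (the standard library would apply the same technique to types like pair and optional):

  #include <type_traits>
  #include <utility>

  // A wrapper whose converting constructor is explicit exactly when the
  // underlying conversion is not implicit.
  template <typename T>
  struct wrapper {
    template <typename U>
    explicit(!std::is_convertible_v<U, T>)
    wrapper(U&& u) : value(std::forward<U>(u)) {}

    T value;
  };

  struct implicit_from_int { implicit_from_int(int) {} };
  struct explicit_from_int { explicit explicit_from_int(int) {} };

  wrapper<implicit_from_int> a = 42;     // OK: wrapper's constructor is implicit here
  wrapper<explicit_from_int> b{42};      // OK: direct-initialization
  // wrapper<explicit_from_int> c = 42;  // error: wrapper's constructor is explicit here

  int main() {}
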
There were also a few that, after being accepted by EWG, were reviewed by CWG and merged into the C++20 working draft the same week, and thus I already mentioned them in the C++20 section above:

Finally, EWG decided to pull the previously-approved proposal to allow string literals in non-type template parameters, because the more general facility to allow class types in non-type template parameters (which was just approved) is a good enough replacement. (This is a change from the last meeting, when it seemed like we would want both.) The main difference is that you now have to wrap your character array into a struct (think fixed_string or similar), and use that as your template parameter type. (The user-defined literal part of P0424 is still going forward, with a corresponding adjustment to the allowed template parameter types.)
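
As an illustration of both this and the class-types-as-non-type-template-parameters item above, here is a minimal sketch of a fixed_string-like class type used as a template parameter; it is written against C++20 as it eventually shipped, with the defaulted operator<=> that the approved design calls for:

  #include <compare>
  #include <cstddef>

  // A simple structural string type with a defaulted operator<=>, usable as a
  // non-type template parameter.
  template <std::size_t N>
  struct fixed_string {
    char data[N] = {};
    constexpr fixed_string(const char (&s)[N]) {
      for (std::size_t i = 0; i < N; ++i)
        data[i] = s[i];
    }
    auto operator<=>(const fixed_string&) const = default;
  };

  // A string literal can now be used as a template argument by going through
  // the class type.
  template <fixed_string Name>
  struct named_widget {};

  named_widget<"button"> w;

  int main() {}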


Proposals for which further work is encouraged:

  • C++ stability, velocity, and deployment plans. This is a proposal for a Standing Document (SD; a less-official-than-a-standard committee document, typically with procedural rather than technical content) outlining the procedure by which breaking changes can be made to C++. It classifies breaking changes by level of detectability (e.g. statically detectable and causes a compiler error, statically detectable but doesn’t cause a compiler error, not statically detectable), and issues guidance for whether and how changes in each category can be made. EWG encouraged the authors to come back with specific wording for the proposed SD.
  • Standard library compatibility promises. This is another proposal for a Standing Document, outlining what compatibility promises the C++ standard library makes to its users, and what kind of future changes it reserves to make. (As an example, the committee reserves the right to add new overloads to standard library functions. This may break user code that tries to take the address of a standard library function, and we want to make it clear that such breakage is par for the course; if you want a guarantee that your code will compile without modifications in future standards, you can only call standard library functions, not take their address.)
  • LEWG wishlist for EWG. This is a wishlist of core language issues that the Library Evolution Working Group would like to see addressed to solve problems facing library authors and users. Some of the items included reining in overeager ADL (see below for a proposal to do just that), making it easier to avoid lifetime errors, dealing with ABI breakage, and finding alternatives for the remaining use cases of macros. EWG encouraged future proposals in these areas, or discussion papers that advance our understanding of the problem (for example, a survey of macro use cases that don’t have non-macro alternatives).
  • Extending the offsetof macro to allow computing the offset to a member given a pointer-to-member variable (currently it requires being given the member’s name). EWG thought this was a valid use case, but expressed a preference for a different syntax rather than overloading the offsetof macro.
  • Various proposed extensions to the Modules TS, which I talk about below.
  • Towards consistency between <=> and other comparison operators. The background to this proposal is that when the <=> operator was introduced, there were a few cases where the specified behaviour was a departure from the corresponding behaviour for the existing two-way comparison operators. These were cases where we would have liked to change the behaviour for the existing operators, but couldn’t due to backwards compatibility considerations. <=>, however, being new to the language, had no such backwards compatibility considerations, so the authors specified the more-desirable behaviour for it. The downside is that this introduced inconsistencies between <=> and the two-way comparison operators.

    This proposal aims to resolve those inconsistencies, in some cases by changing the behaviour of the two-way operators after all. There were five specific areas of change:

    • Sign safety. Today, -1 < 1u evaluates to false due to sign conversion, which is not the mathematically correct result. -1 <=> 1u, on the other hand, is a compiler error. EWG decided that both should in fact work and give the mathematically correct result (which for -1 < 1u is a breaking change, though in practice it’s likely to fix many more bugs than it introduces), though whether this will happen in C++20, or after a longer transition period, remains to be decided. (A short demonstration of today’s behaviour appears after this list.)
    • Enum safety. Today, C++ allows two-way comparisons between enumerators of distinct enumeration types, and between enumerators and floating-point values. Such comparisons with <=> are ill-formed. EWG felt they should be made ill-formed for two-way comparisons as well, though again this may happen by first deprecating them in C++20, and only actually making them ill-formed in a future standard. (Comparisons between enumerators and integer values are common and useful, and will be permitted for all comparison operators.)
    • Array safety. Two-way comparisons between operands of array type will be deprecated.
    • Null safety. This is just a tweak to make <=> between a pointer and nullptr return strong_equality rather than strong_ordering.
    • Function pointer safety. EWG expressed a preference for allowing all comparisons between function pointers, and requiring implementers to impose a total order on them. Some implementers indicated they need to investigate the implementability of this on some architectures and report back.
  • Chaining comparisons. This proposes making chains of comparisons, such as a == b == c or a < b <= c, have their expected mathematical meaning (which is currently expressed in C++ in a more cumbersome way, e.g. a == b && b == c). This is a breaking change, since such expressions currently have a meaning (evaluate the first comparison, use its boolean result as the value for the second comparison, and so on). It’s been proposed before, but EWG was worried about the silent breaking change. Now, the authors have surveyed a large body of open-source code, and found zero instances of such expressions where the intended meaning was the current meaning, but several instances where the intended meaning was the proposed meaning (and which would therefore be silently fixed by this proposal). Importantly, comparison chains are only allowed if the comparisons in the chain are either all ==, all < and <=, or all > and >=; other chains like a < b > c are not allowed, unlike e.g. in Python. In the original proposal, such “disallowed” chains would have retained their current meaning, but EWG asked that they be made ill-formed instead, to avoid confusion. The proposal also contained a provision to have folds over comparisons (e.g. a < ..., where a is a function parameter pack) expand to a chained comparison, but EWG chose to defer that part of the proposal until more implementation experience can be gathered. (The current meaning is also demonstrated in the snippet after this list.)
  • Size feedback in operator new. This proposes overloads of operator new that return how much memory was allocated (which may be more than what was asked for), so the caller can make use of the entire allocation. EWG agreed with the use case, but had some concerns about the explosion of operator new overloads (each new variation that’s added doubles the number of overloads; with this proposal, it would be 8), and the complications around having the new overloads return a structure rather than void*, and asked the authors to come back after exploring the design space a bit more.
  • The assume_aligned attribute. The motivation is to allow authors to signal to the compiler that a variable holds a value with a particular alignment at a given point in time, for purposes such as more efficient vectorization. The alignment is a property of the variable’s value at a point in time, not of the variable itself (e.g. you can subsequently increment the pointer and it will no longer have that alignment). EWG liked the idea but felt that the proposed semantics about where the attribute could apply (for example, that it could apply to parameter variables but not local variables) were confusing. Suggested alternatives included a magic library function (which would more clearly apply at the time it’s called), and something you can place into a contract check.
  • Fixing ADL. This is a resurrection of a proposal that’s more than a decade old, to fix argument-dependent lookup (ADL). ADL often irks people because it’s too eager, and often finds overloads in other namespaces that you didn’t intend. This proposal to fix it was originally brought forward in 2005, but was deferred at the time because the committee was behind in shipping C++0x (which became C++11); it finally came back now. It aims to make two changes to ADL:
    • Narrow the rules for what makes a namespace an associated namespace for the purpose of ADL. The current rules are very broad; in particular, it includes not only the namespaces of the arguments of a function call, but the namespaces of the template parameters of the arguments, which is responsible for a lot of unintended matches. The proposal would axe the template parameters rule.
    • Even if a function is found in an associated namespace, only consider it a match if it has a parameter matching the argument that caused the namespace to be associated, in the relevant position.

    This is a scary change, because it has the potential to break a lot of code. EWG’s main feedback was that the authors should try implementing it, and test some large codebases to understand the scope of breakage. There were also some concerns about how the second change would interact with Concepts (and constrained templates in general). The proposal will come back for further review.

  • A proposed language-level mitigation for Spectre variant 1, which I talk about below.
  • Allow initializing aggregates from a parenthesized list of values. This aims to solve a long-standing issue where e.g. vector::emplace() didn’t work with aggregate types, because the implementation of emplace() would do new T(args...), while aggregates required new T{args...}. A library solution was previously proposed for this, but the library groups were unhappy with it because it felt like a workaround for a language deficiency, and it would have had to be applied everywhere in the library where it was a problem (with vector::emplace() being just one example). This proposal fixes the deficiency at the language level. EWG generally liked the idea, though there was also a suggestion that a related problem with aggregate initialization (deleted constructors not preventing it) be solved at the same time. There was also a suggestion that the proposal only apply in dependent contexts (since in non-dependent contexts, you know what kind of initialization you need to use), but that was shot down.
  • Signed integers are two’s complement. The standard currently allows various representations for signed integers, but two’s complement is the only one used in practice, on all modern architectures; this proposal aims to standardize on that, allowing code to portably rely on the representation (and e.g. benefit from hardware capabilities like an arithmetic right shift). EWG was supportive of the idea, but expressed a preference for touching base with WG14 (the C standards committee) to make sure they’re on board with this change. (The original version of this proposal would also have defined the overflow behavior for signed integers as wrapping; this part was rejected in other subgroups and never made it to EWG.)
  • Not a proposal, but the Core Working Group asked EWG whether non-template functions should be allowed to be constrained (with a requires-clause). There are some use cases for this, such as having multiple implementations of a function conditioned on some compile-time condition (e.g. platform, architecture, etc.). However, this would entail some specification work, as the current rules governing overloading of constrained functions assume they are templates, and don’t easily carry over to non-templates. EWG opted not to allow them until someone writes a paper giving sufficient motivation.
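
To show concretely what the sign-safety and chaining items above are reacting to, here is a small program demonstrating today’s behaviour (it compiles, typically with warnings, and gives the results the proposals consider mathematically wrong):

  #include <iostream>

  int main() {
    // Sign safety: -1 is converted to unsigned, so the comparison is
    // mathematically wrong.
    std::cout << std::boolalpha << (-1 < 1u) << '\n';  // prints false

    // Chaining: a < b <= c currently means (a < b) <= c, i.e. the boolean
    // result of the first comparison feeds into the second comparison.
    int a = 5, b = 2, c = 1;
    std::cout << (a < b <= c) << '\n';  // prints true, even though 5 < 2 is false
  }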

Rejected proposals:

  • Supporting offsetof for all classes. offsetof is currently only guaranteed to work for standard-layout classes, but there are some use cases for it related to memory-mapped IO, serialization, and similar low-level things, that require it to work for some classes that aren’t standard-layout. EWG reiterated the feedback it gave on the previous proposal on this topic: to expand the definition of standard-layout to include the desired types. EWG was disinclined to allow offsetof for all classes, including ones with virtual bases, as proposed in this paper; it was felt that this more general goal could be accomplished with a future reflection-based facility.
  • Structured bindings with polymorphic lambdas. This would have allowed a structured binding declaration (e.g. auto [a, b]) as a function parameter, with the semantics that it binds to a single argument (the composite object), and is decomposed into the named constituents on the callee side. EWG sympathized with the goal, but had a number of concerns including visual ambiguity with array declarators, and encouraging the use of templates (and particularly under-constrained templates, until structured bindings are extended to allow a concept in place of auto) where otherwise you might use a non-template.
  • Structured binding declaration as a condition. This would have allowed a condition like if (auto [a, b] = f()), where the condition evaluates to the composite object returned by f() (assuming that object is already usable as a condition, e.g. by having a conversion operator to bool). EWG felt that the semantics weren’t obvious (in particular, people might think one of the decomposed variables is used as the condition). There were also unanswered questions like, in the case of a composite object that uses get<>() calls to access the decomposed variables, whether those calls happen before or after the call to the conversion operator. It was pointed out that you can already use a structured binding in a condition if you use the “if with initializer” form added in C++17, e.g. if (auto [result, ok] = f(); ok), and this is preferable because it makes clear what the condition is. (Some people even expressed a desire for deprecating the declaration-as-condition form altogether, although there was also opposition to that.)
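
For reference, the C++17 form mentioned in the last item looks like this (parse() is a made-up helper returning a value together with a success flag):

  #include <iostream>
  #include <tuple>

  // Hypothetical helper returning a value together with a success flag.
  std::tuple<int, bool> parse() { return {42, true}; }

  int main() {
    // C++17 "if with initializer" combined with a structured binding; the
    // condition (ok) is stated explicitly.
    if (auto [result, ok] = parse(); ok) {
      std::cout << result << '\n';
    }
  }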

Spectre

No significant meeting of software engineers in the past few months has gone without discussion of Spectre, and this standards meeting was no exception.

Google brought forward a proposal for a language-level mitigation for variant #1 of Spectre (which, unlike variant #2, has no currently known hardware-level mitigation). The proposal allows programmers to harden specific branches against speculation, like so:


  if [[protect_from_speculation(args...)]] (predicate) {
    // use args
  }

args... here is a comma-separated list of one or more variables that are in scope. The semantics is that, if predicate is false, any speculative execution inside the if block treats each of the args as zero. This protects against the exploit, which involves using side channels to recover information accessed inside (misspeculated execution of) the branch at a location that depends on args.
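
For a concrete picture, here is the classic bounds-check-bypass gadget from the Spectre disclosure with the proposed hardening applied; the attribute is the proposal’s strawman syntax, and the variable names are just illustrative:

  // array1, array1_size, array2, i and y are assumed to be in scope.
  if [[protect_from_speculation(i)]] (i < array1_size) {
    // If this branch is misspeculated (i.e. i is actually out of bounds),
    // i is treated as zero during speculative execution, so the dependent
    // load below can no longer leak out-of-bounds data through the cache.
    y = array2[array1[i] * 4096];
  }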

The described semantics can be implemented in assembly; see this llvm-dev post for a description of the implementation approach.

For performance reasons, the proposed hardening is opt-in (as opposed to “harden all branches this way”, although compilers can certainly offer that as an option for non-performance-critical programs), and only as aggressive as it needs to be (as opposed to “disable speculation entirely for this branch”).

The language-level syntax to opt a branch into the hardening remains to be nailed down; the attribute syntax depicted above is one possibility. One complication is that if statements are not the only language constructs that compile down to branches; there are others, including some subtler ones like virtual function dispatch. The chosen syntax should be flexible enough to allow hardening all relevant constructs.

In terms of standardizing this feature, one roadblock is that the C++ standard defines the behavior of programs in terms of an abstract machine, and the semantics of the proposed hardening concern lower-level notions that cannot be described in such terms. As the committee is unlikely to reinvent the C++ abstract machine to allow reasoning about such things as speculative execution in normative wording, it may end up being the case that the syntax of the language construct is described normatively, while its semantics is described non-normatively.

This proposal will return to EWG in a more concrete form at the next meeting. As portably mitigating Spectre is a rather urgent desire in the C++ community, there was some talk of somehow standardizing this feature “out of band” rather than waiting for C++20, though it wasn’t clear what that might look like.

Concepts

EWG had an evening session to discuss proposals related to Concepts, particularly abbreviated function templates (AFTs).

To recap, AFTs are function templates declared without a template parameter list, with concept names used instead of type names in the signature. An example is void sort(Sortable& s);, which is a shorthand for template <Sortable __S> void sort(__S& s);. Such use of a concept name in place of a type name is called a constrained-type-specifier. In addition to parameter types, the Concepts TS allowed constrained-type-specifiers in return types (where the meaning was “the function’s return type is deduced, but also has to model this concept”), and in variable declarations (where the meaning was “the variable’s type is deduced, as if declared with auto, but also has to model this concept”).

constrained-type-specifiers did not make it into C++20 when the rest of the Concepts TS was merged, mostly because there were concerns that you can’t tell apart an AFT from a non-template function without knowing whether the identifiers that appear in the parameter list name types or concepts.

Four proposals were presented at this evening session, which aimed to get AFTs and/or other forms of constrained-type-specifiers into C++20 in some form.

I’ll also mention that the use of a concept name inside a template parameter list, such as template <Sortable S> (which is itself a shorthand for template <typename S> requires Sortable<S>), is called a constrained-parameter. constrained-parameters have been merged into the C++20 working draft, but some of the proposals wanted to make modifications to them as well, for consistency.
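
To make the terminology concrete, here is a minimal sketch of a constrained-parameter and its requires-clause expansion; the Sortable concept here is a simplified stand-in (the real concept in the Ranges work is more involved):

  #include <algorithm>
  #include <concepts>
  #include <vector>

  // Simplified stand-in concept, for illustration only.
  template <typename T>
  concept Sortable = std::totally_ordered<T>;

  // constrained-parameter form:
  template <Sortable S>
  void sort_all(std::vector<S>& v) { std::sort(v.begin(), v.end()); }

  // ...which is shorthand for:
  template <typename S> requires Sortable<S>
  void sort_all_long_form(std::vector<S>& v) { std::sort(v.begin(), v.end()); }

  int main() {
    std::vector<int> v{3, 1, 2};
    sort_all(v);
    sort_all_long_form(v);
  }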

Three of the discussed proposals took the approach of inventing a new syntax for constrained-type-specifiers (and in some cases constrained-parameters) that wasn’t just an identifier, thus syntactically distinguishing AFTs from non-template functions.

  • Concept-constrained auto proposed the syntax auto<Sortable>. The proposal as written concerned variable declarations only, but one could envision extending this to other uses of constrained-type-specifiers.
  • An adjective syntax for concepts proposed Sortable typename S as an alternative syntax for constrained-parameters, with a possible future extension of Sortable auto x for constrained-type-specifiers. The idea is that the concept name is tacked, like an adjective, onto the beginning of what you’d write without concepts.
  • Concepts in-place syntax proposed Sortable{S} for constrained-parameters, and Sortable{S} s for constrained-type-specifiers (where S would be an additional identifier the declaration introduces, that names the concrete type deduced for the parameter/variable). You could also write Sortable{} s if you didn’t want/need to name the type. One explicit design goal of this proposal was that if, in the future, the committee changes its mind about AFTs needing to be syntactically distinguishable from non-template functions (because we get more comfortable with them, or are happy to rely more on tooling to tell them apart), the empty braces could be dropped altogether, and we’d arrive precisely at the Concepts TS syntax.

An additional idea that was floated, though it didn’t have a paper, was to just use the Concepts TS syntax, but add a single syntactic marker, such as a bare template keyword before the function declaration (as opposed to per-parameter syntactic markers, as in the above proposals).

Of these ideas, Sortable{S} had the strongest support, with “Concepts TS syntax + single syntactic marker” coming a close second. The proponents of these ideas indicated that they will try to collaborate on a revised proposal that can hopefully gain consensus among the entire group.

The fourth paper that was discussed attacked the problem from a different angle: it proposed adopting AFTs into C++20 without any special syntactic marker, but also changing the way name lookup works inside them, to more closely resemble the way name lookup works inside non-template functions. The idea was that, perhaps if the semantics of AFTs are made more similar to non-template functions (name lookup is one of the most prominent semantic differences between template and non-template code), then we don’t need to syntactically distinguish them. The proponents of having a syntactic marker did not find this a convincing argument for adopting AFTs without one, but it was observed that the proposed name lookup change might be interesting to explore independently. At the same time, others pointed out similarities between the proposed name lookup rules and C++0x concepts, and warned that going down this road would lead to C++0x lookup rules (which were found to be unworkable).

(As an aside, one topic that seems to have been settled without much discussion was the question of independent resolution vs. consistent resolution; that is, if you have two uses of the same concept in an AFT (as in void foo(Number, Number);), are they required to be the same concrete type (“consistent”), or two potentially different types that both model the concept (“independent”). The Concepts TS has consistent resolution, but many people prefer independent resolution. I co-authored a paper arguing for independent resolution a while back; that sentiment was subsequently reinforced by another paper, and also in a section of the Sortable{S} proposal. Somewhat to my amusement, the topic was never actually formally discussed and voted on; the idea of independent resolution just seemed to slowly, over time, win people over, such that by this meeting, it was kind of treated as a done deal, that any AFT proposal going into C++20 will, in fact, have independent resolution.)
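
For illustration, here is what independent resolution looks like with the terse syntax that eventually shipped in C++20 (not any of the specific syntaxes discussed at this meeting): each constrained parameter introduces its own invented template parameter, so the two arguments may have different types.

  #include <concepts>

  // Each use of "std::integral auto" introduces a separate invented template
  // parameter, so a and b may have different types.
  void take_two(std::integral auto a, std::integral auto b) {}

  int main() {
    take_two(1, 2L);  // OK under independent resolution: a is int, b is long
  }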

Coroutines

As mentioned above, EWG had a discussion about merging the Coroutines TS into C++20.

The main pushback was due to a set of concerns described in this paper (see also this response paper). The concerns fell into three broad categories:

  • Performance concerns. As currently specified, coroutines perform a dynamic allocation to store the state that needs to be saved in between suspensions. The dynamic allocation can be optimized away in many cases, but it was argued that for some use cases, you want to avoid the dynamic allocation by construction, without relying on your optimizer. An analogy can be made to std::vector: sure, compilers can sometimes optimize the dynamic allocation it performs to be a stack allocation, but we still have stack arrays in the language to guarantee stack allocation.

    One particularly interesting use case that motivates this performance guarantee, is using coroutines to implement a form of error handling similar to Rust’s try! macro / ? operator. The general idea is to hook the coroutine customization points for a type like expected<T> (the proposed C++ analogue of Rust’s Result), such that co_await e where e has type expected<T> functions like try!(e) would in Rust (see the paper for details). However, no one would contemplate using such an error handling mechanism if it didn’t come with a guarantee of not introducing a dynamic allocation.
  • Safety concerns. The issue here is that reference parameters to a coroutine may become dangling after the coroutine is suspended and resumed. There is a desire to change the syntax of coroutines to make this hazard more obvious.
  • Syntax concerns. There are several minor syntactic concerns related to the choice of keywords (co_await, co_yield, and co_return), having to use co_return instead of plain return, and the precedence of the co_await operator. There is a suggestion to address these by replacing co_await with a punctuation-based syntax, with both prefix and postfix forms for better composition (compare having both * and -> operators for pointer dereferencing).

The paper authors plan to bring forward a set of modifications to the Coroutines TS that address these concerns. I believe the general idea is to change the syntax in such a way that you can explicitly access / name the object storing the coroutine state. You can then control whether it’s allocated on the stack or the heap, depending on your use case (e.g. passing it across a translation unit boundary would require allocating it on the heap, similar to other compiler-generated objects like lambdas).

EWG expressed interest in seeing the proposed improvements, while also expressing a strong preference for keeping coroutines on track to be merged into C++20.

Modules

EWG spent an entire day on Modules. With the Modules TS done, the focus was on post-TS (“Modules v2”) proposals.

  • Changing the term “module interface”. This paper argued that “module interface” was a misnomer because a module interface unit can contain declarations which are not exported, and therefore not conceptually part of the module’s interface. No functional change was proposed. EWG’s reaction was “don’t care”.
  • Modules: dependent ADL. The current name lookup rules in the Modules TS have the consequence that argument-dependent lookup can find some non-exported functions that are declared in a module interface unit. This proposal argued this was surprising, and suggested tightening the rules. EWG was favourable, and asked the author to come back with a specific proposal.
  • Modules: context-sensitive keyword. This proposed making module a context-sensitive keyword rather than a hard keyword, to avoid breaking existing code that uses module as an identifier. The general approach was that if a use of module could legally be a module declaration, it is, otherwise it’s an identifier. EWG disliked this direction, because the necessary disambiguation rules were too confusing (e.g. two declarations that were only subtly different could differ in whether module was interpreted as a keyword or an identifier). It was suggested that instead an “escape mechanism” be introduced for identifiers, where you could “decorate” an identifier as something like __identifier(module) or @module to keep it an identifier. It was also pointed out that adopting relevant parts of the “Another take on modules” proposal (see below) would make this problem moot by restricting the location of module declarations to a file’s “preamble”.
  • Unqualified using declarations. This proposed allowing export using name;, where name is unqualified, as a means of exporting an existing name (such as a name from an included legacy header). EWG encouraged exploration of a mechanism for exporting existing names, but wasn’t sure this would be the right mechanism.
  • Identifying module source code. This requires that any module unit either start with a module declaration, or with module; (which “announces” that this is a module unit, with a module declaration to follow). The latter form is necessary in cases where the module wants to include legacy headers, which usually can’t be included in the module’s purview. This direction was previously approved by EWG, and this presentation was just a rubber-stamp.
  • Improvement suggestions to the Modules TS. This paper made several minor improvement suggestions.
    • Determining whether an importing translation unit sees an exported type as complete or incomplete, based on whether it was complete or incomplete at the end of the module interface unit, rather than at the point of export. This was approved.
    • Exporting the declaration of an inline function should not implicitly export the definition as well. There was no consensus for this change.
    • Allow exporting declarations that don’t introduce names; an example is a static_assert declaration. Exporting such a declaration has no effect; the motivation here is to allow enclosing a group of declarations in export { ... }, without having to take care to move such declarations out of the block. This was approved for static_assert only; EWG felt that for certain other declarations that don’t introduce names, such as using-directives, allowing them to be exported might be misleading.
    • A tweak to the treatment of private members of exported types. Rejected because private members can be accessed via reflection.

That brings us to what I view as the most significant Modules-related proposal we discussed: Another take on modules (or “Atom” for short). This is a proposal from Google based on their deployment experience with Clang’s implementation of Modules; it’s a successor to previous proposals like this one. It aims to make several changes – some major, some minor – to the Modules TS; I won’t go through all of them here, but they include changes to name lookup and visibility rules, support for module partitions, and introducing the notion of a “module preamble”, a section at the top of a module file that must contain all module and import declarations. The most significant change, however, is support for modularized legacy headers. Modularized legacy headers are legacy (non-modular) headers included in a module, not via #include as in the Modules TS, but via import (as in import "file" or import <file>). The semantics is that, instead of textually including the header contents as you would with an #include, you process them as an isolated translation unit, produce a module interface artefact as-if it was a module (with all declarations exported, I assume), and then process the import as if it were an actual module import.
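
As a rough sketch of what a module unit using such legacy header imports might look like (the file names, module name, and details here are hypothetical, and the specifics changed before anything shipped):

  // widgets.cppm -- a module interface unit (file and module names are made up)
  export module app.widgets;

  import "legacy_widget.h";  // legacy header compiled as an isolated unit and
                             // imported like a module; its macros remain visible
  import <vector>;           // the same mechanism applied to a standard header

  export struct widget_list {
    std::vector<legacy_widget> widgets;  // legacy_widget comes from the header
  };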

Modularized legacy headers are primarily a transition mechanism for incrementally modularizing a codebase. The proposal authors claim that without them, you can’t benefit from compile-time improvements of Modules in a codebase (and in fact, you can take a compile time hit!) unless you bottom-up modularize the entire codebase (down to the standard library and runtime library headers), which is viewed as infeasible for many large production codebases.

Importantly, modularized legacy headers also offer a way forward in the impasse about whether Modules should support exporting macros. In the Atom proposal, modularized legacy headers do export the macros they define, but real modules do not. (There is an independent proposal to allow real modules to selectively export specific macros, but for transition purposes, that’s not critical, since for components that have macros as part of their interface, you can just use them as a modularized legacy header.)

There was some discussion of whether the Atom proposal is different enough from the Modules TS that it would make sense to pursue it as a separate (competing) TS, or if we should try to integrate the proposed changes into the Modules TS itself. The second approach had the stronger consensus, and the authors plan to come back with a specific proposed diff against the Modules TS.

It’s too early to speculate about the impact of pursuing these changes on the schedule for shipping Modules (such as whether it can be merged into C++20). However, one possible shipping strategy might be as follows (disclaimer: this is my understanding of a potential plan based on private conversation, not a plan that was approved by or even presented to EWG):

  • Modules v1 is the currently shipping Modules TS. It is not forward-compatible with v2 or v3.
  • Modules v2 would be a modified version of v1 that would not yet support modularized legacy headers, but would be forward-compatible with v3. Targeting C++20.
  • Modules v3 would support modularized legacy headers. Targeting post-C++20, possibly a second iteration of the Modules TS.

Such a way forward, if it becomes a reality, would seem to satisfy the concerns of many stakeholders. We would ship something in the C++20 IS, and people who are able to bottom-up modularize their codebases can start doing so, without fear of further breaking changes to Modules. Others who need the power of modularized legacy headers can wait until Modules v3 to get it.

I’m pretty happy with the progress made on Modules at this meeting. With the Atom proposal having been discussed and positively received, I’m more optimistic about the feature than I have been for the past few meetings!

Papers not discussed

With the meeting being fairly heavily focused on large proposals like Concepts, Modules, and Coroutines, there were a number of others that EWG didn’t get a chance to look at. I won’t list them all (see the pre-meeting mailing for a list), but I’ll call out two of them: feature-test macros are finally on the formal standards track, and there’s a revised attempt to tackle named arguments in C++ that’s sufficiently different from previous attempts that I think it at least might not be rejected out of hand. I look forward to having these, and the other proposals on the backlog, discussed at the next meeting.

Other Working Groups

Library Groups

Having sat in EWG all week, I can’t report on technical discussions of library proposals, but I’ll mention where various proposals are in the processing queue.

I’ve already listed the library proposals that passed wording review and were voted into the C++20 working draft above.

A few proposals targeting Technical Specifications also passed wording review and were merged into the relevant TS working drafts:

The following proposals are still undergoing wording review:

The following proposals have passed design review and await wording review at future meetings:

The following proposals are still undergoing design review:

In addition, there is a fairly long queue of library proposals that haven’t started design review yet. See the committee’s website for a full list of proposals.

Finally, I’ll mention that the Library Evolution Working Group had a joint evening session with SG 14 (Low Latency Programming) to discuss possible new standard library containers in C++20. Candidates included a fixed capacity vector, a vector with a small object optimization, ring buffer, colony, and slot map; the first three had the greatest support.

Study Groups

SG 6 (Numerics)

SG 6 met for a day, and reviewed a number of numerics-related proposals. In addition to the “signed integers are two’s complement” proposal that later came to EWG, it looked at several library proposals. Math constants, constexpr for <cmath> and <cstdlib>, letting strong_order truly be a customization point, and interpolation were forwarded to LEWG (in some cases with modifications). More better operators and floating point value access for std::ratio remain under discussion. Safe integral comparisons have been made moot by operator<=> (the proposal was “abducted by spaceship”).

SG 7 (Compile-Time Programming)

SG 7, the Compile-Time Programming (previously Reflection) Study Group, met for an evening session and reviewed three papers.

The first, called constexpr reflexpr, was an exploration of what the reflexpr static introspection proposal might look like formulated in terms of value-based constexpr programming, rather than template metaprogramming. SG 7 previously indicated that this is the direction they would like reflection proposals to take in the longer term. The paper was reviewed favourably, with encouragement to do further work in this direction. One change that was requested was to make the API value-based rather than pointer based. Some implementers pointed out that unreflexpr, the operator that takes a meta-object and reifies it into the entity it represents, may need to be split into multiple operators for parsing purposes (since the compiler needs to know at parsing time whether the reified entity is a value, a type, or a template, but the meta-object passed as argument may be dependent in a template context). Finally, some felt that the constexpr for facility proposed in the paper (which bears some resemblance to the previously-proposed tuple-based for loop) may be worth pursuing independently.

The second was a discussion paper called “What do we want to do with reflection?” It outlines several basic / frequently requested reflection use cases, and calls for facilities that address these use cases to be added to C++20. SG 7 observed that one such facility, source code information capture, is already shipping in the Library Fundamentals TS v2, and could plausibly be merged into C++20, but for the rest, a Reflection TS published in the 2019-2020 timeframe is probably the best we can do.

The third was an updated version of the metaclasses proposal. To recap, metaclasses are compile-time transformations that can be applied to a class definition, producing a transformed class (and possibly other things like helper classes / functions). At the last meeting, SG 7 discussed how a metaclass should be defined, and decided on it operating at the “value level” (where the input and output types are represented as meta-objects, and the metaclass itself is more or less just a constexpr function). At this meeting, SG 7 focused on the invocation syntax: how you apply a metaclass to your class. The syntax that appeared to have the greatest consensus was class<interface> Foo { ... }; (where interface is an example metaclass name).

SG 15 (Tooling)

The new Tooling Study Group (SG 15) held its inaugural meeting this week, also in an evening session.

Unsurprisingly, the meeting was well attended, and the people there had many, many different ideas for how C++ tooling could be improved, ranging from IDEs, through refactoring and code analysis tools, to build systems and package managers. Much of the meeting was spent trawling through this large idea space to try to narrow down and focus the group’s scope and mission.

One topic of discussion was, what is the best representation of code for tools to consume? Some argued that the source code itself is the only sufficiently general and powerful representation, while others were of the opinion that a more structured, easy-to-consume representation would be useful, e.g. because it would avoid every tool that consumes it being (or containing / invoking) a C++ parser. It was pointed out that the “binary module interface” representation that module files compile into may be a good representation for tools to consume, and we may want to standardize it. Others felt that instead of standardizing the representation, we should standardize an API for accessing it.

In the space of build systems and package managers, the group recognized that building “one build system” or “one package manager” to rule them all is unlikely to happen. Rather, a productive direction to focus efforts might be some sort of protocol that any build or package system can hook into, and produce some sort of metadata that different tools can consume. Clang implementers pointed out that compilation databases are a primitive form of this, but obviously there’s a lot of room for improvement.

In the end, the group articulated a mission: that in 10 years’ time, it would like the C++ community to be in a state where a “compiler-informed” (meaning, semantic-level) code analysis tool can run on a significant fraction of open-source C++ code out there. This implies having some sort of metadata format (that tells the tool “here’s how you run on this codebase”) that a significant enough fraction of open-source projects support. One concrete use case would be for the author of a C++ proposal that makes a breaking change to run a query over open-source projects and see how much breakage the change would cause; but of course the value of such infrastructure / tooling goes far beyond this use case.

It’s a fair question to ask what the committee’s role is in all this. After all, the committee’s job is to standardize the language and its libraries, and not peripheral things like build tools and metadata formats. Even the binary module interface format mentioned above couldn’t really be part of the standard’s normative wording. However, a format / representation / API could conceivably be published in the form of a Standing Document. Beyond that, the Study Group can serve as a place to coordinate development and specification efforts for various peripheral tools. Finally, the Standard C++ Foundation (a nonprofit consortium that contributes to the funding of some committee meetings) could play a role in funding critical tooling projects.

New Study Group: SG 16 (Unicode)

The committee has decided to form a new study group for Unicode and Text Handling. This group will take ownership of proposals such as std::text and std::text_view (types for representing text that know their encoding and expose functions that operate at the level of code points and grapheme clusters), and other proposals related to text handling. The first meeting of this study group is expected to take place at a subsequent committee meeting this year.

Conclusion

I think this was a productive meeting with good progress made on many fronts. For me, the highlights of the meeting included:

  • Tackling important questions about Modules, such as how to transition large existing codebases, and what to do about macros.
  • C++20 gaining foundational Concepts for its standard library, with the rest of the Ranges TS hopefully following soon.
  • C++20 gaining a standard calendar and timezone library.
  • An earnest design discussion about Coroutines, which may see an improved design brought forward at the next meeting.

The next meeting of the Committee will be in Rapperswil, Switzerland, the week of June 4th, 2018. Stay tuned for my report!

Other Trip Reports

Some other trip reports about this meeting include Vittorio Romeo’s, Guy Davidson’s (who’s a coauthor of the 2D graphics proposal and gives some more details about its presentation), Bryce Lelbach’s, Timur Doumler’s, Ben Craig’s, and Daniel Garcia’s. I encourage you to check them out as well!

Featured Song: Octavarium

I mentioned in my last Featured Song post that I’ve been dabbling in progressive metal. That dabbling led to my (re-)discovery of Dream Theater, one of the genre’s defining bands.

I say re-discovery, because I had listened to some Dream Theater over a decade ago, including some of their most popular songs at the time, “Pull Me Under” and “Metropolis” (both from their 1992 album Images and Words). I liked them somewhat, but also found them a bit tedious / boring, and ultimately wasn’t motivated to check out more of their work.

More recently, however, a comment on the “Elysium” video I featured last time prompted me to check out some of their more recent albums, such as Octavarium (2005) and Systematic Chaos (2007), and, thanks to the evolution of both their style and my tastes over time, my impression was quite different: this stuff is great!

The title track of Octavarium, in particular, captivated me immediately, and that is what I’m featuring today.

At 24 minutes (26 in the live performance I’m linking to), this is the longest song I’ve featured to date (I promise, they will get shorter going forward!), and yet I do not find this song tedious at all – each part of it is different and interesting in its own right, and contributes to a very satisfying whole.

One particularly notable passage from this song is the extended fingerboard intro. While I’m not generally a huge fan of synthetic sounds, this passage in this piece is really well placed, and sets the atmosphere for the rest of the song perfectly.

My favourite part of the song, though, is “Intervals” (beginning at around 16:42 in the linked video) – the slow build-up of tension that leads to the screamed “TRAPPED INSIDE THIS OCTAVARIUM” lines – and the climactic sequence / dénouement that follows and takes you to the end of the song.

(What is an “octavarium”, you ask? The only prior use of the term that I could find was in the name of a liturgical book, but the root word is “octave”, and if you listen to the lyrics, the notion of cycles and the end being the beginning (a property which musical octaves have) comes up repeatedly – so I interpret “trapped inside this octavarium” as meaning “trapped in a cycle you can’t break out of”.)

My one complaint about progressive metal is that some of the extended keyboard / guitar solos (such as, in this song, the ones in the 2-3 minutes leading up to the “Intervals” section), while being technically challenging and intricate, lack some of the “interestingness” (for lack of a better word) of similar solos in power metal. For example, DragonForce’s guitar solos, while being every bit as fast and technically intricate as Dream Theater’s, also have a sense of “movement” that the latter seem to lack. I think this is what I disliked about older Dream Theater songs like “Pull Me Under”, and I like “Octavarium” so much because it covers a lot of other stylistic ground.

Without further ado, I invite you to enjoy this live performance of “Octavarium”:



As one might expect from a song of this length, there is a lot of speculation / discussion of exactly what meaning it intends to convey. If you’re interested in that, or just want to see the lyrics, check out its SongMeanings page.

Control Flow Visualizer (CFViz): an rr / gdb plugin

rr (short for “record and replay”) is a very powerful debugging tool for C++ programs, or programs written in other compiled languages like Rust1. It’s essentially a reverse debugger, which allows you to record the execution of a program, and then replay it in the debugger, moving forwards or backwards in the replay.

I’ve been using rr for Firefox development at Mozilla, and have found it to be enormously useful.

One task that comes up very often while debugging is figuring out why a function produced a particular value. In rr, this is often done by going back to the beginning of the function, and then stepping through it line by line.

This can be tedious, particularly for long functions. To help automate this task, I wrote – in collaboration with my friend Derek Berger, who is learning Rust – a small rr plugin called Control Flow Visualizer, or CFViz for short.

To illustrate CFViz, consider this example function foo() and a call site for it:

example code
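
(The original snippet isn’t reproduced here; as a purely hypothetical stand-in for the kind of function the plugin is useful on, foo() might look something like this:)

    #include <vector>

    // Hypothetical example: several ways to return, plus a loop whose "hotness" varies.
    int foo(const std::vector<int>& values, int threshold) {
        if (values.empty())
            return -1;                  // early return: was this the path taken?
        int count = 0;
        for (int v : values) {          // how many times did this loop run?
            if (v > threshold)          // which side of this branch ran, and how often?
                ++count;
        }
        if (count == 0)
            return 0;
        return count;
    }

    // A call site:
    // int result = foo(data, 10);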

With the CFViz plugin loaded into rr, if you invoke the command cfviz while broken anywhere in the call to foo() during a replay, you get the following output:

example output

Basically, the plugin illustrates what path control flow took through the function, by coloring each line of code based on whether and how often it was executed. This way, you can tell at a glance things like:

  • which of several return statements produced the function’s return value
  • which conditional branches were taken during the execution
  • which loops inside the function are hot (were executed many times)

saving you the trouble of having to step through the function to determine this information.

CFViz’s implementation strategy is simple: it uses gdb’s Python API to step through the function of interest and see which lines were executed in what order. It then passes that information to a small Rust program which handles the formatting and colorization of the output.

While designed with rr in mind, CFViz also works with vanilla gdb, with the limitation that it will only visualize the rest of the function’s execution from the point where it was invoked (since, without rr, it cannot go backwards to the function’s starting point).

I’ve found CFViz to be quite useful for debugging Firefox’s C++ code. Hope you find it useful too!

CFViz is open source. Bug reports, patches, and other contributions are welcome!

Footnotes

1. rr also has a few important limitations: it only runs on Intel CPUs, and only on Linux (although there is a similar tool called Time-Travel Debugging for Windows)

A response to “Net Neutrality. No big deal.”

I recently watched this video titled “Net Neutrality. No big deal.” by Bryan Lunduke.

I watched this video because, while I am in favour of Net Neutrality, and concerned about the impending repeal of Net Neutrality regulations in the United States, I consciously try to avoid being stuck in an echo chamber of similar views, and try to expose myself to opposing viewpoints as well; particularly when such opposing viewpoints are held by a figure in a community that I respect and identify with (in Bryan’s case, the Linux and free software community).

I found the video interesting and well-presented, but I didn’t find Bryan’s arguments convincing. I decided to write this post to respond to two of the arguments Bryan makes in particular.

The first argument I wanted to address was about the fact that some of the largest companies that are lobbying to keep Net Neutrality rules in place – Google, Netflix, and Microsoft – are also supporters of DRM. Bryan argues that, since these companies support DRM, which is a threat to a free and open internet, we should not take their support for Net Neutrality (which they claim to also be motivated by a desire for a free and open internet) at face value; rather, they only support Net Neutrality regulations because they have a financial incentive to do so (namely, they run high-bandwidth streaming and similar services that are likely to be first in line to be throttled in a world without Net Neutrality protections).

I don’t dispute that companies like Google, Netflix, and Microsoft support Net Neutrality for selfish reasons. Yes, the regulations affect their bottom line, at least in the short term. But that doesn’t mean there aren’t also good reasons for supporting Net Neutrality. Many organizations – like the Electronic Frontier Foundation, which I have no reason to suspect of being beholden to the pocketbooks of large tech companies – have argued that Net Neutrality is, in fact, important for a free and open internet. The fact that Netflix supports it for a different reason doesn’t make that any less the case.

I also think that comparing DRM and the lack of Net Neutrality in this way confuses the issue. Yes, both are threats to a free and open internet, but I think they are qualitatively very different.

To explain why, let’s model an instance of communication over the internet as being between two parties: a sender or producer of the communication, and its receiver or consumer. DRM exists to give the producer control over how the communication is consumed. There are many problems with DRM, but at least it is not intended to interfere with communication in cases where the producer and consumer agree on the terms (e.g. the price, or lack thereof) of the exchange1.

By contrast, in a world without Net Neutrality rules, an intermediary (such as an ISP) can interfere with (such as by throttling) communication between two parties even when the two parties agree on the terms of the communication. This potentially opens the door to all manner of censorship, such as interfering with the communications of political activists. I see this as being a much greater threat to free communication than DRM.

(I also find it curious that Bryan seems to focus particularly on the standardization of DRM on the Web as being objectionable, rather than DRM itself. Given that DRM exists regardless of whether or not it’s standardized on the Web, the fact that it is standardized on the Web is a good thing, because it enables the proprietary software that implements the DRM to be confined to a low-privilege sandbox in the user’s browser, rather than having “full run of the system” as pre-standardization implementations of DRM like Adobe Flash did. See this article for more on that topic.)

The second argument Bryan makes that I wanted to address was that Net Neutrality rules mean the U.S. government being more involved in internet communications, such as by monitoring communications to enforce the rules.

I don’t buy this argument for two reasons. First, having Net Neutrality rules in place does not mean that internet communications need to be proactively monitored to enforce the rules. The role of the government could very well be limited to investigating and correcting violations identified and reported by users (or organizations acting on behalf of users).

But even if we assume there will be active monitoring of internet communications to enforce the rules, I don’t see that as concerning. Let’s not kid ourselves: the U.S. government already monitors all internet communications it can get its hands on; axing Net Neutrality rules won’t cause them to stop. Moreover, users already have a way to protect the content of their communications (and, if desired, even the metadata, using tools like Tor) from being monitored: encryption. Net Neutrality rules don’t change that in any way.

In sum, I enjoyed watching Bryan’s video and I always appreciate opposing viewpoints, but I didn’t find the arguments that Net Neutrality is not a big deal convincing. For the time being, I continue to believe that the impending rollback of U.S. Net Neutrality rules is a big deal.

Footnotes

1. I am thinking here of cases where the content being communicated is original content, that is, content originated by the producer. I am, of course, aware that DRM can and does interfere with the ability of two parties to communicate content owned by a third party, such as sending a movie to a friend. To be pedantic, DRM can even interfere with communication of original content in cases where such content is mistakenly identified as belonging to a third party. I’m not saying DRM is a good thing – I’m just saying it doesn’t rise to the same level of threat to free communication as not having Net Neutrality protections does.

Trip Report: C++ Standards Meeting in Albuquerque, November 2017

Summary / TL;DR

Project | What’s in it? | Status
C++17 | See below | Publication imminent
Library Fundamentals TS v2 | source code information capture and various utilities | Published!
Concepts TS | Constrained templates | Merged into C++20 with some modifications
Parallelism TS v2 | Task blocks, library vector types and algorithms and more | Nearing feature-completion; expect PDTS ballot at next meeting
Transactional Memory TS | Transaction support | Published! Not headed towards C++20
Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Parts of it headed for C++20
Concurrency TS v2 | See below | Under active development
Networking TS | Sockets library based on Boost.ASIO | Publication imminent
Ranges TS | Range-based algorithms and views | Publication imminent
Coroutines TS | Resumable functions, based on Microsoft’s await design | Publication imminent
Modules TS | A component system to supersede the textual header file inclusion model | Resolution of comments on Proposed Draft in progress
Numerics TS | Various numerical facilities | Under active development; no new progress
Graphics TS | 2D drawing API | Under active design review; no new progress
Reflection | Code introspection and (later) reification mechanisms | Introspection proposal awaiting wording review. Targeting a Reflection TS.
Contracts | Preconditions, postconditions, and assertions | Proposal under wording review

Some of the links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of November 27, 2017). If you encounter such a link, please check back in a few days.

Introduction

A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Albuquerque, New Mexico. This was the third committee meeting in 2017; you can find my reports on previous meetings here (February 2017, Kona) and here (July 2017, Toronto). These reports, particularly the Toronto one, provide useful context for this post.

With the final C++17 International Standard (IS) having been voted for publication, this meeting was focused on C++20, and the various Technical Specifications (TS) we have in flight, most notably Modules.

What’s the status of C++17?

The final C++17 International Standard (IS) was sent off for publication in September. The final document is based on the Draft International Standard (DIS), with only minor editorial changes (nothing normative) to address comments on the DIS ballot; it is now in ISO’s hands, and official publication is imminent.

In terms of implementation status, the latest versions of GCC and Clang both have complete support for C++17, modulo bugs. MSVC is said to be on track to be C++17 feature-complete by March 2018; if that ends up being the case, C++17 will be the quickest standard version to date to be supported by all three major compilers.

C++20

This is the second meeting that the C++20 Working Draft has been open for changes. (To use a development analogy, think of the current Working Draft as “trunk”; it was opened for changes as soon as C++17 “branched” earlier this year). Here, I list the changes that have been voted into the Working Draft at this meeting. For a list of changes voted in at the previous meeting, see my Toronto report.

Technical Specifications

In addition to the C++ International Standard, the committee publishes Technical Specifications (TS), which can be thought of as “feature branches” (to continue the development analogy from above), where provisional specifications for new language or library features are published and the C++ community is invited to try them out and provide feedback before final standardization.

At the last meeting, we published three TSes: Coroutines, Ranges, and Networking. The next step for these features is to wait for a while (usually at least a year) to give users and implementers a chance to try them out and provide feedback. Once we’re confident the features are ripe for final standardization, they will be merged into a future version of the International Standard (possibly C++20).

Modules TS

The Modules TS made significant progress at the last meeting: its Proposed Draft (PDTS) was published and circulated for balloting, a process where national standards bodies evaluate, vote on, and submit comments on a proposed document. The ballot passed, but numerous technical comments were submitted that the committee intends to address before final publication.

A lot of time at this meeting was spent working through those comments. Significant progress was made, but not enough to vote out the final published TS at the end of the meeting. The Core Working Group (CWG) intends to hold a teleconference in the coming months to continue reviewing comment resolutions. If they get through them all, a publication vote may happen shortly thereafter (also by teleconference); otherwise, the work will be finished, and the publication vote held, at the next meeting in Jacksonville.

I summarize some of the technical discussion about Modules that took place at this meeting below.

The state of Modules implementation is also progressing: in addition to Clang and MSVC, Facebook has been contributing to a GCC implementation.

Parallelism TS v2

The Parallelism TS v2 is feature-complete, with one final feature, a template library for parallel for loops, voted in at this meeting. A vote to send it out for its PDTS ballot is expected at the next meeting.

Concurrency TS v2

The Concurrency TS v2 (no working draft yet) continues to be under active development. Three new features targeting it have received design approval at this meeting: std::cell, a facility for deferred reclamation; apply() for synchronized_value; and atomic_ref. An initial working draft that consolidates the various features slated for the TS into a single document is expected at the next meeting.

Executors, slated for a separate TS, are making progress: the Concurrency Study Group approved the design of the unified executors proposal, thereby breaking the logjam that has been holding the feature up for a number of years.

Stackful coroutines remain a beast of their own. I’ve previously reported them to be slated for the Concurrency TS v2; I’m not sure whether that’s still the case. They change the semantics of code in ways that impact the core language, and thus need to be reviewed by the Evolution Working Group; one potential concern is that the proposal may not be implementable on all platforms (iOS came up as a concrete example during informal discussion). For the time being, the proposal is still being looked at by the Concurrency Study Group, where there continues to be strong interest in standardizing them in some form, but the details remain to be nailed down; I believe the latest development is that an older API proposal may end up being preferred over the latest call/cc one.

Future Technical Specifications

There are some planned future Technical Specifications that don’t have an official project or working draft yet:

Reflection

The static introspection / “reflexpr” proposal (see its summary, design, and specification for details), headed for a Reflection TS, has been approved by the Evolution and Library Evolution Working Groups, and is awaiting wording review. The Reflection Study Group (recently renamed to “Compile-Time Programming Study Group”) approved an extension to it, concerning reflection over functions, at this meeting.

There are more reflection features to come beyond what will be in the static introspection TS. One proposal that has been drawing a lot of attention is metaclasses, an updated version of which was reviewed at this meeting (details below).

Graphics

I’m not aware of much new progress on the planned Graphics TS (containing 2D graphics primitives inspired by cairo) since the last meeting. The latest draft spec can be found here, and is still on the Library Evolution Working Group’s plate.

Numerics

Nothing particularly new to report here either; the Numerics Study Group did not meet this week. The high-level plan for the TS remains as outlined previously. There are concrete proposals for several of the listed topics, but no working draft for the TS yet.

Other major features

Concepts

As I related in my previous report, Concepts was merged into C++20, minus abbreviated function templates (AFTs) and related features which remain controversial.

I also mentioned that there will likely be future proposals to get back AFTs in some modified form that addresses the main objection to them (that knowing whether a function is a template or not requires knowing whether the identifiers in its signature name types or concepts). Two such proposals were submitted in advance of this meeting; interestingly, both of them proposed a very similar design: an adjective syntax where, in an AFT, a concept name would act as an adjective tacked onto the thing it’s constraining – most commonly, for a type concept, typename or auto. So instead of void sort(Sortable& s);, you’d have void sort(Sortable& auto s);, and that makes it clear that a template is being defined.

These proposals were not discussed at this meeting, because some of the authors of the original Concepts design could not make it to the meeting. I expect a lively discussion in Jacksonville.

Now that Concepts are in the language, the question of whether new library proposals should make use of them naturally arose. The Library Evolution Working Group’s initial guidance is “not yet”. The reason is that most libraries require some foundational concepts to build their more specific concepts on top of, and we don’t want different library proposals to duplicate each other / reinvent the wheel in that respect. Rather, we should start by adding a well-designed set of foundational concepts, and libraries can then start building on top of those. The Ranges TS is considered a leading candidate for providing that initial set of foundational concepts.

Operator Dot

I last talked about overloading operator dot a year ago, when I mentioned that there are two proposals for this: the original one, and an alternative approach that achieves a similar effect via inheritance-like semantics.

There hasn’t been much activity on those proposals since then. I think that’s for two reasons. First, the relevant people have been occupied with Concepts. Second, as the reflection proposals develop, people are increasingly starting to see them as a more general mechanism to satisfy operator dot’s use cases. The downside, of course, is that reflection will take longer to arrive in C++, while one of the above two proposals could plausibly have been in C++20.

Evolution Working Group

I’ll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week.

All proposals discussed in EWG at this meeting were targeting C++20 (except for Modules, where we discussed some changes targeting the Modules TS). I’ve categorized them into the usual “accepted”, “further work encouraged”, and “rejected” categories:

Accepted proposals:

  • Standardizing feature test macros (and another paper effectively asking for the same thing). Feature test macros are macros like __cpp_lambdas that tell you whether your compiler or standard library supports a particular feature without having to resort to the more indirect approach of having a version check for each of your supported compilers. The committee maintains a list of them, but they’re not an official part of the standard, and this has led some implementations to refuse to support them, thus significantly undermining their usefulness. To rectify this, it was proposed that they be made part of the official standard. This was first proposed at the last meeting, but failed to gain consensus at that time. It appears that people have since been convinced (possibly by the arguments laid out in the linked papers), as this time around EWG approved the proposal. (A short usage sketch follows this list.)
  • Bit-casting object representations. This is a library proposal, but EWG was asked for guidance regarding making this function constexpr, which requires compiler support. EWG decided that it could be made constexpr for all types except a few categories – unions, pointers, pointers-to-members, and references – for which that would have been tricky to implement.
    • As a humorous side-note about this proposal, since it could only apply to “plain old data” types (more precisely, trivially copyable types; as mentioned above, “plain old data” was deprecated as a term of art), one of the potential names the authors proposed for the library function was pod_cast. Sadly, this was voted down in favour of bit_cast.
  • Language support for empty objects. This addresses some of the limitations of the empty base optimization (such as not being able to employ it with types that are final or otherwise cannot be derived from) by allowing data members to opt out, via the [[no_unique_address]] attribute, of the rule that requires them to occupy at least 1 byte. The resulting technique is called the “empty member optimization”. (See the sketch after this list.)
  • Efficient sized delete for variable-sized classes. I gave some background on this in my previous post. The authors returned with sign-off from all relevant implementers, and a clearer syntax (the “destroying delete” operator is now identified by a tag type, as in operator delete(Type*, std::destroying_delete_t)), and the proposal was approved.
  • Attributes for likely and unlikely statements. This proposal has been updated as per previous EWG feedback to allow placing the attribute on all statements. It was approved with one modification: placing the attribute on a declaration statement was forbidden, because other attributes on declaration statements consistently apply to the entity being declared, not the statement itself.
  • Deprecate implicit capture of *this. Only the implicit capture of *this via [=] was deprecated; EWG felt that disallowing implicit capture via [&] would break too much idiomatic code.
  • Allow pack expansions in lambda init-capture. There was no compelling reason to disallow this, and the workaround of constructing a tuple to store the arguments and then unpacking it is inefficient.
  • String literals as template parameters. This fixes a longstanding limitation in C++ where there was previously no way to do compile-time processing of strings in such a way that the value of the string could affect the type of the result (as an example, think of a compile-time regex parsing library where the resulting type defines an efficient matcher (DFA) for the regex). The syntax is very simple: template <auto& String>; the auto then gets deduced as const char[N] (or const char16_t[N] etc. depending on the type of the string literal passed as argument) where N is the length of the string. (You can also write template <const char (&String)[N]> if you know N, but you can’t write template <size_t N, const char (&String)[N]> and have both N and String deduced from a single string literal template argument, because EWG did not want to create a precedent for a single template argument matching two template parameters. That’s not a big deal, though: using the auto form, you can easily recover N via traits, and even constrain the length or the character type using a requires-clause.)
  • A tweak to the Contracts proposal. An issue came up during CWG review of the proposal regarding inline functions with assertion checks inside them: what should happen if the function is called from two translation units, one of which is compiled with assertion checks enabled and one of them not? EWG’s answer was that, as with NDEBUG today, this is technically an ODR (one definition rule) violation. The behaviour in practice is fairly well understood: the linker will pick one version or the other, and that version will be used by both translation units. (There are some potential issues with this: what if, while compiling a caller in one of the translation units, the optimizer assumed that the assertion was checked, but the linker picks the version where the assertion isn’t checked? That can result in miscompilation. The topic remains under discussion.)
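
To make the feature test macro item above concrete, here is a minimal usage sketch; __cpp_lambdas and its value come from the committee’s existing list, while the surrounding code is purely illustrative:

    // Test for the language feature directly, rather than keeping a table of compiler versions.
    #if defined(__cpp_lambdas) && __cpp_lambdas >= 200907
    auto square = [](int x) { return x * x; };
    #else
    struct Square { int operator()(int x) const { return x * x; } };
    Square square;
    #endif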
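
And a minimal sketch of the “empty member optimization” enabled by [[no_unique_address]] (the types here are illustrative, and the attribute permits, but does not require, the layout optimization):

    struct Empty {};   // e.g. a stateless allocator, comparator, or policy

    struct Widget {
        [[no_unique_address]] Empty policy;  // may overlap with 'data' instead of occupying its own byte
        int data;
    };

    // On implementations that perform the optimization, sizeof(Widget) == sizeof(int),
    // even if Empty were declared final and thus unusable with the empty *base* optimization.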

There were also a few that, after being accepted by EWG, were reviewed by CWG and merged into the C++20 working draft the same week, and thus I already mentioned them in the C++20 section above:

  • Fixing small-ish functionality gaps in concepts. This consisted of three parts, two of which were accepted:
    • requires-clauses in lambdas. This was accepted.
    • requires-clauses in template template parameters. Also accepted.
    • auto as a parameter type in regular (non-lambda) functions. This was mildly controversial due to the similarity to AFTs, whose design is still under discussion, so it was deferred to be dealt with together with AFTs.
  • Access specifiers and specializations.
  • Deprecating “plain old data” (POD).
  • Default constructible and assignable stateless lambdas.

Proposals for which further work is encouraged:

    • Standard containers and constexpr. This is the latest version of an ongoing effort by compiler implementers and others to get dynamic memory allocation working in a constexpr context. The current proposal allows most forms of dynamic allocation and related constructs during constant evaluation: non-trivial destructors, new and delete expressions, placement new, and use of std::allocator; this allows reusing a lot of regular code, including code that uses std::vector, in a constexpr context. Direct use of operator new is not allowed, because that returns void*, and constant evaluation needs to track the type of dynamically allocated objects. There is also a provision to allow memory that is dynamically allocated during constant evaluation to survive to runtime, at which point it’s treated as static storage. EWG liked the direction (and particularly the fact that compiler writers were on the same page regarding its implementability) and encouraged development of a more concrete proposal along these lines. (A small sketch follows this list.)
    • Supporting offsetof for stable-layout classes. “Stable-layout” is a new proposed category of types, broader than “standard-layout”, for which offsetof could be implemented. EWG observed that the definition of “standard-layout” itself could be broadened a bit to include most of the desired use cases, and expressed a preference for doing that instead of introducing a new category. There was also talk of potentially supporting offsetof for all types, which may be proposed separately as a follow-up.
    • short float. This proposal for a 16-bit floating-point type was approved by EWG earlier this year, but came back for some reason. There was some re-hashing of previous discussions about whether the standard should mandate the size (16 bits) and IEEE behaviour.
    • Adding alias declarations to concepts. This paper proposed three potential enhancements to concept declarations to make writing concepts easier. EWG was not particularly convinced about the need for this, but believed at least the first proposal could be entertained given stronger motivation.
    • [[uninitialized]] attribute. This attribute is intended to suppress compiler warnings about variables that are declared but not initialized in cases where this is done intentionally, thus facilitating the use of such warnings in a codebase to catch unintentional cases. EWG pointed out that most compilers these days warn not about uninitialized declarations, but uninitialized uses. There was also a desire to address the broader use case of allocating dynamic memory that is purposely uninitialized (e.g. std::vector<char> buffer(N) currently zero-initializes the allocated memory).
    • Relaxed incomplete multidimensional array type declaration. This is a companion proposal to the std::mdspan library proposal, which is a multi-dimensional array view. It would allow writing things like std::mdspan<double[][][]> to denote a three-dimensional array where the size in each dimension is determined at runtime. Note that you still would not be able to create an object of type double[][][]; you could only use it in contexts that do not require creating an object, like a template argument. Basically, mdspan is trying to (ab)use array types as a mini-DSL to describe its dimensions, similar to how std::function uses function types as a mini-DSL to describe its signature. This proposal was presented before, when mdspan was earlier in its design stage, and EWG did not find it sufficiently motivating. Now that mdspan is going forward, the authors tried again. EWG was open to entertaining the idea, but only if technical issues such as the interaction with template argument deduction are ironed out.
    • Class types in non-type template parameters. This has been proposed before, but EWG was stuck on the question of how to determine equivalence (something you need to be able to do for template arguments) for values of class types. Now, operator<=> has given us a way to move forward on this question, basically by requiring that class types used in non-type template parameters have a defaulted operator<=>. It was observed that there is some overlap with the proposal to allow string literals as template parameters (since one way to pass a character array as a template parameter would be to wrap it in a struct), but it seemed like they also each have their own use cases and there may be room for both in the language.
    • Dynamic library loading. The C++ standard does not talk about dynamic libraries, but some people would find it useful to have a standardized library interface for dealing with them anyways. EWG was asked for input on whether it would be acceptable to standardize a library interface without saying too much about its semantics (since specifying the semantics would require that the C++ standard start talking about dynamic libraries, and specifying their behaviour in relation to exceptions, thread-local storage, the One Definition Rule, and so on). EWG was open to this direction, but suggested that the library interface be made much more general, as in its current incarnation it seemed to be geared towards certain platforms and unimplementable on others.
    • Various proposed extensions to the Modules TS, which I talk about below.
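
    Coming back to the first item in that list (standard containers and constexpr), here is a minimal sketch of the kind of code this direction aims to allow; it is written the way the feature eventually landed in C++20, so treat it as an illustration of the goal rather than of the exact proposal:

        #include <vector>

        // The vector is created, grown, and destroyed entirely during constant evaluation;
        // no dynamic allocation survives to runtime in this example.
        constexpr int sum_first(int n) {
            std::vector<int> v;
            for (int i = 1; i <= n; ++i)
                v.push_back(i);
            int total = 0;
            for (int x : v)
                total += x;
            return total;
        }

        static_assert(sum_first(4) == 10);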

    There was also a proposal for recursive lambdas that wasn’t discussed because its author realized it needed some more work first.

Rejected proposals:

    • A proposed trait has_padding_bits, the need for which came up during review of an atomics-related proposal by the Concurrency Study Group. EWG expressed a preference for an alternative approach that removed the need for the trait by putting the burden on compiler implementers to make things work correctly.
    • Attributes for structured bindings. This was proposed previously and rejected on the basis of insufficient motivation. The author came back with additional motivation: thread-safety attributes such as [[guarded_by]] or [[locks_held]]. However, it was pointed out that the individual bindings are just aliases to fields of an (unnamed) object, so it doesn’t make sense to apply attributes to them; attributes can be applied to the deconstructed object as a whole, or to one of its fields at the point of the field’s declaration.
    • Keeping the alias syntax extendable. This proposed reverting the part of the down with typename! proposal, approved at the last meeting, that allowed omitting the typename in using alias = typename T::type; where T was a dependent type. The rationale was that even though today only a type is allowed in that position (thus making the typename disambiguator redundant), this prevents us from reusing the same syntax for expression aliases in the future. EWG already considered this, and didn’t find it compelling: the preference was to make the “land grab” for a syntax that is widely used today, instead of keeping it in reserve for a hypothetical future feature.
    • Forward without forward. The idea here is to abbreviate the std::forward<decltype(x)>(x) boilerplate that often occurs in generic code, to >>x (i.e. a unary >> operator applied to x). EWG sympathized with the desire to eliminate this boilerplate, but felt that >>, or indeed any other unary operator, would be too confusing of a syntax, especially when occurring after an = in a lambda init-capture (e.g. [foo=>>foo](...){ ... }). EWG was willing to entertain a keyword instead, but the best people could come up with was fwdexpr and that didn’t have consensus; as a result, the future of this proposal is uncertain.
    • Relaxing the rules about invoking an explicit constructor with a braced-init-list. This would have allowed, among a few other changes, writing return {...}; instead of return T{...}; in a function whose declared return type is T, even if the invoked constructor was explicit. This has been proposed before, but rejected on the basis that it makes it easy to introduce bugs (see e.g. this response). The author proposed addressing those concerns by introducing some new rules to limit the cases in which this was allowed, but EWG did not find the motivation sufficiently compelling to further complicate C++’s already complex initialization rules.
    • Another attempt at standardizing arrays of runtime bound (ARBs, a pared-down version of C’s variable-length arrays), and a C++ wrapper class for them, stack_array. ARBs and a wrapper class called dynarray were previously headed for standardization in the form of an Array Extensions TS, before the project was scrapped because dynarray was found to be unimplementable. This proposal would solve the implementability concerns by restricting the usage of stack_array (e.g. it couldn’t be used as a class member). EWG was concerned that the restrictions would result in a type that’s not very usable. (It was pointed out that a design to make such a type more composable was proposed previously, but the author didn’t have time to pursue it further.) Ultimately, EWG didn’t feel that this proposal had a better chance of succeeding than the last time standardization of ARBs was attempted. However, a future direction that might be more promising was outlined: introducing a core language “allocation expression” that allocates an unnamed (and runtime-sized) stack array and returns a non-owning wrapper, such as a std::span, to access it.
    • A modern C++ signature for main(). This would have introduced a new signature for main() (alongside the existing allowed signatures) that exposed the command-line arguments using an iterable modern C++ type rather than raw pointers (the specific proposal was int main(std::initializer_list<std::string_view>)). EWG was not convinced that such a thing would be easier to use and learn than int main(int argc, char* argv[]);. It was suggested that instead, a trivial library facility that took argc and argv as inputs and exposed an iterable interface could be provided; alternatively (or in addition), a way to access command-line arguments from anywhere in the program (similar to Rust’s std::env::args()) could be explored.
    • Abbreviated lambdas for fun and profit. This proposal would introduce a new abbreviated syntax for single-expression lambdas; a previous version of it was presented and largely rejected in Kona. Not much has changed to sway EWG’s opinion since then; if anything, additional technical issues were discovered.

      For example, one of the features of the abbreviated syntax is “automatic SFINAE”. That is, [x] => expr would mean [x] -> decltype(expr) { return expr; }; the appearance of expr in the return type rather than just the body would mean that a substitution failure in expr wouldn’t be a hard error, it would just remove the function overload being considered from the overload set (see the paper for an example). However, it was pointed out that in e.g. [x] -> decltype(x) { return x; }, the x in the decltype and the x in the body refer to two different entities: the first refers to the variable in the enclosing scope that is captured, and the second to the captured copy. If we try to make [x] => x “expand to” that, then we get into a situation where the x in the abbreviated form refers to two different entities for two different purposes, which would be rather confusing. Alternatively, we could say in the abbreviated form, x refers to the captured copy for both purposes, but then we are applying SFINAE in new scenarios, and some implementers are strongly opposed to that.

      It was also pointed out that the abbreviated form’s proposed return semantics were “return by reference”, while regular lambdas are “return by value” by default. EWG felt it would be confusing to have two different defaults like this.
    • Making the lambda capture syntax more liberal in what it accepts. C++ currently requires that in a lambda capture list, the capture-default, if present, come before any explicit captures. This proposal would have allowed them to be written in any order; in addition, it would have allowed repeating variables that are covered by the capture-default as explicit captures for emphasis. EWG didn’t find the motivation for either of these changes compelling.
    • Lifting overload sets into objects. This is a resurrection of an earlier proposal to allow passing around overload sets as objects. It addressed previous concerns with that proposal by making the syntax more explicit: you’d pass []f rather than just f, where f was the name of the overloaded function. There were also provisions for passing around operators, and functions that performed member access. EWG’s feedback was that this proposal seems to be confused between two possible sets of desired semantics:
      1. a way to build super-terse lambdas, which essentially amounts to packaging up a name; the overload set itself isn’t formed at the time you create the lambda, only later when you instantiate it
      2. a way to package and pass around overload sets themselves, which would be formed at the time you package them

      EWG didn’t have much of an appetite for #1 (possibly because it had just rejected another terse-lambda proposal), and argued that #2 could be achieved using reflection.

    Discussion papers

    There were also a few papers submitted to EWG that weren’t proposals per se, just discussion papers.

    These included a paper arguing that Concepts does not significantly improve upon C++17, and a response paper arguing that it in fact does. The main issue was whether Concepts delivers on its promise of making template error messages better; EWG’s consensus was that it does when compared to unconstrained templates, but perhaps not as much as one would hope when compared to C++17 techniques for constraining templates, like enable_if. There may be room for implementations (to date there is just the one in GCC) to do a better job here. (Of course, Concepts are also preferable over enable_if in other ways, such as being much easier to read.)
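
    As a rough illustration of the comparison being discussed (the concept-based version uses the spellings that shipped with Concepts and the standard concepts library in C++20; this is a sketch, not an example from the papers):

        #include <concepts>
        #include <type_traits>

        // C++17-style constraint: a failing call produces an error pointing at the enable_if machinery.
        template <typename T,
                  std::enable_if_t<std::is_integral_v<T>, int> = 0>
        T twice_enable_if(T x) { return x + x; }

        // Concept-constrained version: the unsatisfied requirement is named directly in the
        // diagnostic, and the declaration is considerably easier to read.
        template <typename T>
            requires std::integral<T>
        T twice_concept(T x) { return x + x; }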

    There was also a paper describing the experiences of the author teaching Concepts online. One of the takeaways here is that students don’t tend to find the variety of concept declaration syntaxes confusing; they tend to mix them freely, and they tend to like the abbreviated function template (AFT) syntax.

    Modules

    I mentioned above that a significant focus of the meeting was to address the national body comments on the Modules PDTS, and hopefully get to a publication vote on the final Modules TS.

    EWG looked at Modules on two occasions: first to deal with PDTS comments that had language design implications, and second to look at new proposals concerning Modules. The latter were all categorized as “post-TS”: they would not target the Modules TS, but rather “Modules v2”, the next iteration of Modules (for which the ship vehicle has not yet been decided).

    Modules TS

    The first task, dealing with PDTS comments in EWG, was a short affair. Any comment that proposed a non-trivial design change, or even remotely had the potential to delay the publication of the Modules TS, was summarily rejected (with the intention that the concern could be addressed in Modules v2 instead). It was clear that the committee leadership was intent on shipping the Modules TS by the end of the meeting, and would not let it get derailed for any reason.

    “That’s a good thing, right?” you ask. After all, the sooner we ship the Modules TS, the sooner people can start trying it out and providing feedback, and thus the sooner we can get a refined proposal into the official standard, right? I think the reality is a bit more nuanced than that. As always, it’s a tradeoff: if we ship too soon, we can risk shipping a TS that’s not sufficiently polished for people to reasonably implement and use it; then we don’t get much feedback and we effectively waste a TS cycle. In this case, I personally feel like EWG could have erred a bit more on the side of shipping a slightly more polished TS, even if that meant delaying the publication by a meeting (it ended up being delayed by at least a couple of months anyways). That said, I can also sympathize with the viewpoint that Modules has been in the making for a very long time and we need to ship something already.

    Anyways, for this reason, most PDTS comments that were routed to EWG were rejected. (Again, I should emphasize that this means “rejected for the TS”, not “rejected forever”.) The only non-rejection response that EWG gave was to comment US 041, where EWG confirmed that the intent was that argument-dependent lookup could find some non-exported entities in some situations.

    Of course, there were other PDTS comments that weren’t routed to EWG because they weren’t design issues; these were routed to CWG, and CWG spent much of the week looking at them. At one point towards the end of the week, CWG did consult EWG about a design issue that came up. The question concerned whether a translation unit that imports a module sees a class type declared in that module as complete or incomplete in various situations. Some of the possibilities that have to be considered here are whether the module exports the class’s forward declaration, its definition, or both; whether the module interface unit contains a definition of the class (exported or not) at all; and whether the class appears in the signature of an exported entity (such as a function) without itself being exported.

    There are various use cases that need to be considered when deciding the behaviour here. For example, a module may want to export functions that return or take as parameters pointers or references to a type that’s “opaque” to the module’s consumer, i.e. the module’s consumer can’t create an instance of such a class or access its fields; that’s a use case for exporting a type as incomplete. At the same time, the module author may want to avoid splitting her module into separate interface and implementation units at all, and thus wants to define the type in the interface unit while still exporting it as incomplete.

    The issue that CWG got held up on was that the rules as currently specified seemed to imply that in a consumer translation unit, an imported type could be complete and incomplete at the same time, depending on how it was named (e.g. directly vs. via decltype(f()) where it was the return type of a function f). Some implementers indicated that this would be a significant challenge to implement, as it would require a more sophisticated implementation model for types (where completeness was a property of “views of types” rather than of types themselves) that no existing language feature currently requires.

    Several alternatives were proposed which avoided these implementation challenges. While EWG was favourable to some of them, there was also opposition to making what some saw as a design change to the Modules TS at this late stage, so it was decided that the TS would go ahead with the current design, possibly annotated as “we know there’s a potential problem here”, and it would be fixed up in v2.

    I find the implications of this choice a bit unfortunate. It sounded like the implementers who described this model as a significant challenge are not planning to implement it (after all, it’s going to be fixed in v2; why redesign your compiler’s type system if ultimately you won’t need it). Other implementers may or may not implement this model. Either way, we’ll either have implementation divergence, or all implementations will agree on a de facto model that’s different from what the spec says. This is one of those cases where I feel like waiting to polish the spec a bit more, so that it’s not shipped in a known-to-be-broken state, may have been advisable.

    I mentioned in my previous report that I thought the various Modules implementers didn’t talk to each other enough about their respective implementation strategies. I still feel like that’s very much the case. I feel like discussing each other’s implementation approaches in more depth would have unearthed this issue, and allowed it to be dealt with, sooner.

    Modules v2

    Now moving on to the proposals targeting Modules v2 that EWG reviewed:

    • Two of them (module interface imports, and namespace pervasiveness and modules), it turned out, were already addressed in the Modules TS by changes made in response to PDTS comments.
    • Placement of module declarations. Currently, if a module unit contains declarations in the global module, the module declaration (which effectively “starts” the module) needs to go after those global declarations. However, this makes it more difficult for both humans and tools to find the module declaration. This paper proposes a syntax that allows having the module declaration be the first declaration in the file, while still having a way to place declarations in the global module. It was observed that this proposal would make it easier to make module a context-sensitive keyword, which has also been requested. EWG encouraged continued exploration in this direction. (A sketch of this and the partitions item below follows this list.)
    • Module partitions. This iterates on the previous module partitions proposal (found in this paper), with a new syntax: module basename : partition; (unlike in the previous version, partition here is not a keyword, it’s the partition’s name). EWG liked this approach as well. Module partitions also make proclaimed-ownership-declarations unnecessary, so those can be axed.
    • Making module names strings. Currently, module names are identifier sequences separated by dots (e.g. foo.bar.baz), with the dots not necessarily implying a hierarchical relationship; they are mapped onto files in an implementation-defined manner. Making them strings instead would allow mapping onto the filesystem more explicitly. There was no consensus for this change in EWG.
    • Making module a context-sensitive keyword. As always, making a common word like module a hard keyword breaks someone. In this case, it shows up as an identifier in many mature APIs like Vulkan, CUDA, DirectX 9, and others, and in some of these cases (like Vulkan) the name is enshrined into a published specification. In some cases, the problem can be solved by making the keyword context-sensitive, and that’s the case for module (especially if the proposal about the placement of module declarations is accepted). EWG agreed to make the keyword context-sensitive. The authors of this paper asked if this could be done for the TS rather than for Modules v2; that request was rejected, but implementers indicated that they would implement it as context-sensitive ASAP, thus avoiding problems in practice.
    • Modules TS does not support intended use case. The bulk of the concerns here were addressed in the Modules TS while addressing PDTS comments, except for a proposed extension to allow using-declarations with an unqualified name. EWG indicated it was open to such an extension for v2.
    • Two papers about support for exporting macros, which remains one of the most controversial questions about Modules. The first was a “rumination” paper, which was mostly arguing that we need a published TS and deployment experience before we can settle the question; the second argued that having deployed modules (clang’s pre-TS implementation) in a large codebase (Apple’s), it’s clear that macro support is necessary. A number of options for preserving hygiene, such as only exporting and importing individual macros, were discussed. EWG expressed a lukewarm preference to continuing to explore macro support, particularly with such fine-grained control for hygiene.
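
    For a flavour of where the placement and partitions items above are headed, here is a sketch of a module interface unit and a partition, spelled the way these directions eventually converged in C++20 (the file names and contents are hypothetical, and the Modules TS wording differs in details):

        // my_lib.cppm -- primary module interface unit
        module;                        // introduces the global module fragment: only preprocessing
        #include <string>              // directives (e.g. legacy #includes) may appear here
        export module my.lib;          // the module declaration now sits effectively at the top
        export import :helpers;        // pull in and re-export the partition declared below

        export std::string greet(const std::string& name);

        // my_lib_helpers.cppm -- a partition of my.lib ("basename : partition" spelling)
        export module my.lib:helpers;
        export int helper();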

    Other Working Groups

    The Library Evolution Working Group, as usual, reviewed a decent number of proposed new library features. While I can’t give a complete listing of the proposals discussed and their outcomes (having been in EWG all week), I’ll mention a few highlights of accepted proposals:

    Targeting C++20:

    std::span (formerly called array_view) is also targeting C++20, but has not quite gotten final approval from LEWG yet.

    Targeting the Library Fundamentals TS v3:

    • mdspan, a multi-dimensional array view. (How can a multi-dimensional array view be approved sooner than a single-dimensional one, you ask? It’s because mdspan is targeting a TS, but span is targeting the standard directly, so span needs to meet a higher bar for approval.)
    • std::expected<T>, a “value or error” variant type very similar to Rust’s Result.
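
    For a flavour of the “value or error” idea, here is a small sketch written against the interface std::expected eventually acquired in C++23 (the Library Fundamentals proposal differs in details):

        #include <expected>   // C++23 header; shown here only to illustrate the idea
        #include <string>

        std::expected<int, std::string> parse_digit(char c) {
            if (c < '0' || c > '9')
                return std::unexpected(std::string("not a digit"));
            return c - '0';   // a plain value implicitly becomes the "success" case
        }

        // Usage: auto r = parse_digit('7');
        //        if (r) { /* use *r */ } else { /* report r.error() */ }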

    Targeting the Ranges TS:

    • Range adaptors (“views”) and utilities. Range views are ranges that lazily consume elements from an underlying range, while performing an additional operation like transforming the elements or filtering them. This finally gives C++ a standard facility that’s comparable to C#’s LINQ (sans the SQL syntax), Java 8’s streams, or Rust’s iterators. C++11 versions of the facilities proposed here are available today in the range-v3 library (which was in turn inspired by Boost.Range).
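
    To give a small taste of what these views enable, here is a sketch using the std::views spellings that later shipped in C++20; the range-v3 equivalents available today are nearly identical:

        #include <ranges>
        #include <vector>
        #include <iostream>

        int main() {
            std::vector<int> xs{1, 2, 3, 4, 5, 6};

            // Lazily filter and transform; no intermediate containers are materialized.
            auto evens_squared = xs
                | std::views::filter([](int x) { return x % 2 == 0; })
                | std::views::transform([](int x) { return x * x; });

            for (int x : evens_squared)
                std::cout << x << ' ';   // prints: 4 16 36
        }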

    There was an evening session to discuss the future of text handling in C++. There was general agreement that it’s desirable to have a text handling library that has notions of code units, code points, and grapheme clusters; for many everyday text processing algorithms (like toupper), operating at the level of grapheme clusters makes the most sense. Regarding error handling, different people have different needs (safety vs. performance), and a policy-based approach to control error handling may be advisable. Some of the challenges include standard library implementations having to ship a database of Unicode character classifications, or hook into the OS’s database. The question of whether we should have a separate character type to represent UTF-8 encoded text, or just use char for that purpose, remains contentious.

    SG 7 (Compile-Time Programming)

    SG 7, the Compile-Time Programming (previously Reflection) Study Group met for an evening session.

    An updated version of a proposed extension to the static reflection proposal to allow reflecting over functions was briefly reviewed and sent onwards for review in EWG and LEWG at future meetings.

    The rest of the evening was spent discussing an updated version of the metaclasses proposal. To recap, a metaclass defines a compile-time transformation on a class, and can be applied to a class to produce a transformed class (possibly among other things, like helper classes / functions). The discussion focused on one particular dimension of the design space here: how the transformation should be defined. Three options were given:

    1. The metaclass operates on a mutable input class, and makes changes to it to produce the transformed class. This is how it worked in the original proposal.
    2. Like #1, but the metaclass operates on an immutable input class, and builds the transformed class from the ground up as its output.
    3. Like #2, but the metaclass code operates on the “meta level”, where the representation of the input and output types is an ordinary object of type meta::type. This dispenses with most of the special syntax of the first two approaches, making the metaclass look a lot like a normal constexpr function.

    SG 7 liked the third approach the best, noting that it not only dispenses with the need for the $ syntax (which couldn’t have been the final syntax anyways; it would have needed to be something uglier), but also makes the proposal more general (opening up more avenues for how and where you can invoke/apply the metaclass), and more in line with the preference the group previously expressed to have reflection facilities operate on a homogeneous value representation of the program’s entities.

    Discussion of other dimensions of the design space, such as what the invocation syntax for metaclasses should look like (i.e. how you apply them to a class), was deferred to future meetings.

    SG 12 (Undefined Behaviour and Vulnerabilities)

    SG 12, the Undefined Behaviour Study Group, recently had its scope expanded to also cover documenting vulnerabilities in the C++ language, and ways to avoid them.

    This latter task is a joint effort between SG 12 and WG 23, a sister committee of the C++ Standards Committee that deals with vulnerabilities in programming languages in general. WG 23 produces a core document that catalogues vulnerabilities in a language-agnostic way, along with language-specific annexes for a number of programming languages. For the last couple of meetings, WG 23 has been collaborating with our SG 12 to produce a C++ annex; the two groups met for that purpose for two days during this meeting. The C++ annex is at a pretty early stage, but over time it has the potential to grow to be a comprehensive document outlining many interesting types of vulnerabilities that C++ programmers can run into, and how to avoid them.

    SG 12 also had a meeting of its own, where it looked at a proposal to make certain low-level code patterns that are widely used but technically have undefined behaviour, have defined behaviour instead. This proposal was reviewed favourably.

    C++ Stability and Velocity

    On Friday evening, there was a session to discuss the stability and velocity of C++.

    One of the focuses of the session was reviewing the committee’s policy on deprecating and removing old features that are known to be broken or that have been superseded by better alternatives. Several language features (e.g. dynamic exception specifications) and library facilities (e.g. std::auto_ptr) have been deprecated and removed in this way.

    One of the library facilities that were removed in C++17 was the deprecated “binders” (std::bind1st and std::bind2nd). These have been superseded by the C++11 std::bind, but – unlike, say, auto_ptr – they aren’t problematic or dangerous in any way. It was argued that the committee should not deprecate features like that, because it causes unnecessary code churn and maintenance cost for codebases whose lifetime and update cycle are very long (on the order of decades); embedded software such as an elevator control system was brought up as a specific example.
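
    As an aside, here is roughly what the code churn in question looks like: the removed spelling and its modern replacements (the lambda form is what most migrations end up using).

        #include <algorithm>
        #include <functional>
        #include <vector>

        int main() {
            std::vector<int> v{1, 5, 10, 20};

            // Pre-C++11 binder, deprecated in C++11 and removed in C++17:
            //   std::count_if(v.begin(), v.end(), std::bind1st(std::less<int>(), 4));

            // C++11 replacements: std::bind, or (more commonly) a lambda.
            auto a = std::count_if(v.begin(), v.end(),
                                   std::bind(std::less<int>(), 4, std::placeholders::_1));
            auto b = std::count_if(v.begin(), v.end(), [](int n) { return 4 < n; });

            return (a == b) ? 0 : 1;   // both count the elements greater than 4
        }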

    While some sympathized with this viewpoint, the general consensus was that, to be able to evolve at the speed required to satisfy the needs of the majority of its users, C++ does need to be able to shed old “cruft” like this over time. Implementations often do a good job of maintaining conformance modes with older standard versions (and even “escape hatches” to allow old features that have been removed to be used together with new features that have since been added), thus allowing users to continue using removed features for quite a while in practice. (Putting bind1st and bind2nd specifically back into C++20 was polled, but didn’t have consensus.)

    The other focus of the session was the more general tension between the two pressures of stability and velocity that C++ faces as it evolves. It was argued that there is a sense in which the two are at odds with each other, and the committee needs to take a clearer stance on which is the more important goal. Two examples were brought up of backwards-compatibility constraints being a drag on the language: the keywords used for coroutines (co_await, co_yield, etc. – wouldn’t it have been nice to just be able to claim await and yield instead?), and the const-correctness issue with std::function, which still remains to be fixed. A poll on which of stability or velocity is more important for the future direction of C++ revealed a wide array of positions, with somewhat of a preference for velocity.

    Conclusion

    This was a productive meeting, whose highlights included the Modules TS making good progress towards its publication; C++20 continuing to take shape as the draft standard gained the consistent comparisons feature among many other smaller ones; and range views/adaptors being standardized for the Ranges TS.

    The next meeting of the Committee will be in Jacksonville, Florida, the week of March 12th, 2018. It, too, should be an exciting meeting, as design discussion of Concepts resumes (with the future of AFTs possibly being settled), and the Modules TS is hopefully finalized (if that doesn’t already happen between meetings). Stay tuned for my report!

    Other Trip Reports

    Others have written reports about this meeting as well. Some that I’ve come across include Herb Sutter’s and Bryce Lelbach’s. I encourage you to check them out!

    Featured Song: Elysium

    Recently, I’ve been flirting with progressive metal – or at least, symphonic / power metal with significant progressive influences. The more recent albums of Stratovarius (who I’ve featured several times before) – particularly the ones after guitarist Timo Tolkki’s 2008 departure from the band – fall into that category.

    Today’s selection is the title track of the 2011 album Elysium. At 18 minutes, I believe it’s the longest song I’ve featured to date – but then such is progressive rock/metal 🙂

    I listened to this album not long after its release, but it’s only after re-listening to it recently that I feel like I’ve come to truly appreciate it for the masterpiece that it is.

    The song has a bit of everything, from multiple intricate guitar and keyboard solos that typify progressive metal, through calm and contemplative sections that help build tension, to Stratovarius’ signature dramatic orchestral passages.

    In all honesty, I do find it a bit long – I think the first 11 minutes could have been condensed to ~5 without the song losing much value. Past the 11-minute mark though, the song really picks up stylistically, delivering a very emotional series of passages culminating in a climactic ending. So, if you give it a listen and start to get bored, I recommend skipping forward rather than giving up, lest you miss the best part!



    Lyrics can be found in the video description.

    Enjoy!

    Trip Report: C++ Standards Meeting in Toronto, July 2017

    Summary / TL;DR

    Project | What’s in it? | Status
    C++17 | See below | Draft International Standard published; on track for final publication by end of 2017
    Filesystems TS | Standard filesystem interface | Part of C++17
    Library Fundamentals TS v1 | optional, any, string_view and more | Part of C++17
    Library Fundamentals TS v2 | source code information capture and various utilities | Published!
    Concepts TS | Constrained templates | Merged into C++20 with some modifications
    Parallelism TS v1 | Parallel versions of STL algorithms | Part of C++17
    Parallelism TS v2 | Task blocks, library vector types and algorithms and more | Under active development
    Transactional Memory TS | Transaction support | Published! Uncertain whether this is headed towards C++20
    Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Parts of it headed for C++20
    Concurrency TS v2 | See below | Under active development
    Networking TS | Sockets library based on Boost.ASIO | Voted for publication!
    Ranges TS | Range-based algorithms and views | Voted for publication!
    Coroutines TS | Resumable functions, based on Microsoft’s await design | Voted for publication!
    Modules TS | A component system to supersede the textual header file inclusion model | Proposed Draft voted out for balloting by national standards bodies
    Numerics TS | Various numerical facilities | Under active development
    Graphics TS | 2D drawing API | Under active design review
    Reflection | Code introspection and (later) reification mechanisms | Introspection proposal passed core language and library design review; next stop is wording review. Targeting a Reflection TS.
    Contracts | Preconditions, postconditions, and assertions | Proposal passed core language and library design review; next stop is wording review.

    Some of the links in this blog post may not resolve until the committee’s post-meeting mailing is published. If you encounter such a link, please check back in a few days.

    Introduction

    A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Toronto, Canada (which, incidentally, is where I’m based). This was the second committee meeting in 2017; you can find my reports on previous meetings here (November 2016, Issaquah) and here (February 2017, Kona). These reports, particularly the Kona one, provide useful context for this post.

    With the C++17 Draft International Standard (DIS) being published (and its balloting by national standards bodies currently in progress), this meeting was focused on C++20, and the various Technical Specifications (TS) we have in flight.

    What’s the status of C++17?

    From a technical point of view, C++17 is effectively done.

    Procedurally, the DIS ballot is still in progress, and will close in August. Assuming it’s successful (which is widely expected), we will be in a position to vote to publish the final standard, whose content would be the same as the DIS with possible editorial changes, at the next meeting in November. (In the unlikely event that the DIS ballot is unsuccessful, we would instead publish a revised document labelled “FDIS” (Final Draft International Standard) at the November meeting, which would need to go through one final round of balloting prior to publication. In this case the final publication would likely happen in calendar year 2018, but I think the term “C++17” is sufficiently entrenched by now that it would remain the colloquial name for the standard nonetheless.)

    C++20

    With C++17 at the DIS stage, C++20 now has a working draft and is “open for business”; to use a development analogy, C++17 has “branched”, and the standard’s “trunk” is open for new development. Indeed, several changes have been voted into the C++20 working draft at this meeting.

    Technical Specifications

    This meeting was a particularly productive one for in-progress Technical Specifications. In addition to Concepts (which had already been published previously) being merged into C++20, three TSes – Coroutines, Ranges, and Networking – passed a publication vote this week, and a fourth, Modules, was sent out for its PDTS ballot (a ballot process that allows national standards bodies to vote and comment on the proposed TS, allowing the committee to incorporate their feedback prior to sending out a revised document for publication).

    Coroutines TS

    The Coroutines TS – which contains a stackless coroutine design, sometimes called co_await after one of the keywords it uses – had just been sent out for its PDTS ballot at the previous meeting. The results were in before this meeting began – the ballot had passed, with some comments. The committee made it a priority to get through all the comments at this meeting and draft any resulting revisions, so that the revised TS could be voted for final publication, which happened (successfully) during the closing plenary session.
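
    For readers who haven’t seen the co_await / co_yield style of coroutine, here is a minimal generator. I’ve written it against the C++20 coroutines that eventually grew out of this TS; the TS itself placed these facilities in an experimental namespace, so the exact names differed, but the shape of the code is the same.

        #include <coroutine>
        #include <exception>
        #include <iostream>

        // A bare-bones generator, just enough to demonstrate co_yield.
        struct Generator {
            struct promise_type {
                int current = 0;
                Generator get_return_object() {
                    return Generator{std::coroutine_handle<promise_type>::from_promise(*this)};
                }
                std::suspend_always initial_suspend() { return {}; }
                std::suspend_always final_suspend() noexcept { return {}; }
                std::suspend_always yield_value(int v) { current = v; return {}; }
                void return_void() {}
                void unhandled_exception() { std::terminate(); }
            };

            explicit Generator(std::coroutine_handle<promise_type> h) : handle(h) {}
            Generator(Generator&& other) noexcept : handle(other.handle) { other.handle = {}; }
            Generator(const Generator&) = delete;
            ~Generator() { if (handle) handle.destroy(); }

            bool next() { handle.resume(); return !handle.done(); }
            int value() const { return handle.promise().current; }

            std::coroutine_handle<promise_type> handle;
        };

        // The use of co_yield makes this function a coroutine: each call to next()
        // resumes it until the next co_yield (or until it finishes).
        Generator counter(int from, int to) {
            for (int i = from; i <= to; ++i)
                co_yield i;
        }

        int main() {
            auto g = counter(1, 3);
            while (g.next())
                std::cout << g.value() << '\n';   // prints 1, 2, 3
        }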

    Meanwhile, an independent proposal for stackful coroutines with a library-only interface is making its way through the Library groups. Attempts to unify the two varieties of coroutines into a single design seem to have been abandoned for now; the respective proposal authors maintain that the two kinds of coroutines are useful for different purposes, and could reasonably co-exist (no pun intended) in the language.

    Ranges TS

    The Ranges TS was sent out for its PDTS ballot two meetings ago, but due to the focus on C++17, the committee didn’t get through all of the resulting comments at the last meeting. That work was finished at this meeting, and this revised TS also successfully passed a publication vote.

    Networking TS

    Like the Ranges TS, the Networking TS was also sent out for its PDTS ballot two meetings ago, and resolving the ballot comments was completed at this meeting, leading to another successful publication vote.

    Modules TS

    Modules had come close to being sent out for its PDTS ballot at the previous meeting, but didn’t quite make it due to some procedural mis-communication (detailed in my previous report if you’re interested).

    Modules is kind of in an interesting state. There are two relatively mature implementations (in MSVC and Clang), whose development either preceded or was concurrent with the development of the specification. Given this state of affairs, I’ve seen the following dynamic play out several times over the past few meetings:

    • a prospective user, or someone working on a new implementation (such as the one in GCC), comes to the committee seeking clarification about what happens in a particular scenario (like this one)
    • the two existing implementers consult their respective implementations, and give different answers
    • the implementers trace the difference in outcome back to a difference in the conceptual model of Modules that they have in their mind
    • the difference in the conceptual model, once identified, is discussed and reconciled by the committee, typically in the Evolution Working Group (EWG)
    • the implementers work with the Core Working Group (CWG) to ensure the specification wording reflects the new shared understanding

    Of course, this is a desirable outcome – identifying and reconciling differences like this, and arriving at a specification that’s precise enough that someone can write a new implementation based purely on the spec, is precisely what we want out of a standards process. However, I can’t help but wonder if there isn’t a more efficient way to identify these differences – for example, by the two implementers actually studying each other’s implementations (I realize that’s complicated by the fact that one is proprietary…), or at least discussing their respective implementation strategies in depth.

    That said, Modules did make good progress at this meeting. EWG looked at several proposed changes to the spec (I summarize the technical discussion below), CWG worked diligently to polish the spec wording further, and in the end, we achieved consensus to – finally – send out Modules for its PDTS ballot!

    Parallelism TS v2

    The Parallelism TS v2 (working draft here) picked up a new feature, vector and wavefront policies. Other proposals targeting it, like vector types and algorithms, are continuing to work their way through the library groups.
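
    For context, this is what the execution-policy mechanism from Parallelism TS v1 looks like as it shipped in C++17; my understanding is that the additional policies proposed for the TS v2 are meant to be passed in the same way.

        #include <algorithm>
        #include <execution>
        #include <numeric>
        #include <vector>

        int main() {
            std::vector<int> v(1'000'000);
            std::iota(v.begin(), v.end(), 0);

            // C++17 parallel algorithms take an execution policy as their first argument.
            std::sort(std::execution::par, v.begin(), v.end(),
                      [](int a, int b) { return a > b; });

            return v.front() == 999'999 ? 0 : 1;   // sorted in descending order
        }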

    Concurrency TS v2

    SG 1, the Study Group that deals with concurrency and parallelism, reviewed several proposals targeting the Concurrency TS v2 (which does not yet have a working draft) at this meeting, including a variant of joining_thread with cooperative cancellation, lock-free programming techniques for reclamation, and stackful coroutines (which I’ve mentioned above in connection with the Coroutines TS).

    Executors are still likely slated for a separate TS. The unified proposal presented at the last meeting has been simplified as requested, to narrow its scope to something manageable for initial standardization.

    Merging Technical Specifications Into C++20

    We have some technical specifications that are already published and haven’t been merged into C++17, and are thus candidates for merging into C++20. I already mentioned that Concepts was merged with some modifications (details below).

    Parts of the Concurrency TS are slated to be merged into C++20: latches, with barriers to hopefully follow after some design issues are ironed out, and an improved version of atomic shared pointers. future.then() is going to require some more iteration before final standardization.

    The Transactional Memory TS currently has only one implementation; the Study Group that worked on it hopes for some more implementation and usage experience prior to standardization.

    The Library Fundamentals TS v2 seems to be in good shape to be merged into C++20, though I’m not sure of the exact status / concrete plans.

    In addition to the TSes that are already published, many people are eager to see the TSes that were just published (Coroutines, Ranges, and Networking), as well as Modules, make it into C++20 too. I think it’s too early to try and predict whether they will make it. From a procedural point of view, there is enough time for all of these to complete their publication process and be merged in the C++20 timeframe. However, it will really depend on how much implementation and use experience these features get between now and the C++20 feature-complete date (sometime in 2019), and what the feedback from that experience is.

    Future Technical Specifications

    Finally, I’ll mention a few planned future Technical Specifications that don’t have an official project or working draft yet:

    Reflection

    A proposal for static introspection (sometimes called “reflexpr” after the keyword it uses; see its summary, design, and specification for details) continues to head towards a Reflection TS. It has been approved by SG 7 (the Reflection and Metaprogramming Study Group) and the Evolution Working Group at previous meetings. This week, it was successfully reviewed by the Library Evolution Working Group, allowing it to move on to the Core and Library groups going forward.

    Meanwhile, SG 7 is continuing to look at more forward-looking reflection and metaprogramming topics, such as a longer-term vision for metaprogramming, and a proposal for metaclasses (I talk more about these below).

    Graphics

    The Graphics TS, which proposes to standardize a set of 2D graphics primitives inspired by cairo, continues to be under active review by the Library Evolution Working Group; the latest draft spec can be found here. The proposal is close to being forwarded to the Library Working Group, but isn’t quite there yet.

    While I wasn’t present for its discussion in LEWG, I’m told that one of the changes that have been requested is to give the library a stateless interface. This matches the feedback I’ve heard from Mozilla engineers knowledgeable about graphics (and which I’ve tried to relay, albeit unsuccessfully, at a previous meeting).

    Evolution Working Group

    I’ll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week.

    All proposals discussed in EWG at this meeting were targeting C++20 (except for Modules, where we discussed changes targeting the Modules TS). I’ve categorized them into the usual “accepted”, “further work encouraged”, and “rejected” categories:

    Accepted proposals:

    • Default member initializers for bitfields. Previously, bit-fields couldn’t have default member initializers; now they can, with the “natural” syntax, int x : 5 = 42; (brace initialization is also allowed). A disambiguation rule was added to deal with parsing ambiguities (since e.g. an = could conceivably be part of the bitfield width expression).
    • Tweaking the rules for constructor template argument deduction. At the last meeting, EWG decided that for wrapper types like tuple, copying should be preferable to wrapping; that is, something like tuple t{tuple{1, 2}}; should deduce the type of t as tuple<int, int> rather than tuple<tuple<int, int>>. However, it had been unclear whether this guidance applied to types like vector that had std::initializer_list constructors. EWG clarified that copying should indeed be preferred to wrapping for those types, too. (The paper also proposed several other tweaks to the rules, which did not gain consensus to be approved just yet; the author will come back with a revised paper for those.)
    • Resolving a language defect related to defaulted copy constructors. This was actually a proposal that I co-authored, and it was prompted by me running into this language defect in Mozilla code (it prevented the storing of an nsAutoPtr inside a std::tuple). It’s also, to date, my first proposal to be approved by EWG!
    • A simpler solution to the problem that allowing the template keyword in unqualified-ids aimed to solve. While reviewing that proposal, the Core Working Group found that the relevant lookup rules could be tweaked so as to avoid having to use the template keyword at all. The proposed rules technically change the meaning of certain existing code patterns, but only ones that are very obscure and unlikely to occur in the wild. EWG was, naturally, delighted with this simplification.
    • An attribute to mark unreachable code. This proposal aims to standardize existing practice where a point in the code that the author expects cannot be reached is marked with __builtin_unreachable() or __assume(false). The initial proposal was to make the standardized version an [[unreachable]] attribute, but based on EWG’s feedback, this was revised to be a std::unreachable() library function instead. The semantics is that if such a call is reached during execution, the behaviour is undefined. (EWG discussed at length whether this facility should be tied to the Contracts proposal. The outcome was that it should not be; since “undefined behaviour” encompasses everything, we can later change the specified behaviour to be something like “call the contract violation handler” without that being a breaking change.) The proposal was sent to LEWG, which will design the library interface more precisely, and consider the possibility of passing in a compile-time string argument for diagnostic purposes.
    • Down with typename! This paper argued that in some contexts where typename is currently required to disambiguate a name nested in a dependent scope as being a type, the compiler can actually disambiguate based on the context, and proposed removing the requirement of writing typename in such contexts. The proposal passed with flying colours. (It was, however, pointed out that the proposal prevents certain hypothetical future extensions. For example, one of the contexts in question is y in using x = y;: that can currently only be a type. However, suppose we later want to add expression aliases to C++; this proposal rules out re-using the using x = y; syntax for them.)
    • Removing throw(). Dynamic exception specifications have been deprecated since C++11 (superseded by noexcept), and removed altogether in C++17, with the exception of throw() as an alias for noexcept(true). This paper proposed removing that last vestige, too, and EWG approved it. (The paper also proposed removing some other things that were deprecated in C++17, which were rejected; I mention those in the list of rejected proposals below.)
    • Range-based for statement with initializer. This introduces a new form of range-for: for (T var = init; U elem : <range-expression>); here, var is a variable that lives for the duration of the loop, and can be referenced by <range-expression> (whereas elem is the usual loop variable that takes on a new value on every iteration). This is useful for both scope hygiene (it avoids polluting the enclosing scope with var) and resolving a category of lifetime issues with range-for. EWG expressed concerns about parseability (parsers will now need to perform more lookahead to determine which form of loop they are parsing) and readability (the “semicolon colon” punctuation in a loop header of this form can look deceptively like the “semicolon semicolon” punctuation in a traditional for loop), but passed the proposal anyways. (A small example follows this list.)
    • Some changes to the Modules TS (other proposed changes were deferred to Modules v2) – I talk about these below
    • Changes to Concepts – see below
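
    Here is the small example promised in the range-for item above, using the syntax as it later landed in C++20; the names make_words and words are just illustrative.

        #include <cstddef>
        #include <iostream>
        #include <string>
        #include <vector>

        std::vector<std::string> make_words() { return {"range", "for", "init"}; }

        int main() {
            // 'words' is scoped to the loop and outlives every iteration, which avoids
            // both polluting the enclosing scope and a class of dangling-reference bugs.
            for (auto words = make_words(); auto& w : words)
                std::cout << w << '\n';

            // The init-statement is also handy for loop-scoped counters:
            for (std::size_t i = 0; const auto& w : make_words())
                std::cout << i++ << ": " << w << '\n';
        }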

    Proposals for which further work is encouraged:

    • Non-throwing container operations. This paper acknowledges the reality that many C++ projects are unable to or choose not to use exceptions, and proposes that standard library container types which currently rely on exceptions to report memory allocation failure, provide an alternative API that doesn’t use exceptions. Several concrete alternatives are mentioned. EWG sent this proposal to the Library Evolution Working Group to design a concrete alternative API, with the understanding that the resulting proposal will come back to EWG for review as well.
    • Efficient sized deletion for variable-sized classes. This proposal builds on the sized deletion feature added to C++14 to enable this optimization for “variable-sized classes” (that is, classes that take control of their own allocation and allocate a variable amount of extra space before or after the object itself to store extra information). EWG found the use cases motivating, but encouraged the development of a syntax that is less prone to accidental misuse, as well as additional consultation with implementers to ensure that ABI is not broken.
    • Return type deduction and SFINAE. This paper proposes a special syntax for single-expression lambdas, which would also come with the semantic change that the return expression be subject to SFINAE (this is a desirable property that often leads authors to repeat the return expression, wrapped in decltype, in an explicit return type declaration (and then to devise macros to avoid the repetition)). EWG liked the goal but had parsing-related concerns about the syntax; the author was encouraged to continue exploring the syntax space to find something that’s both parseable and readable. Continued exploration of terser lambdas, whether as part of the same proposal or a separate proposal, was also encouraged. It’s worth noting that there was another proposal in the mailing (which wasn’t discussed since the author wasn’t present) that had significant overlap with this proposal; EWG observed that it might make sense to collaborate on a revised proposal in this space.
    • Default-constructible stateless lambdas. Lambdas are currently not default-constructible, but for stateless lambdas (that is, lambdas that do not capture any variables) there is no justification for this restriction, so this paper proposed removing it. EWG agreed, but suggested that they should also be made assignable. (An example of a situation where one currently runs into these restrictions is transform iterators: such iterators often aggregate the transformation function, and algorithms often default-construct or assign iterators.)
    • Product type access. Recall that structured bindings work with both tuple-like types, and structures with public members. The former expose get<>() functions to access the tuple elements by index; for the latter, structured bindings achieve such index-based access by “language magic”. This paper proposed exposing such language magic via a new syntax or library interface, so that things other than structured bindings (for example, code that wants to iterate over the public members of a structure) can take advantage of it. EWG agreed this was desirable, expressed a preference for a library interface, and sent the proposal onward to LEWG to design said interface. (Compilers will likely end up exposing intrinsics to allow library implementers to implement such an interface. I personally don’t see the advantage of doing things this way over just introducing standard language syntax, but I’m happy to get the functionality one way or the other.)
    • Changing the attack vector of constexpr_vector. At the previous meeting, implementers reported that supporting full-blown dynamic memory allocation in a constexpr context was not feasible to implement efficiently, and suggested a more limited facility, such as a special constexpr_vector container. This proposal argues that such a container would be too limiting, and suggests supporting a constexpr allocator (which can then be used with regular containers) instead. Discussion at this meeting suggested that (a) on the one hand, a constexpr allocator is no less general (and thus no easier to support) than new itself; but (b) on the other hand, more recent implementer experiments suggest that supporting new itself, with some limitations, might be feasible after all. Continued exploration of this topic was warmly encouraged. (A sketch of what this direction enables follows this list.)
    • Implicit evaluation of auto variables. This is a resurrection of an earlier proposal to allow a class to opt into having a conversion function of some sort called when an instance of it is assigned to an auto-typed variable. The canonical use case is an intermediate type in an expression template system, for which it’s generally desirable to trigger evaluation when initializing an auto-typed variable. EWG wasn’t fond of widening the gap between the deduction rules for auto and the deduction rules for template parameters (which is what auto is modelled on), and suggested approaching the problem from a different angle; one idea that was thrown around was the possibility of extending the notion of deduction guides (currently used for class template argument deduction) to apply to situations like this.
    • Allowing class template specializations in unrelated namespaces. The motivation here is to avoid having to reopen the namespace in which a class template was defined, to provide a specialization of that template. EWG liked the idea, but suggested that it might be prudent to still restrict such specializations to occur within associated namespaces of the specialization (e.g. the namespaces of the specialization’s template arguments) – kind of like how Rust doesn’t allow you to implement a trait unless you’re either the author of the trait, or the author of the type you’re implementing the trait for.
    • Precise semantics for contract assertions. This paper explores the design space of contract assertions, enumerating the various (sometimes contradictory) objectives we may want to achieve by using them, and proposes a set of primitive operations that facilitate implementing assertions in ways that meet useful subsets of these objectives. EWG expressed an interest in standardizing some of the proposed primitives, specifically a mechanism to deliberately introduce unspecified (nondeterministic) behaviour into a program, and a “prevent continuation handler” that an assertion check can invoke if an assertion fails and execution should not continue as a result. (A third primitive, for deliberately invoking undefined behaviour, is already handled by the independently proposed std::unreachable() function that EWG approved at this meeting.)
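
    Regarding the constexpr_vector item above: for what it’s worth, the “support new itself, with some limitations” direction is essentially what later landed in C++20, where allocations are permitted during constant evaluation as long as they are freed before the evaluation ends. A minimal sketch of what that enables:

        // Allowed since C++20: 'new' during constant evaluation, provided the
        // allocation does not outlive the evaluation (a "transient" allocation).
        constexpr int sum_first_n(int n) {
            int* data = new int[n];
            for (int i = 0; i < n; ++i)
                data[i] = i + 1;
            int sum = 0;
            for (int i = 0; i < n; ++i)
                sum += data[i];
            delete[] data;
            return sum;
        }

        static_assert(sum_first_n(4) == 10);   // evaluated entirely at compile time

        int main() {}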

    Rejected proposals:

    • Attributes for structured bindings. This proposal would have allowed applying attributes to individual bindings, such as auto [a, b [[maybe_unused]], c] = f();. EWG found this change insufficiently motivated; some people also thought it was inappropriate to give individual bindings attributes when we can’t even give them types.
    • Making pointers-to-members callable. This would have allowed p(s) to be valid and equivalent to s.*p when p is a pointer to a member of a type S, and s is an object of that type. It had been proposed before, and was rejected for largely the same reason: some people argued that it was a half-baked unified call syntax proposal. (I personally thought this was a very sensible proposal – not at all like unified call syntax, which was controversial for changing name lookup rules, which this proposal didn’t do.) The illustration after this list shows the spellings available today.
    • Explicit structs. The proposal here was to allow marking a struct as explicit, which meant that all its fields had to be initialized, either by a default member initializer, by a constructor initializer, or explicitly by the caller (not relying on the fallback to default initialization) during aggregate initialization. EWG didn’t find this well enough motivated, observing that either your structure has an invariant, in which case it’s likely to be more involved than “not the default values”, or it doesn’t, in which case the default values should be fine. (Uninitialized values, which can arise for primitive types, are another matter, and can be addressed by different means, such as via the [[uninitialized]] attribute proposal.)
    • Changing the way feature-test macros are standardized. Feature test macros (like __cpp_constexpr, intended to be defined by an implementation if it supports constexpr) are currently standardized in the form of a standing document published by the committee, which is not quite a standard (for example, it does not undergo balloting by national bodies). As they have become rather popular, Microsoft proposed that they be standardized more formally; they claimed that it’s something they’d like to support, but can’t unless it’s a formal standard, because they’re trying to distance themselves from their previous habit of supporting non-standard extensions. (I didn’t quite follow the logic behind this, but I guess large companies sometimes operate in strange ways.) However, there was no consensus to change how feature test macros are standardized; some on the committee dislike them, in part because of their potential for fragmentation, and because they don’t completely remove the need for compiler version checks and such (due to bugs etc.)
    • Removing other language features deprecated in C++17. In addition to throw() (whose removal passed, as mentioned above), two other removals were proposed.
      • Out-of-line declarations of static constexpr data members. By making static constexpr data members implicitly inline, C++17 made it so that the in-line declaration which provides the value is also a definition, making an out-of-line declaration superfluous. Accordingly, the ability to write such an out-of-line declaration at all was deprecated, and was now proposed for removal in C++20.
      • Implicit generation of a copy constructor or copy assignment operator in a class with a user-defined copy assignment operator, copy constructor, or destructor. This has long been known to be a potential footgun (since generally, if you need to user-define one of these functions, you probably need to user-define all three), and C++11 already broke with this pattern by having a user-defined move operation disable implicit generation of the copy operations. The committee has long been eyeing the possibility of extending this treatment to user-defined copy operations, and the paper suggested that perhaps C++20 would be the time to do so. However, the reality is that there still is a lot of code out there that relies on this implicit generation, and much of it isn’t actually buggy (though much of it is).

      Neither removal gained consensus. In each case, undeprecating them was also proposed, but that was rejected too, suggesting that the hope that these features can be removed in a future standard remains alive.

    • Capturing *this with initializer. C++17 added the ability to have a lambda capture the entire *this object by value. However, it’s still not possible to capture it by move (which may be reasonable if e.g. constructing the lambda is the last thing you do with the current object). To rectify this, this paper proposed allowing the capture of *this with the init-capture syntax. Unfortunately, this effectively allows rebinding this to refer to a completely unrelated object inside the lambda, which EWG believed would be far too confusing, and there didn’t appear to be a way to restrict the feature to only work for the intended use case of moving the current object.
    • bit_sizeof and bit_offsetof. These are similar to sizeof and offsetof, but count the number of bits, and work on bitfield members. EWG preferred to hold off on these until they are implementable with a library interface on top of reflection.
    • Parametric functions. This oddly-named proposal is really a fresh approach to named arguments. In contrast with the named arguments proposal that I co-authored a few years back, which proposed to allow using named arguments with existing functions and existing parameter names (and garnered significant opposition over concerns that it would make parameter names part of a function’s interface when the function hadn’t been written with that in mind), this paper proposed introducing a new kind of function, which can have named arguments, declared with a new syntax, and for which the parameter names are effectively part of the function’s type. While this approach does address the concerns with my proposal, EWG felt the new syntax and new language machinery it would require were disproportionate to the value of the feature. In spite of the idea’s repeated rejection, no one was under any illusion that this would be the last named arguments proposal to come in front of EWG.
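
    As promised in the pointers-to-members item above, here is the status quo: the built-in .* syntax, and std::invoke (C++17), which already provides a uniform library-level spelling for calling through pointers to members.

        #include <functional>
        #include <iostream>

        struct S {
            int x = 42;
            int get() const { return x; }
        };

        int main() {
            S s;
            auto data_ptr = &S::x;
            auto func_ptr = &S::get;

            // Built-in syntax: .* (and ->* when going through a pointer to the object).
            std::cout << s.*data_ptr << ' ' << (s.*func_ptr)() << '\n';

            // Library syntax: std::invoke handles both kinds uniformly.
            std::cout << std::invoke(data_ptr, s) << ' ' << std::invoke(func_ptr, s) << '\n';
        }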

    There were a handful of proposals that were not discussed due to their authors not being present. They included the other terse lambdas proposal and its offshoot idea of making forwarding less verbose, and a proposal for an [[uninitialized]] attribute.

    Concepts

    A major focus of this meeting was to achieve consensus to get Concepts into C++20. To this end, EWG spent half a day plus an evening session discussing several papers on the topic.

    Two of the proposals – a unified concept definition syntax and semantic constraint matching – were write-ups of design directions that had already been discussed and approved in Kona; their discussion at this meeting was more of a rubber-stamp. (The second paper contained a provision to require that re-declarations of a constrained template use the same syntax (e.g. you can’t have one using requires-clauses and the other using a shorthand form); this provision had some objections, just as it did in Kona, but was passed anyways.)

    EWG next looked at a small proposal to address certain syntax ambiguities; the affected scenarios involve constrained function templates with a requires-clause, where it can be ambiguous where the requires-clause after the template parameter list ends, and where the function declaration itself begins. The proposed solution was to restrict the grammar for the expression allowed in a top-level requires-clause so as to remove the ambiguity; expressions that don’t fit in the restricted grammar can still be used if they are parenthesized (as in requires (expr)). This allows common forms of constraints (like trait<T>::value or trait_v<T>) to be used without parentheses, while allowing any expression with parentheses. This was also approved.

    That brings us to the controversial part of the discussion: abbreviated function templates (henceforth, “AFTs”), also called “terse templates”. To recap, AFTs are function templates declared without a template parameter list, where the parameter types use concept names (or auto), which the compiler turns into invented template parameters. A canonical example is void sort(Sortable& s);, which is a shorthand for template <Sortable S> void sort(S& s); (which is itself a shorthand for template <typename S> requires Sortable<S> void sort(S& s);).
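
    To make these shorthands concrete, here is a compilable sketch using the syntax C++20 eventually adopted, with a toy Sortable concept standing in for the real one. Note that the terse form C++20 ended up with carries an extra auto marker; the TS-era AFTs discussed here looked the same but without the auto, which is precisely what made them indistinguishable at a glance from non-template functions.

        #include <vector>

        // A toy concept, purely for illustration (the real Sortable is more involved).
        template <typename T>
        concept Sortable = requires(T& t) { t.begin(); t.end(); };

        // Three equivalent ways to declare the same constrained function template,
        // in the syntax C++20 eventually adopted:
        template <typename S> requires Sortable<S> void sort1(S&) {}
        template <Sortable S>                      void sort2(S&) {}
        void sort3(Sortable auto&) {}   // terse form; the Concepts TS AFTs were like
                                        // this, but without the 'auto' marker

        int main() {
            std::vector<int> v{3, 1, 2};
            sort1(v); sort2(v); sort3(v);
        }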

    AFTs have been controversial since their introduction, due to their ability to make template code look like non-template code. Many have argued that this is a bad idea, because template code is fundamentally different from non-template code (e.g. consider different name lookup rules, the need for syntactic disambiguators like typename, and the ability to define a function out of line). Others have argued that making generic programming (programming with templates) look more like regular programming is a good thing.

    (A related feature that shared some of the controversy around AFTs was concept introductions, which were yet another shorthand, making Mergeable{In1, In2, Out} void merge(In1, In1, In2, Out); short for template <typename In1, typename In2, typename Out> requires Mergeable<In1, In2, Out> void merge(In1, In1, In2, Out);. Concept introductions don’t make a template look like a non-template the way AFTs do, but were still controversial as many felt they were an odd syntax and offered yet another way of defining constrained function templates with relatively little gain in brevity.)

    The controversy around AFTs and concept introductions was one of the reasons the proposed merger of the Concepts TS into C++17 failed to gain consensus. Eager not to repeat this turn of events for C++20, AFTs and concept introductions were proposed for removal from Concepts, at least for the time being, with the hope that this would allow the merger of Concepts into C++20 to gain consensus. After a long and at times heated discussion, EWG approved this removal, and approved the merger of Concepts, as modified by this removal (and by the other proposals mentioned above), into C++20. As mentioned above, this merger was subsequently passed by the full committee at the end of the week, resulting in Concepts now being in the C++20 working draft!

    It’s important to note that the removal of AFTs was not a rejection of having a terse syntax for defining constrained function templates in general. There is general agreement that such a terse syntax is desirable; people just want such a syntax to come with some kind of syntactic marker that makes it clear that a function template (as opposed to a non-template function) is being declared. I fully expect that proposals for an alternative terse syntax that comes with such a syntactic marker will be forthcoming (in fact, I’ve already been asked for feedback on one such draft proposal), and may even be approved in the C++20 timeframe; after all, we’re still relatively early in the C++20 cycle.

    There was one snag about the removal of AFTs that happened at this meeting. In the Concepts wording, AFTs are specified using a piece of underlying language machinery called constrained type specifiers. Besides AFTs, this machinery powers some other features of Concepts, such as the ability to write ConceptName var = expr;, or even vector<auto> var = expr;. While these other features weren’t nearly as controversial as AFTs were, from a specification point of view, removing AFTs while keeping these in would have required a significant refactor of the wording that would have been difficult to accomplish by the end of the week. Since the committee wanted to “strike while the iron is hot” (meaning, get Concepts into C++20 while there is consensus for doing so), it was decided that for the time being, constrained type specifiers would be removed altogether. As a result, in the current C++20 working draft, things like vector<auto> var = expr; are ill-formed. However, it’s widely expected that this feature will make it back into C++20 Concepts at future meetings.

    Lastly, I’ll note that there were two proposals (one I co-authored, and a second one that didn’t make the pre-meeting mailing) concerning the semantics of constrained type specifiers. The removal of constrained type specifiers made these proposals moot, at least for the time being, so they were not discussed at this meeting. However, as people propose re-adding some of the uses of constrained type specifiers, and/or terse templates in some form, these papers will become relevant again, and I expect they will be discussed at that time.

    Modules

    Another major goal of the meeting was to send out the Modules TS for its PDTS ballot. I gave an overview of the current state of Modules above. Here, I’ll mention the Modules-related proposals that came before EWG this week:

    • Distinguishing the declaration of a module interface unit from the declaration of a module implementation unit. The current syntax is module M; for both. In Kona, a desire was expressed for interface units to have a separate syntax, and accordingly, one was proposed: export module M;. (The re-use of the export keyword here is due to the committee’s reluctance to introduce new keywords, even context-sensitive ones. module interface M; would probably have worked with interface being a keyword in this context only.) This was approved for the Modules TS. (A small example of the two kinds of module unit follows this list.)
    • Module partitions (first part of the paper only). These are module units that form part of a module interface, rather than being a complete module interface; the proposed syntax in this paper is module M [[partition]];. This proposal failed to gain consensus, not over syntax concerns, but over semantic concerns: unlike the previous module partitions proposal (which was not presented at this meeting, ostensibly for lack of time), this proposal did not provide for a way for partitions to declare dependencies on each other; rather, each partition was allowed to depend on entities declared in all other partitions, but only on their forward-declarations, which many felt was too limiting. (The corresponding implementation model was to do a quick “heuristic parse” of all partitions to gather such forward-declarations, and then do full processing of the partitions in parallel; this itself resulted in some raised eyebrows, as past experience doing “heuristic parsing” of C++ hasn’t been very promising.) Due to the controversy surrounding this topic, and not wanting to hold up the Modules TS, EWG decided to defer module partitions to Modules v2.
    • Exporting using-declarations. This wasn’t so much a proposal, as a request for clarification of the semantics. The affected scenario was discussed, and the requested clarification given; no changes to the Modules TS were deemed necessary.
    • Name clashes between private (non-exported) entities declared in different modules. Such a name clash is ill-formed (an ODR violation) according to the current spec; several people found that surprising, since one of the supposed advantages of Modules is to shield non-exported entities like private helpers from the outside world. This matter was discussed briefly, but a resolution was postponed to after the PDTS ballot (note: that’s not the same as being postponed to Modules v2; changes can be made to the Modules TS between the PDTS ballot and final publication).
    • A paper describing some requirements that a Modules proposal would need to have to be useful in evolving a particular large codebase (Bloomberg’s). Discussion revealed that the current spec meets some but not all of these requirements; the gaps mainly concern the ability to take a legacy (non-modular) code component, and non-intrusively (“additively”) provide a modular “view” of that component. No changes were proposed at this time, but some changes to fill these gaps are likely to appear as comments on the PDTS ballot.
    • Identifying module source code. Currently, the module M; or export module M; declaration that introduces a module unit is not required to be the first declaration in the file. Preceding declarations are interpreted as being part of the global module (and this is often used to e.g. include legacy headers). The author of this proposal would nonetheless like something that’s required to be the first declaration in the file, that announces “this file is a module unit”, and proposed module ; as being such a marker. EWG was favourable to the idea, but postponed discussion of a concrete syntax until after the PDTS.
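
    Here is the small example promised in the first item above, showing the interface-unit / implementation-unit split with the export module syntax (written as it later appeared in C++20, which on this point is essentially the TS design; file names and extensions are just conventions, and build-system details vary by compiler).

        // math.cppm -- module interface unit (note the 'export module' declaration)
        export module math;
        export int add(int a, int b);   // exported: visible to importers
        int helper(int n);              // not exported: internal to the module

        // math.cpp -- module implementation unit for the same module
        module math;
        int helper(int n) { return n; }
        int add(int a, int b) { return helper(a) + helper(b); }

        // main.cpp -- a consumer of the module
        import math;
        int main() { return add(2, 2) == 4 ? 0 : 1; }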

    With these discussions having taken place, the committee was successful in getting Modules sent out for its PDTS ballot. This is very exciting – it’s a major milestone for Modules! At the same time, I think it’s clear from the nature of some of the proposals being submitted on the topic (including the feedback from implementation and deployment experience at Google, some of which is yet to be fully discussed by EWG) that this is a feature where there’s still a fair amount of room for implementation convergence and user feedback to gain confidence that the feature as specified will be useful and achieve its intended objectives for a broad spectrum of C++ users. The PDTS ballot formally begins the process of collecting that feedback, which is great! I am very curious about the kinds of comments it will garner.

    If you’re interested in Modules, I encourage you to give the prototype implementations in Clang and MSVC a try, play around with them, and share your thoughts and experiences. (Formal PDTS comments can be submitted via your national standards body, but you can also provide informal feedback on the committee’s public mailing lists.)

    Other Working Groups

    The Library Working Group had a busy week, completing its wording review of the Ranges TS, Networking TS, and library components of the Coroutines TS, and allowing all three of these to be published at the end of the week. They are also in the process of reviewing papers targeting C++20 (including span, which provides an often-requested “array view” facility), Parallelism TS v2, and Library Fundamentals TS v3.

    The Library Evolution Working Group was, as usual, working through its large backlog of proposed new library features. As much as I’d love to follow this group in as much detail as I follow EWG, I can’t be in two places at once, so I can’t give a complete listing of the proposals discussed and their outcomes, but I’ll mention a few highlights:

    SG 7 (Reflection and Metaprogramming)

    SG 7 met for an evening session and discussed three topics.

    The first was an extension to the existing static reflection proposal (which is headed towards publication as the initial Reflection TS) to allow reflection of functions. Most of the discussion concerned a particular detail: whether you should be allowed to reflect over the names of a function’s parameters. It was decided that you should, but that in the case of a function with multiple declarations, the implementation is free to choose any one of them as the source of the reflected parameter names.

    The second topic was what we want metaprogramming to look like in the longer term. There was a paper exploring the design space that identified three major paradigms: type-based metaprogramming (examples: Boost.MPL, the current static reflection proposal), heterogeneous value-based (example: Boost.Hana), and homogeneous value-based (this would be based on constexpr metaprogramming, and would require some language extensions). Another paper then argued that the third one, homogeneous value-based metaprogramming, is the best choice, both from a compile-time performance perspective (the other two involve a lot of template instantiations, which are compile-time expensive), and because it makes metaprogramming look more like regular programming, making it more accessible. SG 7 agreed that this is the long-term direction we should aim for. Note that this implies that, while the Reflection TS will likely be published in its current form (with its type-based interface), prior to merger into the C++ IS it would likely be revised to have a homogeneous value-based interface.
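
    To illustrate the contrast between the first and third paradigms with a toy example (one that has nothing to do with reflection itself), here is the same compile-time computation written with type-based recursion and as an ordinary constexpr function:

        #include <type_traits>

        // Type-based metaprogramming (Boost.MPL style): recursion over types,
        // each step instantiating a new class template.
        template <unsigned N>
        struct Factorial : std::integral_constant<unsigned, N * Factorial<N - 1>::value> {};
        template <>
        struct Factorial<0> : std::integral_constant<unsigned, 1> {};

        // Homogeneous value-based metaprogramming: an ordinary constexpr function,
        // written (and read) like normal runtime code.
        constexpr unsigned factorial(unsigned n) {
            unsigned result = 1;
            for (unsigned i = 2; i <= n; ++i)
                result *= i;
            return result;
        }

        static_assert(Factorial<5>::value == 120);
        static_assert(factorial(5) == 120);

        int main() {}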

    The third topic was a proposal for a more advanced reflection/metaprogramming feature, metaclasses. Metaclasses combine reflection facilities with proposed code injection facilities to allow class definitions to undergo arbitrary user-defined compile-time transformations. A metaclass can be thought of as a “kind” or category of class; a class definition can be annotated (exact syntax TBD) to declare the class as belonging to that metaclass, and such a class definition will undergo the transformations specified in the metaclass definition. Examples of metaclasses might be “interfaces”, where the transformation includes making every method pure virtual, and “value types”, where the transformation includes generating memberwise comparison operators; widely used metaclasses could eventually become part of the standard library. Obviously, this is a very powerful feature, and the proposal is at a very early stage; many aspects, including the proposed code injection primitives (which are likely to be pursued as a separate proposal), need further development. Early feedback was generally positive, with some concerns raised about the feature allowing codebases to grow their own “dialects” of C++.

    The Velocity of C++ Evolution

    C++ standardization has accelerated since the release of C++11, with the adoption of a three-year standardization cycle, the use of Technical Specifications to gather early feedback on major new features, and an increase in overall participation and the breadth of domain areas and user communities represented in the committee.

    All the same, sometimes it feels like the C++ language is still slow to evolve, and a big part of that is the significant constraint of needing to remain backwards-compatible, as much as possible, with older versions of the language. (Minor breakages have occurred, of course, like C++11 user-defined literals changing the meaning of "literal"MACRO. But by and large, the committee has held backwards-compatibility as one of its cornerstone principles.)
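
    To make the "literal"MACRO example concrete (the macro name SUFFIX below is just illustrative): in C++03 a string literal could butt up directly against a macro that expands to another string literal, but C++11’s user-defined literals changed the lexing so that the adjacent identifier is taken as a literal suffix; inserting a space restores the old meaning.

        #include <iostream>
        #define SUFFIX " world"

        int main() {
            // C++03 accepted:  "hello"SUFFIX
            // In C++11 that spelling is lexed as a user-defined literal with suffix
            // SUFFIX; the portable fix is simply to add a space:
            std::cout << "hello" SUFFIX << '\n';   // prints: hello world
        }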

    A paper, discussed at an evening session this week, explores the question of whether, in today’s age of modern tools, the committee still needs to observe this constraint as strictly as it has in the past. The paper observes that it’s already the case that upgrading to a newer compiler version typically entails making some changes / fixes to a codebase. Moreover, in cases where a language change does break or change the meaning of code, compilers have gotten pretty good at warning users about it so they can fix their code accordingly (e.g. consider clang’s -Wc++11-compat warnings). The paper argues that, perhaps, the tooling landscape has matured to the point where we should feel free to make larger breaking changes, as long as they’re of the sort that compilers can detect and warn about statically, and rely on (or even require) compilers to warn users about affected code, allowing them to make safe upgrades (tooling could potentially help with the upgrades, too, in the form of automated refactorings). This would involve more work for compiler implementers, and more work for maintainers of code when upgrading compilers, but the reward of allowing the language to shed some of its cumbersome legacy features may be worth it.

    The committee found this idea intriguing. No change in policy was made at this time, but further exploration of the topic was very much encouraged.

    If the committee does end up going down this path, one particularly interesting implication would be about the future of the standard library. The Concepts-enabled algorithms in the Ranges TS are not fully backwards-compatible with the algorithms in the current standard library. As a result, when the topic of how to merge the Ranges TS into the C++ standard came up, the best idea people had was to start an “STLv2”, a new version of the standard library that makes a clean break from the current version, while being standardized alongside it. However, in a world where we are not bound to strict backwards-compatibility, that may not be necessary – we may just be able to merge the Ranges TS into the current standard library, and live with the resulting (relatively small) amount of breakage to existing code.

    Conclusion

    With C++17 effectively done, the committee had a productive meeting working on C++20 and advancing Technical Specifications like Modules and Coroutines.

    The merger of Concepts into C++20 was a definite highlight of this meeting – this feature has been long in the making, having almost made C++11, and its final arrival is worth celebrating. Sending out Modules for its PDTS ballot was a close second, as it allows us to start collecting formal feedback on this very important C++ feature. And there are many other goodies in the pipeline: Ranges, Networking, Coroutines, contracts, reflection, graphics, and many more.

    The next meeting of the Committee will be in Albuquerque, New Mexico, the week of November 6th, 2017. Stay tuned for my report!

    Other Trip Reports

    Others have written reports about this meeting as well. Some that I’ve come across include Herb Sutter’s and Guy Davidson’s. Michael Wong also has a report written just before the meeting, that covers concurrency-related topics in more depth than my reports. I encourage you to check them out!

    Featured Song: Misery Is a Butterfly

    It’s been a while since I wrote a Featured Song post. Today, I’m going to share a song from a band I’ve discovered in recent months, Blonde Redhead.

    They’re a small band with just three members (none of whom, incidentally, is blonde or a redhead 🙂 ), and their music has a unique sound and a contemplative atmosphere that I’ve taken to.

    I particularly enjoy lead singer Kazu Makino‘s thin, clear high voice which gives the band’s sound an almost ethereal twist. She also has really good stage presence (check out some live recordings of the band).

    The song I’m featuring today – my favourite of the Blonde Redhead songs I’ve heard so far – is the title track of their 2004 album Misery Is a Butterfly:



    Featured lyrics:

    Remember when we found misery
    We watched her, watched her spread her wings
    And slowly fly around our room
    And she asked for your gentle mind

    Full lyrics and discussion can be found here.

    I’d also like to share a beautiful instrumental (piano and cello) cover of this song that I’ve come across:



    Hope you’ve enjoyed them!