Posts

C++23 finally lets us solve the const view problem

For a while now I've been vexed by a problem I don't really know how to name or describe in simple terms. Say we have a JSON file and we are designing a program that provides a more natural way to edit that file than writing JSON by hand. This program would help its user maintain proper structure within the JSON, but it would also be forgiving of erroneous structure in hand-edited or corrupted documents. Once you have the deserialized JSON in memory, you could go to the effort of loading it all into bespoke classes with rich interactions, but you immediately run into problems. Depending on your serialization framework, you might have issues with the load and save code getting out of sync, with upgrading from older versions and later downgrading again, or with elements in the JSON being re-ordered and causing spurious diffs in source control. The deserialized JSON is right there. It's all already in memory. Why are w...
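
As a rough sketch of the alternative the excerpt gestures at, imagine a thin typed view over the deserialized document itself rather than copies in bespoke classes. The snippet below assumes nlohmann::json and an invented PersonView wrapper; it's illustrative only, not the post's actual design.

    // Illustrative sketch (assumes nlohmann::json; not the post's actual code):
    // a thin view that edits the deserialized document in place, so unknown
    // fields and element order survive a load/save round trip.
    #include <nlohmann/json.hpp>
    #include <string>

    class PersonView {
    public:
        explicit PersonView(nlohmann::json& node) : node_(node) {}

        std::string name() const { return node_.value("name", std::string{}); }
        void set_name(const std::string& n) { node_["name"] = n; }

    private:
        nlohmann::json& node_;  // refers into the document; nothing is copied out
    };

A view like this binds only to a mutable document; offering the same read access over a const document without duplicating the class is, presumably, where the "const view problem" of the title comes in.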

CW_USEDEFAULT: An underappreciated UX feature

Say you're writing code for a desktop PC application. It's very common nowadays for people to have multiple displays in their desktop setups. When the user launches your program, how do you know which screen it should appear on to provide the least-jarring experience? Many applications seem to get this wrong, opting for whatever display happens to be set as primary, for whatever display the program was last closed on, or for whatever screen the mouse pointer happens to be on, either at some arbitrary time after the app started executing or each time it gets around to creating a window. While those approaches would make good choices for the user to pick between in your app's settings, they're all poor defaults and even worse for the first-launch experience. It's actually a trick question. The correct answer is to let the window manager decide. That can vary based on operating system and desktop environment, but the general principle ...
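
On Windows, "let the window manager decide" boils down to passing CW_USEDEFAULT for the position when creating a top-level window. A minimal sketch (window class registration and message loop omitted):

    #include <windows.h>

    // Create a top-level window and let the system pick its position and size.
    // `className` is assumed to be a window class registered elsewhere.
    HWND CreateMainWindow(HINSTANCE hInstance, const wchar_t* className)
    {
        return CreateWindowExW(
            0,                            // no extended styles
            className,
            L"My App",
            WS_OVERLAPPEDWINDOW,
            CW_USEDEFAULT, CW_USEDEFAULT, // x, y: defer to the window manager
            CW_USEDEFAULT, CW_USEDEFAULT, // width, height: defer as well
            nullptr, nullptr, hInstance, nullptr);
    }

In the same spirit, showing the window with the nCmdShow value passed to wWinMain, rather than hard-coding SW_SHOW, lets the launching shell have a say in how the window first appears.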

What if C++ had explicit destruction?

Scope-based lifetime is a fantastic language feature, and it's a large contributing factor to the popularity of languages like Rust. C++ calls it RAII for historical reasons, but the general idea is that the compiler automatically inserts calls to cleanup functions (destructors) at the instant when the object ceases to be accessible (goes out of scope). This works very well for a wide variety of object types, such as allocated memory, operating system handles (e.g. files), locks/mutexes, logging, and more. The primary strength here is that the compiler doesn't let us forget to do a necessary operation. However, destructors in C++ are also incredibly limited compared to how they could be. They cannot take any parameters, and they cannot fail. You can of course add interfaces to set parameters in advance or to explicitly clean up before the destructor so you can check for failures, but those are things that can be forgotten. The compiler always remembers to call the destructor...
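
To make that limitation concrete, here is a minimal sketch (illustrative, not from the post) of the usual workaround: an explicit close() that can report failure, alongside a destructor that cannot, and that nobody has to remember to call.

    #include <cstdio>
    #include <stdexcept>

    class LogFile {
    public:
        explicit LogFile(const char* path) : f_(std::fopen(path, "w")) {
            if (!f_) throw std::runtime_error("open failed");
        }

        LogFile(const LogFile&) = delete;
        LogFile& operator=(const LogFile&) = delete;

        // Explicit cleanup: the caller can observe and react to failure,
        // but the compiler won't remind anyone to call it.
        void close() {
            if (!f_) return;
            std::FILE* f = f_;
            f_ = nullptr;
            if (std::fclose(f) != 0) throw std::runtime_error("close failed");
        }

        // Implicit cleanup: never forgotten, but any error from fclose
        // has nowhere to go.
        ~LogFile() {
            if (f_) std::fclose(f_);
        }

    private:
        std::FILE* f_;
    };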