Pointer arithmetic is defined only within an array (including its one-past-end position). A non-array object is treated as an array of length one for these purposes.

Explanation

The code below has undefined behavior:

// UB
int f() {
    int x = 0;
    return *(&x - 1 + 1);
}

Even though it compiles successfully with Clang, GCC, and MSVC and will most likely appear to work fine, it still contains UB. With UB, anything can happen: the code may seem to work, it may break, or it may “work” until a different optimization level, compiler version, target architecture, or a tiny refactor makes it fail. In general, compilers are allowed to assume that undefined behavior never occurs, so they don’t have to generate code that handles such cases correctly. As a result, once UB is present, you can’t rely on any particular program behavior.

But why is there undefined behavior if we seemingly just return the value of x?

Pointer arithmetic

First, note that + and - are left-associative, so &x - 1 + 1 is grouped as (&x - 1) + 1.

Now, let’s look at the rule about pointer arithmetic, which requires pointer addition and subtraction to stay within the same array or one-past-end:

When an expression J that has integral type is added to or subtracted from an expression P of pointer type, the result has the type of P.

  • If P evaluates to a null pointer value and J evaluates to 0, the result is a null pointer value.
  • Otherwise, if P points to a (possibly-hypothetical) array element i of an array object x with n elements, the expressions P + J and J + P (where J has the value j) point to the (possibly-hypothetical) array element i + j of x if 0 ≤ i + j ≤ n, and the expression P - J points to the (possibly-hypothetical) array element i − j of x if 0 ≤ i − j ≤ n.
  • Otherwise, the behavior is undefined.
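
Concretely, for a real array the rule permits exactly the positions 0 through n. A small sketch (the commented-out lines are the forbidden cases):

void in_bounds() {
    int a[3] = {1, 2, 3};
    int* p = a + 3;     // i + j == 3 == n: the one-past-end pointer, allowed
    int* q = p - 3;     // i - j == 0: back to &a[0], allowed
    // int* r = a - 1;  // i + j == -1 < 0: undefined behavior
    // int* s = a + 4;  // i + j == 4 > n: undefined behavior
}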

So the UB comes from the fact that &x - 1 steps outside the allowed range during evaluation — even though that intermediate pointer is not dereferenced.

There are no arrays in the code example, but the Standard states that for pointer arithmetic purposes a non-array object is treated as an array of length 1:

An object of type T that is not an array element is considered to belong to an array with one element of type T.

So x is treated like int[1]: &x is element 0, &x + 1 is one-past-end, and &x - 1 steps outside the allowed range.
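
Spelled out as code, the full set of pointers that may legally be formed from &x is small. A sketch mirroring the rule above (only the last, commented-out line is UB):

void scalar_range() {
    int x = 0;
    int* p0 = &x;        // element 0 of the hypothetical int[1]
    int* p1 = &x + 1;    // one-past-end: may be formed, but not dereferenced
    // int* p2 = &x - 1; // element -1: UB, even though nothing is dereferenced
}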

Constexpr check

Constant evaluation cannot perform operations that have UB, so we can demonstrate the UB by forcing compile-time evaluation: mark the function constexpr and call it from a static_assert:

constexpr int f() {
    int x = 0;
    return *(&x - 1 + 1);
}

static_assert(f() == 0);

Clang refuses to compile this code:

error: static assertion expression is not an integral constant expression
    8 | static_assert(f() == 0);
      |               ^~~~~~~~
note: cannot refer to element -1 of non-array object in a constant expression

Same for MSVC:

error C2131: expression did not evaluate to a constant
note: failure was caused by out of range index -1; allowed range is 0 <= index < 1
note: the call stack of the evaluation (the oldest call first) is
note: while evaluating function 'int f(void)'
Compiler returned: 2

However, even the current GCC (15.2) compiles this successfully. This behavior is tracked as a GCC bug.

Reordering

If we change the expression from &x - 1 + 1 to &x + 1 - 1, then there is no UB anymore:

  • x is logically treated as an array int[1], so &x + 1 is the allowed one-past-end pointer;
  • subtracting 1 brings it back to &x.

All three compilers successfully compile this.
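
For instance, the constexpr check from above now passes. The same sketch with the reordered expression (renamed g here):

constexpr int g() {
    int x = 0;
    return *(&x + 1 - 1); // one-past-end first, then back to &x: no UB
}

static_assert(g() == 0);  // a valid constant expression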

Why does this rule exist?

One reason is optimization: because pointer arithmetic must stay in-bounds, the compiler can draw much stronger conclusions during alias analysis. UB lets the compiler assume that “impossible” situations never occur, so it doesn’t have to generate code that handles them. In particular, the compiler may assume that after any series of arithmetic operations, a pointer still points to an element of the same array object (or one-past-end). The same applies to a single non-array object.

If this weren’t true, the compiler would have to be “paranoid”: a pointer derived from &x could, in principle, wander into a different object — i.e., alias unrelated objects. This would force much more conservative assumptions and could disable many optimizations.
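
As a simplified sketch of what this buys, suppose touch is a hypothetical function defined in another translation unit, so the compiler cannot see what it does with the pointer:

void touch(int* p); // hypothetical: may do arbitrary arithmetic on p

int sketch() {
    int x = 0;
    int y = 0;
    touch(&x);  // any pointer touch legally derives from &x stays within x
                // (it can only be &x or &x + 1), so no write inside touch
                // is allowed to reach y...
    return y;   // ...and the compiler may fold this to "return 0"
}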

Note that the one-past-end pointer might have the same address as another object. Even so, it may be used only for pointer arithmetic and comparisons within its own array; dereferencing it, or using it to access or modify an unrelated object, is undefined behavior.
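
A sketch of what one-past-end pointers are legitimately for (the classic half-open-range idiom), and of the one thing they must never do:

void increment_all() {
    int a[4] = {1, 2, 3, 4};
    int* end = a + 4;                  // one-past-end: fine to form
    for (int* p = a; p != end; ++p) {  // fine to compare within the same array
        *p += 1;                       // p never equals end here, so this is fine
    }
    // *end = 0;  // UB: even if end happens to share its address with another
    //            // object, it must not be used to read or write anything
}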

Another reason is portability: this rule supports implementations on architectures with non-flat addressing, where a pointer is not just a plain integer address and may carry additional metadata (segments, capabilities, etc.).

References / Further reading

  1. C99 Rationale, v5.10 (see the discussion of pointer arithmetic and segmented architectures).
  2. WG14 provenance / alias-analysis notes.
  3. “Pointers Are More Abstract Than You Might Expect in C”.