Types are called “static semantics” because what counts as “semantics” depends on where you’re looking from. Type checking is an (abstract) interpretation of a syntactically well-formed program. Execution is a (concrete) interpretation of a syntactically well-formed program. The only difference is the granularity of the execution, i.e. the semantics the interpreter attributes to the program. It is easy to add even more layers, with different interpreters for different levels of granularity (this is often called “elaboration” or “optimization,” depending on the goal of the interpreter).

It’s called static because we just happen to run that interpreter at compile time. We could just as well do it at run time by creating a “shadow program” that is executed in parallel with the real program. In fact, this is exactly what (checked) dynamic languages are doing. It’s less efficient and gives fewer compile-time guarantees than static checking, but it’s no big whoop. For sufficiently expressive type systems (e.g. dependent types, certain deadlock-avoidance systems, staged programming), the types themselves exist dynamically, and so some portion of the shadow program cannot be compiled away. At this point there’s no real static/dynamic difference between terms and types, but you still get the nice separation of concerns by calling them different things. And this can be hoisted up another level with dynamic kinding systems, and up as many more levels as you like (hence, “many different interpreters for different levels of granularity”).
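A minimal sketch (the names here are my own, not from the original) of what that shadow program might look like: a checked dynamic language tags each value and verifies the tags at run time, performing dynamically the same check a static type checker would perform at compile time.

```
-- A hypothetical sketch of the "shadow program": values carry type tags,
-- and every application first checks the tag.
data Dyn = DInt Int | DFun (Dyn -> Dyn)

-- The shadow check: verify that the thing being applied really is a
-- function, at run time rather than compile time.
apply :: Dyn -> Dyn -> Dyn
apply (DFun f) x = f x
apply _        _ = error "runtime type error: applied a non-function"

-- Project an integer out of a tagged value, again checking the tag.
asInt :: Dyn -> Int
asInt (DInt n) = n
asInt _        = error "runtime type error: expected an integer"
```

A static checker compiles all of these tag checks away; here they survive into the running program, which is exactly the efficiency cost mentioned above.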

There’s nothing special about the execution of a program. It’s just a mathematical object, and that’s just one interpretation of it.

---

Type checking being syntactic means (to me) that a program is determined to be well typed depending only on its syntactic content, and not on the semantics. So for instance:

```
foo i f | i <= 5 = f i
        | i > 5  = f + i
```

is judged to be invalid, because we can look only at the syntactic terms, and see that `f + i` requires `f` to be an integer, and `f i` requires `f` to be a function. Whereas, perhaps, if we didn’t bother with this check, things would be fine, because we always happen to call `foo` with an integer `f` when `i > 5` and with a function otherwise. So if we looked at the overall semantic content of the program, we might judge that things are okay, but we aren’t capable of judging that using only syntax-based rules.
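If we want the type checker to accept something like this without dependent types, one workaround (my own reworking, not code from the discussion) is to make the two possible argument shapes explicit, so the syntactic check can see them:

```
-- A hedged sketch: the rejected foo becomes typeable in plain Haskell
-- if the caller says explicitly whether it is passing a function or an
-- integer. Either replaces the implicit convention described in the text.
foo :: Int -> Either (Int -> Int) Int -> Int
foo i (Left f)  | i <= 5 = f i     -- function argument, small i
foo i (Right n) | i > 5  = n + i   -- integer argument, large i
foo _ _ = error "argument shape does not match the guard on i"
```

The convention “function when `i <= 5`, integer when `i > 5`” is still only enforced at run time; the type system just checks each branch separately.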

Of course, when we get into full-on dependent types, we can give `foo` a type like:

```
foo : (i : Integer) -> (if i <= 5 then (Integer -> Integer) else Integer) -> Integer
```

I’m not sure if that still qualifies as being syntactic or not.

---

Anyway, I have to admit I’ve never gone into formal specifications of languages, so I’ll take your word on this. Now, if this is indeed what Randal meant, then I stand corrected. However, I’d argue that the *syntactic check* you get from a Haskell-like type system still can’t easily be replaced by manually implemented tests on the dynamic semantics of a type.

However, semantics of programming languages generally has to do with assigning some meaning to syntactic terms. So, if you go with denotational semantics (which is what I’m most familiar with), you’re giving the meaning of your programs in terms of mathematical objects, like sets or something. So if we’re giving semantics to arithmetic expressions in some language, we might say (where syntactic expressions go inside brackets):

```
[0] = 0
[1] = 1
...
[m + n] = [m] + [n]
[m - n] = [m] - [n]
...
```
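The bracketed equations above can be written out as a runnable sketch for a tiny arithmetic language (the constructor names are my own, introduced only for illustration):

```
-- Syntax of a tiny arithmetic language.
data Expr = Lit Integer | Add Expr Expr | Sub Expr Expr

-- The denotation function [.]: map syntax to mathematical objects.
denote :: Expr -> Integer
denote (Lit n)   = n                     -- [0] = 0, [1] = 1, ...
denote (Add m n) = denote m + denote n   -- [m + n] = [m] + [n]
denote (Sub m n) = denote m - denote n   -- [m - n] = [m] - [n]
```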

And if we extend the language with ways of defining functions, we might have rules:

```
[f x] = [f]([x])
```

Where on the left we have an expression representing the application of some expression `f` to some expression `x`, and its meaning on the right is the result of the mathematical function corresponding to `f` at the value corresponding to `x`.
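Extending the earlier arithmetic sketch with application looks something like this (again my own encoding, not from the original): the semantic domain must now contain both integers and functions.

```
-- Semantic domain: integers and mathematical functions on values.
data Val = VInt Integer | VFun (Val -> Val)

-- Syntax, extended with primitives and application.
data Expr = Lit Integer | Prim (Val -> Val) | App Expr Expr

-- An example primitive: the successor function on integers.
succVal :: Val -> Val
succVal (VInt n) = VInt (n + 1)
succVal v        = v

denote :: Expr -> Val
denote (Lit n)   = VInt n
denote (Prim f)  = VFun f
denote (App f x) = case denote f of
  VFun g -> g (denote x)                      -- [f x] = [f]([x])
  VInt _ -> error "ill-typed: not a function" -- partiality, if untyped terms sneak in
```

The `error` case is the partiality concern discussed further below: if ill-typed terms are ruled out in advance, that branch is unreachable.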

However, when you give a type system for a language, you don’t typically (as far as I’ve seen) make reference to the above sort of semantics. Indeed, the type system is typically given first. You wouldn’t say, “if `f` denotes a mathematical function that takes an integer, and `x` denotes an integer, then the application of `f` to `x` is type correct.” Types are assigned to expressions (which are syntactic) and not to their semantic counterparts.

However, you probably could look at such type assignments as another kind of semantics, aside from denotational, operational or what have you. In denotational semantics, you’re giving a correspondence between syntactic terms and, say, integers and functions thereof that are their meaning. In a type system (which, as I mentioned, is often called the “static semantics” of a language), you’re giving a correspondence between syntactic terms and types. Of course, not all raw syntactic constructs are well typed, so we might view the type-assignment function as a partial function (implemented here with Maybe):

```
[0] = Just Integer
[1] = Just Integer
...
[m + n] = guardInteger [m] >> guardInteger [n] >> Just Integer
...
[f x] = guardIntegerToInteger [f] >> guardInteger [x] >> Just Integer
```

And so on. So in this view, a type system is another kind of semantics for your language, which helps to rule out certain invalid syntactic expressions independently of their meanings in other semantics. (Note, for instance, that the denotational semantics above simply assumed that when we have `f x`, `f` corresponds to an appropriate mathematical function; this can be guaranteed by first checking that the static semantics accepts the program. If you didn’t rule out invalid expressions in advance, you’d have partiality concerns complicating the denotational semantics.)
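The partial type-assignment sketched above can be made runnable; here is one way (the guard helpers carry the hypothetical names from the pseudocode, and the syntax is a small invented language with one primitive function):

```
-- The "static semantics" as a partial function from syntax to types.
data Ty = TInteger | TIntToInt deriving (Eq, Show)

-- Syntax: literals, addition, application, and Succ, a primitive
-- of type Integer -> Integer (an assumed example primitive).
data E = Zero | One | Plus E E | Ap E E | Succ

-- Hypothetical guard helpers from the pseudocode above.
guardInteger :: Maybe Ty -> Maybe Ty
guardInteger (Just TInteger) = Just TInteger
guardInteger _               = Nothing

guardIntegerToInteger :: Maybe Ty -> Maybe Ty
guardIntegerToInteger (Just TIntToInt) = Just TIntToInt
guardIntegerToInteger _                = Nothing

typeOf :: E -> Maybe Ty
typeOf Zero       = Just TInteger
typeOf One        = Just TInteger
typeOf Succ       = Just TIntToInt
typeOf (Plus m n) = guardInteger (typeOf m) >> guardInteger (typeOf n) >> Just TInteger
typeOf (Ap f x)   = guardIntegerToInteger (typeOf f) >> guardInteger (typeOf x) >> Just TInteger
```

Ill-typed terms like `Ap One Succ` land in `Nothing`, which is exactly the “ruled out in advance” behaviour the text describes.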

Anyhow, that is the sense in which type checking is ‘syntactic,’ from a formal specification perspective. That may or may not be the sense in which you and others were thinking of the word.

---

```
f : a -> b, x : a ==> f x : b
```

All of which talk about how you determine the type of one syntactic expression based on the types of its syntactic components (perhaps things are less clear cut in dependently typed languages, where type checking involves evaluating expressions). However, the type system of a language is also often called its “static semantics,” and obviously a good type system will be a way of ruling out (at compile time) syntactic expressions that correspond to semantically (as in dynamic semantics) bad things.

And with a good type system, you can do quite a lot. For instance, here is a program I wrote not too long ago that implements insertion sort in Agda, and proves that it produces a sorted permutation of the original list (although I’m not really happy with the permutation part; it assumes more axioms about permutations than are probably necessary). Of course, the type system is the method by which these properties are proved.
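The Agda program itself isn’t reproduced here, but its computational content is ordinary insertion sort; a plain Haskell version, without any of the sortedness or permutation proofs, looks like:

```
-- Plain insertion sort. In the Agda version described above, the
-- sorted-permutation guarantee lives in the types; here it does not.
insert :: Ord a => a -> [a] -> [a]
insert x []     = [x]
insert x (y:ys)
  | x <= y    = x : y : ys
  | otherwise = y : insert x ys

insertionSort :: Ord a => [a] -> [a]
insertionSort = foldr insert []
```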

---

> Static typing is still effectively a syntax check, not a semantic check though.

That is factually incorrect. Static typing is most definitely a semantic check, and can be used to make static guarantees about the behaviour of code. This is as opposed to syntax, which is usually more-or-less well separated from the semantics. Systems like Coq can be used to prove statements about programs in the type system statically.

---

This is of course another reason why Randal’s talk is misleading: one can’t say that dynamic languages beat static ones without actually examining the state of the art in static type systems.

---

But to harness that power, one must first learn to program the type system. That takes some getting used to.

---

I for one would be interested to read the blog post if you ever do publish it. I recently wrote about my contrasting experiences with static/dynamic languages, and would love to have more references that point out the fallacies of the arguments that dynamic language enthusiasts push.

---

I’d be happy to read your draft though. I’m curious.
