The common denominator is tensor contraction. View a tensor of rank k as an object whose elements are indexed through k indices (let us forget about co- and contravariant indices for the moment). A scalar is a tensor of rank 0, a vector a tensor of rank 1, and a matrix is a vector of rank 2.
I assume that by "vector of rank 2" you mean "tensor of rank 2". If so, I have spent my whole life thinking I know nothing about tensors, and now I find out that they are plain old Fortran arrays, which I first learned in the early 1970s before I even had access to a computer. These same arrays appear in APL, Common Lisp, and the two SRFIs. (It's not uncommon for the standard to allow implementations to set a limit on the rank; usually the minimum value of this limit is 7.) Or at least (if I understand Linas) tensors are that subset of arrays that have the same number of elements in each dimension: 3 x 3 x 3 but not 3 x 4 x 5. To cope with this difference, inner product requires only the indices being contracted over to have the same extent, leaving the others unconstrained.
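Just to make that last point about extents concrete, here is a small sketch in Python with NumPy (not Scheme, and nothing to do with either SRFI's API, but it makes the shape arithmetic easy to see):

    import numpy as np

    # Inner product in the APL/array sense: contract the last axis of A
    # against the first axis of B.  Only those two extents must agree;
    # the remaining axes are unconstrained and survive in the result.
    A = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # rank 3, shape (2, 3, 4)
    B = np.arange(4 * 5).reshape(4, 5)          # rank 2, shape (4, 5)

    C = np.tensordot(A, B, axes=1)              # contract A's axis 2 with B's axis 0
    print(C.shape)                              # (2, 3, 5): rank 3 + 2 - 2 = 3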
The outer product of a rank k and a rank l tensor is a rank (k + l) tensor. (You basically just take all combinations of products of pairs of elements of the two factors.)
Yes. Generalizing, the rank (number of dimensions) of the outer product is the sum of the ranks of the operands.
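For example (again a NumPy sketch, chosen only because it shows the shapes directly):

    import numpy as np

    # Outer product: every element of A times every element of B.
    # The rank of the result is the sum of the ranks of the operands.
    A = np.arange(6).reshape(2, 3)        # rank 2
    B = np.arange(20).reshape(4, 5)       # rank 2

    C = np.multiply.outer(A, B)           # equivalently np.tensordot(A, B, axes=0)
    print(C.shape)                        # (2, 3, 4, 5): rank 2 + 2 = 4
    assert C[1, 2, 3, 4] == A[1, 2] * B[3, 4]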
Given two rank 3 tensors, you could define their "inner product" by taking their outer product and contracting over a pair of indices you choose by convention. But it would be more interesting to have a general contraction procedure over any pair of indices.
It would. I attempted to get Brad to put the inner product into SRFI 179, but didn't succeed: the APL definition works only because APL scalars just are 0-dimensional arrays, which 179 does not support (though 164 and CL do). He said he might add the definition as an example, but didn't. Perhaps I should have asked for array-contract instead, though the same irregularity would result: it might return an array or not, depending on the ranks of the arguments.
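What I have in mind for array-contract is roughly the following, sketched in NumPy rather than in terms of the 179 API; the name and signature are just my invention:

    import numpy as np

    def array_contract(a, b, axis_a, axis_b):
        """Sketch of a general contraction: sum over axis_a of a paired
        with axis_b of b.  The two extents must agree; all other axes of
        both operands survive in the result."""
        return np.tensordot(a, b, axes=(axis_a, axis_b))

    A = np.arange(2 * 3 * 4).reshape(2, 3, 4)
    B = np.arange(4 * 3 * 2).reshape(4, 3, 2)
    print(array_contract(A, B, 1, 1).shape)    # (2, 4, 4, 2)

    # The irregularity: contracting two vectors yields a rank-0 result,
    # which NumPy (like APL and CL) happily represents as a 0-dimensional
    # array, but SRFI 179 has no such object.
    v = np.arange(3)
    print(array_contract(v, v, 0, 0))          # 5, as a 0-dimensional array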
covariant and contravariant
I do hope this is the same as, or very close to, the definition I know for inclusion-polymorphic functions: a function can soundly be substituted for another if it is contravariant in its arguments (meaning the substitute may accept any supertype of the declared argument type) and covariant in its result (meaning it may return any subtype of the declared result type). Java arrays are notoriously covariant for both reading and writing, and consequently are unsound: you can treat an array of Integers (which are true objects in Java, as ints are not) as an array of Objects, and then mutate an element to a non-Integer with no complaint from the compiler (though it will fail at runtime with an ArrayStoreException, of course). They really should have been invariant, but it's too late now.
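Here is that rule as a checker such as mypy sees it, with made-up Animal/Cat types standing in for any subtype pair; the commented-out lines are the analogue of the Java hole:

    from typing import Callable, List

    class Animal: ...
    class Cat(Animal):
        def meow(self) -> str: return "meow"

    # Function subtyping: a callback declared to take a Cat and return an
    # Animal can safely be replaced by one that takes any Animal (wider,
    # contravariant argument) and returns a Cat (narrower, covariant result).
    def pet_sitter(a: Animal) -> Cat:            # Callable[[Animal], Cat]
        return Cat()

    def hire(sitter: Callable[[Cat], Animal]) -> Animal:
        return sitter(Cat())

    hire(pet_sitter)   # fine, both at runtime and for the type checker

    # Mutable containers, by contrast, must be invariant.  If List[Cat]
    # were treated as a subtype of List[Animal], the way Java treats
    # Integer[] as a subtype of Object[], this would type-check and then
    # blow up on the meow() call:
    def adopt_any(animals: List[Animal]) -> None:
        animals.append(Animal())

    cats: List[Cat] = [Cat()]
    # adopt_any(cats)          # rejected by mypy; Java's arrays allow the
    # cats[-1].meow()          # analogous step and fail only at runtime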
This concept of co/contravariance, by the way, is isomorphic to the simple Bell-LaPadula model of mandatory access control. Given security levels such as Unclassified, Secret, and Top Secret, where each level includes the one below, the "simple security rule" says that an agent cannot read a document from a store with a higher security level than its own (a covariance restriction), and the "*-rule" (so named for lack of a better name) says that an agent cannot write a document into a store with a lower level than its own (contravariance). In practice, permission to write usually goes with permission to read, so the practical *-rule requires an agent to write only into stores with the same level as its own (invariance). I find this isomorphism fascinating, and I use the Bell-LaPadula model to reason about co/contravariance, since arguments are "written" to a function and results are "read" from it.
(Why does the *-rule even exist? Because you don't want a buggy Top Secret process to write Top Secret information to an Unclassified store. For human agents, this is generally enforced by the legal system rather than by technical means.)
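A toy encoding of the two rules exactly as I stated them, purely for illustration:

    from enum import IntEnum

    class Level(IntEnum):
        UNCLASSIFIED = 0
        SECRET = 1
        TOP_SECRET = 2

    def may_read(agent: Level, store: Level) -> bool:
        # Simple security rule: no read up.
        return store <= agent

    def may_write(agent: Level, store: Level) -> bool:
        # *-rule: no write down.  (The practical variant, where writers
        # also need to read what they wrote, collapses to store == agent.)
        return store >= agent

    assert may_read(Level.TOP_SECRET, Level.SECRET)
    assert not may_read(Level.SECRET, Level.TOP_SECRET)
    assert may_write(Level.SECRET, Level.TOP_SECRET)
    assert not may_write(Level.TOP_SECRET, Level.UNCLASSIFIED)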