Question

I read: Use composite type to create new table

I have a table called locations, representing objects with latitude and longitude coordinates.

In another table, I declared a column of type locations (just for fun, not trying to be smart), i.e.

CREATE TABLE XXX (..., some_column locations, ...);

And now I'm asking myself what this means and if I could store a locations object in there.

And here's what I tried to do:

SELECT pg_typeof(ROW(x)) FROM locations x LIMIT 1;

which returns record. I tried casting this to locations, i.e.

SELECT ROW(x)::locations FROM locations x LIMIT 1;

which yields

ERROR: cannot cast type record to locations

Next I tried defining a composite type type_location based on the columns of the locations table, and created a typed table (CREATE TABLE ... OF ...) based on it. Still I am unable to do ROW(x)::locations.

Ultimately I'm trying to get a value to store into table XXX of type locations (or type_location), but I don't understand which part of my reasoning is fallacious.

PS: I'm not trying to create a sound database design using this construction, but really only toying around with PostgreSQL and its type system.


Solution

And now I'm asking myself what this means and if I could store a locations object in there.

Yes, you can. (But there are not many great use cases for that.)
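
For example (a sketch, assuming a minimal locations table with two float columns, and a table xxx shaped like the XXX in the question):

```sql
CREATE TABLE locations (lat float8, lon float8);
CREATE TABLE xxx (id int, some_column locations);

-- Store a whole locations row in the composite column;
-- the alias l is a whole-row reference of type locations:
INSERT INTO xxx
SELECT 1, l FROM locations l LIMIT 1;

-- Or build a value directly; a record with matching structure
-- can be cast to the composite type:
INSERT INTO xxx VALUES (2, ROW(1.5, 2.5)::locations);
```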

This does not do what you seem to think it does:

SELECT ROW(x)::locations FROM locations x LIMIT 1;

x is already a row type. By wrapping it into ROW(x) you create a record containing a single column of type locations, which cannot be cast to the row type locations because its structure is different. Use instead:

SELECT x::locations FROM locations x LIMIT 1;

... where the cast is redundant. So just:

SELECT x FROM locations x LIMIT 1;

However, if there is a column of the same name "x", the reference resolves to that column instead. Pick a table alias that can never appear as a column name, or use this to be sure:

SELECT (x.*)::locations FROM locations x LIMIT 1;

Now, the cast is not redundant, as Postgres would otherwise expand x.* (or even (x.*)) to the list of columns. The manual covers this under "Using Composite Types in Queries".


Also just:

SELECT pg_typeof(x) FROM locations x LIMIT 1;

instead of:

SELECT pg_typeof(ROW(x)) FROM locations x LIMIT 1;

Aside: the ROW constructor does not preserve column names and always produces an anonymous record (as you found out the hard way).
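
A quick illustration of that point (any values will do): a ROW constructor always yields an anonymous record, regardless of what goes into it.

```sql
SELECT pg_typeof(ROW(1, 'two'));  -- record
```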


Other tips

I've been messing around with types and casts this past week too, and love your question. Here's a bit of a trick:

create table state_crunched
    (data state);

-- table_name::table_name (or table_name::text) casts a row/compound
-- type to a (csv,ish,"format like this")
insert into state_crunched
    select state::state from state;

In my case, the sample table is state:

CREATE TABLE IF NOT EXISTS api.state (
    id uuid DEFAULT gen_random_uuid(),
    "name" citext,
    abbreviation citext,
    population bigint,
    total_sq_miles real,
    percent_land text
);

In our case, we're implementing one or more custom types per table, and it might be helpful to be able to archive rows tagged with their type/format:

create table state_crunched
     (format_name text,
     data state);

I just tried, and I don't see a way of declaring an anonymous record as a column type. (Other than storing rows as text, etc., after the table::table serialization trick.) I'm thinking of storing records in rows for archiving. It's probably better to use replication and a history table for each source archive. But, like you, I'm just getting my head around the range of features available. So, in the sketch table above, the format name would let you know how to unpack the data back into a record structure. You could have a cast defined with CREATE CAST and a function to handle the expansion. Or at least that's what I was thinking; I don't know how to make that work.
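For the unpacking step, Postgres' built-in I/O conversion may already be enough: an explicit cast from text to a composite type parses the serialized form, so a round trip can be sketched like this (assuming the state table above):

```sql
-- Serialize a row to its text form, then parse it back into a
-- state row via the built-in text-to-composite I/O conversion cast.
SELECT (s::text)::state FROM state s;
```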

If you're asking "why multiple compound types for a table, which is itself already a compound type?", then fair question.

CREATE TYPE api.state_v1 AS
(
    "name" citext,
    population bigint,
    total_sq_miles real,
    percent_land text,
    statehood_year integer
);


CREATE TYPE api.state_v2 AS
(
    id uuid,
    "name" citext,
    abbr citext,
    population bigint,
    total_sq_miles real,
    percent_land real,
    statehood_year smallint,
    capital citext
);

In our case, we've got a distributed system where CRUD work is done in something other than Postgres. Then data is pushed up to Postgres for aggregation and analysis. The "client" applications in the field may lag significantly between updates. It's entirely possible for a site to be months behind the latest release. This means that operations like INSERT are based on the structure in Postgres from months ago. I think that in this situation, many people have an ORM or some centralized layer for translating input formats of different types into the current structure. Well, Postgres is our centralized system, so the code goes there. The idea is to have an INSERT-handling function that accepts arrays of rows in a particular format, like state_v1[] or state_v2[]. The server-side function(s) then unnest the incoming array data, massage it as needed, and insert it into the table. If the underlying table has had columns added, dropped, renamed, or retyped, then the function can deal with getting the old format into the new shape.
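
A minimal sketch of such a handler (hypothetical function name; assumes the api.state and api.state_v1 definitions above, and simply drops the v1-only statehood_year on the way in):

```sql
CREATE FUNCTION api.insert_state_v1(batch api.state_v1[])
RETURNS void AS $$
    -- Unnest the incoming v1 rows and map them onto the current
    -- table; columns the old format doesn't carry get their defaults.
    INSERT INTO api.state ("name", population, total_sq_miles, percent_land)
    SELECT r."name", r.population, r.total_sq_miles, r.percent_land
    FROM unnest(batch) AS r;
$$ LANGUAGE sql;
```

A matching handler per versioned type keeps all of the reshaping logic server-side, next to the table it targets.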

I'll be watching with interest if you come up with any ideas, info, or tricks.

Licensed under: CC-BY-SA with attribution
Not affiliated with dba.stackexchange