Ryan pfp
Ryan
@ryansmith
pg_column_size is nice. I knew a bigint was 8 bytes, but I was curious to see the storage size for comparable numeric data.
2 replies
0 recast
10 reactions
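(A quick sketch of the kind of comparison Ryan describes. bigint is a fixed 8 bytes; numeric is variable-length, so its size depends on the value's precision — the exact byte counts you get for numeric will vary.)

```sql
-- bigint is always a fixed 8 bytes
SELECT pg_column_size(9000000000::bigint);  -- 8

-- numeric is variable-length: a small header plus 2 bytes per
-- base-10000 digit group, so storage grows with precision
SELECT pg_column_size(9000000000::numeric);
SELECT pg_column_size(9000000000.123456789::numeric);
```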

Shane da Silva pfp
Shane da Silva
@sds
If you haven't already, it'll blow your mind when you see how large composite types are to store. It's baffling, especially when you consider how nice it would be to represent a currency as a composite type (value + denomination). You're better off just having two columns instead.
1 reply
0 recast
0 reaction
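(To see what Shane means, here's a sketch you could run yourself; the `currency` type and the two table names are made up for illustration. The composite value carries its own embedded tuple header per row, so expect `pg_column_size` to report it as substantially larger than the two plain columns combined.)

```sql
CREATE TYPE currency AS (amount bigint, denomination text);

CREATE TABLE orders_composite (price currency);
CREATE TABLE orders_flat (amount bigint, denomination text);

INSERT INTO orders_composite VALUES (ROW(1999, 'USD'));
INSERT INTO orders_flat VALUES (1999, 'USD');

-- composite column: pays the per-value tuple-header overhead
SELECT pg_column_size(price) FROM orders_composite;

-- two plain columns: no extra header
SELECT pg_column_size(amount) + pg_column_size(denomination)
FROM orders_flat;
```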

Ryan pfp
Ryan
@ryansmith
Interesting. I wonder why.
1 reply
0 recast
1 reaction

Shane da Silva pfp
Shane da Silva
@sds
Having looked into this long ago, my best summary is that it's the overhead of storing tuple metadata for _each row_. It feels like it could have been avoided, but I suppose the implementation just hasn't been optimized yet. That said, being able to define your own operator overloads (e.g. to error if you accidentally try to sum two currencies of differing denominations) might be worth the storage cost for the sake of DevEx.
1 reply
0 recast
1 reaction
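(The operator-overload idea Shane mentions can be sketched like this, assuming a composite `currency` type as above; `currency_add` is a made-up name. The custom `+` raises an error instead of silently mixing denominations.)

```sql
-- assumes: CREATE TYPE currency AS (amount bigint, denomination text);

CREATE FUNCTION currency_add(a currency, b currency) RETURNS currency AS $$
BEGIN
  IF a.denomination <> b.denomination THEN
    RAISE EXCEPTION 'cannot add % to %', a.denomination, b.denomination;
  END IF;
  RETURN ROW(a.amount + b.amount, a.denomination)::currency;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

CREATE OPERATOR + (
  LEFTARG = currency,
  RIGHTARG = currency,
  FUNCTION = currency_add
);

-- now this errors instead of producing a nonsense sum:
-- SELECT ROW(100, 'USD')::currency + ROW(100, 'EUR')::currency;
```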

Ryan pfp
Ryan
@ryansmith
Interesting. I wonder if you could get the same kind of abstraction with a stored procedure.
1 reply
0 recast
1 reaction
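(Roughly, yes — a sketch of the same guard as a plain function over two columns, avoiding the composite type's storage overhead; `add_money` is a hypothetical name.)

```sql
CREATE FUNCTION add_money(
  a_amount bigint, a_denom text,
  b_amount bigint, b_denom text
) RETURNS bigint AS $$
BEGIN
  IF a_denom <> b_denom THEN
    RAISE EXCEPTION 'denomination mismatch: % vs %', a_denom, b_denom;
  END IF;
  RETURN a_amount + b_amount;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
```

You lose the `+` operator syntax, but keep the safety check while storing the data as two cheap columns.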

Shane da Silva pfp
Shane da Silva
@sds
This is why I’m super interested in the outcome of pg-nano. If defining and maintaining stored procedures can be done in an ergonomic way, I think we’ll start to see more folks using them. https://warpcast.com/sds/0x8a761ddc
1 reply
0 recast
1 reaction

Ryan pfp
Ryan
@ryansmith
I've seen pg-nano-like projects over the years. The all-or-nothing approach to stored procedures is not a good idea. That being said, @indexsupply's performance is due (in large part) to stored procedures. I use them when I need to push computation down to the storage layer. I don't need a framework for this; I just do it carefully and intentionally. They work best when used as a small, sharp knife.
0 reply
0 recast
1 reaction
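(An example of the "small, sharp knife" usage Ryan describes — pushing an aggregation into the database so one round trip returns a small result set instead of shipping every row to the client. The `payments` table and function name are hypothetical.)

```sql
-- hypothetical schema: payments(created_at timestamptz, amount numeric)

CREATE FUNCTION daily_totals(since date)
RETURNS TABLE (day date, total numeric) AS $$
  SELECT created_at::date, sum(amount)
  FROM payments
  WHERE created_at >= since
  GROUP BY 1
  ORDER BY 1;
$$ LANGUAGE sql STABLE;

-- one round trip, one small result set:
-- SELECT * FROM daily_totals('2024-01-01');
```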