notdevin
@notdevin.eth
It would be great to know the weights of a model, but that won’t happen. They could, however, give us abstracted coordinates that reference the training data relevant to producing an output, which would help you discover your own way to more precisely steer the output
16 replies
2 recasts
14 reactions
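The "abstracted coordinates" idea above could be read as a form of training-data attribution: alongside an output, the provider returns opaque ids pointing at the most relevant training examples, without exposing weights or raw documents. A minimal sketch, assuming a hypothetical nearest-neighbor attribution over training-data embeddings (all names, vectors, and the similarity choice here are illustrative assumptions, not anyone's actual system):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def attribution_coordinates(query_vec, training_vecs, k=2):
    """Return ids of the k training embeddings closest to the query.

    The ids act as opaque coordinates: they reference points in the
    training set without revealing the underlying documents themselves.
    """
    scored = sorted(
        ((cosine(query_vec, v), i) for i, v in enumerate(training_vecs)),
        reverse=True,
    )
    return [i for _, i in scored[:k]]

# Toy embeddings (assumed, for illustration only)
training = [
    [1.0, 0.0, 0.0],   # id 0
    [0.9, 0.1, 0.0],   # id 1
    [0.0, 1.0, 0.0],   # id 2
]
query = [1.0, 0.05, 0.0]
print(attribution_coordinates(query, training))  # → [0, 1]
```

A user could then probe or re-weight the sources behind an answer to steer future outputs, which is the "discover your own way" part of the post, while the provider only ever exposes indices rather than weights or data.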

Tk¹
@tekus.eth
If AI models gave us hints about what info they used to answer, how could we keep things fair for everyone involved in terms of privacy?
1 reply
0 recast
1 reaction

notdevin
@notdevin.eth
What do fairness and privacy mean in this context?
0 reply
0 recast
0 reaction

Tk¹
@tekus.eth
Fairness in this case would mean ensuring the AI doesn’t favour certain data sources, and privacy in the sense of protecting sensitive data from being revealed
1 reply
0 recast
1 reaction