notdevin  pfp
notdevin
@notdevin.eth
It would be great to know the weights of a model but that won’t happen. They could, however, give us abstracted coordinates that reference the training data relevant to producing an output, which would help you discover your own way to more precisely steer the output
16 replies
2 recasts
14 reactions
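A rough sketch of how those “abstracted coordinates” could surface to a user, assuming a hypothetical provider API that returns opaque cluster handles and influence scores instead of weights; every name and value below is illustrative, not a real interface.

```python
# Sketch only: "abstracted coordinates" as opaque handles to clusters of
# training data that most influenced a given output, plus influence scores.
# TrainingCluster, AttributedOutput, generate_with_attribution and the
# cluster ids are all hypothetical, not any provider's real API.
from dataclasses import dataclass

@dataclass
class TrainingCluster:
    cluster_id: str    # opaque handle; does not expose the raw training data
    description: str   # coarse, human-readable summary of the cluster
    influence: float   # estimated contribution to this particular output

@dataclass
class AttributedOutput:
    text: str
    clusters: list[TrainingCluster]

def generate_with_attribution(prompt: str) -> AttributedOutput:
    # Stand-in for the provider call; returns canned data so the sketch runs.
    return AttributedOutput(
        text="...model output...",
        clusters=[
            TrainingCluster("news-2021", "news articles, 2021 crawl", 0.41),
            TrainingCluster("forums-qa", "technical Q&A forums", 0.33),
            TrainingCluster("fiction", "long-form fiction", 0.07),
        ],
    )

# The user inspects which clusters drove the answer, then re-prompts or
# re-weights toward the clusters they trust to steer the next generation.
out = generate_with_attribution("Summarize the study")
for c in sorted(out.clusters, key=lambda c: c.influence, reverse=True):
    print(f"{c.cluster_id:<12} {c.influence:.2f}  {c.description}")
```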

Tk¹ pfp
Tk¹
@tekus.eth
If AI models gave us hints about what info they used to answer, how could we keep things fair for everyone involved in terms of privacy?
1 reply
0 recast
1 reaction

notdevin  pfp
notdevin
@notdevin.eth
What do fairness and privacy mean in this context?
0 reply
0 recast
0 reaction

Tk¹ pfp
Tk¹
@tekus.eth
Fairness in this case would mean ensuring the AI doesn’t favour certain data sources. And privacy in the sense of protecting sensitive data from being revealed
1 reply
0 recast
1 reaction

notdevin  pfp
notdevin
@notdevin.eth
It’s going to always favor a data source that’s got a large representation. The idea of over-representation is exclusively a subjective outcome based on the viewer or user and impossible to predict ahead of time. The idea of data privacy is also a bit obtuse. If the model is trained on data, and you want to use it, then it’s not private. If there is data in the model that should have been private, too late. If there is data that is private and it’s before training then just remove the data 🤷‍♂️
0 reply
0 recast
0 reaction

Tk¹ pfp
Tk¹
@tekus.eth
You have a very good point and that drives a question in my mind. If we accept that data representation will always be uneven and that true privacy is gone once training happens, how should we rethink our approach to AI transparency and ethics?
1 reply
0 recast
1 reaction

notdevin  pfp
notdevin
@notdevin.eth
I think if models came with the equivalent of a nutrition label, consumers could get their bearings and make their own choices. Is a donut ethical? It’s more about the context of its use. I don’t think we need to protect people from models any more than we protect them from donuts. The prevention in both cases is education in the learning sense (not the system of education)
0 reply
0 recast
0 reaction
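A minimal sketch of what such a model “nutrition label” could carry, by analogy with an ingredients panel; the schema and example values are assumptions for illustration, not an existing standard.

```python
# Hypothetical "nutrition label" for a model: coarse facts a consumer could
# read before deciding how to use it. Fields and values are illustrative.
from dataclasses import dataclass

@dataclass
class ModelNutritionLabel:
    model_name: str
    training_data_sources: dict[str, float]  # source -> approximate share of training tokens
    data_cutoff: str                         # most recent training data
    known_gaps: list[str]                    # under-represented domains or languages
    intended_uses: list[str]
    evaluations: dict[str, float]            # benchmark name -> score

label = ModelNutritionLabel(
    model_name="example-model-v1",
    training_data_sources={"web crawl": 0.62, "code": 0.18, "books": 0.12, "licensed news": 0.08},
    data_cutoff="2023-09",
    known_gaps=["low-resource languages", "post-cutoff events"],
    intended_uses=["drafting", "summarization", "code assistance"],
    evaluations={"held-out perplexity": 3.1},
)

print(label)
```

Like the food label it mirrors, nothing here marks the model “good” or “bad”; it just gives the user enough context to decide for themselves.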

Tk¹ pfp
Tk¹
@tekus.eth
I like the nutrition label analogy. Just like with food, it wouldn't make the AI "good" or "bad", but would let users make informed choices. Treat AI like a consumer product, hmm 🤔
1 reply
0 recast
1 reaction

notdevin  pfp
notdevin
@notdevin.eth
I made this mockup once upon a time, it needs a big update but you get the point
0 reply
0 recast
0 reaction

Tk¹ pfp
Tk¹
@tekus.eth
This makes a lot of sense, I can’t lie. I never thought of it this way
1 reply
0 recast
1 reaction

notdevin  pfp
notdevin
@notdevin.eth
Literally all of the headlines and “journalists” spend their time defaulting to the assumption that it’s a binary argument of good vs bad, so it’s in no way your fault. Combine that with the way we anthropomorphize these things as if it’s a real instance of a human-like thing making a judgement call on its output. It’s no more conscious during the selection of output than a chainsaw is when it’s used on a tree or a car or a person. LLMs are just tools, wonderfully utilitarian in expediting the output of my day
0 reply
0 recast
0 reaction