Dan Simmons
@simmons
Is NotebookLM getting worse or are my expectations just getting higher?
1 reply
0 recast
1 reaction
marv 🎙️
@marvp
Hmm, I've had a couple of less-than-optimal outputs too. But I do want to try it with some longer docs as well.
1 reply
0 recast
1 reaction
Dan Simmons
@simmons
I've found that the less I give it, the better it does at covering the full context. If I start feeding it multiple sources, it does a wayyyyy zoomed-out 50,000ft overview that omits most of the detail/value. Separately, the cadence of the two hosts has gotten kind of weird: they start finishing each other's thoughts as if they share a brain rather than being distinct entities (which I seem to remember it not doing in the earlier days) 🤷♂️
1 reply
0 recast
1 reaction