Interesting words, and I think the thesis that culture is a limiting factor in converting AI access to growth/learning is a correct one. I feel confused by a few things, though:
- If AI progress continues the way it has been, what's the point in scaling such high-growth cultures? We're already at a point where LLMs perform better on knowledge tasks than a large portion of the population could. Is there a benefit to anyone outside of some small slice of the population learning to use LLMs for growth in this manner?
- It seems like we're moving towards a world where having a broad knowledge base internalized is less valuable. In particular, we're moving towards a world where we can access knowledge "just-in-time", in the same way that we currently (or at least prior to pandemic-related supply chain issues) managed logistics. In a few years, it may be that your obsessions with esoteric questions are just a new variant of entertainment, the new way people like you "distract themselves and wriggle out of work".
I suspect that a key culture shift will be that people move from "just Google it" to "just ask ChatGPT" -- and once that happens, once a new generation grows up with LLMs and is as fluent with prompting them as millennials are with searching Google, and once AI companies make LLMs easier to use, what's the difference between the world we inhabit and the one you worry we won't?
These are good questions. I guess I find it valuable to pursue creating cultures that allow people to flourish and become skilled because 1) I assume this way of living is more pleasant than the alternative, and 2) I don't think we know how fast and far AI will evolve, or how big a role humans will play in solving the problems that need solving, so we might as well do what we can to make sure that people are there to solve them, in case AI won't.