Kevin Kelly's book, *What Technology Wants*, I found pretty insightful. He tries to make the case that tech is an evolving life-form whose deep want is to expand human choice and potential. Yet it feels more like a cosmic parasite gorging itself on us. Kelly's brain has succumbed to the parasite's will, like an ant infected by the zombie-ant fungus, prioritizing the parasite's desires over its own well-being until death.
If you go bubonic, do it in style, and that would obviously be the style of late medieval German Gothic and Renaissance painting by Matthias Grünewald (or Mathis Gothart Nithart, who excels at diseased and tormented skin). Those AI image generators don't have so much of art history available for nothing.
https://share.google/5vcfF26oIZTfUsFy3
Well, any article with a Gene Wolfe epigraph is going to get some credit from me. But I do wonder if you are underestimating the "making a god" thing. After all, I think the standard secular materialist conclusion makes all this activity more mysterious rather than less. If there were, in fact, supernatural beings (or advanced technological ones), then all the time spent describing and serving them would be very directly rational.
"I have to do this to avoid the laser or the thunderbolt," or "they make the crops grow," is a solid course of action.
But if there aren't such beings, then the countless thousands of years of repeated effort, in all its various directions and versions, is wildly odd. And recent evidence shows this holds even in non-hierarchical, egalitarian societies.
You don't always find an afterlife and gods, but spirits, and some kind of ritual binding, are universal. I think it has a lot to do with institution building, in the very direct sense that spirits and gods are the same type of thing as, well, states and companies.
And mechanism and religion aren't necessarily opposed: see, literally, the Greek text Mechanica attributed to Heron of Alexandria. He built ingenious devices for Greek theaters and temples, and it wasn't seen as undermining the gods or belief in them.
And hasn't Sam Altman already insinuated that yeah, they're way overleveraged and not turning a profit, so we should expect to bail them out when the bubble bursts?
I suppose I'm lucky (for now) in my PMC position. "AI" is not being pushed on me per se the way it is on others, at least not directly. Yes, firms like Adobe and Microsoft are cramming it into every bit of software they sell, which comes on the heels of the push to end "perpetual licensing" for their products and force everyone into a subscription model ("You will own nothing..." and the like), so I can't necessarily *easily* avoid it. They're even making it so that you cannot turn it off via OS or application settings. PC manufacturers are touting new laptops that integrate "AI" from software developers, including the aforementioned. However, I know other people who are being made not only to use "AI" as much as they possibly can, but to log their use of it (or have it logged automatically). Ed Zitron has written a lot about this.
And when you look at the companies doing this, it becomes obvious why. They have raised so much money from greedy investors, and they've realized that unless they convince as many other firms and people as possible to use it, the bubble will pop when those investors start making runs on their investments. On that level, it's also a tremendous circle jerk: one "AI" firm integrates some offering or other with another "AI" firm's, and so on, in a giant money (and reputation/narrative) laundering operation.
https://indi.ca/openais-business-model-is-a-money-laundry/
"Early techlords like Google could violate Crackanomics because they still respected basic economics. But OpenAI is not Google. Google's marginal cost of serving you a webpage was marginal, while OpenAI's costs on inference alone are astronomical. Every instance of ChatGPT has to reincarnate fully, which is really expensive folly. It's comically and karmically expensive. It's like rubbing a genie bottle to do the dishes. At some point, you just run out of wishes. And I, for one, am here for it. The crash of OpenAI will be delicious, and if we're lucky, it takes the whole US economy with it.
Now that AI has to ravage a rainforest to return a brainfart, Capitalism has reached terminal velocity, straight down. It cannot get stupider than this. The business model accounting for 99.9% of American growth is 99.9% a pyramid scheme. The rich are building a pyramid of GPUs over the tomb of GP.
Compare this to the recent past, which was stupid but not this stupid. While Google and, even worse, Uber might have lost money at a net income level (GP minus operating expenses), OpenAI (seems to) lose money at a gross profit level, which is very different. They're buying coke for 20 and selling crack at 15, which violates the 0th Crack Commandment, which is make money. Every query you run on OpenAI doesn't just drain the water supply somewhere, it burns money."
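The quote's gross-profit vs. net-income distinction can be made concrete with a toy calculation. All numbers below are made up for illustration, not drawn from Uber's or OpenAI's actual financials:

```python
# Gross profit = revenue minus the direct cost of delivering the product.
# Net income = gross profit minus operating expenses (R&D, sales, overhead).

def gross_profit(revenue, cost_of_revenue):
    return revenue - cost_of_revenue

def net_income(revenue, cost_of_revenue, operating_expenses):
    return gross_profit(revenue, cost_of_revenue) - operating_expenses

# Uber-style loss (hypothetical figures): each unit sold is profitable,
# but overhead swamps the gross profit.
uber_gp = gross_profit(revenue=100, cost_of_revenue=60)   # +40
uber_ni = net_income(100, 60, operating_expenses=70)      # -30

# "Buying coke for 20 and selling crack at 15": the loss is already
# there before overhead even enters the picture.
oai_gp = gross_profit(revenue=15, cost_of_revenue=20)     # -5
oai_ni = net_income(15, 20, operating_expenses=10)        # -15

print(uber_gp, uber_ni, oai_gp, oai_ni)  # prints: 40 -30 -5 -15
```

The point being: a negative net income can be fixed by cutting overhead or growing scale, but a negative gross profit gets *worse* with scale, since every additional sale loses money.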