A post-scarcity utopia is an inherently immoral ideal to try to achieve.
Solving the central economic problem, "unlimited desires, limited resources to fulfill those desires," in a utopia would mean being able to enact free effort and work. Whether it's the extreme abundance of America, backed by networks of unpaid Central American farmers and East Asian sweatshops, or Communist work quotas and gulags, utopian ideals in the real world, where free work is impossible, rely on slavery.
Some people point to technology as the answer for moral free labor, but even setting aside the existential dangers of growing automation and unchecked technology: how long until human desire grows to require ever more complex robots to support us, until we start creating things analogous to sapient beings? And then we're back to square one: slavery. Utopian abundance relies on it.
The ideological side of utopia turns up similar issues. What should be done with people who break its rules and laws? Is it simply assumed that anyone living in a utopia will be so instantly awestruck that they conform?
If the laws are enforced, then any level of punishment can be justified. Punishment systems exist to push people in the correct direction; if you have a perfectly sound moral system, then no amount of punishment could ever be unjustified when it props up utopian ideals. That might be acceptable for outright rule-breakers, but what about those who want to do things that are merely morally grey within utopian law? How would those cases be decided? A moral council would introduce subjectivity, and when enforcing supposedly perfect moral laws, adding layers of subjectivity defeats the point. Even an absolute moral framework will leave loopholes for people to find, whether maliciously or by random chance.
Okay, so what if instead the moral framework were omniscient, and could always reinforce itself? For the framework to contain every possible scenario and clause that will ever occur, it would have to have been made with future sins in mind, meaning that whoever is going to sin always was going to.
And what about people who simply disagree with the utopian ideals? Is it immoral to disagree with a system that, at least by its own lights, considers itself perfect? Is that subversion, making the thought-traitor just as guilty as the murderer and the rapist? If the answer is no, then the existence of degrees of morality and 'wrongness' within the system means the system isn't perfect, since its laws and morals were supposed to be absolute.
If the answer is yes, then it seems the only thing propping up the system's morality is the force of violence, and threat of violence, inherent in it.
If both cases are flawed versions of utopia, and a true utopia would have neither inherent slavery nor inherent violence forcing conformity, then wouldn't the antithesis of an immoral utopia be one with physical and ideological struggle at its core, rather than carefreeness and conformity?
Where effort must continually be exerted just to keep existing, and where struggle and disagreement are constant?
Absolute moral frameworks justify everything, and allow nothing. Maybe the ideal is where morals and laws have constant variance and are constantly shifting as struggles rise and fall, where everything is allowed and nothing is ever absolutely justified, because maybe absolutes are inherently immoral.
Wouldn't that make the world we live in NOW be Utopia?
>how long until human desire continues to grow and requires more complex robots to support us, and until we start creating things analogous to sapient creations? All the way back to square one: slavery.
On this point, I respectfully disagree. I don't believe we could create an artificial intelligence that would be sapient, at least not any time soon. Even if a machine were super advanced, I don't think it would pose an ethical problem (could it even be hurt, or understand itself?).
>how long until human desire continues to grow and requires more complex robots to support us, and until we start creating things analogous to sapient creations?
Artificial intelligence ≠ artificial mind. The term "intelligence" is kinda misleading, as we tend to equate it with the human experience of intelligence. Nowadays, AI might replicate some human capabilities well, but models are not general-purpose, i.e. they are trained specially for one task. The training process involves a lot of data mashing, probability estimation, and with it the fitting of non-linear functions, and that's it. Models don't have to deal with a natural world, predators, the gathering of resources, and thus natural selection. That's why I think AI won't lead to an artificial mind like ours. Of course, mind emulation is still a thing, but I guess it will probably be its own field in the years to come.
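Just to make that "fitting of non-linear functions" concrete, here's a toy sketch (everything in it, the model, the data, the learning rate, is an illustrative choice of mine, not anyone's real training setup): we recover the parameter of y = tanh(2x) from samples by gradient descent on a squared error, and there's no "mind" anywhere in sight.

```python
import math

# Samples from the target non-linear function y = tanh(2x).
data = [(x / 10.0, math.tanh(2 * (x / 10.0))) for x in range(-20, 21)]

def loss(w):
    """Mean squared error of the model y = tanh(w * x) on the data."""
    return sum((math.tanh(w * x) - y) ** 2 for x, y in data) / len(data)

w = 0.1        # initial guess for the single trainable parameter
lr = 0.5       # learning rate (illustrative)
initial = loss(w)

for _ in range(500):
    # Numerical gradient descent: nudge w down the slope of the loss.
    eps = 1e-5
    grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)
    w -= lr * grad

print(round(w, 2))  # ends up near the true parameter, 2.0
```

That's the whole mechanism: error goes down, a number gets adjusted. Nothing in it ever has to survive, perceive, or want anything.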
But if we talk of analogues to sapience, as you said, well… how do we draw the line? Biologists face a similar question: "What can we call alive?" If we conclude that having a self-replicating mechanism is a must, can we then call a self-replicating computer virus a living being?
Well, this topic is extensive, and it isn't the focus of your text.
About the Utopia: Have you ever believed in absolute morals?
By "absolute" I mean all-reaching, i.e. one rule, or set of rules, that every human, dead, alive, or yet to be born, agrees on. And by "rules" I mean an answer to a question or a guide to a situation, with no value stipulated for those questions or situations.
Now, if it's absolute, whole, then the mere act of hypothesizing its invalidity would break its universality.
With this in mind, how can you be sure that the moral basis of your utopia would really be absolute? In reality, you can't, because if you can so much as think of a moral rule being trespassed, then it's already not whole, as we established earlier.
So in order to maintain a utopia with "absolute morals", you actually need to construct its morals, and then establish human compliance.
So what makes a dystopia differ from a utopia is just its moral values, because the process is the same.
So yeah, 1984 = Brave New World = Fahrenheit 451. I'm tired and don't know wtf I'm writing anymore…
cba to read the whole thing but I skimmed it and liked what I heard, but we've all heard it before soooooo