 Originally Posted by Taosaur
The opening maxim is agreeable enough, but I'm not seeing how the bolded statement or its underlying assumptions follow. The emphasized statement assumes that if one states a belief with absolute confidence, then one is responsible for any "product of the belief." Setting aside that identifying such a product invites a prickly thicket of causal analysis in any real world example, let's say that we have two individuals who both check the "strongly agree" box for the following statement:
"Genetically modified food crops are destructive to society." (please, no one argue the truth/falsity of this statement; it's an example)
They both take actions, and for the sake of argument we'll ignore all mitigating factors and allow that their actions follow primarily and to the same extent, respectively, from the above belief.
Believer (A) joins a Community Supported Agriculture program and cuts out processed foods. Believer (B) uses a fertilizer bomb to destroy all the equipment at a factory farm. Both actions can be seen to follow from their shared belief, but is Believer (A) responsible for the actions of Believer (B) for having agreed to the statement? What about Believer (C) who checked the "somewhat agree" box?
Most of us hold and express individual beliefs with considerably more graduated degrees of confidence than true/false, and in complex relation to other beliefs. The more developed our ideas are on any given subject, the less likely it is that a true/false analysis will yield any useful information.
Thank you for this example. It's perfect to clarify!
I think there's some misinterpretation of what I mean by "ideas" that contribute to the intent for which the person is responsible. I actually mean the collective sum of ideas contributing to that intent (this would include the idea/mental algorithm that gives each contributing idea its weight--though this is where falsifiability gets really mucky due to the level of chaos inherent in so many real-world problems).
To clarify your example using this expanded definition of "idea" (as a "set of ideas"):
"Genetically modified food crops are destructive to society" would have to be the sole idea fueling both A's and B's decisions. C does nothing because he doesn't know (a special case I'll explain later), so we'll focus only on A and B. From the given idea alone, neither action logically follows, so it's ridiculous to assume that's the only idea that went into their decisions. So I have here an expanded set of ideas for A and B.
(A) joins a Community Supported Agriculture program and cuts out processed foods because he thought:
(Well, technically this should be interpreted as two actions.)
-joining the program will give me a wider view on the issue
-cutting out processed foods will decrease the number of genetically-modified foods I buy
-buying locally will ensure that no genetically modified crops are being bought
-buying food is a means of voting, in economic terms
-a large number of votes-by-purchase will be more effective over time than a small number of violent attacks
-peaceful means, although taking more time, are better than non-peaceful means
-genetically modified crops are safe enough that eradicating them over a period of years, rather than immediately, is acceptable
-I give the following weights of importance to the above ideas, plus a weight for my faith in this mechanism (all summing to 1): x1, x2, x3, x4, x5, x6, x7, x8.
(B) uses a fertilizer bomb to destroy all the equipment at a factory farm, because he thought:
(I'm not compounding this with "decided not to join the support program," because perhaps B didn't even know about the support program, among other things.)
-Bombing the plant will have a greater impact on ceasing bioengineering than joining a support program
-A small number of violent attacks will do more damage than a large number of smaller actions, because there is no guarantee the large number will actually cooperate
-the loss of life caused by the bombing is insignificant compared to the loss society will suffer from genetically modified crops
-The risk genetically modified crops pose is too great and must be stopped as soon as possible
-If I do X and Y after, I will not get caught
-I give the following weights of importance to the above ideas as well as the weight of my faith in this mechanism (summing to 1): y1, y2, y3, y4, y5, y6.
Any list of thoughts is only an approximation (and much shorter than it could be), but we can assume that any thoughts not listed during a person's reflection didn't carry enough weight of importance to be included in the list.
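To make the weighting idea concrete, here is a minimal sketch of how a set of ideas with weights summing to 1 might combine into a single decision score. This is only my illustration, not part of the example: the function name, the 0-to-1 strengths, the treatment of faith as the final entry, and every number below are made-up placeholders.

# Made-up sketch: a believer's set of ideas combined by weights summing to 1.
def decision_score(idea_strengths, weights):
    # idea_strengths: how strongly each listed idea pushes toward the action
    # (0 to 1); the last entry stands in for faith in the weighting itself
    # weights: importance of each entry, assumed to sum to 1
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(idea_strengths, weights))

# Believer B's five listed ideas plus faith in the mechanism (y1 through y6)
b_strengths = [0.9, 0.8, 0.7, 0.9, 0.6, 1.0]        # hypothetical values
b_weights = [0.25, 0.15, 0.20, 0.20, 0.10, 0.10]    # y1..y6, summing to 1

print(round(decision_score(b_strengths, b_weights), 3))  # 0.825 with these numbers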
The ideas highlighted in dark grey are ideas that can be informed by reading more material on both sides of that facet of the issue. If the parties adhered to the premise of the maxim when thinking about these ideas, these thoughts could be sharpened, which would help them make a better decision.
The idea in dark green can be interpreted as a thought which follows from adherence to the premise of the maxim. The idea in dark red is an idea which follows from the opposite premise (fear of being proven wrong, or more specifically the expectation that others will not agree). I can explain a logical pathway that A & B could've used if you can't construct one yourself.
The last idea in each list is the mental weighing mechanism, much of which operates at a subconscious level. In fact, the only consciously alterable value in the weighted array (without the input of new information, as that would affect the dark grey statements most directly) is the last one, representing faith in the weighting mechanism itself. This is equivalent to how wrong you think you could possibly be. Factors that influence this weight are reflected in one's intent to seek out additional and conflicting information (the first part of my maxim). People who fear being proven wrong will tend not to look for conflicting information and will not compare it with their own. People who do not fear being proven wrong will, within the limits of human ability (defined as time and brainpower), search for information that improves their weighting mechanism. If they do this and the action still turns out badly, then they are not responsible for the action, simply because they lacked information.
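To show just that last weight in the same made-up terms as the sketch above (again, the step size, the rescaling rule, and the function name are my assumptions, not the maxim itself): searching for conflicting information and finding some lowers the faith weight, and the remaining weights are rescaled so the set still sums to 1.

# Made-up sketch of adjusting only the faith-in-mechanism weight (the last one).
def adjust_faith(weights, found_conflicting_info, step=0.05):
    *idea_weights, faith = weights
    if found_conflicting_info:
        faith = max(0.0, faith - step)   # "I could be more wrong than I thought"
    else:
        faith = min(1.0, faith + step)   # searched and found nothing; trust grows a bit
    scale = (1.0 - faith) / sum(idea_weights)   # keep the whole set summing to 1
    return [w * scale for w in idea_weights] + [faith]

# B seeks out opposing views and actually finds some:
print(adjust_faith([0.25, 0.15, 0.20, 0.20, 0.10, 0.10], True))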
If B honestly did not consider the possible loss of human life as an important factor, then he is not responsible for any deaths that occurred, because he was unaware. The tragedy is that only the person doing the thinking can ever know what ideas went into weighing the decision, so the responsibility I speak of is a personal responsibility. In the current judicial system, we treat thoughts like those in dark red as evidence of knowing that what you did would be wrong. In reality, the person could perhaps only know that what they did would garner disapproval from others ("and perhaps they were all wrong," thinks B). Thinking falsifiably rather than defensively would make people less apt to make that mistake (because they'd stop and consider why everyone else would disagree with their action).
C gives up responsibility because the system is too chaotic to predict accurately (interpreted here as the naturally formed idea that gives weight to the others in the set), and he knows it. But if C does not trust his own mental weighting system, and we are all the same species with roughly the same brain, how can any of us trust ours? If we all decide to be like C, then nothing will get done at all. You've got to break a few eggs. Luckily, most decisions break very few, or none at all, because to some degree we all err on the side of "This idea could be wrong," or at the basal evolutionary level, "Others will not approve => I do not want to be seen poorly by society => I will not do this action in this way/at all."
Also, you probably meant to say
"Genetically modified food crops are destructive to the purity of natural agriculture and pose a risk of creating herbicide-resistant strains of weeds through cross-pollination," by the by. Though what threatening weeds cross-pollinate with corn besides corn, I have no fracking clue. If you watch the documentary Food, Inc., you'll realize some of the opposition comes from a ploy to garner a monopoly on the corn and soybean industries.