*rubs face* Right. The past week or so has been a cluster of really heavy, often painful, and sometimes aggressive conversations in my family. (Clusters like that tend to happen occasionally, when too many factors converge on too many people - they also tend to domino). Because some of those discussions are my mother and sister attempting to navigate the differences between their general understandings of things versus mine and my father's (with some big differences emerging between the two of them, too, and some between my father and me, but, y'know, different people, even outside of the gross divides of brain chemistry), my mother started researching Asperger's again. (Which, as a general response, I can't fault. When in doubt, acquire more information).
There were a lot of things that bubbled up during the week, most of which I don't really want to talk about. But there was one thing, because my mam was reading Baron-Cohen and it came up, that sort of fascinated me.
In the book she was reading, the author said two things that impacted me (not sure about the rest).
The first is that empathy actually comes in two parts (the part that lets you see when others are in pain, and the part that makes their pain matter to you), and that it's possible to have one of those parts without the other (which is easier to see, maybe, in people who have the first part, and not the second - people who know when others are in pain, and don't give a shit, versus those who can't tell when someone is in pain, but would give a shit if they did, or if someone pointed it out to them). Baron-Cohen argues that autistics/aspies probably have the 'give a shit' part but not the 'able to tell' part, and psychopaths in particular have the 'able to tell' part but not the 'give a shit' part.
Of course, in the first blush of interaction, they probably look largely the same, unless the aspie/autistic is high functioning enough to run probabilities, or the psychopath is high-functioning enough to fake caring. But. You know.
This makes sense to me. I mean, logical sense. That empathy comes in two parts, and having one does not automatically entail having the other. I'm betting there are quite a few people or instances that most everyone can point to where someone knew they were in pain, and didn't seem to care. It might be less obvious, but it makes sense the reverse also applies. That someone didn't notice the pain for whatever reason, but when they had it pointed out to them, seemed genuinely upset and apologetic.
... Or I could just like it when people attempt to functionally define fuzzy words/concepts for me. Knowing how empathy breaks down logically somewhat appeals to me. It just makes sense to me that the big fuzzy lump of ideas wrapped up in the word can be broken down into smaller, more discrete parts. (I find this happens with a lot of fuzzy words - remember a while back I posted about 'forgiveness' and a bunch of people had very disparate definitions of the word, depending on whether or not they thought 'trust' automatically came under the umbrella of 'forgiveness'? Conflation seems to be no-one's friend, but it's all over the place).
The second thing he said, and I think this might have been largely a throw-away remark, was that a lot of aspies he knew had actually developed strong moral systems even in the absence of part of their empathy, just by brute force of logic. And that ...
Yes. Basically? Yes. Because ... honestly, I don't understand how else you manage it. This is my problem with quite a few religious systems of morality. They, um, occasionally don't make logical sense in the specifics. Which weirds me out. But. My own personal morality is probably ... well. It's very logical (or what I consider logic, anyway). Possibly too much so, by some interpretations, I guess.
Um. Example. You know those character tests for RP characters, the alignment tests? Nine possible alignments on two axes, Good-->Evil and Chaotic-->Lawful? (Answering as myself, I test out as either True Neutral or Neutral Good, depending on the test, if that tells you anything). There was a question I remember on one of them: In the case where there is doubt of guilt, is it better to let a guilty man go free, or imprison an innocent one?
To which I answered, let the guilty man go free. On the grounds that logically, that's the most minimally damaging approach allowing for all possibilities. If the man really is guilty, then he is suspected anyway, and you can keep an eye on him. If he's innocent, though, then a) imprisoning him or worse directly damages him unfairly, but also b) if he's innocent, that means the guilty party is free regardless. And while you're focusing on punishing the innocent guy, the guilty guy is doing what he wants.
So. Let the guilty guy go free, and keep an eye on him. Less damaging all round, in the case of doubt of guilt, yes?
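(If you'll excuse a digression into actual code: the reasoning above can be laid out as a rough expected-damage comparison. This is only a toy sketch in Python - the damage values are invented, and only their ordering matters).

```python
# Toy expected-damage comparison for the "doubt of guilt" question.
# All damage values are invented for illustration; only their ordering matters.

def expected_damage(policy, p_guilty):
    """Rough expected damage of a policy, given the probability the suspect is guilty.

    'release': let the suspect go (and keep an eye on him).
    'imprison': lock the suspect up despite the doubt.
    """
    HARM_TO_INNOCENT = 10    # unfair imprisonment directly damages an innocent man
    HARM_BY_FREE_GUILTY = 6  # meanwhile the real culprit is still free to act
    HARM_WATCHED_GUILTY = 3  # a released-but-watched guilty man can do less damage

    p_innocent = 1 - p_guilty
    if policy == "release":
        # Only harm: a guilty-but-watched man may still do some damage.
        return p_guilty * HARM_WATCHED_GUILTY
    elif policy == "imprison":
        # If he's innocent: he suffers unfairly AND the guilty party is free anyway.
        return p_innocent * (HARM_TO_INNOCENT + HARM_BY_FREE_GUILTY)
    raise ValueError(policy)

# With genuine doubt (say, a 50/50 chance), releasing does less expected damage.
assert expected_damage("release", 0.5) < expected_damage("imprison", 0.5)
```

The exact numbers don't matter much; so long as unfair imprisonment plus a free culprit outweighs a watched suspect, release comes out as the minimally damaging option whenever there's real doubt.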
As far as I understand it, the function of a system of morality is to minimise pain for as many people as possible. No? That's what the end-goal seems to be to me. The purpose of morality is to do 'good' (or be good, which sometimes doesn't seem to be quite the same thing). 'Good' is usually defined as not damaging other people. So. Morality is the act of minimising pain? This seemed to be the root of the concept?
Of course, 'pain' is a fuzzy concept in and of itself, outside of basic physical pain, but I tend to define the broader sense of it as 'the effect of something that happens against the person's will'. In the sense that physical pain is the body telling the brain that 'bad shit is happening, we don't want this', while more general pain is the brain/spirit telling itself/the brain that 'bad shit is happening, we don't want this'. So. Pain means what happens when something is done to someone against their will.
(Weirdly, though, it seems that one kind can overrule the other, in that if physical pain is to the will of the person, it seems to be partially cancelled out. *shakes head* Or, if something that's to the will of someone causes more physical pain than is acceptable, they'll abort that will. Which ... seems to result in pain regardless. Also, people can have multiple wills at once, which seems to cause pain if they happen to be mutually exclusive or just difficult to effect at once. People are complicated, yes?)
But. Getting slightly back on track. These, for me, are the underpinnings of morality as I understand it. Pain is what happens when you act against/damage someone's will. The function of morality as a system is to minimise pain for as many people as possible.
Of course, depending on the situation, preventing pain to many might involve inflicting it on a few (a basic example, imprisoning a murderer against their will to prevent them from harming someone else), and there are gradations of that in practice that start to slide away from morality, depending on how you define 'necessary' and the degree to which you are internally justifying what you're doing.
In an example of how inflexible my understanding of morality is, causing pain to someone is always morally wrong. There are no circumstances in which it stops being 'wrong', morally speaking. It's just ... sometimes, it's less wrong than other options. However, less wrong than other wrongs is still actually wrong. There is a difference between morally 'right' and functionally 'necessary'. One stays the same, the other changes depending on the circumstance. The aim in any situation is to be the least wrong possible within the parameters of the situation, but there appear to be very few situations in which it is possible to be 'right'. Given that every separate being in a situation is liable to have a slightly-to-majorly different will to every other being, it's impossible to be totally correct in most situations, meaning the best case scenario is to be only minimally wrong. Which does tend to mean, in classical moralities, that everyone is at least slightly evil, no matter what they do. It's a logical product of chaos and human difference.
Um. Going back to logic. I have a moral system constructed from my understanding of the concepts presented to me that are supposed to be involved in the construction of morality. In a lot of ways, it's somewhat absolutist, in that the ideal concepts of 'right' and 'wrong' don't actually vary according to the situation for me. But it's a logically constructed system, going from information to action.
As for why I have a moral system (because I have seen some purely logical arguments as to why you shouldn't bother with one - logically sound, even, to an extent), well. One strand of rationale is still logical, in that morality is widely considered a cornerstone of functionality in society. People consider themselves to operate on it, and react to others based on how they seem to operate on it. So logically, having one should increase functionality in relation to other people. (This is the cold, brutal part of logic, in many ways).
The other strand is ... less so. In that, the idea of crimes against my will horrify me. I have a base terror of being forced, in any way, against my will. Which is, basically, a base terror of pain. *shrugs* I suspect most people have this, though the degree may vary. And having felt both pain and terror myself, the idea of deliberately causing that in someone else is ... *wipes mouth* No. Not good. Basically, I just don't like pain, as a concept or an actuality, and consider minimising its existence across the board to be a self-sufficient goal.
(This doesn't actually mean that I'm morally 'good', by the way. In practice, I have caused pain, and sometimes willfully, usually in anger. It's part of why I don't like anger. It overrides too many things. I've also failed to stop pain being caused to others. I'm just ... not actually better than anyone, at this -_-;).
So, at the base of it, I have a moral system because I want to have one. Because aiming to do the least damage possible seems good to me as a way of life. The details of my constructed moral system come from my logically attempting to define how best to do that. Most moral systems I've seen tend to differ largely in this respect, and largely due to differing definitions of what constitutes 'good', what constitutes 'pain', and what constitutes 'necessary'. Also, in the specific case of religious systems, what results in pain after death, and how best to avoid that. *shrugs*
A weird side-effect of my moral construction in relation to religion is that, in the posited circumstance that my actions cause pain to the god, I actually do feel somewhat guilty. However, this applies to all gods, because I've no proof that any of them exist more than any other, so the pain caused to one by my disobeying/disregarding instruction is equal in my head to the pain caused to another, and one really can't obey all of them. (It is also equal to pain caused to people, so obeying some of them becomes impossible, morally speaking). So, in the end, it falls under 'do the least damage possible, because you're inevitably going to cause some'. Which I mostly solve by attempting to isolate some common elements, and doing those. Which, given that most religions seem to have a concept of 'good' and it often is related to not causing pain, I just fold them into the general 'cause as little pain as possible just in general', and hope it minimises the pain all round. *shrugs* Whatever god shows up to take issue with that, if any, well, we'll deal with that then.
This is, you understand, largely the same as my approach to hurting people. In that, yes, I often have difficulty identifying certain types of pain in people (if you're crying, yelling, or have an obvious physical injury, I'll get it. After that, things get iffy). So that I'm almost always reacting to people telling me I've caused pain, rather than being able to track it myself.
I've constructed a lot of base rules to try and cut it down across the board (some things will hurt people regardless, so don't do them). In the absence of understandable visual indicators, you can logically deduce a lot of it. If pain is basically the result of conflict of wills, then the thing to do is try to find out what the people around you want, what you want, and which will is going to cause the least damage overall (or which combination/compromise of wills will cause least damage/most benefit overall). This is easier if you have the tools to register emotional reactions to actions, but in theory it's possible regardless (in practice, it mostly results in me apologising a lot after the fact, once someone explains to me the negative reactions I missed at the time).
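(Again, indulging the urge to lay the procedure out as code: "pick the course of action that goes against the fewest wills" really is just a minimisation. Everything here - the people, the options, the one-unit-per-frustrated-will damage count - is invented for the example).

```python
# Toy sketch of "pick the option that conflicts with the fewest wills".
# People, options, and the crude damage count are all invented for illustration.

def least_damaging(options, wills):
    """Return the option that frustrates the fewest wills.

    options: candidate courses of action.
    wills: mapping of person -> set of options they'd accept.
    Damage is counted (very crudely) as one unit per person whose
    will the chosen option goes against.
    """
    def damage(option):
        return sum(1 for accepted in wills.values() if option not in accepted)
    return min(options, key=damage)

# Invented example: three people, three possible plans for an evening.
wills = {
    "me":     {"quiet night", "film"},
    "sister": {"film", "party"},
    "mother": {"film"},
}
assert least_damaging(["quiet night", "film", "party"], wills) == "film"
```

In practice, of course, the hard part isn't the minimisation - it's filling in the `wills` mapping when you can't read the non-verbal signals and aren't allowed to just ask.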
And yes, before you ask, that is often as much manual work as it sounds. Especially when a lot of the time you're not allowed to determine someone's will/want verbally, because asking is out of line for some social reason. Which means that, lacking verbal means, and being unable to use non-verbal means ... Um. I get stymied a lot.
There are general categories of actions: 'always bad', 'usually bad but exceptions', 'often bad, depending on situation', 'sometimes bad, depending on person and situation', 'once in a while bad, for more or less inexplicable reasons', and 'usually safe'. A lot of my time involving other people is spent trying to figure out which category an action is likely to fit into, and then why. It's the 'why' part that's usually confusing, and often results in things being miscategorised.
But. Um. Anyway.
In essence? Pain is bad. It's working out how to avoid it that's the problem. And yes, as the man said, I have constructed a moral system by brute force of logic, based almost purely on my own will and my logical extrapolations on how to effect it, given my understanding of the concepts involved.
Um. I sort of thought most people did? I mean, how does it work, otherwise? Your moral system is based on what you understand of 'right' and 'wrong', and your moral standing is based on how you then choose to act based on those understandings. Which in practice does seem to mean that people are 'good' based on one system, and 'bad' based on another, which is why comparative morality was never my strong point.
As I said. My own morality tends to be a little absolutist -_-; People are bad when they cause pain, get worse the more pain they cause and the more willfully they cause it, and everyone is a little bit bad no matter what they do, because so many circumstances do not allow people to escape unscathed.
*smiles faintly* So, in practice, I consider free will to be fundamentally the root of evil, in some ways, because the free will of each individual inevitably results in the injury of another, simply because no will is ever completely the same. However, I also consider the removal of free will to be fundamentally evil on an order of magnitude above that, since evil is causing pain, pain is action against the will, and the removal of will is therefore the ultimate possible crime against it, the ultimate evil. In Christian terms, free will was the original sin, and the choice to give it simultaneously the original evil and the original good. Which, I suspect, might be what some people mean when they say 'evil is ubiquitous', since you can't get away from it no matter what you do.
... This might also indicate that you shouldn't think about things too closely, sometimes -_-;
ETA: I am aware that doing/being good is not necessarily directly the same as just not being bad. *shakes head* Doing good in the sense of doing uncomplicated things to make people happy also exists as an avenue of good. There are things to do on top of not hurting people that are good. It's just that most things seem to be defined in opposition, as in it's easier to define what's 'wrong' than it is to define what's 'right', so for the functional rules of a morality, I took the route of explaining what had to be at least avoided (in the system I use, anyway). What you do on top of that is another question/level on. Yes?
Also? I hate fuzzy definitions. They honestly seem to be the root of a lot of problems between people. Those that aren't caused by deliberate malice, at least.
But. Yes. Um. Morality, I have some, and this is why? *grins sheepishly* Sometimes I just need to lay concepts out, you know? *shakes head* I'll shut up now.