
    Thread: Is psychology a science?

1. #26 - Zhaylin
I think only psychiatrists [pdocs] should be able to prescribe meds. For nearly a decade, general practitioners prescribed me various things, from sleeping pills to anti-depressants. One such doctor labeled me as bipolar. Only one experience with the meds prescribed ended well (I handle Prozac perfectly fine).

My psychiatrist also serves as my psychologist, and I wouldn't trade him for anything. I don't understand when people say they have to go to 2 or 3 different mental health doctors. It's hard enough opening up to and trusting ONE person, let alone 2 or more!

I think a lot of progress has yet to be made. A lot of it does seem like speculation, but it's a logical sort of speculation. When I went to see my pdoc he told me to forget about labels. He asked me what my problems were and went from there. It still took a couple of years to find the right meds.
I would say it's definitely a science. But it's an ever-evolving one.

    2. #27
      DuB
      DuB is offline
      Distinct among snowflakes DuB's Avatar
      Join Date
      Sep 2005
      Gender
      Posts
      2,399
      Likes
      362
Quote Originally Posted by Taosaur
      I get the impression talking to non-wealthy people who have ended up on meds that even if a psychiatrist's or MD's name is on the scrip, it's often case workers with limited medical background who are, for all intents and purposes, making the diagnosis and deciding the course of treatment.
      But do you think that's a good idea? My views on the matter are a little more conflicted than I initially let on.

Quote Originally Posted by stormcrow
@DuB - You are a psychologist?
      Sort of. I'm working on my PhD in experimental psychology and my career goal is to do behavioral research at a university. But strictly speaking, the APA says that you're not "really" a psychologist until you have your PhD. So I guess I'm not "really" there yet.

Quote Originally Posted by stormcrow
I most definitely see the significance of what psychology is attempting to do and think that we as humans can benefit from this knowledge considerably; we just have to make sure this knowledge has a valid basis and isn't mostly speculation.

And also, can you recommend any psychologists? I've read Jung, Lacan, and Skinner, but that's about it; I'm not very knowledgeable on the subject.
Well, frankly, it's not surprising that you hold such a skeptical opinion given the list of psychologists you mentioned having read. If that were my sample of what psychologists do, I'd be pretty damn skeptical too. Those guys are certainly an interesting read, but they're not to be taken too seriously. Skinner's view was at least backed up by some systematic experimental data, which is generally more than can be said for the other two, but his dogmatically narrow philosophy of psychology was pretty roundly overturned around 40 years ago on both empirical and conceptual grounds.

Reading psychologists is not really the same as reading philosophers. For starters, I wouldn't recommend sifting through the literature by author (i.e., getting one person's view on everything--few psychologists even have strong views on everything, and you probably ought to be suspicious of the ones who do!), but rather by subject area (i.e., getting everybody's view on one specific topic). So what areas are you interested in? This book is one of my personal favorites (this book is also a fascinating counterpoint), but recommendations would really depend on what you're interested in. It's a vast field.

3. #28 - stormcrow
Thanks DuB. I really don't know that much about the subject (besides the ones I mentioned); I'm really just interested in psychology in general, though of course I am very interested in epistemology. Thanks for the book recommendations. I actually saw one of them at Half Price Books a while ago and was wondering, what does "heuristics" mean lol? I'll most def check them out, thanks.

    4. #29
      khh
      khh is offline
      Remember Achievements:
      1000 Hall Points Veteran First Class
      khh's Avatar
      Join Date
      Jun 2009
      Gender
      Location
      Norway
      Posts
      2,482
      Likes
      1309
Psychology as a field is certainly a science, but not all psychologists are scientists. I think that's the important distinction to make. There is a lot of bad science out there, and psychology has seen its fair share.

(On the note of favorite fields of psychology, I myself like personality psychology and cognitive psychology. Of course, neuroscience in general is fun.)
      April Ryan is my friend,
      Every sorrow she can mend.
When I visit her dark realm,
      Does it simply overwhelm.

    5. #30
      Member Achievements:
      Veteran First Class 5000 Hall Points

      Join Date
      Jul 2009
      Gender
      Posts
      276
      Likes
      21
It falls quite well into the definition of science...

    6. #31
      Moo nsi dem oons ide kookyinc's Avatar
      Join Date
      Jun 2010
      LD Count
      4
      Gender
      Location
      Moonside
      Posts
      529
      Likes
      118
      DJ Entries
      16
Though (unfortunately) my knowledge of psychology is so far limited to a high school course, I would say it is definitely a science. Experiments are run, observations are made, and conclusions are drawn. People often think that psychology is nothing more than an anxious person lying on a couch and talking about how much he wants to sleep with his mother, but there is a lot of research going on (especially recently) in universities and the like that tries to explain why people behave the way they do (which, by the way, is essentially the definition of psychology).

Even if it for some reason does not qualify as a science, though, I would certainly not call it a pseudoscience, because, for one thing, research is done and examined, and conclusions are drawn that have at least some real-world applications (an example would be finding out which medicine works best for schizophrenics).

At least, that's how it all seems to me. But remember, I have limited knowledge on the subject. I'd say DuB is a much better choice of someone to listen to.
I don't usually think, therefore I mostly am not.
Quote Originally Posted by abicus
You cannot convince the one with faith, who needs not look for facts, that the facts "prove them wrong".
Likewise, you can't teach someone who looks for facts to have faith in the absence of facts.

    7. #32
      Member Achievements:
      1000 Hall Points Veteran First Class

      Join Date
      Jul 2011
      Gender
      Location
      MI
      Posts
      38
      Likes
      2
Psychology as a science is an old debate. It is not a science under the currently popular definition, as its claims cannot be proven objectively and it does not explain things at the most basic level (physiology); instead it offers theories that happen to work. It is as if there were a 15-story building and we explained the 12th floor as if it were the foundation, even though there are many levels under it.

Jung was probably the most science-like of the bunch. He got it down to two attitude types (objective/subjective orientation to the outer/inner world) and four functions (objective/subjective perceiving, objective/subjective judging). Van der Hoop goes further, explaining how each of the functions works, that is, the steps each one performs.

Jung also has a theory on theories, explaining that we each build a theory based on our own psychological makeup (Collected Works, end of volume 6).

So, it really depends on what you mean by science. Psychology does use a scientific approach and does break things down to their smallest parts; it is based on theories and the theories are tested, but all those tests are subjective.

    8. #33
      Member Achievements:
      1 year registered Veteran First Class 5000 Hall Points

      Join Date
      Sep 2004
      Gender
      Location
      Seattle, WA
      Posts
      2,503
      Likes
      217
I think it's often treated in as scientific a way as it can be, but due to the nature and complexity of what psychologists are studying, it is much more difficult to be objective about the results. Even in a perfect double-blind study, where you THINK you're only controlling the one variable you're testing for some kind of cause and effect, in truth there are a LOT of variables you aren't aware of and that today's technology can't pin down accurately. The brain is a giant black box, big enough that all test results should be taken with a grain of salt. A random sampling of people (all from the same city/neighbourhood, usually) is NOT the same as, say, a random sampling of molecules in a chemistry experiment.

In fact, I'd bet most "random samples" for psych experiments (especially those conducted by master's and PhD students) are not all that random at all. They often just consist of Psych 101 students who need to take part in experiments as part of a class. If the test results depend at all on there being a good spread, then the test is useless.

Also, I recently had a good discussion with a friend of mine who is a mathematician and who has spent some time working with psychologists, getting to know the psych departments at a few universities quite well. These departments/faculties have a lot of ego (and money) riding on being "self-contained" (i.e., no need to outsource basic courses to other faculties, e.g., mathematics). Unfortunately, this leaves most psychologists with a completely inadequate math background, which matters: a lot of them haven't properly studied and learned statistical analysis, so their data analysis is suspect. Unless a qualified statistician looks through the raw data and how the statistical methods were applied to derive the general conclusions, I'd take those results with a grain of salt as well. Of course, my expectation is that anything important gets double-checked by someone who actually knows statistics, but I have a feeling that might be asking a lot.

So, in conclusion, I think it IS a science, at least conceptually, but due to the nature of the problem and some other factors, the results tend to be more fuzzy and unreliable, in spite of the scientific method being employed (or at least attempted). Also, a lot of test results depend on the test subjects (people) relating their personal interpretations of their experience (that's what it was like when I took part in some psych tests at school). That means that instead of directly and objectively MEASURING the results, each data point is filtered through the belief system, prejudices, and subjective experience of a different individual.

    9. #34
      DuB
      DuB is offline
      Distinct among snowflakes DuB's Avatar
      Join Date
      Sep 2005
      Gender
      Posts
      2,399
      Likes
      362
Quote Originally Posted by Replicon
Also, I recently had a good discussion with a friend of mine who is a mathematician and who has spent some time working with psychologists, getting to know the psych departments at a few universities quite well. These departments/faculties have a lot of ego (and money) riding on being "self-contained" (i.e., no need to outsource basic courses to other faculties, e.g., mathematics). Unfortunately, this leaves most psychologists with a completely inadequate math background, which matters: a lot of them haven't properly studied and learned statistical analysis, so their data analysis is suspect. Unless a qualified statistician looks through the raw data and how the statistical methods were applied to derive the general conclusions, I'd take those results with a grain of salt as well. Of course, my expectation is that anything important gets double-checked by someone who actually knows statistics, but I have a feeling that might be asking a lot.
It is true--perhaps even obvious--that experimental psychologists lack the facility with statistical data analysis that a statistician would have. But I don't think this in itself is a convincing reason to be skeptical of the data analyses of psychologists on the whole (although it is probably a fair reason in some circumstances, as I discuss below).

      Ernest Rutherford, a chemist, said: "If your experiment needs statistics, you ought to have done a better experiment." He presumably said this with tongue slightly in cheek, but on my reading there is at least a sensible underlying point. (To give an indication of Rutherford's penchant for overblown declaration, this is the same man who infamously claimed that "all science is either physics or stamp collecting.") Clean, simple experiments tend to yield easily interpretable data. The more complicated the data, the more we must rely on statistics to interpret them; and the more we must rely on statistics, the more it seems that we are, in some sense, increasing the inferential distance between the raw data and our conclusions as researchers. This distance makes most of us suspicious, and probably rightly so.

Most psychologists, therefore, strive for experimental designs that are elegant and simple, with the required data analyses straightforward. And for the most part (allowing plenty of exceptions) they pretty well achieve it. The typical psychologist rarely needs to step outside the bounds of the classical general linear model in analyzing his or her experimental data. In fact, the most widely reported data analysis procedure in the field is simply a t-test of some kind or other, the simplest case of the GLM. One need only be familiar with diagnosing and addressing violations of the assumptions of the GLM (some of which the model is quite robust to anyway) to be at least a perfectly competent data analyst, even if not a statistician. This familiarity is of course common knowledge among psychological researchers.
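To make that "simplest case" concrete, here is a minimal sketch of such a two-sample t-test; Python/SciPy and the simulated scores are illustrative choices, not anything prescribed in the thread:

```python
# Minimal two-sample t-test, the simplest case of the GLM described above.
# SciPy and the made-up group means/sizes are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(loc=5.5, scale=1.0, size=30)  # hypothetical outcome scores
control = rng.normal(loc=5.0, scale=1.0, size=30)

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```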

      I allowed above that in some cases one might justifiably be skeptical of the conclusions that a psychologist draws from his or her data on these simple educational grounds. In general, this will be increasingly true as the data analysis becomes increasingly convoluted. In these cases, one might question whether the psychological researcher is qualified to handle such complex data competently. But I should point out that it is also in these very cases where it is most likely that at least one verifiably competent, statistically trained individual will be called upon to serve as one of the peer reviewers for such a paper. It is then the job of this individual to see to it that the data analysis reported by the psychologist is adequate and appropriate before the study is to be published in the psychological literature.

      In summary: the data handled by most psychologists are well within the modest educational means of the researchers to analyze appropriately, and for the data that are not, it is one of the many functions of peer review to ensure that they are still handled correctly.

Quote Originally Posted by Replicon
Also, a lot of test results depend on the test subjects (people) relating their personal interpretations of their experience (that's what it was like when I took part in some psych tests at school). That means that instead of directly and objectively MEASURING the results, each data point is filtered through the belief system, prejudices, and subjective experience of a different individual.
      As one might reasonably expect, the issue of when verbal reports on mental processes are or are not reliable and/or valid is one with which psychologists have been intimately familiar for some time (a classic discussion of the topic can be found in Nisbett & Wilson, 1977, which incidentally is one of my favorite papers). Without knowing the details of the experiments in which you participated, I can just say that I am pretty confident the researchers in question were well aware of these important issues.

One thing to think about is that we are often interested in participants' descriptions and explanations of their own thoughts and actions even when we think that the participants don't have a shred of privileged authority on which to give valid, objective reports of such things. For a hypothetical example, we might ask a participant to list some of the reasons why they married the person that they did. In reality, we might just be interested in counting how many reasons the participant spontaneously indicates, without believing that any of the reasons are of any explanatory value. However, we are usually content to let the participant go on believing that we are actually interested in those reasons and are willing to take the participant at their word.

    10. #35
      Member Achievements:
      1 year registered Veteran First Class 5000 Hall Points

      Join Date
      Sep 2004
      Gender
      Location
      Seattle, WA
      Posts
      2,503
      Likes
      217
      DuB, thanks for the insightful post.

Maybe I was being a bit pessimistic when I said it's "asking a lot." Really, so long as they make the raw data available, any interesting result is likely to get cross-checked.

Quote Originally Posted by DuB
One thing to think about is that we are often interested in participants' descriptions and explanations of their own thoughts and actions even when we think that the participants don't have a shred of privileged authority on which to give valid, objective reports of such things. For a hypothetical example, we might ask a participant to list some of the reasons why they married the person that they did. In reality, we might just be interested in counting how many reasons the participant spontaneously indicates, without believing that any of the reasons are of any explanatory value. However, we are usually content to let the participant go on believing that we are actually interested in those reasons and are willing to take the participant at their word.
I totally get that they're aware of these factors and do their best to come up with elegant experiments, but even in your hypothetical examples there are unknowns creeping in that you can't easily mitigate. If you're interested in the number of reasons they give, you're likely trying to correlate the number of reasons ("rationalization factor?" hehe) with something specific. However, the variance in the number of reasons given might be more strongly affected by another, unanticipated factor. It's probably true that with a big enough sample size and good enough randomization, a correlation would be meaningful. In other words, if you're isolating two groups of people that are random except for one attribute you want to measure, there is a size at which the attribute split will overwhelm other random factors that could affect the results. But since there's no good way to know, statistically, how big a sample you need in order to have any kind of expectations about the result (since, among other reasons, you don't actually know how strong a correlation to expect), any experiment conducted by a master's student on a relatively small number of people (in the tens or hundreds), I would think, has a reliability issue to be dealt with.

What kind of statistical analysis do they go through to ensure, to the best of their abilities, that the results will be reliable? If you think there's a correlation but the experiment shows just randomness (no correlation), how do you know whether to conclude that the sample set needs to be much, much bigger, or that you can mostly abandon trying to find a correlation because there isn't one?

I guess sometimes they can do some calibration... like, if people are rating pain on a scale from 1 to 10, they understand that everyone has a different tolerance, and therefore a different scale, so they might have them rate common painful things to get a feel for the skew...
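One standard way to implement this calibration idea is to express each person's rating relative to their own scale usage (a within-person z-score against their calibration items). A minimal sketch, where the data and the choice of z-scores are illustrative assumptions, not a method named in the thread:

```python
# Convert a raw 1-10 rating into a within-person z-score using that person's
# own ratings of common calibration items. Numbers are made up for illustration.
import numpy as np

def standardize(target_rating, calibration_ratings):
    """Express a rating relative to this person's own scale usage."""
    mu = np.mean(calibration_ratings)
    sd = np.std(calibration_ratings)
    return (target_rating - mu) / sd

# Two people rate the same stimulus "6" but use the scale very differently:
print(standardize(6, [2, 3, 4, 5]))  # rates everything low -> 6 is high for them
print(standardize(6, [6, 7, 8, 9]))  # rates everything high -> 6 is low for them
```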

    11. #36
      DuB
      DuB is offline
      Distinct among snowflakes DuB's Avatar
      Join Date
      Sep 2005
      Gender
      Posts
      2,399
      Likes
      362
Quote Originally Posted by Replicon
I totally get that they're aware of these factors and do their best to come up with elegant experiments, but even in your hypothetical examples there are unknowns creeping in that you can't easily mitigate. If you're interested in the number of reasons they give, you're likely trying to correlate the number of reasons ("rationalization factor?" hehe) with something specific. However, the variance in the number of reasons given might be more strongly affected by another, unanticipated factor. It's probably true that with a big enough sample size and good enough randomization, a correlation would be meaningful. In other words, if you're isolating two groups of people that are random except for one attribute you want to measure, there is a size at which the attribute split will overwhelm other random factors that could affect the results. But since there's no good way to know, statistically, how big a sample you need in order to have any kind of expectations about the result (since, among other reasons, you don't actually know how strong a correlation to expect), any experiment conducted by a master's student on a relatively small number of people (in the tens or hundreds), I would think, has a reliability issue to be dealt with.
      After reading this passage a lot of times, I think I see the point you're getting at, but I'm still not completely sure, so correct me if I misinterpret you.

First a side note on "good enough randomization." Achieving what is essentially completely random assignment to experimental conditions is trivial. In the simplest case of two experimental groups with no constraints on sample size, it can be done perfectly adequately just by tossing a fair coin before each participant arrives. For more than two groups I like to use RANDOM.ORG, and more complicated randomization schemes are usually easy to accomplish with a simple R script. So I have to confess that I don't see what your worry is on this issue, if indeed you have one.
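A sketch of that coin-flip assignment, with Python standing in for the R script mentioned; the function names are hypothetical, not from the thread:

```python
# Random assignment to experimental conditions: a fair coin for two groups,
# a uniform draw for k groups (the job RANDOM.ORG is described as doing).
import random

def assign_two_groups():
    """Fair-coin assignment for the participant who just arrived."""
    return "treatment" if random.random() < 0.5 else "control"

def assign_k_groups(k):
    """Uniform random assignment among k conditions."""
    return random.randrange(k)

# Ten simulated arrivals:
print([assign_two_groups() for _ in range(10)])
```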

      Now let me try to paraphrase my understanding of your more central point about small samples being biased and/or unreliable. Assume we have a small sample of 10 people which we randomly assign to either the treatment or control conditions. Assume also that, whatever our experimental manipulation and outcome variable may be, it happens to be the case that the manipulation has a strong effect on the outcome variable for men, but little or no effect on the outcome variable for women. Let's say that, by chance, we end up with a total of 6 men and 4 women in our sample. Finally let's suppose that random assignment leaves us with 4 men/1 woman in the treatment group and 2 men/3 women in the control group. So obviously in this case we should expect to see a stronger effect on the outcome variable for the treatment group vs. the control group simply because of the unbalanced gender split. And if for some reason we failed to measure gender and take that factor into account in our analysis, that would make it look like the treatment is simply more effective than the control overall, when in fact the truth is more subtle and qualified than that. And your point is that random quirks of this kind should be far diminished in much larger samples, due to the law of large numbers. Is this interpretation basically right?
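For illustration, a quick simulation of this hypothetical; the group counts are the ones above, while the effect size and noise level are assumptions of the sketch:

```python
# Simulating the hypothetical: the manipulation works for men but not women,
# and the 10-person sample splits 4M/1F (treatment) vs 2M/3F (control).
# The effect size (1.5) and noise SD (1.0) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def outcome(is_male, treated):
    effect = 1.5 if (is_male and treated) else 0.0  # effect only for treated men
    return effect + rng.normal(0.0, 1.0)

treatment = [outcome(m, True) for m in [True, True, True, True, False]]
control = [outcome(m, False) for m in [True, True, False, False, False]]

# Looks like a large "overall" effect, partly driven by the gender imbalance.
print(np.mean(treatment) - np.mean(control))
```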

That is all true; but of course, it is perhaps the most basic function of statistical analysis to deal with precisely these issues. To start off in brief, technical terms: the confidence interval formula for a parameter estimate contains a sample-size term such that an increase in sample size produces a monotonic decrease in the width of the confidence interval around that estimate. So parameter estimates (such as an estimated difference between two experimental groups) that are based on very small sample sizes (as in our example above) will have correspondingly wide confidence intervals around them (that is, there will be a lot of explicit "uncertainty" built around the estimate). This means that with small sample sizes, it typically takes a very strong "true" treatment effect and/or very rigorous methodological controls for the effect to clearly emerge empirically, because of the built-in uncertainty in the parameter estimates. And with larger sample sizes, where the problematic issues we discussed above tend to be diminished, there is correspondingly less uncertainty built around the parameter estimates, and so "true" treatment effects can be revealed more easily.
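A small numerical illustration of that sample-size term, using the normal-approximation 95% interval for a difference between two group means; sigma and the n values are arbitrary choices for the sketch:

```python
# The 95% CI for a difference between two group means narrows as 1/sqrt(n):
# quadrupling n halves the interval's half-width.
import math

sigma = 1.0  # assumed common within-group standard deviation
for n in [10, 40, 160, 640]:
    se = sigma * math.sqrt(2 / n)  # standard error of the difference in means
    half_width = 1.96 * se         # 95% CI half-width under normality
    print(f"n per group = {n:4d}: CI half-width ~ {half_width:.3f}")
```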

All of this is to say that, while the issues you raised are real and will always be a possibility in an analysis, it is not the case that small samples are especially vulnerable to this problem: the extra uncertainty they carry is built right into the statistics.

Quote Originally Posted by Replicon
What kind of statistical analysis do they go through to ensure, to the best of their abilities, that the results will be reliable? If you think there's a correlation but the experiment shows just randomness (no correlation), how do you know whether to conclude that the sample set needs to be much, much bigger, or that you can mostly abandon trying to find a correlation because there isn't one?
      Well, you can never "know" for sure, but for most simple tests there are fairly straightforward techniques for obtaining estimates of this kind which the researcher can then use to inform their decision about whether to continue. These techniques are known as power analysis. For simple tests, they essentially consist of working algebraically backward through the standard statistical testing procedure by first assuming some nonzero true effect size (for example, you could assume that the estimated effect size you obtained from a small pilot study is equal to the true effect size) and then solving for what the values of certain variables (such as sample size) would need to be in order to have a particular probability of correctly revealing the true effect--this probability is known as the statistical power of the test. Alternatively, you can solve for statistical power given a fixed a priori sample size, or for what the true effect size would have to be given a fixed power level and sample size, etc.
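As a sketch of this analytical route: the thread names no library, but statsmodels is one common choice, and the pilot effect size of d = 0.5 below is a hypothetical assumption:

```python
# Analytical power analysis for a two-sample t-test: assume an effect size
# (say, from a pilot study) and solve for the required n, or for power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size needed for 80% power at alpha = .05, assuming d = 0.5:
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n per group for 80% power: {n_per_group:.1f}")  # about 64

# Or solve for power given a fixed a priori sample size:
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=30)
print(f"power with n = 30 per group: {power:.2f}")
```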

      For more complicated tests, these estimates are often obtained through Monte Carlo simulation, where instead of assuming a nonzero effect size X and deriving power analytically, you generate many, many simulated data sets where the true effect size is programmed to be X on average, analyze all of the data sets using the statistical procedure of interest, and then estimate power simply as the proportion of simulated experiments in which you correctly uncovered a treatment effect.
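A minimal Monte Carlo version of the same power estimate; all numbers (d = 0.5, n = 30 per group, 5000 runs) are illustrative assumptions:

```python
# Monte Carlo power estimate: simulate many experiments with a programmed
# true effect, and count how often the t-test correctly detects it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
d, n, runs, alpha = 0.5, 30, 5000, 0.05

hits = 0
for _ in range(runs):
    treatment = rng.normal(d, 1.0, n)   # true effect size programmed to be d
    control = rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(treatment, control)
    hits += p < alpha

print(f"estimated power: {hits / runs:.2f}")  # should land near the analytical ~0.47
```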

Quote Originally Posted by Replicon
I guess sometimes they can do some calibration... like, if people are rating pain on a scale from 1 to 10, they understand that everyone has a different tolerance, and therefore a different scale, so they might have them rate common painful things to get a feel for the skew...
      Mmm, not really sure what you're getting at with this one.

    12. #37
      Member Achievements:
      1 year registered Veteran First Class 5000 Hall Points

      Join Date
      Sep 2004
      Gender
      Location
      Seattle, WA
      Posts
      2,503
      Likes
      217
Took me a while to get back to DV after a trip.

I appreciate the detailed explanation of the methods. It's interesting reading. So sample-size worries can be nicely mitigated in a lot of cases.

The other thing I enjoyed with psych testing was the misdirection. They're often measuring one thing but telling you the point of the experiment is something completely unrelated, so it doesn't artificially prime the results.

      Now, for some of the other points:

Quote Originally Posted by DuB
First a side note on "good enough randomization." Achieving what is essentially completely random assignment to experimental conditions is trivial. In the simplest case of two experimental groups with no constraints on sample size, it can be done perfectly adequately just by tossing a fair coin before each participant arrives.
I wasn't talking about randomizing within a sample set. That is, of course, trivial. I was really getting at biases within the sample set compared to "the set of everyone in the city/country/world".

Sometimes tests are conducted with a specific bias in mind (e.g., college students), but if you want to make a generalization about people in general, I would think that limiting the sample (based on location, like "a college") could have a significant effect. Even if you advertise all over a big city, your total sample set consists of "people with a propensity to respond to psych experiment ads for the money," which may or may not be a meaningful bias (sometimes you just don't know).

Unless the census bureau does the random selection (kind of like jury duty) and the experiment is conducted in many cities and areas, it's not really "a set of random people with no bias." Again, I bet that for most experiments that doesn't really matter, but I always get these nagging feelings in the back of my mind when it's not addressed directly.

Quote Originally Posted by DuB
Mmm, not really sure what you're getting at with this one.
(This was in response to my comment about calibration.)

What I was getting at was the integrity of the data points, since they are often filtered through people's subjective experience.

I'll give you a quick example. I took part in one experiment that was targeting a correlation between "how good you think you are" and "how shaky your confidence about that gets after an experience of being bad at the thing you think you're good at." (They didn't tell you that, though.)

The way it worked was something like this (slightly hazy memory here, btw):

1) You fill out a form that asks questions about how good you think you are at math. It was usually "a scale from 1 to 10, 1 being no good at all, 10 being genius" or "strongly agree, strongly disagree, etc." type multiple choice.

2) Then you take a math test. They deliberately made it really hard - like, harder than what I've seen in math competitions. They wanted to create the feeling of "failing a test" under a time crunch, and the associated stress. They succeeded: the thing LOOKED like it should have been easy, but it was really fucking hard.

3) You fill out another form similar to (1).

And the idea was, they compare your confidence in your math abilities before and after the failed test. (They only told us that part afterwards; throughout, we thought it was a completely different thing meant to measure something else. I don't remember the details, though.)



Anyway, what I was getting at was: if you ask two people to rate the intensity of a subjective matter on a scale from 1 to 10, two people who experience the same thing will still give different numbers. If they're generally insecure, then no matter how good they actually are at math, they'll rate themselves lower.

We've already kind of covered this stuff, when you mentioned that they might ask subjective questions and just count the number of reasons participants give, etc., except even THOSE data points, though better, will have been filtered through subjective experience.



My global, overall point is really that while there are some great techniques for tightening up test results, and you've outlined a few of them, testing is an expensive process, and I'm sure not all official tests undergo the right kind of rigor. Of course, the same is true for ALL experiments, but the "black box" you're dealing with in a psych experiment is often more complex and has more unknowns than the black box in other fields.

    13. #38
      D.V. Editor-in-Chief Original Poster's Avatar
      Join Date
      Jun 2006
      LD Count
      Lucid Now
      Gender
      Location
      3D
      Posts
      8,263
      Likes
      4140
      DJ Entries
      11
Science, to me, is anything that can be proven with results. Science is the tool of the philosopher: everyone has a subjective outlook upon the world, so it's necessary to have reliable studies which show what tests true over time and on massive scales. The scientific method should be utilized in as many far-reaching categories of discovery as possible, including esoteric concepts. You cannot classify an entire field as Science or Not Science; judge the studies themselves and decide for yourself whether each is saying something credible. If you only accept what is scientifically proven, you're denying yourself your own rich, subjective experience.

      Everything works out in the end, sometimes even badly.


    14. #39
      Rain On Your Roof Achievements:
      1000 Hall Points Made Friends on DV Veteran First Class
      Unelias's Avatar
      Join Date
      Dec 2008
      LD Count
      Lost count.
      Gender
      Location
      Where angels fear to tread
      Posts
      1,228
      Likes
      256
We have it much the same as the UK.

A psychiatrist is a doctor who has specialized in psychiatry; he treats patients and prescribes medicine if needed. Unlike the psychiatrist, the psychologist does not make medical diagnoses of patients (which fall under ICD-10). A psychologist can, however, do psychodiagnostics (describe the nature of problems, chart the strengths and weaknesses of the client, find means to improve the patient's situation, etc.). In Finland a psychologist cannot prescribe medicine, because that is the responsibility of the psychiatrist as a doctor.

In addition to those there is the psychotherapist, a more specialized title that requires a base degree in healthcare (doctor, psychologist, etc.) along with other conditions fulfilled. All three (psychologist, psychiatrist, and psychotherapist) are under the surveillance and control of Valvira, the National Supervisory Authority for Welfare and Health, which supervises practices of a medical nature.

Many people like to claim that psychology is not a science. This controversy comes from the fact that psychology studies one of the hardest subjects there is: the human mind, and the human as a complete being. Many people seem to have completely twisted views of psychology, or limit it to the Freudian stereotype they have heard of. Psychology is a wide and vast field of different studies. Neuroscience, which overlaps heavily with psychology, is studied and researched entirely by the scientific method. But psychology also involves a lot of subjective information gathering, in addition to objective methods, because no mind is like another. Researchers are working hard to map out and find common, universal laws that apply to all humans, and that is mighty hard.

Besides that, there are many fields that ordinary people don't always know psychologists work in:

      # career choice psychologist
      # parenting and child psychologist
      # school psychologist
      # psychologist in mental health services
      # health center psychologist
      # work and organization psychologist

In addition, psychology can be applied to almost all careers and situations where human behavior plays a part: road and city planning, IQ and aptitude tests, sports, etc.

      I can talk of this more if anyone is interested.
      Jujutsu is the gentle art. It's the art where a small man is going to prove to you, no matter how strong you are, no matter how mad you get, that you're going to have to accept defeat. That's what jujutsu is.
