Exam 2 Flashcards
(50 cards)
What does it mean to say that science is “self correcting?” Has this changed in recent years?
- in theory, the replication of research and studies will reveal which findings hold up and which do not
- supposed to be that people are constantly checking other people’s work by trying to replicate it
- checks and balances
- in recent years this has not been the case because people do not really replicate studies
—– for a number of reasons:
1. no money in redoing someone else’s work
2. journals do not really publish replications
3. very few studies are actually replicable
As described by Collins and Tabak (2014), what factors have contributed to the “reproducibility crisis” in biomedical sciences?
- overinterpretation of hypotheses: experiments are designed to open up new avenues of research rather than simply answer a question
- differences in technique make preclinical animal work nearly impossible to replicate, due to things like animal strains, lab environment, and small protocol tweaks
- people do not publish the null data from their studies
- problems with peer review: reviewers do not get paid and often pass the work off to someone else
- people are not really replicating studies and they are usually not published, especially by large journals
What are the four methods outlined by Munafo et al. (2017) to improve the reliability and efficiency of scientific research?
- Protecting against cognitive biases
    - blinding is the best way to avoid this
    - have the person doing the experiment be blinded to the identity of the key parts of the data and the experimental condition
- Improving methodological training
    - improved training in statistics, interpretation of data, and the limitations of certain methods
    - ensuring sufficient statistical power to decrease the possibility of false positives or false negatives
    - decreasing exploitation of analytic flexibility
    - continued methodological education for both senior and junior researchers
- Implementing independent methodological support
    - minimize conflicts, including financial conflicts, such as who is funding or sponsoring the research
    - have different committees to provide advice, conduct the trial, and oversee the design
- Encouraging collaboration and team science
    - so one person is not the only one thinking about an idea
    - collaborations across different sites can increase power (data sharing) and can also diversify demographics
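The "sufficient statistical power" point can be made concrete with a quick worked example. This is my own sketch, not from the readings: `power_two_sample` is a hypothetical helper that uses the standard normal approximation for a two-sided, two-sample test of means.

```python
import math
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test of means
    for a standardized effect size d (Cohen's d), using the normal
    approximation (adequate for moderate-to-large n)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)       # critical value, ~1.96 for alpha=0.05
    shift = d * math.sqrt(n_per_group / 2)  # shift of the test statistic under H1
    return z.cdf(shift - z_crit)            # P(reject H0 | effect is real)

print(power_two_sample(0.5, 64))  # medium effect, 64 subjects per group: ~0.8
print(power_two_sample(0.5, 16))  # same effect, 16 per group: well under 0.5
```

For a medium effect (d = 0.5), 64 subjects per group gives roughly 80% power, while 16 per group gives under 30%, which is why underpowered studies miss real effects and make the "significant" results they do report less trustworthy.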
Harris (2017) summarizes the “six red flags for suspect work.” What are these? According to Harris, how have these contributed to sloppy science and worthless cures in the biomedical field?
- Are the studies blinded?
    - did the researchers know which samples were in each condition?
    - sloppy science: unblinded work invites bias and self-deception
    - worthless cures: results might be inflated
    - 20% of nonreplicable studies had untrustworthy designs
- Were basic experiments repeated?
    - each scientist has their own way of working, and even the smallest discrepancies can make a study nonreplicable when each step, tool, analysis, etc. is not clearly indicated
    - sloppy science: preclinical work in particular is very hard to replicate, and not indicating what was used can lead someone to use something different and get a different result; the result could also be a fluke due to bad technique
    - worthless cures: a lot of work never makes it to the clinical stage because the basic science is what is messed up
    - 8% of nonreplicable studies are due to poor lab technique
- Were all the results presented?
    - researchers can cherry-pick their best results and not show failed or skewed results
    - sloppy science: showing only your good results misleads readers
    - worthless cures: inflated or artificial results are not helpful for interpretation
- Were there positive and negative controls?
    - running parallel experiments as comparisons, where one should fail and one should support the hypothesis
    - sloppy science: if you don’t have a control, you have nothing to compare your result to; what if it is no different from the “normal/baseline” condition, or has no effect?
    - worthless cures: the treatment might not work, or might work only as well as no treatment
- Did scientists make sure they were using valid ingredients?
    - contamination of ingredients is a big issue, and the ingredient must be appropriate for the study
    - 25% of studies use dubious ingredients
    - sloppy science: poor sterilization and carelessness with ingredients lead to contamination; people might also use the wrong or an inappropriate ingredient
    - worthless cures: invalid ingredients will skew results
- Were the statistical tests appropriate?
    - it is common for biomedical scientists to choose the wrong method to analyze their data, which makes their results invalid
    - 18% of nonreplicable studies are due to scientists misusing their data analysis
    - sloppy science: leads to inflated, false, or inadequate results
    - worthless cures: others end up trying to study something based on false information
    - “secret sauce”: a way of analyzing data that is not listed in the methods, preventing others from reproducing the research; it lets people make excuses about why a study did not reproduce (“because I did X, not listed in the methods, and you did not”)
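The analytic-flexibility / "secret sauce" problem can be illustrated with a quick simulation (my own illustration, not from Harris): if a lab measures many outcomes under a true null and reports whichever one crosses p < 0.05, the effective false-positive rate balloons far past 5%.

```python
import random
from statistics import NormalDist, mean

def z_test_p(a, b, sigma=1.0):
    """Two-sided z-test p-value for a difference in means (known sigma)."""
    se = sigma * (2.0 / len(a)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

def found_a_hit(n_outcomes, n=30):
    """Simulate one study measuring n_outcomes under a TRUE null;
    return True if any outcome reaches p < 0.05 (a false positive)."""
    for _ in range(n_outcomes):
        ctrl = [random.gauss(0, 1) for _ in range(n)]
        treat = [random.gauss(0, 1) for _ in range(n)]
        if z_test_p(ctrl, treat) < 0.05:
            return True
    return False

random.seed(0)
trials = 1000
fp_one = sum(found_a_hit(1) for _ in range(trials)) / trials
fp_ten = sum(found_a_hit(10) for _ in range(trials)) / trials
print(fp_one, fp_ten)  # roughly 0.05 vs roughly 0.40 (= 1 - 0.95**10)
```

Reporting only the one "hit" while leaving the nine null outcomes unpublished is exactly the "were all the results presented?" red flag.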
Based on the Harris chapter, what are the ways to fix the replicability crisis?
- get journals to change their incentives
- get funding agencies to promote better practices
- get universities to grapple with these issues
- get scientists to change their ways
Regarding issues such as sloppy science, non-reproducibility, and quantitative illiteracy, what problem would you fix first, why?
- I would fix quantitative illiteracy first: regardless of how a study is run, if the authors are open and transparent you can take the statistics and data for what they are, as long as you can tell the data have been handled and analyzed properly
- with non-reproducibility there are so many factors that could impact this, particularly in animal research where the animals are so sensitive to things like smell and environment
- sloppy science seems a bit broad to try and tackle but maybe that means it should be done first?
what is a critical tool for cognitive neuroscience research?
careful and thoughtful behavioral testing
compare and contrast structural imaging and functional imaging. what are examples of each of these methods, and what kinds of information does each method provide?
Structural imaging
- CT - X-rays taken at many different angles are combined to create an overall image of the brain; can show structural abnormalities such as a tumor
- MRI - shows which part of the brain has more water in it (more water = white matter areas) by orienting polar head of water to magnet
- X-ray - shows abnormalities such as tumor, stroke, or abnormal blood in the brain
—— pneumoencephalography - inserting air into the CSF and following it up the spinal cord into the brain
—— Angiography - shows circulatory issues that may affect blood flow by injecting an x-ray-absorbing material into the bloodstream
- DTI - detects movement of water molecules to create image of the brain’s white matter pathways
Functional Imaging
- fMRI - shows oxygenated vs. unoxygenated blood; areas with more activity require more oxygen, so they show up as active on fMRI
——- resting state fMRI - shows the levels of activity between areas, how they raise and fall, and how they are at rest
- MRS - provides information about neurotransmitter levels
- PET - shows blood flow in the brain; areas with high activity use a larger amount of blood, which is tracked via radioactive water injected into the bloodstream
- EEG - shows net electrical change in the areas around the nearest electrode
- MEG - measures the tiny magnetic fields that the brain’s electrical currents produce as they exit the skull
- Optical Tomography - infers blood flow from infrared light reflected back out of the brain
- rCBF - indirectly detects changes in metabolic activity as it changes the amount of blood flow in different brain regions
- MVPA - uses the pattern of activity across voxels, not just the overall level of activity, to provide information about brain function
you are designing an experiment to measure a brain-behavior relationship. A) if there is a premium on spatial resolutions, which approaches/methods would be superior?
Spatial resolution = how accurate the location of the activity is
fMRI/MRI - high spatial resolution meaning where it shows activation is very accurate
PET - okay, but still poor compared to fMRI/MRI
MRS - poor resolution
CT- very bad resolution
you are designing an experiment to measure a brain-behavior relationship. B) if there is a premium on temporal resolutions, which approaches/methods would be superior?
temporal resolution = when the activity happened (what are you measuring and how long is the measurement taken from when the biological process happened)
EEG - really good because it measures event-related potentials in real time
fMRI - not great, a few seconds behind
PET - horrible
Lesion - literally years
You have been given a “blank check” to buy a piece of equipment to perform brain-behavior studies. what equipment would you buy and why?
I would buy an fMRI
- it is the most expensive
- highest spatial resolution and can show changes in regional activity
- can be used to compare results with lesion studies
what is the “method of converging operations” as defined by Banich?
- this is when a community of researchers examines a question from multiple different perspectives, using a variety of populations and methods to find out if the result is similar or the same in each case
- this increases confidence in the conclusion
- for example, tests are not normed on just one thing, they are normed based on sex, SES, education, first language, gender, race, ethnicity, etc.
- another example, is using animal behavior studies, human behavioral studies, and then brain imaging studies
You have been asked to check your friend’s CT scan for possible abnormalities. What would you look for and why? Explain the concepts of hyperdensity and hypodensity.
Hyperdensity - areas that appear lighter than they should; can indicate blood, calcification, tumor, or clotting in the brain; can show stroke
Hypodensity - areas that appear darker than they should; can indicate air, fat, or lesions
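As a rough illustration of how density maps to tissue on CT (my own sketch, not course material: `classify_hu` is a hypothetical helper, and the Hounsfield-unit ranges are approximate):

```python
def classify_hu(hu):
    """Rough triage of a CT voxel by Hounsfield units (HU).
    Ranges are approximate and for illustration only; real reads
    depend on windowing, scan timing, and clinical context."""
    if hu <= -200:
        return "air (hypodense)"
    if hu <= -20:
        return "fat (hypodense)"
    if hu <= 20:
        return "fluid/CSF (water is defined as 0 HU)"
    if hu <= 45:
        return "brain tissue"
    if hu <= 100:
        return "acute blood (hyperdense)"
    return "calcification or bone (hyperdense)"

print(classify_hu(-1000))  # air (hypodense)
print(classify_hu(70))     # acute blood (hyperdense)
```

This is why an acute bleed stands out as a bright (hyperdense) patch on your friend’s CT, while air, fat, or an old lesion reads as a dark (hypodense) patch.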
With all the fancy new ways to look at the brain, is there still a role for neuropsychology?
Yes
- flexible test batteries allow for comparison of scores between person and reliable norms and between the same person at 2 different points in time
- images show how the brain looks, yes, but without behavioral testing there is no real way to get behavioral information from them; you have to use neuropsychological testing
- can estimate premorbid capabilities with vocabulary and reading tests that are reliable and valid
- premorbid function - you can get a more holistic approach to behavior and how it has changed
Neuropsychological assessment = measurement, in a quantitative and standardized fashion, of the most complex aspects of human behavior, such as attention, perception, memory, speech, language, building, drawing, reasoning, problem-solving, judgment, planning, and emotional processing (emotional and social functioning is missing from this list)
what is the brain impairment index
the “holy grail”: a battery of tests that sorts people into “brain damage” vs. “no brain damage” with a binary 1/0 code
Selimbeyoglu and Parvizi (2010) provided a “meta-review” of electrical brain stimulation (EBS) studies. Give 3 examples of findings from the literature, selected from the main brain territories reviewed (Frontal lobe, insula, parietal lobe, occipital lobe, and temporal lobe)
Frontal
- ocular motor responses of smooth and saccadic eye movements
- lip smacking and chewing
- emotional facial expression and laughter
- reaching and grasping
- nonconscious movement
- feelings of retrosternal pain or discomfort
- rocking, swaying, disequilibrium
- speech arrest, reading problems, singing problems
- autonomic reactions like blushing, mydriasis, increased heart rate, or increased respiration
- palilalia = repeating words
Insula
- sensation of suffocation
- bilateral painful burning, stinging, and tingling
- warmth and or cooling
- sensation of vertigo or nausea
- feeling of falling
- fumbling, plucking, lip smacking, chewing
- speech arrest
Parietal
- vestibular and sensorimotor issues like vertigo, disequilibrium, and sensations of body oscillations
- visual disturbances like blurred vision and oscillopsia
- urge to move body parts or illusions of moving
- out of body experience
- hemispatial neglect (with right-hemisphere stimulation)
- speech arrest, anomia
- finger anomia (can not finger spell)
- illusory sense that someone, a ghost or shadow, was standing behind the patient
Occipital
- seeing geometric shapes and simple patterns
- white or black spots
- visual illusions/hallucinations
- complex visual hallucinations of people or movement
- blobs of flashing light, colors, movement
Temporal
- complex feelings
- feeling of unreality or familiarity like Deja vu
- emotional feelings of fear, loneliness, urge to cry, anger, anxiety, levitation, or lightness
- mirth (laughter, happiness or excitement)
- illusion of dream like state
- recall of past experiences
- auditory hallucinations like water dripping, hammer and nail, music, human voices, changes to present auditory stimuli (muffling)
- pain
- sudden movement, staring, unresponsiveness, chewing, or plucking
How did Ojemann use intra-operative stimulation mapping techniques to elaborate the standard models of speech and language organization in the human brain? Describe how his work supported two main themes, which he called “compartmentalization” and “variance”
Compartmentalization = language is compartmentalized into separate systems for processing different aspects of language; cortical areas are dedicated to language but are not in small units
—– for example: frontal and temporoparietal lesions disturb written language but not oral language
—– this contrasts with Geschwind’s disconnection theory, which would say that disruption of written language means the visual cortices and language cortices were disconnected
- intraoperative electrical stimulation has shown different areas exist for different grammatical classes of words in different languages
—– stimulation of one or the other of 2 areas can lead to disturbances in naming the same object in one or the other of 2 language areas
- also has shown there to be functional separation where stimulation alters naming of an object in oral language or in manual communication like finger spelling
- even with areas where stimulation does not alter language, neurons there often change in activity still and change in different ways during speech production and perception
- a polyglot can lose all but one language, and the remaining language is not necessarily their dominant one
- essential language areas are preferentially located on the crowns of the gyri, not down in the sulci
Variance = people have huge amounts of variance between them when it comes to language localization of specific things
- in the gross anatomy of the brain, especially in the left perisylvian cortex
- also functional lateralization of language where some people have it on right side
- more severe naming deficits via stimulation with more fluent languages
- gyral patterns, the planum temporale, and cytoarchitectonic areas differ between people (the planum temporale tends to be larger in the language-dominant hemisphere)
Quiroga et al. (2005) argued that their findings suggest a sparse, explicit, and invariant encoding of visual percepts in the medial temporal lobe. Have they discovered a Jennifer Aniston neuron and a Halle Berry neuron?
they have discovered neurons that respond to familiar faces, landmarks, etc., and I assume the neuron is not “the Jennifer Aniston neuron” but rather recognizes a particular pattern.
- it would not be evolutionarily beneficial to remember every single person’s face and have a neuron for every face, or even pattern that we come across
- this does not mean that we have single neurons which encode for discrete faces
———– some units respond to pictures of more than one individual or object, each cell might represent more than one class of images, and they only looked at a small sample of stimuli
- the neuron also fired to the name of the actresses
- this might be important for transformation of complex visual percepts into long-term memories
How does the work of Abel et al. (2015) provide “direct physiological evidence” for the neural basis of proper name retrieval? How does the work support the claim that the left anterior temporal lobe is a “heteromodal” convergence region for proper naming?
- they measured activity with ECoG which is a patch of electrodes that sits on top of the surface of the brain or is a tiny electrode probe that is inserted into the cortex to measure activity of deeper cortical structures
- they measured the electrical activity from a picture and voice proper naming from the left anterior temporal pole/lobe
—–this is direct evidence because it is measuring direct activity by measuring electrical pulses from a smaller sum of neurons than EEG
—– Uiowa neurosurgeon Hiroto Kawasaki figured out how to make the electrodes flexible enough to bend/curve around the temporal pole so they lay flat on the cortex
—- this is direct, unlike fMRI, which looks at an indirect measure of brain activity via the ratio of oxygenated to deoxygenated blood
- this supports the claim that the left anterior temporal lobe is a heteromodal convergence region, meaning that it is a “3rd party” in the relationship between conceptual knowledge and word form
—- the area was surrounded by unimodal areas, suggesting that this area (left ATL) is the “hub” that integrates the different unimodal information around it
—- the ATL had nearly identical activation whether the stimulus was a photo or auditory, indicating that it was implicated in both; it cannot be unimodal because it was implicated in 2 modes
example: getting a description of the president and then having to recall the name still is a process that activates the ATL which provides more evidence for heteromodality
compare the spatial and temporal resolution of single unit recording and related direct neuronal recording techniques (electrocorticography) to the resolutions of other approaches we have covered (especially lesion method and functional neuroimaging)
fMRI bold signal
- the signal is not correlated well to low frequencies
- temporal resolution of 3 sec. and it peaks with the best signal at 5-6 seconds
- spatial resolution of 3-4 mm
ECoG
- temporal resolution of 5ms
- spatial resolution of 1cm at the cortical surface and .5-3mm in the local field when in the brain
Lesion method
- temporal resolution - years? until they are dead
- spatial resolution - good when using MRI and PET
what are the strengths and weaknesses of doing direct, single unit recording in humans?
strengths
- you are measuring a very acute signal, i.e. one neuron’s signal
- the device responds to both high and low frequencies
weakness
- very invasive
- have to study patients who already have electrodes implanted for clinical epilepsy monitoring
- have to use the sites the patient has the electrodes placed
- it is hard to do because the neuron can die or you can lose contact with it
Based on the review by Boes et al. (2018), is noninvasive brain stimulation (NIBS) a safe and effective way to treat depression?
Efficacy
- has a lot of potential with individualized dosing and flexible treatment
—– the excitability of the motor cortex does not necessarily correspond to the target area (they compensate for this by treating at 20% higher than your threshold)
—– motor threshold can change based on amount of sleep, amount of caffeine, meds, and stress
- there is a lot of human error, such as how the coil is held
- still trying to decide if single or multiple treatments per day is better
- don’t have a set duration for how long treatment should be (some are 4-6 weeks, some taper)
- don’t know the efficacy of low vs high doses
Safety
- said to have “excellent” safety when the protocols and guidelines are followed
- risk of headaches, seizure, hearing loss
- so far so good with safety
- high doses appear safe so far
- the biggest risk to safety is unregulated, marketed at-home devices
What is transient global amnesia? What are some of the proposed causes?
- syndrome where people suddenly are unable to recall events and form new memories
- sudden onset of anterograde and retrograde amnesia that goes away in 1-2 hours
- harmless
- could be due to reduced blood flow in the brain in the temporal lobe
—-could be tied to strenuous exercise, contact with water, emotional stress, sexual intercourse
—–linked to history of migraines, psychiatric disease, and vascular disease