
Neurobiology For Dummies

By: Frank Amthor Published: 04-14-2014

The approachable, comprehensive guide to neurobiology

Neurobiology rolls the anatomy, physiology, and pathology of the nervous system into one complex area of study. Neurobiology For Dummies breaks down the specifics of the topic in a fun, easy-to-understand manner. The book is perfect for students in a variety of scientific fields ranging from neuroscience and biology to pharmacology, health science, and more. With a complete overview of the molecular and cellular mechanisms of the nervous system, this complete resource makes short work of the ins and outs of neurobiology so you can understand the details quickly.

Dive into this fascinating guide to an even more fascinating subject, which takes a step-by-step approach that naturally builds an understanding of how the nervous system ties into the very essence of human beings, and what that means for those working and studying in the field of neuroscience. The book includes a complete introduction to the subject of neurobiology.

  • Gives you an overview of the human nervous system, along with a discussion of how it's similar to that of other animals
  • Discusses various neurological disorders, such as strokes, Alzheimer's disease, Parkinson's disease, and schizophrenia
  • Leads you through a point-by-point approach to describe the science of perception, including how we think, learn, and remember

Neurobiology For Dummies is your key to mastering this complex topic, and will propel you to a greater understanding that can form the basis of your academic and career success.

Articles From Neurobiology For Dummies

9 results
Neurobiology For Dummies Cheat Sheet

Cheat Sheet / Updated 03-27-2016

Neurobiology has all kinds of real-world (and not so real-world) applications. From curing paralysis to the possibility of cyborgs, neurobiology has answers to many fascinating questions.

Can Imaging Systems Read Our Minds?

Article / Updated 03-26-2016

Since functional magnetic resonance imaging (fMRI) machines became more common at the end of the 20th century, there have been increasing claims about the ability of this technology to extract the content of mental processing. Many aspects of the claims and counterclaims, including the ability to detect lying itself, parallel those associated with so-called “lie detector” tests during their heyday in the late 20th century.

fMRI scanners detect blood oxygenation levels and blood flow changes associated with metabolic changes in brain areas, at a resolution of one to several cubic millimeters depending on the magnet strength. This measurement is a one-dimensional index of the overall level of neural activity in that tissue volume, which is a complex circuit composed of millions of neurons. What can be deduced from fMRI scans, practically and theoretically?

The gross anatomy of the brain is characterized by localization of function, with distinct motor and sensory areas and maps within those areas. For example, neuroscientists know exactly where the brain area is that controls the left hand; if a person in an fMRI magnet moved his left hand, that movement would easily be detected. In sensory systems, visual space is laid out in a complex topographic map on the brain. If a person imagines some specific shape directly in front of her, some of the same brain areas will be activated that would have been activated by actually seeing that shape. This brain activity associated with imagery can also be detected in a scanner. Brain areas whose activity is necessary to conjure up images, or to lie, are different from those involved in retrieving content from actual memory, and this, too, can be detected.

Eventually, though, we run out of resolution. A 1 cubic millimeter volume of brain tissue has trillions of different states, and no one-dimensional measure of the overall activity in this volume can distinguish among all of them. Scanners may well be able to distinguish among a finite number of alternatives characterized by significant differences in brain activity over many millimeters (such as images causing arousal), but with foreseeable non-invasive technology they can't distinguish among complex, subtle differences in similar thought patterns.
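To put rough numbers on that resolution argument, here is a small Python sketch. The neuron counts and the BOLD value are illustrative assumptions, not figures from the book; even treating each neuron as simply "on" or "off" ignores most of the real complexity.

```python
import math

# Rough, illustrative numbers only: each neuron treated as simply "on" or "off".
small_circuit = 50
print(f"{small_circuit} neurons -> {2**small_circuit:,} possible on/off patterns")
# ~1.1 quadrillion patterns from just 50 neurons -- already past "trillions".

neurons_per_voxel = 1_000_000   # order-of-magnitude guess for ~1 mm^3 of cortex
digits = math.floor(neurons_per_voxel * math.log10(2)) + 1
print(f"{neurons_per_voxel:,} neurons -> a pattern count with about {digits:,} digits")

# What the scanner reports for that same voxel is a single number
# (hypothetical value), e.g. a percent change in the BOLD signal:
bold_percent_change = 0.8
print(f"fMRI summary of the voxel: {bold_percent_change}% BOLD signal change")
```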

Are Cyborgs Possible?

Article / Updated 03-26-2016

Cyborgs (cybernetic organisms) already exist! Any one of the more than 100,000 people worldwide who has a cochlear implant to restore hearing is essentially a cyborg, a functional combination of organic and machine parts. Your Great-Aunt Gertie suddenly seems much cooler, doesn’t she? The real question is how rapidly brain-computer interfaces (BCIs) that carry out additional brain functions will be developed. Here are some interesting ideas about cyborgs:

  • The human cyborg era began with the need to restore lost function, particularly hearing, where the BCI was relatively straightforward.
  • Research is underway to use a cyborg approach to repair some kinds of blindness via miniature cameras and arrays of stimulators that inject the camera signal into the nervous system.
  • A long-standing project is aimed at replacing some memory functions in a portion of the medial temporal lobe of the brain called the hippocampus with silicon circuitry.
  • Just as the nervous system adapts to new and novel inputs, such as those that occur when learning how to ride a bicycle or drive a car, it can likely adapt to the direct injection of signals into the nervous system.

How Can Paralysis Be Cured?

Article / Updated 03-26-2016

Paralysis has multiple causes. The part of the brain that controls movement can be damaged, such as by a stroke. Injuries and diseases can interrupt the messages transmitted from the brain to the muscles. But treatment and research are helping, and rapid advances in brain-computer interfaces (BCIs) are making these science-fiction–sounding approaches feasible in the near future:

  • Rehabilitation and training help in all types of paralysis by strengthening pathways and recruiting alternate ones to bypass the injury.
  • Considerable current research focuses on the cause of the paralysis; curing the underlying disease is almost always the treatment of first choice.
  • One promising treatment for strokes and tumors involves regrowing neurons: cells derived from the person’s own tissue are reprogrammed into neural stem cells and transplanted (an autologous transplant) into the damaged region.
  • Other research efforts involve the use of growth factors to stimulate regeneration in damaged brain areas.
  • Prosthetics may be used to alleviate paralysis either by capturing the high-level neural signal for movement, bypassing the injury, and stimulating movement closer to the muscle, or by capturing high-level motor command signals and performing body movements with mechanical devices, such as prosthetic limbs or exoskeletons that use motors to amplify the force exerted by our limbs, or that respond to brain commands for movement recorded by electrodes.

Can the Mind Be Downloaded?

Article / Updated 03-26-2016

As computers become more powerful, there is increasing speculation about whether they could equal or surpass human intelligence. One thread in this discussion is the idea of downloading our minds into an artificial substrate such as a computer. Most neuroscientists are very skeptical about this idea, for a few reasons:

  • What we actually know about the brain is that it’s extremely complex. It consists of on the order of 100 billion neurons, each with about a thousand connections to other neurons. No present-day substrate can come close to this complexity.
  • Neuroscientists think that the essential function of the brain is carried out by neural computations that generate action potentials that are sent to other neurons.
  • Even if we could measure the firing of every neuron in the brain and the strength of every synapse producing that firing, and download or model all of that in silicon, we still don’t know whether it would actually work like a human brain.

Bottom line: Your thoughts are your own, for the foreseeable future.
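To get a feel for the scale involved, here is a rough bit of arithmetic in Python based on the figures just quoted; the one-byte-per-synapse storage figure is purely an illustrative assumption.

```python
# Rough arithmetic using the figures quoted above; the storage assumption
# (one byte per synaptic strength) is illustrative only.
neurons = 100_000_000_000          # ~100 billion neurons
connections_per_neuron = 1_000     # ~1,000 connections each
synapses = neurons * connections_per_neuron
print(f"Synaptic connections to characterize: {synapses:.1e}")   # ~1e14

bytes_per_synapse = 1              # a single strength value, generously small
terabytes = synapses * bytes_per_synapse / 1e12
print(f"Storage for one static snapshot of the wiring strengths: ~{terabytes:,.0f} TB")
# ...and a static snapshot says nothing about whether the copy would behave
# like a brain once the dynamics are switched on.
```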

10 Careers for Neurobiology Students

Article / Updated 03-26-2016

If you’re a neurobiology student, or you’re thinking about pursuing a PhD in neurobiology, you may be wondering what people do with PhDs after graduation. (If you’re not wondering this yourself, you can bet your parents are.) Here’s a list of ten careers for people who’ve studied neurobiology.

Conducting academic research

Most graduate students in areas related to neurobiology are trained as academic researchers. Most academic researchers are university faculty members who conduct funded research, and most of them have done one or two post-docs (post-doc is short for post-doctoral research associate, a kind of professional lab assistant who works under the mentorship of a professor).

In many laboratories today, the laboratory head or principal investigator (PI) spends a lot of his or her time writing grants, serving on committees, and teaching. The lab head may delegate large portions of certain projects to post-docs who, by virtue of their PhD training, should be able to independently conduct most of the experiments and analysis on their assigned portion of the project. Typically, a post-doc is expected to produce two academic journal articles a year in consultation with the lab head.

In neurobiology, doing a post-doc is what typically separates the career path of a new PhD holder between research and teaching. The additional training and publications obtained as a post-doc, often in more than one lab, make people much more competitive for obtaining grant funding to support a research program.

Most research programs in neurobiology-related areas in the United States are federally funded by the National Institutes of Health (NIH) or the National Science Foundation (NSF). A typical NIH individual investigator grant (R01) usually runs over $200,000 per year for three to five years. This money pays part of the PI’s salary, the salary of any post-docs, and the stipend and sometimes tuition of graduate students after their first year. It also pays for laboratory equipment and supplies. The NSF has much less money for neurobiology than the NIH; it makes far fewer awards, and these are typically less than half the annual amounts of NIH awards. Some faculty obtain money for research from foundations that support research to cure particular diseases.

Also, it is common at the time of hiring for a faculty member to get a startup package of equipment, staff salaries, and supplies over several years. Startup packages in neurobiology-related areas may range from $250,000 to $750,000. Universities seek to recoup some of these costs from what are called indirect costs awarded with grants. Although the rate varies from institution to institution, indirect cost rates are around 50 percent in public universities and 75 percent in private ones. This means that for every $100,000 of grant support obtained by a PI from the NIH, an additional $50,000 to $75,000 is given to the university to “support” the PI’s research, such as by paying for laboratory space, the library, and animal facilities.

Times are rapidly changing as far as how laboratory research is conducted. Prior to World War II, research in the United States, as in Europe and Japan, was concentrated in a few elite universities like Harvard, MIT, and Stanford. From the late 1950s until about 2004, federal research funding exploded, and federally sponsored research occurred in almost every major university in the country.
In the last decade, however, NIH funding has been relatively stagnant, and NIH priorities have shifted from basic research to research in areas associated with curing diseases. This has concentrated funding in larger labs in the top 10 or 20 universities that can support core facilities such as gene sequencers and other expensive molecular biological analyzers. Twenty years ago, an investigator, once established, might expect to have funding for 15 years or more for a lab consisting of himself or herself and a mix of a post-doc, a graduate student, and one technician. Now the average size of an NIH grant-holding lab is over 25 people, and the average duration of an R01 award is on the order of six years. The NIH is under enormous pressure to cure diseases rapidly, given the huge and increasing percentage of the federal budget spent on healthcare.

Working in industrial laboratories

An alternative to academic research is industry. A major source of careers for PhD neurobiologists is the pharmaceutical industry. The vast majority of pharmaceutical agents act on neurotransmitter or hormone receptors, and discovering new, useful drugs that act on the nervous system is an urgent and profitable business.

All drugs have to pass stringent multistage tests for efficacy and safety. Drug company laboratories test thousands of potential drugs that are either synthesized or found in nature. Synthesized drugs may be designed to have shapes that fit a particular receptor, or may be derivatives of drugs known to activate receptors. Potential drugs are also found in natural toxins like snake and spider venoms, which are cocktails of neurotransmitter agonists and antagonists, many of which have never been synthesized.

Synthesized or natural compounds are first purified and analyzed for structure. Initial tests typically look for efficacy: multi-well dishes contain compartments with single cells expressing different receptors, and the cells’ responses to the potential drug are measured automatically. Safety testing is more complex because a new drug has to be safe for every tissue in the body. Testing in mice does not ensure human safety. Moreover, not all humans are genetically identical, and some drugs may have serious side effects for a small percentage of people. Safety and efficacy testing go through a series of increasingly expensive clinical trials, during which the drug could fail at any time.

The downside of working in the pharmaceutical industry compared to academia is that there is much less independence. Pharma researchers are charged with inventing and evaluating new drugs or devices that will be profitable to the company. On the other hand, a major advantage of this career path is that it isn’t so dependent on writing grants. Pharma researchers generally work in laboratories that have excellent, up-to-date instrumentation and good staff support. If you can work within the confines of the company’s directives and need for bottom-line accountability, the work environment can be excellent, and some independence in research, including publishing in academic journals, is possible.

Teaching, from elementary to graduate school

In the United States, most PhD holders are teachers rather than federally funded researchers. They teach primarily in colleges and universities, but they may also teach in primary and secondary schools.
Teaching is essential to the overall mission of science, both for creating the next generation of researchers and teachers and for disseminating the results of scientific research to the larger community. The fact that someone is primarily a teacher doesn’t mean that research is out of the question; it just usually has to be done on a smaller budget. Many important research questions can be approached with modest laboratory equipment and time. Great advances in neurobiology have been made using invertebrate preparations, some of which have resulted in breakthroughs in neurobiology or the development of important drugs.

One promising note is that many universities are now endowing “chairs” for a larger percentage of the faculty. Endowing a chair means getting a lump sum of money, around $2 million, usually from a donor, and using the interest to pay most of the faculty member’s salary and provide a modest research budget. Some graduate stipends are endowed the same way. This method may easily pay for itself by making the faculty member more competitive for obtaining federal money.

Many people believe that the U.S. educational system must emphasize STEM (Science, Technology, Engineering, and Mathematics) subjects much earlier in education. It may be appropriate to hire more science PhDs to teach in high schools and to teach in and run elementary schools. If you want your students to learn some neurobiology, would you rather have them taught by a neurobiologist or an education major?

Writing and editing

There are other careers involved in disseminating neurobiological knowledge besides classroom teaching. Writing about science and medicine includes everything from academic monographs to undergraduate textbooks to blogs and popular books. Another important function associated with science writing is editing professional journals. The editing process is essential in screening the write-up of data in refereed articles that become available to other researchers in the field. Science absolutely depends on the refereed dissemination of results, and on the reproduction of those results based on published methods, for all its progress.

Using neurobiological knowledge in business

More and more, the United States’ economy revolves around acquiring, processing, and mining information. Neurobiology has impacted commerce in artificial intelligence and robotics, and this impact is growing. The first artificial intelligence simulations could perform well only in very limited domains like proving geometry theorems and playing checkers. Later, artificial neural nets and fuzzy sets took on complex control functions. Recently, computer systems have played champion-level chess, won at Jeopardy, and may soon be able to drive cars. Factory robots that previously only did programmed, simple, repetitive motions are now able to adapt to unforeseen circumstances.

These examples show that neurobiological knowledge has impacted business at many levels, from brain logic, to networks using neuron-like elements, to robots with spinal cord-like hierarchical controllers, to current attempts to model neuronal synapses with memristors. Google, for example, has many basic scientists among its employees. Also, advertising is increasingly based on data about people and their preferences and habits. Architecture and city planning take into account how individuals and groups interact with their environment. Avatars and other kinds of agents may interact with people in more human-like ways using neurobiological knowledge about the brain and how it works.
Developing sensory prosthetics

Many surveys show that the fear of losing a major sense — like sight, hearing, or touch — is one of the biggest fears many people have, on a par with the fear of getting cancer. Currently, however, only deafness from conductive failure or auditory hair cell death can be overcome with any sort of implant. The inability to replace vision or somatosensory loss is not primarily due to lack of computer power, but rather to the inability to interface computer processing with the nervous system.

Neurobiology will be essential for the development of sensory prosthetics that can replace lost natural senses. Such prosthetics require that we learn where to place the brain-computer interface in the nervous system, how to communicate with the neurons there, and what messages to send to them and how to receive feedback from them.

In Parkinson’s disease and some types of depression, stimulation of the nervous system has proven to be more effective than drug treatments. Deep brain stimulation (DBS) and transcranial magnetic and direct current stimulation (TMS and tDCS) are being used to treat a variety of pathologies and to enhance normal learning and function. Transplantation of stem cells — particularly one’s own cells converted into stem cells — is also a promising technology for many brain and other diseases. Success with this technology will require more advanced knowledge of the genetics and epigenetics of neurons, neural circuitry, and neuropharmacology. Companies such as Medtronic are developing implantable devices ranging from cardiac pacemakers to more complex interfaces with the nervous system. The use of implantable devices that affect the nervous system via recording and/or current injection is called by some the field of “electroceuticals.”

Replacing motor function

Spinal cord or brain injury, strokes, and tumors are some possible causes of paralysis, and a person can also lose a limb to injury or cancer. The field of neurobiology can make important contributions to restoring motor function, either through the repair or replacement of central control neural circuitry, or through prosthetics.

In spinal cord injuries in mammals, the axons that run from motor cortex to the alpha motor neurons that drive the muscles, and from sensory receptors to somatosensory cortex, are severed and don’t regrow. Considerable laboratory research on the wound environment and axon growth parameters is being conducted, but restoring function when the spine is completely severed is usually not possible. Understanding the neurobiology of axonal growth, cell adhesion, and chemoaffinity substances is a major challenge for the current generation of neurobiologists.

Paralysis from central brain damage is sometimes partly overcome by activating alternate pathways. Training regimens such as constraint-induced therapy (CIT) can facilitate this. Currently it is unclear whether additional neural plasticity in the brain might be induced pharmacologically, or with stem cells or electrical current stimulation.

Another approach to paralysis is to drive the limbs artificially. This typically involves recording a signal from the brain or from an axon in the spinal cord and relaying it past the injury to the muscle or to the alpha motor neuron that drives the muscle. As in other cases, the brain-computer interface needs work.
Muscles are activated by different types of motor neurons, recruited in a specific order and firing at different rates depending on the force required and the duration needed. Poor emulation of this control produces jerky and inappropriate movements that don’t work for walking and balance or for properly controlling the arms and hands.

Another possibility, particularly when a limb is missing, is to use central neural signals to control an artificial limb. This is a challenging problem because millions of neurons in multiple brain and spinal cord areas are involved. Nevertheless, roboticists working to give robots fine dexterity — such as the dexterity required to pick up an egg without breaking it — are dealing with similar control problems, and those algorithms may be applied to prostheses.

Working in brain ethics and conducting brain studies of religious experience

Neurobiologists also have contributions to make to fields normally considered “liberal arts.” Spirituality and art are essential parts of human existence, and brain science can contribute to the human search for truth and the existence of other realms because truth and beauty depend on the human brain. This means that knowledge of brain science and neurobiology can lead to careers in fields like bioethics, where writing and lecturing on the relationship between the brain and the mind are essential.

For example, for thousands of years, meditation and prayer have been known to alter consciousness in many beneficial ways. These mind states, induced by behavioral procedures, produce clear changes in brain activity that can be studied scientifically. Knowledge of how meditative practices change brain activity may yield important health benefits for treating depression and enhancing normal human potential.

Religious experiences can be transformative, for good or bad. They can inspire a person to improve his or her life and be a better person, or convince a person to become almost slavishly devoted to a group such as a cult. It is certainly at least partly a scientific question why some religious feelings produce a universal respect for all life, while others end up with a disdain for it.

Another complex interface exists between neurobiology and the law and ethics. Human society and law are based on ideas about free will and guilt. But the law also recognizes “temporary insanity” and acknowledges that some people are not mentally capable of understanding the consequences of their actions. Society must be able to base legal judgments on sound neurobiological evidence.

Using brain science in behavioral economics

Humans make decisions that are less than rational, and behavioral economics looks at how these decisions come into play in financial matters. Advertising and marketing take advantage of such irrationalities, for example by associating cars with attractive actors to make sales. On the negative side, addiction, substance abuse, and eating disorders are behaviors that may potentially be disrupted by specific schedules of reinforcement. Neurobiological knowledge is increasingly being applied to clinical problems, and psychologists and psychiatrists often work with neurobiologists in team endeavors, attacking addiction and other maladaptive behaviors.

Neural counseling

While our bodies are living longer, our brains can’t always keep up. That makes it difficult to know what to do for people with relatively healthy bodies and poorly functioning brains.
Some babies are born with severe intellectual disabilities, while many elderly people transition very gradually from normal to low intellectual function. At what point are people not competent to make decisions about their own welfare? When people carry genetic mutations that have a significant risk of producing offspring with severe physical or intellectual disabilities, genetic counseling is often helpful.

Neural assessment of intrinsic brain activity may become widespread for cases of brain decline, as in Alzheimer’s disease, or when severe brain injury occurs. Recent research on “locked-in” syndrome using brain scans indicates that some people thought to be comatose are actually conscious of their surroundings while others are not, and treatment staff cannot distinguish between the two states with any standard test.

Is Depression “All in Your Mind”?

Article / Updated 03-26-2016

Every day, another genetic anomaly underlying a mental illness makes the headlines. Evidence of serotonin (a neurotransmitter) disorders has led to the widespread use of prescription medications such as Prozac and other SSRIs (selective serotonin reuptake inhibitors). Many people aren’t comfortable with this trend and similar ones, like prescribing Ritalin to children for attention deficit hyperactivity disorder (ADHD).

Are we prisoners of our brains? If our brains are imbalanced, do we require pharmacological adjustment (in other words, drugs)? Is there no free will, and no responsibility for our actions derived from free will? Virtually all spiritual traditions teach the virtues of self-control and responsibility. Our laws, legal system, and entire civilization are founded on the idea that people can make good or bad choices and are to be held responsible for the results of those choices. If our choices are due to neurotransmitter imbalances, who is responsible for anything?

Societies typically take the middle ground of differentiating what is “normal,” a state in which the person is responsible for his actions, from “abnormal,” in which, to some extent, the person may not be responsible. What is the basis for a middle ground? Consider depression and the behavior that results from it. In the past, people with depression were often told, “It’s all in your head” or “Just snap out of it.” Today, they’re often given pills. Can information from neurobiology help us understand this dilemma, which is so fundamental to our role as citizens in society?

One of the most important ideas in neuroscience in recent decades has been that cognitive processes affect the brain, and the brain affects cognitive processes, in a feedback loop that can spiral out of control. Depressed thoughts, such as those caused by negative events like losing a loved one or ending a relationship, can cause the pharmacology of the brain to change so that we remember the bad aspects of things more than the good ones. Future experiences that might otherwise have been perceived as neutral get processed negatively, leading to further negative effects. Sometimes the cycle can be disrupted, and sometimes medications work, but in the long run they often work only when the depressed person decides to take responsibility for correcting his life. Depression is all in the mind, but the mind is a product of the brain, and the ability of the mind to beat depression can be aided by pharmacology, spiritual awakening, or even exercise.

Can Vision Be Restored for the Blind?

Article / Updated 03-26-2016

Most blindness is due to the death of photoreceptors in the retina, as in retinitis pigmentosa and macular degeneration. Another leading cause of blindness is death of retinal ganglion cells from glaucoma. Damage to the visual pathways to the neocortex, or to the occipital lobe of the neocortex itself, can also produce different kinds of blindness. The strategy for “curing” blindness depends on where the damage occurred.

The first and best approach to treating blindness is stopping the disease itself. Most retinal degeneration has a genetic basis. Genetic diseases can be treated indirectly (by artificially restoring the biochemistry upset by the genetic anomaly — for example, through medication) or directly (by altering the gene itself through transgenic technology). However, if many of the affected cells have already died, these approaches may not be possible. Other genetic approaches, such as reprogramming some of the remaining cells in the retina to differentiate into the types that have died, or introducing stem cells to replace the lost cells, may be possible.

Other, more artificial approaches to restoring sight may hold promise in the short term. After the retinal photoreceptors have died, the remaining cells in the retina, including the output retinal ganglion cells, appear to continue to be functional (though silent, due to the loss of their input signal). Several approaches are aimed at stimulating retinal ganglion cells directly. These include genetic modification of the ganglion cells to express their own light-absorbing ion channels so that they respond to light directly, without photoreceptors. A difficulty of this approach is that the phototransduction efficiency of such a retina will be a lot lower than that of the natural retina, so significant light amplification, which might be damaging to the retina, may be necessary.

Some blind patients have had electronic chips implanted in their retinas that use electrical current to stimulate either the bipolar cells that photoreceptors would normally stimulate, or the output ganglion cells directly. This approach requires a conversion from a camera image to drive the current generators in the chip on the retina (a toy sketch of this image-to-electrode conversion appears at the end of this article). The difficulty is stimulating enough ganglion cells individually. Clinically useful vision probably requires at least hundreds of discrete, individually modulated stimulation points, whereas current injected into the retina spreads out over a wide area.

Another visual prosthesis approach has been to inject current modulated by a camera image into the visual cortex. Signal injection higher in the visual system is the only viable approach for glaucoma, where the output retinal ganglion cells have died, or in the case of the physical loss of the eyes. However, most neurons in visual cortex are feature selective for lines or edges of a particular orientation, or that move in a particular direction, and it isn’t clear what signal to put on which current stimulators in a chip. This is also a potential problem even for retinal stimulators, because the human retina, like all mammalian retinas, probably contains at least 20 distinct classes of retinal ganglion cells.

Some totally artificial vision approaches are based on sensory substitution. Several research groups have used an audio signal and trained blind people to “recognize” objects in the environment using hearing (bats do this via their own ultrasonic chirps).
Facsimiles of point-to-point light intensity derived from a camera have been used to vibrate points on the skin, or even to electrically stimulate the tongue, as a visual prosthesis. These approaches tend to be low resolution, but they can be implemented immediately.
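To give a feel for the resolution bottleneck described above for retinal chips, here is a toy Python sketch that collapses a camera frame onto a small electrode grid. The grid size, current limit, and linear brightness-to-current mapping are illustrative assumptions, not the design of any real implant.

```python
import numpy as np

def frame_to_electrode_currents(frame: np.ndarray, grid: int = 10) -> np.ndarray:
    """Block-average a grayscale frame (values 0-1) onto a grid x grid electrode
    array and map each patch's brightness to a stimulation current in microamps.
    Purely illustrative; real devices use far more elaborate image processing."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid                    # pixels per electrode patch
    patches = frame[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
    brightness = patches.mean(axis=(1, 3))           # one value per electrode
    max_current_uA = 100.0                           # assumed safety-limited maximum
    return brightness * max_current_uA

# A synthetic 480x640 "camera frame" stands in for a real scene.
frame = np.random.rand(480, 640)
currents = frame_to_electrode_currents(frame, grid=10)
print(currents.shape)  # (10, 10): ~300,000 pixels collapsed to 100 stimulation points
```

Even before current spread in the tissue is considered, the whole image is reduced to 100 numbers in this toy setup, which is why hundreds of truly independent stimulation points are usually cited as a minimum for clinically useful vision.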

What Is Consciousness and Where Is It Located?

Article / Updated 03-26-2016

People often ask where consciousness is in the brain, but that question is problematic. It assumes that consciousness is some special entity embedded within the nervous system. This assumption is a descendant of the philosophical position of dualism promoted by Descartes, who believed that the body and most of the human brain were material substances like those of any animal, but that the soul that contained human consciousness resided in a special place (the pineal gland, for Descartes, because it is one of the very few unpaired brain structures).

One problem with dualism is that there is no way for the spiritual soul to interact with the material body: such an interaction must involve a transfer of energy between body and soul for communication and control, but a purely spiritual entity can’t absorb or emit energy — otherwise, it wouldn’t be purely spiritual. A deeper problem for dualism is that it doesn’t explain anything about the relationship between consciousness and brain function, because consciousness is hidden away in a non-material soul.

But we know that brain dysfunction can produce altered consciousness. Is the altered consciousness due to an intact spiritual entity within the brain being corrupted by having to use a broken machine to operate in the world, or is the altered consciousness a product of the brain itself? Many spiritual traditions take the first view — that the spiritual entity within the brain is immutable and everlasting, but it functions in the world we know through the physical brain, like a lens focusing the rays of the sun. If the lens is imperfect, the image is corrupted, but the sun remains perfect as the real source of the image. However, no neurophysiological experiment has revealed energy leaking out of the brain into some sort of non-material conscious entity, nor has any shown energy coming into the brain from anything other than known material sources in such a way that the firing of recorded neurons was modulated by a non-physical entity.

A better question about consciousness is to ask which areas of the brain need to be active for the specific sort of consciousness humans have, versus the sort of consciousness other mammals have. For example, consciousness is not possible without an intact reticular formation in the brainstem, which controls wakefulness, but this formation is equally necessary for rodents to be awake. What humans seem to have is a very large frontal lobe that interacts with the parietal, temporal, and occipital lobes, which in turn interact with each other directly and through the thalamus. The simultaneous, coordinated firing of neurons in the frontal lobe with neurons in the other lobes is necessary for humans’ ability to make choices and responses. Within this loop of billions of neurons is the knowledge of our linguistic categorization of the world — the content of our consciousness.
