Researchers are a step closer to figuring out how our brains turn the squiggly lines on paper and screens into words.
A team of cognitive neuroscientists at the University of Pittsburgh has completed a landmark study of how the human brain recognizes and processes written words – or, more simply, how it reads.
“We don’t really think about it when we’re reading a word, but all you’re really seeing are black and white lines, and you turn that into a story, a sentence, a word, something with real meaning,” said Avniel Ghuman, one of the lead researchers.
Cognitive scientist and fellow researcher Julie Fiez said their findings raise important questions about how our brains represent printed words.
The team studied epilepsy patients who agreed to have electrodes implanted in their brains. The implants’ main purpose was to reduce seizures, but they also gave doctors and scientists an opportunity to examine how the brain deciphers written words. Epilepsy surgeon Mark Richardson performed the procedures.
“In some patients with epilepsy that doesn’t respond to medication, the only way we can potentially stop the seizures is to locate the place in the brain where they’re starting,” Richardson said.
Richardson said they used the electrodes to stimulate different parts of the brain, map their functions and trace where the seizures begin. During the study, they also used those electrodes to stimulate the parts of the brain used for reading and recognizing words.
For one experiment, participants were asked to read a series of seven-letter words. Researchers analyzed the recorded data collected from one section of the brain.
“We were particularly interested in this brain region – the left mid-fusiform gyrus – because we know that it’s very important for reading,” said researcher Elizabeth Hirshorn.
She said that part of the brain looks different in people with reading disorders like dyslexia and changes as they become more literate.
Ghuman said their findings also suggest that this part of the brain not only recognizes visual cues about words but also works in conjunction with other regions to refine and decode visually similar words, like “hint” and “lint.”
“Right after reading a word, you could tell the difference between words that were very different from one another – hint and dome. Then, at a later stage a couple of hundred milliseconds later, suddenly we could even tell the difference between words that were only one letter apart, even though we couldn’t do that originally,” Ghuman said.
The team now hopes to continue its research with less invasive approaches and a broader sampling of participants.
In this week's Tech Report headlines:
- U.S. Olympians in Rio are getting a bit of a boost thanks to new technology that could shave seconds off their speed on the track and in the pool. Nike used 3-D printing to develop small silicone protrusions that redirect airflow around runners, while Adidas used body scanners to design better swimsuits for Team USA. The companies said the upgrades are completely legal and meet Olympic standards.
- Carnegie Mellon University’s CREATE Lab is expanding nationally, adding robotics and computer science research space in Atlanta and Salt Lake City. CMU will train local educators and provide them with tools to support their students in the classroom and expand technology education. The project is a partnership with the Georgia Institute of Technology, Atlanta Public Schools, and the Utah STEAM Action Center.
The Associated Press contributed to this story.