“I don’t think I’m a gloomy person,” Katie Mack said. She just likes thinking about the end — the annihilation of Earth, the solar system, our galaxy and especially the universe. These are apocalyptic topics that can put even these uncertain times into perspective. “The destruction of the whole universe: There’s nothing bigger and more dramatic than that,” she said.
Change is in the nature of her career. As she began her undergraduate studies at the California Institute of Technology, cosmologists were processing the 1998 discovery that some mysterious entity called “dark energy” was pushing galaxies apart from one another. While she was working toward her Ph.D. at Princeton University, the first results from the Wilkinson Microwave Anisotropy Probe (WMAP) came out, providing “our first really detailed accounting of the contents of the universe,” she said. “Since WMAP was partly led by people at Princeton, it was a big part of life there, and hugely exciting; I felt like I was right at the ground floor on some of the most exciting discoveries in cosmology.”
Over the past decade, researchers have used such techniques to pick apart topics that social scientists have chased for more than a century: from the psychological underpinnings of human morality, to the influence of misinformation, to the factors that make some artists more successful than others. One study uncovered widespread racism in algorithms that inform health-care decisions; another used mobile-phone data to map impoverished regions in Rwanda.
“The biggest achievement is a shift in thinking about digital behavioural data as an interesting and useful source”, says Markus Strohmaier, a computational social scientist at the GESIS Leibniz Institute for the Social Sciences in Cologne, Germany.
Not everyone has embraced that shift. Some social scientists are concerned that the computer scientists flooding into the field with ambitions as big as their data sets are not sufficiently familiar with previous research. Another complaint is that some computational researchers look only at patterns and do not consider the causes, or that they draw weighty conclusions from incomplete and messy data — often gained from social-media platforms and other sources that are lacking in data hygiene.
There continues to be an impressive appetite for conceptual and philosophical explorations of psychiatry. The publishing field is now populated by a diverse array of backgrounds and perspectives. The general public seems mostly interested in decrying the medicalization of the normal and the transformation of our woes into neatly packaged mental disorders. The academic literature is dominated by philosophers and philosophically trained professionals; while the intellectual discourse is of high caliber, it unfortunately remains largely inaccessible to mental health professionals and much of the general public, and as a result it has had little influence outside the academic community. There is also a cohort of individuals with a critical interest in the subject but whose philosophical focus remains stuck on classical critical figures such as Thomas Szasz, Michel Foucault and R.D. Laing, with little engagement with contemporary philosophy of science. The philosophical work of Kenneth Kendler and his various collaborators (John Campbell, Carl Craver, Kenneth Schaffner, Erik Engstrom, Rodrigo Munoz, George Murphy, and Peter Zachar), assembled in a specially curated volume, occupies a unique position in this contemporary landscape, and there is much to be said in its favor.
One of the least expected aspects of 2020 has been the fact that epidemiological models have become both front-page news and a political football. Public health officials have consulted with epidemiological modelers for decades as they’ve attempted to handle diseases ranging from HIV to the seasonal flu. Before 2020, it had been rare for the role these models play to be recognized outside of this small circle of health policymakers.
Some of that tradition hasn’t changed with the SARS-CoV-2 pandemic. International bodies, individual countries, most states, and even some cities have worked with modelers to try to shape policy responses to the threat of COVID-19. But some other aspects of epidemiological modeling life clearly have changed. The models, some of which produce eye-catching estimates of fatalities, have driven headlines in addition to policy responses. And those policy responses have ended up being far more controversial than anyone might have expected heading into the pandemic.
The idea that we have brains hardwired with a mental template for learning grammar—famously espoused by Noam Chomsky of the Massachusetts Institute of Technology—has dominated linguistics for almost half a century. Recently, though, cognitive scientists and linguists have abandoned Chomsky’s “universal grammar” theory in droves because of new research examining many different languages—and the way young children learn to understand and speak the tongues of their communities. That work fails to support Chomsky’s assertions.
The research suggests a radically different view, in which learning of a child’s first language does not rely on an innate grammar module. Instead the new research shows that young children use various types of thinking that may not be specific to language at all—such as the ability to classify the world into categories (people or objects, for instance) and to understand the relations among things. These capabilities, coupled with a unique human ability to grasp what others intend to communicate, allow language to happen. The new findings indicate that if researchers truly want to understand how children, and others, learn languages, they need to look outside of Chomsky’s theory for guidance.
Finland is about to launch an experiment in which a randomly selected group of 2,000–3,000 citizens already on unemployment benefits will begin to receive a monthly basic income of 560 euros (approx. $600). That basic income will replace their existing benefits. The amount is the same as the current guaranteed minimum level of Finnish social security support. The pilot study, running for two years in 2017-2018, aims to assess whether basic income can help reduce poverty, social exclusion, and bureaucracy, while increasing the employment rate.
The Finnish government introduced its legislative bill for the experiment on 25 August. Originally, the scope of the basic income experiment was much more ambitious. Many experts have criticized the government’s experiment for its small sample size and for the setup of the trial, which uses just a single experimental condition. This means the experiment can shed light on only one question: whether removing the disincentives embedded in social security will encourage those now unemployed to return to the workforce.
Still, the world’s largest national basic income experiment represents a big leap towards experimental governance, a transformation that has been given strong emphasis in the current government program of the Finnish state. Additionally, the Finnish trial sets the agenda for the future of universal basic income at large. Its results will be closely followed by governments worldwide. The basic income experiment may thus well lead to the greatest societal transformation of our time.
Where would you look for alien life? An astronomer and science popularizer explains the basics of astrobiology to outline five plausible scenarios for finding extraterrestrials
It takes Jon Willis, astronomy professor at the University of Victoria in British Columbia, 170 pages – the length of a Henry James short story, for Pete’s sake – to get around to the Drake Equation in his winning debut, All These Worlds Are Yours: The Scientific Search for Alien Life, but we should probably deal with it immediately. In 1961, at a conference in West Virginia, astronomer Frank Drake worked up an exotic-looking equation on a blackboard. Drake had recently completed a project designed to detect alien radio broadcasts from the region of the stars Tau Ceti and Epsilon Eridani, and the blackboard equation he wrote has become a signature for the whole subject of Willis’s book, astrobiology.
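For readers who want the equation itself: Drake’s formula is simply a chain of multiplied factors. The sketch below shows the structure in Python; every parameter value is an invented placeholder for illustration, not an estimate taken from Drake or from Willis’s book.

```python
# Drake equation: N = R* x f_p x n_e x f_l x f_i x f_c x L
# N is the estimated number of detectable civilizations in the galaxy.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Multiply the seven Drake factors together."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Hypothetical toy inputs (not measurements):
n = drake(
    r_star=1.0,       # new stars formed per year in the galaxy
    f_p=0.5,          # fraction of stars with planets
    n_e=0.22,         # habitable planets per planet-bearing star
    f_l=0.1,          # fraction of those on which life arises
    f_i=0.01,         # fraction of those evolving intelligence
    f_c=0.1,          # fraction of those emitting detectable signals
    lifetime=10_000,  # years such a civilization keeps signaling
)
print(n)
```

Because the factors are multiplied, a pessimistic guess for any single one drags the whole estimate toward zero — which is exactly why the equation fuels, rather than settles, the debate that follows.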
A new paper using data from NASA’s Kepler telescope came out recently, estimating that 22% of Sun-like stars harbor Earth-sized planets. This is a big increase over previous estimates. It’s very cool work. Love it. But the news spin was predictable:
New York Times: The known odds of something — or someone — living far, far away from Earth improved beyond astronomers’ boldest dreams on Monday.
USA Today: We are not alone.
You get the idea. Aliens under every rock. The existence of extraterrestrial intelligence (henceforth ETIs, or just ETs) is normally discussed in the context of the Fermi Paradox, which Wikipedia describes as “the apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilization and humanity’s lack of contact with, or evidence for, such civilizations.” Now I’m a strong advocate for there being no ETs in our galaxy, as explained in this recent post. In fact I’ve gotten so tired of hearing about ETs that I’ve started thinking of it as “Carl Sagan Syndrome,” name-checking the deservedly well-regarded astronomer and advocate for the Search for Extraterrestrial Intelligence (SETI). With this latest news cycle I got to wondering: Why so much Sagan Syndrome? What am I missing?
A group led by Dr. Robert Costanza has calculated the value of the world’s ecosystems…the group’s most recent estimate puts the yearly value at $142.7 trillion.
“I think this is a very important piece of science,” said Douglas J. McCauley of the University of California, Santa Barbara. That’s particularly high praise coming from Dr. McCauley, who has been a scathing critic of Dr. Costanza’s attempt to put price tags on ecosystem services.
“This paper reads to me like an annual financial report for Planet Earth,” Dr. McCauley said. “We learn whether the dollar value of Earth’s major assets have gone up or down.”
The group last calculated this value in 1997, and it has risen sharply over the past 17 years, even as natural habitats have been disappearing. Dr. Costanza and his colleagues estimate that the world’s reefs shrank from 240,000 square miles in 1997 to 108,000 in 2011.
Deep within the Earth’s rocky mantle lies oceans’ worth of water locked up in a type of mineral called ringwoodite, new research shows.
The results of the study will help scientists understand Earth’s water cycle, and how plate tectonics moves water between the surface of the planet and interior reservoirs, researchers say.
The Earth’s mantle is the hot, rocky layer between the planet’s core and crust. Scientists have long suspected that the mantle’s so-called transition zone, which sits between the upper and lower mantle layers 255 to 410 miles (410 to 660 kilometers) below Earth’s surface, could contain water trapped in rare minerals. However, direct evidence for this water has been lacking, until now.
To see if the transition zone really is a deep reservoir for water, researchers conducted experiments on water-rich ringwoodite, analyzed seismic waves travelling through the mantle beneath the United States, and studied numerical models. They discovered that downward-flowing mantle material is melting as it crosses the boundary between the transition zone and the lower mantle layer.
“If we are seeing this melting, then there has to be this water in the transition zone,” said Brandon Schmandt, a seismologist at the University of New Mexico and co-author of the new study published today (June 12) in the journal Science. “The transition zone can hold a lot of water, and could potentially have the same amount of H2O [water] as all the world’s oceans.” (Melting is a way of getting rid of water, which is unstable under conditions in Earth’s lower mantle, the researchers said.)
Sgt Bowe Bergdahl spoke English for 23 years until he was captured by Taliban fighters in Afghanistan five years ago. But since his release, he has trouble speaking it, says his father. How can you lose your native language, asks Taylor Kate Brown.
Some people have gone decades without speaking or hearing their first language but they retain the ability to speak it easily, says Dr Monika Schmid, a linguistics professor at the University of Essex in the UK. But others begin losing fluency within a few years of not speaking it.
It’s rare to totally lose command of a first language, she says. Instead people have “language attrition” – trouble recalling certain words, or use of odd grammar structures. Age is a factor. Once past puberty, Dr Schmid says, your first language is stable and the effects of attrition can reverse themselves if you are re-immersed. But children as old as 10 don’t necessarily retain the language they were born into. In one study, adoptees who had left South Korea for France in childhood were asked in their early 30s to identify Korean; they did no better than native French speakers with no exposure to the language.
The difficulties in recalling your first language are greater the more immersed you are in a second language, says Dr Aneta Pavlenko at Temple University in Philadelphia, because cognitive resources are limited. Despite teaching Russian at university in the US, she herself returned to her Russian-speaking community in Kiev only to realise she had forgotten how to start a conversation at the post office.
It’s well known that brain injuries can cause language loss, but emotional trauma can also play a role. Among German Jews who fled the country during the Holocaust, Dr Schmid says, the greater their trauma, the more dramatic the loss of language.
“Addressing a field that has been dominated by astronomers, physicists, engineers, and computer scientists, the contributors to this collection raise questions that may have been overlooked by physical scientists about the ease of establishing meaningful communication with an extraterrestrial intelligence. These scholars are grappling with some of the enormous challenges that will face humanity if an information-rich signal emanating from another world is detected.”
I write in praise of air. I was six or five
when a conjurer opened my knotted fist
and I held in my palm the whole of the sky.
I’ve carried it with me ever since.
That is the opening stanza from “In Praise of Air” by British poet, playwright and novelist Simon Armitage.
There’s beauty to this poem that goes beyond the ideas it conveys and the careful craftsmanship of the writer. The work doesn’t just praise the air, it clears it.
Or, more accurately, the 65-foot-high banner upon which the poem is printed clears it. That’s because the material is coated with nanotechnology that chews up airborne pollutants.
In Praise of Air, a collaboration between Armitage and physical chemistry professor Tony Ryan, has been unfurled on a building at the University of Sheffield in the UK to bring attention to Ryan’s innovation.
Text and image via TXCHNOLOGIST. Read the full article at TXCHNOLOGIST.
More than five centuries after Christopher Columbus’s flagship, the Santa Maria, was wrecked in the Caribbean, archaeological investigators think they may have discovered the vessel’s long-lost remains – lying at the bottom of the sea off the north coast of Haiti. It’s likely to be one of the world’s most important underwater archaeological discoveries.
“All the geographical, underwater topography and archaeological evidence strongly suggests that this wreck is Columbus’ famous flagship, the Santa Maria,” said the leader of a recent reconnaissance expedition to the site, one of America’s top underwater archaeological investigators, Barry Clifford.
“The Haitian government has been extremely helpful – and we now need to continue working with them to carry out a detailed archaeological excavation of the wreck,” he said.
So far, Mr Clifford’s team has carried out purely non-invasive survey work at the site – measuring and photographing it.
For most of recorded history, human beings situated the mind — and by extension the soul — not within the brain but within the heart. When preparing mummies for the afterlife, for instance, ancient Egyptian priests removed the heart in one piece and preserved it in a ceremonial jar; in contrast, they scraped out the brain through the nostrils with iron hooks, tossed it aside for animals, and filled the empty skull with sawdust or resin. (This wasn’t a snarky commentary on their politicians, either—they considered everyone’s brain useless.) Most Greek thinkers also elevated the heart to the body’s summa. Aristotle pointed out that the heart had thick vessels to shunt messages around, whereas the brain had wispy, effete wires. The heart furthermore sat in the body’s center, appropriate for a commander, while the brain sat in exile up top. The heart developed first in embryos, and it responded in sync with our emotions, pounding faster or slower, while the brain just sort of sat there. Ergo, the heart must house our highest faculties.
Meanwhile, though, some physicians had always had a different perspective on where the mind came from. They’d simply seen too many patients get beaned in the head and lose some higher faculty to think it all a coincidence. Doctors therefore began to promote a brain-centric view of human nature. And despite some heated debates over the centuries—especially about whether the brain had specialized regions or not—by the 1600s most learned men had enthroned the mind within the brain. A few brave scientists even began to search for that anatomical El Dorado: the exact seat of the soul within the brain.
Read full article written by Sam Kean at SALON.
Image above: Eugene Thirion’s “Jeanne d’Arc” (1876)
Different kinds of pain summon different terms of art: hurt, suffering, ache, trauma, angst, wounds, damage. Pain is general and holds the others under its wings; hurt connotes something mild and often emotional; angst is the most diffuse and the most conducive to dismissal as something nebulous, sourceless, self-indulgent, and affected. Suffering is epic and serious; trauma implies a specific devastating event and often links to damage, its residue. While wounds open to the surface, damage happens to the infrastructure—often invisibly, irreversibly—and damage also carries the implication of lowered value. Wound implies in medias res: The cause of injury is in the past but the healing isn’t done; we are seeing this situation in the present tense of its immediate aftermath. Wounds suggest sex and aperture: A wound marks the threshold between interior and exterior; it marks where a body has been penetrated. Wounds suggest that the skin has been opened—that privacy is violated in the making of the wound, a rift in the skin, and by the act of peering into it.
If consciousness is just the workings of neurons and synapses, how do we explain the phenomenon of near-death experience? By some accounts, about 3% of the U.S. population has had one: an out-of-body experience often characterized by remarkable visions and feelings of peace and joy, all while the physical body is close to death. To skeptics, there are more plausible, natural explanations, like oxygen deprivation. Is the prospect of an existence after death “real” and provable by science, or a construct of wishful thinking about our own mortality?
A new study from psychologists at the University of Chicago and Pompeu Fabra University in Barcelona finds that people using a foreign language take a relatively utilitarian approach to moral dilemmas, making decisions based on assessments of what’s best for the common good. That pattern holds even when the utilitarian choice would produce an emotionally difficult outcome, such as sacrificing one life so others could live.
“This discovery has important consequences for our globalized world, as many individuals make moral judgments in both native and foreign languages,” says Boaz Keysar, Professor of Psychology at UChicago. “The real world implications could include an immigrant serving as a jury member in a trial, who may approach decision-making differently than a native English speaker.” Lead author Albert Costa, a UPF psychologist, adds that “deliberations at places like the United Nations, the European Union, large international corporations or investment firms can be better explained or made more predictable by this discovery.”
Protein discovery could boost efficacy of bone marrow replacement treatments
Researchers at the University of California, San Diego School of Medicine report that a protein called beta-catenin plays a critical, and previously unappreciated, role in promoting recovery of stricken hematopoietic stem cells after radiation exposure.
The findings, published in the May 1 issue of Genes & Development, provide a new understanding of how radiation impacts cellular and molecular processes, but perhaps more importantly, they suggest new possibilities for improving hematopoietic stem cell regeneration in the bone marrow following cancer radiation treatment.
Ionizing radiation exposure – accidental or deliberate – can be fatal due to widespread destruction of hematopoietic stem cells, the cells in the bone marrow that give rise to all blood cells. A number of cancer treatments involve irradiating malignancies, essentially destroying all exposed blood cells, followed by transplantation of replacement stem cells to rebuild blood stores. The effectiveness of these treatments depends upon how well the replacement hematopoietic stem cells do their job.
Found among the notes of the poet Johann Wolfgang Goethe is a stupendous claim: Everything is leaf. This is a statement that seems too beautiful to be science. Goethe came to this idea on a trip to Italy in the late 1700s. The more Goethe looked at plants, and lived and breathed with plants, the more profoundly he felt poetry’s limits. He turned to botany and began publishing scientific works. He created his own study of seeing, which he called “morphology.” In this, Goethe’s love of plants followed the same path that all lasting love must take. Goethe wanted to know plants from their most essential beginnings, wanted to touch their seeds, follow their cycles. He couldn’t be satisfied just wandering around parks, glancing at the flowers and pronouncing metaphors upon them — Goethe had to understand what a plant truly is. Everything is leaf, he discovered at last, every part of a plant is leaf. The cotyledon, the foliage, the cataphylls, the petals — a plant is fundamentally leaf. Goethe published this intimate memoir of his relationship with leaves and named it The Metamorphosis of Plants.
It’s unsurprising that Goethe came to his idea about the everythingness of leaf while wandering the lush countryside of Naples. I wonder if he could have had his realization trudging through the barren early spring gardens of Weimar. “The Neapolitan firmly believes that he lives in Paradise and takes a very dismal view of northern countries,” Goethe wrote in his notebook. “Sempre neve, case di legno, gran ignoranza, ma denari assai — that is how he pictures our lives. For the edification of all northerners, this means: ‘Snow all the year round, wooden houses, great ignorance, but lots of money.’” That is to say, a leaf in Germany is a mostly invisible thing. It is an entr’acte, a promise. In the northern parts of the world, the leaves hide inside the sticks; the sticks, for most of the year, look dead. And only a poet or a flimflammer could come up with the notion that something hardly visible is everything.
Imagine looking out over Tokyo Bay from high above and seeing a man-made island in the harbor, 3 kilometers long. A massive net is stretched over the island and studded with 5 billion tiny rectifying antennas, which convert microwave energy into DC electricity. Also on the island is a substation that sends that electricity coursing through a submarine cable to Tokyo, to help keep the factories of the Keihin industrial zone humming and the neon lights of Shibuya shining bright.
But you can’t even see the most interesting part. Several giant solar collectors in geosynchronous orbit are beaming microwaves down to the island from 36 000 km above Earth.
It’s been the subject of many previous studies and the stuff of sci-fi for decades, but space-based solar power could at last become a reality—and within 25 years, according to a proposal from researchers at the Japan Aerospace Exploration Agency (JAXA). The agency, which leads the world in research on space-based solar power systems, now has a technology road map that suggests a series of ground and orbital demonstrations leading to the development in the 2030s of a 1-gigawatt commercial system—about the same output as a typical nuclear power plant.
When I was a hormone-addled adolescent in the late 1960s and early ’70s, I would often look up at a poster of Sigmund Freud on my brother’s bedroom wall. The title on the portrait – something like ‘Freud: explorer of the unconscious and discoverer of the meaning of dreams’ – depicted a hero of intellectual freedom and creative thought. When you looked at it closely, the portrait seemed to writhe and come alive. In the drug-fueled style of those decades of ongoing sexual revolution, the artist had depicted the nose as an erect penis, the cheeks as a female behind, and the eyes as female breasts. One side of the face was a voluptuous female whose legs wrapped around the body of a muscular male on the other side of the face and, of course, both heads were thrown back in dramatized ecstasy. I recall some of my brother’s stoned friends gazing at the portrait with bewildered looks on their faces, apparently unsure if the writhing torsos they saw were really there or not.
Right from the start, I saw Freud as a kind of secular saint because he was willing to take an unbiased look at himself through the raw material of his dreams. If he found in those dreams a mass of broiling sexual impulses, so be it. Those impulses had to be accepted, understood and explained within a larger picture of the human mind.
Academic hoaxes have a way of crystallizing, and then shattering, the intellectual pretensions of an era. It was almost 20 years ago, for instance, that a physicist named Alan Sokal laid siege to postmodern theory with a Trojan horse. You may remember the details: Sokal wrote a deliberately preposterous academic paper called “Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity.” He filled it with the then trendy jargon of “critical theory,” and submitted it to a prominent journal of cultural studies called Social Text. Amid worshipful citations of postmodern theorists and half-baked references to complex scientific work, the paper advanced a succession of glib, sweeping assertions (“Physical ‘reality,’ no less than social ‘reality,’ is at bottom a social and linguistic construct”). Social Text published it without demanding any significant editorial changes.
When Sokal revealed that his paper was a practical joke, the media went wild—or as wild, at least, as the media has ever gone over an academic prank. By successfully aping the methods and conventions of postmodern cultural analysis, and using them to serve intentionally ridiculous ends, Sokal had, for many in the public, exposed once and for all how unsound those methods and conventions were.
As a critical theorist working at the intersection of Continental philosophy, psychoanalysis, and feminist and queer theory, I make observations about human life that are speculative rather than empirical. That may explain why my definition of character pertains to what is least tangible, least intelligible about our being, including the inchoate frequencies of desire that sometimes cause us to behave in ways that work against our rational understanding of how our lives are supposed to turn out.
If identity captures something about the relatively polished social persona we present to the world, then character—in my view—captures something about the wholly idiosyncratic and potentially rebellious energies that, every so often, break the facade of that persona. From this perspective, our character leaps forth whenever we do something “crazy,” such as suddenly dissolving a committed relationship or leaving a promising career path. At such moments, what is fierce and unapologetic about us undermines our attempts to lead a “reasonable” life, causing us to follow an inner directive that may be as enigmatic as it is compelling. We may not know why we feel called to a new destiny, but we sense that not heeding that call will stifle what is most alive within us.
Text by Mari Ruti at The Chronicle Review. Continue THERE
Portuguese designer Susana Soares has developed a device for detecting cancer and other serious diseases using trained bees. The bees are placed in a glass chamber into which the patient exhales; the bees fly into a smaller secondary chamber if they detect cancer.
Scientists have found that honey bees – Apis mellifera – have an extraordinary sense of smell that is more acute than that of a sniffer dog and can detect airborne molecules in the parts-per-trillion range.
Bees can be trained to detect specific chemical odors, including the biomarkers associated with diseases such as tuberculosis, lung, skin and pancreatic cancer.
It can be unsettling to contemplate the unlikely nature of your own existence, to work backward causally and discover the chain of blind luck that landed you in front of your computer screen, or your mobile, or wherever it is that you are reading these words. For you to exist at all, your parents had to meet, and that alone involved quite a lot of chance and coincidence. If your mother hadn’t decided to take that calculus class, or if her parents had decided to live in another town, then perhaps your parents never would have encountered one another. But that is only the tiniest tip of the iceberg. Even if your parents made a deliberate decision to have a child, the odds of your particular sperm finding your particular egg are one in several billion. The same goes for both your parents, who had to exist in order for you to exist, and so already, after just two generations, we are up to one chance in 10²⁷. Carrying on in this way, your chance of existing, given the general state of the universe even a few centuries ago, was almost infinitesimally small. You and I and every other human being are the products of chance, and came into existence against very long odds.
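The two-generation arithmetic in the passage can be sketched in a few lines. The per-conception figure below is a hedged placeholder standing in for the text’s “one in several billion”; the point is the compounding, not the exact number.

```python
# Assumption: each conception had roughly a 1-in-a-billion chance of
# producing that particular person (a stand-in for "several billion").
per_conception = 1 / 1e9

# After two generations there are three such conceptions to account for:
# yours, your mother's, and your father's. Independent odds multiply.
after_two_generations = per_conception ** 3
print(after_two_generations)  # on the order of 10**-27
```

Each additional generation multiplies in more factors of this size, which is why the odds collapse toward the infinitesimal so quickly.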
Excerpt from an article written by Tim Maudlin at Aeon. Continue THERE
A few years ago, cognitive scientist Duje Tadin and his colleague Randolph Blake decided to test blindfolds for an experiment they were cooking up.
They wanted an industrial-strength blindfold to make sure volunteers for their work wouldn’t be able to see a thing. “We basically got the best blindfold you can get.” Tadin tells Shots. “It’s made of black plastic, and it should block all light.”
Tadin and Blake pulled one on just to be sure and waved their hands in front of their eyes. They didn’t expect to be able to see, yet both of them felt as if they could make out the shadowy outlines of their arms moving.
Being scientists, they wondered what was behind the spooky phenomenon. “We knew there wasn’t any visual input there,” Tadin says. They figured their minds were instinctively filling in images where there weren’t any.
After conducting several experiments involving computerized eye trackers, they proved themselves right. Between 50 and 75 percent of the participants in their studies showed an eerie ability to “see” their own bodies moving in total darkness. The research, put together by scientists at the University of Rochester and Vanderbilt University, is published in the journal Psychological Science.
How were they so sure? “The only way you can produce smooth eye movements is if you’re following a target,” Tadin tells Shots. When our eyes aren’t tracking something very specific, they tend to jerk around randomly. “If you just try to make your eyes move smoothly, you can’t do it.” The researchers used this knowledge to test whether people could really distinguish their hand movements in the dark.
Scientists have found a new way to grow hair, one that they say may lead to better treatments for baldness. So far, the technique has been tested only in mice, but it has managed to grow hairs on human skin grafted onto the animals. If the research pans out, the scientists say, it could produce a treatment for hair loss that would be more effective and useful to more people than current remedies like drugs or hair transplants.
Present methods are not much help to women, but a treatment based on the new technique could be, the researchers reported Monday in Proceedings of the National Academy of Sciences.
Currently, transplants move hair follicles from the back of the head to the front, relocating hair but not increasing the amount. The procedure can take eight hours, and leave a large scar on the back of the head. The new technique would remove a smaller patch of cells involved in hair formation from the scalp, culture them in the laboratory to increase their numbers, and then inject them back into the person’s head to fill in bald or thinning spots. Instead of just shifting hair from one spot to another, the new approach would actually add hair.
The senior author of the study is Angela Christiano, a hair geneticist and dermatology professor at Columbia University Medical Center in New York, who has become known for her creative approach to research. Dr. Christiano’s interest in the science of hair was inspired in part by her own experience early in her career with a type of hair loss called alopecia areata. She has a luxuriant amount of hair in the front of her head, but periodically develops bald spots in the back. The condition runs in her family.
Excerpt from an article written by Denise Grady at NYT. Continue THERE
A picture taken on April 13, 2012 and released by the Tsuji Lab Research Institute for Science and Technology of the Tokyo University of Science shows a hairless mouse with black hair on its back at the laboratory in Noda, Chiba Prefecture.
Ian Tibbetts, 43, who first damaged his eye in an industrial accident when scrap metal ripped his cornea in six places, had his sight restored by the radical operation, chronicled in the new BBC documentary The Day I Got My Sight Back.
The surgery allowed Mr Tibbetts to see his four-year-old twin sons, Callum and Ryan, for the first time, a moment he describes as “ecstasy”.
The procedure, called osteo-odonto-keratoprosthesis, or OOKP, was conducted by ophthalmic surgeon Christopher Liu at the Sussex Eye Hospital in Brighton, Sussex. Mr Tibbetts and his wife Alex agreed to the revolutionary surgery after all other options had failed, leaving Mr Tibbetts depressed and out of work.
The complex surgery is a two-part procedure. First, the tooth and part of the jaw are removed, and a lens is inserted into the tooth using a drill. The tooth and lens are then implanted under the eye socket. After a few months, once the tooth has grown tissues and developed a blood supply, comes the second step: part of the cornea is sliced open and removed and the tooth is stitched into the eye socket. Since the tooth is the patient’s own tissue, the body does not reject it.