How Twisted Graphene Became the Big Thing in Physics

The lab produced dozens of twisted bilayer graphene “devices,” as researchers call them, but none of them showed significant evidence of electron correlation.

The sudden jumps in twisted bilayer graphene – from conducting to insulating to superconducting – with just a tweak of an external electric field indicate that free electrons are slowing to a virtual halt, notes physicist Dmitri Efetov of the Institute of Photonic Sciences in Barcelona, Spain.

Equally striking, said MacDonald, is the small number of electrons that seem to be doing the heavy lifting in magic-angle twisted bilayer graphene – about one for every 100,000 carbon atoms.

MacDonald points out, for example, that some of the insulating states in twisted bilayer graphene appear to be accompanied by magnetism that arises not from the quantum spin states of the electrons, as is typically the case, but entirely from their orbital angular momentum – a theorized but never-before-observed type of magnetism.

Semiconductors and transition metals can also be deposited in twisted layers, and they are seen as good candidates for correlated physics – perhaps even better than twisted bilayer graphene.

Having burst far out into the lead of the twisted bilayer graphene field in stunning fashion, Jarillo-Herrero isn’t sitting back and waiting for others to catch up.

Whether or not such hopes ultimately pan out, for now the excitement in twisted bilayer graphene seems only to be building.

This article was summarized automatically with AI / Article-Σ ™/ BuildR BOT™.


AI developed to tackle physics problems is really good at summarizing research papers

New research from MIT and elsewhere describes an AI that can read scientific papers and generate a plain-English summary of one or two sentences.

The novel neural network, developed by the physicists along with other computer researchers, journalists, and editors, can read scientific papers and render a short, plain-English summary.

“We have been doing various kinds of work in AI for a few years now,” says Marin Soljačić, a professor of physics at MIT and co-author of the research.

“We use AI to help with our research, basically to do physics better. And as we got to be more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics – a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

The name the team gave this approach, thankfully, is much easier to wrap your head around: RUM, for rotational unit of memory. “RUM helps neural networks to do two things very well,” says Preslav Nakov, a senior scientist at the Qatar Computing Research Institute and paper co-author.
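The article doesn't spell out the mechanics, but the name suggests the core idea: update a recurrent hidden state with norm-preserving rotations rather than gated transformations, so information in the state is moved around instead of being squashed away. Below is a rough, hypothetical sketch of that idea in NumPy – a toy rotation-based recurrence, not the paper's actual equations (the function name and dimensions are illustrative assumptions):

```python
import numpy as np

def rotation_operator(a, b, eps=1e-8):
    """Orthogonal matrix that rotates vector a toward b within their shared plane."""
    n = a.shape[0]
    u = a / (np.linalg.norm(a) + eps)
    w = b - (u @ b) * u                      # component of b orthogonal to a
    v = w / (np.linalg.norm(w) + eps)
    cos_t = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t ** 2))
    # Identity off the plane; a 2-D rotation by the angle between a and b inside it.
    R = np.eye(n) - np.outer(u, u) - np.outer(v, v)
    R += cos_t * (np.outer(u, u) + np.outer(v, v))
    R += sin_t * (np.outer(v, u) - np.outer(u, v))
    return R

# Toy recurrence: the hidden state is rotated toward each input embedding.
# Because rotations are orthogonal, the state's norm never decays --
# one intuition for why rotation-based memory can carry information far.
rng = np.random.default_rng(0)
h = rng.normal(size=8)
start_norm = np.linalg.norm(h)
for x in rng.normal(size=(5, 8)):
    h = rotation_operator(h, x) @ h
print(np.isclose(np.linalg.norm(h), start_norm))  # True: rotations preserve norm
```

The contrast with a standard gated unit (such as an LSTM) is that nothing here multiplies the state by values below one, so the magnitude of stored information is preserved across many steps by construction.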

As a proof-of-concept, the team ran the same research paper through a conventional neural network and through their RUM-based system, asking them to produce short summaries.

“Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.”

Hmmm, very interesting indeed.
