Microelectronics and Rare Earth Elements Sectors

Rodeo

Contributor
Moderator
DefenceHub Diplomat
Messages
1,330
Reactions
31 5,069
Nation of residence
Turkey
Nation of origin
Turkey


Keep in mind this is still made with DUV machines, with a sort of tinkering to reach the limits of what DUV can offer, so it's not as efficient as EUV-made 7nm chips. The transistor size the video mentions is often a marketing gimmick: what Nvidia calls 10nm, Intel can call 14nm, and what Intel calls 7nm, Nvidia calls 5nm, because the criteria they use for measurement differ. Another crucial point is that transistor size, while the most important factor in a chip's performance and efficiency, is not the only one: manufacturing know-how in limiting power leakage, enhancing switching speeds, and reducing interconnect resistance and capacitance all matter. For power consumption there are also techniques independent of transistor size, such as DVFS (dynamic voltage and frequency scaling). All in all, this Huawei chip was not expected by the West, but it doesn't mean China has closed the gap by that much. They are still about ten years behind, as the performance, techniques and machines used are more than a decade out of date, but it still shows an unexpected speed of development.
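To make the DVFS point concrete: dynamic switching power in CMOS scales roughly as P = C · V² · f, so lowering the supply voltage together with the clock frequency saves power superlinearly. A minimal sketch with made-up numbers (the capacitance and voltages are illustrative, not from any real chip):

```python
# Dynamic power of switching CMOS logic: P = C * V^2 * f.
# Dropping voltage and frequency together (DVFS) saves power
# superlinearly because of the V^2 term. Numbers are hypothetical.

def dynamic_power(c_farads, v_volts, f_hertz):
    """Classic dynamic-power model for switching CMOS logic."""
    return c_farads * v_volts**2 * f_hertz

nominal = dynamic_power(1e-9, 1.0, 2.0e9)   # 1 nF switched, 1.0 V, 2 GHz
scaled  = dynamic_power(1e-9, 0.8, 1.5e9)   # DVFS point: 0.8 V, 1.5 GHz

print(f"nominal: {nominal:.2f} W, scaled: {scaled:.2f} W")
print(f"power saved: {1 - scaled / nominal:.0%}")  # ~52% saved for 25% less clock
```

The point of the sketch: a 25% frequency cut paired with a 20% voltage cut roughly halves dynamic power, which is why DVFS helps efficiency independently of the process node.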
It's not that I disagree with the professor @Bogeyman quoted. China has been sanctioned heavily and has to develop the entire supply chain itself. This is a humongous undertaking, and the most important element, imho, is the lithography machines. I hope their research proves fruitful and we see fully vertically-integrated chip manufacturing that drives the price-gouging NVIDIA out of the market. NVIDIA's cupidity has to end, and the Chinese are the only hope; maybe then I can run my sidekick AI on my GPU clusters at home someday.
 

Nilgiri

Experienced member
Moderator
Aviation Specialist
Messages
10,108
Reactions
126 20,522
Nation of residence
Canada
Nation of origin
India
It's not that I disagree with the professor @Bogeyman quoted. China has been sanctioned heavily and has to develop the entire supply chain itself. This is a humongous undertaking, and the most important element, imho, is the lithography machines. I hope their research proves fruitful and we see fully vertically-integrated chip manufacturing that drives the price-gouging NVIDIA out of the market. NVIDIA's cupidity has to end, and the Chinese are the only hope; maybe then I can run my sidekick AI on my GPU clusters at home someday.

Cost regarding that can only change if PRC manages to convince the world to adopt the yuan far more widely, so pricing can even attempt to bypass the USD (and the vast seigniorage the US has here by default).

Problem for PRC is that its total debt level is already at 300% of GDP (this is a huge problem showing up in real estate for a reason), and that is all part of the reason PRC holds onto US forex (just like Xi Jinping sending his daughter to Harvard), against what it tries to broadcast instead.

Then there is the problem of where the value addition sits. Of the 13 billion PRC earns from the world in IP (while still paying 45 billion to the world for their IP), about 10 billion came from Huawei-related comms. That is, there are huge tech tiers in actually operationalising chips into products, compared to the component apex itself. These human intellectual services related to the components are also far more sanction-proof in the end, as they can be applied or licensed to other components (and their price levels defended in other countries, etc.).

PRC is lagging here, as its ratio of papers published to IP earnings shows, on top of the larger domestic fiscal problems PRC has created for itself with its overwrought approach to statist control and bureaucracy.
 

fushkee

Committed member
Messages
193
Reactions
6 300
Nation of residence
Qatar
Nation of origin
Turkey
Türkiye should invest in projects like this that produce chips. I know it requires a huge budget, but we should start with the Çakil project and develop it with new investments.
 

Bogeyman 

Experienced member
Professional
Messages
9,216
Reactions
68 31,326
Website
twitter.com
Nation of residence
Turkey
Nation of origin
Turkey

 

Bogeyman 

Experienced member
Professional
Messages
9,216
Reactions
68 31,326
Website
twitter.com
Nation of residence
Turkey
Nation of origin
Turkey

US proposes restrictions for investments in Chinese tech, AI


The United States Department of the Treasury has fleshed out a proposed rule that would restrict and monitor US investments in China for artificial intelligence, computer chips and quantum computing.

The fleshed-out draft rule, issued on Friday, stems from President Joe Biden’s August executive order regarding the access that “countries of concern” have to American dollars to fund advanced technologies that could enhance those nations’ military, intelligence, surveillance and cyber-capabilities. The order identified China, Hong Kong and Macau as countries of concern.

The Biden administration has sought to stymie the development of technologies by China, the world’s second largest economy, that could give it a military edge or enable it to dominate emerging sectors such as electric vehicles (EVs).

In addition to the proposed rule, Biden, a Democrat, has also placed a stiff tariff on Chinese EVs, an issue with political implications as Biden and his Republican presidential opponent Donald Trump are both trying to show voters who can best stand up to China, a geopolitical rival and major trading partner.

The proposed rule outlines the required information that US citizens and permanent residents must provide when engaging in transactions in this area as well as what would be considered a violation of the restrictions.

It specifically would prohibit American investors from funding AI systems in China that could be used for weapons targeting, combat and location tracking, among other military applications, according to a senior Treasury official who previewed the rule for reporters on the condition of anonymity.

The US Treasury is seeking comment on the proposal through August 4 and after that is expected to issue a final rule.

Biden administration officials, including Treasury Secretary Janet Yellen, have insisted they have no interest in “decoupling” from China – however, tensions between the two nations have increased in recent years.

In February 2023, the US military shot down a suspected Chinese spy balloon off the US East Coast after it traversed sensitive military sites across North America, and China threatened repercussions.

Since then, incidents between the two nations based on national security concerns have regularly occurred.

For instance, Biden in May issued an order blocking a Chinese-backed cryptocurrency mining firm from owning land near a Wyoming nuclear missile base, calling its proximity to the base a “national security risk”.

 

Bogeyman 

Experienced member
Professional
Messages
9,216
Reactions
68 31,326
Website
twitter.com
Nation of residence
Turkey
Nation of origin
Turkey

China: World’s 1st light-based AI chip beats NVIDIA H100 in energy efficiency

A team of scientists from Beijing has announced a groundbreaking advancement in artificial intelligence (AI) technology with the development of the world’s first fully optical AI chip.
This innovative chip, known as Taichi-II, represents a significant leap forward in both efficiency and performance, surpassing even the renowned NVIDIA Corp. NVDA H100 GPU in energy efficiency.
The research team, led by professors Fang Lu and Dai Qionghai from Tsinghua University, unveiled their findings on Wednesday.

A leap beyond: The Taichi-II chip’s superiority

The Taichi-II chip represents a substantial advancement from its predecessor, the Taichi chip, which had already set impressive records. Earlier this year, the researchers announced that the original Taichi chip had exceeded the energy efficiency of NVIDIA’s H100 GPU by over a thousand times, as reported by the South China Morning Post (SCMP).
Now, the Taichi-II chip has further elevated this benchmark, showcasing superior performance across various scenarios.
The study led by Professors Fang Lu and Dai Qionghai highlights Taichi-II’s capability to transform AI training and modeling. Unlike traditional methods that rely on electronic computers for training, the Taichi-II leverages optical processes, making it more efficient and significantly enhancing performance.
In practical terms, the Taichi-II chip has demonstrated remarkable advancements in several areas. It has expedited the training of optical networks containing millions of parameters by an order of magnitude and improved the accuracy of classification tasks by 40 percent.
In complex imaging scenarios, its energy efficiency in low-light conditions has improved by six orders of magnitude.

Innovative approach: FFM learning

The development of the Taichi-II chip is marked by its use of a novel approach called fully forward mode (FFM) learning. This technique allows for a computer-intensive training process to be conducted directly on the optical chip, enabling parallel processing of machine learning tasks.
Xue Zhiwei, lead author of the study and a doctoral student, emphasized that this architecture supports high-precision training and is well-suited for large-scale network training.
“Our research envisions a future where these chips form the foundation of optical computing power for AI model construction,” Fang Lu stated.
The FFM learning method capitalizes on high-speed optical modulators and detectors, which could potentially outperform GPUs in accelerated learning scenarios. This innovation opens new possibilities for optical computing, moving it from theoretical concepts to practical, large-scale applications.
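The FFM idea of training with forward passes only has a rough software analogue in forward-gradient methods, where the gradient is estimated from a single directional derivative along a random tangent instead of via backpropagation. The sketch below illustrates that analogue on a hypothetical toy loss; it is not the paper's actual optical method:

```python
import random

# "Fully forward mode" in the Taichi-II paper runs training optically,
# using forward passes only. A loosely analogous software idea is the
# forward gradient: sample a random tangent v, measure the directional
# derivative of the loss along v with two forward evaluations, and use
# (v . grad f) * v as an unbiased gradient estimate. Toy loss below is
# hypothetical; nothing here comes from the paper itself.

random.seed(0)

def loss(w):
    # Toy quadratic loss with its minimum at w = (1, 1, 1, 1).
    return sum((x - 1.0) ** 2 for x in w)

def forward_gradient(f, w, eps=1e-6):
    v = [random.gauss(0.0, 1.0) for _ in w]      # random tangent direction
    up = [x + eps * t for x, t in zip(w, v)]
    dn = [x - eps * t for x, t in zip(w, v)]
    dv = (f(up) - f(dn)) / (2 * eps)             # directional derivative v . grad f
    return [dv * t for t in v]                   # unbiased estimate of grad f

w = [0.0] * 4
for _ in range(500):                             # plain SGD, forward passes only
    g = forward_gradient(loss, w)
    w = [x - 0.02 * gx for x, gx in zip(w, g)]

print([round(x, 3) for x in w])                  # each coordinate converges to ~1.0
```

The estimator is noisier than a true backward pass, but it needs only forward evaluations, which is what makes the all-optical training loop attractive in the first place.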


Implications and future prospects

The timing of Taichi-II’s debut is particularly notable. As the US has imposed restrictions on China’s access to advanced GPUs for AI training, the Taichi-II chip offers a viable alternative that could help mitigate these limitations.

Additionally, the performance of Taichi-II comes amid reports that NVIDIA’s high-tech AI chips may be making their way into the hands of Chinese military officials, potentially influencing China’s technological advancements.


Fully forward mode training for optical neural networks


 

Bogeyman 

Experienced member
Professional
Messages
9,216
Reactions
68 31,326
Website
twitter.com
Nation of residence
Turkey
Nation of origin
Turkey

World’s fastest memory writes 25 billion bits per sec, 10,000× faster than current tech


A research team at Fudan University has built the fastest semiconductor storage device ever reported, a non‑volatile flash memory dubbed “PoX” that programs a single bit in 400 picoseconds (4 × 10⁻¹⁰ s), a per-bit rate of roughly 2.5 billion operations per second. The result, published today in Nature, pushes non‑volatile memory to a speed domain previously reserved for the quickest volatile memories and sets a benchmark for data‑hungry AI hardware.

Smashing the speed ceiling

Conventional static and dynamic RAM (SRAM, DRAM) write data in 1–10 nanoseconds but lose everything when power is cut. Flash chips, by contrast, hold data without power yet typically need micro‑ to milliseconds per write — far too slow for modern AI accelerators that shunt terabytes of parameters in real time.
The Fudan group, led by Prof. Zhou Peng at the State Key Laboratory of Integrated Chips and Systems, re‑engineered flash physics by replacing silicon channels with two‑dimensional Dirac graphene and exploiting its ballistic charge transport.
By tuning the “Gaussian length” of the channel, the team achieved two‑dimensional super‑injection, which is an effectively limitless charge surge into the storage layer that bypasses the classical injection bottleneck.
“Using AI‑driven process optimization, we drove non‑volatile memory to its theoretical limit,” Zhou told Xinhua, adding that the feat “paves the way for future high‑speed flash memory.”

One billion cycles in a blink

Co‑author Liu Chunsen likens the breakthrough to shifting from a U‑disk that writes 1,000 times per second to a chip that fires 1 billion times in the blink of an eye. The previous world record for non‑volatile flash programming speed was about two million operations per second.
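As a sanity check on the quoted speeds, the reciprocal of each program latency gives a single-cell write rate. The latencies below are the article's order-of-magnitude figures, not measurements:

```python
# Per-cell write rates implied by the latencies quoted in the article:
# 1 / (program time) = single-cell writes per second. Order-of-magnitude
# figures from the text, not measured data.

latencies_s = {
    "SRAM/DRAM (volatile)": 1e-9,     # fast end of the 1-10 ns range
    "conventional flash":   1e-6,     # microsecond-class program time
    "PoX (Fudan)":          400e-12,  # 400 picoseconds per bit
}

for name, t in latencies_s.items():
    print(f"{name:22s} {1.0 / t:10.3e} writes/s per cell")

# Note: 1 / 400 ps = 2.5e9, i.e. 2.5 billion single-cell writes per
# second; larger headline figures imply many cells programmed in parallel.
```

This also shows why the comparison to flash is the dramatic one: PoX sits three to four orders of magnitude above a microsecond-class flash cell, while beating even the fast end of SRAM/DRAM latencies.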
Because PoX is non‑volatile, it retains data with no standby power, a critical property for next‑generation edge AI and battery‑constrained systems. Combining ultra‑low energy with picosecond write speeds could remove the long‑standing memory bottleneck in AI inference and training hardware, where data shuttling, not arithmetic, now dominates power budgets.

Industrial and strategic implications

Flash memory remains a cornerstone of global semiconductor strategy thanks to its cost and scalability. Fudan’s advance, reviewers say, offers a “completely original mechanism” that may disrupt that landscape.
If mass‑produced, PoX‑style memory could eliminate separate high‑speed SRAM caches in AI chips, slashing area and energy. It could enable instant‑on, low‑power laptops and phones, and support database engines that hold entire working sets in persistent RAM.
It could also strengthen China’s domestic drive to secure leadership in foundational chip technologies. The team did not disclose endurance figures or fabrication yield, but the graphene channel suggests compatibility with existing 2D‑material processes that global fabs are already exploring. “Our breakthrough can reshape storage technology, drive industrial upgrades and open new application scenarios,” Zhou said.

What happens next

Fudan engineers are now scaling the cell architecture and pursuing array‑level demonstrations. Commercial partners have not been named, but Chinese foundries are racing to integrate 2D materials with mainstream CMOS lines.

If successful, PoX could come in as a new class of ultra‑fast, ultra‑green memories that meet the swelling appetite of large‑language‑model accelerators, finally giving AI hardware a storage medium that keeps pace with its logic.


Prof. Zhou Peng, Fudan University

Subnanosecond flash memory enabled by 2D-enhanced hot-carrier injection

Abstract

The pursuit of non-volatile memory with program speeds below one nanosecond, beyond the capabilities of non-volatile flash and high-speed volatile static random-access memory, remains a longstanding challenge in the field of memory technology [1]. Utilizing fundamental physics innovation enabled by advanced materials, a series of emerging memories [2–5] are being developed to overcome the speed bottleneck of non-volatile memory. As the most extensively applied non-volatile memory, the speed of flash is limited by the low efficiency of the electric-field-assisted program, with reported speeds [6–10] much slower than sub-one nanosecond. Here we report a two-dimensional Dirac graphene-channel flash memory based on a two-dimensional-enhanced hot-carrier-injection mechanism, supporting both electron and hole injection. The Dirac channel flash shows a program speed of 400 picoseconds, non-volatile storage and robust endurance over 5.5 × 10⁶ cycles. Our results confirm that the thin-body channel can optimize the horizontal electric-field (Ey) distribution, and the improved Ey-assisted program efficiency increases the injection current to 60.4 pA μm⁻¹ at |VDS| = 3.7 V. We also find that the two-dimensional semiconductor tungsten diselenide has two-dimensional-enhanced hot-hole injection, but with different injection behaviour. This work demonstrates that the speed of non-volatile flash memory can exceed that of the fastest volatile static random-access memory with the same channel length.


In-memory ferroelectric differentiator

Abstract

Differential calculus is the cornerstone of many disciplines, spanning the breadth of modern mathematics, physics, computer science, and engineering. Its applications are fundamental to theoretical progress and practical solutions. However, the current state of digital differential technology often requires complex implementations, which struggle to meet the extensive demands of ubiquitous edge computing in the intelligence age. To face these challenges, we propose an in-memory differential computation that capitalizes on the dynamic behavior of ferroelectric domain reversal to efficiently extract information differences. This strategy produces differential information directly within the memory itself, which considerably reduces the volume of data transmission and operational energy consumption. We successfully illustrate the effectiveness of this technique in a variety of tasks, including derivative function solving, moving-object extraction and image discrepancy identification, using an in-memory differentiator constructed with a crossbar array of 1600-unit ferroelectric polymer capacitors. Our research offers an efficient hardware analogue differential computing, which is crucial for accelerating mathematical processing and real-time visual feedback systems.
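At its core the differentiator extracts differences, between neighbouring samples of a function or between successive image frames, inside the memory array. A plain-software sketch of those two operations (finite differences and frame differencing; the sample data and threshold are hypothetical, and this is a numerical analogue, not the crossbar hardware):

```python
# Software analogue of the two tasks in the abstract: derivative solving
# via finite differences, and moving-object extraction via frame
# differencing. The ferroelectric crossbar does these in-memory; here we
# just show the math. All inputs are made-up examples.

def derivative(samples, dx):
    """Central finite difference df/dx over uniformly spaced samples."""
    return [(samples[i + 1] - samples[i - 1]) / (2 * dx)
            for i in range(1, len(samples) - 1)]

def frame_diff(frame_a, frame_b, threshold=0):
    """Pixel-wise difference of two frames; nonzero cells mark motion."""
    return [[1 if abs(b - a) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

# f(x) = x^2 sampled on [0, 1]; the derivative should be ~2x.
xs = [i * 0.1 for i in range(11)]
print(derivative([x * x for x in xs], 0.1))   # ≈ [0.2, 0.4, ..., 1.8]

a = [[0, 0, 0], [0, 5, 0]]                    # toy "frame" with one object
b = [[0, 0, 0], [0, 0, 5]]                    # object moved one cell right
print(frame_diff(a, b))                       # nonzero cells mark the motion
```

The hardware advantage claimed in the abstract is that these differences never leave the array, so the data-movement and energy cost of shuttling full frames to a digital processor is avoided.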


@Sanchez @TR_123456 @Nilgiri @Yasar_TR @Strong AI
 

TheInsider

Experienced member
Professional
Messages
4,256
Solutions
1
Reactions
40 15,214
Nation of residence
Turkey
Nation of origin
Turkey
No need to hype it until we see affordable scalability and mass-production capability, but this is the dream of every PC enthusiast: RAM and SSD as a single component.
 
