hardware26

joined 2 years ago
[–] [email protected] 19 points 1 year ago (2 children)

If you knew about the birds and the bees, you would know that this wasn't random.

[–] [email protected] 4 points 1 year ago

Immerse yourself into technology. Become the screen.

[–] [email protected] 2 points 1 year ago (1 children)

I don't know what it is. It just reminded me of ATMEL8051 and I wanted to share.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (4 children)

I used the Atmel 8051 in college. It fits nicely on a breadboard and teaches you how to use assembly and work wonders with 512 bytes (yes, bytes) of RAM, if I remember the number correctly. I think half of that RAM was even reserved.

[–] [email protected] 5 points 1 year ago

To be fair, 10^(0.000000000000000000001x) is also exponential growth. If the status quo is x=0 and removing the entire management means x=10, then even the maximum we can get is very little improvement. Growth can be "exponential" and still not amount to much.
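
To put a number on it (purely illustrative, using the coefficient above): going from x=0 to x=10 multiplies the output by

$$
\frac{10^{10^{-21}\cdot 10}}{10^{10^{-21}\cdot 0}} \;=\; 10^{10^{-20}} \;=\; e^{10^{-20}\ln 10} \;\approx\; 1 + 2.3\times10^{-20},
$$

which is formally exponential growth but amounts to an improvement of roughly $2\times10^{-18}$ percent.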

[–] [email protected] 21 points 1 year ago (2 children)

"Exponentially" is not synonymous to "a lot". Exponent is a mathematical term and exponential growth requires at least two variables exponentially related to each other. For this to be possibly exponential growth a) progress should be quantifiable (removing management and treating workers well should be quantized somehow) b) performance should be quantifiable and measured at a bunch of progress points (if you have only two measurements it can as well be linear) c) performance should be or can be modeled as a an exponential function of progress in removing management and treating workers well.

[–] [email protected] 5 points 1 year ago (1 children)

I wish we had an active aoe2 community.

[–] [email protected] 1 points 1 year ago (1 children)

Leakage resistance also contributes to dissipation factor, and the simple formula omits this, which is why the ESR calculated from the dissipation factor comes out larger. As you said, if one is more interested in the heat generated, the dissipation factor matters more (leakage also dissipates power). If one is interested in the decoupling and filtering performance of the capacitor, ESR matters more. And all of these depend on temperature and capacitor bias voltage as well :)
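
A quick sketch of why the two numbers differ, using the usual first-order model (series resistance $R_{ESR}$, capacitance $C$, leakage modeled as a parallel resistance $R_{leak}$; the symbols are mine, not from any datasheet):

$$
\tan\delta \;=\; \omega C\,R_{ESR} \;+\; \frac{1}{\omega C\,R_{leak}},
\qquad\text{so}\qquad
\frac{\tan\delta}{\omega C} \;=\; R_{ESR} \;+\; \frac{1}{\omega^{2}C^{2}R_{leak}} \;\ge\; R_{ESR}.
$$

The simple formula $ESR = DF/(\omega C)$ therefore folds the leakage term into the result, which is why it overestimates the true series resistance, most noticeably at low frequencies where the leakage term dominates.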

[–] [email protected] 25 points 1 year ago

I don't think this will work well, and others have already explained why, but thanks for using this community to pitch your idea. We should have more of these discussions here rather than CEO news and tech gossip.

[–] [email protected] 1 points 1 year ago

I guess it can happen if you start moving the bottle forward after you start pouring the water.

[–] [email protected] 4 points 1 year ago

We should stop calling these titles confusing and call them what they are: plain wrong. This is the title of the original article. People who cannot write grammatically correct titles are writing entire articles.

[–] [email protected] 3 points 1 year ago (1 children)

Grammatically speaking, doesn't it really say that Sunak warned others? I am confused.

 

One of the biggest shortcomings of silicon is that it can only be made so thin because its material properties are fundamentally limited to three dimensions [3D]. For this reason, two-dimensional [2D] semiconductors—so thin as to have almost no height—have become an object of interest to scientists, engineers and microelectronics manufacturers.

Thinner chip components would provide greater control and precision over the flow of electricity in a device, while lowering the amount of energy required to power it. A 2D semiconductor would also contribute to keeping the surface area of a chip to a minimum, lying in a thin film atop a supporting silicon device.

But until recently, attempts to create such a material have been unsuccessful.

Now, researchers at the University of Pennsylvania School of Engineering and Applied Science have grown a high-performing 2D semiconductor to a full-size, industrial-scale wafer. In addition, the semiconductor material, indium selenide (InSe), can be deposited at temperatures low enough to integrate with a silicon chip.

"For the purposes of an advanced computing technology, the chemical structure of 2D InSe needs to be exactly 50:50 between the two elements. The resulting material needs a uniform chemical structure over a large area to work," says Song.

The team achieved this groundbreaking purity using a growth technique called "vertical metal-organic chemical vapor deposition" (MOCVD). Previous research had attempted to introduce the indium and selenium in equal quantities and at the same time. Song demonstrated, however, that this method was the source of undesirable chemical structures in the material, producing molecules with varying ratios of each element. MOCVD, by contrast, works by sending the indium in a continuous stream while introducing the selenium in pulses.

 

“The key takeaway here is that the people writing these guidelines try to give as much information as possible,” Reaves says. “That’s great, in theory. But the writers don’t prioritize the advice that’s most important. Or, more specifically, they don’t deprioritize the points that are significantly less important. And because there is so much security advice to include, the guidelines can be overwhelming – and the most important points get lost in the shuffle.”

In other words, the guideline writers are compiling security information, rather than curating security information for their readers.

Drawing on what they learned from the interviews, the researchers developed two recommendations for improving future security guidelines.

First, guideline writers need a clear set of best practices on how to curate information so that security guidelines tell users both what they need to know and how to prioritize that information.

Second, writers – and the computer security community as a whole – need key messages that will make sense to audiences with varying levels of technical competence.

“Look, computer security is complicated,” Reaves says. “But medicine is even more complicated. Yet during the pandemic, public health experts were able to give the public fairly simple, concise guidelines on how to reduce our risk of contracting COVID. We need to be able to do the same thing for computer security.”

 

As solder bump pitches shrink, several issues arise. Reduced bump height and surface area for bonding make it increasingly difficult to establish reliable electrical connections, necessitating precise manufacturing processes to avoid errors. Critical co-planarity and surface roughness become paramount, as even minor irregularities can compromise successful bonding.

To overcome these issues, Cu-Cu hybrid bonding technology steps in as a game-changer. This innovative technique involves embedding metal contacts between dielectric materials and using heat treatment for solid-state diffusion of copper atoms, thereby eliminating the bridging problem associated with soldering.

The advantages of hybrid bonding over flip-chip soldering are obvious. Firstly, it enables ultra-fine pitch and small contact sizes, facilitating high I/O counts. This is critical in modern semiconductor packaging, where devices require a growing number of connections to meet performance demands. Secondly, unlike flip-chip soldering, which often relies on underfill materials, Cu-Cu hybrid bonding eliminates the need for underfill, reducing parasitic capacitance, resistance and inductance, as well as thermal resistance. Lastly, the reduced thickness of the bonded connections in Cu-Cu hybrid bonding, nearly eliminating the 10 to 30 micron thickness of solder balls in flip-chip technology, opens up new possibilities for more compact and efficient semiconductor packages.

 

cross-posted from: https://discuss.tchncs.de/post/3306215

Although you are probably not aware of them, dozens of electronic control units (ECUs) — printed circuit boards (PCBs) in metal or plastic housings — exist in your car to control and monitor the operation and safety of your vehicle’s many control systems. These units must work for the lifetime of your car, during which time they are subjected to many heating and cooling cycles. The most obvious cycle occurs when you start your car after it has cooled at night. It heats up as the car runs and then cools again when you shut it off. That’s one “ambient” temperature cycle.

Additional so-called “active” thermal cycles can occur locally within specific electronic components on the PCB. For instance, a MOSFET transistor draws a lot of current and heats up the PCB near its location, causing additional thermal cycling. These complex temperature distributions can cause local thermomechanical strain because differences in temperature across the PCB result in differential expansion of the board. Because the board is constrained by its housing, this can lead to bending of the board, putting additional strain on the solder joints that connect the components to the board.

The widely used power-law-based approach — simulating only a few cycles and extrapolating a prognosis of solder joint lifetime — has many shortcomings: it captures neither absolute lifetime nor the damage-driven load relocation and its nonlinear evolution. Youssef Maniar and Marta Kuczynska, engineers at Robert Bosch GmbH in Germany, have developed an accurate nonlinear damage model able to predict the absolute lifetime of solder connections. The problem they faced is that absolute lifetime prediction involves simulating every cycle imposed on the components, so the computational effort is extensive. Then, about two years ago, they read an academic paper that described a way to “jump” over some cycles to accelerate the simulation.

The mathematics behind jumping over a large number of simulated thermomechanical cycles to dramatically accelerate the simulation without sacrificing accuracy is involved, but the software essentially looks at the slope, or “gradient,” of certain solution variables (e.g., stress) versus time on the fly to determine when it can skip the next n cycles. The maximum value of n must be defined by the simulation engineer before the run. The simulation engineer also inputs other parameters beforehand to impose limits on the software and optimize the run.
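
A minimal sketch of the cycle-jumping idea described above (my own illustrative logic, not Bosch's actual implementation): simulate cycles one at a time, watch how fast a damage-related quantity is changing, and once its per-cycle increment is nearly constant, extrapolate over a block of up to n_max cycles instead of simulating each one.

```python
def run_with_cycle_jumping(simulate_one_cycle, state, total_cycles,
                           n_max=50, gradient_tol=0.02):
    """Illustrative cycle-jumping loop.

    simulate_one_cycle(state) -> (new_state, damage_increment)
    Jumping is allowed only while the per-cycle damage increment is
    almost constant (relative change below gradient_tol).
    """
    cycle = 0
    damage = 0.0
    prev_increment = None
    while cycle < total_cycles:
        state, increment = simulate_one_cycle(state)
        damage += increment
        cycle += 1

        if prev_increment and prev_increment > 0:
            rel_change = abs(increment - prev_increment) / prev_increment
            if rel_change < gradient_tol:
                # Damage evolution looks locally linear: extrapolate over a
                # block of cycles instead of simulating each one.
                # (A real solver would also extrapolate the internal state.)
                jump = min(n_max, total_cycles - cycle)
                damage += jump * increment
                cycle += jump
        prev_increment = increment

    return damage
```

The engineer-supplied inputs the article mentions map onto n_max (the largest allowed jump) and gradient_tol (how flat the evolution has to be before a jump is taken).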

 

Mine is playing AOE2 on easiest (or standard if I want a bit of a challenge) against 3 bots. I just build my economy, wall up (and laugh at the enemy soldiers attacking my walls in vain), reach the Imperial Age and attack once my army reaches the population limit. I also send 104 in the chat so they don't surrender and I can enjoy razing all their buildings one by one. If any of them builds a castle, even more fun: I build a trebuchet and watch it raze the castle from a safe distance. If there is sea, after I am done with the land I build 3 docks, do the research, build a navy and hunt down ships around the unexplored sea. It is fun, satisfying and relaxing.

What is yours?

 

cross-posted from: https://discuss.tchncs.de/post/3157319

Compared with traditional monolithic devices, the design and manufacturing process for chiplets is significantly different. The scrap costs associated with manufacturing traditional monolithic semiconductor devices are basically linear, comprising single-chip, packaging, and assembly costs.

Manufacturing processes for 2.5D/3D designs differ significantly in how scrap costs accumulate. Specifically, these costs increase geometrically from fabrication to assembly, driven by scrap costs for multiple dies, multi-chip partial assemblies, and/or full 2.5D/3D packages.

Shifting tests, either left or right, in the test process is a strategy to achieve these goals and minimize the overall manufacturing cost of 2.5D/3D components. Shift left is the ability to increase test coverage earlier in the manufacturing process (e.g., during wafer inspection and partial packaging) to maximize known good die (KGD), while reducing future packaging costs. Additional tests can also be added to the process to identify new failure types or failure modes.

However, the benefits of shift left need to be weighed against its costs. For example, increasing test intensity early in the manufacturing process can raise the proportion of known good devices, but it can also increase test costs by more than the optimizations recover, even after accounting for the resulting reduction in scrap costs.
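
To make the trade-off concrete, here is a toy cost model (all numbers and names invented for illustration, not taken from the article): each package carries one die, a defective die is caught either at wafer test (scrapping just the die) or at final test (scrapping die plus assembly), and everything spent on scrapped units is spread over the good packages.

```python
def cost_per_good_package(die_cost, assembly_cost,
                          wafer_test_cost, final_test_cost,
                          defect_rate, wafer_test_coverage):
    """Toy model with one die per package.

    Money spent on units that are later scrapped is already counted in
    'spend_per_start'; dividing by the good fraction spreads it over
    the packages that actually ship.
    """
    caught_early = defect_rate * wafer_test_coverage   # scrapped before assembly
    good_fraction = 1.0 - defect_rate                  # packages that ship

    spend_per_start = (die_cost + wafer_test_cost
                       + (1.0 - caught_early) * (assembly_cost + final_test_cost))
    return spend_per_start / good_fraction


# Baseline vs. "shift left": raise wafer-test coverage from 60% to 90%
# at the price of a more expensive wafer-level test (2 -> 6 per die).
base = cost_per_good_package(die_cost=50, assembly_cost=80,
                             wafer_test_cost=2, final_test_cost=5,
                             defect_rate=0.10, wafer_test_coverage=0.6)
left = cost_per_good_package(die_cost=50, assembly_cost=80,
                             wafer_test_cost=6, final_test_cost=5,
                             defect_rate=0.10, wafer_test_coverage=0.9)
print(f"baseline: {base:.2f}   shift-left: {left:.2f}")
```

With these made-up numbers the extra wafer-level test spend slightly outweighs the assembly scrap it avoids, which is exactly the balance the paragraph above says has to be weighed; a costlier assembly step or a cheaper wafer test tips it the other way.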

Shift right means increasing test coverage later in the manufacturing process, expanding the ability to detect defects and maintaining quality levels, with the goal of reducing costs through higher-parallelism testing.

Typically, a test item with a higher yield on wafer or mission-pattern tests, or a high-yield test that requires a longer scan test time, is an ideal candidate for shifting right. These tests can be moved to final or system-level test, or flexibly managed in between.

The goal of shifting tests to the left or right is to achieve the optimal combination of quality and yield throughout the entire manufacturing process, ultimately optimizing the overall cost of quality.

 

cross-posted from: https://discuss.tchncs.de/post/3011500

Many volume applications use FPGAs because they need in-field reconfigurability (changing standards, changing algorithms, etc.), but they want to improve their system’s competitiveness (power, size, cost). FPGAs are bulky, expensive and power hungry. Integrating eFPGA can greatly improve the economics while maintaining full reconfigurability and performance.

We’ve found with customers that a significant portion of the LUTs in their designs don’t change with reconfigurations: they are fixed buses that bring data to and from the reconfigurable core. These can be hardwired, so the number of LUTs needed in the SoC is typically half of what’s in the FPGA. There is also a lot of cost in voltage regulators for an FPGA that disappears with integration.

Typically, the cost of eFPGA is 1/10th the cost of the FPGA it replaces, but with the same speed and programmability. Power can also be cut to 1/10th, because most of an FPGA’s power goes to the power-hungry PHYs, which are mostly not needed when the eFPGA is integrated in the SoC.

 

In a study recently published in the journal Patterns, researchers demonstrate that computer algorithms often used to identify AI-generated text frequently falsely label articles written by non-native language speakers as being created by artificial intelligence. The researchers warn that the unreliable performance of these AI text-detection programs could adversely affect many individuals, including students and job applicants.
