If you’ve dealt with reactance, you surely know the two equations for computing inductive and capacitive reactance. But unless you’ve really dug into it, you may only know the formula the way a school kid knows how to find the area of a circle. You have to have a bit of higher math to figure out why the equation is what it is. [Old Hack EE] wanted to figure out why the formulas are what they are, so he dug in and shared what he learned in a video you can see below.
The key to understanding this is simple. Reactance describes the ratio of voltage to current through the element, just like resistance. The difference is that a resistance is just a single number, while a reactance is a curve that gives you a different value at different frequencies. That’s because current and voltage are out of phase through a reactance, so it isn’t as easy as just dividing.
If you know calculus, the video will make a lot of sense. If you don’t know calculus, you might have a few moments of panic, but you can make it. If you think of frequency in Hertz as cycles per second, all the 2π you find in these equations convert Hz to “radian frequency” since one cycle per second is really 360 degrees of the sine wave in one second. There are 2π radians in a circle, so it makes sense.
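The formulas themselves are short enough to sketch in a few lines of Python (our own illustration of the standard equations, not code from the video):

```python
import math

def inductive_reactance(f_hz: float, l_henries: float) -> float:
    """X_L = 2*pi*f*L -- reactance of an inductor, in ohms."""
    return 2 * math.pi * f_hz * l_henries

def capacitive_reactance(f_hz: float, c_farads: float) -> float:
    """X_C = 1 / (2*pi*f*C) -- reactance of a capacitor, in ohms."""
    return 1 / (2 * math.pi * f_hz * c_farads)

# A 10 uF capacitor at 60 Hz is about 265 ohms...
print(round(capacitive_reactance(60, 10e-6), 1))
# ...while a 100 mH inductor at 1 kHz is about 628 ohms.
print(round(inductive_reactance(1000, 100e-3), 1))
```

Note how the 2π shows up in both: it is doing exactly the Hz-to-radians-per-second conversion described above.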
We love developing intuition about things that seem fundamental but have a lot of depth to them that we usually ignore. If you need a refresher or a jump start on calculus, it isn’t as hard as you probably think. Engineers usually use vectors or imaginary numbers to deal with reactance, and we’ve talked about that too, if you want to learn more.
From Blog – Hackaday via this RSS feed
Roughly the size of a Tic Tac container, this project packs a punch in a compact package. [Matt] sent in this beautifully documented pocket device that brings back great memories of texting on early cellphones.
The EclairM0’s firmware is written in TinyGo, a language he hadn’t used before but found perfect for a microcontroller project where storage space is tight. The 14-button input mimics early phone keypads, using multi-tapping and combo key presses to offer various functions. The small SSD1306 OLED display is another highlight. Building on an earlier CircuitPython project, [Matt] optimized the screen’s performance, speeding up its response time for a snappy user experience. The battery he picked was only 3 mm thick, but its protection circuitry added another 2 mm, so he moved that circuitry to the main PCB itself to keep the device as thin as initially planned.
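[Matt]’s firmware is in TinyGo, but the multi-tap idea itself is easy to sketch in Python; the key map and timeout below are our own assumptions, not values from his code:

```python
# Multi-tap text entry, as on early phone keypads: repeated presses of
# the same key within a timeout cycle through that key's letters.
KEYMAP = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
TIMEOUT = 1.0  # seconds between presses before a letter is committed

def decode(presses):
    """presses: list of (key, timestamp) tuples -> decoded string."""
    out = []
    last_key, count, last_time = None, 0, None
    for key, t in presses:
        if key == last_key and last_time is not None and t - last_time < TIMEOUT:
            count += 1  # same key again quickly: cycle to the next letter
        else:
            if last_key is not None:  # commit the previous letter
                letters = KEYMAP[last_key]
                out.append(letters[(count - 1) % len(letters)])
            last_key, count = key, 1
        last_time = t
    if last_key is not None:  # commit the final letter
        letters = KEYMAP[last_key]
        out.append(letters[(count - 1) % len(letters)])
    return "".join(out)

# Two taps on "4", pause, two taps on "3" spells "he"
print(decode([("4", 0.0), ("4", 0.2), ("3", 1.5), ("3", 1.7)]))
```

Real firmware would also handle combo key presses and commit a letter when the timeout expires rather than waiting for the next press.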
Weighing just 15 grams, this lightweight device runs on a SAMD21 microcontroller, which supports USB host functionality. This allows the EclairM0 to act as a keyboard or mouse, or even to host USB peripherals of its own. Housed in a 3D-printed case, the entire project is open source, with design and firmware files available on GitHub.
We love small handheld projects around here, and this well-documented, fun pocket device is no exception. If you want your own, he has a page dedicated to helping you build an EclairM0.
The Star Trek tricorder was a good example of a MacGuffin. It did anything needed to support the plot or, in some cases, conveniently couldn’t do things, also in support of the plot. We know [SirGalaxy] was thinking about the tricorder when he named the Tinycorder, but the little device has a number of well-defined features. You can see a brief video of it working below the break.
The portable device has a tiny ESP32 and a battery. The 400×240 display is handy and has low power consumption. In addition to the sensors built into the ESP32, the Tinycorder has an AS7341 light sensor, an air quality sensor, and a weather sensor. It’s an odd combination, but like its namesake, the device can do lots of unrelated things.
The whole thing goes together in a two-part printed case. This is one of those projects where you might not want an exact copy, but you very well might use it as a base to build your own firmware. Even [SirGalaxy] has plans for future developments, such as adding a buzzer and a battery indicator.
This physically reminded us of those ubiquitous component testers. That’s another multi-purpose tester that started simple and gained more features through software.
If you say that you’re “nuking” something, pretty much everyone will know that you mean you’re heating something in the microwave. It’s technically incorrect, of course, as the magnetron inside the oven emits only non-ionizing radiation, and is completely incapable of generating ionizing radiation such as X-rays. Right?
Perhaps not, as these experiments with an overdriven magnetron suggest. First off, this is really something you shouldn’t try; aside from the obvious hazards that attend any attempt to generate ionizing radiation, there are risks aplenty here. Modifying magnetrons as [SciTubeHD] did here is risky thanks to the toxic beryllium they contain, and the power supply he used, which features a DIY flyback transformer we recently featured, generates potentially dangerous voltages. You’ve been warned.
For the experiment, [SciTubeHD] stripped the magnets off a magnetron and connected his 40-kV AC power supply between the filament and the metal case of the tube. It’s not completely clear to us how this creates X-rays, but it appears to do so, judging by the distinctive glow from an intensifying screen harvested from an old medical X-ray film cassette. The light is faint, but there’s enough to see the shadows of metallic objects like keys and PCBs positioned between the tube and the screen.
Are there any practical applications for this? Probably not, especially considering the potential risks. But it’s still pretty cool, and we’re suitably impressed that magnetrons can be repurposed like this.
Recycling 3D filament is a great idea in theory, and we come across homemade filament extruders with some regularity, but they do have some major downsides when it comes to colored filaments. If you try to recycle printer waste of too many different colors, you’ll probably be left with a nondescript gray or brown filament. Researchers at Western University, however, have taken advantage of this pigment mixing to create colors not found in any commercial filament (open access paper).
They started by preparing samples of 3D printed waste in eight different colors and characterizing their spectral reflectance properties with a visible-light spectrometer. They fed this information into their SpecOptiBlend program (open source, available here), which optimizes the match between a blend of filaments and a target color. The program relies on the Kubelka-Munk theory for subtractive color mixing, which is usually used to calculate the effect of mixing paints, and minimizes the difference the human eye perceives between two colors. Once the software calculated the optimal blend, the researchers mixed the corresponding waste plastics and extruded them into a filament that generally bore a remarkably close resemblance to the target color.
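The heart of the Kubelka-Munk approach is compact enough to sketch. This is a simplified single-constant version for a single wavelength (our own illustration; SpecOptiBlend itself optimizes across the whole measured spectrum):

```python
def k_over_s(r):
    """Kubelka-Munk function: K/S = (1 - R)^2 / (2R) for reflectance R in (0, 1]."""
    return (1 - r) ** 2 / (2 * r)

def reflectance(ks):
    """Invert K/S back to reflectance: R = 1 + K/S - sqrt((K/S)^2 + 2*K/S)."""
    return 1 + ks - (ks ** 2 + 2 * ks) ** 0.5

def mix(reflectances, weights):
    """Predict the blend's reflectance at one wavelength: in single-constant
    Kubelka-Munk theory, K/S values mix linearly by concentration."""
    total = sum(weights)
    ks_mix = sum(w / total * k_over_s(r) for r, w in zip(reflectances, weights))
    return reflectance(ks_mix)

# A 50/50 blend of a dark (R=0.1) and a light (R=0.8) filament lands much
# closer to the dark component than a naive average would -- which is why
# recycled mixed-color filament tends to come out murky.
print(round(mix([0.1, 0.8], [1, 1]), 3))
```

The real optimizer searches over blend weights to minimize the perceptual (e.g. CIE) distance between the predicted spectrum and the target color.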
In its current form, this process probably won’t be coming to consumer 3D printers anytime soon. To mix differently-colored filaments correctly, the software needs accurate measurements of their optical properties first, which requires a spectrometer. To get around this, the researchers recommend that filament manufacturers freely publish the properties of their filaments, allowing consumers to mix their filaments into any color they desire.
This reminds us of another technique that treats filaments like paint to achieve remarkable color effects. We’ve also seen a number of filament extruders before, if you’d like to try replicating this.
Last week, the mainstream news was filled with headlines about K2-18b — an exoplanet some 124 light-years away from Earth that 98% of the population had never even heard of. Even astronomers weren’t aware of its existence until the Kepler Space Telescope picked it out back in 2015, just one of the more than 2,700 planets the now-defunct observatory was able to identify during its storied career. But now, thanks to recent observations by the James Webb Space Telescope, this obscure planet has been thrust into the limelight by the discovery of what researchers believe are the telltale signs of life in its atmosphere.
Artist’s rendition of planet K2-18b.
Well, maybe. As you might imagine, being able to determine if a planet has life on it from 124 light-years away isn’t exactly easy. We haven’t even been able to conclusively rule out past, or even present, life in our very own solar system, which in astronomical terms is about as far off as the end of your block.
To be fair, the University of Cambridge’s Institute of Astronomy researchers, led by Nikku Madhusudhan, aren’t claiming to have definitive proof that life exists on K2-18b. We probably won’t get undeniable proof of life on another planet until a rover literally runs over it. Rather, their paper proposes that abundant biological life, potentially some form of marine phytoplankton, is one of the strongest explanations for the concentrations of dimethyl sulfide and dimethyl disulfide that they’ve detected in the atmosphere of K2-18b.
As you might expect, there are already challenges to that conclusion. Which is of course exactly how the scientific process is supposed to work. Though the findings from Cambridge are certainly compelling, adding just a bit of context can show that things aren’t as cut and dried as we might like. There’s even an argument to be made that we wouldn’t necessarily know what the signs of extraterrestrial life would look like even if it was right in front of us.
Life as We Know It
Credit where credit is due, most of the news outlets have so far treated this story with the appropriate amount of skepticism. Reading through the coverage, Cambridge’s findings are commonly described as the “strongest evidence yet” of potential extraterrestrial life, rather than being treated as definitive proof. Well, other than the Daily Mail, anyway. They decided to consult with ChatGPT and other AI tools in an effort to find out what lifeforms on K2-18b would look like.
So, AI-generated frogmen renders notwithstanding, what makes these findings so difficult to interpret? For one thing, we have very little idea of what extraterrestrial life would actually be like, so proving that it exists is exceptionally difficult. Scientists have precisely one data point for what constitutes life, and you’re sitting on it. We only know what life on Earth looks like, and while there’s an incredible amount of biodiversity on our home planet, it all still tends to play by the same established rules.
On Earth, dimethyl sulfide (DMS) is produced by phytoplankton.
We assume those rules to be a constant on other planets, but that’s only because we don’t know what else to look for. Consider that the bulk of our efforts in the search for extraterrestrial intelligence (SETI) thus far have been based on the idea that other sentient beings would develop some form of radio technology similar to our own, and that if we simply pointed a receiver at their star, we would be able to pick up their version of I Love Lucy.
This is a preposterous presupposition, which doesn’t even make much sense when compared to humanity’s history. Consider the science, literature, and art that humankind was able to produce before the advent of the electric light. Now imagine that Proxima Centauri’s answer to Beethoven is putting the finishing touches on their latest masterpiece as our radio telescope silently checks their planet off the list of inhabited worlds because it wasn’t emanating any RF transmissions we recognize.
Similarly, here on Earth dimethyl sulfide (DMS) and dimethyl disulfide (DMDS) are produced exclusively by biological processes. DMS specifically is so commonly associated with marine phytoplankton that we often link its smell with being in proximity of the sea. This being the case, you can see how finding large quantities of these gases in the atmosphere of an alien planet would seem to indicate that it must be teeming with aquatic life.
But just because that’s true on Earth doesn’t mean it’s true on K2-18b. We know these gases can be created abiotically in the laboratory, which means there are alternative explanations to how they could be produced on another planet — even if we can’t explain them currently. Further, a paper released in November 2024 pointed out that DMS was detected on comet 67P/Churyumov–Gerasimenko by the European Space Agency’s Rosetta spacecraft, indicating there’s some unknown method by which it can be produced in the absence of any biological activity.
Finding What You’re Looking For
All that being said, let’s assume for the sake of argument that the presence of dimethyl sulfide and dimethyl disulfide was indeed enough to confirm there was life on the planet. You’d still need to confirm beyond a shadow of a doubt that those gases were present in the atmosphere. So how do you do that?
Within our own solar system, you could send a probe. Which is what’s been suggested to investigate the possibility that phosphine gas exists on Venus. But remember, we’re talking about a planet that’s 124 light-years away. In this case, the only way to study the atmosphere is through spectroscopy — that is, examining the degree to which various wavelengths of light (visible and otherwise) are blocked as they pass through it.
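To get a sense of how faint the signal is, the basic transit measurement can be sketched in a few lines; the radii below are approximate published values, and the atmospheric figure is only an order-of-magnitude note:

```python
R_EARTH_KM = 6371
R_SUN_KM = 695_700

def transit_depth(planet_radius_earths, star_radius_suns):
    """Transit depth = (Rp / Rs)^2: the fraction of the star's light
    blocked when the planet crosses the stellar disk."""
    rp = planet_radius_earths * R_EARTH_KM
    rs = star_radius_suns * R_SUN_KM
    return (rp / rs) ** 2

# Roughly K2-18b-like numbers: a ~2.6 Earth-radius planet in front of
# a ~0.45 solar-radius red dwarf (approximate published values).
depth = transit_depth(2.6, 0.45)
print(f"{depth * 100:.2f}% dip in starlight")

# An absorbing atmosphere changes that depth by only tens of parts per
# million at the wavelengths where a given molecule absorbs -- which is
# why the retrieval models matter so much.
```

That sub-percent dip is the entire signal; the molecular fingerprints are tiny wavelength-dependent wiggles on top of it.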
This is, as you may have guessed, easier said than done. The amount of data you can collect from such a distant object, even with an instrument as powerful as the James Webb Space Telescope, is minuscule. You need to massage the data with various models to extract any useful information from the noise, and according to some critics, that’s when bias can creep in.
In a recently released paper, Jake Taylor from the University of Oxford argues that the only reason Nikku Madhusudhan and his team found signs of DMS and DMDS in the spectrographic data is because that’s what they were looking for. Given their previous research that potentially detected methane and carbon dioxide in the atmosphere of K2-18b, it’s possible the team was already primed to find further evidence of biological processes on the planet, and were looking a bit too hard to find evidence to back up their theory.
When analyzing the raw data without any preconceived notion of what you’re looking for, Taylor says there’s “no strong statistical evidence” to support the detection of DMS and DMDS in the atmosphere of K2-18b. This conclusion itself will need to be scrutinized, of course, though it does have the benefit of Occam’s razor on its side.
In short, there may or may not be dimethyl sulfide and dimethyl disulfide gases in the atmosphere of K2-18b, and that may or may not mean there’s potentially some form of biological life in the planet’s oceans…which it may or may not actually have. If you’re looking for anything more specific than that, the science is still out.
When it comes to our machines, we generally have very prescribed and ordered ways of working with them. We know how to tune our CNC mill for the minimum chatter when it’s chewing through aluminium. We know how to get our FDM printer to lay perfect, neat layers to minimize the defects in our 3D prints.
That’s not what Blair Subbaraman came down to talk about at the 2024 Hackaday Supercon, though. Instead, Blair’s talk covered the magic that happens when you work outside the built-in assumptions and get creative. It’s all about sketching with machines.
Blair starts out by highlighting various items that were fabricated with an eye to tool pathing itself, before relating this to his work with 3D printers.
Early on, Blair’s talk focuses on some unique objects, fabricated with digital methods, but in unconventional ways. “These objects aren’t purely designed in CAD, but also kind of designed directly in the machine tool paths.” Motioning to a carved vase that makes good use of tool marks, Blair explains the concept. “The design is really driven by the mark that the endmill has left in the wood,” he says. “That’s not something that’s encoded or specified in the geometry file, you just have to try a bunch of settings and see what happens and see what looks good to your eye.”
Jumping back to the concept of sketching, Blair roots the concept in its modern uses—like Arduino sketches, or those used with the Processing framework. “If we can write a little program and we can sketch with pixels or LEDs, what might it look like to sketch with a 3D printer?” he asks.
Via direct control of the printer’s behavior, it was possible for Blair to create this blobby, stringy 3D printed vase.
Right away, he gives a potent and clear example—a unique 3D printed vase. It’s not produced in the usual way, however. Blair didn’t create a CAD model, then throw it in a slicer, before chucking the G-code on the printer. Instead, it’s created with more direct control of the 3D printer itself. The printer’s extruder is commanded to run in place, creating a hot blob of plastic, before the gantry gently pulls away, creating a string to the next stack of blobs, where the process repeats again. Rather than a solid 3D-printed wall, the result is altogether more delicate and complex, with fine strings linking towers of delicately melted plastic. It’s something that you couldn’t really create just by using standard 3D printing tools.
“This is what I mean by designing directly in toolpaths,” Blair explains. It’s achieved through precise control over the extruder and motion platform. The G-code is finessed to create blobs of plastic that are just right, and to move the head at just the right speed to create a contiguous molten string without breaking or sagging.
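You can get a feel for designing in toolpaths by emitting G-code directly. This toy Python sketch generates a ring of blob-then-travel moves instead of a sliced perimeter; the feed rates, dwell, and extrusion amounts are made-up placeholders, not Blair’s tuned values:

```python
import math

def blob_ring(radius_mm=30, blobs=12, z_mm=0.4,
              blob_e_mm=2.0, travel_feed=600):
    """Emit G-code for one ring of blobs joined by slow travel moves
    that pull strings between them -- designing directly in toolpaths."""
    lines = ["G21 ; millimetres", "G90 ; absolute XY", "M83 ; relative extrusion"]
    for i in range(blobs):
        angle = 2 * math.pi * i / blobs
        x = radius_mm * math.cos(angle)
        y = radius_mm * math.sin(angle)
        # Travel to the next blob site slowly enough to draw a string
        lines.append(f"G0 X{x:.2f} Y{y:.2f} Z{z_mm:.2f} F{travel_feed}")
        # Extrude in place to pile up a blob of molten plastic
        lines.append(f"G1 E{blob_e_mm:.2f} F100")
        lines.append("G4 P200 ; dwell so the blob can set slightly")
    return "\n".join(lines)

print(blob_ring(blobs=4))
```

Stacking rings at increasing Z heights is all it takes to grow a vase of blob towers and strings from this.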
Blair’s p5.fab framework exists to make this sort of experimentation easier and more accessible.
Blair created the p5.fab JavaScript library to make it easier to craft—or sketch—in this manner. His library includes simple commands for controlling, say, a 3D printer. Stacking up commands to control moves and various extruder operations allows the creation of objects in an entirely different way than using CAD to define the desired geometry directly. “We can use these really simple commands to quickly build up more complicated objects,” says Blair. “You can print some fun things that you’d maybe be hard pressed to do with CAD and a slicer, here, and you can do it in a really computationally modest way.” A particularly enjoyable example? Printing a handle on the side of a disposable coffee cup. It’s a gimmick, but one that does show the possibilities at play.
Blair’s inspiration to work with toolpaths directly has its benefits. “Whatever slicer you like might come out with a wire printing mode or some other experimental slicing mode, but some of the motivation here is that I don’t want to wait for my slicer to come out with a blob mode in order to print blobby things,” says Blair. “Probably my slicer is never going to make blobs the way I like my blobs.”
If hooking up a MIDI controller to a 3D printer doesn’t blow your mind, it really should.
Blair’s talk goes further with some really neat ideas. A particular highlight is using a MIDI controller with knobs and sliders to control a 3D printer. Imagine being able to make tweaks to print settings like movement speed or extrusion rate on the fly. It’s not what you’d want for producing an accurate part, for sure. And yet, Blair demonstrates how it allowed him to discover how to print neatly stacked coils in TPU, just by giving his hands direct control over the machine parameters in a live sense.
Overall, it’s a talk that makes us think about how to get closer to the machines we create with. Slicers and CAD are perfect for making our regular 3D prints. At the same time, there are great and wild things that can be achieved by taking more direct control over the machinery, and indeed—sketching with the machines!
It’s no exaggeration to say that over the years video cards (GPUs) — much like CPU coolers — have become rather chonky. Unfortunately, the PCIe slots they plug into were never designed with multi-kilogram cards in mind. All this extra weight is of course happily affected by gravity.
The problem has gotten to the point that the ASUS ROG Astral RTX 5090 card added a Bosch Sensortec BMI323 inertial measurement unit (IMU) to provide an accelerometer and angular rate (gyroscope) measurements, as reported by [Uniko’s Hardware] (in Chinese, see English [Videocardz] article).
There are so-called anti-sag brackets that provide structural support to the top of the GPU where it isn’t normally secured. But since this card weighs in at over 6 pounds (3 kilograms) for the air cooled model, it appears the bracket wasn’t enough, and active monitoring was necessary.
The software allows you to set a sag angle at which you receive a notification, which presumably either lets you turn off the system and readjust the GPU, or at least forewarns you before the card rips itself loose from the PCIe slot and crashes to the bottom of the case.
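Turning accelerometer readings into a sag angle is simple trigonometry. Here is a hedged sketch of what such monitoring might do; the axis convention and alert threshold are our assumptions, not anything ASUS has published:

```python
import math

def sag_angle_degrees(ax, ay, az):
    """Estimate how far the card's long axis has drooped from horizontal,
    given accelerometer readings in g. With the card level, gravity sits
    entirely on the z axis; sag tips a component onto the x axis."""
    return math.degrees(math.atan2(ax, math.sqrt(ay ** 2 + az ** 2)))

SAG_ALERT_DEG = 2.0  # hypothetical notification threshold

reading = (0.05, 0.0, 0.9987)  # a card drooping by roughly 3 degrees
angle = sag_angle_degrees(*reading)
if abs(angle) > SAG_ALERT_DEG:
    print(f"GPU sag warning: {angle:.1f} degrees")
```

The gyroscope half of the BMI323 could additionally flag sudden movement, such as the card actually letting go.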
Are your Eurorack modules too crowded? Sick of your patch cables making it hard to twiddle your knobs? Then you might be very interested in the new Euroknob, the knob that sports a hidden patch cable jack.
Honestly, when we first saw the Euroknob demo board, we thought [Mitxela] had gone a little off the rails. It looks like nothing more than a PCB-mount potentiometer or perhaps an encoder with a knob attached. Twist the knob and a row of LEDs on the board light up in sequence. Nice, but not exactly what we’re used to seeing from him. But then he popped the knob off the board, revealing that what we thought was the pot body is actually a 3.5-mm audio jack, and that the knob was attached to a mating plug that acts as an axle.
The kicker is that underneath the audio jack is an AS5600 magnetic encoder, and hidden in a slot milled in the tip of the audio jack is a tiny magnet. Pop the knob into the jack, give it a twist, and you’ve got manual control of your module. Take the knob out, plug in a patch cable, and you can let a control voltage from another module do the job. Genius!
To make it all work mechanically, [Mitxela] had to sandwich a spacer board on top of the main PCB. The spacer has a large cutout to make room for the sensor chip so the magnet can rotate without hitting anything. He also added a CH32V003 to run the encoder and drive the LEDs to provide feedback for the knob-jack. The video below has a brief demo.
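The AS5600 reports a 12-bit angle per revolution, so mapping the knob’s position onto the control-voltage range the jack would otherwise carry takes only a couple of lines. A Python sketch (the 0–5 V CV range is our assumption, not [Mitxela]’s spec):

```python
def raw_to_degrees(raw: int) -> float:
    """The AS5600 returns a 12-bit raw angle (0..4095) per full turn."""
    return (raw & 0x0FFF) * 360.0 / 4096.0

def knob_to_cv(raw: int, cv_max: float = 5.0) -> float:
    """Map knob rotation onto the control-voltage range the patch cable
    would otherwise supply (0..cv_max volts, an assumed range)."""
    return (raw & 0x0FFF) / 4095.0 * cv_max

print(raw_to_degrees(2048))  # half a turn
print(knob_to_cv(4095))      # fully clockwise
```

On the real board, the CH32V003 reads the sensor over I2C and uses values like these to drive the LED feedback.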
This is just a proof of concept, to be sure, but it’s still pretty slick. Almost as slick as [Mitxela]’s recent fluid-motion simulation pendant, or his dual-wielding soldering irons.
In the early days of computing, and well into the era where home computers were common but not particularly powerful, programming these machines was a delicate balance of managing hardware with getting the most out of the software. Memory had to be monitored closely, clock cycles taken into account, and even video outputs had to be careful not to overwhelm the processor. This can seem foreign in the modern world where double-digit gigabytes of memory is not only common, it’s expected, but if you want to hone your programming skills there’s no better way to do it than with the limitations imposed by something like a retro computer or a Raspberry Pi Pico.
This project is called Kaleidoscopio, built by [Linus Åkesson] aka [lft], and goes deep into the hardware of the Pi Pico in order to squeeze as much out of the small, inexpensive platform as possible. The demo is written with 17,000 lines of assembly using the RISC-V instruction set. The microcontroller has two cores on it, with one core acting as the computer’s chipset and the other acting as the CPU, rendering the effects. The platform has no dedicated audio or video components, so everything here is done in software, much as a PC from the ’80s would have done it. In this case, [lft] is taking inspiration from the Amiga platform, his favorite of that era.
The only hardware involved in this project apart from the Pi Pico itself are a few resistors, an audio jack, and a VGA port, further demonstrating that the software is the workhorse in this build. It’s impressive not only for wringing out as much as possible from the platform but for using the arguably weaker RISC-V cores instead of the ARM cores, as the Pi Pico includes both. [lft] goes into every detail on the project’s page as well, for those who are still captivated by the era of computer programming where every bit mattered. For more computing demos like this, take a look at this one which is based on [lft]’s retrocomputer of choice, the Amiga.
One of the most paradoxical aspects of creating art is the fact that constraints, whether arbitrary or real, and whether in space, time, materials, or rules, often cause creativity to flourish rather than to wither. Picasso’s Blue Period, Gadsby by Ernest Vincent Wright, Tetris, and even the Volkswagen Beetle are all famous examples of constraint-driven artistic brilliance. Similarly, in the world of electronics we can always reach for a microcontroller, but this project from [Peter] has the constraint of only using passive components, and it is all the better for it.
The project is a lockbox, a small container that reveals a small keypad and the associated locking circuitry when opened. When the correct combination of push buttons is pressed, the box unlocks the hidden drawer. This works by setting a series of hidden switches in a certain way to program the combination. These switches are connected through various diodes to a series of relays, so that each correct press of a button activates the next relay. When the final correct button is pushed, power is applied to a solenoid which unlocks the drawer. An incorrect button push will disable a relay providing power to the rest of the relays, resetting the system back to the start.
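The relay chain behaves like a simple state machine, which is easy to model in software even though the box itself contains none. A Python simulation of the logic (the four-button combination is just an example, and real relay hardware resets by cutting power rather than by tracking a counter):

```python
def press_sequence(presses, combination):
    """Model the relay chain: each correct press in order latches the
    next relay; any wrong press drops power to the chain and resets
    progress. Returns True when the solenoid would fire and unlock."""
    stage = 0  # number of relays currently latched
    for button in presses:
        if stage < len(combination) and button == combination[stage]:
            stage += 1   # the next relay pulls in and holds
        else:
            stage = 0    # chain loses power: back to the start
        if stage == len(combination):
            return True  # final relay energizes the solenoid
    return False

COMBO = [3, 1, 4, 2]  # set by the hidden programming switches (example)
print(press_sequence([3, 1, 4, 2], COMBO))            # unlocks
print(press_sequence([3, 1, 9, 3, 1, 4, 2], COMBO))   # resets, then unlocks
```

The diode matrix in the real box plays the role of the `button == combination[stage]` comparison, steering each button press to the right relay coil.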
The project uses a lot of clever tricks to do all of this without using a single microcontroller, including capacitors that carefully time the relays so that they behave properly rather than all energizing at the same time. The woodworking is notable as well, with the circuit components highlighted when the lid is opened (but, importantly, the combination switches hidden). Using relays for logic is not a novel concept, though; they can be used for all kinds of complex tasks, including replacing transistors in single-board computers.
A hefty portable power bank is a handy thing to DIY, but one needs to get their hands on a number of matching lithium-ion cells to make it happen. [Chris Doel] points out an easy solution: salvage them from disposable vapes and build a solid 35-cell power bank. Single use devices? Not on his watch!
[Chris] has made it his mission to build useful things like power banks out of cells harvested from disposable vapes. He finds them — hundreds of them — on the ground or in bins (especially after events like music festivals) but has also found that vape shops are more than happy to hand them over if asked. Extracting usable cells is most of the work, and [Chris] has refined the process of doing so safely into an art.
Disposable vapes come in all shapes and sizes, but the cells inside are fairly similar.
Many different vapes use the same cell types on the inside, and once one has 35 identical cells in healthy condition, it’s just a matter of fitting them into a compatible 3D-printed enclosure with two PCBs to connect the cells; a pre-made board handles the power bank functionality, including recharging.
We’d like to highlight a few design features that strike us as interesting. One is the three little bendy “wings” that cradle each cell, ensuring cells are centered and held snugly even if they aren’t exactly the right size. Another is the use of spring terminals to avoid the need to solder to individual cells. The PCBs themselves also double as cell balancers, providing a way to passively balance all 35 cells and ensure they are at the same voltage level during initial construction. After the cells are confirmed to be balanced, a solder jumper near each terminal is closed to bypass that functionality for final assembly.
The result is a hefty power bank that can power just about anything, and maybe the best part is that it can be opened and individual cells swapped out as they reach the end of their useful life. With an estimated 260 million disposable vapes thrown in the trash every year in the UK alone, each one containing a rechargeable lithium-ion cell, there’s no shortage of cells for an enterprising hacker willing to put in a bit of work.
Power banks not your thing? [Chris] has also created a DIY e-bike battery using salvaged cells, and that’s a money saver right there.
Learn all about it in the video, embedded below. And if you find yourself curious about what exactly goes on in a lithium-ion battery, let our own Arya Voronova tell you all about it.
Looks like the Simpsons had it right again, now that an Australian radio station has been caught using an AI-generated DJ for their midday slot. Station CADA, a Sydney-based broadcaster that’s part of the Australian Radio Network, revealed that “Workdays with Thy” isn’t actually hosted by a person; rather, “Thy” is a generative AI text-to-speech system that has been on the air since November. An actual employee of the ARN finance department was used for Thy’s voice model and her headshot, which adds a bit to the creepy factor.
The discovery that they’ve been listening to a bot for months apparently has Thy’s fans in an uproar, although we suspect that the media doing the reporting is probably more exercised about this than the general public. Radio stations have used robo-jocks for the midday slot for ages, albeit using actual human DJs to record patter to play between tunes and commercials. Anyone paying attention over the last few years probably shouldn’t be surprised by this development, and we suspect similar disclosures will be forthcoming across the industry now that the cat’s out of the bag.
Also from the world of robotics, albeit the hardware kind, is this excellent essay from Brian Potter over at Construction Physics about the sad state of manual dexterity in humanoid robots. The whole article is worth reading, not least for the link to a rogue’s gallery of the current crop of humanoid robots, but briefly, the essay contends that while humanoid robots do a pretty good job of navigating in the world, their ability to do even the simplest tasks is somewhat wanting.
Brian’s example of unwrapping and applying a Band-Aid, a task that any toddler can handle, as unimaginably difficult for any current robot is quite apt. He attributes the gap in abilities between gross movements and fine motor control partly to hardware and partly to software. We think the blame skews more to the hardware side; while the legs and torso of the typical humanoid robot offer a lot of real estate for powerful actuators, squeezing that much equipment into a hand approximately the size of a human’s is a tall order. These problems will likely be overcome, of course, and when they are, Brian’s helpful list of “Dexterity Evals” or something similar will act as a sort of Turing test for robot dexterity. Although the day a humanoid robot can start a new roll of toilet paper without tearing the first sheet is the day we head for the woods.
We recently did a story on the use of nitrogen-vacancy diamonds as magnetic sensors, which we found really exciting because it’s about the simplest way we’ve seen to play with quantum physics at home. After that story ran, eagle-eyed reader Kealan noticed that Brian over at the “Real Engineering” channel on YouTube had recently run a video on anti-submarine warfare, which includes the uses of similar quantum magnetometers to detect submarines. The magnetometers in the video are based on the Zeeman effect and use laser-pumped helium atoms to detect tiny variations in the Earth’s magnetic field due to large ferrous objects like submarines. Pretty cool video; check it out.
And finally, if you have the slightest interest in civil engineering you’ve got to check out Animagraff’s recent 3D tour of the insides of Hoover Dam. If you thought a dam was just a big, boring block of concrete dumped in the middle of a river, think again. The video is incredibly detailed and starts with accurate 3D models of Black Canyon before the dam was built. Every single detail of the dam is shown, with the “X-ray views” of the dam with the surrounding rock taken away being our favorite bit — reminds us a bit of the book Underground by David Macaulay. But at the end of the day, it’s the enormity of Hoover Dam that really comes across in this video. The way that the structure dwarfs the human-for-scale included in almost every sequence is hard to express — megalophobics, beware. We were also floored by just how much machinery is buried in all that concrete. Sure, we knew about the generators, but the gates on the intake towers and the way the spillways work were news to us. Highly recommended.
From Blog – Hackaday via this RSS feed
For all their education, medical practitioners sometimes forget that what’s old hat to them is new territory for their patients. [David Revoy] learned that when a recent visit to the veterinarian resulted in the need to monitor his cat’s pulse rate at home, a task that he found difficult enough that he hacked together this digital cat stethoscope.
Never fear; [David] makes it clear that his fur-baby [Geuloush] is fine, although the gel needed for an echocardiogram likely left the cat permanently miffed. With a normal feline heart rate in the 140s, [David] found it hard to get an accurate pulse by palpation, so he bought a cheap stethoscope and a basic lavalier USB microphone. Getting them together was as easy as cutting the silicone tubing from the stethoscope head and sticking the microphone into it.
The tricky part, of course, would be getting [Geuloush] to cooperate. That took some doing, but soon enough [David] had a clean recording to visualize in an audio editor. From there it’s just a simple matter of counting up the peaks and figuring out the beats per minute. It probably wouldn’t be too hard to build a small counter using a microcontroller so he doesn’t have to count on the cat napping near his PC, but in our experience, keyboards are pretty good cat attractants.
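That counting step is simple enough to sketch in a few lines, assuming the recording has already been reduced to an amplitude envelope (the threshold, sample rate, and synthetic pulse train below are all illustrative values, not taken from [David]’s setup):

```python
import math

# Count heartbeats in an amplitude envelope by detecting upward
# crossings of a threshold, then convert the count to beats per minute.
def count_beats(envelope, threshold):
    beats = 0
    above = False
    for sample in envelope:
        if sample > threshold and not above:
            beats += 1      # rising edge: a new peak starts
            above = True
        elif sample < threshold:
            above = False
    return beats

def bpm(envelope, sample_rate, threshold=0.5):
    seconds = len(envelope) / sample_rate
    return count_beats(envelope, threshold) * 60.0 / seconds

# Synthetic check: 10 s of a roughly 140 BPM pulse train at 100 samples/s,
# about what a healthy cat's heart would produce.
rate = 100
pulse = [max(0.0, math.sin(2 * math.pi * (t / rate) / 0.43)) for t in range(10 * rate)]
print(round(bpm(pulse, rate)))  # close to 140
```

A real recording would need some filtering first, but the peak-counting logic is the same.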
This is one of those nice, quick hacks whose simplicity belies their impact. It’s certainly not as fancy as some of the smart stethoscopes we’ve seen, but it doesn’t need to be.
Thanks to [Spooner] for the tip.
From Blog – Hackaday via this RSS feed
SpaceX Starship firing its many Raptor engines. The Raptor pioneered the new generation of methalox engines. (Image: SpaceX)
Go back a generation of development and, excepting the shuttle-derived systems, all liquid rockets used RP-1 (aka kerosene) for their first stage. Now it seems everybody and their dog wants to fuel their rockets with methane. What happened? [Eager Space] was eager to explain in a recent video, which you’ll find embedded below.
At first glance, it’s a bit of a wash: the density and specific impulses of kerolox (kerosene-oxygen) and methalox (methane-oxygen) rockets are very similar. So there’s no immediate performance improvement or volumetric disadvantage, like you would see with hydrogen fuel. Instead, it’s a series of small factors that all add up to a meaningful design benefit when engineering the whole system.
Methane also has the advantage of being a gas when it warms up, and rocket engines tend to be warm. So the injectors don’t have to worry about atomizing a thick liquid, and mixing fuel and oxidizer inside the engine does tend to be easier. [Eager Space] calls RP-1 “a soup”, while methane’s simpler combustion chemistry makes the simulation of these engines quicker and easier as well.
There are other factors as well, like the fact that methane is much closer in temperature to LOX, and costs quite a bit less than RP-1, but you’ll need to watch the whole video to see how they all stack up.
We write about rocketry fairly often on Hackaday, seeing projects with both liquid-fueled and solid-fueled engines. We’ve even highlighted at least one methalox rocket, way back in 2019. Our thanks to space-loving reader [Stephen Walters] for the tip. Building a rocket of your own? Let us know about it with the tip line.
From Blog – Hackaday via this RSS feed
[David Bloomfield] wanted to make some tweaks to an embedded system, but didn’t quite have the requisite skills. He decided to see if vibe coding could help.
[David]’s goal was simple. To take the VESC Telemetry Display created by [Lukas Janky] and add some tweaks of his own. He wanted to add more colors to the display, while changing the format of the displayed data and tweaking how it gets saved to EEPROM. The only problem was that [David] wasn’t experienced in coding at all, let alone for embedded systems like the Arduino Nano. His solution? Hand over the reins to a large language model. [David] used Gemini 2.5 Pro to make the changes, and by and large, got the tweaks made that he was looking for.
There are risks here, of course. If you’re working on an embedded system, whatever you’re doing could have real world consequences. Meanwhile, if you’re relying on the AI to generate the code and you don’t fully understand it yourself… well, the possibilities are obvious. It pays to know what you’re doing at the end of the day. In this case, it’s hard to imagine much going wrong with a simple telemetry display, but it bears considering the risks whatever you’re doing.
We’ve talked about the advent of vibe coding before, too, with [Jenny List] exploring this nascent phenomenon. Expect it to remain a topic of controversy in coding circles for some time. Video after the break.
From Blog – Hackaday via this RSS feed
The self-propelled zip fastener uses a worm gear to propel itself along the teeth. (Credit: YKK)
At first glance the very idea of a zipper that unzips and zips up by itself seems somewhat ridiculous. After all, these contraptions are mostly used on pieces of clothing and gear where handling a zipper isn’t really sped up by having an electric motor sluggishly move through the rows of interlocking teeth. Of course, that’s not the goal of YKK, which is the world’s largest manufacturer of zip fasteners. The demonstrated prototype (original PR in Japanese) shows this quite clearly, with a big tent and equally big zipper that you’d be hard pressed to zip up by hand.
The basic use case is thus more industrial, with one of the videos, embedded below, showing a large ‘air tent’ being zipped up automatically after demonstrating why this would be an arduous task for a human worker. While this prototype appears to be externally powered, adding a battery or similar could make it fully wireless and potentially a real timesaver when setting up large structures such as these. Assuming the battery isn’t flat, of course.
It might conceivably be possible to miniaturize this technology to the point where it’d ensure that no fly is ever left unzipped, and school kids can show off their new self-zipping jacket to their friends. This would of course have to come with serious safety considerations, as anyone who has ever had a bit of their flesh caught in a zipper can attest to.
https://www.theverge.com/news/656535/ykk-self-propelled-zipper-prototype
https://www.ykk.com/newsroom/g_news/2025/20250424.html
From Blog – Hackaday via this RSS feed
It is easier than ever to produce projects with nice enclosures thanks to 3D printing and laser cutting. However, for a polished look, you also need a labeled front panel. We’ve looked at several methods for doing that in the past, but we enjoyed [Accidental Science’s] video showing his method for making laminated panels.
His first step is to draw the panel in Inkscape, and he has some interesting tips for getting the most out of the program. He makes a few prints and laminates one of them. The other is a drill guide. You use the drill guide to make openings in the panel, which could be aluminum, steel, plastic, or whatever material you want to work in.
The laminated print goes on last with just enough glue to hold it. Is it a lot of work? You bet it is. But the results look great. There are a number of things to look out for, so if you plan to do this, the video will probably save you from making some mistakes.
There are many ways to get this job done. We’ve asked you for ideas before and, as usual, you came through. If you want a different take on laminated panels, there are a few different tips you can glean from this project.
From Blog – Hackaday via this RSS feed
If you build electronics, you will eventually need a coil. If you spend any time winding one, you are almost guaranteed to think about building a coil winder. Maybe that’s why so many people do. [Jtacha] did a take on the project, and we were impressed — it looks great.
The device has a keypad and an LCD. You can enter a number of turns or the desired inductance. It also lets you wind at an angle. So it is suitable for RF coils, Tesla coils, or any other reason you need a coil.
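Computing a turn count from a desired inductance, as the winder does, can be sketched with Wheeler’s well-known approximation for a single-layer air-core solenoid. Note this is a generic formula, not necessarily what [Jtacha]’s firmware uses, and the dimensions here are in inches:

```python
import math

def wheeler_inductance_uH(radius_in, length_in, turns):
    """Wheeler's approximation for a single-layer air-core coil.
    Radius and winding length in inches, result in microhenries."""
    return (radius_in ** 2 * turns ** 2) / (9 * radius_in + 10 * length_in)

def turns_for_inductance(target_uH, radius_in, length_in):
    """Invert Wheeler's formula to find the required turn count."""
    return math.sqrt(target_uH * (9 * radius_in + 10 * length_in)) / radius_in

# Example: how many turns for a 10 uH coil, 0.25" radius, 1" long?
n = turns_for_inductance(10, 0.25, 1.0)
print(round(n))  # turn count, rounded to the nearest whole turn
```

The approximation is good to a few percent for coils longer than about 0.8 of their radius, which covers most practical RF work.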
There are a number of 3D printed parts, so this doesn’t look like an hour project. Luckily, none of the parts are too large. The main part is 2020 extrusion, and you will need to tap the ends of some of the pieces.
There is a brief and strangely dark video in the post if you want to see the machine in operation. The resulting coil looked good, especially if you compare it to how our hand-wound ones usually look.
While most of the coil winders we see have some type of motor, that’s not a necessity.
From Blog – Hackaday via this RSS feed
[Sean Boyce] has been busy building board games. Specifically, an electronic strategy board game that is miraculously also compatible with Settlers of Catan.
[Sean’s] game is called Calculus. It’s about mining asteroids and bartering. You’re playing as a corporation attempting to mine the asteroid against up to three others doing the same. Do a good job of exploiting the space-based resource, and you’ll win the game.
Calculus is played on a board made out of PCBs. A Xiao RP2040 microcontroller board on the small PCB in the center of the playfield is responsible for running the show. It controls a whole ton of seven-segment displays and RGB LEDs across multiple PCBs that make up the gameboard. The lights and displays help players track the game state as they vie for asteroid mining supremacy. Amusingly, by virtue of its geometry and some smart design choices, you can also use [Sean]’s board to play Settlers of Catan. He’s even designed a smaller, cheaper travel version, too.
We do see some interesting board games around these parts, because hackers and makers are just that creative. If you’ve got your own board game hacks or builds in the works, don’t hesitate to let us know!
From Blog – Hackaday via this RSS feed
Sometimes you need random numbers — and properly random ones, at that. [Sean Boyce] whipped up a rig that serves up just that, tasty random bytes delivered fresh over MQTT.
[Sean] tells us he’s been “designing various quantum TRNGs for nearly 15 years as part of an elaborate practical joke” without further explanation. We won’t query as to why, and just examine the project itself. The main source of randomness — entropy, if you will — is a pair of transistors hooked up to create a bunch of avalanche noise that is apparently truly random, much like the Zener diode method.
In any case, the noise from the transistors is then passed through a bunch of hex inverters and other supporting parts to shape the noise into a nicely random square wave. This is sampled by an ATtiny261A acting as a Von Neumann extractor, which converts the wave into individual bits of lovely random entropy. These are read by a Pi Pico W, which then assembles random bytes and pushes them out over MQTT.
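The Von Neumann extraction step is simple enough to sketch in a few lines. This is the textbook algorithm, not [Sean]’s actual ATtiny261A firmware:

```python
def von_neumann_extract(raw_bits):
    """Debias a bit stream: examine non-overlapping pairs, emit 0 for
    (0, 1) and 1 for (1, 0), and discard the (0, 0) and (1, 1) pairs.
    The output is unbiased as long as the input bits are independent."""
    out = []
    for i in range(0, len(raw_bits) - 1, 2):
        a, b = raw_bits[i], raw_bits[i + 1]
        if a != b:
            out.append(a)   # (0, 1) -> 0, (1, 0) -> 1
    return out

# Even a heavily biased input stream yields balanced output:
print(von_neumann_extract([0, 1, 1, 1, 1, 0, 0, 0, 1, 1]))  # [0, 1]
```

The cost of the trick is throughput: at least half the raw bits are thrown away, and more if the source is strongly biased, which is why the noise source runs much faster than the output bit rate needs to be.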
Did that sound like a lot? If you’re not in the habit of building random number generators, it probably did. Nevertheless, we’ve heard from [Sean] on this topic before. Feel free to share your theories on the best random number generator designs below, or send your best builds straight to the tipsline. Randomly, of course!
From Blog – Hackaday via this RSS feed
Classic demos from the demoscene are all about showing off one’s technical prowess, with a common side order of a slick banging soundtrack. That’s precisely what [BUS ERROR Collective] members [DJ_Level_3] and [Marv1994] delivered with their prize-winning Primer demo this week.
This demo is a grand example of so-called “oscilloscope music”—where two channels of audio are used to control an oscilloscope in X-Y mode. The sounds played determine the graphics on the screen, as we’ve explored previously.
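The basic idea is easy to reproduce at home: the left channel drives X and the right channel drives Y, so two sine waves 90 degrees out of phase trace a circle on the screen. A minimal sketch that generates such a stereo sample buffer (the sample rate and frequency are arbitrary choices here):

```python
import math

def xy_circle(seconds=1.0, sample_rate=48000, freq=440.0):
    """Generate stereo samples that draw a circle in X-Y mode:
    left channel = cosine (X axis), right channel = sine (Y axis)."""
    n = int(seconds * sample_rate)
    left = [math.cos(2 * math.pi * freq * i / sample_rate) for i in range(n)]
    right = [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]
    return left, right

left, right = xy_circle(seconds=0.01)
# Every (X, Y) pair lies on the unit circle, so the beam sweeps a ring.
assert all(abs(x * x + y * y - 1.0) < 1e-9 for x, y in zip(left, right))
```

Everything fancier, from spinning polyhedra to morphing patterns, is just more elaborate versions of the same mapping from waveform to beam position.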
The real magic is when you create very cool sounds that also draw very cool graphics on the oscilloscope. The Primer demo achieves this goal perfectly. Indeed, it’s intended as a “primer” on the very artform itself, starting out with some simple waveforms and quickly spiraling into a graphical wonderland of spinning shapes and morphing patterns, all set to a sweet electronic soundtrack. It was created with a range of tools, including Osci-Render and apparently Ableton 11, and the recording was performed on a gorgeous BK Precision Model 2120 oscilloscope in a nice shade of green.
If you think this demo is fully sick, you’re not alone. It took out first place in the Wild category at the Revision 2025 demo party, as well as the Crowd Favorite award. High praise indeed.
We love a good bit of demoscene magic around these parts.
Thanks to [STrRedWolf] for the tip!
From Blog – Hackaday via this RSS feed
In the 90s, a video game craze took over the youth of the world — but unlike today’s games that rely on powerful PCs or consoles, these were simple, standalone devices with monochrome screens, each home to a digital pet. Often clipped to a keychain, they could travel everywhere with their owner, which was ideal from the pet’s perspective since, like real animals, they needed attention around the clock. [ViciousSquid] is updating this 90s idea for the 20s with a digital pet squid that uses a neural network to shape its behavior.
The neural network that controls the squid’s behavior takes a large number of variables into account, including whether or not it’s hungry or sleepy, or if it sees food. The network adapts as different conditions are encountered, allowing the squid to make decisions and strengthen its algorithms. [ViciousSquid] is using a Hebbian learning algorithm, which strengthens connections between neurons that often activate together. Additionally, the squid can form both short- and long-term memories, and the neural network can even grow new neurons on its own as needed.
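Hebbian learning boils down to “neurons that fire together wire together”: each weight grows in proportion to the product of its pre- and post-synaptic activity. A minimal sketch of one such update rule follows; the learning rate and decay term are illustrative, not the project’s actual values:

```python
def hebbian_update(weights, pre, post, rate=0.1, decay=0.01):
    """Strengthen each connection by the product of its pre- and
    post-synaptic activations; a small decay keeps weights bounded."""
    return [
        [w + rate * p * q - decay * w for q, w in zip(post, row)]
        for p, row in zip(pre, weights)
    ]

# Two input neurons, two output neurons, all weights starting at zero.
w = [[0.0, 0.0], [0.0, 0.0]]
# Repeatedly co-activate input 0 with output 1:
for _ in range(10):
    w = hebbian_update(w, pre=[1.0, 0.0], post=[0.0, 1.0])
print(w)  # only the (input 0, output 1) connection has strengthened
```

Without some form of decay or normalization, pure Hebbian weights grow without bound, which is presumably one reason the project needs the management layer mentioned below.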
[ViciousSquid] is still working on this project, and hopes to eventually implement a management system, allowing the various behavior variables to be tracked over time and overall making the squid act in a way more akin to the 90s digital pets it’s modeled after. It’s an interesting and fun take on those games, though, and much of the code is available on GitHub for others to experiment with as well. For those looking for the original 90s games, head over to this project where an emulator for Tamagotchis was created using modern microcontroller platforms.
From Blog – Hackaday via this RSS feed
Sometimes, a flat display just won’t cut it. If you’re looking for something a little rounder, perhaps your vision could persist in looking at [lhm0]’s rotating LED sphere RP2040 POV display.
As you might have guessed from that title, this persistence-of-vision display uses an RP2040 microcontroller as its beating (or spinning, rather) heart. An optional ESP01 provides a web interface for control. Since the whole assembly is rotating at high RPM, rather than slot in dev boards (like Pi Pico) as is often seen, [lhm0] has made custom PCBs to hold the actual SMD chips. Power is wireless, because who wants to deal with slip rings when they do not have to?
The LED-bending jig is a neat hack-within-a-hack.
[lhm0] has also bucked the current trend for individually-addressable LEDs, opting instead to address individual through-hole RGB LEDs via a 24-bit shift-register. Through the clever use of interlacing, those 64 LEDs produce a 128 line display. [lhm0] designed and printed an LED-bending jig to aid mounting the through-hole LEDs to the board at a perfect 90 degree angle.
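How 64 LEDs paint 128 lines isn’t spelled out, but the usual interlacing trick is to offset the LED column by half a line pitch on alternate half-revolutions, so even rows are drawn on one pass and odd rows on the next. A hypothetical sketch of that mapping (the real firmware’s scheme may well differ):

```python
def interlaced_row(led_index, phase):
    """Map a physical LED (0-63) and a half-revolution phase (0 or 1)
    to the display row it paints: even rows on phase 0, odd on phase 1."""
    return 2 * led_index + phase

# All 128 display rows are covered exactly once across the two phases.
rows = sorted(interlaced_row(i, p) for i in range(64) for p in (0, 1))
assert rows == list(range(128))
```

The payoff is double the vertical resolution for the same LED count, at the cost of halving the effective refresh rate per row.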
What really takes this project the extra mile is that [lhm0] has also produced a custom binary video/image format for his display, .rs64, to encode images and video at the 128×256 resolution his sphere displays. That’s on GitHub, while a separate repository hosts the firmware and KiCad files for the display itself.
This is hardly the first POV display we’ve highlighted, though admittedly it isn’t the cheapest one. There are even other spherical displays, but none of them seem to have gone to the trouble of creating a file format.
If you want to see it in action and watch construction, the video is embedded below.
From Blog – Hackaday via this RSS feed