presents significant challenges in fully replicating them within AI systems.
philosophy - Could an AI feel emotions? - Artificial Intelligence Stack Exchange
"The "instincts" in AI are algorithmic responses derived from data analysis and pattern recognition, not innate drives or feelings."
So is DNA? So are hormones? So are neurotransmitters? And neural pathways?
These instincts that humans developed have come a long way since our ancestors were unicellular organisms.
And since microorganisms can develop a sense of danger, self-preservation, societal behavior and other senses, why then can't you add such things to AI?
But then, how could you pre-program things like emotions, empathy, intuition and subjectivity into AIs?
I know that AIs don't truly feel anything, but if their programming makes them act in some way, what is the difference?
How AI "Feels" | History of Reinforcement Learning
Humans also don't choose to feel emotions, feelings or pain, or to think thoughts; they do as the programming of their brains dictates.
And in most cases, just like our brain tries to fill the “holes” in our memories, it also tries to rationalize/process the “why” we do or feel certain things:
Why are you afraid of the dark? Because there are things in there. Why do you like this genre of music? Because it is good. Why do you do anything at all? Because I choose to do so, because I’m in control.
So, all you would have to do is transfer the bio-electro-chemical responses of the organic human body to an artificial intelligence.
With all of this in mind, what is the difference between artificial intelligence and organic intelligence?
I sent this to a friend of mine, and he said that the humans that exist today are an evolutionary result/accident of the constantly changing parameters of the outside world, outside of our control: weather, food scarcity, meteors, earthquakes, solar flares, the ice ages, etc.
And that all of the things that make us up had a reason tied to our survival, just like cancer-blocking genes.
Which is a very interesting point, which also makes me wonder if it would be possible to simulate…
A neural network AI that has generations, with DNA, genes, chromosomes etc…
Neuroevolution: Evolving Neural Network with Genetic Algorithms | by Roopal Tatiwar | Medium
The Evolution of Intelligence: A 600 Million Year Story.
From One Cell to a Multicellular Organism. Part 1.
What happens after 1000 hours of Evolution? Recreating the largest evolution experiment ever
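The "generations, with DNA, genes" idea above can be sketched in a few dozen lines. This is a minimal, illustrative neuroevolution loop (a genetic algorithm evolving the weights of a tiny network to fit XOR); the population size, mutation rate and network shape are all arbitrary toy choices, not recommendations:

```python
import math
import random

# Toy neuroevolution: a genome of 9 floats encodes the weights of a tiny
# 2-2-1 network, and a genetic algorithm (selection + mutation) evolves a
# population of genomes to fit XOR. All parameters here are arbitrary.

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(genome, x):
    # genome layout: hidden weights + biases (6 values), output weights + bias (3)
    w = genome
    h0 = math.tanh(x[0] * w[0] + x[1] * w[1] + w[2])
    h1 = math.tanh(x[0] * w[3] + x[1] * w[4] + w[5])
    return math.tanh(h0 * w[6] + h1 * w[7] + w[8])

def fitness(genome):
    # negative squared error over the four XOR cases (higher is better)
    return -sum((forward(genome, x) - y) ** 2 for x, y in XOR)

random.seed(0)
pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(100)]
for generation in range(300):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                          # truncation selection
    pop = parents + [                           # elitism + mutated offspring
        [w + random.gauss(0, 0.3) for w in random.choice(parents)]
        for _ in range(80)
    ]

best = max(pop, key=fitness)
print("XOR outputs:", [round(forward(best, x), 2) for x, _ in XOR])
```

Real neuroevolution systems (e.g. NEAT, covered in the Medium article above) also evolve the network topology, not just the weights, which is much closer to the "DNA and chromosomes" picture.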
This makes me want to learn programming and attempt these things, but realistically… not even veteran specialists in the field have pulled off something like this… I don't know if I want to watch hundreds of “hello world” video tutorials…
I’m still daydreaming and superficially looking at articles and news about this stupid thing, for example:
Neuromorphic computing - Wikipedia
Optical neural networks: progress and challenges | Light: Science & Applications
The Next Generation Of Brain Mimicking AI
People quickly realized that the most efficient way of making/using neural networks (deep learning/machine learning algorithms) is to use physical neural networks with adaptable, changing logic gates.
Not so dissimilar from the human brain, in fact.
And my brain went drifting, trying to think how I would make every aspect of a physical body of a neural network like ours.
A morphological brain, a morphological growing body, a morphological energy system etc…
Plus, if the most efficient way of making artificial intelligence is making physical bodies instead of programs on conventional computers, then it may be easier to stop people from fearing a skynet situation. It can’t upload itself, because it requires too much processing power to exist. No different from human brains…
… And that also concerns me… If they would work just like human brains, feeling pain, sadness and all the other feelings. Then it wouldn’t be different from enslaving another human…
… Which made me remember of the “Ex Machina” movie:
Ex Machina (4/10) Movie CLIP - How Ava Was Created (2015) HD
… Which makes me wonder:
We are not making an “artificial intelligence” per se, but a synthetic human intelligence.
There is always the thought that if there are sentient extraterrestrial life-forms, they can be so extraordinarily different from us that we may not even consider them life-forms.
And the closest thing to an alien would be artificial intelligence, an intelligence that supposedly, wouldn’t work like us, that it would be completely different from us.
Yet…
Well, while daydreaming about this subject and looking around the internet, I remembered about the self-assembling wires videos that I saw a reeeeeally long while ago.
It is pretty simple in essence: using nanowires and/or nanoparticles, the electric current essentially drives an electrodeposition, layering material atom by atom onto an electrode until the suspension either runs out of material or forms a dendrite. The current thus builds that connection along the shortest path possible.
That, in turn, would look and work just like neurons.
The video uses thousands of volts, but at nanometric distances you would need a few volts at most.
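A toy way to see the "shortest path / dendrite" behavior is a biased diffusion-limited aggregation: particles random-walk toward an electrode and stick wherever they first touch the growing deposit, producing branchy, neuron-like structures. This is a cartoon of the physics, not a model from any of the papers below; the grid size, bias strength and particle count are made up:

```python
import random

# Crude sketch of field-driven electrodeposition: particles drift toward a
# bottom electrode and stick where they first touch the deposit, producing
# dendrite-like branches (a biased diffusion-limited aggregation).

random.seed(1)
W, H = 40, 40
stuck = {(x, 0) for x in range(W)}          # bottom row = the electrode

def neighbors(p):
    x, y = p
    return [((x + 1) % W, y), ((x - 1) % W, y), (x, y + 1), (x, y - 1)]

for _ in range(400):                        # release 400 particles
    x, y = random.randrange(W), H - 1       # each enters at the top
    while True:
        if any(n in stuck for n in neighbors((x, y))):
            stuck.add((x, y))               # touched the deposit: stick here
            break
        # biased random walk: the "field" pulls particles downward (weight 3)
        dx, dy = random.choices([(1, 0), (-1, 0), (0, 1), (0, -1)],
                                weights=[1, 1, 1, 3])[0]
        x, y = (x + dx) % W, max(0, min(H - 1, y + dy))

height = max(y for _, y in stuck)
print(f"deposit reached height {height} of {H}")
```

Weakening the downward bias makes the deposit sparser and more dendritic, which is roughly what lowering the field does in the real experiments.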
Brain-Like (Neuromorphic) Computing - Computerphile

Sources: https://arxiv.org/pdf/1712.09497 We built a ‘brain’ from tiny silver wires. It learns in real time, more efficiently than computer-based AI
https://youtu.be/xuL2yT-B2TM (maybe you could use this to continuously make the structure to change or even “hard reset” to the start?)
Fabrication and Training of 3D Conductive Polymer Networks for Neuromorphic Wetware
Teslaphoresis: Nanotube wires self-assemble under the influence of Tesla Coil
Self-Assembled Wires in RamZland!⚡️ #STEM #RamZland #HighVoltage #ScienceForKids
Self-Assembling Wires in Petri Dish with High Voltage
Self-Assembling Wires | 10kV High Voltage Demonstration
Self-Assembling Wires That Plug Themselves In - YouTube
https://www.youtube.com/watch?v=WtA-CEzKRL8
Toward Multifunctional Liquid Metal Composites
Self-Assembling Wires That Can Solve a Maze!
Directed Assembly of Liquid Metal–Elastomer Conductors for Stretchable and Self‐Healing Electronics
https://www.youtube.com/shorts/JC_SoZj5Na8
Nanotubes assemble! Rice introduces Teslaphoresis
https://www.youtube.com/shorts/2n98-bJ90js
Teaching self-assembling structures a new trick | MIT News
Small diamonds create self-assembling wires - EE Times Asia
Self Assembling Wires? Levitating Motor? LATITY-005?!
Self-Assembling Polymers Provide Thin Nanowire Template | Lab Manager
Researchers Use World's Smallest Diamonds to Make Wires Three Atoms Wide | Lab Manager
Linear self-assembling of porphyrins via metal coordination
Self-Assembling Wires – Stanford Complexity Group | Systems Community of Inquiry
All-memristive neuromorphic computing with level-tuned neurons - IOPscience
Neuromorphic computing mimics important brain feature
A neuromorphic computing architecture that can run some deep neural networks more efficiently
I’m just absolutely baffled by it, because it is essentially how you make extremely complex systems out of incredibly simple rules. It was always right in front of us, and we simply looked at it from a different perspective…
Which also makes me wonder: if you used biocompatible conductive materials, wouldn’t that mean that you can (maybe, perhaps, who knows) literally add neurons to your brain?
Maybe this could help people with brain damage…? 🤔
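On the "complex systems from incredibly simple rules" point a few lines up, the cheapest demonstration I know of is an elementary cellular automaton: one lookup-table rule, applied locally, generating intricate global structure. A purely illustrative sketch (Rule 30):

```python
# Elementary cellular automaton (Rule 30): each cell's next state is looked
# up from the 3-bit pattern (left, self, right) in the rule number's bits.
# A single seed cell grows into a famously complex triangular pattern.

RULE = 30
WIDTH, STEPS = 63, 30
row = [0] * WIDTH
row[WIDTH // 2] = 1                         # single seed cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

Changing `RULE` to other 8-bit values gives everything from boring repetition to apparent randomness, which is the whole "simple rules, complex systems" lesson in one variable.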
I was also wondering about how much material and which specific materials I would need to make this neuromorphic computer:
I asked both for DeepSeek and ChatGPT:
“To transform your conductive particle system into a neuromorphic computer, you need to incorporate features that emulate biological neural networks—adaptive synapses, neurons with non-linear activation, memory, and learning mechanisms. Here’s a structured approach:
Key Additions for Neuromorphic Functionality
- Synaptic Plasticity (Adaptive Connections)
- Memristive Particles: Use materials like TiO₂, Ag₂S, or conductive polymers that change resistance based on voltage history. These act as artificial synapses, enabling Hebbian learning (strengthening/weakening connections through activity).
- Dynamic Pathways: Apply voltage pulses to modify particle arrangements (e.g., via electrochemical reactions) to mimic spike-timing-dependent plasticity (STDP).
- Neuron-like Non-linearity
- Threshold Materials: Introduce particles with non-linear I-V characteristics (e.g., vanadium dioxide or chalcogenides) that exhibit abrupt conductivity changes at specific voltages, mimicking neuronal "firing."
- Network Architecture
- Structured Electrodes: Add a grid of electrodes (not just two) to create layered input/output/hidden "neuronal" layers. This allows parallel processing and complex signal routing.
- Compartmentalization: Divide the container into regions (e.g., using insulating barriers) to simulate dendritic/axonal segregation.
- Memory Integration
- Phase-Change Materials (PCMs): Embed particles like GeSbTe (GST) that retain structural states (amorphous/crystalline) to store information long-term.
- Hysteretic Fluids: Use ferroelectric or magnetorheological fluids to preserve transient states.
- Learning Algorithms
- Voltage Patterning: Train the system with spatiotemporal voltage sequences to reinforce specific pathways (e.g., reward/punish signals via backpropagation-like feedback).
- Feedback Loops: Integrate sensors/actuators to close the loop between input data and output behavior (e.g., optical/thermal feedback).
- Energy Efficiency
- Low-Power Materials: Optimize particle conductivity and electrode design to operate at biologically plausible voltages (mV to ~1V).
- Pulsed Operation: Use sparse, event-driven signals (like biological spikes) instead of continuous currents.
Example Implementation
- Particles: Mix memristive nanoparticles (TiO₂) with phase-change (GST) and threshold (VO₂) materials.
- Liquid Medium: Use a dielectric fluid (e.g., silicone oil) to allow particle mobility while preventing unintended shorts.
- Electrodes: Deploy a 3D electrode array to stimulate/track pathways in multiple dimensions.
- Control System: Interface with a microcontroller applying STDP-like voltage pulses to "train" the network.
Challenges to Address
- Scalability: Ensuring consistent particle behavior in large systems.
- Stability: Preventing particle agglomeration or drift over time.
- Readout/Interface: Developing methods to decode the network’s state (e.g., impedance tomography or optical monitoring).
Outcome
Your system could evolve from forming static conductive paths to emulating a spiking neural network capable of unsupervised learning (e.g., pattern recognition, adaptive control). This bridges morphological computing (physical optimization) with neuromorphic principles (brain-inspired computation).
To determine the precise kilograms of materials required for building a morphological/neuromorphic computer based on conductive and memristive particles in a liquid medium, several factors must be considered: material properties, particle density, device architecture, and functional requirements. While exact quantities depend on the scale and design of the system, here’s a framework based on research insights and material synergies from the search results:
Key Components & Material Ratios
1. Conductive Particles/Nanowires
Role: Form percolation networks for charge transport.
Materials: Silver nanowires, carbon nanotubes, or graphene flakes.
Quantity: ~20–40% of the total particle volume.
Example: For a 1-liter container, ~200–400 grams (assuming particle density ≈ 1–2 g/cm³).
Design Note: Ensure particle density exceeds the percolation threshold for continuous pathways [4][10].
2. Memristive Particles
Role: Enable synaptic plasticity and memory via resistive switching.
Materials: TiO₂ (titanium dioxide), WO₃ (tungsten oxide), or Ag₂S (silver sulfide).
Quantity: ~5–15% of the particle mixture.
Example: 50–150 grams per liter.
Design Note: TiO₂-WO₃ hybrids enhance linearity and stability in conductance changes, critical for neuromorphic learning [4][12].
3. Neuron-like Non-linear Materials
Role: Mimic neuronal firing through threshold-driven conductivity changes.
Materials: Vanadium dioxide (VO₂) or chalcogenides (e.g., Ge₂Sb₂Te₅).
Quantity: ~5–10% of the mixture.
Example: 50–100 grams per liter.
Design Note: VO₂ exhibits abrupt metal-insulator transitions at specific voltages, ideal for spiking behavior [14].
4. Memory Integration Materials
Role: Long-term data retention and phase-change memory.
Materials:
GeSbTe (GST): Phase-change particles for non-volatile memory.
Ferroelectric fluids: Barium titanate (BaTiO₃) nanoparticles in oil.
Quantity: ~10–20% of the mixture.
Example: 100–200 grams per liter.
Design Note: GST’s amorphous/crystalline transitions enable multistate storage [4][12].
5. Dielectric Fluid
Role: Stabilize particle dispersion and prevent short circuits.
Materials: Silicone oil, fluorinated fluids, or ionic liquids.
Quantity: ~50–70% of the total volume.
Example: 500–700 grams per liter (fluid density ≈ 0.9–1.5 g/cm³).
Design Note: High dielectric strength fluids (e.g., silicone oil) reduce leakage currents [4][12].
6. Electrodes & Structural Additives
Role: Input/output interfaces and network control.
Materials:
Electrodes: ITO (indium tin oxide) for flexibility, platinum for stability.
Barriers: Insulating polymers (e.g., PET) for compartmentalization.
Quantity: Electrode mass depends on surface area (e.g., 1–5 grams for a 10×10 cm² grid).
Example Calculation for a 1-Liter Prototype
| Component | Material | Mass Range (grams) | Key Function |
|---|---|---|---|
| Conductive particles | Ag nanowires | 200–400 g | Charge transport |
| Memristive particles | TiO₂-WO₃ | 50–150 g | Synaptic plasticity |
| Non-linear material | VO₂ | 50–100 g | Threshold firing |
| Memory material | GeSbTe | 100–200 g | Phase-change storage |
| Dielectric fluid | Silicone oil | 500–700 g | Dispersion medium |
| Electrodes | ITO/Pt | 1–5 g | Signal input/output |
Critical Considerations
- Particle Size & Distribution: Nanoparticles (1–100 nm) ensure homogeneous dispersion and efficient switching [4][14]; larger particles (>1 µm) may cause agglomeration and instability.
- Scalability Challenges: Memristive materials like TiO₂ require precise doping (e.g., oxygen vacancies) for reliable switching [10][12]; phase-change materials (GST) need thermal management to prevent unintended state transitions [4].
- Empirical Testing: Optimize ratios via impedance spectroscopy and cyclic voltammetry to balance conductivity and memory effects [12].
- Energy Efficiency: Low-voltage operation (<1 V) reduces power consumption, critical for neuromorphic systems [4][10].
Conclusion
While exact kilogram-scale recipes remain experimental, the above framework aligns with studies on hybrid memristive systems [4][12][14]. Start with small-scale prototypes (e.g., 100–500 mL) to refine material ratios and electrode configurations. For industrial-scale production, consult nanomaterial synthesis protocols and scalable deposition techniques (e.g., sputtering, inkjet printing) highlighted in the search results [4][14].”
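To get a feel for the "memristive particle" behavior the quoted answer keeps leaning on, here is the classic linear-drift memristor model in a few lines: resistance depends on how much charge has flowed through the device, so it "remembers" its voltage history. The parameter values are my own illustrative guesses, not numbers from the quoted sources:

```python
import math

# Toy linear-drift memristor (in the spirit of the TiO2 devices mentioned
# above). The state variable w is the doped-region width; current through
# the device drifts w, which in turn changes the resistance.
# All parameter values below are illustrative, not measured data.

R_ON, R_OFF = 100.0, 16e3       # fully doped / undoped resistance (ohms)
D = 10e-9                       # film thickness (m)
MU = 1e-13                      # effective dopant mobility (m^2 s^-1 V^-1)

w = 0.1 * D                     # doped-region width (the state variable)
dt, f = 1e-4, 10.0              # timestep (s) and drive frequency (Hz)

resistances = []
for step in range(2000):        # two full cycles of the drive
    v = 2.0 * math.sin(2 * math.pi * f * step * dt)
    r = R_ON * (w / D) + R_OFF * (1 - w / D)   # state-weighted resistance
    i = v / r
    w += MU * R_ON / D * i * dt                # linear dopant drift
    w = max(0.0, min(D, w))                    # clamp to physical bounds
    resistances.append(r)

print(f"resistance swung between {min(resistances):.0f} and "
      f"{max(resistances):.0f} ohms under the same +/-2 V drive")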
ChatGPT suggested basically the same thing, but it also added the necessity of a graphene oxide addition to work as a transistor:
“In Summary
- To Convert Flash Graphene to Graphene Oxide:
- Reagents Needed: Concentrated sulfuric acid (H₂SO₄), phosphoric acid (H₃PO₄), potassium permanganate (KMnO₄), hydrogen peroxide (H₂O₂), and deionized water.
- Process: Mix graphene in the acid mixture, slowly add KMnO₄ under cooling, let the oxidation proceed, then quench the reaction with water and H₂O₂, and finally wash and centrifuge to purify the GO dispersion.
- To Build a Morphological Computer:
- Suspension: Disperse your (reduced) graphene oxide flakes in an electrically insulating fluid (such as silicone oil).
- Electrodes: Place metallic electrodes at the boundaries of a container holding this suspension.
- Functionality: Under an applied electric field, the GO/rGO flakes self-organize (for example, forming a conductive “shortest path”) which can be “read” as the computed output.
This combination leverages the ease of making graphene (via the flash method), its conversion to GO through chemical oxidation, and the use of GO’s self-assembly behavior in an insulating medium to create a rudimentary morphological computer.
References:
(discussing 2D materials and transistor-based neuromorphic devices)
(general background on organic and molecular electronics)
(NOMFET and related bio-inspired transistor devices)”
Even assuming that both chatbots are completely correct in every assertion, how can I make this (stupid cheap self-assembling) computer work?
How do I make a program to train, modify and interpret all of these signals reliably?
Well, in theory, a computer is a machine that can simulate any other machine, even one with more processing power.
You could simulate all the computational power of a supercomputer on a conventional calculator, but it would take decades to do so.
However, I was wondering about the possibility of using this neural network to simulate a conventional computer and use it like any other computer.
You would be able to literally build a computer by mixing a bunch of powders together. 😐
By the way, there are around 100 miles (160 kilometers) of interconnects in microprocessor chips.
How much ‘wiring’ is inside a 4th Gen Intel Xeon Scalable processor? - YouTube
I say this because I think you would need a similar amount of electrodes connecting the Neuromorphological Biomimetic Computer (NBC) to the external world, both for training and monitoring.
So, assuming copper wires at 43 AWG (0.056 mm diameter), 100 miles of wire would weigh around 3.5 kilograms. At 34 AWG (0.160 mm), it would weigh around 29 kilograms.
The worst part would be wiring everything together… You would need to wire 160 kilometers of this crap…
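The mass figures above are easy to sanity-check: mass = length × cross-sectional area × density of copper (~8.96 g/cm³). A quick check using the AWG diameters from the text:

```python
import math

# Sanity-check of the copper-wire mass estimate:
# mass = length * cross-sectional area * density of copper.

COPPER_DENSITY = 8960.0                     # kg/m^3
LENGTH = 160_000.0                          # m (~100 miles of interconnect)

def wire_mass_kg(diameter_mm):
    area = math.pi * (diameter_mm * 1e-3 / 2) ** 2   # cross-section in m^2
    return LENGTH * area * COPPER_DENSITY

print(f"43 AWG (0.056 mm): {wire_mass_kg(0.056):.1f} kg")
print(f"34 AWG (0.160 mm): {wire_mass_kg(0.160):.1f} kg")
```

For 0.056 mm this gives about 3.5 kg, and for 0.160 mm about 29 kg, so the estimates are the right order of magnitude either way.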
How would you make the electrode array?
I thought of printing electrode arrays onto PVA foils, then adding a layer of the powder that would become the NBC, wiring everything, dissolving the PVA with a mix of water and alcohol, and finally removing all the water and adding the silicone oil, leaving behind a super dense layer of electrodes.

I thought of making and then wiring everything like the image above: you would organize a bunch of wires side by side on a table, add silver tape to keep everything in place, and connect one end to the electronic systems and the other to the electrode array.
The thing is, I think you could use conventional rubber-insulated wires (small ones) for the connections from the electrodes to the outside. The important part inside the NBC box would already obey the 160 km rule with minimal weight.
About the design of the electrodes themselves, I don’t know which design to choose.


Sources: Phase-shift feedback control for dielectrophoretic micromanipulation (PDF) Stretchable Micro-Electrode Arrays for Electrophysiology
This was the same electrode array used on that video from the Thought Emporium channel.
Working Neuron Arrays! | DOOM Neurons Part 2

What Computers are Made From (I found these randomly on pinterest)
Speaking of self-assembling wires… Now that I think about it, what about the possibility of self-assembling actuators and/or other structures?
Well, I can’t find anything somewhat simpler, like the neuromorphic brain. And yes, I asked ChatGPT and DeepSeek, and I obviously got washed-out answers that don’t try to think weirdly like humans do.
(Here I will only list the ones that are relevant for structures; you still need to check the neuromorphic brain computer links I listed earlier.)
Self-assembly based manufacturing - Wikipedia
Programmable matter - Wikipedia
3D microfabrication - Wikipedia
ECAM 3D Printing Explained: Advanced Metal 3D Printing by California Technical Media (I posted this one before, but you could use other materials that aren’t copper)
While talking about electrodeposition, the “best” idea I had was to take a bunch of electrodes (like carbon fibers), submerge them in a mixture of flexible insulating materials and conductive materials (like the neuromorphic brain thingy), pass current to align the particles, and melt everything in place. That would work like a dielectric elastomer actuator. I think. Maybe. Supposedly. Who knows?
I also thought of using a mold with multiple fiber electrodes in an insulating liquid (like silicone oil), dispersing the materials and making some kind of foam/aerogel, then thermosetting everything by partially melting it, as in sintering, and finally infusing the outer shell with a mix of epoxy and glass fiber. But it would be interesting to make everything in a single “growth”.
I did talk a lot about electrodes in these ideas, but I was wondering about the possibility of using induction coils to induce a high voltage and/or high current inside the mold, so everything would be electrically stimulated in a uniform way, not allowing all the good stuff to accumulate on a single electrode.
Resonance Experiment! (Full Version - With Tones) (maybe said induction coils could work on sound frequencies in order to make the powder/fluid to organize in specific ways, just like salt grains organize under specific frequencies)
Seeing Sound With Sand | Soundwaves: The Symphony Of Physics | BBC Earth Science
CYMATICS: Science Vs. Music - Nigel Stanford (it does make me wonder whether powders of different densities would move a little more or a little less, allowing to-be-sintered powders to organize into microlattices)
8 CRAZY experiments with SOUND!
Cymatics full documentary (part 1 of 4). Bringing matter to life with sound
Small cubes that self-assemble
Investigation Seeks to Create Self-Assembling Materials
A swarm of a thousand kilobots self-assembling into different shapes. (Rubenstein et al., 2014).
Porosity and Permeability Demo
Electroplating Carbon Fibers Can Have Interesting Results | Hackaday
Self-Created Metal Composites | Hackaday.io
SMiA. Structural Morphology in Architecture Summer Workshop. 2016
System Self Organization - YouTube
Atomic-scale combination of germanium-zinc nanofibers for structural and electrochemical evolution
Synthetic morphology with agential materials | Nature Reviews Bioengineering
(PDF) Nanoscale self-assembly: Concepts, applications and challenges
Recent Developments in Self-Assembling Multi-Robot Systems | Current Robotics Reports
Self-Assembly in Micro- and Nanofluidic Devices: A Review of Recent Efforts
A self-organizing robotic aggregate using solid and liquid-like collective states
Atomic Brain? - Computerphile (This one reminds me of the positronic brains in Isaac Asimov’s novels. He never delves into details, but he does mention that the Three Laws are controlled by coils. So it could be that the “brains” of the robots worked just like this suspended atom, but using positrons instead. Even though his novels always explore how the Three Laws aren’t foolproof, I do wonder how you could add external neural networks that control the positrons/atoms in these brains in order to avoid certain actions and learn faster.)
Can We Build an Artificial Hippocampus?
A Surprising Way Your Brain Is Wired
Building Blocks of Memory in the Brain
How Your Brain Organizes Information
Brain Criticality - Optimizing Neural Computations
How does electricity find the "Path of Least Resistance"?
Gradient Descent vs Evolution | How Neural Networks Learn
As much as I’d love to test this out, I don’t think I have enough knowledge or resources to do so.
Non-invasive Brain-Computer Interface:
SPEAKING OF BRAINS: I don’t have one, that’s why I’m writing this project log.

(side and front view of me)
Brain–computer interface - Wikipedia
In any case, brain–computer interfaces are really hard to make without being invasive.
New Brain-Computer Interface (BCI) Allows Man with ALS to 'Speak’ Again Using Brain Implant and AI
But there is research™.
In essence, you need two things:
- An imaging system that can measure neurons directly.
- Neuroimaging - Wikipedia
- Magnetic resonance neurography - Wikipedia
- Positron emission tomography - Wikipedia
- Magnetic resonance imaging - Wikipedia
- Diffuse optical imaging - Wikipedia
- Cranial ultrasound - Wikipedia
- Magnetoencephalography - Wikipedia
- Functional ultrasound imaging - Wikipedia
- Electroencephalography - Wikipedia
- Neuropixels - Wikipedia
- A feedback System that can stimulate specific areas of the brain.
- Neurostimulation - Wikipedia
- Non-invasive cerebellar stimulation - Wikipedia
- Transcranial pulsed ultrasound - Wikipedia
- Transcranial magnetic stimulation - Wikipedia
- Cranial electrotherapy stimulation - Wikipedia
- Transcranial direct-current stimulation - Wikipedia
- Transcranial random noise stimulation - Wikipedia
(I asked ChatGPT and DeepSeek and both said that I would need a voxel resolution of at least 3mm for a brain-computer interface)
The first thought I had was either making a microwave tomography machine or a cryogen-less MRI machine. How hard can it be, riiiight?
However, due to how it works, it is really difficult to make it mobile and/or compact.
But there is research™.

Sources: https://www.fanaticalfuturist.com/2020/09/university-researchers-unveil-the-worlds-first-portable-mri-machine/ MR SOLUTIONS BRINGS DRY MAGNET TO 9.4T MRI SYSTEM - QMISG
Both are cryogen-free (they don’t use liquid helium/nitrogen) and both weigh around 700 kg.
I also had a loooong train of thought trying to figure out how to focus microwaves/radio waves into 1 mm voxels in order to make a viable, compact, cryogen-free MRI/microwave tomography machine. But as you can imagine, like everything in this project log, I couldn’t find a single real-life example using those ideas.
And even if you could achieve such high resolution with microwaves, you wouldn’t be able to measure the neurons firing themselves, only the ultra-small changes in dielectric constant inside your brain.
The best tomography systems would use X-rays or electrons. The first is lethal under continuous exposure, and the second can only go through the brain if you use a particle accelerator.
What if you put your hand in a particle accelerator?
What if you put your head in a particle accelerator?
The only way to make this work would be to have these waves emitted from the center of your brain outward, which an MRI machine already does.
Well, the best and “”””simplest”””” method of imaging that seems practical to DIY is through magnetoencephalography.
It simply detects the currents that neurons produce by measuring their magnetic fields, which are on the scale of 10 to 1,000 femtoteslas, i.e. 1,000,000 to 100,000,000 times smaller than a microtesla.
Although there is the obvious option of using superconductors as magnetometers, the optically pumped magnetometer “simply” shines a laser through a vapor of an alkali metal (such as rubidium, cesium or potassium) and measures the changes in the cloud of atoms. That one seems the most promising to me.
Synthetic-aperture magnetometry - Wikipedia
Most imaging systems are said to only be able to detect and translate the signals emitted by neurons in time frames of around 1 to 7 seconds; the best ones reach a few hundred milliseconds.
100 milliseconds can be considered fast, but have you ever played a game with 100 milliseconds of ping delay?
(imagine this in a mech)
I’m not sure if this is due to the hardware that they used, so I’m not sure if “simply” increasing the computational power would suffice…
So… Well…
If you are using an optically pumped magnetometer, the best option for brain stimulation is ultrasound, since the transducer can work using a laser instead of piezoelectric crystals.
Which is another problem with OPMs… literally ANYTHING will be detected by the system. It is so sensitive that anything electrically conductive adds a noise source to the system.
Applying magnetic fields to conductive materials induces an electrical current, and that electrical current creates its own magnetic field.
Yes, even something as vanishingly small as a neuron’s magnetic field will induce an equally vanishingly small electric current.
That only happens because magnetic fields have infinite range (if I’m not mistaken).
Which makes me wonder: if it induces such small currents, then can nanometer-sized electrodes work as magnetometers?
Converse Magnetoelectric Composite Resonator for Sensing Small Magnetic Fields | Scientific Reports It “””only””” detected 70 picoteslas at 10 hertz operation. Well, it is a passive resonator, not a detector, and since it doesn’t resonate with anything below its detection level, it is not useful for this specific application. Still, it’s simply a brilliant piece of engineering, since it can do that passively. Maybe it could be used as an indicator of the amount of magnetic noise inside a chamber.
Femtotesla Magnetic Field Measurement with Magnetoresistive Sensors This one works at 77 kelvin (-196 °C/-321 °F), liquid nitrogen temperature. You can still reach that with pulse-tube cryocoolers, like the ones made by Hyperspace Pirate. Making Liquid Nitrogen with my Homemade Cryocooler
Ultra-Compact Magnetoelectric Sensor for Femto-Tesla VLF Signal Reception - ScienceDirect
(PDF) A hybrid magnetometer towards femtotesla sensitivity under ambient conditions (this one seems interesting)
Only just now did it occur to me that I hadn’t even wondered what minimum field strength needs to be detected for a functional, “good enough” magnetoencephalogram.
ChatGPT:
“Magnetoencephalography (MEG) is a non-invasive technique that measures the magnetic fields produced by neuronal activity in the brain. These magnetic fields are extremely weak, typically ranging from 10 to 1000 femto-tesla (fT). Detecting such minute signals requires highly sensitive equipment.
Superconducting Quantum Interference Devices (SQUIDs) are currently the most sensitive magnetometers available and are commonly used in MEG systems. SQUIDs can detect magnetic fields as low as a few femto-tesla, making them suitable for measuring the brain's magnetic activity within the specified range.
Minimum magnetic field detected and measured
Research indicates that the minimum detectable dipole strength under typical experimental conditions is about 4.5 nanoampere-meters (nAm), which corresponds to magnetic fields on the order of 100 femto-tesla. This sensitivity is adequate for detecting evoked neuronal activity, which often produces dipole strengths in this range.”
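The quoted numbers are easy to sanity-check with the far-field scaling of a current dipole, B ≈ μ₀q/(4πr²). Taking q = 4.5 nAm and r = 5 cm (a source-to-sensor distance I am assuming, roughly deep cortex to scalp):

```python
import math

# Order-of-magnitude check on the quoted MEG numbers: the field of a
# current dipole q at distance r scales as B ~ mu0 * q / (4 * pi * r^2).

MU0 = 4 * math.pi * 1e-7        # vacuum permeability (T*m/A)
q = 4.5e-9                      # dipole strength (A*m), from the quote
r = 0.05                        # distance (m), an assumed scalp distance

B = MU0 * q / (4 * math.pi * r ** 2)
print(f"B ~ {B * 1e15:.0f} fT")
```

That comes out around 180 fT, which indeed lands on the order of 100 femtoteslas, consistent with the quoted sensitivity requirement.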
There are also near-infrared brain imaging and high-density-array electroencephalograms, but these can only “see” the surface of the brain instead of everything.
Well, near-infrared is also ruled out since it only monitors the blood flow in the brain…
But that also makes me wonder, which part of your brain needs to be mapped in order to make a non-invasive computer-brain interface?
ChatGPT:
“Most BCIs work by interpreting signals generated in the sensorimotor region—especially the primary motor cortex. This area, located in the precentral gyrus, is responsible for planning and executing voluntary movements. Mapping the motor cortex allows researchers to decode the user’s intended movements and translate these into computer commands, which is the core functionality behind many brain–computer interfaces.
For example, many state-of-the-art systems, such as Synchron’s stentrode, are designed to record activity from the motor cortex, making it possible to control devices (like a computer cursor or even smart home appliances) using thought alone.
https://www.thetimes.co.uk/article/brain-implant-thats-ahead-of-elon-musk-in-the-race-to-read-minds-mwfp0233n
In addition, similar approaches have been used in clinical trials and research studies to enable communication and mobility in patients with paralysis
NY neurotech company’s brain implant allows ALS patient to control Amazon's Alexa with thought
While other areas (such as the somatosensory or prefrontal cortices) can also be mapped to enhance feedback or cognitive control, the primary motor cortex remains the most critical target for creating an effective computer–brain interface.”

Sources:
Motor cortex - Wikipedia
Dorsolateral prefrontal cortex - Wikipedia
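The decoding step ChatGPT describes, turning motor-cortex activity into cursor commands, can be sketched as a simple linear decoder. Everything below is synthetic: the channel count, tuning model, and noise level are made-up stand-ins for real recordings, and real systems (including Synchron's) use far more elaborate pipelines.

```python
# Toy illustration of the decoding step behind motor-cortex BCIs:
# fit a linear map from neural firing rates to intended 2-D cursor
# velocity, then decode new activity. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 32, 500

# Assumed ground-truth tuning: each channel responds linearly to (vx, vy).
tuning = rng.normal(size=(n_channels, 2))
velocity = rng.normal(size=(n_samples, 2))          # intended movements
rates = velocity @ tuning.T + 0.1 * rng.normal(size=(n_samples, n_channels))

# "Calibration" session: least-squares fit of D so that rates @ D ~ velocity.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a new intended movement from its neural response.
true_v = np.array([1.0, -0.5])
decoded_v = (true_v @ tuning.T) @ decoder
print(decoded_v)  # close to [1.0, -0.5]
```

The point is just that once the motor cortex is mapped well enough, the math from "activity" to "command" can be as simple as a regression.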
… So all of the required parts to be mapped are literally on the surface of the brain, not requiring anything too complex... 🤡
I asked ChatGPT about high density EEG:
“The answer isn’t a single fixed number—it depends on many factors (number and placement of electrodes, quality of the head model, noise level, and the specific inverse method used). In practical, non‐invasive HD EEG studies using advanced source localization (e.g., methods such as sLORETA, LAURA, or other distributed approaches with individualized head models), typical localization errors are on the order of 7–10 mm. In other words, while “high‐density” EEG (using 128–256 channels) does dramatically improve the effective spatial resolution compared to traditional 19–21 channel systems, most empirical studies report that the best‐achieved accuracy (i.e. the average localization error when compared against gold‐standards such as fMRI or intracranial recordings) tends to be around 7 mm.
There have been simulation studies—and even some experimental work under ideal conditions—that suggest it might be possible to push the limits further (toward a few millimeters). However, to date no robust, reproducible human study using noninvasive scalp recordings has consistently demonstrated a practical spatial resolution as fine as 3 mm. In clinical and cognitive neuroscience applications, achieving a resolution around 3 mm remains more an aspiration (or a theoretical limit under very favorable conditions) than a routine outcome.
For example, one study comparing EEG source imaging of visual evoked potentials with high‐resolution fMRI found that the mean localization error was about 7 mm, which is sufficient to detect changes in visual field representation but not at the 3 mm level.”
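Why EEG source localization is hard can be shown in a few lines: there are far more candidate sources than electrodes, so the inverse problem is ill-posed and must be regularized, which blurs the estimate. Below is a minimal minimum-norm estimate with a random matrix standing in for a real lead field (a real head model would come from MRI-based boundary-element modeling; the dimensions and noise are assumptions).

```python
# Minimal sketch of distributed EEG source localization (minimum-norm
# estimate): scalp potentials x = L @ s + noise, with far more candidate
# sources than electrodes, solved via Tikhonov regularization.
# The lead field L here is random, standing in for a real head model.
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_sources = 128, 2000

L = rng.normal(size=(n_electrodes, n_sources))  # fake lead field

# One active source; everything else silent.
s_true = np.zeros(n_sources)
s_true[700] = 1.0
x = L @ s_true + 0.01 * rng.normal(size=n_electrodes)

# Regularized minimum-norm inverse: s_hat = L.T (L L.T + lam I)^-1 x
lam = 1.0
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_electrodes), x)

peak = int(np.argmax(np.abs(s_hat)))
print(peak)  # the peak lands on the true source, but the estimate is blurred
```

Methods like sLORETA add depth weighting and statistical standardization on top of this, but the regularized blur is exactly where the ~7 mm localization errors quoted above come from.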
Assuming that I am unable to replicate the femtoscale magnetometers I mentioned earlier, maybe a mix of cryogen-free compact pulsed MRI, ultra-high-density EEG arrays (with thousands to millions of electrodes) and microwave tomography could make a good-enough system.
In the case of the compact MRI, I should use a “phased” MRI machine. It would work just like the rotating ones, but steering electronically instead of mechanically. On top of that, I would need to use the technique used for real-time scanning, since normally the MRI machine has to wait for the spinning protons to relax and then detect the radio/microwave signal they emit.
Even so, you could use the picoscale magnetic field resonator to help amplify the signal of the collapsing magnetic fields, assuming the MRI machine doesn’t do that already.
MRI pulse sequence - Wikipedia
Pulsed field gradient - Wikipedia
Phased Arrays - Steering and the Antenna Pattern | An Animated Intro to Phased Arrays
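The steering idea from that phased-array video comes down to one formula: to point a linear array at angle θ, feed element n with a phase offset of n·2πd·sin(θ)/λ. This is generic antenna-array math, not anything MRI-specific; whether it transfers cleanly to a "phased" MRI coil is my speculation above, not established practice.

```python
# Generic phased-array steering math (as in the linked video): to steer
# a linear array toward angle theta, feed element n with a phase offset
# of n * 2*pi*d*sin(theta)/lambda. Not MRI-specific.
import math

def steering_phases(n_elements: int, spacing_m: float,
                    wavelength_m: float, theta_deg: float) -> list[float]:
    """Per-element phase offsets (radians) to steer the main beam."""
    dphi = (2 * math.pi * spacing_m
            * math.sin(math.radians(theta_deg)) / wavelength_m)
    return [n * dphi for n in range(n_elements)]

# 8 elements, half-wavelength spacing, steered 30 degrees off boresight
phases = steering_phases(8, 0.5, 1.0, 30.0)
print([round(p, 3) for p in phases])  # steps of pi/2 per element
```

At θ = 0 the offsets are all zero (broadside), which matches the intuition that no steering means all elements fire in phase.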
If it is pulsed, then maybe it could also double as a transcranial magnetic stimulation device.
Home Made MRI 2+1/2 | Hackaday.io
Imaging a brain thinking, using a new MRI technique.
Deep Learning Algorithms for Brain Disease Detection with Magnetic Induction Tomography (there is also magnetic induction tomography itself, which seems promising for surface scanning, but there isn’t much material on it; the Wikipedia page only describes what it is)
Rapid Electromagnetic Induction Imaging With an Optically Raster-Scanned Atomic Magnetometer
Can MRI Be Used as a Sensor to Record Neural Activity? - PMC.
Direct magnetic resonance detection of neuronal electrical activity - PMC
Five steps to make MRI scanners more affordable to the world
MRI for all: Cheap portable scanners aim to revolutionize medical imaging | Science | AAAS
A low cost, portable prototype MRI machine
A portable scanner for magnetic resonance imaging of the brain | Nature Biomedical Engineering
Advance Low-field Portable Magnetic Resonance Imaging (MRI) by Applying Electromagnetics Concepts
Portable scanner could boost point-of-care brain MRI – Physics World
Whole Body Sodium MRI at 0.5 Tesla Using Surface Coil and Long Echo Time Sequence