ThunderScope

An Open Source Software Defined Oscilloscope

The goal of this project is to design and build an open source, PC-connected alternative to low-cost benchtop 1000-series oscilloscopes that is competitive on both performance and price. To get there, the scope must deliver at least 100 MHz of bandwidth on four channels at a price similar to other entry-level scopes.

 I started this project sometime in 2018 and have been working on it ever since. From the very beginning, I've planned to release this project as open source, but fell prey to perhaps the most classic excuse open source has to offer: "I'll release it when I'm done". And so, the project moved forward, past various milestones of done-ness. And my fear of showing not just my work, but the (sometimes flawed, and always janky) process behind it kept me making the same excuse. In doing so, I've missed out on the input of the open-source community that I've spent so long lurking in, spent nights banging my head against problems that could have been spotted earlier, and slowed down the project as a whole.

"The best time to open source your project was when you started it, the second best time is now"


The project is now in a near-completed state and is released as open source on GitHub under an MIT license. I will be making a series of project posts here detailing all the failures, fixes, and lessons learned, in chronological order. Looking back to when I was first learning about hardware by following open source projects, I could learn a bit from finished layouts and schematics, but I learned the most from blog posts and project logs that described the problems faced and how they were solved. I wish to do the same for those just starting out in this amazing field, and hopefully also release an excellent oscilloscope for them to use on their electronics journey! If you're interested, sign up at Crowd Supply to be notified when the campaign starts!

  • Crowd Supply Launch

    Aleksa • 09/20/2024 at 17:10 • 0 comments

    Six years ago I had the idea to make an open source oscilloscope, a scope I wish I had when I started playing around with electronics. Three years ago it was a working proof of concept that I documented online, hoping I'd find people who were interested in making it happen. I did, and I can't express how grateful I am for it! Through our shared efforts, I'm so proud to announce that ThunderScope is finally ready for your lab bench: https://www.crowdsupply.com/eevengers/thunderscope

  • Peak[ing] Performance - 50 Ohm Mode Bandwidth

    Aleksa • 08/04/2024 at 19:03 • 0 comments

    Had to split this one into 50 Ohm and 1 MOhm Paths so it wouldn't be too long - and it still ended up being half an hour!


  • When Zero Isn't Zero: DC Offsets in Oscilloscopes

    Aleksa • 07/26/2024 at 20:14 • 0 comments

    Turns out the arrows did work last time? Either that or it was the extra jokes I added in. Anyway, this one doesn't have much intentional humor but the real joke is my slow descent into madness told over hours of videos on scope front ends.

  • Low Frequency Path - A Crossover Double-Cross

    Aleksa • 07/21/2024 at 18:18 • 0 comments

    Thinking about the poor intern that wrote an arrow detection algorithm for the auto-select thumbnail option. Wonder if the arrow thing actually works? We'll find out!

  • Real Scopes Have (Derating) Curves

    Aleksa • 07/17/2024 at 00:49 • 0 comments

    A clickbait title for a very dry technical video post. I have included one [1] gag for your viewing pleasure.

  • I can't write good, so I'm making videos now

    Aleksa • 07/14/2024 at 15:03 • 0 comments

    I'm starting a series of technical documentation videos that explain every component in the ThunderScope front end - focusing on a different aspect of the circuit every time, groundhog day style. Hope you enjoy it - or at least find it useful for understanding oscilloscopes.

  • FPGA Module: Extreme Artix Optimization

    Aleksa • 04/25/2022 at 02:33 • 1 comment

    It's been a while since I posted one of these! I've got a few days before another board comes in, so I figured I'd post a log before I disappear into my lab once again. Hardware-wise, we left off after the main board was finished. This board required a third-party FPGA module, which had a beefy 100k logic element Artix-7 part as the star of the show, co-starring two x16 DDR3 memory chips.

    But wouldn't it look better if it was all purple? The next step was to build my own FPGA module, tailored specifically to this project.


  • Demo Video!

    Aleksa • 11/07/2021 at 00:01 • 0 comments

  • Software Part 2: Electron, Redux and React

    Aleksa • 10/30/2021 at 19:12 • 1 comment

    Despite the name of this project log, we aren't talking about chemistry! Instead, I welcome back my friend Andrew, to whom I now owe a couple pounds of chicken wings for recounting the war stories behind the software of this project!

    We’re coming off the tail end of a lot of hardware, and some software sprinkled in as of the earlier post. Well my friends, this is it, we’re walking down from the top of Mount Doom, hopefully to the sound of cheering crowds as we wrap up this tale. Let ye who care not for the struggles of the software part of “software defined oscilloscope” exit the room now. No, seriously, this is your easy out. I’m not watching. Go on. Still here? Okay.

    Let’s get right to the most unceremonious point, since I’m sure this alone is plenty to cause us to be branded Servants of Sauron. The desktop application, and the GUI, is an Electron app. I know, I know, but hear me out. For context, Electron is the framework and set of tools that runs some of the most commonly used apps on your computer. Things like Spotify and Slack run on Electron. It is very commonly used and often gets a bad rep because of various things like performance, security, and the apps just not feeling like good citizens of their respective platforms.

    All of these things can be true. Electron is effectively a Chrome window running a website, with some native OS integrations for Windows, macOS, and Linux. It also provides a way for a web app to have deeper integration with some core OS functions we alluded to earlier, such as Unix sockets and Windows named pipes. Chrome is famously not light on memory, this much is true, but it has gotten significantly better over the last few years and continues to improve. Much the same can be said for security: between Chrome improvements that get upstreamed to Chromium and Electron-specific hardening, poor security in an Electron app is now often just developer oversight. The most pertinent point is how good a citizen the app is on its platform. Famously, people on Mac expect such an app to behave a certain way. Windows is much the same; though the visual design language is not as clearly policed, many of the behaviours are. Linux is actually the easiest, since clear definitions don't really exist. Funnily enough, this has led to the Linux community being one of the largest adopters of Electron apps. After all, they get apps they may otherwise not get at all.

    As much as I would love to write a book containing my thoughts on Electron, I am afraid that's not what this blog calls for. So, in quick summary, why Electron for us, a high-speed, performance-sensitive application? I will note this: none of us on the team were web developers prior to starting. It is very often the case that when web developers or designers switch over to application development, they will use Electron in order to leverage their existing skills. This is good, mind you, but it was not the case for us. We needed an easy way to create a cross-platform application that could meet our requirements. In trying to find the best solution, I discovered two facts. Fact the first: many other high-speed applications are beginning to leverage Electron. Fact the second: integration with native code on the Electron side is not nearly as prohibitive as I initially thought. So, 'twas on a fateful noon that I suggested to our usual writer, Aleksa, that we should give Electron a whirl. I got laughed at. Then came the comically necessary "Oh wait, no, you're serious". I got to work, making us a template to start from and proving the concept. That's how we ended up here.


  • Software Part 1: HDL, Drivers and Processing

    Aleksa • 10/21/2021 at 01:07 • 0 comments

    We've gone through a lot of hardware over these last 14 project logs! Before we leave the hardware hobbit hole to venture to software Mount Doom, let's take a look at the map of Middle-earth that is the block diagram of the whole system.

    The first block we will tackle is the FPGA. The general structure is quite similar to the last design: ADC data comes in, gets de-serialized by the SERDES, and is placed into a FIFO, while scope control commands sent from the user's PC are converted to SPI and I2C traffic. Since we don't have external USB ICs doing the work of connecting to the user's PC, this next part of the FPGA design is a little different.

    There is still a low-speed and a high-speed path, but instead of coming from two separate ICs, both are handled by the PCIe IP. The low-speed path uses the AXI-Lite interface, which goes to the AXI_LITE_IO block either to fill a FIFO which supplies the serial interface block, or to control GPIOs which read from the other FPGA blocks or write values to the rest of the board. On the high-speed path, the datamover takes sample data out of the ADC FIFO and writes it to the DDR3 memory through an AXI4 interface, and the PCIe IP uses another AXI4 interface to read the sample data back out of the DDR3 memory. The reads and writes to the DDR3 memory from the AXI4 interfaces are managed by the memory interface generator. The memory here serves as a circular buffer, with the datamover always writing to it and the PCIe IP always reading from it. Collision prevention is done in software on the PC, using GPIO data from the low-speed path to determine if it is safe to initiate a read.
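
    To make that last point concrete, here is a rough sketch of the kind of check the PC-side software can make before starting a transfer. This is illustrative Python, not the actual ThunderScope driver code, and the helper names and buffer sizes are made up for the example.

```python
# Hedged sketch of host-side collision prevention for the circular DDR3 buffer.
# The FPGA datamover writes ADC samples continuously; the host reads a block
# over the high-speed path only once the write pointer (reported over the
# low-speed GPIO path) shows the block is fully written and won't be
# overwritten mid-transfer. All names and sizes here are hypothetical.

BUFFER_SIZE = 256 * 1024 * 1024   # capture buffer in DDR3, bytes (example)
BLOCK_SIZE = 8 * 1024 * 1024      # one host read, bytes (example)
MARGIN = 2 * BLOCK_SIZE           # headroom so the writer can't lap the reader

def safe_to_read(read_offset: int, write_offset: int) -> bool:
    # Distance the write pointer is ahead of the requested block, modulo the
    # circular buffer size.
    written_ahead = (write_offset - read_offset) % BUFFER_SIZE
    # The whole block must already be captured, and the writer must still be
    # far enough from wrapping back into it while the read is in flight.
    return written_ahead >= BLOCK_SIZE and (BUFFER_SIZE - written_ahead) > MARGIN

def next_block(read_offset, read_write_pointer, dma_read):
    # read_write_pointer(): placeholder for a GPIO read over the low-speed path
    # dma_read(offset, length): placeholder for a read over the PCIe path
    while not safe_to_read(read_offset, read_write_pointer()):
        pass  # the writer never stops, so this clears quickly in practice
    data = dma_read(read_offset, BLOCK_SIZE)
    return data, (read_offset + BLOCK_SIZE) % BUFFER_SIZE
```

    The point of doing it this way is that only the write pointer has to travel over the low-speed path; the high-speed path never needs to stall the datamover.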


View all 24 project logs

  • 1. Assembly Video

View all instructions


Discussions

Valerio wrote 02/20/2023 at 10:07

any news about the project?

Aleksa wrote 02/20/2023 at 19:48

Yup! Testing rev3 of the baseboard now and have new FPGA modules in production. I haven't been keeping up with project updates here, but if you want to follow the development in real time feel free to join our discord server: https://discord.gg/pds7k3WrpK

EdaMilesLin wrote 12/01/2022 at 01:58

Hey bro! I know the HMCAD1520 (2 GSps ADC) is hard to buy through distribution, and I am in China! So I want to know your purchase channel!!! Hope for your reply!!

EdaMilesLin wrote 12/01/2022 at 02:05

I also DIYed a scope (1 GSps x 2, and 2 GSps x 1) that unites a logic analyzer, scope, and signal generator, using a Zynq UltraScale+ MPSoC. Now I lack 2 pcs of the HMCAD1520!!

mh-nexus wrote 11/01/2022 at 14:08

Just want to encourage you to keep going. This project is very interesting and will be quite useful, especially since it allows direct logging on a computer. I am looking forward to the CrowdSupply campaign.

If at some point in time you also plan to make a 10 bit version, that would be great!

Aleksa wrote 11/01/2022 at 14:28

Appreciate it! We're making good progress towards launching by the end of the year. As for a 10-bit version, how about 12? :) I've got a lead on a tray of HMCAD1520s, which can sample at 8, 12, and even 14 bits and are pin compatible with our current ADC (HMCAD1511).

mh-nexus wrote 11/02/2022 at 10:35

The high-res version would be interesting. I see the chip is about $100 instead of $50, so it could be an interesting option for those willing to pay for higher resolution :)

perrymail2000 wrote 11/25/2021 at 00:13

Are there plans to make this compatible with sigrok?

Aleksa wrote 12/08/2021 at 16:58

Sorry for the late reply, I didn't get notified about this comment for some reason! We're focusing our efforts on glscopeclient right now, but it should be able to support sigrok with appropriate tweaks to how the triggered data is sent to the client software.

edmund.humenberger wrote 11/20/2021 at 09:28

You probably know https://hackaday.com/2019/05/30/glscopeclient-a-permissively-licensed-remote-oscilloscope-utility/

There is a recent demo of its capabilities: https://www.youtube.com/watch?v=z0ckmC2RXi4

Aleksa wrote 11/20/2021 at 18:08

That's a great demo, and I'm seriously considering integrating it into this project. Why reinvent the wheel adding all these features when another open source project has them all? I just have to figure out how to hook the two together. I'm not a software guy myself, so I'd love to chat with a contributor behind that project to figure things out!

edmund.humenberger wrote 11/21/2021 at 10:13

Awesome hardware without proper SW support is pretty useless. I was told that any hardware project these days consists of 80% software development effort. I really suggest you find someone who is capable and >>willing<< to put in the effort to make glscopeclient work with your hardware. But finding this person will be a challenge in itself. You might be able to provide or find funding for this person.
If you succeed in making a first version usable, you can tap into the community of developers behind glscopeclient and won't have to build your own community for your firmware (which is even harder).

Your opportunity with your headless scope is that all existing cheap scopes suck at transferring waveforms >>fast<< to the PC (this is where you shine).

(PS: the 8-bit resolution is unfortunately on the low side)

drandyhaas wrote 11/19/2021 at 14:51

Hi,

Great project! As the developer of the first CrowdSupply scope ( https://www.crowdsupply.com/andy-haas/haasoscope ) I share your goals!

I've read a bit through your work here, but I have two main questions. 

What chip do you use to get 1 GB/s to the PC? 

Can a PC CPU really keep up with processing that much data in real time? Calculating the triggers, for instance, might take 10-100 floating point operations per sample. That's 10-100 GFLOPS. Do you use multiple threads/cores? GPU?

Thanks, Andy.

Aleksa wrote 11/19/2021 at 16:34

Hi Andy,

Seeing your scope succeed on Crowd Supply made me realize that people really do want open source test equipment - great work!

We've used the hard PCIe IP in an Artix-7 FPGA to reach >1 GB/s with four lanes of PCIe Gen 2. These PCIe lanes go to a Thunderbolt device controller and out to the user's PC.

Currently we only have edge triggering set up, but it does work in real time. That's because it takes just one operation per sample (subtracting one sample from the next), plus a check for trigger events that is only done once per block of samples. This will be further optimized with a proper SIMD implementation. Triggering is only one part of the pipeline, so we do use multiple threads, but since we're aiming to run smoothly on any modern quad core, we can't use too many. Rendering the waves is GPU accelerated, but should run smoothly on integrated graphics.
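
As a rough illustration of that cost model, here is a vectorized numpy sketch of block-based edge detection. It is not the actual (multithreaded, SIMD) pipeline described above, just the same idea in a few lines:

```python
import numpy as np

def find_rising_edges(block: np.ndarray, level: int) -> np.ndarray:
    """Return indices in `block` where the signal crosses `level` upward.

    Vectorized sketch of simple edge triggering: roughly one comparison per
    sample plus one pass to locate the crossings. Illustrative only, not the
    real implementation.
    """
    above = block >= level                       # one comparison per sample
    # A rising edge: the previous sample was below the level, this one is not.
    return np.flatnonzero(~above[:-1] & above[1:]) + 1

# Example with 8-bit samples and a mid-scale trigger level:
samples = np.array([120, 125, 130, 126, 140, 90, 200], dtype=np.uint8)
print(find_rising_edges(samples, level=128))     # -> [2 4 6]
```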

Please feel free to ask me any more questions you might have, and consider joining our discord! https://discord.gg/pds7k3WrpK

Cheers,

Aleksa

remydyer wrote 08/26/2021 at 16:18

Great project.
I have a question: what's the highest data rate you can actually sustain continuously with that tiny little 24 kB buffer on the USB 3 FIFO?

I ask because I know that with USB 2.0 Hi-Speed, one really needs at least about 8 MiB of SDRAM attached to the FPGA as a 'deep buffer' in order to maintain 30 MB/s without dropping packets. This isn't the FIFO chip's fault - it's the USB-IF's fault for not requiring USB root hub controllers to handle packet timing with state machines and DMA when they added Hi-Speed.

What happens all too often is that the PC OS just doesn't poll the bus sometimes, and the hardware attached to the FIFO needs someplace to store fresh data whilst the USB FIFO chip is full and waiting on the OS. None of this mattered at USB 1 speeds, but with USB 2.0, missing a packet slot by a few too many microseconds really breaks using bulk transfers to capture data steadily from an FPGA with ADCs attached.

But with USB 3, I hope, this should not be an issue - I fervently hope that the SuperSpeed bus can in fact DMA straight through to host RAM without needing the OS to service an interrupt. I haven't tried it, which is why I ask.

I found it was a very good idea to test transfer integrity by just running a free-running 24-bit binary counter on the FPGA - having it increment for each sample and streaming a copy of it through the USB FIFO all the way to a file on a (big) disk.

This helped me verify that it could reliably sustain the data rate I was shooting for, by leaving it running until it filled the disk array (about 11 TB at the time). With an incrementing counter, you can quickly scan the beginning and end of the file and very easily determine whether the counter is where it should be.
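
A minimal version of that end-of-run check might look like the following sketch, which assumes (purely for illustration) that each counter value was stored as a 32-bit little-endian word rather than a packed 24-bit field:

```python
import numpy as np

def check_counter_file(path: str, chunk_words: int = 1 << 24) -> bool:
    """Scan a capture of an incrementing counter and report any discontinuities.

    Sketch only: assumes 32-bit little-endian storage words (a packed 24-bit
    counter would need unpacking, and a counter that wraps would need the wrap
    step masked out). Memory-mapping keeps multi-TB files out of RAM.
    """
    data = np.memmap(path, dtype="<u4", mode="r")
    ok = True
    for start in range(0, len(data) - 1, chunk_words):
        # Overlap chunks by one word so boundaries are checked too.
        chunk = data[start:start + chunk_words + 1].astype(np.int64)
        bad = np.flatnonzero(np.diff(chunk) != 1)
        if bad.size:
            print(f"discontinuity near word {start + int(bad[0])}")
            ok = False
    return ok
```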

I also found that leaving an oscilloscope to 'watch' the 'FIFO full' interface pin was good practice - you're looking for pulses longer than expected, which means the data isn't flowing as expected.

In any case - I'd suggest that streaming the raw (from ADC) data straight to disk and then looking at it 'retroactively' is a very good way to do science. In my line of work, I do 1 MS/s capture of 6-8 channels at 12 bits, then just save it down to a file. This is run like the old paper 'strip chart recorders' - start it and run all day - never bothering to 'trigger' and save just the data you think might be interesting, since you miss all the stuff that happens unexpectedly. And since I'm working with things that may break very quickly without warning, it has been very helpful to have such a 'black box recorder' to go back through later to figure out exactly what went wrong. I regard the 'trigger and save' approach as basically too close to cherry picking. It's too easy to miss too much.

Anyway, since the work is in an 'industrial' environment, I have a Linux SBC in a box with the ADCs/FPGAs etc. (with a gigabit Ethernet adaptor), from which I stream the data out to where the big disks are, just using a couple of invocations of netcat. I have mostly been using the ztex.de USB-FPGA boards this way, although only the USB 2.0 ones.

The program called 'snd' (https://ccrma.stanford.edu/software/snd/ or just 'apt install snd' in any Debian) is very useful for quickly looking at arbitrary raw PCM files. It's intended for raw sound file editing, but uses memory-mapped I/O and can accept data with an arbitrary number of channels, format, and sample rate. It can seem to 'lock up' if you open a very large file and zoom 'all the way' out - but this is because it internally scans the whole file and makes a low-res map of it. It may take a while, but when it's done you can then zoom right in anywhere - feels a bit like using Google Earth.

For actual processing/data extraction, you can just use numpy from Python. Just open the raw PCM file with memory-mapped I/O and let the OS kernel worry about chunking, loading, and unloading it through memory. It just looks like an array to you (there's a package called tqdm that easily adds nice progress bars with ETAs, great when you're chewing through multi-TiB data files). This usually results in performance quite close to disk read speed, depending on how much processing you do. Profile and use Cython etc. where it matters, if it does.
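
For example, a minimal sketch of that memmap approach; the filename, dtype, and channel count below are placeholders to adapt to however the capture was actually written:

```python
import numpy as np

# Treat a huge raw capture as an array without reading it all into RAM.
# dtype, channel count, and filename are assumptions for illustration.
raw = np.memmap("capture.bin", dtype="<i2", mode="r")   # e.g. 16-bit samples
nchan = 8
frames = raw[: len(raw) // nchan * nchan].reshape(-1, nchan)   # (time, channel)

# The kernel pages data in and out as you touch it, so processing runs at
# close to disk read speed even on very large files.
chunk = 10_000_000
for start in range(0, frames.shape[0], chunk):
    block = frames[start:start + chunk]
    print(start, block[:, 0].mean())   # e.g. running mean of channel 0
```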

I have also got a setup which does the whole 'look for a trigger and save so many seconds of capture' routine, but that was at a much lower data rate with NI hardware and software. Using a multithreaded software architecture, with separate threads and queues to pass data between them, was key there, as was assembling the data into fairly large 'blocks' to handle at once. The first thread handled 'catching' the data and chunking it up, the second looked for trigger conditions and managed a ring buffer so that data before the trigger could also be saved, and the third caught the 'collected' data and saved it out to a file.
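
In outline, that pipeline looks something like the following generic sketch, with placeholder acquire/trigger hooks standing in for the real hardware interface (the original setup used NI tooling, not Python):

```python
import collections
import queue
import threading

blocks_in = queue.Queue(maxsize=64)    # capture thread -> trigger thread
blocks_out = queue.Queue(maxsize=64)   # trigger thread -> writer thread
PRETRIGGER_BLOCKS = 16                 # how much history to keep before a trigger

def capture_loop(acquire_block):
    # Thread 1: pull fixed-size blocks from the hardware and pass them on.
    while True:
        blocks_in.put(acquire_block())

def trigger_loop(is_triggered):
    # Thread 2: keep a ring buffer of recent blocks so pre-trigger data
    # survives, and flush it downstream when a trigger condition is seen.
    history = collections.deque(maxlen=PRETRIGGER_BLOCKS)
    while True:
        block = blocks_in.get()
        history.append(block)
        if is_triggered(block):
            while history:
                blocks_out.put(history.popleft())
            # (post-trigger capture length handling omitted for brevity)

def writer_loop(path):
    # Thread 3: drain the output queue to disk in large sequential writes.
    with open(path, "ab") as f:
        while True:
            f.write(blocks_out.get())

# Wiring, with hypothetical acquire_block / is_triggered callables:
# threading.Thread(target=capture_loop, args=(acquire_block,), daemon=True).start()
# threading.Thread(target=trigger_loop, args=(is_triggered,), daemon=True).start()
# threading.Thread(target=writer_loop, args=("capture.bin",), daemon=True).start()
```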

If I was going to suggest anything for processing the data stream live, rather than just saving to a file and worrying about it later, it would be to look into using GStreamer (which is for piping video processing, itself a fairly heavy streaming workload), GNU Radio, or both.

GStreamer would be especially useful, as it is already architected to handle high-data-rate streams like uncompressed video. You could use your data to feed 'live' instrument readouts/plots (generated within custom GStreamer plugins), which you could then 'mix' into another live video stream. You could even connect this directly to YouTube and livestream video with an event-detecting oscilloscope overlay mixed in, running from live data. (I have kind of done this, but cheated by putting the whole oscilloscope I was using where the high-res security camera I was livestreaming from could see it.)

It would be great for any experiments where things can go wrong quickly, and I suspect using GStreamer like that is possibly how SpaceX does it when they launch rockets. From what I recall, you could even use HTML5 in GStreamer to draw the overlay, and you can certainly do 'live' video mixing like cutting between cameras and green screen etc.

GNU Radio is also an obvious one - the SDR guys are going to love your hardware, I am sure!

Hope this helps, good luck!

Aleksa wrote 08/27/2021 at 01:05

Great comment! I found that the FT601 could sustain a data rate of 370 MB/s. To verify that, I lowered the clock rate from an external clock generator until the FIFO full LED wasn't lit. I also used a counter much like you described and sifted through the data in CSV format to make sure it was all consecutive. I really like the idea of triggering off of the FIFO full pin (since it should never be full while streaming) and the method of analyzing the data coming out (certainly beats waiting ~10 min for Excel to do anything with such a large CSV). Piggybacking off of video processing is also an interesting prospect for handling such large streams of data. I appreciate the suggestions!

Aaron Jaufenthaler wrote 06/15/2021 at 08:11

Thank you for the logs. I enjoy reading them

Aleksa wrote 06/15/2021 at 15:01

Thanks, glad to hear you're enjoying them so far!
