I probably phrased things a little incorrectly. Testing is ALWAYS necessary. The idea is to use the API for your testing inside of your application module instead of instantiating your application code as a component and writing a test bench around it.
With the design pattern, the port interface is fixed by design. This actually *forces* any internal signals that travel from module to module to pass through the framework (and become registered) via API calls, using the next_state_rec and state_reg_rec signals. So you implicitly rely on the API to behave correctly for driving all internal signals... and that's a good thing, because that covers 90+% of design 'routing' anyway.
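To make that concrete, here is a minimal, hypothetical sketch of what such a fixed record-based interface could look like. Only the next_state_rec and state_reg_rec names come from the pattern described above; the record fields, types, and entity name are assumptions for illustration, not the actual SpeakHDL framework definitions.

```
-- Hypothetical sketch: the real record contents are defined by the
-- framework; only the next_state_rec / state_reg_rec names are from
-- the pattern described above.
library ieee;
use ieee.std_logic_1164.all;

package app_pkg is
  type state_rec_t is record
    count : integer range 0 to 255;  -- example field (assumed)
    busy  : std_logic;               -- example field (assumed)
  end record;
end package;

library ieee;
use ieee.std_logic_1164.all;
use work.app_pkg.all;

entity app_module is
  port (
    state_reg_rec  : in  state_rec_t;  -- registered state, driven by the framework
    next_state_rec : out state_rec_t   -- next state, returned to the framework
  );
end entity;

architecture rtl of app_module is
begin
  -- All module-to-module routing goes through these records, so the
  -- framework (and its API) sees and registers every internal signal.
  process (state_reg_rec)
  begin
    next_state_rec       <= state_reg_rec;                       -- default: hold state
    next_state_rec.count <= (state_reg_rec.count + 1) mod 256;   -- example update
  end process;
end architecture;
```

Because the module only ever reads state_reg_rec and drives next_state_rec, nothing can bypass the framework's registration step, which is what lets the API exercise the internal logic.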
What you are left with are just the INPUT signals from the FPGA pins, which the API cannot exercise. At that point there is nothing left to do but either write a traditional test bench or bypass them and go straight to implementation to test your logic in hardware.
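For anyone who does go the traditional route for those pin-level inputs, the fallback is small. A hypothetical sketch, with the entity and port names assumed (they would come from your actual top-level design):

```
-- Hypothetical sketch of a traditional test bench driving the
-- pin-level inputs the API cannot reach; names are assumed.
library ieee;
use ieee.std_logic_1164.all;

entity top_tb is
end entity;

architecture sim of top_tb is
  signal clk    : std_logic := '0';
  signal btn_in : std_logic := '0';  -- example FPGA pin input (assumed)
begin
  clk <= not clk after 5 ns;  -- 100 MHz clock

  -- dut: entity work.top port map (clk => clk, btn_in => btn_in);

  stimulus : process
  begin
    wait for 20 ns;
    btn_in <= '1';  -- exercise the external input
    wait for 20 ns;
    btn_in <= '0';
    wait;           -- halt the process
  end process;
end architecture;
```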
With the API strategy, I am just trying hard to keep a newcomer from having to learn testbench design in his/her beginning stages. The learning curve is just too high at that point.
Great question! I struggled with this. There is absolutely a possibility of doing both. I would RATHER do local apps because they run faster and are easier to develop. But, here's the deal with the distributed thing and why it makes sense initially...
First the long answer, because I would like to be somewhat complete.
When I'm developing this, I'm on my Windows machine, using Notepad++ and testing with Xilinx Vivado 18.2, and everything is local. smh...
Maybe you see the problem already... Some techies kinda despise Windows, so Notepad++ is out for them. The next question would probably be about a Linux text editor... Yep, ok... as long as the text editor has a 'tail -f' type feature, that will work. But which text editor? How many? What happens when the Dragonfly framework hangs on a Mac machine because it has Python 2.x on the PATH instead of 3.x? Also, Dragonfly is calling my WSR (Windows Speech Recognition) engine; on Linux, Dragonfly may call Kaldi or Sphinx or something like that... sigh....
Making SpeakHDL distributed 'initially' allows me to punt on a lot of things I just don't know right now. On the server, I have Python 3.8 running. The hope is that I can build a single client application, do some 'limited' testing on the server, and GRADUALLY incorporate more features and more editors, and it will all work for everyone, because I'm just sending text back and forth.
I just don't want SpeakHDL to morph into a situation where people get frustrated because I didn't do a build or any testing in their environment and they hit a bug.
Bonus Ranting:
You know what's worse... Dragonfly/Python is just one thing; we haven't even talked about testing on Altera/Intel vs. Xilinx Vivado vs. old Xilinx ISE. Yikes!!
The short answer is...
SpeakHDL is possible to run locally, but deployment and testing would quickly become a nightmare.
Will there be a possibility to build/run the process completely on local infrastructure, or will it always depend on AWS or some other cloud processing service?
Is there something inherent to the API that makes the need for testing ("Never Build A Testbench.") unnecessary?