Cool Things

Summary

I have had fun working on some cool things for myself, for university and for my career. Please be aware that in most of the projects below I was part of a team. I wish I could take full credit for every single thing, but no, I can't.

6800 in 1976

I learned how to program by myself in 1976 when I bought a SWTPC 6800 computer. It had 16K of RAM, no hard drive, not even a floppy drive. It came with a 6800 datasheet and an assembly listing of MIKBUG. MIKBUG allowed me to peek and poke memory and run from a given memory address.

After work (framing carpenter), I'd have some supper, then off to the basement to figure out how these incredibly cool things worked! It took a long time, but I was eventually able to get the user (me) to enter a number and then another number and have the program print the sum. Yup, 2 + 2 does in fact equal 4.

There was no assembler, so I had to assemble the code manually using the datasheet, and then type in the op codes in binary. The risk of typos was yuge.

BTW the cool thing about this PC was that it could save a program by playing the modem audio and recording it on a cassette tape. Then you'd play the cassette back and the audio was treated as an incoming modem signal. Very cool, but not very reliable on a cheap-o Radio Shack cassette player! Lesson learned.

Pascal

Eventually I got a TRS-80 and after that an Apple II. I started to write code in higher-level languages including BASIC. At one point, I got Pascal running.

One cool Pascal project was generating airplane wing rib outlines. I was into model airplanes and I wanted to design airfoils. The program would take the root rib outline and the tip rib outline and figure out the outlines of the intermediate ribs. I got it to print them all out on a dot matrix printer, i.e. with accurate dimensions, then put those printouts on a balsa sheet and cut out the ribs.
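
The underlying math was just linear interpolation between matching points on the two outlines. A minimal sketch of the idea, in Python rather than the original Pascal (point lists and names are mine):

    # Interpolate rib outlines between a root rib and a tip rib.
    # Each outline is a list of (x, y) points; both lists must have
    # the same length and point ordering.

    def intermediate_rib(root, tip, frac):
        # frac = 0.0 gives the root outline, 1.0 gives the tip
        return [
            (rx + frac * (tx - rx), ry + frac * (ty - ry))
            for (rx, ry), (tx, ty) in zip(root, tip)
        ]

    # e.g. 10 evenly spaced ribs from root to tip:
    # ribs = [intermediate_rib(root, tip, i / 9) for i in range(10)]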

Eventually I figured out that I could put a little tab at the front and back of the ribs, so they would sit at the perfect angle of attack while I was building the wings. And I could use that to give the tip a slightly different angle of attack than the root rib (a technique to prevent stalls).

My challenge was to change the program to allow elliptical wings, like a Spitfire's. Nope, never got there.

SPAM - Simulated Processor and Memory

In university, I took a graduate school prep course. The idea was that students could use it to find out what it was like to be in a Master's program. I had to write a thesis, have an advisor, and the course mark was based on a final oral exam, just like in a Master's.

The project I came up with was to write a 6502 simulator. It would run on the VAX that students used for assignments at the time.

I called it SPAM because it simulated a processor the same way the canned "meat" simulates meat. Basically my first Dad joke.

I went through the 6502 datasheet line by line and simulated each op code and the effect it had on the accumulator, the status flags, etc. To test it I initially wrote raw 6502 opcode files, but that got tiresome, so I wrote QUAD (QUick And Dirty), an assembler that let me write more complex code.
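
The core of a simulator like that is a fetch-decode-execute loop over an opcode table. A tiny sketch of the shape of it in Python (the opcodes are real 6502 ones; everything else is simplified, e.g. the carry flag is ignored):

    # Minimal fetch-decode-execute loop in the spirit of SPAM.
    mem = bytearray(65536)    # 64K address space
    a, pc = 0, 0x0200         # accumulator and program counter
    flags = {"Z": 0, "N": 0}  # a couple of status flags

    def set_nz(val):
        flags["Z"] = int(val == 0)
        flags["N"] = int(val & 0x80 != 0)

    def step():
        global a, pc
        op = mem[pc]; pc += 1
        if op == 0xA9:        # LDA #imm: load accumulator, immediate
            a = mem[pc]; pc += 1
            set_nz(a)
        elif op == 0x69:      # ADC #imm (carry ignored in this sketch)
            a = (a + mem[pc]) & 0xFF; pc += 1
            set_nz(a)
        elif op == 0x00:      # BRK: stop the simulation
            return False
        else:
            raise ValueError(f"unimplemented opcode {op:#04x}")
        return True

    # LDA #2; ADC #2; BRK
    mem[0x0200:0x0205] = bytes([0xA9, 0x02, 0x69, 0x02, 0x00])
    while step():
        pass
    print(a)  # 4 -- yup, 2 + 2 still equals 4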

Based on my experience with MIKBUG, I added a couple of functions, get_char() and print_char(), so I could interact with the simulator in a Unix terminal.

Eventually QEMU appeared on the web. But I had a first-cut version of that idea about a decade earlier (just bragging).

Parser written in Cobol

I worked for one company that wrote a multi-user shared IDE used by IBM mainframe development teams. Cool product that predated modern-day IDEs by a decade. (It would be interesting to see a cross-user IDE running in development teams...)

The user request was for a way to see all the source files affected if you changed a file. Most of those folks used Cobol and, once in a blue moon, some PL/1. These languages have the equivalent of "#include", so if you changed one of those files, you might also need to change all the source files that used it. And you might need to change the JCL that ran the app.

I wrote a parser that would go through the Cobol source. I had to write the parser in Cobol itself, since very few of these companies had C or any other language installed. That was difficult since Cobol is a high-level language with no regular expressions or anything similar. I could write the PL/1 parser in PL/1, which was much closer to C, but still no regexes.

Writing the report was tricky since it required recursion to figure out which modules used/called the one you were modifying. Cobol has no call stack (and no recursion), so I had to write my own stack (in Cobol!).
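
The technique, sketched in Python for readability since the original was all Cobol: replace the recursion with an explicit stack plus a visited set, pushing each newly discovered user of a module.

    # Find every module transitively affected by changing one module,
    # using an explicit stack instead of recursion.

    # used_by maps a module to the modules that include/call it
    # (names invented for illustration).
    used_by = {
        "COPYBOOK1": ["PROGA", "PROGB"],
        "PROGA": ["JOB1"],
        "PROGB": [],
        "JOB1": [],
    }

    def affected_by(changed):
        stack = [changed]
        seen = set()
        while stack:
            mod = stack.pop()
            for user in used_by.get(mod, []):
                if user not in seen:
                    seen.add(user)
                    stack.append(user)
        return seen

    print(sorted(affected_by("COPYBOOK1")))  # ['JOB1', 'PROGA', 'PROGB']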

Eventually the company got C installed and I used it to write a regex parser. To ensure it was perfect I used a technique called Cleanroom Software Engineering. The results were fantastic, rock solid, no bugs.

Cross-platform Interpreter

A company needed a simple script to process files transferred from a mainframe to thousands of PCs across the country, e.g. interest rates to local banks. The files plus a script were transferred, and then the script was invoked to make sure the files were in the right place, etc. on that PC. They asked me to write the scripting language.

Went to the university bookstore and bought the Dragon Book (Aho, Sethi, and Ullman). I wrote a perl-like interpreter. It was multi-platform and ran on DOS 3.3, DOS 6, Win3.1, Win95, Win NT, OS/2, and a few Unix flavors, e.g. Data General. Getting the first cut was fairly straightforward, and then the nit-picky work of getting all the other functions we needed took a while.
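
For flavor, here is the heart of such an interpreter: tokenize, then recursive-descent parse and evaluate, straight out of the Dragon Book. This toy Python version handles only arithmetic; the real language had variables, file operations, control flow, and so on:

    import re

    # Toy recursive-descent evaluator for + - * / expressions.
    TOKEN = re.compile(r"\s*(\d+|[+\-*/()])")

    def tokenize(src):
        pos, toks = 0, []
        while pos < len(src):
            m = TOKEN.match(src, pos)
            if not m:
                raise SyntaxError(f"bad input at {pos}")
            toks.append(m.group(1)); pos = m.end()
        return toks

    def parse_expr(toks):      # expr := term (('+'|'-') term)*
        val = parse_term(toks)
        while toks and toks[0] in "+-":
            op = toks.pop(0)
            rhs = parse_term(toks)
            val = val + rhs if op == "+" else val - rhs
        return val

    def parse_term(toks):      # term := factor (('*'|'/') factor)*
        val = parse_factor(toks)
        while toks and toks[0] in "*/":
            op = toks.pop(0)
            rhs = parse_factor(toks)
            val = val * rhs if op == "*" else val / rhs
        return val

    def parse_factor(toks):    # factor := number | '(' expr ')'
        tok = toks.pop(0)
        if tok == "(":
            val = parse_expr(toks)
            toks.pop(0)        # consume ')'
            return val
        return int(tok)

    print(parse_expr(tokenize("2 + 3 * (4 - 1)")))  # 11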

The hard part was testing it on all those platforms. The solution was to write a test framework in the language itself. That was cool because 1) it forced the language to have the minimal functionality for a viable language, 2) running the framework at all was a good test of it, and 3) anything I wrote had to pass on all the platforms, or at least be isolated into platform-specific functions. That testing made it rock solid.

To show this off during a User meeting, I wrote a script that would transfer Win95 binaries (and my script) to a PC running DOS 6. Yup the transfer speed was the optimal 56K (at the time)! The last step in the script was to reboot and Win95 would come up. To be able to run this repeatedly, I wrote another script to transfer and install DOS 6 on the Win95 PC, so I could yo-yo that back and forth. Fun!

BTW I didn't know it was "perl-like" at the time, because perl didn't come out until later.

cron ... on DOS 6

To invoke some behaviors on a schedule, I was asked to write cron. That way we could run backups and other tasks in a similar way across all the platforms we supported. Did that. The fun one was DOS 6.

It had to wake up every minute and check whether there was a command line to kick off. Figuring out how to make that happen on DOS took a while, but I did get it to work.
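
The portable core of a cron clone is simple; the DOS-specific part (staying resident and keeping time without a real OS) was the hard bit. Roughly this, in Python and with the crontab format simplified to "HH:MM command":

    import subprocess
    import time
    from datetime import datetime

    # Minimal cron-ish loop: wake up once a minute and fire any entry
    # whose time matches. Entries and commands are made up.
    entries = [
        ("02:00", "backup.bat"),
        ("12:30", "sendlogs.bat"),
    ]

    def cron_loop():
        last = None
        while True:
            now = datetime.now().strftime("%H:%M")
            if now != last:                  # act once per minute
                last = now
                for when, cmd in entries:
                    if when == now:
                        subprocess.Popen(cmd, shell=True)
            time.sleep(5)                    # poll well under a minute

    # cron_loop()  # runs forever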

Gas station simulator

A company I worked for produced about half the gas pumps in the world. They were working on a new front-end for those pumps. During development, it would be ideal for developers to be able to test it on their own PCs, so they asked me to write a gas pump simulator.

I looked at the actual code that ran on the gas pump, written in C, and it was well done. It used simple ISRs (Interrupt Service Routines) to interact with the hardware.

I used that nice architecture! I cross-compiled the code for our development PCs and used some C preprocessor magic to stub out the ISRs and simulate the actual pump hardware. Then I set up a simple GUI with 16 threads, each one simulating a separate gas pump. The new front-end under development could communicate with these threads/pumps and run as it would in real life. They, of course, had real gas pumps to test on, and that confirmed my code was okay. But hey, on your PC you could fill your "gas tank" in 10 seconds.
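
The structure, very roughly, in Python rather than the original C (in the real thing each "pump" was the cross-compiled pump code with its ISRs stubbed out; here a thread with made-up state stands in for one):

    import queue
    import threading
    import time

    # Each simulated pump is a thread owning its own state and a
    # command queue; the front-end under test talks to the queues
    # instead of to ISRs.
    class SimPump(threading.Thread):
        def __init__(self, pump_id):
            super().__init__(daemon=True)
            self.pump_id = pump_id
            self.commands = queue.Queue()
            self.gallons = 0.0
            self.dispensing = False

        def run(self):
            while True:
                try:
                    cmd = self.commands.get(timeout=0.1)
                    if cmd == "start":
                        self.dispensing = True
                    elif cmd == "stop":
                        self.dispensing = False
                except queue.Empty:
                    pass
                if self.dispensing:
                    self.gallons += 0.05   # simulated flow per tick

    pumps = [SimPump(i) for i in range(16)]
    for p in pumps:
        p.start()
    pumps[0].commands.put("start")   # front-end "starts" pump 0
    time.sleep(1.0)
    pumps[0].commands.put("stop")
    print(f"pump 0 dispensed {pumps[0].gallons:.2f} gal")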

The cool part is that I used structs to fake out classes (and other techniques) so the GUI had an object-oriented architecture. Easy to write, easy to modify, easy to test.

Extreme Programming club in San Diego (XPSD)

I got into Extreme Programming (one of the original "Agile" methods) and thought it was pretty cool. When we moved to San Diego, I found out there was an XP club there and joined it. I was the 4th member.

I posted a couple of notes for our meetings, made a few more posts about the club on a few other sites, and after 6 months or so we had 240+ members. I also did a few presentations on XP at the club meetings.

Triage of defects

Worked at Nokia for a while. I found out that they had bugs in their defect tracking system that had not been looked at... ever. I spoke to my boss and asked if I could do a quick and dirty review of all of them so we could figure out their priorities. I called it doing a "triage". He liked that.

Wrote CI/CD ... in perl

Worked for a company that was pretty conservative.

One senior developer made a change, submitted it to the VCS and went on holiday the next day. It broke the build and caused a lot of havoc for 2 weeks until he came back.

I suggested we install a CI/CD tool, e.g. Jenkins, but was turned down (!). So I asked if it was okay if I just double-checked the build continually. They shrugged and said sure, why not.

So I wrote a quick and dirty app that would download the latest source and compile it. I wrote it in perl (Python did not exist at that time). If a problem occurred, I would manually send an email. The developers appreciated that feedback.

Eventually I asked if it was okay for my script to send those emails directly. They said sure, why not. So I did. And then I asked if it was okay to move the app onto a lab server, since I was continually going from meeting to meeting with my laptop. They said sure, why not.
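
The whole thing was just a poll-build-notify loop. A sketch of the shape of it, in Python rather than the original perl, with placeholder VCS/build commands and addresses:

    import smtplib
    import subprocess
    import time
    from email.message import EmailMessage

    # Poll-the-build loop: fetch the latest source, build it, and
    # (in the later version) mail the team on failure.
    def check_build():
        subprocess.run(["cvs", "update", "-d"], check=True, cwd="src")
        result = subprocess.run(["make", "all"], cwd="src",
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def notify(log):
        msg = EmailMessage()
        msg["Subject"] = "build broken"
        msg["From"] = "buildbot@example.com"
        msg["To"] = "devteam@example.com"
        msg.set_content(log[-2000:])      # last chunk of the build log
        with smtplib.SMTP("localhost") as s:
            s.send_message(msg)

    while True:
        ok, log = check_build()
        if not ok:
            notify(log)
        time.sleep(15 * 60)               # check every 15 minutes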

Eventually we hired a DevOps guy. He looked at that and said "What the hell, why aren't you using Jenkins?" I said that's a great idea! So he set it up, and we converted over to Jenkins from there on.

Testing based on actual usage patterns

We had a pretty good log on an IV pump. I realized that with a few extra entries we could exactly reproduce what a user did on the pump. Most bugs could then be recreated by replaying the pump's log automatically.

We had been running a monkey test to randomly exercise the GUI, but after millions of key presses it hadn't found anything significant. We had tried different techniques to increase the likelihood of finding bugs, but no joy.

I put those two ideas together: use the logs from actual usage in the field to find the most common patterns real, live users performed on the pumps, then randomly run those scenarios, and use statistics from those runs to come up with an MTTF (Mean Time To Failure).
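
In sketch form (all names and numbers here are invented, and run_pump_scenario() is a stand-in for the real automation hook): mine the logs for the most frequent action sequences, replay them with matching probabilities, and derive MTTF from run time between failures.

    import random
    from collections import Counter

    # Weight test scenarios by how often real users performed them in
    # the field logs, then estimate MTTF from time between failures.
    field_sequences = [          # action sequences mined from pump logs
        ("power_on", "set_rate", "start"),
        ("power_on", "set_rate", "start"),
        ("power_on", "set_volume", "start"),
    ]
    weights = Counter(field_sequences)
    scenarios = list(weights)
    probs = [weights[s] / len(field_sequences) for s in scenarios]

    def run_pump_scenario(seq):
        # stand-in for the real automation hook into the pump
        return "fail" if random.random() < 0.001 else "pass"

    run_time, failures = 0.0, 0
    for _ in range(10_000):
        seq = random.choices(scenarios, weights=probs)[0]
        run_time += len(seq) * 2.0       # say each action takes ~2s
        if run_pump_scenario(seq) == "fail":
            failures += 1

    mttf = run_time / failures if failures else float("inf")
    print(f"estimated MTTF: {mttf:.0f}s over {run_time:.0f}s of use")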

Upper management was intrigued by it, but shot it down as too much development time and effort.

I looked up the idea of using MTTF on software and (unfortunately!) found a white paper on it, so it was not a unique idea at the time.

As an aside, I brought home a pump once to show it off to my son. He played with it for 20 seconds and found a catastrophic bug in the touchscreen!

Automated test framework

Worked for a company that had about a 90% market share in dialysis pumps. The project I worked on was greenfield: no team, no prior code, a skunk works approved by the CEO. Sweet!

Based on my experiences at past companies, I wanted continual, ongoing testing as development went on. So I wrote an initial version of an automated test framework in Ruby. Then I wrote a side app that took the output of that framework and generated FDA-compliant docs. That way we could write some code, run the verification tests and generate the verification documents. If anything looked odd, we could fix it ASAP.
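
The key trick was making the test run itself emit the evidence. Roughly this, paraphrased in Python (the original was Ruby, and the requirement ids are invented): each verification step records its requirement id, expected vs. actual, and the verdict, and the doc generator formats the accumulated records.

    # Each test step records requirement id, expected vs. actual, and
    # the verdict; the document generator formats the accumulated log.
    results = []

    def verify(req_id, desc, expected, actual):
        results.append({
            "req": req_id,
            "desc": desc,
            "expected": expected,
            "actual": actual,
            "passed": expected == actual,
        })

    # inside a test...
    verify("SRS-0042", "pump rate is settable", 100, 100)
    verify("SRS-0043", "alarm on occlusion", "alarm", "alarm")

    # after the run: generate the verification document
    for r in results:
        status = "PASS" if r["passed"] else "FAIL"
        print(f'{r["req"]:10} {status:4} expected={r["expected"]} '
              f'actual={r["actual"]}  {r["desc"]}')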

We had unit tests, automated verification tests and eventually manual testing to double- and triple-check everything. And then we gave the latest GUI to an RN (registered nurse) on the team who had worked with other pumps in dialysis clinics. We immediately fixed any bugs, gaps, and UX issues she and other folks found.

I periodically passed those docs over to the Quality team. And they got pissed off because they couldn't find any problems with them! (just kidding)

Source parser

I had to generate a ton of documents for the dialysis pump project; eventually I wrote about 150 of them. Some were one-offs, and some had to be continually updated as the code was written. To help with that, I got the developers to tag all the functions in the code base with an SDD (Software Design Document) id and a unit test id, and then used those to auto-generate the docs showing that the code was correctly implemented according to the SDD and that all the code was unit tested as well.

To make sure it was done correctly, I wrote a source code parser that gathered all that info and generated a website showing, for example, the lines of code on the left and the unit test on the right. That way we could double-check that the unit test actually tested what it claimed to test.

Ditto for the SDD ids. Ditto for the SDD-SRS (Software Requirements Specification) matrix. Useful.
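
The parser itself was mostly pattern matching on those tag comments. Something like this, with the tag format invented for illustration:

    import re
    from pathlib import Path

    # Scan the code base for tag comments like:
    #     // SDD-123 UT-456
    # and build a map from each tag pair to its location, for the
    # doc generator and the side-by-side website to consume.
    TAG = re.compile(r"//\s*(SDD-\d+)\s+(UT-\d+)")

    def collect_tags(root):
        tags = {}
        for path in Path(root).rglob("*.c"):
            for lineno, line in enumerate(path.read_text().splitlines(), 1):
                m = TAG.search(line)
                if m:
                    sdd_id, ut_id = m.groups()
                    tags[(sdd_id, ut_id)] = (str(path), lineno)
        return tags

    # tags = collect_tags("src/")
    # each entry ties an SDD id and a unit test id to a spot in the code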

Time warp idea

I thought "The Rocky Horror Picture Show" was hilarious. And in our automated testing we needed a way to test that certain events occurred in a very short time period (0.5s).

We brainstormed and came up with "time warp": from the automated script we could request the app to automatically scale any delays (e.g. sleep()'s). So 0.5s would take 30s in real time with a time warp factor of 60. The developers had to be careful to ensure that all sleep()'s were wrapped with a function that could handle that scaling.
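
The wrapper itself is tiny; the discipline of routing every delay through it was the real work. A sketch (names are mine):

    import time

    # All delays in the app go through warp_sleep() instead of
    # time.sleep(), so a test script can scale time in either direction.
    WARP = 1.0   # 1.0 = real time; 60.0 stretches, 1/120 compresses

    def warp_sleep(seconds):
        time.sleep(seconds * WARP)

    # stretch: watch a 0.5s event take 30s
    WARP = 60.0
    warp_sleep(0.5)

    # compress: run a 4-hour treatment in a couple of minutes
    WARP = 1.0 / 120.0
    warp_sleep(4 * 3600)   # sleeps 120s of wall-clock time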

It worked. It was just a jump to the left.

And then we went the other way. We took a 4-hour treatment and time warped it to a couple of minutes. That made a ton of automated testing doable as well. It was just a jump to the right.

The real-life lesson here, for me, was that the Development team and the Verification team are joined at the hip. They need to work together, and here they did, to great success.

Full Automation in 3 hours

I set up 35 servers to handle the automation testing. And yes I wrote scripts to automate the creation and maintenance of those servers.

We had about 1200 or so requirements, and we were able to automate all the requirements except for 3 (those had to be done manually).

By tweaking which servers ran which tests, I was able to get the overall run time down from "overnight" to 3 hours. When all the automation tests were done, the FDA-compliant docs were auto-generated as well.
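
Getting from overnight to 3 hours was mostly a scheduling problem: spread the tests across the servers so no single server becomes the long pole. The classic greedy heuristic (longest test first, onto the least-loaded server) captures the idea; a sketch with invented numbers:

    import heapq

    # Greedy longest-processing-time scheduling: assign the longest
    # tests first, each to the currently least-loaded server, so the
    # overall wall-clock time (the busiest server) stays small.
    def balance(tests, n_servers):
        # tests: list of (name, expected_minutes)
        servers = [(0, i, []) for i in range(n_servers)]  # (load, id, tests)
        heapq.heapify(servers)
        for name, mins in sorted(tests, key=lambda t: -t[1]):
            load, sid, assigned = heapq.heappop(servers)
            assigned.append(name)
            heapq.heappush(servers, (load + mins, sid, assigned))
        return sorted(servers, key=lambda s: s[1])

    tests = [("t1", 90), ("t2", 45), ("t3", 45), ("t4", 30), ("t5", 30)]
    for load, sid, assigned in balance(tests, 2):
        print(f"server {sid}: {load} min -> {assigned}")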

The cool thing here is that there was no "end of project" nightmare anymore. We could make a last minute change and then have the results and the docs ready to go in 3 hrs.

Total cost of all this? Less than $10K in hardware and setup. Cool.

Remote control of the pump GUI

Was asked by the hardware team if we could "remote control" a pump. They needed that for testing the pump in a closed chamber.

We brainstormed and found an interesting hook: the automation scripts we had could grab screenshots, and we also had the x,y locations of all the buttons, etc. on the screen. So we put those two observations together.

We recreated the GUI on a local PC by grabbing the screen every 0.5s. When the PC user clicked on that image, we translated the PC x,y coordinates into pump x,y coordinates and sent the automation command to "click" at that point. Luckily, we didn't have to do swipes (whew!). With some testing and tweaking it felt very much like remote control. The 0.5s refresh was a little laggy but still useful enough for most needs.
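
The coordinate translation is just scaling. A sketch, with send_tap() standing in for the real automation hook and the resolution made up:

    # Translate a click on the local screenshot into a tap on the pump.
    PUMP_W, PUMP_H = 800, 480     # pump display resolution (made up)

    def send_tap(x, y):
        # stand-in for the real automation "click" command
        print(f"tap pump at ({x}, {y})")

    def on_local_click(event, shown_w, shown_h):
        # event.x / event.y are coordinates inside the displayed image
        px = int(event.x * PUMP_W / shown_w)
        py = int(event.y * PUMP_H / shown_h)
        send_tap(px, py)

    # refresh loop (in outline): grab a screenshot every 0.5s, redraw
    # the image, and let clicks on it call on_local_click()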

Eventually I wrote a version of this that just monitored multiple pumps, i.e. no remote control, simply getting the displays. It grabbed the screen images every 1s or so from 8 pumps and displayed them in a browser. Then we could check what people were doing with the pumps in the lab, in sorta kinda real time, at any time. Very cool.

Cross-compiled FreeRTOS

Worked on a product that used FreeRTOS. They wanted script-based test automation. I noticed that the app used only a few of the FreeRTOS built-in calls, and figured that if I could get the app to compile on macOS I could run a ton of test scripts against it fairly easily.

The FreeRTOS tasks were simply threads. I wrote equivalents of the Mutex, Queue and Semaphore calls. With some additional tweaking I was able to get it to compile, and I could run most of the code on macOS. I was then able to use that to run automated tests against most (not all) of the code base.
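
The same shim idea, sketched in Python rather than the original C, with the FreeRTOS signatures heavily simplified (the real calls take tick counts, handles, etc.):

    import queue
    import threading

    # Host-side stand-ins for the handful of FreeRTOS primitives the
    # app used, so the task code can run on a desktop OS.
    def xQueueCreate(length):
        return queue.Queue(maxsize=length)

    def xQueueSend(q, item, timeout_s):
        try:
            q.put(item, timeout=timeout_s)
            return True
        except queue.Full:
            return False

    def xQueueReceive(q, timeout_s):
        try:
            return q.get(timeout=timeout_s)
        except queue.Empty:
            return None

    def xSemaphoreCreateMutex():
        return threading.Lock()

    def xSemaphoreTake(mutex, timeout_s):
        return mutex.acquire(timeout=timeout_s)

    def xSemaphoreGive(mutex):
        mutex.release()

    def xTaskCreate(fn, name):
        t = threading.Thread(target=fn, name=name, daemon=True)
        t.start()
        return t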

Android and iOS test framework

Wrote an automated test framework in Python to run against an app on Android and iOS phones. The existing tests ran on the phone itself, but that caused issues since the test code interacted with the phone's OS extensively while it was trying to test the app's behavior. Not good.

I wrote the framework to run on macOS/Linux, and the test devs converted the existing tests to use it. The new framework opened up a bunch of new requirements they could test, and the automated test coverage jumped up.

The total time for these runs was getting close to 24 hours, so we started investigating ways to get around that inexpensively. It turned out that Raspberry Pis were cheap: around $55 USD before the Covid supply issues.

So we bought one, and I converted my framework to run on an RPi. That worked. We bought a dozen RPis, hooked them up to a dozen phones, and ran the tests in parallel, bringing the overall time down to a few hours. All for less than $1000.

Converting the tests from Android to iOS was fairly simple too. The framework was generic, and the test cases were fairly generic as well since the dev team used the POM (Page Object Model) pattern. They had some tweaks to make, but generally a test script ran on both platforms quite nicely. Had to buy more RPis.
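
A bare-bones sketch of the POM pattern that made this work (driver API and locators invented): each screen gets an object that hides the platform-specific locators, so the test script itself reads the same on both platforms.

    # Bare-bones Page Object Model: the page class hides platform
    # locators; `driver` stands in for the real automation driver.
    class LoginPage:
        LOCATORS = {
            "android": {"user": "com.app:id/user", "go": "com.app:id/go"},
            "ios":     {"user": "userField",       "go": "goButton"},
        }

        def __init__(self, driver, platform):
            self.driver = driver
            self.ids = self.LOCATORS[platform]

        def login(self, username):
            self.driver.type(self.ids["user"], username)
            self.driver.tap(self.ids["go"])

    # the test itself is platform-agnostic:
    # LoginPage(driver, platform).login("testuser")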

Gathering all the data from those RPis and coordinating the setup and triggering of the tests took a little more effort, but that worked out nicely as well.

Converted a tkinter GUI to run headless

Started at a company that had a test GUI that could invoke and check various conditions on an insulin pump. The verification team was using it manually, and it took about two weeks to run the full set of verification tests.

I looked into the GUI's architecture. It was based on tkinter and, more importantly, it was roughly an MVC architecture. There were some issues in the implementation, but generally it was set up the right way. I cleaned those up over time, in between other tasks on my plate. Eventually I got it to be 100% clean, "merciless MVC".

Then I created an alternate view (the "V") for the app that was headless. In other words, the rest of the code base worked correctly with either a GUI or in headless mode. And then I drove the headless MVC from a Python script to show off an automated verification test.
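
The shape of it, stripped way down (names invented): the controller only knows a view interface, so a tkinter view or a headless view plugs in interchangeably, and a script can assert on what the headless view recorded.

    # The controller only knows the view interface, so a tkinter view
    # or a headless view plugs in interchangeably.
    class HeadlessView:
        def __init__(self):
            self.shown = []          # scripts can assert on this

        def show_status(self, text):
            self.shown.append(text)

    class TkView:
        def __init__(self):
            import tkinter as tk
            self.root = tk.Tk()
            self.label = tk.Label(self.root)
            self.label.pack()

        def show_status(self, text):
            self.label.config(text=text)

    class Controller:
        def __init__(self, view):
            self.view = view

        def start_test(self, name):
            # ... drive the pump interface here ...
            self.view.show_status(f"running {name}")

    # automated verification: no GUI needed
    ctl = Controller(HeadlessView())
    ctl.start_test("basal rate check")
    assert ctl.view.shown == ["running basal rate check"]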

The development team jumped on that. They used it to run an initial suite of automated tests just for themselves. The plan was to eventually convert some of the longer-running (or boring!) tests to Python scripts and leave the "fun" testing for the manual team.

- John Arrizza