Turning a PCB into a Schematic
This documents the steps I take to reverse engineer a schematic from photographs of a PCB.
Step 1: Capturing Images
The Problems
Capturing a good image of a PCB is more difficult than it seems. There are three major problems that must be worked around: lighting, parallax, and sharpness.
The challenge with illuminating a PCB is that it is shiny, so the best way to illuminate it is with indirect diffuse lighting. Dave Jones posted a few videos on this problem, settling on a cardboard box lined with white paper, with LED lighting shining up from the bottom of the box. His PCB light box worked great for capturing photos of small PCBs, but I need to capture images of large PCBs, 64 square inches or more.
Putting the camera inside the light box while also getting the camera far enough away to eliminate parallax would mean making a light box over 8 feet long. The alternative would be to put the camera outside the light box, which would require an aperture in the light box at least as large as the PCB, making it much more challenging to create a diffuse "cloud" of light.
When capturing a large PCB in a single photo, sharpness becomes a limiting factor. Sharpness is limited by both optics and sensor pixel density. My camera's 18MP sensor can produce a 400 DPI equivalent image, which is okay, but the bigger challenge is getting a crisp image of a PCB from 8 feet away. This requires good quality optics, as any aberration can make the task of tracing the PCB frustrating.
But the biggest problem is parallax. PCBs often contain tall components like capacitors that can obscure significant portions of the board in a single-exposure photograph. Even with the camera 8 feet away, a 1" tall capacitor near the edge of a PCB can obscure enough of the board to add unnecessary guesswork to the job of tracing the traces.
The Solution
What you see here solves my problems with lighting, sharpness and parallax. This is a Shapeoko CNC router that I am re-purposing as an X-Y gantry. Attached to the router is a wooden contraption that supports a light ring and a camera. The camera is a Raspberry Pi 3B+ attached to a Raspberry Pi HQ camera and a PoE hat, all stuffed in a cute 3D printed enclosure. This was built for another project, but having a Raspberry Pi that mounts to a standard 1/4-20 thread and needs only a single cable is incredibly convenient. The camera is stepped over the PCB in 1 inch increments, and captures an area roughly 2.5" x 2" in each photo. The ring light provides a "light bath" for the 1" x 1" square of interest at the centre of the photo that produces no reflections or shadows. Because I'm only using the centre of the photo, parallax is basically eliminated.
The software that controls this is a bit of a bodge. On the camera there's a piece of software running that listens for commands over a TCP socket, and captures images to the SD card when told to do so. On my laptop there's a piece of Go code that controls the Shapeoko and sends commands over TCP to the Pi to capture images. Yes, the Pi could do it all, but my camera enclosure deliberately blocks the USB ports on the Pi.
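The laptop-side controller boils down to walking a 1-inch grid and telling the Pi to capture at each stop. Here's a minimal Go sketch of that grid walk; the `CAPTURE <name>` command and the tile naming scheme are placeholders for illustration, not the actual protocol my bodge speaks.

```go
package main

import "fmt"

// capturePoint is one stop on the 1-inch capture grid.
type capturePoint struct {
	X, Y float64 // inches, relative to the board origin
}

// captureGrid returns camera positions covering a board of the given
// size, stepping 1 inch per axis. Rows alternate direction (serpentine
// order) so the gantry never makes a long rapid move between rows.
func captureGrid(widthIn, heightIn int) []capturePoint {
	var pts []capturePoint
	for y := 0; y < heightIn; y++ {
		for x := 0; x < widthIn; x++ {
			col := x
			if y%2 == 1 { // reverse every other row
				col = widthIn - 1 - x
			}
			pts = append(pts, capturePoint{X: float64(col), Y: float64(y)})
		}
	}
	return pts
}

// captureCommand formats the text command sent to the Pi over the TCP
// socket. "CAPTURE <name>" is a hypothetical command, standing in for
// whatever the listener on the Pi actually accepts.
func captureCommand(p capturePoint) string {
	return fmt.Sprintf("CAPTURE tile_x%02.0f_y%02.0f\n", p.X, p.Y)
}

func main() {
	for _, p := range captureGrid(8, 8) { // an 8" x 8" board
		fmt.Print(captureCommand(p))
	}
}
```

In the real setup each grid position also involves a G-code move to the Shapeoko and a wait for the gantry to settle before the capture command goes out.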
The above image is a sample of what comes out of the camera. There are some obvious problems, all of which are expected and easily overcome. Spherical aberration in my cheap C-mount lens makes the edges of the image useless, and the image is rotated a bit. There are two sources of rotation: both the camera and the PCB can be rotated relative to the axis of the CNC gantry. The PCB rotation can be safely ignored, but the camera rotation must be corrected before the images are stitched together. Because the images are captured on an exact 1" grid, stitching is relatively easy. In fact, the images aren't "stitched" at all, they're simply tiled together. By manually overlaying two adjacent images, I determine the "board level resolution", which tells me exactly how many pixels there are in an inch at the level of the PCB. In this case, it's 1784 pixels per inch.
Once I have that number, I do a series of trial assemblies by rotating each exposure by a fixed amount, then assembling a 1784 x 1784 pixel square out of the middle of each exposure into a single large image. I do this with a 3 tile x 3 tile sample area, generating images for a range of rotation values. With a bit of trial and error, I can find the rotation value that results in a final image with no "jaggies". Then I do the final pass and assemble the final image, which in this example results in a single 318 megapixel image.
Step 2: Tracing Traces
The orthophotos of both sides of the PCB get loaded into my CAD software in separate layers. I flip the back image, align the front and back layers, then scale the images so that they are 1:1 scale. In the case of the PCB I used when creating this page, the entire PCB was designed on a 0.025" grid, so I set up the grid in my CAD software to match, and adjusted the position of the images to also match this grid.
I then create symbols for the various footprints that can be found on the board, with metadata attached to each symbol and pad that describes what they are. These symbols are then placed on another layer and lined up with the parts on the orthophotos:
Then I add all of the vias, which are just circles with a metadata tag:
With all of the components and vias placed, I move on to tracing the traces, starting with traces on the back side of the PCB, which are easy to trace because they're fully visible. I'm not careful about making sure that every pad and via is placed on the end of a line segment, nor do I bother splitting line segments for mid-trace intersections.
Then I move on to the traces on the front or component side. This is a time-consuming process, as what is going on under things like DIP packages can't be seen. Figuring out what is going on with obscured traces requires the services of a multimeter. Usually one end of an obscured trace can be found and can form one end of a continuity test, then the other end can be found by probing pins and vias. Tracing out obscured traces is a bit like playing Sudoku, especially when both ends of a trace are obscured. Part of the process is gaining an understanding of the original designer's style.
Step 3: Creating Nets
With all of the important stuff identified on the PCB images, I then use a bit of Python hackery against my CAD software's terrible Python API to translate my CAD drawing into a simple JSON file that describes all of the traces represented as line segments separated by layer, vias represented as points, and components represented as points that exist on either one or both layers.
This JSON file is then parsed by a bit of Go code. This code first resolves anywhere a via or pad intersects a midpoint on a line segment, then splits up the line segment so that each intersection is represented by the end of a line segment. When building nets, everything is represented as "segments", which are just pairs of points. A trace is a segment with both ends at different positions but in the same layer. A via or through-hole pad is treated as a segment with the end and start point in the same location, but on separate layers. Surface mount pads are represented as a segment with a null end point.
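Once everything is a segment, net building reduces to grouping segments that share an endpoint, where an "endpoint" is a position plus a layer. A standard way to do that grouping is union-find; here's a small Go sketch of the idea (my illustration, not the actual code). Using exact coordinates as map keys works here because everything snaps to the 0.025" CAD grid.

```go
package main

import "fmt"

// endpoint is a position on a specific copper layer. Two segments that
// share an endpoint are electrically connected.
type endpoint struct {
	X, Y  float64 // board coordinates, inches (exact, grid-snapped)
	Layer string  // "front" or "back"
}

// segment is the universal connectivity primitive: a trace has two
// different positions on one layer; a via or through-hole pad has the
// same position on both layers.
type segment struct {
	A, B endpoint
}

// nets groups segments into connected sets (nets) using union-find
// keyed on endpoints. It returns each net as a list of segment indices.
func nets(segs []segment) [][]int {
	parent := map[endpoint]endpoint{}
	var find func(e endpoint) endpoint
	find = func(e endpoint) endpoint {
		p, ok := parent[e]
		if !ok {
			parent[e] = e // first time seen: its own root
			return e
		}
		if p == e {
			return e
		}
		root := find(p)
		parent[e] = root // path compression
		return root
	}
	union := func(a, b endpoint) { parent[find(a)] = find(b) }

	for _, s := range segs {
		union(s.A, s.B)
	}
	byRoot := map[endpoint][]int{}
	for i, s := range segs {
		r := find(s.A)
		byRoot[r] = append(byRoot[r], i)
	}
	var out [][]int
	for _, members := range byRoot {
		out = append(out, members)
	}
	return out
}

func main() {
	segs := []segment{
		{endpoint{0, 0, "back"}, endpoint{1, 0, "back"}},   // back-side trace
		{endpoint{1, 0, "back"}, endpoint{1, 0, "front"}},  // via at (1, 0)
		{endpoint{1, 0, "front"}, endpoint{1, 2, "front"}}, // front-side trace
		{endpoint{5, 5, "back"}, endpoint{6, 5, "back"}},   // unrelated trace
	}
	fmt.Println(len(nets(segs)), "nets") // -> 2 nets
}
```

The via segment is what stitches the two layers into one net: it shares one endpoint with the back trace and the other with the front trace.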