add vale, work on yosys4gal

This commit is contained in:
saji 2025-04-12 14:22:14 -05:00
parent 67c2e81892
commit 74c5c27043
4 changed files with 85 additions and 36 deletions

.vale.ini Normal file

@@ -0,0 +1,18 @@
StylesPath = styles
MinAlertLevel = suggestion
Packages = write-good
[*.{md}]
# ^ This section applies to only Markdown files.
#
# You can change (or add) file extensions here
# to apply these settings to other file types.
#
# For example, to apply these settings to both
# Markdown and reStructuredText:
#
# [*.{md,rst}]
BasedOnStyles = Vale, write-good

Binary file not shown (added image, 45 KiB).

@@ -4,40 +4,66 @@ description: Bringing modern synthesis to 30-year old technology with Yosys and
date: 2024-06-14
---
## A history lesson
During the semiconductor revolution, a dilemma appeared: Designing new ICs
required a lot of time and effort to create the mask, and iteration was
expensive. At the time, IC designs were very simple, since the available
tools/compute to do tasks like optimization or place-and-route were limited.
And what if you wanted a low-volume design? Programmable Logic Arrays (PLAs)
were an early approach to these problems. The idea was simple: create a
flexible logic architecture that could be modified later in the process to
implement various digital designs. These worked by using matrices of wires in a
Sum-of-Products architecture. Inputs would be fed with their normal and
inverted forms to a bank of AND gates, which would select various inputs using
a fuse tie on the die and create product terms. The outputs of the AND gates
would then be fed into OR gates, which would create the sum term for the whole
output.
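To make the Sum-of-Products structure concrete, here is a rough sketch in Verilog (the function and signal names are invented for illustration, not taken from any real PLA): each intermediate wire plays the role of one AND line whose fuses select the participating inputs, and the final OR is the sum term.

```verilog
// Hypothetical 3-input function, structured the way a PLA
// realizes it: each AND line is one product term (the fuses
// choose which inputs, true or inverted, participate), and
// the OR plane sums the product terms into the output.
module pla_example (
    input  wire a, b, c,
    output wire y
);
    wire p0 = a & ~b;      // product term 1
    wire p1 = ~a & b & c;  // product term 2
    assign y = p0 | p1;    // sum of products
endmodule
```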
This design was popular, since it allowed for less-certain aspects of the chip
to be moved to a later design process. Eventually, hardware people got jealous
of the fast (for the time) compile-evaluate loops in software, and so PAL
(Programmable Array Logic) was invented. These are similar to PLA logic, but
the fuses are programmed using a simple programmer rather than a complex die
process. This means that a developer with a pile of chips can program one, test
it, make some adjustments, and then program the next. Later versions would
solve the whole one-time-programmable aspect using UV-erasable EPROM.
![A figure shows the structure of a PLA. There is a grid of wires that is fed into the inputs of AND gates. The AND gates are then selected by a set of OR gates.](pla_logic2.svg "Old School PLA.")
Demands would increase further and flip-flops would be added, as well as
feedback capability. This allows for very complex functions to be implemented,
since you can chain "rows" of the output blocks. This culminated in the
GAL22V10, an electrically-erasable, 24-pin programmable logic chip with up to
10 outputs that could be registered and used for feedback.
![A figure shows the Output Logic Macrocell, or OLMC. The OLMC consists of a D Flip-Flop, feedback routing, and 4-to-1 mux to select behavior](gal_olmc.png)
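As a loose behavioral model of the figure above (signal and mode names are illustrative, not the real GAL netlist), the OLMC's 4-to-1 mux selects between registered and combinatorial versions of the sum term, in either polarity:

```verilog
// Hedged sketch of an Output Logic Macrocell: a D flip-flop
// plus a 4-to-1 mux choosing registered vs. combinatorial
// output, active-high or active-low. Names are illustrative.
module olmc_sketch (
    input  wire       clk,
    input  wire       sum_term, // output of the OR plane
    input  wire [1:0] mode,     // configuration "fuses"
    output reg        q         // drives the pin / feedback path
);
    reg ff_q;

    always @(posedge clk)
        ff_q <= sum_term;       // registered path

    always @* begin
        case (mode)
            2'b00: q = sum_term;  // combinatorial, active high
            2'b01: q = ~sum_term; // combinatorial, active low
            2'b10: q = ff_q;      // registered, active high
            2'b11: q = ~ff_q;     // registered, active low
        endcase
    end
endmodule
```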
## Back to today: GALs in the 21st century
These days, modern FPGA technology can be yours for a couple of bucks.
Open-source toolchains allow fast, easy development, and the glut of Verilog
resources online makes it easier than ever to enter the world of hardware
design. But there are times when GALs might still be useful. For one, they
start up instantly. Some FPGAs have a very fast one-time-programmable internal
ROM, but this loses the "field-programmable" aspect which makes FPGAs
desirable. In most cases the bitstream must be loaded from an external SPI
flash. This can take up to a few seconds, which may not be acceptable if the
logic is critical. Another important factor is the chip packaging. Most FPGAs
are BGA packages, with some offering QFN or even a few QFP variants, but none
are available in any DIP form factor, at least without a small board in
between. The ATF22V10 (which is a clone/successor of the GAL22V10) is available
in DIP, SSOP, and even PLCC if that's your jam. The package options make GALs
perfect for breadboard applications. You could use it like an 8-in-1 74-series
logic chip, changing the function depending on what you need. Additionally,
GALs operate at 5 volts, which is useful when interfacing with older systems
and removes the need for a level shifter.
However, this isn't all great. Programming GALs is an exercise in frustration.
Take a look at a basic combinatorial assembly file:
```PALASM
GAL16V8
@@ -61,21 +87,25 @@ DESCRIPTION
Simple test of combinatorial logic.
```
In the contrived example the behavior is pretty clear, but it's not exactly a
stellar format for writing complex logic. Plus, there's no way to integrate or
test this in a larger system (we'll get back to this). Compared to the Verilog
flow, with simulation, testbenches, and synthesis, the raw assembly is stuck in
the 80s and requires manual logic simplification.
Verilog compilers for GALs *did exist*, but they ran on old-as-dirt systems,
didn't have any significant optimization capabilities, and were almost always
proprietary. What if we could make our own open-source Verilog flow for GAL
chips? Then we could write test benches in Verilog, map complex designs onto
the chip, and even integrate our designs with FPGAs later down the line.
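For instance, the style of combinatorial logic shown in the PALASM snippet above could be written as an ordinary Verilog module (a hypothetical example, not taken from the yosys4gal repository), which a standard testbench can then simulate before the design is mapped to the chip:

```verilog
// Hypothetical GAL-sized glue design: plain synthesizable
// Verilog in place of hand-written PALASM product terms.
module glue (
    input  wire a, b, c, d,
    output wire y, z
);
    assign y = (a & b) | (c & d);
    assign z = a ^ b;
endmodule
```

From here, something like `yosys -p synth glue.v` produces an optimized netlist; the exact pass that maps that netlist onto the GAL fuse map is whatever the toolchain (here, yosys4gal) provides.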
## Is this useful?
No, not really.
Well, there's a very, very niche use case. These parts are 5-volt tolerant, and come in DIP packages. If you needed some basic glue logic
@@ -83,6 +113,6 @@ when working on an older 5 volt system, you might want to have a few of these +
At the very least, these chips can emulate any 74-series chip, and can reduce a multi-chip design to a single chip.
The DIP form factor makes it much easier to breadboard, and the chips have zero start up delay.
In that narrow use case, `yosys4gal` is rather crucial. You no longer need WinCUPL or any old software, instead
using Verilog + Yosys. Your designs are automatically optimized, which makes it easier to fit more complex logic. And since it's Verilog,
you can integrate it into a larger simulation or move it to an FPGA later if you desire.


@@ -18,6 +18,7 @@
packages = forAllSystems (pkgs: rec {
default = pkgs.buildNpmPackage {
name = "myblog";
version = "unstable";
buildInputs = with pkgs; [
nodejs
vips