21 Oct 2019

OE Floating-Point Options for ARMv5 (ARM926EJ-S)

One of the (many) things I enjoy about OpenEmbedded is how easy it is to try out different configurations. Want to switch from sysvinit to systemd? Change the config, re-build, and there's your new image to test. Want to switch from busybox to coreutils? Change the config, re-build, and there's your new image.

Recently, I have been working with an ARMv5 device that was released in 2008: the NXP LPC3240, an SoC based on the ARM926EJ-S core. The specific device I'm using has a VFPv2 unit; however, since the VFP was optional on the ARM926EJ-S, most distros/images are built with no hardware floating-point support. From the standpoint of binary distributions, this makes the most sense: if you want to supply a binary that runs on the largest number of devices, build for the lowest common denominator. But when building your own distro/images from source using OpenEmbedded, you have the flexibility to tweak the parameters of your build to suit the specifics of your hardware.

Nowadays, a user has 3 choices when it comes to VFP on the ARM926EJ-S:
  1. soft: floating-point emulated in software (no hardware floating-point)
  2. softfp: enable hardware floating-point but have floating-point parameters passed in integer registers (i.e. use the soft calling conventions)
  3. hard: enable floating-point and have floating-point parameters passed in floating-point registers (i.e. use FPU-specific calling conventions)
The naming of option 2 (softfp) is unfortunate. To me, saying "soft floating-point" implies the floating-point is being emulated in software. However, its name was meant to contrast its calling convention with that of hard floating-point, not to imply the floating-point is being emulated in software.

By default in OpenEmbedded, including tune-arm926ejs.inc sets DEFAULTTUNE to "armv5te", which disables the VFP. By tweaking DEFAULTTUNE in your machine's .conf file (or in local.conf) you can try out all three options. Personally, when setting DEFAULTTUNE, I also like to tweak TUNE_CCARGS.

To try out the different options, set the following parameters:
  1. soft:
    DEFAULTTUNE = "armv5te"
    TUNE_CCARGS = "-mcpu=arm926ej-s -marm"
  2. softfp:
    DEFAULTTUNE = "armv5te-vfp"
    TUNE_CCARGS = "-mcpu=arm926ej-s -mfpu=vfp -mfloat-abi=softfp -marm"
  3. hard:
    DEFAULTTUNE = "armv5tehf-vfp"
    TUNE_CCARGS = "-mcpu=arm926ej-s -mfpu=vfp -mfloat-abi=hard -marm"
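One quick way to confirm that a binary was actually built with the tuning you intended is to dump its ARM attributes with readelf (a sketch; the cross tool prefix and the path to the binary will vary with your build):
    $ arm-oe-linux-gnueabi-readelf -A whetstone | grep -i vfp
On a hard build you should see Tag_ABI_VFP_args: VFP registers in the output; on a softfp build the FPU architecture tag is present but Tag_ABI_VFP_args is not; on a soft build neither appears.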
The meta-openembedded/meta-oe layer provides a number of recipes for benchmark applications. Interesting performance benchmark programs include: whetstone, dhrystone, linpack, nbench, and the "cpu" test of sysbench.

STD BENCHMARK DISCLAIMER: when it comes to benchmarks it's always important to remember that they are synthetic. That is: they are programs created to measure the performance of some artificial work-load of their choosing. If you want to know how the performance of your program will change under different settings, the only real way to determine that is to build and test your specific program under the different settings. It's also worth pointing out that during the era when benchmark programs were a really hot topic (late 90's-ish?) many vendors would tailor their hardware towards the popular benchmark programs of the time, skewing the results dramatically. In other words, a specific piece of hardware would be tuned to run a specific benchmark really well, but "real" workloads wouldn't see much improvement. Therefore YMMV.

For this experiment I created three images; each one built using one of the three floating-point tunings given above but all containing the same contents and the same versions of all the contents. I then loaded each of the images on my hardware in turn, so I could run the benchmark programs to generate performance data.

As of the time these images were built (Oct 11, 2019), the HEAD revision of openembedded-core was 59938780e7e776d87146002ea939b185f8704408 and the HEAD revision of meta-openembedded/meta-oe was fd1a0c9210b162ccb147e933984c755d32899efc. At that time the compiler being used was gcc-9.2, and the versions of the various components were: glibc:2.30, bash:5.0, dhrystone:2.1, linpack:1.0, nbench:2.2.3, sysbench:0.4.12, and whetstone:1.2.

First Impressions

One of the first interesting things to note is the size of the various binaries (sizes in bytes):

              soft      softfp    hard
  whetstone   33,172    20,236    20,444
  dhrystone   13,752     9,660     9,660
  sysbench    81,268    77,176    77,176
  linpack     13,744     9,652     9,652
  nbench      47,308    43,216    43,216

    Looking at the disassembly of each of these binaries, it's not hard to see why this is. Disassembling the binaries is as simple as:
    $ arm-oe-linux-gnueabi-objdump -d whetstone
    While the softfp and hard programs are filled with VFP instructions (e.g. vldr, vmul.f64, vsub.f64, etc.) the soft program contains calls to various __aeabi_* functions and __adddf3. These functions come from libgcc, gcc's low-level runtime support library, which provides (among other things) the software floating-point emulation routines the compiler emits calls to when there's no FPU. Interestingly, the code of these functions is linked statically into the executable itself (not as a shared library). As you can imagine, emulating floating-point operations in software takes a lot of code!
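    A quick way to get a feel for this in bulk (a rough sketch; the counts will vary by program and toolchain, and grep counts matching lines, not individual instructions) is to count VFP instructions versus calls into the soft-float helpers:
    $ arm-oe-linux-gnueabi-objdump -d whetstone | grep -cE 'vldr|vmul|vadd|vdiv'
    $ arm-oe-linux-gnueabi-objdump -d whetstone | grep -c '__aeabi_'
    On the softfp and hard binaries the first count dominates; on the soft binary it's the second.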

    If you have floating-point hardware, taking advantage of it will shrink the size of your executables (if they use floating-point math).

    Whetstone

    whetstone is a benchmark program whose primary purpose is to measure floating-point performance. In each image I ran the whetstone program 5 times, timing each run with time(1), with 1,000,000 loops per run:
    # time whetstone 1000000
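    To automate the five runs, a simple shell loop on the target works (a sketch):
    # for i in 1 2 3 4 5; do time whetstone 1000000; done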
    The averages of each test are as follows. Higher MIPS is better, lower time is better:

                MIPS       duration [s]
    soft         100.16    998.4
    softfp      1872.84     53.4
    hard        1872.84     53.4

    Dhrystone

    dhrystone is a benchmark used to evaluate integer performance. In each image I ran the dhrystone program (dhry) 5 times, timing each run, and performing 1,000,000 iterations per run:
    # time echo 1000000 | dhry
    The averages are as follows. Higher dhry/sec is better, lower time is better:

                dhry/sec     duration [s]
    soft        432527.22    2.3
    softfp      431037.7     2.3
    hard        429554.58    2.3

    Sysbench (cpu)

    sysbench is a benchmark which includes a bunch of sub-benchmarks, one of which is the "cpu" test. On each image I ran the cpu test 5 times, capping the run-time to 300[s]. The benchmark appears to perform prime factorization, measuring something called "events", and recording run time per event.
    # time sysbench --max-time=300 --test=cpu run
                events    duration/event [ms]
    soft        1157.2    259.29
    softfp      2951.6    101.638
    hard        2951      101.662


    As a final test, on each image I ran the cpu test just once without a time limitation, to see how much time it would otherwise take.
    # time sysbench --test=cpu run
                events    test duration
    soft        10000     43m0.50s
    softfp      10000     16m56.499s
    hard        10000     16m56.777s

    Linpack

    linpack is a benchmark testing a computer's ability to perform numerical linear algebra. The program takes one required parameter: the size of the array to use. If you pass "200", it will calculate a 200x200 array. As it runs, it determines how many repetitions to perform, basing the repetition count on its measured performance. For each repetition it records how much time it took. When it's done a set of repetitions, it calculates a KFLOPS count, then starts over with a different repetition count.

    For each image I ran the program once with "200" and once with "500". With no hardware floating-point, on a 200x200 array it starts with 1 repetition, then tries 2, then 4, 8, etc. With hardware floating-point, on a 200x200 array it starts with 8 repetitions, then 16, 32, etc. On a 200x200 array the repetition counts common to all images are 8, 16, and 32. On a 500x500 array the repetition counts common to all images are 1 and 2.

    The program never terminates; it keeps increasing the repetition count and going until explicitly killed.
    # echo 200 | linpack
            soft                     softfp                   hard
    reps    time/rep    KFLOPS       time/rep    KFLOPS       time/rep    KFLOPS
    8        4.3        2718.669      0.64      18553.356      0.62      19223.389
    16       8.6        2718.614      1.29      18552.917      1.25      19214.278
    32      17.2        2718.792      2.58      18552.361      2.49      19212.128
    # echo 500 | linpack
            soft                     softfp                   hard
    reps    time/rep    KFLOPS       time/rep    KFLOPS       time/rep    KFLOPS
    1        8.1        2674.928      1.38      15876.865      1.38      15883.324
    2       16.17       2674.871      2.74      15878.365      2.74      15882.516

    nbench

    nbench (aka BYTEmark) runs a bunch of sub-tests (including: numerical sort, string sort, bitfield, fp emulation, fourier, assignment, IDEA, huffman, neural net, and LU decomposition) then generates both an integer index and a floating-point index. These indices are relative to what were considered capable machines of the time (mid-1990's).
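    For reference, a default run takes no arguments; assuming the meta-oe recipe installs the binary (and its data files) as nbench, the invocation is simply:
    # time nbench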

    This benchmark was run twice on each image, the averaged results are:

                integer idx    fp idx
    soft        1.054          0.1
    softfp      1.1095         0.961
    hard        1.109          0.979

    Conclusions

    Since the software floating-point emulation is linked statically into C programs, using hardware floating-point makes binaries smaller in programs that perform floating-point calculations. Enabling hardware floating-point in such programs also improves the performance of floating-point operations noticeably. Interestingly, it appears as though integer performance is ever so slightly impacted in the hard case relative to softfp. Therefore it would seem that if your work-load is entirely floating-point, go with hard; if it mixes floating-point with considerable integer calculation, softfp might be best.

    As always, test your own application to know which mode is best in your scenario.

    16 Sep 2019

    Board Bring-Up: An Introduction to gdb, JTAG, and OpenOCD

    Lately I've been working on getting a recent-ish U-Boot (2018.07) and Linux kernel (5.0.19) running on an SoC that was released back in 2008: the NXP LPC3240 which is based on the ARM926EJ-S processor (NOTE: the ARM926EJ-S was released in 2001).

    Although I had U-Boot working well enough to load and boot Linux, the moment the Linux kernel started printing its boot progress, my console was filled with the sort of garbage that tells an embedded developer they've got the wrong baud rate. Double-checking, and even triple-checking, of the baud rate values, however, showed that every place where it was configured, it had been correctly set to 115200 8N1.

    Having a working console is the basis from which the rest of a software developer's board bring-up activities take place! If you can compile a kernel, load it on the board, and get it to print anything, legibly, to the console, then you're already in a really good position. But if there's no working connection via the console, it means more low-level work is needed.

    Going down the hierarchy (from easier to harder), if the console isn't working, then you'll need to see if JTAG is a possibility. If JTAG isn't available, then you'll need to look for an LED to blink. Blinking an LED to debug one's work during board bring-up isn't uncommon, but it can be a lot more painful. With nothing but (perhaps) a single LED, it can be hard (though not strictly impossible) to communicate something as simple as: "the value at 0x4000 4064 is 0x0008 097e, and I've reached <this> point in the code". Thankfully for me, this particular board has a working JTAG interface, and there is support for this SoC in OpenOCD.

    JTAG is a very large specification and has a lot of use-cases. For my purposes, JTAG consists of:
    • extra logic that is added to a chip which implements a very specific state machine
    • a bunch of extra pins (at least 4, but some designs add more) with which to interface to this internal state machine from outside the chip
    • a set of commands (in the state machine) that can be executed by toggling bits on the external pin interface
    These commands let you do things such as push individual bits into, or get individual bits out of, a particular device's JTAG scan chain. Depending on how the scan chain is implemented, this could translate to activities such as the ability to read/write registers, and read/write arbitrary values in the memory space (which includes things like peripherals, configuration, etc).

    Most development hosts don't have random GPIO lines available for interfacing, therefore a dongle of some sort is needed to go between the desktop machine and the target board's multi-wire JTAG interface. In days past, these dongles would be connected to the development host via serial or parallel interfaces; nowadays they're mostly USB.

    Armed with a JTAG dongle, in theory it would be possible to start interacting with the target board directly via JTAG commands. However, this could be very tedious as the JTAG commands are very primitive (i.e. having to follow the state machine precisely, and work 1 bit at a time). One of the more common arrangements is to use gdb, which permits the user to perform higher-level actions (i.e. set a breakpoint, read a given 32-bit memory address, list the register contents, etc) and lets the software deal with the details. Note, however, gdb itself does not know how to "speak" JTAG nor does it know how to interact with a JTAG dongle. gdb does, however, speak its own command language: the GDB remote serial protocol. It is OpenOCD which acts as the interpreter between the remote serial protocol on the one hand (e.g. over a network port), and JTAG commands for the target on the other (e.g. over USB to the dongle), marshalling all the data back and forth between the two.

    With the target board powered off, plug the JTAG dongle's pins into the board's JTAG connector; connect the development host to the JTAG dongle via USB.

    Power on the target board.

    Run OpenOCD on the development host. In my specific case the command I invoke is:
    $ openocd -f interface/ftdi/olimex-arm-usb-ocd-h.cfg -f board/phytec_lpc3250.cfg
    It is important to note that openocd runs as a daemon and, as such, once invoked, does not terminate until explicitly killed. In particular, this command is run in its own terminal and simply left running until my debugging session is done. All other work needs to be performed in other terminals. Perhaps you're thinking: "I'll just run it in the background using an ampersand". That would work; however, as it runs and interacts with gdb and the board, openocd prints useful information to the terminal. Therefore giving it its own terminal and letting it run independently while keeping it visible is often quite useful. It's always someplace visible on my desktop while debugging.

    OpenOCD needs to know what dongle I'm using (it supports a number of JTAG dongles) and it needs to know the board or SoC to which it is connecting (it has support for many SoCs and boards). Implicit in the choice of dongle is the communication protocol (here USB) and the dongle's characteristics (its USB IDs and other properties). By specifying a target board or SoC, you're letting OpenOCD know things such as how to initialize its connection, the register size, what speed to use, details about how the device needs to be reset, and so on.

    More recently, some development boards come with built-in debug circuitry, including a USB connector, already designed into the target board itself. In these cases the JTAG dongle isn't needed. One simply needs to connect the target board directly to the development host via a single USB cable, and start up OpenOCD (and gdb) giving only one piece of information: the board's name. All other details are implied.

    Running on a GNU/Linux system, gdb works best with ELF executables. gdb can be coerced into working with raw binaries, but when presented with an ELF file it has a lot more of the data it needs to do its job. But neither the Linux kernel nor U-Boot run as ELF binaries. As part of their default build processes, however, both the Linux kernel and U-Boot build systems generate ELF output in addition to the artifacts that are actually run. A U-Boot build will produce, for example, u-boot.bin, which is the actual U-Boot binary that is stored wherever the bootloader needs to be placed. But in addition to this, a file called u-boot is produced, which is its ELF counterpart. Similarly for the Linux kernel: the kernel itself might be found in arch/arm/boot/zImage, but its ELF counterpart is vmlinux.
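    The difference is easy to see with file(1); the output will look something like the following (a sketch; the details vary by build):
    $ file u-boot u-boot.bin
    u-boot:     ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, not stripped
    u-boot.bin: data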

    If you want to debug a Linux kernel via JTAG using gdb, simply invoke:
    $ arm-oe-linux-gnueabi-gdb vmlinux

    Since the target is an ARM board and my host is an x86 board, I need to invoke the cross-gdb program, not the native one (otherwise it won't be able to make sense of the binary instructions). Since I do so much of my work using OpenEmbedded as a basis, when working independently on U-Boot and the kernel, I simply have OpenEmbedded generate an SDK targeting this particular board, and use it for all my work. When invoking this cross-debugger, I simply provide it with the path to, and the name of, the ELF file containing the kernel I have compiled.
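    With an OE-generated SDK installed, sourcing the SDK's environment script puts the cross tools on PATH and sets convenience variables such as $GDB (the install path and tune name below are hypothetical; adjust to your SDK):
    $ . /opt/my-distro/environment-setup-armv5e-oe-linux-gnueabi
    $ $GDB vmlinux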

    By default openocd listens on port 6666 for tcl connections, port 4444 for telnet connections, and port 3333 for gdb connections. In order to create the link between gdb and openocd, once gdb is up and running you'll need to link them together by issuing the target remote or target extended-remote command:
    (gdb) target extended-remote :3333
    Of course if you've told openocd to listen to a different port, you'll need to make the necessary adjustments to the connection.
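    Once connected, a few typical first commands look like this (the memory address is just an example; the monitor command passes the rest of the line straight through to openocd):
    (gdb) monitor reset halt
    (gdb) info registers
    (gdb) x /4xw 0x40004064
    (gdb) break panic
    (gdb) continue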

    Congratulations! You're now debugging the Linux kernel on your target board via JTAG using gdb! No serial console required!

    In my particular case, although I knew the Linux kernel was doing something, I wasn't sure what exactly was going on since the baud rate via my serial console was messed up. Using this setup I was able to dump the kernel's ring buffer, allowing me to see exactly what the kernel was doing and providing me with valuable debugging information of its boot:
    (gdb) x /2000bs __log_buf


    27 Jun 2019

    Verizon Struggles to Understand How Email Works

    Email has been around for longer than I've been alive! But apparently, 48 years on, it remains too complicated for even a telecommunications company such as Verizon to understand.

    On June 1st I get the following email in my inbox:

    Your updated email address needs to be verified.

    To protect your privacy and ensure that we're sending important information to the right place, click below to verify your email address.


    Turns out someone has just signed up for a Verizon account and given my email address instead of his own. No problem, I'll just not click on the link and everything should be fine, right?

    Lol... NOT!

    In the last 3 weeks I've received 11 emails from Verizon... letting me know my new phone is on its way (and verifying my account and address information), confirming my order, providing details of my next bill and plan details, asking me to fill out a survey (let's just say they didn't get top marks in that one!), etc... and I never clicked the link!

    It's a good thing they sent out that initial "address verification email". Wouldn't want all that personal information going to the wrong person, eh?

    At the bottom of every email they've sent, there's always an "unsubscribe" link. Great, I'll just click on that... Oh wait, I can't unsubscribe by clicking the link. I have to sign in to my Verizon account before I can unsubscribe. Is that even legal? I thought unsubscribing was supposed to be a one-click thing in the USA?

    So I figure maybe I'll get a bit creative and ask for a password reset, the system will send me a link, I'll click the link, and be able to change the email address? Nope. Can't do that either. "For security reasons you need to provide the secret PIN that was used when the account was created in order to reset the account". Oh that's nice, at least that part of their system works.

    Oh, here's my solution: on their website, under support, is a messaging app that I can use to contact a customer service rep. I'll use that, chat with a rep, and have them remove my email address from this account. Nope. Can't do that: the app asks that I log in to my account before I can chat with a customer service rep from the website.

    Looking through the emails that I've received so far, I find the name, address, email, and phone of the Verizon customer service rep who signed this person up. Oh this should be the ticket! I'll email her, let her know what's up, then she can use her insider magic to erase my email from this customer's account. Wow! I must have been on something when I thought that was going to fix anything. She outright refused to help. Her reply was "I will reach out to <customer> and ask them to correct the email address". Really?! That's your solution?! The person who didn't know what an email address was in the first place is who we're relying on now to fix this? The person who has no clue what his email address is (or, apparently, what an email address is to begin with) is the genius who's going to get us out of this mess? If he had a clue to begin with, we wouldn't be here, would we?! When I, politely, point this out to her, she then asks if I know <customer>'s email address so she can change it to that. WTF??! How do you expect me to know the correct email address of some random whack-job on the Internet? I'm so stupid, I should have just said "yes" and given her some other random email address (like, maybe her own). Then this would be solved (from my point of view). And if she is capable of changing it (should I have given her some random email address) why can't she just delete mine without asking <genius> to do it? Why would she be able to solve the problem had I provided a reply to her ridiculous question, but can't fix the problem otherwise?

    So tonight I decided to call Verizon customer support itself and get this sorted out. Spoiler alert: it's still not fixed. First off, the customer-support dial-in system is very adamant that I provide my Verizon phone number and PIN in order to let me do almost anything. In fact, one of the top-level menu items is "if you're not an existing customer" (so this gets me out of having to have a Verizon account) but then if you pick option #6 on the very next menu (for "other") then it asks for your Verizon phone number and PIN!! So I have to call back again and pretend I'm not a current customer but that I want to become one. This finally lets me talk to a person (the "sales" lineup is never busy). I explain the issue. He's very nice and all, but insists that there is no way for him (or anyone else) to change the account information on an account without knowing the PIN of that account.

    I understand the point. Verizon (like most companies) doesn't trust their own employees (especially the ones at the lower echelons) and therefore has a system in place such that customer service reps can't log themselves into random accounts and mess around with the data. That sounds all fine and good.

    But in 2019, as sophisticated (or whatever you wish to use) as Verizon's system is, there's no contingency for the scenario whereby a customer puts in the wrong email address other than to wait and hope for the customer to fix the error themselves? Nobody anywhere who was part of designing Verizon's systems ever considered the possibility that random users might (accidentally perhaps?) put in the wrong email address and therefore provide a mechanism to remove such an email address from their system? It just never occurred to them that this might happen?

    Worse yet is the fact that the original "email verification link" is apparently pointless. Regardless of whether the link is clicked or not, if Verizon has an email to send to a customer, the address on file is used whether or not it has been verified.

    It seems like a pretty basic oversight. If you're going to have a path whereby the system is going to send out verification emails to verify the email address a random person randomly puts into the system, there should be a little more thought put into what should happen should the email link never get clicked (maybe it could delete itself after a short period?). Or at the very least, a mechanism whereby someone within Verizon can delete an email address from an account (especially if it hasn't been verified). Or even less than that, the "unsubscribe" links at the bottom of the emails should allow a person to unsubscribe without having to log into an account and provide a PIN (especially in the case where the email address has not yet been verified).

    2 Feb 2019

    LoRa - first steps

    It took all of (maybe?) an hour to set up two Adafruit Feather M0 with RFM95 LoRa Radio boards and have them ping each other using the simple getting started guide and default Arduino code. Yes Arduino, boo hiss, I agree. But it was a very simple and easy way to perform a quick test which helps answer a few basic feasibility questions.



    As some of you know, we own a farm, which presents lots of amazing opportunities for electronics projects: remote sensing, remote control, recording, etc. It would be great to know if someone cranks up the heat in the tack room, then leaves without turning it back down again. It would be great to know if someone accidentally leaves a light on, or a door open, somewhere in the barn. It would be great to be alerted if the electric fence goes down. It would be fantastic to be able to track water temperature in numerous places throughout our outdoor wood boiler HVAC system and correlate that with ambient room temperatures and outdoor temperature. It would be even more amazing to be able to track property-wide and area-specific electricity usage and water usage. And perhaps even consider some HVAC control projects too! Then there's motion sensing, detecting cars/people coming and going, gate operations/accessibility, wildlife/herd tracking, mailbox alerts, ..., it's quite a list!

    But before I can even start to dream too much, I need to look at a lot of mundane things and figure out a whole bunch of details. For example: how do I communicate with things over the length (685m) and width (190m) of our property (~30acres)? What's the best way to communicate with things in the barn? Does everything need to be plugged in, or are batteries feasible?

    One of the challenges that might not be readily obvious to most, is that the barn is mostly wrapped in metal. Trying to do wireless things in, around, through, and past an all-metal-wrapped barn is not straight-forward. Even our house has a metal roof. Another challenge is the fact our house is made of field-stone, and has roughly 17" thick concrete/stone walls! Try getting WiFi out of the underground basement through a 17" concrete/stone wall!

    I'm sure to most people it's obvious WiFi isn't a solution. Maybe sections of the property could be covered by WiFi, but it's certainly not the solution everywhere. And even at that, trying to cover an outdoor area in WiFi requires outdoor antennas and WiFi extenders (which are not cheap, and can be difficult to get working together). Not to mention: WiFi is hard on battery-operated devices. Obviously Bluetooth isn't going to cut it either. So that eliminates all those Espressif ESP8266/ESP32 and BT/BTLE devices. A traditionally popular option would be Zigbee, but I get the feeling its popularity is waning. The rising star today for "IoT things" seems to be LoRa, so I wanted to give that a try. Ideally, though, I'd like to try Zigbee too, so I can evaluate it and LoRa side-by-side.

    But how well is LoRa going to work on my property? Sure we hear all sorts of amazing numbers describing the theoretical LoRa range, but these results always come with provisos. How well is LoRa going to work from my basement? Through my house's thick walls? Past the all-metal barn? Over the hills? And through the forest?

    Then on top of LoRa itself is this whole LoRaWAN stuff and The Things Network... (whatever those things are).

    Above the radio we then have to consider microcontrollers. I wouldn't want to wake up one day to find that I had grown overly-biased in my preference for one microcontroller over all others. But having worked with 8-bit PICs, 8-bit AVRs, and 8051s, I have to say: those 32-bit CortexMs from ARM are pretty sweet! Maybe I'll consider using a PIC here or there just to improve my microprocessor breadth, but they won't be a top priority. Another up-and-coming microcontroller that I'll want to experiment with would be one of those smaller RISC-V designs such as the FE310.

    On top of the microcontroller goes the code. As I said above, the Arduino environment is cute for some quick prototyping, but ideally I'd prefer to be closer to the hardware. Popular choices in the maker community include MicroPython and Adafruit's CircuitPython. Those are okay choices and both have their place, but only if you're fond of "single control-loop" designs. Through these projects I'm hoping to explore MicroPython, CircuitPython, and, yes, even Arduino stuff, but ultimately I'd like to spend most of my time with things like FreeRTOS, Zephyr, mbed, and libopencm3. Any others I should consider?

    Above the "firmware" comes higher-level software such as messaging. I'm guessing MQTT is the only sensible choice here?

    I'm still not done. Another item that needs serious attention is the set of hardware choices: form-factors, batteries, weatherproof enclosures, .... If every item is going to be a one-off design, then I can try a bunch of different boards, batteries, enclosures, and form-factors to see which ones work better than others. But if I want to build up an ecosystem of devices all built on the same known platform, then I need to consider standardizing on some of these options.

    I like what Adafruit has done with their Feather line of development boards. They're standardized, breadboard-able, have LiPo connectors and charging hardware onboard, and have an ever-growing ecosystem of daughter-boards (FeatherWings). What's nice about the Feather ecosystem is how the user has a choice of microcontroller for the baseboard itself. I think it would be fair to call Adafruit's Feather ecosystem a form-factor for The Internet of Things. Are there any others worth considering?


    I started out saying how it didn't take very long to get two of these boards sending messages to each other. Although my research had told me that it should work easily, I was still very amazed when I took one of the boards, plugged in a LiPo battery, put everything inside a weatherproof enclosure, brought it to the barn, and returned to my desk to find they were still communicating! The barn is about 80m (~260') away, and my office is underground, behind a 17" thick concrete/stone wall! I tweaked the code a little, but didn't make any changes to the radio operation other than to set the frequency. I'm using just a plain, simple 3" wire soldered to the "antenna" pad. Wow!

    And with this little experiment, I've (finally!) started down the path of (hopefully!) many fun, electronics, farm projects! I now know I can at least communicate from my desk to the barn over LoRa using a simple 3" wire antenna and two tiny Feather boards.

    4 Sep 2018

    OE Hands-On, September 13 2018

    Back in April I gave a talk about OpenEmbedded/Yocto at my local RaspberryPi Meetup:
    https://www.meetup.com/Raspberry-Pi/events/gbdwdpyxgbqb/
    slides: https://www.slideshare.net/TrevorWoerner/using-openembedded-93867379

    That talk went well, and participants were anxious to try it themselves. Therefore we've arranged for this upcoming Toronto Raspberry Pi Meetup, September 13 2018, to be a hands-on session with OpenEmbedded/Yocto! Bring your Raspberry Pi, and associated equipment[1], with you to the meeting and I'll help you work on generating your own distros/images!

    Admission is free, but limited, so please sign up at:
    https://www.meetup.com/Raspberry-Pi/events/gbdwdpyxmbrb/



    [1] If you want to participate, you need to bring, at a minimum, your Raspberry Pi (any of rpi0, rpi1 (original), rpi2, rpi3 (any), or cm3), its power supply, and a microSD card. If you want to verify anything is working you'll need either a serial console cable, or a device to plug into your device's HDMI port. I'll bring some spare serial console cables with me. If you use an HDMI device, you might also want to bring a USB keyboard and mouse.

    22 Jan 2018

    RPi and TPM and DT Part 1 (Background)

    The company for which I work put together a daughter-board for the Raspberry Pi that includes a TPM chip (among other things). This series of posts is a recap of what was required to get the TPM chip working with an OE/Yocto-generated image.

    More generically, Hitex has also created a daughter-board for the Raspberry Pi that includes the same chip.

    The TPM chip is this one from Infineon, the SLB9670. It uses an SPI interface.

    Back in the glory days of the PC and the PC clones, the locations of a device's configuration registers were known a priori. Take the example of a serial port: a serial port has 8 contiguous configuration registers; if your computer had a serial port, its first register would be at I/O address 0x3f8 and it would be known as COM1. If you had a second serial port, its base address would be 0x2f8 and it would be COM2. Nobody would ever dream of putting something else at 0x3f8, and if your PC's OS probed 0x3f8 and didn't find anything, it could assume you didn't have a COM1.

    Nowadays, such hard-coding of the I/O space would be considered silly. Some products need no serial ports, and others need dozens; setting aside a large block of I/O space for devices that may or may not be present isn't useful. There's no reason 0x3f8 has to be reserved for the first serial port. Additionally, buses such as PCI and USB allow devices to be placed anywhere in memory and can be configured and probed dynamically. However, not all devices are on such fancy buses. SPI and I2C devices, for example, have simpler buses that don't provide dynamic probing and configuration. But your drivers still need to know where in the memory map they are found.

    One solution for mapping a given product's known device addresses to device drivers, without hard-coding or a priori knowledge, is a Device Tree (DT). Given a specific product with a set of devices at specific addresses (for that product), a DT can be created (for that product) to map these known addresses for the relevant drivers. This means every product, potentially, needs its own Device Tree (if DT is the solution being used). Device Trees are stored and maintained alongside the Linux kernel sources, but are usually flashed to a separate location from the kernel and are not part of the kernel blob itself (although there are options to make it so). When booted, the kernel is provided with the information it needs to find the Device Tree. In this way, the same kernel build could work across a set of products that have roughly the same peripherals, but potentially at different addresses, by providing a different DT for each product.
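    For example, when booting with U-Boot, the kernel and the Device Tree blob are typically loaded to separate addresses and both are handed to the boot command (a sketch; the addresses and file names here are hypothetical):
    => tftp 0x80008000 zImage
    => tftp 0x82000000 myboard.dtb
    => bootz 0x80008000 - 0x82000000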

    Device Trees are not new technology. Their lineage can be traced back to Sun's OpenFirmware from the late 1980s. The Linux kernel started using OF/DT early on for various PowerPC boards (e.g. the PowerPC-based Apple Macs from the 1990s).

    Device Trees are not the only way to solve the kernel's "how can I find my peripherals?" problem. Other solutions include ACPI and UEFI.

    This is all well and good for boards whose peripherals are "hard-coded" to the board itself (either because they are part of the MCU, or because they are soldered to the board and their bus IDs are also hard-coded). But most embedded boards come with expansion headers that allow the user to plug in daughter-boards to add features (be they "shields" or "capes" or "hats" etc.). These expansion headers mostly expose a bunch of I2C, SPI, and various GPIO pins to the user. The resulting explosion of DT possibilities would be crazy to try to maintain in any repository if the goal was to try to track every combination of board with every combination of daughter-board. Therefore Device Tree Overlays came into existence.

    A Device Tree Overlay is a small snippet of a Device Tree that can be maintained and loaded separately, but is processed and merged with the base Device Tree by the kernel when it boots. DT Overlays are not dynamic. The assumption is that in order to change a product's daughter-board, the product would have to be powered down, and while the power is applied the wiring is not being changed. Some daughter-boards are even getting so fancy as to include a discoverable ID that can be used by the kernel to find and load the overlay without user intervention!

    In my specific case I have a Raspberry Pi 3, with a "dumb" daughter-board which includes a TPM chip that I want the Linux kernel to be able to find and use. The Linux kernel already includes a driver for this specific TPM chip [drivers/char/tpm/tpm_tis_spi.c]; all I need to do is help it find the chip by providing a correct DT Overlay telling the driver where to look (i.e. which SPI bus and chip select to use) for the hardware. I also need to adjust my kernel configuration to make sure the relevant parts are compiled in to support this hardware.
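    On the Raspberry Pi, an overlay is compiled with dtc and enabled from config.txt, and the driver itself sits behind CONFIG_TCG_TPM and CONFIG_TCG_TIS_SPI in the kernel configuration. A sketch (the overlay name here is hypothetical; the actual details are the subject of the follow-up posts in this series):
    $ dtc -@ -I dts -O dtb -o tpm-slb9670.dtbo tpm-slb9670-overlay.dts
    $ cp tpm-slb9670.dtbo /boot/overlays/
    then add the following line to /boot/config.txt:
    dtoverlay=tpm-slb9670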

    25 Oct 2017

    Running ModelSim-Altera from the Quartus Prime Lite IDE under Linux

    For reference I'm running:
    • Quartus Prime Lite version 17.0.2.602
    • ModelSim-Altera (i.e. the "free" one)
    • openSUSE 42.2 (kernel: 4.4.90-18.32-default)

    Simulation is an important step when working with FPGAs. Unfortunately, out of the box, the ModelSim-Altera simulation tool will not run from either the Quartus Prime Lite IDE nor the cmdline under a modern (post-4.x kernel), 64-bit, Linux.

    To get an idea of the problem (and verify it's occurring with your setup), try running the simulator tool by clicking on Tools -> Run Simulation Tool -> RTL Simulation. (I'm fairly sure simulation can only be run after a design has been successfully compiled, so you'll need to start with a compiling project. I'm also pretty sure you'll need a top-level HDL entity for simulation; simulation doesn't work with schematic capture only, although getting the IDE to generate HDL from a schematic is fairly straight-forward.)



    For me this action results in the following error message:

    [screenshot: an error dialog stating the ModelSim-Altera software can't be launched, and pointing at a .rpt file to check for details]
    Oddly, the contents of that file (i.e. where the error message says to check for more details) are only populated after you hit OK. Meaning you have to make the dialog box go away before you can open the file it tells you to check for more information. IOW, you can't keep the dialog box open to help find the file, you have to make note of where the file is, close the dialog box, then open the file. For reference mine looks like this:
    Info: Start Nativelink Simulation process
    Info: NativeLink has detected Verilog design -- Verilog simulation models will be used

    ========= EDA Simulation Settings =====================

    Sim Mode              :  RTL
    Family                :  max10
    Quartus root          :  /opt/Altera/intelFPGA_lite/17.0/quartus/linux64/
    Quartus sim root      :  /opt/Altera/intelFPGA_lite/17.0/quartus/eda/sim_lib
    Simulation Tool       :  modelsim-altera
    Simulation Language   :  verilog
    Simulation Mode       :  GUI
    Sim Output File       : 
    Sim SDF file          : 
    Sim dir               :  simulation/modelsim

    =======================================================

    Info: Starting NativeLink simulation with ModelSim-Altera software
    Sourced NativeLink script /opt/Altera/intelFPGA_lite/17.0/quartus/common/tcl/internal/nativelink/modelsim.tcl
    Error: Can't launch ModelSim-Altera Simulation software -- make sure the software is properly installed and the environment variable LM_LICENSE_FILE or MGLS_LICENSE_FILE points to the correct license file.
    Error: NativeLink simulation flow was NOT successful



    ================The following additional information is provided to help identify the cause of error while running nativelink scripts=================
    Nativelink TCL script failed with errorCode:  issued_nl_message
    Nativelink TCL script failed with errorInfo:  Can't launch ModelSim-Altera Simulation software -- make sure the software is properly installed and the environment variable LM_LICENSE_FILE or MGLS_LICENSE_FILE points to the correct license file.
        while executing
    "error "$emsg" "" "issued_nl_message""
        invoked from within
    "if [ catch {exec $vsim_cmd -version} version_str] {
                    set emsg "Can't launch $tool Simulation software -- make sure the software is properly installed..."
        (procedure "launch_sim" line 88)
        invoked from within
    "launch_sim launch_args_hash"
        ("eval" body line 1)
        invoked from within
    "eval launch_sim launch_args_hash"
        invoked from within
    "if [ info exists ::errorCode ] {
                    set savedCode $::errorCode
                    set savedInfo $::errorInfo
                    error $result $..."
        invoked from within
    "if [catch {eval launch_sim launch_args_hash} result ] {
                set status 1
                if [ info exists ::errorCode ] {
                    set save..."
        (procedure "run_sim" line 74)
        invoked from within
    "run_sim run_sim_args_hash"
        invoked from within
    "if [ info exists ::errorCode ] {
                set savedCode $::errorCode
                set savedInfo $::errorInfo
                error "$result" $savedInfo ..."
        (procedure "run_eda_simulation_tool" line 334)
        invoked from within
    "run_eda_simulation_tool eda_opts_hash"
    Note that this error message location, /home/trevor/devel/fpga/de10-lite/rapid_verilog, is the directory that I created for this project. Also note that orgate is the name of this project's top-level entity. Therefore the location and filename of your error message will be different, but should follow the same pattern. For future reference I'll refer to /home/trevor/devel/fpga/de10-lite/rapid_verilog as ${PROJECT_DIR}.

    What's really bad about the error messages that show up in the GUI dialog box and in the rpt file is that they imply there's some sort of problem with a license file. There isn't. No license (not even a "free" one) is required to run ModelSim-Altera. This is a red herring. There are a bunch of things that are wrong and need fixing. But none of those have anything to do with licensing. So please resist the urge to log into the Altera website, generate a license, download, and install it. There will be no change to your predicament and hopefully I've saved you from going down that path and wasting a few hours.

    If, at Altera's Quartus Prime Lite download page, you selected Combined Files then the simulation software, ModelSim-Altera, would have been installed along with the entire Quartus Prime Lite package.




    During installation, the Quartus Prime Lite installer will ask you for an install location. I'm going to call that location ${QUARTUS_INSTALL_DIR} throughout this post (which for me is /opt/Altera/intelFPGA_lite/17.0). At ${QUARTUS_INSTALL_DIR} we find the modelsim_ase directory, which is the directory that contains the ModelSim-Altera Simulation software.

    To start investigating what's going wrong, we need to start with where the Quartus Prime Lite IDE looks for the simulation tool. Under Tools -> Options


     Select General -> EDA Tool Options


    If we look into this ${QUARTUS_INSTALL_DIR}/modelsim_ase/bin directory, we see that it is filled with symlinks:


    Looking carefully, we see that all these links point to just one file!
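    (The same can be seen from the cmdline; the output shown is a sketch:)
    $ ls -l ${QUARTUS_INSTALL_DIR}/modelsim_ase/bin | head
    lrwxrwxrwx 1 trevor users 3 ... vcom -> vco
    lrwxrwxrwx 1 trevor users 3 ... vlog -> vco
    lrwxrwxrwx 1 trevor users 3 ... vsim -> vco
    ...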


    Looking through this vco file we see it's a shell script. Its job appears to be a multiplexer: the calling program tries to run one of the programs in this bin directory. Based on a bunch of things the vco script finds, it figures out which "real" program to run and where to find this "real" program. The script appears to have support for being run on AIX, Cygwin, Win*, SunOS, HP-UX, and Linux. As you can tell by this list, vco is probably in need of an update!

    In the case of Linux, the script first tries to determine which "mode" it should use. The possible modes are "32" and "64". But it doesn't make its determination based on the processor on which it is run; it makes this determination based on a combination of which specific program the caller is requesting, and what directory names it finds in ${QUARTUS_INSTALL_DIR}/modelsim_ase. In most cases, even when running on 64-bit hardware, when run on Linux it will set the "mode" to 32.

    Next the vco script tries to define the directory in which to find the actual Linux programs to run. It does this by looking at what kernel is running. The script has support for kernels starting with 2.4.x and running up to 3.x. If the current kernel is something that it can match in this range, it sets the run directory to "linux". Any kernel it finds outside this range is assumed to be something ancient, so it sets the run directory to "linux_rh60". For reference, RedHat Linux 6.0 came out in 1999 with a 2.2.5 kernel. Therefore, when run on a modern system with a post 3.x kernel, it will want to find the programs to run in a directory named ${QUARTUS_INSTALL_DIR}/modelsim_ase/linux_rh60. If we look in the ${QUARTUS_INSTALL_DIR}/modelsim_ase directory, we find a linuxaloem directory and we find a linux symlink to linuxaloem. But we don't find any linux_rh60 directory nor symlink. So it appears that support for early systems has been removed from Quartus Prime Lite, but the vco script still tries to use that directory when it doesn't know what to make of the kernel version.

    There are a couple of ways to solve this issue. On the one hand you could dig into the vco script and try to fix some of its shortcomings. You could create a linux_rh60 symlink that simply points to either linuxaloem or linux. But at the end of the day, the easiest thing to do is to bypass the vco script altogether by editing the directory path in the EDA Tool Options to point to ${QUARTUS_INSTALL_DIR}/modelsim_ase/linuxaloem instead! This way, when Quartus Prime Lite wants to run the simulator, it will simply invoke the correct Linux binary directly instead of going through the multiplexing script first.
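    (For reference, the symlink workaround mentioned above would be:)
    $ cd ${QUARTUS_INSTALL_DIR}/modelsim_ase
    $ ln -s linuxaloem linux_rh60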

    Now that we've routed around this problem, we can try running the simulator again... nope, not fixed.
    Looking in the ${QUARTUS_INSTALL_DIR}/modelsim_ase/linuxaloem directory, we find a bunch of actual executables (i.e. not more scripts). If we try to run, for example, the vsim program we get:

     $ ./vsim
    ./vish: error while loading shared libraries: libX11.so.6: cannot open shared object file: No such file or directory

    Why is it running the vish program? In any case, this error makes a little bit of sense. We noted above that the vco script decides to set the mode to "32", despite running on a 64-bit machine. If we look at these programs (in ${QUARTUS_INSTALL_DIR}/modelsim_ase/linuxaloem) we find:

     $ ldd vsim
            linux-gate.so.1 (0xf7783000)
            libpthread.so.0 => /lib/libpthread.so.0 (0xf773e000)
            libdl.so.2 => /lib/libdl.so.2 (0xf7739000)
            libm.so.6 => /lib/libm.so.6 (0xf76f2000)
            libc.so.6 => /lib/libc.so.6 (0xf7549000)
            /lib/ld-linux.so.2 (0xf7785000)
    and

     $ ldd vish
            linux-gate.so.1 (0xf76f6000)
            libpthread.so.0 => /lib/libpthread.so.0 (0xf76b1000)
            libdl.so.2 => /lib/libdl.so.2 (0xf76ac000)
            libm.so.6 => /lib/libm.so.6 (0xf7665000)
            libX11.so.6 => not found
            libXext.so.6 => not found
            libXft.so.2 => not found
            libXrender.so.1 => not found
            libfontconfig.so.1 => /usr/lib/libfontconfig.so.1 (0xf7626000)
            librt.so.1 => /lib/librt.so.1 (0xf761d000)
            libncurses.so.5 => /lib/libncurses.so.5 (0xf75f2000)
            libc.so.6 => /lib/libc.so.6 (0xf7449000)
            /lib/ld-linux.so.2 (0xf76f8000)
            libfreetype.so.6 => /usr/lib/libfreetype.so.6 (0xf73b1000)
            libexpat.so.1 => /usr/lib/libexpat.so.1 (0xf7388000)
            libtinfo.so.5 => /lib/libtinfo.so.5 (0xf735e000)
            libz.so.1 => /lib/libz.so.1 (0xf7346000)
            libbz2.so.1 => /usr/lib/libbz2.so.1 (0xf7337000)
            libpng16.so.16 => /usr/lib/libpng16.so.16 (0xf72fb000)

    Obviously my openSUSE install already has libX11 (and friends) installed, so why does ldd claim not to find them? If you look at the libraries it does find, you'll notice that ldd is finding them in /lib/... not /lib64/.... This means these binaries are looking for 32-bit libraries, not 64-bit ones. Therefore, we need to install some 32-bit packages. Also, as it turns out, my system already has a bunch of 32-bit compatibility libraries installed, otherwise the ldd outputs wouldn't have been even this good. If your ldd tests on these binaries don't come out as well, you may need to find and install even more 32-bit libraries than were needed on my system. On my openSUSE 42.2 system the following helps get past this "thanks for the 32-bit binaries" issue:

    # zypper install libX11-6-32bit libXext6-32bit libXft2-32bit libXrender1-32bit

    Now that we've routed around this problem, we can try running the simulator again... nope, not fixed. However, this time we get no error dialog. I do see some sort of dialog pop up then quickly disappear before I can read anything, but the simulator doesn't start.

    If we start the Quartus software from the cmdline, when we try to run the simulator we'll see the following error message on the cmdline from which we invoked the IDE:

    $ ./quartus
    Info: *******************************************************************
    Info: Running Quartus Prime Shell
        Info: Version 17.0.2 Build 602 07/19/2017 SJ Lite Edition
        Info: Copyright (C) 2017  Intel Corporation. All rights reserved.
        Info: Your use of Intel Corporation's design tools, logic functions
        Info: and other software and tools, and its AMPP partner logic
        Info: functions, and any output files from any of the foregoing
        Info: (including device programming or simulation files), and any
        Info: associated documentation or information are expressly subject
        Info: to the terms and conditions of the Intel Program License
        Info: Subscription Agreement, the Intel Quartus Prime License Agreement,
        Info: the Intel MegaCore Function License Agreement, or other
        Info: applicable license agreement, including, without limitation,
        Info: that your use is for the sole purpose of programming logic
        Info: devices manufactured by Intel and sold by Intel or its
        Info: authorized distributors.  Please refer to the applicable
        Info: agreement for further details.
        Info: Processing started: Wed Oct 25 02:55:26 2017
    Info: Command: quartus_sh -t /opt/Altera/intelFPGA_lite/17.0/quartus/common/tcl/internal/nativelink/qnativesim.tcl --rtl_sim orgate orgate
    Info: Quartus(args): --rtl_sim orgate orgate
    Info: Info: Start Nativelink Simulation process
    Info: Info: NativeLink has detected Verilog design -- Verilog simulation models will be used
    Info: Info: Starting NativeLink simulation with ModelSim-Altera software
    Warning: Warning: File orgate_run_msim_rtl_verilog.do already exists - backing up current file as orgate_run_msim_rtl_verilog.do.bak2
    Info: Info: Generated ModelSim-Altera script file /home/trevor/devel/fpga/de10-lite/rapid_verilog/simulation/modelsim/orgate_run_msim_rtl_verilog.do File: /home/trevor/devel/fpga/de10-lite/rapid_verilog/simulation/modelsim/orgate_run_msim_rtl_verilog.do Line: 0
    Info: Info: Spawning ModelSim-Altera Simulation software
    Info: Info: Successfully spawned ModelSim-Altera Simulation software
    Info: Info: NativeLink simulation flow was successful
    Info: Info: For messages from NativeLink scripts, check the file /home/trevor/devel/fpga/de10-lite/rapid_verilog/orgate_nativelink_simulation.rpt File: /home/trevor/devel/fpga/de10-lite/rapid_verilog/orgate_nativelink_simulation.rpt Line: 0
    Info (23030): Evaluation of Tcl script /opt/Altera/intelFPGA_lite/17.0/quartus/common/tcl/internal/nativelink/qnativesim.tcl was successful
    Info: Quartus Prime Shell was successful. 0 errors, 1 warning
        Info: Peak virtual memory: 796 megabytes
        Info: Processing ended: Wed Oct 25 02:55:26 2017
        Info: Elapsed time: 00:00:00
        Info: Total CPU time (on all processors): 00:00:00
    Failed to obtain lock: couldn't open "/home/trevor/.modelsim_lock": file already exists
    Error in startup script:
    Initialization problem, exiting.

    Initialization problem, exiting.

    Initialization problem, exiting.

        while executing
    "Transcript::action_log "PROPREAD \"$key\" \"$value\"""
        (procedure "VsimProperties::Init" line 59)
        invoked from within
    "VsimProperties::Init $MTIKeypath"
        (procedure "PropertiesInit" line 18)
        invoked from within
    "PropertiesInit"
        invoked from within
    "ncFyP12 -+"
        (file "/mtitcl/vsim/vsim" line 1)
    ** Fatal: Read failure in vlm process (0,0)

    Note: if we try running vsim (from ${QUARTUS_INSTALL_DIR}/modelsim_ase/linuxaloem) directly we also get:

     $ ./vsim
    Failed to obtain lock: couldn't open "/home/trevor/.modelsim_lock": file already exists
    Error in startup script:
    Initialization problem, exiting.

    Initialization problem, exiting.

    Initialization problem, exiting.

        while executing
    "Transcript::action_log "PROPREAD \"$key\" \"$value\"""
        (procedure "VsimProperties::Init" line 59)
        invoked from within
    "VsimProperties::Init $MTIKeypath"
        (procedure "PropertiesInit" line 18)
        invoked from within
    "PropertiesInit"
        invoked from within
    "ncFyP12 -+"
        (file "/mtitcl/vsim/vsim" line 1)
    ** Fatal: Read failure in vlm process (0,0)

    If we google some of these messages, we find that it's a very strange way of saying that the program is unhappy with our system's version of freetype. The solution to this issue is to download, compile, and install an older version of freetype. We don't, however, want to install this older version to any system locations (and therefore mess up our system's package manager). Therefore, we will simply provide an alternate install-to directory, and use some tricks to make sure these Quartus programs use the alternate library. In my case I downloaded freetype-2.4.7, built, and installed it. But hold on: by default my system will compile everything to 64-bit code. That won't work in this case because, as we've established, these binaries are 32-bit and are looking for 32-bit libraries. Therefore in order to build for 32-bit on a 64-bit system:

    $ CFLAGS=-m32 ./configure --prefix=/home/trevor/local/packages/freetype-2.4.7-32bit/

    FreeType build system -- automatic system detection

    The following settings are used:

      platform                    unix
      compiler                    cc
      configuration directory     ./builds/unix
      configuration rules         ./builds/unix/unix.mk

    If this does not correspond to your system or settings please remove the file
    `config.mk' from this directory then read the INSTALL file for help.

    Otherwise, simply type `make' again to build the library,
    or `make refdoc' to build the API reference (the latter needs python).

    cd builds/unix; ./configure  '--prefix=/home/trevor/local/packages/freetype-2.4.7-32bit/'
    configure: loading site script /usr/share/site/x86_64-unknown-linux-gnu
    checking build system type... x86_64-unknown-linux-gnu
    checking host system type... x86_64-unknown-linux-gnu
    checking for gcc... gcc
    checking whether the C compiler works... no
    configure: error: in `/home/trevor/devel/extern/freetype-2.4.7/builds/unix':
    configure: error: C compiler cannot create executables
    See `config.log' for more details
    builds/unix/detect.mk:84: recipe for target 'setup' failed
    make: *** [setup] Error 77

    That's not good. If we look at the builds/unix/config.log file we find:

    configure:2904: checking whether the C compiler works
    configure:2926: gcc -m32   conftest.c  >&5
    /usr/lib64/gcc/x86_64-suse-linux/6/../../../../x86_64-suse-linux/bin/ld: skipping incompatible /usr/lib64/gcc/x86_64-suse-linux/6/libgcc.a when searching for -lgcc
    /usr/lib64/gcc/x86_64-suse-linux/6/../../../../x86_64-suse-linux/bin/ld: cannot find -lgcc
    /usr/lib64/gcc/x86_64-suse-linux/6/../../../../x86_64-suse-linux/bin/ld: skipping incompatible /usr/lib64/gcc/x86_64-suse-linux/6/libgcc.a when searching for -lgcc
    /usr/lib64/gcc/x86_64-suse-linux/6/../../../../x86_64-suse-linux/bin/ld: cannot find -lgcc
    collect2: error: ld returned 1 exit status

    That's a classic problem that happens when one tries to build 32-bit software on a 64-bit system. The 32-bit versions of more of the build infrastructure need to be added:

    # zypper install gcc-32bit

    Now the build should work:

    $ CFLAGS=-m32 ./configure --prefix=/home/trevor/local/packages/freetype-2.4.7-32bit
    ...
    $ make install

    If we then try to run the vsim program by hand from the cmdline we (still) get:

    Error in startup script:
    Initialization problem, exiting.

    Initialization problem, exiting.

    Initialization problem, exiting.

        while executing
    "Transcript::action_log "PROPREAD \"$key\" \"$value\"""
        (procedure "VsimProperties::Init" line 59)
        invoked from within
    "VsimProperties::Init $MTIKeypath"
        (procedure "PropertiesInit" line 18)
        invoked from within
    "PropertiesInit"
        invoked from within
    "ncFyP12 -+"
        (file "/mtitcl/vsim/vsim" line 1)
    ** Fatal: Read failure in vlm process (0,0)

    That's because we're still using the system's freetype. In order to use the freetype we've just built and installed:

    $ LD_LIBRARY_PATH=/home/trevor/local/packages/freetype-2.4.7-32bit/lib:$LD_LIBRARY_PATH ./vsim
    Failed to obtain lock: couldn't open "/home/trevor/.modelsim_lock": file already exists
    Reading pref.tcl

    Success!!


    But what can we do to make sure this recently-compiled freetype is used automatically?

    If we look carefully at the full log from when we ran Quartus Prime Lite from the cmdline and tried to run the simulator from the resulting IDE, we see that one of the first things the IDE does is run quartus_sh. This file is located in ${QUARTUS_INSTALL_DIR}/quartus/bin. If we look into this file we see that it is a script which is very similar in function to the vco script we found in the modelsim_ase directory [in fact, if we look at everything in that directory we see that the contents of every file are exactly the same as every other file in that directory, except with actual copies instead of symlinks to just one script!]. Most of the script determines what to do, then at the end the actual program is invoked. Just before invoking the actual program, a qenv.sh file is sourced. The qenv.sh file is found in the ${QUARTUS_INSTALL_DIR}/quartus/adm directory. If we edit this qenv.sh file and, very close to the top, add the following line (adjust it so it matches your specific situation):

    export LD_LIBRARY_PATH=/home/trevor/local/packages/freetype-2.4.7-32bit/lib:$LD_LIBRARY_PATH

    then we'll find that we can invoke the simulator from within the IDE and it will come up with the work folder that we need to simulate our design!!

    SUCCESS!



    Now we can simulate our design by right-clicking on orgate and selecting Simulate


    Voila!


    One webpage that was immensely helpful was this one from the Arch community, thanks!!