Perl 5.34.0 – one step closer to Perl 7?

Earlier today Perl 5.34.0 was released ( perldelta ).

Perl 5.34.0 represents approximately 11 months of development since Perl 5.32.0 and contains approximately 280,000 lines of changes across 2,100 files from 78 authors.

This marks the latest, and maybe the last, release in the Perl 5 series that started way back in 1994 according to Wikipedia. Next up should be Perl 7 ( perl7faq ), and it is about time, as many assume that Perl is stale due to no major releases in 27 years (!). And no, the Perl 6 / Raku confusion did not help at all and was in my eyes a terribly bad move that did much more damage to the image and brand than it did good.

In case you haven’t figured it out by now: I like perl, have done so since 1995, and I’m not giving up on it.

Added content

This is a great, quick overview of what has been added to perl over the last couple of releases: https://phoenixtrap.com/2021/05/25/perl-can-do-that-now/

Update: Using Data::Lua 0.02 in 2021

It looks like it is a no-go: it doesn’t work as expected, at least not in the way I expected it to.

To test my recently installed perl module Data::Lua I wrote a very small test script, just to see if I could get the data from the file I wanted.

#!/usr/bin/env perl

use strict;
use warnings;

# enable perl 5.10 features
use v5.10;

# use these modules
use Carp;
use Data::Dumper;

use Data::Lua;

# === TEST

my $vars = Data::Lua->parse_file('data/indata.lua');
print Dumper($vars);

exit;

This should simply take the Lua file indata.lua and parse it into a perl variable, $vars, as described in the perldoc for Data::Lua. Data::Dumper will then just print the resulting data structure. Easy. It does run without any errors, but it produces a huge output:

$ ./parsetest.pl | wc -l
9773494

This can’t be right. My input file is rather small, something like 10 kB and a total of 526 lines:

$ wc -l data/indata.lua
526 data/indata.lua

A difference of over 9.7 million lines is a tad too much to write off easily. Examining the output, there are a lot of ‘undef’ lines, roughly 9.7 million of them, with some data sprinkled in between. Removing these lines makes it look like there is a chance the data I want is in there.

$ ./parsetest.pl | grep -v undef | wc -l
526

Maybe. There is no way to verify that it is in a somewhat correct format without putting more time into this. And my input files will be much larger than this test file, so producing 99.9% output data that has to be filtered away before further processing isn’t good.
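
If I were to keep using Data::Lua anyway, my guess (completely unverified) is that the flood of undefs comes from sparse Lua tables being expanded into huge Perl arrays with undef in every empty slot. In that case a recursive prune dropped into the test script above should clean up the dump. A minimal sketch of that idea:

# prune: recursively drop undef array slots from the parsed structure
sub prune {
    my ($node) = @_;
    if (ref $node eq 'ARRAY') {
        return [ map { prune($_) } grep { defined } @$node ];
    }
    if (ref $node eq 'HASH') {
        return { map { $_ => prune($node->{$_}) } keys %$node };
    }
    return $node;
}

# then, instead of dumping $vars directly:
print Dumper(prune($vars));

But even if that works, generating and throwing away millions of values on every run is not much of a solution.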

So in the end I will have to write my own parser. Probably not a generic one but one that solves this particular problem I’m facing. Maybe I’ll write an update on that when I get somewhere.
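
As a starting point, and assuming purely for illustration that the files consist mostly of flat Lua assignments like name = "value" (the real data is more complex), a first cut could be as small as this:

#!/usr/bin/env perl

use strict;
use warnings;

my %data;
while (my $line = <>) {
    # skip blank lines and Lua comments
    next if $line =~ /^\s*(--|$)/;
    # capture simple assignments: key = value, value optionally quoted
    if ($line =~ /^\s*([\w.]+)\s*=\s*(.+?)\s*,?\s*$/) {
        my ($key, $value) = ($1, $2);
        $value =~ s/^["']//;
        $value =~ s/["']$//;
        $data{$key} = $value;
    }
}
# %data now holds the assignments, ready for the database;
# nested tables would need a real (recursive) parser

Nested tables are where the actual work will be, of course.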

Using Data::Lua 0.02 in 2021

Today I ran into a small problem. I needed to parse some files with Lua data (not my choice!) into a database (generic, but think MySQL). Of course I probably could have done this using Lua directly, but I didn’t want to spend more time on this than necessary, so I went with what I’m most comfortable with: perl.

A quick look at CPAN and I found Data::Lua, which should parse my Lua data quickly. Turning to my development system running Debian, I quickly installed Lua and then installed Inline::Lua and Data::Lua locally, for my private user. That should do the trick. Except that I ran into two problems with the Data::Lua tests:

t/parse-file.t …… 1/8 error: [string "_INLINED_LUA"]:18: attempt to call a nil value (global 'setfenv')

and

t/parse.t ……….. 1/7 error: [string "_INLINED_LUA"]:3: attempt to call a nil value (global 'loadstring')

After examining this further I found out that the functions setfenv and loadstring were removed from Lua in version 5.2: setfenv was replaced by the _ENV mechanism and loadstring was folded into load. My Debian system has version 5.3 (or was it 5.4?) installed by default.

Solution

To make this work I had to remove the default Lua version that was installed and replace it with an older 5.1:

# apt-get remove lua5.3 liblua5.3-dev
# apt-get install liblua5.1-0-dev lua5.1

Then I had to (force) rebuild Inline::Lua, since it was built against the 5.3 libraries:

$ cpan -f install Inline::Lua

After this Data::Lua passed all tests with no problems and installed smoothly. It remains to be seen if it solves my Lua-data-into-a-database-via-perl exercise.
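
A quick smoke test from the command line (assuming I’m reading the parse method in the POD correctly) at least dumps a tiny table without complaints:

$ perl -MData::Lua -MData::Dumper -e 'print Dumper(Data::Lua->parse("answer = 42"))'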

Examining Data::Lua a bit

Looking into the single file that is Data::Lua, it seems that adjusting the two available functions to work with modern Lua versions would be a really easy fix. But from what I can make out, the module is more or less abandoned by its author. The last update was in 2009 and it is at version 0.02. I also see that my problem was reported as a bug over two years ago, so I guess it is not getting any attention. Sad.

Using UltraEdit + Kick Assembler + Vice for C64 development

This is a re-post of old information that I previously posted somewhere back in 2018, but I wanted it here too for easy linking. It should still be valid on macOS Big Sur with a slightly updated UltraEdit (I use Version: 16.10.0.22), as I used it as late as yesterday for programming some small stuff.

I followed the instructions at https://goatpower.org/projects-releases/sublime-package-kick-assembler-c64/ in order to set up my development system on Mac OS (High Sierra), but with one major difference: I had a fully licensed UltraEdit that I have been using for many years.

So this is how I set up the UltraEdit editor to use the compiler in a similar way to Sublime Text 3:

1. Entered Tool Configuration and configured it as:

Command Line:

export PATH=$PATH:/Applications/VICE/X64.app/Contents/MacOS/ && export CLASSPATH=$CLASSPATH:/Applications/KickAssembler/KickAss.jar && mkdir -p bin && java cml.kickass.KickAssembler '%n%e' -log 'bin/%n_BuildLog.txt' -o 'bin/%n_Compiled.prg' -vicesymbols -showmem -symbolfiledir bin && x64 -moncommands 'bin/%n.vs' 'bin/%n_Compiled.prg'

You probably want to save the file before running the assembler, and show the Terminal/Console to catch any assembler output.

2. Assigned a new hotkey for ‘User Tool 1’ in the preferences for key bindings; I picked F7.

3. Installed the following as mos6502.uew (Preferences → Display → Syntax Highlighting):

/L99"MOS6502 Assembly" Line Comment Num = 2// Block Comment On = /* Block Comment Off = */ String Chars = " File Extensions = asm
/Colors = 0,8421376,8421376,8421504,0,
/Colors Back = 16777215,16777215,16777215,16777215,16777215,
/Colors Auto Back = 1,1,1,1,1,
/Font Style = 0,0,0,0,0,
/C1"MOS6502 OpCodes" Colors = 16711680 Colors Back = 16777215 Colors Auto Back = 1 Font Style = 0
adc ahx alr anc anc2 and arr asl axs
bcc bcs beq bit bmi bne bpl bra brk bvc bvs
clc cld cli clv cmp cpx cpy
dcp dec dex dey
eor
inc inx iny isc
jmp jsr
las lax lda ldx ldy lsr
nop
ora
pha php pla plp
rla rol ror rra rti rts
sac sax sbc sbc2 sec sed sei shx shy sir slo sre sta stx sty
tas tax tay tsx txa txs tya
xaa
/C2"Registers" Colors = 255 Colors Back = 16777215 Colors Auto Back = 1 Font Style = 0
x
y
(
)
*
/C3"KickAss Directives" Colors = 33023 Colors Back = 16777215 Colors Auto Back = 1 Font Style = 0
.align .assert .byte .const .enum .error .eval .fill .for .function .if .import .macro .pc .print .pseudocommand .pseudopc .return .struct .text .var .word
:BasicUpstart 
*Matrix *Vector
abs acos add asin asmCommandSize atan atan2 AT_ABSOLUTE AT_ABSOLUTEX AT_ABSOLUTE_Y AT_IMMEDIATE AT_INDIRECT AT_IZEROPAGEX AT_IZEROPAGEY AT_NONE author
BF_BITMAP_SINGLECOLOR BF_C64FILE BF_FLI BF_KOALA binary BLACK BNE_REL BLUE BROWN
c64 cbrt ceil copyright cos cosh CYAN
DARK_GRAY
else exp expm1
floor
get getData getMulticolorByte getPixel getSinglecolorByte getType getValue getX getY getZ GRAY GREEN
Hashtable height hypot
IEEEremainder init
JMP_IND
keys
LoadPicture LDA_ABS LDA_ABSX LDA_ABSY LDA_IMM LDA_IND LDA_IZPX LDA_IZPY LDA_ZP LDA_ZPX LDA_ZPY List LIGHT_BLUE LIGHT_GRAY LIGHT_GREEN LIGHT_RED LoadBinary location log log10 log1p
Matrix max min mod MoveMatrix
name
ORANGE
PerspectiveMatrix play pow PURPLE put
random RED remove round RotationMatrix RTS
ScaleMatrix set shuffle signum sin sinh size songs source sqrt startSong
tan tanh toDegrees toRadians text
Vector
WHITE width
YELLOW
{
}
/C4"C64 Custom Regs" Colors = 32768 Colors Back = 16777215 Colors Auto Back = 1 Font Style = 0
$d000 $d001 $d002 $d003 $d004 $d005 $d006 $d007 $d008 $d009 $d00a $d00b $d00c $d00d $d00e $d00F $d010 $d011 $d012 $d013 $d014 $d015 $d016 $d017 $d018 $d019 $d01a $d01b $d01c $d01d $d01e $d01F $d020 $d021 $d022 $d023 $d024 $d025 $d026 $d027 $d028 $d029 $d02a $d02b $d02c $d02d $d02e $d400 $d401 $d402 $d403 $d404 $d405 $d406 $d407 $d408 $d409 $d40a $d40b $d40c $d40d $d40e $d40F $d410 $d411 $d412 $d413 $d414 $d415 $d416 $d417 $d418 $d419 $d41a $d41b $d41c $dc00 $dc01 $dc02 $dc03 $dc04 $dc05 $dc06 $dc07 $dc08 $dc09 $dc0a $dc0b $dc0c $dc0d $dc0e $dc0F $dd00 $dd01 $dd02 $dd03 $dd04 $dd05 $dd06 $dd07 $dd08 $dd09 $dd0a $dd0b $dd0c $dd0d $dd0e $dd0f $fffe $ffff

And that was it. Make some code in UltraEdit and hit F7 to compile and run it in Vice. Very Neat!
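
If you want a minimal piece of code to verify the whole chain with, the classic border flasher does the job (a generic KickAssembler example, not tied to this particular setup):

// minimal toolchain test: endlessly cycle the C64 border colour
BasicUpstart2(start)
start:  inc $d020   // $d020 is the VIC-II border colour register
        jmp start

Hit F7 and Vice should start with the border flashing away.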

Don’t use your Ubiquiti USG for speed tests

No, you can not trust the built-in speed test in the Ubiquiti Unifi Security Gateway or the controller. Here are some rather simple tests that I made.

First of all, my internet connectivity is probably above average. I have two ISPs where the primary is a 1000/1000 connection (WAN1 on the USG) and the secondary is a 250/50 connection (WAN2 on the USG).

Everything in my network is connected by cable. Incoming internet is delivered to me either over standard gigabit ethernet cable (cat-6) or via a coax-connected cable modem. All computers and devices at home are connected by cable too, except mobile phones and tablets. All tests except the controller test were made using my iMac, connected like this:

iMac -> Unifi Flex Mini -> Unifi US-8 -> Unifi Security Gateway -> Internet

The built-in speed test in my Unifi Controller (version 6.0.45.0), running on a Raspberry Pi 3B, reports just under 200/200 Mbps.

So yes, that isn’t that impressive on a supposed 1000/1000 connection. But I know I have downloaded files much, much faster than that; something is not quite right here. Let’s try a test from my computer. As I’m Swedish there is a local initiative called Bredbandskollen (http://www.bredbandskollen.se/), which is the go-to test that most ISPs recommend here. Let’s see what I get there:

Wow, 900 Mbps and less than a millisecond of latency! That’s more like it. And this also proves that the USG can very well handle speeds approaching a gigabit without much problem (given the right settings; I have “Protection Mode” set to “Disabled” as, afaik, it doesn’t do much anyway other than provide a false sense of security).

But one test is no test (well, now we have two tests that show different results), so I did a few more. Note that these tests were done while my home network was reasonably busy, with many browser tabs open, Spotify playing music and so on.

FAST (http://fast.com) is the one recommended by Netflix, I think. It gives very similar results to Bredbandskollen.

And this is Speedtest by Ookla (http://speedtest.net), which gives even slightly higher speeds.

Conclusion: Don’t ever trust the built-in speed test that the USG and/or controller provides (unless you have a sub-100 Mbps connection, I guess). Always (and I can not stress this enough) do the tests over a wired connection; going wireless introduces too many variables that are hard to control. Always do at least two (preferably three) tests using different services. These tests of course only show the speed I get at one point in time; if I suspected variance I would have to do more tests over a longer time period.

Bonus information for me: I really get the promised 1000/1000 speeds that my primary ISP is selling me.

That time when a power cut in a nearby area killed my USG3p

Yeah, that happened. Story time.

One Friday evening I was sitting at home enjoying dinner and watching tv, almost like a normal person. Then I suddenly lost internet access for no apparent reason. Since I have two separate incoming internet connections (one fibre and one cable ADSL) from two very different ISPs, I was like “huh?”. Around the same time my monitoring system pointed out that the router was not reachable. Time to figure out what was wrong.

A visual inspection of the router gave zero clues, as it looked like it was working: LEDs flashing as expected, but the GUI was not reachable. Neither did the device respond to ping. A restart did nothing, except that the status light now started to flash white. I had seen this before when a power supply broke, so I quickly dug up a spare one and plugged it in. Same response. Not good.

At the same time I saw on Facebook that people, not in my neighbourhood but in an adjacent one, complained about power loss. I still had power and had not experienced any problems. Checking the power company website I saw that there were outages both south and north of my area. Those problems started at 19:17, and when I checked the monitoring system it turned out that was the same time my router became unresponsive. Odd, but it had to be related. No other equipment in my setup reported any kind of problem around that time.

The next day, Saturday, I went by my local computer shop (Webhallen) to pick up a new router and went home to install it, which thankfully worked straight out of the box. I adopted it into my Unifi network via the controller I had set up on a Raspberry Pi earlier this year, and I was back online again.

Post mortem: when I finally had my network back up I went to see what was really wrong with the old router. It turned out that a factory reset brought it back to life. So if I had tried that first it would have saved me the cost of a new router, but now I have a spare one if I ever need it.

A fair warning: Seagate Exos X16 drives are noisy!

And that is an understatement. I recently replaced my very quiet (and old) WD Red 3TB drives with Seagate Exos X16 16TB drives. I’m starting to think this was a mistake, as my storage server is in the living room. I wish I had somewhere else to keep the server, but my apartment is not big enough.

I read somewhere on the internet (so yes, grain of salt, but it sounds about right) that the WD Reds are 28-29 dB when active and the Exos X16s are 45 (!) dB. I realize that the Exos are enterprise-class drives, probably not intended for home use, and that in a server room there is enough noise as it is to make this less of a problem.

Maybe I’ll have to replace them. Sad, because I got them quite cheap and replacements will be significantly more expensive. I wonder if the Toshiba N300 drives are quieter, because otherwise they seem to be a good alternative.

Netdata warns about packets dropped ratio

After my recent re-install of my fileserver I decided to make use of Netdata monitoring (https://www.netdata.cloud). It is simple and requires very little configuration, which suits me perfectly at the moment. But to my surprise it started throwing warnings at me from the start. Strange, as the server was just installed and has few services and little traffic to speak of, just a bunch of disks and NFS/CIFS shares.

One that caught my eye was Interface Drops (net_drops.enp3s0), which sounded like there was something wrong with the network interface or the local network:

screenshot from Netdata’s notification list

A quick look at ifconfig confirms that there are packet drops on the interface. Not a large amount, but enough to trigger the warning in Netdata.

# ifconfig enp3s0
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.6  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::62a4:4cff:feb1:b0d5  prefixlen 64  scopeid 0x20<link>
        ether 60:a4:4c:b1:b0:d5  txqueuelen 1000  (Ethernet)
        RX packets 13150808  bytes 5182069895 (4.8 GiB)
        RX errors 0  dropped 2874  overruns 0  frame 0
        TX packets 12350768  bytes 15847850867 (14.7 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

But at the same time, ethtool didn’t report these packets at all.

# ethtool -S enp3s0
NIC statistics:
     tx_packets: 12350894
     rx_packets: 13151278
     tx_errors: 0
     rx_errors: 0
     rx_missed: 2
     align_errors: 0
     tx_single_collisions: 0
     tx_multi_collisions: 0
     unicast: 12879297
     broadcast: 40489
     multicast: 231492
     tx_aborted: 0
     tx_underrun: 0

Odd. This is on my local network and the server is not exposed to the internet, so the source of those packets should be local. While I do have quite a few devices on my home network, none of them should, as far as I know, send out unknown traffic that gets dropped just like that.

Looking at the graph I could easily see that the drops were very regular: every 30 seconds a packet was dropped.

screenshot of the graph Interface Drops

Time to look at the interface with tcpdump and see if there are any obvious offenders that appear every 30 seconds. And behold, after some fancy filtering to remove familiar, unsuspicious traffic, this line regularly came up every 30 seconds:

17:53:24.402973 LLDP, length 85: UniFiSwitch

Interesting. So my Ubiquiti UniFi switch (a US-8) is using LLDP, the Link Layer Discovery Protocol (wikipedia), to advertise its existence on the local network. This is what gets dropped regularly, as my server doesn’t understand it, thus triggering the warning in Netdata.
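
For the record, LLDP frames can also be captured directly by their ethertype (0x88cc), which would have skipped the trial-and-error filtering. Something along these lines (not the exact command I used):

# tcpdump -i enp3s0 ether proto 0x88cc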

To solve this I decided to make my server aware of LLDP by installing the lldpd package. It doesn’t require any specific configuration; it “just works”.

  # apt-get install lldpd
  # systemctl enable lldpd
  # service lldpd start
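
Once the daemon is running, the switch should show up when querying for neighbours:

  # lldpcli show neighbors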

And within just a few minutes the warning in Netdata disappeared. Good times.

Since this was on a newly installed server with not that much traffic on the interfaces it was easy to catch. Had I started up all services these packets would have made up such a low ratio that they probably wouldn’t have triggered a warning.

Home storage refresh

Back on my old blog that I had 10 years ago (yes, it seems like forever ago, and it probably is) I wrote about building my homelab setup. At that time I used an HP MicroServer N54L (AMD Turion II based, dual core) as both VMware ESXi host and ZFS storage. Needless to say, that wasn’t an ideal setup in the long run, but before the hardware became limiting the hard drives started to fail. WD Greens will never again be used for anything by me.

So in 2013 I decided to solve my storage needs first and built a ZFS NAS from scratch. I required a physically small setup, since I don’t really have room for big noisy servers at home at the moment. The result was a small mini-ITX based Debian server with an Intel Pentium G2030 CPU (not at all powerful, but it runs linux + zfs without problems), 16GB of RAM, a Supermicro SATA controller card (2×4 SATA ports), two SSDs (one for the system and one as SLOG) and six WD Red 3TB disks in RAIDZ2. All this in a case that is 25cm x 30cm x 20cm. Brilliant!

Since then I have gotten rid of the Microserver, replaced some services with Raspberry Pis (Pi-hole and the Unifi controller) and moved some VMs to VMware Fusion on my primary computer (iMac Retina 27″).

Now it is 2021, the second year of the plague, and my fingers are finally itching to do some system work again. The fact that one of the WD Red drives had errors and kept going offline at times, after a pretty decent 7 years of power-on time, was the decisive factor to do something about this (that I was running out of space was another). Time to get to work!

Wishlist

So what do I want from a new setup?

  • It must fit in the same size case, as that is all the space I have in the cupboard
  • It must have more storage space, preferably at least 50% more usable space
  • It would be nice to have a more powerful CPU so it can run a few VMs or Docker containers
  • It would be nice if it could work as a backup target for my iMac (Time Machine)
  • Re-use as much of the old setup as possible to save money

That doesn’t sound that hard, does it? It turns out it wasn’t.

What I got

I started off by getting some new disks: two 16TB Seagate Exos X16 disks to replace the six 3TB WD Reds. Going from RAIDZ2 to a simple mirror should also increase performance quite a bit.

From Tradera (the Swedish version of eBay) I managed to get an Intel i5-3550 CPU for 18 euros. It fits the current motherboard while giving some more oomph and two more cores.

In order to be able to reinstall the system without trashing the old one (having a rollback option can be very handy; I know this after 25 years in IT), I got a Samsung 870 EVO SSD to be the new system disk.

With this I figured I could get by quite nicely.

Then I thought “if I’m going to work in this small case I might as well do as much as possible at the same time”, so I bought two Toshiba N300 8TB disks for a second mirror pair, because they were on sale.

Going from 18TB of raw disk to 48TB is at least a significant upgrade. And as the pool goes from six disks to four, I will have two free spots in the HDD cage for easy future expansion if needed. Nice.
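
For reference, the new pool layout boils down to two mirror vdevs. A sketch of the create command, with a hypothetical pool name and device names (in practice I’d use /dev/disk/by-id paths):

# zpool create -o ashift=12 tank \
      mirror /dev/sda /dev/sdb \
      mirror /dev/sdc /dev/sdd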

Next

Waiting for all the parts to arrive, figuring out which parts I forgot to buy, and then disassembling the old system and rebuilding it into a new one. I’ll write about that next.

Hello world …again

So I decided not to continue my previous blog, which has been stale since 2013 (2011, really), but instead start a new one and keep the old one as it is for archival purposes. Let us see where this takes me.

Note: this is 99% for my own sake, as I seem to need somewhere to act as a place where I can write things that will serve as my “external” memory sometimes 😉