Earlier today Perl 5.34.0 was released (perldelta).
Perl 5.34.0 represents approximately 11 months of development since Perl 5.32.0 and contains approximately 280,000 lines of changes across 2,100 files from 78 authors.
This marks the latest, and maybe the last, release in the Perl 5 series that started way back in 1994, according to Wikipedia. Next up should be Perl 7 (perl7faq), and it is about time, as many assume that Perl is stale because there has been no major release in 27 years (!). And no, the Perl 6 / Raku confusion did not help at all; in my eyes it was a terribly bad move that did much more damage to the image and brand than it did good.
In case you haven’t figured it out by now, I like perl and have done so since 1995 and I’m not giving up on it.
It looks like it is a no-go and doesn’t work as expected. At least not in the way I expected it to.
To test my recently installed perl module Data::Lua I wrote a very small test just to see if I could get the data from the file I wanted.
use v5.10;                # enable perl 5.10 features
use strict; use warnings;
use Data::Lua;            # use these modules
use Data::Dumper;

# === TEST
my $vars = Data::Lua->parse_file('data/indata.lua');
print Dumper($vars);
This should simply take the Lua file indata.lua and parse it into a perl variable, $vars, as described in the perldoc for Data::Lua. Then Data::Dumper will just print the resulting data structure. Easy. It does run without any errors, but it produces a huge output:
$ ./parsetest.pl | wc -l
This can’t be right. My input file is rather small: about 10 kB and a total of 526 lines:
$ wc -l data/indata.lua
A difference of over 9.7 million lines is a tad too much to write off easily. Examining the output, there are a lot of ‘undef’ lines. Like 9.7 million of them, with some data sprinkled in between. Removing these lines makes it look like there is a chance the data I want is in there.
$ ./parsetest.pl | grep -v undef | wc -l
Maybe. But there is no way to verify that it is in a somewhat correct format without putting more time into this. And my input files will be much larger than this test file, so producing 99.9% output that has to be filtered away before further processing isn’t good.
So in the end I will have to write my own parser. Probably not a generic one but one that solves this particular problem I’m facing. Maybe I’ll write an update on that when I get somewhere.
Today I ran into a small problem. I needed to parse some files with Lua data (not my choice!) into a database (generic, but think MySQL). Of course I probably could have done this using Lua directly, but I didn’t want to spend more time on this than necessary, so I went with what I’m most comfortable with: perl.
A quick look at CPAN and I found Data::Lua, which should parse my Lua data quickly. Turning to my development system running Debian, I quickly installed Lua and went on to install (locally, for my private user) Inline::Lua and Data::Lua. That should do the trick. Except that I ran into two problems with the Data::Lua tests:
t/parse-file.t …… 1/8 error: [string "_INLINED_LUA"]:18: attempt to call a nil value (global 'setfenv')
t/parse.t ……….. 1/7 error: [string "_INLINED_LUA"]:3: attempt to call a nil value (global 'loadstring')
After examining this further I found out that the functions setfenv and loadstring were removed from Lua in version 5.2. My Debian system has version 5.3 (or was it 5.4?) installed by default.
To make this work I had to remove the default Lua version that was installed and replace it with an older 5.1.
Then I had to (force) rebuild Inline::Lua, since it was built against the 5.3 libraries:
$ cpan -f install Inline::Lua
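The whole dance on Debian looks something like this (the exact package names are an assumption from my system and may differ on your release):

```shell
# Remove the too-new default Lua and install 5.1 instead
# (assumed Debian package names; adjust to your release)
sudo apt remove lua5.3 liblua5.3-dev
sudo apt install lua5.1 liblua5.1-0-dev

# Force-rebuild Inline::Lua against the 5.1 libraries, then retry Data::Lua
cpan -f install Inline::Lua
cpan install Data::Lua
```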
After this, Data::Lua passed all tests with no problems and installed smoothly. It remains to be seen if it solves my Lua-data-into-a-database-via-perl exercise.
Examining Data::Lua a bit
Looking into the single file that is Data::Lua, it seems that adjusting the two affected functions to work with modern Lua versions would be a really easy fix: since Lua 5.2, load replaces loadstring, and the environment that setfenv used to set can be passed directly to load instead. But from what I can make out, the module is more or less abandoned by its author. The last update was in 2009 and it is at version 0.02. I also see that my problem was reported as a bug over two years ago, so I guess it is not getting any attention. Sad.
This is a re-post of old information that I previously posted somewhere back in 2018, but I wanted it here too for easy linking. It should still be valid in macOS Big Sur and a slightly updated UltraEdit (I use Version: 220.127.116.11), as I used it as late as yesterday for programming some small stuff.
No, you cannot trust the built-in speed test in the Ubiquiti Unifi Security Gateway or the controller. Here are some rather simple tests that I made.
First of all, my internet connectivity is probably above average. I have two ISPs where the primary is a 1000/1000 connection (WAN1 on the USG) and the secondary is a 250/50 connection (WAN2 on the USG).
Everything in my network is connected by cable. Incoming internet is delivered to me either over standard gigabit ethernet cable (cat-6) or via a coax-connected cable modem. All computers and devices at home are connected by cable too, except mobile phones and tablets. All of these tests except the controller test were made using my iMac, connected like:
iMac -> Unifi Flex Mini -> Unifi US-8 -> Unifi Security Gateway -> Internet
This is the built in speed test in my Unifi Controller (version 18.104.22.168) running on a Raspberry Pi 3B:
So yes, just under 200/200 isn’t that impressive on a supposed 1000/1000 connection. But I know I have downloaded files much, much faster than that. Something is not quite right here. Let’s try a test from my computer. As I’m Swedish, there is a local initiative called Bredbandskollen (http://www.bredbandskollen.se/) which is the go-to test that most ISPs recommend here. Let’s see what I get there:
Wow, 900 Mbps and less than a millisecond of latency! That’s more like it. And this also proves that the USG can very well handle speeds up to gigabit without much problem (given the correct settings; I have “Protection Mode” set to “Disabled”, as afaik it doesn’t do much anyway other than provide a false sense of security).
But one test is no test (well, now we have two tests that show different results), so I did a few more. Note that these tests were done while my home network was reasonably busy, with many browser tabs open, Spotify playing music, etc.
FAST (http://fast.com) is the one that is recommended by Netflix I think. Gives very similar results to Bredbandskollen.
Conclusion: Don’t ever trust the built-in speed test that the USG and/or controller provides (unless you have a sub-100 Mbps connection, I guess). Always (and I cannot stress this enough) do the tests with a wired connection; going wireless introduces so many variables that are hard to control. Always do at least two (preferably three) tests using different services. These tests of course only show the speed I get at one point in time, and if I suspected variance I would have to do more tests over a longer time period.
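For those longer-period tests, a scriptable tester is handy so it can run from cron. A sketch using the community speedtest-cli tool (an assumption on my part; it was not used in the tests above and must be installed separately, e.g. via pip):

```shell
# Append a timestamped result to a log on each run; schedule with cron, e.g.:
#   0 * * * * /home/me/speedtest-log.sh
date >> "$HOME/speedtest.log"
speedtest-cli --simple >> "$HOME/speedtest.log"   # prints Ping/Download/Upload
```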
Bonus information for me: I really get the promised 1000/1000 speeds that my primary ISP is selling me.
One Friday evening I was sitting at home enjoying dinner and watching TV, almost like a normal person. Then I suddenly lost internet access for no apparent reason. Since I do have two separate incoming internet connections (one fibre and one cable ADSL) from two very different ISPs, I was like “huh?”. Around the same time my monitoring system pointed out that the router was not reachable. Time to figure out what was wrong.
A visual inspection of the router gave zero clues, as it looked like it was working: LEDs flashing as expected, but the GUI was not reachable. Neither did the device respond to ping. A restart did nothing, except that the status light now started to flash white. I had seen this one before, when the power supply was broken, so I quickly dug up a spare one and plugged it in. Same response; not good.
At the same time I saw on Facebook that people, not in my neighbourhood but an adjacent one, complained about power loss. I still had power and had not experienced any problems. Checking the power company website, I saw that there were outages both south and north of my area. Those problems started at 19:17, and when I checked the monitoring system it turned out that was the same time my router became unresponsive. Odd, but it had to be related. No other equipment in my setup reported any kind of problem around that time.
The next day, Saturday, I went by my local computer shop (Webhallen) to pick up a new router and went home to install it, which thankfully worked straight out of the box. I adopted it into my Unifi network via the controller I had set up on a Raspberry Pi earlier this year, and I was back online again.
Post mortem: when I finally had my network back up, I went to see what was really wrong with the old router. It turned out that a factory reset brought it back to life again. If I had tried that first it would have saved me the cost of a new router, but now I have a spare one if I ever need it.
And that is an understatement. I recently replaced my very quiet (and old) WD Red 3TB drives with Seagate Exos X16 16TB drives. I’m starting to think this was a mistake, as my storage server is in the living room. I wish I had somewhere else to keep the server, but my apartment is not big enough.
I read somewhere on the internet (so yes, grain of salt, but it sounds about right) that the WD Reds are 28-29 dB when active while the Exos X16s are 45 (!) dB. I realize that the Exos are enterprise-class drives, probably not intended for home use, and that in a server room there will be enough noise as it is to make this less of a problem.
Maybe I’ll have to replace them. Sad, because I got them quite cheap and replacements will be significantly more expensive. I wonder if the Toshiba N300 drives are quieter, because otherwise they seem to be a good alternative.
After my recent re-install of my fileserver I decided to make use of Netdata monitoring (https://www.netdata.cloud). It is simple and requires very little configuration, which suits me perfectly at the moment. But to my surprise it started to throw warnings at me from the start. Strange, as my server was just installed and doesn’t have many services or much traffic to speak of; just a bunch of disks and NFS/CIFS shares.
One that caught my eye was Interface Drops (net_drops.enp3s0) which sounded like there was something wrong with the network interface or the local network:
A quick look at ifconfig confirms that there are packet drops on the interface. Not a large amount, but enough to trigger the warning in Netdata.
Odd. This is on my local network and the server is not exposed to the internet, so the source of those packets should be local. While I do have quite a few devices on my home network, none of them should, as far as I know, send out unknown traffic that just gets dropped like that.
Looking at the graph I could easily see that the drops were very regular: every 30 seconds a packet was dropped.
Time to look at the interface with tcpdump and see if there are any obvious offenders appearing every 30 seconds. And behold, after some fancy filtering to remove familiar, unsuspicious traffic, this line came up regularly, every 30 seconds:
17:53:24.402973 LLDP, length 85: UniFiSwitch
Interesting. So my Ubiquiti UniFi switch (a US-8) is using LLDP, Link Layer Discovery Protocol (wikipedia), to advertise its existence on the local network. This is what gets dropped regularly, as my server doesn’t understand it, thus triggering the warning in Netdata.
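If you want to look for the same thing, the fancy filtering can be skipped: LLDP frames can be matched directly on their ethertype, 0x88cc (the interface name is from my server; adjust to yours):

```shell
# Show only LLDP frames on the interface; needs root
sudo tcpdump -i enp3s0 -v 'ether proto 0x88cc'
```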
To solve this I decided to make my server aware of LLDP by installing the lldpd package. It doesn’t require any specific configuration. It “just works”.
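On Debian that boils down to the following, and lldpcli can be used to verify that the switch is actually seen:

```shell
sudo apt install lldpd           # Debian/Ubuntu package name
sudo lldpcli show neighbors      # should list the UniFi switch once a frame arrives
```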
And within just a few minutes the warning in Netdata disappeared. Good times.
Since this was on a newly installed server with not that much traffic on the interfaces it was easy to catch. Had I started up all services these packets would have made up such a low ratio that they probably wouldn’t have triggered a warning.
Back on my old blog, which I had 10 years ago (yes, it seems like forever ago, and it probably is), I wrote about building my homelab setup. At that time I used an HP Microserver N54L (AMD Turion II based, dual core) as both VMware ESXi host and ZFS storage. Needless to say, that wasn’t an ideal setup in the long run, but before the hardware became limiting the hard drives started to fail. WD Greens will never again be used for anything by me.
So in 2013 I decided to solve my storage needs first and built a ZFS NAS from scratch. I required a physically small setup, since I don’t really have room for big noisy servers at home at the moment. The result was a small mini-ITX based Debian server with an Intel Pentium G2030 CPU (not at all powerful, but it can run linux + zfs without problems), 16GB of RAM, a Supermicro SATA controller card (2×4 SATA ports), two SSDs (one for the system and one as SLOG) and six WD Red 3TB disks in RAIDZ2. All this in a case that is 25cm x 30cm x 20cm. Brilliant!
Since then I have gotten rid of the Microserver, replaced some services with Raspberry Pis (PiHole and the Unifi controller), and moved some VMs to VMware Fusion on my primary computer (iMac Retina 27″).
Now it is 2021, the second year of the plague, and my fingers are finally itching to do some system work again. The fact that one of the WD Red drives had errors and kept going offline at times, after a pretty decent 7 years of power-on time, was the decisive factor (and that I was running out of space was another). Time to get to work!
So what do I want from a new setup?
It must fit in the same size case as that is all the space I have in the cupboard
It must have more storage space, preferably at least +50% more usable space
It would be nice to have a more powerful CPU so it can run a few VMs or Docker containers
It would be nice if it could work as a backup target for my iMac (Time Machine)
Re-use as much of the old setup as possible to save money
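For the Time Machine item, Samba (4.8 or later) can advertise a share as a Time Machine target through its fruit VFS module. A minimal sketch of what the share section could look like; the share name, path, and user below are made up:

```ini
[timemachine]
   path = /tank/timemachine
   valid users = myuser
   read only = no
   vfs objects = catia fruit streams_xattr
   fruit:time machine = yes
   fruit:time machine max size = 1T
```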
That doesn’t sound that hard does it? It turns out it wasn’t.
What I got
I started off by getting some new disks: two 16TB Seagate Exos X16 disks to replace the six 3TB WD Reds. Going from RAIDZ2 to a simple mirror should also increase performance quite a bit.
From Tradera (the Swedish version of eBay) I managed to get an Intel i5-3550 CPU for 18 euros. It fits the current motherboard while giving some more oomph and two more cores.
In order to be able to reinstall the system without trashing the old one (having a rollback option can be very handy; I know this after 25 years in IT), I got a Samsung 870 EVO SSD to be the new system disk.
With this I figured I could get by quite nicely.
Then I thought “if I’m going to work in this small case I may as well do as much as possible at the same time”, so I bought two Toshiba N300 8TB disks for a second mirror pair, because they were on sale.
Going from 18TB of raw disk to 48TB is at least a significant upgrade. And since it goes from 6 disks to 4, I will have two free spots in the HDD cage for easy future expansion if needed. Nice.
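The capacity math, as a quick sketch (in TB, ignoring TiB-vs-TB and filesystem overhead):

```shell
old_raw=$((6 * 3))              # six 3TB WD Reds
old_usable=$(( (6 - 2) * 3 ))   # RAIDZ2 spends two disks on parity
new_raw=$((2 * 16 + 2 * 8))     # two mirror pairs: 2x16TB + 2x8TB
new_usable=$((16 + 8))          # each mirror pair yields one disk of usable space
echo "raw: $old_raw -> $new_raw TB, usable: $old_usable -> $new_usable TB"
```

So usable space doubles from 12TB to 24TB, which covers the “at least +50% usable” wish with margin. The pool layout would be along the lines of `zpool create tank mirror <exos1> <exos2> mirror <n300-1> <n300-2>` (pool name made up).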
Now I’m waiting for all the parts to arrive and figuring out which parts I forgot to buy; then I’ll disassemble the old system and rebuild it into a new one. I’ll write about that next.
So I decided not to continue on my previous blog which has been stale since 2013 (2011 really) but instead start a new one and keep the old one as it is for archival purposes. Let us see where this takes me.
Note: this is 99% for my own sake, as I seem to need somewhere to write things down; a place that can serve as my “external” memory sometimes 😉