It looks like it is a no-go: it doesn’t work as expected, or at least not in the way I expected it to.
To test my recently installed Perl module Data::Lua, I wrote a very small script just to see if I could get the data out of the file I wanted.
#!/usr/bin/env perl
use strict;
use warnings;

# enable perl 5.10 features
use v5.10;

# use these modules
use Carp;
use Data::Dumper;
use Data::Lua;

# === TEST
my $vars = Data::Lua->parse_file('data/indata.lua');
print Dumper($vars);

exit;
This should simply take the Lua file indata.lua and parse it into a Perl variable, $vars, as described in the perldoc for Data::Lua. Data::Dumper then just prints the resulting data structure. Easy. It does run without any errors, but it produces a huge amount of output:
$ ./parsetest.pl | wc -l
9773494
This can’t be right, can it? My input file is rather small: roughly 10 kB and a total of 526 lines:
$ wc -l data/indata.lua
526 data/indata.lua
A difference of over 9.7 million lines is a tad too much to write off easily. Examining the output, almost all of it is ‘undef’ lines: roughly 9.7 million of them, with some actual data sprinkled in between. Removing those lines suggests there is a chance the data I want is in there.
$ ./parsetest.pl | grep -v undef | wc -l
526
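Rather than grepping the dumped text, the undef entries could also be pruned from the parsed data structure itself before dumping. A minimal sketch; prune_undef is my own helper, not part of Data::Lua, and the sample data here is hypothetical:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Recursively remove undef values from a nested structure of
# hashes and arrays, returning a cleaned copy.
sub prune_undef {
    my ($node) = @_;
    if (ref $node eq 'HASH') {
        return {
            map  { $_ => prune_undef($node->{$_}) }
            grep { defined $node->{$_} }
            keys %$node
        };
    }
    if (ref $node eq 'ARRAY') {
        return [ map { prune_undef($_) } grep { defined } @$node ];
    }
    return $node;
}

# Hypothetical data standing in for what parse_file might return.
my $vars = {
    name  => 'test',
    empty => undef,
    list  => [ 1, undef, 2 ],
};

my $clean = prune_undef($vars);
```

That keeps the filtering in one place in the code instead of in a shell pipeline, though it obviously doesn’t fix whatever makes Data::Lua produce the undefs in the first place.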
Maybe. There is no way to verify whether it is in a somewhat correct format without putting more time into this. And my input files will be much larger than this test file, so producing output that is 99.9% noise and has to be filtered away before further processing isn’t good.
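For getting a readable first look at a huge structure without wading through megabytes of dump, Data::Dumper can be told to stop expanding below a given nesting depth via $Data::Dumper::Maxdepth. A small sketch with made-up stand-in data:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Data::Dumper;

# Only dump the top two levels; deeper nodes are shown as an
# opaque reference string (e.g. 'HASH(0x...)') instead of
# being expanded in full.
$Data::Dumper::Maxdepth = 2;
$Data::Dumper::Sortkeys = 1;    # stable key order, easier to diff

# Hypothetical data standing in for the parsed Lua file.
my $vars = { a => { b => { c => 1 } } };

my $dump = Dumper($vars);
print $dump;
```

That wouldn’t have removed the undef lines, but it would have made it obvious much sooner what shape the structure actually has.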
So in the end I will have to write my own parser. Probably not a generic one but one that solves this particular problem I’m facing. Maybe I’ll write an update on that when I get somewhere.
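If the input turns out to be mostly simple Lua assignments, a first cut at a hand-rolled parser might look like the sketch below. To be clear, the supported syntax (bare name = string/number/flat table) and the sample data are my assumptions about the file’s shape, since the real indata.lua isn’t shown here:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Parse a very restricted subset of Lua: lines of the form
#   name = "string"
#   name = 123
#   name = { 1, 2, 3 }
# Comments, nested tables and expressions are ignored. This matches
# my guess at the data file's shape, not Lua in general.
sub parse_simple_lua {
    my ($text) = @_;
    my %vars;
    for my $line (split /\n/, $text) {
        next if $line =~ /^\s*(--|$)/;    # skip comments and blanks
        if ($line =~ /^\s*(\w+)\s*=\s*"([^"]*)"/) {
            $vars{$1} = $2;               # quoted string
        }
        elsif ($line =~ /^\s*(\w+)\s*=\s*(-?\d+(?:\.\d+)?)\s*$/) {
            $vars{$1} = $2 + 0;           # number
        }
        elsif ($line =~ /^\s*(\w+)\s*=\s*\{\s*(.*?)\s*\}/) {
            # flat table of comma-separated numbers
            $vars{$1} = [ map { $_ + 0 } split /\s*,\s*/, $2 ];
        }
    }
    return \%vars;
}

# Hypothetical sample input in the shape I expect.
my $sample = <<'LUA';
-- sample data
title  = "test data"
count  = 526
values = { 1, 2, 3 }
LUA

my $parsed = parse_simple_lua($sample);
```

A real version would need at least nested tables and string escapes, but for a single known file format something this size may well be enough.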