bash script question

Hello all,

I have a question regarding a bash script:
how can I read a file and put into an array all the lines that come after each blank line?





Put into the array only: aaa, bbb, ccc.

Use perl instead. :wink:

If it’s possible to call Perl from my script, that’s OK, but how would I do this task in Perl? I don’t know Perl.

is this a homework question?


No. I had to make a script to automate some stuff sent to the printer, and I have to solve this task.


Here ya go:

<quote language="perl">
#!/usr/bin/perl -w
#Version: 0.1.20090721142000Z
#Description: Script hacked together to read every line that appears AFTER
#a blank line.
#Note that lines with even a simple space on them will be treated as
#non-blank.
#Pass in the file as the only argument.

if(scalar(@ARGV) == 0) {die("\n" . 'Specify the file to be read as a parameter.' . "\n");}

my($readFile) = shift(@ARGV);
my($readNext) = 1;
my($fileLine) = '';
my(@desiredArray) = ();

open(INFILE, "<$readFile") || die('Unable to open file: ' . $readFile . ': ' . $!);

while(<INFILE>) {
    $fileLine = $_;
    chomp($fileLine); #Strip the trailing newline so the array holds clean lines.
    if($readNext) {
        if($fileLine =~ /^$/) {
            $readNext = 1;
        }#End if
        else {
            push(@desiredArray, $fileLine);
            $readNext = 0;
        }#End else
    }#End if
    if($fileLine =~ /^$/) {
        $readNext = 1;
    }#End if
}#End while

close(INFILE) || die('Unable to close file: ' . $readFile);

print("\n" . join("\n", @desiredArray) . "\n");
</quote>

Paste the contents (minus the <quote></quote> tags) into a file and make it
executable. Feed it your text file as input. It prints the output for you at
the end, though that is optional and the output format could easily be changed.

Good luck.

ionpetrache wrote:
> if it’s possible to call perl from my script it’s ok. but how I’m gonna
> make this task in perl. I don’t know perl.

Thanks a lot!


Don’t just use it… learn from it. Perl is a great language to know
(cross-platform, very powerful for text hacking, free, open source) and
some of the basic concepts that it excels at (regexes, for example) apply
to many other languages.

Good luck.

ionpetrache wrote:
> Thanks a lot!


No need to. :slight_smile: It can be done with shell+tools.

FYI, please use code tags when posting code.
Second, no need to overrate Perl.

use awk

awk 'BEGIN{ RS=""; FS="\n" }{ print $1 }' file

The above prints just what you want (setting FS="\n" makes $1 the whole first line of each block, not just its first word). Depending on what you want to do with the lines, there is often no need to store them in an array at all.
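If the lines really do need to end up in a bash array, the awk output can be captured; a minimal sketch, assuming bash 4’s mapfile and a sample file named file.txt (both names are arbitrary, not from the thread):

```shell
# Sample input: three blank-line-separated blocks (a stand-in for the OP's file).
printf 'aaa\n\nbbb\n\nccc\n' > file.txt

# Capture the first line of each blank-line-separated block into a bash array.
mapfile -t arr < <(awk 'BEGIN{RS=""; FS="\n"} {print $1}' file.txt)

printf '%s\n' "${arr[@]}"    # one array element per line
```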

Sure, but why be a masochist? :stuck_out_tongue: Besides, when you mention tools, you are thinking of invoking external processes, with the attendant inefficiency. It’s far better to do it using bash built-ins. (Yes, I know how.)

You’re using awk, which is another programming language. The shell is there only to invoke awk. How is this different in principle from invoking perl? As opposed to doing it all in bash, using read and tests.
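For what it’s worth, the all-bash version with read and tests might look like this; a sketch, assuming one array element per line that follows a blank line (the sample file and variable names are invented for illustration):

```shell
#!/usr/bin/env bash
# Collect every line that directly follows a blank line into an array,
# using only bash built-ins (read, [[ ]], array assignment).

printf 'xxx\n\naaa\n\nbbb\n\nccc\n' > sample.txt   # stand-in input file

arr=()
prev_blank=0
while IFS= read -r line; do
    if [[ -n $line && $prev_blank -eq 1 ]]; then
        arr+=("$line")                 # this line follows a blank one
    fi
    if [[ -z $line ]]; then prev_blank=1; else prev_blank=0; fi
done < sample.txt

printf '%s\n' "${arr[@]}"              # aaa, bbb, ccc — one per line
```

Note that, like the awk /^$/ test earlier in the thread, `[[ -z $line ]]` treats a line containing only spaces as non-blank.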

Using bash built-ins for parsing files is less efficient than using awk.

Yes, but you are simply proposing a different programming language, awk, instead of perl. Nothing wrong with that, but equally different from what the OP asked for, to be done in bash. Which was too restrictive a stipulation anyway.

And why would bash built-ins be in principle less efficient than awk? No external processes are involved.

First, “Use Perl instead” is not considered a helpful answer. Anyone can post “Use Python instead” or “Use Ruby instead”, but that doesn’t really help the OP with his current situation, i.e. bash. At the very least, if the poster wants to suggest “use language XXX instead”, he/she should show examples of how it might help the OP accomplish his task.

Second, awk has been associated with the shell since prehistoric times, way before Perl. In this context, as well as the OP’s shell situation, what I said about “no need to use Perl” is not wrong.

Remember the OP needs to send stuff to a printer? Most probably he will have to use things like lpr or some other command-line tool for his particular printer. Since that’s likely, using bash+tools might be more “the way to go”. (Of course, I am not saying you can’t use Perl, but then you would still need to make system calls to printer commands, or use some Printer module from CPAN or from the printer manufacturer, etc.)

The main way of parsing files, whether big or small, requires a while-read loop. (If there are other built-in ways, please let me know, because AFAIK this is the most common one.) In bash, parsing files with a while-read loop is slower because the loop is not precompiled into the bash executable itself; you have to write it out yourself and bash interprets it. With awk, the equivalent record-reading loop is already compiled into the awk binary. You are a veteran, so I think you know what I mean.

Are you sure that it will make a significant difference in execution time, having to parse the while loop? Are you sure that bash doesn’t compile to an intermediate form before execution? Bear in mind there are also counteracting factors in using awk or perl. You have to load a program which is probably not in memory (at least the first time it is used), whereas the blocks for the bash interpreter are already in memory, thanks to Linux’s caching, so it can just share the executable code of an existing process.

To be honest I would not worry about these things and just write it in the language that is most convenient to code in (including awk if I feel like it) or most clear. If it’s a one-off script that is used to process data that came in that form, it’s not a big deal. I’m just stressing this point because people often just assume some “facts” without taking into account all the factors or doing some measurements, when warranted.

No, I agree it wasn’t helpful, but I wasn’t really serious when I posted that. I was about to go to sleep, and the OP put some extra constraints on the solution (put the lines in an array, why?), so I didn’t feel like explaining bash arrays or asking why it had to be in an array. I figured someone else would come along anyway. :slight_smile:

If you are not convinced, try creating a file several MB in size, time a pass over it with a bash while-read loop versus one with only awk, and see for yourself.
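A rough way to run that comparison; a sketch, with the record count, file name, and loop bodies chosen arbitrarily for illustration:

```shell
# Build a test file (a few MB) of blank-line-separated one-line records.
for ((i = 1; i <= 200000; i++)); do
    printf 'line%d\n\n' "$i"
done > big.txt

# Pure-bash pass over the file with the built-in read loop (does no work per line).
time bash -c 'while IFS= read -r l; do :; done < big.txt'

# Equivalent pass in awk, which reads whole blank-line-separated records.
time awk 'BEGIN{RS=""} {n++} END{print n}' big.txt
```

On most systems the awk pass finishes noticeably faster, which is the point being argued; measure on your own machine rather than taking either side’s word for it.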