Why are scripting languages (e.g. Perl, Python, and Ruby) not suitable as shell languages?

Tags: Python, Ruby, Perl, Bash, Shell

Python Problem Overview


What are the differences between shell languages like Bash (bash), Z shell (zsh), Fish (fish) and the scripting languages above that makes them more suitable for the shell?

When using the command line, the shell languages seem much easier. Using bash, for example, feels much smoother to me than using the shell profile in IPython, despite reports to the contrary. I think most will agree that a large portion of medium to large scale programming is easier in Python than in Bash. I use Python as the comparison because it is the language I am most familiar with, but the same goes for Perl and Ruby.

I have tried to articulate the reason but have been unable to, aside from suspecting that the different treatment of strings in the two kinds of language has something to do with it.

The reason for this question is that I am hoping to develop a language usable in both settings. If you know of such a language, please post it as well.

As S.Lott explains, the question needs some clarification. I am asking about the features of the shell language versus that of scripting languages. So the comparison is not about the characteristics of various interactive (REPL) environments such as history and command line substitution. An alternative expression for the question would be:

Can a programming language that is suitable for design of complex systems be at the same time able to express useful one-liners that can access the file system or control jobs? Can a programming language usefully scale up as well as scale down?

Python Solutions


Solution 1 - Python

There are a couple of differences that I can think of; just thoughtstreaming here, in no particular order:

  1. Python & Co. are designed to be good at scripting. Bash & Co. are designed to be only good at scripting, with absolutely no compromise. IOW: Python is designed to be good both at scripting and non-scripting, Bash cares only about scripting.

  2. Bash & Co. are untyped, Python & Co. are strongly typed, which means that the number 123, the string 123 and the file 123 are quite different. They are, however, not statically typed, which means they need to have different literals for those, in order to keep them apart.
    Example:

                     | Ruby             | Bash    
     -----------------------------------------
     number          | 123              | 123
     string          | '123'            | 123
     regexp          | /123/            | 123
     file            | File.open('123') | 123
     file descriptor | IO.open('123')   | 123
     URI             | URI.parse('123') | 123
     command         | `123`            | 123
    
  3. Python & Co. are designed to scale up to 10000, 100000, maybe even 1000000 line programs, Bash & Co. are designed to scale down to 10 character programs.

  4. In Bash & Co., files, directories, file descriptors, processes are all first-class objects, in Python, only Python objects are first-class, if you want to manipulate files, directories etc., you have to wrap them in a Python object first.

  5. Shell programming is basically dataflow programming. Nobody realizes that, not even the people who write shells, but it turns out that shells are quite good at that, and general-purpose languages not so much. In the general-purpose programming world, dataflow seems to be mostly viewed as a concurrency model, not so much as a programming paradigm.
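To make point 5 more concrete, here is a minimal Python sketch of dataflow-style composition, roughly analogous to the pipeline cat access.log | grep ERROR | head -n 5 (access.log is just an assumed example file, not anything from the answer):

    import itertools
    import re

    # Each stage consumes a stream and yields a stream, like a process in a pipe.
    def cat(path):
        with open(path) as f:
            yield from f

    def grep(pattern, lines):
        rx = re.compile(pattern)
        return (line for line in lines if rx.search(line))

    def head(n, lines):
        return itertools.islice(lines, n)

    # Roughly: cat access.log | grep ERROR | head -n 5
    for line in head(5, grep("ERROR", cat("access.log"))):
        print(line, end="")

The composition works, but the data flows "inside out" compared to a pipeline read left to right; shells give you that composition as the default way of writing anything.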

I have the feeling that trying to address these points by bolting features or DSLs onto a general-purpose programming language doesn't work. At least, I have yet to see a convincing implementation of it. There is RuSH (Ruby shell), which tries to implement a shell in Ruby, there is rush, which is an internal DSL for shell programming in Ruby, there is Hotwire, which is a Python shell, but IMO none of those come even close to competing with Bash, Zsh, fish and friends.

Actually, IMHO, the best current shell is Microsoft PowerShell, which is very surprising considering that for several decades now, Microsoft has continually had the worst shells evar. I mean, COMMAND.COM? Really? (Unfortunately, they still have a crappy terminal. It's still the "command prompt" that has been around since, what? Windows 3.0?)

PowerShell was basically created by ignoring everything Microsoft has ever done (COMMAND.COM, CMD.EXE, VBScript, JScript) and instead starting from the Unix shell, then removing all backwards-compatibility cruft (like backticks for command substitution) and massaging it a bit to make it more Windows-friendly (like using the now unused backtick as an escape character instead of the backslash, which is the path component separator in Windows). After that is when the magic happens.

They address problem 1 and 3 from above, by basically making the opposite choice compared to Python. Python cares about large programs first, scripting second. Bash cares only about scripting. PowerShell cares about scripting first, large programs second. A defining moment for me was watching a video of an interview with Jeffrey Snover (PowerShell's lead designer), when the interviewer asked him how big of a program one could write with PowerShell and Snover answered without missing a beat: "80 characters." At that moment I realized that this is finally a guy at Microsoft who "gets" shell programming (probably related to the fact that PowerShell was neither developed by Microsoft's programming language group (i.e. lambda-calculus math nerds) nor the OS group (kernel nerds) but rather the server group (i.e. sysadmins who actually use shells)), and that I should probably take a serious look at PowerShell.

Number 2 is solved by having arguments be statically typed. So, you can write just 123 and PowerShell knows whether it is a string or a number or a file, because the cmdlet (which is what shell commands are called in PowerShell) declares the types of its arguments to the shell. This has pretty deep ramifications: unlike Unix, where each command is responsible for parsing its own arguments (the shell basically passes the arguments as an array of strings), argument parsing in PowerShell is done by the shell. The cmdlets specify all their options, flags and arguments, as well as their types, names and documentation(!), to the shell, which can then perform argument parsing, tab completion, IntelliSense, inline documentation popups etc. in one centralized place. (This is not revolutionary, and the PowerShell designers acknowledge shells like the DIGITAL Command Language (DCL) and the IBM OS/400 Command Language (CL) as prior art. For anyone who has ever used an AS/400, this should sound familiar. In OS/400, you can write a shell command and if you don't know the syntax of certain arguments, you can simply leave them out and hit F4, which will bring up a menu (similar to an HTML form) with labelled fields, dropdowns, help texts etc. This is only possible because the OS knows about all the possible arguments and their types.) In the Unix shell, this information is often duplicated three times: in the argument parsing code in the command itself, in the bash-completion script for tab-completion and in the manpage.
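As a rough analogy only (argparse is a library, not a shell, and this is my own sketch rather than anything from PowerShell): declaring argument names, types and help text in one place is what lets a single piece of machinery do conversion, validation and documentation.

    import argparse

    # The parser plays the role of the "shell side": it knows the names, types
    # and documentation of every argument, so it can convert and validate them
    # and generate --help output from the same declaration.
    parser = argparse.ArgumentParser(description="typed, self-describing arguments")
    parser.add_argument("count", type=int, help="a number, not the string '123'")
    parser.add_argument("infile", type=argparse.FileType("r"),
                        help="an open file object, not a filename string ('-' means stdin)")
    parser.add_argument("--verbose", action="store_true", help="a boolean flag")

    args = parser.parse_args(["123", "-", "--verbose"])
    print(type(args.count), type(args.infile), args.verbose)

PowerShell centralizes the same idea across all cmdlets, which is why tab completion and inline documentation come for free.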

Number 4 is solved by the fact that PowerShell operates on strongly typed objects, which includes stuff like files, processes, folders and so on.

Number 5 is particularly interesting, because PowerShell is the only shell I know of, where the people who wrote it were actually aware of the fact that shells are essentially dataflow engines and deliberately implemented it as a dataflow engine.

Another nice thing about PowerShell is the naming conventions: all cmdlets are named Action-Object, and moreover there are standardized names for specific actions and specific objects. (Again, this should sound familiar to OS/400 users.) For example, everything related to retrieving some information is called Get-Foo, and everything operating on (sub-)objects is called Bar-ChildItem. So, the equivalent of ls is Get-ChildItem (although PowerShell also provides built-in aliases ls and dir; in fact, whenever it makes sense, they provide both Unix and CMD.EXE aliases as well as abbreviations, gci in this case).

But the killer feature IMO is the strongly typed object pipelines. While PowerShell is derived from the Unix shell, there is one very important distinction: in Unix, all communication (both via pipes and redirections as well as via command arguments) is done with untyped, unstructured strings. In PowerShell, it's all strongly typed, structured objects. This is so incredibly powerful that I seriously wonder why no one else has thought of it. (Well, they have, but they never became popular.) In my shell scripts, I estimate that up to one third of the commands are only there to act as adapters between two other commands that don't agree on a common textual format. Many of those adapters go away in PowerShell, because the cmdlets exchange structured objects instead of unstructured text. And if you look inside the commands, they pretty much consist of three stages: parse the textual input into an internal object representation, manipulate the objects, and convert them back into text. Again, the first and third stages basically go away, because the data already comes in as objects.
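A small illustration of the "adapter" point, sketched in Python under the assumption that you are on a Unix-like system with ls available; the text version has to re-parse structure that the object version never loses:

    import os
    import subprocess

    # Text-pipeline style: parse `ls -l` output to find files over 1 MB.
    # Fragile: depends on column positions, locale settings and names without spaces.
    out = subprocess.run(["ls", "-l"], capture_output=True, text=True).stdout
    big_from_text = []
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 9 and fields[4].isdigit() and int(fields[4]) > 1_000_000:
            big_from_text.append(fields[8])

    # Object style: the data never becomes text, so there is nothing to re-parse.
    big_from_objects = [entry.name for entry in os.scandir(".")
                        if entry.is_file() and entry.stat().st_size > 1_000_000]

    print(big_from_text, big_from_objects)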

However, the designers have taken great care to preserve the dynamicity and flexibility of shell scripting through what they call an Adaptive Type System.

Anyway, I don't want to turn this into a PowerShell commercial. There are plenty of things that are not so great about PowerShell, although most of those have to do either with Windows or with the specific implementation, and not so much with the concepts. (E.g. the fact that it is implemented in .NET means that the very first time you start up the shell can take up to several seconds if the .NET framework is not already in the filesystem cache due to some other application that needs it. Considering that you often use the shell for well under a second, that is completely unacceptable.)

The most important point I want to make is that if you want to look at existing work in scripting languages and shells, you shouldn't stop at Unix and the Ruby/Python/Perl/PHP family. For example, Tcl was already mentioned. Rexx would be another scripting language. Emacs Lisp would be yet another. And in the shell realm there are some of the already mentioned mainframe/midrange shells such as the OS/400 command line and DCL. Also, Plan9's rc.

Solution 2 - Python

It's cultural. The Bourne shell is more than 30 years old; it was one of the first scripting languages, and it was the first good solution to the central need of Unix admins. (I.e., a 'glue' to tie all the other utilities together and to do typical Unix tasks without having to compile a damn C program every time.)

By modern standards, its syntax is atrocious and its weird rules and punctuation-as-statement style (useful in the 1970s when every byte counted) make it hard for non-admins to penetrate it. But it did the job. The flaws and shortcomings were addressed by evolutionary improvements in its descendants (ksh, bash, zsh) without having to reconceive the ideas behind it. Admins stuck to the core syntax because, weird as it was, nothing else handled the simple stuff better without getting in the way.

For complex stuff, Perl came along and morphed into a sort of half-admin, half-application language. But the more complex something gets, the more it's seen as an application rather than admin work, so the business people tend to look for "programmers" rather than "admins" to do it, despite the fact that the right kind of geek tends to be both. So that's where the focus went, and the evolutionary improvements to the application capabilities of Perl resulted in...well, Python and Ruby. (That's an oversimplification, but Perl was one of several inspirations for both languages.)

Result? Specialization. Admins tend to think modern interpreted languages are too heavyweight for the things they're paid to do every day. And overall, they're right. They don't need objects. They don't care about data structures. They need commands. They need glue. Nothing else tries to do commands better than the Bourne shell concept (except maybe Tcl, which was already mentioned here); and Bourne is good enough.

Programmers -- who nowadays are having to learn about devops more and more -- look at the limitations of the Bourne shell and wonder how the hell anyone could put up with it. But the tools they know, while they certainly lean towards the Unixish style of I/O and file operations, aren't better for the purpose. I've written things like backup scripts and file renaming one-offs in Ruby, because I know it better than I know bash, but any dedicated admin could do the same thing in bash -- probably in fewer lines and with less overhead, but either way, it'd work just as well.

It's a common thing to ask "Why does everyone use Y when Z is better?" -- but evolution in technology, like evolution in everything else, tends to stop at good enough. The 'better' solution doesn't win unless the difference is viewed as a deal-breaking frustration. Bourne-type scripting might be frustrating to you, but for the people who use it all the time and for the jobs it was meant for, it's always done the job.

Solution 3 - Python

A shell language has to be easy to use. You want to type one-time throw away commands, not small programs. I.e. you want to type

ls -laR /usr

not

shell.ls("/usr", long=True, all=True, recursive=True)

This (also) means shell languages don't really care if an argument is an option, a string, a number or something else.
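For what it's worth, even the "cheating" version in real Python, where you simply delegate to ls via subprocess, is noticeably more typing than the shell one-liner:

    import subprocess

    # The throwaway command `ls -laR /usr`, spelled out in plain Python.
    subprocess.run(["ls", "-laR", "/usr"])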

Also, programming constructs in shells are an add-on, and not even always built-in. For example, consider the combination of if and [ in (ba)sh, seq for generating sequences, and so on.
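The point about [ is easy to demonstrate: on most Unix-like systems [ is an ordinary executable (usually /usr/bin/[ from coreutils), not shell syntax, which a hedged Python sketch can show by running it directly:

    import subprocess

    # `[` is found on PATH like any other program.
    # Its exit status is what `if` actually tests; 0 means "true".
    result = subprocess.run(["[", "1", "-lt", "2", "]"])
    print(result.returncode)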

Finally, shells cater to needs that come up less often, or differently, in ordinary programming: pipes, file redirection, process/job control, and so on.

Solution 4 - Python

> If you know of such a language, please post it as well.

Tcl is one such language. Mainly because it is designed to primarily be a shell interpreter for CAD programs. Here's one hardcore Python programmer's* experience of realising why Tcl was designed the way it was: I can't believe I'm praising Tcl

For me, I've written, and have been using and improving, a Tcl shell (written in Tcl, of course) as my main Linux login shell on my homebrewed router: Pure Tcl readline

Some of the reasons I like Tcl in general has everything to do with the similarity of its syntax to traditional shells:

  1. At its most basic, Tcl syntax is command argument argument.... There's nothing else. This is the same as Bash, C shell or even DOS shell.

  2. A bareword is considered a string. This is again similar to traditional shells allowing you to write: open myfile.txt w+ instead of open "myfile.txt" "w+".

  3. Because of the foundations of 1 and 2, Tcl ends up with very little extraneous syntax. You write code with less punctuation: puts Hello instead of printf("Hello");. When writing programs you don't feel the hurt so much, because you spend a lot of time thinking about what to write. But when you use a shell to copy a file, you don't think, you just type, and having to type ( and " and , and ) and ; again and again gets annoying very quickly.

*Note: not me; I'm a hardcore Tcl programmer

Solution 5 - Python

Who says they aren't? Take a look at Zoidberg. REPLs (Read Eval Print Loops) make crappy shells because every command must be syntactically correct, and running a program goes from being:

foo arg1 arg2 arg3

to

system "foo", "arg1", "arg2", "arg3"

And don't even get me started on trying to do redirection.
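To illustrate the redirection complaint, here is a hedged sketch of what something like foo arg1 2>&1 | bar > out.txt turns into with Python's subprocess module (foo and bar are placeholder commands, not real programs):

    import subprocess

    # foo's stdout and stderr are piped into bar; bar's stdout goes to out.txt.
    with open("out.txt", "w") as out:
        foo = subprocess.Popen(["foo", "arg1"],
                               stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT)
        subprocess.run(["bar"], stdin=foo.stdout, stdout=out)
        foo.stdout.close()
        foo.wait()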

So, you need a custom shell (rather than a REPL) that understands commands and redirection and the language you want to use to bind commands together. I think zoid (the Zoidberg shell) does a pretty good job of it.

Solution 6 - Python

These answers inspired me to take over maintenance of the Perl-based shell Zoidberg. After some fixes, it is usable again!

Check out the user's guide or install Bundle::Zoidberg using your favorite CPAN client.

Solution 7 - Python

No.


No, scripting languages are probably not suitable for shells.


The problem is the dichotomy between macro languages and, well, everything else.

The shell is in a category with other legacy macro languages such as nroff and m4. In these processors, everything is a string and the processor defines a mapping from input strings to output strings.

Certain boundaries are crossed in both directions in all languages, but it's usually quite clear whether a system's category is macro or, hmm, I'm not aware of an official term ... I will say "a real language".

So sure, you could type in all your commands in a language like Ruby, and it might even be a second-best choice to a real shell, but it will never be a macro language. There is too much syntax to respect. It takes too many quotes.

But the macro language has its own issues when you start programming in it, because too many compromises had to be made to get rid of all that syntax. Strings are typed in with no quotes. Various amounts of magic need to be re-introduced to inject the missing syntax. I did a code-golf in nroff once, just to be different. It was pretty strange. The source code to big implementations in macro languages is scary.

Solution 8 - Python

Since both are formally programming languages, what you can do in one, you can do in the other. Actually it is a design emphasis issue. Shell languages are designed for interactive use, while scripting languages aren't.

The basic difference in the design is the storage of data between commands and the scope of variables. In Bash, etc. you have to jump through hoops to store a value (for example, commands like set a='something'), while in languages like Python you simply use an assignment statement (a = 'something'). When using the values in a shell language you have to tell the language that you want the value of the variable, while in scripting languages you have to tell the language when you want the immediate value of the string. This has effects when used interactively.

In a scripting language where ls was defined as a command

a = some_value

ls a*b  

(What does a mean here? Does it mean some_value * (whatever b is), or do you mean any filename matching 'a', then anything, then 'b'? In a scripting language the default is whatever is stored in memory for a.)

ls 'a*b'  now means what the Unix ls a*b means.

In a Bash-like language

set a=some_value

ls a*b   means what the Unix ls a*b means.

ls $a*b  uses an explicit recall of the value of a.

Scripting languages make it easy to store and recall values and hard to have a transient scope on a value. Shell languages make it possible to store and recall values, but have a trivially transient scope per command.
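As a concrete Python-side illustration of the same trade-off (my own sketch, not from the answer): a bare name always means the stored value, and filename patterns have to be requested explicitly.

    import glob

    a = 3
    b = 4
    print(a * b)             # 12: a*b is multiplication, never a filename pattern

    # To get the shell's default behaviour you must ask for it explicitly:
    print(glob.glob("a*b"))  # filenames in the current directory matching a*b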

Solution 9 - Python

I think it's a question of parsing. Shell languages assume by default that what you type is a command to run; Python/Ruby need you to write system("command") or the like. It's not that they're unsuitable, just that nobody has really done it yet, at least I think so. Rush (http://rush.heroku.com/) is an example attempt in Ruby; Python has IPython.

Solution 10 - Python

Scalability and extensibility? Common Lisp (you can even run CLISP, and possibly other implementations, as a login shell in Unix environments).

Solution 11 - Python

For the Windows users, I haven't yet felt the need for PowerShell, because I still use 4NT (now Take Command Console) from JP Software. It is a very good shell with lots of programming abilities. So it combines the best of both worlds.

When you take a look at, for example, IRB (the interactive Ruby interpreter), it should be quite possible to extend it with one-liners for daily scripting, mass file management and other spur-of-the-moment tasks.

Solution 12 - Python

You beg the question. Not everyone agrees that shell languages are superior. For one, _why doesn't:

> Not long ago a friend asked me how to recursively search his PHP scripts for a string. He had a lot of big binary files and templates in those directories that could have really bogged down a plain grep. I couldn't think of a way to use grep to make this happen, so I figured using find and grep together would be my best bet.
>
>     find . -name "*.php" -exec grep 'search_string' {} \; -print
>
> Here's the above file search reworked in Ruby:
>
>     Dir['**/*.php'].each do |path|
>       File.open( path ) do |f|
>         f.grep( /search_string/ ) do |line|
>           puts path, ':', line
>         end
>       end
>     end
>
> Your first reaction may be, "Well, that's quite a bit wordier than the original." And I just have to shrug and let it be. "It's a lot easier to extend," I say. And it works across platforms.
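For comparison, a similar search in Python (a hedged sketch of my own; search_string is the placeholder from the quote) comes out about the same length as the Ruby version:

    from pathlib import Path

    # Recursively search .php files for a fixed string, printing path:line.
    for path in Path(".").rglob("*.php"):
        with open(path, errors="ignore") as f:
            for line in f:
                if "search_string" in line:
                    print(f"{path}:{line}", end="")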

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type         | Original Author     | Original Content on Stackoverflow
Question             | Muhammad Alkarouri  | View Question on Stackoverflow
Solution 1 - Python  | Jörg W Mittag       | View Answer on Stackoverflow
Solution 2 - Python  | SFEley              | View Answer on Stackoverflow
Solution 3 - Python  | Ivo van der Wijk    | View Answer on Stackoverflow
Solution 4 - Python  | slebetman           | View Answer on Stackoverflow
Solution 5 - Python  | Chas. Owens         | View Answer on Stackoverflow
Solution 6 - Python  | Joel Berger         | View Answer on Stackoverflow
Solution 7 - Python  | DigitalRoss         | View Answer on Stackoverflow
Solution 8 - Python  | rob                 | View Answer on Stackoverflow
Solution 9 - Python  | rogerdpack          | View Answer on Stackoverflow
Solution 10 - Python | user559914          | View Answer on Stackoverflow
Solution 11 - Python | peter               | View Answer on Stackoverflow
Solution 12 - Python | Colonel Panic       | View Answer on Stackoverflow