Programming Loops vs Recursion – Computerphile

[There’s] a lot of interesting stuff, both from the
point of view of the content but also the historical context behind, y’ know,
“When were `for’ loops invented?” Well, that’s what Algol called them, but prior
to that FORTRAN called them DO loops. And prior to that they existed in assembler.
So, first of all, what’s the history and what does it get you when you can do
loops, but when do you run out of steam, even with loops, and you have to use this
shock! horror! Pure Mathematicians thing – that computer scientists have to learn
about – recursion?! It was a real culture shock, it really was, in the roughly
1940s, 1950s to suddenly find out that what the theoreticians had been
drivelling on about for years – about recursive functions in mathematics – actually was of
massive, massive importance for computer science. Back in the ’40s and early
’50s it was all Assembler – or a slightly dressed-up thing called a macro
assembler, where you can have little routines full of, y’ know, packaged
assembler instructions which could be called up, as and when needed. So, that
sort of served people for quite some time. But probably one of the first
high-level languages to introduce loops was good old FORTRAN [shows textbook]. Even though
that was published in ’65, FORTRAN itself goes back, I think, almost ten years before
that. It was invented by John Backus and a large team of people at IBM in the
1950s. Many of you will know it. It’s an excellent language for engineering and
scientific calculations. It is low level. I mean, when you look at the nature of a
FORTRAN loop it’s almost like doing it in assembler – but not quite. They didn’t
call them for loops – they called them DO loops. What I’m saying here is – you package all this
up – where you’re saying repeat the following sequence of
instructions, which I’ve done with my wavy lines here. Keep doing them until
you hit the statement with a numeric label on it of 180. Then loop back from
the statement labelled 180, back up to here, to increment the loop counter, which
you’re all familiar with in languages like C. It wasn’t done, as it would
be in C, by saying: “Here’s my block of stuff to be repeated; it’s inside these
curly braces”. Here you can see it’s a lot more like assembler, a lot more low-level.
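
As a rough illustration (a sketch of mine, not code from the video), the counted loop being described, with the body run for I = 1 up to and including 10 and the statement labelled 180 marking the end of the range, corresponds to a C for loop of this shape:

    #include <stdio.h>

    int main(void)
    {
        double a[10];

        /* Roughly: DO 180 I = 1, 10 ... 180 CONTINUE
           i.e. repeat the body for I = 1, 2, ..., 10, then fall through. */
        for (int i = 1; i <= 10; i++) {
            a[i - 1] = (double)i * i;   /* stands in for the "wavy lines" body   */
        }                               /* the closing brace plays the role of
                                           the statement labelled 180            */

        printf("a[9] = %g\n", a[9]);
        return 0;
    }
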
I mean there’s nothing magic about “180”; it could be “72”; it depended on your labelling
system. Implicitly here, in a simple thing like this, you’d start off [with the counter]
at one and every time it returned back here it would set [the counter] to be 2, 3, 4 and so on, up to
and including 10. It’s comforting for those who were coming from assembler
into a higher-level language to see something that was only slightly higher
level, in sophistication, than assembler was. How did loops become more “powerful”,
if you like? Well, again, even in assembler and even in
FORTRAN, there’s no reason why you couldn’t have a loop within a loop. So I
might have, outside of all this code, yet another layer of DO. What shall we say:
“DO 200 J=1, 20”. So, there might be some more statements between 180 and
200, who knows, but again, you see, a numeric label. And you can see what’s
happening is that for every setting of J, which will start at 1 and go up to 20,
for every single one of those J settings the inner loop will be running through
the complete spectrum of settings of I going from 1 to 10. So you will have 200
locations [that] are being affected here. Basically going through the rows and
columns of a matrix. All sorts of calculations in physics, chemistry and
particularly engineering just rely on two-dimensional arrays full of numbers
– either integers or scientific numbers with a decimal point – and so on. Even hard-core assembly programmers had to admit that if you were
doing heavy scientific programming it was nice to be a little bit more abstract
and to have this sort of facility available to you. Now you might say: “Well,
what came along to spoil the party, then?” or “How did people realize that this was
wonderful but not quite enough?” The compiler of course has got to be
tolerant and has got to be capable of compiling nested DO loops correctly but
how deep would it let you nest them? Well, I’m guessing, I would suspect that
the early FORTRAN compilers probably wouldn’t allow you to go more than about
10 deep, maximum. And I think you and I, Sean, have just been looking up what the
current limits are in C. I seem to remember the earliest `gcc’ was something like 32.
But I think we looked up this … some C++ nowadays allows you to do nested loops
256 deep! And, of course, there are multi-dimensional problems that might
actually need that, because it doesn’t take much knowledge of higher maths to
realize if you’ve got a loop within a loop the outer loop goes around n times; the
inner loop is going around n times, you are then coping with an n-squared
problem. If you put the third loop inside the other two you’re coping with a cubic,
three-dimensional, problem. So what we’re saying is all these multi-dimensional
polynomial-going-on-exponential problems, that come up quite naturally, you can
cope with them in nested for-loops so long as they don’t need to be more than
power-32 or power-256 or whatever it is. And you think, well, that should be enough for
anybody! There’s these multi-dimensional problems you can just do them by nesting
`for’ loops and surely [a depth of] 256 is enough for anybody? What kind of problem
wouldn’t it be enough for? Well, a lot of theoretical computer scientists of my
acquaintance amused me greatly when – those of them that will own up to this – back in
the ’60s, people started going to lectures from mathematicians, theoreticians, people concerned with “Gödel computability” and so on. And
of course those sort of people were very familiar indeed, at a mathematical level,
with Ackermann’s function. Now, as you know – you and I – we’ve done that one:
>>Sean: Was that “The most difficult … ?”
>>DFB: “The most difficult number to compute, question mark”
“We set this going four weeks ago
nearly now the first few are vanished …”
[end of video insert]
So what made it so difficult? Well, you write down Ackermann’s function and it very clearly ends up with routines
calling themselves recursively in a very, very complicated way. Now I think your
average sort of engineer would be happy to say that there’s this thing called `factorial’
which is 5 times 4 times 3 times 2 times 1, or whatever. And you could do that in a
loop as well as doing this fancy recursion thing, but a lot of
theoreticians admitted to me they saw Ackermann’s function and said: “I could try that
out in FORTRAN !”. Now what they perhaps didn’t realize – but it became famous by 1960 – is: FORTRAN is wonderful, but original
FORTRAN did not do user-level recursion. You could write a thing called ACK.
You could actually get it to call itself in FORTRAN. But you might have been
expecting that every time it called itself it would lay out a data area for
each recursive call – they’re called “stack frames”, we know that now. You get lots of
stack frames, one on top of another and as you come back through the recursion
they’re deleted and thrown away and you climb back into your main program.
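
As an aside (a sketch of mine, not code from the video): in a language that does give each call its own stack frame, every recursive call gets a fresh copy of its local variables, which you can see by printing a local variable's address on the way down:

    #include <stdio.h>

    /* Each call gets its own stack frame, so each call's `local` is a
       separate copy living at a different address; the frames are popped
       again as the recursion unwinds. */
    static void descend(int depth)
    {
        int local = depth;                         /* lives only in this call's frame */
        printf("depth %d, &local = %p\n", depth, (void *)&local);
        if (depth < 3)
            descend(depth + 1);                    /* lays out a fresh frame          */
    }                                              /* returning discards this frame   */

    int main(void)
    {
        descend(1);
        return 0;
    }
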
FORTRAN doesn’t do that. It sets aside one stack frame. If you keep calling
yourself recursively it just tramples, in its muddy gumboots, over all your
data area and you end up with total garbage. It no more gives you values of the
Ackermann function than fly to the moon! And people said: “I then realized the
importance of having user-level recursion, in programming languages, to
cope with those really hard problems that fell outside nested for-loops”.
Algol was famous in that its routines could call themselves recursively and
could get the right answer. And, for limited low-order values of Ackermann’s
function – very slow, very slow indeed – it would come out with the right answer.
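
For reference, here is a sketch (mine, not from the video) of the two-argument Ackermann–Péter function that this kind of user-level recursion lets you write directly: correct for small arguments, but already painfully slow as they grow.

    #include <stdio.h>

    /* The Ackermann–Péter function: genuinely ("generally") recursive,
       so it cannot be written as a fixed nest of counted loops.
       Keep m small: the values and the call depth explode. */
    static unsigned long ack(unsigned long m, unsigned long n)
    {
        if (m == 0)
            return n + 1;
        if (n == 0)
            return ack(m - 1, 1);
        return ack(m - 1, ack(m, n - 1));
    }

    int main(void)
    {
        printf("ack(2, 3) = %lu\n", ack(2, 3));   /* 9  */
        printf("ack(3, 3) = %lu\n", ack(3, 3));   /* 61 */
        return 0;
    }
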
>>Sean: Is there any need to think of an example of a problem, or program, because Ackermann
feels to me like it’s the test-bed. You know, when you’re testing out a
motor-car you might take it on the track and see how fast it can go.
But in day-to-day life that car might only get half that speed. What’s the
real-world kind of equivalent? Is there such a thing?
>>DFB: Real world equivalent?
>>Sean: … of something that might need to use recursion … ?
>>DFB: … of that complexity? Not many things is the answer to that. I mean, yes, it’s
true that Ackermann, as you know, was David Hilbert’s research student. And the
challenge was on to find something that was so innately recursive that – remember
it was “generally recursive”, they called it – as opposed to “primitive recursive”. And
simple things like factorial, and indeed Fibonacci, are primitive recursive.
So I think you’re right that you really are just making the point that
eventually there are things that will kill you. I think the question in the
middle is: “Is there something out there – pieces of program you need to write –
where non-trivial recursion, in a sense, is needed but not quite to the
horrendous degree that Ackermann did?” And the answer is: “Yes, compilers is where it hit
people”. Because although early FORTRAN did not provide user-level recursion, for
you and me, nevertheless John Backus and his team implemented it in the middle
1950s, I think, at IBM. And Backus wrote articles afterwards
basically saying: “We didn’t know enough about recursion and even though we
didn’t provide it for the users of our language, boy did we need it in the
compiler! And we ended up inventing it in all but name”
The syntactic structures of what is legal, in a language, even at the level
just of arithmetic statements can be quite recursive. Because you end up with
brackets within brackets within brackets all with a multiplier outside. And which
order do you do the brackets in? And, you know, how many levels of bracket
nesting can you have? And if you don’t get things sorted out correctly then
you’ll get the wrong answer. But once again the problem could be that your users
would come up to you and present you with a problem just designed to test out
your compiler, and whether it was robust enough to be able to cope with a high
degree of nesting even just in arithmetic statements. So by 1960 in
Algol, yeah, there were enough users, at the user level, who could see that a
modicum of recursion, perhaps more complicated than factorial but not quite
up to full Ackermann capabilities would be very nice indeed to have within your language.
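
To make the brackets-within-brackets point concrete, here is a minimal sketch (my own illustration, not code from the video) of the recursive-descent style of expression handling a compiler needs: the routine for a bracketed sub-expression simply calls the top-level expression routine again.

    #include <stdio.h>

    /* Minimal recursive-descent evaluator for expressions such as
       "2*(3+(4+5))" over single-digit numbers:
           expr   -> term ('+' term)*
           term   -> factor ('*' factor)*
           factor -> digit | '(' expr ')'
       Every nested bracket is handled by factor() calling expr() again,
       so the nesting depth is limited only by the call stack, not by how
       many loops were written into the compiler. */

    static const char *p;            /* cursor into the expression text */

    static int expr(void);

    static int factor(void)
    {
        if (*p == '(') {
            p++;                     /* consume '('                     */
            int v = expr();          /* recurse into the bracketed part */
            p++;                     /* consume ')'                     */
            return v;
        }
        return *p++ - '0';           /* a single-digit operand          */
    }

    static int term(void)
    {
        int v = factor();
        while (*p == '*') {
            p++;
            v *= factor();
        }
        return v;
    }

    static int expr(void)
    {
        int v = term();
        while (*p == '+') {
            p++;
            v += term();
        }
        return v;
    }

    int main(void)
    {
        p = "2*(3+(4+5))";
        printf("2*(3+(4+5)) = %d\n", expr());    /* prints 24 */
        return 0;
    }

Each extra level of brackets simply becomes one more level of recursion, which is exactly the facility a routine with a single fixed workspace cannot supply.
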
Again referring back to that original video, I had a lot of really
interesting mail from various people who said to me: “OK, you said that this is an
innately recursive problem and it just had to have general recursion capabilities?
Well I …. ”
[end of trailer]

100 thoughts to “Programming Loops vs Recursion – Computerphile”

  1. I was interested to check if the y-combinator can get around the fact that FORTRAN77 didn't have the ability to do (general) recursion. As it turns out (and I probably should have guessed at the outset), both fail for the same reason: lack of stack frames. This means that you don't have so-called automatic variables, that is variables that are independent between recursive calls. For example the definition (ignoring the base case): fac(x) = x * fac(x-1) needs to have a different value for x for each recursive call. Interestingly for this "simple recursion" case, however, it's ok that x is not independent, because it's not used again and this acts almost identically to a simple loop. But this doesn't work for general recursion, where a variable can be used again, so even fac(x) = fac(x-1) * x won't work, since the x at the end will have been clobbered by the recursive call. As for the y-combinator, well it also requires the language feature of being able to pass a function to another function, and then be able to execute that function over itself. For the same reason, this clobbering of local variables will prevent this from working in an almost identical way. Note that you have to jump through hoops in FORTRAN77 to be able to trick the function into calling itself. Also note that recent compilers supporting FORTRAN77 actually do support recursion, although this is not consistently implemented across all compilers. Later versions of FORTRAN of course fully support recursion.

  2. The most popular (and generally fastest) sorting algorithm requires recursion. Quicksort wouldn't be possible to implement in a general way without some sort of stack of partitions.

  3. Would an average engineer times anything by one, after multiplying it by 5, 4, 3 and 2? 7:28 – don't get it, but I'm no mathematician

  4. In C++ (or C if applicable) how is this 256-deep loop limit affected by function calls? Say you are working through a 256-dimensional problem for n data sets, you would need a 257-deep loop to work through that. If the input controller is a function that sets up data structures in a loop and calls the second function, which does this 256-nested loop, would the compiler refuse to compile, or would the deferral through function calls allow for what is essentially a 257-deep nested loop?

    Essentially this:

    function1:
    for(…)
    function2(…)

    function2:
    for(…)
    …(255 more loops)

    versus this:

    for(…)
    for(….)
    …(255 more loops)

  5. I have watched so many videos since discovering this channel; fascinating. It's incredible how early in time very advanced features of programming languages were conceptualised/invented

  6. Recursion gets at the fundamental difference between 'natural' languages and things like maths and programming languages. Essentially, human minds can only handle about 3 or 4 levels of nesting before becoming incapable of following along without external memory aids. And it turns out that just about everything weird that happens in the grammars of languages is a tactic for dealing with this innate limitation to only 3 or 4 levels of recursion at a time.

    Essentially, that's the closest anyone will ever be able to find to Chomsky's UG, but it is in many regards the antithesis of UG at the same time, which I find just glorious.

  7. That explanation of Fortran recursive calls trampling existing data and producing garbage was beautifully illustrated.

  8. I see the benefit of recursions in the elegance of a function. It's keeping your code really clean. That's what I like about recursions. But, on the other side, it is sometimes hard to understand the recursion with tail and head. But I think everybody should learn at least one language with recursions and should learn the way to think. It gives you the ability to break problems down into a few lines of code. Even if it is within a for loop .

  9. I think there's a huge missed point here, which is that you can implement recursion using loops in modern languages, rather easily actually. Just use a stack to emulate the function stack. Recursion isn't inherently more powerful than loops, as this video implies. They are both equally powerful

  10. I'm just here to point out that any recursive function can be converted to an iterative function. So having recursion support is not really necessary, besides the fact that function calls are always slower than iterative approaches

  11. I love watching Professor Brailsford's videos, he makes me appear super knowledgeable at work when we have a tech problem and I take everyone back to (first) Brailsford Principles

  12. Back in university I tried using FORTRAN exactly one day. I started reading the textbook of FORTRAN 77 (or whatever version) and I wanted to try writing a simple "Hello world" type of program with it. I copied one of the first example programs from the book to a file on the university's main computer and I tried compiling it. After a few rounds of correcting my typos I finally got it compiled. Then I tried to run it. ERROR. I gave up any ideas I might have got about FORTRAN. I had much better success writing C some months later (and ever since).

  13. i'm 9 min in and tbh idk wtf is going on xD
    recursion is loops in loops i guess but feel like there's a bigger picture i'm not seeing here

  14. As someone who is writing an operating system, this video proved to show me that recursion on the stack can eventually pop. So long as you have a heap fallback, everything should run smoothly. I wonder how much static local variables in C could reduce the amount of recursive intensity recursion brings by allocating more stack frames.

  15. I like this professor. He's smart. Only thing I'll say is that in the real world, recursion is best avoided if possible. It's fun to be fancy in an interview but in an actual application loops usually work better. Recursion is slower (due mostly to allocating and deallocating to the Stack), and you run the risk of a stack overflow.

  16. Sophisticated programmers would hand-compile high-level pseudo-code into a lower level target language (like Fortran or Basic) using storage allocation rulesets.

  17. If you ever have to write a compiler, recursive descent parsing driving code generation for a stack machine is the way to go.

  18. The quantum-mechanical basis of our universe is patently recursive in nature, with each recursive call calling into existence new computational resources to parallelize the computation.

  19. These videos are amazing and this guy is a treasure. Thank you, and him, so much for putting these up! There is so much noise in compsci, and so many folks who want to prove their intelligence by making subject matter harder than it has to be – it's refreshing to see somebody who knows it well enough that they can explain it like it's simple!

  20. Recursion is simple but definitely more complicated than looping. Recursion always has to have at least 2 ways of exiting. One to call itself and another that pops the stack. This guaranteed escape must be made by the programmer and if it doesn’t trigger at execution time, you blow up the stack and the program crashes. (Please don’t tell me about tail call recursion, as it is a fake kind of recursion that doesn’t use, or need to use, the stack.) I probably have used a recursive function in about 1 in 500 functions, so useful in some circumstances but nowhere near as useful as just plain looping.

    Looping is at the core of the usefulness of programming. If you have no loops, no matter how many instructions you program, your program will stop almost immediately. With desktop computers executing billions of instructions per second, you just can’t provide enough instructions to satisfy a modern computer without looping. Programming is all about finding patterns and then putting the code that deals with that pattern in a loop or set of loops. Conditional statements are also essential but looping sits at the center of programming. Recursion is a “nice to have” feature but it sits on the sidelines. Many professional programmers have only ever made recursive functions at University. How could this be true if recursive functions were as necessary as a plain loop?

    Using recursion in compilers I think mixes up using a stack with recursion that uses a stack but also includes functions calling themselves. These 2 things are not the same. I have written multiple compilers and I have used stacks in every one of them but I don’t recall even a single recursive function being created. Even in assembler, the only difference I can see between any normal function call and a recursive routine was making sure you had enough stack for the recursive function call depth and storing any needed local variables on the stack.

    Recursion might have been a big deal for mathematicians but for professional developers, it was no big deal. As the author of this video pointed out, the amount of memory needed to directly use data in multi-dimensional arrays is the max size of each dimension multiplied together. Even small sizes in 3 or 4 dimensions would exceed the memory available, so having 256 dimensions etc. is fiction even today. Computer programs can simulate almost anything, so I am not saying that sparse matrices can’t be simulated in less memory than I have stated, but that kind of obscure problem is just one of millions and nothing special. If a language limited arrays to no more than 2 dimensions, most programmers and code simply wouldn’t care. You can always simulate any number of dimensions by just using a simple block of memory big enough to hold it.

  21. Anecdotal details like: "We didn't know enough
    about recursion and even though we didn't provide it for the users of our language, boy did we need it in the compiler! And we ended up inventing it in all but name" are awesome insights into invention in general.

  22. Recursion is better than loops… as long as the compiler can convert it into a loop XD

  23. so Visual C++ will only allow 123 nested for loops in the main function, which makes 124 nested layers including the main function, which leaves 4 layers for internal use. So the true limit is probably 128

  24. if you can't use recursion:
    1. open your function implementation in any text editor.
    2. if depth is already set continue with step 4.
    3. set depth to equal 1.

    4. cut out your function.
    5. paste whatever you have copied / cut last.
    6. append _$(depth - 1) to the name of the function you just pasted.
    7. append _$depth to all calls of your function inside of the function you pasted.
    8. save your file.
    9. if you get an out_of_diskspace error while saving continue with step 11.
    10. start a new instance of this manual with the variable depth set to $depth + 1 and continue from here once you finish it.
    11. close this text editor ignoring any modified file warnings that might occur.
    12. return the ballpoint pen you 'accidentally' stole to wherever you stole it from.

  25. Until I watched this video I had never even thought about whether there is a limit to nested loops or not! Am I missing something or really, in the practical world, how often would one ever need to nest loops or recurse more than 2-3 levels deep???!

  26. i want to learn, i want to listen, i want to watch your videos but the European accent is so annoying and hard to get.

  27. made me think of my proof-of-concept hyperoperation code:

    public class main {

        public static void main(String[] args) {
            System.out.print(hyper(3, 3, 4));
        }

        public static long hyper(long a, long b, long n) {
            if (n <= 0) return a + 1;
            if (n == 1) return a + b;
            long r = 0;
            for (long i = b - 1; i != 0; i--) {
                if (r == 0) r = hyper(a, a, n - 1);
                else r = hyper(a, r, n - 1);
            }
            return r;
        }
    }

  28. Without recursion, search trees and spreadsheets would be nearly impossible; many functions must be recursive.

  29. what kind of rigid people can ONLY solve the problem by using recursion?
    The bad performance is part of the cost of making a function call in the old computers with non-existent hardware support for function calls.
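
Picking up the point made in comments 9 and 10 above, that any recursive function can be rewritten as a loop driven by an explicit stack, here is a minimal C sketch of that transformation applied to the Ackermann function from the video (my own illustration; the fixed 1024-slot stack is an assumption that is ample only for tiny arguments):

    #include <stdio.h>

    /* Iterative Ackermann: the pending "outer" calls that the recursive
       version would keep on the call stack are held on an explicit stack
       of m-values instead. */
    static unsigned long ack_iterative(unsigned long m, unsigned long n)
    {
        unsigned long stack[1024];           /* assumed big enough for tiny m, n */
        size_t top = 0;

        stack[top++] = m;
        while (top > 0) {
            if (top + 2 > sizeof stack / sizeof stack[0]) {
                fprintf(stderr, "explicit stack exhausted\n");
                return 0;                    /* sketch: bail out on overflow     */
            }
            m = stack[--top];
            if (m == 0) {
                n = n + 1;                   /* A(0, n) = n + 1                  */
            } else if (n == 0) {
                stack[top++] = m - 1;        /* A(m, 0) = A(m - 1, 1)            */
                n = 1;
            } else {
                stack[top++] = m - 1;        /* A(m, n) = A(m - 1, A(m, n - 1))  */
                stack[top++] = m;
                n = n - 1;
            }
        }
        return n;
    }

    int main(void)
    {
        printf("%lu\n", ack_iterative(2, 3));   /* 9  */
        printf("%lu\n", ack_iterative(3, 3));   /* 61 */
        return 0;
    }
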
