Is this technically an O(1) algorithm for "Hello World"?

Tags: C#, .NET, Algorithm, Big O

C# Problem Overview


Would this be classified as an O(1) algorithm for "Hello, World!" ??

public class Hello1
{
   public static void Main()
   {
      DateTime TwentyYearsLater = new DateTime(2035,01,01);
      while ( DateTime.Now < TwentyYearsLater )
      { 
          System.Console.WriteLine("It's still not time to print the hello ...");
      }
      System.Console.WriteLine("Hello, World!");
   }
}

I'm thinking of using the

DateTime TwentyYearsLater = new DateTime(2035,01,01);
while ( DateTime.Now < TwentyYearsLater )
{ 
   // ... 
}

snippet of code as a busy loop to put in as a joke whenever someone asks for an algorithm of a certain complexity. Would this be correct?

C# Solutions


Solution 1 - C#

Big O notation in this context is being used to describe a relationship between the size of the input of a function and the number of operations that need to be performed to compute the result for that input.

Your operation has no input that the output can be related to, so using Big O notation is nonsensical. The time that the operation takes is independent of the inputs of the operation (of which there are none). Since there is no relationship between the input and the number of operations performed, you can't use Big O to describe that non-existent relationship.
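
For contrast, here is a minimal sketch of a routine that does have such a relationship (the method name and shape are illustrative only, not anything from the question): a linear search performs work proportional to the length of its input array, which is exactly the kind of input-to-operations relationship Big O describes.

// Illustrative only: the number of comparisons grows with the length of
// the input array, so this is O(n) in the array length n.
public static int LinearSearch(int[] values, int target)
{
    for (int i = 0; i < values.Length; i++)
    {
        if (values[i] == target)
            return i;   // found after i + 1 comparisons
    }
    return -1;          // n comparisons for an array of length n
}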

Solution 2 - C#

Big-O notation means roughly 'given an operation on an amount of work, N, how much calculation time, proportional to N, does the algorithm take?'. E.g., sorting an array of size N can take N^2, N log(N), etc.

This has no amount of input data to act on. So it's not O(anything).

Even worse: this isn't technically an algorithm. An algorithm is a method for computing the value of a mathematical function -- maths functions are a mapping from one input to an output. Since this takes no input and returns nothing, it's not a function in the mathematical sense. From Wikipedia:

> An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state.

What this is, technically, is a control system. From Wikipedia:

> A control system is a device, or set of devices, that manages, commands, directs or regulates the behavior of other devices or systems.

For people wanting a more in-depth answer about the difference between mathematical functions and algorithms, and about the more powerful abilities of computers to do side-effecting things like console output, displaying graphics, or controlling robots, have a read of this paper on the Strong Church-Turing Hypothesis.

Abstract

> The classical view of computing positions computation as a closed-box transformation of inputs (rational numbers or finite strings) to outputs. According to the interactive view of computing, computation is an ongoing interactive process rather than a function-based transformation of an input to an output. Specifically, communication with the outside world happens during the computation, not before or after it. This approach radically changes our understanding of what is computation and how it is modeled.
>
> The acceptance of interaction as a new paradigm is hindered by the Strong Church-Turing Thesis (SCT), the widespread belief that Turing Machines (TMs) capture all computation, so models of computation more expressive than TMs are impossible. In this paper, we show that SCT reinterprets the original Church-Turing Thesis (CTT) in a way that Turing never intended; its commonly assumed equivalence to the original is a myth. We identify and analyze the historical reasons for the widespread belief in SCT. Only by accepting that it is false can we begin to adopt interaction as an alternative paradigm of computation.

Solution 3 - C#

No, your code has a time complexity of O(2^|<DeltaTime>|), for a proper encoding of the current time.

Please, let me first apologize for my English.

What Big O is and how it works in CS

Big O notation is not used to tie the input of a program to its running time.
Big O notation is, leaving rigor behind, a way to express the asymptotic ratio of two quantities.

In the case of algorithm analysis these two quantities are not the input (for which one must first have a "measure" function) and the running time.
They are the length of the encoding of an instance of the problem1 and a metric of interest.

The commonly used metrics are

  1. The number of steps required to complete the algorithm in a given model of computation.
  2. The space required, if any such concept exists, by the model of computation.

A TM is implicitly assumed as the model, so that the first point translates to the number of applications of the transition2 function, i.e. "steps", and the second one translates to the number of different tape cells written at least once.

It is also often implicitly assumed that we can use a polynomially related encoding instead of the original one. For example, a function that searches an array from start to end has O(n) complexity despite the fact that an encoding of an instance of such an array would have length n*b + (n-1), where b is the (constant) number of symbols of each element. This is because b is considered a constant of the computation model and so the expression above and n are asymptotically the same.

This also explains why an algorithm like [Trial Division][1] is an exponential algorithm despite essentially being a for(i=2; i<=sqrt(N); i++) style algorithm3.

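As a rough sketch of that point (the method below is purely illustrative, not taken from the answer): trial division looks like a simple linear loop over i, but the number of iterations is about sqrt(N), and an input encoded with b bits can be as large as 2^b, so sqrt(N) is on the order of 2^(b/2), exponential in the length of the encoding.

// Sketch: the loop body runs roughly sqrt(n) times. If n is encoded with
// b bits, n can be as large as 2^b, so roughly 2^(b/2) iterations are possible,
// which is exponential in the size of the encoding of the input.
public static bool IsPrimeByTrialDivision(long n)
{
    if (n < 2) return false;
    for (long i = 2; i * i <= n; i++)
    {
        if (n % i == 0) return false;
    }
    return true;
}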

This also means that big O notation may use as many parameters as one needs to describe the problem: it is not unusual to have a k parameter for some algorithms.

So this is not about the "input" or that "there is no input".

Now for the case study

Big O notation doesn't question your algorithm, it just assumes that you know what you are doing. It is essentially a tool applicable everywhere, even to algorithms which may be deliberately tricky (like yours).

To solve your problem you used the current date and a future date, so they must be part of the problem somehow; simply put: they are part of the instance of the problem.

Specifically the instance is:

<DeltaTime>

Where <> means any non-pathological encoding of choice.

See below for very important clarifications.

So your big O time complexity is just O(2^|<DeltaTime>|), because you do a number of iterations that depends on the value of the current time. There is no point in putting in other numeric constants, as the asymptotic notation is useful because it eliminates constants (so, for example, the use of O(10^|<DeltaTime>|*any_time_unit) is pointless).
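
To make that concrete (the helper below is purely illustrative, and it assumes a plain decimal encoding of DeltaTime in milliseconds): a value written with d digits can be close to 10^d, so the number of iterations is exponential in the length of the encoding, not merely proportional to it.

// Illustration (hypothetical helper): the loop count grows with the *value*
// of DeltaTime, and that value is exponential in the *length* of its encoding:
// "9" allows up to ~10 iterations per ms, "999" up to ~1000, and so on.
public static long IterationsFor(string encodedDeltaTimeMs, long iterationsPerMs)
{
    long deltaMs = long.Parse(encodedDeltaTimeMs);   // value < 10^(number of digits)
    return deltaMs * iterationsPerMs;                // work ~ 10^|<DeltaTime>|
}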

Where the tricky part is

We made one important assumption above: that the model of computation reifies5 time, and by time I mean the (real?) physical time. There is no such concept in the standard computational model; a TM does not know time, we link time with the number of steps because this is how our reality works4.

In your model, however, time is part of the computation; you may use the terminology of functional people and say that Main is not pure, but the concept is the same.

To understand this one should note that nothing prevents the Framework from using a fake time that runs twice, five, or ten times faster than physical time. This way your code would run in "half", "one fifth", or "one tenth" of the "time".
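
A minimal sketch of that idea (the IClock, SystemClock and FastClock types below are made up for illustration; they are not part of the .NET Framework): if the loop reads time through an abstraction, a clock that runs ten times faster makes the same code finish in a tenth of the physical time without changing its "steps" at all.

// Hypothetical clock abstraction: the busy loop cannot tell whether it reads
// real time or an accelerated fake, so "time taken" is not an intrinsic
// property of the code.
public interface IClock { DateTime Now { get; } }

public class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

public class FastClock : IClock   // pretends time passes 10x faster
{
    private readonly DateTime _start = DateTime.Now;
    public DateTime Now => _start + TimeSpan.FromTicks((DateTime.Now - _start).Ticks * 10);
}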

This reflection is important for choosing the encoding of <DeltaTime>, which is essentially a condensed way of writing <(CurrentTime, TimeInFuture)>. Since time does not exist a priori, the encoding of CurrentTime could very well be the word Now (or any other choice), and the day before could be encoded as Yesterday, thereby breaking the assumption that the length of the encoding increases as the physical time goes forward (and that of DeltaTime decreases).

We have to properly model time in our computational model in order to do something useful.

The only safe choice we can make is to encode timestamps with increasing lengths (but still not using unary) as the physical time steps forward. This is the only true property of time we need, and the one the encoding needs to capture. It is only with this type of encoding that your algorithm may be given a time complexity.

Your confusion, if any, arises from the fact that the word time in the phrases 'What is its time complexity?' and 'How much time will it take?' means two very different things.

Alas, the terminology uses the same words, but you can try using "steps complexity" in your head and re-ask yourself your question. I hope that will help you understand what the answer really is ^_^


1 This also explains the need for an asymptotic approach, as each instance has a different, yet not arbitrary, length.
2 I hope I'm using the correct English term here.
3 This is also why we often find log(log(n)) terms in the math.
4 Id est, a step must occupy some finite, but not null, nor disconnected, interval of time.
5 This means that the computational model has a knowledge of physical time in it, that is, it can express it with its terms. An analogy is how generics work in the .NET framework.

[1]: https://en.wikipedia.org/wiki/Trial_division "Trial Divisions"

Solution 4 - C#

Although there are a bunch of great answers here, let me rephrase all of them a bit.

Big-O notation exists to describe functions. When applied to the analysis of algorithms this requires us to first define some characteristic of the algorithm in terms of a function. The common choice is to consider the number of steps as a function of input size. As noted in other answers, coming up with such a function in your case seems strange, because there is no clearly defined "input". We can still try to do it, though:

  • We can regard your algorithm as a constant function which takes any input of any size, ignores it, waits a fixed amount of time, and finishes. In this case its runtime is f(n) = const, and it is an O(1)-time algorithm. This is what you expected to hear, right? Yes, it is, technically, an O(1)-algorithm.
  • We can regard TwentyYearsLater as the "input-size"-like parameter of interest. In this case the runtime is f(n) = (n-x), where x is the "now time" at the moment of invocation. When seen this way, it is an O(n)-time algorithm. Expect this counter-argument whenever you go showing your technically O(1)-algorithm to other people (see the sketch after this list).
  • Oh, but wait, if k=TwentyYearsLater is the input, then its size n is, actually, the number of bits needed to represent it, i.e. n = log(k). The dependency between the size of the input n and runtime is therefore f(n) = 2^n - x. Seems like your algorithm has just become exponentially slow! Ugh.
  • Another input to the program is in fact the stream of answers given by the OS to the sequence of DateTime.Now invocations in the loop. We can actually imagine that this whole sequence is provided as input at the moment we run the program. The runtime can then be considered to depend on the property of this sequence - namely its length until the first TwentyYearsLater element. In this case the runtime is again f(n) = n and the algorithm is O(n).
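
For the second and third readings above, here is a sketch (a rewrite for illustration, not the original program) that makes the target time an explicit parameter, so there is no ambiguity about what the "input" is:

// Hypothetical parameterized variant: the caller supplies the target time.
// Measured by the value of the input, the loop count is about (target - now);
// measured by the number of bits used to encode that value, it is exponential.
public static void HelloAt(DateTime target)
{
    while (DateTime.Now < target)
    {
        System.Console.WriteLine("It's still not time to print the hello ...");
    }
    System.Console.WriteLine("Hello, World!");
}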

But then again, in your question you did not even say you were interested in runtime. What if you meant memory use? Depending on how you model the situation you can say the algorithm is O(1)-memory or, perhaps, O(n)-memory (if the implementation of DateTime.Now requires keeping track of the whole invocation sequence for some reason).

And if your goal was to come up with something absurd, why don't you go all in and say that you are interested in how the size of the algorithm's code in pixels on the screen depends on the chosen zoom level. This might be something like f(zoom) = 1/zoom and you can proudly declare your algorithm to be O(1/n)-pixel size!

Solution 5 - C#

I have to disagree with Servy slightly. There is an input to this program, even if it's not obvious, and that's the system's time. This might be a technicality you hadn't intended, but your TwentyYearsLater variable isn't twenty years from the system's time now; it's statically assigned to January 1st, 2035.

So if you take this code and execute it on a machine that has a system time of January 1st, 1970, it's going to take 65 years to complete, regardless of how fast the computer is (there may be some variation if its clock is faulty). If you take this code and execute it on a machine that has a system time of January 2nd, 2035, it will complete almost instantly.

I would say your input, n, is January 1st, 2035 - DateTime.Now, and it's O(n).

Then there's also the issue of the number of operations. Some people have noted that faster computers will hit the loop faster, causing more operations, but that's irrelevant. When working with big-O notation, we don't consider the speed of the processor or the exact number of operations. If you took this algorithm and ran it on a computer, and then ran it again but for 10x longer on the same computer, you would expect the number of operations to grow by the same factor of 10x.

As for this:

> I'm thinking of using the [redacted code] snippet of code as a busy loop to put in as a joke whenever someone asks for an algorithm of a certain complexity. Would this be correct?

No, not really. Other answers have covered this, so I just wanted to mention it. You can't generally correlate years of execution to any big-O notation. E.g., there's no way to say 20 years of execution = O(n^87) or anything else for that matter. Even in the algorithm you gave, I could change TwentyYearsLater to the year 20110, 75699436, or 123456789 and the big-O is still O(n).

Solution 6 - C#

Big-O analysis deals with the amount of processing involved as the amount of data being processed increases without limit.

Here, you're really only dealing with a single object of fixed size. As such, applying big-O analysis depends heavily (primarily?) upon how you define your terms.

For example, you could mean printing output in general, and imposing a wait so long that any reasonable amount of data would be printed in precisely the same period of time. You also have to add a little more in the way of somewhat unusual (if not outright wrong) definitions to get very far, though--particularly, big-O analysis is usually defined in terms of the number of fundamental operations needed to carry out a particular task (but note that complexity can also be considered in terms of things like memory use, not just CPU use/operations carried out).

The number of fundamental operations usually translates fairly closely to time taken, however, so it isn't a huge stretch to treat the two as synonymous. Unfortunately, we're still stuck with that other part: the amount of data being processed increasing without limit. That being the case, no fixed delay you can impose will really work. To equate O(1) with O(N), you'd have to impose an infinite delay so that any fixed amount of data took forever to print, just as an infinite amount of data would.

Solution 7 - C#

big-O relative to what?

You seem to be intuiting that TwentyYearsLater is an "input". If indeed you wrote your function as

void helloWorld(int years) {
   // ...
}

It would be O(N) where N = years (or just say O(years)).

I would say your algorithm is O(N) relative to whatever number you happen to write in the line of code starting with TwentyYearsLater = . But people do not usually consider numbers in the actual source code as the input. They might consider the command-line input as the input, or the function signature input as the input, but most likely not the source code itself. That is what you are disputing with your friend -- is this the "input"? You set up your code in a way to make it intuitively seem like an input, and you can definitely ask its big O running time with respect to the number N on line 6 of your program, but if you use such a non-default choice as input you really need to be explicit about it.

But if you take the input to be something more usual, like the command line or the input to the function, there is no input at all and the function is O(1). It takes twenty years, but since big-O does not change up to a constant multiple, O(1) = O(twenty years).

Similar question - what is the runtime of:

void sortArrayOfSizeTenMillion(int[] array)

Assuming it does what it says, the input is valid, and the algorithm uses a quicksort or bubble sort or anything reasonable, it's O(1).

Solution 8 - C#

This "algorithm" is correctly described as O(1) or constant time. It's been argued that there is no input to this program, hence there is no N to analyze in terms of Big Oh. I disagree that there is no input. When this is compiled into an executable and invoked, the user can specify any input of arbitrary length. That input length is the N.

The program just ignores the input (of whatever length), so the time taken (or the number of machine instructions executed) is the same irrespective of the length of the input (given fixed environment = start time + hardware), hence O(1).
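
A minimal sketch of that reading (illustrative only): give the executable a conventional input of length N and ignore it.

// Illustration: args is the input of length N; it never affects the loop,
// so the work done is constant with respect to N.
public static void Main(string[] args)
{
    DateTime twentyYearsLater = new DateTime(2035, 01, 01);
    while (DateTime.Now < twentyYearsLater) { /* busy wait, independent of args */ }
    System.Console.WriteLine("Hello, World!");
}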

Solution 9 - C#

One thing I'm surprised hasn't been mentioned yet: big-O notation is an upper bound!

The issue everyone has noticed is that there is no N describing the inputs to the algorithm, so there is nothing to do big-O analysis with. However, this is easily mitigated with some basic trickery, such as accepting an int n and printing "Hello World" n times. That would get around that complaint and get back to the real question of how that DateTime monstrosity works.

There is no actual guarantee that the while loop will ever terminate. We like to think it has to at some time, but consider that DateTime.now returns the system date and time. There is actually no guarantee that this is monotonically increasing. It is possible that there is some pathologically trained monkey constantly changing the system date and time back to October 21, 2015 12:00:00 UTC until someone gives the monkey some auto-fitting shoes and a hoverboard. This loop can actually run for an infinite amount of time!
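
(As an aside, a variant that is guaranteed to terminate would measure elapsed time with a monotonic source instead of the wall clock; the sketch below uses .NET's Stopwatch, which is not affected by changes to the system date, and the 365.25-day year is only an approximation.)

// Sketch: Stopwatch measures elapsed time monotonically, so no amount of
// wall-clock tampering can keep this loop running forever.
public static void Main()
{
    var timer = System.Diagnostics.Stopwatch.StartNew();
    TimeSpan twentyYears = TimeSpan.FromDays(365.25 * 20);
    while (timer.Elapsed < twentyYears)
    {
        System.Console.WriteLine("It's still not time to print the hello ...");
    }
    System.Console.WriteLine("Hello, World!");
}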

When you actually dig into the mathematical definition of big-O notation, it describes an upper bound. It demonstrates the worst case scenario, no matter how unlikely. The worst case* scenario here is an infinite runtime, so we are forced to declare that there is no big-O notation to describe the runtime complexity of this algorithm. It doesn't exist, just as 1/0 doesn't exist.

* Edit: from my discussion with KT, it is not always valid to presume the scenario we are modeling with big-O notation is the worst-case. In most cases, if an individual fails to specify which case we're using, they intended to explore the worst case. However, you can do a big-O complexity analysis on the best-case runtime.

Solution 10 - C#

Complexity is used to measure computational "horsepower" in terms of time/space. Big O notation is used to compare which problems are "computable" or "not computable" and also to compare which solutions (algorithms) are better than others. As such, you can divide algorithms into two categories: those that can be solved in polynomial time and those that can't.

Problems like the Sieve of Eratosthenes are O(n^exp) and thus are solvable for small values of n. They are computable, just not in polynomial time, and thus when asked if a given number is prime or not, the answer depends on the magnitude of that number. Moreover, complexity does not depend on the hardware, so having faster computers changes nothing...

Hello World is not an algorithm, and as such it is senseless to attempt to determine its complexity -- which is none. A simple algorithm can be something like: given a random number, determine if it is even or odd. Now, does it matter that the given number has 500 digits? No, because you just have to check if the last digit is even or odd. A more complex algorithm would be to determine if a given number divides evenly by 3. Although some numbers are "easy" to compute, others are "hard", and this is because of their magnitude: compare the time it takes to determine the remainder between a number with one digit and another with 500 digits.
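
To make that contrast concrete, here is an illustrative sketch (treating the number as a string of decimal digits, which is an assumption of the example): evenness needs only the last digit, while divisibility by 3 has to look at every digit.

// Evenness: one digit inspected, no matter how many digits the number has.
public static bool IsEven(string digits)
{
    return "02468".IndexOf(digits[digits.Length - 1]) >= 0;
}

// Divisibility by 3: every digit contributes to the digit sum, so the work
// grows with the number of digits.
public static bool IsDivisibleBy3(string digits)
{
    int sum = 0;
    foreach (char d in digits) sum += d - '0';
    return sum % 3 == 0;
}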

A more complex case would be to decode a text. You have an apparently random array of symbols which you also know are conveying a message for those having the decrypting key. Let's say that the sender shifted each letter one key to the left on the keyboard, so your Hello World would read: Gwkki Qieks. The "big-hammer, no-brain" solution would produce all combinations for those letters, from Aaaa to Zzzz, and then search a word dictionary to identify which words are valid and share the two common letters in the cypher (i, k) in the same position. This transformation function is what Big O measures!

Solution 11 - C#

Everyone’s correctly pointed out that you don’t define N, but the answer is no under the most reasonable interpretation. If N is the length of the string we’re printing and “hello, world!” is just an example, as we might infer from the description of this as an algorithm “for hello, world!,” then the algorithm is O(N), because you might have an output string that takes thirty, forty or fifty years to print, and you’re adding only a constant time to that. O(kN+c) ∈ O(N).

Addendum:

To my surprise, someone is disputing this. Recall the definitions of big O and big Θ. Assume we have an algorithm that waits for a constant amount of time c and then prints out a message of length N in linear time. (This is a generalization of the original code sample.) Let’s arbitrarily say that we wait twenty years to start printing, and that printing a trillion characters takes another twenty years. Let c = 20 and k = 10¹², for example, but any positive real numbers will do. That’s a rate of d = c/k (in this case 2×10⁻¹¹) years per character, so our execution time f(N) is asymptotically dN+c years. Whenever N > k, dN = c/k N > c. Therefore, dN < dN+c = f(N) < 2 dN for all N > k, and f(N) ∈ Θ(N). Q.E.D.

Solution 12 - C#

I think people are getting thrown off because the code doesn't look like a traditional algorithm. Here is a translation of the code that is better formed but stays true to the spirit of the OP's question.

static void TrolloWorld(long currentUnixTime, long loopsPerMs)
{
    long laterUnixTime = 2051222400000;  // Unix time in milliseconds of 01/01/2035, 00:00:00
    long numLoops = (laterUnixTime - currentUnixTime) * loopsPerMs;

    for (long i = 0; i < numLoops; i++)
    {
        System.Console.WriteLine("It's still not time to print the hello ...");
    }
    System.Console.WriteLine("Hello, World!");
}

The inputs are explicit whereas before they were implicitly given by the time the code was started at and by the speed of the hardware running the code. The code is deterministic and has a well-defined output for given inputs.
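
For example, the explicit inputs could be supplied like this (a hypothetical call: ToUnixTimeMilliseconds is a real .NET API, but the 1000 loops-per-millisecond figure is just an assumed calibration value):

// Hypothetical invocation: both inputs are now visible at the call site.
long nowMs = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
TrolloWorld(nowMs, loopsPerMs: 1000);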

Because of the limitations that are imposed on the inputs we can provide, there is an upper bound to the number of operations that will be executed, so this algorithm is in fact O(1).

Solution 13 - C#

Most people seem to be missing two very important things.

  1. The program does have an input. It is the hard-coded date/time against which the system time is being compared. The input is under the control of the person running the algorithm, and the system time is not. The only thing the person running this program can control is the date/time they've hard-coded into the comparison.

  2. The program varies based on the input value, but not the size of the input set, which is what big-O notation is concerned with.

Therefore, it is indeterminate, and the best 'big-O' notation for this program would probably be O(null), or possibly O(NaN).

Solution 14 - C#

At this point in time, yes

This algorithm has an implicit input, namely the time that the program is started at. The execution time will vary linearly1 depending on when it is started. During the year 2035 and after, the while loop exits immediately and the program terminates after a constant number of operations2. So it could be said that the runtime is O(max(2035 - start year, 1))3. But since our start year has a minimum value, the algorithm will never take more than 20 years to execute (i.e. a constant value).

You can make your algorithm more in keeping with your intent by defining DateTime TwentyYearsLater = DateTime.Now + new TimeSpan(365*20,0,0,0);4
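
(A side note, not part of the original suggestion: DateTime.Now.AddYears(20) expresses the same intent and lets the calendar handle leap years, whereas a fixed 7300-day TimeSpan drifts by a few days over twenty years.)

// Alternative sketch: twenty calendar years from now, leap years included.
DateTime TwentyYearsLater = DateTime.Now.AddYears(20);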

1 This holds for the more technical sense of execution time measured as number of operations because there is a maximum number of operations per time unit.
2 Assuming fetching DateTime.Now is a constant operation, which is reasonable.
3 I'm somewhat abusing big O notation here because this is a decreasing function with respect to start year, but we could easily rectify this by expressing it in terms of years prior to 2035.
4 Then the algorithm no longer depends on the implicit input of the start time, but that's of no consequence.

Solution 15 - C#

I'd argue that this is O(n), using http://www.cforcoding.com/2009/07/plain-english-explanation-of-big-o.html as a reference.

> What is Big O?
>
> Big O notation seeks to describe the relative complexity of an algorithm by reducing the growth rate to the key factors when the key factor tends towards infinity.

and

> The best example of Big-O I can think of is doing arithmetic. The basic arithmetic operations we learnt in school were: addition; subtraction; multiplication; and division. Each of these is an operation or a problem. A method of solving these is called an algorithm.

For your example,

given the input of n = 20 (in units of years),

the algorithm is a mathematical function f(), where f() happens to be "wait for n years", printing 'debug' strings in between. The scale factor is 1. f() can be reduced or increased by changing this scale factor.

For this case, the output is also 20 (changing the input changes the output linearly).

Essentially, the function is

f(n) = n * 1 = n
if n = 20, then f(20) = 20

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Subpar Web Dev | View Question on Stackoverflow
Solution 1 - C# | Servy | View Answer on Stackoverflow
Solution 2 - C# | Steve Cooper | View Answer on Stackoverflow
Solution 3 - C# | Yuni Mj | View Answer on Stackoverflow
Solution 4 - C# | KT. | View Answer on Stackoverflow
Solution 5 - C# | Shaz | View Answer on Stackoverflow
Solution 6 - C# | Jerry Coffin | View Answer on Stackoverflow
Solution 7 - C# | djechlin | View Answer on Stackoverflow
Solution 8 - C# | waldol1 | View Answer on Stackoverflow
Solution 9 - C# | Cort Ammon | View Answer on Stackoverflow
Solution 10 - C# | Turing | View Answer on Stackoverflow
Solution 11 - C# | Davislor | View Answer on Stackoverflow
Solution 12 - C# | panofsteel | View Answer on Stackoverflow
Solution 13 - C# | Theo Brinkman | View Answer on Stackoverflow
Solution 14 - C# | panofsteel | View Answer on Stackoverflow
Solution 15 - C# | Angel Koh | View Answer on Stackoverflow