Talk:Big O notation

Determining if a binary number is even or odd changed to finding median

"Determining if a binary number is even or odd" isn't a particularly useful example, because there will only one number being tested. Finding the median value for a sorted list of measurements is a better example. SlowJog (talk) 02:32, 29 October 2023 (UTC)[reply]


Big O and data structures

My note on this topic disappeared, but I still think there is a common misinterpretation here. Suppose you have an algorithm which calls a method of a data structure \Theta(n) times. The asymptotic complexity of the method is described as O(\log s), where s is the size of the data structure. The actual use of the data structure is such that s never exceeds 1 (a weird case). If you simply multiply the number of method calls by their complexity, you obtain the bound 0.

There are several ways to fix this wrong calculation ... I recommend taking the minimal complexity of a call to be 1, which changes the result to \Theta(n). I do not think we would change all complexity descriptions to O(1+\log s) ..., but one should be aware of this problem of using asymptotics when the size does not go to infinity. Hippo.69 (talk) 13:50, 12 December 2023 (UTC)
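
A minimal worked version of the pitfall described above, with the constants suppressed and assuming s = 1 throughout (so \log s = 0): the naive product is

n \cdot \log s \,\big|_{s=1} = n \cdot \log 1 = n \cdot 0 = 0,

while charging each call at least 1, as suggested, gives

n \cdot (1 + \log s) \,\big|_{s=1} = n \cdot 1 = \Theta(n).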

Are you talking about this discussion from three years ago? --JBL (talk) 17:11, 12 December 2023 (UTC)
Oh yes, and it seems nobody else considers using asymptotic complexity for tiny data structures to be a problem. Let us hope people are smart enough not to think n calls to a data structure could take constant/zero time in total, so that there is no need to emphasize it.

Hippo.69 (talk) 21:04, 14 December 2023 (UTC)

It makes no sense for the size of a data structure to be a positive number less than one. These sizes are generally integers: integer numbers of memory cells (bits, bytes, words, whatever) or number of elements stored. Occasional care needs to be taken with logs when the size can equal one, and people are often sloppy about that, but it's harmless.
In the meantime, the much bigger problem with O-notation is the use of it with more than one variable, without any specification of how the two variables are assumed to go to infinity together or separately. It's safe for the most common usage, for graph algorithms on connected graphs as a function of vertex and edge counts, because those two things are tied to each other in both directions, but even for disconnected multigraphs one runs into this issue. —David Eppstein (talk) 21:51, 14 December 2023 (UTC)
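
A one-line sketch of why the connected case is safe, assuming simple graphs (a restatement, not part of the comment above): for a connected simple graph with n vertices and m edges,

n - 1 \le m \le \binom{n}{2},

so each of m \to \infty and n \to \infty forces the other. Dropping connectivity loses the lower bound (n can grow with m fixed), and allowing multi-edges loses the upper bound (m can grow with n fixed), which is where the issue returns.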
Yes, I was referring to the sloppiness with size 1 and \log s (s meaning not the number of bytes or cells, but the number of represented items), and I do not see a problem with a data structure state representing 0 elements.
I actually am not sure what the problem is with a complexity bound in several parameters going to infinity.
I interpret f(n_1,n_2,\dots,n_k) \in O(g(n_1,n_2,\dots,n_k)) as: there exist c and n_0 such that for all i with 1 \le i \le k and all n_i > n_0, f(n_1,n_2,\dots,n_k) < c \, g(n_1,n_2,\dots,n_k). What problems does that cause? Hippo.69 (talk) 08:00, 15 December 2023 (UTC)
So you are assuming that all n_i go to infinity at the same rate? But what if they don't? If an algorithm takes time O(f(x)+g(y)) according to your definition, and you hold x fixed while letting y grow, how much time does it take? —David Eppstein (talk) 08:13, 15 December 2023 (UTC)
No, I am assuming all n_i go to infinity, and the rate does not matter. Of course there could be other constraints (typically m < n^2 for edges and vertices, or h \le n in Kirkpatrick–Seidel, where the h hull points are a subset of the n input points), but it seems to me they are not relevant. Oh, I had not read the x, y example ... if x is held fixed, it does not go to infinity, but if it goes to infinity, it seems to me it is fine to use O as I have written it. And of course there could be constants in the expressions f, g, but no variable n_i is connected to them ... maybe try to show the confusing use in an example. Hippo.69 (talk) 08:38, 15 December 2023 (UTC)
But if you assume that all n_i are greater than some n_0, then you are assuming the rate does matter. Consider the function f(x,y) that is 2^y for x=1 but x^2+y^2 for x,y>1. For all x,y>n_0 (for instance n_0=2), it is less than some constant times x^2+y^2. Does that mean you can write that it is O(x^2+y^2)? It is, according to your definition. What if you plug that time bound into an algorithm that, say, loops over all pairs (x,y) up to n? Can you just add up the O(x^2+y^2) bounds and get a valid answer? —David Eppstein (talk) 18:28, 15 December 2023 (UTC)
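
Working out that example (my arithmetic, not part of the comment above): summing the true running times over all pairs gives

\sum_{x=1}^{n} \sum_{y=1}^{n} f(x,y) \;\ge\; \sum_{y=1}^{n} f(1,y) \;=\; \sum_{y=1}^{n} 2^y \;=\; 2^{n+1} - 2,

which is exponential in n, while naively summing the O(x^2+y^2) bounds over the same pairs suggests only \sum_{x,y \le n} (x^2+y^2) = O(n^4). So the plugged-in bound is not valid.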
OK, I get your point. When the function is not monotone, the relative growth rate matters. Hippo.69 (talk) 21:02, 15 December 2023 (UTC)

Why isn't "Little-o notation" a separate article?

As the title indicates, why isn't Little-o notation its own separate article? It seems worthy of one, as there is a lot to say about it, just as big O notation got its own article. 207.244.169.9 (talk) 02:55, 7 August 2024 (UTC)

linear function of a function

I've had two different editors ask what a linear function of a function is.

For a constant M > 0, f(g(x)) = M \cdot g(x) is an increasing linear function of g(x).

See linear function (calculus) if you are unfamiliar. I'm not aware of any particular reason that one could not describe a function this way, but if y'all have an explanation I'd love to hear it. VineFynn (talk) 08:49, 17 December 2024 (UTC)
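
For reference, the two phrasings line up as follows (a restatement, not standard terminology): f(x) = O(g(x)) means there exist M > 0 and x_0 such that f(x) \le M \, g(x) for all x > x_0. The comparison function M \, g(x) is what is called above an increasing linear function of g(x), i.e. the composition h(g(x)) with h(t) = M t.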

Hi. I can now guess what you mean by a linear function of a function, but this is certainly not standard terminology, and it is not treated in a basic calculus course as you seem to believe. In any case, a reference to this unusual notion will definitely not make the property f(x) ≤ M g(x) clearer to an unfamiliar reader. --Sapphorain (talk) 10:25, 17 December 2024 (UTC)
It's true that this special case of a composite function isn't given a distinguishing name in calculus, even though composite functions are everywhere there, but I figured that was because how to describe it is self-evident. I'm honestly baffled that anyone who has done calculus could find this terminology confusing, but evidently I am outnumbered in this respect. Do as you please. VineFynn (talk) 13:36, 17 December 2024 (UTC)