
A Deep Dive into Big O Notation with JavaScript

FrontBackGeek 5 months ago

In this article, we will explore the significance of Big O Notation in data structures and algorithms. Don’t worry; we’ll keep it simple and practical, and of course, we’ll be using JavaScript.

Why Big O Matters

In the world of programming, where efficiency matters, Big O Notation is an essential tool. It helps us predict how our code will perform as we work with more and more data.

Imagine it as a compass that guides us through the tricky parts of algorithms, helping us pick the best and most efficient ways to solve problems.


Let’s assume we have two code snippets – one with ten steps, the other with five. While the timing might vary, Big O allows us to generalize, expressing that one is more efficient than the other, irrespective of specific times.

Why Not Just Time Our Code?

You might wonder why we don’t just measure our code’s performance by timing it. Well, the problem is that timing can be unpredictable due to things like how fast your computer is, other programs running in the background, and other factors. Big O gives us a more reliable and scalable way to measure and compare code efficiency.
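As an informal illustration (a small sketch, not a substitute for Big O analysis), timing the same work twice with performance.now() – available in browsers and modern Node.js – already shows how much the numbers fluctuate from run to run; the exact figures will differ on every machine:

function sumArray(arr) {
    let sum = 0;
    for (const value of arr) {
        sum += value;
    }
    return sum;
}

const data = Array.from({ length: 1000000 }, (_, i) => i);

// Time the exact same work twice; the two measurements rarely match
for (let run = 1; run <= 2; run++) {
    const start = performance.now();
    sumArray(data);
    const end = performance.now();
    console.log(`Run ${run}: ${(end - start).toFixed(2)} ms`);
}

Big O sidesteps this noise entirely by describing how the number of operations grows, not how long they happen to take on a particular machine.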

1. Constant Time Complexity O(1)

Let’s look at a task where the number of operations stays constant, regardless of input size.

function getFirstElement(arr) {
    return arr[0]; // Always returns the first element
}

const anotherArray = [10, 20, 30, 40, 50];
console.log(getFirstElement(anotherArray)); // Output: 10

Here, retrieving the first element always takes the same amount of time, no matter how big the array is. This is constant time complexity, represented as O(1).

2. Linear Time Complexity O(n)

Consider a simple task: finding a specific element in an array.

function findElement(arr, target) {
    for (let i = 0; i < arr.length; i++) {
        if (arr[i] === target) {
            return true; // Element found
        }
    }
    return false; // Element not found
}

const myArray = [1, 2, 3, 4, 5];
console.log(findElement(myArray, 3)); // Output: true

In this example, the number of operations grows linearly with the size of the array. If we double the array’s length, the function might take roughly twice as long. This linear relationship is denoted as O(n) in Big O Notation.

Counting Operations: A Real-world Example

Let’s analyze a slightly more complex example to count the operations and determine the Big O Notation.

function exampleFunction(arr) {
    let sum = 0; // O(1)

    for (let i = 0; i < arr.length; i++) {
        sum += arr[i]; // O(1) work, executed n times, so the loop is O(n)
    }

    return sum; // O(1)
}

const numbers = [1, 2, 3, 4, 5];
console.log(exampleFunction(numbers)); // Output: 15

In this case, we’re adding up all the elements in an array. The complexity is O(1) for the initial sum assignment, O(n) for the loop, and O(1) for the final return. When simplifying, we drop constants and keep only the fastest-growing term, so this function’s Big O Notation is O(n).

3. Quadratic Time Complexity O(n^2)

Consider a numList with seven elements, the numbers 1 to 7. In a quadratic scenario, we take each element in turn and, for each one, iterate through every element in the list. Let’s break it down:

  • Select ‘1’ and iterate through ‘1’ to ‘7’
  • Select ‘2’ and iterate through ‘1’ to ‘7’
  • Repeat this pattern for ‘3’, ‘4’, ‘5’, ‘6’, ‘7’

This nested loop structure results in quadratic time complexity – for two elements, there are four operations; for three elements, nine operations, and so on.

function quadraticExample(numList) {
    for (let i = 0; i < numList.length; i++) {
        for (let j = 0; j < numList.length; j++) {
            console.log(numList[i], numList[j]);
        }
    }
}

const numList = [1, 2, 3, 4, 5, 6, 7];
quadraticExample(numList);

In this function, the nested loops showcase quadratic time complexity. The console.log within the inner loop executes for every combination of elements, leading to a total of 49 operations for seven elements (7 * 7).

Now, let’s calculate the Big O notation for this function. If we call the outer loop’s length ‘n’ and the inner loop’s length ‘m’, the complexity is O(n * m). In our case:

  • n is the length of numList
  • m is also the length of numList

Since both lengths are the same, O(n * m) simplifies to O(n * n), which is O(n^2). Running the function results in 49 operations (7 * 7), confirming the quadratic time complexity.

Rule for Multiple Inputs

Introducing a second list with a length of ‘m’, we now have two input variables. When an algorithm takes separate inputs, we keep their sizes distinct rather than collapsing them, so the complexity becomes O(n * m) instead of O(n^2). A practical demonstration in JavaScript would look like this:

function quadraticWithTwoInputs(numList, secondList) {
    for (let i = 0; i < numList.length; i++) {
        for (let j = 0; j < secondList.length; j++) {
            console.log(numList[i], secondList[j]);
        }
    }
}

const secondList = [1, 2, 3, 4, 5];
quadraticWithTwoInputs(numList, secondList);

Here, the number of operations is 35 (7 * 5), showcasing the impact of two variables on time complexity.

4. Factorial Time Complexity O(n!)

O(n!) complexity signifies factorial time, where, for every element in your array, you add another level of nested loops. Take a moment to ponder that – a new nested loop for every single item. As the number of items increases, the number of operations grows factorially, which outpaces even exponential growth. The resulting increase in time is so drastic that, for practical purposes and scalability, you will rarely encounter O(n!) in real code.

If we were to visualize this on a graph, the upward trajectory of time for O(n!) is not just bad; it’s downright horrifying for long-term scalability. While you’re unlikely to stumble upon this complexity in your coding endeavors, being aware of its existence is crucial.

Creating a practical code example for O(n!) is challenging because of its impracticality, but we can hint at the concept with deeply nested loops. Note that the snippet below fixes the nesting at three levels, so it is actually O(n^3); true factorial growth would require adding another level of nesting for every element. Here’s a JavaScript snippet to illustrate the idea:

function oFactorialExample(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      for (let k = 0; k < arr.length; k++) {
        // Nested loops for each element in the array
        console.log(`Processing element: ${arr[i]}, ${arr[j]}, ${arr[k]}`);
      }
    }
  }
}

// Example usage with an array
const sampleArray = [1, 2, 3];
oFactorialExample(sampleArray);

In this example, the function oFactorialExample takes an array as input and runs three nested loops over it. This fixed three-level nesting makes it O(n^3) rather than true O(n!), but it illustrates the core idea: every additional level of nesting multiplies the work by the array’s length again.
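A task with genuine O(n!) cost is generating every permutation of an array: for ‘n’ elements there are n! possible orderings, so both the number of results and the work to build them grow factorially. Here’s a minimal recursive sketch of that idea (a simple illustration, not an optimized implementation):

function permutations(arr) {
    // Base case: an empty or single-element array has exactly one ordering
    if (arr.length <= 1) {
        return [arr];
    }

    const result = [];
    for (let i = 0; i < arr.length; i++) {
        // Remove the i-th element and permute whatever is left
        const rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
        for (const perm of permutations(rest)) {
            result.push([arr[i], ...perm]);
        }
    }
    return result;
}

console.log(permutations([1, 2, 3]).length); // Output: 6, since 3! = 6

Even a modest input of 10 elements already produces 3,628,800 permutations, which is why O(n!) algorithms fall apart so quickly.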

5. Logarithmic Time Complexity O(log n)

Consider the statement log base ‘b’ of ‘x’ equals ‘y’: this is true exactly when ‘b^y’ equals ‘x’. For our discussion, we predominantly use binary logarithms (base 2), so log(n) means log base 2 of ‘n’.

Examples:

  • log(1) = 0, as 2^0 = 1
  • log(2) = 1, as 2^1 = 2
  • log(4) = 2, as 2^2 = 4

Doubling the Value of ‘n’

An intriguing pattern emerges with logarithms: when you double the value of ‘n’, the log value only increases by 1. This fundamental property becomes crucial in complexity analysis.
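A quick way to see this pattern in code is to print Math.log2 for doubling values of ‘n’ (a small illustrative snippet):

// Each time n doubles, log2(n) increases by only 1
for (let n = 1; n <= 1024; n *= 2) {
    console.log(`n = ${n}, log2(n) = ${Math.log2(n)}`);
}

Going from n = 512 to n = 1024 doubles the input, yet log2(n) only moves from 9 to 10.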

Real-World Application

When tied to complexity analysis, logarithmic time complexity is a powerful ally. As ‘n’ grows, the number of operations increases minimally, showcasing its efficiency. This makes O(log n) notably superior to linear complexities like O(n).

Where to Expect Logarithmic Complexity?

Logarithmic time complexity frequently finds application in sorting and searching algorithms. Let’s consider an example using an array.

Example:

Suppose we have an array of 8 elements (0 to 7). In an algorithm that halves the elements at each step, we notice a logarithmic pattern.

  1. Full Array (8 elements)
  2. Halved (4 elements)
  3. Halved Again (2 elements)
  4. Final Step (1 element)

The number of steps closely matches log(8) = 3, demonstrating that the total number of operations is approximately log(n). The same pattern appears whenever an algorithm repeatedly halves its input.

function logarithmicAlgorithm(inputArray) {
    let steps = 0;

    while (inputArray.length > 1) {
        // Display the current array state
        console.log(`Step ${steps + 1}: ${inputArray.length} elements`);

        // Halve the array
        inputArray = inputArray.slice(0, Math.ceil(inputArray.length / 2));

        // Increment the steps
        steps++;
    }

    // Display the final step
    console.log(`Step ${steps + 1}: ${inputArray.length} element (Final Step)`);
    console.log(`Total Steps: ${steps}`);
}

// Example with an array of 8 elements (0 to 7)
const arrayExample = Array.from({ length: 8 }, (_, i) => i);
logarithmicAlgorithm(arrayExample);

This program initializes an array with eight elements (0 to 7) and demonstrates the described algorithm, printing the array state at each step. The loop continues until there is only one element left in the array. The total number of steps is then displayed, showcasing the logarithmic pattern.
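To tie this to a real searching algorithm, here is a minimal binary search sketch over a sorted array; because it halves the search range on every iteration, it runs in O(log n):

function binarySearch(sortedArr, target) {
    let low = 0;
    let high = sortedArr.length - 1;

    while (low <= high) {
        // Look at the middle of the remaining range
        const mid = Math.floor((low + high) / 2);

        if (sortedArr[mid] === target) {
            return mid; // Found: return the index
        } else if (sortedArr[mid] < target) {
            low = mid + 1; // Discard the lower half
        } else {
            high = mid - 1; // Discard the upper half
        }
    }
    return -1; // Not found
}

const sortedNumbers = [1, 3, 5, 7, 9, 11, 13, 15];
console.log(binarySearch(sortedNumbers, 9)); // Output: 4
console.log(binarySearch(sortedNumbers, 4)); // Output: -1

Doubling the array from 8 to 16 elements adds at most one extra comparison – exactly the logarithmic pattern described above.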

Conclusion

Understanding Big O Notation helps us write efficient code that performs well even as our applications grow. Whether it’s a simple linear search or a more intricate algorithm, being aware of the scalability of our code is key to becoming a proficient JavaScript developer.
