What is the most ridiculous pessimization you've seen?

Performance, Optimization

Performance Problem Overview


We all know that premature optimization is the root of all evil because it leads to unreadable/unmaintainable code. Even worse is pessimization, when someone implements an "optimization" because they think it will be faster, but it ends up being slower, as well as being buggy, unmaintainable, etc. What is the most ridiculous example of this that you've seen?

Performance Solutions


Solution 1 - Performance

I think the phrase "premature optimization is the root of all evil" is way, way overused. For many projects, it has become an excuse not to take performance into account until late in a project.

This phrase is often a crutch for people to avoid work. I see this phrase used when people should really say "Gee, we really didn't think of that up front and don't have time to deal with it now".

I've seen many more "ridiculous" examples of dumb performance problems than examples of problems introduced due to "pessimization":

  • Reading the same registry key thousands (or tens of thousands) of times during program launch.
  • Loading the same DLL hundreds or thousands of times.
  • Wasting megabytes of memory by keeping full paths to files needlessly.
  • Not organizing data structures, so they take up way more memory than they need.
  • Sizing all strings that store file names or paths to MAX_PATH.
  • Gratuitous polling for things that have events, callbacks, or other notification mechanisms.

What I think is a better statement is this: "optimization without measuring and understanding isn't optimization at all - it's just random change".

Good performance work is time consuming - often more so than the development of the feature or component itself.

Solution 2 - Performance

Databases are pessimization playland.

Favorites include:

  • Split a table into multiples (by date range, alphabetic range, etc.) because it's "too big".
  • Create an archive table for retired records, but continue to UNION it with the production table.
  • Duplicate entire databases by (division/customer/product/etc.)
  • Resist adding columns to an index because it makes it too big.
  • Create lots of summary tables because recalculating from raw data is too slow.
  • Create columns with subfields to save space.
  • Denormalize into fields-as-an-array.

That's off the top of my head.

Solution 3 - Performance

I think there is no absolute rule: some things are best optimized upfront, and some are not.

For example, I worked in a company where we received data packets from satellites. Each packet cost a lot of money, so all the data was highly optimized (i.e. packed). For example, latitude/longitude was not sent as absolute values (floats), but as offsets relative to the "north-west" corner of a "current" zone. We had to unpack all the data before it could be used. But I think this is not pessimization, it is intelligent optimization to reduce communication costs.
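As an illustration only (the real packet format was proprietary; the names, zone convention, and scale factor below are my own assumptions), delta-encoding a position against a known zone corner looks roughly like this in Java:

// Hypothetical sketch: store two 16-bit offsets from the zone's north-west corner
// instead of two absolute floats, halving the size of each position.
class PackedPosition {
    static final double SCALE = 1e-4; // degrees per unit; an assumed resolution

    static short[] pack(double lat, double lon, double zoneNwLat, double zoneNwLon) {
        short dLat = (short) Math.round((zoneNwLat - lat) / SCALE); // south of the NW corner
        short dLon = (short) Math.round((lon - zoneNwLon) / SCALE); // east of the NW corner
        return new short[] { dLat, dLon };                          // 4 bytes instead of 8
    }

    static double[] unpack(short dLat, short dLon, double zoneNwLat, double zoneNwLon) {
        return new double[] { zoneNwLat - dLat * SCALE, zoneNwLon + dLon * SCALE };
    }
}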

On the other hand, our software architects decided that the unpacked data should be formatted into a very readable XML document and stored in our database as such (as opposed to having each field stored in a corresponding column). Their idea was that "XML is the future", "disk space is cheap", and "processor is cheap", so there was no need to optimize anything. The result was that our 16-byte packets were turned into 2 kB documents stored in one column, and for even simple queries we had to load megabytes of XML documents in memory! We received over 50 packets per second, so you can imagine how horrible the performance became (BTW, the company went bankrupt).

So again, there is no absolute rule. Yes, sometimes optimization too early is a mistake. But sometimes the "cpu/disk space/memory is cheap" motto is the real root of all evil.

Solution 4 - Performance

On an old project we inherited some (otherwise excellent) embedded systems programmers who had massive Z-8000 experience.

Our new environment was 32-bit Sparc Solaris.

One of the guys went and changed all ints to shorts to speed up our code, since grabbing 16 bits from RAM was quicker than grabbing 32 bits.

I had to write a demo program to show that grabbing 32-bit values on a 32-bit system was faster than grabbing 16-bit values, and explain that to grab a 16-bit value the CPU had to make a 32-bit wide memory access and then mask out or shift the bits not needed for the 16-bit value.

Solution 5 - Performance

Oh good Lord, I think I have seen them all. More often than not it is an effort to fix performance problems by someone who is too darn lazy to troubleshoot their way down to the CAUSE of those performance problems, or even to research whether there actually IS a performance problem. In many of these cases I wonder if it isn't just a case of that person wanting to try a particular technology and desperately looking for a nail that fits their shiny new hammer.

Here's a recent example:

Data architect comes to me with an elaborate proposal to vertically partition a key table in a fairly large and complex application. He wants to know what type of development effort would be necessary to adjust for the change. The conversation went like this:

Me: Why are you considering this? What is the problem you are trying to solve?

Him: Table X is too wide, we are partitioning it for performance reasons.

Me: What makes you think it is too wide?

Him: The consultant said that is way too many columns to have in one table.

Me: And this is affecting performance?

Him: Yes, users have reported intermittent slowdowns in the XYZ module of the application.

Me: How do you know the width of the table is the source of the problem?

Him: That is the key table used by the XYZ module, and it is like 200 columns. It must be the problem.

Me (Explaining): But module XYZ in particular uses most of the columns in that table, and the columns it uses are unpredictable because the user configures the app to show the data they want to display from that table. It is likely that 95% of the time we'd wind up joining all the tables back together anyway which would hurt performance.

Him: The consultant said it is too wide and we need to change it.

Me: Who is this consultant? I didn't know we hired a consultant, nor did they talk to the development team at all.

Him: Well, we haven't hired them yet. This is part of a proposal they offered, but they insisted we needed to re-architect this database.

Me: Uh huh. So the consultant who sells database re-design services thinks we need a database re-design....

The conversation went on and on like this. Afterward, I took another look at the table in question and determined that it could probably be narrowed with some simple normalization, with no need for exotic partitioning strategies. This, of course, turned out to be a moot point once I investigated the performance problems (previously unreported) and tracked them down to two factors:

  1. Missing indexes on a few key columns.
  2. A few rogue data analysts who were periodically locking key tables (including the "too-wide" one) by querying the production database directly with MSAccess.

Of course, the architect is still pushing for vertical partitioning of the table, hanging on to the "too wide" meta-problem. He even bolstered his case by getting a proposal from another database consultant who was able to determine that we needed major design changes to the database without looking at the app or running any performance analysis.

Solution 6 - Performance

I have seen people using alphadrive-7 to totally incubate CHX-LT. This is an uncommon practice. The more common practice is to initialize the ZT transformer so that bufferication is reduced (due to greater net overload resistance) and create java style bytegraphications.

Totally pessimistic!

Solution 7 - Performance

Nothing Earth-shattering, I admit, but I've caught people using StringBuffer to concatenate Strings outside of a loop in Java. It was something simple like turning

String msg = "Count = " + count + " of " + total + ".";

into

StringBuffer sb = new StringBuffer("Count = ");
sb.append(count);
sb.append(" of ");
sb.append(total);
sb.append(".");
String msg = sb.toString();

It used to be quite common practice to use the technique in a loop, because it was measurably faster. The thing is, StringBuffer is synchronized, so there's actually extra overhead if you're only concatenating a few Strings. (Not to mention that the difference is absolutely trivial on this scale.) Two other points about this practice:

  1. StringBuilder is unsynchronized, so should be preferred over StringBuffer in cases where your code can't be called from multiple threads.
  2. Modern Java compilers will turn readable String concatenation into optimized bytecode for you when it's appropriate anyway (see the sketch below).
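For illustration (my own sketch, not the code I caught): a builder only pays for itself when appends actually repeat, e.g. in a loop, while a one-off message is best left as plain concatenation.

import java.util.List;

class JoinExample {
    // Where a builder genuinely helps: repeated appends in a loop avoid creating
    // a new intermediate String on every iteration.
    static String join(List<String> items) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < items.size(); i++) {
            if (i > 0) {
                sb.append(", ");
            }
            sb.append(items.get(i));
        }
        return sb.toString();
    }

    // Outside a loop, plain concatenation is clearer, and the compiler generates
    // equivalent builder-based code for it anyway.
    static String countMessage(int count, int total) {
        return "Count = " + count + " of " + total + ".";
    }
}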

Solution 8 - Performance

I once saw a MSSQL database that used a 'Root' table. The Root table had four columns: GUID (uniqueidentifier), ID (int), LastModDate (datetime), and CreateDate (datetime). All tables in the database were Foreign Key'd to the Root table. Whenever a new row was created in any table in the db, you had to use a couple of stored procedures to insert an entry in the Root table before you could get to the actual table you cared about (rather than the database doing the job for you with a few simple triggers).

This created a mess of useless overhead and headaches, required anything written on top of it to use sprocs (which eliminated my hopes of introducing LINQ to the company; it was possible, but just not worth the headache), and to top it off didn't even accomplish what it was supposed to do.

The developer that chose this path defended it under the assumption that this saved tons of space because we weren't using Guids on the tables themselves (but...isn't a GUID generated in the Root table for every row we make?), improved performance somehow, and made it "easy" to audit changes to the database.

Oh, and the database diagram looked like a mutant spider from hell.

Solution 9 - Performance

How about POBI -- pessimization obviously by intent?

A colleague of mine in the 90s was tired of getting kicked in the ass by the CEO just because the CEO spent the first day of every release of our ERP software (a custom one) locating performance issues in the new functionality. Even if the new functionality crunched gigabytes and made the impossible possible, he always found some detail, or even a seemingly major issue, to whine about. He believed he knew a lot about programming and got his kicks by kicking programmer asses.

Due to the incompetent nature of the criticism (he was a CEO, not an IT guy), my colleague never managed to get it right. If you do not have a performance problem, you cannot eliminate it...

Until, for one release, he put a lot of Delay(200) function calls (it was Delphi) into the new code. It took just 20 minutes after go-live before he was ordered to appear in the CEO's office to fetch his overdue insults in person.

The only unusual thing so far was my colleague's mood when he returned: smiling, joking, going out for a Big Mac or two, while he normally would kick tables, flame about the CEO and the company, and spend the rest of the day utterly dejected.

Naturally, my colleague rested for one or two days at his desk, improving his aiming skills in Quake -- then on the second or third day he deleted the Delay calls, rebuilt, and released an "emergency patch", spreading the word that he had spent 2 days and 1 night fixing the performance holes.

This was the first (and only) time that evil CEO said "great job!" to him. That's all that counts, right?

This was real POBI.

But it also is a kind of social process optimization, so it's 100% ok.

I think.

Solution 10 - Performance

"Database Independence". This meant no stored procs, triggers, etc - not even any foreign keys.

Solution 11 - Performance

var stringBuilder = new StringBuilder();
stringBuilder.Append(myObj.a + myObj.b + myObj.c + myObj.d);
string cat = stringBuilder.ToString();

Best use of a StringBuilder I've ever seen.

Solution 12 - Performance

Using a regex to split a string when a simple string.split suffices

Solution 13 - Performance

Very late to this thread I know, but I saw this recently:

bool isFinished = GetIsFinished();

switch (isFinished)
{
    case true:
        DoFinish();
        break;

    case false:
        DoNextStep();
        break;

    default:
        DoNextStep();
}

Y'know, just in case a boolean had some extra values...

Solution 14 - Performance

Worst example I can think of is an internal database at my company containing information on all employees. It gets a nightly update from HR and has an ASP.NET web service on top. Many other apps use the web service to populate things like search/dropdown fields.

The pessimism is that the developer thought the repeated SQL queries behind those web service calls would be too slow. So what did he do? The application start event reads in the entire database and converts it all to objects in memory, stored indefinitely until the app pool is recycled. This code was so slow that it would take 15 minutes to load fewer than 2,000 employees. If you inadvertently recycled the app pool during the day, it could take 30 minutes or more, because each web service request would start multiple concurrent reloads. For this reason, new hires wouldn't appear in the data on the first day their account was created, and therefore couldn't access most internal apps during their first couple of days, left twiddling their thumbs.
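For contrast, here is a minimal sketch (mine, with invented names, not the service's actual code) of a lazily loaded cache where only one thread ever performs the reload, which would have avoided both the startup stall and the stampede of concurrent reloads:

import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

class EmployeeCache {
    private final AtomicReference<List<String>> cache = new AtomicReference<>();

    // Stand-in for the real query; in the original service this was the
    // full-table load that took 15+ minutes.
    private List<String> queryEmployees() {
        return List.of();
    }

    List<String> employees() {
        List<String> current = cache.get();
        if (current != null) {
            return current;            // fast path: already loaded
        }
        synchronized (this) {          // single flight: only one thread reloads
            current = cache.get();
            if (current == null) {
                current = queryEmployees();
                cache.set(current);
            }
            return current;
        }
    }
}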

The second level of pessimism is that the development manager doesn't want to touch it for fear of breaking dependent applications, yet we continue to have sporadic company-wide outages of critical applications due to the poor design of such a simple component.

Solution 15 - Performance

No one seems to have mentioned sorting, so I will.

Several different times, I've discovered that someone had hand-crafted a bubblesort, because the situation "didn't require" a call to the "too fancy" quicksort algorithm that already existed. The developer was satisfied when their hand-crafted bubblesort worked well enough on the ten rows of data they were using for testing. It didn't go over quite as well after the customer had added a couple of thousand rows.

Solution 16 - Performance

I once worked on an app that was full of code like this:

 1 tuple *FindTuple( DataSet *set, int target ) {
 2     tuple *found = null;
 3     tuple *curr = GetFirstTupleOfSet(set);
 4     while (curr) {
 5         if (curr->id == target)
 6             found = curr;
 7         curr = GetNextTuple(curr);
 8     }
 9     return found;
10 }

Simply removing found, returning null at the end, and changing the sixth line to:

            return curr;

doubled the app's performance.

Solution 17 - Performance

I once had to attempt to modify code that included these gems in the Constants class:

public static String COMMA_DELIMINATOR=",";
public static String COMMA_SPACE_DELIMINATOR=", ";
public static String COLIN_DELIMINATOR=":";

Each of these was used multiple times in the rest of the application for different purposes. COMMA_DELIMINATOR littered the code with over 200 uses in 8 different packages.

Solution 18 - Performance

The all-time number one, which I run into time and time again in in-house software:

Not using the features of the DBMS for "portability" reasons because "we might want to switch to another vendor later".

Read my lips. For any in-house work: IT WILL NOT HAPPEN!

Solution 19 - Performance

I had a co-worker who was trying to outwit our C compiler's optimizer and routinely rewrote code that only he could read. One of his favorite tricks was changing a readable method like (making up some code):

int some_method(int input1, int input2) {
    int x;
    if (input1 == -1) {
        return 0;
    }
    if (input1 == input2) {
        return input1;
    }
    ... a long expression here ...
    return x;
}

into this:

int some_method(int input1, int input2) {
    return (input1 == -1) ? 0 : (input1 == input2) ? input1 :
           ... a long expression ...
           ... a long expression ...
           ... a long expression ...
}

That is, the first line of a once-readable method would become "return" and all other logic would be replaced by deeply nested ternary expressions. When you tried to argue about how this was unmaintainable, he would point to the fact that the assembly output of his method was three or four assembly instructions shorter. It wasn't necessarily any faster, but it was always a tiny bit shorter. This was an embedded system where memory usage occasionally did matter, but there were far easier optimizations available that would have left the code readable.

Then, after this, for some reason he decided that ptr->structElement was too unreadable, so he started changing all of these into (*ptr).structElement on the theory that it was more readable and faster as well.

Turning readable code into unreadable code for at most a 1% improvement, and sometimes actually slower code.

Solution 20 - Performance

In one of my first jobs as a full-fledged developer, I took over a project for a program that was suffering scaling issues. It would work reasonably well on small data sets, but would completely crash when given large quantities of data.

As I dug in, I found that the original programmer had sought to speed things up by parallelizing the analysis - launching a new thread for each additional data source. However, he'd made a mistake in that all threads required a shared resource, on which they were deadlocking. Of course, all benefits of concurrency disappeared. Moreover, launching 100+ threads only to have all but one of them lock crashed most systems. My beefy dev machine was an exception in that it churned through a 150-source dataset in around 6 hours.

So to fix it, I removed the multi-threading components and cleaned up the I/O. With no other changes, execution time on the 150-source dataset dropped below 10 minutes on my machine, and from infinity to under half an hour on the average company machine.

Solution 21 - Performance

I suppose I could offer this gem:

unsigned long isqrt(unsigned long value)
{
    unsigned long tmp = 1, root = 0;
    #define ISQRT_INNER(shift) \
    { \
        if (value >= (tmp = ((root << 1) + (1 << (shift))) << (shift))) \
        { \
            root += 1 << shift; \
            value -= tmp; \
        } \
    }

    // Find out how many bytes our value uses
    // so we don't do any unneeded work.
    if (value & 0xffff0000)
    {
        if ((value & 0xff000000) == 0)
            tmp = 3;
        else
            tmp = 4;
    }
    else if (value & 0x0000ff00)
        tmp = 2;

    switch (tmp)
    {
        case 4:
            ISQRT_INNER(15);
            ISQRT_INNER(14);
            ISQRT_INNER(13);
            ISQRT_INNER(12);
        case 3:
            ISQRT_INNER(11);
            ISQRT_INNER(10);
            ISQRT_INNER( 9);
            ISQRT_INNER( 8);
        case 2:
            ISQRT_INNER( 7);
            ISQRT_INNER( 6);
            ISQRT_INNER( 5);
            ISQRT_INNER( 4);
        case 1:
            ISQRT_INNER( 3);
            ISQRT_INNER( 2);
            ISQRT_INNER( 1);
            ISQRT_INNER( 0);
    }
#undef ISQRT_INNER
    return root;
}

Since the square-root was calculated at a very sensitive place, I got the task of looking into a way to make it faster. This small refactoring reduced the execution time by a third (for the combination of hardware and compiler used, YMMV):

unsigned long isqrt(unsigned long value)
{
    unsigned long tmp = 1, root = 0;
    #define ISQRT_INNER(shift) \
    { \
        if (value >= (tmp = ((root << 1) + (1 << (shift))) << (shift))) \
        { \
            root += 1 << shift; \
            value -= tmp; \
        } \
    }

    ISQRT_INNER (15);
    ISQRT_INNER (14);
    ISQRT_INNER (13);
    ISQRT_INNER (12);
    ISQRT_INNER (11);
    ISQRT_INNER (10);
    ISQRT_INNER ( 9);
    ISQRT_INNER ( 8);
    ISQRT_INNER ( 7);
    ISQRT_INNER ( 6);
    ISQRT_INNER ( 5);
    ISQRT_INNER ( 4);
    ISQRT_INNER ( 3);
    ISQRT_INNER ( 2);
    ISQRT_INNER ( 1);
    ISQRT_INNER ( 0);

#undef ISQRT_INNER
    return root;
}

Of course there are both faster AND better ways to do this, but I think it's a pretty neat example of a pessimization.

Edit: Come to think of it, the unrolled loop was actually also a neat pessimization. Digging through the version control, I can present the second stage of refactoring as well, which performed even better than the above:

unsigned long isqrt(unsigned long value)
{
    unsigned long tmp = 1 << 30, root = 0;

    while (tmp != 0)
    {
        if (value >= root + tmp) {
            value -= root + tmp;
            root += tmp << 1;
        }
        root >>= 1;
        tmp >>= 2;
    }

    return root;
}

This is exactly the same algorithm, albeit a slightly different implementation, so I suppose it qualifies.

Solution 22 - Performance

This might be at a higher level than what you were after, but fixing it (if you're allowed) also involves a higher level of pain:

Insisting on hand-rolling an object-relational mapper / data access layer instead of using one of the established, tested, mature libraries out there (even after they've been pointed out to you).

Solution 23 - Performance

All foreign-key constraints were removed from a database, because otherwise there would be so many errors.

Solution 24 - Performance

Checking before EVERY JavaScript operation whether the object you are operating upon exists.

if (myObj) { //or its evil cousin, if (myObj != null) {
    label.text = myObj.value; 
    // we know label exists because it has already been 
    // checked in a big if block somewhere at the top
}

My problem with this type of code is that nobody seems to care what happens if it doesn't exist. Just do nothing? Don't give any feedback to the user?

I agree that the 'Object expected' errors are annoying, but this is not the best solution for that.

Solution 25 - Performance

This doesn't exactly fit the question, but I'll mention it anyway as a cautionary tale. I was working on a distributed app that was running slowly, and flew down to DC to sit in on a meeting primarily aimed at solving the problem. The project lead started to outline a re-architecture aimed at resolving the delay. I volunteered that I had taken some measurements over the weekend that isolated the bottleneck to a single method. It turned out there was a missing record in a local lookup, causing the application to have to go to a remote server on every transaction. By adding the record back to the local store, the delay was eliminated - problem solved. Note that the re-architecture wouldn't have fixed the problem.

Solution 26 - Performance

How about YAGNI extremism? It is a form of premature pessimization. It seems like any time you apply YAGNI, you end up needing it, resulting in 10 times the effort to add it later than if you had added it in the beginning. If you create a successful program, then odds are YOU ARE GOING TO NEED IT. If you are used to creating programs whose life runs out quickly, then continue to practice YAGNI, because then I suppose YAGNI.

Solution 27 - Performance

Not exactly premature optimisation - but certainly misguided - I read this on the BBC website, in an article discussing Windows 7.

> Mr Curran said that the Microsoft Windows team had been poring over every aspect of the operating system to make improvements. "We were able to shave 400 milliseconds off the shutdown time by slightly trimming the WAV file shutdown music."

Now, I haven't tried Windows 7 yet, so I might be wrong, but I'm willing to bet that there are other issues in there that are more important than how long it takes to shut down. After all, once I see the 'Shutting down Windows' message, the monitor is turned off and I'm walking away - how does that 400 milliseconds benefit me?

Solution 28 - Performance

Someone in my department once wrote a string class. An interface like CString, but without the Windows dependence.

One "optimization" they did was to not allocate any more memory than necessary. Apparently not realizing that the reason classes like std::string do allocate excess memory is so that a sequence of += operations can run in O(n) time.

Instead, every single += call forced a reallocation, which turned repeated appends into an O(n²) Schlemiel the Painter's algorithm.
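The arithmetic behind that is easy to sketch (this is my own toy model, not the coworker's class): exact-fit reallocation copies every existing character on every append, while geometric growth copies each character only a constant number of times.

// Counts how many characters get copied when appending n characters one at a
// time, for exact-fit growth versus capacity doubling.
public class AppendCost {
    static long copiesWithGrowth(int n, boolean doubling) {
        long copied = 0;
        int length = 0, capacity = 0;
        for (int i = 0; i < n; i++) {
            if (length + 1 > capacity) {
                copied += length;                                   // realloc copies what we have
                capacity = doubling ? Math.max(1, capacity * 2) : length + 1;
            }
            length++;
        }
        return copied;
    }

    public static void main(String[] args) {
        int n = 100_000;
        System.out.println("exact fit: " + copiesWithGrowth(n, false)); // about n*n/2 copies
        System.out.println("doubling:  " + copiesWithGrowth(n, true));  // fewer than 2*n copies
    }
}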

Solution 29 - Performance

An ex-coworker of mine (a s.o.a.b., actually) was assigned to build a new module for our Java ERP that was supposed to collect and analyze customers' data (retail industry). He decided to split EVERY Calendar/Datetime field into its components (seconds, minutes, hours, day, month, year, day of week, bimester, trimester (!)) because "how else would I query for 'every Monday'?"
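For the record, a single timestamp column already answers that question; here is a minimal sketch (my own names, assuming java.time), and in SQL a day-of-week date function over the one column does the same job:

import java.time.DayOfWeek;
import java.time.LocalDateTime;
import java.util.List;
import java.util.stream.Collectors;

class MondayFilter {
    record Sale(LocalDateTime when, double amount) {}

    // "Every Monday" without splitting the timestamp into separate columns.
    static List<Sale> mondaysOnly(List<Sale> sales) {
        return sales.stream()
                    .filter(s -> s.when().getDayOfWeek() == DayOfWeek.MONDAY)
                    .collect(Collectors.toList());
    }
}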

Solution 30 - Performance

No offense to anyone, but I just graded an assignment (Java) that had this:

import java.lang.*;

Solution 31 - Performance

Maybe just having a quick glance over the system early on will help point to the possible bottlenecks.

"This part doesnt need to be fast" (archiving logs) "This part must be hella fast" (accepting new connections)

Then the very fast parts usually don't need to be extra-optimised with dirty tricks; decent hardware and well-coded parts will usually suffice.

Just answering the simple question "Do I gain anything from having this part of code very fast?" will be a great guideline. I mean, using common sense optimises other parts of the project!

Solution 32 - Performance

I don't think pessimization is rare. From my experience doing performance tuning, a lot of the poor performance is caused by "good programming practice" justified in the name of "efficiency". Examples:

  • Map collections or "dictionaries"
    These usually make use of some kind of hash-coding, so they will have O(1) performance, but will only break even when filled with far more items than are typically used.

  • Iterators
    These are justified as being possibly optimized into efficient inline code, when it is seldom checked to see if they actually are.

  • Notifications and event handling as a way to keep data consistent
    Since the data structure is seldom normalized, inconsistency must be managed, and notification is the usual method because it supposedly takes care of the problem "immediately". However, there is a big difference between immediacy and efficiency. Also, "properties", when got or set, are encouraged to reach deep into the data structure to try to keep it consistent. These "short leash" methods can cause large amounts of wasted computation. "Long leash" methods, such as periodically cycling through the data structure to "repair" it, can be a little less "immediate" but much more efficient (see the sketch below).
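A rough sketch of the contrast (my own example with invented names, not taken from any of the systems above): the "short leash" setter re-derives a summary on every write, while the "long leash" version accepts cheap writes and repairs the summary periodically or on demand.

import java.util.ArrayList;
import java.util.List;

class OrderTotals {
    private final List<Double> prices = new ArrayList<>();
    private double cachedTotal = 0.0;

    // Short leash: every write immediately walks the whole structure.
    void addPriceEager(double p) {
        prices.add(p);
        cachedTotal = prices.stream().mapToDouble(Double::doubleValue).sum();
    }

    // Long leash: writes stay cheap; consistency is restored in one sweep.
    void addPriceLazy(double p) {
        prices.add(p);
    }

    void repair() { // e.g. run from a timer, or just before the total is read
        cachedTotal = prices.stream().mapToDouble(Double::doubleValue).sum();
    }

    double total() {
        return cachedTotal;
    }
}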


Solution 33 - Performance

while true; do echo 3 > /proc/sys/vm/drop_caches; sleep 3600; done

This caused the kernel to spend time clearing out disk cache, and once it succeeded, everything ran slowly until the cache got repopulated. Caused by a misapprehension that disk cache prevented memory from being available for applications to use.

Solution 34 - Performance

Some colleagues of mine, who were on an "optimization" project for existing server-side batches (written in C++), "optimized" the logging class (!) to death, using Win32-specific code and functions.

Maybe the bottleneck was in logger.write(...), who knows...

Solution 35 - Performance

One co-worker had to check access to the page for a specific role - "Admin" only. This is what she wrote:


if( CurrentUser.CurrentRole == "Role1" || CurrentUser.CurrentRole == "Role2")  
{
// Access denied
} 
else
{
// Access granted
}

instead of

if( !CurrentUser.CurrentRole.equals("Admin") ) 
{
 // access denied
}  

So whenever a new role was added to the system, the new role had access to all confidential pages.


The same coworker also joined the production and archive tables in all queries.

Solution 36 - Performance

Another fancy performance trick :)

if (!loadFromDb().isEmpty) {
    resultList = loadFromDb();
    // do something with results
}

For the small price of an extra DB hit, you save all the time it takes to run the ten or so lines of code that probably wouldn't do much on an empty list anyway. And things like this were scattered all over the code :)
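The fix is just to call the database once and reuse the result; a sketch, with a stub standing in for the real loadFromDb:

import java.util.Collections;
import java.util.List;

class ReportStep {
    // Stand-in for the real data-access call in the original snippet.
    static List<String> loadFromDb() {
        return Collections.emptyList();
    }

    static void run() {
        List<String> resultList = loadFromDb();   // one DB hit
        if (!resultList.isEmpty()) {
            // do something with results
        }
    }
}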

Solution 37 - Performance

One company I visited as a consultant many years ago had written a sort function that looked something like this:

procedure sort(string[] values, string direction)
begin
  while not sorted do
  begin
    for every value in values
    begin
      if direction="Ascending" then
      begin
        ... swap values in ascending order
      end
      else if direction="Descending" then
      begin
        ... swap values in descending order
      end
    end;
  end;
end;

That is a bubblesort algorithm (which is inefficient in itself) with a string comparison for the direction in the inner loop! I could hardly believe my eyes and explained that yes, I could probably make a couple of speed improvements here (they were my clients, after all, so I had to be diplomatic about the fact that this was the most non-optimal code I had ever seen :-) )
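A hedged sketch of the obvious replacement (my own names, not their code): decide the direction once, outside the comparison, and let a library sort do the work.

import java.util.Arrays;
import java.util.Comparator;

class DirectionalSort {
    static void sort(String[] values, boolean ascending) {
        Comparator<String> cmp = Comparator.naturalOrder();
        if (!ascending) {
            cmp = cmp.reversed();   // direction chosen once, not re-checked per comparison
        }
        Arrays.sort(values, cmp);   // O(n log n) instead of a bubble sort
    }
}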

Solution 38 - Performance

I was going to mention using StringBuilder for tiny/non-looping string concats, but it's been mentioned.

Putting a method's variables into private class members to prevent them from getting "garbage collected every time the method runs." The variables are value types.

Solution 39 - Performance

An application that used an integer field, allocated bitwise, to track which application access groups our clients could add their users to. That meant at the time we could create a grand total of 32 groups to be shared across all 500+ clients.

Aaaah, but a bitwise comparison is faster than an equals, and waaay faster than a join, right?

Unfortunately, when I completely (and rather vocally) freaked out at this code and its author, I discovered the author was my boss's boss. A rather authoritarian dude, it turns out.

P.S.

I know what you are thinking: it totally should have been a binary string, right? :)
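For anyone who hasn't met this pattern, a hypothetical reconstruction (the field and method names are mine) of the kind of bit-flag scheme described above, which hard-caps the whole system at 32 groups:

class UserAccess {
    private int groupMask;                  // one bit per group: at most 32 groups, ever

    void addToGroup(int groupIndex) {       // groupIndex must be in 0..31
        groupMask |= (1 << groupIndex);
    }

    boolean isInGroup(int groupIndex) {
        return (groupMask & (1 << groupIndex)) != 0;
    }
}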

Solution 40 - Performance

A lot of programmers don't know or don't want to know SQL, so they find "tricks" to avoid really using SQL and to get the data into an array instead. Arrays make some people happy. (I love both cursors and arrays. Coke and Pepsi.) I have found these two blocks of code in a few object-oriented programmers' code; they complained that relational databases are slow. (The answer is not more memory or more processors.)

The table in this case is a huge table where uniqueid_col is a unique id for each row.

Load this data into arrayX (because arrays must be faster):

Select uniqueid_col, col2, col3
from super_big_tbl

(pseudo code)

Loop
  arrayX.next_record
  if uniqueid_col = '829-39-3984'
    return col2
  end if
end loop

(My answer is at the bottom.)

This next one is a simple mistake I have also seen. The idea is that you never get a duplicate this way:


Select uniqueid_col, col2, col3
from super_big_tbl
group by uniqueid_col, col2, col3
having uniqueid_col = '829-39-3984'

The correct syntax should be:


Select uniqueid_col, col2, col3
from super_big_tbl
where uniqueid_col = '829-39-3984'

Solution 41 - Performance

Any significant optimization effort that isn't based on triaged reports from a profiler tool earns a big WTF from me.

Solution 42 - Performance

I've got an intentional one... I once implemented sorting through backtracking... just as a proof of concept ;)) Needless to say, its performance was horrific.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | dsimcha | View Question on Stackoverflow
Solution 1 - Performance | Foredecker | View Answer on Stackoverflow
Solution 2 - Performance | dkretz | View Answer on Stackoverflow
Solution 3 - Performance | MiniQuark | View Answer on Stackoverflow
Solution 4 - Performance | Mark Harrison | View Answer on Stackoverflow
Solution 5 - Performance | JohnFx | View Answer on Stackoverflow
Solution 6 - Performance | Zalaga | View Answer on Stackoverflow
Solution 7 - Performance | Bill the Lizard | View Answer on Stackoverflow
Solution 8 - Performance | Dusda | View Answer on Stackoverflow
Solution 9 - Performance | TheBlastOne | View Answer on Stackoverflow
Solution 10 - Performance | chris | View Answer on Stackoverflow
Solution 11 - Performance | luft | View Answer on Stackoverflow
Solution 12 - Performance | Cherian | View Answer on Stackoverflow
Solution 13 - Performance | Damovisa | View Answer on Stackoverflow
Solution 14 - Performance | spoulson | View Answer on Stackoverflow
Solution 15 - Performance | Dan Breslau | View Answer on Stackoverflow
Solution 16 - Performance | Dour High Arch | View Answer on Stackoverflow
Solution 17 - Performance | KitsuneYMG | View Answer on Stackoverflow
Solution 18 - Performance | Peter Stuer | View Answer on Stackoverflow
Solution 19 - Performance | Eddie | View Answer on Stackoverflow
Solution 20 - Performance | Jeffrey Blake | View Answer on Stackoverflow
Solution 21 - Performance | Christoffer | View Answer on Stackoverflow
Solution 22 - Performance | Gordon Hartley | View Answer on Stackoverflow
Solution 23 - Performance | Guge | View Answer on Stackoverflow
Solution 24 - Performance | Chetan S | View Answer on Stackoverflow
Solution 25 - Performance | Jack BeNimble | View Answer on Stackoverflow
Solution 26 - Performance | Dunk | View Answer on Stackoverflow
Solution 27 - Performance | belugabob | View Answer on Stackoverflow
Solution 28 - Performance | dan04 | View Answer on Stackoverflow
Solution 29 - Performance | Joril | View Answer on Stackoverflow
Solution 30 - Performance | Overflown | View Answer on Stackoverflow
Solution 31 - Performance | Eric | View Answer on Stackoverflow
Solution 32 - Performance | Mike Dunlavey | View Answer on Stackoverflow
Solution 33 - Performance | dhasenan | View Answer on Stackoverflow
Solution 34 - Performance | friol | View Answer on Stackoverflow
Solution 35 - Performance | Abhishek | View Answer on Stackoverflow
Solution 36 - Performance | Slartibartfast | View Answer on Stackoverflow
Solution 37 - Performance | Ville Krumlinde | View Answer on Stackoverflow
Solution 38 - Performance | Jeremy | View Answer on Stackoverflow
Solution 39 - Performance | Nat | View Answer on Stackoverflow
Solution 40 - Performance | Stradas | View Answer on Stackoverflow
Solution 41 - Performance | JeffH | View Answer on Stackoverflow
Solution 42 - Performance | luvieere | View Answer on Stackoverflow