
JavaScript performance test, too fast to be true?



I found a random question on StackOverflow regarding the performance of C++ and C#, using a prime number counter. I changed the code a bit and translated it into JavaScript to test the results. I can't understand why the JavaScript execution is a lot faster than C++ (g++).

JS code:

var primes = function(n) {
    var i, j, countprimes, isprime, i_sqrR;
    countprimes = 0;
    for (i = 1; i <= n; i++) {
        isprime = 1;
        i_sqrR = Math.sqrt(i);
        for (j = 2; j < i_sqrR; j++) {
            if (!(i % j)) {
                isprime = 0;
                break;
            }
        }
        countprimes += isprime;
    }
    return countprimes;
};

var n = 10000000;
var start = new Date().getTime();
var result = primes(n);
var end = new Date().getTime();
var time = end - start;
document.writeln("There are " + result + " prime numbers between 1 and " + n + ". <br />");
document.writeln('Execution time: ' + time + " milliseconds");

C++ code:
 

#include <iostream>
#include <ctime>
#include <cmath>
using namespace std;

int primes(unsigned long n) {
    unsigned long i, j, i_sqrR;
    int countprimes, isprime;
    countprimes = 0;
    for (i = 1; i <= n; i++) {
        isprime = 1;
        i_sqrR = sqrt((float)i);
        for (j = 2; j < i_sqrR; j++) {
            if (!(i % j)) {
                isprime = 0;
                break;
            }
        }
        countprimes += isprime;
    }
    return countprimes;
}

int main() {
    int n, res;
    cin >> n;
    unsigned int start = clock();
    res = primes(n);
    int tprime = clock() - start;
    cout << "\nI found " << res << " prime numbers between 1 and " << n
         << " in " << tprime << " micro seconds." << endl;
    return 0;
}

The result I got was 9589 milliseconds (9.6 seconds) for JS under node-webkit (nwjs-v0.12.2-linux-x64), with similar numbers in Firefox 39 too. Meanwhile, the C++ program took 16674821 microseconds (16.7 seconds). The C++ code was compiled with the following command:

 

g++ -O3 -s -o prime_num test.cxx

 

Added detail: I am using Ubuntu 14.04 x86_64.
Is it just bad code favoring JS? I cannot think of any other reason. Or is JS now really 'faster' at mathematical calculations than good C++ compilers like g++?


I don't think it's a fair test. This type of test is extremely difficult to write fairly.

 

Your C++ code mostly uses ints, but there is a sqrt call. sqrt takes a float and returns a float, but you pass it an int and assign the result to an int, so you have explicitly introduced an int->float conversion followed by a float->int conversion. These conversions can be surprisingly slow; I remember seeing Visual C++ call a helper function to do the conversion for standards-compliance reasons, and that helper function could come up high in profiles.
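To make those two conversions visible, the single line `i_sqrR = sqrt((float)i);` can be unpacked into explicit casts. This is a standalone sketch (not code from the thread), just to show where the round trips happen on every iteration of the outer loop:

```cpp
#include <cassert>
#include <cmath>

// The implicit conversions hiding in `i_sqrR = sqrt((float)i);`,
// written out one step at a time.
unsigned long truncated_sqrt(unsigned long i) {
    float f = static_cast<float>(i);       // int -> float conversion
    float r = std::sqrt(f);                // float square root
    return static_cast<unsigned long>(r);  // float -> int, truncating
}
```

Each call performs both conversions, and the loop calls this once per candidate number, so the conversion cost is paid tens of millions of times.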

 

Javascript does not have typed variables. Numbers are specced as doubles (there are no other number types in the language!) but optimisers are allowed to use other types for optimised code based on runtime profiling. Although you may expect/intend i_sqrR to be an integer, the JIT can probably see it's faster to leave it as a float. I'm simply speculating, but the JIT may propagate that to making the loop indices both floats as well, and then there is never any type conversion necessary. So the JS code does not have to do some of the work the C++ code does - it can probably figure out how to circumvent the conversions entirely.

 

The JIT may also take advantage of CPU features it identifies are supported at runtime, e.g. SSE4, whereas compiled code may conservatively fall back to something like SSE2 for widest hardware support. The JIT might also have a better opportunity to auto-vectorize the code since it can observe it running. For the same reason it may be able to generate more efficient branching code by identifying hot/cold paths at runtime, like profile-guided optimisation in C/C++.

 

I don't think this particular test is comparing apples to apples. But it is true that very carefully written JS, using knowledge of how the JIT will examine types, can be approximately as fast as native code in some cases.


In addition to Ashley's comments, your JS routine is too small and too simple to compare JS to C++. After the first iteration your entire JS routine is probably running as machine code because it has been JITed. If you were to compare a real-world app, such as a game, written in both JS and C++, the C++ version would likely trounce the JS version: even with JITing, the JS version would be saddled with a significant amount of interpretation, and performance would plummet. Even with your JS routine, if JIT were disabled I'm sure you would see far different results.


That's an interesting test you have there. However, I'd prefer to see an algorithm that doesn't use system libraries (e.g. sqrt), to ensure the implementations are the same.

 

I received different results from your code. 

 

C++ (Visual Studio 2010)



I found 665111 prime numbers between 1 and 10000000 in 7952 micro seconds.


Javascript:



There are 665026 prime numbers between 1 and 10000000. 
Execution time: 12909milliseconds


A couple of things to consider:

- Firstly, the two programs did not find the same number of prime numbers. I haven't had a chance to debug, but my initial guess is that the JavaScript version is not handling types correctly (e.g. i_sqrR should be cast back to an integer/unsigned long). What results did you receive?

- More importantly, the clock routine reports 'clock ticks' rather than real time units such as milliseconds.
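For what it's worth, a tick count can be converted to real time by dividing by CLOCKS_PER_SEC, which differs between platforms: POSIX fixes it at 1,000,000 (so ticks happen to equal microseconds on Linux), while MSVC defines it as 1,000. Raw tick counts from g++ and Visual Studio builds are therefore not directly comparable. A minimal sketch:

```cpp
#include <cassert>
#include <ctime>

// Convert a clock() tick delta to milliseconds. CLOCKS_PER_SEC is
// 1,000,000 under POSIX/glibc but 1,000 under MSVC, so printing raw
// ticks as "micro seconds" is only accidentally right on Linux.
double ticks_to_ms(std::clock_t ticks) {
    return 1000.0 * static_cast<double>(ticks) / CLOCKS_PER_SEC;
}
```

Under this reading, the Visual Studio figure of 7952 'micro seconds' would actually be 7952 ticks, i.e. roughly 8 seconds on Windows.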

 




 

Quoting HemingwayGames:

"That's an interesting test you have there. [...] Both programs did not find the same number of prime numbers. [...] The clock routine reports on 'clock ticks' instead of milliseconds."

Hello, yes, there was a mistake on my part when trying to 'improve' the original code. I changed the C++ code and got 665026 for both C++ and JS. i_sqrR has to be a float instead of a long, because that's what sqrt() returns. Here is the new code:

#include <iostream>
#include <ctime>
#include <cmath>
using namespace std;

int primes(unsigned long n) {
    unsigned long i, j;
    float i_sqrR, k;
    int countprimes, isprime;
    countprimes = 0;
    // k mirrors i as a float so sqrt() gets a float argument directly
    for (i = 1, k = 1; i <= n; i++, k++) {
        isprime = 1;
        i_sqrR = sqrt(k);
        for (j = 2; j < i_sqrR; j++) {
            if (!(i % j)) {
                isprime = 0;
                break;
            }
        }
        countprimes += isprime;
    }
    return countprimes;
}

int main() {
    int n, res;
    cin >> n;
    unsigned int start = clock();
    res = primes(n);
    int tprime = clock() - start;
    cout << "\nI found " << res << " prime numbers between 1 and " << n
         << " in " << tprime << " micro seconds." << endl;
    return 0;
}

After using the new source code the time dropped to about 15.5 seconds.

Also, there isn't a lot of confusion about the time: since the process takes considerable time with the gcc builds I made, I rechecked it using a stopwatch. While a stopwatch can't measure down to microseconds, it did show something around 16 seconds the first time I checked. I think it is related to the implementation of sqrt in gcc versus that in JavaScript.

btw, did Visual C++ really complete it in something like 7.9-8 seconds? Can you give numbers for the newer code too...?

 

 

 

Quoting the post above:

"In addition to Ashley's comments, your JS routine is too small and too simple to compare JS to C++. [...]"

Oh right, that is a very important point. Basically we are just repeating a procedure which only needs to be interpreted once; after that it is just executed tens of millions of times...

 

Quoting Ashley:

"I don't think it's a fair test. This type of test is extremely difficult to write fairly. [...]"

See my newer version; there I tried my best to reduce the type problems, but the improvement is hardly 1.5 seconds.

I think it comes down to:

1. The sqrt function in g++ is probably less efficient, as HemingwayGames got much better results with Visual C++.

2. The interpretation overhead is minimal: after being compiled once by the JIT, the code just runs again and again.

I'll try finding alternative square root functions, and try it on Visual Studio too...
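One way to take the library sqrt out of the picture entirely (for both runtimes) is to compare j*j against i instead of computing a square root. This sketch is not from the thread; note that it also uses the textbook bound j*j <= i and starts counting at 2, so it won't count 1 or squares of primes as prime, and its totals will differ slightly from the versions above:

```cpp
#include <cassert>

// Trial division with no sqrt call: stop once j*j exceeds i.
// Everything stays in integer registers, so there are no
// int/float conversions for either the compiler or a JIT to handle.
int count_primes(unsigned long n) {
    int count = 0;
    for (unsigned long i = 2; i <= n; ++i) {
        bool is_prime = true;
        for (unsigned long j = 2; j * j <= i; ++j) {
            if (i % j == 0) { is_prime = false; break; }
        }
        if (is_prime) ++count;
    }
    return count;
}
```

Because the inner loop is pure integer arithmetic, a comparison built on this version measures the loops themselves rather than each runtime's sqrt implementation.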


 

i_sqrR has to be a float instead of a long because that's what sqrt() returns

 

Sorry, I believe the JS needs to be corrected instead of the C++. Even though sqrt() returns a float, the value should be cast/converted to an integer so that the decimal component is removed (NB: the value needs to be truncated, not rounded).

 

Javascript:

i_sqrR = Math.floor(Math.sqrt(i));
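To see why truncation rather than rounding is the right match, here is a standalone C++ sketch (not the thread's code) comparing the three conversions for a value like sqrt(99):

```cpp
#include <cassert>
#include <cmath>

// A C-style integer cast and std::floor agree for non-negative values:
// both truncate, which is also what Math.floor does in JS for these inputs.
long cast_trunc(double x)      { return static_cast<long>(x); }
double floor_like_js(double x) { return std::floor(x); }

// Rounding to nearest would give a larger loop bound for values
// like sqrt(99) ~= 9.95, so it does not reproduce the C++ behaviour.
long round_nearest(double x)   { return std::lround(x); }
```

With sqrt(99) ~= 9.95, truncation and floor both yield 9 while rounding yields 10, so only the first two match the C++ int assignment.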
