
Are leading zeros ever required?


JCPalmer

I think I found a way (actually two) to reduce a Blender export file.  I noticed normals, UVs, and shape keys always have values > -1 and < 1.  Even many positions can be in this range.  I dropped in this little piece of code, which, if it errored, would cause my scene not to display:

var zeroless = new Float32Array([-.3333,.3709]);

Scene did not fail.  I will soon be updating the in-line code generator to see if it works like:

this.setVerticesData(BABYLON.VertexBuffer.PositionKind, new Float32Array([-.3333,.3709 ...

My question is: are there any platforms, or JSON parsers, where this cannot be done?  Blender is shipping up to 4 decimals (stripping trailing zeros), so for normals & UVs this is a 1/8 reduction for negative numbers with no trailing 0s, and 1/7 for positive ones.  Shape keys should probably always be 0 to .9999, so 1/7 for them.
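As a quick sanity check (node snippet; variable names are just for illustration): JavaScript literals and parseFloat both accept zeroless decimals, but strict JSON parsers do not, since the JSON grammar requires a digit before the decimal point:

```javascript
// JS number literals and parseFloat both accept zeroless decimals:
const v = parseFloat('.3709');   // 0.3709
const w = -.3333;                // legal literal

// Strict JSON requires a digit before the decimal point,
// so JSON.parse rejects zeroless numbers:
let strictJsonAccepts = true;
try {
  JSON.parse('[-.3333, .3709]');
} catch (e) {
  strictJsonAccepts = false;     // this path is taken
}
console.log(v, w, strictJsonAccepts); // 0.3709 -0.3333 false
```

So the trick looks safe for hand-written JS and the in-line format, while a .babylon file run through the strict built-in JSON.parse would be the exception.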

Speaking of 4 decimals, the second reduction I want to try is changing the test for equality of vertices / normals / UVs.  Right now they use ==, but I have made a function for shape key analysis that only tests equality to the number of decimals output.  I have been slowly adding it in more and more places.

MAX_FLOAT_PRECISION_INT = 4 # number of decimals written by the exporter

def same_vertex(vertA, vertB):
    if vertA is None or vertB is None: return False
    return (round(vertA.x, MAX_FLOAT_PRECISION_INT) == round(vertB.x, MAX_FLOAT_PRECISION_INT) and
            round(vertA.y, MAX_FLOAT_PRECISION_INT) == round(vertB.y, MAX_FLOAT_PRECISION_INT) and
            round(vertA.z, MAX_FLOAT_PRECISION_INT) == round(vertB.z, MAX_FLOAT_PRECISION_INT))

This has had more effect so far than you might think.  Blender is quad based, so when the exporter makes a temp version as triangles, one point might be 16.3456789 and another might be 16.3456794.  The point here is not really about space, but smoothness.  The stuff being exported just does not look as good as it does in Blender in crease areas.  I am thinking it might be some sort of flat-shading effect due to extra verts.

I might even add a superSmooth arg on the tester, where the test uses 3 decimals, just to see what happens:

def same_vertex(vertA, vertB, superSmooth = False):
    precision = MAX_FLOAT_PRECISION_INT - (1 if superSmooth else 0)
    if vertA is None or vertB is None: return False
    return (round(vertA.x, precision) == round(vertB.x, precision) and 
            round(vertA.y, precision) == round(vertB.y, precision) and 
            round(vertA.z, precision) == round(vertB.z, precision))
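For what it's worth, the same round-before-compare idea could be sketched on the JS side too (the helper names here are hypothetical, not exporter code): quantize the coordinates into a string key, so near-identical verts collapse into one index.

```javascript
// Quantize coordinates to the exporter's precision before keying,
// so verts differing only past 4 decimals map to the same index.
const PRECISION = 4; // assumed to match the exporter's decimal output

const seen = new Map();

function vertKey(x, y, z) {
  return x.toFixed(PRECISION) + ',' + y.toFixed(PRECISION) + ',' + z.toFixed(PRECISION);
}

function dedupIndex(x, y, z) {
  const key = vertKey(x, y, z);
  if (!seen.has(key)) seen.set(key, seen.size);
  return seen.get(key);
}
```

Here dedupIndex(16.3456789, 0, 0) and dedupIndex(16.3456794, 0, 0) return the same index, since both round to 16.3457.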

 

Link to comment
Share on other sites

Not sure; I'd do a performance test with a leading 0 and without.

I'm not super versed in this topic, but I would assume that if you leave out a leading 0, at some point JavaScript would have to add one back in?  I mean, maybe not, but how else could it do calculations without having a fully constructed number?

So this is where I would be interested.  I'll dig through the Number and Math classes some and try to figure that out, but I would assume the drop in the export would not be that substantial?  I mean, what's a 0, one bit?  How many times, and how large would your file need to be, for that to have an effect?
As for which would be faster, who knows?  For all I know, leaving the 0 out will increase calculation speed by skipping a step, or perhaps it's the opposite.

If you find out more about this, please let me know; I would like to tailor my habits to match the most effective standard.


Each character in an ASCII file is 8 bits, not 1.  The removal, on an output with 2 meshes (body with 12 shape keys & hair), reduced the file size by 11.5%.  It worked.  The export, whether .babylon or in-line JS, is about 96% numbers, so 1/8 (12.5%) would be approachable if indexes were floats rather than integers.

In JavaScript, once read, each value is going to be a float 64.  BJS converts that to a float 32.  The difference in parsing time is probably not significant.  The extra code to produce the data only increased the export time from 8.0305 secs to 8.1185 secs.  It is hard to pin that amount of time on this; it varies some without any changes, and I did not code this to be efficient.  Here is the process in Python.  Perhaps someone could write an equivalent in JS for the serializer.

MAX_FLOAT_PRECISION = '%.4f' # format string; exporter writes up to 4 decimals

def format_f(num):
    s = MAX_FLOAT_PRECISION % num # rounds to N decimal places while changing to string
    s = s.rstrip('0') # strip trailing zeroes
    s = s.rstrip('.') # strip trailing .
    s = '0' if s == '-0' else s # nuke -0

    asNum = float(s)
    if asNum != 0 and asNum > -1 and asNum < 1:
        if asNum < 0:
            s = '-' + s[2:]
        else:
            s = s[1:]

    return s
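Taking up the invitation above, here is a rough JS equivalent of format_f for a serializer.  This is a sketch only; formatF and MAX_DECIMALS are names I made up, and it has not been tested against the real exporter output.

```javascript
const MAX_DECIMALS = 4; // assumed: matches the exporter's 4-decimal output

function formatF(num) {
  let s = num.toFixed(MAX_DECIMALS);  // round to N decimals as a string
  s = s.replace(/0+$/, '');           // strip trailing zeroes
  s = s.replace(/\.$/, '');           // strip trailing .
  if (s === '-0') s = '0';            // nuke -0

  const asNum = parseFloat(s);
  if (asNum !== 0 && asNum > -1 && asNum < 1) {
    // drop the leading 0 (keeping the sign, if any)
    s = asNum < 0 ? '-' + s.slice(2) : s.slice(1);
  }
  return s;
}

console.log(formatF(-0.3333), formatF(0.3709), formatF(12.5)); // -.3333 .3709 12.5
```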

The interesting part of the log file (basically 17k verts):

	processing begun of mesh:  Body
		processing begun of Standard material:  Body:Young_asian_female
			Diffuse texture found "young_lightskinned_female_diffuse3"
				Image texture found, type:  diffuseTexture, mapped using: "UVMap"
				WARNING: Opacity non-std way to indicate opacity, use material alpha to also use Opacity texture
		processing begun of Standard material:  Tongue01:Tongue01material
			Diffuse texture found "tongue01_diffuse"
				Image texture found, type:  diffuseTexture, mapped using: "UVMap"
				WARNING: Opacity non-std way to indicate opacity, use material alpha to also use Opacity texture
		processing begun of Standard material:  Teeth_base:Teethmaterial
			Diffuse texture found "teeth"
				Image texture found, type:  diffuseTexture, mapped using: "UVMap"
				WARNING: Opacity non-std way to indicate opacity, use material alpha to also use Opacity texture
		processing begun of Standard material:  Eyelashes02:Bodymaterial
			Diffuse texture found "eyelashes02"
				Image texture found, type:  diffuseTexture, mapped using: "UVMap"
				WARNING: Opacity non-std way to indicate opacity, use material alpha to also use Opacity texture
		processing begun of Standard material:  Eyebrow001:Eyebrow001
			Diffuse texture found "eyebrow001"
				Image texture found, type:  diffuseTexture, mapped using: "UVMap"
				WARNING: Opacity non-std way to indicate opacity, use material alpha to also use Opacity texture
		processing begun of Standard material:  High-poly:Eye_deepblue
			Diffuse texture found "deepblue_eye"
				Image texture found, type:  diffuseTexture, mapped using: "UVMap"
				WARNING: Opacity non-std way to indicate opacity, use material alpha to also use Opacity texture
		processing begun of multimaterial:  voice_sync_female.Multimaterial#0
		num positions      :  12674
		num normals        :  12674
		num uvs            :  25348
		num uvs2           :  0
		num colors         :  0
		num indices        :  58812
		Shape Keys:
			MakeHuman shape key consolidation performed
			shape key "FACE-MOUTH_OPEN":  n verts different from basis: 4116
			shape key "FACE-LIPS_LOWER_IN":  n verts different from basis: 420
			shape key "FACE-LIPS_PART":  n verts different from basis: 429
			shape key "FACE-MOUTH_WIDE":  n verts different from basis: 603
			shape key "FACE-MOUTH_UP":  n verts different from basis: 717
			shape key "FACE-LIPS_MID_UPPER_UP":  n verts different from basis: 284
			shape key "FACE-LIPS_MID_LOWER_DOWN":  n verts different from basis: 160
			shape key "FACE-TONGUE":  n verts different from basis: 284
			shape key "FACE-HAPPY":  n verts different from basis: 3286
			shape key "FACE-SAD":  n verts different from basis: 5460
			shape key "FACE-ANGRY":  n verts different from basis: 3160
			shape key "FACE-SCARED":  n verts different from basis: 5974
			shape key "FACE-LAUGHING":  n verts different from basis: 6722
			shape key "FACE-CRYING":  n verts different from basis: 4005
			shape key "FACE-DETERMINED":  n verts different from basis: 2975
			shape key "FACE-STRUGGLING":  n verts different from basis: 4466
			shape key "FACE-DISGUSTED":  n verts different from basis: 3902
			shape key "FACE-SKEPTICAL":  n verts different from basis: 1081
			shape key "FACE-CUSTOM1":  n verts different from basis: 1794
			shape key "FACE-CUSTOM2":  n verts different from basis: 6119
		Shape-key group, FACE, # of affected vertices: 8100, out of 12674
	processing begun of mesh:  Long01
		processing begun of Standard material:  Long01:Long01
			Diffuse texture found "long01_diffuse"
				Image texture found, type:  diffuseTexture, mapped using: "UVMap"
				WARNING: Opacity non-std way to indicate opacity, use material alpha to also use Opacity texture
		num positions      :  3415
		num normals        :  3415
		num uvs            :  6830
		num uvs2           :  0
		num colors         :  0
		num indices        :  12324
========= Writing of files started =========
========= Writing of files completed =========
========= end of processing =========
elapsed time:  0 min, 8.1066 secs

 


I feel really confident that this works.  The reason is I just Gulped / minified a file out of Tower of Babel, and did not get as much compression as I used to get.  I suspected the reason was that Gulp also strips leading zeros.

Here is a pretty old Tower of Babel output that was Gulped.  It was not generated with leading zeros removed, but they are gone.  (You have to scroll a few pages to the right to see one.)

If Gulp is pulling them out, it must work everywhere!  Of course, a .babylon file cannot be Gulped.  The Blender exporter has been generating tighter & tighter files, as indicated by this thread.  When this feature finally makes it to the .babylon exporter, it should chop another 10+% out of the file!


  • 1 month later...

@JCPalmer - it is always safe to use additional zeros in integer values. However, as you know, this doesn't often work the other way - by using only integer values.

Common example:

        // use: pick_color.r, pick_color.g, pick_color.b
        //var r = parseFloat(pick_color.r) * 255;
        //var g = parseFloat(pick_color.g) * 255;
        //var b = parseFloat(pick_color.b) * 255;

I've been monitoring my mobile performance tests, using XDK to emulate mobile devices, and have found no detectable performance issues in using many variables with float values compared to integer values. We've come a long way, baby! :)

DB


@dbawel - I think Jeff is looking at things like animations, which have values between 0 and 1. With 40 bones, 300 frames, and 16 float values per frame, leaving out the leading zero can reduce the file size of the .babylon file. And of course the UVs of meshes.

And Jeff has seriously reduced the file size of the .babylon export from Blender - still looking for more ;)

cheers gryff :)


@gryff - yep, that's something I didn't consider. However, I don't see substantial memory usage from adding a 0 to non-integer values in any case. But good point - I should have read his posts more carefully to see what the whole issue was.

DB


@gryff I think you just saw the last attempt to decrease size for the .babylon format.  Even the in-line JS format has only one more 20 - 30% reduction coming.  That huge drop, when Blender started optimizing meshes that also had a skeleton, is not repeatable.

Even with the leading zeros gone from the JS format, the .babylon format is still slightly smaller, initially.  That is because I have tried to make the code as readable as possible, with many line breaks and space-based indenting, not tabs.  Gulp rips all that out and more, of course.


@JCPalmer - Jeff, I've been impressed with the reductions in file size (as I documented here) and with the improvement in readability. The smaller files help with readability too. I use Notepad++ for coding and for looking at .babylon files, and Notepad++ has always had, for me at least, issues with large files - so with the file size reduction AND the layout changes, there is a big improvement in how Notepad++ handles .babylon files.

cheers, gryff :)


@dbawel, on the memory representation front, the reason you see no difference between JS numbers, or arrays of numbers, whether they are integer or not, is: all JS numbers are float 64.

Now, if you or another .babylon format user were concerned about CPU memory, I would change how vertex data is loaded.  Right now, I believe, vertex data is loaded as a JS number[] when coming from a .babylon.  The data cannot be passed to OpenGL like that: a Float32Array object is created to pass the data, then thrown away.

I changed VertexBuffer to accept & store the array as either a Number[] or a Float32Array object in BJS 2.3.  This enabled the potential for higher morph rates (the QueuedInterpolation extension is entirely Float32Array based), since no temp Float32Array need be created / garbage collected every frame.  This should also reduce the memory footprint of the biggest things in the VM by 50%, being 32 bits instead of 64.

The in-line JS format creates permanent Float32Array backers for VertexBuffer, right off the bat.
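A minimal illustration of the difference (the numbers here are arbitrary): a Float32Array holds the data at 4 bytes per element and can be handed to WebGL as-is, while a plain Number[] is float64-backed and forces a temporary copy first.

```javascript
// Permanent 32-bit backing, usable by WebGL directly:
const positions = new Float32Array([-.3333, .3709, 1.25]);

// Data parsed from JSON arrives as a Number[] (each element a float64),
// so a temporary Float32Array must be built and later garbage collected:
const fromJson = [-0.3333, 0.3709, 1.25];
const temp = new Float32Array(fromJson);

console.log(positions.BYTES_PER_ELEMENT); // 4 (vs 8 for a float64)
```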


Hey,

I don't know if this will earn me a "no shit, Einstein!" .. but I'm going to write it regardless :P

I'm transforming a lot of DXF files to .babylon files and have had quite some playtime with this.  Since the initial load and the file size were among my biggest issues, I ended up only including positions and normals in my .babylon scene file and calculating UVs in the loading callback.  That basically stripped over half of my file size.  I'd strip the normals too if I could, but I need them for some server-side analysis of the meshes.

Granted, I'm mainly importing architecture models, so I'm dealing with many "flat" face normals and larger but fewer meshes, so I only need to do planar mapping.  And the additional time computing the UVs, compared to loading a file with UVs, at least felt about the same.

Anyway, if your use case is remotely close to mine, and if it is an option for you, I'd really recommend computing UVs, and even normals, on load to reduce the file size.
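For flat-ish architecture meshes, the planar mapping mentioned above might look roughly like this (a sketch under my own assumptions: projecting onto the XZ plane and normalizing into [0,1]; not the poster's actual code):

```javascript
// Planar-map UVs from a flat positions array [x0,y0,z0, x1,y1,z1, ...]
// by projecting onto the XZ plane and normalizing to the mesh's bounds.
function planarUVs(positions) {
  let minX = Infinity, maxX = -Infinity, minZ = Infinity, maxZ = -Infinity;
  for (let i = 0; i < positions.length; i += 3) {
    minX = Math.min(minX, positions[i]);
    maxX = Math.max(maxX, positions[i]);
    minZ = Math.min(minZ, positions[i + 2]);
    maxZ = Math.max(maxZ, positions[i + 2]);
  }
  const spanX = maxX - minX || 1;  // avoid divide-by-zero on degenerate spans
  const spanZ = maxZ - minZ || 1;
  const uvs = new Float32Array((positions.length / 3) * 2);
  for (let i = 0, j = 0; i < positions.length; i += 3, j += 2) {
    uvs[j] = (positions[i] - minX) / spanX;
    uvs[j + 1] = (positions[i + 2] - minZ) / spanZ;
  }
  return uvs;
}
```

Real planar mapping would also need to pick the projection plane per mesh (or per face group), which this sketch leaves out.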

 


Complex textures would not allow for UV calculation, like for this texture for teeth:

[image: teeth.png]

In fact, BJS can calculate normals.  I could put an option in the in-line JS exporter I have for Blender to calc them on the BJS side.  You can do things like turn things inside out in Blender, so it would have to be optional.  I would support BJS-calculated normals if the feature were added for .babylon loading, but I am not adding the feature myself.

