1. The "svg to pdf to png via ghostscript" method in the link you provided seems like the behavior I expect. However, the resulting png, as you can see in (1), looks bad, worse than my "svg to upscaled png to downscaled png" method. Using ghostscript to create an upscaled png and then downscaling it with imagemagick gives a much nicer result, as you can see in (2), though it is still more irregular than the image produced by the method I proposed earlier (it is less symmetrical and has some black pixels in unexpected places). I gave the unstable inkscape 0.91 a try and saw that, indeed, there is a "use anti-aliasing" checkbox in the drawing's properties. However, it does not affect the output of exported pngs at all, as you noted could happen; it only affects the output on screen, and even then only when you are viewing at a 1:1 zoom ratio. I tried your suggestion: in (3) I checked "use anti-aliasing", set the zoom to 1 and took a screenshot, and in (4) I unchecked "use anti-aliasing" and took another screenshot. The result in (4) is the best so far, but it relies on an unstable development release of inkscape, and the process of taking screenshots and cropping out the desired image is not as practical as the one I proposed earlier (obs.: the gray boxes in 3 and 4 are inkscape's page border and page shadow). Indeed, good pixel art is about precisely pushing pixels of the right color onto the right spot. But there are various degrees of precision, and I think there are scenarios where ease of editing and reusability of models matter more than precision. I would be making a mistake if I tried to reproduce pixel art such as that found in “Seiken Densetsu 3” (for super nintendo) or “Castlevania Symphony of the Night” (for the first sony playstation) using vectors.
However, I think it is plausible to make art similar to what is found in “Super Mario World” (for super nintendo) using vector graphics with non-anti-aliased output. Similarly, “high” resolution pixel art (tiles/sprites up to 128 pixels in height/width), such as the character sprites used in "Street Fighter 3" (for the first sony playstation), could also be done with vectors. Of course, a 128x128 sprite could be exported and used even with anti-aliasing on, but the result is not as nice and clean as the one you get without it, with single-pixel lines and no anti-aliasing. I found this video on youtube about Adobe Illustrator: As you can see at 1:53, when you zoom in in illustrator, it gives you the option to see things pixelated, even though you are working with vectors and scalable fonts (true type/open type fonts). Illustrator even lets you turn anti-aliasing on/off on a per-object basis. This kind of behavior (being able to zoom in and see things pixelated, without anti-aliasing or filtering), plus the option to export exactly what you are seeing, is just what is needed to do pixel art with vectors. If Illustrator already does it, then a tool for what I want already exists. However, I prefer to use free tools whenever possible, and that's why I'm trying to do it with inkscape instead of Illustrator or some other non-free software.
  2. Has anyone already tried to make pixelated graphics from a scalable vector (svg) source? I use Inkscape, and I think working with scalable vectors in inkscape is easier than pushing pixels in a raster graphics editor like gimp, because vectors are more easily reusable and easier to change. However, Inkscape always applies anti-aliasing when scaling down, which makes art intended to be pixelated look really bad when exported to png (I don't know about Corel Draw or Adobe Illustrator; I have never used them). I made a bash script (shell script, if you prefer), attached to this thread, that uses inkscape to convert an svg file to a scaled-up png and then scales it down with imagemagick. The result looks better than the file exported directly at low resolution from inkscape, and it is easy to fix the remaining issues with gimp (unfortunately, I couldn't get rid of them completely yet). I have also attached an example svg, with a comparison of what inkscape usually spits out (1), what imagemagick spits out when downscaling an upscaled exported image (2), and a fixed-up version (3). I have also included the scaled-up png. Using this process to achieve the desired result (making pixel art out of vector graphics) is one option, but it is certainly not optimal, nor very practical (you have to run the script and make the manual fixes every time you change your svg). Does anyone know of a better process that does not involve abandoning scalable vectors? Maybe there is an inkscape extension of some sort that exports the png without anti-aliasing. Inkscape 0.91 (the upcoming release) will completely replace libnr (its current rendering engine, developed in-house) with libcairo (a much more mature and widely used renderer), and it seems cairo allows disabling anti-aliasing when up/down scaling things. But I don't know if inkscape itself will allow turning anti-aliasing on/off when exporting to png.
I remember that, on Windows XP, MS Paint used to let you create curved lines which were not anti-aliased: first you traced the straight line, then you curved it (and after that it became normal raster graphics, which could only be edited as such). I think a vector graphics editor that could display and export graphics in that pixelated way would be very useful in some scenarios. If anyone wants to try my script, just unzip it and run it, passing some_svg_file.svg as the argument; unfortunately, it will run on linux only (or maybe also on OS X and the BSDs). EDIT: The image is intended to be a 16x16px mushroom, like a super mario mushroom.
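For reference, the upscale-then-downscale idea described above could be sketched roughly like this (this is only a sketch, not the attached script itself: it assumes Inkscape 0.48's -z/-e/-w/-h export options and ImageMagick's convert command, and the pixelate function name, the 16x oversampling factor and the file names are my own illustrative choices):

```shell
#!/bin/sh
# Sketch: render an oversampled png from the svg, then collapse it
# back down to the target size with a point (nearest) filter so the
# anti-aliased edges turn into hard pixels.
pixelate() {
    svg="$1"          # input svg, e.g. mushroom.svg
    px="${2:-16}"     # final width/height in pixels
    factor=16         # how much to oversample before downscaling
    big=$((px * factor))
    base="${svg%.svg}"
    # 1) let inkscape render an oversampled (anti-aliased) png
    inkscape -z -e "${base}_big.png" -w "$big" -h "$big" "$svg"
    # 2) downscale with a point filter, which picks pixels instead
    #    of blending them, giving a hard-edged pixelated result
    convert "${base}_big.png" -filter point -resize "${px}x${px}" "${base}.png"
}
```

Usage would be something like `pixelate mushroom.svg 16`; the remaining stray pixels would still need the manual gimp pass mentioned above.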
  3. gnumaru

    Infinite game

    I found this article, which seems interesting: The author analyzes approaches to implementing voxel engines. Even though it is about 3D worlds, you could just “cut out one of the dimensions” and think in 2D =)
  4. Even though the “easy way” would be bundling an html5 game with a full web browser, like intel xdk/crosswalk do on mobile (and like node-webkit does on desktop), there is great value in your initiative. It is certainly very annoying to make an html5 game that is less than 1MB zipped and see it grow to almost 20MB when packed into an apk with crosswalk. And a full browser is certainly not needed when all you want is webgl. Also, I don't like the idea of uploading my game to some other service to have it bundled inside an apk by some obscure process which may not always be available. I installed all your examples on my tablet, a Genesis Skyworth GT-1240 (the cheapest 10 inch tablet I could buy), and every one of them performed very well. I think gles.js can be very valuable for those planning to make webgl-only games. It may not be suitable for games that rely on the DOM or canvas, but it is probably the best option for pure webgl. As you stated on your website, you just need to define an easy and simple workflow for packaging webgl apps for android (maybe an eclipse plugin? Or a cli tool?). Keep up the good work, and I hope to see gles.js released in the wild soon. By the way, I played Tsunami Cruiser. Nice game =)
  5. Nepoxx Indeed, the C preprocessor would do with the files exactly what I do not want to do: bundle every .js file into one single big file. When I made comparisons with C includes, I was talking about execution behavior: how javascript behaves compared to compiled C code that uses includes. For example, if you execute the following lines in your browser:

    eval("var a = 123;");
    alert(a);
    var b = 987;
    eval("alert(b);");

The first alert will show '123' and the second will show '987'. But if you 'use strict', the "var a" declaration and assignment won't be visible outside the eval, and the first alert will throw "ReferenceError: a is not defined"; and if you omit the var from the declaration of 'a', it will throw "ReferenceError: assignment to undeclared variable a" (because under 'use strict' you can only declare globals explicitly, by appending them to the window object). The second alert behaves identically with or without 'use strict', because when you eval a string, its code runs in the context where the eval call is made. This behavior of eval (although achieved at execution time) is the same as that of a C include directive (although achieved at compile time). If you create two C source files named a.c and b.c:

    // code for a.c
    int main() {
        int x = 0;
        #include "b.c"
        i = i + 1;
    }

    // code for b.c
    x = x + 1;
    int i = 0;

then compile them:

    $ gcc a.c

it will compile successfully, because the code of b.c is copied "as is" into the place where #include "b.c" appears. Thus, not only does the code in b.c have access to what was defined before the include statement in a.c, but the code defined after the include also has access to what was defined in b.c.
That's exactly the behavior of eval without "use strict", and "half" the behavior of eval with "use strict". About eval being bad, I'm not so sure yet. I know most of the planet repeats Douglas Crockford's mantra "eval is evil" all day long, but it seems eval is more "something that is usually badly used by most" than "something that is necessarily bad wherever it is used". I have not yet seen in-depth arguments about the performance of eval; personally, I guess it must be slower, but not perceivably slower. About security: it surely opens doors to malicious code, but the exact functionality I seek cannot be achieved otherwise, at least not until ecmascript 6 gets out of the drafts and becomes a standard. About debugging: I think that's the worst part, but, as already said, there is no other way to achieve what I want. SebastianNette When I said javascript couldn't include other javascript files, it was because "javascript alone" doesn't have includes. The default, de facto way of including javascript files is of course through script tags (it has been the default way since the beginning of the language). But the script tag is part of the html markup language, not of the javascript programming language. Javascript itself, in its language standard, does not (yet) have a defined way to include/require other javascript files. I was already aware of the Function constructor. I really don't know the innards of javascript engines, but I bet that internally there is no difference between evalling a string and passing a string to the Function constructor (jshint even says that “The Function constructor is a form of eval”). I did run your tests, and eval was only 1.2% slower than the Function constructor (on firefox 31). On chrome 36, the difference was 1.45%; neither is bad.
I'm sure that one big js file bundled through browserify can be much more easily chewed by the javascript engines out there. The question is more like: how much slower does code recently acquired through an xmlhttprequest run, compared to code that was bundled from the beginning? Does the slowdown happen only on the first execution? What if I cache the code? Will it run faster afterwards, or always slower? I don't know the answers; I never studied compilers, interpreters or virtual machine architectures. At least my results in the jsperf test you gave me were good enough for me =) Anyway, I changed the eval to “new Function” because I noticed that I wasn't caching the retrieved code AT ALL. Now I've switched to a slightly better design. Everyone: I have now implemented limited commonjs style module loading in executejs (without a build step). It does not handle circular dependencies yet, and it expects only full paths (not relative paths). What bothers me about browserify is that it forces a build step on you. RequireJS does not: you can use your modules as separate files or bundle them together, as you decide. That's not true with browserify, and I prefer the commonjs require style to the amd style. I searched for a browser module loader that supports commonjs without a build step, but every one of them seems to need one. The only one I found was this: And it seems too big and complicated for something that should not be so complex...
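To illustrate the "Function constructor plus cache" design mentioned above (this is only a minimal sketch of the idea, not executejs's actual code; moduleCache, executeModule and fetchSource are hypothetical names I made up, and in the real thing fetchSource would be a synchronous xmlhttprequest):

```javascript
// Cache of compiled module bodies, keyed by full path.
var moduleCache = {};

function executeModule(path, fetchSource) {
  if (!Object.prototype.hasOwnProperty.call(moduleCache, path)) {
    // Compile the source only once with the Function constructor;
    // the module body receives its own `exports` object, commonjs style.
    moduleCache[path] = new Function("exports", fetchSource(path));
  }
  var exports = {};
  moduleCache[path](exports);
  return exports;
}
```

With this shape, requiring the same path twice re-runs the compiled function but never re-fetches or re-compiles the source (a full commonjs loader would also cache the `exports` object itself, so the body runs only once).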
  6. Hi everyone. I named this post "Alternatives to organize code base into different files" because it is more general than "alternatives to make modular code" or something like that. I like javascript a lot, but it being the ONLY LANGUAGE IN THE WORLD without a way to load/reference one file from within another is what pisses me off the most. Every language has it: C and C++ have "include", C# has "using", Java has "import", and php has "require" and "require_once". I bet even x86 assembly has something like that. Nonetheless, javascript still doesn't, and only God knows if (and when) the ecmascript 6 draft that proposes python-like modules will actually become a standard and reach the masses. And WHEN it comes, will everyone be using updated browsers? And by then, people will already be used to what they are familiar with, like AMD (requirejs style modules) and browserify (commonjs style modules). That being said, I would like to know your experiences and opinions about how (and why) you divide your code base into several files. It seems most use browserify and others use requirejs, but still others just define and use globals across several files and put several script tags in the html. I made an attempt to create what seemed to me the simplest way to emulate an include/import/using/require in javascript, using xmlhttprequest+eval to "include" one js file synchronously into another: I think the current ecmascript 6 draft proposal for modules is probably the best option, but old javascript files meant to just be included with a script tag will probably need to be patched to work with the module system. My solution is just a way to use the "old way" of adding several script tags, without really adding several script tags. I would like to know, then, what you're using to manage large codebases. Are you bundling every js file into one? With what? browserify? requirejs' bundling tool? linux's cat command?
If you're not building your code base into one single file, how are you calling one file from another? Several script tags? AMD (with requirejs or others)? Something with commonjs syntax? And finally, whether or not you bundle your code into one file, how is the functionality of each file exposed to other files? By exporting amd style modules? Exporting commonjs style modules? Defining globals? Defining namespaces? The creator of UglifyJS has given some thoughts on this matter which are really worth reading. He says, for example, that for many years, and still today, C programmers have been used to defining globals with unique names and are happy with it =) (those are not his words, just my interpretation of his whole blog post). Your experiences and opinions would be really important to me, and probably to several other people who may read this thread. Thanks.
  7. If you do not want/need to concatenate the entire code base into a single js file, you can use my solution, executejs: It is not an asynchronous module loader like requirejs, nor a build tool like browserify; it is a simple facility to execute one js file from within another, like you would do with require and require_once in php, or with the include/import/using statements of languages like C, java and C#.