Competition results are usually rather abstract or casual-style games - for one because of programmer art (here OGA can excel), and for another because of complexity. If you shuffle the theme with some *random* verbs you may come up with quite unique games nobody could have imagined before - yet quite simple ones. Often simpler than a lot of alternatives.
As a fan of Team OGA shouting random thoughts from the stands:
For libraries:
The rules allow you to choose/discuss/learn your libraries (weapons) beforehand, and in my opinion you should do that completely before the start. Then do what you can with them once the competition is running. Only decide on additional libs when really, really, really necessary (better to convert the data instead?). Just being pragmatic.
For art, theme and goal:
Once you know the theme keyword, try to gather available open art in a brainstormy, parallel fashion. Then review what you all found, and only then decide on the concrete goal/plan (depending on the available art), adding complementary art where necessary.
(2d or 3d should be clear rather early. If I were you I'd say 2d, for simplicity and for the ability to use all 2d art plus pre-rendered 3d art.)
For portability:
The compiler option -Wall helps to get rid of (some) possible portability and version issues in your source, too. Of course, make sure there are precompiled/packaged libraries for all three of your platforms. "/" as a path separator works on Windows, too - at least with MinGW, which is the best option for cross-platform builds.
As for Tech-Demo and implementation steps:
"Tech Demo" sounds bad to me. Don't demonstrate technology. Demonstrate your primitive game mechanics or your primary game loop, and enhance it. Iterate as often as time permits. Refine mechanics and refine art.
And remember to keep everything really simple - there is no time for elegant code that may be useful someday.
No problem, just a suggestion to point out the universality :)
It seems Alexandre has already replaced a lot of the art anyway. What's missing are two of the three main characters, maybe tiles for more levels, and/or work to make the gathered art coherent. Of course 3d pre-rendering does not help coherency when most of the art is hand-pixeled already. So I wouldn't try to convince him to redo everything from scratch - that's never a good idea and has killed a lot of projects.
As for 3d, since it may not have been obvious enough from my previous post:
I meant 3d for creating pre-rendered tiles and sprites, not a 3d game - i.e. render to bitmaps beforehand and use those bitmaps in the game. It is just how tiles and sprites are created in the art-production pipeline. And this does(!) work for low-res, too, and is independent of any engine technology, except that it is especially meant for 2d (or mixed 3d/2d as in the FF backgrounds). Indeed the original Sonic maps have a somewhat polygonal, rendered style. And indeed Sega tried pre-rendered characters several times, and they were described as ugly. But Donkey Kong Country is another pre-rendered low-res (a bit higher res?) example which looks better - if a bit artificial, too.
How to keep pre-rendered art from looking artificial/rendered/plastic/metallic remains to be figured out, and wasn't solved well back then - but it is simply a matter of lighting, materials/shaders, color post-processing and palette. For sure, going 3d pre-rendered has an initial overhead compared to directly pixeling the art in the first place, but it is an investment - especially for animated objects. From a global viewpoint, more projects can profit from indirect collaboration by using a common pool of 3d models like on OGA; it is a kind of cross-investment.
I would recommend going for pre-rendered pixel'ish art, using what's available, and using crude placeholders (and replacing them) at will.
Why not low-res pixel art directly? A low-res pixel-art image can only ever be used in one single place (with exceptions, sure). Change the resolution: it breaks. Change the animation: it breaks. Change the point of view: it breaks. Change a single detail: redo all frames. It is like a compiled binary - an end result.
At least use high-res art - and yes, there are ways to scale it down so it looks pixel-art'ish to some degree (but why low-res anyway?). And why rendered art? If you or anybody else creates a 3d model, it can easily be modified - it can be used in a lot of games to come, and you may find usable bits and pieces everywhere. You can post-process the rendering to match your style; you can use cel-shading to get a toon look.
Say someone made a 3d Surge character model - I can see a lot of places where it could be used, which would raise one's motivation to create it: SuperTuxKart (as a kart pilot), OpenArena (FPS), Ultimate Smash Friends (2d arcade fighting game), all the jRPG projects out there, or maybe someday even "Open Surge 3D" or "Open Surge HD". And you can make renderings for support purposes: advertising, articles, merchandise, ...
All this from a single 3d model (and please don't go for low-poly). And you may start out with a crude model, no problem. For the background tiles you could make vector graphics, which are then exported to bitmaps, though I would probably go for 3d there as well and assemble bits and pieces.
This kind of pipeline fits open source's decentralised development better because it keeps art easily editable and interchangeable at the 3d-source level, even for gamers, modders and programmers. I can't speak for others, but I personally wouldn't spend hours pixeling a single animation frame - knowing that I'd have x more to paint before it becomes useful - when I could make a 3d model in the same time, which could be rigged and animated by others so that it becomes useful anywhere.
As a sidenote and disclaimer, because this topic should be in the Procedural Content Generation forum: we aren't talking about mandatory OGA submission standards for tilesets, but maybe about what may become an OGA standard for metadata. We want to get more out of available data for PCG, rapid game development, or even game makers, and, as a side product, ease of map editing. The last two are probably the killer apps for metadata to reach critical mass. Come tools, come standards.
@p0ss: I understand your arguments and will take them into consideration - that's why I talked about automatic metadata generation given a map and a tileset. But whoever does it: it can't be too much to expect them to give a tile at least a name (i.e. an individual unique tag) and *maybe* even some more tags to describe it (*). It would be best if this could be done in a map editor, where and when the map using these tiles is created. The actual writing of a metafile could then be automatic. But of course somebody else could do these "inbetweeny tasks" - it would be just one step in the pipeline.
(*) Just something like "mygrass0, grass, summer, dry" or "mygrass1, grass, summer, wet" will do - other fields of the proposal above could be generated automatically from the fact that these two tiles are (or aren't) neighbours on a given map. It could even be reasoned (automatically) that a wet-tagged grass tile is never placed beside a dry-tagged grass tile. It may even be a better idea to have a neighbourship probability instead of a binary yes/no. But these are technical considerations.
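To make the neighbourship-probability idea concrete, here is a minimal sketch (the tile names and the tiny map are invented for illustration): it counts, over an example map, how often one tile appears beside another, so a "never beside" rule falls out as probability 0 and soft preferences come out as fractions.

```python
# Toy example map: 0 = "mygrass0" (dry), 1 = "mygrass1" (wet).
EXAMPLE_MAP = [
    [0, 0, 1],
    [0, 1, 1],
]

def neighbour_probability(map_rows, a, b):
    """Probability that a 4-neighbour of tile `a` is tile `b`,
    estimated from one example map."""
    hits = total = 0
    height, width = len(map_rows), len(map_rows[0])
    for y in range(height):
        for x in range(width):
            if map_rows[y][x] != a:
                continue
            # Look east, west, south, north; skip off-map positions.
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    total += 1
                    hits += map_rows[ny][nx] == b
    return hits / total if total else 0.0
```

`neighbour_probability(EXAMPLE_MAP, 0, 1)` gives the dry-beside-wet fraction; a pair that never occurs on the map comes out as 0.0, so the binary yes/no rule is just a special case of the probability.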
Q: Creating metadata for tilesets to automate mapping is a daunting task, what could be done to make it easier?
When you (yes, you reading this) have a map and a tileset, the map already contains a lot of metadata waiting to be extracted.
Q: How?
A map editor like Tiled could prompt you for tile tags (i.e. a unique tile name and other describing tags), then extract metadata from an example map:
Which tags or combinations of tags were used on which layer?
Which neighbouring tags does a tagged tile have to the east, west, north, south?
Which neighbours on the layer above and below the tile?
This may be aggregated to add additional metadata to single tiles - which can then in turn be used to auto-create maps or assist in editing. Just like learning from examples.
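The directional questions above can be answered mechanically. A minimal sketch (the tile names and the example map are invented for illustration, and layers are left out): given named tiles and one example map, it records which tile names occur as east/west/north/south neighbours of each tile.

```python
from collections import defaultdict

# Invented single-layer example: 0 = "water", 1 = "sand", 2 = "grass".
NAMES = {0: "water", 1: "sand", 2: "grass"}
EXAMPLE_MAP = [
    [0, 1, 2],
    [0, 1, 2],
]
DIRECTIONS = {"east": (1, 0), "west": (-1, 0), "north": (0, -1), "south": (0, 1)}

def extract_neighbours(map_rows):
    """For every tile name, collect the set of names seen in each direction."""
    meta = defaultdict(lambda: defaultdict(set))
    height, width = len(map_rows), len(map_rows[0])
    for y in range(height):
        for x in range(width):
            name = NAMES[map_rows[y][x]]
            for direction, (dx, dy) in DIRECTIONS.items():
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    meta[name][direction].add(NAMES[map_rows[ny][nx]])
    return meta
```

`extract_neighbours(EXAMPLE_MAP)["sand"]["east"]` comes out as `{"grass"}` - the map "teaches" that grass, and only grass, was ever placed east of sand. Run per layer, the same loop answers the layer questions as well.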
I second the opinion that maybe one format is not enough, but I would not urge anyone to use all formats.
BUT please provide all the files necessary to fulfil these requests, if possible:
Provide your original editing source
Provide source readable by a common open source editor
Provide a fallback format readable by most editors
Provide a lossless high quality format
This may be a single file format (e.g. png with no/low compression), but most often it is not (e.g. psd -> xcf -> png, or max -> blender -> obj, or audacity/whatnot -> wav -> ogg/mp3). I think the point should be to provide files for most use cases - not most formats.
So if you make audio, please provide a high-quality wav. If you make models, provide a portable obj and probably a file for the common Blender - BTW, as I recently learned, there are things that can go wrong when exporting/converting yourself (contrary to the artist's intention). The artist is THE professional for converting his model/sound to a portable/fallback format in the best-fitting way with the least effort.
I think it isn't a large request to provide at least one alternative format. That alone would hugely reduce the programmer's effort of installing/learning every program somebody may have used. And then it may only be a minor step for programmers to convert these files to the game's native format.
"If you don't have source, eat binary" (or vice versa) does not help. ;)
I want a clean fight. Now get it on! ;)
Your sculptings are first-class and I hope you won't stop!
Just want to add some useful links I found on this topic for the ImageMagick tool:
Discussion about best approach: http://www.imagemagick.org/discourse-server/viewtopic.php?f=1&t=17447
It seems they recommend a triangle filter to keep diagonal edges, and it seems they are considering adding special pixel-art scaling algorithms - but they can't use the available source code because of licensing issues.
These links contain references for the available ImageMagick methods:
Resizing-Reference with pictured examples: http://www.imagemagick.org/Usage/resize/
Downscaling esp. for aspect ratio changes: http://www.imagemagick.org/Usage/resize/#liquid-rescale
Blury upscale and then sharpening: http://www.imagemagick.org/Usage/resize/#resize_unsharp
resize_unsharp is a general-purpose method for when you want to preserve edges and crispness - not (only) for pixel art.
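For intuition on why a triangle filter keeps diagonal edges, here is a toy, pure-Python 1d resample with tent weighting - a sketch of the idea only, not ImageMagick's actual implementation (for 2d you would apply it to the rows and then to the columns):

```python
import math

def triangle_resample_1d(row, out_len):
    """Downscale one row of pixel values with a triangle (tent) filter.
    Assumes out_len <= len(row)."""
    scale = len(row) / out_len                  # >= 1 when downscaling
    out = []
    for i in range(out_len):
        center = (i + 0.5) * scale - 0.5        # output pixel's source position
        lo = math.floor(center - scale)
        hi = math.ceil(center + scale)
        acc = weight_sum = 0.0
        for j in range(lo, hi + 1):
            w = max(0.0, 1.0 - abs(j - center) / scale)       # tent weight
            if w > 0.0:
                acc += w * row[min(max(j, 0), len(row) - 1)]  # clamp at borders
                weight_sum += w
        out.append(acc / weight_sum)            # normalise the weights
    return out

# A hard edge becomes a short greyscale ramp instead of the all-or-nothing
# step a point sample would give:
halved = triangle_resample_1d([0, 0, 255, 255, 255, 255], 3)
```

Because the weights are normalised, flat areas stay untouched; only the transition pixels pick up intermediate values, which is what keeps a diagonal edge readable after downscaling.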
I like it - it has some 80s sci-fi Blade Runner flair!