What a cutie! :D
Ah, it seems TinkerCAD already performs CSG merging on export, so the screenshots you posted before really threw me off. Still, MeshLab's decimation is far superior to Blender's Decimate modifier for bringing the polycount down even further and getting the model ready to have the high-poly details baked back onto it. If you need specific help in any other area, feel free to ask. Maybe all the stuff I posted before will be of use to anyone else looking at kit-bashing their way into content creation.
The first paragraph of my previous post - the link to that webpage - is probably the most important part. Sometimes you can halve the polycount of your model, or better. It's just a couple of easy steps in MeshLab, so I doubt it's beyond anyone's abilities. For that example, something built out of virtual LEGO bricks, the reduction in polycount should be pretty huge. In my experience, that kind of model usually has more internal leftovers than gaps in the outer geometry.
First, removing internal/hidden geometry (which is likely the biggest concern): one way is to bake a bunch of lights onto the model, use that information to colourise the vertices, then use the vertex colours as a selection and delete everything the light never reaches. Another technique (much easier and quicker, although a little less reliable) is this:-
http://meshlabstuff.blogspot.com/2009/04/how-to-remove-internal-faces-wi...
This will shave off a bunch of triangles and also help to minimise overdraw when the mesh is rendered in real-time. Neither approach is 100% reliable: depending on the mesh(es), light can leak through or fail to reach every outer face, leaving either triangles still hiding inside or holes in the outermost geometry. For the latter case, you can try the "Close Holes" filter.
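If you'd rather script it than click through the GUI, here's a rough PyMeshLab sketch of the same idea (ambient occlusion standing in for the "bake lights, delete what stays dark" trick). The filter names and parameters are assumptions based on recent PyMeshLab releases - they've been renamed between versions, so double-check them against the PyMeshLab filter documentation before trusting this:

```python
# Rough sketch: delete internal geometry by finding vertices that receive no
# ambient occlusion "light". Filter names are assumptions from recent PyMeshLab
# versions and may differ in yours. File names are placeholders.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("kitbash.obj")  # hypothetical input file

# Store per-vertex ambient occlusion in the vertex quality channel.
ms.apply_filter("compute_scalar_ambient_occlusion")

# Select vertices that received (almost) no light, i.e. the internal junk...
ms.apply_filter("compute_selection_by_condition_per_vertex",
                condselect="(q < 0.01)")
# ...and delete them along with their incident faces.
ms.apply_filter("meshing_remove_selected_vertices")

# If the deletion opened gaps in the outer shell, try closing small holes.
ms.apply_filter("meshing_close_holes", maxholesize=30)

ms.save_current_mesh("kitbash_shell.obj")
```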
Then do a bit of cleaning up with "Remove Duplicate Faces", "Remove Duplicate Vertices", "Remove Unreferenced Vertices" and "Remove Zero Area Faces". In fact, this is probably worth doing at several points along the way, and definitely at the end of the process.
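The scripted equivalent of that cleanup pass might look like this in PyMeshLab (again, the filter names are from recent releases and are my assumption, not gospel):

```python
import pymeshlab

def cleanup(ms: pymeshlab.MeshSet) -> None:
    """Run the usual MeshLab cleaning filters on the current mesh."""
    ms.apply_filter("meshing_remove_duplicate_faces")
    ms.apply_filter("meshing_remove_duplicate_vertices")
    ms.apply_filter("meshing_remove_unreferenced_vertices")
    ms.apply_filter("meshing_remove_null_faces")  # "Remove Zero Area Faces"

ms = pymeshlab.MeshSet()
ms.load_new_mesh("kitbash_shell.obj")  # hypothetical file from the previous step
cleanup(ms)
ms.save_current_mesh("kitbash_clean.obj")
```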
For merging everything together, you would need either a CSG (boolean) operation or a surface reconstruction (Poisson) - the former takes extra time to piece all the different meshes together, while the latter needs a lot of computing power and produces a higher-poly, more organic result. I'd say the former suits hard-surface things like robots, and the latter suits organic objects like creatures.
Either way, this will make your model "watertight" - one enclosed, contiguous mesh, which is what a GPU wants to chew on - but it will also create a lot of tiny useless triangles and possibly some razor-thin slivers, so those need to be cleaned up and the whole mesh simplified.
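For the reconstruction route, a minimal PyMeshLab sketch might be as simple as the following; the filter name and the depth value are assumptions based on recent PyMeshLab versions, and the result will definitely need the cleanup and decimation passes described above and below:

```python
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("kitbash_parts.obj")  # hypothetical file containing all the parts

# Screened Poisson reconstruction: wraps everything in one watertight surface.
# It relies on vertex normals, and higher depth = more detail but much more
# memory and time. The value 9 here is just an example starting point.
ms.apply_filter("generate_surface_reconstruction_screened_poisson", depth=9)

ms.save_current_mesh("kitbash_watertight.obj")
```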
For cleaning and optimising the model, use "Quadric Edge Collapse Decimation", possibly in combination with other simplification filters. Every model is different, and you'll just need to play with the parameters until you get something that brings the polycount down, gives you reasonably consistent topology and doesn't make the mesh too ugly in the process. Whether it's something mechanical or organic will change which methods and parameters work best.
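Scripted, the decimation step could look like this (a sketch assuming a recent PyMeshLab; the target face count and quality threshold are just example values to tweak per model):

```python
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("kitbash_watertight.obj")  # hypothetical file from the merge step
print("before:", ms.current_mesh().face_number(), "faces")

# Quadric Edge Collapse Decimation - the numbers here are starting points, not rules.
ms.apply_filter("meshing_decimation_quadric_edge_collapse",
                targetfacenum=5000,   # aim for roughly this many triangles
                qualitythr=0.5,       # penalise badly-shaped triangles
                preservenormal=True,  # avoid flipping faces
                planarquadric=True)   # better simplification of flat areas

print("after:", ms.current_mesh().face_number(), "faces")
ms.save_current_mesh("kitbash_lowpoly.obj")
```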
At this point, you really need to weigh two things: whether the extra triangles (even after cleaning up) are a worthwhile trade against the overdraw penalties of how it originally was. They usually are, but not always, so keep an eye on the polycount while you're doing this. Each case is different and there's no universal advice to give, but as a generalisation, a modestly higher polycount is worth it against the overdraw and batch counts a "kit-basher" will otherwise face in real-time rendering.
Finally, you can take the original version and go to town on it - subdivision, texture-painting, displacements, extra geometric detail, etc. - then unwrap your clean low-poly version and bake everything onto that: textures, baked lighting/AO, normal maps and so on. Auto-unwrapped UVs are never going to be as good as a UV layout that has been carefully unwrapped and orientated by hand, but you might get lucky and come out with minimal artifacts and decent texel scales. For the high-poly version you're baking from, there are basically no rules - do whatever makes it look the way you want and bake it all out.
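For the baking itself I'd jump over to Blender. A minimal sketch of a "selected to active" normal-map bake with Blender's Python API might look like this; the object names are placeholders, and it assumes the low-poly object already has UVs and an image texture node selected in its material to receive the bake:

```python
# Minimal Blender (bpy) sketch: bake the high-poly detail onto the low-poly's
# normal map. Object names are placeholders; run this inside Blender.
import bpy

scene = bpy.context.scene
scene.render.engine = "CYCLES"  # baking needs the Cycles engine

high = bpy.data.objects["kitbash_high"]  # hypothetical high-poly object
low = bpy.data.objects["kitbash_low"]    # hypothetical low-poly object with UVs

# Select both, make the low-poly the active object ("selected to active" bake).
bpy.ops.object.select_all(action="DESELECT")
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low

bpy.ops.object.bake(type="NORMAL",
                    use_selected_to_active=True,
                    cage_extrusion=0.05)  # push the cage out a little to catch detail
```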
If you want to animate, that's a different story, but it may just require cutting the mesh up a little to create a couple of new loops for the joints, plus a bit of cleaning. Again, it's too case-specific to go into, but sometimes a model can be made easy for animators to set up with a few little cuts here and there.
Ultimately, the real question at the end of all this is: is it worth doing any of it, or is it better to just make a proper real-time 3D model from the beginning?
The main issues I can see (without examining it too closely) are bad topology - inconsistent detail, with triangle sizes varying wildly in relation to each other - plus intersections and hidden geometry, which not only waste triangles but can cause unnecessary overdraw. Neither issue is very easy to fix, but if anything can, MeshLab would be the best (free) bet.
It has a lot of remeshing/resurfacing functions that could take a kit-bashed mess and turn it into something suitable for a real-time engine:-
http://www.meshlab.net/
It would also be possible to just automatically unwrap UVs and bake the details onto the model. It's never going to be "great", but with enough time and care something might end up "suitable" to some degree, at least.
I honestly believe this is the way to go, at least when working with a stranger, even if they seem to have a portfolio and apparent reputation.
Split the job into small, manageable pieces, all planned and documented, and begin piece by piece with very small payments. Save the pricey/difficult things until last. Every job would be priced per task, regardless of the time it takes, because there are always excuses for time but the tasks remain. This also has the added benefit of letting you see the work in progress and make iterations along the way.
Anyone not willing to work like that is not someone I could possibly trust. I've been burned before and ever since, I completely insist on doing things this way. Someone with real integrity and skills should not have a problem with it.
Here's what I came up with:-
http://www.violae.net/temp/pyrolbrwnplstcwndw1.zip
If it's deemed good enough an effort, for the short amount of time I spent on it, I will upload to this site again (sometime), under the same public domain license.
What do you guys think?
I think MedicineStorm has the right idea: make it a fundamental function of the website itself to gauge whether a user has the potential to be an active and capable developer right from the moment they sign up. The registration process itself should make it pretty clear that it's about getting teams together and getting projects finished - a community effort, and nothing less. In fact, it would be funny if "game designer" was automatically ticked and greyed out.
I don't think having a closed "elitist" section is good, anyone should be able to read any part of the site, even if they can't post there. It would be a great learning tool in itself, for everyone to read how the doers are actually doing the do. :) So long as people can update their profile at any time, at any moment they could be classified as a "machine" in the "development factory" instead of a "talker" in the "junkyard of dreams".
There's a big difference between outsourcing assets (which pretty much everyone does) and making an "asset flip" - the key word being 'flip', as in selling something on without adding any value to it. If you build a game using outsourced assets, and the game is more than just those assets, then you can't be accused of flipping them, because you've added value. It's a term that gets thrown around a lot by people who don't understand that distinction, like mistaking "old school" as meaning just "old", etc... Not the kind of people you'd ever make happy anyway; they get off on being unhappy, I believe.
My approach is to keep outsourcing to generic materials and little bits of scenery, but make sure the characters/weapons/vehicles/items/GUI - anything that brings real character to the project - stay unique and original.
As for the resource question, it really depends on your engine and the target platform - whether overdraw is more of a problem than texture memory and overall project size, for instance. It's really difficult to answer without more technical details about the project.