Currently I'm training a LoRA to create a tileset, and so far it's going well. I was using this as inspiration, but I decided to ditch that entire tileset because it was more 3D-ish than flat 2D.
I drew my own initial tiles and used Tilesetter to create all the tile variations. I decided to make a first minimalistic tileset as v1. In v2 I took all the repeating tiles and kept only a single instance of each; you can rotate those, which saves some space for the future.
For v3, what I want to create is, if possible, a one-click entire tileset with repeating tiles that have variations like cracks, flowers, snow, dirt, vines, etc., depending on what type of tile it is. Currently building the "blueprint".
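One way to sketch that v3 idea is to enumerate base tile types and the variation overlays that apply to them, so a generator knows which combinations to produce. The tile names and the mapping below are hypothetical placeholders; only the variation names come from the list above.

```python
from itertools import combinations

# Hypothetical mapping of base tile types to the variation overlays
# that make sense for them (variation names from the examples above).
VARIATIONS = {
    "stone": ["cracks", "vines", "snow"],
    "grass": ["flowers", "dirt", "snow"],
}

def tile_variants(tile_type, max_overlays=2):
    """List every combination of up to `max_overlays` overlays for a tile."""
    overlays = VARIATIONS.get(tile_type, [])
    variants = [()]  # the plain tile with no overlay
    for n in range(1, max_overlays + 1):
        variants.extend(combinations(overlays, n))
    return variants

for combo in tile_variants("stone"):
    print("stone" + ("" if not combo else " + " + " + ".join(combo)))
```

With three overlays and up to two per tile, each base tile expands into seven variants, which gives a feel for how fast a "one-click" tileset grows.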
So based on what I've learned, you can create an animation generator by training your own LoRA/finetuned model, but you need to create every animation in the same format, with the same height and width. Otherwise the LoRA/model won't know what is where; it will get confused and start producing something that is NOT a spritesheet.
For an AI to learn something, you feed it objects in different rotations, colors, and so on. Take an apple: is it fully red? Is it rotten? Does it have holes? Each state has to be a separate item in the dataset, with a text file containing tags like "apple, worm, red, organic". So you can teach it whatever you want; it just requires a decent GPU and, more than anything, time.
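As a rough sketch of that caption setup (assuming the common LoRA-trainer convention of one `.txt` caption file per image, sharing the image's base name; the file names and tags here are made up):

```python
from pathlib import Path

# Hypothetical dataset: image file name -> tags describing that state.
captions = {
    "apple_01.png": ["apple", "red", "organic"],
    "apple_02.png": ["apple", "worm", "rotten", "holes"],
}

dataset_dir = Path("dataset")
dataset_dir.mkdir(exist_ok=True)

for image_name, tags in captions.items():
    # Most LoRA trainers read a comma-separated tag list from a .txt
    # file with the same stem as the image.
    caption_file = dataset_dir / Path(image_name).with_suffix(".txt").name
    caption_file.write_text(", ".join(tags))

print((dataset_dir / "apple_02.txt").read_text())
```

Every distinct state (rotten, holey, etc.) gets its own image and its own tag file, which is exactly why dataset prep eats most of the time.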
If you want to generate animations with AI, you can. You would probably spend 90% of the time just setting up the dataset before training, and about 10% on trial and error. But there is always a smarter way.
I use trigger words and a LoRA (smaller in size), and hopefully one day I can train my own model.
For example, to create a dataset you first need to figure out what you want. The entire set? What goes into it: walking? Running? Jumping? Slashing? Dying? Reviving? Casting spells? ...the list can get pretty big. A spritesheet covering all of that would be really big and kind of impossible right now.
You can do a spritesheet of 1536x1536 as the highest base resolution. I'm using a 1024px SDXL model as the base for the LoRA.
So now you have a canvas to work with. Pick a size your PC can generate and stick with it for the beginning. To set up an animation, let's use walking as an example. In 1024 x 1024 px we can fit frames in roughly 128 x 128 px cells; remove 16 px for spacing and you have about 112 x 112 px to work with per frame. Since space is limited you can fit up to 64 frames, but not all frames have to be used. You can generate 8 animations, one per row, with 8 columns of frames each, or 4 animations with more frames each. You get the point.
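The layout math above can be sketched like this, using the example numbers (a 1024 px square sheet, 128 px cells, 16 px reserved for spacing):

```python
def sheet_layout(sheet_px=1024, cell_px=128, spacing_px=16):
    """Work out the frame grid for a square spritesheet.

    Each frame lives in a cell_px x cell_px cell; spacing_px of the
    cell is reserved as padding, leaving the rest as drawable area.
    """
    cols = sheet_px // cell_px
    rows = sheet_px // cell_px
    return {
        "cols": cols,
        "rows": rows,
        "frames": cols * rows,
        "drawable_px": cell_px - spacing_px,
    }

print(sheet_layout())
# 8 columns x 8 rows = 64 frames, 112 px of drawable area per frame
```

Swapping in other cell sizes shows the trade-off directly: bigger frames mean fewer animations per sheet.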
Now let's say you created your first "blueprint". There is more than one way to create the rest of the variations: a hard, time-consuming way, and a quick and dirty way that may give lower quality.
Hard way
Create all variations of animations as blueprints: walking, dying, sprinting, jumping... whatever you wish (it takes time). Then use each blueprint as a transparent layer to draw other characters in the same animations, so that you end up with 10-15 datasets of different characters in those animations.
Easy way
Use the created blueprint and use AI to generate similar animations with different characters, then use those generated images as the dataset. Use ControlNet for pose and Canny for coherence. *Canny will try to draw a similar guy.* I recommend googling what you can get out of the various ControlNets. Depth can be good for telling what is in front and what is behind... useful for legs/arms.
Note
Once you've created a "blueprint", you can create other effects. For example, create a naked dude/girl (with some cloth) and draw over it, with the animation as an overlay behind it, say a sword swing. Or different shoes, armor, etc. that match those animations. This way, by having a "blueprint", you can quickly add/remove the things you need.
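That overlay idea can be sketched as a simple layer stack: a base body blueprint with interchangeable gear layers composited over it. The layer names and the single-character "pixels" below are hypothetical, just to keep the sketch tiny; `None` stands for a transparent pixel.

```python
def composite(layers):
    """Flatten a list of equally sized pixel grids; later layers draw on top."""
    result = [row[:] for row in layers[0]]  # start from a copy of the base
    for layer in layers[1:]:
        for y, row in enumerate(layer):
            for x, px in enumerate(row):
                if px is not None:  # skip transparent pixels
                    result[y][x] = px
    return result

# 2x2 toy frame: "b" = body, "a" = armor, "s" = sword swing overlay.
body  = [["b", "b"], ["b", "b"]]
armor = [[None, "a"], [None, None]]
sword = [[None, None], ["s", None]]

print(composite([body, armor, sword]))  # [['b', 'a'], ['s', 'b']]
```

Swapping `armor` for a different gear layer regenerates the frame without redrawing the body, which is the whole point of keeping the blueprint separate.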
Creating other assets with AI is not just about prompts but about using your imagination. I'm using Krita with AI; I do have a prompt, but for the image I have in mind I just draw its shape, and depending on the "noise" it can generate something existing or something completely new. Mostly you want to go into real detail about your characters. Where are the legs and arms, what is where, is the hair growing out of the hat? Are the ears on the sides outside the hat, or are they merged? AI, as of right now, won't create a correct sprite or picture out of the box even with a perfect dataset; you have to correct it, either by drawing yourself or by inpainting with prompts.
You will notice that creating with AI is more about creating the dataset than about training any LoRA/model. Training is more like a test to see if it works, to see if what you created pleases you. Is it flawless? That is debatable.
How can I add it to the Artificial Intelligence Assisted Artwork collection? I.e., how do I mark my own work as that?