under current terms i believe content made using firefly is allowed for non-commercial use only; when i fiddled with it, it applied a no-commercial-use watermark to my images (see the attachment above)
i do think it is really cool that companies are paying attention to the debate and trying to do this in a way that doesn't violate anyone's rights.
i think in the future we will see more open datasets.
i did some perusing, and there is no transparency about the dataset that i can see. "adobe says" is the only thing i can find.
from the FAQ:
"
The current Firefly generative AI model is trained on a dataset of Adobe Stock, along with openly licensed work and public domain content where copyright has expired.
As Firefly evolves, Adobe is exploring ways for creators to be able to train the machine learning model with their own assets so they can generate content that matches their unique style, branding, and design language without the influence of other creators’ content. Adobe will continue to listen to and work with the creative community to address future developments to the Firefly training models.
With the Content Authenticity Initiative (CAI), Adobe is setting the industry standard for responsible generative AI. With more than 900 members today, the CAI is leading the conversation around digital content attribution. The CAI offers free, publicly available open source tools and is working on a widely adopted technical standard in collaboration with leading technology organizations through the nonprofit Coalition for Content Provenance and Authenticity (C2PA).
Firefly will automatically attach a tag in the embedded Content Credentials to make AI-generated art and content easily distinguishable from work created without generative AI.
"
however i can't find any "demonstration" of their claims. i mean, they obviously own the rights to adobe stock images, but i guess we are just supposed to trust them that the rest of the dataset is openly licensed or public domain.
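the content credentials part, at least, is something you can poke at yourself. here is a rough python sketch of how you might dump an image's embedded content credentials with the CAI's open source c2patool and look for a generative-ai marker. my assumptions (not stated in the faq): c2patool is installed and on your PATH, and the ai tag shows up as the IPTC "trainedAlgorithmicMedia" source type somewhere in the manifest json; that field name is my guess.

# sketch: inspect an image's embedded content credentials (c2pa manifest)
# assumption: the CAI's open source c2patool CLI is installed and on PATH
# assumption: the generative-ai tag appears as the IPTC digital source type
#             "trainedAlgorithmicMedia" somewhere in the manifest JSON
import json
import subprocess
import sys

def read_manifest(image_path: str) -> dict:
    # "c2patool <file>" prints the embedded manifest store as JSON
    result = subprocess.run(["c2patool", image_path],
                            capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

def mentions_generative_ai(node) -> bool:
    # recursively search the manifest JSON for the assumed marker string
    if isinstance(node, dict):
        return any(mentions_generative_ai(v) for v in node.values())
    if isinstance(node, list):
        return any(mentions_generative_ai(v) for v in node)
    return isinstance(node, str) and "trainedAlgorithmicMedia" in node

if __name__ == "__main__":
    manifest = read_manifest(sys.argv[1])
    print(json.dumps(manifest, indent=2))
    print("generative-ai marker found:", mentions_generative_ai(manifest))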
but since my first generation with it churned out this thing, it can't be all bad:
thanks for the detailed translations!
any indication of what these tags actually translate to?
very nice. i like what you did with cethiel's weapons, it's cool that you did it with these too!
oh you are shatteredreality on discord?
i'm glad you ended up posting these, they look awesome!
i had never heard of your mindcramps model editor; i am checking it out now, and it seems interesting.
that's very nifty.
how much prodding and prompt tweaking did it take for you to get that poem? did it just spit it out as is?
yah i guess you could make visual novels and some stuff with only ai-generated content. but you're not going to get animated characters, functional pixel art, or much that is coherent. at least i have not, after spending hours playing with different algorithms.
and i bet there are grannies out there trying to make games. at least i hope so.
but to put real artists out of work? i don't think so. the way these algorithms work is imitative. if real people stop making real art, then the algorithms will have nothing new to imitate.
my biggest issue is that i believe the datasets these algorithms use (and from a tech perspective, the ai algorithm and the training dataset are two separate things) are stealing from artists. just because someone posted their painting on artstation or some such does not give tech-bros in the "disruption" industry the right to put their work, without permission or credit, into a training dataset. the art that is created from that dataset is a derivative work of stolen art.
ai art is cool. it is also cool to be able to play old retro games without owning the original cartridge, or to use public roadways without paying taxes. "cool" does not make something right or legal.
i think it is most useful for "sketching" out concepts and ideas. produce game-ready art? i have spent numerous hours playing with multiple ai art algorithms, researched and tested prompt engineering, and i have never generated anything that was useful to me out of the box with no editing needed on my part.