"What does 'impregnable' mean? Does it mean you can't get it pregnant? Oooooh!!! It means you can't get through it" - IRyS

AI Waifus and You 101

The Proctor

Manager Arc Unlocked?
Staff member
Lovebug Proctologist
Joined:  Sep 9, 2022
You've got an idea of one of the potential next steps for this technology. The "AI" we have doesn't have enough input data, or alternatively doesn't have the hard-coded approaches like what you mentioned about giving them a sort of blueprint for anatomy. So they end up doing well on simpler body parts like arms, legs and torsos, which look similar from different perspectives, relative to hands or feet (think about the number of different overlaps between digits and how fingers can be hidden). Of course, the issue with any sort of "blueprint" is making it abstract enough to be used across different art styles and creatures (probably very far off for the latter, for obvious reasons). Which loops back to the reason an AI was made (first/successfully) for digital art instead of a hard-coded approach: it's a lot easier to slap a bunch of labels onto pictures and data (the math + code wasn't TOO hard in the grand scheme of things).
Edit: Alien abduction and replacement confirmed, though it seems that the aliens forgot that Nene isn't a double-H or whatever those things are, orrrr that humans only have 5 toes on each foot...

This reminds me of how some of the more advanced 2D sprites were made during the awkward transition between 2D and 3D gaming. Some companies made 3D models of all the characters, then just took a bunch of stills of them in motion and turned those into sprites. Feels like that kind of mindset and methodology would be a logical progression here, too: create a basic wireframe 3D body model with all the right features and proportions, let the AI puppet it into a desired position, then paint a 2D image over the top, using the camera perspective as a reference point.
 

Fucking YTs

I just want to annoy people in peace.
Early Adopter
Joined:  Sep 11, 2022
Here's some Junji Ito-inspired AI art I generated tonight.

IRyS
00045-337491786-sketch, detail___.png
Pekora
00051-2591915955-sketch, detai___.png00050-2528412228-sketch, detai___.png
Mori
00054-6834801-sketch, detailed___.png
Ame
00056-759729827-sketch, detail___.png
Fauna
00058-1518630032-sketch, detai___.png00057-336726893-sketch, detail___.png
Gura
00059-2321282339-sketch, detai___.png
Mumei
00063-3428494064-sketch, detai___.png
Was supposed to be a group photo, but didn't really work, or did it?
00053-1870727073-sketch, detai___.png
 
Last edited:

Jan K. Hater

Active member
Joined:  Sep 19, 2022
This reminds me of how some of the more advanced 2D sprites were made during the awkward transition between 2D and 3D gaming. Some companies made 3D models of all the characters, then just took a bunch of stills of them in motion and turned those into sprites. Feels like that kind of mindset and methodology would be a logical progression here, too: create a basic wireframe 3D body model with all the right features and proportions, let the AI puppet it into a desired position, then paint a 2D image over the top, using the camera perspective as a reference point.
Yeah, you can do this. There's img2img generation, and also inpainting, which is img2img on small chunks of a picture. People have also started using simple 3D figures in DAZ or Blender to get a base look set.
unknown.png
Here, I was trying to make off-brand Pekos and drew in simple shapes for the carrots and don, then re-imported and inpainted them.
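For anyone curious what the denoising-strength slider is actually doing when you inpaint over drawn-in shapes like those carrots: img2img pushes the init image partway back into the noise schedule and denoises from there. A rough sketch of the bookkeeping (the function and numbers here are illustrative, loosely mirroring how diffusers-style img2img pipelines compute it, not the actual WebUI code):

```python
# Toy sketch of img2img scheduling: "denoising strength" decides how far
# back into the noise schedule the init image is pushed before denoising
# resumes. At strength 0 nothing changes; at strength 1 it's basically txt2img.

def img2img_schedule(total_steps: int, strength: float) -> tuple[int, int]:
    """Return (start_step, steps_actually_run) for a given strength.

    Only the last `strength * total_steps` denoising steps are executed.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    steps_run = min(int(total_steps * strength), total_steps)
    start_step = total_steps - steps_run
    return start_step, steps_run

# Low strength keeps the drawn-in shapes mostly intact; high strength repaints freely.
print(img2img_schedule(30, 0.3))   # (21, 9)  - small touch-up
print(img2img_schedule(30, 0.75))  # (8, 22)  - heavy repaint
```

This is also why a low strength preserves your crude blobs' placement and colour while still redrawing the details on top of them.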
 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
New sheriff in town.

1667401851499.png
 

The Proctor

Manager Arc Unlocked?
Staff member
Lovebug Proctologist
Joined:  Sep 9, 2022
It has a weird issue with mouths, I notice, which is a funny thing to get wrong considering it's so trivial.

Edit: Y'know, actually I wonder if there'll ever be a marketplace for after-market 'touch-ups', i.e. when you get a picture that's NEARLY perfect but not QUITE, so you slip a starving drawfag a fiver to touch it up.
 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
It has a weird issue with mouths, I notice, which is a funny thing to get wrong considering it's so trivial.

Edit: Y'know, actually I wonder if there'll ever be a marketplace for after-market 'touch-ups', i.e. when you get a picture that's NEARLY perfect but not QUITE, so you slip a starving drawfag a fiver to touch it up.
Mouths are very dependent on what you use as expressions; I just usually stick to the same ones I know, because I'm too lazy to look up the different ones. The zip file I added in the original post has a good selection, though.
 

BioBreak

You gonna eat that?
Early Adopter
Joined:  Sep 14, 2022
The novelty kinda wore off for me after a week or two of playing around with it, but for those of you who want to get more "in the weeds" and autistic about it, here are a few resources:

  • VOLDY RETARD GUIDE (The definitive Stable Diffusion experience)

  • Stable Diffusion Models

 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
To breathe some life into the topic, would people be interested in a monthly/weekly theme or member-focused challenge? Non-competitive and mainly to show off.
 

Fucking YTs

I just want to annoy people in peace.
Early Adopter
Joined:  Sep 11, 2022
To breathe some life into the topic, would people be interested in a monthly/weekly theme or member-focused challenge? Non-competitive and mainly to show off.
I'm down for that, sounds fun. I don't think I've really gotten anywhere, but it could be a fun experiment. I'll be gone a lot in December, but I think this would be a good way to keep this alive.
 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
Since it's November, maybe war-themed due to Remembrance Day or Veterans Day. There's Thanksgiving, but there's only so much you can do beyond dressing them up as Native Americans or Pilgrims.

1668871506438.png07758-1909526586.png1668893399156.png

08091-737922943.png
 

The Proctor

Manager Arc Unlocked?
Staff member
Lovebug Proctologist
Joined:  Sep 9, 2022
Botan as the God Emperor of Mankind is a fanfic I'd read. But then again it'd be the most boring fanfic ever. She'd defeat everyone else in the galaxy in about an hour, tops.
 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
Gyaru Suisei, no need to thank me.

09118-2694069374.png

1669075146619.png
 

Azehara

Well-known member
!!Foot Dox Confirmed!!
Early Adopter
Joined:  Sep 11, 2022
Tried using the WebUI, but it was straining my GPU too much, so I just use the NovelAI website for the most part.

Been using it for thumbnails. Also used it to create the Baked Fresh lewd shitpost and one of the photos for the catgirl.

1girl, facing viewer, blue eyes, [grey eyes], white hair, medium hair, hair bow, s-904332640 (1).pngPaprika Lewds Final.pngcat ear headset, game controller, {{white background}}, gyaru, {{blonde hair}},  s-337555879.png

Overall I'm really loving this whole AI thing, as it's basically a waifu gacha that you can also use for shit like D&D character creation, background art generation or even Vtuber creation. I used about five different photos of hair type, eye type and body type to pass along to an artist, for them to create a Vtuber model for me.

_kantai collection, {{mexican flag}} skindentation, medium breasts, wide hips, curvy, battle, ...png_kantai collection, ha-class destroyer, ship_girl, skin tight, cannon,  s-1445520565.pngShip Pippa AI.png

1girl, mechanical arms, artificial eye, jacket, crop top, thong, cyborg, markings, multicolore...png1girl, {masterpiece}, perfect face, {{{android}}}, robot, attached to machine, electric plug, ...png1girl, {masterpiece}, perfect face, damaged, {{{android}}}, multicolored hair, neon, single me...png

1girl, {perfect face}, {masterpiece}, {concept art}, tiefling, devil, tail, {{re s-3302948275.png1girl, {perfect face}, {masterpiece}, {concept art}, tiefling, devil, tail, {{{r s-197088695.png



(5girls, muscular female, dark skin, tomboy, midriff, crop top, shot hair), AND, s-1216015324.png

I really want to work on this one more. Make them all tan and massive.
 

Clem the Gem

Unknown member
Early Adopter
Joined:  Sep 10, 2022
These are my prompts for my few Pekora generations, but hopefully they give everyone a good idea of what to look for and how to describe them.

((Usada Pekora)), (Hololive), (Rabbit girl), virtual youtuber, rabbit ears, brown eyes, blue hair, white hair, twintails braid, bangs, crossed bangs, ((Yuuki Hagure)),

This is just Pekora, no setting, camera angle, clothes, etc.

Something I like to do as well is to have a hard stop between the things I'm describing: starting with how I want the picture to look, who I want the picture to be of, what I want them to wear, and finally setting, lighting and camera settings or stuff I forget. So for example:

cinematic lighting, highres, best quality, masterpiece, (These are always safe bets)
((Usada Pekora)), (Hololive), (Rabbit girl), virtual youtuber, rabbit ears, brown eyes, blue hair, white hair, twintails braid, bangs, crossed bangs, multicolored hair, ((Yuuki Hagure)),
small breasts, denim shorts, bomber jacket, white leotard, leotard under clothes, belt, black gloves, single thighhigh, holster, waving, fingerless gloves, sunglasses,
1girl, solo, official art, cowboy shot, outdoors, rooftop, rain, dutch angle, contrapposto
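Worth spelling out what all those parentheses are doing: in the AUTOMATIC1111 WebUI they're emphasis syntax, not decoration. Each () layer multiplies the tag's attention by 1.1, each [] layer divides by 1.1, and (tag:1.5) sets a weight explicitly. A quick illustrative calculator (my own helper, not part of the WebUI):

```python
import re

def attention_weight(token: str) -> float:
    """Effective emphasis the WebUI applies to a parenthesised tag.

    Each () layer multiplies attention by 1.1, each [] layer divides
    by 1.1, and (tag:1.5) sets the weight explicitly.
    """
    explicit = re.fullmatch(r"\((?:[^:()]+):([\d.]+)\)", token)
    if explicit:
        return float(explicit.group(1))
    weight = 1.0
    while token.startswith("(") and token.endswith(")"):
        token, weight = token[1:-1], weight * 1.1
    while token.startswith("[") and token.endswith("]"):
        token, weight = token[1:-1], weight / 1.1
    return weight

print(round(attention_weight("((Usada Pekora))"), 2))  # 1.21
print(round(attention_weight("[smirk]"), 2))           # 0.91
print(attention_weight("(rabbit girl:1.3)"))           # 1.3
```

So ((Usada Pekora)) is roughly a 21% boost; stacking many layers pushes the weight up fast, which is why over-parenthesised prompts start distorting the image.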
Are you using the NovelAI model with additional settings to get your Hololive pictures, and does it take many tries to get a good result? I tried using your example
((Usada Pekora)), (Hololive), (Rabbit girl), virtual youtuber, rabbit ears, brown eyes, blue hair, white hair, twintails braid, bangs, crossed bangs, ((Yuuki Hagure)),
and only got results that looked nothing like the rabbit, so I must be doing something wrong.

I followed another guide a while ago which I believe used the WaifuDiffusion model (wd-v1-3-float16.ckpt) and had some fun playing with that, and now today I downloaded/tried the NovelAI animefull-final-pruned as per the guide to try and get some Hololive stuff, but it's not recognising the names I put in.

I can see the resemblance, but something seems a bit off...
wat.png
 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
Are you using the NovelAI model with additional settings to get your Hololive pictures, and does it take many tries to get a good result? I tried using your example
((Usada Pekora)), (Hololive), (Rabbit girl), virtual youtuber, rabbit ears, brown eyes, blue hair, white hair, twintails braid, bangs, crossed bangs, ((Yuuki Hagure)),
and only got results that looked nothing like the rabbit, so I must be doing something wrong.

I followed another guide a while ago which I believe used the WaifuDiffusion model (wd-v1-3-float16.ckpt) and had some fun playing with that, and now today I downloaded/tried the NovelAI animefull-final-pruned as per the guide to try and get some Hololive stuff, but it's not recognising the names I put in.
Recently I've been leaning more towards Anything 3.0 over NovelAI for most of my generating. But I will say that some members still work better in NovelAI, like Subaru. My settings are the same as the NovelAI part of the guide. For the most part, I stick to 28 Sampling Steps and 11 CFG Scale.

I'm not entirely sure of your situation, as in: are those the only prompts you're putting in, or are you adding things to them? Pekora, I will admit, is one of the harder members to get semi-accurate, since her design has so many things going on, to the point where I'm having a hard time replicating my own results, which makes me wonder if an update messed up how NAI works. But I've never had it flat-out create an animal hybrid like your Mio puppy. The problem might be your negative prompts, now that I think about it.

lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, extra arms, extra legs, extra feet, extra hands, loli, child, young,
Those are mine; you can obviously add or remove some.

Mio is a lot easier than Pekora luckily.

grid-1760-3073604840.pnggrid-1758-3073604840.png

Both with Anything 3.0, and these should still have the metadata intact. So if you download either and feed the image into the PNG Info tab in the WebUI, you can import the settings and prompts from it. But I'll post the prompts from the left one to give a better idea anyway.

cinematic lighting, absurd res, highres, best quality, masterpiece, ultra detailed, intricate, official art, 1 girl, solo,
((ookami mio)), ((hololive)), ((virtual youtuber)), (by Izumi Sai), fox girl, (black hair), fox ears, fox tail, bangs, hair between eyes, very long hair,
(short sheer sundress), (flower pattern), see through, busty, slender body, (thin), hourglass shape, choker, glasses, [smirk], looking away, sandals, ((looking away)), off shoulders, (hands behind head),
cowboy shot, dutch angle, outdoors, park, sideways, extra wide shot
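The PNG Info trick works because the WebUI stores the whole generation setup as a text chunk (named "parameters") inside the PNG itself. A minimal sketch of reading and writing that chunk with Pillow (the chunk name is the WebUI's convention; the sample prompt and settings below are made up for the demo):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_parameters(path: str, parameters: str) -> None:
    """Write a generation-parameters text chunk into a PNG, A1111-style."""
    img = Image.open(path)
    img.load()  # load pixels before overwriting the same file
    meta = PngInfo()
    meta.add_text("parameters", parameters)
    img.save(path, pnginfo=meta)

def read_parameters(path: str):
    """Return the stored prompt/settings string, or None if absent."""
    return Image.open(path).text.get("parameters")

# Round-trip demo with a blank image and made-up settings.
Image.new("RGB", (64, 64)).save("demo.png")
embed_parameters("demo.png", "ookami mio, hololive\nSteps: 28, CFG scale: 11, Seed: 3073604840")
print(read_parameters("demo.png").splitlines()[0])  # ookami mio, hololive
```

This is also why re-saving a generation through an image editor usually strips the settings: most editors drop unknown text chunks on export.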

Now, you could also use embeds if you really want Pekora or other members, if you can find them. You put these into your embeddings folder and use them in the prompt, usually at the end, or wherever you want to experiment, by putting the name of the embed between <>. So something like <Kronii-10000> if the file is Kronii-10000.png. You can find embeds in a few places:

https://mega.nz/folder/23oAxTLD#vNH9tPQkiP1KCp72d2qINQ
https://forum.holopirates.moe/t/stable-diffusion-embeds/2103
 

Clem the Gem

Unknown member
Early Adopter
Joined:  Sep 10, 2022
Some real nice pictures you got there.

Looks like it was really just user error. I was being dumb, thinking I could use the bare minimum number of tags and get the character I wanted in some kind of random pose, but you really have to describe them, even if it's an already established character.

Using only the tags "masterpiece, best quality, hololive, ookami mio" it was a dice roll whether I got something looking kind of like Mio, or some weird animal thing:

Adding a couple of descriptive tags like "black hair, long hair, black and white hoodie" improved results a lot, though I still can't figure out how to get her signature hoodie, and the hair clip always ends up as some kind of random yellow shape. The general face, ears, bangs and eyes are usually pretty spot-on:


This was all with NovelAI, and I guess the default settings of 20 Sampling Steps, Euler a Sampling Method, CFG Scale 7 and no negative prompts. Using those same tags/settings with WaifuDiffusion gave results that looked nothing like Mio. Guess I'll try the Anything 3.0 model next, and will keep experimenting.
 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
Some real nice pictures you got there.

Looks like it was really just user error. I was being dumb, thinking I could use the bare minimum number of tags and get the character I wanted in some kind of random pose, but you really have to describe them, even if it's an already established character.

Using only the tags "masterpiece, best quality, hololive, ookami mio" it was a dice roll whether I got something looking kind of like Mio, or some weird animal thing:

Adding a couple of descriptive tags like "black hair, long hair, black and white hoodie" improved results a lot, though I still can't figure out how to get her signature hoodie, and the hair clip always ends up as some kind of random yellow shape. The general face, ears, bangs and eyes are usually pretty spot-on:


This was all with NovelAI, and I guess the default settings of 20 Sampling Steps, Euler a Sampling Method, CFG Scale 7 and no negative prompts. Using those same tags/settings with WaifuDiffusion gave results that looked nothing like Mio. Guess I'll try the Anything 3.0 model next, and will keep experimenting.
You might also want to play around with the resolution. Like doubling the width or height depending on whether you want her to stand, lie down, etc., since it surprisingly relies heavily on that context too.
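One gotcha when playing with resolution: the WebUI sliders move in steps of 64 for a reason. SD v1 models work on a latent 8x smaller than the image, and the UNet downsamples further, so arbitrary sizes get rejected or padded. A small helper for snapping a requested size to legal dimensions (illustrative; it assumes the common multiple-of-64 constraint):

```python
def snap_resolution(width: int, height: int, multiple: int = 64) -> tuple[int, int]:
    """Round a requested size to the nearest dimensions SD will accept."""
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)

# Tall portrait for standing poses, wide landscape for lying down.
print(snap_resolution(512, 1000))  # (512, 1024)
print(snap_resolution(900, 500))   # (896, 512)
```

Going far beyond 512 on both axes at once also tends to duplicate subjects, since v1 models were trained at 512x512, so stretching only one axis is the safer way to encode the pose.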
 

Clem the Gem

Unknown member
Early Adopter
Joined:  Sep 10, 2022
One thing I was a bit unsure about was how I should write my prompts. Should it be more of a descriptive sentence, or just a list of tags like you'd see on a booru site? Does it depend on the model you're using?
Maybe you can prove me wrong, but from what I've tried, it doesn't matter.

Here's a bunch of examples using a pretty basic prompt, and the same seed.

On the left we have "1 girl, long hair, blonde hair, pink yukata, sitting, bench, park"
and on the right, "a girl with long, blonde hair wearing a pink yukata sitting on a bench in the park"

Novel AI (925997e9)

Anything 3.0 (6569e224)

Waifu Diffusion 1.3 (84692140)

Stable Diffusion 1.4 (7460a6fa)

Another thing I'm curious about is the hardware requirements. I'm using an RTX 3070 8GB, and the images I've been generating take about 15 seconds to make, with virtually no activity shown on the hardware monitor. I thought this stuff was supposed to be demanding on the hardware?
I have found that occasionally the whole thing will hang and take a very long time to complete, using the exact same prompt and settings that I had been speeding through before.
 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
One thing I was a bit unsure about was how I should write my prompts. Should it be more of a descriptive sentence, or just a list of tags like you'd see on a booru site? Does it depend on the model you're using?
Maybe you can prove me wrong, but from what I've tried, it doesn't matter.

Here's a bunch of examples using a pretty basic prompt, and the same seed.

On the left we have "1 girl, long hair, blonde hair, pink yukata, sitting, bench, park"
and on the right, "a girl with long, blonde hair wearing a pink yukata sitting on a bench in the park"

Novel AI (925997e9)

Anything 3.0 (6569e224)

Waifu Diffusion 1.3 (84692140)

Stable Diffusion 1.4 (7460a6fa)

Another thing I'm curious about is the hardware requirements. I'm using an RTX 3070 8GB, and the images I've been generating take about 15 seconds to make, with virtually no activity shown on the hardware monitor. I thought this stuff was supposed to be demanding on the hardware?
I have found that occasionally the whole thing will hang and take a very long time to complete, using the exact same prompt and settings that I had been speeding through before.
Prompt style is a personal preference, really. I like to have a lot of control over what I generate, which usually includes Hololive members, who either require an embed, as I mentioned, or multiple descriptors, since adding "Oozora Subaru" only goes so far. So I lean towards using Danbooru-esque tags like you mentioned.

I take it those are default 512x512 pictures, which are fairly easy and quick to generate, though still demanding for anything under a GTX 1060. The more realistic use case is images twice that size in length/width or more, and in batches of 8, since even with a lot of prompts it still involves brute-forcing it and seeing if there's a result you like, most of the time at least. On my GTX 1080, generating 8 images at 512 by 1024 usually takes about 4-5 minutes. And if you're using the self-hosted WebUI, it doesn't show the GPU usage for some reason, but if you have anything like MSI Afterburner, it will definitely be noticeable in the GPU temps.

As for models, it's also very much personal preference. Like I said, I've been mainly sticking to Anything 3.0, but I tried some combinations today that were on https://rentry.org/hdgrecipes.

These are all the same seed and prompts: 50/50 Anything/NovelAI, NovelAI, berrymixv3, Anything V3.0, Anything + Everything V2. I've heard some of the mixes have more accurate hands and stuff.

grid-1824-245317149-cinematic lighting, highres, best quality, masterpiece, ultra detailed, in...pnggrid-1825-245317149-cinematic lighting, highres, best quality, masterpiece, ultra detailed, in...pnggrid-1826-245317149-cinematic lighting, highres, best quality, masterpiece, ultra detailed, in...pnggrid-1827-245317149-cinematic lighting, highres, best quality, masterpiece, ultra detailed, in...pnggrid-1828-245317149-cinematic lighting, highres, best quality, masterpiece, ultra detailed, in...png
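Side note on those filenames: the WebUI's default naming pattern bakes the seed in as the second number ([index]-[seed]-[prompt...]), which is handy for recovering a seed when the PNG metadata got stripped. A small parser (pattern assumed from the default output naming; not an official API):

```python
import re

def seed_from_filename(name: str):
    """Extract the seed from an A1111-style output filename, or None.

    Handles both single images (00051-SEED-...) and grids (grid-1824-SEED-...).
    """
    m = re.match(r"(?:grid-)?\d+-(\d+)-", name)
    return int(m.group(1)) if m else None

print(seed_from_filename("grid-1824-245317149-cinematic lighting, highres.png"))  # 245317149
print(seed_from_filename("00051-2591915955-sketch, detai.png"))                   # 2591915955
```

So all five grids above sharing "245317149" in their names is the same-seed comparison made visible in the filenames themselves.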
 

Clem the Gem

Unknown member
Early Adopter
Joined:  Sep 10, 2022
Prompt style is a personal preference, really. I like to have a lot of control over what I generate, which usually includes Hololive members, who either require an embed, as I mentioned, or multiple descriptors, since adding "Oozora Subaru" only goes so far. So I lean towards using Danbooru-esque tags like you mentioned.

I take it those are default 512x512 pictures, which are fairly easy and quick to generate, though still demanding for anything under a GTX 1060. The more realistic use case is images twice that size in length/width or more, and in batches of 8, since even with a lot of prompts it still involves brute-forcing it and seeing if there's a result you like, most of the time at least. On my GTX 1080, generating 8 images at 512 by 1024 usually takes about 4-5 minutes. And if you're using the self-hosted WebUI, it doesn't show the GPU usage for some reason, but if you have anything like MSI Afterburner, it will definitely be noticeable in the GPU temps.

As for models, it's also very much personal preference. Like I said, I've been mainly sticking to Anything 3.0, but I tried some combinations today that were on https://rentry.org/hdgrecipes.

These are all the same seed and prompts: 50/50 Anything/NovelAI, NovelAI, berrymixv3, Anything V3.0, Anything + Everything V2. I've heard some of the mixes have more accurate hands and stuff.

I've been doing something similar to what you do, organising tags by line: one line to describe the character's body, one for the image quality, one for what they're wearing and doing, one for the framing, and so on. Makes it easy to have one prompt you can whip out and adjust as necessary.

Oh yeah, those dumb example pics were 512x512, and took about 5 seconds to generate. I copied the settings for one of your Subaru pictures up there and that took about 13 seconds for a single one, or about 40 for the grid of 4. I guess it's good to see my graphics card I spent way too much on is actually getting used for something!

I've also found the Anything 3.0 model to be the best so far, but now I'm going to have to look at these mixed models, which I didn't know were a thing.
The next thing to look at will be embedding/training to help generate those chuubas that the AI just can't seem to handle. But that's a whole thing I've not even begun to look into yet.

I've already spent way more time than I thought I would, but it really is a lot of fun seeing what results you can get. Just now I thought I'd share a couple of pictures, thinking I'd got the perfect one, if not for that one glitch with the hair, or an extra finger or whatever. So you roll the dice again and again, change some of the tags slightly, and suddenly an hour has passed.

Here's what I ended up with anyway, errors and all.


Something interesting to note on that last image (other than the mangled fingers) is that I added the tag "yellow eyes" right before "white tshirt", and this resulted in a yellow tshirt every time. So maybe the tags aren't being read completely logically.
 