"take dawn sord! be frendry!!!!"Usada Pekora
  • :[💰 User Sponsored Segment 💰]:


    🐻🌹Hey there, Guest, Do you like magic tricks? Well, at 6PM EST, vtuber Charlottexbear will be performing a MAGIC TRICK where she turns from a very hot bear with huge boing-boings into a really smol and cute 🦇VAMPIRE bear!🦇 Go check out this marvelous transformation yourself, or the Mexicans will be very upset...

  • 💰💰💰Gentlemen, it is bill-paying day. That means I am doing a Bernie Sanders and asking for your financial support. The banner ad is already taken, but remember you can still shill clips on the sidebar video. 💰💰💰

AI Waifus and You 101

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
Where I saw this - reddit
Is this something that vtubers could use as a cheap 3D? There is mocopi but you still need a 3D model. Just curious as I know jack about either.
In time, probably. I feel the best current use case for AI would be improving camera-based tracking, e.g. cleaning up visual noise or doing automatic depth mapping. But I'm not sure if any of the current face/body tracking software implements something like that.

The video/animation side of it will probably take a few years to get to that level. Each frame takes a few seconds to generate even on a high-end consumer GPU, so it's not really practical right now unless you have an immense budget behind you that can afford whatever the AI/deep-learning equivalent of a render farm is.
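To put some rough numbers on that (the ~5 seconds per 512x512 frame below is just a ballpark assumption, not a benchmark):
Code:
# Back-of-the-envelope: one minute of 24 fps animation, generated frame by frame.
SECONDS_PER_FRAME = 5   # assumed average on a decent consumer GPU
FPS = 24
CLIP_SECONDS = 60

total_frames = FPS * CLIP_SECONDS
hours = total_frames * SECONDS_PER_FRAME / 3600
print(f"{total_frames} frames -> about {hours:.1f} hours of GPU time")
# 1440 frames -> about 2.0 hours of GPU time
And that's before any upscaling or redoing frames that come out wrong.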
 

Clem the Gem

Unknown member
Early Adopter
Joined:  Sep 10, 2022
Where I saw this - reddit
Is this something that vtubers could use as a cheap 3D? There is mocopi but you still need a 3D model. Just curious as I know jack about either.
Don't know if you're aware, but if it's just a 3D model with tracking you're after, there is already free software to do it using just a basic webcam for head tracking. Programs like VRoid Studio let you make your 3D character very easily, but you do run the risk of looking like every other indie vtuber, since you're essentially just slapping different hair, eyes and clothes on a pre-made figure.

Free programs I've tried are VSeeFace, LoLive and 3teneFree

Don't worry, Japan has you covered.

It's got Gura in a bikini, so I'm spoilering it. IMG2IMG running over the top of an MMD animation.



At the start it looked a bit like a flat 2D plate face stuck on a 3D body, so in other words a pretty good representation of the real Gura!
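For anyone curious what that sort of pipeline looks like in script form, here's a minimal sketch of frame-by-frame img2img with the diffusers library. To be clear, this is not the original creator's setup; the model ID, folder names, prompt and strength value are all placeholder assumptions:
Code:
# Sketch: run img2img over every frame of an exported MMD render.
# Folders, model ID, prompt and strength are placeholders for illustration.
from pathlib import Path
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "gawr gura, 1girl, bikini, beach, best quality"
negative = "worst quality, low quality, bad anatomy"

out_dir = Path("frames_out")
out_dir.mkdir(exist_ok=True)

for frame in sorted(Path("frames_in").glob("*.png")):
    init = Image.open(frame).convert("RGB").resize((512, 512))
    gen = torch.Generator("cuda").manual_seed(1234)  # same seed each frame to cut down on flicker
    result = pipe(prompt=prompt, negative_prompt=negative, image=init,
                  strength=0.45, guidance_scale=7, generator=gen).images[0]
    result.save(out_dir / frame.name)
Keeping the denoising strength low and reusing the same seed every frame is what stops it turning into a flickering mess, though it will still shimmer a bit.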
 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
So I gave the animated stuff a try myself: not IMG2IMG or anything, just using the same seed and changing the prompts. It's not great, but I'm happy with it for a first attempt. I will emphasize that this is NSFW, and if you are a Mio-fam, probably best not to open it. The only reason I went with that was that I saw the Face Poser (NSFW) Lora and figured it would make a nice test.

MioTest.gif

While trying to fine-tune I got this picture which I thought was surprisingly neat. Not inherently NSFW but I mean if the animation is anything to go by, lewd.
22852-3982954333-portrait20cropped20breasts20open20mouth20face20close-up2020face20focus20strai...png
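For anyone who wants to try the same-seed-different-prompt trick outside the WebUI, it looks roughly like this with diffusers. The gif above was done through the WebUI, not this script, so treat the model ID, expression tags and gif timing here as placeholder assumptions:
Code:
# Sketch: fixed seed and settings, only the expression tags change per frame.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = "ookami mio, 1girl, portrait, face focus, {expr}, best quality"
expressions = ["closed mouth", "slight smile", "open mouth", "half-closed eyes"]

frames = []
for expr in expressions:
    gen = torch.Generator("cuda").manual_seed(1234)  # identical seed every frame
    img = pipe(base.format(expr=expr),
               negative_prompt="worst quality, low quality",
               generator=gen, num_inference_steps=25, guidance_scale=7).images[0]
    frames.append(img)

frames[0].save("MioTest.gif", save_all=True, append_images=frames[1:],
               duration=150, loop=0)
Because the seed and everything else stay fixed, the composition barely moves and only the face changes, which is what makes it read as animation.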
 

Clem the Gem

Unknown member
Early Adopter
Joined:  Sep 10, 2022
So I gave the animated stuff a try myself: not IMG2IMG or anything, just using the same seed and changing the prompts. It's not great, but I'm happy with it for a first attempt. I will emphasize that this is NSFW, and if you are a Mio-fam, probably best not to open it. The only reason I went with that was that I saw the Face Poser (NSFW) Lora and figured it would make a nice test.


While trying to fine-tune I got this picture which I thought was surprisingly neat. Not inherently NSFW but I mean if the animation is anything to go by, lewd.
My god, what have you done... I thought I was done downloading new things for a bit.
 

RestlessRain

Well-known member
Early Adopter
Joined:  Sep 21, 2022
Spotted this artwork of Lia on Twitter, which I thought was really good. This is probably the best thread to share it in.

FtzGV_EX0AIyEja
FtzGV_CWcAAIgKL
FtzGV_BXoAI1sod

Twitter
 

Azehara

Well-known member
!!Foot Dox Confirmed!!
Early Adopter
Joined:  Sep 11, 2022
That's pretty cool. That said, I'm not a fan of the more realistic-type images. I need to get off my ass and fix my computer; there is a guy who has refined the Pippa model (not really a model, the actual term escapes me at the moment; basically the sample of photos that the AI uses to recreate a character).

AI Pippa Anon:
pcg_AInon (twitter)
FttMk21XgAIELPn


Ft7Tz47XoAUuttK
 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
That's pretty cool. That said, I'm not a fan of the more realistic-type images. I need to get off my ass and fix my computer; there is a guy who has refined the Pippa model (not really a model, the actual term escapes me at the moment; basically the sample of photos that the AI uses to recreate a character).

AI Pippa Anon:
pcg_AInon (twitter)
FttMk21XgAIELPn


Ft7Tz47XoAUuttK
There are 2 Pippa Loras publicly available; that's how I made all of mine. I will say I don't think either is trained on her new outfit, so either he has a different one or he brute-forces the new outfit with some specific prompts and/or by adding an Azure Lane Lora.

 

Azehara

Well-known member
!!Foot Dox Confirmed!!
Early Adopter
Joined:  Sep 11, 2022
He has probably updated his by training the new ship model into it.



EDIT: Posted in the other thread but it's more suitable for this one. Looking into the whole Balenciaga thing. Will need to add more voice clips of her speaking to get it to sound better.

_1girl, rabbit ears, pink hair, medium hair, pink eyes, hair between eyes, smile s-3995354644.png
 
Last edited:

RestlessRain

Well-known member
Early Adopter
Joined:  Sep 21, 2022
So I've been trying to figure out how to do AI art and I'm not quite able to follow the guidelines on the first post. I wouldn't mind some tips to get set up.

Are there any simple instructions that can be followed? I want to use a set of pictures of a vtuber since tags don't quite manage to get all their features right. Alternatively, I would like to start with a base picture that can be modified.
 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
So I've been trying to figure out how to do AI art and I'm not quite able to follow the guidelines on the first post. I wouldn't mind some tips to get set up.

Are there any simple instructions that can be followed? I want to use a set of pictures of a vtuber since tags don't quite manage to get all their features right. Alternatively, I would like to start with a base picture that can be modified.
Could you give me the base image and what you want from it? I could give it a try and make a sort of step-by-step guide to the end product. (If within my skill range)
 

RestlessRain

Well-known member
Early Adopter
Joined:  Sep 21, 2022
Could you give me the base image and what you want from it? I could give it a try and make a sort of step-by-step guide to the end product. (If within my skill range)
I'll try to describe where I am in more detail, perhaps you can tell me what I'm not doing right.

I am able to get onto the Stable Diffusion offline web site LAN address (the http://127.0.0.1:7860/ address), and I can generate basic images.

However, I'm not sure how to upload specific models, nor how to create them. For example, I've downloaded models for Tenma Maemi and Pipkin Pippa from the civitAI site. I have these as the files "pipkinPippa_pippaV10.safetensors" and "maemiTenma_tenmaV10.safetensors", and I can't figure out how to use them to start generating custom images based off of Pippa or Tenma. Where do these files go, and what prompts do I use to tell the AI "use the Pipkin Pippa information"?

Once I'm proficient in generating this image, I'll try to work out how to create my own model.
 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
I'll try to describe where I am in more detail, perhaps you can tell me what I'm not doing right.

I am able to get onto the Stable Diffusion offline web site LAN address (the http://127.0.0.1:7860/ address), and I can generate basic images.

However, I'm not sure how to upload specific models, nor how to create them. For example, I've downloaded models for Tenma Maemi and Pipkin Pippa from the civitAI site. I have these as the files "pipkinPippa_pippaV10.safetensors" and "maemiTenma_tenmaV10.safetensors", and I can't figure out how to use them to start generating custom images based off of Pippa or Tenma. Where do these files go, and what prompts do I use to tell the AI "use the Pipkin Pippa information"?

Once I'm proficient in generating this image, I'll try to work out how to create my own model.
Aah ok, the Pippa and Tenma files I imagine are Loras which go in the \models\Lora folder. I personally rename them. So going forward I'll act like pipkinPippa_pippaV10.safetensors is renamed to Pippa.safetensors.

Once you've put Pippa.safetensors in the Lora folder, or any other Lora for that matter, you have 2 ways to make AUTOMATIC1111 actually use them. You can either add it to your prompts with <lora:pippa:x.x> (x.x = the weight, something like 0.1 or 1) or use the Additional Networks pull-down menu (screenshot attached). The UI does need a refresh for new Loras to show up there, though. After that you just add the usual prompts like "pipkin pippa, rabbit girl, rabbit ears, pink hair" etc.; just be as specific as you want. That's my methodology though, so feel free to try things differently.
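If anyone would rather script it than click around the WebUI, the same .safetensors file can be loaded with diffusers too. A rough sketch; the paths, prompt and the 0.8 weight are just example values, and depending on your diffusers version the Lora weight is passed via cross_attention_kwargs:
Code:
# Sketch: load the same Lora file you'd drop into \models\Lora, but via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights(".", weight_name="Pippa.safetensors")  # folder + file name

img = pipe("pipkin pippa, rabbit girl, rabbit ears, pink hair, smile",
           negative_prompt="worst quality, low quality",
           num_inference_steps=25, guidance_scale=7,
           cross_attention_kwargs={"scale": 0.8}).images[0]  # Lora weight, like the 0.x in <lora:pippa:0.x>
img.save("pippa_test.png")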

Edit: Something I forgot to mention is the importance of which model you're using, along with the sampling steps and CFG Scale. Your best bet is to check out Civitai.com, look for models that fit the style you're after, and then play around with the steps and CFG scale. A safe starting point would be 20-28 steps and a CFG Scale of 7.

Screenshot 2023-04-23 233002.png

To give a complete example of my prompts using multiple Loras:
(best quality), (ultra-detailed), masterpiece, shiny skin, bloom, motion blur, film grain, ((absurd detail, highly detailed skin)), soft lighting, cinematic lighting, light particles, light rays, blue sky,
cropped legs, smug, breast focus, contrapposto,
<lora:Okayu:0.8>, nekomata okayu, hololive, cat girl, cat ears, cat tail, ahoge, bangs, crossed bangs, fang, hair between eyes, purple eyes, purple hair, short hair, large breasts,
<lora:Towa:0.5>, cosplaying tokoyami towa, black short shorts, black crop top, white cropped jacket, puffy sleeves, choker, black camisole,
1girl, outdoors, city, rooftop, nsfw,
With a negative prompt of:
(worst quality, low quality:1.4), (depth of field, blurry:1.4), (greyscale, monochrome:1.2), (censored, censorship:1.3), (text, speech bubble), error, bad anatomy, bad hands, two tails, ((floating tail)), blurry,
00081-354617891-(best quality), (ultra-detailed), masterpiece, shiny skin, bloom, motion blur,...jpg
 
Last edited:

Lesbian Solid Snake

Pettan Hag Supremacy
Joined:  Sep 19, 2022
>downloads Git from voldy
>3 step installation guide
>installation is more than 3 steps
:annoyedpippa:
What the FUCK is the WebUI repo
 

Lesbian Solid Snake

Pettan Hag Supremacy
Joined:  Sep 19, 2022

Clem the Gem

Unknown member
Early Adopter
Joined:  Sep 10, 2022
I'll try to describe where I am in more detail, perhaps you can tell me what I'm not doing right.

I am able to get onto the Stable Diffusion offline web site LAN address (the http://127.0.0.1:7860/ address), and I can generate basic images.

However, I'm not sure how to upload speciifc models, nor how to create them. For example, I've downloaded models for Tenma Maemi and Pipkin Pippa from the civitAI site. I have these in file format "pipkinPippa_pippaV10.safetensors" and "maemiTenma_tenmaV10.safetensors", and I can't figure out how to use these models to start generating custom images based off of Pippa or Tenma. Where do these files go, and what prompts do I use to tell the AI: "use the Pipkin Pippa information" ?

Once I'm proficient in generating this image, I'll try to work out how to create my own model.
It was mentioned already, but it's worth mentioning again - you'll really want to go on Civitai.com and download some models before you do anything else.
There are 3 important types of file you will encounter (you can filter by filetype on Civitai):
  • Models, i.e. checkpoint files: These are the actual AI models and are 2-4 GB in size.
  • LORAs: Add them to your prompt to get a certain character, style or sex act. Typically around 144 MB. Hypernetworks are the older version of LORAs. You need to trigger them in your prompt as explained above by @Tatsunoko.
  • Textual Inversions / Embeddings: Same sort of idea as LORAs, but harder to make. These days you'll mostly see people using them in their negative prompts. Popular ones are easynegative and bad_prompt_version2. This means instead of paragraphs of text in your negative prompt, you can just throw in "easynegative" to take the ugly out of your pictures. Just a few KB in size.
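If you end up with a pile of downloads and lose track of which is which, file size alone usually tells you. A quick sketch (the folder name and size cut-offs are just rules of thumb, not hard limits), printing where each file would go in an AUTOMATIC1111 webui install:
Code:
# Guess what a downloaded .safetensors file is from its size and say where it goes.
from pathlib import Path

def classify(path: Path) -> str:
    mb = path.stat().st_size / 1024 / 1024
    if mb > 1000:
        return "checkpoint -> models/Stable-diffusion/"
    if mb > 10:
        return "LORA -> models/Lora/"
    return "embedding -> embeddings/"

for f in Path("downloads").glob("*.safetensors"):  # "downloads" is a placeholder folder
    print(f.name, "=>", classify(f))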
Now, from memory, the setup guide will get you to download SD1.4 or 1.5 as your model. Here's a lovely picture of our favourite Hololive clown fox Polka I generated just now with this model:



Yeah... I'm still not sure how you're supposed to use this model exactly. Even if I rewrite the prompt and end up with an image that at least makes sense, it still never looks as good as the other models.
On the other hand, here are a selection of some of my favourite models, using the exact same prompt as the image above:
(All models linked - beware some of them will be NSFW)
Generated with a pretty simple prompt. Witness the power of the model. The prompt was not changed at all between images; if it had been tuned per model, I'm sure they could be improved. You might notice most of them seem to be holding some kind of stick - I suspect this might be the fault of the Polka LORA. Maybe it was trained on pictures of her holding her circus baton thing?
One of my favourites is IllusionMix, which adds lots of detail and random items that fit the scene, and gives a nice painted look. I imagine you could get a more realistic look while keeping the detail if you were to mix it with another model.

The prompt:
Code:
(masterpiece:1.2, best quality:1.2, ultra detailed, cinematic lighting, sharp focus),
omaru polka standing, japanese street, (festival), fireworks, neon lights, looking at viewer, from side, smile, closed mouth, cowboy shot, reaching out,
blonde hair, fennec fox ears, single braid, (yukata), floral print, purple eyes, bow, ribbon,
<lora:omaru polka:0.7>

Negative prompt: (worst quality, low quality:1.4), easynegative, striped, (holding object) , bad hands, missing fingers, deformed

Sampler: DPM++ 2M Karras
Sampling steps: 50
CFG Scale 9

Initial size 512x512, HiresFix used to upscale by 1.5
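For anyone doing this outside the WebUI, those settings map onto diffusers roughly like this. The model ID is a placeholder, the prompt is trimmed, and the HiresFix upscale pass is left out to keep it short:
Code:
# Sketch: the WebUI settings above expressed as a diffusers call.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "DPM++ 2M Karras" in the WebUI corresponds to this scheduler setup
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "omaru polka standing, japanese street, festival, fireworks, yukata, smile",
    negative_prompt="worst quality, low quality, bad hands, missing fingers",
    num_inference_steps=50,  # Sampling steps: 50
    guidance_scale=9,        # CFG Scale: 9
    width=512, height=512,   # initial size before any hires pass
).images[0]
image.save("polka_512.png")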
 
Last edited:

RestlessRain

Well-known member
Early Adopter
Joined:  Sep 21, 2022
Okay, did some experimenting generating art with Pippa's model. I used "Meinamix" for modelling the general art style, and Pippa's specific model for reference for her character as a starting point. Took a lot of regenerations and tweaking, but I got some good results. I like the look of the other art styles, I'll have to try those as well.

1682326007063.png
1682326039647.png
1682326108449.png
1682326180184.png
1682326118716.png
1682326190634.png

1682327353407-png.21831

I varied this a bit while experimenting, such as using night instead of day, changing the outfit colours a few times, but generally stuck to a list similar to this:

Positive Prompt
(best quality), (ultra-detailed), masterpiece, shiny skin, bloom, motion blur, film grain, ((absurd detail, highly detailed skin)), soft lighting, cinematic lighting, light particles, light rays, blue sky
Cropped legs, frontal shot, happy
<lora:pippa:0.5> (bunny ears), pastel pink hair, ahoge, (dark pink eyes), black shirt, red tie, black dress, small breasts, black sunglasses
1 girl, outdoors, city, day

Negative Prompt
(worst quality, low quality:1.4), (depth of field, blurry:1.4), (greyscale, monochrome:1.2), (censored, censorship:1.3), (text, speech bubble), error, bad anatomy, bad hands

My next question is: how do I make my own AI model?

Also, @Clem the Gem - thanks for the artwork model suggestions, and @Tatsunoko - thanks for the help getting set up
 

Attachments

  • 1682327353407.png
    1682327353407.png
    820.1 KB · Views: 81

Clem the Gem

Unknown member
Early Adopter
Joined:  Sep 10, 2022
Looking good, now you've got the hang of it!

My next question is: how do I make my own AI model?

Do you really want to make your own model, or do you mean your own LORA to get a character's likeness? I can't help with making a model and imagine it's quite a task.
LORAs however are pretty easy and I have done one myself (and a bunch of hypernetworks before that). Pretty much just followed this tutorial:

 

Watamate

Previously known as Tatsunoko
Early Adopter
Joined:  Oct 8, 2022
Okay, did some experimenting generating art with Pippa's model. I used "Meinamix" for modelling the general art style, and Pippa's specific model for reference for her character as a starting point. Took a lot of regenerations and tweaking, but I got some good results. I like the look of the other art styles, I'll have to try those as well.


I varied this a bit while experimenting, such as using night instead of day, changing the outfit colours a few times, but generally stuck to a list similar to this:

Positive Prompt
(best quality), (ultra-detailed), masterpiece, shiny skin, bloom, motion blur, film grain, ((absurd detail, highly detailed skin)), soft lighting, cinematic lighting, light particles, light rays, blue sky
Cropped legs, frontal shot, happy
<lora:pippa:0.5> (bunny ears), pastel pink hair, ahoge, (dark pink eyes), black shirt, red tie, black dress, small breasts, black sunglasses
1 girl, outdoors, city, day

Negative Prompt
(worst quality, low quality:1.4), (depth of field, blurry:1.4), (greyscale, monochrome:1.2), (censored, censorship:1.3), (text, speech bubble), error, bad anatomy, bad hands

My next question is: how do I make my own AI model?

Also, @Clem the Gem - thanks for the artwork model suggestions, and @Tatsunoko - thanks for the help getting set up
An alternative that I've used for making Loras is this guide's Google Colab:


The first Colab helps you scrape images off Gelbooru, which I find very helpful, and prepares them as ready-to-use training images, including giving them accurate tags. The second Colab does the actual training. I've used them to make a Fumihiko art-style Lora, a Gyaru Lora and an A-chan one. I'm not including the A-chan one since it's honestly not as good as the one on Civitai.

My Loras:

Edit: As for training full models/checkpoints, I honestly don't think it's worth it. But what you could do is merge/mix ones that you like, and you can do this fairly easily in the AUTOMATIC1111 WebUI: go to the Checkpoint Merger tab and choose the ones you want to mix and by how much.
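For the curious, the Checkpoint Merger's "weighted sum" mode boils down to blending the two state dicts tensor by tensor. A bare-bones sketch (file names are placeholders, and the WebUI handles VAE baking and mismatched keys more carefully than this):
Code:
# Weighted-sum merge of two checkpoints, roughly what the Checkpoint Merger tab does.
from safetensors.torch import load_file, save_file

alpha = 0.3  # 0.0 = pure model A, 1.0 = pure model B
a = load_file("modelA.safetensors")
b = load_file("modelB.safetensors")

merged = {}
for key, ta in a.items():
    tb = b.get(key)
    if tb is not None and tb.shape == ta.shape and ta.is_floating_point():
        merged[key] = ((1 - alpha) * ta.float() + alpha * tb.float()).to(ta.dtype)
    else:
        merged[key] = ta.clone()  # keep model A's tensor when B has no matching one

save_file(merged, "merged.safetensors")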
 
Last edited:

RestlessRain

Well-known member
Early Adopter
Joined:  Sep 21, 2022
Okay, one last picture for now. This was too good not to keep.

1682371209201.png


Original in case I want it later
1682371348070.png
 

Thomas Talus

Εκ λόγου άλλος εκβαίνει λόγος
Early Adopter
Joined:  Sep 15, 2022
Looks like she's holding a fusion rifle (with a broken middle finger).
 