This is the second article in the series; this one focuses on generating images from text using Stable Diffusion. I have integrated the setup with the previously deployed Open WebUI. I think we can use the same feature image (just changing the overlay text).
[Ed. follows "Running Generative AI Models Locally with Ollama and Open WebUI" as the first in the series.]
Metadata Update from @rlengland:
- Custom field preview-link adjusted to https://fedoramagazine.org/?p=41503&preview=true&preview_id=41503
- Custom field publish adjusted to 2025-01-14
- Issue assigned to sumantrom
- Issue tagged with: article, needs-image, needs-series
Metadata Update from @rlengland:
- Custom field image-editor adjusted to rlengland
- Issue untagged with: needs-image
@sumantrom I've created the featured image and edited the article. I modified some of the text styling and some formatting to conform more closely to the standard we use for articles.
I also reversed the order of the "What is..." sections since it seemed to flow more smoothly that way.
It was not clear to me exactly where Stable Diffusion is installed. I confess I didn't follow all the steps extremely closely but would some clarification help there?
Please look this over and make certain I haven't bodgered anything up. :-)
Also, 14 January is not a normal publication date, since it is a Tuesday, but we can use that date if you wish. (We normally publish on Mon., Wed., or Fri.)
So one step that I should have stated explicitly is that readers need to create a working directory. The webui.sh script that is downloaded with wget will install everything into that same working directory. In my case, I've suggested that readers create a stable-diffusion directory. What the script actually does is abstract away a LOT of things, for example:

pip install torch==2.1.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install torch_npu==2.1.0
pip install transformers==4.19.2 diffusers invisible-watermark --prefer-binary

and then it sets up CUDA and all the other necessary requirements. The very reason for using AUTOMATIC1111 is not to scare people too much; this project has a LOT of dependencies (some of them differ between AMD ROCm and NVIDIA), but webui.sh takes care of everything!
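For readers following along here, a minimal sketch of the workflow being described could look like the following. The directory name and the download URL for webui.sh are assumptions on my part (the article itself gives the authoritative steps); the point is only that everything lands inside the working directory you create first.

```bash
# Sketch of the setup described above. Directory name and URL are assumptions,
# not quoted from the article.
mkdir -p ~/stable-diffusion && cd ~/stable-diffusion

# Fetch webui.sh from the AUTOMATIC1111 stable-diffusion-webui project
wget https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
chmod +x webui.sh

# On first run the script clones the repository into this directory, creates a
# Python virtual environment, and pip-installs torch, transformers, diffusers,
# and the remaining dependencies (ROCm or CUDA variants as appropriate).
./webui.sh
```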
All looks good
In that case, I would want this to go out on Monday (today) if it's possible, and I will follow up with another article next Wed :)
@sumantrom Thank you for that explanation. Re-reading it, I see where I was not following the process very well. Scheduled for 13 January 08:00 UTC.
Issue status updated to: Closed (was: Open)
Issue close_status updated to: published