[2604.03249] BLK-Assist: A Methodological Framework for Artist-Led Co-Creation with Generative AI Models

Computer Science > Computers and Society

arXiv:2604.03249 (cs.CY) [Submitted on 10 Mar 2026]

Title: BLK-Assist: A Methodological Framework for Artist-Led Co-Creation with Generative AI Models
Authors: Daniel Grimes, Rachel M. Harrison

Abstract: This paper presents BLK-Assist, a modular framework for artist-specific fine-tuning of diffusion models using parameter-efficient methods. The system is implemented as a case study with a single professional artist's proprietary corpus and consists of three components: BLK-Conceptor (LoRA-adapted conceptual sketch generation), BLK-Stencil (LayerDiffuse-based transparency-preserving asset generation), and BLK-Upscale (hybrid Real-ESRGAN and texture-conditioned diffusion for high-resolution outputs). We document dataset composition, preprocessing, training configurations, and inference workflows to enable reproducibility with publicly available models. The case study illustrates a privacy-preserving, consent-based approach to human-AI co-creation that maintains stylistic fidelity to the source corpus and can be adapted for other artists under similar constraints.

Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC)
Cite as: arXiv:2604.03249 [cs.CY]
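The BLK-Conceptor component relies on LoRA-style parameter-efficient adaptation. As a rough illustration of the core idea only (not the paper's implementation, whose training details are in the full text), a LoRA update keeps the base weight matrix W frozen and learns a low-rank product B·A added to it; all class and variable names below are illustrative:

```python
import numpy as np

class LoRALinear:
    """Frozen base weight W plus a trainable low-rank update (alpha/r) * B @ A."""

    def __init__(self, w, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.w = w  # frozen base weight, shape (out_dim, in_dim)
        # A is small-random, B is zero-initialized, so the adapter
        # contributes nothing before training and the model starts
        # exactly at the pretrained behavior.
        self.a = rng.normal(scale=0.01, size=(r, w.shape[1]))
        self.b = np.zeros((w.shape[0], r))
        self.scale = alpha / r

    def forward(self, x):
        # Base path plus scaled low-rank adapter path; only a and b
        # would receive gradients during fine-tuning.
        return self.w @ x + self.scale * (self.b @ (self.a @ x))

w = np.eye(3)
layer = LoRALinear(w)
x = np.array([1.0, 2.0, 3.0])
# With B zero-initialized, the adapted layer matches the frozen base exactly.
print(np.allclose(layer.forward(x), w @ x))  # True
```

The practical appeal for a single-artist corpus, as in the paper's setting, is that only the small A and B matrices are trained and shipped, while the base diffusion model's weights stay untouched.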

Originally published on April 07, 2026. Curated by AI News.

