Fast Sketch Cleanup
This plugin processes the canvas with a neural network model of a selected type.
The currently available models can clean up a sketch, extract pencil lines from a photographed pencil sketch, and create line art from a digital sketch.
Usage
- Open or create a white canvas with grey-white strokes (note that the plugin will take the current projection of the canvas, not the current layer).
- Go to Tools → Fast Sketch Cleanup to open the plugin dialog.
- Select the model (recommended: SketchyModel.xml). Advanced Options will be selected automatically for you.
- Wait until it finishes processing (the dialog will then close automatically).
- The plugin places the result on a new layer.
Training a new model
To train a model:

- Clone the repository at https://invent.kde.org/tymond/fast-line-art:

      git clone https://invent.kde.org/tymond/fast-line-art.git

- Prepare the experiment folder:
  - Create a new folder for the training.
  - In that folder, run:

        python3 [repository folder]/spawnExperiment.py --path [path to new folder, either relative or absolute] --note "[your personal note about the experiment]"
- Prepare the data:
  - If you have existing data, put it all in data/training/ and data/verify/, keeping in mind that paired pictures in the ink/ and sketch/ subfolders must have exactly the same names (for example, if you have sketch.png and ink.png as a pair, put one in sketch/ as picture.png and the other in ink/ as picture.png to pair them).
  - If you don't have existing data:
    - Put all your raw data in data/raw/, keeping in mind that paired pictures must have exactly the same names with the prefix ink_ or sketch_ added (for example, if picture_1.png is the sketch picture and picture_2.png is the ink picture, name them sketch_picture.png and ink_picture.png respectively).
    - Run the data preparer script:

          python3 [repository folder]/dataPreparer.py -t taskfile.yml

      This augments the data in the raw directory so that the training is more successful.
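Because training pairs files purely by name, a quick sanity check that the sketch/ and ink/ folders actually match can save a failed run. A minimal sketch of such a check (the helper name, and the assumption that the data consists of .png files, are illustrative, not part of the repository):

```python
from pathlib import Path

def unpaired_files(data_dir):
    # Compare file names in the sketch/ and ink/ subfolders; any name
    # present in one folder but not the other has no training pair.
    sketch = {p.name for p in (Path(data_dir) / "sketch").glob("*.png")}
    ink = {p.name for p in (Path(data_dir) / "ink").glob("*.png")}
    return sorted(sketch - ink), sorted(ink - sketch)
```

Running it on data/training/ and data/verify/ before a long training run, and fixing anything it reports, keeps every picture paired.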
- Edit the taskfile.yml file to your liking. The most important parts to change are:
  - model type: the code name of the model type; use tinyTinier, tooSmallConv, typicalDeep, or tinyNarrowerShallow
  - optimizer: the type of optimizer; use adadelta or sgd
  - learning rate: the learning rate for sgd, if in use
  - loss function: the code name of the loss function; use mse for mean squared error, or blackWhite for a custom loss function based on mse that is slightly smaller for pixels where the target image pixel value is close to 0.5
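The blackWhite idea described above can be sketched in a few lines. This is not the repository's implementation, only a hedged illustration of the described behaviour: squared error that is down-weighted where the target pixel is close to 0.5 (the linear weighting scheme and the grey_weight parameter are assumptions):

```python
def mse(pred, target):
    # Plain mean squared error over flat lists of pixel values in [0, 1].
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def black_white_loss(pred, target, grey_weight=0.5):
    # Illustrative variant of mse: each pixel's squared error is scaled
    # down the closer the target value is to 0.5 (neither black nor
    # white). At target 0 or 1 the weight is 1.0, identical to mse;
    # at target 0.5 it drops to grey_weight.
    total = 0.0
    for p, t in zip(pred, target):
        weight = grey_weight + (1.0 - grey_weight) * abs(t - 0.5) * 2.0
        total += weight * (p - t) ** 2
    return total / len(pred)
```

A loss shaped like this lets errors on fully black and fully white pixels dominate the gradient, which fits the goal of producing clean, high-contrast line art.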
 
- Run the training code:

      python3 [repository folder]/train.py -t taskfile.yml -d "cpu"
 
- Convert the model to an OpenVINO model:

      python3 [repository folder]/modelConverter.py -s [size of the input, recommended 256] -t [input model name, from pytorch] -o [openvino model name, must end with .xml]
- Place both the .xml and .bin model files in your Krita resource folder, alongside the other models, to use them in the plugin.