0.3.80: local model + koboldcpp


→ integrated koboldcpp with local model*
→ desktop: added a welcome screen with a choice between a local model and the api
→ added location selection in the sandbox (upper left corner)
→ updated the Ava images that turned out poorly
→ redesigned the selection buttons
→ reworked the path to one of the sexual scenes with Dina in the story
→ if koboldcpp or koboldcpp-remote is selected, the kobold tab in settings now opens by default
→ minimum screen size in windowed mode increased from 640x360 to 900x500
→ improved the bots' detection of location changes and the arrival of a second character; fixed this working with local models via koboldcpp
→ maximum temperature lowered to 1.4, recommended value to 0.72
→ fixed a bug with version checking
→ fixed the earthquake animation in the storyline (and possibly some others)
→ replaced the music in the story event with Fiona in the shower
koboldcpp:
→ the koboldcpp and electron processes now terminate correctly when the game closes and no longer linger in the task manager
→ fixed premature token generation termination (the "eos token triggered! id:2" error)
→ parameters adjusted
→ fixed the behavior of suggestions and generated options
→ maximum response length reduced to 630 tokens, recommended value up to 190
→ added a button to shut down koboldcpp
→ added approximate system requirements info to the start screen and the settings (shown if there are no models in the folder)
→ added custom stop triggers
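for anyone curious how the generation settings above fit together, here is a rough sketch of a request to koboldcpp's standard KoboldAI-compatible HTTP API. the endpoint, port, and parameter names come from koboldcpp itself; the helper function, default values, and example stop triggers are mine, not the game's actual code:

```python
import json
import urllib.request

# Sketch of a koboldcpp generation request using this release's limits:
# temperature capped at 1.4 (0.72 recommended), response length capped at
# 630 tokens (up to ~190 recommended), plus custom stop triggers.
def build_payload(prompt, temperature=0.72, max_length=190, stop=None):
    assert temperature <= 1.4, "game caps temperature at 1.4"
    assert max_length <= 630, "game caps response length at 630 tokens"
    return {
        "prompt": prompt,
        "temperature": temperature,
        "max_length": max_length,
        # koboldcpp stops generating when any of these strings appears
        # (example triggers only, not the ones the game uses)
        "stop_sequence": stop or ["\nUser:", "\n###"],
    }

def generate(prompt, url="http://localhost:5001/api/v1/generate"):
    # koboldcpp listens on port 5001 by default
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

this is only to illustrate where the temperature / response length / stop trigger settings end up; the game wires these up for you.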


*for windows there are now three options: standard (with koboldcpp), -local (with a local model) and -lite (without koboldcpp). for linux you will need to download a model separately and select it in the settings (later I will figure out the configuration and make it the same as on windows, with the resources/koboldcpp folder)


windows: koboldcpp and its models are located in the resources/koboldcpp folder; the game will list all gguf models from this folder on the start screen and in the settings tab
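the folder scan described above presumably boils down to something like this (an illustrative sketch only; the function name and exact matching rules are my guesses, not the game's code):

```python
from pathlib import Path

def find_gguf_models(folder="resources/koboldcpp"):
    """List gguf model files in the koboldcpp folder, roughly as the game
    would show them on the start screen / settings tab (sketch)."""
    root = Path(folder)
    if not root.is_dir():
        # no models folder -> the game shows system requirements instead
        return []
    return sorted(p.name for p in root.glob("*.gguf"))
```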

windows/linux: a model from any folder can be loaded from the settings tab by clicking on "select model"

the local model I settled on - Gemma-2-Ataraxy-v4d-9B.i1-Q4_K_M - feels roughly on the level of chatgpt-3.5: a bit weak, but it handles triggers and emojis much better than cosmosrp. I think it should work fine for most users

among the models I tested, the one that might be best suited for the game - but is more demanding - is ChatWaifu_v1.4-Q4_K_M-GGUF. (if there are no models in the resources/koboldcpp folder, the game will display system requirements and a link on the main screen and in the settings)

Files

multiic-win-local-0380 (6.3gb)
multiic-win-0380 (1.1gb)
multiic-0380.apk (0.6gb)
multiic-linux-0380 (1.3gb)


Comments


Unfortunately it doesn't work for me :(

I only get this error when I try to load it in

what video card do you have? the problem is that kobold couldn't use the available video card, most likely you have a radeon. local models work well with nvidia, but not so well with radeon yet. you can try running koboldcpp separately following the instructions in the help section and adjusting the settings for your card there, but I'm not sure if it will work :(

I have a radeon card...

Already downloaded koboldcpp separately and it doesn't work either...

Looks like I have to wait for the moment :(