seedhartha

Members
  • Content Count

    92
  • Joined

  • Last visited

  • Days Won

    11

seedhartha last won the day on May 5

seedhartha had the most liked content!

Community Reputation

116 Jedi Grand Master

5 Followers

About seedhartha

  • Rank
    Jedi Padawan


  1. KotorBlender only officially supports the MDL format. Technically speaking, material information (textures, colors and transparency) is stored in addon-specific data structures, and is only converted to Blender materials for preview purposes. GLB/glTF exporters certainly don't have access to KotorBlender data structures, and how they interpret Blender materials is up to them.
  2. Nice job on the document, very organized. One thing I noticed is that you don't have to touch the Shader Editor at all: you can just press Rebuild All Materials on the root object, or Rebuild Material on individual objects. The root cause is non-normalized bone weights, i.e. the total sum of all bone weights per vertex exceeds 1.0 in your model. You can fix that by pressing Weights - Normalize All while in Weight Paint mode. However, that made the exported model crash on load, so I had to fix it: please download version 3.10.1 of KotorBlender from DeadlyStream. Here is your fixed model in-game:
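For readers curious what Normalize All actually does, here is a minimal C++ sketch of the idea (the function name and shape are mine, not Blender's or KotorBlender's code): each vertex's bone weights are rescaled so that they sum to exactly 1.0.

```cpp
#include <vector>

// Sketch of per-vertex bone weight normalization, the same idea as
// Blender's Weights -> Normalize All. Illustrative only.
std::vector<float> normalizeWeights(std::vector<float> weights) {
    float sum = 0.0f;
    for (float w : weights)
        sum += w;
    if (sum > 0.0f) {
        for (float &w : weights)
            w /= sum; // per-vertex total becomes exactly 1.0
    }
    return weights;
}
```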
  3. I do see the value of good documentation / tutorials. I'm sceptical, though, that it will actually bring more people to use KotorBlender. So to anyone following this thread: do you think a tutorial is necessary here, or just a nice-to-have? What topics would you like to have covered? In which format?
  4. Hey! I see your point. Some high-level instructions that are specific to KotorBlender can be found in the README: https://deadlystream.com/files/file/1853-kotorblender-for-blender-33/ As for Blender itself, you are much better off learning from some respected YouTube channels, like this one: https://www.youtube.com/@blenderguru That said, if you have any issues or questions about the tool, you can ask them here or on Discord.
  5. So I have added Cyrillic support to the program, and in the attached ZIP archive you will find it with a Russian pronunciation dictionary included:
     • After opening toolkit.exe, click Tools -> Compose LIP
     • You should see a dialog like the one on the attached screenshot
     • Enter text in the field above
     • Click Load and open a .WAV or .MP3 audio file; you should see a black & white waveform when loaded
     • You can use this same app to extract and deobfuscate audio files from the game
     • Click Compose to generate a LIP file from text and audio, and follow the program instructions
     • If the audio has pauses in it, as indicated by black lines on the waveform, use parentheses to create word groups; the number of word groups must exactly match the number of non-silent spans on the waveform
     • Phonemes for each word are loaded from the `ru.dic` file found in the program directory
     • When a word is not found in the dictionary, you can add your own definition in the Pronunciation area on the right. Don't forget to click Save. When in doubt, copy the pronunciation from similar words in `ru.dic`
     Other than this, you're on your own, comrade. reone_toolkit_1_0_beta2+ru.zip
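For the curious, the pause detection behind those black lines can be sketched roughly like this. This is a hypothetical illustration, not the toolkit's actual code: samples whose amplitude stays below a threshold are treated as silence, and everything else is grouped into non-silent spans.

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical sketch of silence-based span detection: returns half-open
// [start, end) sample ranges whose amplitude exceeds maxSilenceAmp.
std::vector<std::pair<std::size_t, std::size_t>> findSoundSpans(
        const std::vector<float> &samples, float maxSilenceAmp) {
    std::vector<std::pair<std::size_t, std::size_t>> spans;
    bool inSpan = false;
    std::size_t start = 0;
    for (std::size_t i = 0; i < samples.size(); ++i) {
        bool loud = std::fabs(samples[i]) > maxSilenceAmp;
        if (loud && !inSpan) {
            inSpan = true;
            start = i;
        } else if (!loud && inSpan) {
            inSpan = false;
            spans.emplace_back(start, i);
        }
    }
    if (inSpan)
        spans.emplace_back(start, samples.size());
    return spans;
}
```

A real implementation would additionally require silence to last a minimum duration before splitting, as the "min silence duration" parameter suggests.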
  6. Hey! reone toolkit has Compose LIP tool that can generate a LIP file from text and audio. It doesn't support Cyrillic, but you can mimic it by inserting similarly pronounced words from a dictionary. Alternatively, your best bet is probably scaling up individual keyframes in "LipSynchEditor", yes.
  7. More updates: Version 3.9.0 added support for Bezier-type controllers, quaternion compression on export, and armature-based animations, i.e. it is now possible to animate a character using a Blender armature and then copy keyframes onto regular "bone" objects before exporting the model. More importantly, version 3.10.0 implemented semi-automated minimap rendering. It works like this:
     • Import a module layout via File → Import → KotOR Layout (*.lyt)
     • Press KotOR → Minimap → Render (auto)
     • Open the "Render Result" image in an Image Editor area and save it as an "lbl_map{modulename}.tga" file
     • Open the "MinimapCoords" text in a Text Editor area and copy-paste the generated properties into the module .ARE file using any GFF editor
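The quaternion compression mentioned for version 3.9.0 can be sketched as the inverse of the decompression snippet quoted later in this thread (11 bits for x, 11 for y, 10 for z, with w reconstructed as a non-positive square root on load). This illustrates the bit layout only; KotorBlender's actual rounding may differ.

```cpp
#include <cmath>
#include <cstdint>

// Sketch: pack a unit quaternion into one 32-bit integer. Since w is
// reconstructed as -sqrt(1 - x^2 - y^2 - z^2) on load, the quaternion
// is first flipped into the w <= 0 hemisphere (q and -q are the same
// rotation).
std::uint32_t compressQuaternion(float x, float y, float z, float w) {
    if (w > 0.0f) {
        x = -x;
        y = -y;
        z = -z;
    }
    std::uint32_t xi = static_cast<std::uint32_t>(std::lround((1.0f - x) * 1023.0f)) & 0x7ff;
    std::uint32_t yi = static_cast<std::uint32_t>(std::lround((1.0f - y) * 1023.0f)) & 0x7ff;
    std::uint32_t zi = static_cast<std::uint32_t>(std::lround((1.0f - z) * 511.0f)) & 0x3ff;
    return xi | (yi << 11) | (zi << 22);
}
```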
  8. Big upgrade in KotorBlender version 3.8.0. Most importantly:
     • Instead of choosing a normals algorithm on import, KotorBlender now automatically merges vertices with exactly the same position, while storing imported UVs and normals as part of the edge loops. Conversely, when exporting a model, it will split vertices with exactly the same position, but different UVs and normals. This makes modelling much more straightforward, because you don't have to manually join and split vertices every time.
     • One-click lightmap baking is now possible via KotOR → Bake Lightmaps. KotorBlender will automatically prepare object materials for baking, hide non-lightmapped objects and restore everything to normal when finished. Note that, while it works out of the box, you will want to tweak some settings, as described in the README.
     • Material import has been rewritten, adding support for environment maps, bump maps and transparent objects. This is effectively how KotOR would look with a physically-based renderer.
     • As a bonus, the previous version of KotorBlender added support for loading TPC textures directly, so you don't have to convert them to TGA anymore.
     As for future plans, I'm considering adding minimap creation tools, and rewriting animation export to enable armature-based edits.
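The export-side vertex split described above boils down to keying each face loop by (position index, UV) and emitting one output vertex per unique key. A simplified C++ sketch of that idea follows; KotorBlender itself is a Python addon, and this omits normals for brevity, so the types here are illustrative.

```cpp
#include <map>
#include <utility>
#include <vector>

// One face-corner ("loop"): which position it uses, plus its UV.
struct Loop {
    int posIndex;
    std::pair<float, float> uv;
};

// Returns, for each loop, the index of its split vertex: loops that
// share both position and UV map to the same vertex, all others get
// a new one.
std::vector<int> splitVertices(const std::vector<Loop> &loops) {
    std::map<std::pair<int, std::pair<float, float>>, int> unique;
    std::vector<int> remap;
    for (const Loop &l : loops) {
        auto key = std::make_pair(l.posIndex, l.uv);
        auto it = unique.find(key);
        if (it == unique.end()) {
            int idx = static_cast<int>(unique.size());
            unique.emplace(key, idx);
            remap.push_back(idx);
        } else {
            remap.push_back(it->second);
        }
    }
    return remap;
}
```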
  9. Out of curiosity, are you going to train voice models on sound files extracted from the game? I am toying with an idea of a tool that would automate that training, and generate speech / LIP files. What software are you using? How does it compare to something like flowtron?
  10. Starting with version 1.0 of the toolkit, it is now possible to visually edit 2DA, GFF, TLK, LIP, SSF, NCS and plaintext files. After opening a resource in resource explorer and making your changes, click "File" → "Save copy as..." and choose destination directory, e.g. Override, to save the modified resource. As for future plans, focus is mostly on raw resource extraction, preview and editing. Visual template editing (e.g. UTC), dialog and module editors are out of scope for this project, even though I am toying with some ideas for high-level tools.
  11. // Decompress a KotOR MDL quaternion packed into a single 32-bit
      // integer: 11 bits for x, 11 for y, 10 for z; w is reconstructed
      // from the unit-length constraint.
      uint32_t temp = *reinterpret_cast<const uint32_t *>(&data[rowDataIdx]);
      float x = 1.0f - static_cast<float>(temp & 0x7ff) / 1023.0f;
      float y = 1.0f - static_cast<float>((temp >> 11) & 0x7ff) / 1023.0f;
      float z = 1.0f - static_cast<float>(temp >> 22) / 511.0f;
      float dot = x * x + y * y + z * z;
      float w;
      if (dot >= 1.0f) {
          // Decoded x/y/z already exceed unit length: renormalize and drop w
          float len = glm::sqrt(dot);
          x /= len;
          y /= len;
          z /= len;
          w = 0.0f;
      } else {
          w = -glm::sqrt(1.0f - dot);
      }
  12. Rotations in MDL files are stored as quaternions. They can be uncompressed (4 floats) or compressed (1 integer). Both should work, but Max is converting rotations from the axis-angle representation, which does produce some artifacts. There's less chance of that occurring when rotations are compressed, I guess.
  13. reone toolkit version 0.3 has just received a new tool. LIP Composer is a complete replacement for CSLU Toolkit and LipSynchEditor, enabling modders to create LIP files from text and audio files. And it does a better job, too: from my experiments, LipSynchEditor and derivatives incorrectly translate phonemes to LIP shapes. Most noticeably, shape 0 is interpreted as phoneme EE, while, if you look at the animation, it is clearly supposed to be the rest position. The LIP Composer algorithm is the following:
      • Analyze the audio file and find continuous spans of silence, controlled by the min silence duration and max silence amplitude parameters
      • Split text into word groups, ignoring punctuation. By default, the whole text is considered a single group; the user can create groups by wrapping multiple words in parentheses.
      • Match word groups to non-silent sound spans, indicated by white lines on the waveform. The number of word groups must be equal to the number of sound spans.
      • Within each word group, convert words into phonemes using the open-source CMU Pronouncing Dictionary, and evenly spread the phonemes across the corresponding sound span. For every word that is not present in the CMU dictionary, the user can add its phonemes in the Pronunciation tab.
      • Finally, convert phonemes to LIP keyframes, also creating rest keyframes for each span of silence.
      From my testing, the resulting LIP files are almost indistinguishable from the original ones.
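The "evenly spread phonemes across the sound span" step can be illustrated with a small sketch. The struct layout and integer shape indices here are mine for illustration, not the actual LIP format or toolkit code: N phonemes over a span become N keyframes at regular intervals.

```cpp
#include <cstddef>
#include <vector>

// One lip-sync keyframe: a timestamp and a mouth shape index.
struct Keyframe {
    float time;
    int shape;
};

// Spread the given phoneme shapes evenly across [spanStart, spanEnd).
std::vector<Keyframe> spreadPhonemes(
        const std::vector<int> &shapes, float spanStart, float spanEnd) {
    std::vector<Keyframe> frames;
    if (shapes.empty())
        return frames;
    float step = (spanEnd - spanStart) / static_cast<float>(shapes.size());
    for (std::size_t i = 0; i < shapes.size(); ++i) {
        frames.push_back({spanStart + step * static_cast<float>(i), shapes[i]});
    }
    return frames;
}
```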
  14. This is not possible at the moment, but I'll see how to make it more configurable. Just released a GUI version of this tool, console version is now deprecated.
  15. Released version 0.2 of the toolkit, the most significant improvement being in NCS decompilation. Decompilation now happens in four steps:
      • The compiled script is parsed into an assembly-like list of instructions
      • The instruction list is converted into an equivalent language-agnostic expression tree
      • The expression tree is optimized for readability, making heavy use of inlining
      • The optimized expression tree is converted to pseudo-NWScript code
      The decompiler tries to convert instructions into expressions in the most straightforward manner and does not rely on the specific instruction patterns produced by the BioWare compiler. The downside is that it doesn't yet know how to detect loops and structures, hence the output is only pseudo-code and cannot be compiled without some manual refactoring. Known issues: the decompiler hangs on some large scripts, in particular combat-related ones. Comparison between version 0.1 and version 0.2 of the decompiler:
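The inlining pass can be illustrated with a toy sketch: a temporary that is read exactly once is substituted into the statement that reads it. The real decompiler works on a full expression tree; this string-based version (with made-up struct names) only shows the idea.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Toy statement: "lhs = rhs", plus how many times lhs is read later.
struct Stmt {
    std::string lhs;
    std::string rhs;
    int reads;
};

// Inline single-use temporaries into the following statement, dropping
// the temporary assignment; emit all remaining statements as text.
std::vector<std::string> inlineTemps(std::vector<Stmt> stmts) {
    std::vector<std::string> out;
    for (std::size_t i = 0; i < stmts.size(); ++i) {
        if (stmts[i].reads == 1 && i + 1 < stmts.size()) {
            std::string &next = stmts[i + 1].rhs;
            std::size_t pos = next.find(stmts[i].lhs);
            if (pos != std::string::npos) {
                next.replace(pos, stmts[i].lhs.size(), stmts[i].rhs);
                continue; // temporary eliminated
            }
        }
        out.push_back(stmts[i].lhs + " = " + stmts[i].rhs);
    }
    return out;
}
```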