Sat down, all excited, to get some NPC AI done. I wanted to finish off the Rat’s AI but the day didn’t go according to plan.
Given how modest the Rat is – it just runs about, pausing occasionally – I have serious misgivings about the initial implementation. My first (and possibly biggest) problem is the number of files that were produced. Some of these are unavoidable; Anim Blueprints, for example… but those are self-contained Finite State Machines and actually very easy to use and debug. Behaviour Trees, on the other hand, generate a BP for each Task, and Environment Queries can generate even more. That’s not including code in the AI Controller and the Actor Blueprint.
My Rat’s AI already has nine files. It’s not attacking anything or doing any sort of avoidance. Any one of those nine files could need stepping through when I debug, and that’s a killer for me. Nine is a lot of tabs to keep open. That’s a lot of pausing, looking for the right thing… If I don’t like that when I’m making it, what’s it going to feel like in a month when I’ve forgotten all about it and have to fix it?
I know that this is a clean, powerful implementation of Behaviour Trees. But nah… Me no likey.
I did some searching to see if I was missing anything obvious, and there’s surprisingly little activity around this stuff. Hardly anyone talks about it, the YouTube videos all show the same simple examples (Hide from the player!), and the docs are poor, even by Epic’s standards. It sorta smells like one of those features that Epic never actually used in anger…
I do have a cold, though, so admittedly, my sense of smell is all over the place…
Woke up feeling exhausted, annoyingly, as I thought I was over the worst of my cold. My head felt like cotton wool so I tried to do some art stuff and ended up hating everything.
Meh, waste of a day…
Added in the damage handler so you can kill the Rats and progress out of combat rooms. I’m not going to do anything else with the AI until I get the Vlog done, but as I’m not going to have any decent AI to talk about, the Vlog’s gonna be a bit boring. Woo!
Added a visual effect around the player when relocating with the Sticky Mitt. It didn’t come out how I was hoping. It looks like the cloth simulation isn’t being read by Niagara, so the particles are being coloured by the body mesh, not the cape, but I can probably fake it by setting the particle colour to red in the emitter… I just want a little shimmer to remain.
Had another attempt at making a door for the Dungeon Exit. Threw something together quickly so there’s not a hole in the wall for the Vlog, and accidentally had the material’s Metallic property set too high. The reflection in the door was… interesting, leading me down the rabbit-hole of Ray Tracing…
The door will do as a placeholder. The archway is horrible and has to go, but I’ll replace that later. There’s something about archways. The ones on the basic dungeon doors are something like my third attempt, and I don’t like those, either!
Was teaching in the morning. Quiet class was quiet…
I wanted to see how far I could get sequencing the opening of the Boss Door, while ignoring the ugly assets. I’m leaning more and more on Timelines for stuff like this, as they’re quick to implement and test. Like the room-based stuff I’ve tried, I was able to do the entire door sequence really quickly, with simple nodeage. I love a good curve!
I’ve got rotation, position, material emissive, material opacity and events for particles on a single timeline here. Rock solid, and I’m not even using the full sequencer yet.
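To make the appeal concrete, here’s a minimal, engine-agnostic sketch of what a single Timeline float track boils down to: keyframes plus interpolation, sampled from one shared clock. The names here (`Key`, `EvalCurve`) and the linear-only interpolation are my own simplifications for illustration; Unreal’s curve assets also support cubic keys and tangent modes.

```cpp
#include <cstddef>
#include <vector>

// One keyframe on a float track: a time and a value.
struct Key { float Time; float Value; };

// Sample a track at time T, clamping outside the key range and
// interpolating linearly between neighbouring keys.
float EvalCurve(const std::vector<Key>& Keys, float T) {
    if (Keys.empty()) return 0.0f;
    if (T <= Keys.front().Time) return Keys.front().Value;
    if (T >= Keys.back().Time) return Keys.back().Value;
    for (std::size_t i = 1; i < Keys.size(); ++i) {
        if (T <= Keys[i].Time) {
            const Key& A = Keys[i - 1];
            const Key& B = Keys[i];
            const float Alpha = (T - A.Time) / (B.Time - A.Time);
            return A.Value + Alpha * (B.Value - A.Value);
        }
    }
    return Keys.back().Value;
}
```

Give rotation, emissive and opacity their own key arrays, sample them all at the same `T` each tick, and you have the one-timeline, many-tracks setup described above.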
You can do something similar using curves in Unity, but back when I was tempted to try, the curve editor was horrible and there was a lot of boilerplate code to write just to get the basics up and running. So I didn’t bother. I did do a nice implementation of something similar using Dear ImGui for a 2D project, though.
Anyway, with some audio, a couple of reaction animations on the player character, and an archway that doesn’t look like toss, this will be nice.
Made a new vlog! I’m still trying to work out what sticks with these, so this is an attempt to get something as short and concise as possible. Which is handy, really, cos I couldn’t think of anything super-interesting to talk about, since I bailed out of the AI… It still took the better part of 3.5 hours to edit, though. Watch and share, people, watch and share...
I’ve also created a PeerTube channel, which I’ll use to mirror YouTube. You can sub here: PeerTube
Now to go and update all the socials… And schedule Substack. And tomorrow's tweet. Fridays are fun!