
Posted

Hello TruckersMP Community and Team,

 

I’m a regular player and an AI enthusiast currently working on an independent project to develop an AI-powered system that can automatically review TruckersMP report videos (MP4 clips) submitted by users. The goal is to help speed up the reporting process by using machine learning to detect potential rule violations based on TruckersMP’s official rules.

 

Here’s how my project works so far:

  • Users upload report videos (MP4 files).
  • The videos are processed on a GPU-powered server where an AI analyzes the footage.
  • The AI evaluates the video for rule violations according to TruckersMP’s rules.
  • Based on this evaluation, the AI provides a suggestion on whether a ban might be justified.
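To make the flow above concrete, here is a rough sketch of the analysis stage. Everything here is a hypothetical illustration, not the actual implementation: the model call is stubbed with made-up per-frame scores, and the `Verdict` shape and threshold are my own invention.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    violation: str        # e.g. "ramming", "blocking", or "none"
    confidence: float     # model confidence in [0, 1]
    suggestion: str       # human-readable recommendation, never an action

def analyze_clip(frame_scores: list[float], threshold: float = 0.8) -> Verdict:
    """Turn per-frame model scores into a single advisory verdict.

    frame_scores would come from a video model running on the GPU server;
    here we simply take the peak score over the clip.
    """
    peak = max(frame_scores, default=0.0)
    if peak >= threshold:
        return Verdict("ramming", peak, "review recommended: possible ramming")
    return Verdict("none", peak, "no clear violation detected")

# Usage with made-up scores standing in for real model output:
verdict = analyze_clip([0.1, 0.2, 0.95, 0.4])
print(verdict.suggestion)   # review recommended: possible ramming
```

The key design point is the last field: the output is always a suggestion for a human, never a ban action.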

 

A screenshot from my showcase video:

[screenshot: example AI analysis of a report video]

 

Please note, this is purely a research and development project on my side — the AI does not have any integration with TruckersMP’s moderation system and cannot execute bans automatically. It only provides an analysis and recommendation based on the video content.

 

I’m currently collecting videos to train and improve the AI’s accuracy and would love to hear if the TruckersMP team or moderators think such a tool could be helpful as a supplemental aid in the future.

 

Thanks for your attention! I’m eager to get your thoughts and feedback.

Best regards,

Boomy

Posted

Honestly, beyond just analyzing raw video footage, one of the best sources for training an AI moderation system is actually the in-game report system itself. When players report someone in TruckersMP, the game usually creates a “demo” file that captures everything important—player positions, speeds, timestamps, and all that telemetry data. Now, even if there’s no actual demo file, I’m pretty sure there’s some kind of data saved behind the scenes, like JSON or something similar.

 

From what I understand, these demo files or datasets contain rich info, including which player got flagged or banned after manual review. If we can collect these demos linked to confirmed bans or warnings, we end up with a solid, labeled dataset that shows exactly what breaking the rules looks like in-game.

 

Feeding this data into AI models means the system doesn’t have to rely only on shaky video clips—it can use precise, timestamped gameplay info. That makes the AI way better at figuring out the difference between accidental collisions and intentional ramming, or spotting reckless driving patterns more accurately.
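To make that concrete, here is a minimal sketch of one such heuristic, assuming demo ticks could be exported as simple per-tick records. All field names and thresholds are invented for illustration; the real demo format may look nothing like this.

```python
def classify_contact(pre_impact: list[dict]) -> str:
    """Classify a collision from the striking vehicle's pre-impact ticks.

    pre_impact: samples for roughly one second before contact, each like
        {"speed_kmh": 87.0, "braking": False, "pos_jump_m": 0.4}
    pos_jump_m is the distance moved since the previous tick; a huge jump
    suggests desync rather than genuine movement.
    """
    if any(s["pos_jump_m"] > 20.0 for s in pre_impact):
        return "likely desync"          # teleport-scale jump between ticks
    braked = any(s["braking"] for s in pre_impact)
    fast = pre_impact[-1]["speed_kmh"] > 60.0
    if fast and not braked:
        return "possible intentional ramming"
    return "likely accidental contact"

ticks = [{"speed_kmh": 85.0, "braking": False, "pos_jump_m": 0.5},
         {"speed_kmh": 88.0, "braking": False, "pos_jump_m": 0.5}]
print(classify_contact(ticks))   # possible intentional ramming
```

That is exactly the kind of distinction (braking vs. not braking into contact, real movement vs. position jumps) that shaky video alone cannot reliably show.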

 

This kind of setup turns the AI from a simple video-watcher into a smart assistant that actually understands the game. It can make moderation faster and fairer, without burning out the human moderators.

 

So yeah, combining demo file analysis with report metadata is a real game-changer. It’s like giving the AI the full picture instead of just a blurry snapshot—perfect for dealing with TruckersMP’s multiplayer chaos.

Posted

I'll need to see it in action before I support anything AI-related.

 

  • What if 3+ players are involved? Will the system be able to detect who was in the wrong?
  • Can this system be applied to in game reports? That's where moderation really struggles.
  • I hope this system doesn't make the Game Moderator Role outdated.
Posted

Even though I am a staff member, I see this kind of "tool" or "AI moderation" as not beneficial, even from a player's viewpoint.

AI is far from being able to analyze situations like this on the level humans do, and even with a proper setup, I don't think this tool would work flawlessly or in favor of players, or even staff, in the end.

Each incident involving players is unique and requires its own separate consideration and discretion. Adding autonomous chains of reads, commands, and downstream workflow to such a reviewing tool simply cannot work. AI may have a basic understanding of what could be a possible violation of rule x or y, but it will never be able to understand the actual circumstances and context behind it.

For example, "[DETECTION] Collision Event Detected between Reporter and Perpetrator"

How exactly would AI figure out whether this was an intentional or negligent act, and tell it apart from actual desync/lag or contact involving multiple individuals? The AI would not only be required to load every player model in a report to ensure that everything is considered, looked at, and put into context, but would also need to follow all those players' live or stored data to confirm that an event actually took place the way it did. Not only can this easily lead to false conclusions from the system itself, but reports in Calais or Duisburg can easily have 200 player models around them. Demos already take up storage, and this would add yet another unneeded layer of temporary or permanent data for the AI system and/or our server storage in general.

Also, even if it were able to track players' movement data, live or stored, how would it tell whether someone violated rules like §2.3, §2.4, §2.6, etc.?

Don't get me wrong, I am not saying this idea is bad. But it is far from realistically applicable and useful, regardless of whether it's for players, staff, or later our systems, such as appealing a ban issued that way.

Also, on a personal note, I don't think players would be happy or satisfied to get banned by what is, in the end, just a bot, as the staff member who may deal with the punishment later has zero connection to the incident in the first place. If those bans were issued in-game, it would take away a layer of connection and understanding completely.

Posted
7 hours ago, Sunstrider said:

I'll need to see it in action before I support anything AI-related.

 

  • What if 3+ players are involved? Will the system be able to detect who was in the wrong?
  • Can this system be applied to in game reports? That's where moderation really struggles.
  • I hope this system doesn't make the Game Moderator Role outdated.

 

Honestly, AI here isn't meant to kick GMs out of their job — it's more like a nerdy assistant that never sleeps and can go frame-by-frame through 100 demos at once. The cool part is when someone gets banned through the in-game report system, the demo file linked to it usually contains actual gameplay telemetry — stuff like vehicle speed, angles, braking, GPS, player IDs, timestamps. That kind of structured data is a goldmine for training AI, way better than just guessing based on video pixels.

 

So imagine you give it access to past demo files where bans were confirmed — you basically teach it what "wrong" looks like. Now fast forward: a new report comes in, and AI says "Yo, that was reckless. Player B was chill." It’s not replacing humans, just saving time and reducing false bans — and helping prioritize legit reports.
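As a toy illustration of that labeling step, here is how past reports could be turned into a training set. The report fields, feature choices, and export format are pure assumptions on my part, not the real demo schema.

```python
def build_dataset(reports: list[dict]) -> tuple[list[list[float]], list[int]]:
    """Return (feature_rows, labels) where label 1 = ban was confirmed.

    Each report dict is assumed to carry a few telemetry-derived features
    plus the moderator's final decision, which supplies the label.
    """
    X, y = [], []
    for r in reports:
        X.append([r["impact_speed_kmh"], r["deg_off_lane"], float(r["braked"])])
        y.append(1 if r["outcome"] == "banned" else 0)
    return X, y

reports = [
    {"impact_speed_kmh": 92.0, "deg_off_lane": 35.0, "braked": False, "outcome": "banned"},
    {"impact_speed_kmh": 18.0, "deg_off_lane": 4.0,  "braked": True,  "outcome": "dismissed"},
]
X, y = build_dataset(reports)
print(y)   # [1, 0]
```

The point is that moderator decisions are the labels; the model only ever learns to imitate what human reviewers already decided.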

 

Oh, and yes, it can handle 3+ player cases. That’s where it shines — it's all math to it. If it has the data, it’ll cross-check everything like a tire-screeching detective.
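For the 3+ player case, here is a crude sketch of the kind of cross-checking I mean. The scoring rule (approach speed, discounted if the player braked) is invented purely for illustration; a real fault model would need far more context.

```python
def most_likely_aggressor(players: dict[str, dict]) -> str:
    """players maps a player ID to {"closing_kmh": float, "braked": bool},
    where closing_kmh is that player's speed toward the contact point.
    Returns the ID with the highest (brake-discounted) closing score."""
    def score(p: dict) -> float:
        penalty = 0.5 if p["braked"] else 1.0   # braking halves the score
        return p["closing_kmh"] * penalty
    return max(players, key=lambda pid: score(players[pid]))

# Hypothetical three-player incident:
scene = {
    "A": {"closing_kmh": 30.0, "braked": True},
    "B": {"closing_kmh": 75.0, "braked": False},
    "C": {"closing_kmh": 20.0, "braked": False},
}
print(most_likely_aggressor(scene))   # B
```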

 

Just an example; I used this video since it was public:

[screenshot: example AI analysis of a public report video]

 

But again — no data = no party. So it's not magic. It’s just really good help for real moderators doing real work. ❤️

 

4 hours ago, Koneko said:

Even though I am a staff member, I see this kind of "tool" or "AI moderation" as not beneficial, even from a player's viewpoint. […]

 

Appreciate the detailed take — it's always great to hear thoughtful perspectives, especially from someone on the team. You're absolutely right that AI can't replace the human touch, especially when it comes to judgment, context, and discretion. This tool isn’t trying to automate bans or replicate full incident evaluations. It simply flags clear, visual violations (like ramming or blocking) in recorded videos, and even then, it doesn’t act on its own — it just suggests.

 

You're spot on about complex situations: when there's desync, multiple players, or nuanced rule breaks like §2.3 or §2.6, AI would fall short. That’s why this tool won’t be used for those edge cases, nor does it access any player telemetry, server-side data, or anything beyond visible frames in a submitted video. It won’t replace mod involvement or in-game reporting — if anything, it just filters out the super obvious stuff so staff can spend more time where their judgment is needed most.
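To illustrate what "only visible frames" means in practice, here is a minimal sketch assuming an object detector supplies per-frame bounding boxes for two trucks. Nothing here reflects the actual tool; the box format and frame threshold are assumptions.

```python
def boxes_overlap(a: tuple, b: tuple) -> bool:
    """Axis-aligned overlap test for boxes given as (x1, y1, x2, y2)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def sustained_contact(frames: list, min_frames: int = 3) -> bool:
    """frames: list of (box_a, box_b) pairs, one per video frame.
    Flag contact only when the boxes overlap for several consecutive
    frames, which filters out single-frame detector glitches."""
    run = 0
    for a, b in frames:
        run = run + 1 if boxes_overlap(a, b) else 0
        if run >= min_frames:
            return True
    return False

apart = ((0, 0, 10, 10), (20, 20, 30, 30))
touch = ((0, 0, 10, 10), (8, 8, 18, 18))
print(sustained_contact([apart, touch, touch, touch]))   # True
```

Note how little this can actually conclude: it sees overlap, not intent, which is exactly why the output is only ever a flag for human review.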

 

And totally agree — trust is key. A ban that feels “bot-issued” without human oversight would only hurt the community. That’s not what this is about. Every case flagged by AI will still go through a human before any action is taken.

 

Again, thanks for bringing this angle — it's important, and we’re keeping these concerns front and center while building this.

 

2 hours ago, [ S.PLH ] Warrior said:

My only concern is that we would have to double-check what the AI is suggesting, because we cannot fully rely on it and follow it blindly without seeing/knowing the context ourselves.

So in the end it would only cause double work, which means more time spent on a report.

 

That's true, but the goal isn’t to blindly follow AI—it’s to speed things up over time. Think of how VACnet evolved in CS:GO/CS2. At first, players handled Overwatch manually, but Valve used those results to train their system. Now, it handles a huge chunk automatically. It took years, but it works. Same could happen here if we give it a chance 🙂

Posted

Apart from the concerns above, I am all for new ideas on how to improve our systems and make them more efficient.

 

My only concern with this sort of system is the removal of the human element. We know it's a video game; however, we can be a little lenient depending on the situation and make informed decisions based on what has occurred.

 

AI simply does not have this, of course; being artificial, it cannot understand or perceive human thought processes or weigh up actions the way we do.

 

What makes this a community is the human interaction between staff and players. Relying on AI to inform people of their actions based on millimetre-deep detective work can certainly prove who is at fault, but it completely eliminates that human interaction just because the AI says so.

 

I'm not against the advancement of technology, but I fear this will impact that human, emotional interaction.



Posted

I think it would be a good idea, maybe not to take over, but to sort reports into groups to make it a little simpler for a GM to work through batches. I feel AI is not in a place to take over; as stated prior, AI can't fully understand human emotion, but it can understand patterns of common behavior. Once that's been achieved, we will officially be Skynet 🤠


Guest average_player_f
Posted

It doesn’t matter how accurate AI is; in the future, it will likely become even better at detecting who’s at fault. But I think human interaction is still important, especially to allow for leniency in some cases. Even if AI can detect things accurately now or in the future, it isn’t really necessary here. That’s because report moderators (RMs) and game moderators (GMs) are volunteers, meaning they’re unpaid, so there’s no financial pressure to replace them. Plus, there are still many players willing to volunteer for moderation roles.
 

Game moderation itself is a kind of game. For example, a GM can "play" by reviewing mistakes or accidents caused by others and deciding whether to ban or kick them. I think that’s part of the reason many people are happy to volunteer; it’s engaging in its own way. So there’s no real need to replace game moderators with AI. That said, AI could still be helpful in assisting with in-game moderation reports. But as I mentioned, human involvement is important to allow for some leniency.
Maybe in the future, AI can be developed to be both strict and lenient enough to behave like a human, and at that point it could be more helpful for in-game moderation.

  • 1 month later...
Posted

I have seen a game moderator on Twitch talk about this. His whole point was that AI could be good for basic things, but how would it handle more complex ones? AI could at least be used for suggestions on reports, like what someone should be banned for, etc. But I really think there is no point adding it for the time being; if anything, we just need more moderators. And if AI is added, how can we be sure it won't false-ban people, and will it take lag into account for certain player incidents?
