
Search results differ between real-time search and searches such as the last 4 hours or Today.

Codyy_Fast
Explorer

Hello Splunk Community,

 

I monitor audit.log on RHEL 8. As soon as I generate a specific log entry locally, I can find it in Splunk with my defined search query. However, after a few hours have passed, I can no longer find it with the same search query. Of course, I adjust the time range accordingly: first I search in real time (last 30 minutes), then I switch to, for example, Today or the last 4 hours.

I have noticed that this happens with searches that include "transaction msg maxspan=5m". I want to see all the related transactions.

When I have the command transaction msg maxspan=5m in my search, I find all the related transactions in real time. After a few hours, I no longer get any hits with the same search query. Only when I remove the transaction command from the search do I see the entries again, but then I don't see as much information as before. Nothing changes if I switch to transaction msg maxevents=3.

Do I possibly have a wrong configuration of my environment here, or do I need to adjust something?

Thanks in advance.

Search Query:

index="sys_linux" sourcetype="linux_audit"
| transaction msg maxspan=5m
| search type=SYSCALL (auid>999 OR auid=0) auid!=44444 auid!=4294967295 comm!=updatedb comm!=ls comm!=bash comm!=find comm!=crond comm!=sshd comm!="(systemd)"
| rex field=msg "audit\((?P<date>[\d]+)"
| convert ctime(date) | sort by date
| table date, type, comm, uid, auid, host, name

bowesmana
SplunkTrust

Using transaction is rarely a good solution, as it has numerous limitations and results will silently disappear, as you have noticed.

It seems you're looking for events with the same msg within a 5-minute window, where the group contains a SYSCALL record and is not from certain comm types, but given that audit messages are typically time based, can you elaborate on what you're trying to do here?

You are asking Splunk to hold 5 minutes of data in memory for every msg combination, so if your data volume is large then lots of those combinations will get discarded.

Whenever you use transaction, you should filter out as much data as possible before you use it. Can you give an example of what groups of events you are trying to collect together? The stats command is generally a much better way of doing this task and does not have these limitations.
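As an illustration only (the exact grouping requirement isn't stated, so this is just a sketch reusing the fields from the posted search), a stats-based version that groups by msg without the transaction memory limits might look roughly like this:

index="sys_linux" sourcetype="linux_audit"
| rex field=msg "audit\((?P<date>[\d]+)"
| stats min(date) AS date values(type) AS type values(comm) AS comm values(uid) AS uid values(auid) AS auid values(host) AS host values(name) AS name BY msg
| search type=SYSCALL (auid>999 OR auid=0) auid!=44444 auid!=4294967295 comm!=updatedb comm!=ls comm!=bash comm!=find comm!=crond comm!=sshd comm!="(systemd)"
| convert ctime(date)
| sort date
| table date, type, comm, uid, auid, host, name

The filter stays after the grouping, as in the original, so the other record types that share the same msg are still collected. Unlike transaction, stats does not keep an open pool of partial transactions that can be evicted, and if each msg value identifies a single audit event, the 5-minute window isn't really doing anything anyway.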

Also, note that sort by date is not valid SPL, as "by" is treated here as a field and not a keyword - just use sort date.

 

 


PaulPanther
Motivator

There are some limits for the transaction command; you can find them under "Memory control options" in transaction - Splunk Documentation.

More details on these limits can be found in the [transactions] stanza in limits.conf - Splunk Documentation.
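For reference, the settings in question look roughly like this (names from limits.conf.spec; the defaults shown here are typical, but check the spec for your Splunk version before relying on them):

[transactions]
# maximum number of not-yet-closed transactions kept in the open pool;
# the least recently used ones are evicted once this is exceeded
maxopentxn = 5000
# maximum total number of events held across open transactions before
# LRU eviction kicks in
maxopenevents = 100000

Since transaction discards evicted transactions by default (keepevicted=false), hitting these limits is exactly what makes results silently disappear, as described in the question.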

 
