
Search results differ between real-time search and searches such as the last 4 hours or Today.

Codyy_Fast
Explorer

Hello Splunk Community,

 

I monitor audit.log on RHEL 8. As soon as I generate a specific log entry locally, I can find it in Splunk with my defined search query. However, once a few hours have passed, I can no longer find it with the same search query, even though I adjust the time range accordingly: first I search in real time (last 30 minutes), then I switch to, for example, Today or the last 4 hours.

I have noticed that this happens with searches that include "transaction msg maxspan=5m". I want to see all the related transactions.

When I have the command transaction msg maxspan=5m in my search, I find all the related transactions in real time. After a few hours, I no longer get any hits with the same search query. Only when I remove the transaction command from the search do I see the entries again, but then I don't see as much information as before. Nothing changes if I switch to transaction msg maxevent=3.

Do I possibly have a wrong configuration of my environment here, or do I need to adjust something?

Thanks in advance.

Search Query:

index="sys_linux" sourcetype="linux_audit"
| transaction msg maxspan=5m
| search type=SYSCALL (auid>999 OR auid=0) auid!=44444 auid!=4294967295 comm!=updatedb comm!=ls comm!=bash comm!=find comm!=crond comm!=sshd comm!="(systemd)"
| rex field=msg "audit\((?P<date>[\d]+)"
| convert ctime(date) | sort by date
| table date, type, comm, uid, auid, host, name

bowesmana
SplunkTrust

Using transaction is rarely a good solution, as it has numerous limitations and results will silently disappear, as you have noticed.

It seems you're looking for events with the same msg within a 5-minute window that include a SYSCALL and are not from certain comm types, but given that audit messages are typically time-based, can you elaborate on what you're trying to do here?

You are asking Splunk to hold 5 minutes of data in memory for every msg value, so if your data volume is large, many of those open transactions will get discarded.

Whenever you use transaction, you should filter out as much data as possible before running it. Can you give an example of which groups of events you are trying to collect together? The stats command is generally a much better way to do this task and does not have the same limitations.
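Purely as a sketch, a stats-based version of your grouping might look something like the following (this assumes msg is the key you want to group on and reuses the field names from your search; the aggregations are placeholders you would adapt to what you actually need):

index="sys_linux" sourcetype="linux_audit"
| stats values(type) as type values(comm) as comm values(uid) as uid values(auid) as auid values(name) as name values(host) as host min(_time) as date by msg
| search type=SYSCALL (auid>999 OR auid=0) auid!=44444 auid!=4294967295 comm!=updatedb comm!=ls comm!=bash comm!=find comm!=crond comm!=sshd comm!="(systemd)"
| convert ctime(date)
| sort date
| table date, type, comm, uid, auid, host, name

Because stats does not keep open transactions in memory waiting for later events, groups do not silently drop out of the results when memory limits are reached.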

Also, note that sort by date is not valid SPL, as "by" is treated here as a field name rather than a keyword; just use sort date.
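Applied to your search, the end of the pipeline would then read:

| rex field=msg "audit\((?P<date>[\d]+)"
| convert ctime(date) | sort date
| table date, type, comm, uid, auid, host, name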

 

 


PaulPanther
Motivator

There are some limits for the transaction command; you can find them under "Memory control options" in transaction - Splunk Documentation.

More details on these limits can be found in the [transactions] stanza in limits.conf - Splunk Documentation.
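As a reference point, the settings live in the [transactions] stanza of limits.conf; the values below are only the documented defaults, so check the documentation for your Splunk version:

[transactions]
# Maximum number of open (not yet closed) transactions kept in memory; the oldest are evicted beyond this
maxopentxn = 5000
# Maximum number of events held across open transactions before eviction starts
maxopenevents = 100000

Hitting either limit causes the oldest open transactions to be evicted, which is one way events can disappear from transaction results.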

 
