All Posts
Hello, and thank you for your help! Here is what my dashboard looks like now:

<event>
  <search>
    <query>$case_token$ sourcetype=hayabusa $host_token$ $level_token$ $rule_token$ | fields Timestamp, host, Computer, Level, Channel, RecordID, EventID, Ruletitle, Details</query>
  </search>
  <fields>Timestamp, host, Computer, Level, Channel, RecordID, EventID, RuleTitle, Details, _time</fields>
  <option name="count">50</option>
  <option name="list.drilldown">none</option>
  <option name="list.wrap">1</option>
  <option name="raw.drilldown">none</option>
  <option name="refresh.display">progressbar</option>
  <option name="table.drilldown">all</option>
  <option name="table.sortDirect">asc</option>
  <option name="table.wrap">1</option>
  <option name="type">table</option>
  <drilldown>
    <condition field="Channel">
      <set token="channel_token">$click.value$</set>
    </condition>
  </drilldown>
</event>

Here is what the corresponding search looks like:

index=test-index sourcetype=hayabusa host=* Level=* RuleType=* | fields Timestamp, host, Computer, Level, Channel, RecordID, EventID, Ruletitle, Details
Have your lookup return the common name for the HA pair and detect when the pair has not sent logs recently.
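For example, something along these lines (an untested sketch - it assumes the lookup is called firewall_pairs with columns host and pair_name, and that the index, the 24-hour lookback, and the 15-minute threshold get adjusted to your environment):

| tstats max(_time) as last_seen where index=firewall sourcetype=pan:traffic earliest=-24h by host
| inputlookup append=true firewall_pairs
| lookup firewall_pairs host OUTPUT pair_name
| stats max(last_seen) as pair_last_seen by pair_name
| where isnull(pair_last_seen) OR pair_last_seen < relative_time(now(), "-15m")
| eval minutes_silent=round((now()-pair_last_seen)/60)

Starting from tstats and appending the lookup rows means a pair that sent nothing at all inside the lookback window still shows up, via the isnull() branch.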
Hello. I am working on creating an alert in Splunk to detect when a firewall stops sending logs. All firewall logs are forwarded via syslog into Splunk as sourcetype=pan:traffic. The problem is that our firewalls are deployed as HA pairs (active and passive), and I don't see how to construct the query so it only triggers when BOTH firewalls in a pair (say, active city-fw01 and passive city-fw02) stop sending logs. We have more than 100 devices, so I am using a lookup table with the list. Any ideas would be great, thanks.
Can't reproduce either - please share your dashboard/report search so we can see what else might be going on?
Can't reproduce.

| makeresults count=100
| eval _raw="2025-05-19 12:38:40 aaa <something> bbb <something else> let's make this event long. Or at least long-ish. reason=we'll see how it works <br> &lt;<how about <now>/>&rt; No change. Thisisfine...", host="a", source="b"
| transaction maxevents=10000 host source
| table _time host source _raw

Splunk 9.3.0. Works as it should.
And what is the problem you're trying to solve?
Hi @ArtieZ  Please can you confirm/check two things?

1) Is the GUID on each of your indexers unique? I assume you'd have bigger problems if they weren't, but it's worth checking. This can be found in $SPLUNK_HOME/etc/instance.cfg

2) When you remediated by renaming the conflicting buckets - did you rename all replicas of those buckets on the other indexers too? If you only renamed them on a single indexer, the original conflicting bucket may well be replicated back again.
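As a quick way to compare GUIDs across the indexers from a search head, something like this should work (assuming the rest command is permitted to reach all your search peers; guid is the field name I'd expect back from server/info):

| rest /services/server/info
| table splunk_server guid version
| eventstats count as guid_count by guid
| where guid_count > 1

Any rows returned are instances sharing a GUID with another instance.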
But there are many fields inside _raw, so when I execute "table *" there is no problem with that. But when the whole event comes under _raw, those operators change to XML entities.
I have copied my Splunk query exactly. My final results are _time, host, source and _raw; there are no other fields, and the whole event is in _raw. The one highlighted in green is the output when the query is run only up to the transaction command. The one in pink is with the table command.
The table command doesn't of itself modify the contents of fields. How are you displaying the contents of the fields?
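One way to check whether the entities are really in the field or only appear in the rendered results is to test for them with eval before anything is displayed - a rough sketch against the search from this thread (index and source are the original poster's values):

index=abc source=xxx.trc
| transaction host source maxevents=100000
| eval has_entities=if(like(_raw, "%&lt;%") OR like(_raw, "%&gt;%"), "escaped in data", "clean")
| table _time host source has_entities

If has_entities comes back "clean", the angle brackets are intact in the field itself and the &lt;/&gt; forms are just how the results table renders them, not something the table command did to the data.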
Hi @sassofrasso  It is not possible to modify the robots.txt file. You can find it at "$SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/robots.txt", but changing it has no effect on the Splunk system, even after a _bump or restart.
Hello, my Splunk query is simple:

index=abc source=xxx.trc | transaction host source maxevents=100000 | table _time host source _raw

When I execute it only up to the transaction command, it is fine: "<" and ">" appear as they are. But when I add the table command, "<" and ">" change to "&lt;" and "&gt;". Is there any way I can prevent this?
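If the entities do turn out to be embedded in the data itself (an assumption - the eval check suggested earlier in the thread will tell), one possible workaround is to convert them back before tabling:

index=abc source=xxx.trc
| transaction host source maxevents=100000
| eval _raw=replace(replace(_raw, "&lt;", "<"), "&gt;", ">")
| table _time host source _raw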
@sassofrasso  As far as I know, it may not be possible to customize these files.
Hello everybody, is there a way to customize the default values of robots.txt and sitemap.xml in Splunk?
@qq-stan No, I haven't installed that.
Hi  It seems there is some confusion in this thread. Please see the docs below from SentinelOne. I believe there is some duplication of functionality between the apps (see the Details tab of https://splunkbase.splunk.com/app/5433). The reason you see API inputs on the different apps is that duplication in functionality: the SentinelOne app is able to pull data and also interact with SentinelOne via alert actions, but it's not recommended to run it on a search head unless it's a single-instance deployment. In that case you would use the SentinelOne app on the SH, configured with the API so you can utilise the alert actions, and the IA-sentinelone app for the inputs on a HF. Does that make sense?

Deployment Guide
Note: Do not install Add-Ons and Apps on the same system.

Single Instance (8.X)
- (Pre-requisite) Splunk CIM Add-on
- Only the SentinelOne App (sentinelone_app_for_splunk)

Single Instance + Heavy Forwarder (8.X)
- Single Instance: (Pre-requisite) Splunk CIM Add-on, SentinelOne App (sentinelone_app_for_splunk)
- Heavy Forwarder: IA-sentinelone_app_for_splunk

Distributed deployment (8.x)
- Heavy Forwarder: IA-sentinelone_app_for_splunk
- Search Head: (Pre-requisite) Splunk CIM Add-on (https://splunkbase.splunk.com/app/1621/), SentinelOne App (sentinelone_app_for_splunk)
- Indexer: TA-sentinelone_app_for_splunk
1. Load-balancing syslogs usually doesn't work very well.

2. Your description doesn't make much sense in some points.

3. This is a public community where volunteers share their experience/expertise for the common good. If you want a private audit - well, that's a service you normally pay for. Reach out to a friendly Splunk Partner in your area and ask them to review your environment.

4. Without knowing the details of your environment and seeing what really happened within it (checking internal logs if they haven't rolled over yet, maybe verifying some other logs and external monitoring), it's impossible to say what exactly happened. What _might have_ happened is the usual - a loss of connectivity, with enough data already buffered that the extra data overflowed and never made it into the buffer. Maybe - as you're saying you had "some issues with indexes" - some data was indexed but got lost. We don't know. And depending on how much data is still left in your environment, it might or might not be something that can only be found on-site by examining the environment in question.

As a side note, vis-a-vis your sharing "somewhere where it's not private" - are you sure you're in a position to freely disclose such information to a third party, without a prior service agreement and possibly an NDA?
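For the "checking internal logs" part, one hedged starting point is the per-host throughput that Splunk records in metrics.log - gaps in this chart line up with gaps in ingestion (the 15-minute span and the limit are arbitrary and worth adjusting):

index=_internal source=*metrics.log* group=per_host_thruput
| timechart span=15m sum(kb) by series limit=20

A flat zero for the affected host over the five missing hours would suggest the data never reached the indexing tier, while normal throughput there would point more towards a search-time or index-availability problem.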
Hi @xiyangyang  No, the Linux UF does not use kernel hook technology like eBPF to monitor or collect data. It relies on reading log files, monitoring system logs, and other user-space data sources. The Universal Forwarder primarily operates by:
- Monitoring files and directories (often using OS-level file system event notifications like inotify on Linux, or polling).
- Listening on specified network ports (TCP/UDP) for incoming data.
- Executing scripts and collecting their standard output.
- Reading from Windows event logs or other system-specific logging mechanisms.
Splunk does use eBPF in other products, but not in the UF. For more on eBPF see https://www.splunk.com/en_us/blog/learn/what-is-ebpf.html and for info on how it's used in Splunk Infrastructure Monitoring (part of the o11y suite) see https://www.splunk.com/en_us/products/infrastructure-monitoring-features.html
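If you want to see that user-space tailing in action, the forwarder's own internal logs record its file-monitor activity - a rough sketch (the host value is a placeholder, and TailReader/WatchedFile are component names I'd expect in recent splunkd.log output, so treat them as assumptions rather than a definitive list):

index=_internal host=<your_uf_host> sourcetype=splunkd (component=TailReader OR component=WatchedFile)
| stats count by component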
Hi Isoutamo. Sorry for the late response, I had some time off. But if I share our Splunk environment, perhaps there are some places you could recommend for troubleshooting. Our clients send syslog through a Netscaler environment to the syslog servers. The syslog servers are connected to the Splunk indexer cluster, which is connected to the search head cluster and heavy forwarders. The heavy forwarders are connected to a universal forwarder. There were no changes in our network. The logs from this specific host were missing for about 5 hours. Earlier we had some issues with the indexes in our Splunk environment, which they say are now fixed. In this case our syslog servers have Netscaler between themselves and the clients. The logs are in the end sent via the Internet to a cybersecurity environment. Does this make the picture clearer, or can I send you our diagram of the Splunk environment (some place where it is not public)? Brgds DD
I cannot find confirmation now, but if I recall correctly, Splunk on Linux uses inotify to get notified of new events in log files?