Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi, I am trying to use earliest and latest with a date-time value. Could you please advise the right format to use? I am not sure the SPL below is correct: EventCode="1234" AND earliest="5/8/2024:10:07:20" latest="5/8/2024:10:17:20"
@jaibalaraman The search can be in any time zone. Can you elaborate on your question and explain exactly what you need?
Hello, how do I set a flag based on a field value across multiple rows? For example, in the following table, network-1 is set to yes because server-1, which is on network-1, is also on fw-network-1, which is behind a firewall. Please suggest. Thank you!!

server     network        firewall
server-1   network-1      yes
server-1   fw-network-1   yes
server-2   network-2      no
server-3   network-1      yes
server-3   fw-network-1   yes
server-4   network-2      no
server-5   network-3      yes
server-5   fw-network-3   yes
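A minimal SPL sketch of one way to derive that flag with eventstats, assuming the firewall-backed networks can be recognized by the fw- prefix and that the field names match the table above:

... base search ...
| eventstats values(network) as networks_on_server by server
| eval firewall=if(coalesce(mvcount(mvfilter(match(networks_on_server, "^fw-"))), 0) > 0, "yes", "no")
| fields - networks_on_server

eventstats copies the full set of networks seen for each server onto every row, so the yes/no flag lands on all rows for that server.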
We would like to ask for help regarding DB Connect for DB2. We are currently trying to connect to the DB2 database of an IBM i server, but to no avail. Is there anything that needs to be done first for a DB2 database on IBM i to be able to connect successfully from Splunk?
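One detail that often matters here: DB2 on IBM i is usually reached with the IBM Toolbox for Java / JTOpen (jt400) JDBC driver rather than the DB2 JCC driver that the stock DB2 connection type expects. A rough sketch of a custom connection type in db_connection_types.conf, where the stanza name, displayName, and serviceClass are assumptions to verify against the DB Connect documentation for your version:

[db2_ibmi]
displayName = DB2 on IBM i (jt400)
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = com.ibm.as400.access.AS400JDBCDriver
jdbcUrlFormat = jdbc:as400://<host>;prompt=false

The jt400.jar driver would also need to be placed in the DB Connect drivers directory.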
Also, the search can be done in UTC or any other time zone.
Splunk search: EventCode="4688" AND earliest="5/8/2024:10:07:20" latest="5/8/2024:10:17:20" Could you please confirm whether the time search is correct?
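For reference, a minimal sketch of the two common ways to express this time window, assuming the default literal format for earliest/latest (%m/%d/%Y:%H:%M:%S):

EventCode="4688" earliest="05/08/2024:10:07:20" latest="05/08/2024:10:17:20"

EventCode="4688" earliest=-15m@m latest=now

Note that literal times are interpreted in the time zone configured for the searching user.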
Hello @Josh.Varughese  Yes, the old version of the machine agent is only supported by the Docker runtime, but the latest MA is supported by containerd. Please use the latest MA. Best Regards, Rajesh Ganapavarapu
The tstats command only works with indexed fields.  If the field is not indexed and is not in a data model (same thing, really), then it can't be used with tstats.
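A minimal sketch of the distinction, assuming a wineventlog index and an accelerated CIM Network_Traffic data model are available; the first search groups only by always-indexed fields, while the second reaches a search-time field through a data model instead:

| tstats count where index=wineventlog by sourcetype, host

| tstats count from datamodel=Network_Traffic by All_Traffic.src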
I have a wineventlog index to alert on locked accounts (EventCode=4740), but I want to limit this based on certain users. The data in the wineventlog index is pretty limited, so it looks like I would have to reference another index, like activedirectory, that contains similar data. I was thinking I could reference the "OU" field in the activedirectory index to make this possible, but I'm struggling with what I need to combine to make this search work. I've looked at using coalesce, and can get results from both indexes/sourcetypes, but I can't seem to limit my search to just EventCode=4740 and OU="Test Users Group".

(index="wineventlog" AND sourcetype="wineventlog" AND EventCode=4740) OR (index="activedirectory" AND sourcetype="ActiveDirectory" AND sAMAccountName=*)
| eval Account_Name = lower( coalesce( Account_Name, sAMAccountName))
| search Account_Name="test-user"

Some of the key fields that I'm trying to reference from the indexes are as follows:

index = wineventlog
sourcetype = wineventlog
EventCode = 4740
Security_ID = domain\test-user
Account_Name = test-user
Account_Name = dc

index = activedirectory
sourcetype = ActiveDirectory
Account_Name = test-user
sAMAccountName = test-user
OU = Test Users Group
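A minimal sketch of one way to stitch the two sourcetypes together, assuming Account_Name is the common key and that the OU field only appears on the activedirectory events; stats collapses both event types onto one row per account so the final filter can use fields from either side:

(index="wineventlog" sourcetype="wineventlog" EventCode=4740) OR (index="activedirectory" sourcetype="ActiveDirectory" sAMAccountName=*)
| eval Account_Name=lower(coalesce(Account_Name, sAMAccountName))
| stats values(EventCode) as EventCode values(OU) as OU values(Security_ID) as Security_ID by Account_Name
| search EventCode=4740 OU="Test Users Group"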
I just stumbled upon this post while looking for something semi-unrelated. FWIW: there are some instances where it must be set to "true" in the .conf files. I had an issue back in February where queries were not displaying their execution time in Splunk 9.0.8. I found a KB article in Splunk support that suggested it might be caused by a setting** in limits.conf that was set to "1" instead of "true". We changed it to "true" and that fixed it. We did a little digging with the REST API and found that it would return 1/0 for the configs, but when looking at the .confs, they were written as true/false. **I won't reference the setting so as to not upset the Splunk Gods who may hold support contracts sacred.
Hi, how can I rewrite the following search using tstats and the Network_Traffic data model?

index=*pan* sourcetype="pan:threat" severity IN ("high", "critical")

So far I have tested the following:

| tstats count from datamodel=Network_Traffic by All_Traffic.src_ip

but given that "severity" is not a field included in the data model, only in the index, how can I add the condition severity IN ("high", "critical")? Thank you!
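For what it's worth, a hedged sketch: tstats can only filter on fields that the data model actually contains, so severity would have to come from a model that includes it. Assuming the pan:threat data is also mapped to the CIM Intrusion_Detection data model (which does carry a severity field) and that the model is accelerated, one possible direction is:

| tstats count from datamodel=Intrusion_Detection where IDS_Attacks.severity IN ("high", "critical") by IDS_Attacks.src

Otherwise the severity constraint has to stay a normal search against the index, outside of tstats.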
Is there any playbook for setting up Heavy Forwarders?
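One reference point, hedged: a Heavy Forwarder is essentially a full Splunk Enterprise install with forwarding to the indexers enabled. A minimal sketch of the outputs.conf piece, with hypothetical indexer names:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

Inputs, apps, and any parsing configuration (props.conf / transforms.conf) then live on the Heavy Forwarder, much as they would on an indexer.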
Hi, have you read this: https://docs.splunk.com/Documentation/Splunk/9.2.1/Data/AboutHECIDXAck? And have you implemented ack response handling on your HEC client? Are you using separate channel values on every HEC client instance? How many HEC receivers do you have, and are they behind a load balancer? r. Ismo
Hi, Splunk is designed to archive data from buckets, not from the collection phase. If you are running on AWS, you could try to archive data before it is indexed with an ingest action, but I think you are running it on premises? The best option is to use an archive script in indexes.conf for archiving buckets. If this is not an option for you, then you have two options: set up props.conf + transforms.conf to duplicate that data, e.g. to a syslog server, and use it to store and archive the data (but as Splunk uses UDP to send the syslog feed, you will lose some events from time to time), or use some other tool to collect and archive those events and have that tool also send them to Splunk. r. Ismo
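A minimal indexes.conf sketch of the bucket-archiving option, with a hypothetical index name and archive path; coldToFrozenDir copies frozen buckets to a directory, while coldToFrozenScript runs your own archiving logic instead (the two are alternatives, not used together):

[my_index]
coldToFrozenDir = /archive/splunk/my_index

# or, instead of coldToFrozenDir:
# coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/my_cold_to_frozen.py"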
Thanks for the side message.

cms.apache-access
| eval request_time_num = tonumber(request_time)
| eval category = case(
    request_time_num < 100000, "<1s",
    request_time_num < 200000, "<2s",
    request_time_num < 300000, "<3s"
  )
| stats count by category
Hi, you could try something like:

LINE_BREAKER = ([\n\r]+)\d{8} \d\d:\d\d:
TIME_FORMAT = %Y%m%d %H:%M:%S.%3Q
TIME_PREFIX = ^

r. Ismo
Hello, I'm new to Dashboard Studio. I'm looking for a way to show/hide certain visualizations based on user selection in a dropdown, e.g. based on token value. As I understand, this is pretty easy t... See more...
Hello, I'm new to Dashboard Studio. I'm looking for a way to show/hide certain visualizations based on user selection in a dropdown, e.g. based on token value. As I understand, this is pretty easy to achieve in the older (xml-based) version of Dashboards using the "depends" attribute. Is there an equivalent of this in Dashboard Studio? I wasn't able to find any good info on this.
I'm trying to figure out how to query all of the events from an Apache log and produce a report with counts of the number of events with request_time less than 3s, less than 2s, and less than 1s.
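Since the thresholds in the earlier reply suggest request_time is in microseconds (an assumption), a sketch of getting the cumulative counts the question asks for in a single stats call:

cms.apache-access
| eval request_time_num = tonumber(request_time)
| stats count(eval(request_time_num < 100000)) as under_1s
        count(eval(request_time_num < 200000)) as under_2s
        count(eval(request_time_num < 300000)) as under_3s

Unlike bucketing with case(), these counts overlap, so under_3s includes everything counted in under_2s and under_1s.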
Running index="firewall" works successfully, and adding sourcetype="firewall" lets me search through the logs successfully, but for some reason it will only let me filter on and look for the fields below. I can't look for destination IP addresses?