All Posts

If you are Army, you need to be on version 9.0.10, 9.1.5, or 9.2.2. There was a bug that was fixed and pushed on 7/1/2024.
We want to limit the ingestion of data that is coming from some sources (in this case the value would be in Properties.HostName) because they basically are not working correctly (customer machines) and continue to spam the system. (Turning them off is not an option.) I know that we can add hardcoded filters such as below: Name: Serilog:Filter:nn:Args:expression Value: @p['AssemblyName'] = 'SomeAssembly.xxx.yyy' and @p['HostName'] in ['Spammer1', 'Spammer2', ...] But the spammers change from time to time, and we can generate their list. The question is: if I have a list of these spammers (in any form needed), can I somehow use some sort of value in the expression above, or some other method, to read from that list (in place of the "in [...]" expression above)?
Is there any way to authenticate DB Connect using a key pair instead of user/password? If not, has anyone found any suggested workarounds?
In indexes.conf from the CM, I tried to set thawedHomePath to a volume, which I have since learned does not work. I set the path from volume:cold back to $SPLUNK_DB, but no matter what I do the indexer will not acknowledge that I changed it back. It still thinks it's set to the volume. I modified it, commented it out, deleted the whole indexes.conf file, and loaded a manual one in /etc/system/local/indexes.conf, and nothing will un-stick it. Every time I start the indexer, the logs show it won't start because thawedHomePath is still mapped to a volume. When I run ./splunk btool indexes list --debug, it shows the thawedHomePath in question is configured correctly. Has anyone experienced this before? Any suggestions on how to get it to accept the change? Running Splunk 9.2 on RHEL 8 with 1 CM and 2 IDXs clustered together. Fairly new deployment, still working the bugs out.
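For reference, the indexes.conf attribute for thawed buckets is thawedPath, and unlike homePath/coldPath it cannot be defined in terms of a volume. A minimal sketch of a stanza that should validate (the index and volume names here are placeholders):

# indexes.conf -- "myindex", "hot", and "cold" are hypothetical names
[myindex]
homePath   = volume:hot/myindex/db
coldPath   = volume:cold/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb

If the indexer still reports the old value after a restart, it can be worth checking every file btool lists for a stale copy: on clustered peers, the manager-pushed bundle (under etc/slave-apps, called etc/peer-apps in newer releases) takes precedence over etc/system/local, so a leftover setting in the pushed bundle would keep winning.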
Has anyone found a solution or workaround for this? 
I've got this search: index=my_index data_type=my_sourcetype earliest=-15m latest=now | eval domain_id=if(isnull(domain_id), "NULL_domain_id", domain_id) | eval domain_name=if(isnull(domain_name), "NULL_domain_name", domain_name) | eval group=if(isnull(group), "NULL_Group", group) | eval non_tier_zero_principal=if(isnull(non_tier_zero_principal), "NULL_non_tier_zero_principal", non_tier_zero_principal) | eval path_id=if(isnull(path_id), "NULL_path_id", path_id) | eval path_title=if(isnull(path_title), "NULL_path_title", path_title) | eval principal=if(isnull(principal), "NULL_principal", principal) | eval tier_zero_principal=if(isnull(tier_zero_principal), "NULL_tier_zero_principal", tier_zero_principal) | eval user=if(isnull(user), "NULL_user", user) | eval key=sha512(domain_id.domain_name.group.non_tier_zero_principal.path_id.path_title.principal.tier_zero_principal.user) | table domain_id, domain_name, group, non_tier_zero_principal, path_id, path_title, principal, tier_zero_principal, user, key Because we get repeating events where the only difference is the timestamp, I'm trying to put together a lookup that contains the sha512 key so repeated events can be skipped. What I found is that I can't have a blank value in the sha512 command. Does anyone have a better way of doing this than what I have? TIA, Joe
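If it's useful, one way to avoid the per-field null handling might be fillnull plus a delimited concatenation, so blank values never reach sha512. A sketch against the same field names; the "|" delimiter is just an assumption to keep adjacent values from colliding:

index=my_index data_type=my_sourcetype earliest=-15m latest=now
| fillnull value="NULL" domain_id domain_name group non_tier_zero_principal path_id path_title principal tier_zero_principal user
| eval key=sha512(domain_id."|".domain_name."|".group."|".non_tier_zero_principal."|".path_id."|".path_title."|".principal."|".tier_zero_principal."|".user)
| table domain_id, domain_name, group, non_tier_zero_principal, path_id, path_title, principal, tier_zero_principal, user, key

The delimiter also avoids a subtle collision the plain concatenation has, where e.g. ("ab","c") and ("a","bc") hash identically.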
Using Splunk Add-on for Microsoft Windows and Splunk Add-on for Unix and Linux on Splunk Enterprise v9.3.0. What are the Linux (RHEL 8) equivalents for these Splunk Windows queries? e.g. Network Traffic: Windows: index=wmi host=MyWindowsHost sourcetype="Perfmon:Network Interface" counter=Bytes* | timechart span=15m max(Value) as "Bytes/sec" by counter Linux: ? e.g. CPU: Windows: index=wmi host=MyWindowsHost sourcetype="Perfmon:CPU Load" | timechart span=15m max(Value) as "CPU Load" by counter Linux: index=os host=MyLinuxHost source=cpu CPU="all" | timechart span=15m max(pctSystem),max(pctUser) by CPU
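Not a definitive mapping, but for network traffic the Unix TA's interfaces script is the usual counterpart to Perfmon:Network Interface. The field names vary by add-on version, so treat the source name and the RXbytes/TXbytes/Name fields below as assumptions and verify against your data first:

index=os host=MyLinuxHost source=interfaces
| timechart span=15m max(RXbytes) as "RX bytes" max(TXbytes) as "TX bytes" by Name

To confirm the actual field names before charting: index=os source=interfaces | fieldsummary | table field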
Thanks for the info. I tried both solutions and they are functionally equivalent, although the "untable" approach only includes buckets where there are data points, i.e., if the search time range extends beyond the events within it, the resulting timechart only covers the span where events actually exist. I inspected each search within Splunk and, for a relatively short time range, I see the following, so the foreach approach seems to be more efficient. foreach approach: This search has completed and has returned 30 results by scanning 174 events in 1.185 seconds. untable approach: This search has completed and has returned 11 results by scanning 528 events in 1.674 seconds.
We're on Splunk Cloud version 9.1
My original query only returned start and end events so the duration calculation worked.  With the change to the base query, we'll have to change how we extract times. "My Base query" ("Starting execution for request" OR "Successfully completed execution" OR "status" OR "Path") | rex "status:\s+(?<Status>.*)\"}" | rex field=_raw "\((?<Message_Id>[^\)]*)" | rex "Path\:\s+(?<ResourcePath>.*)\"" | rex "timestamp\:\s+(?<timestamp>.*)\"" | eval startTime = if(searchmatch("Starting execution for request"), timestamp, startTime), endTime = if(searchmatch("Successfully completed execution"), timestamp, endTime) | stats max(startTime) as startTime, max(endTime) as endTime, values(*) as * by Message_Id | stats values(*) as * by Message_Id | eval end_timestamp_s = endTime/1000, start_timestamp_s = startTime/1000 | eval duration = end_timestamp_s - start_timestamp_s | eval human_readable_etime = strftime(end_timestamp_s, "%Y-%m-%d %H:%M:%S"), human_readable_stime = strftime(start_timestamp_s, "%Y-%m-%d %H:%M:%S"), duration = tostring(duration, "duration") | table Message_Id human_readable_stime human_readable_etime duration Status ResourcePath
The bulletin message is trying to help you avoid data exfiltration by saying content in alert actions can go anywhere in the world.  It will appear if the allowedDomainList is empty.  If you are OK with that then you can ignore the message. If you prefer to limit alert actions to your own domain (and/or others) then update the allowedDomainList and the bulletin messages will stop. I'm not aware of a way to have an empty allowedDomainList and not get the warning message.
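For reference, a minimal sketch of the setting (in alert_actions.conf, e.g. under $SPLUNK_HOME/etc/system/local/; example.com stands in for your own domain):

# alert_actions.conf -- example.com is a placeholder for your domain(s)
[email]
allowedDomainList = example.com, corp.example.com

A restart (or configuration reload) is typically needed before the bulletin message clears.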
Which version of Splunk are you running? I've heard this can happen on 9.1.x
What kind of logs are you trying to fetch? Does the system have a forwarder or Splunk Enterprise installed on it?
Hi richgalloway, the below query gives me all the required results when OR "status" OR "Path" is added to the query. However, it's taking the wrong timestamps: it's using the difference between the first two events. I need the duration between the "Starting execution for request" and "Successfully completed execution" timestamps. "My Base query" ("Starting execution for request" OR "Successfully completed execution" OR "status" OR "Path") | rex "status:\s+(?<Status>.*)\"}" | rex field=_raw "\((?<Message_Id>[^\)]*)" | rex "Path\:\s+(?<ResourcePath>.*)\"" | rex "timestamp\:\s+(?<timestamp>.*)\"" | stats min(timestamp) as startTime, max(timestamp) as endTime, values(*) as * by Message_Id | stats values(*) as * by Message_Id | eval end_timestamp_s = endTime/1000, start_timestamp_s = startTime/1000 | eval duration = end_timestamp_s - start_timestamp_s | eval human_readable_etime = strftime(end_timestamp_s, "%Y-%m-%d %H:%M:%S"), human_readable_stime = strftime(start_timestamp_s, "%Y-%m-%d %H:%M:%S"), duration = tostring(duration, "duration") | table Message_Id human_readable_stime human_readable_etime duration Status ResourcePath
Hi @Sarath Kumar.Sarepaka, Thanks for asking your question on the community. Let's see if the community jumps in with more help. In the meantime, I found this AppD Docs page that I think could be helpful.
I need help writing a query to fetch logs from the system.
Hi there! I'm looking for a comprehensive list of report ideas for all of security, including management/metrics, operations, and compliance. Has anyone created such a list? Would you mind sharing? I'd like to see a long list of reports so I can help identify gaps in security posture. Thanks!!!
This looks like a good case for Ingest Actions. Ingest Actions lets you select data, filter it using regex or eval expressions, and then set the index when the conditions are met. It's under Settings -> Ingest Actions.
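For comparison, the long-standing hand-rolled equivalent is a props/transforms pair that routes matching events to the nullQueue. A sketch, where the sourcetype name and the regex are placeholders for your data:

# props.conf -- my_sourcetype is hypothetical
[my_sourcetype]
TRANSFORMS-drop_spammers = drop_spammer_hosts

# transforms.conf -- adjust the regex to match your spammer hosts
[drop_spammer_hosts]
REGEX = "HostName":"(Spammer1|Spammer2)"
DEST_KEY = queue
FORMAT = nullQueue

Ingest Actions builds this kind of ruleset for you through the UI, which is usually easier to maintain as the host list changes.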
What is the best approach for data visualization using tstats? I am new to using tstats; I moved away from searching the index directly because tstats speeds up the query. For example, this query shows the vulnerabilities found on each IP: | tstats summariesonly=t dc(Vulnerability.signature) as vulnerabilities from datamodel=Vulnerability by Vulnerability.dest | sort -vulnerabilities | rename Vulnerability.dest as ip_address | table ip_address vulnerabilities For example, the first line from that query shows IP 192.168.1.5 has 4521 vulnerabilities. I then created a detail table to verify and show some other columns related to that IP (click the IP and send a token), but it shows a different amount of data (4638 events): | tstats summariesonly=t count FROM datamodel=Vulnerability WHERE Vulnerability.destination="192.168.1.5" AND Vulnerability.signature="*" BY Vulnerability.destination, Vulnerability.signature, Vulnerability.severity, Vulnerability.last_scan, Vulnerability.risk_score, Vulnerability.cve, Vulnerability.cvss_v3_score, Vulnerability.solution | `drop_dm_object_name(Vulnerability)` | rename destination as ip_address | fillnull value="Unknown" ip_address signature severity last_scan risk_score cve cvss_v3_score solution | table ip_address signature severity last_scan risk_score cve cvss_v3_score solution I know this is related to the query's grouping, because if I change the BY fields it changes the amount of data displayed too. How can I make this query's count match the first query's output while still displaying the other fields, even when they are empty?
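One pattern that might keep the two views consistent: group the detail query only by the field the summary deduplicates on (the signature) and pull the remaining columns in with values(), so each signature stays a single row. A sketch, assuming the datamodel field is Vulnerability.dest as in the first query:

| tstats summariesonly=t values(Vulnerability.severity) as severity values(Vulnerability.last_scan) as last_scan values(Vulnerability.risk_score) as risk_score values(Vulnerability.cve) as cve values(Vulnerability.cvss_v3_score) as cvss_v3_score values(Vulnerability.solution) as solution from datamodel=Vulnerability where Vulnerability.dest="192.168.1.5" by Vulnerability.signature
| `drop_dm_object_name(Vulnerability)`
| fillnull value="Unknown" severity last_scan risk_score cve cvss_v3_score solution

Since the result is one row per signature, the row count should line up with dc(Vulnerability.signature) from the summary query, and fields with no value still render thanks to fillnull.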
The appendpipe effectively reprocesses the stats events returned by the first timechart, but in order to do this they have to be broken out of the chart format, which is what the untable does. The xyseries puts the events back into the chart format with the additional column for the count of nodes for each time period.
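To make the shape of that pipeline concrete, a sketch (the "node" field and the span are placeholders, since the original search isn't shown):

... | timechart span=15m count by node
| untable _time node count
| appendpipe [ stats dc(node) as count by _time | eval node="node_count" ]
| xyseries _time node count

untable flattens the chart into (_time, node, count) rows, appendpipe adds one synthetic "node_count" row per time bucket, and xyseries pivots everything back into chart form with the extra column.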