All Topics

Hi all, I have a riddle. Query A and query B do not return the same events, and I don't understand why.

Query A) returns 2 events as transactions:

| multisearch
    [search (11111111 OR 22222222) host=x index=y level=z (logger=a "text_a")]
    [search (11111111 OR 22222222) host=x index=y level=z (logger=b message="text_b")]
| rex field=_raw "<sg: ID>(?<ID>.*?)<"
| transaction ID keepevicted=false startswith="text_a" endswith=message="text_b"

Query B) returns 1 event as a transaction:

| multisearch
    [search (11111111 OR 22222222) host=x index=y level=z (logger=a "text_a")]
    [search host=x index=y level=z (logger=b message="text_b")]
| rex field=_raw "<sg: ID>(?<ID>.*?)<"
| transaction ID keepevicted=false startswith="text_a" endswith=message="text_b"

11111111 and 22222222 are used as IDs to test the query and confirm its correctness. But if I remove these IDs from the second search, as in query B), then I get only one result; the other is missing.

At first I thought it was because of the enormous number of records, so I used a time filter to reduce them, ending up with 19,351 events. Unfortunately, it didn't help. Of course, if I replace the multisearch with OR, it works.

Query C) If I move the ID filter to the second search instead, both events are there:

| multisearch
    [search host=x index=y level=z (logger=a "text_a")]
    [search (11111111 OR 22222222) host=x index=y level=z (logger=b message="text_b")]
| rex field=_raw "<sg: ID>(?<ID>.*?)<"
| transaction ID keepevicted=false startswith="text_a" endswith=message="text_b"

Query D) Just to be sure: if I remove "text_a" and message="text_b" from the searches, the event is still missing:

| multisearch
    [search (11111111 OR 22222222) host=x index=y level=z logger=a]
    [search host=x index=y level=z logger=b]
| rex field=_raw "<sg:ID>(?<ID>.*?)<"
| transaction ID keepevicted=false startswith="text_a" endswith=message="text_b"

Maybe some of you have already had similar issues with transaction and multisearch and know what could cause this problem. Thank you for your answers. Best regards, Robert
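For comparison, the single-search variant the post says does work (replacing multisearch with an OR of the two filters) might be sketched like this, using the same placeholder host, index, and field names from the post:

| search host=x index=y level=z ((logger=a "text_a") OR (logger=b message="text_b"))
| rex field=_raw "<sg: ID>(?<ID>.*?)<"
| transaction ID keepevicted=false startswith="text_a" endswith=message="text_b"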
Hello guys, I need your help. I'm trying to connect Cisco AMP via the Cisco AMP for Endpoints Events 2.0.2 input over the REST API, but it doesn't work; this error constantly appears in the logs:

ERROR Amp4eEvents - API Error (status 429): {"error":{"code":"429","message":"RATE LIMIT EXCEEDED next slot in 41m38s"}}

Is it possible to set a request limit somewhere? Or maybe the problem is something else?
We are moving away from using Windows Event Collection (WEC) to installing the Universal Forwarder (UF) on as many Windows machines as we can. I ran into an interesting issue that I don't know how to resolve. Event 1646, when collected using WEC and then forwarded to Splunk, shows information that doesn't appear if the same event is sent directly by the UF. I copied the stanza used by the UF on the WEC server and deployed it to the machine where the event is generated, but I am still not seeing the "extra" data when not using WEC. What am I missing? (Something easy, no doubt.) It seems as though I don't see the "Message" field when the event is collected by the UF. Thanks in advance.
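One thing worth checking (an assumption, not a confirmed diagnosis for this event): whether the UF input is collecting events as raw XML, since with renderXml enabled the rendered "Message" text is not included. A possible starting point, with the channel name as a placeholder:

# inputs.conf on the Universal Forwarder (hypothetical channel name)
[WinEventLog://Security]
disabled = 0
# renderXml = true collects raw XML and drops the rendered "Message" text;
# keep it false (the default) if the Message field is needed
renderXml = false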
We have data coming in and we are still working out a best practice on which alerts to monitor; however, my question is about the query below:

index="storage_vmax" sourcetype="dellemc:vmax:rest" type=ARRAY severity=FATAL
| search (severity!=NORMAL AND severity!=INFORMATION)
| stats count by _time, created_date, source, reporting_level, severity, asset_id, array_id, type, state, description

I would like to bring in only what was created in the last 24 hours. The problem with the existing query is that it is bringing in log entries created a year ago, which are stale. If we are going to have SNOW open tickets, we do not want it to do so on stale data, only new. Thanks, Dali
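If "created" corresponds to the event's _time, restricting the search window is the usual approach; a sketch, assuming the index and sourcetype names from the post:

index="storage_vmax" sourcetype="dellemc:vmax:rest" type=ARRAY severity=FATAL earliest=-24h latest=now
| stats count by _time, created_date, source, reporting_level, severity, asset_id, array_id, type, state, description

If instead old entries keep being re-indexed with a recent _time, filtering on created_date itself would be needed, e.g. with strptime() and relative_time(now(), "-24h") in a where clause; the exact strptime format string depends on how created_date is formatted in the data.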
Hi, I need to count events between now() and now() minus 10 minutes. Something like this: eval delta = now() - 10 minutes. Could you help please?
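Since now() returns epoch seconds, ten minutes ago is now() - 600; but to count events in that window, restricting the search time range is the simpler approach. A minimal sketch (the index name is a placeholder):

index=your_index earliest=-10m latest=now
| stats count

Alternatively, computed inline, | eval delta = now() - 600 gives the epoch timestamp of ten minutes ago, which can be compared against _time in a where clause.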
I'm trying to create a search macro which accepts a field to match on, enriches the results from a lookup, and outputs the enriching fields with the matched field's name prepended to the new field names. For example: `my_macro(sourceAddress)` should output the following field names (if it matches): sourceAddress_WHOIS, sourceAddress_Severity, sourceAddress_lastCheck, where WHOIS, Severity, and lastCheck are field names in the lookup table. This should also exhibit the same behavior, dynamically, for `my_macro(destinationAddress)`: destinationAddress_WHOIS, destinationAddress_Severity, destinationAddress_lastCheck. This macro may be called multiple times against multiple field names in a single search. destinationAddress, sourceAddress, clientAddress, proxyAddress, and more are all potential field names in the searches this macro would be used for, and multiple combinations of each can potentially exist in each result. I'd like to be able to clearly see which fields were enriched by the lookup table, if enrichment occurred.
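A sketch of how such a macro might be defined; the lookup name (threat_lookup) and its key field (address) are assumptions to adapt:

# macros.conf (lookup and key-field names are placeholders)
[my_macro(1)]
args = field
definition = lookup threat_lookup address AS $field$ OUTPUT WHOIS AS $field$_WHOIS, Severity AS $field$_Severity, lastCheck AS $field$_lastCheck

Called as `my_macro(sourceAddress)`, the argument is substituted for every $field$, producing sourceAddress_WHOIS, sourceAddress_Severity, and sourceAddress_lastCheck; fields are only added when the lookup matches, which makes enriched results easy to spot.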
Disclaimer: Totally new to Splunk. Started using it this week, and nobody else in my office knows Splunk either. I created dashboards for Windows events like this one: EventCode=4625 | timechart count by host sep=1hr. That shows a nice bar chart which gives information, like the number of events, when hovering the mouse over a bar. I want to either: 1) click on a bar and show all the event(s) information, or 2) display all the events in another panel in the dashboard. Thank you for your assistance.
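A common pattern for option 2 is a chart drilldown that captures the clicked bar into tokens and drives a second panel; a minimal Simple XML sketch (the index name is a placeholder, and $click.name2$ / $click.value$ are the standard drilldown tokens for the clicked series and x-axis value):

<row>
  <panel>
    <chart>
      <search><query>index=wineventlog EventCode=4625 | timechart span=1h count by host</query></search>
      <drilldown>
        <!-- capture the clicked bar's host series into a token -->
        <set token="sel_host">$click.name2$</set>
      </drilldown>
    </chart>
  </panel>
  <panel depends="$sel_host$">
    <event>
      <search><query>index=wineventlog EventCode=4625 host=$sel_host$</query></search>
    </event>
  </panel>
</row>

The second panel stays hidden (depends="$sel_host$") until a bar is clicked, which covers option 1 as well.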
I'm currently trying to upload a malware feed into Threat Intelligence Management. The feed itself is being pulled from the following URL: https://bazaar.abuse.ch/export/csv/recent/ The issue is that while it is in CSV format, the values themselves are also encapsulated in quotes, so they are being imported into the file_intel lookup like the following. To extract the actual values from inside the quotes, I put together a regular expression under "Extracting regular expression" which works on regexr and regex101, but this regular expression does not appear to be applied, as the values in the lookup still look like the above. Here is what the CSV looks like. Is there a setting I am missing that is causing the regex not to be used?
Splunk Enterprise 8.0.4.1. There was a low disk space issue, and a Health Status alert was raised as expected. But now there is plenty of disk space, and the message says:

04-22-2022 15:05:05.257 +0000 WARN DiskMon - MinFreeSpace=5000. The diskspace remaining=221121 is less than 2 x minFreeSpace

221121 is definitely not less than 2 x 5000. Am I missing something, or is it a bug?
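For reference, a sketch of where the 5000 in the message comes from (this explains the threshold, not the stale warning itself): the value is configurable in server.conf.

# server.conf
[diskUsage]
# Splunk pauses indexing/search on a partition whose free space (in MB)
# drops below this threshold; the DiskMon warning fires relative to it
minFreeSpace = 5000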
Presuming there are limits (which may have changed over time), what are the current default limits for search exports from Splunk Web? Is it record count or search job size in bytes?
Hi All. We need to log only one event in Splunk for each Case_ID. However, a single case can have multiple problems and solutions entered by the user on our website, and based on the event in Splunk we need to publish some metrics on the dashboard. I need a suggestion for a better way to log the problem/solution combinations in a single event per case_id, so that the table format can be regenerated effectively within Splunk with a query and used to populate the dashboard metrics shown in the screenshots below. Please assist.
Hi, is Splunk Enterprise still free after the 60-day free trial? Thanks!
Hi, I'm having an issue with setting up my search head cluster environment. I have a standalone deployment server instance, a SHC deployer, and 3 search heads. Do I need KV store set up on all of the instances, or only on the SHC deployer or the deployment server? Thank you,
Hi, we have some applications running on Kubernetes. All the logs produced by the applications are sent to the standard output of the pod instance. From those logs, we would like to be able to extract some (based on a pattern, for example) and send them to a specific index; the other logs would go to a default index. Can we achieve this with the Splunk OTel Collector for Kubernetes? Do you have some hints on where I should start? Thank you.
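One starting point (worth verifying against the Helm chart version in use) is per-pod index routing via the splunk.com/index annotation, which the Splunk OTel Collector for Kubernetes honors; the pod and index names below are placeholders:

# Route all of this pod's logs to a specific index via annotation
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    splunk.com/index: "app_special_index"

This routes whole pods rather than individual log lines; splitting a single pod's output by pattern would additionally require processor configuration in the collector itself.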
Hi, I have created a timeline of URLs hit over a given session. Here is my chart:   and here is the respective XML code:   However, I need to add the time and dates on the top of the timeline as such: How can I do this? Many thanks, Patrick
Hello Splunk friends, I'm trying to send a report from Splunk that contains an attached report. The email subject needs to be last month's date, i.e. "My Report Name _ Mar_22", and the same for the email attachment filename. I currently have this working using hidden eval fields, but I've noticed that if my table returns no results, I also get no value for last month's date. My search looks like this:

Index=myIndex Process=myProcess earliest=-1mon@mon latest=now
| eval _date_one_month_ago = relative_time(now(), "-1mon@mon")
| eval _reporting_date = strftime(_date_one_month_ago, "%b_%Y")
| stats count by orgName

Any help would be really appreciated in populating the email subject and attachment name with last month's date, without depending on my table having data. Thank you
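One way to guarantee the date field survives an empty result set (a sketch, not a tested solution) is to compute it after the stats and append a placeholder row only when nothing matched, using the common appendpipe idiom:

Index=myIndex Process=myProcess earliest=-1mon@mon latest=now
| stats count by orgName
| appendpipe [ stats count | where count==0 ]
| eval _reporting_date = strftime(relative_time(now(), "-1mon@mon"), "%b_%Y")

The appendpipe subsearch emits a row only when the result set is empty, so _reporting_date is always present for the email subject token to reference.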
Hello colleagues, I have events with a unixTime field, but the _time field is not set correctly. How can I configure props.conf so that _time is taken from the unixTime field?
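Assuming unixTime appears in the raw event as something like unixTime=1650633600, a props.conf sketch; the sourcetype name and the TIME_PREFIX pattern are assumptions to adapt to the actual data:

# props.conf (sourcetype name is a placeholder)
[my_sourcetype]
# start looking for the timestamp right after the field name
TIME_PREFIX = unixTime[=:]\s*
# %s = epoch seconds
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 13

Note this applies at index time, so it affects newly indexed events only; already indexed events keep their original _time.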
I am not able to create multiple forms in a single dashboard. I want to create fieldsets in multiple rows of the dashboard.
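Simple XML permits only one top-level <fieldset>, but <input> elements can also be placed inside individual panels, which effectively puts input groups in multiple rows; a minimal sketch (index, token, and field names are placeholders):

<form>
  <fieldset submitButton="false">
    <input type="time" token="time_tok"/>
  </fieldset>
  <row>
    <panel>
      <!-- a second group of inputs, rendered inside this panel's row -->
      <input type="dropdown" token="host_tok">
        <search><query>| tstats count where index=* by host</query></search>
        <fieldForLabel>host</fieldForLabel>
        <fieldForValue>host</fieldForValue>
      </input>
      <table>
        <search><query>index=* host=$host_tok$ | stats count</query></search>
      </table>
    </panel>
  </row>
</form>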
In the Splunk documentation for the outlier command, it says: "The transform option truncates the outlying values to the threshold for outliers." I would like to understand how it calculates the threshold mentioned above. For the SPL below, the total_bytes value of 92000 is replaced with 000244. How does Splunk come up with the value of 244?

| makeresults
| fields - _time
| eval data="101,20220101,3;101,20220102,200;101,20220103,210;101,20220104,220;101,20220105,200;101,20220106,210;101,20220107,220;101,20220108,92000;101,20220109,200;101,20220110,3;"
| makemv delim=";" data
| mvexpand data
| eval splitted = split(data,",")
| eval day_hour_key=mvindex(splitted,0,0), date=mvindex(splitted,1,1), total_bytes=mvindex(splitted,2,2)
| fields day_hour_key, total_bytes, date
| outlier action=transform mark=true total_bytes
| rename total_bytes as transform_total_bytes
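For reference, the documented behavior of outlier is based on the inter-quartile range: with the default param=2.5, a value outside [Q1 - 2.5*IQR, Q3 + 2.5*IQR] is treated as an outlier, and action=transform truncates it to the nearest bound. Making the default explicit can help when experimenting with the threshold (last line of the same pipeline as above):

| outlier action=transform mark=true param=2.5 total_bytes

Lowering or raising param moves the truncation bound correspondingly.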
Hi Splunkers, I am struggling to verify the connection status between an indexer and the master node from the VM using Linux commands. Does someone know what command I can use to view the connection status between them?
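A couple of ways to check from the indexer's shell; the install path and credentials are placeholders:

# From the indexer: show the configured master_uri and clustering mode
$SPLUNK_HOME/bin/splunk list cluster-config

# Query the peer's own view of its cluster state over the management port
curl -k -u admin:changeme https://localhost:8089/services/cluster/slave/info

If the peer cannot reach the master, errors from the clustering components in $SPLUNK_HOME/var/log/splunk/splunkd.log are another place to look.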