All Posts



I am working on upgrading a heavy forwarder instance that is running the out-of-support version 7.3.3. In order to upgrade it to 9.0.1, is there another version it must be upgraded to first, before bringing it to 9.0.1? I searched for the upgrade path with no luck. Thanks.
Hello, we need to patch the OS of our Splunk Enterprise cluster, which is distributed across two sites, A and B. We will start the activity on site A, which contains one Deployer, two Search Heads, one Manager Node, three Indexers, and three Heavy Forwarders. Site B contains one Search Head, three Indexers, and one Heavy Forwarder, and will be updated later. Considering that the OS patching will require a restart of the nodes, can you please tell me the Splunk best practice for restarting the Splunk nodes? I would start with the SH nodes, then the Indexers, Deployer, MN, and HFs, all one by one. Do I have to enable maintenance mode on each node, restart the node, and disable maintenance mode, or is it sufficient to stop Splunk on each node and restart the machine? Thank you, Andrea
Worked, thanks
Hello Team, I have a weird issue that I am struggling to troubleshoot. A month ago, I realized that my WinEventLog logs were consuming too much of my license, so I decided to index them in the XmlWinEventLog format. To do this, I simply modified the inputs.conf file of my Universal Forwarder. I changed from this configuration:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\sgroupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\sgroupPolicyContainer)"
renderXml = false
sourcetype = WinEventLog
index = wineventlog

To this configuration:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\sgroupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\sgroupPolicyContainer)"
renderXml = true
sourcetype = XmlWinEventLog
index = wineventlog

Then I started receiving events and my license usage dropped, which made me happy. However, on closer observation, I realized that I was no longer receiving all the events as before. Indeed, the event frequency of the XmlWinEventLog logs now looks random. You can observe this on these timelines (screenshot omitted) and in the metrics (screenshot omitted). On the other hand, with the WinEventLog format, I have no issues (screenshot omitted). I tried reinstalling the UF, there are no interesting errors in splunkd.log, and I am out of ideas for troubleshooting. Thank you for your help.
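The blacklist regexes in the stanzas above use a negative lookahead, which is easy to misread. A minimal Python sketch of which messages get blacklisted (the sample messages are invented; Python's re behaves like the PCRE Splunk uses for this construct):

```python
import re

# Same pattern as blacklist1/blacklist2 above; a *match* means the
# event is blacklisted (dropped), a non-match means it is kept.
pattern = re.compile(r'Object Type:(?!\sgroupPolicyContainer)')

gpo_msg = "... Object Type: groupPolicyContainer ..."  # hypothetical sample
other_msg = "... Object Type: user ..."                # hypothetical sample

print(bool(pattern.search(gpo_msg)))    # False -> kept (GPO changes indexed)
print(bool(pattern.search(other_msg)))  # True  -> dropped by the blacklist
```

Since both stanzas carry the same blacklists, the regex alone should not explain the missing XmlWinEventLog events; note, though, that the Message text differs between the classic and XML renderings, so it may be worth checking the blacklists against a raw XML event.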
@bowesmana Great, that works. This is what I have done.

Parameters for dashboard A:
earliest = $form.t_time.earliest$
latest = $form.t_time.latest$

Then on dashboard B my time picker should refer to dashboard A's tokens, see below (leaving out the token name). You can adjust your default accordingly; if you want the default to be your token, use $earliest$ $latest$. But by setting your default to 15 minutes, you will not get a "missing earliest" error when you go directly to dashboard B.

<input type="time">
  <label></label>
  <default>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </default>
</input>
Hello, can you please tell me what happens to email alerts if the SMTP server used for email delivery is temporarily offline? Is there a buffer where alerts are saved and then sent once the SMTP server becomes available again? Is there a link to Splunk documentation about that? Thank you, Andrea
There's also this method to get a list of data sources:
| tstats count where index=* by source
Hi @gcusello, this works fine; you can see Stephen_Sorkin's answer at https://community.splunk.com/t5/Getting-Data-In/Summary-indexing-on-a-search-head/m-p/34175

A Splunk expert told me I could test this: https://community.splunk.com/t5/Getting-Data-In/Search-time-Mask/td-p/14363

[mysourcetype]
EXTRACT-out = (?s)^(?:\<\d+\>)?(?<altraw>.*)
FIELDALIAS-raw = altraw as _raw

What do you think? Thanks for your help!
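For what it's worth, the EXTRACT regex above can be checked outside Splunk. A minimal Python sketch (the sample syslog line is invented; Python needs (?P<name>...) where Splunk's PCRE accepts (?<name>...)):

```python
import re

# Same pattern as EXTRACT-out above, with Python's named-group syntax
pattern = re.compile(r'(?s)^(?:\<\d+\>)?(?P<altraw>.*)')

raw = "<134>Oct 10 12:00:00 host app: user logged in"  # hypothetical event
altraw = pattern.match(raw).group('altraw')
print(altraw)  # "Oct 10 12:00:00 host app: user logged in"
```

Keep in mind the EXTRACT/FIELDALIAS approach is search-time only: the original _raw stays on disk unchanged, so it hides the priority header from searches but does not reduce license or storage.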
Hi @AL3Z, you could use one of these searches:

list of endpoints:
| tstats count WHERE index=* BY host

list of data sources:
| tstats count WHERE index=* BY sourcetype

You can also get both pieces of information in one search:
| tstats values(sourcetype) AS sourcetype count WHERE index=* BY host

Ciao. Giuseppe
Hi @roopeshetty, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @splunkreal, I suppose that you have your summary indexes on the Indexers, not on the Search Heads! If not, review this setup: you can get the correct result by centralizing the summary indexes on the Indexers. So you already have all that you need for anonymizing data. Let me know if I can help you more; otherwise, please accept one answer for the other people of the Community. Ciao. Giuseppe P.S.: Karma Points are appreciated
Hi @gcusello we already use summary indexing with local index on that particular search head. Thanks.
Hi, I am trying to find out how many data sources and endpoints we have integrated into Splunk. Can anyone please provide me with a query to find this?
I said "config files" followed by an actual config file path in my first post. But for clarification: I check it with `btool` and `show config`. I am also aware that the config files are not automatically active if I change them on disk; I do a restart (not a debug refresh) if I change anything on disk, and I also keep track of the restarts. The SH is also not part of a SH cluster, which could otherwise be a source of confusion, and I don't use any other remote management agents that could change the files. About the two commands you kindly provided: neither `splunk btool` nor `splunk show config` has the index definitions for index_a or index_b on the SH, only on the IX. Authorization is set only for index_a in etc/system/local/authorization.conf for a specific group. Please note that I can't just post the outputs of the commands because they contain some confidential information.
I don't know if this is the right place to ask, but I'm currently looking for three members for BOTS v7, coming on 7th December in Tokyo. If anyone is interested, please reply to this post; or, if anyone knows the right place for me to look for members, I'd greatly appreciate it if you'd let me know!
Yes, I remember encountering the same issue. You may want to try a different browser to see if it works. Otherwise, if you hover your mouse over the "here" link, you may notice (depending on your browser and OS) that it is trying to send an email request to the AppD Education team, with a subject requesting the lab. You can also inspect the link with your browser's developer tools to find out the details. Anyway, it's likely your machine/laptop is unable to determine the right app to launch a new email, hence nothing happens. If you're using, say, Outlook, then make sure it is the default email app in your OS.
Figured out the issue. It had to do with the permissions of the API keys. I was so focused on the event service permissions that I never stopped to realise the query needed permission to access the logs.
CrowdStrike Falcon FileVantage Technical Add-On: https://splunkbase.splunk.com/app/7090

When the API returns more than one event, the result in Splunk is a single event with all the JSON objects merged together, which makes Splunk's JSON parsing fail. Looking at the Python code, this seems to be intentional, given the '\n'.join here:

~/etc/apps/TA_crowdstrike_falcon_filevantage/bin/TA_crowdstrike_falcon_filevantage_rh_crowdstrike_filevantage_json.py

try:
    helper.log_info(f"{log_label}: Preparing to send: {len(event_data)} FileVantage events to Splunk index: {data_index}")
    # --> the join that produces newline-delimited JSON
    events = '\n'.join(json.dumps(line) for line in event_data)
    filevantage_data = helper.new_event(source=helper.get_input_type(),
                                        index=helper.get_output_index(),
                                        sourcetype=helper.get_sourcetype(),
                                        data=events)
    ew.write_event(filevantage_data)
    helper.log_info(f"{log_label}: Data for {len(event_data)} events from FileVantage successfully pushed to Splunk index: {data_index}")

So it is important to write a proper props.conf that splits the events back apart with a LINE_BREAKER (note that LINE_BREAKER must contain a capturing group, so a bare \n does not work):

$SPLUNK_HOME/etc/apps/TA_crowdstrike_falcon_filevantage/local/props.conf

[crowdstrike:filevantage:json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
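To see why splitting on newlines recovers the individual events, here is a small Python sketch of the add-on's join (the payloads are invented stand-ins for event_data):

```python
import json

# Invented stand-ins for the FileVantage API results (event_data)
event_data = [{"id": 1, "action": "modify"}, {"id": 2, "action": "delete"}]

# The same join the add-on performs: newline-delimited JSON in one string
events = '\n'.join(json.dumps(line) for line in event_data)
print(events)

# Splitting on newlines (what a newline LINE_BREAKER does at index time)
# yields one well-formed JSON object per event again
split_back = [json.loads(chunk) for chunk in events.split('\n')]
assert split_back == event_data
```

Without the line breaking, Splunk ingests the whole joined string as one event, and a JSON parser sees two concatenated objects rather than one document, which is exactly the parse failure described above.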
Hello, I wonder if there are plans to extend the MITRE ATT&CK Framework coverage for ICS? How could someone build upon the features this SSE brings, to add additional framework elements? Is there any step-by-step guide that could be shared? Thanks, Mihaly
I have a saved search with 'n' results, and I need to set up an alert email for those results by creating an alert. If I use |map "savedsearch", the result is "no events found", but there are events in the results of the saved search. Please help me with this.