All Posts

I am having an issue with the initial configuration to generate LDAP queries. In the GUI I have my settings as shown. Does anyone who is a Splunk admin have any idea how to make the LDAP queries work? I have tried configuring via the add-on for Active Directory, and my ldap.conf file is set as:

[ADS]
alternatedomain = abc.bcd.com
basedn = dc=ads,dc=abc,dc=com
binddn = CN=SplunkUser,OU=Service Accounts,OU=User Accounts,DC=ads,DC=abc,DC=com
port = 3269
server = x.x.x.x
ssl = 1

However, the test connection does not work.
Hello, I'm aiming to test event blacklists on my host system locally, but I'm uncertain about the correct location within the inputs.conf file to place these blacklists. Would it be in:

C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf
C:\Program Files\SplunkUniversalForwarder\etc\apps\myapp\local\inputs.conf

Thanks...
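For what it's worth, a hypothetical example of the stanza itself, assuming a Windows Security event log input (the event codes are placeholders, substitute your own):

[WinEventLog://Security]
disabled = 0
# Drop these event codes at collection time
blacklist = 4656,4658
# Or match on event code plus message content:
# blacklist1 = EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"

As for the location: both files you list are read, but etc/system/local takes precedence over app local directories in Splunk's configuration layering, so the app-level file is usually the easier one to manage.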
I have 3 standalone indexers and another 3 indexers in a cluster. We want to decommission the 3 standalones, but first we have to move the data off them onto the cluster.

I imagine the process would be something like: roll all hot buckets to warm, then rsync the warm and cold mounts/directories to a temp directory on one of the indexer cluster members (standalone 1 to cluster member 1, 2 to 2, 3 to 3). But once we rsync the data over, how do I get the new indexer to recognize the old imported data? Is it as simple as merging the old imported data into the appropriate index directory on the new indexer? For example, copying the old wineventlog index into the same-named directory on the new indexer? Would that work, or is there more to it? Is there some kind of Splunk-native command to move all data from indexer A to indexer B? Is there a better (or correct) way to make the new indexer recognize the imported data?

I appreciate any help! Thanks.
Hi @Mo.Abdelrazek, Thanks for the info. If you can share what you learned from Support as a reply here, that would be awesome.
@Ryan.Paredez I regenerated my OAuth token, and I have read the AppD documents, but I still see the error. I have already opened a case with Support.
Is that image public? I can click on it, but I just get redirected to the SharePoint main page, so presumably Splunk Cloud can't get the image either.
Look at the JMX MBean browser to confirm the metrics exist. If they do, look closely at the MBean name and compare it with the JMX definition. I know there are some complaints on StackOverflow about sometimes seeing the domain Tomcat and sometimes Catalina. If the AppD config expects Catalina but it's actually Tomcat, or vice versa, you won't see anything in the metric browser. You would need to change, or duplicate and change, the JMX config.
I've posted a number of solutions for this problem; see a post from yesterday that references some of those: https://community.splunk.com/t5/Splunk-Search/Multiple-time-searches/m-p/669128#M229514

Effectively you have a global search that sees your Datepkr token and runs a small search to calculate the relative dates. It needs addinfo, as that makes sure the tokens from the time picker are converted to epoch. Then in your main search you can do

search (earliest=$Datepkr.earliest$ latest=$Datepkr.latest$) OR (earliest=$my_other_token_earliest$ latest=$my_other_token_latest$)
...
| eval category=if(_time <= $my_other_token_latest$, "PREV", "CURRENT")
| eval _time=if(_time <= $my_other_token_latest$, _time + my_offset, _time)
...
| timechart bla by category

which searches both date ranges, sets the category based on which range each event falls in, and then shifts the PREV range's _time forward so the two lines are overlaid; my_offset is the amount of time between your two ranges.

This methodology works, so if you're struggling to get something working, post what you've got and we can help.
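As a rough sketch (untested), that global search can be a Simple XML form search reusing the token names above, with addinfo resolving the picker range to epoch before relative_time is applied:

<search id="prev_range">
  <query>| makeresults
| addinfo
| eval prev_earliest=relative_time(info_min_time, "-7d")
| eval prev_latest=relative_time(info_max_time, "-7d")</query>
  <earliest>$Datepkr.earliest$</earliest>
  <latest>$Datepkr.latest$</latest>
  <done>
    <set token="my_other_token_earliest">$result.prev_earliest$</set>
    <set token="my_other_token_latest">$result.prev_latest$</set>
  </done>
</search>

Because info_min_time and info_max_time are already epoch values, this works whether the picker produced relative times ("-15m") or absolute ones.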
See this article (I assume you're talking about Cloud): https://www.splunk.com/en_us/blog/platform/dynamic-data-data-retention-options-in-splunk-cloud.html?locale=en_us
The java agent doesn't always understand some thread handoffs or exit calls. There are examples in the documentation for how to use the AppD API to handle this. However, I would like to know whether it can be done in a vendor-neutral way. For example, with other APM agents it's sometimes possible to use OTel manual propagation and it "just works." Can this be done with the AppD java agent? Thanks.
Converting the time to a string is a peculiar way to do binning. I'd rather simply use the bin command with a proper set of parameters. If you want to display your time in a human-readable form, you can still use fieldformat.
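For example, a minimal sketch (the span, aggregation, and display format are arbitrary choices):

... | bin _time span=1h
| stats count by _time
| fieldformat _time = strftime(_time, "%Y-%m-%d %H:%M")

bin keeps _time numeric for aggregation, and fieldformat changes only how it is rendered, not the stored value.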
I have data visualized as a table:

name  value  time
App1  123    5s
App2  0      2s
App3  111    10s

I know that the drilldown option can be used to go to other pages and pass tokens to another dashboard. Currently I have set clicking a row to open the Value dashboard. However, is there any way I can drill down to multiple dashboards based on which column I clicked (e.g. clicking the value column goes to the Value dashboard, clicking the name column goes to the Name dashboard, etc.)?
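In Simple XML (assuming that's what the dashboard uses; the dashboard paths and token names below are made up), a table drilldown can branch on the clicked column with condition elements:

<drilldown>
  <condition field="name">
    <link target="_blank">/app/search/name_dashboard?form.name_tok=$click.value2$</link>
  </condition>
  <condition field="value">
    <link target="_blank">/app/search/value_dashboard?form.value_tok=$click.value2$</link>
  </condition>
</drilldown>

$click.value2$ carries the value of the clicked cell; columns without a matching condition simply do nothing.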
That's the right way to go, but it can be improved! You don't have to do two separate searches; you can search for

index=<INDEX> host=<HOST> timestamp=t1 OR timestamp=t2

or

index=<INDEX> host=<HOST> timestamp IN (t1,t2)

Also, while your solution with values(timestamp) is generally right, I prefer to classify the events with a numeric classifier (either 1 or 2) and sum them; that way you're doing slightly more performant operations. But your search is still OK.
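A minimal sketch of the classifier variant, assuming t1 and t2 are the two timestamp values and at most one event per dst per snapshot (so a sum of 1 means only t1, 2 means only t2, and 3 means both):

index=<INDEX> host=<HOST> timestamp IN (t1,t2)
| eval marker=if(timestamp=t1, 1, 2)
| stats sum(marker) as marker_sum by dst
| eval diff=case(marker_sum=1, "No longer present in latest snapshot", marker_sum=2, "New route in latest snapshot", true(), "Present in both")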
Splunk has "free" licenses and "trial" licenses (also free) with different capabilities.  Which do you have? Splunk forwarders typically send data to Splunk indexers on port 9997. Splunk can receive syslog data on port 514, but it's not recommended.  To set that up, go to Settings->Data inputs->TCP (or UDP, if you prefer) and click the green button.  Then fill in the form and click Save.
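If you'd rather do it in configuration files, a minimal sketch of the equivalent stanza on the receiving instance, assuming UDP on 514 (on *nix, ports below 1024 need elevated privileges):

inputs.conf:

[udp://514]
sourcetype = syslog
# Use the sender's IP address as the host field value
connection_host = ip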
Firewall logs need some purification for threat monitoring; below are a couple of events. From the events below, wherever action=Accept and service=23 occur along with protection_type=geo_protection, we need "protection_type=geo_protection" removed from the raw event in an index-time extraction.

Current:

2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513220|hostname=firewallhost|product=Firewall|action=Accept|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=000|time=1700513220|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|protection_type=geo_protection|proto=99|s_port=1234|service=23|src=111.11.1.111|src_country=Other
2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513221|hostname=firewallhost|product=Firewall|action=Accept|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=00|time=1700513221|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|protection_type=geo_protection|proto=99|s_port=1234|service=23|src=111.11.1.111|src_country=Other
2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513221|hostname=firewallhost|product=Firewall|action=Accept|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=00|time=1700513221|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|protection_type=geo_protection|proto=99|s_port=1234|service=23|src=111.11.1.111|src_country=Other
2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513221|hostname=firewallhost|product=Firewall|action=Denied|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=00|time=1700513221|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|protection_type=geo_protection|proto=99|s_port=1234|service=67|src=111.11.1.111|src_country=Other

Expected:

2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513220|hostname=firewallhost|product=Firewall|action=Accept|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=000|time=1700513220|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|---|proto=99|s_port=1234|service=23|src=111.11.1.111|src_country=Other
2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513221|hostname=firewallhost|product=Firewall|action=Accept|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=00|time=1700513221|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|---|proto=99|s_port=1234|service=23|src=111.11.1.111|src_country=Other
2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513221|hostname=firewallhost|product=Firewall|action=Accept|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=00|time=1700513221|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|---|proto=99|s_port=1234|service=23|src=111.11.1.111|src_country=Other
2023-11-20K00:12:00-05:00 111.111.11.111 time=1700513221|hostname=firewallhost|product=Firewall|action=Denied|ifdir=inbound|ifname=eth3-01|logid=xxxx|loguid={xxxx,xxxx,xxxx,xxxx}|origin=111.111.11.111|originsicname=PK\=originsicname,O\=xpljdkdk..xpl78kdk|sequencenum=00|time=1700513221|version=5|dst=111.11.1.111|dst_country=PL|inspection_information=Geo-location outbound enforcement|inspection_profile=Geo_settings_upgraded_from_FWPRMLP_Internet_v4|protection_type=geo_protection|proto=99|s_port=1234|service=67|src=111.11.1.111|src_country=Other

Thanks in advance!
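A hedged sketch of one way to do this at index time, via a raw-rewriting transform. The sourcetype name firewall:logs is a placeholder, and the regex assumes action=Accept always precedes protection_type, which precedes service=23, as in the samples above; this has to live on the indexers or heavy forwarder and only affects newly indexed events:

props.conf:

[firewall:logs]
TRANSFORMS-strip_geo = strip_geo_protection

transforms.conf:

# Rewrite _raw, replacing protection_type=geo_protection with ---
# only when the event also carries action=Accept and service=23
[strip_geo_protection]
REGEX = ^(.*\|action=Accept\|.*\|)protection_type=geo_protection(\|.*\|service=23\|.*)$
DEST_KEY = _raw
FORMAT = $1---$2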
Oh, that diff may just work for my immediate needs! I do want something a bit more visually pleasing, so I've been looking at multisearch as well, and was able to put this together:

| multisearch
    [ search index=<INDEX> host=<HOST> timestamp=<Timestamp 1> | fields timestamp dst ]
    [ search index=<INDEX> host=<HOST> timestamp=<Timestamp 2> | fields timestamp dst ]
| stats values(timestamp) as timestamp by dst
| where mvcount(timestamp) = 1
| eval diff=if(timestamp=<Timestamp 1>, "No longer present in latest snapshot", "New route in latest snapshot")
| stats values(dst) by diff

I think I might find a way to use both somehow in a dashboard, as long as I can keep the search from getting too complex.
@meetmshah I am trying to implement this solution, but it doesn't work for me. When I enter {{src_user}} in the To field, which contains the user's email address, I get the error message "There was an error saving the correlation search: One of the email addresses in 'action.email.to' is invalid".
Hi all! What I thought was going to be a fairly simple panel on a dashboard has been giving me fits. We have a global time picker (Datepkr) for our dashboard, and based on other picker selections from that dashboard we would like to display a simple count of events in a timechart for the time window selected by the date picker, and for the same time window the week prior. So if someone selected events for the past 4 hours, we would get a line chart of events for the past four hours with a second line for the same four hours exactly one week prior. Same deal if someone selected events in the time range Wednesday, Oct-18 16:00 through Thursday, Oct-19 12:00: they would get events for that range plus a second line for events Wednesday, Oct-11 16:00 through Thursday, Oct-12 12:00. I think it would get a bit weird as you start selecting windows of time larger than one week, but that's OK; for the most part people will be using times less than one week.

I've run into two hurdles so far: one is how to get the second "-7d" time range to be created from the time picker, and the other, once the two searches can be made, is how to effectively merge the two together. I saw a few posts mentioning using makeresults or addinfo and info_min_time/info_max_time, but these don't seem to be resolving correctly (the way I was using them, at least), and setting the last week's time in the body of the query seems wrong, or at least less useful than having it resolved somewhere that it could be used on other panels. I tried to add two new tokens to set the past window, but because the time picker can produce times in varying formats this didn't seem to work. I tried different ways of converting to epoch time and back but didn't get anywhere with that either.

Timepicker config including the eval:

<input type="time" token="Datepkr">
  <label>Time Range Picker</label>
  <default>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </default>
  <change>
    <eval token="date_last_week.earliest">relative_time($Datepkr.earliest$, "-7d")</eval>
    <eval token="date_last_week.latest">relative_time($Datepkr.latest$, "-7d")</eval>
  </change>
</input>

I haven't been able to get as far as a search that produces the right results, but assuming I can, I'm not sure how to overlay the two time ranges on top of each other, since they are different ranges. Wouldn't they display end to end? I'd like them to overlay. I saw the timewrap function, but given that timewrap requires a time field as well as a time-span for the chart, I don't think that would mesh with the time picker. Maybe something like:

Search for stuff from -7d
| eval ReportKey="Last_Week"
| modify the "_time" field
| append [subsearch for stuff today | eval ReportKey="Today"]
| timechart it based on ReportKey

Thanks in advance for any help!
I have created a dashboard in Dashboard Studio with a table visualization; see my code below. The "Time" column effectively bins my results to one minute. When I update my time picker to, say, the last 7 days, it still shows the time binned to one minute. How can I dynamically change the bin to best fit my time picker selection?

| search cat IN ($t_endpoint$) AND Car IN ($t_car$)
| eval Time=strftime(_time,"%Y-%m-%d-%I:%M %p")
| stats limit=15 sum(Numbercat) as Numbercat, avg(catTime) as AvgcatSecs by Time, Car, cat
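One rough sketch: instead of baking a one-minute string into Time with strftime, let bin pick a span that fits the picker's range by capping the number of buckets, and keep the human-readable formatting separate with fieldformat (bins=100 is an arbitrary cap; bin chooses a clean span at or below it):

| search cat IN ($t_endpoint$) AND Car IN ($t_car$)
| bin _time bins=100
| stats sum(Numbercat) as Numbercat, avg(catTime) as AvgcatSecs by _time, Car, cat
| fieldformat _time = strftime(_time, "%Y-%m-%d %I:%M %p")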
Hello @Hamed.Khosravi, Your post was flagged as spam, I just released it. Please review your post and make sure there is no sensitive information in there. You can edit your post to remove that data if you need to.  Let's see if the Community can jump in here and help out.