All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Is there a delay in the Splunk API server 'seeing' events that are already indexed? I use the Splunk API to query logs for some test cases. I can submit a job to the API server (`POST https://<SERVER>:8089/services/search/jobs`). That works fine. But intermittently, the search job returns no results (`GET https://<SERVER>:8089/services/search/jobs/<JOB_ID>/results` returns a 204/No Content HTTP header and no HTTP body). I checked whether there was an indexing delay using the query below. Apparently there was not: the relevant logs were ingested and indexed well in time. It's just the Splunk API server that intermittently returns no results.

<SPLUNK QUERY> | eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")

Any pointers on how I can dig into this further? I'm just a dev, not a Splunk admin, so guidelines on what to do next are much appreciated.
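A 204 from the /results endpoint is also what the REST API returns when the job has not finished yet, so one thing worth ruling out is the client fetching results before the job's dispatchState reaches DONE. As a minimal diagnostic sketch from within Splunk itself, the rest command can show the state of recent jobs:

| rest /services/search/jobs splunk_server=local
| table sid, dispatchState, isDone, doneProgress, eventCount, resultCount

If isDone is 0 at the moment the client calls /results, the client-side fix is to poll GET /services/search/jobs/<JOB_ID> until dispatchState is DONE before requesting the results.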
Hi all - this one is hurting my brain. I need to pull two distinct numbers from my events: one with a total count of assets, and one with a total count of assets that contain a vulnerability. What I think it should look like is not working:     | (stats dc(AssetNames) AS TotalExternalAssets, (dc(Asset_Names) AS TotalExposedAssets | where vulnerability!="missing"))       How do I get these two counts out of my events?
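A minimal sketch of one way to get both counts in a single stats pass, assuming AssetNames and vulnerability are the actual field names (the attempt above mixes AssetNames and Asset_Names, which is worth double-checking): an eval inside dc() restricts the second distinct count to vulnerable assets only:

| stats dc(AssetNames) AS TotalExternalAssets,
        dc(eval(if(vulnerability!="missing", AssetNames, null()))) AS TotalExposedAssets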
This is not a question; rather, I am sharing something that I discovered on a Splunk OnDemand support call. I thought I was a bit of a Splunk pro, but it just goes to show there's always something to learn, and this one was so simple it's a little embarrassing.

Imagine you have a very large lookup with 5 million+ rows (in my case a KV store containing an extract from the Maxmind DB with some other internal references added). If you want to do a CIDR match on this, you set up a lookup definition with Match Type "CIDR(network)". Now run your query (IP address obfuscated in the example):

| makeresults | eval ip4 = "127.0.0.1" | lookup maxmind network AS ip4

For me on Splunk Cloud, this takes around 50 seconds, and I contacted Splunk Support for some assistance in making this viable (50 seconds is just way too high). I already understood that the issue is that the cidrmatch has to run on every single row. I added a pre-filter to my lookup definition and proved that with the pre-filter it ran much, much faster, but obviously that limited its use to just the filtered rows.

I tried messing around with inputlookup, but couldn't get it any better:

| inputlookup maxmind | where country_name="Japan" city_name="Osaka"

This still took the same ~50 seconds to run. Of course, if I had read the documentation properly, I would have seen that "where" is actually an argument of the inputlookup command. Changing this to:

| inputlookup maxmind where country_name="Japan" city_name="Osaka"

made all the difference: this now runs in 5-7 seconds, as the WHERE clause now runs the same as if you had added the pre-filter to the lookup definition. To use this in a search to enrich other data, you can use:

| makeresults | eval ip4 = "127.0.0.1" | appendcols [| inputlookup maxmind where country_name="Japan" city_name="Osaka"] | where cidrmatch(network, ip4)

Obviously, the tighter you can get your WHERE clause, the faster this runs. You can also use accelerated fields in your lookup (if using a KV store) to further improve performance; this will depend entirely on your data and how you can filter it down to the smallest possible data set before continuing. For me, using the same 5-million-row KV store and Maxmind data as in my example:

| makeresults | eval ip4 = "127.0.0.1" | appendcols [| inputlookup maxmind where country_name="Japan" city_name="Osaka" postal_code="541-0051" network_cidr="127.0.*"] | where cidrmatch(network, ip4)

runs in ~0.179 seconds [using the actual IP address, not the fake one above]. Your mileage may vary, but I hope this helps someone else trying to figure this out. I haven't tried the same with a CSV lookup, but I imagine it would be very similar.
We had Splunk working on one domain, then joined a different domain, and then this happened to our search head cluster:
When starting Splunk, the Splunk web interface doesn't open and goes to a blank page, but I can go to 127.0.0.1:8000. Attached is a snippet of the issue.
Hi, I am new. I have 2 Excel documents, one containing firewall logs and the other containing syslogs. How would I combine the data in Splunk so I can view them on one page? I want to compare when the firewall was used (and its destination IP) to when FTP was used (from the syslogs). Thank you
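A minimal sketch of one approach, assuming both Excel documents are exported to CSV and uploaded with the hypothetical sourcetypes firewall_csv and syslog_csv, and that the firewall events carry a dest_ip field: search both sourcetypes at once, label each event, and chart them together so the two timelines appear on one page:

(sourcetype=firewall_csv) OR (sourcetype=syslog_csv)
| eval activity=case(sourcetype=="firewall_csv", "firewall to ".dest_ip,
                     sourcetype=="syslog_csv" AND searchmatch("ftp"), "ftp")
| where isnotnull(activity)
| timechart span=1h count by activity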
I have tried multiple methods to get a complex URL to work as a redirect. Unfortunately, there are a lot of different base URLs in my actual data, so using multiple conditions is probably not a good option. It seems as though it wants no "/" in the URL field, only parameters.

<dashboard>
  <label>URL Test</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults count=3 | streamstats count | eval url = case(count=1,"https://bing.com",count=2,"https://google.com",count=3,"https://nvd.nist.gov/vuln/search/results?form_type=Advanced&amp;results_type=overview&amp;search_type=all&amp;cpe_vendor=cpe%3A%2F%3Aapache&amp;cpe_product=cpe%3A%2F%3Aapache%3Alog4net&amp;cpe_version=cpe%3A%2F%3Aapache%3Alog4net%3A1.2.9.0") | eval site = case(count=1,"bing",count=2,"google",count=3,"nist") | table site, url</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <fields>["site"]</fields>
        <drilldown>
          <eval token="u">replace($row.url$, "https://", "")</eval>
          <link target="_blank">
            <![CDATA[ https://$u$ ]]>
          </link>
        </drilldown>
      </table>
    </panel>
  </row>
</dashboard>
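If the problem is that the slashes in the token are being URL-encoded when the link is built, a token filter may be all that's needed. A minimal sketch, assuming the url field already holds the full target: the |n filter tells Simple XML not to escape the token value, so the URL should pass through unchanged without the strip-and-reattach workaround:

<drilldown>
  <link target="_blank">$row.url|n$</link>
</drilldown>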
Hello, how do I effectively whitelist events like excessive failed logins and abnormal new processes? These are known, non-malicious issues in our network that generate a lot of hits that do not amount to anything upon extensive investigation. Thanks in advance.
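A minimal sketch of one common pattern, assuming a hypothetical lookup file known_benign_hosts.csv with a host column listing the systems that generate the known-noisy activity: filter them out of the detection search with a subsearch, so the rest of the alert logic stays untouched and the lookup can be maintained without editing the search:

<your detection search>
| search NOT [| inputlookup known_benign_hosts.csv | fields host]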
Is it possible, or advised, to add a custom log to the Splunk _internal index?  What are the formatting rules if a log is added to the default location?  
Hi all, our JSON payload looks like the one shown below. The msg.details array can have any number of key/value pairs, in any order.

{
  "appName": "TestApp",
  "eventType": "Response",
  "msg": {
    "transId": "Trans1234",
    "status": "Success",
    "client": "clientXyz",
    "responseTime": 1650,
    "details": [
      { "keyName": "returnUrl", "keyValue": "https://abc.com/onlineshop?prod=112&cat=1349" },
      { "keyName": "customer", "keyValue": "xyz" }
    ],
    "url": "/v1/test"
  }
}

I want to filter events using a partial wildcard keyValue for a given keyName in the msg.details array. Your help is appreciated.

index=* appName="TestApp" msg.url="/v1/test" | spath | search msg.details{}.keyName=returnUrl AND msg.details{}.keyValue!="*abc.com*"

The search may include multiple keyValue filters on the array, like this:

index=* appName="TestApp" msg.url="/v1/test" | spath | search (msg.details{}.keyName=customer AND msg.details{}.keyValue!="xyz") AND (msg.details{}.keyName=returnUrl AND msg.details{}.keyValue!="*abc.com*")

Thanks.
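A minimal sketch of one way to pair each keyName with its own keyValue (the multivalue filters above compare the two fields independently, so any keyName can match against any keyValue): mvfind locates the position of the wanted key and mvindex pulls the value at that same position, after which ordinary where conditions apply. Field paths are taken from the post; the regexes are assumptions to adapt:

index=* appName="TestApp" msg.url="/v1/test"
| spath
| eval returnUrl=mvindex('msg.details{}.keyValue', mvfind('msg.details{}.keyName', "^returnUrl$"))
| eval customer=mvindex('msg.details{}.keyValue', mvfind('msg.details{}.keyName', "^customer$"))
| where NOT match(returnUrl, "abc\.com") AND customer!="xyz"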
Hi, I'm interested in knowing whether federated search results from Splunk Cloud could be stored in a summary index located in an on-premises Enterprise instance/cluster. The thought is that this could allow offloading some high-usage dashboards to on-premises infrastructure and allow for data correlation with on-premises data. Thank you
I have installed the Custom Visualization - donut app version 1.0.3 on Splunk Enterprise version 8.2.9. It seems to work well out of the box; however, I have a couple of problems.
1. The legend text disappears in dark mode. I have attempted to resolve this using CSS, but so far I have been unsuccessful.
2. I have tried to center the donut viz in the panel, but have not been able to do so; by default it is always on the left.
3. I have been unable to find a list of the viz options for this; can you supply one, please?
Thanks
Hi, I am trying to get a list of workstations trying to connect to malicious DNS using Palo Alto and SYSMON logs. From the Palo Alto logs I get the list of malicious domains detected and blocked with the following query, and I do a join, looking up a DNS request entry in the sysmon log for each malicious domain. The query:

index="pan_logs" dns sourcetype="pan:threat" dest_zone=External dest_port=53 vendor_action=sinkhole (action=dropped OR action=blocked) | dedup _time,file_name | table _time file_name | rename file_name as QueryName | join QueryName [ search index=sysmon EventID=22 | eval host_querying=Computer | table QueryName, host_querying] | table _time QueryName host_querying

My issue comes when there are several computers accessing the same malicious domain: the first occurrence found in the sysmon index is assigned to all the requests. I would like to join based on the domain and a time limit between correlated events. Is it possible to do this?
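A minimal sketch of a join-free alternative, using the field names from the query above and a hypothetical 5-minute correlation window: search both indexes at once, normalize the domain into one field, bucket by time, and aggregate, which keeps every querying host in the window instead of only the first match:

(index="pan_logs" sourcetype="pan:threat" dest_zone=External dest_port=53 vendor_action=sinkhole (action=dropped OR action=blocked)) OR (index=sysmon EventID=22)
| eval QueryName=coalesce(file_name, QueryName)
| eval host_querying=if(index=="sysmon", Computer, null())
| bin _time span=5m
| stats count(eval(index=="pan_logs")) AS blocked, values(host_querying) AS hosts_querying by _time, QueryName
| where blocked > 0 AND isnotnull(hosts_querying)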
Is there a setting that stops the "Automatic lifetime extensions" (https://docs.splunk.com/Documentation/Splunk/9.0.3/Search/Extendjoblifetimes) of scheduled searches? We have some dashboards with scheduled searches, refreshed every so often, that are mainly used during business hours, so we scheduled them for business hours only. Sometimes someone needs to look at the dashboard at night. To be sure they see a recent result, we changed dispatch.ttl to 600 (normally it runs every 5 minutes). So if the last scheduled run was at 6 pm, the results should be gone at 6:10 pm. If someone opens the dashboard at 8 pm, there aren't any results, and the search should run again. Now the real problem: some people aren't closing the dashboard, so the last run (at 6 pm) keeps getting extended! If someone else then needs to take a look at the dashboard, it displays that last run. So, is there a setting to force the results to be deleted after x period?
The extension setup information ("here" and "here") references additional notes for the APM Machine Agent installation scenario, but the link is broken. If the page still exists, how do I install extensions for APM Machine Agents, and where are the instructions?
So far I can get the hosts and forwarder version, but I am unable to get the indexes the forwarders send to:

index="_internal" source="*metrics.lo*" group=tcpin_connections fwdType=uf | dedup hostname | table hostname,version,os

How can I tie the above to the indexes that the hosts send data to?
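A minimal sketch of one way to combine the two, assuming the forwarder's hostname in metrics.log matches the indexed host field (worth verifying, since they can differ): a tstats subsearch lists which indexes each host appears in, joined onto the forwarder inventory:

index="_internal" source="*metrics.log*" group=tcpin_connections fwdType=uf
| stats latest(version) AS version, latest(os) AS os by hostname
| join type=left hostname
    [| tstats values(index) AS indexes where index=* by host
     | rename host AS hostname]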
Hi, I have a use case where I want to find out how many download API calls failed for a given document, and how many of the failed ones were successful on a subsequent call. I have no clue how to search for this in Splunk. Right now I am finding the failed ones using the query below:

index=ty_ss "download/docIds?=" "500" | rex "docId=(?<docId>.*)" | eval event_time = strftime(_time, "%Y-%m-%d %H:%M:%S") | table docId, event_time
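A minimal sketch of one way to answer both questions in a single pass, assuming (hypothetically) that successful calls log a "200" in the raw event text the same way failed ones log a "500": classify each call, then compare the first failure time with the last success time per document:

index=ty_ss "download/docIds?=" ("500" OR "200")
| rex "docId=(?<docId>\S+)"
| eval outcome=case(searchmatch("500"), "failed", searchmatch("200"), "success")
| stats count(eval(outcome=="failed")) AS failures,
        min(eval(if(outcome=="failed", _time, null()))) AS first_failure,
        max(eval(if(outcome=="success", _time, null()))) AS last_success by docId
| where failures > 0
| eval recovered=if(isnotnull(last_success) AND last_success >= first_failure, "yes", "no")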
I am trying to monitor drops in events per index. What is the best way to get a baseline and detect deviation in the volume? I am more interested in drops in events, not increases.
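A minimal sketch of one baseline approach, with the hourly span and the two-standard-deviation threshold as assumptions to tune: count events per index per hour over the search window, compute each index's average and standard deviation, and flag indexes whose most recent hour falls well below that baseline:

| tstats count where index=* by index, _time span=1h
| stats avg(count) AS avg_count, stdev(count) AS stdev_count, latest(count) AS latest_count by index
| where latest_count < avg_count - 2 * stdev_count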
Data stopped coming from vCenter to Splunk, and I'm not sure which DCN (data collection node) is configured to collect from that vCenter. Could you please help with troubleshooting: how do I check for the error that caused data to stop coming, and how can I find out which DCN is being used to collect the data from vCenter?
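A minimal sketch of a first triage step, with the sourcetype wildcard as an assumption (adjust it to whatever your VMware data actually uses): find when each VMware sourcetype last arrived and which host sent it; the sending host is usually the DCN:

| tstats latest(_time) AS last_seen where index=* sourcetype=vmware:* by index, sourcetype, host
| eval last_seen=strftime(last_seen, "%F %T")
| sort - last_seen

For the error itself, the add-on's scheduler/worker logs on the DCN land in index=_internal; if your deployment uses the classic Splunk Add-on for VMware, searching there for its hydra log sources is a reasonable next step, though the exact source names depend on the add-on version.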
We're migrating Splunk from our own AWS environment to Splunk SaaS. I am wondering if someone has the steps and a tentative effort estimate, as that would help me in creating a project plan. Thank you