All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello everyone. I'm trying to find the most efficient way to filter results for a list of values that may match within two (or more) distinct fields. Say, a list of IP addresses that can match either the source or destination field. I'm almost certain this question has been answered in the past, but I couldn't find the right wording to find the answer (am braindead atm), so here I am.

To help filter some potential answers to my request, I do know that I can do something like:

index=my_index ...etc... (field1 IN (value1, value2, value3, value4, value5, ...) OR field2 IN (value1, value2, value3, value4, value5, ...))

However, what I am attempting to do is make this query more "efficient", or perhaps just less of an eyesore. Given a list of 10+ values to filter for, it's easy to see how this query can get out of hand, at least visually. Here is an example query I intuitively tried that should help illustrate what I'm looking for:

index=my_index ...etc... (field1 OR field2 IN (value1, value2, value3, value4, value5, ...))

Splunk pros, please help: what am I overlooking/overthinking? Or is my first example the best (or most "efficient") way to go about this?

Thanks so much!
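One way to keep the value list in a single place is to put it in a lookup file and expand it into both fields via subsearches. This is only a sketch: the lookup name `watch_ips.csv` and its `ip` column are hypothetical, and each subsearch expands into an OR of `fieldN="value"` terms:

```
index=my_index
    [ | inputlookup watch_ips.csv | rename ip AS field1 | fields field1 ]
    OR
    [ | inputlookup watch_ips.csv | rename ip AS field2 | fields field2 ]
```

The list then lives in one CSV, so adding a value no longer means editing the query in two places.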
So the goal is to have a drilldown perform an eval which sets a token with the "lookup-dataset" value for another panel to use when performing the lookup. The lookup documentation states the syntax is:

| lookup <lookup-dataset> ...

The eval includes a case() function containing 3 match() calls; the result of this should be set into a token.

<drilldown> <unset token="lookup_token"></unset> <eval token="lookup_token">case(match("$row.field1$","abc"),"lookup1",match("$row.field1$","def"),"lookup2",match("$row.field1$","xyz"),"lookup1")</eval> </drilldown>

And another panel does:

<search depends="$lookup_token$" base="the_base_search"> <query>| lookup $lookup_token$ lookup_value as field_value</query> </search>

But the eval doesn't appear to be re-evaluated on clicking a new row within the first panel; checking the search jobs shows it only ever seems to get the first eval value. Is there something I am missing here?
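One thing worth checking: case() without a catch-all branch returns null when no condition matches, and an eval that returns null leaves the token unset rather than updated. A hedged sketch adding a default branch (the "lookup_default" name is hypothetical, just to make the fall-through visible):

```xml
<drilldown>
  <unset token="lookup_token"></unset>
  <eval token="lookup_token">case(match("$row.field1$","abc"),"lookup1", match("$row.field1$","def"),"lookup2", match("$row.field1$","xyz"),"lookup1", 1==1,"lookup_default")</eval>
</drilldown>
```

If the token only ever holds the first value, it may be that later clicks hit the null branch and never overwrite it.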
In ES 6.6.x and higher, what is the meaning of "Parse Domain from URL" under the Global Settings of Threat Intelligence Management? Does it try to parse the domain from the URLs that are the IOCs/threat artifacts, thus creating more domain IOCs, or is it trying to parse the logs (e.g. Web.url, where the events are) to get the domain? I know that in the older version the "Threat Gen" searches would search for domain IOCs in the Web.url field, but I don't think the new version does that anymore.
I am getting an error for this search:  index=_internal earliest="26/02/2022:00:00:00" latest=now()
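For reference, Splunk's default timestamp format for the earliest/latest time modifiers is %m/%d/%Y:%H:%M:%S (month first), and `now` is written without parentheses when used as a time modifier (now() is an eval function). The query above with those two fixes applied would look like:

```
index=_internal earliest="02/26/2022:00:00:00" latest=now
```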
Hi everyone, I have a Splunk Enterprise standalone instance. It is running on Ubuntu Server 14.04.6 LTS. I recently upgraded from 6.5 to 7.2 and from there to 8.1.0. There are a few custom apps installed on this Splunk instance. About a month ago, I realized that searches against the _audit index weren't returning any results. The /opt/splunk/var/log/splunk/audit.log file is receiving data, and permissions are ok. It seems like the data is not being monitored and indexed into the _audit index. I have already checked and compared backup files to find the missing link here, but no luck, and I can't find any significant error in the splunkd.log file. Please, any suggestions?
I want to pick up logs from the same directory that have *.out and *.log extensions. Is there a way to create one monitor statement with a whitelist that would pick up *.log or *.out, or can I use a selection at the end of the monitor path like http://xxx/*.log/*.out?
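A single monitor stanza with a whitelist can do this, since whitelist is a regular expression matched against the full path. A sketch (the directory path is hypothetical):

```
[monitor:///var/log/myapp]
whitelist = \.(log|out)$
```

Only files ending in .log or .out under that directory would be picked up.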
Hello, are there any queries we can use to find the total number of events, total size/volume (in GB) of data, and frequency of data coming into Splunk, by index and sourcetype? Any help will be highly appreciated, thank you!
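Two starting-point sketches. Event counts by index and sourcetype can come from tstats, and ingest volume from the license usage log (which lives on the license master; type=Usage events carry the index as `idx`, sourcetype as `st`, and bytes as `b`):

```
| tstats count WHERE index=* BY index, sourcetype
```

```
index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY idx, st
| eval GB = round(bytes/1024/1024/1024, 2)
```

Adding `_time` to the BY clause of either search (via timechart or `bin _time`) would show how frequently data arrives.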
02-24-2022 21:24:10.711 INFO ScopedTimer [9796 searchOrchestrator] - search.optimize 0.030224023
02-24-2022 21:24:10.711 INFO SearchPhaseGenerator [9796 searchOrchestrator] - Failed to create phases using AST:Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1.. Falling back to 2 phase mode.
02-24-2022 21:24:10.711 INFO SearchPhaseGenerator [9796 searchOrchestrator] - Executing two phase fallback for the search=| dbxquery query="SELECT * FROM \"ngcs2_0\".\"public\".\"responder\"" connection="PROV_DB_WA_2.0" timeout=6000
02-24-2022 21:24:10.711 INFO SearchParser [9796 searchOrchestrator] - PARSING: | dbxquery query="SELECT * FROM \"ngcs2_0\".\"public\".\"responder\"" connection="PROV_DB_WA_2.0" timeout=6000
02-24-2022 21:24:10.712 INFO ChunkedExternProcessor [9796 searchOrchestrator] - Running process: /export/home/splunk/splunk/bin/python3.7 /export/home/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery_bridge.py
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: Traceback (most recent call last):
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: File "/export/home/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery_bridge.py", line 125, in <module>
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: main()
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: File "/export/home/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery_bridge.py", line 121, in main
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: bridge = DbxQueryBridge(sys.argv)
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: File "/export/home/splunk/splunk/etc/apps/splunk_app_db_connect/bin/dbxquery_bridge.py", line 65, in __init__
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: self.sock.connect(('localhost', port))
02-24-2022 21:24:10.738 ERROR ChunkedExternProcessor [9807 ChunkedExternProcessorStderrLogger] - stderr: ConnectionRefusedError: [Errno 111] Connection refused
02-24-2022 21:24:10.741 ERROR ChunkedExternProcessor [9796 searchOrchestrator] - EOF while attempting to read transport header read_size=0
02-24-2022 21:24:10.741 ERROR ChunkedExternProcessor [9796 searchOrchestrator] - Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1.
02-24-2022 21:24:10.741 ERROR SearchPhaseGenerator [9796 searchOrchestrator] - Fallback to two phase search failed:Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1.
02-24-2022 21:24:10.743 ERROR SearchStatusEnforcer [9796 searchOrchestrator] - sid:1645766650.38_B885E1F4-85FA-453C-A035-E8DCD64B223F Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1.
02-24-2022 21:24:10.743 INFO SearchStatusEnforcer [9796 searchOrchestrator] - State changed to FAILED due to: Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1.
02-24-2022 21:24:10.744 INFO SearchStatusEnforcer [9796 searchOrchestrator] - Enforcing disk quota = 10485760000
02-24-2022 21:24:10.747 INFO DispatchStorageManager [9796 searchOrchestrator] - Remote storage disabled for search artifacts.
02-24-2022 21:24:10.747 INFO DispatchManager [9796 searchOrchestrator] - DispatchManager::dispatchHasFinished(id='1645766650.38_B885E1F4-85FA-453C-A035-E8DCD64B223F', username='admin')
02-24-2022 21:24:10.747 INFO UserManager [9796 searchOrchestrator] - Unwound user context: admin -> NULL
02-24-2022 21:24:10.747 INFO SearchStatusEnforcer [9789 RunDispatch] - SearchStatusEnforcer is already terminated
02-24-2022 21:24:10.747 INFO UserManager [9789 RunDispatch] - Unwound user context: admin -> NULL
02-24-2022 21:24:10.747 INFO LookupDataProvider [9789 RunDispatch] - Clearing out lookup shared provider map
02-24-2022 21:24:10.749 ERROR dispatchRunner [28370 MainThread] - RunDispatch::runDispatchThread threw error: Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1.
Hi. I need to download Splunk Enterprise 7.x.x to install in a test environment, to test some compatibility issues. Where can we find these old versions? I can only see 8.x.x among the older versions, but I really need a 7.x.x version. Thanks.
I have an external API subscription that I want to call when a specific field in my Splunk event is present (e.g. City_Name). The REST API call would query the external API for <City_Name> and add the returned data (in JSON format) into Splunk to enrich the event. I've seen something similar done with "lookup", but I'm looking for a tutorial on how to build this so that when the event field is present, the external API is called to download the additional enrichment data. Any suggestions/tutorials on how I might go about implementing this in Splunk? Thanks.
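The "lookup" approach typically means an external (scripted) lookup: Splunk pipes matching events to a script as CSV on stdin and reads the enriched CSV back on stdout. A minimal sketch, under stated assumptions: the `city_enrich` stanza name, the `city_population` field, and the `fetch_city_data` stub (which stands in for the real REST call) are all hypothetical.

```python
#!/usr/bin/env python3
# Sketch of a Splunk external lookup script. Assumes a hypothetical
# transforms.conf stanza such as:
#   [city_enrich]
#   external_cmd = city_enrich.py City_Name city_population
#   fields_list  = City_Name, city_population
# Splunk passes events as CSV on stdin and reads the enriched CSV on stdout.
import csv
import sys

def fetch_city_data(city):
    """Placeholder for the real REST call to the external API
    (e.g. via urllib.request); returns a dict of enrichment fields."""
    return {"city_population": "unknown"}

def enrich(infile, outfile):
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # Only call the external API when the trigger field is present.
        if row.get("City_Name"):
            row.update(fetch_city_data(row["City_Name"]))
        writer.writerow(row)

# The real script would end with: enrich(sys.stdin, sys.stdout)
```

In a search it would then be invoked as `... | lookup city_enrich City_Name OUTPUT city_population` (again, names hypothetical). An automatic lookup in props.conf could apply it whenever City_Name is present.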
There is probably a simple solution to this, but unfortunately I was not able to find the answer in the documentation, nor by searching the community. I am ingesting events into Splunk with a certain JSON structure, e.g.

[ { "foo": { "k1": 1, "k2": 2 }, "bar": { "m1": 5, "m2": 6 }, "string1": "hi", "string2": "bye" }, { "foo": { "k1": 11, "k2": 22 }, "bar": { "m1": 55, "m2": 66 }, "string1": "hi2", "string2": "bye2" }, ... and so on ... ]

I can nicely search these events in Splunk, e.g. with

| where foo.k1 > 10

When searching through the REST API, I can specify which fields I would like to get, e.g. with

| fields string1, foo | fields - _*

The problem I am having is as follows:

- When specifying the field "foo" (which holds a map, or some other complex structure) in the above naive way, I am not getting any of its contents in my search result. (The results are nicely visible in the event view of the Splunk web UI, but not in the REST API.)
- When using fields foo*, I am getting an expanded result: { "foo.k1": 1, "foo.k2": 2 }
- I tried spath, as in: | spath output=myfoo path=foo | fields myfoo | fields - _*  which however gives me a string that contains JSON: {"myfoo": "{\"k1\": 1,\"k2\": 2}"}

The above are all sub-optimal; I would like to get a search result which is pure JSON and preserves the structure of the "foo" field, so that I would get:

{ ..., "foo": { "k1": 1, "k2": 2 }, ... }

In other words: I would like to pass some of the event content through to the result as is, so that I get a nice hierarchical data structure when parsing the JSON search result. Thanks a lot for your valuable advice!
Hi all, We are running the latest version of URL Toolbox (at the time of writing, 1.9.1, released in Dec 2021) on Splunk 8.2.3 with Splunk ES 6.6.2. After the upgrade, we have noticed that the mozilla list is no longer working properly. To test it:

| makeresults | eval domain="http://www.example.com/123/123.php",list="mozilla" | `ut_parse_extended(domain,list)`

Gives:

domain=http://www.example.com/123/123.php, list=mozilla, ut_domain=None, ut_domain_without_tld=None, ut_fragment=None, ut_netloc=www.example.com, ut_params=None, ut_path=/123/123.php, ut_port=80, ut_query=None, ut_scheme=http, ut_subdomain=None, ut_subdomain_count=0, ut_tld=None

With iana there are no problems at all (even if the parsing is a bit different, and mozilla would be the ideal one for our use cases):

| makeresults | eval domain="http://www.example.com/123/123.php",list="iana" | `ut_parse_extended(domain,list)`

Gives:

domain=http://www.example.com/123/123.php, list=iana, ut_domain=example.com, ut_domain_without_tld=example, ut_fragment=None, ut_netloc=www.example.com, ut_params=None, ut_path=/123/123.php, ut_port=80, ut_query=None, ut_scheme=http, ut_subdomain=www, ut_subdomain_count=1, ut_subdomain_level_1=www, ut_tld=com

Is anyone having the same issue and/or a fix that we might apply? Thank you and cheers!
I am working on a Splunk Deployment with a cluster of search heads spanning two physical sites.  At Site1 there is actually only one search head.  At Site2 there are two search heads. The load balancer managing Splunk Web access tends to favor Site1 and so almost all users end up landing on the sole search head at Site1 when they access our Splunk Web url. What I have noticed, however, is that the search head at Site1 also seems to run almost all of the scheduled searches.  Also, because it is favored by the load balancer for user access, users log into that search head and it takes on most of the ad-hoc searches. I know that this setup is very non-optimal, and as the story goes, this is a mess I inherited recently in taking over Splunk responsibilities.  More search heads are needed actually at both physical Sites, but in the meantime, I am trying to understand why the cluster captain is not more evenly distributing the saved searches, alerts, reports, etc.  Why do they all seem to execute from the sole cluster member at Site1?
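To quantify the skew before rebalancing anything, the scheduler's own logs can show which member actually ran each scheduled search. A diagnostic sketch (field availability may vary slightly by version):

```
index=_internal sourcetype=scheduler status=*
| stats count BY host, status
| sort - count
```

If nearly all rows carry the Site1 member's host, that confirms the captain is concentrating scheduled work there rather than distributing it across the cluster.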
Hoping that I may be able to get some assistance with this dashboard. Full disclosure, I am not a Splunk aficionado by any stretch, but I am trying to put together a dashboard that.. 1. takes an account as input and queries the ports that it hits the AD domain controller on. 2. drills down with a query for F5 logs using the time frame passed from drill down selection as well as the ADDC IP and ports. In a nutshell, we want to follow the service account authentication to the loadbalancer and identify the actual client since logs against AD only show the F5 IP. Have tried a number of different methods for this but can't get the drill down to process as intended with the necessary parameters passing. Also open to rethinking the approach if necessary. Originally tried using a transaction to capture events and associate them but there was a lot of noise to filter out. XML shown below:   <form theme="dark"> <label>Account Drilldown</label> <fieldset submitButton="false" autoRun="false"> <input type="time" token="dt" searchWhenChanged="true"> <label>Timeframe</label> <default> <earliest>@d</earliest> <latest>now</latest> </default> </input> <input type="dropdown" token="directoryValue" searchWhenChanged="true"> <label>Directory</label> <choice value="index=index host=*ad* &quot;Source_Port&quot; OR &quot;Port&quot; Account_Name=&quot;*">Active Directory</choice> </input> <input type="text" token="accountSearch" searchWhenChanged="true"> <label>Account Name</label> <default>Type Account Here</default> </input> <input type="multiselect" token="srcport" depends="$dt.earliest$,$dt.latest$" searchWhenChanged="true"> <label>Domain Controller Ports</label> <choice value="*">All</choice> <search id="activityList"> <query>$directoryValue$$accountSearch$*" | fields Source_Network_Address, Port, Source_Port | eval srcip = Source_Network_Address, Port = Source_Port, srcport = Port | table _time, srcip, srcport</query> <earliest>@d</earliest> <latest>now</latest> </search> 
<fieldForLabel>srcport</fieldForLabel> <fieldForValue>srcport</fieldForValue> <prefix>(</prefix> <suffix>)</suffix> <delimiter> OR </delimiter> </input> </fieldset> <row> <panel> <table> <search base="activityList"> <![CDATA[index=index host=*ad* &quot;Source_Port&quot; OR &quot;Port&quot; Account_Name=&quot;*" | eval _querystring=replace(replace(ltrim(rtrim("$srcport$",")"),"("),"srcport=","form.srcport=")," OR ","&")]]> </search> <option name="count">5</option> <option name="drilldown">row</option> <option name="refresh.display">preview</option> <drilldown> <link> <![CDATA[/app/team/account_activity_drilldown?form.dt.earliest=$earliest$&form.dt.latest=$latest$&form.srcip=$row.srcip$&form.srcport=$row.srcport$]]> </link> </drilldown> </table> </panel> </row> <row> <panel id="drilldown" depends="$row.srcip$"> <table> <search> <query>| search index=index type=traffic dstport=* action=* policyid=* srcip="$srcip$" OR srcport="$srcport$"</query> <earliest>$dt.earliest$</earliest> <latest>$dt.latest$</latest> </search> <option name="drilldown">row</option> <option name="refresh.display">preview</option> <option name="rowNumbers">true</option> </table> </panel> </row> </form>  
Hello, I wanted to create a detection rule for the LLMNR protocol; note that I don't have Sysmon, just standard logs. Can you help me please? Thank you and have a great day.
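Without Sysmon, one possible angle is that LLMNR runs over port 5355, so Windows Filtering Platform auditing (Security Event ID 5156) or firewall/network logs can surface it. A heavily hedged sketch: the index and the underscore field names below are assumptions that depend on your Windows TA and audit policy, not a confirmed detection:

```
index=wineventlog EventCode=5156 Destination_Port=5355
| stats count BY Source_Address, Destination_Address
```

Hosts generating unusual volumes of port-5355 traffic, or a new responder address appearing, would be candidates for LLMNR poisoning investigation.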
Hi, I have created a dashboard to filter firewall statuses. One of the inputs I need is a checkbox to eliminate duplicates based on host, source IP, destination IP and destination port. However, the checkbox input is not working: every time the user checks or unchecks the box, it has no effect on the dashboard. The following is my dashboard and the XML code, respectively: Can you please help? Thank you!
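One common pattern for this kind of toggle is to make the checkbox's choice value the dedup command itself, with an empty default so the token is defined (as an empty string) when unchecked. A sketch; the token name, index, and field names are hypothetical:

```xml
<input type="checkbox" token="dedup_cmd" searchWhenChanged="true">
  <label>Remove duplicates</label>
  <choice value="| dedup host src_ip dest_ip dest_port">Deduplicate</choice>
  <initialValue></initialValue>
  <default></default>
</input>
```

The panel search then embeds the token directly, e.g. `index=firewall sourcetype=traffic $dedup_cmd$ | table _time host src_ip dest_ip dest_port`, so checking the box injects the dedup and unchecking it injects nothing.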
Hello, I'm experiencing some issues with the kvstore:

[conn4556] SCRAM-SHA-1 authentication failed for __system on local from client xxx.xxx.x.xx:xxxxx ; AuthenticationFailed: SCRAM-SHA-1 authentication failed, storedKey mismatch

I followed this: https://community.splunk.com/t5/Deployment-Architecture/Why-is-the-KV-Store-status-is-showing-as-quot-starting-quot-in/m-p/284690 and on 1 SH (of 3 total) I'm receiving:

This member:
backupRestoreStatus : Ready
disabled : 0
guid : xxxxxxxxxxxxxxxxxxxxxx
port : 8191
standalone : 0
status : starting
storageEngine : mmapv1

I appreciate any help.
Hi Team, We have SaaS-based AppDynamics, where login is through the access key. Now I want to deploy the AppDynamics cluster agent in Kubernetes using the Helm chart. In the Helm chart, the following code is there:

api-user: {{ cat (.username | trim | required "AppDynamics controller username is required!") "@" (.account | trim | required "AppDynamics controller account is required!") ":" (.password | trim | required "Appdynamics controller password is required!") | nospace | b64enc -}}

The values.yaml file looks like:

controllerInfo:
  url: "https://myinstance:443"
  account: "xysystems"
  username:
  password:
  accessKey: "xxxxxxxxxxxxxx"
  globalAccount: null # To be provided when using machineAgent Window Image
  # SSL properties
  customSSLCert: null

Could someone please explain how, instead of using username@account:password, I can use the account name with the access key to log in? Please guide me, thanks.
Can I implement something like this?

Process: Service Parameters. Average of the percentage reported by the IT application health parameters, i.e. transaction timeouts. We count the number of transactions in a time period and compare it with the transaction timeouts that were reported.

Example: If the number of transactions for 1 day is 5000, and the transaction timeouts were 30, then the transaction success rate is 99.4%.

The colour coding scheme would be:
Green - if, after subtracting the transaction timeouts from the number of transactions over the entire time period, the transaction success rate is more than 98%, the application is considered Green.
Yellow - if the transaction success rate is more than 90% but less than 98%, the application is considered Yellow.
Red - if the transaction success rate is less than 90%, the application is considered Red.
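The thresholds above translate fairly directly into an eval. A sketch, assuming hypothetical index/sourcetype names and a `status="timeout"` marker for timed-out transactions:

```
index=app_logs sourcetype=transactions
| stats count AS total, count(eval(status="timeout")) AS timeouts
| eval success_rate = round((total - timeouts) / total * 100, 1)
| eval health = case(success_rate > 98, "Green", success_rate > 90, "Yellow", true(), "Red")
```

With the example figures (5000 transactions, 30 timeouts) this gives 4970/5000 = 99.4%, i.e. Green. The `health` field can then drive dashboard colouring via rangemap or a colorPalette format option.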
Hi Team, Our team is planning to install Defender for Endpoint on our Splunk servers. Can anyone please confirm whether there are any restrictions on having the Microsoft Defender AV solution on the Splunk servers? That is, would there be any impact if we install it on the Splunk servers? Thanks & Regards,