Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I don't think "NOT Eventcode IN (4768,4770,4624) AND Eventcode=4769" is what it takes. The idea is that those three codes do not precede 4769, so you exclude any completed transactions instead:

index="xx"
| transaction user maxspan=24h maxpause=10h startswith="Eventcode IN (4768,4770,4624)" endswith="Eventcode=4769" keepevicted=true keeporphaned=true
| where closed_txn==0
I believe you need to upgrade to a supported version, at least Enterprise 9.1.0.
I am trying to find a way to send a test email through SOAR to check connectivity. Where can I see that option? The Splunk SMTP app is also present.
Hi Community,

One of the log sources (e.g. index=my_index) at my company's Splunk started landing in index=main. After multiple investigations, I found that the Infrastructure Team had refreshed the device to new hardware due to product EOL (same brand, same product line, e.g. Palo Alto PA-3020 to PA-3220). The device IP also changed, so I modified the monitor path in inputs.conf in the add-on and distributed it to the HF via the deployment server.

Here is an example of what I modified:

[monitor:///siem/data/syslog/192.168.1.101/*] #original ip was 192.168.1.100
disabled = false
index = my_index
sourcetype = my:sourcetype
host_segment = 4

After these changes, I verified the result on the HF, and inputs.conf was successfully updated to the new version. However, the logs still go to index=main when searching on the Search Head.

Does anyone know if there is anything else I need to modify? Or could there be another root cause, apart from the IP change, making the logs fall under the wrong index?
I have a bucket stuck in fixup tasks (Indexer Clustering -> Bucket Status), for both SF and RF, so neither the search factor nor the replication factor is met in the indexer cluster. I tried to roll and resync the bucket manually; that didn't work. There are no buckets under excess buckets — I cleared them more than 3 hours ago. Is there any way to meet SF and RF without losing data or the bucket? I even tried restarting the Splunk process on that indexer.

I forgot to mention: I had a /opt/cold drive with an I/O error on one indexer. To get it fixed I had to stop Splunk and remove that indexer from the cluster. All other indexers have been up and running since last night. All 45 indexers in the cluster master are up, and I left the bucket fixup tasks to fix and rebalance overnight. When I checked in the morning, there were only 2 fixup tasks left: one in SF and one in RF. Do I also need to perform a manual data rebalance from Indexer Clustering as well?
Hi @maa_splunk, I encountered the same issue. Did you resolve this error?
Also, could you give us more details about why you need "version control" using GitHub/GitLab? We can use the Deployment Server (and/or Deployer, Cluster Master) for version control management of apps, add-ons, and TAs.
Hi experts,

After submitting a search query via the REST API, is there a way to check the number of events in the search results for the job ID? Without it, I won't know how many GET requests (each limited to 50K results) I need to issue, which is something I run into as well. Alternatively, is there an argument I can use in an HTTP GET to Splunk to override the 50K limit?

Thanks,
MCW
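For what it's worth, the job's counts are exposed on the search/jobs endpoint, so one approach is to read resultCount first and then page the GETs in 50K chunks. A minimal sketch, assuming the documented GET /services/search/jobs/<sid>?output_mode=json shape; the base URL, sid, and authentication (the opener) are placeholders you'd fill in:

```python
# Sketch: read a job's counts, then compute the paging offsets for
# /services/search/jobs/<sid>/results GETs limited to 50K rows each.
import json
import urllib.request

def job_counts(base_url, sid, opener):
    """Return (eventCount, resultCount) for a finished search job.

    Assumes the JSON layout of the search/jobs endpoint:
    entry[0].content.eventCount / .resultCount.
    """
    url = f"{base_url}/services/search/jobs/{sid}?output_mode=json"
    with opener.open(url) as resp:
        content = json.load(resp)["entry"][0]["content"]
    return content["eventCount"], content["resultCount"]

def page_offsets(total, page_size=50000):
    """Offsets to pass as the 'offset' parameter, one GET per chunk."""
    return list(range(0, total, page_size))

# e.g. 120,000 results -> three GETs at offsets 0, 50000, 100000
print(page_offsets(120000))
```

The paging helper is pure logic; only job_counts touches the server, and you would build the opener with whatever auth your deployment uses (basic auth or a session token header).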
Creating a generic signed cert for all UFs is what I did. Our techsec policy doesn't require mutual TLS, so a single valid cert for all UFs works a treat. You'll want to stop your UFs from listening on external interfaces if your security team is going to port-scan, though, as the cert will come back as not matching the hostname and therefore a security emergency. (Not really, but with TechSec, what isn't an emergency?) I dropped the cert into an app common to all UFs and updated the cert path as well.
Hi @sizemorejm .. As this is a very rare topic, it's difficult to explain without our own experience; we can only give an educated guess.

>> would this add-on work for gitlab as well?

I assume you would like to use version control for the Splunk configs, importantly using GitLab. As per my understanding, the GitHub app should work with GitLab as well. Both are very similar products from the same base source code (I think); GitLab looks like an advanced version of GitHub, so this version control app should also work seamlessly. Thanks.

Comparison between GitHub and GitLab: https://about.gitlab.com/competition/github/
Thanks. When I hardcode data like you've done, and I add escaped backslash quotes, it works:

| makeresults
| fields - _time
| eval loggingObject.responseJson = "{\"meta\":{\"code\":400},\"flag1\":false,\"flag2\":false,\"flag3\":true,\"flag3status\":\"3\",\"flag4\":false,\"flag5\":false,\"flag6\":false,\"flag7\":false, \"flag7reason\":\"xyz\"}"
| spath input=loggingObject.responseJson
| foreach * [eval trueflag = mvappend(trueflag, if(<<FIELD>> == "true", "<<FIELD>>", null()))]
| stats count by trueflag

When I use my real data, I do get results, but also some Splunk errors:

| eval responseJson='loggingObject.responseJson'
| spath input=responseJson
| foreach * [eval trueflag = mvappend(trueflag, if(<<FIELD>> == "true", "<<FIELD>>", null()))]
| stats count by trueflag

Errors:

[shsplnkprnap008,shsplnkprnap009,shsplnkprnap010,shsplnkprnap011,shsplnkprnap012,shsplnkprnap013] Failed to parse templatized search for field 'tag::eventtype'
[shsplnkprnap008,shsplnkprnap009,shsplnkprnap011,shsplnkprnap012,shsplnkprnap013] Failed to parse templatized search for field 'loggingObject.methodParams{}.className'
[shsplnkprnap008,shsplnkprnap009,shsplnkprnap011,shsplnkprnap012,shsplnkprnap013] Failed to parse templatized search for field 'loggingObject.methodParams{}.value'
[shsplnkprnap008,shsplnkprnap009,shsplnkprnap012,shsplnkprnap013] Failed to parse templatized search for field 'loggingObject.requestHeaders.user-agent'
[shsplnkprnap008,shsplnkprnap009,shsplnkprnap012,shsplnkprnap013] Failed to parse templatized search for field 'loggingObject.requestHeaders.x-forwarded-for'
[shsplnkprnap008,shsplnkprnap009,shsplnkprnap013] Failed to parse templatized search for field 'Device-ID'
[shsplnkprnap008,shsplnkprnap009,shsplnkprnap013] Failed to parse templatized search for field 'valid-beacon-dept-count'
[shsplnkprnap009] Failed to parse templatized search for field 'steps{}'

I am able to do something like this without Splunk errors:

| eval responseJson='loggingObject.responseJson'
| stats count by responseJson
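Outside Splunk, the flag-extraction logic above boils down to parsing the JSON and keeping the top-level keys whose value is boolean true. A minimal Python sketch, using the sample payload quoted in the post (not the real data):

```python
import json

# Sample payload from the post above.
raw = ('{"meta":{"code":400},"flag1":false,"flag2":false,"flag3":true,'
       '"flag3status":"3","flag4":false,"flag5":false,"flag6":false,'
       '"flag7":false,"flag7reason":"xyz"}')

def true_flags(payload: str) -> list[str]:
    """Return the top-level keys whose value is boolean true.

    Note: 'is True' skips strings like "3" and nested objects like meta,
    mirroring the <<FIELD>> == "true" test in the foreach above.
    """
    obj = json.loads(payload)
    return [k for k, v in obj.items() if v is True]

print(true_flags(raw))  # -> ['flag3']
```

This can be a quick way to sanity-check what the SPL should return for a given event before chasing the templatized-search errors.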
As of version 6 we're able to run playbooks when a container is closed. That's the easy part. Canceling running playbooks takes a few custom API calls.

# Pulls the id for this playbook. It shouldn't be hardcoded because the ID
# changes with each version and may not increment as expected
my_id_url = phantom.build_phantom_rest_url('playbook') + '?_filter_name="my_playbook_name"'
my_id_resp_json = phantom.requests.get(my_id_url, verify=False).json()
my_id = my_id_resp_json['data'][0]['id']

# Runs a query to pull the audit data of the current container
audit_url = phantom.build_phantom_rest_url('container', container_id, 'audit')
audit_resp_json = phantom.requests.get(audit_url, verify=False).json()

success_count = 0
for i in audit_resp_json:
    # Looks for any playbook that has run in the container
    if i['AUDIT SOURCE'] == 'Playbook Run':
        # Runs a query to find details on each run
        runs_url = phantom.build_phantom_rest_url('playbook_run', i['AUDIT ID'])
        runs_resp_json = phantom.requests.get(runs_url, verify=False).json()
        # Finds any playbook that is currently running which isn't this one
        if runs_resp_json['status'] == 'running' and runs_resp_json['playbook'] != my_id:
            # Sends a POST to cancel any that match the above criteria
            cancel_url = phantom.build_phantom_rest_url('playbook_run', runs_resp_json['id'])
            cancel_post = phantom.requests.post(cancel_url, data='{"cancel":true}', verify=False)
            # If successful, bump the success count
            if cancel_post.status_code == 200:
                success_count += 1
Another option would be to create a dashboard with the base search to pull up the errors and then use a drilldown to get the rest of the detail. Here is some example Simple XML for a dashboard:

<form theme="dark">
  <label>Error Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timePicker">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_internal sourcetype IN (splunkd) AND ERROR | table _time index host source sourcetype _raw</query>
          <earliest>$timePicker.earliest$</earliest>
          <latest>$timePicker.latest$</latest>
        </search>
        <option name="count">5</option>
        <option name="drilldown">row</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">true</option>
        <drilldown>
          <eval token="tok_time_earliest">$click.value$ - 10</eval>
          <eval token="tok_time_latest">$click.value$ + 10</eval>
          <set token="tok_index">$row.index$</set>
          <set token="tok_host">$row.host$</set>
          <set token="tok_source">$row.source$</set>
          <set token="tok_sourcetype">$row.sourcetype$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel depends="$tok_index$">
      <title></title>
      <event>
        <search>
          <query>index=$tok_index|s$ host=$tok_host|s$ source=$tok_source|s$ sourcetype=$tok_sourcetype|s$ earliest=$tok_time_earliest|s$ latest=$tok_time_latest|s$ | highlight error</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="type">raw</option>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
  </row>
</form>
Do you mean to say that Splunk gives you a field named 'loggingObject.responseJson' with that JSON object as value? In that case, you need to first extract from JSON with spath. (A newer alternative is fromjson.)

| spath input=loggingObject.responseJson
| foreach flag* [eval trueflag = mvappend(trueflag, if(<<FIELD>> == "true", "<<FIELD>>", null()))]
| stats count by trueflag

Here is an emulation you can play with and compare with real data:

| makeresults
| fields - _time
| eval loggingObject.responseJson = "{\"meta\":{\"code\":400},\"flag1\":false,\"flag2\":false,\"flag3\":true}"
``` data emulation above ```
I found "VersionControl For Splunk" on GitHub. Would this add-on work for GitLab as well?
Looks like the spaces and quotes are being interpreted by the shell. Try escaping them like below:

curl -k -u user:password https://10.236.142.0:8089/services/search/jobs/export -d search='search index=list-service source=\"eventhub://sams-jupiter-prod-scus-logs-premium-1.servicebus.windows.net/list-service;\" \"kubernetes.namespace_name\"=\"list-service\" | stats dc(kubernetes.pod_name) as pod_count'

I had a very long query that needed to be passed via the REST API. I ran into such issues, but URL-encoding the query was very helpful. I used this website for that: https://meyerweb.com/eric/tools/dencoder/
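If you'd rather not depend on a website, the same URL-encoding can be done locally with Python's standard library. A small sketch, using a shortened version of the query above as the example input:

```python
from urllib.parse import quote_plus

# Shortened version of the SPL query from the curl example above.
query = ('search index=list-service "kubernetes.namespace_name"="list-service" '
         '| stats dc(kubernetes.pod_name) as pod_count')

# Percent-encode the query so spaces, quotes, and pipes survive
# the trip through the shell and the REST API unchanged.
# quote_plus turns spaces into '+', quotes into %22, pipes into %7C.
encoded = quote_plus(query)
print(encoded)
```

You can then paste the encoded string into the -d body, or skip the problem entirely by letting your HTTP client encode the form data for you.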
Try adding the following at the end of your search.

If your current search result contains _time and one field, say "count":

| eventstats sum(count) as TotalCount

If your search result contains _time and several dynamic column names:

| addtotals fieldname="Totalcount"
Hello, I have the same problem in the Topology Overview in a distributed environment. The queries stay loading and never complete. I manage many environments and the same thing happens on both Windows and Linux. I think version 9.1.1 has this problem.
Hi @ashok968, Can you please share your current SPL, its output, and a visual representation of the output you need to fulfill the requirement? Please mask any sensitive content. Thank you.
Hi @Sunil.Agarwal, Any idea how to resolve this? I am still getting the same error. I even tried accessing it internally from the Linux server running the controller and got the same error.