All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Can we use wildcards in the Log File Prefix/S3 Key Prefix? I'd like to collect an organization CloudTrail and use something like AWSLogs/o-xxx/*/CloudTrail/, but this does not work. How can we collect organizational trails correctly without wildcards?
I need to manipulate some fields in the URL threat match search in Splunk ES 6.4, but am at a loss as to how to do so. When viewing the SPL at ES -> Data Enrichment -> Threat Intelligence Management -> Threat Matching, any changes I make to the SPL are not saved, and when I grep for snippets of the threat match search in the splunk/etc directory, I can't find where they are stored.

Our cloud-based web proxy logs do not include the protocol header in the URL field. Since the Web data model requires this, and several of our custom threat intelligence sources include it, we need to bridge the gap in order to perform threat matches from the Web.url and Web.http_referrer fields against threat intelligence. Previously, I had directly edited the Threat - URL Matches - Threat Gen search to include some eval statements just before the threat_intel lookups, turning the Web.url field into a multivalue field containing the three protocol headers we see in our threat intelligence, then mvjoining the values back into one field for whitelisting later on.

Here are my additions to the original threat gen search:

| eval url=mvappend("http://".url, "https://".url, "ftp://".url)
| extract domain_from_url
| `threatintel_url_lookup(url)`
| `threatintel_domain_lookup(url_domain)`
| eval url=mvjoin(url, " ")

It wasn't the prettiest solution, but it was the only one we could come up with to get URL matches out of the Threat Intelligence framework.
Since the old threat gen searches are deprecated, I replicated this effort with the code shown for the URL threat match search found at ES -> Data Enrichment -> Threat Intelligence Management -> Threat Matching:

| eval Web.url=mvappend("http://".'Web.url', "https://".'Web.url', "ftp://".'Web.url')
| lookup "threatintel_by_url" value as "Web.url" OUTPUT threat_collection as tc0, threat_collection_key as tck0
| lookup "threatintel_by_url_wildcard" value as "Web.url" OUTPUT threat_collection as tc1, threat_collection_key as tck1
| eval Web.url=mvjoin('Web.url', " ")

However, I need to save my new version of the threat match search over the existing one, and as stated above, I'm not sure how to do this. It seems like the SPL shown at ES -> Data Enrichment -> Threat Intelligence Management -> Threat Matching may be generated from the various user-configurable GUI options. If that is the case, how can I ensure that my web proxy logs can be processed through the threat intelligence framework?
Disk Quota Limits, Search API Endpoint Differences and Parameters

Looking for better clarity and deeper understanding to solve a recurring issue I'm seeing. We have a script performing searches using the API. Currently, the flow works like this:

1. Start a search with POST /services/search/jobs, passing the search as a parameter, and get back a search id (sid).
2. Loop on GET /services/search/jobs/{sid} to check the search job status until done.
3. Pull back the results with GET /services/search/jobs/{sid}/results.
4. After the results are pulled back, POST /services/search/jobs/{sid}/control to cancel the search and delete the result cache.

The script is designed not to run more than a few searches at a time, and to wait until earlier searches have been cancelled before starting the next one. However, we are still sometimes hitting the search disk quota limitation. We've increased this limit a few times, well past the default, which has reduced the frequency, but the issue still comes up. We do NOT want to change this to unlimited, nor keep increasing it every time it gets hit. There are a few questions I'm not able to find documentation on while trying to figure out solutions:

- Is there normally a delay after a search has been successfully cancelled before the result cache is removed, or before the disk usage quota is updated to reflect the cleared space?
- Would doing a DELETE /services/search/jobs/{sid} clear up space quicker?
- Would switching to the /services/search/jobs/export endpoint help? If the results are streamed, do they still persist on disk? The Python & Java SDK docs say export searches "...return results in a stream, rather than as a search job that is saved on the server," but I'm not sure that means the result cache isn't saved.
- Does setting a low 'timeout' value in the search/jobs parameters clear the disk space after that value has passed?
- With the 'auto_cancel' parameter, what counts as 'inactivity'? Checking the status of the sid? Retrieving results? If a search is accidentally set to do 'search index=*' for all time, does this stop it before completion? (I assume so, but wanted confirmation.)

The documentation is unclear on what some phrases mean (like 'inactivity', or 'rather than saved on the server' in the SDK docs), and some other parts are likely simplifications of concepts I need to understand more in depth (cancelling/deleting jobs, clearing disk space). Trying to avoid just putting band-aids on a bullet wound, but I need more details to determine the right treatment.
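For reference, the polling flow above can be sketched in Python with the cancel step swapped for a DELETE, which removes the job artifact outright (whether that frees quota faster than cancel is exactly the open question). The callable-injection style and helper names here are illustrative, not Splunk SDK APIs:

```python
import time

def run_search(post, get, delete, query, poll_interval=2.0):
    """Run one search job end to end: create, poll, fetch, then DELETE.

    post/get/delete are callables returning parsed JSON dicts (e.g. thin
    wrappers around requests.Session methods using output_mode=json), so
    the flow can be exercised without a live Splunk instance.
    """
    # 1. Create the job and capture its search id.
    sid = post("/services/search/jobs", data={"search": query})["sid"]
    # 2. Poll until the dispatch state reports done.
    while not get("/services/search/jobs/" + sid)["isDone"]:
        time.sleep(poll_interval)
    # 3. Pull the results.
    results = get("/services/search/jobs/" + sid + "/results")
    # 4. DELETE the job instead of POSTing a cancel action.
    delete("/services/search/jobs/" + sid)
    return results
```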
Hi team, I prepared a stats query and it is working fine, but I also need to know the application names which have no transactions. Below is the query I prepared:

index="index1" ApplicationName="app1" OR ApplicationName="app2" OR ApplicationName="app3" OR ApplicationName="app4" OR ApplicationName="app5" OR ApplicationName="app6" OR ApplicationName="app7" OR ApplicationName="app8" OR ApplicationName="app9"
| chart count(ApplicationName) over ApplicationName by Status
| addtotals

From the above query I am getting results like this:

ApplicationName   Success   Failed   Total
app2              3         0        3
app5              9         1        10

Now I need to enhance the above query to get results like this:

ApplicationName   Success   Failed   Total
app1              0         0        0
app2              3         0        3
app3              0         0        0
app4              0         0        0
app5              9         1        10
app6              0         0        0
app7              0         0        0

Can anyone help me with this? Thanks in advance.
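A common SPL approach to the question above is to append the full application list (from a lookup or makeresults) and then fillnull value=0 the count columns. The zero-filling logic itself is simple; here it is sketched in Python on the sample data, just to pin down the shape of the desired output (the function name and data are illustrative):

```python
def fill_missing(expected_apps, counts):
    """counts maps app name -> (success, failed); emit one row per
    expected app, defaulting absent apps to zeros, plus a total."""
    rows = []
    for app in expected_apps:
        success, failed = counts.get(app, (0, 0))
        rows.append((app, success, failed, success + failed))
    return rows

# Sample values matching the tables in the question.
apps = ["app1", "app2", "app3", "app4", "app5"]
observed = {"app2": (3, 0), "app5": (9, 1)}
```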
Hoping this is the correct board to post this question. I need help setting up a health rule for our Oracle database so I can be alerted if one of our application user accounts gets locked. For the life of me I can't seem to find where I can define that. Any help would be greatly appreciated. Thanks, Greg
Hello All, I am trying to visualize data in a choropleth map using shapefiles. My goal is to show a count of a field by county in PA. I have yet to find a well-configured shapefile for all counties in PA, so I had to use one covering the counties of the entire US. Unfortunately, when I use the geom command, this includes other counties of the same name in the map visualization. Is there a way to filter out other states and include just PA? Or can I set bounds for this map so it only visualizes PA?

Image of Choropleth map as is

Search:
sourcetype=*sourcetype* province=Pennsylvania
| stats latest(*stats*) as "*name*" by county
| geom nat_counties_lookup featureIdField=county
Hello Splunkers, please help me figure out this issue! I have a real-time alert which triggers and sends email to users. When I ingest 62 files into a Splunk index, 52 alerts are triggered, but I have received only 44 email notifications. I found that the first email was received at 20:35 and the 44th at 20:40, and none were received after that. I have also tried changing these two alert parameters from their default value of 5m:

1. action.email.maxtime --> 1800
2. action.script.maxtime --> 1800

This is Splunk Enterprise v6.6.3. Please let me know if any other parameter should be changed, or if there is a known issue like this.
Hello everyone, I have a Linux server (CentOS 7) where a Splunk universal forwarder (version 8.0.3) has been installed. After installing the agent, an app was deployed for connecting to the deployment server. Unfortunately, the machine is not visible in the DS. First I checked that the DNS name of the DS can be resolved there and that the host can communicate with the DS on destination port 8089 at all, using telnet; no issues were found. I then looked at the internal Splunk logs with the following search:

index=_internal earliest=-20d@d latest=now hostname OR x.x.x.x sourcetype=splunkd

(where x.x.x.x is the IP of the Linux server). I could see the following warning there:

"02-03-2021 08:04:09.080 +0100 WARN HttpListener - Socket error from x.x.x.x:13258 while idling: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version numb"

My question: can that error be related to the connection problem between the Linux server and the DS? If yes, how can I resolve it? On the DS I have the following Splunk version installed: Splunk 7.2.0 (build 8c86330ac18). Thanks in advance for your help. BR, Dawid
Hi all, I am struggling with an issue in Splunk development. Our goal is to freeze a row. Every time anyone clicks on any of the field labels, the rows move, but we want a specific row to remain frozen and locked in place when someone clicks on any of the field labels. Thanks in advance. Regards
Hello, I am trying to get the x-csrf-token authentication token from my SAP application via GET, as this is necessary for further access:

| makeresults count=1
| eval header="{\"x-csrf-token\":\"fetch\"}"
| curl user= pass= method=get headerfield= header debug=true uri="https://myip/sap/opu/odata/sap/INBOUNDCONNECTOR/InboundAlertSet"

Now, when I do the same in Postman, I get an error from the corresponding function (InboundAlertSet) in the response body, but I also get the required x-csrf-token in the response header. How would I get / read the response header with the WebTools? Is it possible? Kind Regards, Kamil
I have a timestamp like this: "2020-Jan-01 21:59". When I ingest data, I want this timestamp field to be registered as the _time field in Splunk. What is the right strptime() string to use to parse this timestamp?
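Splunk's TIME_FORMAT setting in props.conf uses strptime-style conversion specifiers, so the format string can be checked in Python first (note %b is locale-dependent; it matches "Jan" under the default C/English locale):

```python
from datetime import datetime

# %Y = 4-digit year, %b = abbreviated month name (Jan), %d = day of
# month, %H:%M = 24-hour time -- matching "2020-Jan-01 21:59".
fmt = "%Y-%b-%d %H:%M"
parsed = datetime.strptime("2020-Jan-01 21:59", fmt)
print(parsed)  # 2020-01-01 21:59:00
```

In props.conf that would be TIME_FORMAT = %Y-%b-%d %H:%M.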
Hey! My team is interested in integrating Splunk (especially ES) with TheHive Project's products. The goal is to automatically send Splunk alerts (notable events in the case of ES) to the TheHive platform for further automatic analysis by Cortex, returning the results back to Splunk. I don't have any experience with this kind of thing, so I would appreciate any ideas for solving this problem. Has anyone done this before on their project and would like to share a solution?
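One common pattern for this is a custom alert action on the Splunk side that POSTs each notable event to TheHive's alert API. A sketch of the field mapping is below; the payload keys follow TheHive's /api/alert as I understand it, but they are assumptions to verify against the API docs for your TheHive version:

```python
def build_thehive_alert(notable):
    """Map a Splunk ES notable event (a dict of result fields) onto the
    payload shape TheHive's alert endpoint expects. Key names on the
    right are assumptions to check against your TheHive version."""
    return {
        "title": notable.get("search_name", "Splunk notable"),
        "description": notable.get("description", ""),
        "type": "notable",
        "source": "splunk-es",
        # sourceRef should be unique per alert to avoid duplicates.
        "sourceRef": notable["event_id"],
        "severity": int(notable.get("severity", 2)),
    }
```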
I have a query to find missing forwarders. It is based on code I received here, and it is very close to working. Here is the code; my issue is below.

(fwdType=* group=tcpin_connections guid=* index=_internal sourcetype=splunkd (connectionType=cooked OR connectionType=cookedSSL))
| stats values(fwdType) as forwarder_type, latest(version) as version, values(arch) as arch, values(os) as os, max(_time) as last_connected, sum(kb) as new_sum_kb, avg(tcp_KBps) as new_avg_tcp_kbps, avg(tcp_eps) as new_avg_tcp_eps by guid, hostname
| stats values(forwarder_type) as forwarder_type, max(version) as version, values(arch) as arch, values(os) as os, max(last_connected) as last_connected, values(new_sum_kb) as sum_kb, values(new_avg_tcp_kbps) as avg_tcp_kbps, values(new_avg_tcp_eps) as avg_tcp_eps by guid, hostname
| addinfo
| eval status=if(((isnull(sum_kb) OR (sum_kb <= 0)) OR (last_connected < (info_max_time - 60))),"missing","active"), sum_kb=round(sum_kb,2), avg_tcp_kbps=round(avg_tcp_kbps,2), avg_tcp_eps=round(avg_tcp_eps,2)
| eval age = now() - last_connected
| search age > 60
| sort age d
| convert ctime(last_connected)
| rename hostname as host
| fields host, forwarder_type, version, arch, os, status, last_connected, age, sum_kb, avg_tcp_kbps, avg_tcp_eps
| join type=left host [| tstats count where index="*" by host, index | stats values(index) as indexes by host]
| append [| inputlookup MaintenanceToggle.csv | fields MaintenanceONOFF]
| reverse
| filldown MaintenanceONOFF
| reverse
| where MaintenanceONOFF!="ON"
| search NOT [| inputlookup PreProduction.csv | rename Index as indexes]
| search host=* NOT [| inputlookup Forwarders.csv | rename IgnoreForwarder as host]

Everything up to the join is code I found here and appears to work perfectly. The result is all of the hosts where age > 60. This works every time.
| join type=left host [| tstats count where index="*" by host, index | stats values(index) as indexes by host]

The join is meant to link a host with its index, as my goal is to exempt certain indexes. The join appears to work, but occasionally fails to link a host with its index, resulting in otherwise correct results with a few hosts missing an index. Obviously, all of the hosts have an index. I am fairly confident of the remaining logic, as it is working well in many other queries. So the problem is that the join does not succeed 100% of the time. I have this scheduled to run every five minutes; I can replicate the failure using the exact time frame in which it occurred, yet minutes later the exact same search returns properly. Can anyone suggest what might be the cause of this and how to correct it? Thank you in advance.
Scenario: I have 10 machines infected with malware. The believed infection source is email. I am attempting to create a search to find whether any emails with the same subject line or sender have been sent to all 10 individuals. The basis of the search is below; I am wondering what operator would be used to compare a field to itself and only return the results which are present in the logs of all users.

index=email sourcetype=email recipient=user1 OR recipient=user2 OR recipient=user3 AND subject="unknown subject that is the same for all recipients"
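In SPL this kind of check is usually done with stats rather than a comparison operator, along the lines of `... | stats dc(recipient) as rcpts values(recipient) by subject | where rcpts=10`. The grouping logic, sketched in Python on made-up events (function name and data are illustrative):

```python
from collections import defaultdict

def subjects_sent_to_all(events, recipients):
    """events: (recipient, subject) pairs. Return the subjects that
    were delivered to every recipient in the target set."""
    seen = defaultdict(set)
    for rcpt, subj in events:
        seen[subj].add(rcpt)
    target = set(recipients)
    # Keep a subject only if its recipient set covers the whole target.
    return sorted(s for s, r in seen.items() if target <= r)
```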
Hi all, I want to create a Sequence template that triggers when two correlation searches trigger for the same source IP.

Correlation Search 1: EDR Detection
Correlation Search 2: Traffic to suspicious URL

Fields of interest from Correlation Search 1: Source IP, File Name, File Path, File Hash, etc.
Fields of interest from Correlation Search 2: Source IP, URL, URL_Domain, Destination IP, etc.

How can I get the fields of interest from Correlation Search 2 into the sequenced events? The 'Output Fields' section in the Sequence template accepts only the 'status labels' defined in the 'start' section (i.e., fields from Correlation Search 1).
ReconnectedTime              ReconnectedDetails
2021-02-02T16:46:19.000      2021-02-02T08:54:48.000|viceusr|0xA310B|BEK-329999910922|11.188.92.6
                             2021-02-02T09:29:59.000|shuani|0xF2C223|NTIC4|1.273.6.189
                             2021-02-02T16:46:19.000|scrmp_install|0x4216DA|GLB163|21.1.218.15

2021-02-02T08:54:48.000      2021-02-02T08:54:48.000|viceusr|0xA310B|BEK-329999910922|11.188.92.6
2021-02-02T09:29:59.000      2021-02-02T09:29:59.000|shuani|0xF2C223|NTIC4|1.273.6.189
                             2021-02-02T16:46:19.000|scrmp_install|0x4216DA|GLB163|21.1.218.15

Both ReconnectedTime and ReconnectedDetails are multivalue fields. In each event, a ReconnectedDetails value should be kept only when its leading timestamp substring matches one of the ReconnectedTime values; only those matched values should appear in the final output:

ReconnectedTime              ReconnectedDetails
2021-02-02T16:46:19.000      2021-02-02T16:46:19.000|scrmp_install|0x4216DA|GLB163|21.1.218.15

2021-02-02T08:54:48.000      2021-02-02T08:54:48.000|viceusr|0xA310B|BEK-329999910922|11.188.92.6
2021-02-02T09:29:59.000      2021-02-02T09:29:59.000|shuani|0xF2C223|NTIC4|1.273.6.189
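In SPL this is typically an mvfilter/mvmap job keyed on the leading timestamp substring. The selection rule itself, prototyped in Python on the values above (function name is illustrative):

```python
def match_details(reconnected_times, reconnected_details):
    """Keep only the detail strings whose leading timestamp (the text
    before the first '|') appears among the ReconnectedTime values."""
    wanted = set(reconnected_times)
    return [d for d in reconnected_details if d.split("|", 1)[0] in wanted]
```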
My data:

Send_Data    Error  All_Request
2018-01-02   0      10
2018-01-03   1      60
2018-01-04   2      30
2018-01-05   0      20
....         ...    ...
2021-02-01   5      20

I want to make a chart from this data. The x-axis is the number of weeks elapsed; the y-axis is the error rate during that week.

(Image of the desired chart.)

The data used for the first week is 2018-01-03 -> 2018-01-09; its y-value is the total Error divided by the total All_Request over that period. The second week uses 2018-01-10 -> 2018-01-16, and so on. I have tried many methods, but none of them achieve this.
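One thing worth pinning down before writing the SPL: in Splunk, `bin _time span=1w` aligns to calendar weeks, so windows anchored at 2018-01-03 need an offset or an eval-computed week number. A Python sketch of the intended aggregation, using the sample rows above (the function name and anchoring are assumptions about what you want):

```python
from datetime import date

def weekly_error_rate(rows, start=date(2018, 1, 3)):
    """rows: (day, errors, requests) tuples. Group rows into consecutive
    7-day windows starting at `start` and return week -> error rate,
    where the rate is total errors / total requests in that window."""
    buckets = {}
    for day, errors, requests in rows:
        if day < start:
            continue  # data before the first window is ignored
        week = (day - start).days // 7 + 1
        e, r = buckets.get(week, (0, 0))
        buckets[week] = (e + errors, r + requests)
    return {w: e / r for w, (e, r) in sorted(buckets.items()) if r}
```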
Hi All, while registering for Splunk I mistakenly entered the wrong company name, so now I cannot access the Splunk Support services. Since I can only use my company email for this (which is already registered), and re-registration with the same email id is not possible, I need to change the company name on my account. Any suggestions?
Hello Team, I am just learning Splunk and Python, so forgive the silly question. I made a Python script:

import requests

data = {
    'username': 'admin',
    'password': 'password'
}

response = requests.post('https://ip/services/auth/login', data=data, verify=False)

and am trying to run it with:

./splunk cmd python curl1.py

but there is an SSL warning:

/opt/splunk/lib/python3.7/site-packages/urllib3/connectionpool.py:986: InsecureRequestWarning: Unverified HTTPS request is being made to host '10.22.22.207'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning,

Please help me understand how I can fix it. Thanks
Hi all, I've seen similar use cases on the forum but haven't found any that apply here. I've got a panel that lists and compares data with a domain whitelist, with an option to add to it. The idea is to refresh the panel values and remove a domain from the list once it has been added to the whitelist. It works when only one domain is added, or when all domains are added with a page refresh in between, but it does not work if I click and add domains one by one without refreshing the page. Can anyone help? Thanks

<form>
  <label>The Test Panel</label>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-72h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="input_white_domain_for_add">
      <label>input_white_domain_for_add</label>
      <search>
        <query>| inputlookup wlist_domain.csv | eval domain="$add_domain$" | dedup domain | fields domain | outputlookup append=t wlist_domain.csv</query>
        <earliest>0</earliest>
        <latest></latest>
      </search>
    </input>
    <input type="text" token="value1" depends="$hidden$">
      <label>value1</label>
      <default>rename comment as run</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| inputlookup domain_list.csv | stats count by domain | sort -count | search NOT [| inputlookup wlist_domain.csv | dedup domain | table domain] | eval ADD_TO_WLIST="add_to_whitelist" | rename domain As Domain | table Domain count ADD_TO_WLIST | $value1$</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <condition field="ADD_TO_WLIST">
            <eval token="add_domain">$row.Domain$</eval>
            <set token="value1">rename comment as run1</set>
          </condition>
        </drilldown>
      </table>
    </panel>
  </row>
</form>