All Topics


Hello Team, We need to integrate Puppet with Splunk so that security-related events are pushed to our Splunk SIEM. Is it good to use HEC tokens for this, or is syslog fine?
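For what it's worth, HEC is often preferred over plain syslog for this kind of integration because it gives you per-source tokens, HTTPS transport, and optional acknowledgement. A minimal sketch of the receiving side in inputs.conf (the token name, GUID, index, and sourcetype below are placeholders, not values from the question):

    [http]
    disabled = 0
    port = 8088

    # one token per sending system makes revocation and attribution easier
    [http://puppet_events]
    disabled = 0
    token = 00000000-0000-0000-0000-000000000000
    index = puppet
    sourcetype = puppet:report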
Splunk DB Connect: Why am I getting "The value is not set for the parameter number 1" when updating the SQL query in the Edit Input panel?

ERROR: com.microsoft.sqlserver.jdbc.SQLServerException: The value is not set for the parameter number 1.
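This JDBC error usually means the statement contains a ? placeholder that was never bound. In a DB Connect rising-column input, DB Connect binds the stored checkpoint value to the ?, so a sketch of the shape such a query normally takes (table and column names here are hypothetical):

    SELECT *
    FROM dbo.security_events
    WHERE event_id > ?        -- DB Connect substitutes the checkpoint value here
    ORDER BY event_id ASC

If the edited query keeps the ? but the input is no longer configured as rising (or the checkpoint was cleared), nothing is bound to parameter 1 and the driver raises exactly this exception.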
I have a JSON payload that's ingested through a REST API input on a heavy forwarder, with the following configuration in props.conf (on the heavy forwarder, not on the indexer):

    [json_result]
    INDEXED_EXTRACTIONS = json
    KV_MODE = none
    DATETIME_CONFIG = CURRENT
    SHOULD_LINEMERGE = false
    TRUNCATE = 200000

The ensuing event in Splunk looks like this (minified):

    {"totalCount":3,"nextPageKey":null,"result":[{"metricId":"builtin:synthetic.http.resultStatus","data":[{"dimensions":["HTTP_CHECK-02B087D58EC18C33","SUCCESS","SYNTHETIC_LOCATION-2CD023FA5F455E28"],"dimensionMap":{"Result status":"SUCCESS","dt.entity.synthetic_location":"SYNTHETIC_LOCATION-2CD023FA5F455E28","dt.entity.http_check":"HTTP_CHECK-02B087D58EC18C33"},"timestamps":[1639254360000],"values":[1]},{"dimensions":["HTTP_CHECK-02B087D58EC18C33","SUCCESS","SYNTHETIC_LOCATION-833A207E28766E49"],"dimensionMap":{"Result status":"SUCCESS","dt.entity.synthetic_location":"SYNTHETIC_LOCATION-833A207E28766E49","dt.entity.http_check":"HTTP_CHECK-02B087D58EC18C33"},"timestamps":[1639254360000],"values":[1]},{"dimensions":["HTTP_CHECK-02B087D58EC18C33","SUCCESS","SYNTHETIC_LOCATION-1D85D445F05E239A"],"dimensionMap":{"Result status":"SUCCESS","dt.entity.synthetic_location":"SYNTHETIC_LOCATION-1D85D445F05E239A","dt.entity.http_check":"HTTP_CHECK-02B087D58EC18C33"},"timestamps":[1639254360000],"values":[1]}]}]}

The "dimensionMap" portions are what I'm trying to extract from the payload; basically, it's three fields ("Result status", "dt.entity.synthetic_location" and "dt.entity.http_check") and their associated values. I'd like to have three events created from the payload, one event for each occurrence of the three fields, with the fields searchable in Splunk. I've tried this approach in props.conf to get what I'm looking for...

    [json_result]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = },
    DATETIME_CONFIG = CURRENT
    TRUNCATE = 0
    SEDCMD-remove_prefix = s/{"totalCount":.*"nextPageKey":.*"result":\[{"metricId":.*"data":\[//g
    SEDCMD-remove_dimensions = s/{"dimensions":.*"dimensionMap"://g
    SEDCMD-remove_timevalues = s/,"timestamps":.*"values":.*}//g
    SEDCMD-remove_suffix = s/\]}\]}//g

...but I'm only getting one set of fields to show up as an event in Splunk, and the fields aren't showing up as "interesting fields" in the left navbar (possibly because the props.conf is not on the indexer?). Any assistance would be greatly appreciated.

UPDATE: I referenced this post, which is pretty close to what I'm trying to accomplish: https://community.splunk.com/t5/Getting-Data-In/How-to-split-a-json-array-into-multiple-events-with-separate/m-p/139851 The format of the JSON payload cited in that post is different from the format of the payload I'm using, though, so I'm guessing that some additional logic would be necessary to accommodate my format.
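One detail worth noting: LINE_BREAKER only splits events when its regex contains a capturing group, which marks the text consumed as the event boundary; a bare }, has no group, which would explain getting a single event. A rough, untested sketch of that idea for this payload (the boundary regex is an assumption to tune against the real data, and the SEDCMD cleanup would still be needed on top):

    [json_result]
    SHOULD_LINEMERGE = false
    # the first capture group is discarded as the boundary between events
    LINE_BREAKER = (\},)\{"dimensions"
    DATETIME_CONFIG = CURRENT
    TRUNCATE = 0

Also, parsing settings like LINE_BREAKER and SEDCMD take effect on the first full Splunk instance the data passes through (the heavy forwarder here, not the indexer), while the "interesting fields" list comes from search-time extraction, so the KV_MODE/extraction config on the search head matters for that part.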
Hi, I get a log in the below format of a JSON object:

    message{
        Dashboard{
            status: SUCCESS
            operationName: gettingResult
        }
    }

In the above logs I get a value of SUCCESS/FAILURE for status. My requirement is to calculate total, totalSuccess and totalFailure grouped by operationName. I tried the below query but it is not working out:

    ......messgae.Dashboard.status=* | stats count as total, count(eval(messgae.Dashboard.status=SUCCESS)) as totalSuccess, count(eval(messgae.Dashboard.status=FAILURE)) as totalFailure by messgae.Dashboard.operationName

I am getting a value for total but not for totalSuccess/totalFailure.
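Two things stand out in the attempted query: the field is spelled messgae rather than message, and inside eval a dotted field name needs single quotes while a string literal needs double quotes, otherwise SUCCESS is treated as a field reference and the eval never matches. A sketch under the assumption that the extracted field is really message.Dashboard.status:

    ... message.Dashboard.status=*
    | stats count AS total,
            count(eval('message.Dashboard.status'=="SUCCESS")) AS totalSuccess,
            count(eval('message.Dashboard.status'=="FAILURE")) AS totalFailure
      by message.Dashboard.operationName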
Hi, this question is related to CVE-2021-44228. As far as we could see/scan, Splunk binaries, including the Universal Forwarder ones, do not rely on or use the Log4j library, but we wanted to get some sort of "official confirmation" of this. Thanks if you can point to any public document regarding this and regarding Splunk's potential exposure to this particular CVE. Best Regards.
Hello! Could somebody please suggest whether it is possible to do a map search more effectively? What I am trying to do:

1. There are events with client transactions. A huge list (thousands every second).
2. I search for transaction chains which are suspicious by some conditions over the last hour.
3. If a transaction chain is suspicious, I run a longer search (last 3 weeks), because some operations do not fit into the last hour. I basically do the same calculations, but over a longer time interval and with stricter conditions.

The following search works, but it takes several minutes and is sometimes cancelled due to timeout:

    <MY_SEARCH>
    | stats first(orgCode) AS orgCode first(accountId) AS accountId sum(amount) AS totalAmount sum(controlAmount) AS totalControlAmount by transactionChainRef
    | where totalControlAmount>0 and totalControlAmount<totalAmount
    | map search="search <MY_SEARCH> AND message=\"*transactionChainRef\\\":$transactionChainRef$*\" earliest=-3w
        | eval orgCode=$orgCode$
        | eval accountId=$accountId$
        | eval totalControlAmount=$totalControlAmount$
        | stats first(orgCode) AS orgCode first(accountId) AS accountId sum(amount) AS totalAmount first(totalControlAmount) AS totalControlAmount by transactionChainRef
        | where totalControlAmount<totalAmount " maxsearches=9999

Unfortunately I cannot query for the last 3 weeks right away, because there will still be transaction chains which may go outside of the 3 weeks (a chain finished, say, 2.5 weeks ago; its start may be 5.5 weeks ago). My current idea is to run the map search in chunks, for example by 100 transactionChainRefs. Thanks in advance!
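Since map runs one follow-up search per input row, it is usually the bottleneck here. One common restructuring is to invert it: find the suspicious chain refs in a subsearch over the last hour, then run a single 3-week search filtered to just those refs. An untested sketch (subject to the default subsearch limits of roughly 10,000 results and 60 seconds runtime, so chunking may still be needed):

    <MY_SEARCH> earliest=-3w
        [ search <MY_SEARCH> earliest=-1h
          | stats sum(amount) AS totalAmount sum(controlAmount) AS totalControlAmount by transactionChainRef
          | where totalControlAmount>0 AND totalControlAmount<totalAmount
          | fields transactionChainRef ]
    | stats first(orgCode) AS orgCode first(accountId) AS accountId sum(amount) AS totalAmount sum(controlAmount) AS totalControlAmount by transactionChainRef
    | where totalControlAmount>0 AND totalControlAmount<totalAmount

This assumes transactionChainRef is an extracted field the subsearch can render as field=value terms; if the ref only appears inside the raw message text, the subsearch would instead need to emit raw search terms (e.g. via | rename transactionChainRef AS search) with wildcards.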
Hi everyone, I installed Splunk 8.2.2.1 and then installed the Splunk Stream 801 add-on, but I can't find the streamfwd.conf file anywhere, nor a Splunk_TA_stream directory. Has anybody faced this problem? Do I need to do anything else to receive NetFlow?
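For reference, Splunk_TA_stream normally ships inside the Splunk App for Stream package and has to be deployed onto the instance doing the capture, and streamfwd.conf usually has to be created by hand under Splunk_TA_stream/local. A minimal NetFlow sketch based on the documented streamfwd.conf settings (the IP and port are placeholders; verify against your Stream version's docs):

    [streamfwd]
    ipAddr = 127.0.0.1
    # listen for NetFlow on UDP 9995
    netflowReceiver.0.ip = 0.0.0.0
    netflowReceiver.0.port = 9995
    netflowReceiver.0.decoder = netflow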
externally MHN Server GCP Firewall rule
When inputting a custom field of date/time type in a container, is there any way to provide input hints and input validation? Currently only text/select are supported for custom fields in 4.10.x, and it is not real-time if done by a playbook or some action.
Failure to open the Phantom (4.10.x) GUI after setting up warm standby; there is no error message when setting up warm standby and starting Phantom. Any further troubleshooting steps or logs to check?
Hi there, I have 2 separate queries that I built using rex.

1. This query captures the log-on and log-off status of the service.

Query:

    index=windows_log host=abc-05-hiddencam logged*
    | rex field=_raw "(?<Date>\w{3}\s+\d+ \d+:\d+:\d+)\s(?<hostname>\w+-\w+-\w+).+Audit\S+\s\w+\s\w+\s(?<status>.+).\s\s\s\sSub.*"
    | eval "Hidden Cam Monitoring" = Date + " : " + hostname + " " + status
    | table "Hidden Cam Monitoring"

Sample output:

    Dec 10 13:35:12 : abc-05-hiddencam successfully logged on
    Dec 10 06:19:24 : abc-05-hiddencam successfully logged on
    Dec 10 06:17:01 : abc-05-hiddencam logged off
    Dec 10 06:11:55 : abc-05-hiddencam logged off

2. This query captures the service entering the running or stopped state.

Query:

    index=windows_log host=abc-05-hiddencam entered*
    | rex field=_raw "(?<Date>\w{3}\s+\d+ \d+:\d+:\d+)\s(?<hostname>\w+-\d+-\w+).*(?<status>service\s\w+\s\w+\s\w+\s\w+)"
    | eval "Hidden Cam Monitoring" = Date + " : " + hostname + " " + status
    | table "Hidden Cam Monitoring"

Sample output:

    Dec 10 16:10:04 : abc-05-hiddencam service entered the stopped state
    Dec 10 15:31:31 : abc-05-hiddencam service entered the stopped state
    Dec 10 15:28:19 : abc-05-hiddencam service entered the running state
    Dec 10 15:28:18 : abc-05-hiddencam service entered the running state

My issue is, I want to combine the above queries into a single query and get the output in one table, as shown below.

3. Expected sample results:

    Dec 10 13:35:12 : abc-05-hiddencam successfully logged on
    Dec 10 16:10:04 : abc-05-hiddencam service entered the stopped state
    Dec 10 06:19:24 : abc-05-hiddencam successfully logged on
    Dec 10 15:28:18 : abc-05-hiddencam service entered the running state
    Dec 10 06:17:01 : abc-05-hiddencam logged off
    Dec 10 15:28:19 : abc-05-hiddencam service entered the running state
    Dec 10 06:11:55 : abc-05-hiddencam logged off
    Dec 10 15:31:31 : abc-05-hiddencam service entered the stopped state

(The actual results will differ from the above based on the timestamps and the events; what I mean is that the results come mixed together in a single table as and when they take place.) Thank you heaps in advance.
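A sketch of one way to merge these: search both event types at once, apply both rex extractions with distinct capture names (each regex simply fails to match the other event type), and coalesce the results. This assumes the two regexes from the question work individually:

    index=windows_log host=abc-05-hiddencam (logged* OR entered*)
    | rex field=_raw "(?<Date>\w{3}\s+\d+ \d+:\d+:\d+)\s(?<hostname>\w+-\w+-\w+).+Audit\S+\s\w+\s\w+\s(?<status>.+).\s\s\s\sSub.*"
    | rex field=_raw "(?<Date2>\w{3}\s+\d+ \d+:\d+:\d+)\s(?<hostname2>\w+-\d+-\w+).*(?<status2>service\s\w+\s\w+\s\w+\s\w+)"
    | eval "Hidden Cam Monitoring" = coalesce(Date, Date2) + " : " + coalesce(hostname, hostname2) + " " + coalesce(status, status2)
    | table "Hidden Cam Monitoring"

Adding | sort - _time before the table would interleave the two event types chronologically.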
Batch input is described when discussing file ingestion using inputs.conf. I do not see it mentioned in "Monitor files and directories in Splunk Enterprise with Splunk Web" and cannot find a button for it in the GUI. Is there an option to do so?
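As far as I know, batch is a conf-only input type with no Splunk Web equivalent, presumably because it destructively consumes files. A minimal sketch in inputs.conf (the path and index are placeholders; move_policy = sinkhole is required and deletes each file after it is indexed):

    [batch:///var/log/import/*.log]
    move_policy = sinkhole
    disabled = false
    index = main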
Our SHs & indexers are clustered. Not sure if this has to do with AWS going down yesterday, but I noticed the error early today and it had a red exclamation on it by the end of the day. Restarting the cluster master did not help. Thanks & happy holidays.
Hi, I am new to SPL and have figured out how to do one rex field extract, like this:

    index=xxxxx "PUT /app/1/projects"
    | rex field=_raw "HTTP\/1\.1\" (?P<Status_Code>[^\ ]*)"

This is from the following line in the search results:

    HTTP/1.1" 200 44 188

This gives me the status code, and I can sort them and report (for example 200, 201, 400 or 500). I need to use the last field (the 2- or 3-digit one) to get the speed. How would I do that? I am stuck with the formatting. Thanks in advance.
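A sketch extending the same rex to capture all three trailing numbers in one pass (that the middle value is a size and the last one really is the speed are assumptions based on the question):

    index=xxxxx "PUT /app/1/projects"
    | rex field=_raw "HTTP\/1\.1\" (?P<Status_Code>\d+) (?P<Size>\d+) (?P<Speed>\d+)"
    | table Status_Code Size Speed

From there the Speed field can be sorted or aggregated like any other extracted field, e.g. | stats avg(Speed).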
Hello community, I have an issue in my environment and I have been trying for a while to catch the root cause, and I feel I am not even close. I am receiving an IOWait health alert frequently, and I don't know where it comes from. I checked %iowait at the OS level and it never goes above 0.02, but the IOWait alert keeps coming for search heads and indexers alike. I checked the resources and there is no issue there. I also checked the CPU, both by running a search and via the Monitoring Console, and there is no heavy CPU use over the last 4 hours. So I am really confused; I don't know if I am missing something. Version is 8.2.2, clustered environment. Can you please help me with this? Kind regards.
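If the OS-level numbers really are healthy, one avenue is the splunkd health report thresholds themselves, which live in health.conf. A sketch of raising the iowait indicator thresholds (the feature and indicator names below are from the health.conf spec as I understand it for 8.2, and the values are placeholders to tune, so please verify against your version's documentation):

    [feature:iowait]
    indicator:avg_cpu__max_perc_last_3m:yellow = 50
    indicator:avg_cpu__max_perc_last_3m:red = 60
    indicator:single_cpu__max_perc_last_3m:yellow = 90
    indicator:single_cpu__max_perc_last_3m:red = 95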
We have 2 inputlookup files, one with All-users and another with Disabled-users. Is there a way to remove the records from the All-users inputlookup file if the user matches/exists in the Disabled-users file, and/or, if needed, generate a new outputlookup file with the new results? Both files have the same field name, sAMAccountName. We've tried dedup and append=f with no luck so far. We also tried uniq, which I think should have returned only unique records, but unfortunately we could not get it to work. Thanks in advance for your help.
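A sketch of the usual pattern for this, assuming the files are named All-users.csv and Disabled-users.csv (adjust to the real lookup names); the subsearch turns the disabled list into a NOT filter:

    | inputlookup All-users.csv
    | search NOT [ | inputlookup Disabled-users.csv | fields sAMAccountName ]
    | outputlookup Active-users.csv

One caveat: subsearches are capped (10,000 results by default), so if Disabled-users is larger than that, a lookup-based variant avoids the limit:

    | inputlookup All-users.csv
    | lookup Disabled-users.csv sAMAccountName OUTPUT sAMAccountName AS disabled
    | where isnull(disabled)
    | fields - disabled
    | outputlookup Active-users.csv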
Hi all, this is a sample Azure NSG log ingested from Azure Log Analytics:

    "aaaedbb3-407b-4d6c-9f11-dc4640e9acf4", "Azure", "", "", "2021-12-10T19:06:17.001Z", "", "", "", "", "", "", "", "", "", "", "2", "2021-12-10T18:00:00Z", "2021-12-10T19:00:00Z", "2021-12-10T18:09:01Z", "2021-12-10T18:36:26Z", "S2S", "", "10.115.1.77", "34.206.244.234", "", 54443, "T", "Unknown", "O", false, "A", "d88af0da-cfee-4f3e-bb50-58341fe4e132/c-hal-it-ss-prod-eus-rg/cap-subnet1-nsg", "0|cap_mgmt_to_hal|O|A|4", "cap_mgmt_to_hal", "UserDefined", "d88af0da-cfee-4f3e-bb50-58341fe4e132", "", "eastus", "", "c-halazops-connectivity-eus-criticalassetprotection-rg/np1caps009v-nic1", "c-halazops-connectivity-eus-criticalassetprotection-rg/np1caps009v-nic1", "", "c-halazops-connectivity-eus-criticalassetprotection-rg/np1caps009v", "c-halazops-connectivity-eus-criticalassetprotection-rg/np1caps009v", "", "c-hal-it-ss-prod-eus-rg/c-hal-it-ss-prod-eus-vnet1/cap-subnet1", "", "", "", "", "", "", "", "", "d88af0da-cfee-4f3e-bb50-58341fe4e132/c-hal-it-ss-prod-scus-rg/c-hal-it-ss-prod-scus-er2", "AzurePrivatePeering", "d88af0da-cfee-4f3e-bb50-58341fe4e132/c-hal-it-ss-prod-eus-rg/c-hal-it-ss-prod-eus-scus-conn2", "", "", "", 0, 0, 4, 0, 4, 39, 34, 26863, 4706, 4, "", "", "", null, "", "", "", "", "", "", "", null, "", "", "", "", "", "", "ExpressRoute", null, "", null, "", "", null, "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "c-hal-it-ss-prod-eus-rg/c-hal-it-ss-prod-eus-vnet1/cap-subnet1", "", "", "", "", "", "", null, null, "", null, "", "", "", "", null, null, "", "", "", null, null, "", "", null, null, "", null, "", "", "", null, "", "", "", "", "eastus", "", "FlowLog", "d88af0da-cfee-4f3e-bb50-58341fe4e132", "", "2021-12-10T19:06:11.622Z", "", "", "", "", "", "", "", null, "", "", "", null, "", "", "", "", "", "", null, "00-0D-3A-1A-C0-F7", "", "", "", "", null, "", "", null, null, null, null, "", "", "AzureNetworkAnalytics_CL", ""

Can anybody please help me parse this into meaningful data?
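Since the record is a flat comma-separated row, one hedged starting point is a search-time delimiter extraction in transforms.conf; the catch is that FIELDS must list every column in the exact order of the AzureNetworkAnalytics_CL table schema, which isn't recoverable from the sample alone. The sourcetype and the field names below are illustrative placeholders, not the real schema:

    props.conf:
    [azure:nsg:flow]
    REPORT-nsg_fields = azure_nsg_flow_fields

    transforms.conf:
    [azure_nsg_flow_fields]
    DELIMS = ","
    # Placeholder names; replace with the full AzureNetworkAnalytics_CL column list, in order
    FIELDS = record_id, source_system, col3, col4, time_generated

Pulling the column list from the Log Analytics table definition (or from the add-on that exported it) is the step that makes this usable.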
Hi, hoping to get some more insight on my current problem, which is the following: I am using a where clause to capture data for a specific field value. If the specific value does not exist for the current time period, I get the following message as a result: 'No results found. Try expanding the time range.' Instead of the no-results message showing up, I would like to display something else. The following is an example:

    index=sample_idex sourcetype="smf001"
    | fields _time, FIELD
    | lookup sample_lookup.csv system as FIELD output sample_env
    | eval e=if(in(sample_env, "env"), 1, 0)
    | where e=1
    | where FIELD=="value"
    | table FIELD

I was thinking of doing something like the following, with proper syntax:

    | eval where FIELD=="value" else
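A common pattern for a placeholder row is appendpipe, which runs over the current result set and can add a row when that set is empty. A sketch on top of the search from the question (the placeholder message text is made up):

    index=sample_idex sourcetype="smf001"
    | fields _time, FIELD
    | lookup sample_lookup.csv system as FIELD output sample_env
    | eval e=if(in(sample_env, "env"), 1, 0)
    | where e=1
    | where FIELD=="value"
    | table FIELD
    | appendpipe [ stats count | where count=0 | eval FIELD="No matching events in this time range" | fields FIELD ]

When the main search returns rows, the subpipeline's where count=0 discards its one summary row; when there are no rows, stats count still emits count=0 and the placeholder row survives.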
Hi all, we have Splunk on-prem and have recently started using Duo for authentication. We are interested in knowing whether anyone has configured SSO for their Splunk, and how they did it. The Duo documentation points at having to install a Duo Access Gateway; at the moment we have an AD sync to Duo but no D.A.G. We also have access to Azure and wondered if we could use that as an identity provider instead? Thanks!
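Splunk Enterprise does support SAML SSO against a generic identity provider, which is how an Azure AD setup is typically wired. A rough sketch of the authentication.conf shape (the stanza name, entity ID, and URL are placeholders; the real values come from whichever IdP you register Splunk with):

    [authentication]
    authType = SAML
    authSettings = saml_idp

    [saml_idp]
    entityId = splunk-sh-prod
    idpSSOUrl = https://login.example.com/saml2/sso
    signAuthnRequest = true
    signedAssertion = true

Whether Duo fronts this via a DAG or Azure AD acts as the IdP directly is a policy choice; Splunk only sees the SAML assertions either way.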
Hi @Anonymous / @Anonymous, I have recently started using your "File/Directory Information Input" app. I believe that it does not work with Splunk's Python 3, which is the default version in Splunk 8. Is this something that you still work on and maintain? I have been able to get it working if I set python.version = python2 in Splunk's system server.conf; it would be better, though, if I could set this within the app rather than Splunk-wide. In general it has been working for me when I use it within a UF that has the latest version of Python 2, so 2.7.5-89 works on Linux. It does have some issues around the file_filter when filtering; this again worked closer to expected for me once Python 2 was patched to the latest minor release, 2.7.5-89. But when it works it is great and does exactly what I want, so thank you very much. Regards
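On the scoping question: as I understand it, since Splunk 8.0 python.version can also be set per stanza in inputs.conf for scripted and modular inputs, which keeps the override inside the app instead of server.conf. A sketch, with the scheme and input name guessed for illustration (use the app's real stanza name):

    # inputs.conf inside the app; "dirinfo" is a hypothetical scheme name
    [dirinfo://my_directory_input]
    python.version = python2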