All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello.  I have a playbook that must be the only running instance of that playbook.  I can't seem to find any "lock" functionality to facilitate this.  Does anyone know if any sort of lock functionality exists out of the box?  Thanks in advance!
Hello, I have a log file with dates occurring inside the lines (not just at the beginning of the line). Splunk is creating a separate event each time the date/timestamp is encountered, not just at the beginning of the line. I've done a lot of research on these forums and have experimented extensively with props.conf inside my etc/system/local directory (which I believe has the highest priority). I've tried using "LINE_BREAKER" with a regular expression (date/timestamp at the beginning of the line) and "SHOULD_LINEMERGE" set to false, and have also tried "BREAK_ONLY_BEFORE", "TIME_PREFIX", "TIME_FORMAT", etc. Any time I've made these changes and restarted Splunk, I can see them when I use the btool command to check the props settings, so they do seem to be picked up. However, in the GUI my log files continue to break at any date/timestamp encountered. Perhaps there is something else wrong with my settings. Here's what my inputs.conf looks like, along with one of the many things I've tried in props.conf in the same folder.

inputs.conf entry:

[monitor:///path_to_log/log_file_name*.log]
disabled = 0
sourcetype = log_file_name

props.conf entry (just one of many settings I've tried):

[log_file_name]
BREAK_ONLY_BEFORE_DATE = false
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
sourcetype = log_file_name

Any suggestions would be appreciated.
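For what it's worth, a minimal props.conf sketch for this kind of breaking problem might look like the following; the stanza name assumes the sourcetype really is log_file_name, and the regex assumes the timestamp format shown. Two caveats worth checking: line-breaking props must be deployed on the instance that first parses the data (an indexer or heavy forwarder, not a universal forwarder or a search head), and events that are already indexed will not be re-broken. Also note that sourcetype = ... is an inputs.conf setting and has no effect inside a props.conf stanza.

```ini
# props.conf on the parsing tier (indexer/heavy forwarder) -- a sketch, not a verified fix
[log_file_name]
SHOULD_LINEMERGE = false
# break only before a timestamp that starts a physical line
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```

The (?=...) lookahead keeps the timestamp attached to the new event while the capture group consumes only the newline.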
I'm trying to look for senders that don't contain values from the lookup mimics.csv. Examples of values in the lookup are:

*google.com*
*yahoo.com*

I've already set WILDCARD(sender) in the definition. Below is the search I'm trying to run:

index=test
| search sender IN [inputlookup mimics.csv]
| table _time,mid,src_ip,sender,subject,recipient

But I keep getting this error:

Error in 'search' command: Unable to parse the search: Right hand side of IN must be a collection of literals.'(sender = "*google.com*")' is not a literal.
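One commonly suggested workaround, offered as a sketch rather than a tested fix: IN requires literal values, but a subsearch piped through format returns an expandable boolean expression, and wildcards in that expression do match in the base search. Assuming the lookup's column is named sender:

```spl
index=test NOT [ | inputlookup mimics.csv | fields sender | format ]
| table _time,mid,src_ip,sender,subject,recipient
```

Dropping the NOT would instead keep only senders that match a lookup entry.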
Hi, is it possible to integrate Splunk with Kolide, so I could see its event logs? Thank you
After upgrading to Splunk 7.3.6 and DB Connect 3.x, a query is throwing the error:

"SQL level 1 ORA-01882: timezone region not found"

I have already tried setting the following options in the JRE:

-Doracle.jdbc.timezoneAsRegion=false
or
-Duser.timezone=GMT

The connection works perfectly, but when I try to query the data, it shows the region error. I have also tried the following JDBC versions:

$ java -jar ojdbc6.jar -getversion
Oracle 12.1.0.2.0 JDBC 4.0 compiled with JDK6 on Mon_Jun_30_11:28:06_PDT_2014
#Default Connection Properties Resource
#Mon Jan 11 14:28:28 EST 2021

$ java -jar ojdbc7.jar -getversion
Oracle 12.1.0.1.0 JDBC 4.1 compiled with JDK7 on Thu_Apr_04_15:09:24_PDT_2013
#Default Connection Properties Resource
#Thu Jan 28 15:28:52 EST 2021

$ java -jar ojdbc8.jar -getversion
Oracle 12.2.0.1.0 JDBC 4.2 compiled with javac 1.8.0_91 on Tue_Dec_13_06:08:31_PST_2016
#Default Connection Properties Resource
#Thu Jan 28 15:29:24 EST 2021

Java version:

$ java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)

Can anyone please take a look into this? Any help is greatly appreciated. Thanks in advance.
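For reference, a hedged suggestion only: in DB Connect 3.x the JVM options are typically set from the app itself (the JVM Options field under the app's Settings page) rather than in a global JRE config, and a commonly reported workaround for ORA-01882 is to set both flags together rather than one or the other:

```
-Duser.timezone=GMT -Doracle.jdbc.timezoneAsRegion=false
```

Whether the options actually take effect can be checked in the DB Connect task server logs after a restart.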
Hey Splunkers!

I'm very new to Splunk and I really need your help, because I couldn't find a proper solution searching through the topics.

I determine the TotalCount of events containing Field1="Wirecheck" across all Results ("Pass", "Reject", "Warn"). The percentage of the events with the result "Pass" is determined as well:

index="IndexTest" Field1="Wirecheck"
| stats count as "TotalCountWirecheckField2" by Field2
| appendcols [search index="IndexTest" Field1="Wirecheck" Result="Pass"
    | stats count as "ResultCountPassWirecheckField2" by field2]
| eval percent=(ResultCountPassWirecheckField2/TotalCountWirecheckField2)*100
| eval RoundIntegerWirecheckField2 = round(percent, 1)

I would like to visualize the "Pass" events using column charts. In addition to the height of the columns, which is represented by the TotalCountPass, I'd like to show a range represented by the color of the column. Therefore I have separated out new fields depending on the pass percentage:

...
| eval FieldColorDetectionWirecheckGreen = if(RoundIntegerWirecheckField2>=95, ResultCountPassWirecheckField2, 0)
| eval FieldColorDetectionWirecheckYellow = if(RoundIntegerWirecheckField2<95 AND RoundIntegerWirecheckField2>=85, ResultCountPassWirecheckField2, 0)
| eval FieldColorDetectionWirecheckRed = if(RoundIntegerWirecheckField2<85 AND RoundIntegerWirecheckField2>=0, ResultCountPassWirecheckField2, 0)
| stats values(FieldColorDetectionWirecheckGreen) values(FieldColorDetectionWirecheckYellow) values(FieldColorDetectionWirecheckRed) by Field2

Depending on the percentage, the corresponding column is set equal to the TotalCountPass:

<option name="charting.fieldColors">{"values(FieldColorDetectionWirecheckGreen)":0x009900, "values(FieldColorDetectionWirecheckYellow)":0xFF9900, "values(FieldColorDetectionWirecheckRed)":0xFF0000}</option>

I'm not satisfied with my solution, because the width of the "zero" columns is still visible.
Question 1a: Is there a way to remove the columns with the value "0"?
Question 1b: Or do you know a way to improve my search so that it avoids this problem?

Question 2: Is there a way to add the value (RoundIntegerWirecheckField2) to the information box (I don't know the exact name for the popup field)?

I hope you can understand the explanation of my problem. Thanks in advance, Nanosam
Logs have been working fine until this week; now I get the error:

ERROR pid=15289 tid=MainThread file=base_modinput.py:log_error:307 | _Splunk_ Error getting event hub data for hub: insights-logs-signinlogs, resource: 3. Detail: The service was unable to process the request; please retry the operation. For more information on exception types and proper exception handling, please refer to http://go.microsoft.com/fwlink/?LinkId=761101 TrackingId:abe05384f2aa4f528eaad64feccc1e53_G8, SystemTracker:gateway5, Timestamp: ErrorCodes.InternalServerError: The service was unable to process the request; please retry the operation. For more information on exception types and proper exception handling, please refer to http://go.microsoft.com/fwlink/?LinkId=761101 TrackingId:abe05384f2aa4f528eaad64feccc1e53_G8, SystemTracker:gateway5, Timestamp:

Also seeing these errors around the same time:

ERROR pid=48797 tid=MainThread file=base_modinput.py:log_error:307 | _Splunk_ Error getting event hub data for hub: insights-logs-auditlogs, resource: 2. Detail: ('Connection aborted.', BadStatusLine("''",))

This is happening for multiple hubs. Azure App v2.1.0, Splunk v7.3.3. @jconger !
Hello, good afternoon. We are receiving a "DBX Server did not respond within 310 seconds, please make sure it is started and listening" error. Attached is a screenshot of the error message. What is the best approach to resolving this? Regards, Max
I have completed installing Splunk on a server. My question is: we have several servers and quite a few PCs. How would I go about getting the data from all of the PCs to show up on the Splunk server? Is there something I need to install on all of the PCs to pull the information? Thanks
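The usual pattern is to install a Splunk Universal Forwarder on each PC and point it at the receiving port of the Splunk server (which must have receiving enabled, conventionally TCP 9997). A minimal sketch; the server name and the chosen inputs below are placeholders:

```ini
# outputs.conf on each PC's Universal Forwarder
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = splunk-server.example.com:9997

# inputs.conf on the same forwarder -- e.g. Windows event logs
[WinEventLog://Application]
disabled = 0

[WinEventLog://System]
disabled = 0
```

At the scale of many PCs, a deployment server is the usual way to push this configuration out.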
Can someone tell me how to create an index that can produce an email whenever ISE posturing failed?   Thanks
Hi, I have a join that is failing to pull through all users. I can see that the issue is due to case sensitivity, so I tried to convert both sides of the join to lower case, but this hasn't worked for all events. My SPL is as follows:

index=prod_end_user_services sourcetype=csvAOVPN
| dedup UserName
| eval UserName=lower(UserName)
| rex field=UserName "(?<UserID>.*)@corp.*"
| replace *.co.uk WITH *.com IN UserName
| join UserName type=left
    [ search index=prod_service_now sourcetype=snow:sys_user_list
    | fields dv_email dv_user_name sys_id
    | dedup sys_id
    | rename dv_email as UserName
    | eval UserName=lower(UserName)
    | table UserName dv_user_name ]
| fillnull value=NULL
| eval dv_user_name=if(match(dv_user_name,"NULL"), UserID, dv_user_name)
| table UserName dv_user_name UserID

Is there something glaringly obvious that I have missed? Thanks
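A couple of hedged guesses at what often bites this pattern: the .co.uk-to-.com replace is applied only on the VPN side, and stray whitespace will defeat an otherwise-matching join. A sketch that normalizes both sides identically before joining (trim() and the second replace are the only new pieces):

```spl
index=prod_end_user_services sourcetype=csvAOVPN
| eval UserName=lower(trim(UserName))
| replace *.co.uk WITH *.com IN UserName
| join UserName type=left
    [ search index=prod_service_now sourcetype=snow:sys_user_list
    | rename dv_email as UserName
    | eval UserName=lower(trim(UserName))
    | replace *.co.uk WITH *.com IN UserName
    | table UserName dv_user_name ]
```

If some dv_email values use a different domain entirely, no amount of case folding will make them match, so comparing the distinct domains on each side is worth a look too.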
I have a Splunk Connect instance on my OpenShift cluster that's currently sending all logs to a logging index. There's no special configuration, and the only tweaking done after installation was pointing it at the right Splunk instance and applying the HEC token value. Is there a way to set the config map so that all logs from a namespace (i.e. 'specificApplication') go to a given index? Here's a snippet of what the current config map for logging looks like; not sure if this sheds any insight, as I'm not too familiar with Splunk:

<match **>
  @type splunk_hec
  protocol http
  hec_host "xx.x.xx.xx"
  hec_port 8088
  hec_token "#{ENV['SPLUNK_HEC_TOKEN']}"
  index_key index
  #insecure_ssl true
  host "#{ENV['K8S_NODE_NAME']}"
  source_key source
  sourcetype_key sourcetype
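One approach Splunk Connect for Kubernetes is commonly reported to support, worth verifying against the chart version in use: namespace-based index routing, enabled in the Helm values and driven by an annotation on the namespace. A sketch, where 'specificApplication' and 'app_index' are placeholders and the exact key location varies by chart version:

```
# values.yaml (splunk-connect-for-kubernetes logging chart)
splunk:
  hec:
    indexRouting: true

# then annotate the namespace:
#   oc annotate namespace specificApplication splunk.com/index=app_index
```

If index routing isn't available in the deployed version, the fallback is a fluentd filter in the config map that rewrites the index field based on the namespace in each record.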
Hi, I have been trying to group my query results by the latest date in order to remove duplicates and keep the most recent one. This is an example of what I have:

Field1  Field2  Date
AAA     111     21 Jan 2021
AAA     111     22 Jan 2021
BBB     332     20 Jan 2021
BBB     552     22 Jan 2021

And what I would want to have:

Field1  Field2  Date
AAA     111     22 Jan 2021
BBB     332     20 Jan 2021
BBB     552     22 Jan 2021

I would really appreciate the help. Thanks
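A sketch in SPL, assuming the Date values are strings like those shown: parse them to epoch time, then keep only the latest row for each Field1/Field2 pair.

```spl
... | eval _time=strptime(Date, "%d %b %Y")
| sort 0 -_time
| dedup Field1 Field2
```

Note that both BBB rows survive here because their Field2 values differ; stats latest(Date) as Date by Field1 Field2 is an alternative when the events are already in time order.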
Here is my data normally:

2021-01-26 00:00:44.2885 [INFO] SIXPACService.SplunkForwarder.SplunkWriter Attempting to Splunk Message from SITA: <?xml version="1.0" encoding="utf-8"?>
<DCNSMessage>
  <ID>SIXPAC</ID>
  <RType>14</RType>
  <DateTime>2021-01-26T00:00:35Z</DateTime>
  <ActiveLink>
    <StartDateTime>2021-01-25T23:50:00Z</StartDateTime>
    <StopDateTime>2021-01-26T00:00:00Z</StopDateTime>
    <LocationActive>
      <Location>S-SLC01</Location>
      <Active>0</Active>
    </LocationActive>
  </ActiveLink>
</DCNSMessage>

For some reason, when the data gets indexed it's line breaking, so I only get the following data:

2021-01-26 00:00:44.2885 [INFO] SIXPACService.SplunkForwarder.SplunkWriter Attempting to Splunk Message from SITA: <?xml version="1.0" encoding="utf-8"?>
<DCNSMessage>
  <ID>SIXPAC</ID>
  <RType>14</RType>

Any idea why it's breaking at the DateTime tag?
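A guess at the cause: with automatic line merging, Splunk can treat any recognizable timestamp, such as the one inside the <DateTime> element, as the start of a new event. A hedged props.conf sketch that breaks only on a timestamp at the start of a physical line; the stanza name your_sourcetype is a placeholder:

```ini
[your_sourcetype]
SHOULD_LINEMERGE = false
# key on the subsecond timestamp followed by the [INFO]-style bracket,
# which the embedded ISO-format XML DateTime values don't have
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+ \[)
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 30
```

As with any breaking change, this has to live on the parsing tier and only affects newly indexed data.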
Re-initiation of an older question I had asked:

Hi, I have a need for an alternative to:

| lookup abc field1 AS field2 OUTPUT field1, fieldA, fieldB, fieldC

For the above, I have a lookup definition based on a lookup that holds information about more than 50,000 vulnerabilities. I am using this lookup definition in my queries, and the result set is no more than 1000. 1000 is the maxmatch limit that Splunk supports for a lookup definition. I need an alternative, e.g. a subsearch using the lookup itself or anything that allows me to match all of the values in my lookup, approximately 50,000 on average, as efficiently as possible.

Sample query (the original query is much longer, but I will be using your provided solution to consolidate):

index=ABC sourcetype="XYZ"
`comment (This is to reduce Splunk's internal fields to keep my table size smaller)`
| fields - index, source, sourcetype, splunk_server, splunk_server_group, host, eventtype, field, linecount, punct, tag, tag::eventtype, _raw
`comment (This is to limit to the only fields which I need)`
| fields dns, vuln_id
`comment (vuln_id is a multivalued field and I have to separate them to get accurate stats. When stats is run, it takes care of expanding them and it works as expected)`
| makemv delim="," vuln_id
| stats count by vuln_id, dns
| lookup vuln_info VulnID AS vuln_id OUTPUT Scan_Type, OS, Environment

The approach below is what I have tried; it is not returning anything, but it should. I am missing something in this:

index=ABC sourcetype="XYZ"
| fields - index, source, sourcetype, splunk_server, splunk_server_group, host, eventtype, field, linecount, punct, tag, tag::eventtype, _raw
| fields dns, vuln_id
| makemv delim="," vuln_id
| stats count by vuln_id, dns
    [| inputlookup vuln_info.csv
    | fields VulnID, Scan_Type, OS, Environment
    | rename VulnID as vuln_id]

I'm after any solution that works as efficiently as possible to get all records from the lookup, instead of an incomplete dataset due to the lookup definition's maxmatch limit of 1000. Thanks in advance!!!
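One pattern sometimes used to sidestep the per-lookup match limit entirely, offered as a hedged sketch with column names taken from the post: append the whole lookup to the search results and merge rows with stats, instead of placing a bare inputlookup subsearch after stats (which is not valid SPL on its own, and may be why the attempt returns nothing):

```spl
index=ABC sourcetype="XYZ"
| fields dns, vuln_id
| makemv delim="," vuln_id
| stats count by vuln_id, dns
| append [| inputlookup vuln_info.csv
    | rename VulnID as vuln_id
    | fields vuln_id, Scan_Type, OS, Environment]
| stats values(*) as * by vuln_id
| where isnotnull(dns)
```

The final where drops lookup rows that matched no event. Two caveats: the closing stats collapses the per-dns rows into multivalue fields, so the grouping may need adapting, and append is itself subject to subsearch result limits, so this only works if the lookup fits within them.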
Hello folks!

I tried to upgrade this app and found that 3.1.4 is the only option available to upgrade to using dbx_app_migration.py, because of Python versions. The problem is that I am getting the following errors:

Without parameters:
failed to login to splunkd, cause=[Errno 111] Connection refused abort!

With parameters -scheme https://<ip> -port 8089 -verbose:
failed to login to splunkd, cause=HTTP 405 Method Not Allowed

Is there maybe something I can do to use the script? Thanks
Hi, I have set up some SQS inputs in the Splunk Add-on for AWS, but the following error is occurring:

2021-01-28 13:25:08,541 level=CRITICAL pid=3271 tid=Thread-2 logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:_process:268 | datainput="SQS_INPUT" start_time=1611850827, created=1611851108.54 message_id="MESSAGE_ID" ttl=600 job_id=JOB_ID | message="An error occurred while processing the message."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 247, in _process
    records = self._parse(message)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 315, in _parse
    raise ValueError("Unable to parse message.")
ValueError: Unable to parse message.

Two inputs are running into this issue: one uses the Custom Data Type and the other uses CloudFront Access Logs. I also have some other SQS inputs that are running with no errors. Does anyone have any hints on how to solve these "Unable to parse message" errors? Thanks
The following search gives me a table that contains the number of lines of code on the first of each month and calculates the number of lines changed over the month:

index=foo date_mday="1"
| dedup lineCount sortby +_time
| sort -_time
| delta lineCount AS linesChanged
| fieldformat linesChanged=linesChanged*-1
| table _time date_month lineCount linesChanged

This works great as long as a log is generated on the 1st. I'm looking to add a condition that will use the log from the last day one was generated if there was none on the 1st of the month. For example, there is no data for June because nothing was generated on the 1st (see attached picture). The last day a log was generated was May 30th, which I would like to include. I know the date_mday=1 filter will have to be dropped, but I don't know where to start otherwise. Any help would be much appreciated!
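A hedged alternative sketch that removes the need for a log on the 1st: bin the events by month and take the last reading in each month (the value that carries into the start of the next month), then diff:

```spl
index=foo
| timechart span=1mon latest(lineCount) as lineCount
| delta lineCount as linesChanged
```

Here May 30th's reading stands in for June 1st. Because timechart sorts ascending, delta already yields current minus previous, so the *-1 sign flip from the original (which compensated for the descending sort) is not needed; whether latest() per month matches the intended "as of the 1st" semantics exactly is worth double-checking.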
Good morning fellow Splunkers,

I am wondering what caller=init_roll is, and whether someone could point me to some documentation about it, as I haven't found any yet. Basically, I changed maxDataSize in indexes.conf to 'auto_high_volume' (10GB) from the default of 750MB, and I assume the current hot buckets for that index are all rolling to warm so Splunk can start filling new hot buckets with the new, increased data size cap. However, I haven't been able to find any documentation via Google about 'caller=init_roll' to confirm my educated guess.
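For reference, a sketch of the change described (the index name is hypothetical), along with an internal-log search that can surface the roll events themselves. The reading of init_roll as "hot buckets rolled when bucket parameters are re-initialized" is consistent with how these messages are usually interpreted, but treat it as unconfirmed:

```ini
# indexes.conf
[my_index]
maxDataSize = auto_high_volume
```

The roll messages can be reviewed with something like: index=_internal sourcetype=splunkd HotBucketRoller "caller=init_roll" — the caller= value there names the reason each hot bucket rolled to warm.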
I'm trying to set up DB Connect to use Oracle Wallet. I'm at step 5 under "Connect Splunk DB Connect to Oracle Wallet environments using ojdbc8" (https://docs.splunk.com/Documentation/DBX/3.4.1/DeployDBX/Troubleshooting), where it asks me to prepend some text to the JVM Options. However, I have two JVM Options fields: Task Server JVM Options and Query Server JVM Options. Which one should I use? BTW, I tried adding the text to both and it didn't work.