I've set up a generic S3 input and it's working pretty well. However, I sometimes get duplicate events. I believe the issue is explained here:

> The S3 data input is not intended to read frequently modified files. If a file is modified after it has been indexed, the Splunk platform indexes the file again, resulting in duplicated data. Use key, blocklist, and allowlist options to instruct the add-on to index only those files that you know will not be modified later.

https://docs.splunk.com/Documentation/AddOns/released/AWS/S3

My setup involves S3 files that may be updated for a period of 5 minutes. After 5 minutes, they'll never be modified again. Let's start by assuming that I can't change that. In the majority of cases, the file contents aren't actually changed; only the last-modification date changes. I'd like the ability to do one of the following:

1. Only index files that are older than 5 minutes, or
2. Keep a CRC/hash of each file and only reindex if the hash changes, or
3. Keep track of which line we're up to in each file and only index appended lines

Option 3 is ideal; option 1 completely fixes the problem for me (at the cost of some indexing delay); option 2 greatly reduces the problem (and I think Splunk already does this for local files?). Is any of what I'm asking for possible? Or is there another solution to my problem? Thanks!
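As far as I know, the add-on's generic S3 input has no built-in minimum-age option, so option 1 would need a filter outside Splunk, e.g. a small script that only hands over keys whose LastModified timestamp is old enough. A minimal sketch of that age check (the 5-minute threshold and the helper name are assumptions of this sketch, not add-on settings):

```python
from datetime import datetime, timedelta, timezone

MIN_AGE = timedelta(minutes=5)

def is_stable(last_modified, now=None):
    """True once an object is old enough that (per the setup described
    above) it will never be modified again and is safe to index."""
    if now is None:
        now = datetime.now(timezone.utc)
    return now - last_modified >= MIN_AGE

# With boto3 you would feed each object's LastModified value through this
# check before handing the key over for indexing.
now = datetime(2020, 9, 13, 12, 0, tzinfo=timezone.utc)
print(is_stable(datetime(2020, 9, 13, 11, 53, tzinfo=timezone.utc), now))  # True (7 min old)
print(is_stable(datetime(2020, 9, 13, 11, 58, tzinfo=timezone.utc), now))  # False (2 min old)
```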
I have a JSON file with an embedded JSON field that I am trying to extract. I have been doing some searching and have finally come up with an SPL search that will extract the information into my relevant key pairs. The SPL is basically:

index=foo sourcetype=foosource
| eval log_message=replace(log_message, "\\\"\"","")
| rex "log_message\\\\\":\\\\\"(?<log_message>[^\\\]+)"
| spath input=log_message

I have also read that I shouldn't need the spath; however, if I remove it from my SPL then it doesn't extract as required. I would like to put this into transforms but am unsure how to apply the spath. My props/transforms so far:

props.conf
REPORT-logmessage = log_message

transforms.conf
[log_message]
INGEST_EVAL = log_message=replace(log_message, "\\\"\"","")
REX = "log_message\\\\\":\\\\\"(?<log_message>[^\\\]+)"
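For what it's worth, transforms.conf has no REX key; search-time extractions referenced from a REPORT- stanza use REGEX. INGEST_EVAL is an index-time setting that is applied via TRANSFORMS- (not REPORT-) and runs before search-time fields exist. There is also no transforms equivalent of spath; KV_MODE = json in props.conf is the closest, and it only works once log_message is valid JSON. A sketch of the search-time part only (stanza names are mine, untested):

```
# props.conf
[foosource]
REPORT-logmessage = extract_log_message

# transforms.conf
[extract_log_message]
REGEX = log_message\\\":\\\"(?<log_message>[^\\\]+)
```

The replace/spath steps may still belong in the search itself (or in calculated fields via EVAL- in props.conf), since they reshape the extracted value rather than extract it.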
I have a 'BuildInfo' field in the form "ABCYYYY_Number|XYZ", e.g. "ABC2020_17|XYZ". Number keeps increasing through the year (1, 2, 3, 4, ...). The next year, Number resets to 1 and then keeps increasing again, e.g. "ABC2021_1|XYZ". I want to compare "error1" across consecutive BuildInfo values. Please help.
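One way to make "consecutive" well-defined across the yearly reset is to split BuildInfo into a (year, number) pair and sort on that, e.g. in SPL with rex field=BuildInfo "[A-Za-z]+(?<year>\d{4})_(?<num>\d+)\|" followed by sort year, num. A quick Python sketch of the same parse (the function name is mine):

```python
import re

def parse_build(buildinfo):
    """Split a BuildInfo value like 'ABC2020_17|XYZ' into (year, number)
    so consecutive builds sort correctly across the yearly reset."""
    m = re.match(r"[A-Za-z]+(\d{4})_(\d+)\|", buildinfo)
    return (int(m.group(1)), int(m.group(2))) if m else None

print(parse_build("ABC2020_17|XYZ"))  # (2020, 17)
print(parse_build("ABC2021_1|XYZ"))   # (2021, 1)
# Tuple comparison orders (2021, 1) after (2020, 17), despite the reset
print(parse_build("ABC2021_1|XYZ") > parse_build("ABC2020_17|XYZ"))  # True
```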
Hello folks, please help me figure out how to customize the legend values of my choropleth map. I'd like to get the average logon duration of each device and then have a choropleth map (which is working, based on country) whose legend is green if avg_logon_duration is less than or equal to 10, yellow if it is between 11 and 20, and red if it is greater than or equal to 21. Here is my current table structure:

country   device  logon_duration
Brazil    XYZ01   21.05
USA       ABC01   16.99
Germany   FE-01   5.75
India     MUM01   10.00

I've already tried rangemap and also editing the XML with mapping.fieldColors ({"green":0xFF0000,"yellow":0xFFFF00,"red":0x00FF00}), which I found in another article, but it did not work. Here is my latest code:

| stats avg(logon_duration) as avg_logon_duration by CountryName
| geom geo_countries featureIdField=CountryName
| rangemap field=avg_logon_duration green=0-10 yellow=11-20 red=21-99 default=white

The closest I got so far was using mapping.seriesColors with the colors array [0xFF0000,0xFFFF00,0x00FF00], but it completely ignores the categorical values I choose on the choropleth map. Any help would be really appreciated.
I have installed the Cloudera_HiveJDBC4_2.5.6.1008 driver for Hive from the Cloudera website.

1. I placed the driver files in C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\drivers
2. I created db_connection_types.conf in C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\local with the following data:

[hive2]
displayName = Hive Server 2
dbcDriverClass = com.cloudera.hive.jdbc4.HS1Driver
defaultPort = 10001
connectionUrlFormat = jdbc:hive2://{0}:{1}/{2}
defaultSchema = default
defaultCatalogName = default

Even after doing these steps, it says the Hive driver is not installed! Can someone please help me?

The JRE installation path is C:\Program Files\Java\jre1.8.0_261
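One detail worth double-checking: the stanza describes Hive Server 2 (jdbc:hive2://) but dbcDriverClass points at HS1Driver. The Cloudera JDBC4 package also ships a HiveServer2 driver class, and DB Connect typically reports a driver as "not installed" when the declared class cannot be loaded from the JARs in the drivers directory. A hedged correction (verify the exact class name against the JAR contents, e.g. with jar tf):

```
[hive2]
displayName = Hive Server 2
dbcDriverClass = com.cloudera.hive.jdbc4.HS2Driver
defaultPort = 10001
connectionUrlFormat = jdbc:hive2://{0}:{1}/{2}
defaultSchema = default
defaultCatalogName = default
```

A restart of Splunk (or a driver reload in DB Connect) is needed after editing the file.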
We are using Splunk Enterprise. I'd like to review the trend of total daily log volume over the past year, but the _internal index only retains logs for the past 30 days. Which index should I query to see logs older than 30 days?
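Daily volume for the last 30 days can be read from license_usage.log in _internal; to keep a full year of history, the usual approaches are to extend the retention of _internal (frozenTimePeriodInSecs in indexes.conf) or to roll the daily totals into a summary index with a scheduled search. A sketch of the latter (the summary index name is an example, untested):

```
index=_internal source=*license_usage.log type=Usage earliest=-1d@d latest=@d
| stats sum(b) as bytes by idx
| collect index=license_summary
```

Scheduled daily, this preserves per-index daily volume long after _internal rolls; the year-long trend then comes from index=license_summary | timechart span=1d sum(bytes).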
Our team hit an out-of-memory incident and enabled Automatic Leak Detection in the environment. The intention is to understand the cause, but we're not sure what to expect as a next step. If there is another out-of-memory issue, what kind of support does AppDynamics provide? Does Automatic Leak Detection produce some type of warning, or does someone need to keep an eye on the dashboard to spot potentially leaking collections? Second question: while reading online I also found "JVM Crash Guard". Is it mandatory to set up this option to investigate out-of-memory issues, or can Automatic Leak Detection alone help?
Any suggestions on how this query can be improved? It returns no events as of now. The lookup CSV has UserId and Name as its two columns.

index=123 AND organizationId=00TY00000005677 AND logRecordType=ailtn (Lightning_Console)
| dedup sessionKey
| lookup test123.csv UserId AS userId OUTPUT UserId AS userId
| table userId, sessionKey, _time
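One thing that stands out: the lookup outputs UserId back into userId, so whenever a session's userId has no match in the CSV, the field is overwritten with null and the row goes empty. Since the CSV has UserId and Name columns, outputting Name instead keeps userId intact (a sketch, untested against your data):

```
index=123 organizationId=00TY00000005677 logRecordType=ailtn (Lightning_Console)
| dedup sessionKey
| lookup test123.csv UserId AS userId OUTPUT Name
| table userId, Name, sessionKey, _time
```

With this form, unmatched rows simply have an empty Name rather than losing userId.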
Hi all! I'm looking to extract certain values from a Linux command line. Take the following example:

| makeresults
| eval sampleCLI="find / -perm -664 -type d 2>/dev/null"
| eval sampleCLI2="find . -perm -2 -type d 2>/dev/null"
| eval sampleCLI3="find . \( -perm -o rwx \) -type d 2>/dev/null"
| eval sampleCLI4="find / -perm -o+w"

What I want is to extract the permissions and run some checks on them: for example, 664, 2, rwx, and +w respectively in the above example. I'm having a hard time writing the regex for this. I'm new to this and was trying to keep it simple and just get the value like so:

| rex field=sampleCLI* “find * -perm (?<permval>\S+)”

However, the rex command doesn't even run. It throws the following error: Error in 'rex' command: The regex '“find' does not extract anything. It should specify at least one named group. Format: (?<name>...). Could someone please explain where I'm going wrong? Thanks very much!
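Two things stand out in the failing rex: the quotes are curly (“ ”) rather than ASCII ("), which makes SPL treat “find as part of the regex, and rex's field option takes a single field name, not a wildcard like sampleCLI*. With those fixed, something like | rex field=sampleCLI "find .* -perm (?<permval>\S+)" should run. A quick way to test the pattern itself outside Splunk:

```python
import re

# ASCII quotes only; grabs the first token after -perm
pattern = re.compile(r"-perm\s+(?P<permval>\S+)")

for cli in (
    "find / -perm -664 -type d 2>/dev/null",
    "find . -perm -2 -type d 2>/dev/null",
    "find / -perm -o+w",
):
    print(pattern.search(cli).group("permval"))
# -664
# -2
# -o+w
```

Note that the symbolic form in sampleCLI3 (-perm -o rwx) puts the permission in the token after -o, so this sketch would capture -o there; stripping leading dashes or handling the two-token form would need a second pass.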
Hi, is there any way to specify a starting point when using DELIMS for field extraction? For example, the log looks like this:

Sep 13 Some Variable Format String: value1 value2 value3 value4 value5...

I want to extract the "valueX" fields using a space delimiter, but the beginning of the log has a section with a variable number of spaces, which would break the order of the fields. Is there any way to specify where the DELIMS extraction should start? Other than extracting the list of values (value1 value2 value3 value4 value5...) into a new field with regex, using DELIMS on the new field, and at the end setting the new field to null()? Thank you.
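For reference, search-time transforms can chain: a first transform can isolate the tail of the event into a field, and a second can run DELIMS against that field via SOURCE_KEY, which avoids the eval/null() cleanup step. A sketch (stanza, sourcetype, and field names are mine, untested):

```
# transforms.conf
[strip_prefix]
REGEX = ^\w+\s+\d+\s+[^:]+:\s+(?<value_list>.+)

[split_values]
SOURCE_KEY = value_list
DELIMS = " "
FIELDS = value1,value2,value3,value4,value5

# props.conf
[your_sourcetype]
REPORT-values = strip_prefix, split_values
```

The order in the REPORT list matters: value_list must be extracted before split_values reads it.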
I'm trying to remove a hash string from my output:

"Example hash is 3ernksMt7b3EzKwHuW4papuEFtvePZtDs9CQFeVYy57= will not be cracked"

As the hash changes but has unique characteristics, I'm trying to implement a rex rule to catch every event with this pattern: [a-zA-Z0-9]+=. Unfortunately, I've tried multiple solutions from the forum but none worked for this case. I would also appreciate a reference to documentation on this issue (understanding each part of the rex command).
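In SPL, replacing (rather than matching) text is done with rex in sed mode, e.g. | rex mode=sed field=_raw "s/[A-Za-z0-9]{20,}=+/<hash>/g" — the {20,} minimum length is an assumption of mine to keep ordinary words that happen to precede an "=" from being swallowed. The pattern can be sanity-checked outside Splunk:

```python
import re

event = "Example hash is 3ernksMt7b3EzKwHuW4papuEFtvePZtDs9CQFeVYy57= will not be cracked"

# [a-zA-Z0-9]+= alone also matches short alphanumeric runs before '=',
# so anchoring on a minimum length keeps normal text intact
cleaned = re.sub(r"\b[A-Za-z0-9]{20,}=+", "<hash>", event)
print(cleaned)  # Example hash is <hash> will not be cracked
```

The rex command (including mode=sed and named groups) is documented in the Splunk Search Reference.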
Hi team, I am trying to connect Tableau to Splunk via Splunk ODBC. On the community forum, it is mentioned in several places that Splunk ODBC is not compatible with Splunk version 7.x onwards. But I can also see that the latest Splunk ODBC driver (version 3.0.1) was released recently and is listed as compatible with Splunk version 7.3 and higher: https://splunkbase.splunk.com/app/1606/ My Splunk Enterprise version is 7.2.9.1 and the Tableau version is 2019.1.7. Is this new Splunk ODBC driver compatible with my version of Splunk (7.2.9.1)? If not, what is the alternative? Thanks.
Hi, I'm trying to forward all data received by one indexer to a third-party indexer, but rewriting the index for the data sent to the third party. Everything is configured to come in from the UFs using specific indexes (such as WinEventLog). SplunkA indexes the data into those indexes happily. There is a requirement to forward all data to SplunkB, but to a dedicated index (such as SubsidiaryIndex instead of WinEventLog). I was originally using indexAndForward in outputs.conf, but the forwarded data keeps the index set by the UFs. The guide here is close to what I want to do, but I want it to forward all data and rewrite the index: https://docs.splunk.com/Documentation/Splunk/8.0.5/Forwarding/Routeandfilterdatad#Replicate_a_subset_of_data_to_a_third-party_system

props.conf (note: I feel like the default stanza is wrong, but I'm not sure how else to capture everything; I've also tried both TRANSFORMS-routing and TRANSFORMS-forward):

[default]
TRANSFORMS-routing = routeAll

transforms.conf:

[routeAll]
REGEX = (.)
DEST_KEY = _MetaData:Index
FORMAT = SubsidiaryIndex
DEST_KEY = _TCP_ROUTING
FORMAT = Subsidiary

outputs.conf:

[tcpout]
defaultGroup = nothing

[tcpout:Subsidiary]
disabled = false
sendCookedData = false
server = 169.254.153.83:9999

Any guidance would really be appreciated!
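For what it's worth, a transforms.conf stanza can only set one DEST_KEY; a second DEST_KEY/FORMAT pair in the same stanza overrides the first, so the index rewrite never takes effect. The usual pattern is to split the work into two stanzas and chain them (a sketch based on the config above, untested):

```
# transforms.conf
[rewriteIndex]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = SubsidiaryIndex

[routeSubsidiary]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = Subsidiary

# props.conf
[default]
TRANSFORMS-routing = rewriteIndex, routeSubsidiary
```

One caveat: rewriting _MetaData:Index this way also changes the index for the copy indexed locally on SplunkA, so it may need to be combined with routing that keeps the original events local under their original indexes.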
How do I back up Splunk dashboards, reports, and alerts from the default search app before upgrading to the latest Splunk Enterprise version?
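Dashboards, reports, and alerts saved in the search app live on disk under $SPLUNK_HOME/etc/apps/search/local, and private (user-owned) knowledge objects under $SPLUNK_HOME/etc/users, so a file-level copy of those directories before the upgrade is usually sufficient. A minimal sketch (paths are assumptions; this is not official Splunk tooling):

```python
import tarfile
from pathlib import Path

def backup_search_app(splunk_home, out_path):
    """Archive the directories where user-created dashboards, reports,
    and alerts are stored: the search app's local directory and the
    per-user etc/users tree."""
    with tarfile.open(out_path, "w:gz") as tar:
        for rel in ("etc/apps/search/local", "etc/users"):
            src = Path(splunk_home) / rel
            if src.exists():
                tar.add(src, arcname=rel)

# Example (adjust paths to your installation):
# backup_search_app("/opt/splunk", "/tmp/search_app_backup.tar.gz")
```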
I would like to create a table that displays the first and last event from a run of duplicate events. A duplicate run may be interrupted by a non-duplicate event, in which case I also want to display the first and last message when the duplicate events appear again. For example, consider this list of events:

| makeresults count=10
| streamstats count
| eval _time=1599978591-(count*60)
| eval Message = case(count=10, "MessageA", count=9, "MessageA", count=8, "MessageA", count=7, "MessageB", count=6, "MessageB", count=5, "MessageB", count=4, "MessageA", count=3, "MessageA", count=2, "MessageA", count=1, "MessageB")
| reverse
| table _time, Message

which will create a table similar to:

_time                 Message
2020-09-13 06:19:51   MessageA
2020-09-13 06:20:51   MessageA
2020-09-13 06:21:51   MessageA
2020-09-13 06:22:51   MessageB
2020-09-13 06:23:51   MessageB
2020-09-13 06:24:51   MessageB
2020-09-13 06:25:51   MessageA
2020-09-13 06:26:51   MessageA
2020-09-13 06:27:51   MessageA
2020-09-13 06:28:51   MessageB

Now I would like to display the first and last time a message is seen, removing any duplicates, but it needs to consider that the same message may be seen again in another sequence, which should be treated as a different run and displayed separately.
I thought of adding this:

| stats earliest(_time) as Earliest, latest(_time) as Latest BY Message
| eval FirstEvent=strftime(Earliest,"%+"), LastEvent=strftime(Latest,"%+")
| table FirstEvent, LastEvent, Message

but it doesn't take into account that there may be multiple sequences of repeated events, so it generates:

FirstEvent                     LastEvent                      Message
Sun Sep 13 06:19:51 UTC 2020   Sun Sep 13 06:19:51 UTC 2020   MessageA
Sun Sep 13 06:22:51 UTC 2020   Sun Sep 13 06:22:51 UTC 2020   MessageB

But what I want is the following output:

FirstEvent                     LastEvent                      Message
Sun Sep 13 06:19:51 UTC 2020   Sun Sep 13 06:21:51 UTC 2020   MessageA
Sun Sep 13 06:22:51 UTC 2020   Sun Sep 13 06:24:51 UTC 2020   MessageB
Sun Sep 13 06:25:51 UTC 2020   Sun Sep 13 06:27:51 UTC 2020   MessageA
Sun Sep 13 06:28:51 UTC 2020   Sun Sep 13 06:28:51 UTC 2020   MessageB

Any help is extremely appreciated. Thanks in advance. Roger Cruz
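In SPL the usual trick is to number the runs first and then aggregate per run, e.g. | streamstats current=f last(Message) as prev | eval new_run=if(Message!=prev OR isnull(prev), 1, 0) | accum new_run as run_id | stats earliest(_time) as Earliest latest(_time) as Latest by run_id, Message (untested here). The run-splitting logic itself, demonstrated on the sample data above:

```python
from itertools import groupby

events = [
    ("2020-09-13 06:19:51", "MessageA"), ("2020-09-13 06:20:51", "MessageA"),
    ("2020-09-13 06:21:51", "MessageA"), ("2020-09-13 06:22:51", "MessageB"),
    ("2020-09-13 06:23:51", "MessageB"), ("2020-09-13 06:24:51", "MessageB"),
    ("2020-09-13 06:25:51", "MessageA"), ("2020-09-13 06:26:51", "MessageA"),
    ("2020-09-13 06:27:51", "MessageA"), ("2020-09-13 06:28:51", "MessageB"),
]

# groupby only merges *consecutive* equal keys, so a message that
# reappears later starts a new run -- exactly the behaviour wanted here
runs = []
for msg, grp in groupby(events, key=lambda e: e[1]):
    grp = list(grp)
    runs.append((grp[0][0], grp[-1][0], msg))

for first, last, msg in runs:
    print(first, last, msg)
```

This prints four runs: two for MessageA and two for MessageB, each with its own first and last timestamp.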
Getting the below warning on Splunk restart:

Waiting for web server at https://127.0.0.1:8000 to be available... WARNING: Cannot decrypt private key in "/opt/splunk/etc/auth/splunkweb/illinsplunkprd01.corp.amdocs.com.key.pem" without a password. Network communication with splunkweb may fail or hang. Consider using an unencrypted private key for splunkweb's SSL certificate.
Hi all, hope everyone is doing well. I have a dashboard containing chart and map visualizations. I need to export it as a PDF and deliver it directly to SharePoint on a cron schedule. Is there a way to do this? Kindly help me with the same.

Thanks & Regards, Reddy.
Hi all, I hope everyone is doing well. I have one deployment server and around 10,000 universal forwarders in my environment. I need to update the server.conf configuration on all 10,000 universal forwarders under $SPLUNK_HOME/etc/system/local/server.conf. But when I try to deploy it using an app, it is not deployed globally. Kindly help me with the same.

Thanks & Regards, Reddy
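As far as I know, the deployment server can only deliver whole apps into $SPLUNK_HOME/etc/apps on the clients; it cannot write into etc/system/local, and settings already present in etc/system/local outrank every app in configuration precedence. The usual route is therefore to carry the settings in a deployed app and remove the same keys from etc/system/local on the forwarders. A serverclass.conf sketch (class and app names are examples):

```
# serverclass.conf on the deployment server
[serverClass:all_forwarders]
whitelist.0 = *

[serverClass:all_forwarders:app:uf_server_settings]
restartSplunkd = true
```

The uf_server_settings app would contain local/server.conf with the desired settings; once the conflicting keys are gone from etc/system/local on the clients, the deployed copy takes effect.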
How do I create a Splunk import set table for ServiceNow? Does it need to be configured on the ServiceNow instance or the Splunk instance? Can someone help me out with this?
I'm sorry for the terrible subject; I can't think of a simplified title for what I'm trying to do. I'm trying to graph wireless SSID utilization. I want to pie-chart our SSIDs by counting unique users for each SSID based on RADIUS authentication logs. The logs contain:

1. A Reason-Code that tells me the authentication was successful
2. A Called-Station-Id that I can extract the SSID from
3. A User-Name that I can use to count unique users

My first attempt is:

index=radius
| xmlkv
| search "Reason-Code"=0
| rex field="Called-Station-Id" ":(?<SSID>.+)"
| stats count by SSID

This is getting close, but it will count users more than once if they authenticate more than once on the same SSID, which they will. I know that dc("User-Name") will let me do a distinct count of users, but I can't figure out how to put these together to get the number of unique users per SSID. Either I don't understand the pipeline, or I'm trying to do something that can't be done this way. To make it a little more complicated, I DO want to count the same user once for each SSID: if a user connects to ssid1 and then leaves and connects to ssid2, they should be counted once for each of the SSIDs.
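For what it's worth, swapping the final count for a distinct count split by SSID gives exactly the "once per user per SSID" behaviour described: a user seen on both ssid1 and ssid2 contributes 1 to each SSID's total. Building on the search above (untested against real data):

```
index=radius
| xmlkv
| search "Reason-Code"=0
| rex field="Called-Station-Id" ":(?<SSID>.+)"
| stats dc("User-Name") as unique_users by SSID
```

The result (unique_users split by SSID) feeds a pie chart directly.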