All Topics

Hi, I want to make an API call to SharePoint and retrieve some information using a Python script on the Splunk server. I have a .pfx file provided with a password. Please help me from the starting point on how to proceed.

1) Where on my Splunk server do I keep this .pfx file, and do I have to convert it to a .pem file?
2) Will putting this file there conflict with any other certificates? I have to be very careful not to break any existing key/file.
3) How do I use it in my SharePoint API call?

Any leads will be helpful.
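One hedged sketch for the conversion step (all file names and the URL below are placeholders, not anything from your environment): openssl can split a .pfx (PKCS#12) bundle into a PEM certificate and key, and Python's requests can then use the pair. Keeping the files in their own directory (many admins use a subdirectory of the app that runs the script) avoids touching Splunk's own certificates, as long as you don't overwrite existing files.

```
# Hypothetical file names -- adjust to your environment.
# Extract the certificate (you will be prompted for the .pfx password):
openssl pkcs12 -in sharepoint.pfx -clcerts -nokeys -out sharepoint-cert.pem
# Extract the private key (-nodes leaves it unencrypted, so restrict file permissions):
openssl pkcs12 -in sharepoint.pfx -nocerts -nodes -out sharepoint-key.pem

# In the Python script, pass both files to requests (illustrative URL):
# requests.get("https://tenant.sharepoint.com/_api/web",
#              cert=("sharepoint-cert.pem", "sharepoint-key.pem"))
```

Whether SharePoint accepts plain certificate authentication or needs an OAuth token on top depends on how the certificate was registered on the SharePoint side.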
Hi all, I have a JSON log that contains many events in a single JSON object:

{"response":{"caseEvents":[{"eventDetails":{"eventDescription":"SCT","eventId":"TRX8551","eventTime":"2020-06-24T13:21:00.664+00:00","eventType":"PAYMENT"}},{"eventDetails":{"eventDescription":"SCT","eventId":"TRX8552","eventTime":"2020-06-24T13:21:01.664+00:00","eventType":"PAYMENT"}}]}}

In the same JSON I have many eventDetails sections (only two here, with few fields, but there are many more). I tried to use

INDEXED_EXTRACTIONS = json

and

LINE_BREAKER = \{\"eventDetails\"

but it still remains one event. How can I approach the problem?

Ciao.
Giuseppe
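One index-time approach is sketched below as a props.conf fragment. The sourcetype name and the exact regexes are assumptions and must be adapted to your raw data; note also that INDEXED_EXTRACTIONS parses the whole object as one structured event, so it works against this kind of splitting and should be dropped in favor of search-time JSON extraction.

```
[my_json_sourcetype]
SHOULD_LINEMERGE = false
# Break between events: group 1 (the comma) is consumed as the boundary
LINE_BREAKER = (,)(?=\{"eventDetails")
# Strip the outer wrapper so the first and last events are clean JSON
SEDCMD-strip_head = s/^\{"response":\{"caseEvents":\[//
SEDCMD-strip_tail = s/\]\}\}$//
TIME_PREFIX = "eventTime":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
# Extract fields at search time instead of INDEXED_EXTRACTIONS
KV_MODE = json
```

With this sketch each {"eventDetails":{...}} object becomes its own event timestamped from eventTime.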
Hi, I have two searches: the first gives open-alert data and the second gives closed-alert data. I want to merge both results.

Open alerts:

alert id | message | server    | opentriggredtime
1        | fsdf    | 127.0.0.1 | 01/09/20
2        | fdsfs   | 127.0.0.1 | 01/09/20

Closed alerts:

closed id | message | server    | closedtriggredtime
3         | fdsfs   | 127.0.0.0 | 01/09/20
4         | fsdf    | 127.0.0.0 | 01/09/20

Desired merged result:

id | message | server    | opentriggredtime | closedtriggredtime
1  | fsdf    | 127.0.0.1 | 01/09/20         |
2  | fdsfs   | 127.0.0.1 | 01/09/20         |
3  | fdsfs   | 127.0.0.0 |                  | 01/09/20
4  | fsdf    | 127.0.0.0 |                  | 01/09/20
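One common sketch for this shape of problem (the two base searches below are placeholders for your actual open and closed searches): append one result set to the other, normalize the id field names, and collapse rows with stats.

```
<your open-alert search>
| rename "alert id" as id
| append
    [ search <your closed-alert search>
      | rename "closed id" as id ]
| stats values(message) as message
        values(server) as server
        values(opentriggredtime) as opentriggredtime
        values(closedtriggredtime) as closedtriggredtime
        by id
```

If an alert can appear in both searches with the same id, stats will merge it onto one row, which is usually the desired behavior.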
If maintenance mode is enabled on a Splunk indexer cluster for, say, 10 continuous hours, incoming data volume is very high, and hot buckets are getting full for one very-high-volume index: will the hot buckets roll to warm, or can they not roll to warm because maintenance mode is enabled? And what happens to the hot bucket data if that is the case?
We are planning to use the Splunk Add-on for AWS to stream AWS CloudWatch logs. How can we ensure the security of data streamed over the internet? Can you please clarify?
We have a Splunk Enterprise trial version on a machine. We are trying to call that instance through the API by putting the IP address in the URL, but we are not able to connect. Is it possible to make an API call to an IP address?
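For reference, Splunk's REST API listens on the management port (8089 by default) over HTTPS, not on the web port, so a connectivity test looks roughly like the sketch below. The IP and credentials are placeholders; -k skips verification of Splunk's self-signed certificate.

```
curl -k -u admin:yourpassword https://<your-ip>:8089/services/server/info
```

If this times out, check that port 8089 is reachable from the caller (firewalls and security groups commonly block it) and that you are using https rather than http.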
I want my nested JSON to be parsed only at the 1st level instead of parsing all the nested parts. I have the below JSON:

{
  "Name": "Naman",
  "Age": 25,
  "Address": {
    "H.No": "23",
    "Street_no": 2,
    "Area": "Model Town"
  },
  "Country": "IND"
}

I want output like below:

Name  | Age | Address                                         | Country
Naman | 25  | {"H.No":"23","Street_no":2,"Area":"Model Town"} | IND

I don't want to handle the Address field separately, as these are dynamic fields coming in from the source.
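One hedged sketch in SPL: spath can extract individual paths, and when the path points at an object rather than a leaf, the output is the serialized JSON of that object, which keeps Address as a single field without naming its inner keys.

```
| spath path=Name
| spath path=Age
| spath path=Address output=Address
| spath path=Country
| table Name Age Address Country
```

Since the top-level keys are dynamic in your case, you would still need to know or discover their names; if even that is not possible, a search-time extraction with rex over _raw may be a better fit.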
The web server on the Splunk cluster master doesn't start. Below is the message I see when starting it:

Checking prerequisites...
	Checking http port [8443]: open
	Checking mgmt port [8089]: open
	Checking appserver port [127.0.0.1:8065]: open
	Checking kvstore port [8191]: open
	Checking configuration... Done.
	Checking critical directories... Done
	Checking indexes...
		Validated: _audit _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history main summary
	Done
	Bypassing local license checks since this instance is configured with a remote license master.
	Checking filesystem compatibility... Done
	Checking conf files for problems... Done
	Checking default conf files for edits...
	Validating installed files against hashes from '/opt/splunk/splunk-8.0.5-a1a6394cc5ae-linux-2.6-x86_64-manifest'
	All installed files intact.
	Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)... Done
                                                           [  OK  ]

Waiting for web server at https://127.0.0.1:8443 to be available.
I am suddenly getting the below error in the environment:

Error: The percentage of small buckets created (63) over the last hour is very high and exceeded the red thresholds (50) for index=main, and possibly more indexes, on this indexer.

Please assist; I am new to Splunk.
In our non-prod environment, some files are not written to on a regular basis. In these cases the UF often needs to be restarted to start the ingestion of events. The monitor stanza in use is very basic and does not have an 'ignoreOlderThan' value set. Here is an example:

[monitor:///my_non_prod_path/Log/app.log]
index = my_non_prod_index
sourcetype = nonprd_sourcetype
disabled = false

Are there global settings that would override this and cause events to not be ingested? Thanks
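To see every setting Splunk actually applies to that stanza, including defaults such as ignoreOlderThan inherited from another inputs.conf, btool on the forwarder shows the merged configuration and (with --debug) which file each line came from:

```
$SPLUNK_HOME/bin/splunk btool inputs list "monitor:///my_non_prod_path/Log/app.log" --debug
```

If ignoreOlderThan appears there from a global or app-level default, files that sit idle longer than that window will be dropped from monitoring until a restart rediscovers them, which matches the symptom described.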
If the data displayed above is the result of my stats command [stats values(Values) as Values by Category], how can I use search to check whether the values for each category increase incrementally by 1, and output the values that have been missed? I want the result to look like this:
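One hedged sketch, assuming Values is numeric: expand the multivalue field, compare each value with the previous one per Category using streamstats, and generate the gap with mvrange (which returns the integers from start up to, but not including, end).

```
| stats values(Values) as Values by Category
| mvexpand Values
| sort 0 Category Values
| streamstats current=f last(Values) as prev by Category
| where isnotnull(prev) AND Values - prev > 1
| eval missing = mvrange(prev + 1, Values)
| stats values(missing) as missing by Category
```

If Values is a string field, add an eval Values=tonumber(Values) before the sort so the comparison is numeric rather than lexicographic.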
Hello, I am trying to set the color of text depending on user input, i.e., the user inputs two numbers, and if number 1 is larger than number 2 then the text is green. This problem is related to my actual problem and is meant to simplify the setting. Feel free to point out any rudimentary/logical problems in the code; I am new to HTML/XML and find everything quite confusing and (weirdly?) complicated. Thank you.

<form>
  <label>condition</label>
  <init>
    <unset token="dif"></unset>
  </init>
  <fieldset submitButton="false">
    <input type="text" token="field1">
      <label>field1</label>
    </input>
    <input type="text" token="field2">
      <label>field2</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <single>
        <search>
          <query>index=_internal | head 1 | eval dif = $field1$ - $field2$ | table dif</query>
          <done>
            <condition match="$result.dif$ &gt; 0">
              <set token="color1">green</set>
            </condition>
            <condition match="$result.dif$ &lt; 0">
              <set token="color1">red</set>
            </condition>
          </done>
        </search>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <html>
        <div>
          <p style="color:$color1$">Number 1 is larger than number 2!</p>
        </div>
      </html>
    </panel>
  </row>
</form>
Hi, I want to know which entities are newly created or updated in ITSI. Thanks, Praveen
Hi, how do I change the below raw time field to yyyy-mm-dd hh:mm:ss?

2020-09-09T18:21:12.2685607Z

I am using the below query and didn't get any result:

eval time = strftime(activityDateTime, "%Y-%m-%d %H:%M:%S")

Can someone please help?
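strftime expects an epoch time, but activityDateTime here is a string, so it has to go through strptime first. A hedged sketch that drops the 7-digit fractional seconds (which the usual subsecond format strings don't match) before parsing:

```
| eval time = strftime(strptime(replace(activityDateTime, "\.\d+Z$", ""), "%Y-%m-%dT%H:%M:%S"), "%Y-%m-%d %H:%M:%S")
```

One caveat: the trailing Z marks the value as UTC, and strptime without a timezone specifier interprets it in the search head's local timezone, so the rendered time may be offset unless you account for that.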
I got the below warning: "'anomalydetection' command: limit for values of field 'message' reached. Some values may have been truncated or ignored."

1) Does this mean that some events are removed? For example, if there are 2000 events, are fewer than 2000 considered (like 1500)?
2) Or does it mean all events are considered, but the length of each value is truncated? For example, with 2000 events, all 2000 are considered, but a value like "message"="ABC" is truncated to "message"="AB"?
I am trying to configure the Splunk App for Web Analytics for my Splunk environment. I have completed all the configuration steps mentioned on the app page. The data is netmaps logs with sourcetype "iis". All steps are done properly, i.e. tag=web returns data, lookups return results, the data model is accelerated, etc. However, only the server error panels in the Troubleshooting dashboard return any data. What am I missing here? @jbjerke_splunk @Anonymous
Generally, I want to transform:

"sort_index"
"89080_10.9.2.0"
"89090_10.9.1.0"
"89150_10.8.5.0"
...

into:

"sort_index"
"10.9.2.0"
"10.9.1.0"
"10.8.5.0"

In short, I want to remove everything before the character "_". I have tried many rex and wildcard expressions, but nothing worked. For example:

| rex field=sort_index "\w{5}_(?<sort_index>\S+)"     (remove 5 characters before _)
| rename \d+_* as *
| rename \w{5}_* as *

Could anyone please help me solve this problem?

Where does this problem come from? Originally I created a timechart. As illustrated, the version is sorted lexicographically. I want it (field: version) to be sorted in reverse order, but | sort -_time, -version simply did not work. So I created a new field named 'sort_index' and sorted by this new field. In order not to lose 'version', I combined the new 'sort_index' with 'version', adding '_' in the middle. Now it is in the right order:

10.9.2.0
10.9.1.0
10.8.5.0
10.8.2.0
10.7.3.0
10.5.2.0

But I need to remove the prefix created previously. This is the background of why I want to do this. If you have any better advice to achieve this goal, please give me your suggestion.

Best,
Chenglong
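One sketch for the prefix removal; note that SPL rex needs straight quotes, and the curly quotes that word processors insert can alone make an otherwise-correct rex fail silently:

```
| rex mode=sed field=sort_index "s/^[^_]*_//"
```

A capturing alternative with the same effect is | rex field=sort_index "^\d+_(?<sort_index>.+)". Note that rename takes field names, not regexes, which is why the rename attempts could not work.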
Hi,

I have a field named URL whose values look like this:

https://community.splunk.com/t5/forums/postpage/21321231312331112/id
http://www.google.com/search?rlz=1C1GCEA_enU

I need to extract them like this (it can be http or https, and it can be other TLDs too):

https://community.splunk.com
http://www.google.com

So basically I need a rex like this: parta://partb/Ignorable_strings, and then I'll concatenate the parta and partb fields to get the desired result. Someone please help.
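A single capture can grab the scheme and host together, which avoids concatenating two fields afterward (the field name url_base is my own invention):

```
| rex field=URL "(?<url_base>https?://[^/]+)"
```

The character class [^/]+ stops at the first slash after the host, so paths and query strings are ignored regardless of TLD.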
Greetings, I have a problem with my Splunk index. Splunk indexed data from a log file on an FTP server using the FTP Pull Add-On. The problem is that some file content is missing: Splunk did not index it. For example: file_name.log has 6 rows of data, but Splunk did not index all 6 rows, resulting in only 5 rows indexed. So, in Splunk, there are 5 events produced.

My questions are:
1. How do I track my Splunk indexing in detail: the file content, the time indexed, etc.?
2. How do I fix it when Splunk misses some of the file content at index time?

Thanks in advance
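As a starting point for the audit, two hedged sketches (index name and source pattern are placeholders): compare what Splunk holds for that source, including when each event was indexed, and then check splunkd's own logs for warnings or errors around the ingest.

```
index=your_index source="*file_name.log"
| eval indexed_at = strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| stats count min(indexed_at) as first_indexed max(indexed_at) as last_indexed by source

index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) "file_name.log"
```

If the missing row shares a timestamp and text with an indexed one, also consider whether the add-on or line-breaking settings are merging or deduplicating events rather than dropping them.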