All Topics


Hello, everyone! Currently, I have the Splunk Add-on for Unix and Linux version 8.1.0 installed on my heavy forwarder. However, I need to upgrade it to the latest version, and I am seeking recommendations on how to carry out this process. Additionally, I would appreciate guidance on utilizing the deployment server to distribute the update to the Universal Forwarders. God bless. Regards
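A minimal sketch of the deployment-server side, assuming the add-on unpacks as Splunk_TA_nix and a server class named uf_nix already targets the forwarders (both names are illustrative). Drop the new version into deployment-apps and reload:

    # on the deployment server
    cp -r Splunk_TA_nix $SPLUNK_HOME/etc/deployment-apps/

    # serverclass.conf
    [serverClass:uf_nix]
    whitelist.0 = *

    [serverClass:uf_nix:app:Splunk_TA_nix]
    restartSplunkd = true

    $SPLUNK_HOME/bin/splunk reload deploy-server

The copy on the heavy forwarder itself would still be upgraded by hand (or by whatever manages that instance), since the HF is not usually a deployment client of itself.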
Hello! Our Splunk server receives DC logs on a daily basis from another network team. Under Files & Directories in Data Inputs, I have the file path for those logs configured to be continuously monitored, since we receive those logs from another organization. I set a custom index for those logs, but it's not showing any data in that index. I've verified that it's not a permissions issue. I decided to manually upload one of those files into Splunk and noticed that they are .tsidx files. After uploading, I wasn't able to read any of the data in the .tsidx file. Is that normal? Am I doing anything incorrectly? We need to be able to audit those DC logs. Thanks in advance!
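For reference, a monitor input of this kind corresponds to an inputs.conf stanza roughly like the following (path, index, and sourcetype names are illustrative):

    [monitor:///data/dc_logs]
    index = dc_audit
    sourcetype = dc:logs
    disabled = 0

Note that .tsidx files are Splunk's own time-series index files rather than raw log data, so monitoring or uploading them will not yield readable events.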
Hello! I'm working on testing something, but I'm not sure exactly what the best solution would be. What I am trying to do is, using the time picker, have a panel that loads IDs. Then I'd like another panel to search over the same timespan, in a different dataset, but only for the IDs from the first panel. Is there a way to pass the results of a search that runs on page load to another search, maybe with a token or tokens? The catch is that there may be a single ID or there may be many. It would have to be a boolean of some sort, I believe, unless there's a better way to search one-to-many instances of something.

My thinking is something like this. Search 1, which populates a <tok> on page load (or time selection):

    <base search> | stats count by MSGID | fields - count

but the results would have to be formatted like:

    654165464 OR MSGID=584548549494 OR MSGID=54654645645

Search 2:

    <base search2> MSGID=<tok> | stats count by MSGID | fields - count

Is this something that can be done? What might I have to do to accomplish this? Thanks for the assistance!
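One common pattern, sketched here assuming a Simple XML dashboard (search ID, token names, and index names are illustrative): the format command turns a column of values into a ready-made boolean expression in a field named search, and a <done> handler copies it into a token for the second panel.

    <search id="get_ids">
      <query>index=dataset1 | stats count by MSGID | fields MSGID | format</query>
      <earliest>$time_tok.earliest$</earliest>
      <latest>$time_tok.latest$</latest>
      <done>
        <set token="msgid_tok">$result.search$</set>
      </done>
    </search>

The second panel's search can then embed it directly, e.g. index=dataset2 $msgid_tok$ | stats count by MSGID, since format emits something like ( ( MSGID="654165464" ) OR ( MSGID="584548549494" ) ), which handles one ID or many without special casing.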
Hi! I received an event with the following time string: 2023-12-12T13:39:25.400399Z CEF:0..... This time is already in the correct timezone, but because of the Z, Splunk adds 5 hours. I understand that Z is a timezone indicator, but how can I ignore it? The flow of this event is: Source --> HF --> Indexers. On the HF and indexers I don't have any props or transforms settings. On the search heads I extract a few fields from this event and it works, but I can't extract this time correctly without the Z. I put the following regex inside props.conf on my SHs, and also tried putting it in the indexers' props.conf:

    TIME_PREFIX = ^\d{2,4}-\d{1,2}-\d{1,2}T\d{1,2}:\d{1,2}:\d{1,2}\.\d{1,6}

I also tried adding TZ or TZ_ALIAS inside props.conf, but with no effect. Where could I be wrong? Thanks
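A props.conf sketch of the usual fix, assuming the sourcetype is cef_events and the desired zone is US/Eastern (both illustrative). Timestamp parsing happens on the first full Splunk instance in the event's path, the HF here, so the stanza belongs there rather than on the search heads, and it only affects newly indexed events. TIME_FORMAT simply stops before the Z, letting TZ decide the zone:

    [cef_events]
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N
    MAX_TIMESTAMP_LOOKAHEAD = 26
    TZ = US/Eastern

MAX_TIMESTAMP_LOOKAHEAD = 26 covers exactly the 2023-12-12T13:39:25.400399 portion, so the trailing Z is never consumed.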
Hi, I am facing an issue: no recent logs are found for sourcetype=abc:xyz (example) and index=pqr (example) after 25th November. We are able to see the logs up until the 25th of Nov. Guidance on how to investigate this would be helpful. Thanks, Pooja
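A quick first check, sketched with the example names above: ask when each host last indexed data for that sourcetype, then compare that against forwarder connectivity for the hosts that stopped.

    | tstats latest(_time) as last_seen where index=pqr sourcetype=abc:xyz by host
    | eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")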
Hello! A team at my organization is concerned about MongoDB 4.2 running on my Splunk hosts and wants me to create a plan to upgrade them to 6.0 at a minimum. From what I've read, it seems like this is either not possible or a bad idea, due to possible modifications that have been made by Splunk. Is there a documented way to upgrade to MongoDB 6.0 or newer? Thanks.
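For context, the mongod running on a Splunk host is the bundled KV store, which is versioned with Splunk Enterprise itself and upgraded as part of a Splunk upgrade rather than patched independently. One way to see what is currently running:

    $SPLUNK_HOME/bin/splunk show kvstore-status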
Hi, I am trying to ignore the logs that have level info and want to send them to the null queue. Example log (not including the pattern before and after, but it's JSON format and this is one of the fields):

    "level":"info",

I have tried the below and it does not work. Can someone tell me if this is correct, or is there another way? The props entry below is on the heavy forwarder:

    [abc]
    TRANSFORMS-null = infonull

transforms:

    [infonull]
    SOURCE_KEY = level
    REGEX = info
    DEST_KEY = queue
    FORMAT = nullQueue
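A sketch of the usual working form, keeping the stanza names above and assuming [abc] matches the events' sourcetype: at parse time there is no extracted field called level yet, so SOURCE_KEY = level matches nothing; the regex has to run against the raw event instead (SOURCE_KEY defaults to _raw and can be dropped):

    [infonull]
    REGEX = "level":\s*"info"
    DEST_KEY = queue
    FORMAT = nullQueue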
We got output in a table, but for each field all the values land in one column. We want to split the values into separate rows. The output table was attached as a screenshot for reference. Please help us split it.
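If each cell holds a multivalue field, one common fix is mvexpand; a minimal sketch assuming the offending field is called value (illustrative):

    ... | mvexpand value

mvexpand splits one multivalue field into separate rows. If several columns are multivalue in parallel, they often come from a stats values(...) clause, and adding the splitting field to the by clause instead may be the cleaner fix.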
Hello Team, I would like to install a UF on a Linux server, but I got confused. Which should I open: "9997 for the indexer cluster and 8089 for the deployment server", or "9997 and 8089 both for the deployment server"? Can anybody help with the port requirements?
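For reference, a UF typically makes two outbound connections: 9997 to the indexers (data) and 8089 to the deployment server (management). They map to these stanzas on the forwarder (hostnames illustrative):

    # outputs.conf
    [tcpout:primary_indexers]
    server = idx1.example.com:9997,idx2.example.com:9997

    # deploymentclient.conf
    [target-broker:deploymentServer]
    targetUri = ds.example.com:8089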
Hi, I need some help removing duplicates from a table. I am querying the accounts that use the plain-port connection for LDAP over a particular time range. My query:

    index=*** host=host1 OR host=host2 source=logpath
    | transaction startswith=protocol=LDAP
    | search BIND REQ NOT "protocol=LDAPS"
    | dedup "uid"

If I use the above query in a table, I get two values in one row, and for another timestamp the same value is repeated even though I am using dedup. I have tried consecutive=true, but I still see duplicates in the uid column. The results came out like this:

    timestamp                          uid
    2023-12-12T05:44:23.000-05:00      abc xyz
    2023-12-12T05:45:20.000-05:00      abc efg 123
    2023-12-12T05:45:20.000-05:00      xyz 456 efg

I need each value in a single row, and no duplicates should be displayed. Help will be much appreciated!
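A sketch of one likely fix, assuming uid became multivalue because transaction merged several events into one row: expand it before deduplicating, so dedup sees one uid per row.

    index=*** (host=host1 OR host=host2) source=logpath
    | transaction startswith="protocol=LDAP"
    | search BIND REQ NOT "protocol=LDAPS"
    | mvexpand uid
    | dedup uid
    | table _time uid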
I have two different logs where the error is captured in a different field in each log message (error_message and error_response). How can I capture both error_message and error_response without dropping the other logs?

    Log 1: message:"Lambda execution: exit with failure", message_type:"ERROR", error_message:"error reason update"
    Log 2: message:"Lambda execution: exit with failure", message_type:"ERROR", error_response:"updated error reason"

Expected output:

    Error                      count
    error reason update        1
    updated error reason       1
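A minimal sketch of the usual approach, assuming both fields are already extracted (the index name is illustrative): coalesce picks whichever of the two fields is present on each event, so neither log shape is dropped.

    index=lambda_logs message_type=ERROR
    | eval Error=coalesce(error_message, error_response)
    | stats count by Error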
Hi, I want to execute a different SPL query in a Dashboard Studio panel based on a dropdown value. The dropdown has only two items: if "Item1" is selected, the panel should execute "Query1"; if "Item2" is selected, the same panel should execute "Query2".

Item1 = "Aruba NetWorks"
Item2 = "Cisco"

Query1 = index=dot1x_index sourcetype=cisco_failed_src OR sourcetype=aruba_failed_src | eval node=if(isnotnull(node_vendor),"Cisco","Aruba NetWorks") | search node=$<dropdown token>$ | table node_dns node_ip region

Query2 = index=dot1x_index sourcetype=cisco_failed_src OR sourcetype=aruba_failed_src | eval node=if(isnotnull(node_vendor),"Cisco","Aruba NetWorks") | search node=$<dropdown token>$ | table Name

Kindly guide. Thanks, Abhineet Kumar
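Dashboard Studio has no built-in way to bind two alternative queries to one panel, so one workaround sketch is a single query that serves both items, at the cost of showing both vendors' columns (the unused one stays empty); $dropdown_tok$ stands in for the input's actual token name:

    index=dot1x_index sourcetype=cisco_failed_src OR sourcetype=aruba_failed_src
    | eval node=if(isnotnull(node_vendor),"Cisco","Aruba NetWorks")
    | search node="$dropdown_tok$"
    | table node node_dns node_ip region Name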
Hi, I'm using | loadjob savedsearch because my query is big and it takes time to load. I have some multi-select filters, and I want to add a time-range input filter:

    | loadjob savedsearch="mp:search:queryName" | where $pc$ AND $Version$

I'm not sure how to do that, because I need to use a field called Timestamp (which I get in my query; it is the time the event is written to the JSON file) and not the _time field. In addition, I don't know how to use loadjob savedsearch with a time range filter. Can you help me, please? Thanks, Maayan
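A sketch of one common pattern, assuming Timestamp is an ISO-style string (adjust the strptime format to match yours): addinfo exposes the panel's selected time range as info_min_time and info_max_time, which can be compared against your own field instead of _time.

    | loadjob savedsearch="mp:search:queryName"
    | where $pc$ AND $Version$
    | addinfo
    | eval ts=strptime(Timestamp, "%Y-%m-%dT%H:%M:%S")
    | where ts>=info_min_time AND ts<=info_max_time

Note info_max_time can be "+Infinity" when the range is open-ended, so an all-time selection may need a special case.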
I have gone through a few questions related to lookup file changes. I tried to use the same query to get the internal logs about my lookup file changes, but I am unable to fetch any logs. I would like to know where I can find information about the changes made to my lookup file, in particular the user who modified it and the time of the change. I tried searching the _audit index, but I am unable to find the exact logs (maybe my way of searching is wrong). Could anyone please help me find the history of modifications made to a lookup file?
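One place worth checking, sketched on the assumption that the edits were made through the UI or REST (both of which go through the lookup-table-files endpoint logged in _internal):

    index=_internal sourcetype=splunkd_access uri=*/data/lookup-table-files/* (method=POST OR method=DELETE)
    | table _time user method uri status

Edits made by search commands such as outputlookup land in index=_audit as search records instead, so the search strings themselves have to be inspected there.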
Hi, I want to export browser test results to a CSV or similar file where I can see the performance of a browser test for the past month or year. How can this be done?
How do I get the difference of each latest value from now? I have multiple values in the latest column and only one value in the now column, and I want the differences as output:

    latest                    now
    1701973800.000000         1702372339
    1701455400.000000
    1700418600.000000

i.e.

    1701973800.000000 - 1702372339 =
    1701455400.000000 - 1702372339 =

and so on.
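A minimal sketch, assuming latest is a multivalue field and now a single epoch value on the same row: mvmap applies the subtraction to every value of latest.

    ... | eval diff=mvmap(latest, latest - now)

Alternatively, | mvexpand latest | eval diff=latest - now gives the same numbers as separate rows.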
Hi experts, I want to extract the fields below into separate events to work on them further.

    INFO 2023-12-11 17:06:01,726 [[Runtime].Pay for NEW_API : [ { "API_NAME": "wurfbdjd", "DEP_DATE": "2023-12-08T00:00:00" }, { "API_NAME": "mcbhsa", "DEP_DATE": "2023-12-02T00:00:00" }, { "API_NAME": "owbaha", "DEP_DATE": "2023-12-02T00:00:00" }, { "API_NAME": "pdjna7aha", "DEP_DATE": "2023-11-20T00:00:00" } ]

I want to extract DEP_DATE and API_NAME into separate rows:

    DEP_DATE                   API_NAME
    2023-12-08T00:00:00        wurfbdjd
    2023-12-02T00:00:00        mcbhsa
    ...
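A sketch of one way to do this, assuming the JSON array always follows "NEW_API :" in the raw event (the index name is illustrative): pull the array out with rex, split it into one element per row, then extract the pair from each element.

    index=your_index "Pay for NEW_API"
    | rex "NEW_API : (?<json>\[.*\])"
    | spath input=json path={} output=item
    | mvexpand item
    | spath input=item
    | table DEP_DATE API_NAME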
I've installed Python for Scientific Computing (Windows 64-bit) because it's a requirement for MLTK, and while I'm setting up a Predict Numeric Fields experiment, there is an error in the fit command. The error message is:

    Error in 'fit' command: (ImportError) DLL load failed while importing _arpack: The specified procedure could not be found.

What should I do to solve this problem?
Hi All, I need some help with a search. I have one index, but it has multiple sources:

    index = Index1
    source = source 1
    source = source 2
    source = source 3
    source = source 4
    source = source 5
    source = source 6
    source = source 7

Now I have a requirement to create an alert search with only the first 4 sources, excluding the remaining three (source 5, 6, 7). I tried the query below:

    index=Index1 source IN ("source 1","source 2","source 3","source 4")

When I tried to also exclude sources explicitly, I got an error. Can you help with this?

    index=Index1 source IN ("source 1","source 2","source 3","source 4") source NOT IN ("source 4","source 5","source 3","source 6")

or

    index=Index1 source ! IN ("source 4","source 5","source 6") source IN ("source 1","source 2","source 3","source 4") source ! IN ("source 4","source 5","source 3","source 6")
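For reference, SPL has no ! IN or source NOT IN (...) form; negation goes in front of the clause, and an include-list alone already excludes everything else. Either sketch below should work:

    index=Index1 source IN ("source 1","source 2","source 3","source 4")

    index=Index1 NOT source IN ("source 5","source 6","source 7")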
Hi everyone, We have an on-premises edge device in a remote location, and it is added to the cloud. I would like to monitor and set an alert for both device offline and recovery statuses. While I can set an alert for the offline status, I'm a bit confused about including the recovery status. Can you please assist me in configuring the alert for both scenarios?