All Topics


I have a log that's confusing me when it comes to extracting the time. From 01:00:00 to 23:59:59 it's fine, but the vendor uses hour 24 instead of 00 for midnight to 1 AM. So at 00:30:00 (12:30 AM) the timestamp reads 24:30:00. Has anyone run into this, or does anyone know how to get the 24:xx hours recognized as the 00:xx hours they should be? Here's an actual cut-and-paste from the log timestamp: "24:57:05:996" Thanks in advance,
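To make it concrete, at search time I imagine normalizing it with something like this (just a sketch; raw_ts is an assumed field holding the extracted timestamp string):

| eval fixed_ts=replace(raw_ts, "^24:", "00:")
| eval parsed_time=strptime(fixed_ts, "%H:%M:%S:%3N")

An index-time fix would presumably have to happen before timestamp extraction runs, which is the part I'm unsure about.
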
I'm supposed to get some data into Splunk twice a day. I want to create 2 email alerts as follows: 9 AM email alert: should fire if no data was received at 5 AM and/or if no data was received the previous day at noon. 3 PM email alert: should fire if no data was received at noon and/or if no data was received earlier the same morning at 5 AM. Thanks for your help in advance. @bowesmana
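For the 9 AM alert, I'm picturing something along these lines, scheduled at 9 AM and triggering when the number of results is greater than zero (a sketch; my_feed is a placeholder index):

| tstats count where index=my_feed earliest=@d+5h latest=@d+9h
| where count=0

The previous-day-at-noon check would presumably be a similar search with earliest=-1d@d+12h latest=-1d@d+13h.
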
Greetings! How can I install Splunk indexers on CentOS 7? What will I need, and what steps should I follow? I need to set up a Splunk test environment. So far I have only installed Splunk Enterprise as a search head, and I am able to browse it through the web GUI and create users. I also need to install Splunk indexers and a Splunk forwarder, plus a management node that will be able to receive syslog from network security device sources and manage the search head. Kindly help and guide me with the steps. Thank you in advance!
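For what it's worth, my rough understanding of a single indexer install on CentOS 7 looks like this (a sketch; the download URL is a placeholder for whatever the Splunk download page gives you):

# as root
wget -O splunk.tgz '<splunk-enterprise-tgz-url>'
tar xzf splunk.tgz -C /opt
/opt/splunk/bin/splunk start --accept-license
/opt/splunk/bin/splunk enable listen 9997   # receiving port for forwarders
/opt/splunk/bin/splunk enable boot-start

I'd still appreciate guidance on how the indexers, forwarder, and management node should fit together.
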
Hi, we are migrating our cluster from on-prem to a SmartStore-enabled cluster in AWS, a few indexes at a time, and during this process event counts do not match in some cases. Case 1: the event count in the AWS cluster is less than the event count on-prem. Case 2: the event count in the AWS cluster is more than the event count on-prem. Any idea what might cause the event counts not to match?
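For reference, this is roughly how I'm comparing the two sides, running the same search over the same fixed time range on each cluster (a sketch):

| tstats count where index=* earliest=-30d@d latest=@d by index
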
Can I get the data on my indexers that is more than 45 days old?
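In other words, something like this is what I mean (a sketch; the index name is a placeholder):

index=your_index earliest=0 latest=-45d@d
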
I am in the process of creating a search to detect significant decreases in hard drive free space. Using the results from my search, I would like to then create a timechart to show how the usage has changed over time. This is my search:

index=perfmon collection=LogicalDisk sourcetype="Perfmon:LogicalDisk" counter="% Free Space" (instance!="HarddiskVolume*") (instance!=_Total)
| eval usedSpace=round(100-Value,0)
| stats min(usedSpace) as min, avg(usedSpace) as avg by host, instance
| eval delta = avg - min
| where delta>10
| rename instance as drive

My results return the hostname, the drive letter, the minimum, the average, and the delta for the disk space usage in a tabular format. Let's say it returns one host; I would then like to use that same host to return a timechart for the host and drive. Is this possible?
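What I'm imagining is feeding the matching hosts back into a timechart via a subsearch, roughly like this (a sketch):

index=perfmon sourcetype="Perfmon:LogicalDisk" counter="% Free Space"
    [ search index=perfmon sourcetype="Perfmon:LogicalDisk" counter="% Free Space" instance!="HarddiskVolume*" instance!=_Total
      | eval usedSpace=round(100-Value,0)
      | stats min(usedSpace) as min, avg(usedSpace) as avg by host
      | eval delta=avg-min
      | where delta>10
      | fields host ]
| eval usedSpace=round(100-Value,0)
| timechart avg(usedSpace) by host
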
I'm trying something like this:

my base search | where data.value1 == data.value2
my base search | where data.value1 != data.value2

I've tried variations of match() and case() as well. A single event has the two fields I want to compare.
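From what I've read since, field names containing dots apparently need single quotes in eval/where expressions, so presumably something like:

my base search | where 'data.value1' == 'data.value2'
my base search | where 'data.value1' != 'data.value2'
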
How do I exclude user names that start with the number "0" in a correlation search in ES? This is the query:

| from inputlookup:access_tracker
| stats min(firstTime) as firstTime, max(lastTime) as lastTime by user
| where ((now()-'lastTime')/86400)>90

I want to remove all users that start with "0". Thank you
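I assume it's a matter of adding a filter such as this before the stats (a sketch):

| where NOT match(user, "^0")
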
We are only getting 1,500 group members back in this inputlookup. Any ideas on how we can expand it?
Hi, I want to hide the Queue dropdown that is inside the row. How do I do that? Could you please help? Also, sometimes when I select one value from Application, two values show up in Queue for the selection even though only one is checked; the other is probably left over in a buffer. Please help me make it select only the one that is checked.
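For the hiding part, my understanding is that Simple XML inputs can be shown/hidden with a token dependency, so something like this (a sketch; the token and lookup names are made up):

<input type="dropdown" token="queue" depends="$show_queue$">
  <label>Queue</label>
  <search>
    <query>| inputlookup queues.csv | fields queue</query>
  </search>
  <fieldForLabel>queue</fieldForLabel>
  <fieldForValue>queue</fieldForValue>
</input>

The dropdown stays hidden until something sets $show_queue$.
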
Hi Team, I have multiple jobs that run daily, and I show the status of these jobs in a table. Now I want to highlight a cell depending on the job's run time. For example, there are three jobs, A1, A2, and A3, and the data for 3 days looks like this:

date          jobName    StartTime              Status
22-02-2022    A1         22-02-2022 01:00:00    Success
22-02-2022    A2         22-02-2022 04:00:00    Success
22-02-2022    A3         22-02-2022 02:00:00    Success
23-02-2022    A1         23-02-2022 00:50:00    Success
23-02-2022    A2         23-02-2022 00:10:00    Success
23-02-2022    A3         23-02-2022 03:00:00    Failed
24-02-2022    A1         24-02-2022 00:20:00    Success
24-02-2022    A2         24-02-2022 01:00:00    Success
24-02-2022    A3         24-02-2022 00:00:00    Success

Now I want:

Job dependency 01: if A2 runs before A1, its cell should be highlighted with a different color, say red.
Job dependency 02: if A3 runs before A1, its cell should be highlighted with a different color, say yellow.

In the end the table should look like:

Date          A1         A2         A3
22-02-2022    Success    Success    Success
23-02-2022    Success    Success    Failed
24-02-2022    Success    Success    Success

Please help me achieve this without using JS.
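The part I can sketch so far is computing the dependency flags per date (field names as in the sample data); the coloring itself would then presumably be done with <format type="color"> expressions in Simple XML rather than JS:

| eval start=strptime(StartTime, "%d-%m-%Y %H:%M:%S")
| eventstats min(eval(if(jobName=="A1", start, null()))) as startA1 by date
| eval flag=case(jobName=="A2" AND start<startA1, "red",
                 jobName=="A3" AND start<startA1, "yellow",
                 true(), null())
| chart values(Status) over date by jobName
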
Hi Community, I am working on building SPL to combine results from two tables that share a common column field, but with a complication: one of the tables has values that exactly match the other table, as well as values that are subsets of the other table's values. Is there a way to combine them using join or another command and get the common values? Regards, Pravin
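For the exact-match part, what I've been trying looks roughly like this (a sketch; the index and field names are made up):

index=table_a | table id, field_a
| join type=inner id
    [ search index=table_b | table id, field_b ]

It's the subset-value case that I can't see how to handle.
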
Hi, I have configured a cluster with 3 indexers, 1 search head, and 1 manager node. I have activated the license master on the SH, and I have noticed that "index=_internal source=*license_usage.log" is only available on the SH, meaning that data is being sent from the indexers to the SH. I am reading this document about how we should send all _internal data to the indexers: https://docs.splunk.com/Documentation/Splunk/7.0.1/Indexer/Forwardmasterdata?_ga=2.199674158.1222254230.1645723900-1328632051.1639410282 So, when I activate this, am I not sending back the same data that the indexers have just sent me? Or does something happen where the data is not sent at all? Thanks in advance Rob
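For context, my understanding of the forwarding setup the doc describes is roughly this outputs.conf on the search head (a sketch; the indexer host names are placeholders):

[indexAndForward]
index = false

[tcpout]
defaultGroup = my_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
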
This might be simple, but I run a search for tags and values and I get the information. What is the proper syntax to multiply one event's value by another event's value? Thanks in advance.
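I imagine it's something along these lines, using stats to pull the two events' values onto one row first (a sketch; the tag and field names are assumptions):

index=my_index tag IN ("tagA", "tagB")
| stats latest(eval(if(tag=="tagA", value, null()))) as valA,
        latest(eval(if(tag=="tagB", value, null()))) as valB
| eval product = valA * valB
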
Hello, I have the following event:

{
  "dimensionMap": { ... },
  "dimensions": [ ... ],
  "timestamps": [
    1645718340000,
    1645718400000,
    1645718460000,
    1645718520000,
    1645718580000,
    1645718640000,
    1645718700000,
    1645718760000,
    1645718820000,
    1645718880000,
    1645718940000
  ],
  "values": [
    0.54, 0.63, 0.37, 0.56, 0.47, 0.45, 0.65, 0.64, 1, null, null
  ]
}

I would like to link each timestamp to its corresponding value. For instance, following this example, it could look like this as a table:

Timestamp          Value
1645718340000      0.54
1645718400000      0.63
1645718460000      0.37
1645718520000      0.56
1645718580000      0.47
1645718640000      0.45
1645718700000      0.65
1645718760000      0.64
1645718820000      1
1645718880000      null
1645718940000      null

Thank you. Regards,
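From what I've seen elsewhere, the usual pattern seems to be mvzip + mvexpand, so presumably something like this (a sketch; it assumes the arrays are auto-extracted as multivalue fields named timestamps{} and values{}):

| eval zipped=mvzip('timestamps{}', 'values{}')
| mvexpand zipped
| eval Timestamp=mvindex(split(zipped, ","), 0), Value=mvindex(split(zipped, ","), 1)
| table Timestamp, Value
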
Hi, I am relatively new to creating forms in Splunk. At the moment, I am creating a form that contains a radio button called "Dedup". The function of this radio button is to remove all duplicate events that are identical with respect to sourcetype, source IP, dest IP, and dest port. Furthermore, the radio button should be empty by default. At the moment, the radio button is simply greyed out in the UI. I am unsure whether I need to extend the base search already defined on the form. Can you please help? Attached is an image of the XML code and the UI output.
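For reference, what I'm aiming for is roughly this (a sketch; the IP/port field names are assumptions):

<input type="radio" token="dedup_cmd">
  <label>Dedup</label>
  <choice value=" ">Off</choice>
  <choice value="| dedup sourcetype, src_ip, dest_ip, dest_port">On</choice>
  <default> </default>
</input>

with the panel search referencing the token, e.g. index=main $dedup_cmd$ | stats count by sourcetype.
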
I've created an Ubuntu 20.04 server VM in VirtualBox. I'm trying to download Splunk Enterprise using the wget command, but it keeps timing out. I keep seeing "Connecting to download.splunk.com ... :443... failed: Connection timed out." Does anyone have any idea why this keeps happening and how to get the install to work? I'm able to ping and have network connectivity. Thanks for any help.
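For what it's worth, these are the kinds of checks I've been trying (the proxy host below is just a placeholder, in case a proxy turns out to be involved):

# does TCP 443 work at all from the VM?
curl -vI https://download.splunk.com
# if the network requires a proxy, wget needs it set explicitly
https_proxy=http://proxy.example.com:3128 wget <splunk-download-url>
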
Hi, I just updated Splunk from 7.3.3 to 8.1.7.2 and Splunk_TA_aws from 4.6.1 to 5.2.0 (build 882) via 5.0.4. After that I cannot get into the TA's inputs page. It opens, but then "Loading" keeps spinning and nothing happens. When I looked at the internal logs, I found the following entries in _internal:

02-24-2022 16:38:02.552 +0200 ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 290, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 680, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 1184, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 1245, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 401 Unauthorized -- call not properly authenticated

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 114, in init_persistent
    hand.execute(info)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 637, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/base_input_rh.py", line 64, in handleList
    inputs = self._collection.list()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 1479, in list
    return list(self.iter(count=count, **kwargs))
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 1438, in iter
    response = self.get(count=pagesize or count, offset=offset, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 1668, in get
    return super(Collection, self).get(name, owner, app, sharing, **query)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 766, in get
    **query)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 304, in wrapper
    "Request failed: Session is not logged in.", he)
splunklib.binding.AuthenticationError: Request failed: Session is not logged in.

and after that:

02-24-2022 16:38:02.552 +0200 ERROR AdminManagerExternal - Unexpected error "<class 'splunklib.binding.AuthenticationError'>" from python handler: "Request failed: Session is not logged in.". See splunkd.log for more details.

This is an HF that receives HEC and mod inputs from GCP and AWS and sends them to separate indexers. It also acts as an intermediate HF for other UFs and HFs. It's in the client's own AWS environment. I found a couple of answers for somewhat similar cases (e.g. boto.cfg), but those didn't help us. Any ideas and hints on how to solve this? We cannot update to 8.2.5+ yet. This is probably some kind of hint: HTTP 401 Unauthorized -- call not properly authenticated. All inputs work as before after enabling them via conf files and restarting splunkd. Before the update, those inputs were disabled with the same GUI (version 4.6.1). r. Ismo
I'm not that bad at searching, but this case is a little over my head and I need a clever idea. I have postfix logs. They have three types of events. All events have a queue_id field which identifies the message. Each event has either the from field, the to field, or the to and orig_to fields set. I want to do:

stats values(from) by to, orig_to

The problem is that the fields are in separate events, and both methods I have for joining them together are flawed in one way or another. If I do:

eventstats values(from) by queue_id

I get the desired result, but if I search over a longer timespan I hit a memory limit. Sure, I can raise the limit in the config, but it would be better to find a more reasonable search. If I try:

transaction queue_id

everything works, of course, but the transaction joins the to and orig_to fields into multivalue fields, and there is no reliable way to "unjoin" them (over one transaction you can have more to values than orig_to values, so you can't simply map between them). So I'm a little stuck on how to transform my data to get (from, to, orig_to) tuples that I can later pass to stats. Any hints? Of course, if nothing works I'll simply raise the limits and do eventstats, but it's not a pretty solution.
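One direction I'm considering: since to and orig_to are set in the same event, pair them up per event before aggregating, roughly like this (a sketch):

| eval recipient=if(isnotnull(orig_to), to . "|" . orig_to, to)
| stats values(from) as from, values(recipient) as recipient by queue_id
| mvexpand recipient
| eval to=mvindex(split(recipient, "|"), 0), orig_to=mvindex(split(recipient, "|"), 1)
| fillnull value="-" orig_to
| stats values(from) as from by to, orig_to

At least stats by queue_id should be cheaper than eventstats or transaction over a long timespan.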