All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

How do I exclude user names that start with the number "0" in a correlation search on ES? This is the query:

| from inputlookup:access_tracker
| stats min(firstTime) as firstTime, max(lastTime) as lastTime by user
| where ((now()-'lastTime')/86400)>90

I want to remove all users that start with "0". Thank you
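One common way to do this (a sketch, assuming `user` is the field name as in the query above) is to add a regex filter after the existing `where`:

```spl
| from inputlookup:access_tracker
| stats min(firstTime) as firstTime, max(lastTime) as lastTime by user
| where ((now()-'lastTime')/86400)>90
| where NOT match(user, "^0")
```

A plain `| search NOT user=0*` before the stats would likely work as well.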
We are only getting 1500 group members in this inputlookup. Any ideas on how we can raise that limit?
Hi, I want to hide the Queue dropdown that is inside the row. How can I do that? Could you please help? Also, sometimes when I select one value from Application, two values appear in Queue for selection even though only one is checked; the other is probably left over in a buffer. Please help me select only the one that is checked.
Hi Team, I have multiple jobs that run daily, and I show their status in a table. Now I want to highlight cells depending on the jobs' run times. For example, there are three jobs, named A1, A2, and A3, and the data for 3 days looks like this:

date          jobName   StartTime             Status
22-02-2022    A1        22-02-2022 01:00:00   Success
22-02-2022    A2        22-02-2022 04:00:00   Success
22-02-2022    A3        22-02-2022 02:00:00   Success
23-02-2022    A1        23-02-2022 00:50:00   Success
23-02-2022    A2        23-03-2022 00:10:00   Success
23-02-2022    A3        23-03-2033 03:00:00   Failed
24-02-2022    A1        24-03-2022 00:20:00   Success
24-02-2022    A2        24-03-2022 01:00:00   Success
24-02-2022    A3        24-03-2033 00:00:00   Success

Now I want:

Job dependency 01: if A2 runs before A1, the cell should be highlighted in a different color, say red.
Job dependency 02: if A3 runs before A1, the cell should be highlighted in a different color, say yellow.

In the end the table should look like this:

Date         A1        A2        A3
22-02-2022   Success   Success   Success
23-02-2022   Success   Success   Failed
24-02-2022   Success   Success   Success

Please help me achieve this without using JS.
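One possible starting point for computing the dependency flags in SPL (a sketch; it assumes the field names date, jobName, and StartTime from the table above, and that the resulting flag fields would then drive cell colors via the table's formatting options rather than JS):

```spl
| eval start_epoch=strptime(StartTime, "%d-%m-%Y %H:%M:%S")
| chart first(start_epoch) over date by jobName
| eval A2_flag=if(A2 < A1, "red", "none"), A3_flag=if(A3 < A1, "yellow", "none")
```

The chart produces one column per job (A1, A2, A3) holding the start epoch, so the eval can compare them row by row.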
Hi Community, I am working on SPL to combine results from two tables that share a common column field, but with a complication: one of the tables has values that match the other table exactly, as well as values that are only subsets of values in the other table. Is there a way to combine them using join or another command and get the common values? Regards, Pravin
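For the exact-match part, a join is the usual starting point (a sketch; the lookup names table_a and table_b and the shared field key_field are hypothetical placeholders):

```spl
| inputlookup table_a
| join type=inner key_field
    [ | inputlookup table_b ]
```

For the subset values, something like a `where match(field_a, field_b)` after combining the rows without a join key might be worth exploring, depending on what "subset" means in the data.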
Hi, I have configured a cluster with 3 indexers, 1 search head, and 1 master node. I have activated the license master on the SH. I noticed that "index=_internal source=*license_usage.log" is only available on the SH, meaning the license data is being sent from the indexers to the SH. I am reading this document about how we should send all _internal data to the indexers: https://docs.splunk.com/Documentation/Splunk/7.0.1/Indexer/Forwardmasterdata?_ga=2.199674158.1222254230.1645723900-1328632051.1639410282 So when I activate this, am I not sending back the same data that the indexers have just sent me? Or does something prevent the data from being sent at all? Thanks in advance, Rob
Might be simple, but I run a search for tags and values and I get the information. What is the proper syntax to multiply one event's value by another event's value? Thanks in advance.
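Since eval only sees one event at a time, the values from the two events usually have to be brought onto one row first, e.g. with stats. A sketch (all the names here — sourcetype, tag, value, tag_a, tag_b — are hypothetical placeholders for your actual fields):

```spl
sourcetype=metrics tag IN ("tag_a", "tag_b")
| stats first(eval(if(tag="tag_a", value, null()))) as value_a
        first(eval(if(tag="tag_b", value, null()))) as value_b
| eval product=value_a * value_b
```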
Hello, I have the following event:

{
    dimensionMap: { ... }
    dimensions: [ ... ]
    timestamps: [
        1645718340000
        1645718400000
        1645718460000
        1645718520000
        1645718580000
        1645718640000
        1645718700000
        1645718760000
        1645718820000
        1645718880000
        1645718940000
    ]
    values: [
        0.54
        0.63
        0.37
        0.56
        0.47
        0.45
        0.65
        0.64
        1
        null
        null
    ]
}

I would like to link each timestamp to its corresponding value. For instance, it could look like this as a table:

Timestamp          Value
1645716780000      0.42
1645716840000      0.79
1645716900000      0.53
1645716960000      0.63
1645717020000      0.59
1645717080000      0.5
1645717140000      0.57
1645717200000      0.59
1645717260000      null
1645717380000      null

Thank you. Regards,
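A common pattern for pairing two parallel multivalue arrays like this is mvzip + mvexpand. A sketch, assuming the JSON arrays can be extracted with spath as shown:

```spl
| spath path=timestamps{} output=timestamps
| spath path=values{} output=values
| eval pair=mvzip(timestamps, values)
| mvexpand pair
| eval Timestamp=mvindex(split(pair, ","), 0), Value=mvindex(split(pair, ","), 1)
| table Timestamp Value
```

mvzip glues the n-th timestamp to the n-th value with a comma, mvexpand turns each pair into its own row, and the split/mvindex evals separate them again.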
Hi, I am relatively new to creating forms in Splunk. At the moment I am creating a form which contains a radio button called "Dedup". The function of this radio button is to remove all duplicate events that are identical with respect to sourcetype, source IP, dest IP, and dest port. Furthermore, the radio button should be empty by default. At the moment the radio button is simply greyed out in the UI. I am unsure whether I need to extend the base search already defined on the form. Can you please help? Attached is an image of the XML code and the UI output.
I've created an Ubuntu 20.04 server VM in VirtualBox. I'm trying to download Splunk Enterprise using the wget command, but it keeps timing out. I keep seeing "Connecting to download.splunk.com... 443... failed: Connection timed out." Does anyone have an idea why this keeps happening and how to get the install to work? I'm able to ping and have network connectivity. Thanks for any help.
Hi, I just updated Splunk from 7.3.3 to 8.1.7.2 and Splunk_TA_aws from 4.6.1 to 5.2.0 (build 882), via 5.0.4. After that I cannot open the TA's inputs page: it opens, but then it keeps showing "Loading" and nothing happens. When I look at the internal logs, I find the following entries in _internal:

02-24-2022 16:38:02.552 +0200 ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 290, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 680, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 1184, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 1245, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 401 Unauthorized -- call not properly authenticated

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 114, in init_persistent
    hand.execute(info)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 637, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/base_input_rh.py", line 64, in handleList
    inputs = self._collection.list()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 1479, in list
    return list(self.iter(count=count, **kwargs))
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 1438, in iter
    response = self.get(count=pagesize or count, offset=offset, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 1668, in get
    return super(Collection, self).get(name, owner, app, sharing, **query)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/client.py", line 766, in get
    **query)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/3rdparty/python3/splunklib/binding.py", line 304, in wrapper
    "Request failed: Session is not logged in.", he)
splunklib.binding.AuthenticationError: Request failed: Session is not logged in.

and after that:

02-24-2022 16:38:02.552 +0200 ERROR AdminManagerExternal - Unexpected error "<class 'splunklib.binding.AuthenticationError'>" from python handler: "Request failed: Session is not logged in.". See splunkd.log for more details.

This is an HF that receives HEC and modular inputs from GCP and AWS into separate indexers. It also acts as an intermediate HF for other UFs and HFs, on the client's own AWS environment. I found a couple of answers with somewhat similar cases (e.g. boto.cfg), but those didn't help us. Any ideas or hints on how to solve this? We cannot update to 8.2.5+ yet. This is probably some kind of hint: HTTP 401 Unauthorized -- call not properly authenticated. All inputs work as before after enabling them via conf files and restarting splunkd; before the update these inputs were managed with the same GUI (version 4.6.1). r. Ismo
I'm not that bad at searching, but this case is a little over my head and I need a clever idea.

I have postfix logs. They contain three types of events. All events have a queue_id field which identifies the message. Each event has either the from, the to, or the to and orig_to fields set. I want to do

stats values(from) by to orig_to

The problem is that the fields are in separate events, and both methods of joining them together are flawed in one way or another.

If I do

eventstats values(from) by queue_id

I get the desired result, but if I search over a longer timespan I hit a memory limit. Sure, I can raise the limit in the config, but it would be a better solution to find a more reasonable search.

If I try

transaction queue_id

everything works, but the transaction joins the to and orig_to fields into multivalue fields and there is no reliable way to "unjoin" them (within one transaction you can have more to values than orig_to values, so you can't simply map between the values).

So I'm a little stuck on how to transform my data into from, to, orig_to tuples that I can later pass to stats. Any hints? Of course, if nothing works I'll simply raise the limits and do eventstats, but it's not a pretty solution.
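Since to and orig_to occur together in the same event, one idea (a sketch; the sourcetype name is a placeholder, and it assumes the ";" separator never occurs inside an address) is to glue the pair into a single value per event before aggregating by queue_id, then split it apart again after expanding:

```spl
sourcetype=postfix
| eval to_pair=if(isnotnull(to), to . ";" . coalesce(orig_to, ""), null())
| stats values(from) as from, values(to_pair) as to_pair by queue_id
| mvexpand to_pair
| eval to=mvindex(split(to_pair, ";"), 0), orig_to=mvindex(split(to_pair, ";"), 1)
| stats values(from) by to, orig_to
```

Because the pairing happens inside each event, the to/orig_to association survives the aggregation, unlike with transaction.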
Hello Team, we are trying to integrate an SQL database using the Splunk DB Connect add-on and we are getting the error below. Is MS SQL 2012 compatible with the DB Connect and Splunk versions below?

Splunk DB Connect version: 3.5.1, build 4
Splunk Enterprise: 8.1.7.2
DB version: Microsoft SQL Server 2012

ERROR: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Certificates do not conform to algorithm constraints". ClientConnectionId:xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Hi all, I want to get the syslog events of my VMware ESXi hosts (free hypervisor) into my Splunk Enterprise (free edition). I set up the ESXi hosts and installed the "Add-on for VMware ESXi Logs" (Splunk_TA_esxilogs 4.2.1). When I search for the IP address of a host, I only see events with the sourcetype "vmware:esxlog:Rhttpproxy", even though I'm not filtering the search by that sourcetype, and these events aren't the same ones I see in the syslog file on the ESXi hosts. When searching just for "vmware" I see more sourcetypes, but again I don't see all events. The sourcetype "syslog" is bound to my Sophos UTM firewall. I want to get the smartd events of the ESXi hosts to check whether my SATA drives are OK; there are such events in the syslog file on the ESXi host, but I don't see them in Splunk. Any ideas how to see the events from the syslog file of the ESXi hosts in Splunk? Thank you and kind regards.
Hello, I can't find the authorize.conf file. I followed the link "How to deploy the Splunk App for Windows Infrastructure - Splunk Documentation". I also had these 2 messages and read several articles without managing to fix them:

Received event for unconfigured/disabled/deleted index=perfmon with source="source::PerfmonMk:Network_Interface" host="host::SRV-DC-02" sourcetype="sourcetype::PerfmonMk:Network_Interface". So far received events from 1 missing index(es).

Eventtype 'wineventlog_security' does not exist or is disabled.
Dear Splunkers, we are trying to build a baseline of login events, using this example. The search is at the end of the post. The problem we are facing is that no outlier events are ever detected. We are using the CERT Insider Threat Dataset r4.2. No matter how much we change the number of stdevs, it never classifies an event as an outlier. Maybe it doesn't work because there are multiple logins per user per day. How could we change it so that it only uses the first login event per user per day? Does anyone have an idea what we can try? Thank you in advance.

activity=Logon
| eventstats avg("_time") AS avg stdev("_time") AS stdev
| eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2))
| eval isOutlier=if('_time' < lowerBound OR '_time' > upperBound, 1, 0)
| table _time isOutlier
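One way to reduce the data to the first login per user per day before the outlier logic (a sketch based on the search above; note that it also compares seconds-since-midnight rather than raw epoch times, since averaging raw epochs across different days makes the bounds hard to interpret):

```spl
activity=Logon
| bin span=1d _time as day
| stats min(_time) as first_login by user, day
| eval time_of_day=first_login - relative_time(first_login, "@d")
| eventstats avg(time_of_day) as avg, stdev(time_of_day) as stdev
| eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2))
| eval isOutlier=if(time_of_day < lowerBound OR time_of_day > upperBound, 1, 0)
```

A per-user baseline (adding `by user` to the eventstats) may be worth trying as well, since individual login habits vary.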
Hi, on one of the old UFs the fishbucket has occupied the complete disk space and the service has stopped. Will deleting the fishbucket cause the forwarder to resend all the old data that is already indexed?
Hello, we are on Splunk 6.5.1 (same version for the forwarder; unfortunately we can't upgrade at the moment). We installed the forwarder on a Windows machine, and we configured deploymentclient.conf to talk to the deployment server, like this:

[target-broker:deploymentServer]
targetUri = deployment.ourdomain.ext:80

In the forwarder logs we see this error showing up:

02-24-2022 12:19:54.474 +0100 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected

The communication with deployment.ourdomain.ext seems to be working (telnet works; calls to port 80 are redirected to port 8089 of the deployment server). Why is the forwarder giving that error? We restarted it many times, but with no result. Thanks
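The not_connected handshake error usually means the client never completes a connection to the deployment server's management port. If the port-80 redirect is in doubt, one sketch worth trying (assuming the DS listens on the default management port 8089 and the network allows direct access) is pointing the client straight at 8089 in deploymentclient.conf, followed by a forwarder restart:

```
[target-broker:deploymentServer]
targetUri = deployment.ourdomain.ext:8089
```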
name    uuid                               sysfs  size  dm-st   paths  failures  action  path_faults  vend      prod  rev
mpatha  360002ac000000000000010e30001c751  dm-1   120G  active  4      0         0       3PARdata     VV        3315
mpathb  360002ac000000000000010fb0001c751  dm-0   240G  active  4      0         0       3PARdata     VV        3315

The above is my multiline event in table format. I need to extract the values below (mpath, uuid):

mpatha 360002ac000000000000010e30001c751
mpathb 360002ac000000000000010fb0001c751

Please help me; I'm new to this. Thank you so much.
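A sketch of one way to pull the pairs out with rex (it assumes each mpath row starts on its own line inside the multiline event, so the multiline flag `(?m)` applies):

```spl
| rex max_match=0 "(?m)^(?<mpath>mpath\w+)\s+(?<uuid>\S+)"
| table mpath uuid
```

With max_match=0, mpath and uuid become multivalue fields holding one entry per matching row.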
Is it possible to restrict a specific role from running searches like index=* while still allowing it access to a specific dashboard?