All Topics


Hello Splunkers, I would like to understand why a cert is needed for the UF when the indexer already has requireClientCert disabled. Thanks in advance.

On the indexer, we have the following inputs.conf stanzas configured:

[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myServerCert.pem
sslPassword = mySecret
requireClientCert = false

On the UF, we have the following outputs.conf stanzas configured:

[indexer_discovery:cm1]
master_uri = https://cm1:8089
pass4SymmKey = mySecretSymmKey

[tcpout]
defaultGroup = ssl-test

[tcpout:ssl-test]
indexerDiscovery = master-es
useACK = true
useClientSSLCompression = false

The UF failed to connect to the indexer, with the following error in the UF's splunkd.log:

02-11-2023 02:57:57.421 +0000 ERROR TcpOutputProc [1715593 TcpOutEloop] - target=x.x.x.x:9997 ssl=1 mismatch with ssl config in outputs.conf for server, skipping..

The issue is resolved once we set clientCert in the forwarder's outputs.conf stanza:

[tcpout:ssl-test]
indexerDiscovery = master-es
useACK = true
useClientSSLCompression = false
clientCert = $SPLUNK_HOME/etc/auth/mycerts/MyClientCert.pem

From our tests so far, this requirement seems to be specific to splunktcp-ssl. Inter-Splunk communications between the UF and the deployment server or cluster manager (for indexer discovery) do not seem to require the client cert.

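For reference, a quick way to check the indexer's splunktcp-ssl listener independently of the UF is openssl's s_client. This is only a diagnostic sketch; the CA file path is an assumption (use whichever CA signed myServerCert.pem):

# verify the certificate the indexer presents on 9997 (placeholder host and CA path)
openssl s_client -connect x.x.x.x:9997 -CAfile $SPLUNK_HOME/etc/auth/mycerts/myCACert.pem </dev/null
# "Verify return code: 0 (ok)" means the server cert chain is fine; a failure here points at
# a CA/chain problem rather than the clientCert requirement described above.
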
We had a Splunk indexer crash out of nowhere, and this is the message in the logs right before the crash:

Encountered S2S Exception=Unexpected duplicate in addUncommittedEventId eventid=57 received from for data received from src=*.

What is the cause of this?

Hello, this is my very first post, so corrections are welcome! I am looking for a way to add Select/Deselect All in Splunk Classic or, if possible, to change the delimiter in Dashboard Studio. I have a list of IPs/emails and I query them as a multiselect from a lookup file. My issue is that in Studio the delimiter is ",", which does not work for me as I need OR/AND. For Classic I did fix it, but I have to select each value one by one. Is there any fix/workaround for my issues? Help is much appreciated. Thank you.

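On the Classic (Simple XML) side, a common workaround is a static "All" choice combined with a delimiter of " OR ", so one click covers every value. A minimal sketch, assuming a lookup called my_list.csv with a field named value (both placeholders, not from the post):

<input type="multiselect" token="sel_tok" searchWhenChanged="true">
  <label>IP / Email</label>
  <!-- static match-all choice so the user does not have to pick values one by one -->
  <choice value="*">All</choice>
  <default>*</default>
  <valuePrefix>value="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>value</fieldForLabel>
  <fieldForValue>value</fieldForValue>
  <search>
    <query>| inputlookup my_list.csv | fields value</query>
  </search>
</input>

The token is then used as ... | search $sel_tok$ so the selections expand to value="a" OR value="b".
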
Hello, I would like to create a report about our daily exports. For each day I want to see when the export started and when it ended, so on the X-axis I want the date and on the Y-axis the time of day. Additionally, I would like to add a "limit" line to show when the export has to be ready at the latest. How can I put the time of day on the Y-axis and add a "limit" line to the chart? Thank you, Zuz

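One possible approach, sketched under the assumption that each export writes start and end events into an index (index=exports, sourcetype=export_log and the 06:00 deadline are placeholders): convert start/end to hours after midnight so they can sit on a numeric Y-axis, and add a constant series for the limit line.

index=exports sourcetype=export_log
| eval day=strftime(_time, "%Y-%m-%d")
| stats min(_time) as start_epoch max(_time) as end_epoch by day
| eval start_hour=round((start_epoch - relative_time(start_epoch, "@d")) / 3600, 2)
| eval end_hour=round((end_epoch - relative_time(end_epoch, "@d")) / 3600, 2)
| eval limit_hour=6.0
| table day start_hour end_hour limit_hour

Rendered as a line or column chart with day on the X-axis, the constant limit_hour column draws the flat deadline line.
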
While checking the historical data for one of the KPIs on one of my glass tables, the tile showed only the latest alert_value for the global time range selected (the tile is a single-value visualization). But my itsi_summary index has multiple alert_value entries, written by my KPI base search running every 5 minutes. My global time range is 1 hour, yet the glass table tile shows the latest alert_value from the 55-60 minute run. Ideally it should aggregate all the alert_value results for the service over the selected range and show the final value in the tile (single value).

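To compare what the tile renders with an aggregated figure over the same hour, a sketch against the ITSI summary index (the kpiid filter is a placeholder for your KPI's ID):

index=itsi_summary kpiid="<your_kpi_id>" earliest=-60m
| stats avg(alert_value) as avg_alert_value latest(alert_value) as latest_alert_value
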
Hey there folks, I'm looking at a way to measure a decrease in logging levels by host and EventCode. I've set up the query below, which is fine from a static perspective, but I'm very much looking to identify the scenario where, let's say, we see a 50% drop-off in logging for a specific EventCode over the past 24-hour period.

index=wineventlog sourcetype=WinEventLog EventCode=4625
| fields EventCode, host
| stats dc(host) as num_unique_hosts
| where num_unique_hosts < 500

Is there an easy way to convert this over? Thanks kindly, Tom

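One way to express the 50% drop is to compare the last 24 hours with the 24 hours before that, per host and EventCode. A sketch (the index and sourcetype come from the question; the -50% threshold is the stated goal):

index=wineventlog sourcetype=WinEventLog earliest=-48h@h latest=@h
| eval is_recent=if(_time >= relative_time(now(), "-24h@h"), 1, 0)
| eval is_previous=1-is_recent
| stats sum(is_recent) as recent sum(is_previous) as previous by host, EventCode
| eval pct_change=round((recent - previous) / previous * 100, 1)
| where previous > 0 AND pct_change <= -50
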
I found this: "Index and Forward data into another splunk instance", and then found the current version of the referenced documentation (Documentation - Splunk® Enterprise - Forwarding Data - Route and filter data), but I am still confused. We have a requirement to push data to another Splunk instance, outside our immediate network. From which node of the Splunk cluster can I do this?

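For context, forwarding to a second Splunk instance is usually configured on whichever tier already handles the data, most commonly the indexers themselves (with the config pushed from the cluster manager as a peer app) or an intermediate heavy forwarder. A minimal outputs.conf sketch for indexers that should keep indexing locally while also forwarding; the output group name and target host are placeholders:

# outputs.conf on the indexing peers (e.g. distributed via the cluster manager)
[tcpout]
defaultGroup = external_splunk
indexAndForward = true    # keep indexing locally while forwarding a copy

[tcpout:external_splunk]
server = remote-splunk.example.com:9997
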
Hello all, we use the following Cisco apps, which in general are working fine:

Cisco Networks App for Splunk Enterprise (https://splunkbase.splunk.com/app/1352)
Cisco Networks Add-on for Splunk Enterprise (https://splunkbase.splunk.com/app/1467)

When I edit the dashboards of the Cisco Networks App, I find the following macro which should make it possible to select a "tenant":

Macro: `get_tenants_for_user_role($env:user$)`

Expanded, it looks like this:

inputlookup cisco_ios_tenants
| stats values(index) AS index BY tenant_name,roles
| eval index=mvjoin(index,",")
| eval index=replace(index,","," OR index=")
| eval index="index=" + index
| search [| rest splunk_server=local /services/authentication/users/$user$ | fields roles]

Unfortunately there is no lookup (definition) named cisco_ios_tenants, neither in the app nor in the add-on. I also found in the default.xml navigation that there should be a <view name="cisco_networks_tenants" />; this does not exist either.

I'm wondering how that tenant support works and how it can be configured. Does anyone have information about this? I was not able to find anything.

Basically what we want to achieve (maybe there is a better way of doing it): our network colleagues want the ability to select data based on something like a location/region. As the tenant macro is implemented in nearly all dashboards, I think it could be the way to solve this. Thanks in advance! Many regards, Michael

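Judging only from the expanded macro above, the tenant mechanism appears to expect a lookup with tenant_name, roles and index columns, mapping a role to the indexes (tenants) it may see. A sketch of what creating it by hand could look like; the rows are invented examples and only the lookup name comes from the macro:

# lookups/cisco_ios_tenants.csv
tenant_name,roles,index
site_emea,network_emea,cisco_emea
site_apac,network_apac,cisco_apac

# transforms.conf, so `| inputlookup cisco_ios_tenants` resolves
[cisco_ios_tenants]
filename = cisco_ios_tenants.csv
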
Hi, I have different email addresses in my logs and I need to filter them in order to distinguish real users from technical users. I noticed that real users have an email like name.surname@company.com, so I would like to extract the emails matching "anycharacters.anycharacters@anything", because in some cases the local part can also contain numbers (e.g. name.surname1@company.com). Thank you in advance!

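A sketch of a matching pattern, assuming the address is already in a field called email (a placeholder; rename it to whatever your extraction uses): letters, a dot, letters with optional trailing digits, then @.

... | where match(email, "^[A-Za-z]+\.[A-Za-z]+\d*@")

Or, to pull the address out of the raw event text instead:

... | rex "(?<user_email>[A-Za-z]+\.[A-Za-z]+\d*@[A-Za-z0-9.-]+)"
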
I have the following XML:

<input type="multiselect" token="exclude_user" searchWhenChanged="true">
  <label>Exclude User</label>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>, </delimiter>
  <fieldForLabel>user</fieldForLabel>
  <fieldForValue>user</fieldForValue>
  <search base="filtered">
    <query>| stats values(User) as user | mvexpand user | dedup user</query>
  </search>
  <choice value="SYSTEM">SYSTEM</choice>
  <choice value="-">NONE</choice>
  <default>SYSTEM</default>
  <initialValue>SYSTEM</initialValue>
</input>

The filter is set up as an exclusion filter using a post-process search in conjunction with the base search, such as:

| search NOT User IN ($exclude_user$)

The multiselect works until the value "NONE" is selected, which puts | search NOT User IN (" ") into the post-process search. Text below the filter then displays "Duplicate values causing conflict". This doesn't prevent the search from completing, and the results I receive are what I expect to be returned. It would be ideal for the message below the multiselect filter not to be displayed. Does anyone have a suggestion on how I can get rid of it? I have tried the following:

- Adding | dedup User to the post-process search.
- Changing the fieldForLabel value to " " and NONE.

Not sure what to try next. Thanks in advance.

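The "Duplicate values causing conflict" message is commonly reported when the dynamic search returns a value that also exists as a static <choice>; assuming that is what happens here, one workaround sketch is to filter the static values out of the populating search (field names kept from the post):

<query>| stats values(User) as user | mvexpand user | dedup user | search user!="SYSTEM" user!="-"</query>
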
Hi Splunkers. I have noticed a strange behavior in Splunk. I have a correlation search that I created a while ago; I made sure to select "Notable" under the Adaptive Response section so that it creates a notable, and I also tested that the search produces results when run manually. BUT it does not generate notables in the Incident Review dashboard! So I searched index=notable and found 4 events for this correlation search in the last 30 days. Then I checked the same index for another correlation search that DOES generate notables in the Incident Review dashboard (4 notables in the last 30 days), and indeed I found 4 events in the notable index. I also used the "Correlation Search Audit" app (https://splunkbase.splunk.com/app/4144), and indeed it shows that this correlation search has been triggered 4 times in the last 30 days.

The search does not have any lookups (in case you were going to ask about lookup permissions). It does use the Web data model (which has Global permissions). I'm using the admin user, so I have sufficient privileges. My environment:

Splunk Enterprise version: 8.1.0
Enterprise Security version: 6.2.0
OS: Red Hat Enterprise Linux Server 7.7 (Maipo)

Any idea why this is happening?

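One thing that can produce exactly this picture (events in index=notable but nothing in Incident Review) is a notable event suppression filtering them out at review time. Assuming ES stores suppressions as event types prefixed with notable_suppression- (my understanding, worth confirming for 6.2.0), a quick check could look like:

| rest /services/saved/eventtypes splunk_server=local
| search title="notable_suppression-*"
| table title search disabled
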
Hello there. Posting just for reference. It seems there is some misconfiguration issue between Splunkbase and the Splunk default config. The default config says:

# /opt/splunk/bin/splunk btool server list applicationsManagement | grep updateHost
updateHost = https://apps.splunk.com
# /opt/splunk/bin/splunk btool server list applicationsManagement | grep Check
sslAltNameToCheck = splunkbase.splunk.com, apps.splunk.com, cdn.apps.splunk.com
sslCommonNameToCheck = apps.splunk.com, cdn.apps.splunk.com

However, the servers respond with:

# curl -v https://apps.splunk.com 2>&1 | grep subject:
* subject: C=US; ST=California; L=San Francisco; O=Splunk Inc.; CN=splunkbase.splunk.com

Whereas 8.2.5 (I don't have any other 8.2 at hand to check) seems to work despite those settings, 9.0.3 enforces the settings strictly and says:

ERROR X509 [25665 TcpChannelThread] - X509 certificate (CN=splunkbase.splunk.com,O=Splunk Inc.,L=San Francisco,ST=California,C=US) common name (splunkbase.splunk.com) did not match any allowed names (apps.splunk.com,cdn.apps.splunk.com)

Workaround: override the setting in server.conf with

[applicationsManagement]
sslCommonNameToCheck = splunkbase.splunk.com,apps.splunk.com,cdn.apps.splunk.com

Hi, Can an Oracle DB be monitored in a hotel environment?
How can I display a value cumulatively every hour? For example, 10:00 = 250 pieces, 11:00 = 200 pieces, 12:00 = 150 pieces should be displayed as follows:

10:00 = 250 pieces
11:00 = 450 pieces
12:00 = 600 pieces

Which search command should I use here? Thanks in advance!

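A minimal sketch using accum on top of an hourly timechart; the index, sourcetype and pieces field are placeholders for your own data:

index=production sourcetype=output_log
| timechart span=1h sum(pieces) as hourly_pieces
| accum hourly_pieces as cumulative_pieces

(streamstats sum(hourly_pieces) as cumulative_pieces would produce the same running total.)
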
Hello, the Splunk Add-on Builder won't load: it shows the header, but the rest of the page is blank (Splunk Enterprise 9.0.3 + app 4.1.1).

- Reinstalling does not help.
- There is nothing in splunkd.log or the Add-on Builder logs.
- I found this event in web_service.log, but I don't know what to do with it:

File "/opt/splunk/etc/apps/splunk_app_addon-builder/bin/splunk_app_add_on_builder/solnlib/utils.py", line 169, in extract_http_scheme_host_port
    raise ValueError(http_url + " is not in http(s)://hostname:port format")
ValueError: splunk."mydomain"."ext" is not in http(s)://hostname:port format

Note: I changed the real hostname. Everything else is OK, all other apps work fine. I reach the server on https://splunk."mydomain"."ext":8000. The splunk."mydomain"."ext" format is set in server.conf as serverName and in web.conf for mgmtHostPort (with :"port"). Any ideas? Thanks in advance.

I am new to Splunk. I have to create a dashboard that compares the current day with the same day of last week based on the count of request IDs.

index="test" s_name="test-app*" earliest=-0d@d latest=now
| bucket span=1h _time
| stats dc(message.req_id) as tcount by _time
| eval ReportKey="today"
| append
    [ search index="test" s_name="test-app*" earliest=-7d@d latest=-6d@d
      | bucket span=1h _time
      | stats dc(message.req_id) as week by _time
      | eval ReportKey="lweek" ]
| timechart span=1h sum(week) as Lweek, sum(tcount) as Today by ReportKey

I want to create an overlapping chart of the two days. Thanks in advance.

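One reason the two series will not overlap as written is that last week's events keep last week's _time, so they land on a different part of the X-axis. A sketch of the usual fix, shifting the earlier day's timestamps forward by 7 days (604800 seconds) so both days share the same axis; the index and field names follow the post:

index="test" s_name="test-app*" earliest=-0d@d latest=now
| bucket span=1h _time
| stats dc(message.req_id) as Today by _time
| append
    [ search index="test" s_name="test-app*" earliest=-7d@d latest=-6d@d
      | bucket span=1h _time
      | stats dc(message.req_id) as LastWeek by _time
      | eval _time=_time+604800 ]
| timechart span=1h sum(Today) as Today sum(LastWeek) as LastWeek
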
My query is this:

index=log AND 1378

There are two events:

20230112, 1378, error A/B/C, duration 100
20230112, 1378, error A/B, duration 2

I want to select only the one event whose duration is greater than the other's.

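Assuming duration is already extracted as a numeric field (if it only appears in the raw text, a rex extraction would be needed first), a sketch that keeps just the event with the largest duration:

index=log 1378
| sort - duration
| head 1
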
Hi all, I want to have, on a HF (8.1.4), multiple _meta values driven by the values of one field, in one stanza. Any suggestion how? Example:

accountName = a  ->  _meta = c-team1
accountName = b  ->  _meta = c-team2
accountName = c  ->  _meta = c-team3

Regards, Jan

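One pattern that may fit (a sketch, not from the post): rather than a single _meta value on the input stanza, use index-time transforms on the HF so the metadata field is set per event based on accountName. The sourcetype, stanza names, and the choice to render c-team1/2/3 as a field c_team with values team1/2/3 are all assumptions:

# props.conf on the HF
[my_sourcetype]
TRANSFORMS-set_team = set_team_a, set_team_b, set_team_c

# transforms.conf
[set_team_a]
REGEX = accountName\s*=\s*a\b
FORMAT = c_team::team1
WRITE_META = true

[set_team_b]
REGEX = accountName\s*=\s*b\b
FORMAT = c_team::team2
WRITE_META = true

[set_team_c]
REGEX = accountName\s*=\s*c\b
FORMAT = c_team::team3
WRITE_META = true

# fields.conf, so the new indexed field is searchable
[c_team]
INDEXED = true
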
Is it possible to do line breaking and event breaking in a Universal Forwarder?

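As I understand it, a UF does not run the full line-merging/parsing pipeline (that still happens on an indexer or heavy forwarder), but it can be given an EVENT_BREAKER so it only splits its output stream on event boundaries when load-balancing across indexers. A props.conf sketch on the UF; the sourcetype and the timestamp pattern are placeholders:

# props.conf on the universal forwarder
[my_sourcetype]
EVENT_BREAKER_ENABLE = true
# break the stream before lines that begin with an ISO-style date
EVENT_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
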
We have recently upgraded an indexer from 8.2.6 to 9.0.2 (running on Windows), and since then we have been plagued by an intermittent issue where the indexer stops indexing new data but otherwise functions fine. The indexing rate drops to 0, yet it still returns search results. Restarting the Splunk service is all that is required and it starts indexing again. The problem seems very similar to this post, but I can't see that any of the known issues quoted there relate to 9.0.2; it should already be covered by the "server side fix" alluded to by one of the people replying to that post. When the problem happens, we see these errors in the splunkd log of the indexer (sorry for the screenshots, best I could do). Any clues as to what is going on here?

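A search that often helps narrow down this kind of stall is the blocked-queue view from the indexer's own metrics; it shows which pipeline queues were blocked around the time indexing stopped (the host filter is a placeholder):

index=_internal host=<your_indexer> source=*metrics.log group=queue blocked=true
| timechart span=1m count by name
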