All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Is it possible to disable some stanzas from, for example, the Windows TA, using another TA? I apologize ahead of time if this comes off newb-ish and needlessly complex, but we've run into an interesting problem. We have a number of Citrix XenApp servers in our environment, and a little while back they began struggling with some of the processes that Splunk spawns through scripted inputs; they were taking up a lot of CPU and RAM. The short-term fix seemed to be disabling these inputs, like [admon] and [WinRegMon], and setting them to interval = -1. I found these stanzas in both the \etc\system\ inputs.conf and the Windows TA inputs.conf. To keep them disabled, and to apply this only on Citrix servers, I created a Citrix server class. Then I took the Windows TA, copied it, renamed it to something else, and kept everything the same except that I disabled those stanzas in this "Citrix" Windows TA (they are enabled in the standard Windows TA). This accomplished the goal of keeping them disabled for Citrix, but I'm not a fan of this solution because there are probably references within the Windows TA that need it to have the standard folder path ("Splunk_TA_windows"); I've already located and corrected a couple. So I'm interested in moving the Citrix servers back to the standard Splunk_TA_windows TA, but I want to keep those stanzas disabled for just those servers. My question: can I create a custom TA, apply it only to that Citrix server class, and have just an inputs.conf file with those stanzas disabled? I'm just not sure which TA would "win" if they had conflicting instructions for the stanzas: Splunk_TA_windows saying that [WinRegMon] is not disabled, for example, but the custom TA saying that it should be disabled. The custom TA's inputs.conf file would have stanzas like this:

[WinRegMon]
disabled=1
interval=-1
baseline=0

[admon]
interval=-1
disabled=1
baseline=0
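A sketch of what such an override add-on could contain — the app name (TA_citrix_overrides) is hypothetical, and the app would be deployed only to the Citrix server class:

```
# $SPLUNK_HOME/etc/deployment-apps/TA_citrix_overrides/local/inputs.conf
# Deployed only to the Citrix server class; overrides the stock Windows TA.

[WinRegMon]
disabled = 1
interval = -1
baseline = 0

[admon]
disabled = 1
interval = -1
baseline = 0
```

As a general rule, settings in any app's local directory take precedence over settings in any app's default directory, and among files at the same level precedence follows ASCII order of the app names. Since the stock Splunk_TA_windows ships its inputs in default, an override app using local should win; verify the merged result on a test forwarder with `splunk btool inputs list WinRegMon --debug`.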
Hello, I'm having some issues with the add-on I created. I developed it with a simple Python script that queries an API; if the result is OK I get a JSON object and write it to Splunk, and if it fails it writes a JSON object with the failed response. I've tested the script locally on my computer and it works, and I've tested it in the Add-on Builder and it works — even when there is an error, it returns the failed response. I have set up several inputs, each with its own API URL, but somehow some hosts keep failing even when the host is up. It looks to me like the script is stuck; it has an exception-handling function, so it catches errors when it fails. While testing the up/down behavior, I noticed that when the host goes down, the input never detects when the host comes back online. Does anyone know why this is happening?
Hi Splunkers, I'm using an MSA (Managed Service Account) to run my Splunk. So, if I change my MSA password in AD, do I need to change the password on the Splunk server as well?
Hello! I have a lookup table with the fields 'name' and 'last_login'. I'm trying to find users who haven't logged in within the past 30 days. Originally, I had this:

| inputlookup Users.csv
| where strptime('last_login',"%m/%d/%Y %H:%M:%S") < relative_time(now(),"-30d@d") OR isnull(last_login)
| sort last_login desc

However, it is only outputting users who logged in 30+ days ago. I would like to exclude users who are still logging in recently (within those 30 days). Thank you! Any help would be greatly appreciated!
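If the lookup can contain several rows per user, keeping only each user's most recent login before filtering may be what's missing — a sketch, assuming the Users.csv file and field names from the post:

```
| inputlookup Users.csv
| eval last_login_epoch = strptime(last_login, "%m/%d/%Y %H:%M:%S")
| stats max(last_login_epoch) as last_login_epoch by name
| where isnull(last_login_epoch) OR last_login_epoch < relative_time(now(), "-30d@d")
| eval last_login = strftime(last_login_epoch, "%m/%d/%Y %H:%M:%S")
| sort - last_login_epoch
```

The stats max() keeps one row per user with their latest login, so anyone who has logged in during the last 30 days is excluded by the where clause even if they also have older rows.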
Hello All, I have a large dataset, "audit.cost_records", in which I am trying to locate a correlation based on a large number of fields. These fields are large in number (over 1,000 total), and many can be grouped (for my current purposes). Some groups may consist of 10+ fields while others may have only 1. Some example field names are: ab, ab-HE, ab-SC, ab-LS, rs, rs-SH, rz, xr, xr-FL, xr-SH, xr-SS. In this example, all of the ab items should be grouped, as with rs and xr. Unfortunately, I am new to Splunk and my understanding of the Splunk language is elementary at best. I do have somewhat advanced, or at least journeyman, knowledge of SQL and basic knowledge of a few programming languages (Java and the like), but that doesn't seem to be helping me here. Based on several hours of searching this community plus trial and error, I have arrived at the below. I was trying to use wildcards to group by similar field names, but I've just read somewhere that Splunk may segment the field on the '-' character, which makes my wildcard not work as I intend.

| from datamodel:"AUDIT.COST_RECORDS"
| eval Group1=if(match(fieldName,"ab*"), "ABGroup", Group1)
| eval Group1=if(match(fieldName,"rs*"), "RSGroup", Group1)
| timechart span=30d sum(cost) as Cost by Group1

Does anyone have any recommendations on how to solve this search? My overall intent is to have a year-to-date line chart (spanned monthly) showing cost over time for each "Group".
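One possible fix: match() takes a regular expression, not a wildcard pattern, so "ab*" actually means "an a followed by zero or more b's". Anchoring the prefix and allowing an optional -suffix sidesteps the segmentation concern. A sketch reusing the names from the post (fieldName and the data model name are taken from the example and may need adjusting):

```
| from datamodel:"AUDIT.COST_RECORDS"
| eval Group1 = case(
    match(fieldName, "^ab(-|$)"), "ABGroup",
    match(fieldName, "^rs(-|$)"), "RSGroup",
    match(fieldName, "^xr(-|$)"), "XRGroup",
    true(), "Other")
| timechart span=1mon sum(cost) as Cost by Group1
```

span=1mon gives the monthly buckets for a year-to-date line chart; span=30d would drift across month boundaries.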
Hello, we configured custom business transactions in one of our customers' environments, but we couldn't see our custom business transactions in the BT menu; all our custom BTs are being displayed under the "All Other Traffic" group. Could you please help us with this? Note: the same configuration/BTs work in other environments/applications.
Hi Team, after upgrading to Splunk version 8.2.2.1, we have started seeing a warning in one specific dashboard: "cannot expand lookup field 'triggertype' due to a reference cycle in the lookup configuration. Check search.log for details and update configuration to remove the reference cycle". I have seen other Splunk Community answers for the field userid, but I would like to know the exact resolution for this triggertype field. Regards,
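This warning usually means an automatic lookup's OUTPUT field is also consumed as an input by a lookup (possibly the same one), so expansion never terminates. A hypothetical props.conf illustration — the lookup and sourcetype names here are invented, not taken from the dashboard in question:

```
# Before: the lookup's input field and output field are both "triggertype",
# which creates a reference cycle when the lookup is expanded.
[my:sourcetype]
LOOKUP-trigger = trigger_lookup triggertype OUTPUT triggertype

# After: renaming the output field breaks the cycle.
[my:sourcetype]
LOOKUP-trigger = trigger_lookup triggertype OUTPUT triggertype AS triggertype_desc
```

Running `splunk btool props list --debug | grep -i triggertype` on the search head can help locate which app defines the offending LOOKUP- stanza.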
Hi all... I'm an experienced Splunk user (old-school Splunk). All the Google results seem to point to solutions for the old version of the Splunk dashboard builder. All I want to do is add a time picker to my dashboard (like we used to be able to do). Any ideas? (I'm using the latest version of Splunk with the Dashboard Studio option selected; all the dashboard elements are set to 24 hrs.)
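In Dashboard Studio, a time picker is a global input defined in the dashboard's JSON source (there is also an "Add input" control in the edit toolbar). A minimal sketch — the input and token names here are arbitrary:

```
{
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "title": "Time Range",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      }
    }
  },
  "layout": {
    "globalInputs": ["input_global_trp"]
  }
}
```

Data sources then reference the token as `$global_time.earliest$` and `$global_time.latest$` in their query parameters, replacing the hard-coded 24-hour ranges.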
Question regarding the indexes.conf on my search heads. Each index stanza contains the paths to the home/cold/thawed directories, but they also have frozenTimePeriodInSecs and maxDataSize values. My question is: can these two values be removed from the search heads? I thought that the indexers hold the size and retention settings in indexes.conf, and that search heads only hold the paths needed to retrieve the data. Please help me understand. Thanks
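For comparison, retention and sizing settings such as frozenTimePeriodInSecs and maxDataSize only take effect where data is actually stored, i.e. on the indexers; a search head generally just needs the index to be defined. A sketch of a pared-down search-head stanza (the index name is hypothetical):

```
# indexes.conf on the search head (sketch) – no retention/size settings
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

Leaving the extra settings in place is harmless on a search head, but removing them makes it clearer that the indexers are the source of truth for retention.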
After login, I want to change the default landing page to an alerting page.
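One way to do this is via user-prefs.conf, which controls which app users land in after login — a sketch, assuming a hypothetical app called alerting_app:

```
# $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf
# Defaults for all users who have not set their own preference
[general_default]
default_namespace = alerting_app
```

Within that app, the default view is whichever one is marked `default="true"` in the app's navigation XML, so an alerting dashboard set as the app's default view becomes the post-login page.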
1. My network security device (an F5 WAF) sends syslog/event logs to our SIEM tool (Splunk); what kind of forwarder does my network security device need? 2. Can we parse the payload of the events Splunk receives from the WAF, and how?
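On question 1: a network appliance typically can't run a Splunk forwarder itself; the usual pattern is to point its syslog output at a syslog server or a heavy forwarder with a network input, and let props/transforms (for example from the Splunk Add-on for F5 BIG-IP) handle the payload parsing. A sketch of such an input — the port, index, and sourcetype below are assumptions:

```
# inputs.conf on the receiving heavy forwarder (sketch)
[udp://514]
sourcetype = f5:bigip:syslog
index = netsec
connection_host = ip
```

For production volumes, a dedicated syslog receiver (e.g. syslog-ng writing to files monitored by a forwarder) is generally preferred over a raw UDP input, since it survives Splunk restarts without data loss.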
Dear Splunk Community, I have the following search:

index=websphere 200 OK POST

And I have different platforms that I find like this:

index=websphere 200 OK POST LINUX
index=websphere 200 OK POST Windows
index=websphere 200 OK POST zLinux

I am currently using the following query to count all 200 OK POST events per platform:

index=websphere 200 OK POST LINUX | stats count | rename count AS "Linux"
| append [search index=websphere 200 OK POST WINDOWS | stats count | rename count AS "Windows"]
| append [search index=websphere 200 OK POST ZLINUX | stats count | rename count AS "zLinux"]

This is just an example; I have many more platforms that I search like in the query above. I have two issues: it's slow, and it counts per platform into horizontal table headers that I don't want. I would like to change the above so that I get the following output:

Platform | Count
Linux | 24
Windows | 50
zLinux | 0

Also, using append with subsearches seems a bit devious. There must be a simpler, faster, and better way to do this, but how? Thanks in advance.

EDIT: Please note that the results are all in _raw; there are no platform fields or anything generated.
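A single pass with an eval'd platform label avoids the appends and produces the vertical table directly — a sketch, assuming the platform strings really appear as search terms in _raw (zLinux is tested first so the broader Linux case doesn't claim it):

```
index=websphere 200 OK POST (LINUX OR Windows OR zLinux)
| eval Platform = case(
    searchmatch("zLinux"), "zLinux",
    searchmatch("LINUX"), "Linux",
    searchmatch("Windows"), "Windows")
| stats count by Platform
```

One caveat: stats by only emits rows for platforms that have events, so a platform with zero matches (like zLinux | 0 above) won't appear unless you backfill, e.g. with `| append [| makeresults | eval Platform="zLinux", count=0] | stats max(count) as Count by Platform`.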
Hi, I am trying to send data into a cluster with 1 SH, 1 MN, and 3 indexers. I am unsure whether I should: A: send data to the search head, then use output groups to send the data on to the indexers; or B: send the data directly to the indexers (however, I don't have a way to load-balance this data). Regards, Robert
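Option B is the standard pattern: forwarders send directly to all indexer peers, and the forwarder's built-in auto load balancing spreads the data across them, so no external load balancer is needed. A sketch of the forwarder-side outputs.conf (host names are placeholders):

```
# outputs.conf on the forwarder (sketch)
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
# Auto load balancing rotates across all listed peers by default
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
```

The search head should not sit in the data path; it only searches the indexers.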
Trying to figure out how to loop in Splunk. I have the query below, and my end goal is to chart the percentage over _time in a timechart.

index=anIndex sourcetype=aSource StringA earliest=-480m latest=-240m
| stats count as A
| appendcols [search index=anIndex sourcetype=aSource StringB earliest=-480m latest=-240m | stats count as B]
| eval _time = relative_time(now(), "-240m@m")
| eval percentage = round((A / B) * 100)
| fields + _time, percentage

Variables that need to change with each loop — let's assume I want to show the percentage from 4 hours in the past to the current time in 30-minute increments:
1) The earliest and latest times need to increment by +30 minutes, starting at (latest=-480m, earliest=-240m), until I get to 0.
2) _time needs to be relative to when I start (beginning at now() - 240m) and be adjusted on each loop by +30 minutes until I get to 0.

I have looked at many examples but do not understand how to apply them to my requirements...
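Rather than looping, one search over the whole window with a 30-minute timechart span computes the ratio per bucket — a sketch using the names from the post:

```
index=anIndex sourcetype=aSource (StringA OR StringB) earliest=-240m@m latest=now
| eval type = if(searchmatch("StringA"), "A", "B")
| timechart span=30m count(eval(type=="A")) as A, count(eval(type=="B")) as B
| eval percentage = round((A / B) * 100)
| fields _time percentage
```

If each point should instead reflect a trailing 4-hour window (as the -480m/-240m pair suggests), insert `| streamstats window=8 sum(A) as A sum(B) as B` between the timechart and the eval to roll eight 30-minute buckets into each point.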
Hi guys, I want to get logs from Splunk to my Socket.IO server, but I receive a BAD MESSAGE REQUEST error on the Socket.IO server side. I can receive data from Splunk on a plain socket, but I need to use Socket.IO with WebSocket, and I am facing this issue. Can you guys help me receive data from Splunk on a Socket.IO server?
HI, guys, I want to get logs from splunk to me socket.io Server but i receive BAD MESSAGE REQUEST error on socket.io server side.  I can receive data from splunk to simple socket but i need to use socket.io with websocket and i am facing this issue can you guys help me to receive data from splunk to socket.io Server?
I wonder what the best practice is when working with JS in dashboards. I'm on Splunk Enterprise 8.2.1, a single Windows instance for learning. When I use a JS file just for setting tokens, it's enough to hit <host>:<port>/<language>/_bump after changes. But when I require a second JS file inside my JS (a separate JS for a custom view), I have to rename the second file, restart the splunkd service, and then _bump; _bump alone is not working, and neither is /debug/refresh. What is the best practice here? How does Splunk behave on different systems? Our production Splunk, for example, is clustered on Linux servers.
Hi - We have been using OTel to send data into a single Splunk install and it is working very well. I am now looking to move this to production and send the data to my cluster (3 indexers), but I am unsure how to tell the exporter to do this. In a forwarder I would give it the host and port of the 3 indexers, but how do I do this in an exporter?

Configure the exporter:

exporters:
  otlp/aggregation:
    # push to the aggregator
    endpoint: ${AGGREGATOR_HOST}:${AGGREGATOR_PORT}
    insecure: true
  splunk_hec:
    # push to Splunk
    token: "a04daf32-68b9-48b2-88a0-6ac53b3ec002"
    endpoint: "https://mx33456vm:8088/services/collector"
    source: "mx"
    sourcetype: "otel"
    index: "metrics_test"
    insecure_skip_verify: true

Thanks for your help in advance
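The splunk_hec exporter takes a single endpoint rather than a list of peers, so the usual approach for a cluster is to front the indexers' HEC ports with a load balancer and point the exporter at that VIP — a sketch (the host name and token below are placeholders, not values from this environment):

```
exporters:
  splunk_hec:
    token: "00000000-0000-0000-0000-000000000000"
    # A load balancer / VIP fronting HEC (port 8088) on all three indexers
    endpoint: "https://hec-vip.example.com:8088/services/collector"
    source: "mx"
    sourcetype: "otel"
    index: "metrics_test"
    insecure_skip_verify: true
```

Unlike a Splunk forwarder, the OTel collector has no indexer-discovery or auto-load-balancing mechanism, so the spreading across peers has to happen in front of HEC.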
Hi Team, can someone guide me on how to extract fields from the raw data below?
1) I need to extract the id, e.g. 5d302144-3cab-387d-8e8c-2532a32b78fe.
2) I need to extract the Starting time and the Stopping time.

2021-09-01 22:08:48,329 INFO [main] o.a.n.controller.StandardProcessorNode Starting SalesforceBulkAPIJobStatusProcessorV1[id=5d302144-3cab-387d-8e8c-2532a32b78fe]
2021-08-20 12:53:23,476 INFO [main] o.a.n.controller.StandardProcessorNode Stopping processor: SalesforceBatchJobStatusProcessor[id=11c59e11-4bc5-3bbb-9fea-3c12407f3aa2]

Can someone please guide me on this?
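A sketch with rex, assuming the events look exactly like the two lines above (the extracted field names are arbitrary):

```
... | rex "(?<action>Starting|Stopping)(?:\s+processor:)?\s+(?<processor>\w+)\[id=(?<proc_id>[0-9a-f-]{36})\]"
| eval action_time = strftime(_time, "%Y-%m-%d %H:%M:%S,%3N")
| table action_time action processor proc_id
```

The leading timestamp should already be parsed into _time at index time, so "Starting time" and "Stopping time" fall out of filtering on the extracted action field rather than needing a separate extraction.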
Hi Experts! I wondered if there was a way of doing this. I need to compare the timestamp of a log line to an epoch time that is also on the same line, and show the difference. Example:

2021-10-05 04:49:10.138 [pool-1-thread-1] INFO order - [Pool]Book={inst=example,1=[],2=[feed-|time=1633427347600000000}

Looking at it manually, the difference is:
2021-10-05 04:49:10.138 (standard time)
2021-10-05 04:49:07.600 (epoch time)
Difference: 2.54 seconds

Thanks in advance
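A sketch in SPL — the time= value looks like epoch nanoseconds, so it needs dividing by 1e9 before comparing with _time (this assumes _time is parsed from the leading timestamp of the event):

```
... | rex "time=(?<epoch_ns>\d+)\}"
| eval epoch_sec = epoch_ns / 1000000000
| eval diff_sec = round(_time - epoch_sec, 3)
| table _time epoch_sec diff_sec
```

For the example line, diff_sec would come out around 2.538 seconds, matching the manual calculation.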
Installing a new HF and getting the "UiHttpListener - Web UI disabled in web.conf [settings]; not starting" message.

/opt/splunk/etc/system/local
[splunk@ilissplfwd10 local]$ cat web.conf
[settings]
splunkdConnectionTimeout = 300
#privKeyPath =/opt/splunk/etc/auth/amd_certificates/ilissplfwd05.key
#serverCert = /opt/splunk/etc/auth/amd_certificates/ilissplfwd05.pem
#privKeyPath = etc/auth/splunkweb/ilissplfwd05.key
#serverCert = etc/auth/splunkweb/ilissplfwd05.pem
#
# enableSplunkWebSSL = true
httpport = 8000

[splunk@ilissplfwd10 local]$ cat server.conf
[general]
serverName = ilissplfwd10
pass4SymmKey = $7$Byj9tE1Bz0uc/sXtMDIlSnuR96UpkmVZHEuj7i0giRrtt5r1zNk=

[sslConfig]
sslPassword = $7$SMjaRC7EGQjvqnX8xl9tkV+VzYcXdQ2rt0Ui0WCC8UzO3IJLqsJd8Q==

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free

[splunk@ilissplfwd10 local]$
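That message usually means the merged configuration has the web server switched off somewhere (e.g. startwebserver = 0 in some other web.conf that wins precedence), since the file shown above doesn't disable it. A way to track down the source and explicitly re-enable the UI — the paths are standard, but the offending file may differ:

```
# Find which file supplies each [settings] value
$SPLUNK_HOME/bin/splunk btool web list settings --debug | grep -i startwebserver

# /opt/splunk/etc/system/local/web.conf – force the UI on
[settings]
startwebserver = 1
httpport = 8000
```

A restart of Splunk is needed after the change for splunkweb to come up.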