All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

We have integrated SCOM with Splunk using the Splunk Add-on for Microsoft SCOM on a heavy forwarder. We are getting ALL Perfmon data in Splunk for every performance counter enabled in SCOM. Is there any way to control the data pulled from SCOM into Splunk? We only need Disk, Memory, and Processor. How do we restrict the Splunk SCOM add-on to filter and fetch only these three counter groups? (We don't want other performance data, such as SQL or IIS, sent to Splunk.)
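One possible approach (a sketch, not from the add-on's documentation) is to discard the unwanted perfmon events at the heavy forwarder with a nullQueue filter. The sourcetype name and the `object=` field layout below are assumptions based on typical perfmon-style events; adjust them to match what you actually see in your data:

```
# props.conf on the heavy forwarder -- sourcetype name is an assumption
[mscs:perfmon]
TRANSFORMS-filterperf = dropAllPerf, keepCorePerf

# transforms.conf -- first send everything to nullQueue,
# then route the counters you want back to the index queue
[dropAllPerf]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keepCorePerf]
REGEX = object=(LogicalDisk|Memory|Processor)
DEST_KEY = queue
FORMAT = indexQueue
```

Note the stanza order matters: transforms run left to right, so the keep rule must come after the drop rule.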
Working on a new ES install. Does the ES search head need the app and add-on for each technology or just the add-on? Does it matter if the app and add-on are both installed?
Hi there folks! Please refer to the screenshot of the original log file in Notepad: http://prntscr.com/w82jd1 Although it is one single event, as illustrated with the red line separator, the Splunk UF is reading it as two separate events with different timestamps. I did read about MAX_EVENTS and TRUNCATE, and my props.conf on the UF and indexer is updated to the following:

[default]
MAX_EVENTS = 100000
TRUNCATE = 100000
BREAK_ONLY_BEFORE_DATE = false

However, the problem persists. Any insights on what might be causing this issue? Thanks for your help in advance. Cheers
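One thing worth checking: line-breaking settings take effect at the parsing tier (the first full Splunk instance, i.e. the indexer or a heavy forwarder), not on a universal forwarder, unless INDEXED_EXTRACTIONS is in play. A sketch of a scoped stanza on the indexer, rather than [default]; the sourcetype name and break regex are placeholders to adapt to the actual log:

```
# props.conf on the indexer/HF (parsing tier)
[my_app_logs]
SHOULD_LINEMERGE = false
# break only where a new timestamp begins -- adjust the regex to your format
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2})
MAX_EVENTS = 100000
TRUNCATE = 100000
```

Scoping to the sourcetype avoids [default] unexpectedly changing line breaking for every other input.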
I am trying to find the peak TPS for each one-hour window over the last 24 hours. I have the query below, but it gives the peak TPS for one hour only. How do I run that query across the entire 24-hour duration, with a result for every hour?

index=whatever
| timechart span=1s count AS TPS avg(RT) as Avg_RT
| eventstats max(TPS) as peakTPS
| eval peakTime=if(peakTPS==TPS,_time,null())
| stats avg(TPS) as avgTPS first(peakTPS) as peakTPS first(peakTime) as peakTime
| eval peakTPS=round(peakTPS,2), avgTPS=round(avgTPS,2)
| table avgTPS peakTPS peakTime
| convert timeformat="%Y-%m-%d %H:%M:%S %Z" ctime(peakTime) as peakTime
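One possible sketch (field names follow the original query): compute the per-second TPS first, then re-bucket those one-second counts into one-hour windows so each hour gets its own peak and average:

```
index=whatever
| timechart span=1s count AS TPS
| bucket _time span=1h
| stats max(TPS) as peakTPS avg(TPS) as avgTPS by _time
| eval peakTPS=round(peakTPS,2), avgTPS=round(avgTPS,2)
```

Run over the last 24 hours, this should return 24 rows, one per hour.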
Hi folks, I'm having an issue getting Juniper logs to show the correct sourcetype. Right now they all simply show up as "sourcetype=juniper" instead of the expected ones like "juniper:junos:firewall". I have the Splunk Juniper app installed, and the input is set to use the juniper sourcetype, so the right props/transforms should be breaking it down into sub-sourcetypes. Am I missing a step? Thanks!
I have two panels - a Network panel and a Details panel - and a text input. My Details panel loads based on the results from my text input. What I would like to do is create a token from the results in the Details panel, and then load the Network panel with the matching values from the Details panel. My text input token is text_token, and I created a token for the Details panel:

<done>
  <set token="result_token">$result.my_field$</set>
</done>

Appreciate any help! Thank you.
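A sketch of how the Network panel could consume that token (index and field names are placeholders); note that $result.my_field$ captures only the first row of the Details panel's results:

```
<panel>
  <title>Network</title>
  <table>
    <search>
      <!-- index and field name are assumptions -->
      <query>index=my_index my_field="$result_token$" | table *</query>
    </search>
  </table>
</panel>
```

Setting `depends="$result_token$"` on the panel would also keep it hidden until the token is populated.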
Hello! I am new to Splunk and am tasked with automating the deployment (many, many indexers on bare metal). I noticed that there are two projects on GitHub and was wondering if someone could clarify some questions.

https://github.com/splunk/splunk-ansible - This project seems thorough, but is it made only for the splunk-docker project? The playbooks only run against localhost, meaning I can only deploy one instance at a time. This doesn't make sense to me, as one of the purposes of Ansible is to run against many hosts at once.

https://github.com/splunk/ansible-role-for-splunk - This project isn't as built out, but it seems I can deploy multiple instances at once on bare metal. The question is: can I still deploy everything I'll need (indexers, search heads, cluster master, etc.)?

I know these are broad questions. I would just like some advice on which project to go with. Much thanks, Chris
I want to find the first transaction that occurs after a different type of event. Let's say we have this event: "Service ready". Then we have these events:

traceId=123 "Authentication started"
traceId=123 "Authentication complete"

I want to find this transaction:

sourcetype=mytype | transaction traceId startswith="Authentication started" endswith="Authentication complete"

But I want to filter these transactions and only see the first one that occurs after the "Service ready" event.
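One possible sketch: use a subsearch to set the search window's earliest time to the latest "Service ready" event, then keep the earliest matching transaction. The `return earliest` trick emits an `earliest=<epoch>` term into the outer search:

```
sourcetype=mytype
    [ search sourcetype=mytype "Service ready" | stats latest(_time) as earliest | return earliest ]
| transaction traceId startswith="Authentication started" endswith="Authentication complete"
| sort 0 + _time
| head 1
```

This assumes you want transactions after the most recent "Service ready"; adjust `latest(_time)` to `earliest(_time)` for the first occurrence instead.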
Hello all, I found a similar question but did not see an answer: https://community.splunk.com/t5/Getting-Data-In/No-time-or-host-in-forwarded-syslog-messages/m-p/52627

I am receiving Checkpoint logs via tcp://514 and trying to forward the data to an HA syslog-ng environment. There is a NetScaler in front of two different syslog-ng servers with round-robin load balancing. I disabled the second syslog-ng host so that all logs get sent to sys-01. I see the following coming in:

Msg: 2020-12-22 18:30 host-blah-blah.xxx.xxx.xxx.com time=1608661800|hostname=logger|product=Firewall|layer_name=xx-stl-private Security|layer_uuid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|match_id=197|parent_rule=0|rule_action=Accept|rule_uid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|action=Accept|conn_direction=Internal|ifdir=inbound|ifname=eth2-01.716|logid=0|loguid={0x00000000,0x00,0x0000000,0xc0000000}|origin=xxx.xxx.xxx.xxx|originsicname=blah_gw-stl-prv|sequencenum=199|time=1608661800|version=5|dst=xxx.xxx.xxx.xxx|log_delay=1608661800|proto=6|s_port=47298|service=7031|src=xxx.xxx.xxx.xxx|

From the previous link, this seems to be a bug, but I am going to assume it is an old one that should not exist in Splunk version 8.0.6. Is there a way in outputs.conf to force a header that includes the hostname? Thanks, Ed
I was trying to install the JMS TA with the new version. I have installed the JMS TA and the Inputs app, but I am not able to see the inputs in the UI, and Splunk is showing the errors below:

ERROR ModularInputs - Unable to initialize modular input "jms" defined in the app "jms_ta": Introspecting scheme=jms: script running failed (exited with code 1)..

utility:49 - name=javascript, class=Splunk.Error, lineNumber=965, message=Uncaught TypeError: Cannot set property 'loadParams' of undefined, fileName=https://louapplqs106:8000/en-US/manager/search/apps/local?

2020-12-22 12:44:35,536 ERROR [5fe23083837f14ec628550] utility:49 - name=javascript, class=Splunk.Error, lineNumber=965, message=Uncaught TypeError: Cannot set property 'loadParams' of undefined, fileNamecount=25&app_only=False&search=JMS&msgid=8179850.166860559966

source = /opt/splunk/var/log/splunk/web_service.log sourcetype = splunk_web_service

Appreciate your help!
I have a requirement to find duplicate events that are logged in Splunk under multiple sourcetypes. Each log has a unique id, e.g. 8b18881a-c6fe-4561-91f3-61c31b1afef5, and I am able to get the logs with this unique id (multiple logs with different sourcetypes). Is there any way to find which sourcetypes contain each unique id, like below?

unique id                                                      sourcetype1    sourcetype2    sourcetype3
8b18881a-c6fe-4561-91f3-61c31b1afef5    st_dev              st1_dev1
21edc48b-0d90-43f2-bc1f-3dc6e322c821   st_dev              st1_dev1           st3_dev3
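A possible sketch: extract the id (skip the rex if unique_id is already an extracted field), then aggregate sourcetypes per id and keep only ids seen in more than one:

```
index=my_index
| rex field=_raw "(?<unique_id>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})"
| stats dc(sourcetype) as sourcetype_count values(sourcetype) as sourcetypes by unique_id
| where sourcetype_count > 1
```

The index name and the rex pattern are assumptions; `values(sourcetype)` puts all matching sourcetypes in one multivalue column rather than separate columns, which is usually easier to work with.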
I wish to replicate, in Go, Python code that makes an API call to Splunk:

import splunklib.client as client
service = client.connect(host=host, port=port, username=username, password=password, app=app)

While taking reference from https://godoc.org/github.com/kuba--/splunk#NewClient, that client doesn't have the app argument. Can someone please help me write this code in Go? Thanks, Mukesh Chandak
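One point worth noting: against the Splunk REST API, login itself is app-independent (POST to /services/auth/login); the `app=` argument in splunklib mainly scopes later calls into a /servicesNS/{user}/{app}/... namespace. A minimal Go sketch of building those paths (host, port, user, and app values are placeholders):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildLoginURL returns the Splunk REST login endpoint for a given host/port.
// Authentication is app-independent; the app only scopes later requests.
func buildLoginURL(host string, port int) string {
	return fmt.Sprintf("https://%s:%d/services/auth/login", host, port)
}

// buildNamespacedPath scopes a REST endpoint to a user/app namespace,
// mirroring the app= argument of splunklib's client.connect.
func buildNamespacedPath(user, app, endpoint string) string {
	return fmt.Sprintf("/servicesNS/%s/%s/%s",
		url.PathEscape(user), url.PathEscape(app), endpoint)
}

func main() {
	fmt.Println(buildLoginURL("localhost", 8089))                      // https://localhost:8089/services/auth/login
	fmt.Println(buildNamespacedPath("admin", "search", "search/jobs")) // /servicesNS/admin/search/search/jobs
}
```

From there, a standard net/http POST of username/password to the login URL returns a session key to send as an `Authorization: Splunk <key>` header on the namespaced requests.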
Hey all, I'm currently working on setting up Splunk, which I have done, but I was asked for a setup that I have not done or attempted before and was curious about any thoughts someone could provide. They are asking me to set up a Splunk cluster made up of multiple Splunk search head/indexer instances. Essentially, we have the master Splunk that oversees the whole system, and slave Splunks that oversee the subsystems. Each subsystem's Splunk data would need to replicate to the master system's Splunk, but not replicate back. For example:

Splunk slave 1 collects logs from its machines and replicates to the master Splunk.
Splunk slave 2 collects logs from its machines and replicates to the master Splunk.
The master Splunk gets all this data, but none of it gets replicated back, so the slave Splunks do not contain one another's data.

The master would be an infrastructure-wide instance able to view data across all systems, while each slave can only view its local system's data. That's why each would have to have its own search head. If I point to different indexers, I read that it will count twice against the licensing. Replication gets around this, but I have not found whether you can set up one-way replication so that only the master Splunk gets all the data while each local Splunk can only see its own. Everything I've read suggests that if I enable replication, slaves would send to the master, and the master would replicate any difference in data back to each one, which defeats the goal of keeping the slaves' data separate.
Splunk noob here. I have a custom HTTP sourcetype with multiple data sources. For one of these sources (aws:firehose), I need to concatenate a field value (ecs_task_definition) to the source value, then do a regex or an eval at some point to remove the trailing colon and numbers, preferably all at index time. I've been advised the ecs_task_definition field will contain a few hundred dynamic values that change from time to time, so I can't assign these statically. My example:

sourcetype=httpevent
source=aws:billing
source=aws:s3
source=aws:inspector
source=aws:firehose

ecs_task_definition=arc-permission-service-worker:100
ecs_task_definition=arc-enrollment-service:182
ecs_task_definition=arc-reporting-service:234
ecs_task_definition=arc-tenant-service:332

I would like the final result to look like:

source=aws:firehose:arc-tenant-service
source=aws:firehose:arc-reporting-service

I have been trying to do this in props and transforms without success. I think I'm having syntax problems, compounded by a general lack of understanding of what I can and can't do at index time versus search time. Any help would be much appreciated. Thanks
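One possible sketch: at index time, search-time fields like ecs_task_definition don't exist yet, so the transform has to match the raw event text instead. Assuming the raw event literally contains "ecs_task_definition=&lt;name&gt;:&lt;number&gt;", something along these lines might work, applied on the parsing tier (HF/indexer):

```
# transforms.conf -- capture the task name without the trailing :number
[rewrite_firehose_source]
REGEX = ecs_task_definition=([^:\s"]+):\d+
DEST_KEY = MetaData:Source
FORMAT = source::aws:firehose:$1

# props.conf -- scope to the firehose source so other sources are untouched
[source::aws:firehose]
TRANSFORMS-fixsource = rewrite_firehose_source
```

The regex and the assumption that the value appears verbatim in _raw would both need verifying against the actual events.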
A user with a non-admin role can only see their own search jobs on the "Jobs" page. A Splunk document (https://docs.splunk.com/Documentation/Splunk/8.1.1/Search/SupervisejobswiththeJobspage) mentions: "If you have the Admin role, or a role with an equivalent set of capabilities, you can manage the search jobs run by all users of your Splunk implementation." Can anybody advise which minimum capabilities are required in order to manage (actually, just read) all users' search jobs? Thank you!
Hi, I have an accelerated data model. When I run the search below, it returns results in a few seconds:

| datamodel Network_Traffic All_Traffic summariesonly=true search
| search All_Traffic.src=x.x.x.0/24 OR All_Traffic.dest=x.x.x.0/24

But when I pipe it to the table command, the query takes more than an hour to complete:

| datamodel Network_Traffic All_Traffic summariesonly=true search
| search All_Traffic.src=x.x.x.0/24 OR All_Traffic.dest=x.x.x.0/24
| table All_Traffic.src

I run both over a last-7-days time range. What's the problem?
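One likely-relevant point: `| datamodel ... search` can fall back to pulling full events once you ask for specific fields, while tstats reads the acceleration summaries directly. An equivalent tstats sketch (keeping the original field names and masked CIDRs):

```
| tstats summariesonly=true count from datamodel=Network_Traffic.All_Traffic
    where (All_Traffic.src=x.x.x.0/24 OR All_Traffic.dest=x.x.x.0/24)
    by All_Traffic.src
```

This should return in roughly the same time as the fast version of the original search, since it never touches raw events.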
Hi everyone, I have one requirement. We have over 100 dashboards built for our app, and our team spends a lot of time monitoring their availability and accuracy. I want to see the list of users who visit the dashboards, with a count. I am using the query below:

index=_internal sourcetype=splunkd_ui_access EPSF_Infrastructure NOT splunkd user!="-"
| rex field=uri "^/[^/]+/app/(?<app>[^/]+)/(?<dashboard>[^?/\s]+)"
| search NOT dashboard IN (alert alerts dashboards dataset datasets data_lab home lookup_edit reports report search splunk)
| stats count by app dashboard user

The issue I am facing is that I am not getting all the users who visit the dashboards. Can someone guide me on this?
Hi All,

Basically the data (WinEventLogs) flow is UF -> HF -> Indexer Group 1 / Indexer Group 2. All the data will go to Indexer Group 1, while a subset of filtered data will go to Indexer Group 2 and to another index. I have managed to configure the HF to send the data to multiple indexer groups, but I can't seem to change the index of the events for data that goes to Indexer Group 2. Any advice? Additionally, if I only want to change the index of events coming from a specific HF, would that be possible?

Heavy forwarder configuration:

outputs.conf
[tcpout]
defaultGroup = none

[tcpout:group1]
disabled = false
server = group1IDX:9997

[tcpout:group2]
disabled = false
server = group2IDX:9997

props.conf
[source::WinEventLog:Security]
TRANSFORMS-routing = routeGroup1, routeGroup2

transforms.conf
[routeGroup1]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = group1

[routeGroup2]
REGEX = Filter
DEST_KEY = _TCP_ROUTING
FORMAT = group1, group2

Indexer configuration:

props.conf
[source::WinEventLog:Security]
TRANSFORMS-changeIndex = changeIndex

transforms.conf
[changeWinIndex]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = newIndexName
Hello, I have strange behavior concerning the search below. In the "host_allIND.csv" file, I have only HOSTNAMEs of a specific type, which is "Type 1". But when I run the search below, I also get HOSTNAMEs with type "Type 2". How is it possible to have events with HOSTNAME = Type 2 when the "host_allIND.csv" lookup contains only HOSTNAME = Type 1?

`boot`
| fields host BootTime
| lookup host_allIND.csv HOSTNAME as host output SITE DEPARTMENT CATEGORY
| stats max(BootTime) as "Boot time" last(SITE) as SITE last(CATEGORY) as CATEGORY last(DEPARTMENT) as DEPARTMENT by host

Thanks
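A possible explanation worth testing: the lookup command enriches but does not filter, so hosts absent from the CSV still flow through, just with null SITE/DEPARTMENT/CATEGORY. A sketch that keeps only hosts actually present in the lookup (assuming SITE is always populated for matched rows):

```
`boot`
| fields host BootTime
| lookup host_allIND.csv HOSTNAME as host OUTPUT SITE DEPARTMENT CATEGORY
| where isnotnull(SITE)
| stats max(BootTime) as "Boot time" last(SITE) as SITE last(CATEGORY) as CATEGORY last(DEPARTMENT) as DEPARTMENT by host
```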
Hello all, I have a source with events like these:

****4007656256*vwxmsghdlr.cpp*03523*08000*2020DEC22*14:01:30 Partition not defined for this node:
****4062182208*vwxmsghdlr.cpp*03523*08000*2020DEC22*14:00:01 Partition not defined for this node:
****4062182208*vwxmsghdlr.cpp*03523*08000*2020DEC22*14:00:01 Partition not defined for this node:
****4059036480*vwxmsghdlr.cpp*03523*08000*2020DEC22*14:00:00 Partition not defined for this node:
****4007656256*vwxmsghdlr.cpp*03523*08000*2020DEC22*14:00:00 Partition not defined for this node:
****4059036480*vwxmsghdlr.cpp*03523*08000*2020DEC22*14:00:00 Partition not defined for this node:
****4007656256*vwxmsghdlr.cpp*03523*08000*2020DEC22*14:00:00 Partition not defined for this node:
****4029676352*vwxmsghdlr.cpp*03523*08000*2020DEC22*13:58:54 Partition not defined for this node:

Can someone help me in writing TIME_PREFIX and LINE_BREAKER?
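A sketch based on the sample events (the sourcetype name is a placeholder, and it's worth verifying that %b accepts the upper-case month "DEC" in your environment):

```
# props.conf on the parsing tier
[my_sourcetype]
SHOULD_LINEMERGE = false
# each event starts with "****" followed by digits
LINE_BREAKER = ([\r\n]+)(?=\*{4}\d)
# skip the four leading "*", the id, filename, and two numeric fields,
# landing just before 2020DEC22*14:01:30
TIME_PREFIX = ^\*{4}\d+\*[^*]+\*\d+\*\d+\*
TIME_FORMAT = %Y%b%d*%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```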