All Topics


Hi, I have configured a couple of new hosts to forward Windows logs directly to Splunk Cloud rather than going via on-prem Splunk. I implemented this configuration on a Splunk deployment server and defined the hosts via a server class. I can see the hosts' logs appearing in Splunk, but I am unsure how to verify they are being ingested via Splunk Cloud rather than on-prem. Could someone advise on how I can validate this? Thanks
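One way to check which indexers actually ingested the events is the internal `splunk_server` field, which names the indexer that holds each event. A minimal sketch (the index name `wineventlog` is an assumption — substitute your own; Splunk Cloud indexers typically carry cloud-style hostnames such as `idx-*.splunkcloud.com`, while on-prem indexers carry your local naming):

```
index=wineventlog host=<one-of-the-new-hosts>
| stats count by host splunk_server
```

If `splunk_server` shows only cloud-side indexer names for those hosts, the data is landing in Splunk Cloud rather than on-prem.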
Hi experts, I'm quite new to Splunk. From the example log line below:

03:23:05.056 [publish-1] INFO LoggingAuditor - [testout] TracingOutgoing: 8=FIX.4.29=90635=8115=ONMI=SOMEVENUE34=37249=BRX60256=testout 52=20210727-07:23:05.05

is it possible to somehow pull these out into column headers?

LogType=LoggingAuditor Destination=[testout] Direction=TracingOutgoing SendingTime=52=20210727-07:23:05.05 (just the time)

Thanks so much!
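A search-time extraction with `rex` can pull those pieces out, based on the single sample line above (the field names mirror the ones requested; the patterns are a sketch against this one example and may need adjusting for other log lines):

```
index=<your_index> sourcetype=<your_sourcetype>
| rex "^\d{2}:\d{2}:\d{2}\.\d+\s+\[[^\]]+\]\s+\w+\s+(?<LogType>\w+)\s+-\s+\[(?<Destination>[^\]]+)\]\s+(?<Direction>\w+):"
| rex "\s52=(?<SendingTime>\S+)"
| table LogType Destination Direction SendingTime
```

The second `rex` anchors on the space before `52=` so it does not match `52` embedded in other FIX tag values; `SendingTime` then holds `20210727-07:23:05.05`, and you could strip the date part with e.g. `eval SendingTime=mvindex(split(SendingTime,"-"),1)` if you want only the time.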
We are running Splunk on Windows and are moving from a multisite cluster with two sites to a single site. Are there any detrimental effects from leaving the cluster master in multisite mode even though only one site is configured? We would like to avoid the downtime required to switch to single-site mode if possible.
Hello, I have auditd events like:

type=EXECVE msg=audit(16): a0="sendmail" a1="-t"

I would like one field combining all the a* fields (a0, a1, a2, a3, etc.). I tried:

"type=EXECVE msg=audit(16): argc=2 a0="sendmail"" | foreach a* [ eval test = test + '<<FIELD>>' ]

No results. I need your help, please.
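The likely problem with the attempt above is that `test` is null on the first iteration, and null combined with anything in `eval` stays null; also, string concatenation in SPL uses `.` rather than `+`. A sketch of a corrected version (the field name `args` is just an illustration):

```
type=EXECVE "msg=audit(16)"
| foreach a* [ eval args = coalesce(args, "") . " " . '<<FIELD>>' ]
| eval args = trim(args)
```

`coalesce(args, "")` seeds the accumulator with an empty string, so each `a*` value gets appended, giving e.g. `sendmail -t` in one field.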
It seems that the authenticationDetail resource type is no longer part of: Sign-ins - Azure AD sign-ins including conditional access policies and MFA. After researching the issue, it seems only the Beta API, NOT the v1.0 API, has the data we want. However, toggling the add-on to Beta has no effect on the log structure; we still don't see the authenticationDetail resource type in the logs. Microsoft Azure Add-on for Splunk version 3.1.1, Splunk Enterprise 8.1. Is this a problem with the TA not having the correct Python to pull the data, or with the MS API changing? This worked in April this year.
I'm trying to find out how to get the creation date of saved searches, but I didn't see it in any documentation. Is it possible to use the rest command or any other command to see this info? I got only the updated field, but that's not what I need.
Dear Splunk community, I received an alert mail from Splunk about the need to update my Splunkbase app with the latest Add-on Builder by 31 August. If I understood the message correctly, upgrading is mandatory only if the app was built with the Add-on Builder. I've actually never used the Add-on Builder for packaging; I've used the Packaging Toolkit through the command line instead. Does that mean that no action is required from me? Thanks in advance for your time and support.
Hi, We have recently migrated from LEA to the Check Point Log Exporter facility to collect Check Point firewall logs in CEF format. Even after trying multiple props configurations, we still observe events breaking at irregular intervals. Some events parse correctly at the start and end of the event, and some break in between or abruptly. We even tried the Splunk add-on - https://splunkbase.splunk.com/app/4180/ Does anyone have a working props?
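Irregular event breaking with CEF usually points at line breaking or timestamping in props.conf. A starting-point sketch (the sourcetype name is a placeholder, and `rt=` as the timestamp field is an assumption — Log Exporter can be configured to emit different time fields, and if a syslog header precedes `CEF:` the timestamp settings would need adjusting):

```
[cp_log_cef]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = rt=
MAX_TIMESTAMP_LOOKAHEAD = 25
TRUNCATE = 10000
```

Disabling line merging and breaking strictly on newlines is the usual fix for events that "merge" or split abruptly; this must be applied on the first full Splunk instance that parses the data (the HF or indexer, not a UF).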
I'm monitoring AD and DNS Server logs on Windows 2019 servers, and the Universal Forwarder has been the resource utilization offender. Is it possible to limit the UF's memory or CPU usage on the server? I'm running UF version 8.2.1 on Windows Server 2019, with Splunk Enterprise 8.2.1 Linux_64. The volume of DNS events is huge. [monitor://C:\Windows\System32\Dns\dns*.log] Thanks in advance. James \0/
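There is no direct CPU or memory cap in the UF configuration, but you can throttle its forwarding throughput, which indirectly limits CPU use on a high-volume host like this. A sketch, assuming the default app location (the 512 value is an example — the UF default is 256 KBps, and `0` means unlimited):

```
# $SPLUNK_HOME\etc\system\local\limits.conf on the forwarder
[thruput]
maxKBps = 512
```

Note the trade-off: capping throughput below the ingest rate makes the forwarder fall behind on a busy DNS log, so this trades CPU for latency.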
We are monitoring a directory through a 'batch' input on an HF. The directory contains 100s of zip files; each zip contains 1000s of log files. We have a requirement to record each log file separately after it has been forwarded. 1. We tried to use splunkd.log on the HF. The "ArchiveProcessor" component does not log each file within the zip. The "Metrics" component holds statistics for the top 10 every 30 seconds, so it does not log every file. Can we change the 'Metrics' parameters to log the top X rather than the top 10? Is there any other component we can use, even in DEBUG mode, to get that information? 2. We tried to approach this through the index itself on the indexer. The log files' timestamps are not close to 'now' but are spread across many years, so it is very slow and inefficient to search for them through the index. For example: (| tstats count where earliest=0 latest=now index=myindex by source)
Hi, I'm upgrading my cluster master from version 8.0.3 to 8.2.1. After installing the new version over the old deployment and starting Splunk, I get "ERROR: pid xxx terminated with signal 4 (core dumped)", and the Splunk web server is not available. How can I fix this? My Splunk environment is running on AWS Linux EC2s. This is the information I have about the OS: NAME="Amazon Linux AMI" VERSION="2018.03" ID_LIKE="rhel fedora"
Is it possible to apply a formula in a dashboard? For example, taking a value from panel1 and using that value in panel2?
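In Simple XML this can be done by capturing a result from panel1's search into a token with a `<done>` handler and referencing the token in panel2's query. A sketch with hypothetical index and field names:

```
<panel>
  <single>
    <search>
      <query>index=web | stats count</query>
      <done>
        <set token="panel1_count">$result.count$</set>
      </done>
    </search>
  </single>
</panel>
<panel>
  <single>
    <search>
      <query>index=web status>=500 | stats count as errors | eval error_pct = round(errors / $panel1_count$ * 100, 2)</query>
    </search>
  </single>
</panel>
```

`$result.count$` reads the `count` field from the first row of panel1's results; panel2's query is re-evaluated whenever the token is set.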
Hi, I have a signal that I am updating every 5 seconds, but it jumps to 0 every now and again. The issue is that a user can read this and think all the processes have gone off the system. This happens every now and again, so how do I get it to wait for the search job to be 100% finished before it updates the value?

<panel>
  <single>
    <search>
      <query>| mstats max("mx.process.cpu.utilization") as cpuPerc WHERE "index"="metrics_test" AND mx.env=http://mx20267vm:15000 span=2m BY pid cmd process.name service.type host.name service.name | table _time pid | stats count(pid) as Cnt by _time | table _time Cnt | timechart max(Cnt) span=2m</query>
      <earliest>-1h</earliest>
      <latest>now</latest>
      <sampleRatio>1</sampleRatio>
      <refresh>5s</refresh>
      <refreshType>delay</refreshType>
    </search>
Need help with a Splunk query to display % failures for each day during the selected time range, for the same index but different search terms. % failures = A1/A2 * 100. A1 = total number of events returned by: index="abc" "searchTermForA1". A2 = total number of events returned by: index="abc" "searchTermForA2". Expected output: columns Date | A1 | A2 | % failures, with separate rows for 1-Jul, 2-Jul, 3-Jul, 4-Jul, 5-Jul, 6-Jul and 7-Jul when the selected time range is 1-Jul to 7-Jul. Please help with the query. Thanks!
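One common pattern is to fetch both terms in a single search and count each conditionally with `searchmatch`, then bucket by day. A sketch using the index and terms from the question:

```
index="abc" ("searchTermForA1" OR "searchTermForA2")
| eval is_a1 = if(searchmatch("searchTermForA1"), 1, 0)
| eval is_a2 = if(searchmatch("searchTermForA2"), 1, 0)
| bin _time span=1d
| stats sum(is_a1) as A1 sum(is_a2) as A2 by _time
| eval pct_failures = round(A1 / A2 * 100, 2)
| eval Date = strftime(_time, "%-d-%b")
| table Date A1 A2 pct_failures
```

This runs one pass over the index instead of two separate searches; if an event can match both terms it is counted in both A1 and A2, which matches the two independent counts in the question.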
Hi All, I have created a script for synthetic jobs in EUM with Selenium IDE on Firefox 54.0.1. When I save the test and add it to the script of a synthetic job, it breaks on all browsers except IE 11. Is there anything I am missing here? Please guide. Regards, Tejaswi
Hello, I want to compare one field's values with another. When I try to compare, the result comes out in the format shown in the picture below. In that picture, Project.static_code metric* are the field names (left-hand side); on the right-hand side are the field values. But I want the format to be as shown in the second picture.
Hi, I am kind of new to Splunk and have a problem with my search. I have a dashboard with an input field for an ID. I search two different inputlookups with this ID, and my dashboard is based on this. But not every table entry has the field for this ID; sometimes the field does not exist. I tried to fix this by also searching id!=*, which works fine when I just enter * in the ID input field, but when I enter an actual ID I don't want to search for all entries with id!=* (not existing). Is there a way to change my search based on the token value I enter in the input field? So if I enter *, the search looks for id=* as well as id!=*, but if I enter an actual ID the search only looks for id=$id$? Any help would be highly appreciated! Thanks
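A common approach for this in Simple XML is a `<change>` handler on the input that sets a second, derived token holding the whole filter clause, so the query itself never branches. A sketch with a hypothetical lookup name:

```
<input type="text" token="id">
  <change>
    <condition value="*">
      <set token="id_filter">(id=* OR NOT id=*)</set>
    </condition>
    <condition>
      <set token="id_filter">id="$value$"</set>
    </condition>
  </change>
</input>
...
<query>| inputlookup mylookup | search $id_filter$</query>
```

When the user types `*`, the filter matches entries with and without the id field; for any other value, only exact id matches are returned.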
I'm trying to delete specific items from our KV store using a Python custom command. I retrieve the KV store with the following command:   collection = self.service.kvstore[collection_name]   I then retrieve all the entries in the KV store with:   data_list = collection.data.query()   This works correctly; however, only 50000 of the entries are returned. Is there a parameter I can pass to query() to remove the limit of 50000?   Thanks!
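The SDK's `query()` accepts `limit` and `skip` keyword arguments, but the server enforces a per-request cap, so rather than removing the limit you typically page through the collection. A sketch of a generic pagination helper (the `fetch_all` name and batch size are mine; `collection.data.query` from splunklib is assumed to behave as in the question):

```python
def fetch_all(query, batch_size=10000):
    """Collect every record from a paged source, batch_size at a time.

    `query` is any callable accepting limit/skip keyword arguments and
    returning a list of records, e.g. collection.data.query from the
    Splunk Python SDK (splunklib).
    """
    results = []
    skip = 0
    while True:
        batch = query(limit=batch_size, skip=skip)
        results.extend(batch)
        if len(batch) < batch_size:
            # A short (or empty) batch means we've read the last page.
            break
        skip += batch_size
    return results
```

In the custom command it would then be used roughly as `data_list = fetch_all(collection.data.query)`. Adding a `sort` parameter to the query keeps the paging stable if the collection changes underneath you.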
I am trying to set up HEC for my indexer cluster (v8.0.7), with 2 indexers (and 3 search heads) managed by a master node. I have read multiple docs and articles already, but I want to make sure I get some basic ideas correct first. In a non-clustered environment, it's simple: each HEC client talks to port 8088 of one indexer. But in an indexer cluster environment: Which server will an HEC client talk to? How can "load balancing of indexing" be achieved? Is the master node in any way involved?
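In a cluster, the usual pattern is: HEC is enabled on every peer via the master's configuration bundle, clients send to an external load balancer in front of the peers' port 8088, and the master distributes the config but is not in the data path. A sketch of the inputs.conf pushed from the master (the app name and token stanza name are placeholders; generate the real token value yourself, since HEC tokens are not auto-generated when defined in .conf files):

```
# On the cluster master: master-apps/<your_hec_app>/local/inputs.conf,
# then apply the cluster bundle to push it to all peers.
[http]
disabled = 0
port = 8088

[http://my_hec_input]
token = <paste-a-generated-GUID-here>
index = main
```

Because the same token is deployed to all peers, any peer behind the load balancer can accept any client's events, which is what gives you load balancing of indexing.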