All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I have installed the TA for Nutanix Prism and the Nutanix Prism Dashboard for Splunk. The add-on is fetching data from only one API endpoint, not the others; I can see only the clusters data. How can I debug this? Version 8.1.3 Enterprise. Thanks!
I have in mind an alert dashboard where I can see how many alerts triggered in the last 24 hours, and drill down to an alert's result events for the time range in which it triggered. Is this possible?
I'm trying to determine how to allow users to enter a range of values as an input on a dashboard and assign it to a token. Example: 1470-1493; 3019; 4267 (ranges delineated by '-' and values delineated by ';'). I would like to extract the values and use them with the IN operator:

index=blah sourcetype="blah" host=1.2.3.4 Skill IN (1470 1471 1472 1473 1482 1483 1484 1485 1490....)

Any help is greatly appreciated!
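Outside of SPL, the token-parsing logic being asked for can be sketched in plain Python. This is only an illustration of the expansion rule (the function name and the join into an IN clause are assumptions, not part of any Splunk API):

```python
import re

def expand_skill_input(raw):
    """Expand an input like '1470-1493; 3019; 4267' into a flat list of ints.

    Semicolons separate entries; a hyphen inside an entry denotes an
    inclusive numeric range.
    """
    values = []
    for part in raw.split(";"):
        part = part.strip()
        if not part:
            continue
        m = re.fullmatch(r"(\d+)\s*-\s*(\d+)", part)
        if m:
            lo, hi = int(m.group(1)), int(m.group(2))
            values.extend(range(lo, hi + 1))  # inclusive range
        else:
            values.append(int(part))
    return values

# The expanded list can then be joined into the IN (...) clause of the search.
print(" ".join(str(v) for v in expand_skill_input("1470-1493; 3019; 4267")))
```

In a Simple XML dashboard the same expansion would have to happen in an eval-based token or inside the search itself; the sketch above just pins down the intended semantics of the input format.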
So I see data coming in:

04-13-2021 17:32:25.470 -0400 INFO StatusMgr - destPort=9997, eventType=connect_done, group=tcpin_connections, sourceHost=*.*.153.70, sourceIp=*.*.153.70, sourcePort=39820, statusee=TcpInputProcessor

But I can't find where this data is going. I have the source info and have searched all indexes for the above IPs, as well as for keywords/fields from the source, but can't find it. Where does it go next? How can I tell if the indexer is generating errors or rejecting it?

This data is being sent directly to my indexer cluster from a remote Cribl source. The data is comma-delimited text.

Also, if I want to configure an input for this IP, do I configure it on the cluster master under master-apps and push it out:

/master-apps/cribl/local/inputs.conf
[tcp://*.*.153.70:9997]
index=cribl

or do I have to do this in /system/local/inputs.conf on each indexer? Or do I have to forward to a HF first? Thanks!
Monitoring Console: Search --> Scheduler Activity --> Scheduler Activity: Instance is not producing any data. I am looking for skipped searches and reports. Please tell me what I am missing to make this work.
I have a requirement to add a new data format to Splunk. Below is the sample data. If I want to set up a new sourcetype based on the field 'InputType', what changes are required in the props.conf and transforms.conf files? I want the sourcetype to take the values aMG, zRP, or bKL.

Sample data:

InputType="aMG"|filename <K............
InputType="zRP"{dataformat..>...........
InputType="bKL"$%mo << ........
InputType="aMG"\dataformat.......
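A minimal sketch of the kind of props/transforms pairing usually used to rewrite the sourcetype from a value in the raw event. The incoming sourcetype name and transform class name below are assumptions, not tested configuration, and this must live on the parsing tier (indexer or heavy forwarder):

```
# props.conf -- applied to whatever sourcetype the data arrives as
[my_incoming_sourcetype]
TRANSFORMS-set_st_from_inputtype = set_sourcetype_from_inputtype

# transforms.conf -- capture the InputType value and write it as the sourcetype
[set_sourcetype_from_inputtype]
SOURCE_KEY = _raw
REGEX = InputType="(aMG|zRP|bKL)"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::$1
```

The key detail is that sourcetype rewriting uses DEST_KEY = MetaData:Sourcetype with a sourcetype:: prefix in FORMAT.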
How do I change the job/report frequency in the DMC (Monitoring Console) to run more often than currently scheduled? When I select Edit > Edit Schedule, it warns that the permissions will be removed, and it results in a "time range picker" error, and so on. Please advise.
When I pip install the splunk-sdk, it only creates a /splunklib directory. But the README on GitHub mentions adding the parent /splunk-sdk-python directory to PYTHONPATH, and some of the example snippets import code from the sibling utils module, neither of which exists via pip install. Is this the expected behavior, and/or should the examples be updated? I am using macOS High Sierra with Python 3.7, installed via Anaconda.
I am getting this error message when adding a Phantom server using the new server configuration in the Phantom Add-on:

Phantom Failed to communicate with user "" on Phantom server . Error: Unhashable type: 'list'

I have managed to get the add-on working by installing a version seven releases behind the current one. The Phantom user is set up as per the documentation (and works on the older version of the installed app), my user has all the Phantom capabilities, and I have disabled cert validation. I am on Splunk 8.1.2 and using the latest version of the Phantom Add-on.
Is the data that is sent from a Splunk Universal Forwarder to the heavy forwarder in syslog format? If so, how do I find out which format it is using (e.g. syslog-ng)?
I'm doing some hardening on my Splunk server and would like to block any outgoing connections that are not required. Besides DNS, from what I have logged over the last couple of days, Splunk only seems to require outbound TCP/SSL on port 443 to servers using certificates with names matching "*.splunk.com". Is that correct? I am talking about licensing and similar connections that Splunk itself requires; for this question, assume a standalone Splunk Enterprise server with no integration with other servers or forwarders.
Is there a query to search Anomali for email data, or a way to integrate it with Phantom?
I have a summary query that runs every 15 minutes into a bucket. However, fairly frequent maintenance by our Splunk dev team causes a lot of missing time frames in the results, which causes our alerts to misfire.

Currently:

index=summary-ecp-ord-oms source=ecp-ord-owls-transaction-perf-15min earliest=-10080m latest=-10070m
| stats sum(count) as previous by request

I was wondering if there is a way to check whether the initial query returns more than 3 results and only then run the stats on the data; otherwise, return nothing. Something like:

index=summary-ecp-ord-oms source=ecp-ord-owls-transaction-perf-15min earliest=-10080m latest=-10070m
| where eventcount > 3
| stats sum(count) as previous by request
I have two queries. One gets the total number of events using the message field:

index=my_index sourcetype=my_sourcetype
| stats count(message) AS TotalA
| table TotalA

And the other uses multiple regex statements and the message field to get the total number of errors:

index=my_index sourcetype=my_sourcetype
| regex message = "(?:(^Problem.*)|((?i).*Issue.*)|((?i).*Error.*))"
| stats count(message) AS TotalB
| table TotalB

My goal is to have TotalA and TotalB side by side in a table. I would then like to divide TotalA by TotalB to get a ratio and display that alongside TotalA and TotalB in the table. How could I go about combining these? Thanks!
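As a sanity check outside Splunk, the combined computation (total count, error-matching count, and their ratio) can be sketched in Python against a made-up message list. The sample data is illustrative; the pattern mirrors the intent of the SPL regex above:

```python
import re

# Same intent as the SPL regex: messages that start with "Problem",
# or contain "Issue" or "Error" case-insensitively.
error_pat = re.compile(r"^Problem.*|(?i:.*Issue.*)|(?i:.*Error.*)")

messages = [
    "Problem starting service",
    "all good",
    "Minor issue detected",
    "ERROR: disk full",
    "heartbeat ok",
]

total_a = len(messages)                                    # all messages
total_b = sum(1 for m in messages if error_pat.search(m))  # error-like messages
ratio = total_a / total_b
print(total_a, total_b, ratio)
```

In SPL itself this kind of thing is often done in a single search, counting everything and counting only matching events in one stats pass; the sketch above just shows that the ratio is a plain division of the two counts.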
Hi, does anyone know the partitioning table needed for a clustered Phantom installation?
Hi there, I have some machines that collect Security logs from Windows. The universal forwarder on those machines has this kind of configuration:

[WinEventLog://Security]
index=a
_TCP_ROUTING=indexer1, indexer2

Indexer1 and indexer2 are part of two different Splunk Enterprise installations: for indexer1 the 'a' index is correct, but indexer2 should put the Security logs into index 'b'. So I need to change the index on the indexers or on the heavy forwarders. How can I do this?

Here is what I've tried on the indexer:

props.conf

[source::WinEventLog:Security]
TRANSFORMS-indexing1 = idx_change

transforms.conf

[idx_change]
SOURCE_KEY=_raw
REGEX=.
DEST_KEY=_Metadata:Index
FORMAT=b
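For reference, index rewriting at parse time conventionally uses the key `_MetaData:Index`, with capital M and capital D, and these keys are case-sensitive; a hedged sketch of the usual shape (same stanza names as above, but not tested against this environment):

```
# transforms.conf -- note the key is _MetaData:Index, not _Metadata:Index
[idx_change]
SOURCE_KEY = _raw
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = b
```

The props stanza that references the transform also has to be on the first "full" Splunk instance that parses the data (heavy forwarder or indexer); if a heavy forwarder parses the events first, a transform on the indexer will not fire.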
Hi, we are getting duplicate log events. The events are:
- multiline
- large (to very large)
- in files that can also grow very large
- open for a long time

Symptoms:
- when oneshotting the file, there are no duplicates
- duplicates may arrive on different indexers, or on the same indexer
- duplicates may arrive within a short time span, or up to an hour apart
- we get them from both Solaris and Red Hat hosts

We have played with useAck, increasing file descriptors, use of parallel mode, and increasing bandwidth, to try to rule out the flow towards the HF. From the logs we do see some files being read entirely up to 3 times, but this is not the case for all of them, so there may be several root causes. I am now thinking in the direction of write buffers, but I am not sure, because I also see smaller events being duplicated.

We are using these props settings:

UF:
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)(?:\w+\s+\w+\s+\d{1})

HF:
TRUNCATE = 99999    # because of large messages
TIME_PREFIX = ^(?:[^\t]+?\t){4}
# Format is 2021-01-18 16:03:27.118885, so 26 characters
MAX_TIMESTAMP_LOOKAHEAD = 26
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N
LINE_BREAKER = ([\r\n]+)(?:\w+\s+\w+\s+\d{1})
# newline characters can be part of the message
SHOULD_LINEMERGE = false

Any pointers from your experience on how to troubleshoot further, or just an extra pair of eyes/brains, are more than appreciated. Thanks!
Hi, I have two indexes from two different sources, but I want to use them together, preferably in a table. index1 contains data on hardware units in a rack, with fields ESN (serial number) and UnitTemperature. index2 contains external temperature data, with fields Thermocouple (numbered 1 to 8) and Temperature. Thermocouples 1-4 are at the back of the rack and 5-8 are at the front. I have a lookup table that maps each unit's ESN to the numbers of the two thermocouples (front and back) closest to it. What I want is a table with the columns: ESN, Unit Temperature, Closest Front Thermocouple Temperature, Closest Back Thermocouple Temperature. Is there an easy way to do this? Thanks.
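Outside SPL, the join being described (unit rows enriched with the two closest thermocouple temperatures via the lookup) can be sketched in plain Python with made-up sample data. All field names and values here are illustrative:

```python
# index1: latest unit temperature keyed by serial number
units = {"ESN001": 41.5, "ESN002": 39.2}

# index2: latest temperature per thermocouple (1-4 back, 5-8 front)
thermo = {1: 22.1, 2: 22.4, 5: 21.0, 6: 21.3}

# lookup table: ESN -> (closest front thermocouple, closest back thermocouple)
lookup = {"ESN001": (5, 1), "ESN002": (6, 2)}

table = []
for esn, unit_temp in units.items():
    front, back = lookup[esn]
    table.append({
        "ESN": esn,
        "UnitTemperature": unit_temp,
        "FrontThermoTemp": thermo[front],
        "BackThermoTemp": thermo[back],
    })

for row in table:
    print(row)
```

In SPL the same shape is typically built by searching both indexes, using the lookup to attach the thermocouple numbers to each unit event, and then aggregating the two temperature streams onto one row per ESN; the sketch just fixes what the final table should contain.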
(?s)(subdialogJsp|jsFunction) id=\"(?<dbstate>[\w]+_DB)\"[^<]*?(<result returned|result).*?(statusCode|H_returnCode|responseHttpStatus|H_PendingRatePlan|[\w][a-gA-G][i-qI-Q][t-zT-Z]\s)(:|=)(?<H_returnCode>.*?)(,|\}\))

I have the above regular expression looking for a certain DB state and returning an H_returnCode based on the matches in result_returned or result. One of the H_returnCode values I get is &quot;4992&quot;. In this return code I want to eliminate the '&' and everything that comes after it. How do I do that?
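The trimming itself is a small substitution; here it is in Python as a stand-in for whatever SPL sed/replace step ends up doing it. The sample value is taken from the question; the variable names are illustrative:

```python
import re

raw_code = "4992&quot;"

# Drop the '&' and everything after it.
clean = re.sub(r"&.*$", "", raw_code)
print(clean)  # -> 4992
```

An alternative is to tighten the capture group so the '&' is never captured in the first place, e.g. by excluding '&' from the characters the group may match; either way the result is the bare numeric code.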
Hi, in my dashboard I have a set of inputs, and when I submit, the values get stored in a lookup file: 2 dropdowns, 1 multiselect, and 1 text field. At times the user enters multiple items in the text input field separated by ','. How do I separate the records while adding them to the CSV, or split them after adding? For example, in the text input named 'Type' the user enters: file , deik, disk, cpu, memory. In the lookup file it then shows up as one value: 'file , deik, disk, cpu, memory'. How do I separate it?
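The splitting step itself is simple; as a sketch in Python of turning one comma-separated text input into separate lookup rows, with the other inputs repeated on each row. The column names and dropdown values here are made up for illustration:

```python
import csv
import io

text_input = "file , deik, disk, cpu, memory"
dropdown1, dropdown2 = "prod", "linux"  # illustrative values of the other inputs

# Write one CSV row per comma-separated item, repeating the other columns.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["env", "os", "type"])
for item in text_input.split(","):
    writer.writerow([dropdown1, dropdown2, item.strip()])

print(buf.getvalue())
```

In SPL the equivalent after-the-fact split is usually a multivalue operation on the stored field (split it on the comma and expand to one row per value) before writing the lookup back out; the sketch above shows the per-row shape that should end up in the CSV.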