All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I have three visualisation elements: one table and two panels with single numbers. These three are currently next to each other. I would like to put the table on the left and the other two panels vertically stacked on the right-hand side. Below is the source code. Can anyone help me out? Thanks in advance! <panel> <table> <title>xx</title> <search> <query>........</query> <earliest>........</earliest> <latest>now</latest> <sampleRatio>1</sampleRatio> </search> <option name=..........</option> </table> </panel> <panel> <single> <title>.......</title> <search> ......... </search> <option name=................</option> </single> </panel> <panel> <title>.........</title> <single> <search>...............</search> <option name="....................</option> </single> </panel>
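One way to get that layout in Simple XML (a sketch with the question's placeholder searches kept as "..."): put the table in its own panel, and put both single-value elements together in a second panel of the same row. Multiple visualizations inside one panel stack vertically, which gives table-on-the-left, singles-stacked-on-the-right.

```xml
<row>
  <panel>
    <table>
      <title>xx</title>
      <search><query>...</query></search>
    </table>
  </panel>
  <panel>
    <single>
      <search><query>...</query></search>
    </single>
    <single>
      <search><query>...</query></search>
    </single>
  </panel>
</row>
```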
Hi Splunk Experts, I'm a newbie to Splunk and have been tasked with finding out whether a couple of our users (e.g. user1@domain.com, user2, ...) received malicious emails from a list of domains, for which I have now created a lookup table (baddomains.csv). We have an index (index=emailgt), but the challenge is preparing a search query that matches that index against the lookup table (baddomains.csv) to see whether any of the users interacted with these bad domains. Thanks
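A common starting point is to use the lookup as a subsearch filter. This is only a sketch: the field names recipient and sender_domain, and the lookup column name domain, are assumptions that depend on how index=emailgt is parsed, so adjust them to the actual extracted fields.

```
index=emailgt recipient IN ("user1@domain.com", "user2@domain.com")
    [| inputlookup baddomains.csv | fields domain | rename domain AS sender_domain]
| stats count by recipient, sender_domain
```

The subsearch expands baddomains.csv into an OR of sender_domain=... terms, so only events where one of your users interacted with a listed domain survive.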
I have my PANs forwarding events to a syslog-ng server over TCP; logs are parsed out to disk and then forwarded to the indexer (which replicates to another indexer) using the universal forwarder. The config follows the model here: https://splunk.paloaltonetworks.com/universal-forwarder.html However, my syslog-ng config also has use_dns(no) under the source stanza and sets permissions under the destination stanza (create_dirs, owner, group, perm, etc.). The indexers already receive events from multiple other sources on the same syslog-ng server with no issues. I have installed the Splunk_TA_paloalto add-on on the indexer, and I figure the issue is with the content of the inputs.conf file on the indexer. It doesn't make sense to listen on udp:514, since the data is already coming from a UF, and it's not parsing the data properly (the sourcetype is still pan:log or pan:firewall, and doesn't seem to be affected by what I put in the inputs.conf on the syslog-ng server). There seem to be lots of examples of folks getting this working; is it just that the UF doesn't work and it has to go through a HF instead? Thanks, d
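For reference, one commonly used pattern (the path and index below are placeholders, not taken from the question) is to leave the indexer's network inputs out of it entirely: the UF on the syslog-ng server monitors the files syslog-ng writes and assigns sourcetype pan:log, and the TA's props/transforms on the indexer then re-sourcetype the events into pan:traffic, pan:threat, etc. at parse time.

```
# inputs.conf on the UF on the syslog-ng server (paths are assumptions)
[monitor:///var/log/pan/*.log]
sourcetype = pan:log
index = pan_logs
disabled = false
```

If the sourcetype stays at pan:log after indexing, it usually means the TA's parsing configuration was not in place on the instance that first cooked the data (the indexer here, since a UF does not parse).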
Hi, I have just downloaded Splunk onto my MacBook, and Terminal has popped up to do its verification but won't let me enter a password. I see a key symbol next to the password line; when I press Enter, it says the credentials are invalid.
We have network outages at times that make the firewalls unavailable. I know that data can get queued up, but what if the outage is long, and what does one do to make sure the SOC and IR teams have the latest data and events to work with?
Hello folks, I have about 20 hosts that fall under index=devices. I need a query that displays the event count for each host over a 24-hour period. My intention is to use this for an alert, so that if a host has no events in 24 hours, it sends an alert. What query would work best for this?
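One caveat: counting alone will not alert on silent hosts, because a host with zero events returns no row at all. A commonly used approach is to compare against a reference list of expected hosts; here expected_hosts.csv is an assumed lookup with a single host column that you would maintain yourself.

```
| tstats count where index=devices earliest=-24h by host
| append [| inputlookup expected_hosts.csv | eval count=0]
| stats sum(count) as count by host
| where count=0
```

Any host that appears only via the lookup (i.e. sent nothing in 24 hours) ends with count=0, so alerting on "number of results > 0" fires when a host goes quiet.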
We are trying to extract the fields from Message in WinEventLog in the Avecto data. The data looks like:

Process Id: 21592
Parent Process Id: 24704
Workstyle: Avecto Defendpoint.Systems Employees
Application Group: Avecto Defendpoint.Add Admin - Privileged Users - Applications
Reason: <None>
File Name: <file name>
Hash: 4478EBABE67B50EB111D59F95FE029D31329F1FC
Certificate: <name>
Description: Command line runner
Application Type: exe
Product Name: IntelliJ Platform
Product Code: <None>
Upgrade Code: <None>
....

Each line in Message has a name/value pair, separated by a colon. The documentation at https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Rex shows:

| makeresults | eval test="a$1,b$2" | rex field=test max_match=0 "((?<field>[^$]*)\$(?<value>[^,]*),?)"

which works. The similar one I did for Avecto also works fine:

index = <avecto index> Message=* | rex field=Message max_match=0 "((?<field>.+)\:(?<value>.+),?)" | table Message field value

We end up with field and value, each a multivalue field. Is there a way to change this so we have multiple fields, each with its own name/value pair, such as Process_Id having 21592 as its value?
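To illustrate the target shape outside SPL, the Python sketch below does the same per-line split on the first colon and converts spaces in field names to underscores (so "Process Id" becomes Process_Id, as the question asks). The message text is a shortened copy of the sample above; in Splunk the analogous last step would be pairing the two multivalue fields back up and assigning each pair to its own field.

```python
# Split a "Name: value" message into individual named fields,
# mirroring what a per-pair extraction would produce in Splunk.
message = """Process Id: 21592
Parent Process Id: 24704
Application Type: exe
Product Name: IntelliJ Platform"""

fields = {}
for line in message.splitlines():
    if ":" not in line:
        continue
    # Split only on the FIRST colon: values may themselves contain colons.
    name, _, value = line.partition(":")
    # Splunk field names cannot contain spaces, so use underscores.
    fields[name.strip().replace(" ", "_")] = value.strip()

print(fields["Process_Id"])  # 21592
```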
I have a CSV and a KV store with data that I would like to join together. I read the documentation:
https://docs.splunk.com/Documentation/SplunkCloud/8.1.2101/SearchReference/Append
https://docs.splunk.com/Documentation/SplunkCloud/8.1.2101/SearchReference/Join
It appears my type of search might not be covered by either join or append. I have a scan.csv with about 19,000 IPs and scan data in it, and another CSV called IP's.csv with about 40,000 IPs. IP's.csv has some systems that have a system name and some that do not. Shortened example below:

IP's.csv
System    IP
ABC       192.168.1.1
ABC       192.168.1.2
DEF       192.168.1.3
DEF       192.168.1.4
DEF       192.168.1.5

Scan.csv
IP             scan-data
192.168.1.1    yes
192.168.1.2    yes
192.168.1.3    yes
192.168.1.4    yes
192.168.1.5    yes
192.168.1.6    yes
192.168.1.7    yes

When I do a join with left, outer, or inner, a lookup, an appendcols, etc., it only returns the data that has the "System" field. The data returns looking something like this:

System    IP             scan-data
ABC       192.168.1.1    yes
ABC       192.168.1.2    yes
DEF       192.168.1.3    yes
DEF       192.168.1.4    yes
DEF       192.168.1.5    yes

However, 192.168.1.6 and 192.168.1.7 do not return at all (it appears joins and appends only return what is in common, like by the System field). How do I do a search that returns data that does not have a "System"?
I want the returned data to look something like this:

System    IP             scan-data
ABC       192.168.1.1    yes
ABC       192.168.1.2    yes
DEF       192.168.1.3    yes
DEF       192.168.1.4    yes
DEF       192.168.1.5    yes
          192.168.1.6    yes
          192.168.1.7    yes

Any ideas?
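Since scan.csv is the complete set of rows you want back, one way is to drive the search from scan.csv and left-join the system names onto it; a left join keeps every row from the left side and leaves System empty where there is no match. This is a sketch: it assumes the second file is uploaded as a lookup named ips.csv (Splunk lookup filenames with apostrophes are awkward, so rename if needed).

```
| inputlookup scan.csv
| join type=left IP
    [| inputlookup ips.csv]
| table System IP scan-data
```

Note the direction matters: joining IP's.csv to scan.csv drops the unmatched scan rows, which is the behavior you were seeing.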
Once RACF logs have been located, where would I need to send them so that they could be sent to Splunk? In simple terms, I assume they would need to be sent to a syslog server by following the process below? Also, I do not think TAs like Ironstream or SFSherlock or IBM Common Data Provider for z Systems are required, or are they?
1.) Forward SMF logs from the z/OS system to syslog, into their own dedicated directory
2.) On the syslog server, configure the inputs.conf monitoring stanza to look for those SMF logs
3.) The UF installed on the syslog server is configured via its outputs.conf file to forward to the IDX or HF for parsing
4.) On the IDX or HF (props.conf), the logs are parsed and then indexed by Splunk
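If the flow above is used, step 2 would look roughly like this on the syslog server's UF; the directory, sourcetype, and index names here are placeholders I've invented for illustration, not anything prescribed for RACF data.

```
# inputs.conf on the UF on the syslog server (all names are placeholders)
[monitor:///var/log/zos/racf]
sourcetype = racf:smf
index = mainframe
disabled = false
```

The parsing quality in step 4 then depends entirely on props.conf for that sourcetype (line breaking, timestamping, field extraction), which is the part the commercial TAs normally supply.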
I am trying to set up the Corelight App for Zeek data on a clustered Splunk deployment, but it seems the TA doesn't want to work alongside the App. Per Corelight's guidelines, the App is to be installed on the Search Head cluster and the TA on the Indexers. I am wondering if the install isn't suited to a clustered Splunk setup. I am also getting error codes for a Windows TA, but I'm not sure if that relates to this issue.
Dear Splunk, Is there any estimate of the log size in GB per day that the CDP can generate from an IBM z/OS system?
I'm using an on-prem Heavy Forwarder to filter some noisy logs coming in via syslog (the HF is installed on the syslog server). Logs are then forwarded to our Splunk Cloud instances. I configured inputs.conf, props.conf, and transforms.conf, using regexes to route the garbage to nullQueue and drop the unwanted traffic. I reloaded the transforms using the "refresh" URL below (without restarting the entire splunkd service). This was working perfectly as expected. http://your-heavy-forwarder-splunk-server:8000/en-GB/debug/refresh I recently made a change to drop some more logs in a different file, so changes were made to different inputs, props, and transforms config files than the first time. I used the same method to reload the transforms. As soon as I did that, for about 10 to 30 minutes the previous log filter stopped working and tons of garbage started flowing into our Splunk Cloud account (see the crazy bump shown below). After a while it stopped on its own, and the new filter works as expected as well (I'm so confused). However, as you can imagine, this crazy amount of logs flowing into Splunk Cloud every time we want to discard logs is counterproductive to the whole exercise. I want to understand if this is a known issue and if there is a way around it.
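For readers following along, the nullQueue routing in question typically looks like this pair of stanzas (the sourcetype name and regex below are placeholders, not the poster's actual configuration):

```
# props.conf on the Heavy Forwarder
[my_noisy_sourcetype]
TRANSFORMS-drop_noise = drop_noise

# transforms.conf
[drop_noise]
REGEX = pattern_to_drop
DEST_KEY = queue
FORMAT = nullQueue
```

Because these are index-time (parsing pipeline) settings, a full splunkd restart is the conservative way to apply changes; my understanding is that the /debug/refresh endpoint does not reliably reload every parsing-pipeline component, which could explain a transient window where filtering misbehaves.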
Hi, is it possible to ingest RACF (SMF) logs into Splunk without having to purchase an expensive third-party TA like Ironstream or IBM Common Data Provider for z Systems to do it?
Unable to execute a Python script; below are the inputs.conf and the .py script. If I run the script from the Python interpreter, it gives output, but not through inputs.conf. @kamlesh_vaghela

[script:///$SPLUNK_HOME/etc/apps/search/bin/dockesd.py]
disabled = false
host = host1
index = indx
interval = 30
source = Perform
sourcetype = Memory

Script:

import os
os.system('docker container ls --format="{{json .}}"')
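One frequent cause with scripted inputs is environment: splunkd runs the script as the splunk user with a minimal PATH, so a binary like docker that works in your interactive shell may not be found, and os.system swallows the failure silently. The sketch below (run_collector is an illustrative helper, not Splunk API) resolves the binary explicitly and writes errors to stderr, where splunkd logs them; it is demonstrated with echo since docker may not exist where this runs.

```python
import shutil
import subprocess
import sys

def run_collector(cmd_name, args):
    """Resolve the binary on PATH and return its stdout, surfacing
    failures on stderr instead of silently producing no events."""
    path = shutil.which(cmd_name)
    if path is None:
        print("ERROR: %s not found on PATH" % cmd_name, file=sys.stderr)
        return ""
    result = subprocess.run([path] + args, capture_output=True, text=True)
    if result.returncode != 0:
        print("ERROR: " + result.stderr, file=sys.stderr)
    return result.stdout

# For the question's case this would be:
# print(run_collector("docker", ["container", "ls", "--format", "{{json .}}"]))
output = run_collector("echo", ["hello"])
print(output)
```

If docker is installed but not on the splunk user's PATH, hard-coding its absolute path (e.g. the result of `which docker` from a root shell) is the usual workaround.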
In a dashboard, a single panel using a lookup and geostats works fine. When I take that search and split it up to use a base search with multiple panels, it semi-breaks. The cluster map will start loading, but the pie charts appear then disappear. The other panels on the dashboard are pie charts, and they all load appropriately. Once the search completes, however, if you click refresh, the cluster map results display properly. Is this a problem with my source, the SPL, or something else (a bug)? The source below is just the panel for the cluster map I am having problems with.

<form>
  <label>Firewall Clustermap</label>
  <description>Inbound Traffic</description>
  <search id="Global_Traffic">
    <query>index=xyz_firewall sourcetype=xyz_log policy_name="XYZ" direction=inbound |fields Country,src_ip,vendor_action,dest_ip,dest_port, src_port |iplocation src_ip |search Country=* [|inputlookup XYZ_Country_Block_List] </query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <fieldset submitButton="true">
    <input type="time" token="field1">
      <label>Choose Time then Click Submit</label>
      <default>
        <earliest>-1m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>GLOBAL DROPS</title>
      <map>
        <title>ACTION: Drop</title>
        <search base="Global_Traffic">
          <query>|Search vendor_action IN (Drop, Deny, Block, Reject) |geostats count by Country globallimit=0</query>
        </search>
        <option name="mapping.type">marker</option>
        <option name="refresh.display">progressbar</option>
      </map>
    </panel>
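A frequently cited cause of panels flickering with base searches is a non-transforming base: post-process searches run against partial results while the base is still streaming, then settle once it finishes, which matches the "refresh fixes it" symptom. A common workaround is to make the base search transforming and post-process from that; below is a sketch adapted from the question's searches (keeping lat/lon from iplocation in the stats so geostats still has coordinates to plot).

```xml
<search id="Global_Traffic">
  <query>index=xyz_firewall sourcetype=xyz_log policy_name="XYZ" direction=inbound
| iplocation src_ip
| search Country=* [|inputlookup XYZ_Country_Block_List]
| stats count by Country, vendor_action, lat, lon</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
</search>

<search base="Global_Traffic">
  <query>search vendor_action IN (Drop, Deny, Block, Reject)
| geostats latfield=lat longfield=lon sum(count) as count by Country globallimit=0</query>
</search>
```

Separately, note the post-process in the question starts with a capitalized |Search; writing it as a plain lowercase search term (no leading pipe needed) is the conventional form.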
I have to share a PDF report as part of an alert every week. How should I add the date range, i.e. the start date to end date (where the user gives the time range in the Splunk search, e.g. Last 7 days), to the PDF when sharing it? @mayurr98
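One way to surface the selected time range inside the report itself is the addinfo command, which exposes the search's time boundaries as info_min_time and info_max_time; you can format them into a column that then appears in the exported PDF. A sketch (append to your existing search):

```
... your search ...
| addinfo
| eval report_range = strftime(tonumber(info_min_time), "%Y-%m-%d") . " to " . strftime(tonumber(info_max_time), "%Y-%m-%d")
| fields - info_min_time info_max_time info_search_time info_sid
```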
    W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release. F NETWORK [main] The provided SSL certificate is expired or not yet valid. F - [main] Fatal Ass... See more...
    W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release. F NETWORK [main] The provided SSL certificate is expired or not yet valid. F - [main] Fatal Assertion 28652 at src/mongo/util/net/ssl_manager.cpp 1145F F F - [main] ***aborting after fassert() failure      I am on a closed network so I copied these errors from other posts and removed their older time stamps.   Yes, I have tried removing server.pem and restarting splunk it does nto auto generate a new Server .pem.  Yes I followed the attached instructions: https://splunkonbigdata.com/2019/07/03/failed-to-start-kv-store-process-see-mongod-log-and-splunkd-log-for-details/  I do have in server.conf [sslConfig] caCertFile= $SPLUNK_HOME/etc/auth/cacert.pem caPath=$SPLUNK_HOME/etc/auth enableSplunkdSSL = true serverCert = /opt/splunk/etc/auth/mycerts/myCert.pem SSLRootCAPath = /opt/splunk/etc/auth/mycerts/CA-Chain-Cert.pem I do not have any Certs listed under [KVStore] section  Not sure if it defaults to use server.pem if not listed or if it defaults to the SSLConfig.  The certs in my SSLConfig ARE expired and I cannot get server team to generate new ones.  I have a distributed environment.  I can create local certs using ./splunk createssl if that helps and move off of the Current CA the enterprise uses since it is needing upgraded anyway.   I am using Red Hat Linux 7.5 and Splunk 7.3.4 and I have Enterprise Security and UBA as well.  I first noticed this error after a reboot on the ES Server search server.  I then later did a rolling restart on my index cluster and they all give kvstore errors now as well.  I do not have experience with Splunk ES or UBA and just arrived at this job a few months ago.  They have gone through a tone of quasi splunk admins who had little or no experience with SPLUNK due to difficulty finding splunk admins.   I feel like the servercert in sslconfig of server.conf may be my issue.  any help is HIGHLY appreciated! 
Yes, I will upvote troubleshooting assistance and answers
Good morning guys, I'm currently working on an analytics dashboard in the Splunk Enterprise Security app. The dashboard consists of many modules and panels. For example, there are several sections that are only shown when a radio button is selected. My problem concerns dashboard best practices: the dashboard re-runs everything internally on each refresh, causing slowdowns. Would it be possible to update the dashboard (and therefore the panels) only when I select a radio button? In other words, can I make sure the dashboard does not update in full at each refresh? Thank you all, Xavier
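In Simple XML, a search whose query references an undefined token is not dispatched, so one way to defer a section is to gate both visibility (depends) and the query itself on a token the radio button sets. A sketch, with placeholder token, choice, and search values:

```xml
<input type="radio" token="section">
  <label>Show section</label>
  <choice value="overview">Overview</choice>
  <choice value="details">Details</choice>
</input>

<row depends="$section$">
  <panel>
    <search>
      <!-- $section$ in the query keeps this search from running
           until the radio button sets the token -->
      <query>index=main section="$section$" | stats count</query>
    </search>
  </panel>
</row>
```

Until a choice is made, the row stays hidden and its search never runs, so only the selected section's panels consume search resources at refresh time.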
I have similar XML data, stated below, and the output of my query as shared in the snapshot. <root> <Jobs> <job> <id>Customer-1</id> <startrange> <sideA> <serialNumber>1</serialNumber> <timestramp>2021-03-23T07:14:09.906-0700</timestramp> </sideA> <sideB> <serialNumber>2</serialNumber> <timestramp>2021-03-23T07:15:29.906-0700</timestramp> </sideB> </startrange> </job> <job> <id>Customer-2</id> <startrange> <sideA> <serialNumber>3</serialNumber> <timestramp>2021-03-23T07:14:29.906-0700</timestramp> </sideA> <sideB> <serialNumber>4</serialNumber> <timestramp>2021-03-23T07:15:39.906-0700</timestramp> </sideB> </startrange> </job> <job> <id>Customer-3</id> <startrange> <sideA> <serialNumber>5</serialNumber> <timestramp>2021-03-23T07:14:49.906-0700</timestramp> </sideA> </startrange> </job> <job> <id>Customer-4</id> <startrange> <sideA> <serialNumber>7</serialNumber> <timestramp>2021-03-23T07:14:29.906-0700</timestramp> </sideA> <sideB> <serialNumber>8</serialNumber> <timestramp>2021-03-23T07:15:39.906-0700</timestramp> </sideB> </startrange> </job> </Jobs> </root> Sample Query: My output: for Customer-3, sideB is unavailable, so Splunk is taking Customer-4's sideB data and displaying it for Customer-3; I need blank values for Customer-3's sideB. Required output: Could anyone help me achieve this?
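The cross-matching typically happens because sideA and sideB values are extracted as parallel multivalue fields over the whole event, so a missing sideB shifts everything up. Expanding per <job> first keeps each customer's values together, and a customer with no sideB simply gets null. A sketch, assuming one event holds the whole XML document (note it deliberately uses the data's own spelling, timestramp):

```
| spath path=root.Jobs.job{} output=job
| mvexpand job
| spath input=job path=id output=id
| spath input=job path=startrange.sideA.timestramp output=sideA_time
| spath input=job path=startrange.sideB.timestramp output=sideB_time
| table id sideA_time sideB_time
```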
Hi, I am trying to identify which power meter readings have stopped increasing for 5 days. As these power values are accumulated, I assumed that they are always in sequential order with respect to time. Therefore, I sorted the data by ASSET_NAME and _time to get the latest value, then took the difference between the latest values of consecutive 5-day buckets. So if the difference is 0, it means there was no power increment. Do you think this logic flow is correct? Below is my code:

| bucket _time span=5d
| sort 0 ASSET_NAME _time
| stats latest(VALUE) as latestValue by ASSET_NAME _time
| delta latestValue as difference
| search difference = 0

Also, let's say the power values are not in sequential order due to some issue; how can I accurately identify the power meter that has stopped increasing? Please help. Thank you!
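One caveat with delta: it compares adjacent rows regardless of ASSET_NAME, so the first bucket of each asset is differenced against the previous asset's last bucket. streamstats with a by clause keeps the comparison within each meter, and sorting first also handles readings that arrived out of order. A sketch of that variant:

```
| bin _time span=5d
| stats latest(VALUE) as latestValue by ASSET_NAME _time
| sort 0 ASSET_NAME _time
| streamstats window=2 range(latestValue) as difference count as readings by ASSET_NAME
| where readings > 1 AND difference = 0
```

The readings > 1 guard skips each asset's first bucket, which would otherwise report a spurious difference of 0 from its one-row window.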