All Topics


Hi everyone, I am new to Splunk and was unsuccessful with my query. Let's say many events are aggregated in an index from different sourcetypes. I wish to compare the presence of fields in the respective sourcetypes.

Example:
Sourcetype 1 = gb-req.log
Sourcetype 2 = fr-req.log

My output should look like this:

Sourcetype   City   email   Phone
fr-req.log   yes    no      no
gb-req.log   yes    yes     no
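A minimal SPL sketch of one way to build that yes/no table, assuming both sourcetypes live in the same index and the fields are literally named City, email, and Phone (adjust the index and field names to match the data):

```
index=your_index (sourcetype="gb-req.log" OR sourcetype="fr-req.log")
| stats count(City) as City count(email) as email count(Phone) as Phone by sourcetype
| foreach City email Phone
    [ eval <<FIELD>> = if('<<FIELD>>' > 0, "yes", "no") ]
```

count(field) only counts events in which the field exists, so a zero count turns into "no".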
Hi All,

I have limited experience with Splunk (just over a year) and I joined a new team with a pretty hefty Splunk rollout: many search heads and a large index cluster (sorry, I can't give away the details). I noticed that there are about 50 indexes on the index cluster, as shown on the cluster master, yet some of the search heads (which are not clustered, by the way) have maybe 75 or up to 95 indexes defined on them. I see that these search heads are set up to forward their data to the index cluster, but I don't get two things:

1. How do you fit 75 indexes from the search head into 50 indexes on the index cluster? Ha ha.
2. Are there any advantages or disadvantages to having local indexes on the search heads which are totally empty and just forward everything to the index cluster? Why would anyone do that?

I hope you followed all that and can educate me on it. Thank you.
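For context, a search head that forwards everything to the index tier typically carries an outputs.conf along these lines (host names below are hypothetical). The local index definitions then exist mostly so that searches and internal logs resolve cleanly, while the data itself lands on the cluster:

```
# outputs.conf on the search head -- forward all data, store nothing locally
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

This is a sketch of the common pattern, not a statement about that specific environment.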
Hello Splunk team,

My deployment is pretty simple. I am using an EC2 instance with a Splunk Enterprise trial installed, following https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureHECdistributed#collapseDesktop4. I created a CLB and pointed it at 2 EC2 instances with the Splunk Enterprise trial. I bought a custom domain name, for instance abc123.info, and created an ACM certificate for the domain name "abc123.info", as the ACM console requires you to enter the domain name. My questions are below:

1. Is my deployment "Indexers in AWS VPC"? Indexers in AWS VPC is mentioned over here: https://docs.splunk.com/Documentation/AddOns/released/Firehose/Deploymentoptions
2. The Splunk doc provides an SSL configuration hyperlink for enabling SSL on the load balancer, https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureanELB, and further hyperlinks to https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html?icmpid=docs_elb_console#config-backend-auth. If I choose to generate an SSL certificate using ACM, what is the "domain name" for which I should be generating the certificate? Please note that the ACM console requires you to enter the "domain name" when provisioning an SSL certificate.

My failed attempt: I tried creating a custom domain name (abc.com for my domain), requested the ACM certificate, and attached it to the ELB. Also, in R53 I created an A record and alias with abc.com pointing to the LB DNS host name. It's still not working for me and it throws the error "Could not connect to the HEC endpoint. The host does not match the certificate provided by the peer. Make sure that the certificate and the host are valid."

Any help is appreciated, guys! Thanks.
Good Afternoon,

We are attempting to get our Stealthwatch data into Splunk. We are on Cloud 8.1, so the only add-on available is the Technology Add-on for Cisco Stealthwatch from a 3rd party (Technology Add-on for Cisco Stealthwatch Data Exporter | Splunkbase). We have installed the Data Exporter on our Flow Collector and confirmed that the Docker container is working. Based on the Data Exporter documentation, I installed a Get-Flows script that is pulling data, but I am not sure it is pulling everything, and the format is clunky.

I am curious whether anyone has experience getting Stealthwatch data into Splunk Cloud with this app, and what the best way to do it is.
Hi,

We are migrating Splunk from 7.3.0 to 8.1. We are trying to send a custom POST HTTP request with the following add-on: HTTP Alert Action, https://splunkbase.splunk.com/app/5022/. On 7.3.0 we are using "REST alert action" version 1.1.0 (the previous version of this app), which is not supported on 8.1. We have done several tests with the new app and we are not able to pass custom parameters from a Splunk search to make a custom POST HTTP request with this add-on.

Does anyone use it? Have you managed to pass custom parameters from a Splunk search? If you could provide an example, I would be glad. Thanks for your support.
The Azure add-on has been installed on a HF for months. The certificate expired last week, which appears to have caused an issue with the Azure add-on: no data has been ingested since the certificate expired.

* The certificate was replaced.
* Data started ingesting again right after the new certificate was working.
* Then, after about 4 hrs, no data was being ingested again on the HF for the Azure add-on.
* From recommendations on Answers, I changed the "Max Batch Set Iterations" to 1000 -- it didn't help.
* Why would data be ingesting into Splunk after the certificate was replaced, but then stop being ingested into Splunk?
Hi Team, please provide me with the steps for connecting my agent to a running JBoss instance.
We've got Splunk Enterprise at the company I work for, and I coded up a REST API client to make searching Splunk a bit more convenient. The client works just fine, except for one case: any search that specifies a field including '.' returns zero results. Other searches via the REST API work perfectly. Anyone got a clue what may be going on?
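One thing worth ruling out (a sketch, not a confirmed diagnosis; the field name below is hypothetical): in SPL, a field name containing a dot sometimes needs to be wrapped in double quotes so the dot is not treated as a breaking character. The search string sent to the REST endpoint would then look like:

```
search index=main "request.status"=200
```

If the unquoted form works in Splunk Web but not via the client, it is also worth checking that the client URL-encodes the whole search string before sending it.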
I ran the below query and got the output:

index=xxx sc_status=201 OR sc_status=200 | stats count(eval(sc_status)) as "Total Hits", avg(time_taken) as Avg_Time_Taken by date, cs_host, sc_status

Concern: I require a different color based on status on the y-axis (the Total value), as per the screenshot below.
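In Splunk charts, each series gets its own color, so one way to color by status is to make sc_status the split-by field of a chart; a sketch, assuming the same index and fields as above:

```
index=xxx sc_status=200 OR sc_status=201
| chart count over date by sc_status
```

Each sc_status value then becomes a separate series whose color can be adjusted in the chart's format menu.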
Hello, I am new to Splunk, and to regex for that matter. What I am trying to accomplish is creating an alert when a specific event occurs within an IP range, without having to create an alert for every IP individually. Here is my very basic query:

index=n sourcetype=c message_text="you should not have done that" host="1.1.1.?"

I put the question mark in there because this is where I am stuck. The last octet is a range beginning at 23 and ending at 51, excluding .26, .32, and .38-.44. Thank you for your time.
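A sketch using the regex command instead of a wildcard: the alternation below enumerates 23-51 while skipping 26, 32, and 38-44 (index, sourcetype, and message text taken from the query above):

```
index=n sourcetype=c message_text="you should not have done that"
| regex host="^1\.1\.1\.(2[3-57-9]|3[0-13-7]|4[5-9]|5[01])$"
```

Reading the alternation: 2[3-57-9] matches 23-25 and 27-29; 3[0-13-7] matches 30, 31, and 33-37; 4[5-9] matches 45-49; 5[01] matches 50 and 51.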
Dear community,

I have a massive issue with a (single-host) Splunk installation reading files from a local drive/UNC paths: Splunk does not read these files and doesn't show them as "available" in the Files & directories config page.

The splunkd service, which is running on Windows 2016, is configured with a local administrator user who also has full permissions on the local drives and on the UNC paths. I have verified access by logging on to the machine with this technical user and opening the paths. There is also no virus scanner blocking Splunk (verified with procmon). The stanzas look as follows (and have always worked for other customers, and for this one some time ago):

I searched through the logs but couldn't find anything really useful; the log states no issue with the file watch. Since I'm really desperate, I also tried adding the following, without success:

- crcSalt = <SOURCE>
- alwaysOpenFile = 1

Any ideas? They would be much appreciated. If I can't resolve it like this, I will have to try reinstalling Splunk from scratch and moving the configuration to the vanilla installation to see if it works there.

Thanks, mading
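For anyone comparing notes, a minimal pair of monitor stanzas of the kind described would look something like this (paths, index, and sourcetype are hypothetical; the original post's actual stanzas were in screenshots that did not survive):

```
# inputs.conf -- local-drive and UNC monitor stanzas (illustrative only)
[monitor://D:\logs\app]
disabled = 0
index = main
sourcetype = app:log

[monitor://\\fileserver\share\logs]
disabled = 0
index = main
sourcetype = app:log
```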
Dear all,

I am trying to initiate a search using the Splunk Cloud REST API, using the following code:

    const accessToken = "--my-super-secret-token--";
    const url = "https://company-installation.splunkcloud.com:8089/services/search/jobs";
    try {
        const authHeaderValue = `Splunk ${accessToken}`;
        const config = {
            headers: { 'Authorization': authHeaderValue },
            params: { 'output_mode': 'json', 'search': 'search *' }
        };
        const res = await axios.post(url, config);
        return {
            statusCode: 200,
            body: JSON.stringify(res.data),
        };
    } catch (e) {
        return {
            statusCode: 400,
            body: JSON.stringify(e),
        };
    }

When the code is executed, I get a 401 at the line const res = await axios.post(url, config);. My API token is valid and my IP address is whitelisted. When axios.post is replaced with axios.get, I get a list of searches back, which also verifies that the token and IP address are good.

Could anyone spot why the code is failing to create a search with HTTP POST, please? I am very new to the Splunk REST API and any help is much appreciated.
Hi,

Within our Splunk environment we have 1 search head, 3 search peers, 1 deployer/master/license server, and 500+ UFs. The UFs are configured with WMI monitoring.

Since the field DisplayName from the WMI output isn't correctly extracted, I would like to perform a custom extraction at index time. I know that search-time extractions are the best practice, but since it is a lot of data, I would like to do this at index time. So this is what I did: deployed props.conf and transforms.conf on the search peers.

props.conf

    [source::WMI...]
    TRANSFORMS-Display = DisplayNametrans
    SHOULD_LINEMERGE = false

transforms.conf

    [DisplayNametrans]
    REGEX = DisplayName=(?<DisplayName2>.*)\nName
    FORMAT = DisplayName2::$1
    WRITE_META = true

I also created fields.conf on the search head:

fields.conf

    [DisplayName2]
    INDEXED=true

After restarting everything, nothing happened. To troubleshoot a little further, I changed TRANSFORMS-Display to REPORT-Display and then it worked, which means my regex is fine. Does somebody have an idea what I'm doing wrong?

Kind regards, Joram
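One variable worth isolating (a hedged sketch, not a confirmed fix): index-time TRANSFORMS only fire on the instance that first parses the data, and only if the props stanza actually matches the incoming events. If the WMI events arrive keyed by a WMI:* sourcetype rather than a WMI... source, a sourcetype-based stanza can be tried instead; the sourcetype name below is hypothetical, so check the real one in the events first:

```
# props.conf on the parsing tier (the search peers, assuming the UFs forward unparsed data)
[WMI:LocalProcesses]
TRANSFORMS-Display = DisplayNametrans
SHOULD_LINEMERGE = false
```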
Hi,

We have the following source tables (2 sourcetypes):

1st table:

ACCOUNT_CODE | Application | Business Unit | Application RTO
AD30OH00 | Accounts Reconciliation Control System | Individual Life Insurance | C - Greater than 4 hours and less than or equal to 24 hours
ADR3OH00 | Accounts Reconciliation Control System | Individual Life Insurance | C - Greater than 4 hours and less than or equal to 24 hours
AJ52OH00 | Accounts Reconciliation Control System | Individual Life Insurance | C - Greater than 4 hours and less than or equal to 24 hours
AN00OH00 | Accounts Reconciliation Control System | Individual Life Insurance | C - Greater than 4 hours and less than or equal to 24 hours
AN0WOH00 | Accounts Reconciliation Control System | Individual Life Insurance | C - Greater than 4 hours and less than or equal to 24 hours

2nd table:

DATE | MVS_SYSTEM_ID | ACCOUNT_CODE | CALCMIPS
1/1/2020 | SYST1 | AD18IEDA | 1.31
1/1/2020 | SYST2 | AD18IE00 | 2.11

We want to create a table showing the SUM or AVG of CALCMIPS per application name, by month. However, an ACCOUNT_CODE can be used by multiple applications, so it needs to be counted within all of those occurrences. Can you help us with how to achieve this? The result will look something like:

Application Name | Business Unit | Application RTO | January Avg. MIPS
Accounts Reconciliation Control System | Individual Life Insurance | C - Greater than 4 hours and less than or equal to 24 hours | 0.0472767857142857

Thanks and Regards
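A hedged SPL sketch of one approach, assuming the 1st table is exported as a lookup named account_map keyed on ACCOUNT_CODE (with max_matches high enough to return every application sharing a code), and the 2nd table is the sourcetype being searched (index and sourcetype names hypothetical):

```
index=main sourcetype="mips_usage"
| eval month=strftime(strptime(DATE, "%m/%d/%Y"), "%B")
| lookup account_map ACCOUNT_CODE OUTPUT Application, "Business Unit", "Application RTO"
| mvexpand Application
| stats avg(CALCMIPS) as "Avg. MIPS" by Application, "Business Unit", "Application RTO", month
```

mvexpand duplicates each usage row once per matching application, so an account code shared by several applications contributes its CALCMIPS to each of them.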
Can you please help me with masking data?

Raw data: "login": "44337754-004613081080P"

I want the number to be masked with the below pattern.

Example: 44337754-004613081080P
Expected result of masking: ****7754-*********080P

I tried the following:

| rex mode=sed "s/(\"login\"\:\s+\")(\w+)(\d\d\d)-/\1\2xxx-/g"

But I am not getting the expected output.
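A hedged sketch that assumes the login value always has the fixed shape shown in the example (8 digits, a dash, 12 digits, one letter); the fixed-length groups let sed mode replace the masked runs with literal asterisks:

```
| rex mode=sed "s/(\"login\": \")\d{4}(\d{4})-\d{9}(\d{3}\w)/\1****\2-*********\3/g"
```

On the example value this yields ****7754-*********080P. If the masking must happen at index time rather than search time, the same substitution can go into a SEDCMD setting in props.conf on the parsing tier.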
Good Morning,

We are having a bit of an issue with our data "layout". In our wineventlogs we have a field in the XML called Parent Process Name, but it shows up in our events as Creator Process Name. The issue is that our correlation searches use Parent Process Name to find certain events, so this isn't matching up. I am curious about the best way to make sure the Parent Process Name field keeps that name in this setup and is not changed to Creator Process Name. I have attempted to turn on rendered XML in order to have it come over in that format, but to no avail. I have also thought about using transforms and props.conf, but I am not sure where the switch is taking place.

Field example: Creator Process Name: C:\Program Files\SplunkUniversalForwarder\bin\splunkd.exe

The way the data is flowing is UF ----> Heavy Forwarder ----> Splunk Cloud.
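If the goal is simply to have both names resolve at search time, a field alias is one low-risk option; a sketch, with the sourcetype and extracted field name assumed rather than confirmed (check the actual names in the events first):

```
# props.conf in a search-time app (no reindexing required)
[XmlWinEventLog]
FIELDALIAS-parent_process = Creator_Process_Name AS Parent_Process_Name
```

FIELDALIAS with AS keeps the original field and adds the alias, so searches on either name keep working.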
Hello, we have an issue with a persistent queue not working correctly. As visible below, we have enabled a persistent queue with size 100MB for Windows security logs. Unfortunately, even though we have disabled access to the indexers from the UF, the queue is still not being created in C:\Program Files\SplunkUniversalForwarder\var\run\splunk\, even though it has been running for 24h. Can you point out what I'm doing wrong?

Effective [WinEventLog://Security] settings (each line shows the contributing file, then the setting):

C:\Program Files\SplunkUniversalForwarder\etc\apps\somethingels_TA_DS_DC_activedirectory_log_collection\local\inputs.conf  [WinEventLog://Security]
C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows\default\inputs.conf  blacklist1 = something
C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows\default\inputs.conf  blacklist2 = something
C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows\default\inputs.conf  checkpointInterval = 5
C:\Program Files\SplunkUniversalForwarder\etc\apps\somethingels_TA_DS_DC_activedirectory_log_collection\local\inputs.conf  current_only = 0
C:\Program Files\SplunkUniversalForwarder\etc\apps\somethingels_TA_DS_DC_activedirectory_log_collection\local\inputs.conf  disabled = 0
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dc_name =
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  evt_dns_name =
C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows\default\inputs.conf  evt_resolve_ad_obj = 1
host = something
C:\Program Files\SplunkUniversalForwarder\etc\apps\somethingels_TA_DS_DC_activedirectory_log_collection\local\inputs.conf  index = <<indexname>>
C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf  interval = 60
C:\Program Files\SplunkUniversalForwarder\etc\apps\win-pers-test\local\inputs.conf  persistentQueueSize = 100MB
C:\Program Files\SplunkUniversalForwarder\etc\apps\somethingels_TA_DS_DC_activedirectory_log_collection\local\inputs.conf  renderXml = false
C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows\default\inputs.conf  start_from = oldest
C:\Program Files\SplunkUniversalForwarder\etc\apps\somethingels_TA_DS_DC_activedirectory_log_collection\local\inputs.conf  whitelist =
Hi, I have a field which contains a mix of epoch dates and dates already in %Y%m%d form. I want the %Y%m%d format for all values in the "date" field. How can I convert the epoch values to a date like %Y%m%d in this field?

Thanks!
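A hedged sketch, assuming the field is named date and the epoch values are 10-digit epoch seconds (values already in %Y%m%d form are 8 digits and are left untouched):

```
| eval date = if(match(date, "^\d{10}$"), strftime(tonumber(date), "%Y%m%d"), date)
```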
An example of the file is below. I want to break on <Object>, and I tried (\<Object>\) and (\<Object\s) with no success. Can someone offer some advice or something to try?

<Objects>
<Object>
<id><\id>
<mac><\mac>
<ip><\ip>
<ip6><\ip6>
<description><\description>
<firstSeen><\firstSeen>
<lastSeen><\lastSeen>
<manufacturer><\manufacturer>
<os><\os>
<user><\user>
<vlan><\vlan>
<wirelessCapabilities><\wirelessCapabilities>
<smInstalled><\smInstalled>
<recentDeviceMac><\recentDeviceMac>
<clientVpnConnections><\clientVpnConnections>
<lldp><\lldp>
<cdp><\cdp>
<Name><\Name>
<Network><\Network>
<NetID><\NetID>
<MXSerial><\MXSerial>
<OrgID><\OrgID>
<OrgName><\OrgName>
<PolicyName><\PolicyName>
<Status><\Status>
<PolicyId><\PolicyId>
<BlockedDate><\BlockedDate>
<\Object>
<Object>
<id><\id>
<mac><\mac>
<ip><\ip>
<ip6><\ip6>
<description><\description>
<firstSeen><\firstSeen>
<lastSeen><\lastSeen>
<manufacturer><\manufacturer>
<os><\os>
<user><\user>
<vlan><\vlan>
<wirelessCapabilities><\wirelessCapabilities>
<smInstalled><\smInstalled>
<recentDeviceMac><\recentDeviceMac>
<clientVpnConnections><\clientVpnConnections>
<lldp><\lldp>
<cdp><\cdp>
<Name><\Name>
<Network><\Network>
<NetID><\NetID>
<MXSerial><\MXSerial>
<OrgID><\OrgID>
<OrgName><\OrgName>
<PolicyName><\PolicyName>
<Status><\Status>
<PolicyId><\PolicyId>
<BlockedDate><\BlockedDate>
<\Object>
<\Objects>
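A props.conf sketch for breaking events at each <Object> (the sourcetype name is hypothetical; this belongs on the indexer or heavy forwarder that parses this feed):

```
[meraki:clients]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<Object>
```

LINE_BREAKER's required capture group is the text discarded between events, so each event starts at <Object>; the surrounding <Objects> wrapper ends up attached to the adjacent events, which is usually acceptable for field extraction.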
Hi all,

I wrote an SPL query to generate the table used as the data source for a world choropleth map within my glass table, on Splunk Cloud with the ITSI app. The query is defined as follows:

index=test | rename country as iso2 | stats count by iso2 | lookup geo_attr_countries iso2 OUTPUT country | fields - iso2 | geom geo_countries featureIdField="country"

It runs well in the search interface (on Splunk Cloud instances both without and with ITSI) and in a world choropleth map on a "traditional" Splunk dashboard: the countries are always colored as expected. However, when I add this query in the data configuration section of a new world choropleth map on a new glass table in the ITSI app, I only get the message "Cannot get geoJson data.". I supposed the glass table cannot find the data, but if I use the same query with another visualization type (e.g. table), the data are displayed as expected. The data are correct, and the query works well even if I click "open in search" near the empty map widget while in visualization mode.

Could you please support me in resolving this issue? I couldn't find any suggestions or examples online for choropleth maps in glass tables.

Thanks, G.P.