All Topics


Hi, I am running a heavy forwarder with HEC and it is sending data to 3 indexers. I am starting to read about ways to optimise this configuration, but I am not sure if I have all the settings.

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
disabled = false
server = hp923srv:9997,hp924srv:9997,hp925srv:9997

[tcpout-server://hp923srv:9997]
[tcpout-server://hp924srv:9997]
[tcpout-server://hp925srv:9997]

Or if someone has a few settings that they know work. All machines have 56 threads with HT on, so I have lots of CPU free.

1st - How do I monitor the history of the data coming in from the HF to the indexers?
2nd - Can you share some settings for the heavy forwarder and the indexers, please, to get the data into Splunk the fastest?

This is what I have read so far, but I am not 100% sure about some of it; any advice would be great:

parallelIngestionPipelines = X (to be set on the HF and the indexer, I think)
dedicatedIoThreads = Y (to be set on the HF)

Thanks in advance
Robert
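A minimal sketch of where the settings mentioned in this post live, assuming a HEC-fed heavy forwarder with spare cores; the pipeline and thread counts are illustrative starting points, not tuned recommendations:

```ini
# outputs.conf on the heavy forwarder (sketch)
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = hp923srv:9997,hp924srv:9997,hp925srv:9997
autoLBFrequency = 30

# server.conf on the HF (and on the indexers, if they have spare cores)
[general]
parallelIngestionPipelines = 2

# inputs.conf on the HF: dedicatedIoThreads belongs to the HEC input stanza
[http]
dedicatedIoThreads = 2
```

For monitoring history, the Monitoring Console's indexing performance views, or a search over index=_internal source=*metrics.log group=tcpin_connections on the indexers, show per-connection forwarder throughput over time.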
Hello Experts, please do not route me to Splunk PS or partner help; I want to do this myself, but with the help of you experts. I have 1 HQ, 2 main big branches, and 100+ small branches, and I want to have visibility across all the sites. What is the best design approach for this type of network? The data ingestion is approximately 200 GB/day in total across all the sites (HQ + main sites + 100 branches). Thanks
Hey all, I have a Windows 2019 client sending some default data, but there are a handful of inputs visible in the inputs.conf in the apps folder that are not showing up on the front end (some of the inputs in that very same inputs.conf are working). These same apps and corresponding inputs are working on non-Windows-2019 forwarders/clients. Any ideas? Thanks!
I have App_1 that is adding metadata in the inputs.conf file:

###### Forwarded WinEventLogs (WEF) ######
[WinEventLog://ForwardedEvents]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 1
## The addon supports only XML format for the collection of WinEventLogs using WEF, hence do not change the below renderXml parameter to false.
renderXml = true
host = WinEventLogForwardHost
index = system-win
_meta = machine_class::workstation

I now need to uniquely identify the host that the UF runs on. I expect that this would just be "_meta = uf_name::HostnameOfUF", BUT... App_1 is distributed to several hosts, and I cannot modify it to uniquely identify anything. Instead, I created a new app (App_2) consisting of just an inputs.conf with:

###### Forwarded WinEventLogs (WEF) ######
[WinEventLog://ForwardedEvents]
_meta = uf_name::HostnameX

Unfortunately, this never shows up, but I believe this is because App_1 cannot "merge" its _meta with the _meta contained in App_2. How can I uniquely identify my host?
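One hedged workaround for the non-merging behaviour described above: conf keys with the same name replace each other rather than append, and the winning app's value is taken whole. So App_2 can carry both key::value pairs itself, provided it wins precedence over App_1 (the app layout and value below are illustrative):

```ini
# App_2/local/inputs.conf, deployed per host; HostnameX is a placeholder
[WinEventLog://ForwardedEvents]
_meta = machine_class::workstation uf_name::HostnameX
```

_meta takes a space-separated list of key::value pairs, so repeating App_1's machine_class pair here means nothing is lost when this single value wins the merge.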
I have a 10GB Dev Licence including ITSI: Splunk Developer Personal License DO NOT DISTRIBUTE (with ITSI).  How can I download ITSI? Where can I get the download link?  
Hi, it seems the "splunkd service" process has significant CPU consumption (e.g. 40%, 31%, and so on). These virtual machines have 2 cores. How many CPUs are recommended for a Windows server running the Splunk universal forwarder agent?
I have the following events in splunk:

company,name,email,status
Acme,John Doe,john.doe@example.com,inactive
Company Inc.,John Doe,john.doe@example.com,active
HelloWorld Inc.,John Doe,john.doe@example.com,inactive
Contoso,John Doe,john.doe@example.com,inactive
Contoso,Mary Doe,mary.doe@example.com,inactive
HelloWorld Inc.,Mary Doe,mary.doe@example.com,inactive

I want to create a new field called "cumulativeStatus" that will be "active" if that email is active in at least one row, and "inactive" if the person is inactive in all rows. Like this:

company,name,email,status,cumulativeStatus
Acme,John Doe,john.doe@example.com,inactive,active
Company Inc.,John Doe,john.doe@example.com,active,active
HelloWorld Inc.,John Doe,john.doe@example.com,inactive,active
Contoso,John Doe,john.doe@example.com,inactive,active
Contoso,Mary Doe,mary.doe@example.com,inactive,inactive
HelloWorld Inc.,Mary Doe,mary.doe@example.com,inactive,inactive

Is it possible, and how?
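A sketch of one way to do this in SPL, assuming status and email are already extracted fields; eventstats writes a group-wide aggregate back onto every row:

```
<base search>
| eventstats max(eval(if(status="active", 1, 0))) as anyActive by email
| eval cumulativeStatus = if(anyActive = 1, "active", "inactive")
| fields - anyActive
```

Rows sharing an email where at least one status is "active" get cumulativeStatus="active"; groups that are inactive everywhere get "inactive".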
Hello, can anyone help me by providing the JavaScript for the toggle buttons we have used in the dashboard? Appreciated in advance.
Hi Team, there are two reports. The first report has a timestamp; the second report doesn't have one, so the date is extracted from the source filename. The second report has a few fields that need to be joined between the first and second report, but sometimes the second report is not received. In that case, I need to pick up the previous day's (or the latest available) file. Is there a way to do an earliest-time comparison and pick the values from the second report? What command should be used in that case?
Hello Community, I have a distributed environment with 2 indexers (each with 48 vCPU, 64 GB RAM), which are ingesting 200 GB of logs/day each. I want to send them another 200 GB of syslog logs per day (for each indexer), but I want to filter the logs before indexing. I would index only 10% of that additional 200 GB of syslog logs at each indexer, so 90% would be rejected. Could you please tell me what the hardware requirements are for such a setup? I couldn't find any hints.
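For reference, the usual index-time filtering pattern is props/transforms routing to nullQueue; a sketch, where the sourcetype name and the keep-regex are placeholders (note the indexers still spend CPU parsing events before discarding them, so the rejected 90% is not free):

```ini
# props.conf on the indexers (or on a heavy forwarder in front of them)
[my_syslog]
TRANSFORMS-filter = drop_all, keep_wanted

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_wanted]
REGEX = (ERROR|CRITICAL|%ASA-)
DEST_KEY = queue
FORMAT = indexQueue
```

Transforms in one class run in order, so everything is first routed to nullQueue and only events matching keep_wanted are re-routed back to the index queue.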
Dear All, I'm taking the liberty of writing here to see if it would be possible to get some of your support regarding query parameters in a dashboard URL.

To describe the scenario: we are creating a dashboard using Splunk Enterprise Dashboard Studio (version 8.2.5). In that dashboard we have placed some inputs (3 normal dropdowns, 3 multiselects), and each input has an associated token (token1, token2, etc.).

As you may know, when we access a Classic dashboard, we can see the input names and their values in the dashboard URL. A basic example:

https://server:8000/en-US/app/app_name/dashboard_name?form.input1=val1&form.input2=val2

So with a Classic dashboard this all works. In our case, however, we are using Dashboard Studio, where the URL is displayed slightly differently. In our example:

https://server:8000/en-US/app/app_name/dashboard_name

Our client asks whether, when accessing this dashboard, it would be possible to see the inputs and their values in the URL, just as we do in Classic dashboards. In my research I haven't been able to find this possibility.

To sum up: I would be grateful if some of you could provide some guidance. Thanks a lot! Sincerely, Francisco
Hi all, is there a way to set up a multi-domain certificate and a wildcard certificate? If yes, can anyone tell me the step-by-step procedure to implement this?
Because alert queries normally look back, say, over the last 15 minutes up to the current time, we need our jobs to run from 12:15pm through midnight. For now our cron schedule is */15 12-23 * * *, which of course runs from 12:00pm to 23:45. We see an issue where at 12:00pm it may produce a false positive, and at midnight (the next day) the alert will not run, so we may miss an important alert. We want it to run from 12:15pm through 00:00 (the next day), because of the 'look back' over the previous 15 minutes. It may be very simple, but so far I'm at a loss. What is the correct way of doing this?
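A single cron expression can't express "12:15 through 00:00 the next day", because the minute and hour fields combine independently; a hedged sketch of crontab lines that together cover the window (in Splunk this would likely mean cloning the alert, since each saved search takes exactly one cron expression):

```
15-59/15 12 * * *   # 12:15, 12:30, 12:45
*/15 13-23 * * *    # every 15 minutes from 13:00 to 23:45
0 0 * * *           # the 00:00 run, landing on the next day
```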
I have been using tstats to get event counts by day per sourcetype, but when I search for events in some of the identified sourcetypes, the search returns no results. I am a Splunk admin and have access to all indexes. Here is the search I have run:

| tstats count where index=myindex groupby sourcetype,_time

One of the sourcetypes returned was novell_groupwise (which was quite a surprise to me), but when I search

index=myindex sourcetype=novell_groupwise

on a day that tstats indicated there were events, nothing is returned. Can anyone explain this discrepancy?
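One hedged check for this kind of discrepancy is timestamp skew: events indexed recently but stamped with a _time far in the past or future can land in buckets a narrower event search misses. Comparing event time to index time over a deliberately wide window can confirm it (a sketch; the wide earliest/latest just removes the time-picker variable):

```
index=myindex sourcetype=novell_groupwise earliest=1 latest=+10y
| eval lag_days = round((_indextime - _time) / 86400, 1)
| stats count, min(_time) as first_event, max(_time) as last_event by lag_days
```

If events only appear with large lag_days values, the tstats day buckets and the event search window were simply not looking at the same events.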
Can anyone help me understand why this warning message is appearing in the splunkd log?
Hi All, I hope someone can enlighten me on this seemingly simple problem. I have a very simple search returning 32 rows and showing that all events have a transaction_type value. If I click on the D value highlighted above, I would expect it to show me just the 20 D rows, but instead I get something else entirely. Very weird. If I change the search to

index=orafin sourcetype=ORAFIN2 NOT transaction_type!=D

then I get what I want. Can someone please explain what is happening? Thanks, Keith
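For reference, field=D, field!=D, and NOT field!=D are not complementary in SPL when the field is missing or multivalued: != only matches events where the field exists with a differing value, while NOT negates the whole match. A small sketch of the missing-field case:

```
| makeresults count=2
| streamstats count as n
| eval transaction_type = if(n = 1, "D", null())
| search NOT transaction_type!=D
```

Both rows come back here (the D row, and the row with no transaction_type at all, since != never matches a missing field), whereas transaction_type=D would return only the first row. With multivalued fields the two forms diverge in the opposite direction, since a value like (D, C) satisfies both =D and !=D.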
Hi Team, I want to use tokens for email and xMatters notifications. I have one field named Server. This is what I write in the message for xMatters alerting:

Data isn't refreshed in time on $result.Server$

But here's what I received:

Data isnt refreshed in time on genesys-pulse-tko-04.hk.hsbc genesys-pulse-tko-04.hk.hsbc

The name of the server shows twice in the message. In another case I use a token for an email notification. Here's what I write in Splunk:

The alert condition for $result.Server$ was triggered.

Here's what I receive when the alert is triggered: (screenshot of the received email). Does anyone know the reason for these cases?
Hi, I am trying to write a query that would get me the average TPS and average response time for services in the same table. I tried this:

<search>
| eval <evaluate response time as RT>
| bin _time AS "TIME" span=1s
| eventstats count as TPS by TIME, service
| stats count AS txnCount, avg(TPS) as avgTPS, avg(RT) as avgRT by service

However, the numbers don't seem to match when I run the TPS query individually like this:

<search>
| bin _time AS "TIME" span=1s
| eventstats count as TPS by TIME, service
| stats count AS txnCount, avg(TPS) as avgTPS by service

Any suggestions on what I could be doing wrong here? Thank you!
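A hedged alternative shape for the query above: the eventstats version averages the per-second count once per event, which over-weights busy seconds; aggregating seconds first and services second counts each second exactly once (the RT eval is the post's own placeholder):

```
<search>
| eval RT = <evaluate response time>
| bin _time span=1s
| stats count as TPS, avg(RT) as secRT by _time, service
| stats sum(TPS) as txnCount, avg(TPS) as avgTPS, avg(secRT) as avgRT by service
```

Note that avgRT here becomes an average of per-second averages rather than an event-weighted average, which may or may not be what is wanted.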
When using a HF to collect logs in the cloud, the add-on used cannot set host, so the host on the data is the name of the HF; but it needs to reflect that the data comes from an unreachable environment, and the same data type should use the same sourcetype. At present, the way I do it is: first, use different sourcetypes to onboard the data; at this point they all have the same host (the HF name). Then I use props and transforms to modify their host and change their sourcetype to the same one. The problem is that of "modify host" and "change sourcetype", only one takes effect. Is there a way to modify the host first and then modify the sourcetype? Or is there something better?
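For reference, index-time transforms listed in one TRANSFORMS- class run left to right, so both rewrites can hang off the original per-environment sourcetype, host first and sourcetype second; a sketch with placeholder names:

```ini
# props.conf, keyed on the original (per-environment) sourcetype
[cloud_env_a_raw]
TRANSFORMS-fixmeta = set_host_env_a, set_common_sourcetype

# transforms.conf
[set_host_env_a]
REGEX = .
DEST_KEY = MetaData:Host
FORMAT = host::cloud-env-a

[set_common_sourcetype]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::common_cloud_type
```

One caveat: props keyed on the new common sourcetype are not re-applied to these events in the same parsing pass, so any other index-time rewrites should also be attached to the original sourcetype as above.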
Hello Experts, please help me arrive at a regex to extract an XML node from an XML field. I have a field value like below:

<Reponse status="failure">
 <messages>
  <message id="Payload">
   <UpdateAccountRq>
    <AccountId>123465</AccountId>
    <NewStatus>Active</NewStatus>
   </UpdateAccountRq>
  </message>
 </messages>
</Reponse>

And I want to extract the below XML node and display it in a separate field:

<UpdateAccountRq>
 <AccountId>123465</AccountId>
 <NewStatus>Active</NewStatus>
</UpdateAccountRq>

I tried many ways, but nothing works.

Attempt 1: rex field=Action "messages>(?<Payload>.+)<\/messages" | table Action, Payload
Attempt 2: rex field=Action "\<message id=\"Payload\">(?<Payload>[^<\/message]+)" | table Action, Payload

Please help. Thanks
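Two likely issues with the attempts above: `.` does not cross newlines unless the (?s) flag is set, and `[^<\/message]` is a character class (it excludes individual characters, not the string "</message>"). A hedged illustration in Python's re, since rex uses the same PCRE-style syntax:

```python
import re

# Field value from the post, with its line breaks.
action = """<Reponse status="failure">
 <messages>
  <message id="Payload">
   <UpdateAccountRq>
    <AccountId>123465</AccountId>
    <NewStatus>Active</NewStatus>
   </UpdateAccountRq>
  </message>
 </messages>
</Reponse>"""

# (?s): let '.' match newlines; .*? keeps the capture non-greedy.
pattern = r"(?s)(?P<Payload><UpdateAccountRq>.*?</UpdateAccountRq>)"
payload = re.search(pattern, action).group("Payload")
print(payload)
```

The SPL equivalent would be roughly | rex field=Action "(?s)(?<Payload><UpdateAccountRq>.*?<\/UpdateAccountRq>)".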