All Posts


Extending a previously answered question is perhaps not the best way of getting your question answered, particularly when the extension is a bit vague. Please start a new question with more specifics about your particular use case and the difficulties you are having, i.e. what you would want the solution to look like.
@gcusello , thank you for your swift response. For the Deployment Master server, we have around 1,000+ client machines in our environment, so it would be helpful if you could advise on the recommended hardware specifications for this setup. As for the Heavy Forwarders, we will be ingesting over 40 GB of data daily across both HF servers. The primary data sources include Microsoft Azure Storage Table and Blob using the Splunk Add-On for Microsoft Cloud Services, the Qualys Technology Add-On, Splunk DB Connect, and data parsing for approximately 120+ client machines per Heavy Forwarder. What would be the recommended hardware specifications for these servers?
There is no single good answer to such a question. A Deployment Server (not Deployment Master), depending on your environment size and configuration parameters, can run perfectly well on a relatively small server (like 4 CPUs and 8 GB RAM; if you disable the GUI, probably even smaller) but may need to be load-balanced over several quite big machines if you have many clients and many frequently changing apps (technically, you can have multiple separate DS instances for separate segments of your deployment, but that makes app management more troublesome). As for the HFs, the good thing is that you don't have to have just one HF in your environment. So you can start with a moderately sized HF (like a reference all-in-one server) and either scale up by adding cores/memory if you start lacking resources, or add more HF instances and migrate some inputs there.
Hi everyone, I have configured the OTX AlienVault TAXII source in Threat Intelligence Management. As I can see in the logs, some data was downloaded successfully, but is there a way to know which data exactly?
I increased the limit several times, but eventually I got the same error. Do you know a way to see what data was received, for example by running a search?
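If this is the Splunk Enterprise Security threat intelligence framework, the downloaded indicators normally end up in the KV store threat intel collections, so a search along these lines should show what was actually ingested (a minimal sketch; ip_intel is only one of the collections - there are also file_intel, http_intel, email_intel and so on - and the threat_key filter and the listed field names are assumptions to adapt to your source):
| inputlookup ip_intel
| search threat_key="*otx*"
| table threat_key ip description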
Hi @maspiro , to my knowledge, restarting the search is the only way to reset a token. Ciao. Giuseppe
Hi @anandhalagaras1 , there isn't any formal requirement from Splunk specifically for Deployment Servers and Heavy Forwarders; the only requirement is the one for a normal stand-alone Splunk server: 12 CPUs and 12 GB RAM. From my experience, I could add that, for the DS, it depends on the number of clients: if they aren't so many (some hundreds), you could also have fewer CPUs and less RAM (8+8); in addition, for some time now, you can also use more than one DS. It's different for HFs: if they have to do a hard job parsing logs (regexes), it's better to give them more resources (especially CPUs); in one heavy project, where our 4 HFs had to receive and parse hundreds of GB every day, I used 24 CPUs and 64 GB RAM for each one. My hint is to start with the normal reference hardware (12+12), analyze machine loads and queues, and add more resources if needed (we're usually speaking of virtual servers). In addition, if you have to receive syslogs, don't use Splunk for them directly; use an rsyslog (or syslog-ng) server and let Splunk read the written files. Ciao. Giuseppe
Hi Giuseppe! It's very useful, but this solution requires restarting the search. My need is that one panel is related via token to another: when I click on a field in the second panel, the first one shows only the related records. How can I reset the token so that the first panel shows all the records again, without restarting the search? Thanks a lot!
Ok, what you're describing is more of a SOAR functionality. If you wanted to do something like that within Splunk Enterprise you'd have to implement it yourself. And I'm pretty sure an app doing that would not pass vetting on Cloud.
No, more like this
index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
| bin _time span=1d
| stats sum(numRows) as count by _time,table_Name
| sort 0 +_time -count
| streamstats count as row by _time
| where row <= 10
| streamstats latest(count) as previous by table_Name window=1 global=f current=f
| eval increase=round(100*(count-previous)/previous,0)
The previous answer was based on the green table - since this is based on my first answer, combining the two should work for you (I removed the extra sort as this is redundant given the first sort).
Hi Team, We are planning to host the Deployment Master server and two Splunk Heavy Forwarder servers in our on-prem Nutanix environment. Could you please provide the recommended hardware requirements for hosting these servers? Based on your input, we will plan and provision the necessary hardware. The primary role of the Deployment Master server will be to create custom apps and collect data from client machines using Splunk Universal Forwarder. For the Heavy Forwarders, we will be installing multiple add-ons to configure and fetch data from sources such as Azure Storage (Table, Blob), O365 applications, Splunk DB Connect, Qualys, AWS, and client machine data parsing. We are looking for the minimum, moderate, and maximum hardware requirements as recommended by Splunk Support to host the Splunk DM and HF servers in the Nutanix environment. If there are any support articles or documentation available, that would be greatly appreciated. Thank you!
@ITWhisperer did you mean the final splunk query would look like the below?
index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
| bin _time span=1d
| stats sum(numRows) as count by _time,table_Name
| sort limit=10 +_time -count
| sort 0 _time
| streamstats latest(count) as previous by Table_Name window=1 global=f current=f
| eval increase=round(100*(count-previous)/previous,0)
Optimisation will usually depend on the data set(s) you are dealing with, which you haven't provided. Having said that, the dedup by Ordernumber and movement_category will mean that there is only one event with each unique combination of the values in these fields, which means the count from the stats will always be 1, so what is the point of doing the stats? Your join is to an inputlookup, can this be replaced by a simple lookup?
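To illustrate that last point, the join to the isc inputlookup could be rewritten as a lookup, roughly like this (a sketch only, assuming isc also exists as a lookup definition with area and aisle as match fields, and that each area/aisle pair resolves to a single mark_code row with an empty section - if the lookup has several rows per pair, the behaviour will differ from the join):
... base search, rex and fields as before ...
| lookup isc area aisle OUTPUT mark_code
| lookup movement_type mark_code source AS source_tel position AS position destination AS destination OUTPUT movement_type
... rest of the pipeline unchanged ...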
Hi @hazem , right now I can't find the parameter, also because I try to avoid changing it; the default value is usually the best solution. Ciao. Giuseppe
Hi @neerajs_81 , good for you, see you next time! Maybe you could try the hint from @ITWhisperer to put inputs in different rows, but always one by one in each panel. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @gcusello , could you please provide me with the stanza to change the interval required to read logs from the log file? For example, the MSSQL ERROR.log file.
Please provide more detail - what is the source of your dashboard? How are you using the tokens? If the tokens both have the same value, can you not just use one token?
Hello Splunkers! Could you please help me optimize the query below? The customer says the dedup is consuming a lot of resources, so what should I change so that the complete query gets optimized?
index=abc sourcetype=abc _tel type=TEL (trigger=MFC_SND OR trigger=FMC_SND) telegram_type=CO order_type=TO area=D10 aisle=A01 *1000383334*
| rex field=_raw "(?P<Ordernumber>[0-9]+)\[ETX\]"
| fields _time area aisle section source_tel position destination Ordernumber
| join area aisle [ inputlookup isc where section="" | fields area aisle mark_code | rename area AS area aisle AS aisle]
| lookup movement_type mark_code source AS source_tel position AS position destination AS destination OUTPUT movement_type
| fillnull value="Unspecified" movement_type
| eval movement_category = case(
    movement_type like "%IH - LH%", "Storage",
    movement_type like "%LH - R%", "Storage",
    movement_type like "%IH - IH%", "Storage",
    movement_type like "%R - LH%", "Retrieval",
    movement_type like "%LH - O%", "Retrieval",
    1 == 1, "Unknown"
  )
| fields - source_tel position destination
| dedup Ordernumber movement_category
| stats count AS orders by area aisle section movement_category movement_type Ordernumber _raw
You could put them in panels in different rows
<form version="1.1" theme="light">
  <label>Inputs</label>
  <row>
    <panel>
      <input type="text" token="src_ip">
        <label>Source IP</label>
      </input>
      <input type="text" token="dest_ip">
        <label>Destination IP</label>
      </input>
    </panel>
  </row>
  <row>
    <panel>
      <input type="radio" token="srcIPcondition">
        <label>SrcIP Condition</label>
        <choice value="=">Equal</choice>
        <choice value="!=">Not Equal</choice>
      </input>
      <input type="radio" token="destIPcondition">
        <label>DestIP Condition</label>
        <choice value="=">Equal</choice>
        <choice value="!=">Not Equal</choice>
      </input>
    </panel>
  </row>
</form>
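If it helps, the tokens from those inputs could then be combined in a panel search along these lines (a sketch only; the index, sourcetype and field names here are placeholders for whatever your panels actually search):
index=your_index sourcetype=your_sourcetype
| search src_ip$srcIPcondition$"$src_ip$" dest_ip$destIPcondition$"$dest_ip$"
so that, for example, choosing Not Equal for the source turns the first clause into src_ip!="10.0.0.1".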
Hi @hazem , it's usually continuously monitored (every 30 seconds), but you can change this frequency, even if I didn't do it myself. Ciao. Giuseppe