All Posts


Hello Daniel, a very big thank you. I truly appreciate the time and effort you put in to resolve this. The settings worked like a charm.
What is the reason for the planned migration? To me it sounds like you just want to install a new universal forwarder on a different server to collect the logs. Usually you keep all forwarder-specific configuration, such as inputs.conf and outputs.conf, on your deployment server, and when setting up a new UF you only add it to an existing or new serverclass to roll out the configuration files. I would do a fresh installation on the new server, configure the local settings (e.g. deploymentclient.conf), and then distribute all other configurations via the deployment server. Regarding the installation routine, I recommend taking a look at the documentation: Install a Windows universal forwarder - Splunk Documentation. A silent installation on the command line is also described there.
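For illustration, the silent install and a minimal deploymentclient.conf could look like the following sketch; the package name, deployment server host, and port are placeholders, not values from this thread:

msiexec.exe /i splunkforwarder-9.x.x-x64-release.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="ds.example.com:8089" /quiet

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089

If you pass DEPLOYMENT_SERVER to the installer, it writes the equivalent deploymentclient.conf for you, so the second snippet is only needed if you configure the client manually afterwards.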
Okay, and I guess the data is pulled via the Tenable API, right? Could you try:

transforms.conf
[tenable_remove_logs]
REGEX = (?m)(ABCSCAN)
DEST_KEY = queue
FORMAT = nullQueue

If it is not working, I would increase the depth_limit for testing.
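If the regex gives up before reaching the scan name deep inside pluginText, raising DEPTH_LIMIT in the same stanza is one way to test that theory; if I read transforms.conf.spec right, the default is 1000 characters, and the value below is just a generous test value:

# transforms.conf - DEPTH_LIMIT raised for testing only
[tenable_remove_logs]
REGEX = (?m)(ABCSCAN)
DEST_KEY = queue
FORMAT = nullQueue
DEPTH_LIMIT = 100000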
Hello, bit of a novice here. I am in the process of planning to migrate a Splunk universal forwarder from one Windows server to another.

To my understanding, this is the process I have come up with:
1. Copy the Splunk home folder from the original forwarder to the newly commissioned server.
2. Download the same version of Splunk.
3. Run the MSI executable, agree to the terms and conditions, open the customise settings, and select the same install location as the pre-existing configuration.

Will the installer then prompt me for any other information, given that it already has the configuration? For example, will it ask me for the deployment server address or the indexer address, what system account is being used, or to create a local Splunk administration account?

Will I need to change the host name in any configuration files if it is not the same as the original server?
On a Heavy Forwarder.
Where have you applied these settings? On an indexer or on a Heavy Forwarder?
It also does not work for me. We had UF version 8.2.6 and upgraded to 9.1.7. We also tried versions 9.0.9, 9.2.4, and 9.3.2. Regardless of wec_event_format = raw_event, we still have errors in the log:

Invalid WEC content-format:'Events', for splunk-format = rendered_event. See the description for the 'wec_event_format' setting at $SPLUNK_HOME/etc/system/README/inputs.conf.spec for more details.

And the data is not coming in.
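For anyone comparing notes, the setting belongs in inputs.conf on the machine that subscribes to the WEC events; a minimal sketch, assuming the default ForwardedEvents channel (the channel name is an assumption, not taken from this thread):

# inputs.conf on the WEC collector
[WinEventLog://ForwardedEvents]
disabled = 0
wec_event_format = raw_event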
Hi, I have Tenable JSON logs. I wrote a regex and am trying to send the logs to the null queue; however, they are not going to the null queue. A sample log is given below.

{ [-]
SC_address: X.xx.xx
acceptRisk: false
acceptRiskRuleComment:
acrScore:
assetExposureScore:
baseScore:
bid:
checkType: summary
cpe:
custom_severity: false
cve:
cvssV3BaseScore:
cvssV3TemporalScore:
cvssV3Vector:
cvssVector:
description: This plugin displays, for each tested host, information about the scan itself : - The version of the plugin set. - The type of scanner (Nessus or Nessus Home). - The version of the Nessus Engine. - The port scanner(s) used. - The port range scanned. - The ping round trip time - Whether credentialed or third-party patch management checks are possible. - Whether the display of superseded patches is enabled - The date of the scan. - The duration of the scan. - The number of hosts scanned in parallel. - The number of checks done in parallel.
dnsName: xxxx.xx.xx
exploitAvailable: No
exploitEase:
exploitFrameworks:
family: { [+] }
firstSeen: X
hasBeenMitigated: false
hostUUID:
hostUniqueness: repositoryID,ip,dnsName
ip: x.x.x.x
ips: x.x.x.x
keyDrivers:
lastSeen: x
macAddress:
netbiosName: x\x
operatingSystem: Microsoft Windows Server X X X X
patchPubDate: -1
pluginID: 19506
pluginInfo: 19506 (0/6) Nessus Scan Information
pluginModDate: X
pluginName: Nessus Scan Information
pluginPubDate: xx
pluginText: <plugin_output>Information about this scan : Nessus version : 10.8.3 Nessus build : 20010 Plugin feed version : XX Scanner edition used : X Scanner OS : X Scanner distribution : X-X-X Scan type : Normal Scan name : ABCSCAN Scan policy used : x-161b-x-x-x-x/Internal Scanner 02 - Scan Policy (Windows & Linux) Scanner IP : x.x.x.x Port scanner(s) : nessus_syn_scanner Port range : 1-5 Ping RTT : 14.438 ms Thorough tests : no Experimental tests : no Scan for Unpatched Vulnerabilities : no Plugin debugging enabled : no Paranoia level : 1 Report verbosity : 1 Safe checks : yes Optimize the test : yes Credentialed checks : no Patch management checks : None Display superseded patches : no (supersedence plugin did not launch) CGI scanning : disabled Web application tests : disabled Max hosts : 30 Max checks : 5 Recv timeout : 5 Backports : None Allow post-scan editing : Yes Nessus Plugin Signature Checking : Enabled Audit File Signature Checking : Disabled Scan Start Date : x/x/x x Scan duration : X sec Scan for malware : no </plugin_output>
plugin_id: xx
port: 0
protocol: TCP
recastRisk: false
recastRiskRuleComment:
repository: { [+] }
riskFactor: None
sc_uniqueness: x_x.x.x.x_xxxx.xx.xx
seeAlso:
seolDate: -1
severity: informational
severity_description: Informative
severity_id: 0
solution:
state: open
stigSeverity:
synopsis: This plugin displays information about the Nessus scan.
temporalScore:
uniqueness: repositoryID,ip,dnsName
uuid: x-x-x-xx-xxx
vendor_severity: Info
version: 1.127
vprContext: []
vprScore:
vulnPubDate: -1
vulnUUID:
vulnUniqueness: repositoryID,ip,port,protocol,pluginID
xref:
}

in props.conf
[tenable:sc:vuln]
TRANSFORMS-Removetenable_remove_logs = tenable_remove_logs

transforms.conf
[tenable_remove_logs]
SOURCE_KEY = _raw
REGEX = ABCSCAN
DEST_KEY = queue
FORMAT = nullQueue

It is not working. Any solution? I removed SOURCE_KEY later; that is also not working.
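One quick way to confirm the props/transforms pair is actually loaded on the instance that first parses the data (an indexer or heavy forwarder, not the UF) is btool:

$SPLUNK_HOME/bin/splunk btool props list tenable:sc:vuln --debug
$SPLUNK_HOME/bin/splunk btool transforms list tenable_remove_logs --debug

Index-time transforms only fire where the data is first parsed, so if these stanzas live only on a search head they will never run.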
Have you tried to map the "Name" to the "role" variable? Have you checked the supported group-information formats in the docs and verified them? Configure SAML SSO using configuration files on Splunk Enterprise - Splunk Documentation
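For comparison, a role mapping in authentication.conf generally looks like the sketch below; the stanza suffix must match the name of your SAML authSettings entry, and the group names here are placeholders:

# authentication.conf (group names are placeholders)
[roleMap_SAML]
admin = SAML-Admins
user = SAML-Users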
Please share the screenshots and the search that you use to fill the summary index.
Hi @YuliyaVassilyev,
in Community there are many solutions to your request; see:
https://community.splunk.com/t5/Splunk-Search/How-do-I-combine-subtotals-and-totals-in-a-search-query/m-p/391298
https://community.splunk.com/t5/Splunk-Search/How-do-I-edit-my-search-to-get-both-subtotals-and-the-grand/m-p/240503
https://community.splunk.com/t5/Splunk-Search/Show-subtotals-in-results-table/m-p/102875
https://community.splunk.com/t5/Splunk-Search/How-to-add-sub-totals-to-a-table/m-p/317028
Test them.
Ciao.
Giuseppe
P.S.: Karma Points are appreciated by all the contributors.
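As a minimal illustration of the pattern those threads describe (the index, sourcetype, and field names here are hypothetical), addcoltotals can append a grand-total row under per-group subtotals:

index=web sourcetype=access_combined
| stats count by status
| addcoltotals labelfield=status label=Total

addcoltotals sums every numeric column and writes the label into labelfield, so "Total" appears as the final row of the table.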
Hi Team, to reduce the time taken to load my Splunk dashboard, I created a new summary index to collect events. The report retrieves the past 30 days of events, is scheduled to run every hour, and has summary indexing enabled, as per the documentation: Create a summary index in Splunk Web - Splunk Documentation. However, while checking the index, I can see that data ingestion is not taking place as per the scheduled report. Please find the attached screenshots for additional reference. Looking forward to a workaround or a solution.
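Two quick checks that often narrow this down (the report name below is a placeholder): first confirm the scheduler is actually running the report, then look for the summary events by their source, which defaults to the name of the saved search:

index=_internal sourcetype=scheduler savedsearch_name="My Summary Report"
| stats count by status

index=summary source="My Summary Report" earliest=-24h

If the scheduler search shows status=skipped, the report is not running at all; if it shows success but the second search is empty, the problem is in the summary-indexing settings of the report.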
Hi, normally Splunk's way of working is different from what you have in procedural languages. It helps us to help you if we can see your real use case and sample events. If you really need this kind of functionality, you could look at the map command to achieve it. Anyway, it has some restrictions which could create additional challenges for you. r. Ismo
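For reference, a minimal map sketch (the index and field names are made up): map runs the inner search once per incoming result, substituting $order_id$ each time, which is also why it is slow and capped by maxsearches:

index=web sourcetype=orders
| fields order_id
| map search="search index=fulfilment order_id=$order_id$" maxsearches=10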
Hi @matthewroberson,
ES 8.x is not yet available to download for on-prem. Some cloud customers are on ES 8.0. Hopefully it will be available for on-prem download from Splunkbase in the next couple of weeks.
What is the correlation that joins the two datasets together? I.e. in the second index, where you want field4, how does it know which event in the second dataset correlates with which event in the first index? Generally the solution is to search both datasets and then combine the two with some common correlation element using stats, as in the sketch below. Can you be a bit more specific and give a more detailed example?
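As a generic sketch of that stats pattern (the index names and the shared id field are placeholders):

(index=first_index) OR (index=second_index)
| stats values(field1) as field1 values(field2) as field2 values(field3) as field3 values(field4) as field4 by id
| table id field1 field2 field3 field4

Events from either index that share the same id collapse into one row, so field4 from the second index lands next to field1-field3 from the first.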
@Anubis wrote:
1. how many events in base search: 1.6 million

This is your problem. Although the limit is not specific to Dashboard Studio (which you seem to be using, as you talk about chained searches), a base search retains at most 500,000 events. What you are intending to do, i.e. post-filter a base search, is indeed logical, but there is no way you can manage 1.6 million events in a base search. See this link for a discussion on base searches (a chained search is what is referred to as a post-processing search):
https://docs.splunk.com/Documentation/Splunk/9.3.2/Viz/Savedsearches
with particular reference to "Event retention" and "Limit base search results and post-process complexity". And do not think about increasing limits.conf; that will not make things better.
You can still bump the post-processing/chained search out of the base, but you need to consider each use case of your base search to work out how that post-filtering can work. If your panel searches are all doing things like stats, then move the stats down to the base search. You can always do something like

| stats count by a b c d

and if you only want a count by c, you can then do this in your panel search:

| stats sum(count) as count by c
Hello, thanks for the response. I forgot to include the 'by device_name' in my post. Sorry about that.
1. How many events in the base search: 1.6 million.
2. I used the tokens in the chained search so as not to call the index every time a token is changed. Seemed logical.
3. Putting head in front of table is better. Honest mistake.
So I have search A, which creates a variable from the search results (variableA). I need to search another index using variableA in the source, and I want to append one column from the second search into a table with the results from the first, like this:

index=blah source=blah
| rex variableA=blah, field1=blah, field2=blah, field3=blah

index=blah source=$variableA$
| rex field4=blah

table field1, field2, field3, field4

Not sure how this gets done?
This is not really what chained searches are good for - what you are doing is basically loading the entire dataset into memory in your index= base search. How many events do you have - there is a limit to the number of results a base search can retain. For this type of usage, you will often make the search slower, because all the processing is done on the search head rather than using the benefit of search distribution. The issue you are seeing, I suspect, is related to the data volumes you are trying to manage through your base search. Depending on what other searches you have, your search chain might be better expressed like this:

Base search

index=yum sourcetype=woohoo earliest=-12h@h
| stats sum(field3) as field3 by field1 field2

Chain search

| search field1="$field1_tok$" AND field2="$field2_tok$"

Panel

Your original search makes no sense, because you are doing a stats command and then trying to use device_name, which does not exist after the stats. Also, you only have a single row after the stats, so the sort and head are pointless...

| sort - field3
| head 10
| table device_name field3

Note: you should ALWAYS put your transforming commands as late as possible in any pipeline - i.e. it is better to put the head BEFORE the table, so in a normal search you would only get 10 results from the indexer to the search head rather than sending all results and discarding all but 10. The above will depend on what other usage you are making of your base search, but please come back with more details if you still need advice.
I have a search on my dashboard that takes ~20 seconds to complete. This search is a member of a chain.

Base:

index=yum sourcetype=woohoo earliest=-12h@h
| table device_name field1 field2 field3

Chained search:

| search field1="$field1_tok$" AND field2="$field2_tok$"

Panel:

| stats sum(field3) as field3 by device_name
| sort - field3
| table device_name field3
| head 10

Everything works fine, but when I change tokens the panel loads a cached version of the table with incorrect values. 5 to 10 seconds later the panel updates with the correct values, but without any indication. So, is there a setting to turn off these cached results?