All Posts


I'd like to monitor log files and ingest only specific lines from them. My props.conf and transforms.conf have no errors, but for some reason the props.conf is not being applied and, instead of indexing the specific lines, Splunk is indexing the whole log. Is there a specific path where the .conf files must be placed, or is there another solution?
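For illustration, here is a minimal sketch of the kind of props.conf / transforms.conf filtering the question describes, assuming a hypothetical sourcetype called my_sourcetype and a keep-pattern of "ERROR" (both are placeholders, not details from the post). For this to take effect, the files must sit on the first full Splunk instance that parses the data (indexer or heavy forwarder), for example in $SPLUNK_HOME/etc/system/local/ or an app's local/ directory, and the instance must be restarted afterwards.

# props.conf -- apply the filtering transforms, in order, to the assumed sourcetype
[my_sourcetype]
TRANSFORMS-filter = drop_all_lines, keep_wanted_lines

# transforms.conf -- send everything to the nullQueue first,
# then route the lines you actually want back to the indexQueue
[drop_all_lines]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_wanted_lines]
REGEX = ERROR
DEST_KEY = queue
FORMAT = indexQueue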
We had a Nessus scan, but the Nessus configuration was not completed in the Tenable add-on on the Splunk side. As a result we missed the scan and the data was not onboarded to Splunk. We now want to retrieve that data, so how should we get it back?
Hello everyone, We are currently running Splunk Enterprise version 9.0.6 on a Windows Server 2016 machine as part of a distributed Splunk environment. Due to compliance requirements, we need to upgrade to at least version 9.1.4. However, Splunk Enterprise 9.1.4 officially lists Windows Server 2019 as a prerequisite. I have tested the upgrade in our lab environment on Windows Server 2016, and it appears to work without any immediate issues. Despite this, I am concerned about potential unforeseen impacts or compatibility problems since the official documentation recommends Windows Server 2019. Additionally, our OS team has advised that upgrading the OS from Windows Server 2016 to 2019 could potentially corrupt the servers, necessitating a rebuild. My boss is understandably reluctant to take this risk, especially since the current server is planned for retirement by the end of this year. Has anyone else performed a similar upgrade on Windows Server 2016 within a distributed Splunk environment? Are there any known issues or potential risks we should be aware of? Any insights or experiences would be greatly appreciated.
Hello @kackerman7, I'm sharing details of a POC I worked on a few years ago. If you are using a client-server architecture for your external React application, I suggest reviewing my architecture, which integrates easily with your existing setup. Go through it, try it in your local lab, and let me know if you need more help. I hope this helps. Thanks, KV. An upvote would be appreciated if any of my replies help you solve the problem or gain knowledge.
You could use a KVStore with fields "received_date", "file_date", and "company_id". See https://docs.splunk.com/Documentation/SplunkCloud/latest/Knowledge/ConfigureKVstorelookups

Once your KVStore lookup is defined, you could use it like this:

index=wealth
| search transform-file
| search ace_message
| rex field=_raw "inputFileName: (?<inputFileName>.*?),"
| rex field=_raw "outputFileName: (?<outputFileName>.*?),"
| rex field=inputFileName "file\_\d+\_(?<CompanyId>\d+)\_"
| rex field=inputFileName "file\_(?<Date>\d+)\_"
| table inputFileName, outputFileName, CompanyId, Date
| lookup received_files_lookup file_date as Date, company_id as CompanyId
| where received_date>(now()-(60*60*24*30))

Your alert can trigger if this search returns any rows of data. You will also need a corresponding mechanism to store any new files in the KVStore:

index=wealth
| search transform-file
| search ace_message
| rex field=_raw "inputFileName: (?<inputFileName>.*?),"
| rex field=inputFileName "file\_\d+\_(?<company_id>\d+)\_"
| rex field=inputFileName "file\_(?<file_date>\d+)\_"
| table company_id, file_date
| eval received_date=now()
| outputlookup received_files_lookup append=true
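For reference, a minimal sketch of how the received_files_lookup used above might be defined; the collection name, field types, and app placement are assumptions rather than details stated in the answer itself.

# collections.conf -- hypothetical KVStore collection backing the lookup
[received_files]
field.received_date = number
field.file_date = string
field.company_id = string

# transforms.conf -- lookup definition pointing at that collection
[received_files_lookup]
external_type = kvstore
collection = received_files
fields_list = _key, received_date, file_date, company_id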
It's okay to repeat yourself. Your comment suggests you may not understand, and that's okay too. To give you a hint, RPMs are - as we know - signed manifests and content, which includes overlay files and scripts. The format allows for very detailed specification of what's required and all its dependencies. Yum will take those requirement specs and, since we know it's identical, repeatedly install exactly what we require, over and over again. It's consistent, and in a verifiable way.

Not there yet? OK. You should know: this idea that YUM === "blindly installing anything in prod without assessment and no other workflow is possible" is verrrrrry nai--uh, simplistic. It's possible, sure; same as without it. Every tool can be used poorly. But using it properly really opens up some adequate features. And we'd like Splunk to be adequate. Here's the water, if it wants to drink.

I *do* install a lot of things automatically. When working on the largest single-owner intranet in the world, careful automation helps. When I promote a version of software, I know it's going to get installed on all my hosts exactly as I want by specifying a nevra. This has been possible-- no, scratch that. This has been reliably consistent in a verifiable way with an excellent (simulated) rollback mechanism for 25+ years.

People born AFTER this was a proven feature have learned to crawl, walk, run, add, multiply, converse, demonstrate, compete, learn, love, graduate and excel in a field; all in that time. People born after this feature was a feature could have learned this feature while looking after their own newborn children. EVERY competitor to Splunk figured it out in that time. Splunk has a willing army of volunteers who'd love to show them, I'm sure, but who also remain a valuable resource completely untapped. I hope Splunkisco can learn more about it and catch up to 1999. But look at the time: it's almost 5 months to the 13th birthday. See ya there!
I issued the following command on the client server, heading for the VIP:

$ ssh splunk@10.1.2.10
#######################################
###            pspfwd01            ###
#######################################
******************************************************************
WARNING: To protect the system from unauthorized use and to ensure
that the system is functioning properly, activities on this system
are monitored and recorded and subject to audit. Use of this system
is expressed consent to such monitoring and recording. Any
unauthorized access or use of this Automated Information System is
prohibited and could be subject to criminal and civil penalties.
******************************************************************
splunk@10.1.2.10's password:

Heavy forwarder 1 [pspfwd01 -- 10.1.2.11] replied...
There are two heavy forwarders at our site. The current setup is that there is a VIP defined for client server access. Here is an example of the IP definitions:

heavy forwarder 1 [10.1.2.11]
heavy forwarder 2 [10.1.2.12]
VIP [10.1.2.10]

When a client server wants to forward monitoring data to Splunk, it simply points to the VIP, 10.1.2.10. However, I could not find the IP [10.1.2.10] on the client server or on either heavy forwarder by issuing the ifconfig OS command. How was the VIP defined? There is no load balancer in front of the two heavy forwarders.
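For context, a client forwarding to the VIP as described above would typically carry an outputs.conf along these lines; the receiving port (9997) and the output group name are assumptions, since the post does not state them.

# outputs.conf on the client server -- port and group name are placeholders
[tcpout]
defaultGroup = hf_vip

[tcpout:hf_vip]
server = 10.1.2.10:9997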
setting "allowRemoteLogin" in server.conf did allow default password and then I changed the password using above ./splunk edit user ... Thanks.
I'm growing a bit exasperated with an issue I'm facing while integrating Splunk with the Duo Admin API. I'm seeing the following error right from the get-go, during the initial configuration: EOF occurred in violation of protocol (_ssl.c:1106). I have not seen it before, and it's even stranger because there are no connectivity issues; a curl to my API host shows connectivity is fine, no problem there. The TLS handshake is successful. A TCP dump shows that it was able to reach Duo cloud's IP. Here's a screenshot of the error preventing me from proceeding. The error is happening at initial setup, and it's hard to determine why with no information or logs to go off of... is anyone familiar with this?
Don't pre-optimize - if you think you may have duplicates, then plan to deal with it conceptually in the search logic, i.e. how would you recognise those events as duplicates.

As it's a dashboard, you can set tokens to play with time, so you could easily set limiting tokens to control date ranges between the two indexes, i.e. if your search range is last 3 months then you can have a search, e.g.

| makeresults
| addinfo
| eval cut_off_date=strptime("2024-06-01", "%F")
``` INDEX A ```
| eval index_a_earliest = min(info_min_time, cut_off_date)
| eval index_a_latest = min(info_max_time, cut_off_date)
``` INDEX B ```
| eval index_b_earliest = max(info_min_time, cut_off_date)
| eval index_b_latest = max(info_max_time, cut_off_date)

and then set tokens in a <done> clause for these values, i.e.

<done>
  <set token="index_a_earliest">$result.index_a_earliest$</set>
  <set token="index_a_latest">$result.index_a_latest$</set>
  <set token="index_b_earliest">$result.index_b_earliest$</set>
  <set token="index_b_latest">$result.index_b_latest$</set>
</done>

and then in your searches use the tokens to define the search

(index=A earliest=$index_a_earliest$ latest=$index_a_latest$) OR (index=B earliest=$index_b_earliest$ latest=$index_b_latest$)...
Something like this should work if the timestamps are unique for each id:

index=mylogs
| sort + _time
| streamstats latest(eval(if(operation="update",value,NULL))) as Current by id
| eval STATUS=case(isnull(Current),"OK",Current=value,"OK",1=1,"FAIL")

With sample data (adjusted slightly for demo purposes and unique timestamps):

| makeresults
| eval id=124945912
| eval value="FALSE"
| eval _time=1718280482
| eval operation="get"
| append [| makeresults | eval id=124945938 | eval value="FALSE" | eval _time=1718280373 | eval operation="get"]
| append [| makeresults | eval id=124945938 | eval value="FALSE" | eval _time=1718280373 | eval operation="update"]
| append [| makeresults | eval id=124945938 | eval value="null" | eval _time=1718280363 | eval operation="get"]
| append [| makeresults | eval id=124945937 | eval value="FALSE" | eval _time=1718280350 | eval operation="get"]
| append [| makeresults | eval id=124945937 | eval value="TRUE" | eval _time=1718280349 | eval operation="update"]
| append [| makeresults | eval id=124945937 | eval value="FALSE" | eval _time=1718280348 | eval operation="update"]
| append [| makeresults | eval id=124945937 | eval value="null" | eval _time=1718280337 | eval operation="get"]
| append [| makeresults | eval id=124945936 | eval value="FALSE" | eval _time=1718280331 | eval operation="get"]
| append [| makeresults | eval id=124945936 | eval value="FALSE" | eval _time=1718280330 | eval operation="update"]
| sort + _time
| streamstats latest(eval(if(operation="update",value,NULL))) as Current by id
| eval STATUS=case(isnull(Current),"OK",Current=value,"OK",1=1,"FAIL")
It seems like as soon as you add the key_field argument the append=false option is ignored (despite what the documentation says).

In my case I was trying to overwrite the collection by using this:

| outputlookup append=false key_field=host_id <kv_lookup_ref>

I overcame the problem by using the following approach:

| rename host_id as _key
| outputlookup <kv_lookup_ref>

This overwrote the collection successfully whilst still using my desired _key field (host_id) rather than system generated _key values.
So, my question about what you have in your real search before eventstats is significant because ALL the data you have in the search up to eventstats will travel to the search head. Using the fields statement will remove fields you don't want from the data sent to the SH. If you have a table statement before the eventstats, then that is also a transforming command, so it will cause the data to go to the SH - for efficiency you want to keep as much of the search on the indexers and only go to the SH with the minimum amount of data you actually need. Can you post the full search? Your 3rd eventstats is splitting by servergroup, which is now a multivalue field.

As for creating the lookup, from your examples, I surmise that if "name" is titled "LoadBalancer-XXX" then it is a load balancer, so collect all network names for all load balancers into a lookup, e.g.

| makeresults format=csv data="ip,name,network,
192.168.1.1,LoadBalancer-A,Loadbalancer-to-Server
172.168.1.1,LoadBalancer-A,Firewall-to-Loadbalancer
172.168.1.2,LoadBalancer-B,Loadbalancer-to-Server
192.168.1.6,server-A,Loadbalancer-to-Server
192.168.1.7,server-A,Loadbalancer-to-Server
192.168.1.8,server-B,Loadbalancer-to-Server
192.168.1.9,server-C,network-1
192.168.1.9,server-D,network-2"
| search network="Firewall-to-Loadbalancer" OR name="LoadBalancer-*"
| stats values(network) as network by name
| eval behindfirewall = if(match(network,"Firewall-to-Loadbalancer"),"1","0")
| outputlookup output_format=splunk_mv_csv firewall.csv

Then do

| lookup firewall.csv network OUTPUT behindfirewall

Not sure if that will do what you want, but maybe it gives you some ideas - I don't know your data well enough to know what's what.
I'm a little unclear on your requirement, but your working eventstats example gives you the "Expected result" of

grade   name          student
A       student-1-a   student-1-a
                      student-1-b
                      student-1-c
A       student-1-b   student-1-a
                      student-1-b
                      student-1-c
...

so you want all values of student-X-Y to be included for each combination of student-X-Y? In that case, you don't need the match statement, so what is the issue?

Depending on the data volume, eventstats can be slower, so you could use this variant

...
| eval partialname=substr(name,0,9)
| stats values(name) as student by grade partialname
| eval name=student
| mvexpand name

that uses stats, which will be more efficient than eventstats, but then mvexpand will be slower - you can measure the performance if volume is an issue.
Search-time extractions are preferred over index-time extractions because they use less storage (none) and don't slow down indexing. You can extract fields automatically at search time by adding EXTRACT settings to the sourcetype's props.conf stanza.

[xmlwineventlog]
EXTRACT-s_p_n = (server_principal_name:(?<server_principal_name>\S+)) in EventData_Xml
EXTRACT-s_i_n = (server_instance_name:(?<server_instance_name>\S+)) in EventData_Xml
EXTRACT-a_i = (action_id:(?<action_id>\S+)) in EventData_Xml
EXTRACT-succeeded = (succeeded:(?<succeeded>\S+)) in EventData_Xml
Try this

| rex field=message "reqPath\\\":\\\".*/(?<reqPath>\w+)"

where the .* is a greedy capture up to the final / character
Good luck!
Take a look at https://ideas.splunk.com/

However, I suspect you will not get any traction with that; your example defines colour based on index and sourcetype rather than Splunk deciding on the colour to use, so I am not sure I understand your original distinction between pleasant and unpleasant results and how that is defined.

Anyway, have you looked at event types, where you can define colours for events?
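A minimal sketch of colouring by event type in eventtypes.conf; the event type names, searches, and colour choices below are purely illustrative, not taken from the original post.

# eventtypes.conf -- hypothetical event types with display colours
[pleasant_results]
search = index=main sourcetype=access_combined status=200
color = et_green

[unpleasant_results]
search = index=main sourcetype=access_combined status>=500
color = et_red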
Yes, you can automate Splunk upgrades. Many customers do so using a variety of tools. There's nothing special needed; just teach your automation to perform the same steps you would do manually. Those steps are documented at https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller