All Posts



Hello all, hoping someone can help me. We are setting up IAM User Keys that are supposed to rotate on a monthly basis. We use those keys to send email from AppDynamics. I can connect to the SMTP server just fine. What I need to find out is where this information is stored, so that I can create a script that will update it when the keys get rotated. Is it in the database, and if so, which table? Or if it's in a file, which file? Thanks for any and all help!
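Where AppDynamics stores the SMTP settings is version-dependent, so I can't answer that part, but for the rotation script itself: if those IAM keys are used against Amazon SES SMTP (an assumption on my part), note that the SMTP password is not the secret access key itself. It is derived from the secret key with AWS's documented SigV4-based algorithm, so your script has to recompute it after each rotation. A Python sketch (region and key values are placeholders):

```python
import base64
import hashlib
import hmac

def ses_smtp_password(secret_access_key: str, region: str) -> str:
    """Derive the Amazon SES SMTP password from an IAM secret access key.

    Follows AWS's documented derivation; the SMTP username is simply
    the IAM access key ID and needs no conversion.
    """
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    # Fixed inputs defined by the AWS derivation algorithm
    k_date = sign(("AWS4" + secret_access_key).encode("utf-8"), "11111111")
    k_region = sign(k_date, region)
    k_service = sign(k_region, "ses")
    k_terminal = sign(k_service, "aws4_request")
    signature = sign(k_terminal, "SendRawEmail")
    # Prepend the version byte 0x04, then base64-encode
    return base64.b64encode(b"\x04" + signature).decode("utf-8")

# Placeholder credentials for illustration only
print(ses_smtp_password("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "us-east-1"))
```

The rotation script would then write the access key ID and this derived password into wherever AppDynamics keeps its SMTP credentials, once you have located that.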
Hi @splunklearner , yes sorry, it was a typo! I don't know exactly the differences between Splunk Cloud and Splunk on AWS; they are probably very similar because the infrastructure is the same and the product is the same, the only difference is who manages it. If you want an on-premise solution, consider Splunk on-premise; if you want a cloud solution, consider Splunk Cloud. Ciao. Giuseppe
Adding to @ITWhisperer 's question - remember that if you're detecting downtime as a lack of events, then you either cannot detect downtime longer than your search window at all (if you're not comparing your results to a list of expected values), or at least cannot determine its real length beyond your search window.
That thread linked by @gcusello is relatively old but quite valid. Generally speaking - in terms of the basic user's experience, they are pretty similar and you could have a difficult time telling one from the other. The difference is who is responsible for the infrastructure and who does the "low-level" stuff on the environment (and what you can do there). Because obviously you don't have direct access to the underlying servers in Splunk Cloud. Some of the settings you can normally adjust from the CLI can only be manipulated via apps uploaded to the Cloud (and remember that your private apps go through the vetting process, so you can't just throw anything in them). Some settings may only be set by support. Some cannot be changed. But you don't need to worry about mundane stuff like backups. If you set up your Splunk environment in AWS (I assume that's what you mean by AWS Splunk), it's exactly like an on-prem Splunk Enterprise installation but without having to maintain the hardware.
Hi @gcusello , I expect it is not QWS but AWS (correct me if I am wrong). Can you please illustrate more about Splunk Cloud vs AWS Splunk?
Hi @splunklearner , Splunk on premise is installed on your own infrastructure. Splunk Cloud is a service that is managed by Splunk itself; it's located on QWS infrastructure but that's transparent to you. Splunk on QWS is a service from AWS; it is similar to Splunk on premise but installed on a private cloud on AWS. You can find comparative analyses between On-Premise and Cloud at: https://community.splunk.com/t5/Splunk-Enterprise/Main-differences-between-Splunk-Enterprise-and-Splunk-Cloud/m-p/218797 https://www.conducivesi.com/about-splunk/splunk-enterprise-vs-splunk-cloud https://www.gartner.com/reviews/market/security-information-event-management/compare/product/splunk-cloud-vs-splunk-enterprise Ciao. Giuseppe
I am pretty new to Splunk. What is the difference between Splunk on premises vs Splunk cloud vs AWS splunk? Please enlighten me.
How (in non-SPL terms) do you determine what the downtime for a component is?
Hi @PickleRick , Thanks for the response. I agree that usually the web service would be disabled, but we keep the UI so that we can see the changes. I managed to clean the indexer completely of all the configurations, then recreated it from backup and it worked. Thanks, Pravin
Please update your subject to be more descriptive of the question you would like help with. We are volunteers here and would prefer to spend our time working on issues we can help with, so being more descriptive will allow us to focus our time and energy, and potentially get you a quicker and more accurate response: win/win!
Splunk claims this was fixed in 9.2.2, and it is listed in the "fixed issues" for this version. I wish I could confirm, but as of 9.2.2 my DS struggles to render any page in "Forwarder Management". Support has been struggling to determine the cause for 3+ months now.
We have been able to validate that the issue was with the TIBCO File Watcher process locking the file until it completed writing it to disk; therefore, the Splunk UF could not open/read the file to ingest it. I wanted to check with the TIBCO people whether there was a way to change the permissions with which the File Watcher process opened the file (to ensure it had FILE_SHARE_READ), but they suggested a simpler and just-as-effective solution. TIBCO will create the files, initially, as ".tmp" files, so they won't match the name pattern on the monitor stanza. When the process of writing to disk has completed, TIBCO will drop the ".tmp" so the files match the monitor stanza. That way, Splunk will only try to ingest the files that have been fully written to disk and, therefore, are not locked.
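For anyone hitting the same issue, the approach above pairs naturally with a monitor stanza scoped to the final file names. A hedged inputs.conf sketch (path, sourcetype, and index are placeholders, not the actual configuration from this environment):

```
# inputs.conf on the UF -- placeholders throughout
[monitor://D:\tibco\output\*.xml]
# TIBCO writes files as *.tmp first; they only match this stanza after the
# rename, so Splunk never opens a file mid-write. The blacklist is a
# belt-and-braces guard in case the wildcard is ever widened.
blacklist = \.tmp$
sourcetype = tibco:file
index = tibco
```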
Dear Splunkers, I would like to ask for your support in adapting my search query to return results when downtime spans a specific time window, e.g. 3 consecutive days. My search query is the following:

| table _time, status, component_hostname, uptime
| sort by _time asc
| streamstats last(status) AS status by component_hostname
| sort by _time asc
| reverse
| delta uptime AS Duration
| reverse
| eval Duration=abs(round(Duration/60,4))
| search uptime=0

Like this I was able to identify components with uptime=0. Now I would like to extend my query to display results when a specific component has had uptime=0 for several consecutive days, e.g. 2 or 3 days. Thank you
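In non-SPL terms, the "down for N consecutive days" logic could be sketched in Python. This is only an illustration of the algorithm, assuming one uptime reading per component per poll (the field names, daily polling, and sample data are mine, not from the actual environment):

```python
from datetime import datetime, timedelta

def long_downtimes(events, min_days=3):
    """Return {component: longest downtime} for components whose uptime
    stayed at 0 for at least min_days. events: (timestamp, component, uptime)."""
    run_start = {}   # component -> timestamp where the current uptime==0 run began
    result = {}
    for ts, comp, uptime in sorted(events):
        if uptime == 0:
            run_start.setdefault(comp, ts)
            span = ts - run_start[comp]
            if span >= timedelta(days=min_days):
                result[comp] = max(result.get(comp, timedelta(0)), span)
        else:
            run_start.pop(comp, None)  # component came back up; reset the run
    return result

# Hypothetical sample: hostA down across 4 daily polls, hostB recovers after a day
t0 = datetime(2024, 1, 1)
events = [(t0 + timedelta(days=d), "hostA", 0) for d in range(4)]
events += [(t0 + timedelta(days=1), "hostB", 0), (t0 + timedelta(days=2), "hostB", 120)]
print(long_downtimes(events))  # only hostA qualifies, with a 3-day span
```

In SPL the equivalent idea is usually done with streamstats tracking the start of each uptime=0 run per component_hostname and comparing the elapsed time against the threshold.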
Hi @splunklearner , you could (it's not mandatory) put props.conf and transforms.conf on the UFs, and I suggest doing this, also because these files are usually in standard add-ons. Then you have to put them on the Search Heads and on the Indexers. I suppose you are speaking about the F5 WAF Security add-on; did you read the documentation at https://splunkbase.splunk.com/app/2873 ? Ciao. Giuseppe
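To make the placement concrete, here is a hedged sketch (the sourcetype and transform names are placeholders, not taken from the actual add-on). Since there is no HF in the path, index-time rules run on the indexers, while search-time extractions belong on the search heads:

```
# props.conf -- search heads (search-time key=value extraction)
[f5:waf:syslog]
KV_MODE = auto

# props.conf -- indexers (index-time parsing happens here, before data is written)
[f5:waf:syslog]
TRANSFORMS-f5_route = f5_set_index

# transforms.conf -- indexers (hypothetical index-routing transform)
[f5_set_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = f5_waf
```

The general rule: props.conf on the indexer does work, because the indexer parses the data before indexing it; only the first "heavy" Splunk instance in the path (indexer or HF) applies index-time settings, while the UF mostly just forwards.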
I have been deployed to a new project in Splunk. We have logs coming from F5 WAF devices sent to our syslog server. Then we will install a UF on our syslog server and forward the data to our indexer. Syslog --- UF --- Indexer. And we have a few on-premise servers and a few AWS EC2 instances. Can someone explain this project to me in more depth? There is no HF in our environment as of now. So where should we write props.conf and transforms.conf? On the indexer or the UF? If we write them on the indexer, will they work, given that indexing happens there? Will props.conf take effect before the data is indexed on the indexer?
Hi @PickleRick , you said in a perfect way what I tried to explain: on the DC there are the connection events (e.g. 4624 or 4634 etc...) but not the local events from the clients. For this reason I suggested installing the UF also on the clients and not only on the DC. Ciao and thanks for the details. Giuseppe
Thanks - this is definitely helping a lot. I would love to join the tables in the results. And what I also noticed is that the description isn't always exactly "Leaver Request for", which is why I added affect_dest="STL Leaver", which checks just for leaver tickets. A sample result row:

identity: nsurname
email: name.surname@domain.com
extensionattribute10: nsurnameT1@domain.com
extensionattribute11: name.surname@consultant.com
first: name
last: surname
_time: 2024-10-31 09:46:55
affect_dest: STL Leaver
active: true
description: Leaver Request for Name Surname - 31/10/2024
dv_state: active
number: INC01
https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Propsconf#Structured_Data_Header_Extraction_and_configuration Start here and see what you can find; otherwise please provide your props.conf configuration if possible so we can actually see what is being attempted vs an example of the actual output. A sample of the log helps when deciphering how your existing props.conf is interacting with the data.
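As a starting point, a hedged props.conf sketch for structured data with a header line (sourcetype name and values are placeholders). Note that structured-data parsing happens on the forwarder, so a stanza like this must be deployed to the UF, not only to the indexers:

```
# props.conf -- on the UF for INDEXED_EXTRACTIONS sourcetypes
[my_csv_data]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = timestamp
```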
My team has a setup with correlation_search_1 and service1 creating notable events that use the notable event aggregation policy policy1. Now I have made additions: correlation_search2, service2 and policy2. But when I went to the episode review window, I found that notable event episodes from search2 are still using policy1. How do I get this set of episodes to follow policy2 without disturbing the previous setup following policy1? I can't find any setting that allows me to do so; please help me find it if it exists.
We are trying to onboard data from F5 WAF devices to our Splunk. The F5 team sends the data as key-value pairs, and one of the fields is "headers:xxxxxxxxx" (nearly 40 words). When the data is onboarded and we check in Splunk Web, in the table view the headers field is not captured correctly; it shows some other value. The same happens with another field, whose value is getting truncated. Please help me with this case.
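If long field values are being cut off, one common cause is the default 10,000-byte event truncation limit; another is bad line breaking splitting one logical event into several. A hedged props.conf sketch (the sourcetype name and values are placeholders; adjust for the actual data):

```
# props.conf -- on the indexers (or wherever parsing happens for this sourcetype)
[f5:waf]
# Default is 10000 bytes; raise it if events with long header fields are cut off
TRUNCATE = 100000
# If events are also merging or splitting incorrectly, explicit line breaking helps:
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```

Checking the raw event (`index=... | table _raw`) against the original syslog line should show whether the value is lost at ingest (truncation/line breaking) or only mis-parsed at search time (field extraction).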