All Posts


Hi @Siddharthnegi , Can you execute queries in DB Connect? If yes, check how you configured the Splunk options (index, sourcetype, source, and host). If not, you have to check the Connection, re-entering the connection details. Ciao. Giuseppe
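For reference, I believe the Splunk options of a DB Connect input end up in db_inputs.conf; a minimal sketch (the input, connection, index, and table names here are hypothetical):

    # $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/db_inputs.conf
    [my_db_input]               # hypothetical input name
    connection = my_connection  # must match a working connection in db_connections.conf
    index = my_index
    sourcetype = my_sourcetype
    mode = batch
    query = SELECT * FROM my_table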
I have an index receiving data from DB Connect, but it is showing NO EVENTS. It shows the error "Invalid database connection", yet everything is fine on the database side.
@richgalloway , Thank you for your inputs.

When installing the `splunkclouduf` app via the GUI, will it prompt for a username and password during installation, or will it proceed directly without requiring authentication? Since we haven’t previously installed the `splunkclouduf` app through the GUI, I’m curious to know what to expect.

If installing by logging into the server directly, where should we place the `splunkclouduf` app: in the `/opt/splunk/etc/apps` or the `/opt/splunk/etc/deployment-apps` directory? After placing it in the appropriate directory, I assume we need to navigate to `/opt/splunk/bin` and execute the necessary command to complete the installation. Please confirm.

Also, regarding ports: we know that 8000, 8089, and 9997 need to be open from our on-prem server. If there are any additional ports required, please let me know.
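For what it's worth, here is my rough sketch of the CLI route (the .spl path below is hypothetical; as far as I know, `splunk install app` prompts for local admin credentials unless you pass `-auth`):

    /opt/splunk/bin/splunk install app /tmp/splunkclouduf.spl
    # or non-interactively (credentials are for the local Splunk instance):
    /opt/splunk/bin/splunk install app /tmp/splunkclouduf.spl -auth admin:changeme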
No. I don't mean searching for the logs from the forwarder; those you won't find, that's obvious. You need to look into the _internal index for events from your receiving indexer(s) or HF(s), depending on what your infrastructure looks like around that disconnecting forwarder.
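For example, something along these lines on the receiving side should show connection events for that forwarder (the hostname filter is a placeholder):

    index=_internal sourcetype=splunkd component=TcpInputProc "your-forwarder-host"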
Hi @Vnarunart , yes, you can clone the old HF to a new one, but remember to also change the hostname in $SPLUNK_HOME/etc/system/local/server.conf and $SPLUNK_HOME/etc/system/local/inputs.conf. Anyway, since you have a Deployment Server, you could create a new Splunk installation and manage both HFs with the DS, deploying the same apps. Ciao. Giuseppe
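Concretely, the two settings to change would look something like this (the hostname is a placeholder):

    # $SPLUNK_HOME/etc/system/local/server.conf
    [general]
    serverName = new-hf-hostname

    # $SPLUNK_HOME/etc/system/local/inputs.conf
    [default]
    host = new-hf-hostname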
Again - it depends whether by "migrate" you mean just replacing the box and leaving everything as it was before (IP, name, storage layout), or whether you are planning any changes. Do you want to stay with the same underlying OS, or do you plan to migrate, for example, from Debian to RHEL? How was your system installed? A dpkg/rpm package? A simple unpack from a tgz? A docker container?
Hi Splunk Community,

I’ve set up Azure Firewall logging, selecting all firewall logs and archiving them to a storage account (Event Hub was avoided due to cost concerns). The configuration steps taken are as follows:

Log Archival: All Azure Firewall logs are set to archive in a storage account.

Microsoft Cloud Add-On: I added the storage account to the Microsoft Cloud Add-On using the secret key with the following permissions:

Input/Action: Azure Storage Table, Azure Storage Blob
API Permissions: N/A
Role (IAM): N/A
Access: Access key OR Shared Access Signature:
  - Allowed services: Blob, Table
  - Allowed resource types: Service, Container, Object
  - Allowed permissions: Read, List
Default Sourcetype(s) / Sources: mscs:storage:blob (received this), mscs:storage:blob:json, mscs:storage:blob:xml, mscs:storage:table

We are receiving events from the source files in JSON format, but there are two issues:

1. Field Extraction: Critical fields such as protocol, action, source, destination, etc., are not being identified.
2. Incomplete Logs: Logs appear truncated, starting with partial data (e.g., “urceID:…”) and missing “Reso”, which implies dropped or incomplete events (as far as I understand). Few logs were received compared to the traffic on Azure Firewall.

Attached is a piece of the logs showing the errors mentioned above.
________________________________________________________________
Environment Details:
• Log Collector: Heavy Forwarder (HF) hosted in Azure.
• Data Flow: Logs are being forwarded to Splunk Cloud.

Questions:
1. Can it be an issue with using storage accounts and not Event Hub?
2. Could the incomplete logs be due to a configuration issue with the Microsoft Cloud Add-On, or possibly related to the data transfer between the storage account and Splunk?
3. Has anyone encountered similar issues with field extraction from Azure Firewall JSON logs?

Ultimate Goal: Receive Azure Firewall logs with fields extracted like any other firewall logs received by syslog (Fortinet, for example).

Any guidance or troubleshooting suggestions would be much appreciated!
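For context, here is the kind of props.conf adjustment I imagine might be involved (a rough sketch only; the sourcetype is the one we received, and the values are my guesses, not a confirmed fix):

    # props.conf (sketch, not a confirmed fix)
    [mscs:storage:blob]
    KV_MODE = json          # search-time JSON field extraction
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TRUNCATE = 0            # avoid cutting long JSON events (0 = no limit)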
@PickleRick For example, when I search on the Splunk indexer, I am not able to see the host, index, and source for the Windows server at that specific time.
Hi @PickleRick , first we are planning to migrate the data from the existing server to a new server, and afterwards do the Splunk upgrade. So here I wanted to know the steps for migrating data from my existing server to the new server. Within the same server we know how, but since this is a new server, I am asking how to migrate the data. Second, my team will handle the installation and upgrade; here I only need to know how to migrate my data. So kindly please help with this.
I would like to seek advice from experienced professionals. I want to add another heavy forwarder to my environment as a backup in case the primary one fails (on a different network, and not necessarily active-active).

* I have Splunk Cloud, 1 Heavy Forwarder, and 1 Deployment Server on premise.

1. If I copy a heavy forwarder (VM) from one vCenter to another, change the IP, and generate new credentials from Splunk Cloud, will it work immediately? (I want to preserve my existing configurations.)
2. I have a deployment server. Can I use it to configure two heavy forwarders? If so, what would be the implications? (Would there be data duplication, or is there a way to prioritize data?) Or is there a better way I should do this? Please advise; a sketch of what I have in mind for question 2 is below.
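For illustration, I assume a serverclass.conf on the DS along these lines could target both HFs (hostnames and app name are hypothetical):

    # serverclass.conf on the deployment server
    [serverClass:heavy_forwarders]
    whitelist.0 = hf-primary.example.com
    whitelist.1 = hf-backup.example.com

    [serverClass:heavy_forwarders:app:shared_hf_config]
    restartSplunkd = true
    stateOnClient = enabled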
This issue comes from a distributed environment where your search head is separated from the indexers. To solve it, you will need to create a "dummy index" on your search head with the same name as the one on your indexer which you want to write the message into. This solved it for me.

Source: https://community.splunk.com/t5/Alerting/Alerts-triggered-actions-log-events/m-p/693487
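A minimal sketch of such a dummy index on the search head (the index name is a placeholder; it only needs to match the real index on the indexers):

    # indexes.conf on the search head
    [my_alert_index]
    homePath   = $SPLUNK_DB/my_alert_index/db
    coldPath   = $SPLUNK_DB/my_alert_index/colddb
    thawedPath = $SPLUNK_DB/my_alert_index/thaweddb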
That might indicate issues with the receiving indexer. Check its logs and health.
@PickleRick The error is something like "Read error. An existing connection was forcibly closed by the remote host."
Hi, I found the below information on the community page; however, I am a bit confused about the step-by-step procedure.

Link to the Splunk community post: https://community.splunk.com/t5/Getting-Data-In/Reusable-Script-How-to-Reset-All-Tokens-with-a-Single-Click/td-p/472141

Thanks
1. I'm not sure what you mean by "DR servers" here, since in the main environment you have four indexers and you have only one "DR indexer".
2. The search head must be able to contact the CM, the indexers, and the LM (there can be additional requirements if you're using Stream, but I'm assuming you aren't). So you should simply install a new SH, replicate (most of) the configuration and state (including kvstore contents) from the existing SH, and you should be ready to go. Just tell people to use the new address, or update the DNS entry to point to the new SH. Remember to adjust your network settings (firewall holes) for the new SH, and check whether you have any IP-based or certificate-based restrictions on your indexer tier.
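For the kvstore contents, a sketch of the built-in backup/restore route (the archive name is arbitrary):

    # on the old SH
    $SPLUNK_HOME/bin/splunk backup kvstore -archiveName sh_migration
    # copy the archive to the new SH, then:
    $SPLUNK_HOME/bin/splunk restore kvstore -archiveName sh_migration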
Strictly technically speaking, you can configure almost any role on any server. But not every such deployment is considered good practice. Especially in a bigger deployment, the CM is already a relatively "well-stressed" member of your environment (it has enough to do on its own without adding additional roles), so while you can do that, you should rather find another component on which to "colocate" the LM role. Anyway, the "connection timeout" messages typically indicate network-level issues: somewhere the traffic is getting filtered on a firewall (or you have routing problems).
1. If you're migrating into another environment, there will be issues. You can't completely seamlessly move from one point to another without anyone noticing. Even if you have some form of HTTP LB in front of your SH(s) so that you can simply point it to another backend, you are bound to at least break existing browsing sessions, there will be issues with replicating last-minute changes between those environments, and so on.
2. There is no way to give precise, step-by-step, fool-proof instructions which can just be executed without knowing what you're doing.
3. You're talking about migrating just a search head, but you're posting in the Splunk Cloud section of the forum, so it's not clear what you actually want to do.

Overall, as usual with more complicated stuff and people who ask questions which seem to be significantly above their knowledge/expertise level (I'm not trying to offend you here - I'm trying to save you time/money by preventing you from breaking your stuff), I'd advise seeking help from either Professional Services or your local friendly Splunk Partner, who has certified and experienced engineers who will help you get through this process.
Firstly, check what happens: when the UF "stops", check what's at the end of splunkd.log to see whether anything out of the ordinary happened, and check the Windows system/application logs for entries regarding splunkd.exe to see if there is any indication of the process crashing. It might be a configuration issue, but it might indeed be a software bug, so you might end up calling support for help.
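For reference, the UF's own log on Windows normally lives at the path below (assuming the default install location):

    C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log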
Splunk does not have a native capability to authenticate users against a RADIUS server. If you're using an external app (there is at least one on Splunkbase, but it doesn't seem to be actively maintained), you probably have to either dig into the script code or try to contact the author. I don't suppose it's a very popular way of authenticating with Splunk.
That is an interesting issue and it's definitely a browser issue. If I run your search I see the results with proper spacing differences. But. If I go into page source in developer tools I get this: They look evenly spaced, right? But they aren't. If I double click on those values to edit them, they "spread" (I think something changes font-wise when you're editing the contents). So it's definitely something with text rendering on the browser's side.