All Posts

No, it is not necessary.  The Deployment Server has been used to manage indexers and search heads, but that's not its intended purpose. 
@r0b1n wrote: I'm guessing I should modify it, save it to a tar.gz and upload/install it as a custom app? That is correct.  You'll also need to change the app ID (the directory name and the associated value in app.conf).
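A minimal sketch of the app.conf change, assuming the renamed directory is my_custom_app (the name is a placeholder):

    # my_custom_app/default/app.conf
    [package]
    id = my_custom_app

    [launcher]
    description = Customized copy of the original Splunkbase app

The id value and the top-level directory name must match, or the install will complain.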
@PickleRick , how should I do it? Would the props.conf below work?

    [keycloak]
    DATETIME_CONFIG =
    INDEXED_EXTRACTIONS = json
    NO_BINARY_CHECK = true
    category = Custom
    pulldown_type = 1
    disabled = false
    TRANSFORMS-field_extract = field_username

When you say "external preprocessing", do you mean somewhere else before the Heavy Forwarder?
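For reference, a TRANSFORMS- stanza runs at index time and needs a matching transform that writes an indexed field. A minimal transforms.conf sketch, assuming a hypothetical field_username transform that pulls the user name out of the raw JSON (the regex is a placeholder for your actual event shape):

    [field_username]
    REGEX = "username":"([^"]+)"
    FORMAT = username::$1
    WRITE_META = true

If a search-time extraction is enough, a REPORT- stanza in props.conf (which runs at search time) would avoid growing the index.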
I'd like to take an existing Splunkbase app and make a few changes to the bin files. What is the proper way to do this in Splunk Cloud?  I'm guessing I should modify it, save it to a tar.gz and upload/install it as a custom app?
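A minimal sketch of the repackaging on a POSIX shell, assuming the app is renamed to my_custom_app (all names here are placeholders):

    # unpack the Splunkbase download
    tar -xzf original_app.tgz
    # rename the directory to the new app ID
    mv original_app my_custom_app
    # ... edit the bin/ files and update app.conf ...
    # repackage; the top-level directory must match the new app ID
    tar -czf my_custom_app.tgz my_custom_app

Note that for Splunk Cloud the package also has to pass app vetting (AppInspect) before it can be installed.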
For unspecified reasons I cannot use K8s yet. For this build I was attempting to use the Splunk Docker image with Podman in rootless mode. Didn't know if anyone had any experience with this. It seems to be a privilege issue, since root:root registers fine with the VM host IP of the indexer peer; for some reason rootless:root does not, and the registration at the manager assumes the container IP instead.
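For comparison, a minimal sketch of how such a peer might be launched rootless, assuming the splunk/splunk image and placeholder credentials:

    # rootless podman; the container gets a slirp4netns address, not the host IP
    podman run -d --name idx1 \
      -p 9997:9997 -p 8089:8089 \
      -e SPLUNK_START_ARGS='--accept-license' \
      -e SPLUNK_PASSWORD='changeme123' \
      -e SPLUNK_ROLE='splunk_indexer' \
      splunk/splunk:latest

Since rootless containers don't share the host's network identity, one thing to try is having the peer advertise the published host address explicitly, e.g. register_replication_address under [clustering] in the peer's server.conf.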
As per the error logs, it seems that the public access setting on the storage account overrides the public access settings for all containers in that account, preventing anonymous access to blob data. When public access is disallowed for the account, a container's public access setting cannot be configured to permit anonymous access, and any future anonymous requests to that account will fail.

To set the AllowBlobPublicAccess property on the storage account, a user must have permission to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide this permission include the Microsoft.Storage/storageAccounts/write action. Built-in roles with this action include:

- the Azure Resource Manager Owner role
- the Azure Resource Manager Contributor role
- the Storage Account Contributor role

Once you have an account with such permissions, you can enable the "Blob public access" setting on your storage account, and after that set the public access level at the container and blob levels. This should help you resolve the issue.
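For reference, a minimal Azure CLI sketch of both steps, with placeholder account, resource group, and container names:

    # allow public access at the storage-account level
    az storage account update \
        --name mystorageacct --resource-group myrg \
        --allow-blob-public-access true

    # then set the public access level on the container itself
    az storage container set-permission \
        --account-name mystorageacct --name mycontainer \
        --public-access blob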
Cool, works like a charm 
Thanks for the quick replies. We have configured a HF and removed the input from the SH. With the help of the guides we also managed to set the necessary Entra ID permissions for the app. Now it works and all dashboards show data. Thank you very much!
Hi @ameet ,
let us know if we can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors
Well, "correct" depends on what you want to achieve. Often dedup is not needed if you're going to stats the data right after it. And it's a tricky command and often misunderstood and misused. It filt... See more...
Well, "correct" depends on what you want to achieve. Often dedup is not needed if you're going to stats the data right after it. And it's a tricky command and often misunderstood and misused. It filters out from your results all further events with a particular value of a given field (or a set of values if you use it on more fields) regardless of what the remaining content of those events are. So if you had, for example, logs containing fields criticality (being one of INFO, WARN, DEBUG, ERR or CRIT) and message after using | dedup criticality you'd only get one INFO, one DEBUG and so on - the first one Splunk encountered in your data. You'd lose all subsequent INFOs, DEBUGs and so on even though they had different message value. So you'd be aware that - for example - there was a CPU usage spike but wouldn't know that your system was also out of disk space and over the temperature threshold. Dedup is really rarely useful. For me it works only as an "extended head".
Thank you @gcusello and @isoutamo for your suggestions. And yes, I agree I should have checked and planned better. Yes, it is a Splunk Enterprise single-instance installation. Just to elaborate a little bit more: I untarred the UF installation file and then tried starting the UF from the bin folder using the sudo ./splunk start command, but I ended up getting the error message below.
No. In case of this particular bug it was the other way around. Don't remove grantableRoles. Add it if you don't have it. But that was my issue. Yours can be a different one.
Hello, I know that it is necessary to do this for the forwarders, but I would like to confirm whether it is necessary to add other components (such as indexers and search heads) as clients to our Deployment Server in our Splunk environment. We currently have a Deployment Server set up, but we are unsure if registering the other instances as clients is a required step, or if any specific configurations need to be made for each type of component. Thank you in advance. Best regards,
Hello, is it possible to send logs (for example: /var/log/GRPCServer.log) directly to Splunk Observability Cloud using the Splunk Universal Forwarder? If yes, how can we configure the Universal Forwarder to send logs to Splunk Observability Cloud directly, given that we don't have the IP address/hostname of Splunk Observability Cloud and port 9997 is not open at the Splunk Observability Cloud end? In general we can use the steps below to configure a Universal Forwarder to send to Splunk Enterprise/Cloud:

1. Add the IP address/host name where the logs have to be sent: ./splunk add forward-server <IP/HOST_NAME>:9997
2. Add the file whose logs have to be collected: ./splunk add monitor /var/log/GRPCServer.log

Thank You
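For reference, those two CLI steps are equivalent to the following settings on the forwarder (the host name is a placeholder):

    # outputs.conf
    [tcpout:primary]
    server = splunk.example.com:9997

    # inputs.conf
    [monitor:///var/log/GRPCServer.log]
    disabled = false

As far as I know, this S2S path on 9997 targets Splunk Enterprise/Cloud indexers; Splunk Observability Cloud ingests through different, OpenTelemetry-based endpoints instead.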
@PickleRick  As per the link you shared in the first place, it mentions removing 'grantableRoles = admin' from the admin user in the authorize.conf file. Will that workaround work, or shall I try it?
The image below should give some clarity. Currently I have 2 different dashboards, and I want a single dashboard with all 3 details.
Flap time = when one of the peers (or, you could say, the cable connected to the device) went down. In the dashboard below you can see the device IP + flap time; in the other dashboard you can see the Device_name + Device IP. I just want to see all 3 details (Device name, Device IP & Flap time) in one dashboard. Does that answer your query?
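A minimal SPL sketch of one way to combine the two searches, assuming both share a Device_IP field (the index and sourcetype names are placeholders):

    index=network sourcetype=flap_logs
    | stats latest(Flap_Time) as Flap_Time by Device_IP
    | join type=left Device_IP
        [ search index=network sourcetype=inventory
          | stats latest(Device_Name) as Device_Name by Device_IP ]
    | table Device_Name Device_IP Flap_Time

If the device inventory is large or fairly static, a lookup file would avoid the subsearch limits of join.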
Thanks, I will post in a new thread.
I edited the datamodel using the web UI; I also edited a macro and it is reflected in the cluster.
This is a different issue - more visualization-related. Please post it as a new thread in a proper forum section to keep this forum tidy.