
All Posts

Yes. If your events were sanitized before ingestion (i.e., not having a whole JSON structure inserted as a text member of another JSON), you could have them parsed as normal JSON without manually extracting each field (and manipulating structured data with regexes is bound to hit a wall sooner or later). Also - I'd advise against indexed extractions unless you have a very good use case for them.
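For what it's worth, search-time JSON parsing would be a one-line props.conf change; a minimal sketch (the keycloak sourcetype comes from the question below, the rest is an assumption):

    [keycloak]
    # parse each event as JSON at search time instead of using INDEXED_EXTRACTIONS
    KV_MODE = json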
Hi @Ste , in dashboards you cannot use < and >, and you have to replace them with &lt; and &gt;:

    | where my_time&gt;=relative_time(now(),"-1d@d") AND my_time&lt;=relative_time(now(),"@d")

Ciao. giuseppe
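For context, in Simple XML the escaped comparison sits inside the <query> element; a minimal sketch (the surrounding search element and the index name are assumptions):

    <search>
      <query>
        index=main
        | where my_time&gt;=relative_time(now(),"-1d@d") AND my_time&lt;=relative_time(now(),"@d")
      </query>
    </search>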
As @PickleRick said, you should use a separate HF in a distributed environment for all modular inputs; don't put those on the SH. Of course you need the TA on the SH too, but without inputs configured there.
Dear experts,

Why is the following line

    | where my_time>=relative_time(now(),"-1d@d") AND my_time<=relative_time(now(),"@d")

accepted as a valid statement in a search window, but as soon as I want to use exactly this code in a dashboard, I get the error message: "Error in line 100: Unencoded <"? The dashboard code validator somehow fails on the <= comparison. >= works, as does =, but not <=. We're on Splunk Cloud.
As this is a clean new installation on your test machine, I propose that you remove the whole UF installation (probably in /opt/splunkforwarder or something similar). Then, if you really want to also install the UF on the same machine, install it with your package manager from the rpm or deb file. Then just try to start it again. Just follow the UF's installation instructions on docs.splunk.com.
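A sketch of those steps on an RPM-based distro (the install path and package file name are assumptions; adapt for deb-based systems):

    # stop and remove the tarball-based install
    sudo /opt/splunkforwarder/bin/splunk stop
    sudo rm -rf /opt/splunkforwarder
    # reinstall from the distro package and start it
    sudo rpm -i splunkforwarder-<version>.rpm
    sudo /opt/splunkforwarder/bin/splunk start --accept-license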
Ok makes sense. Thanks for the reply!
No, it is not necessary.  The Deployment Server has been used to manage indexers and search heads, but that's not its intended purpose. 
@r0b1n wrote:
"I'm guessing I should modify it, save it to a tar.gz and upload/install it as a custom app?"
That is correct. You'll also need to change the app ID (the directory name and the associated value in app.conf).
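For reference, the app ID lives in the [package] stanza of app.conf; a minimal sketch (the ID shown is a made-up example, and the app's directory name must match it):

    # default/app.conf
    [package]
    id = my_custom_copy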
@PickleRick , How should I do it? Should the configuration below work?

    [keycloak]
    DATETIME_CONFIG =
    INDEXED_EXTRACTIONS = json
    NO_BINARY_CHECK = true
    category = Custom
    pulldown_type = 1
    disabled = false
    TRANSFORMS-field_extract = field_username

When you say "external preprocessing", do you mean somewhere else, before the Heavy Forwarder?
I'd like to take an existing Splunkbase app and make a few changes to the bin files. What is the proper way to do this in Splunk Cloud?  I'm guessing I should modify it, save it to a tar.gz and upload/install it as a custom app?
For unspecified reasons I cannot use K8s yet. For this build I was attempting to use the Splunk Docker image with Podman in rootless mode. I didn't know if anyone had any experience with this. It seems to be a privilege issue, since root:root registers fine with the VM host IP of the indexer peer. For some reason rootless:root does not, and assumes the container IP of the manager container.
As per the error logs, it seems that the public access setting for the storage account overrides the public access settings for all containers in that storage account, preventing anonymous access to blob data in that account. When public access is disallowed for the account, it is not possible to configure the public access setting for a container to permit anonymous access, and any future anonymous requests to that account will fail.

To set the AllowBlobPublicAccess property for the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the Microsoft.Storage/storageAccounts/write action. Built-in roles with this action include:
- The Azure Resource Manager Owner role
- The Azure Resource Manager Contributor role
- The Storage Account Contributor role

Once you have an account with such permissions, you can enable the "Blob public access" setting on your Storage Account, and after that set the public access setting at the container and blob levels as well. This should help you resolve the issue.
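A sketch of enabling that property with the Azure CLI (the account and resource group names are placeholders):

    # allow public access to blobs on the storage account
    az storage account update \
        --name <storage-account> \
        --resource-group <resource-group> \
        --allow-blob-public-access true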
Cool, works like a charm 
Thanks for the quick replies, we have configured an HF and removed the input from the SH. With the help of the guides we also managed to set the necessary Entra ID permissions for the app. Now it works and all dashboards show data. Thank you very much!
Hi @ameet , let us know if we can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Well, "correct" depends on what you want to achieve. Often dedup is not needed if you're going to stats the data right after it. And it's a tricky command and often misunderstood and misused. It filt... See more...
Well, "correct" depends on what you want to achieve. Often dedup is not needed if you're going to stats the data right after it. And it's a tricky command and often misunderstood and misused. It filters out from your results all further events with a particular value of a given field (or a set of values if you use it on more fields) regardless of what the remaining content of those events are. So if you had, for example, logs containing fields criticality (being one of INFO, WARN, DEBUG, ERR or CRIT) and message after using | dedup criticality you'd only get one INFO, one DEBUG and so on - the first one Splunk encountered in your data. You'd lose all subsequent INFOs, DEBUGs and so on even though they had different message value. So you'd be aware that - for example - there was a CPU usage spike but wouldn't know that your system was also out of disk space and over the temperature threshold. Dedup is really rarely useful. For me it works only as an "extended head".
Thank you @gcusello and @isoutamo for your suggestions. And yes, I agree, I should have checked and planned better. Yes, it is the Splunk Enterprise single instance installation. Just to elaborate a little bit more - I untarred the UF installation file and then tried starting the UF from the bin folder using the sudo ./splunk start command, but I ended up getting the error message below.
No. In the case of this particular bug it was the other way around: don't remove grantableRoles, add it if you don't have it. But that was my issue; yours can be a different one.
Hello,

I know that it is necessary to do this for forwarders, but I would like to confirm whether it is necessary to add the other components (such as indexers and search heads) as clients to our Deployment Server in a Splunk environment. We currently have a Deployment Server set up, but we are unsure whether registering the other instances as clients is a required step, or whether any specific configurations need to be made for each type of component. Thank you in advance. Best regards,
Hello,

Is it possible to send logs (for example: /var/log/GRPCServer.log) directly to Splunk Observability Cloud using the Splunk Universal Forwarder? If yes, how can we configure the Splunk Universal Forwarder to send logs to Splunk Observability Cloud directly? We don't have the IP address/hostname of Splunk Observability Cloud, and port 9997 is not open at the Splunk Observability Cloud end. In general, we would configure a Splunk Universal Forwarder to send to Splunk Enterprise/Cloud with the steps below:

1. Add the IP address/hostname where the logs have to be sent: ./splunk add forward-server <IP/HOST_NAME>:9997
2. Add the file whose logs have to be collected: ./splunk add monitor /var/log/GRPCServer.log

Thank You
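For reference, the conf-file equivalent of those two CLI steps, as a minimal sketch (the tcpout group name is a made-up example):

    # outputs.conf
    [tcpout:primary_group]
    server = <IP/HOST_NAME>:9997

    # inputs.conf
    [monitor:///var/log/GRPCServer.log]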