As per the error logs, it seems that disallowing public access for the storage account overrides the public access settings for all containers in that storage account, preventing anonymous access to blob data in that account. When public access is disallowed for the account, it is not possible to configure the public access setting for a container to permit anonymous access, and any future anonymous requests to that account will fail.

To set the AllowBlobPublicAccess property for the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the Microsoft.Storage/storageAccounts/write action. Built-in roles with this action include:
- the Azure Resource Manager Owner role
- the Azure Resource Manager Contributor role
- the Storage Account Contributor role

Once you have an account with such permissions, enable the "Blob public access" setting on your storage account. After that you can also set the public access setting at the container and blob levels. The above information should help you resolve this issue.
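As a minimal sketch, the account-level setting can also be flipped with the Azure CLI (assuming the az CLI is installed and you are logged in; the account, resource group and container names below are placeholders):

```shell
# Allow public access at the storage-account level (placeholder names).
az storage account update \
    --name mystorageacct \
    --resource-group my-resource-group \
    --allow-blob-public-access true

# Then permit anonymous read access to blobs in a specific container.
az storage container set-permission \
    --name mycontainer \
    --account-name mystorageacct \
    --public-access blob
```

The same can be done in the Azure portal under the storage account's Configuration blade.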
Cool, works like a charm 
Thanks for the quick replies. We have configured an HF and removed the input from the SH. With the help of the guides we also managed to set the necessary Entra ID permissions for the app. Now it works and all dashboards show data. Thank you very much!
Hi @ameet, let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Well, "correct" depends on what you want to achieve. Often dedup is not needed if you're going to stats the data right after it. And it's a tricky command, often misunderstood and misused. It filters out of your results all further events with a particular value of a given field (or a set of values if you use it on more fields), regardless of what the remaining content of those events is. So if you had, for example, logs containing the fields criticality (one of INFO, WARN, DEBUG, ERR or CRIT) and message, then after using | dedup criticality you'd only get one INFO, one DEBUG and so on - the first one Splunk encountered in your data. You'd lose all subsequent INFOs, DEBUGs and so on, even though they had different message values. So you'd be aware that - for example - there was a CPU usage spike, but wouldn't know that your system was also out of disk space and over the temperature threshold. Dedup is really rarely useful. For me it works only as an "extended head".
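As a small illustrative sketch (field names as in the example above; the events are synthetic, generated with makeresults):

```
| makeresults count=4
| streamstats count AS n
| eval criticality=case(n<=2, "INFO", true(), "ERR"),
       message=case(n=1, "CPU spike", n=2, "service restarted",
                    n=3, "out of disk space", true(), "over temperature threshold")
| dedup criticality
```

Only two events survive (the first INFO and the first ERR); the other two are silently dropped despite having different message values. Running | stats count by criticality over the original events would at least have told you how many of each there were.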
Thank you @gcusello and @isoutamo for your suggestions. And yes, I agree I should have checked and planned better. Yes, it is a Splunk Enterprise single-instance installation. Just to elaborate a little bit more - I untarred the UF installation file and then tried starting the UF from the bin folder using the sudo ./splunk start command, but I ended up getting the error message below
No. In case of this particular bug it was the other way around. Don't remove grantableRoles. Add it if you don't have it. But that was my issue. Yours can be a different one.
Hello, I know that it is necessary to do this for forwarders, but I would like to confirm whether it is necessary to add other components (such as indexers and search heads) as clients to the Deployment Server in our Splunk environment. We currently have a Deployment Server set up, but we are unsure if registering the other instances as clients is a required step, or if any specific configuration needs to be made for each type of component. Thank you in advance. Best regards,
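For context, a minimal sketch of what registering an instance as a deployment client looks like - just a deploymentclient.conf pointing at the deployment server (the hostname and port below are placeholders). Note that clustered indexers and search head cluster members are typically managed by the cluster manager and the deployer rather than the deployment server:

```
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089
```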
Hello, is it possible to send logs (for example /var/log/GRPCServer.log) directly to Splunk Observability Cloud using the Splunk Universal Forwarder? If yes, how can we configure the Universal Forwarder to send logs to Splunk Observability Cloud directly, given that we don't have an IP address/hostname for Splunk Observability Cloud, nor is port 9997 open on the Splunk Observability Cloud end? In general, we use the steps below to configure a Universal Forwarder to send to Splunk Enterprise/Cloud:

Add the IP address/hostname where the logs have to be sent: ./splunk add forward-server <IP/HOST_NAME>:9997
Add the file whose logs have to be collected: ./splunk add monitor /var/log/GRPCServer.log

Thank you
@PickleRick As per the link you shared in the first place, it mentions removing 'grantableRoles = admin' from the admin user in the authorize.conf file. Will that workaround work, or shall I try it?
The image below should give some clarity. Currently I have 2 different dashboards, and I want a single dashboard with all 3 details.
Flap time = when one of the peers (or, you could say, the cable connected to the device) went down. In the dashboard below you can see the device IP + flap time; in the other dashboard you can see the Device_name + device IP. I just want to see all 3 details (device name, device IP & flap time) in one dashboard. Does that answer your query?
Thanks I will post in new thread
I edited the datamodel using the web UI. Also, I just edited a macro and it is reflected in the cluster.
This is a different issue - more visualization-related. Please post it as a new thread in a proper forum section to keep this forum tidy
Hi @Ashish0405, at first, you don't need dedup before stats:

index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| stats count by Device_name src_ip state_to
| eval primarycolor=case(state_to="Down", "#D93F3C", state_to="Up", "#31A35F"), secondarycolor=primarycolor

Then, what do you mean by flap time? If you mean the time borders of your search, you can use the addinfo command (https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/Addinfo), which, with the info_min_time and info_max_time fields, gives you the time borders of your search:

index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| stats count by Device_name src_ip state_to
| eval primarycolor=case(state_to="Down", "#D93F3C", state_to="Up", "#31A35F"), secondarycolor=primarycolor
| addinfo
| table Device_name src_ip state_to count primarycolor secondarycolor info_min_time info_max_time

Ciao. Giuseppe
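As a small follow-up sketch (my addition, not part of the answer above): info_min_time and info_max_time are epoch timestamps, so you may want to render them human-readable, e.g.:

```
| addinfo
| eval earliest_readable=strftime(info_min_time, "%Y-%m-%d %H:%M:%S"),
       latest_readable=strftime(info_max_time, "%Y-%m-%d %H:%M:%S")
```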
OK. How did you edit the datamodel? Normally from the WebUI? Or did you fiddle with the jsons directly?
Yes, thank you for these details. I guess if I sort the output by time (# sort _time) the results will be rearranged by date & time - am I correct? So if, with the help of sort _time, the data gets rearranged, then the latest result will be either #UP or #DOWN, and the aim is achieved.
As far as I remember (that was some time ago), it happened when users' roles allowed them to grant roles but had no list of grantable roles specified whatsoever. So you have to make sure that there is at least one role listed as grantableRoles for those users.
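For reference, a minimal authorize.conf sketch of the setting being discussed (the role name and the grantable role below are placeholders):

```
# $SPLUNK_HOME/etc/system/local/authorize.conf
[role_power_user]
grantableRoles = user
```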
Would anyone be able to help me with one more thing, please? I have a number-display dashboard which represents the BGP flap details as #Device_name & #BGP peer IP; however, I cannot add the time when the BGP flapped to the number display. Current query:

index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| dedup Device_name,src_ip
| stats count by Device_name,src_ip,state_to
| eval primarycolor=case(state_to="Down", "#D93F3C", state_to="Up", "#31A35F")
| eval secondarycolor=primarycolor

Is there something we can add to display the flap time in the same number display?
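One possible sketch (an assumption on my part, not confirmed in this thread): replace dedup with latest() aggregations in stats, so the most recent flap time for each device/peer pair rides along with its state:

```
index="network" %BGP-5 *clip*
| rex field=_raw "^(?:[^ \n]* ){4}(?P<Device_name>[^:]+)"
| stats latest(_time) AS flap_time latest(state_to) AS state_to BY Device_name src_ip
| eval flap_time=strftime(flap_time, "%Y-%m-%d %H:%M:%S")
| eval primarycolor=case(state_to="Down", "#D93F3C", state_to="Up", "#31A35F")
| eval secondarycolor=primarycolor
```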