All Posts


Hi @Stives , you should modify the permissions of the alerts and dashboards that you want to change, giving "Write" permission to the role of these users. It isn't a role problem, but a knowledge object sharing permissions problem. Ciao. Giuseppe
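For illustration only, a minimal sketch of what the UI writes to the owning app's metadata/local.meta when sharing permissions are changed; the object and role names below are hypothetical, not taken from this thread:

# metadata/local.meta inside the app that owns the objects
# stanza names URL-encode spaces in object names
[savedsearches/My%20Alert]
access = read : [ * ], write : [ power, my_custom_role ]

[views/my_dashboard]
access = read : [ * ], write : [ power, my_custom_role ]

The same result can be achieved per object from the UI via Edit > Edit Permissions.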
Hi @Stives , you should ask Splunk Cloud Support to remove your app, because there isn't a feature to remove apps: you would have to delete the app folder, but you cannot access the system via SSH. Ciao. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @Znerox , I don't think that you can use a token from two other Single Values, but you could use the same search (eventually as a base search) in this third Single Value, using a search like mine. Ciao. Giuseppe
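For illustration only, a minimal Simple XML sketch of that approach, with one base search feeding three Single Value panels; the index, sourcetype and field names are placeholders, not taken from the original posts:

<dashboard>
  <label>Requests vs responses</label>
  <!-- one base search computes both counts -->
  <search id="base">
    <query>index=my_index sourcetype=my_sourcetype
| stats count(eval(type="request")) AS A count(eval(type="response")) AS B</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <single>
        <!-- post-process search: requests -->
        <search base="base"><query>| fields A</query></search>
      </single>
    </panel>
    <panel>
      <single>
        <!-- post-process search: responses -->
        <search base="base"><query>| fields B</query></search>
      </single>
    </panel>
    <panel>
      <single>
        <!-- post-process search: requests with no response -->
        <search base="base"><query>| eval C=A-B | fields C</query></search>
      </single>
    </panel>
  </row>
</dashboard>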
Dear Splunkers, I would like to ask for support in order to provide specific users with the capability to edit permissions for Alerts and Dashboards. We have several different users created, and this specific user has inherited the Power role. But despite that, the users are not allowed to modify permissions, even for their own dashboards or alerts. Can you please advise? Thank you Stives
I'm at a loss here. I already have A and B visualized as "single values". The only thing that is missing is the calculation of A-B. I've tried modifying your code to something that looks like it might make sense. Here I'm trying to reference the searches that are used to visualize A and B (access search results or metadata): | stats values($<All requests>$) AS A values($<All responses>$) AS B | eval C=A-B
Hi @Znerox , you have to append two searches:

<your_search_A>
| stats sum(X) AS A
| append [ <your_search_B> | stats sum(Y) AS B ]
| stats values(A) AS A values(B) AS B
| eval C=A-B
| table A B C

Ciao. Giuseppe
I have a search X that shows requests, and search Y shows responses. Value A = number of X. Value B = number of Y. I want to calculate a new value C, that is A-B (it would show the number of requests where the response is missing). How can I calculate C?
Actually, no. The indexes definition must be consistent across indexers in a cluster. From the technical point of view you don't need the indexes definition on search heads, but it's useful for defining role permissions in the UI and for auto-completion in search, so since you don't need to index data on SHs it's actually a common practice to define an app with the indexes definition and distribute it across both layers. But since indexers typically differ in storage layout from SHs, it's also a well-established practice, good from a maintainability perspective, to define one app with the indexes themselves based on volumes (this app you can distribute without changes on both tiers) and another app with the volume definitions. That's the case in @Gregski11 's setup, and it's actually a sound practice; there's nothing wrong with it. You can roughly compare it with the CIM datamodels, which are defined within an app you don't touch, while their `cim_something_indexes` macros externalize the configuration of your datamodels from their actual definitions.
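A minimal sketch of that split, with hypothetical index, volume and path names (only the volume app would differ between the indexer and search head tiers):

# org_all_indexes/default/indexes.conf (shared by indexers and search heads)
[my_index]
homePath   = volume:hot/my_index/db
coldPath   = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

# org_idxer_volume_indexes/default/indexes.conf (indexer-specific volume layout)
[volume:hot]
path = /splunk/hot
maxVolumeDataSizeMB = 2000000

[volume:cold]
path = /splunk/cold
maxVolumeDataSizeMB = 8000000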
With congestion you would have a drop in throughput, but you'd still have some values, if only from local internal inputs. Here you seem to have no data points whatsoever, which means that it's probably an all-in-one installation or the whole Splunk infrastructure was down.
Hi @uagraw01 , in general there can be two potential root causes: the server is down, or there's network or server congestion, so the internal Splunk logs get a lower priority than the other logs. I don't think that you can find the root cause in _internal; check the server and network logs. Ciao. Giuseppe
Hi @Gregski11 , first of all, I've never seen a production Splunk infrastructure based on Windows (except labs); think about using Linux! Then, don't put conf files in $SPLUNK_HOME\system\local because you cannot manage them using a Deployment Server: it's always better to put them in a custom app (called e.g. TA_indexers) containing at least two files: app.conf and indexes.conf. Anyway, having an Indexer Cluster, you define Volumes and Indexes on the Cluster Manager. The volume isn't relevant for Search Heads (clustered or not): they point to the Cluster Manager to know the active indexers (using Indexer Discovery: https://docs.splunk.com/Documentation/Splunk/9.3.1/Indexer/indexerdiscovery ) or directly to the Indexers: https://docs.splunk.com/Documentation/Splunk/9.3.1/Indexer/Indexerclusterinputs About volumes and indexes, you can put them in your indexes.conf file following the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Indexesconf Ciao. Giuseppe
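As a hedged sketch of the Indexer Discovery side mentioned above (the URI, port and key are placeholders; check the linked docs for your version, as newer releases also accept manager_uri in place of master_uri):

# outputs.conf on the forwarding tier
[indexer_discovery:cluster1]
pass4SymmKey = <discovery_secret>
master_uri = https://cluster-manager.example.com:8089

[tcpout:discovered_peers]
indexerDiscovery = cluster1

[tcpout]
defaultGroup = discovered_peers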
Hi, yes, it is there.

[root@ nav]$ ls -la $SPLUNK_HOME/etc/apps/search/default/data/ui/na
-r--r--r--. 1 svc-splunk eb-svc-splunk 235 Oct 9 10:29 default.xml

Content:

<nav search_view="search" color="#5CC05C">
  <view name="search" default="true" />
  <view name="analytics_workspace" />
  <view name="datasets" />
  <view name="reports" />
  <view name="alerts" />
  <view name="dashboards" />
</nav>
Hi @jatkb , usually connections between Splunk systems are configured with auto load balancing, so you have load distribution and failover management between the receivers (both HFs or IDXs):

[tcpout]
defaultGroup = autoloadbalancing

[tcpout:autoloadbalancing]
disabled = false
server = 10.1.0.1:9997, 10.1.0.2:9997, 10.2.0.1:9997, 10.2.0.2:9997

Otherwise I don't think that it's possible to have automatic failover management. Ciao. Giuseppe
Hi @timtekk , good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
I'm afraid I still don't understand what you are trying to do. It makes absolutely no sense to join 50000 raw events. In fact, it generally makes zero sense to join raw events to begin with. It is best to describe what the end goal is with zero SPL. I have posted my four commandments of asking an answerable question many times. Here they are again:
1. Illustrate data input (in raw text, anonymized as needed), whether they are raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
In your case, simply avoid illustrating SPL. Just illustrate what your data is, its characteristics, etc., and what your result should look like and why the illustrated data should lead to the illustrated result, all without SPL. There is a chance that some volunteers can understand if you do NOT show SPL.
If you have lots of events, performing the lookup after stats will be more efficient.

index=* | chart count by X
| lookup my-lookup.csv Y AS X OUTPUT X_description

This will add an extra field. If you don't want to see X, just remove it with the fields command.
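For illustration, the lookup file in that example only needs the key column (Y) and the description column; the values below are made up:

Y,X_description
200,OK
404,Not Found
500,Internal Server Error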
With the same log, I would expect a single duration.  Perhaps the maxspan option to the transaction command will help.
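For illustration, a hedged sketch of how maxspan constrains a transaction; the grouping field and the 30-second window are hypothetical, not taken from the original question:

<your_search>
| transaction session_id maxspan=30s
| table session_id duration eventcount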
Yes, it is best practice to have consistent index configurations and definitions throughout the cluster. Thanks @gcusello @PickleRick for the good points. To expand on this with some additional best practices:
1. Separate the index definition from the storage definition:
- It's typically best practice to keep these configurations separate.
- In a production environment, use a legitimate app for your main indexes.conf file, not the system/local directory.
- This ensures better manageability, version control, and consistency across your Splunk deployment.
2. Use separate apps for configurations:
- Implement a base config methodology with different apps for different aspects.
- Create apps like:
a) org_all_indexes: for consistent index definitions across the deployment.
b) org_idxer_volume_indexes: for indexer-specific configurations.
c) org_srch_volume_indexes: for search head-specific configurations.
3. Flexibility and scalability:
- This approach allows different storage tiers for indexers and search heads as needed.
- It maintains a consistent view of available indexes across the deployment while allowing for component-specific optimizations.
These practices will help create a more robust, manageable, and scalable Splunk infrastructure.
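As a hedged sketch of what that app layout might look like on disk (the app names follow the naming above; the file contents described in the comments are assumptions, not part of the original post):

org_all_indexes/
    default/
        app.conf
        indexes.conf    # index stanzas only; homePath/coldPath reference volume:hot / volume:cold
org_idxer_volume_indexes/
    default/
        app.conf
        indexes.conf    # [volume:hot] / [volume:cold] stanzas sized for indexer storage
org_srch_volume_indexes/
    default/
        app.conf
        indexes.conf    # the same volume names, sized for search head storage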
We are looking to deploy Edge Processors (EP) in a high availability configuration, with 2 EP systems per site and multiple sites. We need to use Edge Processors (or Heavy Forwarders, I guess?) to ingest and filter/transform the event logs before they leave our environment and go to our MSSP Splunk Cloud. Ideally, I want the Universal Forwarders (UF) to use the local site EPs. However, in the case that those are unavailable, I would like the UFs to fail over to the EPs at another site. I do not want the UFs to use the EPs at another site by default, as this will increase WAN costs, so I can't simply list all the servers in the defaultGroup. For example:

[tcpout]
defaultGroup = site_one_ingest

[tcpout:site_one_ingest]
disabled = false
server = 10.1.0.1:9997, 10.1.0.2:9997

[tcpout:site_two_ingest]
disabled = true
server = 10.2.0.1:9997, 10.2.0.2:9997

Is there any way to configure the UFs to prefer the local Edge Processors (site_one_ingest), but then to fail over to the second site (site_two_ingest) if those systems are not available? Is it also possible for the configuration to support automated failback/recovery?
I am currently taking Udemy classes and my Splunk Enterprise health is in the red. Who can help me with this?