All Posts

In Classic XML dashboards, you can add a <done> stanza to the searches for your singles and set tokens from the first row of the results. You can then use these tokens in your subsequent search.
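A minimal sketch of the pattern (the search, field, and token names here are assumptions for illustration, not from the thread):

<single>
  <search>
    <query>index=web sourcetype=requests | stats count</query>
    <done>
      <set token="tok_requests">$result.count$</set>
    </done>
  </search>
</single>

A later panel's search can then reference $tok_requests$ directly in its query string.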
Hi @Stives , if they are private, you cannot do anything; he should share them at least at app level, enabling the roles to edit alerts and dashboards. If these knowledge objects are orphaned (because e.g. the account was disabled), there's a feature to reassign them to another user, and then you can share them with the correct roles. Otherwise, the only way is to find them in the conf files and copy them into another user's or app's area. Ciao. Giuseppe
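For that last option, a sketch of where private objects live on disk (the owner, app, and dashboard names are placeholders, not from the thread):

# a private dashboard sits in the owner's user directory:
$SPLUNK_HOME/etc/users/<owner>/<app>/local/data/ui/views/my_dashboard.xml
# copying it into the app's local directory shares it at app level:
cp $SPLUNK_HOME/etc/users/<owner>/<app>/local/data/ui/views/my_dashboard.xml \
   $SPLUNK_HOME/etc/apps/<app>/local/data/ui/views/
# then refresh or restart Splunk so it picks up the new file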
Hi there. This morning I did a SHC restart, and found something very strange from the SHC members:

WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#1:8089 Authentication Failed
WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#2:8089 Authentication Failed
WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#3:8089 Authentication Failed
WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#4:8089 Authentication Failed
GetRemoteAuthToken [1964778 DistributedPeerMonitorThread] - Unable to get auth token from peer: https://OLDIDX#1:8089 due to: Connect Timeout; exceeded 5000 milliseconds
GetBundleListTransaction [1964778 DistributedPeerMonitorThread] - Unable to get bundle list from peer: https://OLDIDX#2:8089 due to: Connect Timeout; exceeded 60000 milliseconds
GetRemoteAuthToken [2212932 DistributedPeerMonitorThread] - Unable to get auth token from peer: https://OLDIDX#3:8089 due to: Connect Timeout; exceeded 5000 milliseconds
GetRemoteAuthToken [2212932 DistributedPeerMonitorThread] - Unable to get auth token from peer: https://OLDIDX#4:8089 due to: Connect Timeout; exceeded 5000 milliseconds

All OLDIDX hosts are old servers, turned off and shut down! None of the SHC members has OLDIDX#* in its distsearch conf. I recently upgraded a v7 infrastructure to v8, and I also searched all .conf files for the IPs of OLDIDX#*; none of them was found. Where are those "artifacts" stored? Is there something in the "raft" data of the new SHC? Do I need to remove all the SHC conf and redo it from the beginning? These messages appear in splunkd.log ONLY DURING the restart of the SHC; during normal daily use of the SHC I never had, and still don't have, any similar message. Thanks.
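Two checks that might help locate the stale peer references (a sketch; OLDIDX is the placeholder host name from the post above):

# btool shows the merged config, including default files a plain grep of local .conf can miss:
$SPLUNK_HOME/bin/splunk btool distsearch list --debug | grep -i OLDIDX

# the REST endpoint shows what the running instance actually believes its peers are:
| rest /services/search/distributed/peers | table title status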
| table Time count1 count2 count3

The first field (column) will be the x-axis; the other columns will be the lines.
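For example, a sketch that produces such a table (the status field and the thresholds are assumptions for illustration):

<your_search>
| bin _time span=1h
| stats count(eval(status<400)) AS count1 count(eval(status>=400)) AS count2 BY _time
| rename _time AS Time
| table Time count1 count2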
Hi Giuseppe, thank you for the feedback. This is exactly the problem: the user who created the dashboard can't edit permissions, and on the other side I'm not able to see his dashboard as it's set to Private, so we are not able to move forward like this. Thanks
Okay, have you checked the internal logs on the affected instance for any WARN or ERROR events, especially during a reboot?
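Something along these lines could surface them (a sketch; the host filter is a placeholder to adjust for the affected instance):

index=_internal host=<affected_host> source=*splunkd.log* log_level IN (WARN, ERROR)
| stats count BY component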
Hi @Stives , you should modify the permissions of the alerts and dashboards in question, giving the "Write" permission to the role of these users. It isn't a role problem, but a knowledge-object sharing permissions problem. Ciao. Giuseppe
Hi @Stives , you should ask Splunk Cloud Support to remove your app, because there isn't a feature to remove apps yourself: you would have to delete the app folder, but you cannot access the system via SSH. Ciao. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @Znerox , I don't think that you can use a token from two other Single values, but you could use the same search (possibly as a base search) in this third Single value, using a search like mine. Ciao. Giuseppe
Dear Splunkers, I would like to ask for support in providing specific users with the capability to edit permissions for alerts and dashboards. We have several different users created, and this specific user has inherited the Power role, but despite that, users are not allowed to modify permissions even for their own dashboards or alerts. Can you please advise? Thank you Stives
I'm at a loss here. I already have A and B visualized as "single values". The only thing that is missing is the calculation of A-B. I've tried modifying your code to something that looks like it might make sense; here I'm trying to reference the searches that are used to visualize A and B (access search results or metadata):

| stats values($<All requests>$) AS A values($<All responses>$) AS B
| eval C=A-B
Hi @Znerox , you have to append two searches:

<your_search_A>
| stats sum(X) AS A
| append [ <your_search_B> | stats sum(Y) AS B ]
| stats values(A) AS A values(B) AS B
| eval C=A-B
| table A B C

Ciao. Giuseppe
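As a concrete sketch of the same pattern for counting requests and responses (the index and sourcetype values here are assumptions, not from the thread):

index=app sourcetype=requests
| stats count AS A
| append [ search index=app sourcetype=responses | stats count AS B ]
| stats values(A) AS A values(B) AS B
| eval C=A-B

The trailing stats collapses the two appended rows into one, so eval can subtract B from A on a single row.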
I have a search X that shows requests; search Y shows responses. Value A = number of results of X. Value B = number of results of Y. I want to calculate a new value C that is A-B (it would show the number of requests where the response is missing). How can I calculate C?
Actually, no. The indexes definition must be consistent across indexers in a cluster. From the technical point of view you don't need the indexes definition on search heads, but it's useful for defining role permissions in the UI and for auto-completion in search, so since you don't need to index data on SHs it's actually a common practice to define an app with the indexes definition and distribute it across both layers.

But since indexers typically differ in storage layout from SHs, it's also a well-established practice, good from a maintainability perspective, to define one app with the indexes themselves based on volumes (this app you can distribute without changes on both tiers) and another app with the volume definitions. That's the case in @Gregski11 's situation, and that's actually a sound practice; there's nothing wrong with it. You can kinda compare it with CIM datamodels, which are defined within an app and which you don't touch, and their `cim_something_indexes` macros, which externalize the configuration of your datamodels from their actual definitions.
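A sketch of that two-app split (the app names, index name, and paths are illustrative assumptions):

# app "org_all_volumes" (tier-specific storage paths) - indexes.conf:
[volume:primary]
path = /opt/splunk/var/lib/splunk

# app "org_all_indexes" (identical on indexers and SHs) - indexes.conf:
[web]
homePath = volume:primary/web/db
coldPath = volume:primary/web/colddb
thawedPath = $SPLUNK_DB/web/thaweddb
# note: thawedPath cannot use a volume reference

To change storage layout per tier you touch only the volumes app; the indexes app ships unchanged everywhere.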
With congestion you would have a drop in throughput, but you'd still have some values, if only from local internal inputs. Here you seem to have no data points whatsoever, which means that it's probably an all-in-one installation, or the whole Splunk infrastructure was down.
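A quick way to check this (a sketch; pick a time window covering the gap) is whether any indexing throughput metrics exist at all:

index=_internal source=*metrics.log* group=per_index_thruput
| timechart span=1m sum(kb) AS indexed_kb

If even the _internal series is empty for that window, the instance wasn't indexing anything at all.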
Hi @uagraw01 , in general there can be two potential root causes: the server is down, or there's network or server congestion, in which case the internal Splunk logs have a lower priority than the other logs. I don't think that you can find a root cause in _internal; see the server and network logs. Ciao. Giuseppe
Hi @Gregski11 , first of all, I have never seen a production Splunk infrastructure based on Windows (only labs); think about using Linux! Then, don't put conf files in $SPLUNK_HOME\system\local, because you cannot manage them using a Deployment Server: it's always better to put them in a custom app (called e.g. TA_indexers) containing at least two files: app.conf and indexes.conf. Anyway, having an Indexer Cluster, you define volumes and indexes on the Cluster Manager. The volume configuration isn't relevant for Search Heads (clustered or not): they point to the Cluster Manager to know the active indexers (using Indexer Discovery: https://docs.splunk.com/Documentation/Splunk/9.3.1/Indexer/indexerdiscovery) or directly to the Indexers (https://docs.splunk.com/Documentation/Splunk/9.3.1/Indexer/Indexerclusterinputs). About volumes and indexes, you can put them in your indexes.conf file following the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Indexesconf Ciao. Giuseppe
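As a sketch of how such an app would be distributed from the Cluster Manager (TA_indexers is just the example name used above):

# on the Cluster Manager, place the app under manager-apps:
$SPLUNK_HOME/etc/manager-apps/TA_indexers/default/app.conf
$SPLUNK_HOME/etc/manager-apps/TA_indexers/default/indexes.conf
# then push the configuration bundle to the peer nodes:
$SPLUNK_HOME/bin/splunk apply cluster-bundle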
Hi, yes it is there:

[root@ nav]$ ls -la $SPLUNK_HOME/etc/apps/search/default/data/ui/nav
-r--r--r--. 1 svc-splunk eb-svc-splunk 235 Oct 9 10:29 default.xml

Content:

<nav search_view="search" color="#5CC05C">
  <view name="search" default="true" />
  <view name="analytics_workspace" />
  <view name="datasets" />
  <view name="reports" />
  <view name="alerts" />
  <view name="dashboards" />
</nav>
Hi @jatkb , usually connections between Splunk systems are configured with auto load balancing, so you have load distribution and failover management between the receivers (both HFs and IDXs):

[tcpout]
defaultGroup=autoloadbalancing

[tcpout:autoloadbalancing]
disabled=false
server=10.1.0.1:9997, 10.1.0.2:9997, 10.2.0.1:9997, 10.2.0.2:9997

Otherwise I don't think that it's possible to have automatic failover management. Ciao. Giuseppe
Hi @timtekk , good for you, see you next time! Let me know if I can help you more, or, please, accept an answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated