All Posts

Hi @Znerox ,
you have to append the two searches:

<your_search_A>
| stats sum(X) AS A
| append [ <your_search_B> | stats sum(Y) AS B ]
| stats values(A) AS A values(B) AS B
| eval C=A-B
| table A B C

Ciao.
Giuseppe
I have a search X that shows requests and a search Y that shows responses. Value A = number of results from X. Value B = number of results from Y. I want to calculate a new value C = A-B (this would show the number of requests for which the response is missing). How can I calculate C?
Actually, no. The indexes definition must be consistent across indexers in a cluster. From a technical point of view you don't need an indexes definition on search heads, but it's useful for defining role permissions in the UI and for auto-completion in search, so since you don't need to index data on SHs, it's common practice to define an app with the indexes definition and distribute it across both layers. But because indexers typically differ in storage layout from SHs, it's also a well-established practice, good from a maintainability perspective, to define one app with the indexes themselves based on volumes (this app you can distribute without changes to both tiers) and another app with the volume definitions. That's what's happening in @Gregski11 's case, and it's actually a sound practice; there's nothing wrong with it. You can loosely compare it with CIM datamodels, which are defined within an app that you don't touch, and their `cim_something_indexes` macros, which externalize the configuration of your datamodels from their actual definitions.
With congestion you would see a drop in throughput, but you'd still have some values, if only from local internal inputs. Here you seem to have no data points whatsoever, which means that it's probably an all-in-one installation or the whole Splunk infrastructure was down.
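One way to confirm that, a sketch using the standard internal metrics (adjust the time range to the gap you saw):

index=_internal source=*metrics.log group=thruput
| timechart span=1m sum(kb) AS indexed_kb

If even this series is empty for the period, nothing was indexed at all, which points at a full outage rather than congestion.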
Hi @uagraw01 ,
in general there are two potential root causes: the server was down, or there was network or server congestion, so the internal Splunk logs had a lower priority than the other logs. I don't think that you can find a root cause in _internal; check the server and network logs instead.
Ciao.
Giuseppe
Hi @Gregski11 ,
first of all, I have never seen a production Splunk infrastructure based on Windows (outside of labs); consider using Linux!
Then, don't put conf files in $SPLUNK_HOME\system\local, because you cannot manage them using a Deployment Server: it's always better to put them in a custom app (called e.g. TA_indexers) containing at least two files: app.conf and indexes.conf.
Anyway, having an Indexer Cluster, you define volumes and indexes on the Cluster Manager. The volume definition isn't relevant for Search Heads (clustered or not): they point to the Cluster Manager to discover the active indexers (using Indexer Discovery: https://docs.splunk.com/Documentation/Splunk/9.3.1/Indexer/indexerdiscovery ) or directly to the Indexers: https://docs.splunk.com/Documentation/Splunk/9.3.1/Indexer/Indexerclusterinputs
About volumes and indexes, you can put them in your indexes.conf file following the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.1/Admin/Indexesconf
Ciao.
Giuseppe
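A minimal sketch of such an app, reusing the volume names and paths from the question (the index name my_index is illustrative). On a Cluster Manager you would place it under the apps-distribution directory (manager-apps on recent versions, master-apps on older ones) and push it to the peers:

TA_indexers/default/app.conf

[install]
state = enabled

[package]
id = TA_indexers

TA_indexers/default/indexes.conf

[volume:hot11]
path = D:\Splunk-Hot-Warm
maxVolumeDataSizeMB = 1000000

[volume:cold11]
path = E:\Splunk-Cold
maxVolumeDataSizeMB = 12000000

# an example index on those volumes (thawedPath cannot reference a volume)
[my_index]
homePath = volume:hot11/my_index/db
coldPath = volume:cold11/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb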
Hi,
Yes, it is there:

[root@ nav]$ ls -la $SPLUNK_HOME/etc/apps/search/default/data/ui/nav
-r--r--r--. 1 svc-splunk eb-svc-splunk 235 Oct 9 10:29 default.xml

Content:

<nav search_view="search" color="#5CC05C">
  <view name="search" default="true" />
  <view name="analytics_workspace" />
  <view name="datasets" />
  <view name="reports" />
  <view name="alerts" />
  <view name="dashboards" />
</nav>
Hi @jatkb ,
usually connections between Splunk systems are configured with auto load balancing, so you have load distribution and failover management between the receivers (whether HFs or IDXs):

[tcpout]
defaultGroup = autoloadbalancing

[tcpout:autoloadbalancing]
disabled = false
server = 10.1.0.1:9997, 10.1.0.2:9997, 10.2.0.1:9997, 10.2.0.2:9997

Otherwise, I don't think that it's possible to have automatic failover management.
Ciao.
Giuseppe
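With auto load balancing the forwarder rotates through the listed receivers on a timer (30 seconds by default) as well as on connection failure; the interval can be tuned with autoLBFrequency in the same stanza. A sketch, with the value in seconds chosen only as an example:

[tcpout:autoloadbalancing]
autoLBFrequency = 30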
Hi @timtekk ,
good for you, see you next time!
Let me know if I can help you more, or, please, accept one answer for the other people of the Community.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated
I'm afraid I still don't understand what you are trying to do. It makes absolutely no sense to join 50000 raw events. In fact, it generally makes zero sense to join raw events to begin with.
It is best to describe what the end goal is with zero SPL. I have posted my four commandments of asking an answerable question many times. Here they are again.
1. Illustrate data input (in raw text, anonymized as needed), whether it is raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between illustrated data and desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output and compare it with the desired output; explain why they look different to you if that is not painfully obvious.
In your case, simply avoid illustrating SPL. Just illustrate what your data is, its characteristics, etc., what your results should look like, and why the illustrated data should lead to the illustrated result, all without SPL. There is a chance that some volunteers can understand if you do NOT show SPL.
If you have lots of events, performing the lookup after stats will be more efficient:

index=* | chart count by X
| lookup my-lookup.csv Y AS X OUTPUT X_description

This will add an extra field. If you don't want to see X, just remove it with the fields command.
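For example, continuing the search above (field names as in the post):

index=* | chart count by X
| lookup my-lookup.csv Y AS X OUTPUT X_description
| fields - X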
With the same log, I would expect a single duration.  Perhaps the maxspan option to the transaction command will help.
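A sketch, with a hypothetical correlating field and span, since the original search isn't shown:

... | transaction session_id maxspan=5m

This caps each transaction at five minutes, so events further apart than that are no longer glued into one long duration.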
Yes, it is best practice to have consistent index configurations and definitions throughout the cluster. Thanks @gcusello @PickleRick for the good points. To further elaborate on this topic, I'd like to add the following best practices:
1. Separate the index definition from the storage definition:
- It's typically best practice to keep these configurations separate.
- In a production environment, use a legitimate app for your main indexes.conf file, not the system/local directory.
- This ensures better manageability, version control, and consistency across your Splunk deployment.
2. Use separate apps for the configurations:
- Implement a base-config methodology with different apps for different aspects.
- Create apps like:
  a) org_all_indexes: for consistent index definitions across the deployment.
  b) org_idxer_volume_indexes: for indexer-specific volume configurations.
  c) org_srch_volume_indexes: for search head-specific volume configurations.
3. Flexibility and scalability:
- This approach allows different storage tiers for indexers and search heads as needed.
- It maintains a consistent view of available indexes across the deployment while allowing for component-specific optimizations.
These practices will help create a more robust, manageable, and scalable Splunk infrastructure.
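A minimal sketch of that three-app layout, reusing the volume names from this thread (the index name web_index is illustrative):

org_all_indexes/default/indexes.conf   (deployed to indexers and search heads)

[web_index]
homePath = volume:hot11/web_index/db
coldPath = volume:cold11/web_index/colddb
thawedPath = $SPLUNK_DB/web_index/thaweddb

org_idxer_volume_indexes/default/indexes.conf   (indexers only)
# the [volume:hot11] and [volume:cold11] stanzas with real paths and
# maxVolumeDataSizeMB, as in the TA_indexers sketch earlier in this thread

org_srch_volume_indexes/default/indexes.conf   (search heads only)

[volume:hot11]
path = $SPLUNK_DB

[volume:cold11]
path = $SPLUNK_DB

The search-head volume app only has to make the volume references resolve; the actual sizing matters on the indexers.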
We are looking to deploy Edge Processors (EP) in a high-availability configuration, with 2 EP systems per site and multiple sites. We need to use Edge Processors (or Heavy Forwarders, I guess?) to ingest and filter/transform the event logs before they leave our environment and go to our MSSP Splunk Cloud. Ideally, I want the Universal Forwarders (UF) to use the local site EPs. However, in case those are unavailable, I would like the UFs to fail over to the EPs at another site. I do not want the UFs to use the EPs at another site by default, as this would increase WAN costs, so I can't simply list all the servers in the defaultGroup. For example:

[tcpout]
defaultGroup = site_one_ingest

[tcpout:site_one_ingest]
disabled = false
server = 10.1.0.1:9997,10.1.0.2:9997

[tcpout:site_two_ingest]
disabled = true
server = 10.2.0.1:9997,10.2.0.2:9997

Is there any way to configure the UFs to prefer the local Edge Processors (site_one_ingest), but then to fail over to the second site (site_two_ingest) if those systems are not available? Is it also possible for the configuration to support automated failback/recovery?
I am currently taking Udemy classes and my Splunk Enterprise health is in the red. Who can help me with this?
Still a total newb here, so please be gentle. On Microsoft Windows 2019 servers we have an Indexer Cluster, and here's how the Hot and Cold volumes are defined on it:

C:\Program Files\Splunk\etc\system\local\indexes.conf

[default]

[volume:cold11]
path = E:\Splunk-Cold
maxVolumeDataSizeMB = 12000000

[volume:hot11]
path = D:\Splunk-Hot-Warm
maxVolumeDataSizeMB = 1000000

That I can live with, but on our Search Heads here's how we point to the volumes, and this doesn't look right to me:

C:\Program Files\Splunk\etc\apps\_1-LDC_COMMON\local\indexes.conf

[volume:cold11]
path = $SPLUNK_DB

[volume:hot11]
path = $SPLUNK_DB

Should the stanzas on the Search Heads match the ones on our Indexers?
Thanks for the quick response! You saved me a lot of time.
Does Splunk for Cisco Identity Services (ISE) support data containing IPv6 addresses?  
An event type cannot "merge" multiple events. It's as simple as that. So either process your data prior to ingestion so that you have a single login event containing all the interesting fields, or do summary indexing and create synthetic events after ingesting the original events.
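A sketch of the summary-indexing route, with hypothetical index, sourcetype, and field names; run it as a scheduled search:

index=auth sourcetype=login_events
| stats min(_time) AS _time values(user) AS user values(src_ip) AS src_ip values(action) AS action by session_id
| collect index=login_summary

Each collected event carries all the fields of one login, so an event type (or anything else) can match against it directly.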
The general answer is no. You have no indication in the main search whatsoever that your subsearches (regardless of whether it's a "straight" subsearch, an append, or a join) were finalized before full completion due to hitting limits. They are simply silently finalized, the results yielded so far are returned, and that's it. This is why using subsearches is tricky and why they're best avoided unless you can make strong assumptions about their execution time and the size of their result set. Maybe, just maybe (I haven't checked), you could retroactively find that information in _internal, but to be honest, I doubt it. The search itself doesn't return such metadata.
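For reference, the ceilings involved live in limits.conf on the search head; the defaults below are typical but version-dependent, so treat the exact values as an assumption:

[subsearch]
# maximum number of results a subsearch may return
maxout = 10000
# maximum subsearch runtime, in seconds
maxtime = 60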