All Posts

Apps are empty either because the data they need isn't present or because the data can't be found.  You've shown the former is not true so it must be the latter. Confirm the data is in the index(es) where Veeam expects to find it.  If Veeam uses a datamodel (I suspect it does) then your data must be tagged so it is found by the DM.  Look at the DM definition to see which indexes and tags it needs.
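As a quick check (a sketch; index=veeam is a placeholder for wherever your Veeam data actually lands), something like this shows whether the events carry any tags at all, which you can then compare against the tags the data model's constraint search requires:

index=veeam earliest=-24h
| stats count by index, sourcetype, tag

If the tag column comes back empty, the DM will never pick the events up, no matter which index they are in.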
Morning, Splunkers! I've got a fun one today. I need to find the most resource-efficient way (i.e., the fastest way that won't have my IT guys calling me up wanting to know why their ports are smoking) to return the values in one field that have only a single unique value in another field. For example, in the following table my search needs to return only Value B; Values A and C are thrown out, because they don't have a unique value in Field B.

Field A    Field B
Value A    Value A1
Value A    Value A2
Value B    Value B1
Value C    Value C1
Value C    Value C2
Value C    Value C3

The big problem here is that Field B can be any number of different values, so I can't query specifically on what those values may be. I have a solution for this, and it works, but it doesn't work "at scale" because I'm looking through literally billions of records to pull this information. Here's what I'm already doing:

| tstats count where index=myindex by Field A Field B
| stats values(Field B) as Field B by Field A
| where mvcount(Field B)=1

This takes a few minutes if I'm pulling, say, 15 minutes of data, and I need to pull 90 days. I really don't want to have to do it 15 minutes at a time and stitch everything together afterward. I will if I have to, but there's got to be a better way to do what I'm trying to do that won't make the system flip me the bird and call it a day. Suggestions?
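One way to cut out the intermediate pass (a sketch, assuming Field A and Field B are indexed fields; I've written them as Field_A/Field_B, so substitute your real field names) is to let tstats compute the distinct count directly instead of materializing all values first:

| tstats dc(Field_B) as b_count values(Field_B) as Field_B where index=myindex by Field_A
| where b_count=1

This collapses the stats step into the tstats call, so only one aggregated row per Field_A value ever leaves the indexers. For 90 days of billions of events you may still want to back it with an accelerated data model or a scheduled summary.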
Yes, especially in distributed environments, the search head must be aware of the index.  No storage needs to be created, however.  The SH merely needs to know the index exists.
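For example, a minimal indexes.conf stanza deployed to the search head might look like this (a sketch; the index name is a placeholder, and the SH never writes to these paths because it does not ingest data):

[my_alert_index]
homePath   = $SPLUNK_DB/my_alert_index/db
coldPath   = $SPLUNK_DB/my_alert_index/colddb
thawedPath = $SPLUNK_DB/my_alert_index/thaweddb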
It's impossible to re-ingest all the data, as it has been collected over years. One of my first tasks is to check the bucket IDs and make sure there are no duplicates, but I'm fairly sure there are none.
Hi, I would like to merge two different index clusters. One has always been here, and the other was added later from an existing environment. Except for the internal indexes, each cluster has its own indexes. The "expiration scenario" is the last option we want, because we would like to retire the cluster A servers as they are too old.
Of course, that assumes you don't have any escaped quotes in your error string. That's the problem with 1) manipulating structured data with regexes and 2) sending structured data (JSON, XML) as part of an otherwise unstructured event. So generally it should work, but be aware that there may be edge cases where it will not capture the whole message. This version allows for escaped quotes:

(?<msg>"errorMessage":".*?(?<!\\)")
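If you use that pattern inside a rex command, note that the SPL string adds its own escaping layer on top of the regex (a sketch; worth testing against your own events): each quote becomes \" and each regex backslash has to be doubled, so the lookbehind (?<!\\) is written (?<!\\\\):

| rex field=_raw "(?<msg>\"errorMessage\":\".*?(?<!\\\\)\")"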
You could try opening a support case to see if they will export your data.  Consider using the REST API to run queries (index=foo earliest=0 latest=now | table *) that return all the data from an index and then save that data in the desired format.
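For example, the REST export endpoint can stream the results of such a query straight to a file (a sketch; the host, port, credentials, index name, and output file below are placeholders):

curl -k -u admin:changeme https://localhost:8089/services/search/jobs/export \
  --data-urlencode search="search index=foo earliest=0 latest=now | table *" \
  -d output_mode=csv > foo_export.csv

output_mode also accepts json, xml, or raw if CSV isn't the desired format.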
Hello @Xiangning.Mao, thanks for the details. It worked as expected.
The underLabel option does not perform evaluations. Do the eval in a separate statement.

<search>
  <query>index=* EventCode=25753 | stats count(EventCode) as toto | append [| search index=* EventCode=* | stats count(EventCode) as toto2]</query>
  <earliest>-7d@h</earliest>
  <latest>now</latest>
  <done>
    <condition>
      <set token="NbHost">$result.toto$</set>
      <set token="NbHost2">$result.toto2$</set>
      <eval token="ratio">$NbHost$ / $NbHost2$</eval>
    </condition>
  </done>
</search>
<option name="drilldown">none</option>
<option name="underLabel">$ratio$</option>
Hi Splunkers, I have a doubt about a specific Splunk alert triggered action: the log event one. At the end of the docs I can see: "You must also define the destination index on both the search head and the indexers." Does it mean that, even in a distributed environment, I must create the index used to save alerts on both the indexers and the search heads?
To answer my own question: this was a browser issue. Both the Splunk REST API and Splunk Web must use https for the REST call to succeed. In my case, this means https://localhost:8000 for Splunk Web and https://localhost:8090 for the API.
sort truncates at 10k values - try something like this:

| sort 0 -clientip
@dkmcclory - It depends on what your API call does. If your API call collects data and ingests it into Splunk, use an input; otherwise use a scheduled alert/report.
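For the input case, a scripted input is the usual shape (a sketch; the script path, interval, index, and sourcetype are placeholders):

[script://$SPLUNK_HOME/etc/apps/my_app/bin/call_api.py]
interval = 300
index = my_api_data
sourcetype = my_api:json
disabled = false

Splunk runs the script every 300 seconds and indexes whatever it writes to stdout.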
Are you sure you have searchable buckets from this site2 index in site1, and the other way around?

site_search_factor = origin:2,total:2

In this case a bucket originating in site2 will stay in site2, so a search from site1 will reach across the inter-site link for primaries, since it has no searchable primaries in its own site.
This regex will extract the highlighted text:

(?<msg>"errorMessage":"[^"]+")
Hi @Hod152, why did you do this? If you have tstats BY _time, you already have the timechart:

| tstats count WHERE case=test responseCode=200 requestStatus!=legal BY clientIp _time span=1h

Anyway, it's always better to indicate the indexes to use in the search, to get more performant searches and avoid default search-path issues.

Ciao.
Giuseppe
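If you do want timechart's output shape without the extra pass, tstats can also hand timechart pre-aggregated results (a sketch; index=my_index is a placeholder, and the clientIp split is dropped here because the final chart sums across clients anyway):

| tstats prestats=t count WHERE index=my_index case=test responseCode=200 requestStatus!=legal BY _time span=1h
| timechart count span=1h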
Hi @Cloud001, what are the Replication Factor and the Search Factor? Anyway, logs are usually replicated between the indexers of each site and also between the sites; in this way, you have at least one searchable copy (or more) in each site. E.g., to have two copies of the data in each site, you should have:

site_replication_factor = origin:2, site1:2, total:4

For more details see https://docs.splunk.com/Documentation/Splunk/9.2.2/Indexer/Multisitearchitecture

Ciao.
Giuseppe
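As a concrete sketch, the matching server.conf on the cluster manager might look like this (site names and factors are illustrative; adjust to your own topology, and keep the search factor no larger than the replication factor):

[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2, site1:2, total:4
site_search_factor = origin:1, total:2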
We did the same thing you did: DS and HF on the same internet-facing server. We disabled the web interface and manage the deployment server with .conf files only. All of the deployment clients for the DS/HF show up in the Settings > Forwarder Management page as you describe. All of the deployment clients in another deployment server show up too. My guess is that the logs in the new _dsappevent, _dsphonehome, and _dsclient indexes created in 9.2 are where that page gets its information. It's very confusing. There should be a column for the splunk_server in the display, so that we can tell which server is serving apps to which clients; one workaround is sketched below.
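Until such a column exists, you can query the new index directly (a sketch; I'm assuming the phonehome events carry the deployment server in the default host field, which is worth verifying against your own data):

index=_dsphonehome
| stats latest(_time) as last_phonehome by host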
Hey, I've noticed some weird behaviour that is making me suspect the reliability of my queries, so I'm really looking for an explanation. I was making some searches and displaying them on a timechart, and for some reason the timechart looks completely different when I sort the fields beforehand. This is the basic search and its results:

| tstats count WHERE case=test responseCode=200 requestStatus!=legal by clientIp _time span=1h
| timechart sum(count) span=1h

After sorting the clientIp field, this is how the graph looks:

| tstats count WHERE case=test responseCode=200 requestStatus!=legal by clientIp _time span=1h
| sort -clientIp
| timechart sum(count) span=1h

| tstats count WHERE case=test responseCode=200 requestStatus!=legal by clientIp _time span=1h
| sort +clientIp
| timechart sum(count) span=1h

Note that the count is decreased on the sorted search. What can explain that behaviour? Which chart should I rely on? Is that a feature of sorting? Thanks
2024-07-16T10:59:41.259Z eff08259-3379-5637-b5fe-dd4967aee355 ERROR Invoke Error {"errorType":"Error","errorMessage":"Required Message Attribute [EventTimestamp] is missing","errorCode":_ERROR","name":"Error","stack":["Error: Required Message Attribute [EventTimestamp] is missing"," at throwRequiredParameterError

In the above log I need to extract the errorMessage that was highlighted. Can anyone please help me write a regex for it?