All Posts


Hello, I am building a new search head cluster. The cluster works fine; however, the deployer throws an error whenever I run "/bin/splunk show shcluster-status" on the deployer. Here is the error I am getting:

"Encountered some errors while trying to obtain shcluster status. Search Head Clustering is not enabled on this node. REST endpoint is not available"

My server.conf looks like the following:

[shclustering]
pass4SymmKey = xxxxxxxxx

Your input would be appreciated.
I am having the same error. I tried your solution; unfortunately, it does not work.
Hi @Ryan.Paredez, can you please check for the account [redacted]? ^ Post edited by @Ryan.Paredez to remove the email from the post. For security and privacy reasons, if you need to share PII, do it privately using the Community Private Message feature.
Got it. I actually renamed the field in the CSV and re-uploaded it: join UserId [inputlookup adexport.csv | fields UserId Title ]. Why did I complicate things? If there is a faster way, let me know.
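For reference, the complete search described above - a sketch assuming the index and field names from this thread, with the CSV columns renamed to UserId and Title - would look like:

    index=data
    | join type=left UserId [| inputlookup adexport.csv | fields UserId Title]
    | table _time UserId Title

The type=left keeps events that have no match in the CSV; drop it if you only want matched events.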
I'm getting these failures after a prior disk corruption:

ERROR TailReader [1876879 tailreader0] - Ignoring path="/somepath/somefile" due to: BTree::Exception: unexpected resulting offset 0 while on order 1, starting from offset 2056 node offset: 53209432 order: 255 keys: { } children: { 0 }

I thought I had ejected the buckets affected by the corruption; in addition, recent ingestions all go into buckets created after the corruption. What can I do to fix this?
I have an inputlookup called adexport.csv that's big... I am trying to join and match two fields: UserName in the lookup with the Splunk field UserId. This doesn't seem to work; I've tried variations of join and append, and my Splunk foo is dead:

index=data | lookup adexport.csv UserName as UserId OUTPUT UserId Title | table _time UserId Title
"but I'm asking if there are default fields related to microservices in Splunk"

I understand that it is tempting to view Splunk as a unique data source, but in reality, Splunk data is whatever you collect in your business. Volunteers here have zero visibility into what fields are available in your_sourcetype that may or may not be related to microservices. In simple terms, no, there is no such thing as a default field related to anything other than time; host, source, and sourcetype are usually mandatory in most deployments. You need to ask whoever is writing the logs in your_sourcetype how to identify a microservice. They may have already put it in a key-value pair, using either a delimiter or a structured format such as JSON. Even if they haven't, Splunk can easily extract it as long as it is present in the data, but Splunk itself cannot tell you where your developers placed that information. As @PickleRick suggested, you can also show some raw events (anonymized as needed) for volunteers to inspect and speculate on. Still, the best option is to ask your developers to identify the information themselves.
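For example, if your developers emit JSON and include a field identifying the service, something like the following would surface it. This is purely illustrative - the path service.name is a made-up placeholder, as are the index and sourcetype names; substitute whatever your data actually contains:

    index=your_index sourcetype=your_sourcetype
    | spath input=_raw path=service.name output=microservice
    | stats count by microservice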
When using Splunk Security Essentials' MITRE ATT&CK Framework, we are missing a significant number of alerts. We used to have around 1500 active and roughly 300 on "needs data"; however, overnight this dropped to around 200 total (between active and needs data). The following troubleshooting steps have been taken:
1. Updated content with the "force update" under system configuration.
2. Verified communication to the URLs (yes, it can connect).
3. Uninstalled and reinstalled the current SSE version; this cleared the data mapping. Upon install it showed enabled 0 - active 0 - missing data 1715; after the weekend it dropped to 0 - 8 - 195.
4. Rebuilt the data inventory; afterwards it looked as shown in the screenshots below.

Here are some screenshots of the security content:
1. Shows the content.
2. The drop-down shows 12 MITRE ATT&CK platforms, but the drop-down values are all 0's.
3. Sometimes the data sources show a filter of "none" with 1300+ items, like item 134 below, and sometimes it just doesn't appear.
4. The MITRE map is missing from the configuration tags.
The cache for the summary index drop-down is apparently a bit too small for our environment. I noticed it was missing everything after the Ts, so I deleted my index (which started with a V) and recreated it with a name at the top of the alphabet. Sure enough, there it was.
Ok. Do a simple index=_internal host=your_uf search and run it as a real-time search. That's one of the very few use cases where a real-time search is actually warranted. If you see something, check the validity of the data. A typical problem when you're supposedly ingesting data but not seeing it (apart from a non-existent destination index) is time trouble: if the timezone is misconfigured on the source, the data can appear to come from a long time ago, so it's indexed into the past, and you won't see it when searching "last 15 minutes" or so.
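If events do show up in the real-time search but not in historical searches, one way to confirm a timezone problem - a sketch using the built-in _indextime field - is to compare event time with index time:

    index=_internal host=your_uf
    | eval index_lag_sec = _indextime - _time
    | stats min(index_lag_sec) as min_lag, max(index_lag_sec) as max_lag, count by host

A lag of many hours' worth of seconds usually means the source's timestamps are being parsed in the wrong timezone and the events are landing in the past.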
1. Post your searches in a code block or preformatted paragraph - it helps readability.
2. Don't use the join command if you can avoid it (in this case you can probably go with stats instead; see the sketch below).
3. Fields depend on the data you onboard. The only "default" thing about them is when you have them normalized to be CIM-compliant, but I don't see any applicable datamodel here.
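To illustrate point 2, here is a minimal sketch of the stats-based pattern, using the field names from this thread (events carry UserId; the CSV carries UserName and Title); adjust to your data:

    index=data
    | stats latest(_time) as _time by UserId
    | append [| inputlookup adexport.csv | rename UserName as UserId | fields UserId Title]
    | stats values(*) as * by UserId
    | where isnotnull(_time)

The final where drops lookup rows that matched no events.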
1. This is a 5-year-old thread. Post your question as a new thread to give it visibility.
2. Be a bit more descriptive. I suspect I know what you mean, but it's nice to be sure that all parties are on the same page.
Clustering is an internal matter of the indexers; from the source's (in this case your search head's) point of view, it doesn't matter. You just set the output group to include both your indexers and you're good. If your indexers were clustered, they'd replicate the incoming data among themselves; since they're not clustered, only the one directly receiving an event will hold it.
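For a non-clustered pair of indexers, the outputs.conf on the sending instance might look like this (the group name and hostnames are placeholders):

    [tcpout]
    defaultGroup = my_indexers

    [tcpout:my_indexers]
    server = indexer1.example.com:9997, indexer2.example.com:9997

With both servers in one group the forwarder load-balances between them, so each event still lands on exactly one indexer - which is why, without clustering, only the receiving indexer holds it.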
Is it possible to download the splunkcloud.spl file by using curl?
Hi, we are getting "GnuTLS handshake retry returned error" when trying to communicate with ForeScout. Any suggestions?
Hmmm, are you talking about the role defined within the Monitoring Console? I am having tons of issues resolving this. 
You just need a _time field that isn't multivalued to be present for timechart to work. Assuming you are working with the example you replied to, you would simply add "| eval _time=min(_time) | timechart count" after the where line. The eval ensures you take the earliest time if it is multivalued.
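For the more general case of turning a stats result into a timechart, a common pattern - sketched here with hypothetical index and field names - is to bin _time, keep it in the stats by clause, and then run timechart over the result:

    index=your_index
    | bin _time span=1h
    | stats count by _time user
    | timechart span=1h sum(count) as count by user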
Well, in that case I am really confused. I can telnet from the UF host to the indexer on port 9997. One more thing: I do see the UF host names in metrics.log on my indexer. And tcpdump shows traffic being sent from the UF host to the indexer and, on the indexer, traffic being received from the UF host.
This is not working for me. I have the app in place on my DS (/etc/apps/DS_Fix/local/outputs.conf).
@dmarling, pretty good explanation! But now I need to go one step further. Based on the result of the stats command, I want to create a timechart. First I tried replacing stats with timechart, but it simply does not work. Then I created a table from the stats result and wanted to build a timechart from that table, but I rather feel than know that this is not a good way. Would you please give an example of how to do it? Even one based on the simple example from the source post.