All Posts

Click on the graph to go into the Metric Browser... In the Metric Browser, right-click on the metric to show the REST URL. Regards, Terence
This is a TailReader message, so it relates to ingestion, not to already-indexed data. You might need to clean the fishbucket (which will, of course, re-ingest all monitor inputs).
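If it's just that one file you care about, resetting its fishbucket entry is usually less drastic than cleaning everything - something along these lines (the file path is obviously a placeholder, and I'd try it on a non-production instance first):

splunk cmd btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db --file /somepath/somefile --reset

Depending on your version, splunk clean eventdata -index _thefishbucket (with Splunk stopped) may also still work for wiping the whole fishbucket - but again, everything monitored gets re-read.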
First things first - check splunkd.log for errors. It looks like some communication problems between nodes. If all else fails, just reinstall the node from scratch and bootstrap it as an SHC member.
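A quick way to eyeball those errors from a working search head (the host value is just a placeholder):

index=_internal sourcetype=splunkd log_level=ERROR host=<problem_member>
| stats count by component

That usually narrows down whether it's KV store, captaincy/raft, or plain network/pass4SymmKey trouble.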
Yes. @gcusello is 100% spot-on. I admit it is a bit counterintuitive (especially compared to indexer cluster and manager node) but the deployer is not a part of a search head cluster. It is an auxiliary component meant to be used for... well, deploying the configuration to the SHC but apart from that it doesn't take part in any other SHC activity.
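For instance, the only time the deployer touches the cluster is when you push a configuration bundle, roughly like this (the target can be any member; credentials are placeholders):

splunk apply shcluster-bundle -target https://<any_member>:8089 -auth admin:changeme

Outside of that one-way push, it takes no part in replication, captaincy, or searching.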
Hi experts, For the Splunk App for Data Science and Deep Learning, is it possible at all to build a custom Docker image native to M1 or M2 Macs? The ones currently available on Docker Hub are for the x86 architecture, and running them in an M1 Docker environment has been problematic. All notebooks that use transformers and keras crash the kernel upon import. Also, being able to leverage the native M1 and/or M2 GPU would be useful. Any plans for native Docker image build support for M1 and/or M2? Thanks, MCW
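PS: I guess a naive approach would be to rebuild the image for arm64 myself with buildx from the container build sources, something like (the tag is just an example):

docker buildx build --platform linux/arm64 -t dsdl-custom:arm64 .

but since the published images and presumably their base layers are x86-only, I'm not sure that is even supported - hence the question.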
Hi, I am trying to install Splunk SOAR (On-premises) as an unprivileged user on CentOS 7.9, and when I run the ./soar-prepare-system script I get the following error messages:

./usr/python39/bin/python3.9: /lib64/libc.so.6: version `GLIBC_2.25' not found (required by /opt/splunk-soar/usr/python39/bin/../lib/libpython3.9.so.1.0)
./usr/python39/bin/python3.9: /lib64/libc.so.6: version `GLIBC_2.26' not found (required by /opt/splunk-soar/usr/python39/bin/../lib/libpython3.9.so.1.0)
./usr/python39/bin/python3.9: /lib64/libc.so.6: version `GLIBC_2.27' not found (required by /opt/splunk-soar/usr/python39/bin/../lib/libpython3.9.so.1.0)
./usr/python39/bin/python3.9: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by /opt/splunk-soar/usr/python39/bin/../lib/libpython3.9.so.1.0)

I tried to install it on CentOS 8 and got another error saying "Unable to read CentOS/RHEL version from /etc/redhat-release". Any suggestions?
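(For context: CentOS 7 ships glibc 2.17 - which you can confirm with ldd --version or rpm -q glibc - while the bundled Python 3.9 is asking for GLIBC 2.25-2.28, so presumably that is the root of the first error.)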
Hi @jenkinsta , your solution surely works, but you could also try:

index=data | lookup adexport.csv UserName AS UserId OUTPUT Title | table _time UserId Title

Ciao. Giuseppe
Hi @adoumbia , to my knowledge you can run this command only on the Search Heads, not on the Deployer, because the Deployer isn't a component of the cluster; it's only the system that deploys apps to the cluster, and after deployment the cluster runs by itself. Ciao. Giuseppe
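PS: for example, on any of the actual cluster members this works (auth is just a placeholder, adjust the path to your install):

$SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:changeme

On the Deployer the shclustering member REST endpoints simply aren't enabled, which is why you see that error.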
Something is not clear, because it does not return any rows now:

(...) | stats values(*) as * by joiner | where ctx_ecid=ecid_d | eval _time=min(_time) | timechart span=5min avg(time_taken)

Without the last line with timechart it returns all expected rows; with it - none.

And one more question. Right now I do stats values(*) as * by joiner, but this returns all fields from both indexes. I suppose it costs a lot to transfer such a big amount of data, especially since each of my indexes has 10-20 fields and thousands of records. In fact I only use 2-3 fields from each index, so I tried something like:

| stats values(field1) as field1, values(field2) as field2, values(field3) as field3 by joiner (...)

but it does not return rows. Why? How do I modify it to return only the fields I need?
Hello, I am building a new search head cluster. The cluster works fine, however the deployer throws an error whenever I run "/bin/splunk show shcluster-status" on the deployer. Here is the error I am getting:

"Encountered some errors while trying to obtain shcluster status. Search Head Clustering is not enabled on this node. REST endpoint is not available"

My server.conf looks like the following:

[shclustering]
pass4SymmKey = xxxxxxxxx

Your input would be appreciated.
I am having the error. I tried your solution; unfortunately, it does not work.
Hi @Ryan.Paredez , Can you please check for the Account [redacted] ^ Post edited by @Ryan.Paredez to remove email from the post. For security and privacy reasons, if you need to share PII, do it privately using the Community Private Message feature.
Got it. I actually renamed the field in the csv and re-uploaded... join UserId [inputlookup adexport.csv | fields UserId Title ] Why did I complicate things? If there is a faster way, let me know.
I'm getting these failures after a prior disk corruption.

ERROR TailReader [1876879 tailreader0] - Ignoring path="/somepath/somefile" due to: BTree::Exception: unexpected resulting offset 0 while on order 1, starting from offset 2056 node offset: 53209432 order: 255 keys: { } children: { 0 }

I thought I had ejected buckets affected by the corruption; in addition, recent ingestions all go into buckets created after the corruption. What can I do to fix this?
I have an inputlookup called adexport.csv that's big... I am trying to join and match the lookup field UserName with the Splunk field UserId. This doesn't seem to work. I have tried variations of join and append; my Splunk foo is dead.

index=data | lookup adexport.csv UserName as UserId OUTPUT UserId Title | table _time UserId Title
"but i'm asking if there is a default fields related to microservices in Splunk" - I understand that it is tempting to view Splunk as a unique data source. But in reality, Splunk data is what you collect in your business. Volunteers here have zero visibility into what fields are available in your_sourcetype that may or may not be related to microservices.

In simple terms, no. There is no such thing as default fields related to anything other than time. host, source, and sourcetype are usually mandatory in most deployments. You need to ask whoever is writing logs in your_sourcetype how to identify a microservice. They may have already put such information in a key-value pair, using either a delimiter or a structured format such as JSON. Even if they haven't, Splunk can easily extract it as long as it is present in the data. However, Splunk itself cannot tell you where your developers placed that information.

As @PickleRick suggested, you can also show some raw events (anonymized as needed) for volunteers to inspect and speculate. Still, the best option is to ask your developers to identify the information themselves.
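Purely as an illustration (the field names and values here are made up - yours will differ): if the developers log something like service=payments-api, a simple extraction would be

your_search | rex field=_raw "service=(?<microservice>[^ ,]+)" | stats count by microservice

and if the events are JSON with a path such as service.name, spath can pull it out:

your_search | spath output=microservice path=service.name | stats count by microservice

Whether anything like that actually exists in your events is, again, something only your logs and your developers can tell you.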
When using Splunk Security Essentials' MITRE ATT&CK Framework, we are missing a significant number of alerts. We used to have around 1500 active and 300-ish on "needs data"; however, overnight it dropped to around 200 total (between active and needs data). The following troubleshooting steps have been taken:
1. Updated content with "force update" under system configuration.
2. Verified communication to the URLs (yes, it can connect).
3. Uninstalled and reinstalled the current SSE version; this cleared the data mapping. Upon install it showed enabled 0 - active 0 - missing data 1715; after the weekend it dropped to 0-8-195.
4. Rebuilt the data inventory.
Here are some screenshots of the security content:
1. Shows the content.
2. The drop-down shows 12 MITRE ATT&CK platforms, but the counts are all 0's.
3. Sometimes the data sources show a filter of "none" with 1300+ items (like item 134 below), and sometimes it just doesn't appear.
4. The MITRE map is missing from the configuration tags.
The cache for the summary index drop-down is apparently a bit too small for our environment. I noticed it was missing everything after the Ts, so I deleted my index (it started with a V) and recreated it with a name at the top of the alphabet. Sure enough, there it was.
Ok. Do a simple index=_internal host=your_uf search and run it as a real-time search. That's one of the very few use cases where a real-time search is actually warranted. If you see something, check the validity of the data. A typical problem when you're supposedly ingesting data but don't see it (apart from a non-existent destination index) is time trouble - if the timezone is misconfigured on the source, the data can appear to come from a long time ago, so it's indexed into the past. Then you don't see it when searching "last 15 minutes" or so.
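One quick way to spot that (just a sketch - index and host are placeholders, and run it over a wide time range such as All time so events indexed into the past show up):

index=your_destination_index host=your_uf
| eval lag_seconds = _indextime - _time, indexed_at = strftime(_indextime, "%F %T")
| table _time indexed_at lag_seconds source

A large positive or negative lag_seconds usually points at timestamp or timezone trouble on the source.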
1. Post your searches in a code block or preformatted paragraph - it helps readability.
2. Don't use the join command if you can avoid it (in this case you can probably go with stats instead - see the sketch below).
3. Fields depend on the data you onboard. The only "default" thing about them is when you have them normalized to be CIM-compliant. But I don't see any datamodel applicable here.
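A rough shape of the stats-based approach (index and field names are placeholders, since we can't see your data):

(index=index_a) OR (index=index_b)
| eval joiner=coalesce(field_from_a, field_from_b)
| stats min(_time) as _time values(field1) as field1 values(field2) as field2 by joiner
| where isnotnull(field1) AND isnotnull(field2)

This way you only carry the fields you actually need, and you avoid join's subsearch limitations.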