All Posts


Dear All, is there any delay option in Splunk multisite M4/M14? Requirement: Site A is the active site and Site N is the passive site. Data ingestion from the active site should be in real time, while data from Site N would be ingested at 1 AM every day. Is there any option in multisite...
It looks like you are trying to find the app.name for the parent_span_id? To avoid using joins, try something like this:

index=your_index sourcetype=your_sourcetype
| fields trace_id, span_id, parent_span_id, app.name
| rename app.name as current_service
| eval join_id=parent_span_id
| appendpipe [| rename current_service as parent_service | eval join_id = span_id]
| eventstats values(parent_service) as parent_service by join_id trace_id
| where isnotnull(current_service)
| table trace_id parent_service current_service

If this isn't correct, please share some anonymised but representative raw events and a description of what it is you are trying to do.
If you had bucket problems, the error would come from another component. Normally, with stale fishbucket entries and similar problems, you'd simply remove the particular entry from the fishbucket, but in this case it looks like the fishbucket database itself is damaged, so you probably need to remove the whole fishbucket: stop splunkd, remove var/lib/splunk/fishbucket, start splunkd. You might want to fiddle with the db first. If I remember correctly (but I wouldn't bet any money on it), it's a BerkeleyDB, so it might be possible to repair it.
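A minimal sketch of that procedure, assuming a default install under /opt/splunk (moving the directory aside instead of deleting it keeps a copy in case you need to inspect it later):

# Stop splunkd before touching the fishbucket
/opt/splunk/bin/splunk stop
# Move the damaged fishbucket aside rather than deleting it outright
mv /opt/splunk/var/lib/splunk/fishbucket /opt/splunk/var/lib/splunk/fishbucket.corrupt
# On restart, Splunk recreates an empty fishbucket and reingests all monitor inputs
/opt/splunk/bin/splunk start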
Indeed, this error is related to ingestion failure. I just thought the BTree problem was inside the buckets. To clean up, do I just delete everything in the fishbucket? Reingestion is not a problem, but I do not want to cause other behavior changes.
Click on the graph to go into the Metric Browser. In the Metric Browser, right-click on the metric to show the REST URL. Regards, Terence
This is a TailReader message, so it should be about ingestion, not indexed data. You might need to clean the fishbucket (of course, it will reingest all monitor inputs).
First things first: check splunkd.log for errors. It looks like some communication problems between nodes. If all else fails, just reinstall the node from scratch and bootstrap it as an SHC member.
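A quick way to pull the most recent problems out of that log (a diagnostic sketch; the path assumes a default install under /opt/splunk):

# Show the last 50 ERROR/WARN lines splunkd has logged
grep -E 'ERROR|WARN' /opt/splunk/var/log/splunk/splunkd.log | tail -n 50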
Yes, @gcusello is 100% spot-on. I admit it is a bit counterintuitive (especially compared to an indexer cluster and its manager node), but the deployer is not part of a search head cluster. It is an auxiliary component meant to be used for... well, deploying the configuration to the SHC, but apart from that it doesn't take part in any other SHC activity.
Hi experts, for the Splunk App for Data Science and Deep Learning, is it possible at all to build a custom Docker image native to M1 or M2 Macs? The images currently available on Docker Hub are for the x86 architecture, and running them in an M1 Docker environment has been problematic: all notebooks that use transformers and keras crash the kernel upon import. Also, being able to leverage the native M1 and/or M2 GPU would be useful. Is there any plan for native Docker image build support for M1 and/or M2? Thanks, MCW
Hi, I am trying to install Splunk SOAR (On-premises) as an unprivileged user on CentOS 7.9, and when I run the ./soar-prepare-system script I get the following error message:

./usr/python39/bin/python3.9: /lib64/libc.so.6: version `GLIBC_2.25' not found (required by /opt/splunk-soar/usr/python39/bin/../lib/libpython3.9.so.1.0)
./usr/python39/bin/python3.9: /lib64/libc.so.6: version `GLIBC_2.26' not found (required by /opt/splunk-soar/usr/python39/bin/../lib/libpython3.9.so.1.0)
./usr/python39/bin/python3.9: /lib64/libc.so.6: version `GLIBC_2.27' not found (required by /opt/splunk-soar/usr/python39/bin/../lib/libpython3.9.so.1.0)
./usr/python39/bin/python3.9: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by /opt/splunk-soar/usr/python39/bin/../lib/libpython3.9.so.1.0)

I tried to install it on CentOS 8 and got another error saying 'Unable to read CentOS/RHEL version from /etc/redhat-release'. Any suggestions?
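For what it's worth, CentOS 7 ships glibc 2.17, while the bundled Python above needs symbols up to GLIBC_2.28 (first shipped with RHEL/CentOS 8), which is why every one of those versions is reported missing. A quick diagnostic sketch to confirm what the host provides:

# Print the glibc version the system linker uses
ldd --version | head -n 1
# Or ask the libc shared object directly
/lib64/libc.so.6 | head -n 1

If it reports anything below 2.28, the fix is a newer OS rather than anything inside the script itself.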
Hi @jenkinsta , your solution surely works, but you could also try:

index=data
| lookup adexport.csv UserName AS UserId OUTPUT Title
| table _time UserId Title

Ciao. Giuseppe
Hi @adoumbia , to my knowledge, you can run this command only on Search Heads, not on the Deployer, because the Deployer isn't a component of the cluster; it's only the system that deploys apps to the cluster, and after deployment the cluster runs by itself. Ciao. Giuseppe
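A minimal sketch of where each command belongs, assuming placeholder hostnames and credentials (sh1.example.com stands in for any cluster member):

# Cluster status comes from a member's REST endpoint, so run this on a search head, not the deployer
/opt/splunk/bin/splunk show shcluster-status -auth admin:changeme

# The deployer's only cluster-facing job: push the configuration bundle out to the members
/opt/splunk/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme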
Something is not clear, because it does not return any rows now:

(...)
| stats values(*) as * by joiner
| where ctx_ecid=ecid_d
| eval _time=min(_time)
| timechart span=5min avg(time_taken)

Without the last line with timechart, it returns all expected rows; with it, none.

And one more question. Right now I do stats values(*) as * by joiner, but this returns all fields from both indexes. I suppose it costs a lot to transfer such a big amount of data, especially since each of my indexes has 10-20 fields and thousands of records. In fact, I use only 2-3 fields from each index, so I tried something like:

| stats values(field1) as field1, values(field2) as field2, values(field3) as field3 by joiner (...)

but it does not return rows. Why? How do I modify it to return only the fields I need?
Hello, I am building a new search head cluster. The cluster works fine; however, the deployer throws an error whenever I run "/bin/splunk show shcluster-status" on it. Here is the error I am getting:

"Encountered some errors while trying to obtain shcluster status. Search Head Clustering is not enabled on this node. REST endpoint is not available"

My server.conf looks like the following:

[shclustering]
pass4SymmKey = xxxxxxxxx

Your input would be appreciated.
I am having the same error. I tried your solution; unfortunately, it does not work.
Hi @Ryan.Paredez , can you please check for the account [redacted]? ^ Post edited by @Ryan.Paredez to remove the email from the post. For security and privacy reasons, if you need to share PII, do it privately using the Community Private Message feature.
Got it. I actually renamed the field in the csv and re-uploaded:

| join UserId [| inputlookup adexport.csv | fields UserId Title]

Why did I complicate things? If there is a faster way, let me know.
I'm getting these failures after a prior disk corruption:

ERROR TailReader [1876879 tailreader0] - Ignoring path="/somepath/somefile" due to: BTree::Exception: unexpected resulting offset 0 while on order 1, starting from offset 2056 node offset: 53209432 order: 255 keys: { } children: { 0 }

I thought I had ejected the buckets affected by the corruption; in addition, recent ingestions all go into buckets created after the corruption. What can I do to fix this?
I have an inputlookup called adexport.csv that's big... I am trying to match a field in the lookup, UserName, with the Splunk field UserId. I've tried variations of join and append; my splunk foo is dead:

index=data
| lookup adexport.csv UserName as UserId OUTPUT UserId Title
| table _time UserId Title
"but i'm asking if there is a default fields related to microservices in Splunk"

I understand that it is tempting to view Splunk as a unique data source. But in reality, Splunk data is what you collect in your business. Volunteers here have zero visibility into which fields are available in your_sourcetype that may or may not be related to microservices. In simple terms, no. There is no such thing as a default field related to anything other than time; host, source, and sourcetype are usually mandatory in most deployments. You need to ask whoever is writing logs in your_sourcetype how to identify a microservice. They may have already put it in a key-value pair, using either a delimiter or a structured format such as JSON. Even if they haven't, Splunk can easily extract it as long as it is present in the data. However, Splunk itself cannot tell you where your developers placed such information. As @PickleRick suggested, you can also show some raw events (anonymized as needed) for volunteers to inspect and speculate. Still, the best option is to also ask your developers to identify the information themselves.