All Posts


What data do you have, and what search do you have so far?
While Informix is not officially supported - https://docs.splunk.com/Documentation/DBX/3.17.2/DeployDBX/Installdatabasedrivers - you can try to configure it with the proper JDBC drivers for your DB, but you have to find them yourself. It might work.
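For illustration, a connection-type stanza for such an attempt might look like this in DB Connect's db_connection_types.conf (the stanza, driver class, and URL format follow IBM's standard Informix JDBC driver and are an unsupported assumption, not documented Splunk configuration):

  # db_connection_types.conf in the DB Connect app (illustrative only;
  # Informix is not an officially supported connection type)
  [informix]
  displayName = Informix
  serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
  jdbcUrlFormat = jdbc:informix-sqli://<host>:<port>/<database>
  jdbcDriverClass = com.informix.jdbc.IfxDriver

The Informix JDBC driver jar itself would then go into the DB Connect app's drivers directory.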
No, in an all-in-one setup you don't have to separately install the for_indexers add-on. It's used if you have a multi-tier environment, because there you install the main ES app on the search head(s), which means you don't have the indexes defined on the indexer tier. But in an all-in-one installation you install the ES app on the component working as both indexer and search head, so the indexes should be created during installation. The indexes themselves (the data directories) should be in the same place as all the other indexes, so by default that would be /opt/splunk/var/lib/splunk. If you want to see where the configs that define the notable index are, run

  splunk btool indexes list notable --debug
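For reference, a hypothetical run of that command on an all-in-one box could look roughly like this (the app name and paths are illustrative; the exact app that defines the index depends on your ES version):

  $ /opt/splunk/bin/splunk btool indexes list notable --debug
  /opt/splunk/etc/apps/SA-ThreatIntelligence/default/indexes.conf  [notable]
  /opt/splunk/etc/apps/SA-ThreatIntelligence/default/indexes.conf  homePath = $SPLUNK_DB/notable/db
  /opt/splunk/etc/apps/SA-ThreatIntelligence/default/indexes.conf  coldPath = $SPLUNK_DB/notable/colddb
  /opt/splunk/etc/apps/SA-ThreatIntelligence/default/indexes.conf  thawedPath = $SPLUNK_DB/notable/thaweddb

The left column tells you which app's indexes.conf contributes each setting.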
Also to ask this: all the indexes referred to in this doc, https://docs.splunk.com/Documentation/ES/7.3.2/Install/Indexes, are listed under specific apps. Are those apps installed when I install ES, and after installing Splunk_TA_ForIndexers, will I have access to all the indexes listed there? How are the apps associated there installed on my all-in-one instance? Are the apps installed when I install ES, and the indexes created when I install the TA? This just has my head a bit confused. Thank you for answering all this!
Hello, new to Splunk. I am trying to exclude certain applications in an SPL search, specifically by app name. What field would I need to consider in order to apply the '!=' operator plus the app name? Thanks again.
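For illustration, a minimal sketch of the syntax, assuming the events carry a field named app (the actual field name depends on your data; check the fields sidebar in Search):

  index=your_index app!="ExcludedApp"

Note that app!="ExcludedApp" only keeps events where the field exists and has a different value, while NOT app="ExcludedApp" also keeps events with no app field at all; for several apps, NOT app IN ("App1", "App2") works too.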
Hello @PickleRick, sorry, I forgot to answer your question. Yes, it's an all-in-one config for my Splunk deployment on one machine.
Hello @gcusello, yes, I have one machine running the Splunk server, not a complex deployment, one search head. So from my understanding, just deploying the TA-ForIndexers will let me have index=notable, notable_summary and risk? I don't want to change any settings for this TA, just a vanilla download. Is there any video guide that shows this? If there is, it would be really helpful! Thank you again!
I have one server with Splunk installed on it. Just to confirm, @PickleRick: installing Splunk_TA_ForIndexers will have those indexes installed? Also, will any correlation search that has a notable event action get indexed under index=notable? Am I getting this right? Thank you so much for all the help!
Hello, does anyone have experience configuring Splunk DB Connect with an Informix database? Do we need to install the drivers explicitly for this to be configured? If yes, does anyone have the link where I can download these drivers? I am using a Linux environment. Thanks in advance.
Thanks. OK, yeah, I just had to use the tar -C flag/option to extract into a new directory, to make sure it extracted *ONLY* the archive files there, and then I just zip it back up normally. So, just create a new dir, and then use the -C flag/option on the tar command to extract. That's the easy fix. Good to go.
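A minimal sketch of that sequence, with placeholder archive and directory names:

  # extract the archive's contents into a fresh directory only
  mkdir extracted
  tar -xzf myapp.tgz -C extracted
  # make your changes, then repack from inside the new directory
  tar -czf myapp_fixed.tgz -C extracted .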
I want to print the age group with the highest fraud activity by merchant. I found the solution through the query that I mentioned earlier; it appears that the age group 19-35 performed the most fraud activities.
It's a bit complicated and can get messy. Remember that as soon as you add a peer to a cluster, it announces all its buckets to the CM, and the CM will try to find a way to meet RF/SF across the cluster. Even if a peer is in detention, it still can and will be a _source_ of replication; it won't only be the target of replicated buckets. So it's more complicated than it seems, and you'll most probably end up with extra copies of your buckets which you will have to get rid of manually. It would be easiest to add new nodes to cluster B, add cluster A as search peers to your SH layer, reconfigure your outputs to send to cluster B only, and just wait for the data in A to freeze. But if you can't afford that, as you'll still be installing new nodes anyway, you could try something else (this doesn't include rolling hot buckets in cluster A if necessary):
1) Install new nodes for cluster B (don't add them to the cluster yet).
2) Find the primary copies of the buckets in cluster A.
3) Copy over only a primary copy of each bucket from cluster A to the new nodes (don't copy all of them onto one indexer - spread them over the new boxes).
4) Put the cluster in maintenance mode.
5) Add the new nodes to cluster B.
6) Disable maintenance mode, rebalance buckets.
7) Decommission cluster A.
That _could_ work, but I'd never do that in prod before testing it in a lab.
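For steps 4-6, the cluster manager CLI would look roughly like this (a sketch; verify the command names against your Splunk version before relying on them):

  # on cluster B's cluster manager
  splunk enable maintenance-mode
  # ...add the new peers to the cluster, then...
  splunk disable maintenance-mode
  splunk rebalance cluster-data -action start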
Ok. "values(fraud)" will always be 1 because you're only searching for the events which have fraud=1. As I said before, the question is a bit ambiguous. If you do your search (without the values() aggregation, which makes no sense)

  | stats count by merchant age

it will indeed count your frauds, splitting them for each age-merchant pair. But the question is whether you want this - the biggest merchant-age pair - or whether you want two separate stats, one by age and one by merchant, and want to find two separate maximum values - one for each stats. The former you already have. The latter you can get by doing two separate searches - one with count by age and one with count by merchant. Getting both values from a single search will be more complicated.
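A sketch of the two-separate-searches variant, reusing the source from the question (head 1 keeps only the top row of each sorted result):

  source="sampleprepared_data.csv" fraud="1"
  | stats count by age
  | sort - count
  | head 1

and, in a second search:

  source="sampleprepared_data.csv" fraud="1"
  | stats count by merchant
  | sort - count
  | head 1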
If your column order is known and does not change, you can define delimiter-based extractions in props.conf for your sourcetype. But then you must explicitly name the fields and their order. Otherwise the only way to handle such a file is using indexed extractions (which have their own drawbacks). Remember that indexed extractions happen on the initial forwarder!
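For illustration, a delimiter-based search-time extraction could be defined like this (the sourcetype, transform, and field names are placeholders):

  # props.conf
  [my_csv_sourcetype]
  REPORT-delim_fields = my_csv_fields

  # transforms.conf
  [my_csv_fields]
  DELIMS = ","
  FIELDS = "host_name", "status", "response_time"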
Hi, this is doable, but it probably needs some way to recognize which line is the header line, and position in the file is not that. But as @yuanliu said, it's much better to use INDEXED_EXTRACTIONS=csv and then define HEADER_FIELD_LINE_NUMBER if it doesn't automatically recognize that header line. You should put props.conf also on your UF to get this to work. See Structured Data Header Extraction and configuration. r. Ismo
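A minimal sketch of that props.conf, assuming the header sits on line 3 of the file (the sourcetype name and line number are placeholders):

  # props.conf, deployed to the UF that monitors the file
  [my_csv_sourcetype]
  INDEXED_EXTRACTIONS = csv
  HEADER_FIELD_LINE_NUMBER = 3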
Hi, what is the issue you are trying to solve? Merging buckets (that is what you are trying to do) between two different indexer clusters is something I really don't propose you do, especially if/when you have the same indexes on both clusters. There will be conflicts with bucket numbering etc., which lead to service interruptions. The best way is to create the missing indexes on cluster B, then update outputs on the UFs of cluster A to point to cluster B. Then just disable external receiving on cluster A. After that, decrease the node count in cluster A to the minimum and wait until the data on it has expired. r. Ismo
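The forwarder-side change would be roughly this outputs.conf (group name, hostnames, and ports are placeholders):

  # outputs.conf on the UFs currently sending to cluster A
  [tcpout]
  defaultGroup = cluster_b

  [tcpout:cluster_b]
  server = idxb1.example.com:9997, idxb2.example.com:9997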
source="sampleprepared_data.csv" fraud="1" | stats count values(fraud) by age,merchant | sort - count

I have tried this query to aggregate the data by age and merchant and sorted the data in descending order. I feel like something is missing; I can't figure out what.
1. The question is a bit ambiguous.
2. We don't know your data. Post some (possibly anonymized, but I don't think it's necessary in this case).
3. What have you tried so far, and how do the results differ from what you expected?
Hi, I am new to Splunk and trying to gain hands-on experience. I am facing trouble searching the data based on this question: "Which age group performed the most fraudulent activities and to what merchant?" Can anyone help me figure out the solution?
Hi, another option is to use e.g. refresh.link.visible etc. from the Shared options. r. Ismo
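In Simple XML, shared options like that are set per visualization; a minimal sketch using the option named above (the boolean value and the surrounding panel are assumptions):

  <panel>
    <table>
      <search>
        <query>index=_internal | head 10</query>
      </search>
      <option name="refresh.link.visible">false</option>
    </table>
  </panel>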