All Posts

Thanks, I'll try your suggestion. And yes, I agree, I think it's a syntax error. This is the error: "Error in 'EvalCommand': The expression is malformed."
It would help to know the error you received, but I suspect it's a syntax error of some sort. That's because subsearches have to be placed where their results would make semantic sense. IOW, if the subsearch produces a result like (original_user=foo OR original_user=bar), then this makes no sense:

    | eval Name= mvindex((newValue),1) (original_user=foo OR original_user=bar)
    | stats values(*) as *

Try this, instead:

    (index=<my index>) EventType="A" EventType=A
    | rename username as original_user
    | eval Id= mvindex((newValue),0)
    | eval Name= mvindex((newValue),1)
    | search
        [ search index=<my index> <filtering by a string>
        | eval src_email= mvindex((newValue),3)
        | rex field=src_email "(?<original_user>[\w\d\.\-]+\@[\w\d\.]+)"
        | fields original_user
        | format ]
    | stats values(*) as *

Or this similar query for better performance:

    (index=<my index>) EventType="A" EventType=A
        [ search index=<my index> <filtering by a string>
        | eval src_email= mvindex((newValue),3)
        | rex field=src_email "(?<original_user>[\w\d\.\-]+\@[\w\d\.]+)"
        | fields original_user
        | rename original_user as username
        | format ]
    | rename username as original_user
    | eval Id= mvindex((newValue),0)
    | eval Name= mvindex((newValue),1)
    | stats values(*) as *
Hello, I'm doing a detection for an event on the same index with 2 logs. I want to filter events of Event A based on whether the username field exists with the same value in Event B. I tried doing a sub-search but I get errors with the query below; I want to filter Event A by whether there are any events from Event B with the same original_user.

    (index=<my index>) EventType="A" EventType=A
    | rename username as original_user
    | eval Id= mvindex((newValue),0)
    | eval Name= mvindex((newValue),1)
        [ search index=<same index> <filtering by a string>
        | eval src_email= mvindex((newValue),3)
        | rex field=src_email "(?<original_user>[\w\d\.\-]+\@[\w\d\.]+)"
        | fields original_user ]
    | stats values(*) as *

The above query says my eval is malformed. Is there any way to solve it? Append/Join?

I also tested the query inside the sub-search by itself and it works with no issues.
You are great, that worked! Thank you for sharing your knowledge.
Hi, I always have a Makefile which generates deployment-ready xxx.spl files from all of the current client's apps into one directory, plus a combined tar file (a sketch follows below). Those are easy to transfer and use where they are needed. r. Ismo
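A minimal sketch of such a Makefile, assuming each app lives in its own directory under apps/ and output goes to dist/ - the names and layout here are illustrative, not Ismo's actual file (recipe lines must be indented with tabs):

    # Package every app directory under apps/ into its own .spl,
    # then bundle all the .spl files into one combined tarball.
    APPS := $(notdir $(wildcard apps/*))
    SPLS := $(addprefix dist/,$(addsuffix .spl,$(APPS)))

    all: $(SPLS) dist/all_apps.tar.gz

    dist:
    	mkdir -p dist

    # An .spl file is just a gzipped tarball with the app directory at its root
    dist/%.spl: apps/% | dist
    	tar -czf $@ -C apps $*

    dist/all_apps.tar.gz: $(SPLS)
    	tar -czf $@ -C dist $(notdir $^)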
What data do you have, and what search do you have so far?
While Informix is not officially supported - https://docs.splunk.com/Documentation/DBX/3.17.2/DeployDBX/Installdatabasedrivers - you can try to configure it with the proper JDBC drivers for your database, but you have to look for them yourself. It might work.
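A hedged sketch of what that might look like for DB Connect 3.x - the driver class and URL format below are the standard Informix JDBC values, but verify them (and the port) against IBM's documentation for your version:

    # 1) Copy the Informix JDBC driver jar (from IBM) into:
    #    $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers/
    # 2) Declare a connection type in
    #    $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/db_connection_types.conf:

    [informix]
    displayName = IBM Informix
    serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
    jdbcDriverClass = com.informix.jdbc.IfxDriver
    jdbcUrlFormat = jdbc:informix-sqli://<host>:<port>/<database>:INFORMIXSERVER=<server>
    port = 9088

    # 3) Restart Splunk; the new connection type should then be selectable
    #    when you create a connection in DB Connect.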
No, in an all-in-one setup you don't have to separately install the for_indexers add-on. It's used if you have a multi-tier environment, because there you install the main ES app on the search head(s), which means you don't have the indexes defined on the indexer tier. But in an all-in-one installation you install the ES app on the component working as both indexer and search head, so the indexes should be created during installation. The indexes themselves (the data directories) should be in the same place as all the other indexes, so by default that would be /opt/splunk/var/lib/splunk. If you want to see where the configs that define the notable index are, run

    splunk btool indexes list notable --debug
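For reference, that btool invocation should point you at a stanza along these lines (the paths shown are the defaults, purely illustrative):

    [notable]
    homePath   = $SPLUNK_DB/notable/db
    coldPath   = $SPLUNK_DB/notable/colddb
    thawedPath = $SPLUNK_DB/notable/thawedpath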
Also to ask this: all the indexes referred to in this doc, https://docs.splunk.com/Documentation/ES/7.3.2/Install/Indexes, are listed under a specific app. Are these apps installed when I install ES, and after installing Splunk_TA_ForIndexers, will I have access to all the indexes listed there? How are the apps associated there installed on my all-in-one instance - are the apps above installed when I install ES, and the indexes installed when I install the TA? This just has my head confused a bit. Thank you for answering all this!
Hello, new to Splunk. I am trying to exclude certain applications in an SPL search, specifically by app name. What field would I need to consider in order to apply the '!=' operator plus the app name? Thanks again.
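Assuming the events carry a field named app (check the fields sidebar or run | fieldsummary to confirm the real field name in your data), a minimal sketch:

    index=<your index> app!="Unwanted App"

or, to exclude several apps at once:

    index=<your index> NOT app IN ("App One", "App Two")

One subtlety worth knowing: app!="x" only matches events where the app field exists with a different value, while NOT app="x" also keeps events where the field is missing entirely.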
Hello @PickleRick, sorry, I forgot to answer your question. Yes, it's an all-in-one config for my Splunk deployment on one machine.
Hello @gcusello, yes, I have one machine running the Splunk server - not a complex deployment, one search head. So from my understanding, just deploying the TA-ForIndexers will let me have index=notable, notable_summary and risk? I don't want to change any settings for this TA, just a vanilla download. Is there any video guide that shows this? If there is, it would be really helpful! Thank you again!
I have one server with an installation of Splunk on it. Just to confirm @PickleRick, installing the Splunk_TA_ForIndexers will have those indexes installed? Also, any correlation search that has a notable event action will get indexed under index=notable? Am I getting this right? Thank you so much for all the help!
Hello, does anyone have experience configuring Splunk DB Connect with an Informix database? Do we need to install the drivers explicitly for this to be configured? If yes, does anyone have the link where I can download these drivers? I am using a Linux environment. Thanks in advance.
Thanks. OK, yeah, I just had to use the tar -C flag/option to extract to a new directory, to make sure it extracted *ONLY* the archive files there, and then I just zip it back up normally. So, just create a new dir, and then use the -C flag/option on the tar command to extract. That's the easy fix. Good to go.
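In command form, roughly (the file names are hypothetical):

    mkdir fixed                           # fresh directory so nothing spills into the CWD
    tar -xzf myapp.spl -C fixed           # extract only into ./fixed
    # ...inspect or fix the contents, then repackage:
    tar -czf myapp_fixed.spl -C fixed .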
I want to print the age group with the highest fraud activity by a merchant. I found the solution through the query I mentioned earlier: it appears that the age group 19-35 performed the most fraud activities.
It's a bit complicated and can get messy. Remember that as soon as you add the peer to a cluster, it announces all its buckets to the CM, and the CM will try to find a way to meet RF/SF across the cluster. Even if a peer is in detention, it still can and will be a _source_ of replication; it just won't be the target of replicated buckets. So it's more complicated than it seems, and you'll most probably end up with extra copies of your buckets which you will have to get rid of manually.

It would be easiest to add new nodes to cluster B, add cluster A as search peers to your SH layer, reconfigure your outputs to send to cluster B only, and just wait for the data in A to freeze. But if you can't afford that, as you'll still be installing new nodes anyway, you could try something else (this doesn't include rolling hot buckets in cluster A if necessary):

1) Install new nodes for cluster B (don't add them to the cluster yet)
2) Find the primary copies of buckets in cluster A
3) Copy over only a primary copy of each bucket from cluster A to the new nodes (don't copy all of them onto one indexer - spread them over the new boxes)
4) Put the cluster in maintenance mode
5) Add the new nodes to cluster B
6) Disable maintenance mode, rebalance buckets
7) Decommission cluster A

That _could_ work, but I'd never do that in prod before testing it in a lab.
Ok. "values(fraud)" will always be 1 because you're only searching for the events which have fraud=1. As I said before, the question is a bit ambiguous. If you do your (without the values() aggregat... See more...
Ok. "values(fraud)" will always be 1 because you're only searching for the events which have fraud=1. As I said before, the question is a bit ambiguous. If you do your (without the values() aggregation which makes no sense. | stats count by merchant age it will indeed count your frauds splitting it for each age-merchant pair. But the question is whether you want this - the biggest merchant-age pair or if you want two separate stats one by age and one by merchant and want to find two separate maximum values - one for each stats. The former you already have. The latter you can get by doing two separate searches - one with count by age and onewith count by merchant. Getting both values from a single search will be more complicated.
If your column order is known and does not change, you can define delimiter-based extractions in props.conf for your sourcetype. But then you must explicitly name the fields and their order. Otherwise the only way to handle such a file is using indexed extractions (which have their own drawbacks). Remember that indexed extractions happen on the initial forwarder!
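A minimal sketch of such an extraction (the sourcetype and field names are made up; the FIELDS list must match your file's actual column order):

    # props.conf
    [my_delimited_sourcetype]
    REPORT-delim_fields = my_delim_extraction

    # transforms.conf
    [my_delim_extraction]
    DELIMS = ","
    FIELDS = "timestamp", "user", "action", "status"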
Hi, this is doable, but it probably needs some way to recognize which line is the header line, and position in the file is not that. But as @yuanliu said, it's much better to use INDEXED_EXTRACTIONS=csv and then define HEADER_FIELD_LINE_NUMBER if it doesn't automatically recognize the header line (see the sketch below). You should put props.conf on your UF as well to get this to work. Structured Data Header Extraction and configuration r. Ismo
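For example, in props.conf on the UF (the sourcetype name and line number are illustrative):

    [my_csv_sourcetype]
    INDEXED_EXTRACTIONS = csv
    # Only needed if the header line is not detected automatically,
    # e.g. when it sits on line 3 of the file:
    HEADER_FIELD_LINE_NUMBER = 3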