All Posts

Hello @PickleRick, sorry, I forgot to answer your question. Yes, it's an all-in-one configuration for my Splunk deployment on one machine.
Hello @gcusello, yes, I have one machine running the Splunk server, not a complex deployment, just one search head. So from my understanding, just deploying the TA-ForIndexers will let me have index=notable, notable_summary and risk? I don't want to change any settings for this TA, just a vanilla download. Is there a video guide that shows this? If there is, it would be really helpful! Thank you again!
I have one server with Splunk installed on it. Just to confirm, @PickleRick, installing the Splunk_TA_forIndexers will create those indexes? Also, will any correlation search that has a notable event action get indexed under index=notable? Am I getting this right? Thank you so much for all the help!
Hello, does anyone have experience configuring Splunk DB Connect with an Informix database? Do we need to install the drivers explicitly for this to be configured? If yes, does anyone have a link to where I can download these drivers? I am using a Linux environment. Thanks in advance.
Thanks. OK, yeah, I just had to use the tar -C flag/option to extract into a new directory, to make sure it extracted *ONLY* the archive files there, and then I just packaged it back up normally. So, just create a new dir, and then use the -C flag/option on the tar command to extract. That's the easy fix. Good to go.
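In case it helps anyone else, a rough sketch of what that looks like (the archive and directory names are just placeholders, adjust to your own files):

mkdir extracted_app
tar -xzf my_app.tgz -C extracted_app
# ...fix the contents as needed...
cd extracted_app && tar -czf ../my_app_fixed.tgz . && cd ..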
I want to print the age group with the highest fraud activity by merchant. I found the solution through the query I mentioned earlier; it appears that the age group 19-35 performed the most fraud activities.
It's a bit complicated and can get messy. Remember that as soon as you add the peer to a cluster, it announces all its buckets to the CM and the CM will try to find a way to meet RF/SF across the cluster. Even if a peer is in detention, it still can and will be a _source_ of replication; it won't only be the target of replicated buckets. So it's more complicated than it seems and you'll most probably end up with extra copies of your buckets which you will have to get rid of manually.

It would be easiest to add new nodes to cluster B, add cluster A as search peers to your SH layer, reconfigure your outputs to send to cluster B only and just wait for the data in A to freeze. But if you can't afford that, as you'll still be installing new nodes anyway, you could try to do something else (this doesn't include rolling hot buckets in cluster A if necessary):

1) Install new nodes for cluster B (don't add them to the cluster yet)
2) Find primary copies of buckets in cluster A
3) Copy over only a primary copy of each bucket from cluster A to the new nodes (don't copy all of them onto one indexer - spread them over the new boxes)
4) Put the cluster in maintenance mode
5) Add the new nodes to cluster B
6) Disable maintenance mode, rebalance buckets
7) Decommission cluster A

That _could_ work but I'd never do that in prod before testing in a lab.
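For reference, steps 4-6 map roughly onto these cluster manager CLI commands (a hedged sketch, run on the cluster B manager; assumes a reasonably recent Splunk version):

splunk enable maintenance-mode
# ...add the new nodes (with the copied buckets) to cluster B here...
splunk disable maintenance-mode
splunk rebalance cluster-data -action start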
Ok. "values(fraud)" will always be 1 because you're only searching for the events which have fraud=1. As I said before, the question is a bit ambiguous. If you do your

| stats count by merchant age

(without the values() aggregation, which makes no sense here), it will indeed count your frauds, splitting them for each age-merchant pair. But the question is whether you want this - the biggest merchant-age pair - or whether you want two separate stats, one by age and one by merchant, and want to find two separate maximum values - one for each stats. The former you already have. The latter you can get by doing two separate searches - one with count by age and one with count by merchant (see the sketch below). Getting both values from a single search will be more complicated.
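For the two-separate-stats approach, something along these lines should work (reusing the source and field names from your earlier search):

source="sampleprepared_data.csv" fraud="1" | stats count by age | sort - count | head 1
source="sampleprepared_data.csv" fraud="1" | stats count by merchant | sort - count | head 1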
If your column order is known and does not change, you can define delimiter-based extractions in props.conf for your sourcetype, but then you must explicitly name the fields and their order. Otherwise the only way to handle such a file is to use indexed extractions (which have their own drawbacks). Remember that indexed extractions happen on the initial forwarder!
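A minimal sketch of that delimiter-based extraction, assuming a comma delimiter and made-up sourcetype/field names (the DELIMS/FIELDS part lives in transforms.conf, referenced from props.conf):

# props.conf
[my_csv_sourcetype]
REPORT-delim_fields = my_delim_fields

# transforms.conf
[my_delim_fields]
DELIMS = ","
FIELDS = field1, field2, field3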
Hi, this is doable, but it probably needs some way to recognize which line is the header line, and the position in the file is not that. But as @yuanliu said, it's much better to use INDEXED_EXTRACTIONS=csv and then define HEADER_FIELD_LINE_NUMBER if it doesn't automatically recognize the header line (see the sketch below). You should also put props.conf on your UF to get this to work. Structured Data Header Extraction and configuration r. Ismo
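A minimal props.conf sketch of that, deployed on the UF (the sourcetype name and header line number are assumptions, adjust them to your file):

[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 2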
Hi, what is the issue you are trying to solve? Merging buckets (which is what you are trying to do) between two different indexer clusters is something I really don't propose you do, especially if/when you have the same indexes on both clusters. There will be conflicts with bucket numbering etc. which lead to service interruptions. The best way is to create the missing indexes on cluster B, then update outputs on the UFs currently pointing at cluster A to point to cluster B (see the sketch below), then just disable external receiving on cluster A. After that, decrease the node count in cluster A to the minimum and wait for the data on it to expire. r. Ismo
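Repointing the UFs would roughly be an outputs.conf change like this (the group and server names are placeholders):

# outputs.conf on the UFs currently sending to cluster A
[tcpout]
defaultGroup = clusterB

[tcpout:clusterB]
server = idxB1.example.com:9997, idxB2.example.com:9997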
source="sampleprepared_data.csv" fraud="1" | stats count values(fraud) by age,merchant | sort - count

I have tried this query to aggregate the data by age and merchant and sorted the data in descending order. I feel like something is missing, but I can't figure out what.
1. The question is a bit ambiguous.
2. We don't know your data. Post some (possibly anonymized, but I don't think it's necessary in this case).
3. What have you tried so far and how do the results differ from what you expected?
Hi, I am new to Splunk and trying to gain hands-on experience. I am having trouble searching the data based on this query: "Which age group performed the most fraudulent activities and to what merchant?" Can anyone help me figure out the solution?
Hi, another option is to use e.g. refresh.link.visible etc. from Shared options. r. Ismo
What does splunk list inputstatus show on the UF? It tells you which files it has read and how much. Are you sure that the timestamps are correctly picked from the files? If there is a mismatch between European and US time formats then you must look for those events with some time range other than now. When you are onboarding a new source it's useful to run a real-time search with known hosts / sources over all time. That way you can catch wrongly recognized timestamps.
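For clarity, that command is run on the UF host itself, e.g.:

$SPLUNK_HOME/bin/splunk list inputstatus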
Here is a link to the CIM (Splunk Common Information Model): https://docs.splunk.com/Documentation/CIM/latest/User/Overview. By following it you can create a dashboard / report etc. only once, then just add new data sources and they will show up there.
Or is there a possibility to use a separate index for those events (see the sketch below) and afterwards even wipe out that content? Anyhow, as @PickleRick said, a bucket is removed only after all events inside it have expired. Mixing old and new data (from a timestamp/_time point of view) usually makes this take quite a long time.
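If you go the separate-index route, a hedged indexes.conf sketch (the index name and retention value are just examples):

[temp_backfill]
homePath   = $SPLUNK_DB/temp_backfill/db
coldPath   = $SPLUNK_DB/temp_backfill/colddb
thawedPath = $SPLUNK_DB/temp_backfill/thaweddb
# keep the data only ~30 days; frozen data is deleted by default
frozenTimePeriodInSecs = 2592000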
Maybe you could utilize the priority attribute with those two sources and use the same TRANSFORMS-null attribute with both of them? See details in the previously linked doc; a rough sketch follows below.
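A rough sketch of what that could look like (the source patterns, stanza names and regex are all assumptions for illustration only):

# props.conf
[source::/var/log/app/common/*.log]
priority = 10
TRANSFORMS-null = drop_unwanted

[source::/var/log/app/special/*.log]
priority = 20
TRANSFORMS-null = drop_unwanted

# transforms.conf
[drop_unwanted]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue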
While monitoring with Real User Monitoring, should the performance of the web application deteriorate for any reason, we would like to pause the RUM agent and resume monitoring later based on the situation. We are requesting the Splunk RUM agent API reference documentation that provides the full list of API methods, including pause, resume and other methods.