All Posts


Hi, in the official compatibility matrix there is no column for Indexer 8.0.x anymore, as it is no longer supported. https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers   Does anyone know up to which version the Universal Forwarder is compatible with an 8.0.x Indexer (with an 8.0.x Heavy Forwarder in front)?
Not much to go on here - how are your forwarders configured? Have they ever worked? Do you have network connectivity between your forwarders and indexers? Are there any errors or other messages in the logs where your forwarders are running? What have you looked at to try and determine the root cause?
Hi, I'd use SEDCMD in props.conf. You can find more details in the props.conf.spec. This is used for anonymization, but it should also work for your use case. If you want detailed steps to set it up, you can follow this guide: Anonymize data with a sed script. smurf
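For illustration, a minimal props.conf sketch, assuming a sourcetype named "my_sourcetype" and a credit-card-like pattern to mask (both are placeholders to adapt to your data):

[my_sourcetype]
# Rewrite 16-digit card-style numbers to a masked value at parse time
SEDCMD-mask_numbers = s/\d{4}-\d{4}-\d{4}-\d{4}/XXXX-XXXX-XXXX-XXXX/g

This stanza belongs on the instance that parses the data (indexer or heavy forwarder).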
How is the _time field stored in your lookup? If it is a string, then you may need to use the strptime() function to parse it into an epoch time for use in the chart.
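For example, a minimal sketch, assuming the lookup's _time is a string like "2024-01-31 12:00:00" (the lookup name and format string are placeholders for whatever your lookup actually stores):

| inputlookup my_lookup.csv
| eval _time=strptime(_time, "%Y-%m-%d %H:%M:%S")
| timechart span=1h count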
Hello, We ingest logs from another vendor into Splunk; each event contains a "score" field which is predetermined by the 3rd party, ranging from 0 - 100. Is there a way to add that field value to the risk object score instead of a static risk score in the Risk Analysis adaptive response? I have been looking at using the Risk Factor Editor but can't see a way other than setting the static value in the adaptive response to 100 and then creating 100 risk factors like this: if('score'="10",0.1,1) if('score'="11",0.11,1) if('score'="12",0.12,1) and so on. Thanks
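For reference, a minimal sketch of the arithmetic those 100 if() clauses approximate, assuming the score field is numeric and that an eval-style expression can be used wherever the factor is computed (an assumption, not a confirmed Risk Factor Editor capability):

| eval risk_factor='score'/100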
Hi, This might not be the answer you are looking for, but a better practice for your use case would be to use summary indexing. You would do basically the same as you do with the lookup but use an index instead. With this, you would be able to search your data as you would any other index.  smurf
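As a rough sketch of that approach, assuming a scheduled search and a summary index named "my_summary" (the base search and index name are placeholders):

index=web sourcetype=access_combined
| timechart span=1h count BY status
| collect index=my_summary

Later you can chart beyond the raw data's retention with a plain search such as: index=my_summary | timechart span=1d count BY status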
That's normal and is just like on-prem.
We use the ansible-role-for-splunk project found on GitHub: https://github.com/splunk/ansible-role-for-splunk Now we want to install third-party apps from Splunkbase. The framework seems to rely on all Splunk apps being available from a git repository. How are third-party apps such as "Splunk Add-on for Amazon Web Services (AWS)" supposed to be installed, unless they are extracted to a custom git repository first?
I had an issue with storage.  I was at another site for 2 weeks and we reached the max limit on our drive.  I had to reprovision in VMware, and while it was out of storage we had issues; I can't remember the error message, but it was related to storage.  I fixed the storage issue and rebooted, had to reset my certificate, and everything looked fine.  A day later we started getting the license issue.  I read the articles in the community but didn't fully understand.  I think it's polling the environment for the time that my storage limits were reached? It's been 4 days with us being over the licensing limit.  Looking back over the last year, we have never been close to our limits. Any help would be appreciated. 
Thanks so much @yuanliu @bowesmana, both, for the great help.   @yuanliu  After you posted the second query with the results, I was able to catch the difference between your previous query and the last one. I was not getting results because in the stats command I was putting a space between "count" and "eval"; if I do that, it does not get executed. :D Anyway, it's a perfect query for my use case. Much appreciated!
I tried the suggested query, but I am still not able to get the output for the Country and City. Attaching the output image.
Hi Team,  I have got a request to plot a graph of the previous 30 days, but the org has a retention period of 7 days set on the data set.  As a solution, I am pushing data from a query that captures HTTP status into a lookup file. The CSV file consists of the following fields:
1. _time
2. 2xx
3. 4xx
4. 5xx
I have also created a time-based lookup definition, but when I try to plot the graph, the "_time" field is not coming up on the x-axis.  Can you please help with how this can be achieved? 
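For reference, a minimal sketch of charting straight from the lookup, assuming _time is stored as epoch seconds and the lookup is named "http_status_history.csv" (both assumptions; if _time is a string, convert it with strptime() first):

| inputlookup http_status_history.csv
| timechart span=1d sum("2xx") AS "2xx" sum("4xx") AS "4xx" sum("5xx") AS "5xx"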
Splunk Forwarder did not send any data
No info at all?
Hello, I am looking for a reference for the field "rec_type" and what it actually means. I tried searching the Cisco documentation but had no luck. Please share it with me. @halfreeman @dkeck 
Hi @AL3Z, if the target server is managed by the DS, you cannot manually change a conf file; check why the new configuration isn't being pushed. Ciao. Giuseppe
Hi @man03359, the timechart command produces only one output (optionally grouped using the BY clause). If you want more values, you have to use bin and stats:

index=idx-stores-pos sourcetype=GSTR:Adyen:log
| transaction host startswith="Transaction started" maxpause=90s
| search "*Additional Data : key - cardType*"
| eval Store=substr(host,1,7)
| eval Register=substr(host,8,2)
| rex field=_raw "AdyenPaymentResponse.+\scardType;\svalue\s-\s(?<CardType>.+)"
| eval girocard=if((CardType=="girocard"),1,0)
| append [| inputlookup Stores_TimeZones.csv where Store=tkg* ]
| bin span=5m _time
| stats sum(girocard) AS "Girocard" latest(Country) AS Country latest(City) AS City BY _time

Ciao. Giuseppe
Hi All, Hope this finds you well. I have built a pretty simple search query for my dashboard, plotting a line chart graph (for monitoring payments done by different debit/credit card types, e.g., Giro, Mastercard, etc., for every 5 minutes) using the transaction command, then searching for the card type in the log and extracting the value using regex into the field named "CardType".

index=idx-stores-pos sourcetype=GSTR:Adyen:log
| transaction host startswith="Transaction started" maxpause=90s
| search "*Additional Data : key - cardType*"
| eval Store=substr(host,1,7)
| eval Register=substr(host,8,2)
| rex field=_raw "AdyenPaymentResponse.+\scardType;\svalue\s-\s(?<CardType>.+)"
| eval girocard=if((CardType=="girocard"),1,0)
| timechart span=5m sum(girocard) AS "Girocard"

Now I have to modify the query in order to filter it based on Country and Store. The query I am using is:

index=idx-stores-pos sourcetype=GSTR:Adyen:log
| transaction host startswith="Transaction started" maxpause=90s
| search "*Additional Data : key - cardType*"
| eval Store=substr(host,1,7)
| eval Register=substr(host,8,2)
| rex field=_raw "AdyenPaymentResponse.+\scardType;\svalue\s-\s(?<CardType>.+)"
| eval girocard=if((CardType=="girocard"),1,0)
| append [| inputlookup Stores_TimeZones.csv where Store=tkg* ]
| timechart span=5m sum(girocard) AS "Girocard" latest(Country) AS Country latest(City) AS City

I am unable to get the output for Country and City; what am I doing wrong? Please help. Thanks in advance
If you go straight to the sendemail command, it will execute every time; it just might send an empty set of results. You could use the map command to execute a search (in this case, the sendemail one) for each result. Two caveats though:
1. map is considered a risky command, so you need additional permissions to run it (and judging from the fact that you can't define an alert, I assume you might not have those capabilities).
2. The subsearch is called for every result in your pipeline separately, so if you want to just send the whole batch of your main search, you'd need to first combine it into a single row, pass it to the map command, and then "unpack" it again into multiple lines within the subsearch. A bit ugly.
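A rough sketch of that combine-then-map pattern, with placeholder search terms and recipient address:

index=main error
| stats count AS event_count values(_raw) AS events
| map maxsearches=1 search="| makeresults | sendemail to=\"oncall@example.com\" subject=\"$event_count$ errors\" message=\"$events$\""

Here stats collapses everything to one row, so map (and therefore sendemail) runs exactly once.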
@gcusello , The changes made in the DS app's inputs.conf are not being reflected in the local inputs.conf of the app under etc/apps on the host's Splunk forwarder. In this case, can we paste the regex into this app's inputs.conf so that it can work?