Hi, is there a way to change which forwarder versions the Cloud Monitoring Console treats as in support? Currently, v9.1.1 is showing as out of support. Many thanks
The _time field looks something like "2023-09-06T18:30:00.000+00:00" in the lookup CSV, whereas in the results generated by the query it looks like "2023-09-06 18:30:00". I tried converting the _time field as suggested in one of the solutions you provided earlier (Solved: Re: convert date to epoch - Splunk Community), but no luck. Can you please help with the query?
Hi All, I have two CSV files. File1.csv -> id, operation_name, session_id. File2.csv -> id, error, operation_name. I want to list the entries based on session_id, like -> id, operation_name, session_id, error. Basically, all the entries from File1.csv for the session_id, plus the errors from File2.csv. Could you please help with how to combine these CSVs? Note: I am storing the data in CSVs via outputlookup because I couldn't find a way to search both sets in a single query, so I am trying to join from the CSVs.
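For what it's worth, a rough sketch of the combination described above (assuming `id` is the key shared by both files, since File2.csv has no session_id column):

```
| inputlookup File1.csv
| join type=left id
    [ | inputlookup File2.csv
      | fields id error ]
| table id operation_name session_id error
```

Depending on the file sizes, defining File2.csv as a lookup and using the `lookup` command instead of `join` may scale better, since `join` is subject to subsearch limits.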
Hi, in the official compatibility matrix there is no longer a column for Indexer 8.0.x, as it is no longer supported. https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers Does anyone know up to which version the Universal Forwarder is compatible with an 8.0.x indexer (with an 8.0.x heavy forwarder in front)?
Not much to go on here. How are your forwarders configured? Have they ever worked? Do you have network connectivity between your forwarders and indexers? Are there any errors or other messages in the logs where your forwarders are running? What have you looked at to try to determine the root cause?
Hi, I'd use SEDCMD in props.conf. You can find more details in the props.conf.spec. This is used for anonymization, but it should also work for your use case. If you want detailed steps to set it up, you can follow this guide: Anonymize data with a sed script. smurf
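For illustration, a minimal props.conf sketch of that approach (the sourcetype name and sed expression are placeholders, not from the thread):

```ini
# props.conf on the parsing tier (indexer or heavy forwarder)
[my_sourcetype]
# rewrite every occurrence of "secret" to "#####" before the event is indexed
SEDCMD-mask_secret = s/secret/#####/g
```

Note that SEDCMD applies at parse time, so it only affects events indexed after the change takes effect.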
How is the _time field stored in your lookup? If it is a string, then you may need to use the strptime() function to parse it into an epoch time for use in the chart.
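For example, assuming the lookup stores _time as the ISO 8601 string quoted above (the lookup name here is a placeholder), something along these lines should produce an epoch _time that timechart can use:

```
| inputlookup http_status.csv
| eval _time=strptime(_time, "%Y-%m-%dT%H:%M:%S.%3N%:z")
| timechart span=1d sum(*) as *
```

The `%:z` token matches a colon-separated UTC offset such as `+00:00`; adjust the format string if your timestamps differ.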
Hello, We ingest logs from another vendor into Splunk, and each event contains a "score" field predetermined by the third party, ranging from 0 to 100. Is there a way to use that field value as the risk object score, instead of the static risk score in the Risk Analysis adaptive response? I have been looking at the risk factor editor, but I can't see a way other than setting the static value in the adaptive response to 100 and then creating 100 risk factors like this: if('score'="10",0.1,1), if('score'="11",0.11,1), if('score'="12",0.12,1), and so on. Thanks
Hi, This might not be the answer you are looking for, but a better practice for your use case would be to use summary indexing. You would do basically the same as you do with the lookup, but use an index instead. With this, you would be able to search your data as you would any other index. smurf
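As a rough sketch of that approach (the index, sourcetypes, and field names below are assumptions, not from the thread): a scheduled search writes hourly counts into a summary index with `collect`, and reports then run against the summary for any range within its longer retention:

```
index=web sourcetype=access_combined
| bin _time span=1h
| stats count by _time status
| collect index=http_summary
```

A later report could then be something like `index=http_summary | timechart span=1d sum(count) by status`, without being constrained by the 7-day retention of the raw data.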
We use the ansible-role-for-splunk project found on GitHub: https://github.com/splunk/ansible-role-for-splunk Now we want to install third-party apps from Splunkbase. The framework seems to rely on all Splunk apps being available from a Git repository. How are third-party apps such as the "Splunk Add-on for Amazon Web Services (AWS)" supposed to be installed, unless they are extracted into a custom Git repository first?
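Outside that framework, one workaround sketch is a plain Ansible task that unpacks a package downloaded manually from Splunkbase; every path, filename, and the handler name below are assumptions, not part of ansible-role-for-splunk:

```yaml
# Hypothetical task: deploy a pre-downloaded Splunkbase .tgz to a Splunk instance.
- name: Extract Splunk Add-on for AWS into the apps directory
  ansible.builtin.unarchive:
    src: files/splunk-add-on-for-amazon-web-services.tgz  # fetched from Splunkbase by hand
    dest: /opt/splunk/etc/apps                            # assumed SPLUNK_HOME/etc/apps
    remote_src: no
    owner: splunk
    group: splunk
  notify: restart splunk  # assumes a matching handler exists in the playbook
```

This sidesteps the Git requirement, at the cost of tracking app versions yourself.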
I had an issue with storage. I was at another site for 2 weeks and we reached the max limit on our drive. I had to reprovision in VMware, and while it was out of storage we had issues; I can't remember the error message, but it was related to storage. I fixed the storage issue, rebooted, had to reset my certificate, and everything looked fine. A day later we started getting the license issue. I read the articles in the community but didn't fully understand them. I think it's polling the environment for the period when my storage limits were reached? It's been 4 days of us being over the licensing limit. Looking back over the last year, we have never been close to our limits. Any help would be appreciated.
Thanks so much @yuanliu @bowesmana, both, for the great help. @yuanliu, after you posted the second query with the results, I was able to catch the difference between your previous query and the last one. I was not getting results because in the stats command I was putting a space between "count" and "eval"; if I do that, it does not execute. :D Anyway, it's a perfect query for my use case. Much appreciated!
Hi Team, I have got a request to plot a graph of the previous 30 days, but the org has a retention period of 7 days set on the dataset. As a workaround, I am pushing data from a query that captures HTTP status codes into a lookup file. The CSV file consists of the following fields: 1. _time 2. 2xx 3. 4xx 4. 5xx Also, I have created a time-based lookup definition. But when I try to plot the graph, the "_time" field is not coming up on the x-axis. Can you please help with how this can be achieved?
Hello, I am looking for a reference for the field "rec_type" and what it actually means. I tried searching the Cisco documentation but had no luck. Could you please share the link? @halfreeman @dkeck
Hi @AL3Z, if the target server is managed by the DS, you shouldn't manually change a conf file; instead, check why the new configuration isn't being pushed. Ciao. Giuseppe