All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Good afternoon, I have a Salesforce org in which I need an audit trail to monitor changes to standard and custom object fields, a setup audit trail to monitor changes to the setup of the org, and event monitoring so that I can see who views and does what. I have this need in order to respect compliance and auditing rules, and I need this information to be accessible 24/7 in a database, with the data viewable/retrievable for at least 10 years. I would also like an export capability that would allow me to set up an export to an external database. Does Splunk have such capabilities in its add-ons, and if so, which one? Thank you all,  Paulo
Hello @ITWhisperer, thank you very much for this good explanation. Now I know where to look. Kind regards, Flenwy
Try something like this | stats values('db name') as "db name" by host 'sql instance'
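A run-anywhere sketch of the same idea, using underscored field names to sidestep the quoting of field names containing spaces (the dummy rows are an assumption based on your sample):

| makeresults format=csv data="host,sql_instance,db_name
abc,sql1,db1
abc,sql1,db2
abc,sql2,db123
abc,sql2,db1234"
| stats values(db_name) AS db_name by host sql_instance

Each host/instance pair comes back as one row, with db_name as a multi-value field, which is the grouped layout you described.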
Finding something that doesn't exist is not one of Splunk's strong suits! You can get around this by telling Splunk what to look for, for example

index=* Initialised xxxxxxxxxxxx xxxxxx
| rex "\{consumerName\=\'(MY REGEX)"
| chart count AS Connections by name
| append
    [| makeresults format=csv data="name
Container A
Container B
Container C
Container D"]
| stats count by name
| where count < 2
Hello, Currently my search looks for the list of containers which include the initialised successfully message and lists them. The alert I have set looks at the number of containers under the total connections column and, if it is less than 28, then some of them did not connect successfully. How can I pass the list of containers in my search, compare it with the result produced, and state if the result is missing a container, please? Sorry, I cannot provide the exact search on a public forum, so I am sharing something similar. Example (my example just shows 4 containers): the search should return container A, container B, container C, container D.

Current search:

index=* Initialised xxxxxxxxxxxx xxxxxx
| rex "\{consumerName\=\'(MY REGEX)"
| chart count AS Connections by name
| addcoltotals labelfield="Total Connections"

The current result is

Container_Name    Count    Total_Connections
Container A       1
Container B       1
Container C       1
Container D       1
                           4

How can I tweak the above search to include containers A, B, C and D so that, if container D is missing from the result, the search compares the result with the values passed in the search and states which container is missing as the last line in the above table, i.e. preserves the existing result but also states which container is missing from the result, please? Regards
@Dennis Dedup is often used where stats could be used instead, and I would normally suggest stats as the tool to solve de-duplication, unless you specifically need to dedup and retain data, or where event order is important in the deduplication. I wouldn't say it's useless for large data sets, but doing things with _raw will be inefficient. A note of caution on transaction: memory constraints are controlled through limits.conf (see the [transactions] stanza), so you will not see massive amounts of memory being consumed, but what will happen is that evictions will occur and, as a result, your transactions MAY not reflect reality. I believe the default for maxopentxn (max open transactions) is 5,000 and maxopenevents is 100,000, so in your 24-hour period your 13 billion events work out to around 150,000 events per second. The chances are reasonably high that you will have more than that number of open transactions, so in that case it's a reasonable possibility that you will miss a duplicate because an open transaction has been evicted to make way for a new one. Unless you can prove that the numbers you have reached are in fact correct, which on the size of your data may be challenging, I would treat them with an element of scepticism. I would suggest that you run a shorter time range search with both the sha() technique and the transaction command to see whether they come up with the same results. You should be able to get a good feeling of trust by running each of the searches over a 5-10 minute period.
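For illustration, a minimal sketch of the hash-based duplicate check (the index name is a placeholder, and it assumes a duplicate means an identical _raw):

index=your_index earliest=-10m
| eval event_hash=sha256(_raw)
| stats count AS total_events dc(event_hash) AS distinct_events
| eval possible_duplicates=total_events-distinct_events

If this and the transaction-based search agree over the same short window, that is a good sign the transaction search is not hitting evictions.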
It depends on what your earliest is, how quickly your events are indexed, and what timestamps are used on your events. For example, some systems such as Apache HTTPD often log with a timestamp which represents when the request was received, but the event is only written when the response is sent, so an event for a request which takes more than, say, 5 minutes may not show up in your search if you are only looking back 5 minutes every 5 minutes.
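One common pattern is to delay the window by a fixed allowance for logging and indexing lag; a sketch (the 5-minute offset is an assumption you would tune to your environment):

earliest=-10m@m latest=-5m@m

Run every 5 minutes, these snapped windows tile exactly, so there are no gaps and no overlaps, at the cost of results being 5 minutes behind real time.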
It is AND - which documentation are you referring to?
In the original, you had 9 series and in the second, you have 5. Your aggregation is using median(minsElapsed), so it's quite possible that the median is going to be less than the 33 shown in the first graph. In the first graph, the A* series for Oct 10 appear to be 33, 10 and maybe 6, so if you combine all the values for all of these events, the median is likely to be different, as it's the median of all 3 sets of events rather than the median of the single LOB value.
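As a tiny run-anywhere illustration of why pooling changes the median (the numbers are made up to echo the 33/10/6 in your first graph):

| makeresults format=csv data="LOB,minsElapsed
A1,30
A1,33
A1,36
A2,9
A2,10
A2,11
A3,5
A3,6
A3,7"
| stats median(minsElapsed) AS median_by_LOB by LOB

This gives per-LOB medians of 33, 10 and 6, whereas replacing the last line with | stats median(minsElapsed) AS pooled_median gives 10 for the same nine values.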
Re, I tried removing the timestamp and it works now, but I don't know if it's correct or if I need to optimise it? Regards,
There doesn't appear to be anything wrong with what you are doing (apart from the fact that I don't think you need the quotes around the token in the relative_time function, and the condition is probably superfluous); the issue may be to do with the version of Splunk you are using, as there have been issues with this before, IIRC. Which version are you using?
Hey @Gaikwad, I think this is a custom macro created by you. The default dashboards in the app use a few macros that come shipped with the app itself. You can change the filter from Visible in the app to Created in the app and you'll be able to identify the list of macros that come by default. For example, the first panel, which displays IAM errors, looks for the following macro: aws-security-cloudtrail-service("IAM", $notable$). The macro applies the lookup command and fetches the values from the aws_security_all_eventName lookup file. Below is its definition from the app.

lookup aws_security_all_eventName eventName OUTPUT function
| fillnull value="N/A" function
| search function="$service$"
| eval notable=if(match(eventName, "(^Get*|^List*|^Describe*)"), 0, 1)
| search notable=$notable$

Similarly, the further panels will be using a few other macros to populate themselves. You'll need to check those macro definitions and update them as per your environment. That should help you load the dashboard properly. Thanks, Tejas.
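If it helps, macro definitions can also be listed with a REST search rather than clicking through the UI; a sketch (adjust the title filter to the macros you are interested in):

| rest /servicesNS/-/-/admin/macros
| search title="aws-security*"
| table title definition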
Hello everyone, I'm currently setting up a lot of alarms in Splunk, and a question has arisen regarding what is better in terms of time. We've considered two scenarios:

latest=-5m@m
latest=now

If I run the alarm every 5 minutes, I naturally want to avoid having "gaps" in my search. I think both options will work, but I am not 100% sure. I appreciate any insights on this. Thank you.
It doesn't matter to the Single Value viz (or any other viz for that matter) what the search was; it just visualises the result events appropriately. So, if your search uses sum() or count() and, at the end of it, you specify only one field, the Single Value viz will visualise this.
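For example, this would render in a Single Value viz, since it returns a single row with a single field (the index is just an illustration):

index=_internal earliest=-5m
| stats count AS total

A sum() of some numeric field in place of count here behaves the same way as far as the viz is concerned.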
| eval search=replace(search, "\"", "")
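A run-anywhere sketch of what that eval does (the dummy value is an assumption):

| makeresults
| eval search="say \"hello\""
| eval search=replace(search, "\"", "")

The replace() strips every double quote from the search field, leaving say hello.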
As @gcusello says, stats will count the occurrences easily, but only if they are in a multi-value field, so it depends on how your data is actually represented. The following run-anywhere example uses the lines you gave as a starting point, but your actual data may be different to this.

| makeresults
| eval _raw="Satisfied Conditions: XYZ, ABC, 123, abc
Satisfied Conditions: XYZ, bcd, 123, abc
Satisfied Conditions: bcd, ABC, 123, abc
Satisfied Conditions: XYZ, ABC, 456, abc"
| multikv noheader=t
| rename _raw as Condition
| table Condition
``` The lines above set up some dummy data - possibly similar to your post? ```
``` First, split out the conditions ```
| eval Condition=mvindex(split(Condition,": "),1)
``` Second, split the conditions into a multi-value field ```
| eval Condition=split(Condition,", ")
``` Now stats can count the occurrences of the conditions ```
| stats count by Condition
I want the output in the below format.

Input as below:

host    sql instance    db name
abc     sql1            db1
abc     sql1            db2
abc     sql2            db123
abc     sql2            db1234
xyz     xyzsql1         db11
xyz     xyzsql2         db321
xyz     xyzsql2         db123
xyz     xyzsql2         db1234
www     wwwsql1         db123
www     wwwsql1         db1234

Output as below:

host    sql instance    db name
abc     sql1            db1
                        db2
abc     sql2            db123
                        db1234
xyz     xyzsql1         db11
xyz     xyzsql2         db321
                        db123
                        db1234
www     wwwsql1         db123
                        db1234
Hello, I just tried to remove the entire timestamp before the JSON data and it works. But how can I remove the timestamp for all queries with different timestamps? Regards,
We will try to check it. Thanks for the advice.
The letter Z at the end of 2023-09-30T04:59:59.000Z signifies Zulu time. (Zulu equals UTC for practical purposes.)  All you need to do is strptime(due_at, "%Y-%m-%dT%H:%M:%S.%3N%Z") - note the literal T between the date and the time, matching the value.
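A run-anywhere check of that parse (using the example value from the post; %Z here consumes the trailing Z):

| makeresults
| eval due_at="2023-09-30T04:59:59.000Z"
| eval due_epoch=strptime(due_at, "%Y-%m-%dT%H:%M:%S.%3N%Z")
| eval readable=strftime(due_epoch, "%F %T %Z")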