All Posts

Hi, I have created two different timecharts like:

Timechart 1 (cable connection on/off):
index=cisco_asa dest_interface=outside | timechart span=10m dc(count) by count

Timechart 2 (logged-in users listed):
host=10.1.1.1 src_sg_info=* | timechart span=10m dc(src_sg_info) by src_sg_info

Individually the display is perfect, but it would be even better if we could combine them into one graph with common timestamps. I searched through the Splunk documentation and also tried different setups, without success. I hope someone can help me with this.
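(A hedged sketch of one possible approach, not a confirmed answer for this thread: if both searches use the same span and the same time range, appendcols can line the two timecharts up on common timestamps. The series names below are illustrative.)

index=cisco_asa dest_interface=outside
| timechart span=10m count AS connections
| appendcols
    [ search host=10.1.1.1 src_sg_info=*
      | timechart span=10m dc(src_sg_info) AS distinct_users ]

Note that appendcols pastes columns together positionally, so both result sets must cover identical time buckets; otherwise rows can misalign.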
Hi Ryan, Cansel, Can I please set up a call to discuss this further? I can't explain without images and screen sharing, and it's better if we talk on a Teams call. Thanks & Regards, Deepak
index=notable | top limit=15 app | fields - percent

Use a chart overlay to have a second y-axis.
Hello, I have one more beginner's question regarding reports and dashboards. I am trying to build an overview of the most used services, using this query:

index=notable | top limit=15 app

When I put this report into Dashboard Studio, both the count and the percentage appear. I would like to remove the percentages completely from the chart. Can you tell me how to do it, please? And one more option just coming to my mind: if I wanted to use both the count and the percentages, is it possible to adapt the x axis so that it uses a separate scale, like 0-100 percent, for the percentages?
What is your question?  What problem are you trying to solve?  How have you tried to solve it so far?
Good afternoon, I have a Salesforce org for which I need an audit trail to monitor changes in standard and custom object fields, a setup audit trail to monitor changes to the setup of the org, and event monitoring so that I can see who views and does what. I have this need in order to respect compliance and auditing rules, so this information must be accessible 24/7 in a database, and the data must be viewable/retrievable for at least 10 years. I would also like an export capability that would allow me to set up an export to an external database. Does Splunk have such capabilities in its add-ons? And if so, which one? Thank you all, Paulo
Hello @ITWhisperer, thank you a lot for this good explanation. Now I know where to look. Kind regards, Flenwy
Try something like this

| stats values("db name") as "db name" by host, "sql instance"

(Field names containing spaces need double quotes in stats; single quotes are for field references inside eval expressions.)
Finding something that doesn't exist is not one of Splunk's strong suits! You can get around this by telling Splunk what to look for, for example

index=* Initialised xxxxxxxxxxxx xxxxxx
| rex "\{consumerName\=\'(MY REGEX)"
| chart count AS Connections by name
| append
    [| makeresults format=csv data="name
Container A
Container B
Container C
Container D"]
| stats count by name
| where count < 2

Since the append adds each expected container name once, any container that also appears in your real events ends up with a count of 2, so count < 2 leaves only the missing ones.
Hello, currently my search looks for the list of containers that includes the "initialised successfully" message and lists them. The alert I have set looks at the number of containers under the Total Connections column; if it is less than 28, then some of them did not connect successfully.

How can I pass the list of containers into my search, compare it with the result produced, and state whether the result is missing a container, please?

Sorry, I cannot provide the exact search on a public forum, so I am sharing something similar.

Example (my example just shows 4 containers): the search should return Container A, Container B, Container C, Container D.

Current search:

index=* Initialised xxxxxxxxxxxx xxxxxx | rex "\{consumerName\=\'(MY REGEX)" | chart count AS Connections by name | addcoltotals labelfield="Total Connections"

The current result is

Container_Name | Count | Total_Connections
Container A    | 1     |
Container B    | 1     |
Container C    | 1     |
Container D    | 1     |
               |       | 4

How can I tweak the above search to include Containers A, B, C and D, so that if (say) Container D is missing from the result, the search compares the result with the values passed in and states which container is missing as the last line in the above table, i.e. preserves the existing result but also states which container is missing?

Regards
@Dennis Dedup is often used where stats could be used instead, and I would normally suggest stats as the tool for de-duplication, unless you specifically need to dedup and retain data, or event order is important in the deduplication. I wouldn't say dedup is useless for large data sets, but doing things with _raw will be inefficient.

A note of caution on transaction: memory constraints are controlled through limits.conf (see the [transactions] stanza), so you will not see massive amounts of memory being consumed; instead, evictions will occur and, as a result, your transactions MAY not reflect reality. I believe the defaults are maxopentxn (max open transactions) = 5,000 and maxopenevents = 100,000, and over a 24-hour period your 13 billion events work out to roughly 150,000 events per second. The chances are reasonably high that you will have more open transactions than that, in which case it's a reasonable possibility that you will miss a duplicate because an open transaction has been evicted to make way for a new one. Unless you can prove that the numbers you have reached are in fact correct, which on the size of your data may be challenging, I would treat them with an element of scepticism.

I would suggest that you run a shorter time range search with both the sha() technique and the transaction command to see whether they come up with the same results. You should be able to get a good feeling of trust by running each of the searches over a 5-10 minute period.
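For reference, a minimal sketch of the hash-based duplicate check described above, assuming duplicates are byte-identical in _raw (the index name and time range are placeholders):

index=your_index earliest=-10m
| eval event_hash=sha256(_raw)
| stats count AS copies by event_hash
| where copies > 1

Because the hash is computed per event and aggregated with stats, this avoids transaction's open-transaction limits entirely.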
It depends on what your earliest is, how quickly your events are indexed, and what timestamps are used on your events. For example, some systems such as Apache HTTPD often log with a timestamp which represents when the request was received, but the event is only written when the response is sent, so an event which takes more than, say, 5 minutes may not show up in your search if you are only looking back 5 minutes every 5 minutes.
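To illustrate (a sketch only; the index and sourcetype are placeholders): a search scheduled every 5 minutes can use a window that ends 5 minutes in the past, so slowly-written events are already indexed by the time they are searched:

index=web sourcetype=access_combined earliest=-10m@m latest=-5m@m
| stats count AS requests

Snapping both boundaries with @m keeps consecutive runs contiguous, so nothing falls into a gap between windows.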
It is AND - which documentation are you referring to?
In the original, you had 9 series and in the second, you have 5. Your aggregation uses median(minsElapsed), so it's quite possible that the median is going to be less than the 33 shown in the first graph. In the first graph, the A* series for Oct 10 appear to be 33, 10 and maybe 6; if you combine all the values for all of these events, the median is likely to be different, as it's the median of all 3 sets of events rather than the median of the single LOB value.
Re, I tried that to remove the timestamp and it works now, but I don't know if it's correct or whether I need to optimise it. Regards,
There doesn't appear to be anything wrong with what you are doing (apart from the fact that I don't think you need the quotes around the token in the relative_time function, and the condition is probably superfluous); the issue may be to do with the version of Splunk you are using, as there have been issues with this before, if I recall correctly. Which version are you using?
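For context, relative_time() takes an epoch time and a relative time specifier string; a standalone illustration with a literal specifier (in a dashboard, the token value would be substituted into that second argument):

| makeresults
| eval start=relative_time(now(), "-24h@h")
| eval start_readable=strftime(start, "%F %T")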
Hey @Gaikwad, I think this is a custom macro created by you. The default dashboards in the app use a few macros that come shipped with the app itself. You can change the filter from "Visible in the app" to "Created in the app" and you'll be able to identify the list of macros that come by default.

For example, for the first panel to display IAM errors, it looks for the following macro: aws-security-cloudtrail-service("IAM", $notable$). The macro applies the lookup command and fetches the values from the aws_security_all_eventName lookup file. Below is its definition from the app:

lookup aws_security_all_eventName eventName OUTPUT function
| fillnull value="N/A" function
| search function="$service$"
| eval notable=if(match(eventName, "(^Get*|^List*|^Describe*)"), 0, 1)
| search notable=$notable$

Similarly, the further panels use a few other macros to populate themselves. You'll need to check those macro definitions and update them for your environment. That should help you load the dashboard properly.

Thanks, Tejas.
Hello everyone, I'm currently setting up a lot of alarms in Splunk, and a question has arisen regarding what is better in terms of time. We've considered two scenarios:

latest=-5m@m
latest=now

If I run the alarm every 5 minutes, I naturally want to avoid having "gaps" in my search. I think both options will work, but I am not 100% sure. I appreciate any insights on this. Thank you.
It doesn't matter to the Single Value viz (or any other viz, for that matter) what the search was; it just visualises the result events appropriately. So if your search uses sum() or count() and specifies only one field at the end, the Single Value viz will visualise it.
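For example (index and field names are only an illustration), any search that ends with a single row containing a single field will render in a Single Value viz:

index=_internal sourcetype=splunkd log_level=ERROR
| stats count AS errors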
| eval search=replace(search, "\"", "")
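For anyone landing here without context: that eval strips every double quote from a field named search. A self-contained sketch with an invented sample value:

| makeresults
| eval search="index=\"main\" sourcetype=\"syslog\""
| eval search=replace(search, "\"", "")

After the replace, the field holds index=main sourcetype=syslog.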